Supply Chain Engineering
Alexandre Dolgui · Jean-Marie Proth
Supply Chain Engineering: Useful Methods and Techniques
Prof. Alexandre Dolgui
Ecole des Mines de Saint-Étienne
Centre for Industrial Engineering and Computer Science
158 cours Fauriel
42023 Saint-Étienne CX 02
France
[email protected]

Jean-Marie Proth
Université de Metz
Institut National de Recherche en Informatique et Automatique
Île du Saulcy
57045 Metz CX 1
France
[email protected]
ISBN 978-1-84996-016-8
e-ISBN 978-1-84996-017-5
DOI 10.1007/978-1-84996-017-5
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2010928838

© Springer-Verlag London Limited 2010

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudioCalamar, Figueres/Berlin
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Supply chain engineering is an emerging field based on the analysis and comprehension of the essential principles of production and distribution systems. This scientific domain concerns the methodical evaluation and optimization of production systems, logistics networks, and their management policies in order to increase the effectiveness of multifaceted demand and supply chains.

Worldwide competition has grown ever stronger since the beginning of the 1980s. The pressure of the competitive global market has deeply affected production systems, calling for:

• integration of the activities that cover the whole production spectrum, from customers' requirements to payment;
• flexibility in the face of changes in customer demand;
• drastic reduction of production costs.

To reach these objectives, radical changes have been introduced in production systems, thanks to new manufacturing technologies that increase efficiency and to IT technologies that improve system organization and management. Furthermore, dynamic pricing and revenue management, which propose approaches that set product prices according to the market situation, attract more and more researchers and practitioners. Pricing strongly affects the return on investment.

Supply chains are emblematic examples of the renewal of production systems in recent decades. It is through this new paradigm that cost reduction and service enhancement can be achieved. To make this easier to implement, new types of manufacturing systems have been introduced, for example: reconfigurable manufacturing systems (RMS), assembly lines with workforce flexibility, bucket brigades and U-shaped assembly lines. Over the same period, new technologies have arisen to monitor the state of systems in real time; we can mention radio-frequency identification (RFID), Internet applications and "intelligent" storage facilities, to name just a few.
These technologies favor one of the most important objectives of production system management: the ability to make decisions almost immediately.

Radical changes in the criteria that express the new objectives of production systems in the face of competition are another important aspect. The introduction of some new criteria reflects the just-in-time (JIT) requirements. For instance, conventional scheduling optimization is now restricted, in the best case, to deciding the order in which products are launched into production. In other words, the conventional scheduling activity has migrated from the tactical to the strategic level. In today's production systems, it is replaced by real-time scheduling, also called real-time assignment. Other criteria are used to reflect quality, flexibility and work-in-progress (WIP): adequate quality is now indispensable for customer satisfaction; flexibility is a necessary condition to remain competitive in an ever-changing market; and the reduction of WIP helps minimize both the production cost and the probability of obsolescence.

The authors of this book collaborate closely with companies. They have carried out numerous contracts covering a wide range of industrial activities, including steelmaking, aerospace research, car manufacturing, microelectronics, and machining. In these areas, the authors worked on design and management problems and reached the following conclusions:

• The difficulty for companies lies much more in determining the exact nature of a problem and defining the criteria to be taken into account than in solving the problem itself.
• The models available in the literature are often difficult to apply in real life, due to the assumptions that were made in order to obtain a tractable model.
• To be acceptable to companies, models must be simple, easy to apply and adjustable to the systems under study.

These conclusions are taken into account in this book. This reference work presents a general view of the new methods, techniques, resources and organizations that have emerged in the production domain during the past two decades. The objective of the authors was to present the best applied approaches. Of course, if a theoretical approach was more convenient to convey the essentials of the system under study, it was included.
Furthermore, when a simple and efficient model exists to represent an industrial situation, we discard more complicated models that do not provide significantly better results, even if they are widely cited in the literature.

The book is organized into 11 chapters and 5 appendices. The following topics are covered.

Chapter 1 should be considered as an introduction to pricing. After outlining the importance of pricing in increasing revenue and providing the most common definitions in use in the field, pricing strategies are presented. The mechanisms that link costs, price and margin are analyzed. The selling curve is introduced and several methods to identify the characteristics of importance to customers are developed. This chapter ends with price strategies in an oligopoly market.

Chapter 2 has as its main goal to introduce stochastic dynamic pricing models with salvage values. The constraints required to make these models manageable are given. Pricing models for time-dated items, with no supply option, in a monopolistic environment and with myopic customers are presented in detail.

Chapter 3 concerns outsourcing, which has received little academic study in spite of its importance in today's global marketplace. After defining the main notions, the most common benefits that may be expected from outsourcing are presented.
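The cost-price-margin mechanics introduced in Chapter 1 can be illustrated with a toy linear selling curve. Everything below (the curve q(p) = a − b·p and all numbers) is a hypothetical sketch for intuition, not a model taken from the book:

```python
# Toy illustration (hypothetical numbers): units sold follow a linear
# selling curve q(p) = a - b*p, so profit is (p - c) * q(p).
# For this linear model, the profit-maximizing price has the closed
# form p* = (a/b + c) / 2.

def demand(p, a=1000.0, b=40.0):
    """Linear selling curve: units sold at price p (never negative)."""
    return max(a - b * p, 0.0)

def profit(p, unit_cost=10.0, a=1000.0, b=40.0):
    return (p - unit_cost) * demand(p, a, b)

a, b, c = 1000.0, 40.0, 10.0
p_star = (a / b + c) / 2                  # closed-form optimum: 17.5

# Sanity check: a fine grid search over prices agrees with it.
best_grid = max(profit(p / 100) for p in range(1000, 2501))
print(p_star, profit(p_star), best_grid)  # 17.5 2250.0 2250.0
```

Even this toy case shows the trade-off the chapter analyzes: raising the price increases the margin per unit but shrinks the number of items sold, and the optimum balances the two.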
The steps that lead to outsourcing are detailed. In particular, a vendor selection and evaluation model is developed and several approaches to solve this multicriteria problem are proposed. Strategic outsourcing in the case of a duopoly market is then developed exhaustively. The pros and cons are explored. One of the original aspects of this chapter lies in the analysis of offshoring to China. Some arguments in this discussion are quite different from those usually found in the literature; the reason may be that the effects of offshoring on workers in developed countries are taken into account.

Chapter 4 considers inventory management in supply chains. The advantages of sharing information among the different levels of a supply chain are discussed. Particular attention is given to the bullwhip effect and to the actions to be taken in order to reduce this undesirable phenomenon. Some common and robust models are highlighted, such as the newsboy (or newsvendor) model, the finite-horizon model with stochastic demands and the well-known (R, Q) and (s, S) models. For the last two models, we show how simulation can be used to find "optimal" values for their parameters. Echelon stock policies, which are tools that meet supply chain requirements, are analyzed along with their complementary tools, such as material requirements planning (MRP) and manufacturing resources planning (MRP2). Due to the importance of the subject (mainly at the design level), we also review the most common lot-sizing models.

Chapter 5 gives a brief description of radio-frequency identification (RFID) technology. An analysis of the parameters of importance when selecting tags is conducted and a succinct guideline for RFID deployment is suggested. Some applications are reviewed and the importance of this technology for the efficiency of supply chains is outlined. The main domains where RFID is applied routinely are listed.
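The newsvendor logic mentioned for Chapter 4 fits in a few lines. The sketch below uses the standard textbook critical-ratio formulation with normally distributed demand; the cost figures are hypothetical and the notation is not necessarily the book's:

```python
# Newsvendor sketch (standard critical-ratio formulation).
# With unit underage cost cu (margin lost per unit of unmet demand)
# and overage cost co (loss per unsold unit), the optimal order
# quantity is the F^{-1}(cu / (cu + co)) quantile of demand.
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, cu, co):
    """Optimal order for normally distributed demand N(mu, sigma^2)."""
    critical_ratio = cu / (cu + co)
    return NormalDist(mu, sigma).inv_cdf(critical_ratio)

# Hypothetical example: mean demand 100, std 20, margin 5 per sold
# unit, loss 2 per unsold unit. Since cu > co, we order above the mean.
q = newsvendor_quantity(100, 20, cu=5, co=2)
print(round(q, 1))
```

The same one-period reasoning underlies the simulation-based tuning of the (R, Q) and (s, S) policies discussed in the chapter, where no closed form is available.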
The evaluation of this technology, and in particular its financial implications, is carried out. A special section deals with privacy, which is a major concern for RFID today. The last section raises the problem of authentication, which is especially significant in the case of counterfeit tags.

Chapter 6 presents an overview of manufacturing system organizations. This chapter is influenced by the heavy demands for flexibility and adaptability placed on manufacturing systems in modern environments. The history of the idea of flexibility is presented and the major production system concepts are analyzed: dedicated manufacturing lines (DML), flexible manufacturing systems (FMS), agile manufacturing systems (AMS), reconfigurable manufacturing systems (RMS) and lean manufacturing systems (LMS). Each is defined, its advantages and drawbacks are studied and some illustrative examples are reported. Comparisons are made among them and their appropriateness to supply chains is highlighted.

Chapter 7 develops line balancing, a complex issue that is essential (particularly for lean manufacturing) and that consists in minimizing the total idle time. The models examined in this chapter have deterministic times. The COMSOAL approach is analyzed comprehensively, along with possible improvements. Other algorithms frequently mentioned in the literature, such as the RPW method, KW-like heuristics, B&B-based methods and mathematical programming approaches, are also presented and illustrated. The use of metaheuristics is shown in the third part of this chapter: simulated annealing, tabu search and genetic algorithms are discussed. Then, the properties and evaluation of line-balancing solutions are underlined. We also go over the evaluation criteria for line balancing found in the literature.

Chapter 8 generalizes the assembly-line-balancing models presented in the previous chapter to stochastic operation times. The problems tackled in this chapter are examined from a practical point of view. In particular, probability distributions are defined as is usually done in companies, i.e., by three parameters: the minimum, most frequent and maximum values of the variable under consideration. This leads to the notion of a triangular density. Problems are solved numerically, and a powerful tool is proposed for computing the integrals involved: the Tchebycheff polynomial approach. Numerical examples are presented to illustrate these realistic solutions. Mixed assembly-line models with several types of products are also considered, and other interesting generalizations, such as line balancing with equipment selection, are introduced. Finally, the new concepts of dynamic work sharing are explained using the examples of bucket brigades and U-shaped assembly lines.

Chapter 9 is devoted to control reactivity, which is becoming a pivotal factor for competitiveness. We show that static scheduling is slowly vanishing from the industrial environment. It is being replaced by dynamic scheduling and real-time assignment approaches that are able to provide an optimal or near-optimal solution in real time. The most popular priority (or dispatching) rules are presented first, followed by a second type of dynamic scheduling called the "repair-based approach", which consists of computing a static schedule at the beginning of the working period and adjusting it in case of unexpected events.

Chapter 10 concerns facility layout design.
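The three-parameter ("minimum, most frequent, maximum") operation times used in Chapter 8 correspond to the triangular distribution, which Python's standard library supports directly. The task-time values below are purely illustrative:

```python
# Triangular operation times defined by minimum a, mode m, maximum b,
# as in Chapter 8's three-parameter estimates. The exact mean of this
# distribution is (a + m + b) / 3; a Monte Carlo estimate should land
# very close to it.
import random

a, m, b = 8.0, 10.0, 15.0          # hypothetical task time, in seconds
exact_mean = (a + m + b) / 3       # = 11.0

random.seed(42)
samples = [random.triangular(a, b, m) for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)

print(exact_mean, round(mc_mean, 2))  # Monte Carlo estimate is ~11.0
```

In practice this is why the three-point format is popular on the shop floor: workers can state a best, worst and typical time without any statistical training, yet the resulting density is fully specified.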
Until the 1980s, the objective was to optimize layouts assuming that the environment remained basically steady. This situation is referred to as static facility layout (SFL). Linear layouts, functional department layouts and cellular layouts are studied in the first part of this chapter, as well as the tools and algorithms used to produce optimal layout designs. In the middle of the 1990s, the problem evolved toward dynamic facility layouts (DFL) and robust layouts (RL) to meet the needs of enterprises manufacturing multiple products in a rapidly changing market. Most results in these areas concern only the location of manufacturing entities on the available factory surface at the design stage. Rapid advances in mechanical engineering and manufacturing organization may make real-time rearrangement possible in the near future.

Chapter 11 presents warehousing. Warehouses are certainly critical components of production systems. In this chapter, their usefulness is highlighted and their various functions and equipment are analyzed. Recent advances, such as value-added services and their corresponding areas, are covered. Special attention is paid to warehouse management, in particular to the main difficulties faced by managers. The design stage is also extensively considered, by developing storage algorithms for unit-load warehouses and by examining static and dynamic warehouse-sizing models. The last section of this chapter concerns the location of warehouses; single- and multiflow location problems are put forth. Remember that layout techniques, which also concern warehouses, were presented in Chapter 10.

Five types of optimization techniques are reported and illustrated at the end of the book in the appendices. Each of the approaches covered in these appendices has been used in at least one chapter to solve real-life problems:

• The first appendix explains the simulated annealing method.
• The second is devoted to dynamic programming based on the optimality principle.
• The well-known branch-and-bound (B&B) approach is explained in the third.
• The fourth presents tabu search techniques.
• Genetic algorithms are presented in the last appendix.

We had several audiences in mind when this book was written. In companies, the people in charge of management, production, logistics and supply chains, and those looking for suggestions to improve the efficiency of their systems, will certainly be interested in many of the advances covered in this book. They will also appreciate the way explanations are given: basic examples, detailed algorithms, and no complex or unnecessary theoretical developments. This book is written for managers and engineers with analytical backgrounds who are interested in grasping the potential and the limits of recent advances in production and operations management. The academic audience consists of the many researchers working on topics related to operations research, supply chain management, production system design, facility layout, scheduling, organization, etc. This book will also be useful to professors who teach industrial and systems engineering, management science, operations management and business management, particularly because of the carefully chosen examples and the application-oriented way in which the notions are introduced.
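As a small taste of the simulated annealing technique covered in the appendices, here is a minimal annealing loop on a toy one-dimensional objective. This is a simplified sketch with arbitrary parameters, not the general treatment given in the appendix:

```python
# Minimal simulated annealing sketch: minimize f(x) = (x - 3)^2 + 1.
# A worse move is accepted with probability exp(-delta / T), which
# lets the search escape local traps; the temperature T is lowered
# geometrically so the process settles down.
import math
import random

def f(x):
    return (x - 3.0) ** 2 + 1.0

random.seed(0)
x = 0.0                      # arbitrary starting point
T = 1.0                      # initial temperature
for _ in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)   # random neighbor
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate        # accept improving or, sometimes, worse moves
    T *= 0.999               # geometric cooling schedule

print(round(x, 2))  # should land near the minimizer x = 3
```

On a convex objective like this one, annealing is of course overkill; its value, as the appendix explains, lies in problems with many local optima where pure descent gets stuck.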
To summarize, this book is accessible to industrial managers who have an analytical background and are eager to improve the efficiency of their company, as well as to researchers and students working in the various related areas.

The authors thank Mrs. Marie-Line Barneoud for her help in formatting this book and Mr. Chris Yukna for his help in proofreading the English.

France, February 2009
Alexandre Dolgui Jean-Marie Proth
Contents
1 Introduction to Pricing
  1.1 Introduction
  1.2 Definitions and Notations
  1.3 High- and Low-price Strategies
  1.4 Adjustable Strategies
    1.4.1 Market Segmentation (or Price Discrimination) Strategy
    1.4.2 Discount Strategy
    1.4.3 Price Skimming
    1.4.4 Penetration Pricing
    1.4.5 Yield Management (Revenue Management)
  1.5 Margin, Price, and Selling Level
    1.5.1 Notations
    1.5.2 Basic Relation
    1.5.3 Equilibrium Point
    1.5.4 Items Sold with Regard to Price (Margin Being Constant)
  1.6 Price Versus Sales Volume: the Selling Curve
    1.6.1 Introduction
    1.6.2 Cost-plus Method
    1.6.3 Price Testing
    1.6.4 Estimation Made by Experts
    1.6.5 Market Analysis
    1.6.6 Customer Surveying
  1.7 Conjoint Measurement
    1.7.1 Introduction and Definitions
    1.7.2 Profile Method
    1.7.3 Two-factor Method
    1.7.4 Clustering for Market Segmentation
  1.8 Price Strategy in Oligopoly Markets
    1.8.1 Reactions of Competitors
    1.8.2 Decreasing Prices
    1.8.3 Increasing Prices
  1.9 Conclusion
  References
  Further Reading

2 Dynamic Pricing Models
  2.1 Introduction
  2.2 Time-dated Items: a Deterministic Model
    2.2.1 Problem Setting
    2.2.2 Solving the Problem: Overall Approach
    2.2.3 Solving the Problem: Example for a Given Price Function
    2.2.4 Remarks
  2.3 Dynamic Pricing for Time-dated Products: a Stochastic Model
    2.3.1 Problem Considered
    2.3.2 Solution to the Problem
    2.3.3 Probability for the Number of Items at a Given Point in Time
    2.3.4 Remarks
  2.4 Stochastic Dynamic Pricing for Items with Salvage Values
    2.4.1 Problem Studied
    2.4.2 Price as a Function of Inventory Levels: General Case
    2.4.3 Price as a Function of Inventory Levels: a Special Case
  2.5 Concluding Remarks
  Reference
  Further Reading

3 Outsourcing
  3.1 Introduction
  3.2 Outsourcing Process
  3.3 Vendor Selection and Evaluation Model
    3.3.1 Model Formulation
    3.3.2 Solution Approaches
  3.4 Strategic Outsourcing
    3.4.1 Case D0,0 < D1,1
    3.4.2 Case D1,1 < D0,0
  3.5 Pros and Cons of Outsourcing
  3.6 A Country of Active Offshore Vendors: China
    3.6.1 Recent History
    3.6.2 Consequences
    3.6.3 Chinese Strategy to Acquire Know-how and Technology
  3.7 Offshore Outsourcing: a Harmful Strategy?
    3.7.1 Introductory Remarks
    3.7.2 Risk of Introducing Innovations Abroad
    3.7.3 How Could Offshore Outsourcing Be Harmful to Some Groups?
    3.7.4 How Could Offshore Outsourcing Be Harmful to a Country?
    3.7.5 How Could Offshore Outsourcing Be Harmful to the World?
  3.8 Conclusion
  References
  Further Reading

4 Inventory Management in Supply Chains
  4.1 Introduction
  4.2 Inventories in Supply Chains
    4.2.1 Definition of a Supply Chain
    4.2.2 Inventory Problems in a Supply Chain
    4.2.3 Bullwhip Effect
  4.3 Stochastic Inventory Problems
    4.3.1 Newsvendor (or Newsboy) Problem
    4.3.2 Finite-horizon Model with Stochastic Demand
    4.3.3 (R, Q) Policy
    4.3.4 (s, S) Policy
  4.4 Echelon Stock Policies
    4.4.1 Introductory Remarks
    4.4.2 Material Requirements Planning (MRP)
    4.4.3 Manufacturing Resources Planning (MRP2)
  4.5 Production Smoothing: Lot-size Models
    4.5.1 Discrete Monoproduct Problem
    4.5.2 Continuous Monoproduct Problem
    4.5.3 Multiproduct Problem
    4.5.4 Economic Order Quantity (EOQ)
  4.6 Pull Control Strategies
    4.6.1 Kanban Model
    4.6.2 Base Stock Policy
    4.6.3 Constant Work-in-progress (CONWIP)
    4.6.4 Generalized Kanban
    4.6.5 Extended Kanban
  4.7 Conclusion
  References
  Further Reading

5 Radio-frequency Identification (RFID): Technology and Applications
  5.1 Introduction
  5.2 Technical Overview
    5.2.1 Global Description
    5.2.2 Properties
    5.2.3 Parameters of Importance when Selecting Tags
    5.2.4 Auto-ID Center at MIT
  5.3 Succinct Guideline for RFID Deployment
    5.3.1 Choice of the Technology
    5.3.2 Analysis of Problems that May Happen
    5.3.3 Matching RFID with IT
  5.4 RFID Applications
    5.4.1 Application to Inventory Systems
    5.4.2 RFID Systems in Supply Chains
    5.4.3 Various Applications Related to Movement Tracking
  5.5 Some Industrial Sectors that Apply RFID
    5.5.1 Retail Industry
    5.5.2 Logistics
    5.5.3 Pharmaceutical Industry
    5.5.4 Automotive Industry
    5.5.5 Security Industry
    5.5.6 Finance and Banking Industry
    5.5.7 Waste Management
    5.5.8 Processed Food Industry
  5.6 Advantages when Applying RFID Technology to Supply Chains
  5.7 Expert Opinion on the Matter
  5.8 Economic Evaluation of the Use of RFID in Supply Chains
    5.8.1 Current Situation
    5.8.2 How to Proceed?
  5.9 Privacy Concerns
    5.9.1 Main Privacy Concerns
    5.9.2 How to Protect Privacy?
  5.10 Authentication
  5.11 Conclusion
  References
  Further Reading

6 X-manufacturing Systems
  6.1 Introduction
  6.2 Mass Production
  6.3 Flexible Manufacturing Systems (FMS)
    6.3.1 What Does Flexibility Mean?
    6.3.2 Definition of FMS
    6.3.3 Advantages and Limitations of FMS
  6.4 Agile Manufacturing Systems (AMS)
    6.4.1 Definition
    6.4.2 Agile Versus Lean
    6.4.3 Agile Versus Flexible
    6.4.4 Cost Stability During the Life of an AMS
  6.5 Reconfigurable Manufacturing Systems (RMS)
    6.5.1 Motivation
    6.5.2 RMS Definition
    6.5.3 Reconfiguration for Error Handling
    6.5.4 A Problem Related to RMS
  6.6 Lean Manufacturing Systems (LMS)
    6.6.1 Definition
    6.6.2 How to Eliminate Wastes?
    6.6.3 Six Core Methods to Implement Lean Manufacturing
  6.7 Conclusion
  References
  Further Reading

7 Design and Balancing of Paced Assembly Lines
  7.1 Simple Production Line (SPL) and Simple Assembly Line (SAL)
  7.2 Simple Assembly Line Balancing (SALB)
  7.3 Problem SALB-1
    7.3.1 Common Sense Approach
    7.3.2 COMSOAL Algorithm
    7.3.3 Improvement of COMSOAL
    7.3.4 RPW Method
    7.3.5 Kilbridge and Wester (KW)-like Heuristic
    7.3.6 Branch and Bound (B&B) Approaches
    7.3.7 Mathematical Formulation of a SALB-1 Problem
  7.4 Problem SALB-2
    7.4.1 Heuristic Algorithm
    7.4.2 Algorithm Based on Heuristics for SALB-1
    7.4.3 Mathematical Formulation of Problem SALB-2
  7.5 Using Metaheuristics
...............................................................................258 7.5.1 Simulated Annealing ........................................................................259 7.5.2 Tabu Search......................................................................................259 7.5.3 Genetic Algorithms ..........................................................................261 7.6 Properties and Evaluation of a Line-balancing Solution..........................270 7.6.1 Relationship Cycle Time/Number of Stations/Throughput ..............270 7.6.2 Evaluation of a Line-balancing Solution ..........................................271 7.7 Concluding Remarks................................................................................273 References......................................................................................................274 Further Reading .............................................................................................274 8 Advanced Line-balancing Approaches and Generalizations......................277 8.1 Introduction..............................................................................................277 8.2 Single Type of Product and Triangular Operation Times ........................278 8.2.1 Triangular Density of Probability.....................................................278 8.2.2 Generating a Random Value ............................................................280 8.2.3 Assembly-line Balancing..................................................................280 8.3 Particular Case: Gaussian Operation Times.............................................284 8.3.1 Reminder of Useful Properties .........................................................284 8.3.2 Integration Using Tchebycheff’s Polynomials .................................286 8.3.3 Algorithm Basis................................................................................287 8.3.4 Numerical 
Example..........................................................................289 8.4 Mixed-model Assembly Line with Deterministic Task Times ................290
8.4.1 Introduction ...................................................................................... 290 8.4.2 Ratios are Constant........................................................................... 291 8.4.3 Ratios are Stochastic ........................................................................ 291 8.5 Mixed-model Line Balancing: Stochastic Ratio and Operation Times.... 299 8.5.1 Introduction ...................................................................................... 299 8.5.2 Evaluation of an Operation Time ..................................................... 299 8.5.3 ALB Algorithm in the Most General Case....................................... 300 8.5.4 Numerical Example.......................................................................... 301 8.6 How to React when the Loads of Stations Exceed the Cycle Time by Accident? ....................................................................................................... 304 8.6.1 Model 1 ............................................................................................ 305 8.6.2 Model 2 ............................................................................................ 305 8.6.3 Model 3 ............................................................................................ 305 8.7 Introduction to Parallel Stations .............................................................. 306 8.8 Particular Constraints............................................................................... 307 8.8.1 A Set of Operations Should be Assigned to the Same Station ......... 308 8.8.2 Two Operations Should be Assigned to Different Stations.............. 308 8.8.3 Line Balancing with Equipment Selection ....................................... 308 8.9 Specific Systems with Dynamic Work Sharing....................................... 311 8.9.1 Bucket-brigade Assembly Lines ...................................................... 
312 8.9.2 U-shaped Assembly Lines................................................................ 316 8.9.3 Concluding Remarks ........................................................................ 323 References ..................................................................................................... 324 Further Reading ............................................................................................. 324 9 Dynamic Scheduling and Real-time Assignment ......................................... 327 9.1 Introduction and Basic Definitions .......................................................... 327 9.2 Dynamic Scheduling................................................................................ 331 9.2.1 Reactive Scheduling: Priority (or Dispatching) Rules ..................... 331 9.2.2 Predictive-reactive Scheduling......................................................... 337 9.3 Real-time Assignment with Fixed Previous Assignments ....................... 345 9.3.1 Problem Formulation........................................................................ 346 9.3.2 Case of a Linear Production ............................................................. 347 9.3.3 Control of the Production Cycle....................................................... 351 9.3.4 Control of the Production Cycle and the WIP.................................. 353 9.3.5 Assembly Systems ........................................................................... 354 9.4 Real-time Assignment with Possible Limited Adjustment of Previous Assignments .................................................................................................. 359 9.4.1 Setting the Problem .......................................................................... 359 9.4.2 Basic Relations................................................................................. 360 9.4.3 Real-time Algorithm in the Case of Adjustment .............................. 
363 9.4.4 Case of a Linear Production ............................................................. 364 9.5 Conclusion ............................................................................................... 367 References ..................................................................................................... 368 Further Reading .............................................................................................369
10 Manufacturing Layout .................................................................................371 10.1 Introduction............................................................................................371 10.2 Static Facility Layouts ...........................................................................372 10.2.1 Basic Layout Models......................................................................372 10.2.2 Selection of a Type of Layout ........................................................374 10.2.3 Layout Design ................................................................................376 10.2.4 Design of Manufacturing Entities...................................................377 10.2.5 Location of Manufacturing Entities on an Available Space ...........394 10.2.6 Layout Inside Manufacturing Entities ............................................399 10.2.7 Balancing of the Manufacturing Entities........................................402 10.3 Facility Layout in a Dynamic Environment...........................................403 10.3.1 Changes in the Needs of Manufacturing Systems ..........................403 10.3.2 Robust Layouts...............................................................................405 10.3.3 Dynamic Facility Layout................................................................410 10.4 Conclusion .............................................................................................414 References......................................................................................................415 Further Reading .............................................................................................416 11 Warehouse Management and Design..........................................................419 11.1 Introduction............................................................................................419 11.2 Warehouse Types and 
Usefulness..........................................................420 11.2.1 Warehouse Taxonomies .................................................................420 11.2.2 Warehouse Usefulness ...................................................................422 11.3 Basic Warehousing Operations..............................................................423 11.3.1 Receiving........................................................................................423 11.3.2 Storage............................................................................................423 11.3.3 Automated Systems ........................................................................427 11.4 Warehouse Management........................................................................429 11.4.1 Warehouse Functions .....................................................................429 11.4.2 Warehouse Management Systems (WMS).....................................431 11.5 Design: Some Remarks..........................................................................431 11.5.1 Warehouse Overview .....................................................................431 11.5.2 Storage in Unit-load Warehouse.....................................................435 11.5.3 Warehouse Sizing...........................................................................436 11.6 Warehouse-location Models ..................................................................440 11.6.1 Introduction ....................................................................................440 11.6.2 Single-flow Hierarchical Location Problem...................................441 11.6.3 Multiflow Hierarchical Location Problem......................................444 11.6.4 Remarks on Location Models.........................................................444 11.7 Conclusion .............................................................................................445 
References......................................................................................................445 Further Reading .............................................................................................446
A Simulated Annealing ....................................................................................... 449 B Dynamic Programming.................................................................................... 459 C Branch-and-Bound Method ............................................................................. 483 D Tabu Search Method ....................................................................................... 503 E Genetic Algorithms.......................................................................................... 519 Authors’ Biographies .......................................................................................... 531 Index.................................................................................................................... 533
Abbreviations
AGV  Automated Guided Vehicle
AIDCS  Automatic Identification, Data Capture, and Sharing
ALB  Assembly Line Balancing
AMS  Agile Manufacturing System
AS/RS  Automated Storage and Retrieval System
ATS  Automated Transportation System
B&B  Branch and Bound Approach
BOM  Bill-of-material
CNC  Computer Numerical Control
COMSOAL  Computer Method of Sequencing Operations for Assembly Lines
CONWIP  Constant Work-in-progress
CORELAP  Computerized Relationship Layout Planning
CPM  Critical Path Method
CRAFT  Computerized Relative Allocation of Facilities Technique
CRP  Capacity Requirements Planning
DC  Distribution Center
DFL  Dynamic Facilities Layout
DML  Dedicated Manufacturing Line
DP  Dynamic Programming
EDD  Earliest Due Date Priority Rule
EDI  Electronic Data Interchange
EKS  Extended Kanban System
EOQ  Economic Order Quantity
EPC  Electronic Product Code
FAFS  First Arrived, First Served
FIFO  "First In, First Out" Priority Rule
FMC  Flexible Manufacturing Cell
FMM  Flexible Manufacturing Module
FMS  Flexible Manufacturing System
GA  Genetic Algorithm
GKS  Generalized Kanban System
GPM  Garcia and Proth (GP) Method
IT  Information Technologies
JIT  Just-in-time
KW  Kilbridge and Wester Method
LIFO  "Last In, First Out" Priority Rule
LMS  Lean Manufacturing System
LP  Linear Programming
ME  Manufacturing Entity
MIP  Mixed Integer Linear Programming
MOS  Mail Order Selling
MPS  Master Production Schedule
MRP  Material Requirement Planning
MRP2  Manufacturing Resources Planning
MS  Manufacturing System
PERT  Program Evaluation and Review Technique
R&D  Research and Development
RCCP  Rough Cut Capacity Planning
RFID  Radio-frequency Identification
RL  Robust Layout
RMS  Reconfigurable Manufacturing System
ROI  Return-on-investments
RPW  Ranked Positional Weight Method
RSW  Retailer Supply Warehouse
RTV  Robotic Transfer Vehicle
SA  Simulated Annealing
SALB  Simple Assembly Line Balancing
SFL  Static Facility Layout
SKS  Simple Kanban System
SKU  Stock Keeping Unit
SMED  Single Minute Exchange of Die
SPT  Shortest Processing Time Priority Rule
SPW  Spare Part Warehouse
SW  Special Warehouse
TPM  Total Productive Maintenance
TS  Tabu Search Algorithm
TSP  Traveling Salesman Problem
VAT  Value Added Tax
VMI  Vendor–Management–Inventory
WIP  Work-in-progress
WSPT  Weighted Shortest Processing Time Priority Rule
Chapter 1
Introduction to Pricing
Abstract Price is a major parameter that affects company revenue significantly. This is why this book starts by presenting basic pricing concepts. Strategies such as market segmentation, discounting, revenue management and price skimming are developed and illustrated. Particular attention is paid to the relationships among margin, price and selling level. Then, the impact of prices on selling volume is analyzed, and the notion of a selling curve is introduced. Related pricing methods are presented, such as price testing, the cost-plus method, involvement of experts, market analysis and customer surveying. Included in the last category is conjoint measurement, which is concerned with finding which parameters of an item are important to customers. The profile method and a simplified version, the two-factor method, are also detailed and illustrated. They provide a set of part-worths (i.e., numerical values) for each tester. In other words, the opinion of each tester can be represented by a point in a space whose dimension is the number of part-worths. By applying a clustering method, specifically K-means analysis, we obtain a limited number of clusters, each of them representing a market segment. The chapter ends with the introduction of price strategies in oligopoly markets.
1.1 Introduction

The revenue of a company depends on three control parameters, namely production cost, volume sold and price. The first actions to improve competitiveness were developed to reduce production costs and increase market share. Huge efforts have been made in companies to reduce costs. For instance, in the automotive industry and the banking sector, costs have been reduced by 30% to 50% within the last 10 years.
Increasing market share depends on the competitiveness of the company. In turn, competitiveness depends not only on price, but also on the ability of the company to meet customers' requirements.¹ Indeed, while price plays a role in meeting this objective, it is not decisive, in particular when the product is new on the market. A recent example is the strategy of Apple to dominate the MP3 player market: Apple based its marketing strategy on iPod quality and aesthetics and won the leadership in the domain despite the fact that the iPod was the most expensive among similar products.

The introduction of a sophisticated pricing process is more recent. Pricing strategy was a concern for companies prior to academic research. Numerous objectives motivate the use of pricing, for instance:

• Increase market share in order to decrease the long-term production cost, reaching a given return on investment.
• Maximize revenue in order to maximize long-term profit by increasing market share and lowering costs (scale effect).
• Maintain price leadership.
• Maximize unit profit margin (useful when the number of items sold is forecast to be low).
• Reach a high quality level to position the product as the leader.

The first two chapters are dedicated to the influence of prices on companies' revenues. It should be noted that changing a price is obviously easier and faster than developing a process to reduce production costs or to increase market share. Furthermore, the price parameter directly and strongly influences the profit margin as well as market share. It has been shown that modifying a price by 1% can result in a change of at least 10% in the consumption of everyday goods. Thus, price as an adjustment parameter for profit is the easiest and fastest way to increase competitiveness. Indeed, fixing a price is the first step of any selling process and we will discuss this point, but pricing strategy does more.
It tries to take advantage of:

• time, by playing, for instance, with seasonality of demand;
• customers' preferences and purchasing behavior;
• the spectrum of available products.
¹ This concern is pivotal in supply chains, the most recent paradigm related to production systems. Remember the definition of a supply chain: A supply chain is a global network of organizations that cooperate to improve the flow of material and information between suppliers and customers at the lowest cost and the highest speed. The objective of a supply chain is customer's satisfaction. (see Govil and Proth, 2002)
These aspects are the most important where pricing is concerned, and they are not exclusive. As mentioned in (Talluri and Van Ryzin, 2004), pricing strategy is beneficial when:

• Customers are heterogeneous, which means that their purchasing behavior over time varies, their willingness to pay varies from customer to customer, and they are attracted by different benefits offered by the same type of products.
• Demand variability and uncertainty are high, which guarantees a flourishing revenue to those who master pricing.
• Production is rigid, which allows playing with prices when demand varies.

A successful application of pricing strategy requires a strong commitment from management and a detailed monitoring of the system under consideration that, in turn, implies an efficient information processing and communication system. Initially, pricing was used by the airline industry, followed by retailers and, more recently, by companies in the energy sector. Note that these sectors are characterized by production (or offer) rigidity, variability of demand and heterogeneity of customers.
1.2 Definitions and Notations

Production cost is the sum of fixed and variable costs. Fixed costs include maintenance, wages and upkeep. Note that fixed costs may increase when production exceeds some production threshold or when the company invests in a next-generation technology. Variable costs depend on the number of items produced. They include component, raw material, labor, transportation and inventory costs.

The revenue is the total amount of money that flows into the company, coming from product sales, venture capital, government support and personal funds.

The average cost of an item is the ratio of the total cost to the number of items sold.

The marginal revenue is the increase in revenue resulting from an additional unit of output. The marginal cost of an additional unit of output is the cost of the additional inputs needed to produce that output. More formally, the marginal cost is the derivative of total production costs with respect to the level of output.

Price elasticity of demand measures the responsiveness of the number of items sold to the price of an item. More precisely, elasticity of demand is the percentage change in the quantity of items sold with regard to the percentage change in price per item:
Elasticity of demand = (% change in quantity of items sold) / (% change in price of one item)
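As a quick illustration of the elasticity of demand defined above, it can be computed from two observed price/quantity points (the figures below are hypothetical, not taken from the text):

```python
def elasticity_of_demand(q_old, q_new, p_old, p_new):
    """Elasticity: % change in quantity sold divided by % change in unit price."""
    pct_quantity = (q_new - q_old) / q_old * 100.0
    pct_price = (p_new - p_old) / p_old * 100.0
    return pct_quantity / pct_price

# Hypothetical data: a 5% price cut (100 -> 95) raises sales from 2000 to 2200 items.
e = elasticity_of_demand(2000, 2200, 100.0, 95.0)
print(round(e, 2))  # -2.0: demand is elastic (|e| > 1), so the cut stimulates sales
```

A negative value is the usual case: quantity sold moves in the opposite direction to price.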
The demand curve (also called the selling curve) is the curve that represents the relationship between the price of an item and the number of items customers are willing to purchase during a given period. It is assumed that the environmental conditions are steady during this period.

A monopoly market is a market in which there is only one provider for the type of items under consideration.

A duopoly market is a market dominated by two firms (providers) that are large enough to influence the equilibrium price (i.e., the market price). The market price is the price at which the quantity supplied and the quantity demanded are equal.

An oligopoly market is a market dominated by a small number of providers. Each provider (firm) is aware of the actions of the other providers (competitors), and the actions of one provider influence the others. Providers operate under imperfect competition.

Imperfect competition is a market situation in which the characteristics of perfect competition are not satisfied. Perfect competition is characterized by:

• numerous providers;
• perfect information: all providers and customers know the prices set by all providers;
• freedom of entry and exit: a provider can enter or exit the system at any time and freely;
• homogeneous output, which means that there is no product differentiation or, in other words, items are perfect substitutes;
• equal access of all providers to technologies and resources.

Nash equilibrium is a market situation involving at least two providers in which no provider can benefit by changing his/her strategy while the other providers keep their strategies unchanged.

Some of the previous definitions will be developed later.
1.3 High- and Low-price Strategies

In the previous section, we presented an example of a high price that did not prevent the iPod from being the leader in the MP3 player market. This high-price strategy was successful because the product was new on the market, the promotion was based on quality and aesthetics, and the potential customers were attracted by technological performance and high-quality acoustics. In general, the amount of money customers are prepared to pay depends on their level of interest in the item. For instance, if customers are swayed by technological novelties, then they are prepared to pay a lot to purchase a personal computer with new capacities.

Another example is the Mercedes-Benz A-Class. The car company set the price of this product higher than the level suggested by cost analysis. Nevertheless, the production capacity was fully utilized during the first production year. The explanation is the power of the corporate image of Mercedes-Benz. Numerous other examples can be found in the cosmetic industry and, more generally, in the luxury goods industry. To summarize, a high price is accepted if it agrees with the value of the product as perceived by the customers; otherwise such a strategy leads to commercial failure.

A low-price strategy may also lead to commercial success, as we can often observe in the food retailing sector. For instance, low-price retailers such as Lidl, Aldi or Leader Price are currently achieving success in Europe. Another example is Dell Computer, which distributes low-price PCs and allows customers to personalize their PC. Amazon.com gained an important share of the book market by reducing prices by 40 to 50% and providing greater choice. These last two companies base their strategy on the use of the Internet to distribute their items directly to customers, which results in a huge reduction of costs that, in turn, allows a significant reduction of prices and thus improves their competitiveness. The success of a low-price strategy depends on the number of clients attracted by the product, since the low margin should be compensated by a huge number of items sold. We will see in this chapter that trying to compensate for a reduction of price by attracting more customers is risky.

Some disadvantages should be outlined for companies applying a high- or low-price strategy.
For instance, the image of the items sold by the company is frozen and a long-term price expectation is established, which reduces the flexibility of the decision-making system. High- and low-price strategies could be described as frozen strategies since they try to attract clients by making the most of the corporate image. The drawback is the inability of these strategies to adapt themselves to fundamental disturbances. For instance, a global impoverishment of a country may sharply penalize companies devoted to luxury and expensive items. Other strategies are much more adjustable. We provide a short description of these strategies in the next section.
1.4 Adjustable Strategies

An adjustable strategy either evolves with the constraints of the environment or is applied during a limited period. Some examples are introduced in this section.
1.4.1 Market Segmentation (or Price Discrimination) Strategy

The development of a strategy based upon the fact that different groups of customers attach different levels of importance to the diverse benefits offered by a type of product or service is called market segmentation. For instance, the same car model may be proposed in different versions (two-door or four-door, different engine powers, different finishing levels, etc.), and each version may attract a particular type of customer. Numerous other examples can be found in the hotel business (different classes of hotels proposed by the same company or, in the same hotel, different categories of rooms). The tourism industry, and even the food industry, where packaging plays a major role in attracting some segments of customers, are other examples. Services are another way to introduce value differentiation.

This strategy is applicable to a type of item in the case of a monopoly market. It consists of segmenting the market and charging different prices to different segments, depending on the willingness of the customers of each segment to pay more or less to purchase the item. Indeed, some "rate fences" should be introduced in order to make sure that the customers of a segment will pay the price assigned to that segment. These "rate fences" can be the promotion of some benefits that attract the customers of a specific segment, or the offer of particular services to the customers of a specific segment. Again, the customers belonging to a given segment should be similar or, in other words, characterized by the same parameters, and dissimilar from the customers of other segments. Similarity and dissimilarity are related to buying habits.

The approach to market segmentation is a four-stage process that can be summarized as follows:

• Identify the parameters that customers are interested in.
For instance, in the personal computer market, training, software level, memory size, disk and CPU sizes, and quality of after-sales service are parameters of interest. This identification is usually done by carrying out a survey among customers.
• Identify the part-worths (i.e., the characteristics of the parameters, also called benefits, as defined in Section 1.7) that are of interest to customers. This can be done using conjoint measurement.
• Define part-worth (or benefit) subsets that correspond to clusters of customers (using K-means analysis, as explained in Section 1.7).
• Identify the parameters that characterize the customers of a cluster, for instance, adherence to a socioeconomic class, geographic location, consumption habits, gender, religion, age, etc.

Market segmentation is the relationship between subsets of customers and subsets of benefits. Each subset of benefits is a market segment. To be acceptable, segments should be homogeneous within themselves and heterogeneous from one segment to another.
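The clustering step of the four-stage process above can be sketched with a minimal K-means implementation. The part-worth vectors and the number of clusters below are toy values chosen for illustration; they are not data from the text:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means: returns cluster centers and each point's cluster index."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    for _ in range(iters):
        # Assign each part-worth vector to its nearest center (Euclidean distance).
        labels = [min(range(k), key=lambda j: math.dist(p, centers[j])) for p in points]
        # Move each center to the mean of the points assigned to it.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(coord) / len(members) for coord in zip(*members))
    return centers, labels

# Hypothetical part-worths (price sensitivity, service importance) for six testers.
testers = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15), (0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
centers, labels = kmeans(testers, k=2)
print(labels)  # two market segments emerge: price-driven vs. service-driven testers
```

Each resulting cluster of testers is a candidate market segment; in practice one would then profile the customers in each cluster (stage four) before setting segment prices.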
Remark: It is also possible to use correspondence analysis, a method belonging to data analysis, to establish a relationship between subsets of customers and subsets of benefits. Correspondence analysis is a technique that will provide information about the “proximity” of benefits to characteristics of customers.
1.4.2 Discount Strategy

A discount sale consists of selling a given set of items at a reduced price for a limited period. Such a price reduction should generate enough supplementary sales to compensate for the reduction in income; however, this is rarely the case. Few companies realize what the true cost of discounting is. When a discount is offered for a given period, it applies to all sales, which often leads to disastrous consequences.

Let us consider a set of items sold at the price c each. During a period T, the retailer usually sells m items and the benefit is b for each item sold. The retailer decides to apply an x% discount, assuming that x c / 100 ≤ b, which means that the rebate is less than the benefit. The question is: how many supplementary items should be sold to compensate for the reduction in income? We denote by z the number of supplementary items that should be sold. The benefit made during period T when items are sold at the regular price is B0 = m b. The benefit made during the same period when items are sold at the discount price is:

B1 = (m + z) [(1 − x / 100) c − (c − b)] = (m + z) (b − x c / 100)
We want B_1 ≥ B_0, which leads to:

z ≥ m x c / ( 100 b − x c )     (1.1)

We denote by z* the expression m x c / ( 100 b − x c ): it is the minimum additional sale that compensates the reduction of benefit due to the discount.
Example
Assume that 2000 items would be sold at the full price of 100 € and that the benefit would be 20 € per unit during period T. Assume also that the retailer applies a 10% discount. How many supplementary items should be sold to reach the same benefit as when no discount applies?
1 Introduction to Pricing
In this example, m = 2000, x = 10, c = 100 and b = 20. Applying Relation 1.1, it turns out that z ≥ 2000. For a discount of only 10%, the retailer has to double the sales to compensate for the reduction of benefits!

Let us consider z* as a function of x. Since z* should remain positive, this function makes sense only for x ∈ [ 0, 100 b / c ) = I. Furthermore, z* = 0 for x = 0 and z* tends to infinity when x tends to 100 b / c. Let us relax the integrality constraint on the number of items. The derivative of z* with regard to x is:

dz*/dx = 100 m c b / ( 100 b − x c )² > 0 for x ∈ I

Thus, for x ∈ I, z* increases with regard to x. The second derivative is:

d²z*/dx² = 200 m c² b / ( 100 b − x c )³ > 0 for x ∈ I

so z* is increasing and convex on I. This function is represented in Figure 1.1. If x ∈ [ 100 b / c, +∞ ), then the financial loss per item is x c / 100 − b.

Figure 1.1 Minimum supplementary sales z* versus discount x
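The break-even condition of Relation 1.1 can be sketched as follows (a minimal illustration; the function name is ours, not from the book):

```python
# Minimum supplementary sales z* = m*x*c / (100*b - x*c)  (Relation 1.1).
def min_supplementary_sales(m, x, c, b):
    """Minimum extra units to sell so that an x% discount
    does not reduce the total benefit (Relation 1.1)."""
    if x * c / 100 >= b:
        raise ValueError("rebate must stay below the per-item benefit")
    return m * x * c / (100 * b - x * c)

# Book example: m = 2000 items, x = 10%, c = 100 EUR, b = 20 EUR
print(min_supplementary_sales(2000, 10, 100, 20))  # -> 2000.0
```

As the discount x approaches 100 b / c, the denominator tends to 0 and z* grows without bound, which is the vertical asymptote of Figure 1.1.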
1.4.3 Price Skimming

In this strategy, a relatively high price is fixed at first, and then lowered over time. This strategy is a price-discrimination strategy enriched by the time factor.
Price skimming usually applies when customers are relatively less price sensitive (clients of the cosmetic industry, for instance) or when they are attracted by some novelty (in particular electronic items such as computers). Price skimming is useful to reimburse huge investments made for research and development. Indeed, a high price cannot be maintained for a long time, but only as long as the company is in a monopolistic situation with regard to the item’s novelty.
1.4.4 Penetration Pricing

Penetration pricing consists of setting an initial price lower than that of the market, in the expectation that this price is low enough to break down the purchasing habits of customers. The objective of this strategy is to capture a large market share. It can be defined as the low-price strategy enriched by the time factor. Penetration pricing leads to cost-reduction pressure and discourages the entry of competitors.
1.4.5 Yield Management (Revenue Management)

The goal of yield management is to anticipate customers’ and competitors’ behavior in order to maximize revenue. Companies that use yield management periodically review past situations to analyze the effects of events on customers’ and competitors’ behavior. They also take future events into account to adjust their pricing decisions. Yield management is suitable for time-dated items (airplane tickets) or perishable items (fruit, processed food). A mathematical model will be proposed in the next chapter.
1.5 Margin, Price, and Selling Level

In this section, we explain the mechanism that links costs, price and margin. Note that a strong assumption is made: we consider that any number of available items can be sold.
1.5.1 Notations

Price affects margin, but may also affect cost. The higher the price, the greater the margin per item, but increasing the price may result in a fall-off in sales. Conversely, when the price decreases, the number of items sold may increase, and thus the production costs may decrease due to the scale effect. Let us introduce the following variables and assumptions:
• c(s) is the variable cost per item when the total number of items sold during a given period (a month, for instance) is s. The function c(s) is a non-increasing function of s.
• f(s) is the fixed cost, that is to say the cost that applies whatever the number of items sold during the same period as the one chosen for the variable cost. Nevertheless, f(s) may increase with regard to s if some additional facilities are necessary to pass a given production threshold. The function f(s) is a nondecreasing function of s. In practice, c(s) (respectively, f(s)) is either constant or piecewise-constant.
• R is the margin for the period considered.
• p is the price of one item.
1.5.2 Basic Relation

We assume that all the items produced are sold, which implies that the market exists and the price remains attractive to customers. Under this hypothesis, which is strong, Relation 1.2 is straightforward:

R = s p − s c(s) − f(s)     (1.2)
As we will see below, it is often necessary to compute the number of items to be sold in order to reach a given margin, knowing the price, the variable cost per item and the fixed cost. Let us consider two cases.

1.5.2.1 Both the Variable Cost per Item and the Fixed Cost are Constant

In this case, Equation 1.2 becomes:

R = s p − s c − f

which leads to Relation 1.3:

s = ⌈ ( R + f ) / ( p − c ) ⌉     (1.3)
Remember that ⌈a⌉ is the smallest integer greater than or equal to a. The ceiling applies because the number of items sold is an integer; note also that the price is always greater than the variable cost per item, so the denominator is positive. The margin being fixed, it is easy to see that the number of items to be sold is a decreasing function of the price and an increasing function of the variable cost per item.

1.5.2.2 At Least One of the Two Costs is Piecewise-constant
In this case, the series of positive integer numbers is divided into consecutive intervals for each cost, and the value of this cost is constant on each one of these intervals. To find the number of items to be sold, we consider all the pairs of intervals, a pair being made of an interval associated with the variable cost per item and an interval associated with the fixed cost. We use the costs corresponding to these intervals to apply Relation 1.3. If the resulting number of items to be sold belongs to both intervals, this number is a solution to the problem; otherwise another pair of intervals is tested. This approach is illustrated by the following example.

Example
We assume that the price of one item is 120 € and that the required value of the margin is 100 000 €. The fixed cost is 8000 € if the number of items sold is less than 2000 and 11 000 € if this number is greater than or equal to 2000. Similarly, the variable cost per item is 50 € if the number of items sold is less than 1000 and 40 € otherwise.
1. We first assume that the fixed cost is 8000 € and the variable cost per item is 50 €. Applying Relation 1.3 leads to:

s = ⌈ ( 100 000 + 8000 ) / ( 120 − 50 ) ⌉ = 1543

To be allowed to select 50 € as the variable cost per item, the number of items sold must be less than 1000, which is not the case. Thus, this solution is rejected.
2. We now assume that the fixed cost is 8000 € and the variable cost per item is 40 €. In this case, we obtain:

s = ⌈ ( 100 000 + 8000 ) / ( 120 − 40 ) ⌉ = 1350

This value is less than 2000, which fits with the fixed cost used, and greater than 1000, which corresponds to a variable cost per item of 40 €. This is the solution to the problem.
3. If we consider that the fixed cost is 11 000 € and the variable cost per item is 50 €, then we obtain s = 1586 items. This fits neither with the variable cost per item nor with the fixed cost. Therefore, this solution is rejected.
4. Finally, if the fixed cost is 11 000 € and the variable cost per item is 40 €, then s = 1388 items: this does not fit with the fixed cost. This solution is rejected.
To conclude, we have to sell 1350 items to reach the margin of 100 000 €.
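The interval-pair search described above can be sketched as follows (a minimal sketch; the function name and the interval encoding are ours):

```python
import math

# Costs are given as lists of (lower_bound, upper_bound, value);
# the upper bound is excluded and None stands for +infinity.
def items_to_sell(price, margin, fixed_intervals, variable_intervals):
    for f_lo, f_hi, f in fixed_intervals:
        for v_lo, v_hi, c in variable_intervals:
            s = math.ceil((margin + f) / (price - c))  # Relation 1.3
            fits_f = f_lo <= s and (f_hi is None or s < f_hi)
            fits_v = v_lo <= s and (v_hi is None or s < v_hi)
            if fits_f and fits_v:
                return s
    return None  # no pair of intervals is consistent

# Book example: price 120, margin 100 000,
# fixed cost 8000 below 2000 items then 11 000,
# variable cost 50 below 1000 items then 40.
s = items_to_sell(120, 100_000,
                  [(0, 2000, 8000), (2000, None, 11_000)],
                  [(0, 1000, 50), (1000, None, 40)])
print(s)  # -> 1350
```

The first pair tested (8000 €, 50 €) yields s = 1543, which violates the variable-cost interval, so the search moves on, exactly as in steps 1–2 of the example.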
1.5.3 Equilibrium Point

The equilibrium point is the minimum number of items to sell in order not to lose money. We simply have to assign the value 0 to R and apply Relation 1.3. We still assume that all the items that are produced are sold. We will examine how the equilibrium point evolves with regard to the price in the following example.

Example
We consider the case where c(s) = c = 120 €, f(s) = f = 100 000 €, and we assume that the price evolves from 200 € to 320 €. The data obtained by applying Relation 1.3 are collected in Table 1.1.

Table 1.1 Equilibrium point with regard to price

Price        200  210  220  230  240  250  260  270  280  290  300  310  320
Equil. point 1250 1112 1000 910  834  770  715  667  625  589  556  527  500
It is not surprising (see Equation 1.3) that the equilibrium point is a decreasing function of the price, and that the slope of the curve decreases as the price increases. The equilibrium point as a function of price is represented in Figure 1.2.
Figure 1.2 Equilibrium point as a function of price
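Table 1.1 can be reproduced with a few lines of code (a sketch; the function name is ours):

```python
import math

# Equilibrium (break-even) point: Relation 1.3 with R = 0,
# using the example data c = 120 EUR and f = 100 000 EUR.
def equilibrium_point(price, c=120, f=100_000):
    return math.ceil(f / (price - c))

table = {p: equilibrium_point(p) for p in range(200, 321, 10)}
print(table[200], table[240], table[320])  # -> 1250 834 500
```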
1.5.4 Items Sold with Regard to Price (Margin Being Constant)

We still assume that both the variable cost per item and the fixed cost are constant. In this case, Relation 1.3 holds. If we relax the integrality constraint, this relation becomes:

s = ( R + f ) / ( p − c )     (1.4)

Assume that the price increases by ε, which results in a variation of η in the number of items sold. Relation 1.4 becomes:

s + η = ( R + f ) / ( p + ε − c )     (1.5)

Subtracting (1.4) from (1.5), we obtain:

η = − ( R + f ) ε / [ ( p − c ) ( p + ε − c ) ]

Taking into account Relation 1.4, this equality can be rewritten as:

η = − s ε / ( p + ε − c )
In terms of ratio, we obtain:

η / s = − ( ε / p ) · [ p / ( p + ε − c ) ]     (1.6)

Assume that R, f and c are given. Let p_0 be a price and s_0 the corresponding number of items to be sold in order to reach the margin R. According to Relation 1.6:

η% = − ε% · p_0 / ( p_0 + ε − c )

where ε% is the percentage by which the price increases with regard to p_0 and η% the percentage by which the number of items sold decreases with regard to s_0, to reach a given margin R. Consider the function:

f(ε) = − p_0 / ( p_0 + ε − c )

This function is increasing and tends to 0 when ε tends to infinity. Furthermore, f(ε) is equal to −1 for ε = c, less than −1 when ε < c, and greater than −1 when ε > c. As a consequence, the decrease in the percentage of items sold is faster than the increase in the percentage of price as long as the price increase remains less than the variable cost per item. When the price increase becomes greater than the variable cost per item, the percentage of price increase is greater than the percentage of decrease in the number of items sold. Note that these remarks hold for a given margin and a given initial price. Remember also that we assume that the market exists or, in other words, that the market can absorb all items produced. Function f(ε) is represented in Figure 1.3.

Figure 1.3 Function f(ε), with value −1 at ε = c and intercept − p_0 / ( p_0 − c ) at ε = 0
Example
In this example, the variable cost per item is equal to 80 €, the initial price p_0 is equal to 200 € and the fixed cost is equal to 100 000 €. Both the variable cost per item and the fixed cost are constant. Table 1.2 provides the decrease in the percentage of the number of items sold according to the percentage increase in price.

Table 1.2 Effect of the price on the number of items sold

% of increase in price                     0   10    20   30    40   50    60   70    80    90   100
% of decrease in the number of items sold  0   14.3  25   33.3  40   45.4  50   53.8  57.1  60   62.5
The increase in price of 40% corresponds to an increase of 80 €, which is the variable cost per item. As mentioned before, we observe that the evolution of the difference between the percentage of increase in price and the percentage of decrease in the number of items sold is reversed starting from this point. In other words, the sign of this difference changes from this point.
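Table 1.2 follows directly from the ratio derived above; a small sketch (the function name is ours) that returns the magnitude of η%:

```python
# |eta%| = eps% * p0 / (p0 + eps - c), with the example data
# p0 = 200 EUR and c = 80 EUR.
def decrease_in_sales_pct(eps_pct, p0=200, c=80):
    eps = p0 * eps_pct / 100          # absolute price increase in EUR
    return eps_pct * p0 / (p0 + eps - c)

for eps_pct in (10, 40, 100):
    print(round(decrease_in_sales_pct(eps_pct), 1))  # -> 14.3, 40.0, 62.5
```

At a 40% price increase, ε = 80 € equals the variable cost per item, and the two percentages coincide, which is the sign change discussed above.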
1.6 Price Versus Sales Volume: the Selling Curve

1.6.1 Introduction

Customers intuitively compare the price of an item to the value they associate with it. According to a well-known axiom, customers do not buy items, they buy benefits or, in other words, they buy the promise of what the item will deliver. If the evaluation made by the customer is higher than the price of the item, then the customer will buy it; otherwise, he/she will not. If several types of items are equally attractive to the customer, he/she will buy the item with the biggest difference between his/her own evaluation and the price. Several approaches are available to fix the price of an item:
• cost-plus method;
• price testing;
• estimation made by experts;
• market analysis;
• survey.

We will examine these approaches in more detail.
1.6.2 Cost-plus Method

Several types of cost-plus methods are available, but the common thread of these methods is to:
1. Calculate the cost per item.
2. Introduce an additional amount that will be the profit. The profit can be a percentage of the cost or a fixed amount.
The cost per item can be calculated either by using a standard accounting method based on arbitrary expense categories for allocating overheads, or by deriving it from the resources used (i.e., linking cost to project), or by considering incremental cost. Retailers assume that the purchase price paid to their suppliers is the cost. The pivotal advantages of the cost-plus method are the following:
• The price is easy to calculate, which is of utmost importance when a huge number of prices must be established every day, for instance in volume retailing.
• The price is easy to manage.
• The method tends to stabilize the market.

The most important disadvantages are:
• Customers and competitors are ignored.
• Opportunity costs are ignored.

Cost-plus methods should be avoided since they ignore customers’ behavior as well as the parameters customers use to build their own evaluation. Defining a price requires analyzing the market and the behavior of customers with regard to the price. The objective is to establish a relationship between the number of items sold and the price. The curve that reflects this relationship is called the selling curve.
1.6.3 Price Testing

This approach consists of modifying the price of the item under consideration and recording the number of items sold or the market share. This can be done using a scale-model shop or a shop simulated on a computer. The testing should be done with different classes of customers, these classes being characterized by the age of the customers, their gender, their level of income and/or their buying habits, etc. This helps not only to define a price, but also to select the most profitable market niche. The price-testing method is even easier to apply if the item is sold via the Internet, since changing the price is straightforward. Unfortunately, in this case, customers cannot be completely identified, and thus the marketing strategy cannot take advantage of customers’ characteristics.
1.6.4 Estimation Made by Experts

This is the only available method when a new type of item is launched, when a fundamental evolution of the technology modifies an existing item, or when a drastic change appears in the competition. The first step of the method consists in recording the opinions of several experts (at least ten) who are not in contact with each other. One usually obtains selling curves that are quite different from each other. The objective of the next step is to organize working sessions with these experts in order to make their estimations/recommendations converge. The drawback of this method is the fact that customers do not intervene in the building of the selling curve, which increases the probability of error.
1.6.5 Market Analysis

This method is based on the history of the item under consideration. This implies that the item (or a similar item) has a history (i.e., is not new in the market) and that the current market environment is the same as (or similar to) the environment when the data were collected. Assume that the previous conditions hold and that the history provides quantities sold at different prices. Let p_i, i = 1, 2, …, n be the prices and s_i, i = 1, 2, …, n the corresponding quantities sold. The objective is to define a function s = a p + b that fits the data “at the best”. In other words, we compute a and b that minimize:

φ( a, b ) = Σ_{i=1}^{n} ( a p_i + b − s_i )²

This function is minimal for:

∂φ( a, b ) / ∂a = ∂φ( a, b ) / ∂b = 0

These conditions can be rewritten as:

Σ_{i=1}^{n} ( a p_i + b − s_i ) p_i = 0
Σ_{i=1}^{n} ( a p_i + b − s_i ) = 0

or:

a Σ_{i=1}^{n} p_i² + b Σ_{i=1}^{n} p_i = Σ_{i=1}^{n} s_i p_i
a Σ_{i=1}^{n} p_i + n b = Σ_{i=1}^{n} s_i

The solution of this linear system is:

a = [ n Σ s_i p_i − Σ s_i Σ p_i ] / [ n Σ p_i² − ( Σ p_i )² ]
b = [ Σ s_i Σ p_i² − Σ s_i p_i Σ p_i ] / [ n Σ p_i² − ( Σ p_i )² ]     (1.7)

where each sum runs over i = 1, …, n.
In Figure 1.4, we show how a set of 6 points denoted by A, B, C, D, E and F has been interpolated by a straight line representing the equation s = a p + b, the coefficients a and b being derived from the coordinates of the points using Relations 1.7.

Figure 1.4 Linear interpolation of a set of points
Relations 1.7 minimize the sum:

( AA′ )² + ( BB′ )² + ( CC′ )² + ( DD′ )² + ( EE′ )² + ( FF′ )²

where A′ (respectively, B′, C′, D′, E′ and F′) is the point of the straight line having the same abscissa as A (respectively, B, C, D, E and F).

Note that any set of points belonging to R² can be interpolated by a straight line, even if they are far from being organized around a straight line. We thus have to justify the use of a linear interpolation for a given set of points. This is done using the correlation coefficient r:

r = [ Σ s_i p_i − ( Σ s_i Σ p_i ) / n ] / √{ [ Σ p_i² − ( Σ p_i )² / n ] [ Σ s_i² − ( Σ s_i )² / n ] }     (1.8)

where each sum runs over i = 1, …, n. If the correlation coefficient is equal to +1, then the points ( p_i, s_i ) are on a straight line, and the slope of this line is positive. If r = −1, the points are on a straight line, but the slope of this line is negative. In practice, we assume that the interpolation of a set of points by a straight line is acceptable if the correlation coefficient is close to either −1 or +1.
Example
We propose an example with 10 points defined in Table 1.3.

Table 1.3 Number of items sold as a function of price

Price p_i                 4   5   6   8   9  10  11  13  14  15
Number of items sold s_i  22  27  33  44  48  53  60  66  72  80
We compute the correlation coefficient in order to verify that this set of points can be considered as distributed along a straight line. We apply Relation 1.8 and obtain r = 0.99777, which is very close to +1. As a consequence, we consider that a linear interpolation is applicable to this set of points. Applying Relations 1.7 leads to:

a = 5.0842912
b = 2.1992337
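Relations 1.7 and 1.8 can be checked numerically on the data of Table 1.3 (a minimal sketch; variable names are ours):

```python
from math import sqrt

# Data of Table 1.3
p = [4, 5, 6, 8, 9, 10, 11, 13, 14, 15]
s = [22, 27, 33, 44, 48, 53, 60, 66, 72, 80]
n = len(p)

sp, ss = sum(p), sum(s)
spp = sum(x * x for x in p)
sss = sum(x * x for x in s)
ssp = sum(x * y for x, y in zip(p, s))

a = (n * ssp - ss * sp) / (n * spp - sp ** 2)    # slope (Relation 1.7)
b = (ss * spp - ssp * sp) / (n * spp - sp ** 2)  # intercept (Relation 1.7)
r = (ssp - ss * sp / n) / sqrt((spp - sp ** 2 / n) * (sss - ss ** 2 / n))  # (1.8)

print(round(a, 7), round(b, 7), round(r, 5))
```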
1.6.6 Customer Surveying

Questioning customers about the price of an item was widely used in the 1960s. At that time, direct questions were asked such as, for instance:
• What is the maximal amount of money you are prepared to pay for this item?
• What is, in your opinion, the right price for this item?
• Would you buy this product at the price of x monetary units?

This approach is rarely used nowadays since it gives too much importance to the price, which is only one of the parameters customers are interested in. Prices can no longer be considered in isolation. This is why methods that try to evaluate the characteristics of interest to customers have been developed. One of them is the well-known conjoint measurement.
1.7 Conjoint Measurement

1.7.1 Introduction and Definitions

Conjoint measurement is becoming increasingly popular as a tool to find the characteristics of importance for customers in items (products or services). As a consequence, it provides information to increase the value added of items and to define market segments. Conjoint measurement starts by listing the parameters that are supposed to be of some importance for customers in the item under consideration. The characteristics of importance will be extracted from this list. For instance, if the item is a personal computer, we may have (using figures from recent years):
• A parameter related to the hard disk size. It takes the value 1 if the size belongs to [50 GB, 80 GB], the value 2 if the size belongs to (80 GB, 120 GB] and the value 3 if the size is greater than 120 GB.
• A parameter related to the memory size. It takes the value 1 if the size belongs to [256 MB, 512 MB] and 2 otherwise.
• A parameter related to the training. It takes the value 2 if the training is free and 1 otherwise.
• A parameter related to after-sales service. It takes the values 1, 2 or 3 depending on the type of service.

For this example, we have 3 × 2 × 2 × 3 = 36 sequences of four parameter values. Such a sequence is called a stimulus. Thus, this example gives birth to 36 stimuli.
Note that the value of a parameter does not reflect an assessment, but a choice. This point is important. Splitting the broad evaluation of an item among the values of its parameters implies that the overall evaluation of an item is the sum of the evaluations assigned to the values of the parameters. In other words, the utility function is assumed to be additive. This assumption is strong since it implies that parameters are disjoint (i.e., independent of each other from the point of view of customers’ perception of the item value). The evaluation of a parameter value is called the part-worth. We obtain a set of part-worths from each tester. It helps to answer the following questions:
• How to differentiate customers from each other? Customers that present close part-worths can be considered as similar (i.e., as belonging to the same group in terms of purchasing behavior).
• How to fix the price of an item? It is possible to introduce the price as one of the parameters and ask customers to select a price range among a set of prices (i.e., a set of parameter values).
• How will the market share evolve if the price changes?
• Which parameter value should be improved first in order to increase the perceived value of the item?

Conjoint measurement is used not only to evaluate the importance of the values of the parameters of products (i.e., the part-worths). It also applies to service quality, reliability, trademark reputation, etc. Each tester is asked to rank the stimuli in order of preference. Another possibility is to give a grade to each stimulus. The objective of the method is to derive from this ranking (or grading) the part-worths or, in other words, the evaluation of each one of the parameter values. The method that requires ranking (or grading) the whole set of stimuli is called the profile method. The drawback of this method is the number of stimuli to rank (or grade), since this number increases exponentially with the number of characteristics and their “values”. Assume, for instance, that an item has 5 parameters and 3 values for each, which is a medium-size problem. Then 3⁵ = 243 stimuli must be ranked (or graded), which is quite impossible in practice. The two-factor method tries to overcome this difficulty: the idea is to consider just two characteristics at a time and then to combine the information to derive the part-worths. We first consider the profile method.
1.7.2 Profile Method

Assume that m parameters have been selected for defining a given item and that parameter r ∈ { 1, 2, …, m } has N_r values denoted by { 1, 2, …, N_r }. The number of stimuli is:

N = ∏_{k=1}^{m} N_k

Let S be the set of stimuli; then card( S ) = N. A stimulus is a sequence { j_1, j_2, …, j_r, …, j_m } with j_r ∈ { 1, …, N_r } for r = 1, …, m. The rank (or the grade) assigned by a tester to stimulus { j_1, j_2, …, j_r, …, j_m } is denoted by G( j_1, j_2, …, j_r, …, j_m ). We also denote by w_{i,j_i}, j_i = 1, …, N_i and i = 1, …, m, the part-worth of the j_i-th “value” of parameter i, and by W = { w_{i,j_i} } the set of part-worths. Let:

U( W ) = Σ_{j_1=1}^{N_1} … Σ_{j_m=1}^{N_m} [ G( j_1, …, j_m ) − Σ_{i=1}^{m} w_{i,j_i} − a ]²

where a is the mean value of the ranks (or grades) assigned to the stimuli of S:

a = ( 1 / card( S ) ) Σ_{( j_1, j_2, …, j_m ) ∈ S} G( j_1, …, j_m )

The problem to solve is:

Minimize U( W )     (1.9)

subject to:

Σ_{j_i=1}^{N_i} w_{i,j_i} = 0 for i = 1, …, m     (1.10)

Minimizing U( W ) consists in minimizing the sum of the squares of the differences between the rank (or grade) of each stimulus and the sum of the part-worths associated with the components of the stimulus plus the mean rank (or grade). We thus apply the least-squares method. The solution of (1.9) is obtained by solving:

∂U( W ) / ∂w_{r,j_r} = 0 for r = 1, …, m and j_r = 1, …, N_r     (1.11)

under Constraints 1.10.
Relation 1.11 leads to:

Σ_{j_1=1}^{N_1} … Σ_{j_{r−1}=1}^{N_{r−1}} Σ_{j_{r+1}=1}^{N_{r+1}} … Σ_{j_m=1}^{N_m} { G( j_1, …, j_r, …, j_m ) − Σ_{i=1, i≠r}^{m} w_{i,j_i} − a } − w_{r,j_r} = 0

This equality can be rewritten as:

w_{r,j_r} = Σ_{j_1=1}^{N_1} … Σ_{j_{r−1}=1}^{N_{r−1}} Σ_{j_{r+1}=1}^{N_{r+1}} … Σ_{j_m=1}^{N_m} G( j_1, …, j_r, …, j_m ) − Σ_{i=1, i≠r}^{m} ( Σ_{j_i=1}^{N_i} w_{i,j_i} ) − ( ∏_{i=1, i≠r}^{m} N_i ) a

Taking into account Relations 1.10 and the definition of a, we obtain:

w_{r,j_r} = Σ_{j_1=1}^{N_1} … Σ_{j_{r−1}=1}^{N_{r−1}} Σ_{j_{r+1}=1}^{N_{r+1}} … Σ_{j_m=1}^{N_m} G( j_1, …, j_r, …, j_m ) − ( 1 / N_r ) Σ_{j_1=1}^{N_1} … Σ_{j_m=1}^{N_m} G( j_1, …, j_r, …, j_m )     (1.12)

for r = 1, …, m and j_r = 1, …, N_r.

The smaller the part-worth w_{r,j_r}, the more attractive the pair ( parameter r, “value” j_r ) to the customer.
Remark
As mentioned above, instead of using the ranks of the stimuli, we can use grades. It is possible to assign a grade (between 1 and 10, for instance) to each stimulus with the following rule: the greater the grade, the more attractive the stimulus. In this case, the greater the part-worth w_{r,j_r}, the more attractive the pair ( parameter r, “value” j_r ) to the customer.
Let G*( j_1, j_2, …, j_r, …, j_m ) be the approximation of the rank (or grade) of the stimulus { j_1, j_2, …, j_r, …, j_m } based on the part-worths. Then:

G*( j_1, j_2, …, j_r, …, j_m ) = Σ_{i=1}^{m} w_{i,j_i} + a, for { j_1, j_2, …, j_r, …, j_m } ∈ S     (1.13)

This formula states that the approximation of the grade is the sum of the related part-worths and the mean value of the grades.
Example
We consider three parameters for a car:
• the engine power: low = 1, medium = 2, high = 3;
• air bags: yes = 1, no = 2;
• type of car: sedan = 1, station wagon = 2.

The number of stimuli is 3 × 2 × 2 = 12. Each tester is required to assign a grade (between 1 and 20) to each stimulus. The grades given by a tester are gathered in Table 1.4.

Table 1.4 Stimuli evaluation

Power  Air bags  Type  Grade
1      1         1     14
1      1         2     12
1      2         1     9
1      2         2     7
2      1         1     16
2      1         2     14
2      2         1     11
2      2         2     9
3      1         1     20
3      1         2     18
3      2         1     15
3      2         2     13
Applying Relation 1.12, we obtain:

w_{1,1} = 42 − 158/3 ≈ −10.666 ; w_{1,2} = 50 − 158/3 ≈ −2.666 ; w_{1,3} = 66 − 158/3 ≈ 13.333

w_{2,1} = 94 − 79 = 15 ; w_{2,2} = 64 − 79 = −15
w_{3,1} = 85 − 79 = 6 ; w_{3,2} = 73 − 79 = −6

It is easy to verify that these results satisfy Constraints 1.10. The greatest part-worth being w_{2,1}, we can conclude that the most important factor for the tester is the presence of air bags in the car. The second largest part-worth being w_{1,3}, the second factor of importance is the power of the engine. Third, a sedan is preferred to a station wagon. To conclude, a sedan with air bags and a powerful engine is the most attractive car for the tester, assuming that the above three characteristics are the only characteristics of interest. As a consequence, the price of such a car can be fairly high. Furthermore, we can conclude that improving security is the activity that may result in the highest payback in the opinion of the tester.

In Table 1.5, we present the grades (already given in Table 1.4) and their ranking (in decreasing order of the grades). We also present the linear approximation of the grades based on part-worths (see Relation 1.13) and their ranking. We observe that the ranks of the grades and of their approximations are the same for the three highest grades and very similar or identical for the others. This reinforces the assumption that part-worths are effective. The reader should keep in mind that we consider here only the evaluation provided by one tester.

Table 1.5 Comparison of the stimuli evaluations with their linear approximations

Power  Air bags  Type  Grade G  Rank   G*     Rank
1      1         1     14       5/6    23.5   4
1      1         2     12       8      11.5   7
1      2         1     9        10/11  −6.5   10
1      2         2     7        12     −18.5  12
2      1         1     16       3      31.5   3
2      1         2     14       5/6    19.5   5
2      2         1     11       9      1.5    9
2      2         2     9        10/11  −10.5  11
3      1         1     20       1      47.5   1
3      1         2     18       2      35.5   2
3      2         1     15       4      17.5   6
3      2         2     13       7      5.5    8
1.7.3 Two-factor Method

The two-factor method consists in considering just two parameters at a time. Some definitions of conjoint measurement refer to this particular approach, for instance:

Conjoint measurement is a survey technique to measure the relative preference accorded to different parameter values of a product or service. Two possible parameters are presented to the testers at a time to simplify the process of choosing the greater preference. These multiple choices are then arrayed to establish a ranking (or grading) system for the various parameter values.

Assume that m parameters are under consideration. Then C_m^2 = m ( m − 1 ) / 2 tests will be conducted to generate the global part-worths. Consider, for instance, parameter r ∈ { 1, …, m } having N_r “values”. This parameter will be tested with parameters 1, …, r − 1, r + 1, …, m to obtain m − 1 part-worths for each parameter value j_r of r, denoted by w_{r,j_r}^1, w_{r,j_r}^2, …, w_{r,j_r}^{r−1}, w_{r,j_r}^{r+1}, …, w_{r,j_r}^m. We consider that the part-worth associated with the parameter value j_r of r is the mean value of the above part-worths:

w*_{r,j_r} = [ 1 / ( m − 1 ) ] Σ_{i=1, i≠r}^{m} w_{r,j_r}^i     (1.14)

It is easy to show that, if the grade associated with the stimulus ( j_r, j_k ) in this method is the sum of the grades of those stimuli of S of the profile method whose r-th component is j_r and whose k-th component is j_k, then w*_{r,j_r} = w_{r,j_r}, where the second member of the equality is the part-worth obtained with the profile method.

Example
We consider the problem introduced in the previous example. Assume that the data corresponding to the pair (power, air bags) are given in Table 1.6, the grades being generated from the grades of Table 1.4 as explained above. From these data we derive:

w_{1,1}^2 = 42 − 158/3 = −32/3 ; w_{1,2}^2 = 50 − 158/3 = −8/3 ; w_{1,3}^2 = 66 − 158/3 = 40/3

w_{2,1}^1 = 94 − 158/2 = 15 ; w_{2,2}^1 = 64 − 158/2 = −15
Table 1.6 Stimuli evaluation for the pair (power, air bags)

Power  Air bags  Grade
1      1         26
1      2         16
2      1         30
2      2         20
3      1         38
3      2         28
Relation 1.10 is verified: w_{1,1}^2 + w_{1,2}^2 + w_{1,3}^2 = 0 and w_{2,1}^1 + w_{2,2}^1 = 0.

Table 1.7 is generated in the same way as Table 1.6 for the pair of parameters (power, type).

Table 1.7 Stimuli evaluation for the pair (power, type)

Power  Type  Grade
1      1     23
1      2     19
2      1     27
2      2     23
3      1     35
3      2     31
The part-worths are:

w_{1,1}^3 = 42 − 158/3 = −32/3 ; w_{1,2}^3 = 50 − 158/3 = −8/3 ; w_{1,3}^3 = 66 − 158/3 = 40/3

w_{3,1}^1 = 85 − 158/2 = 6 ; w_{3,2}^1 = 73 − 158/2 = −6

Relation 1.10 is again verified. Table 1.8 is dedicated to the pair (air bags, type).
Table 1.8 Stimuli evaluation for the pair (air bags, type)

Air bags  Type  Grade
1         1     50
1         2     44
2         1     35
2         2     29
We obtain the following part-worths: w12,1 = 94 −
158 158 = 15 ; w12, 2 = 64 − = −15 2 2
w31,1 = 85 −
158 158 = 6 ; w31, 2 = 73 − =6 2 2
Applying Relation 1.14, we obtain:

w_{1,1}^* = (1/2) ( w_{1,1}^2 + w_{1,1}^3 ) = −32/3 ; w_{1,2}^* = (1/2) ( w_{1,2}^2 + w_{1,2}^3 ) = −8/3 ; w_{1,3}^* = (1/2) ( w_{1,3}^2 + w_{1,3}^3 ) = 40/3

w_{2,1}^* = (1/2) ( w_{2,1}^1 + w_{2,1}^3 ) = 15 ; w_{2,2}^* = (1/2) ( w_{2,2}^1 + w_{2,2}^3 ) = −15

w_{3,1}^* = (1/2) ( w_{3,1}^1 + w_{3,1}^2 ) = 6 ; w_{3,2}^* = (1/2) ( w_{3,2}^1 + w_{3,2}^2 ) = −6
These results are the same as those obtained by applying the profile method, but remember that the grades used in the two cases are “coherent” in the sense presented above. Note that the approaches presented up to now do not consider competition, which means that a provider (firm, company) fixes its prices independently of the other providers.
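The pairwise computations of this example can be reproduced with a short script (the helper function and its names are ours; the grade tables are those of Tables 1.6–1.8):

```python
from fractions import Fraction

def pair_part_worths(table, levels_a, levels_b):
    """Part-worths of the two parameters of one pairwise test.
    table[(i, j)] is the grade of the stimulus made of level i of the
    first parameter and level j of the second.  A level's part-worth is
    its total grade minus the mean of those totals, so the part-worths
    of each parameter sum to zero (Relation 1.10)."""
    row = [sum(table[(i, j)] for j in range(1, levels_b + 1))
           for i in range(1, levels_a + 1)]
    col = [sum(table[(i, j)] for i in range(1, levels_a + 1))
           for j in range(1, levels_b + 1)]
    total = Fraction(sum(row))
    return ([s - total / levels_a for s in map(Fraction, row)],
            [s - total / levels_b for s in map(Fraction, col)])

# Tables 1.6-1.8; parameter 1 = power, 2 = air bags, 3 = type
t16 = {(1, 1): 26, (1, 2): 16, (2, 1): 30, (2, 2): 20, (3, 1): 38, (3, 2): 28}
t17 = {(1, 1): 23, (1, 2): 19, (2, 1): 27, (2, 2): 23, (3, 1): 35, (3, 2): 31}
t18 = {(1, 1): 50, (1, 2): 44, (2, 1): 35, (2, 2): 29}

w1_vs2, w2_vs1 = pair_part_worths(t16, 3, 2)   # pair (power, air bags)
w1_vs3, w3_vs1 = pair_part_worths(t17, 3, 2)   # pair (power, type)
w2_vs3, w3_vs2 = pair_part_worths(t18, 2, 2)   # pair (air bags, type)

# Relation 1.14: mean of the m - 1 = 2 part-worths of each value
w1 = [(a + b) / 2 for a, b in zip(w1_vs2, w1_vs3)]   # power
w2 = [(a + b) / 2 for a, b in zip(w2_vs1, w2_vs3)]   # air bags
w3 = [(a + b) / 2 for a, b in zip(w3_vs1, w3_vs2)]   # type
print(w1, w2, w3)
```

The output reproduces the profile-method part-worths of the example: (−32/3, −8/3, 40/3) for the power, (15, −15) for the air bags and (6, −6) for the type.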
1.7.4 Clustering for Market Segmentation

1.7.4.1 Introduction

Using the profile method or the two-factor method leads to a set of part-worths for each tester. Thus, each tester can be represented by a point in the m-dimensional space, where m is the number of part-worths provided. Clustering is a technique that classifies the points corresponding to the testers into different groups (i.e., that partitions the sets of part-worths into clusters) so that the sets belonging to the same cluster represent testers (and thus customers) having a similar purchasing behavior. Thus, part-worths allow market segments to be defined: the common characteristics of the customers belonging to the same cluster define a segment. In fact, customers belonging to the same cluster do not have exactly identical characteristics (identical part-worths) but similar characteristics (close part-worths), and thus similar purchasing behavior. We will see in the next section how a unique set of part-worths can be assigned to each cluster. Each one of these sets will define the corresponding segment. To understand clustering and its use in defining market segments, we present K-mean analysis, which is a clustering algorithm.

1.7.4.2 K-mean Analysis
Let W_1, …, W_r be the r sets of m part-worths derived from the evaluation of r testers. Thus, W_i = { w_1^i, …, w_m^i } for i = 1, …, r. The K-mean analysis algorithm can be summarized as follows.
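The idea of K-mean analysis can be sketched in a few lines of Python (a standard K-means implementation of our own, not necessarily the exact steps of Algorithm 1.1; the part-worth vectors below are invented so that two purchasing-behavior groups are obvious):

```python
import math

def kmeans(points, k, iters=100):
    """Minimal K-mean analysis: partition part-worth vectors into k
    clusters.  Centroids are initialized, for simplicity, with the
    first k points; each centroid is then repeatedly moved to the
    mean of the points assigned to it."""
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # assignment step: each tester joins the nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # update step: move each centroid to the mean of its cluster
        new = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:     # converged
            break
        centroids = new
    return centroids, clusters

# six testers, two part-worths each (illustrative values)
testers = [(10.0, 1.0), (11.0, 0.5), (9.5, 1.5),
           (0.0, 8.0), (1.0, 9.0), (-0.5, 8.5)]
centroids, clusters = kmeans(testers, 2)
print([len(cl) for cl in clusters])
```

Each resulting centroid is the representative set of part-worths that defines the corresponding market segment.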
Algorithm 1.1.
1. Choose K ( K …

The benefit of A increases only if − m_A α + b_A x − α x > 0, which leads to:

x > m_A α / ( b_A − α )
For a given value of α ∈ [ 0, b_A ), any x such that the point ( x, α ) is above the curve of Figure 1.5, which represents x as a function of α, leads to an increase of the benefit of A, and this increase is equal to − m_A α + b_A x − α x. If the point ( x, α ) is below the curve, the benefit of A decreases. For any value of α, the benefit of B decreases by x b_B.

Figure 1.5 The frontier curve when A decreases his/her price (x as a function of α; the curve has a vertical asymptote at α = b_A)
To conclude, cutting prices in a duopoly or oligopoly market is often a mistake: customers are reluctant to switch to the provider who reduces his/her prices, since they know that their current provider will follow the move and reduce his/her prices too.
1.8 Price Strategy in Oligopoly Markets
Thus, the best strategy for all the providers is to maintain their prices, unless a company is strong enough to expel the other competitors from the market. Nevertheless, price wars have often happened in the past 20 years, particularly in the food and computer industries, or among Internet access companies and airlines, to quote just a few. A price war usually originates in:

• Surplus production capacity, which incites companies to reduce their inventory level at “any” price.
• Production of basic products that are easy to manufacture, which encourages aggressive competition.
• Persistence of a low-growth market, which incites some providers to try gaining market share at the expense of competitors.
• Management style.

This list is not exhaustive. An important question is: how can price wars be avoided? Or, in other words, how can the aggressiveness of competitors be avoided when cutting prices? One strategy is to differentiate the products (market segmentation) and to reduce the prices in segments that are not important to competitors. Another strategy is to target products that do not attract competitors.
1.8.3 Increasing Prices

A similar evolution may be observed if one of the providers increases his/her prices. The results of such a strategy are summarized in Table 1.11.

Table 1.11 Price increases

                                     Provider B keeps prices constant                              Provider B increases prices
Provider A keeps prices constant     The revenues of A and B remain constant                       The revenue of A increases. The revenue of B may decrease.
Provider A increases prices          The revenue of A may decrease. The revenue of B increases.    Both revenues increase
Thus, the provider who increases his/her prices is not certain to increase the benefit, but the benefit of the competitor will certainly increase. If the competitor does not change his/her prices, the move may be dangerous, but if the competitor increases his/her prices, then both providers increase their benefits.
Example

We consider only one type of item. We use the same notation as in the previous example, except that α now represents the price increase decided by provider A and x is the number of clients that migrate from A to B. Provider B keeps his/her price unchanged. In this situation:

R_A^1 = ( m_A − x ) ( b_A + α ) = R_A^0 + m_A α − b_A x − x α

and

R_B^1 = ( m_B + x ) b_B = R_B^0 + x b_B

The benefit of B increases, but the benefit of A decreases only if m_A α − b_A x − x α < 0, which can be rewritten as:

x > m_A α / ( b_A + α )
For a given value of α ∈ [ 0, +∞ ), any x such that the point ( x, α ) is above the curve of Figure 1.6, which represents x = m_A α / ( b_A + α ), leads to a cut of the benefit of A; the change in the benefit equals m_A α − b_A x − α x, which is negative above the curve. If the point ( x, α ) is below the curve, the benefit of A increases.

Figure 1.6 The frontier curve when A increases his/her price (x as a function of α; the curve tends to the horizontal asymptote x = m_A as α grows)
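To make the threshold concrete, here is a small numeric sketch (the market figures m_A, b_A and α are invented for illustration):

```python
def revenue_change_A(m_A, b_A, alpha, x):
    """Change of A's revenue when A raises its price by alpha and x
    clients migrate to B: R_A^1 - R_A^0 = m_A*alpha - b_A*x - x*alpha."""
    return m_A * alpha - b_A * x - x * alpha

def migration_threshold(m_A, b_A, alpha):
    """Migration level x above which A's price increase stops paying off."""
    return m_A * alpha / (b_A + alpha)

m_A, b_A, alpha = 1000, 50.0, 5.0      # illustrative market data
x_star = migration_threshold(m_A, b_A, alpha)
print(x_star)                                   # about 90.9 migrating clients
print(revenue_change_A(m_A, b_A, alpha, 50))    # x below the curve: gain
print(revenue_change_A(m_A, b_A, alpha, 150))   # x above the curve: loss
```

With these numbers, A gains revenue as long as fewer than about 91 of its 1000 clients defect, which illustrates why the move is safe only when customers are unlikely to switch.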
To conclude, if a provider increases their prices, the result will be a global increase of benefits if all competitors increase their prices accordingly. But some competitors may decide to keep their prices unchanged, which would result in a cut of benefits for the providers who increased their prices. A quite common strategy to avoid this problem is to apply the well-known signal theory. In this approach, one of the competitors gives notice, a long time in advance, of their willingness to increase a price (justified by some novelty or technical improvement). This strategy also applies when the goal is to reduce a price. In doing so, competitors have time to let it be known whether they are ready to follow the move or not.
1.9 Conclusion

Please consider this chapter as an introduction to pricing. We concentrated the explanations mostly on monopoly systems with myopic customers who do not “play” with providers, i.e., who are not aware of providers’ strategies. Note that this kind of market is the one that most of the efficient models deal with, probably because the management of monopoly systems does not involve psychological components.

After showing the importance of pricing in increasing revenue, we provided the most common definitions in use in the field. Several sections dealt with pricing strategies: high- and low-price strategies, price discrimination, discounts, price skimming, penetration pricing and revenue management. Each one of these strategies targets a particular objective and applies to certain situations.

A significant section took into account the relationship between the margin and the pair (cost, quantity sold). The objective was to explain the mechanism that links these parameters. Section 1.6 examined the selling curve that establishes the relationship between price and volume sold. Since we were still in a monopoly environment, only the provider and the customers are concerned. In this section, several methods were listed, from the well-known cost-plus method (which is not recommended since it ignores customers’ buying behavior) to market analysis (which, in some sense, assumes steady customer behavior).

Conjoint measurement was introduced in Section 1.7. It allows the characteristics of a product (or service) that matter to the testers to be revealed; this, in turn, allows market segments to be defined using K-mean analysis. Finally, assigning a price per unit in each market segment can be done using one of the strategies mentioned above.

The last section was dedicated to oligopoly and duopoly markets, and thus focused on competitors’ reactions when a retailer decreases or increases a price.
References

Govil M, Proth J-M (2002) Supply Chain Design and Management: Strategic and Tactical Perspectives. Academic Press, San Diego, CA
Talluri K, Van Ryzin G (2004) Revenue management under a general discrete dedicated choice model of consumer behavior. Manag. Sci. 50(1):15–33
Further Reading

Aggarwal P, Vaidyanathan R (2003) Eliciting online customers’ preferences: Conjoint vs. self-explicated attribute-level measurements. J. Mark. Manag. 19(1–2):157–177
Ailawadi K, Neslin SA (1998) The effect of promotion on consumption: Buying more and consuming it faster. J. Mark. Res. 35:390–398
Allenby GM, Arora N, Ginter JL (1998) On the heterogeneity of demand. J. Mark. Res. 35:384–389
Allenby G, Rossi P (1999) Marketing models of consumer heterogeneity. J. Econometrics 89:57–78
Anderson E, Coughlin AT (1987) International market entry and expansion via independent or integrated channels of distribution. J. Mark. 51:71–82
Benoit JP, Krishna V (1987) Dynamic duopoly: Prices and quantities. Rev. Econ. Stud. 54:23–35
Belobaba PP (1987) Airline yield management: An overview of seat inventory control. Transp. Sci. 21:63–73
Belobaba PP, Wilson JL (1997) Impacts of yield management in competitive airline markets. J. Air Transp. Manag. 3:3–9
Bemmaor AC, Mouchoux D (1991) Measuring the short-term effect of in-store promotion and retail advertising on brand sales: A factorial experiment. J. Mark. Res. 28:202–214
Bernstein F, Federgruen A (2003) Pricing and replenishment strategies in a distribution system with competing retailers. Oper. Res. 51:409–426
Berry S (1994) Estimating discrete-choice models of product differentiation. RAND J. Econ. 25:242–262
Blattberg RC, Neslin SA (1990) Sales Promotion: Concepts, Methods and Strategies. Prentice-Hall, Englewood Cliffs, NJ
Bodily SE, Weatherford LR (1995) Perishable-asset revenue management: Generic and multiple-price yield management with diversion. Omega 23:173–185
Bolton RN (1989) The relationship between market characteristics and promotional price elasticities. Mark. Sci. 8:153–169
Bulow JI (1982) Durable-goods monopolists. J. Polit. Econ. 90:314–332
Campbell MC (1999) Perceptions of price unfairness: Antecedents and consequences. J. Mark. Res. 36:187–199
Cattin P, Wittink DR (1982) Commercial use of conjoint analysis: A survey. J. Mark. 46:44–53
Chintagunta P (2002) Investigating category pricing behavior at a retail chain. J. Mark. Res. 39(2):141–154
Cross RG (1995) An introduction to revenue management. In: Jenkins D (ed) The Handbook of Airline Economics. The Aviation Weekly Group of the McGraw-Hill Companies, New York, NY
Cross RG (1998) Revenue Management: Hardcore Tactics for Market Domination. Broadway Books, Bantam, Doubleday, Dell Publishing Group, New York, NY
Dana JD (1998) Advance-purchase discounts and price discrimination in competitive markets. J. Polit. Econ. 106:395–422
Dolan RJ, Jeuland AP (1981) Experience curves and dynamic demand models: Implications for optimal pricing strategies. J. Mark. 45:52–62
Gallego G, Van Ryzin GJ (1997) A multi-product dynamic pricing problem and its applications to network yield management. Oper. Res. 45(1):24–41
Greenleaf EA (1995) The impact of reference price effects on the profitability of price promotions. Mark. Sci. 14:82–104
Harrigan KR (1986) Matching vertical integration strategies to competitive conditions. Strat. Manag. J. 7:535–555
Helfat C, David T (1987) Vertical integration and risk reduction. J. Law Econ. Organ. 3:47–68
Johnson RE (2001) The role of cluster analysis in assessing comparability under the U.S. transfer pricing regulations. Bus. Econ., April 1
John G, Barton AW (1988) Forward integration into distribution: An empirical test of transaction cost analysis. J. Law Econ. Organ. 4(2):337–356
Kadiyali V, Chintagunta P, Vilcassim N (2000) Manufacturer-retailer channel interactions and implications for channel power: An empirical investigation of pricing in a local market. Mark. Sci. 19(2):127–148
Kalish C (1983) Monopolist pricing with dynamic demand and production costs. Mark. Sci. 2:135–159
Kannan PK, Kopalle PK (2001) Dynamic pricing on the Internet: Importance and implications for consumer behaviour. Int. J. Electr. Comm. 5(3):63–83
Kirman A, Sobel M (1974) Dynamic oligopoly with inventories. Econometrica 42(2):279–287
Klein B, Frazer GL, Roth VJ (1990) A transaction cost analysis model of channel integration in international markets. J. Mark. Res. 27:196–208
Kumar V, Leone RP (1988) Measuring the effect of retail store promotions on brand and store substitution. J. Mark. Res. 25:178–185
Lau AHL, Lau HS (1988) The Newsboy problem with price dependent demand distribution. IIE Trans. 20:168–175
Leloup B, Deveaux L (2001) Dynamic pricing on the Internet: Theory and simulations. J. Electr. Comm. Res. 1(3):265–276
MacMillan IC, Hambrick DC, Pennings JM (1986) Uncertainty reduction and the threat of supplier retaliation: Two views of the backward integration decision. Organ. Stud. 7:263–278
McGill JI, Van Ryzin GJ (1999) Revenue management: Research overview and prospects. Transp. Sci. 33(2):233–256
Pasternack BA (1985) Optimal pricing and return policies for perishable commodities. Mark. Sci. 4(2):166–176
Petruzzi NC, Dada M (2002) Dynamic pricing and inventory control with learning. Nav. Res. Log. Quart. 49:304–325
Reibstein DJ, Gatignon H (1984) Optimal product line pricing: The influence of elasticities and cross elasticities. J. Mark. Res. 21:259–267
Salop S, Stiglitz JE (1982) The theory of sales: A simple model of equilibrium price dispersion with identical agents. Amer. Econ. Rev. 72(5):1121–1130
Weatherford LR, Bodily SE (1992) A taxonomy and research overview of perishable-asset revenue management: Yield management, overbooking, and pricing. Oper. Res. 40:831–844
Chapter 2
Dynamic Pricing Models
Abstract In this chapter, some pricing models are presented that are characterized by the following assumptions: (i) the number of potential customers is not limited, and as a consequence, the size of the population is not a parameter of the model, (ii) only one type of item is concerned, (iii) a monopoly situation is considered, and (iv) customers buy items as soon as the price is less than or equal to the price they are prepared to pay (myopic customers). A deterministic model with time-dated items is presented and illustrated first. To build this model, the relationship between the price per item and demand has to be established. Then, the stochastic version of the same model is analyzed. A Poisson process generates customers’ arrivals. Finally, a stochastic model with salvage value where the price is a function of inventory level is considered. Detailed algorithms, numerical examples and figures are provided for each model. These models provide practical insights into pricing mechanisms.
2.1 Introduction

Any dynamic pricing model requires establishing how demand responds to changes in price. This chapter is dedicated to mathematical models of monopoly systems. The reader will notice that strong assumptions are made to obtain tractable models. Indeed, such mathematical models can hardly represent real-life situations, but they do illustrate the relationship between price and customers’ purchasing behavior.

In this chapter we consider the case of time-dated items, i.e., items that must be sold before a given point in time, say T. Furthermore, there is no supply option before time T. This situation is common in the food industry, the toy business (when toys must be sold before Christmas, for instance), marketing products (products associated with special events like movies, football matches, etc.), fashion apparel
(because the selling period ends when a season finishes), airplane tickets (which are obsolete when the plane takes off), to quote just a few.

The goal is to find a strategy (dynamic pricing, also called yield management or revenue management) that leads to the maximum expected revenue by time T, assuming that the process starts at time 0. This strategy consists in selecting a set of adequate prices for the items that vary according to the number of unsold items and, in some cases, to the time. At a given point in time, we assume that the price of an item is a non-increasing function of the inventory level. For a given inventory level, prices go down over time.

Indeed, a huge number of models exist, depending on the situation at hand and the assumptions made to reach a workable model. For instance, selling airplane tickets requires a pricing strategy that leads to very cheap tickets as takeoff nears, while selling fashion apparel is less constrained since a second market exists, i.e., it is still possible to sell these items at a discount after the deadline.

For the models presented in this chapter, we assume that:

• The number of potential customers is infinite. As a consequence, the size of the population does not belong to the set of parameters of the models.
• A single type of item is concerned and its sales are not affected by other types of items.
• We are in a monopoly situation, which means that there is no competition with other companies selling the same type of item. Note that, due to price discrimination, a company can be monopolistic in one segment of the population while other companies sell the same type of item, with slight differences, to other segments. This requires a sophisticated fencing strategy that prevents customers from moving to a cheaper segment.
• Customers are myopic, which means that they buy as soon as the price is less than the one they are prepared to pay.
Strategic customers who optimize their purchasing behavior in response to the pricing strategy of the company are not considered in this chapter; game theory is used when strategic customers are concerned. To summarize, this chapter provides an insight into mathematical pricing models. Note also that few convenient models exist without the assumptions presented above, that is to say a monopoly situation, an infinite number of potential customers who are myopic and no supply option. The reader will also observe that negative exponential functions are often used to make the model manageable and few persuasive arguments are proposed to justify this choice: this is why we consider that most of these models are more useful to understand dynamic pricing than to treat real-life situations.
2.2 Time-dated Items: a Deterministic Model

2.2.1 Problem Setting

In this model, we know the initial inventory s_0: it is the maximal quantity that can be sold by time T. We assume that demands appear at times 1, 2, …, T, and x_t represents the demand at time t. The demands are real and positive. The price of one item at time t is denoted by p_t, and this price is a function of the demand and the time: p_t = p( x_t, t ). We assume that there exists a one-to-one relationship between demand and price at any time t ∈ { 1, 2, …, T }. Thus, x_t = x( p_t, t ) is the relation that provides the demand when the price is fixed. We also assume that:

• x( p, t ) is continuously differentiable with regard to p.
• x( p, t ) is lower and upper bounded and tends to zero as p tends to its maximal value.

Finally, the problem can be expressed as follows:

Maximize Σ_{t=1}^{T} p_t x_t    (2.1)

subject to:

Σ_{t=1}^{T} x_t ≤ s_0    (2.2)

x_t ≥ 0 for t ∈ { 1, 2, …, T }    (2.3)

x_t ≤ x( p_t^min, t ) for t ∈ { 1, 2, …, T }    (2.4)

where p_t^min is the minimal value of p_t. Criterion 2.1 means that the objective is to maximize the total revenue. Constraint 2.2 guarantees that the total demand at horizon T does not exceed the initial inventory. Constraints 2.3 are introduced to make sure that demands are never less than zero. Finally, Constraints 2.4 provide the upper bound of the demand at any time.
2.2.2 Solving the Problem: Overall Approach

To solve this problem, we use the Kuhn and Tucker approach based on Lagrange multipliers. Since p_t is a function of x_t, p_t x_t is also a function of x_t. Taking into account the constraints of the problem, the Lagrangian is:

L( x_1, …, x_T, λ, μ_1, …, μ_T, l_1, …, l_T ) = Σ_{t=1}^{T} p( x_t, t ) x_t − λ ( Σ_{t=1}^{T} x_t − s_0 ) + Σ_{t=1}^{T} μ_t x_t − Σ_{t=1}^{T} l_t ( x_t − x( p_t^min, t ) )    (2.5)

The goal is to solve the T equations:

∂L / ∂x_t = 0 for t ∈ { 1, 2, …, T }    (2.6)

together with the 2T + 1 complementary slackness conditions:

λ ( Σ_{t=1}^{T} x_t − s_0 ) = 0    (2.7)

μ_t x_t = 0 and l_t ( x_t − x( p_t^min, t ) ) = 0 for t ∈ { 1, 2, …, T }    (2.8)

Thus, we have 3T + 1 equations for the 3T + 1 unknowns, which are x_1, …, x_T, λ, μ_1, …, μ_T, l_1, …, l_T.

A solution to the system of Equations 2.6–2.8 is admissible if λ ≥ 0, μ_t ≥ 0, l_t ≥ 0 and if Inequalities 2.2–2.4 hold. Note that, due to Relations 2.7 and 2.8:

λ = 0 and/or Σ_{t=1}^{T} x_t = s_0
μ_t = 0 and/or x_t = 0 for t ∈ { 1, 2, …, T }
l_t = 0 and/or x_t = x( p_t^min, t ) for t ∈ { 1, 2, …, T }
2.2.3 Solving the Problem: Example for a Given Price Function

We consider the case:

p( x_t, t ) = ( A − B x_t ) D / ( D + t )

where A, B and D are positive constants. As a consequence:

x( p_t, t ) = (1/B) ( A − p_t ( D + t ) / D )

As we can see:

• The price is a decreasing function of t.
• The demand must remain less than A/B, otherwise the price would become negative.

The problem to be solved is (see (2.1)–(2.4)):

Maximize Σ_{t=1}^{T} ( A − B x_t ) x_t D / ( D + t )

subject to:

Σ_{t=1}^{T} x_t ≤ s_0

x_t ≥ 0 and x_t ≤ A/B for t ∈ { 1, 2, …, T }

The last constraints guarantee that prices remain greater than or equal to zero. In this case, the Lagrangian is:

L( x_1, …, x_T, λ, μ_1, …, μ_T, l_1, …, l_T ) = Σ_{t=1}^{T} ( A x_t − B x_t² ) D / ( D + t ) − λ ( Σ_{t=1}^{T} x_t − s_0 ) + Σ_{t=1}^{T} μ_t x_t − Σ_{t=1}^{T} l_t ( x_t − A/B )
According to Relations 2.6–2.8, the system of equations to solve is:

( A − 2 B x_t ) D / ( D + t ) − λ + μ_t − l_t = 0 for t ∈ { 1, 2, …, T }    (2.9)

λ ( Σ_{t=1}^{T} x_t − s_0 ) = 0    (2.10)

μ_t x_t = 0 for t ∈ { 1, 2, …, T }    (2.11)

l_t ( x_t − A/B ) = 0 for t ∈ { 1, 2, …, T }    (2.12)

Whatever t ∈ { 1, 2, …, T }, x_t is either equal to 0, or to A/B, or belongs to ( 0, A/B ) (which represents the interval without its limits). This third option is justified as follows. If neither of the first two options holds, then μ_t = l_t = 0 and Relation 2.9 becomes:

( A − 2 B x_t ) D / ( D + t ) − λ = 0

Let us first assume that x_t = A/B. In this case, μ_t = 0 and Equality 2.9 becomes:

( A − 2 A ) D / ( D + t ) = λ + l_t

The first member of this equality is negative, while the second member is greater than or equal to 0, since both λ and l_t must be greater than or equal to 0 for the solution to be admissible. As a conclusion, x_t cannot be equal to A/B, and therefore, see (2.12), l_t = 0 whatever t.

Let us now assume that x_t = 0. In this case, and keeping in mind that l_t = 0, Equality 2.9 becomes:

A D / ( D + t ) − λ + μ_t = 0, or λ = A D / ( D + t ) + μ_t > 0

Thus, according to (2.10), Σ_{t=1}^{T} x_t = s_0.

Finally, assume that x_t ∈ ( 0, A/B ). In this case, μ_t = l_t = 0 and λ = ( A − 2 B x_t ) D / ( D + t ). As a consequence, x_t ∈ ( 0, A/2/B ] and

x_t = ( 1 / (2B) ) ( A − λ ( D + t ) / D )
We have to consider two cases:

1. If x_t = A/2/B then, according to Equations 2.9 and 2.10:

λ = 0 and Σ_{t=1}^{T} x_t ≤ s_0    (2.13)

2. If x_t ∈ ( 0, A/2/B ), then, according to Equations 2.9 and 2.10:

λ > 0 and Σ_{t=1}^{T} x_t = s_0    (2.14)

Let Y = { t | t ∈ { 1, 2, …, T }, x_t > 0 } and let N_Y be the number of elements of Y. From (2.13) and (2.14) it appears that:

• If T × A/2/B ≤ s_0, then x_t = A/2/B for t ∈ { 1, 2, …, T } is an admissible solution.
• If N_Y × A/2/B ≥ s_0, then Σ_{t=1}^{T} x_t = s_0. Since x_t = ( 1 / (2B) ) ( A − λ ( D + t ) / D ) when x_t > 0, the equality Σ_{t=1}^{T} x_t = s_0 becomes Σ_{t∈Y} { ( 1 / (2B) ) ( A − λ ( D + t ) / D ) } = s_0 and

λ = ( N_Y D A − 2 B D s_0 ) / ( N_Y D + Σ_{t∈Y} t )

Finally:

x_t = ( 1 / (2B) ) ( A − ( ( N_Y D A − 2 B D s_0 ) / ( N_Y D + Σ_{t∈Y} t ) ) ( D + t ) / D ) for t ∈ Y    (2.15)

We derive an algorithm from the above results.
Algorithm 2.1.

1. If T × A/2/B ≤ s_0, then x_t = A/2/B for t ∈ { 1, 2, …, T }; compute the criterion C* = Σ_{t=1}^{T} ( A − B x_t ) x_t D / ( D + t ) and set x*_t = x_t for t ∈ { 1, 2, …, T }. Otherwise set C* = 0.
2. If T × A/2/B ≥ s_0, then for all sequences Y = [ y_1, y_2, …, y_T ], where y_t = 0 or 1:
   2.1. Set x_t = 0 if y_t = 0, or compute x_t using (2.15) if y_t = 1.
   2.2. If ( x_t < 0 or x_t > A/2/B ) for at least one t ∈ { 1, 2, …, T }, then go to the next sequence Y. Otherwise, compute C = Σ_{t | y_t = 1} ( A − B x_t ) x_t D / ( D + t ).
   2.3. If C > C*:
        2.3.1. Set C* = C.
        2.3.2. Set x*_t = x_t for t ∈ { 1, 2, …, T }.
3. The solution of the problem is { x*_t }_{t=1,2,…,T} and C* contains the optimal value.

This algorithm consists of computing the value of the criterion for each of the feasible solutions and keeping the solution with the greatest value of the criterion. Indeed, this approach can be applied only to problems of reasonable size, since the number of feasible solutions is upper bounded by 2^T.

Numerical Illustrations

We present 3 examples. Demands and prices are rounded and T = 10. They are listed in increasing order of time.

Example 1

A = 200, B = 10 and D = 10
Initial inventory level: 150
Demands: 10, 10, 10, 10, 10, 10, 10, 10, 10, 10
Prices (per item): 90.91, 83.33, 76.92, 71.43, 66.67, 62.5, 58.82, 55.56, 52.63, 50
Total demand: 100
Revenue: 6687.71

Example 2

A = 500, B = 5 and D = 10
Initial inventory level: 150
Demands: 25.56, 23.33, 21.11, 18.89, 16.67, 14.44, 12.22, 10.0, 7.78, 0
Prices (per item): 338.38, 319.44, 303.42, 289.68, 277.78, 267.36, 258.17, 250.0, 242.69, 0
Total demand: 150
Revenue: 44 013.1

Example 3

A = 500, B = 10 and D = 2
Initial inventory level: 200
Demands: 22.67, 22.56, 22.44, 22.33, 22.22, 22.11, 22.0, 21.89, 21.78, 0
Prices (per item): 260.32, 249.49, 239.61, 230.56, 222.22, 214.53, 207.41, 200.79, 194.64, 0
Total demand: 200
Revenue: 44 933.7
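Algorithm 2.1 is easy to implement directly; the sketch below (our own code, with T = 10 it explores at most 2^10 sequences) reproduces Example 1, where the unconstrained optimum applies, and for Example 2 it attains at least the reported revenue:

```python
from itertools import product

def algorithm_2_1(A, B, D, T, s0):
    """Enumerative solution of the deterministic time-dated-item model
    (a sketch of Algorithm 2.1).  Returns (x_1..x_T, revenue C*)."""
    def revenue(x):
        return sum((A - B * xt) * xt * D / (D + t)
                   for t, xt in zip(range(1, T + 1), x))
    # Step 1: unconstrained optimum x_t = A/(2B), if inventory allows it
    if T * A / 2 / B <= s0:
        x = [A / (2 * B)] * T
        return x, revenue(x)
    best_x, best_C = None, 0.0
    # Step 2: enumerate the 2^T zero/non-zero patterns Y
    for y in product((0, 1), repeat=T):
        NY = sum(y)
        if NY == 0:
            continue
        sum_t = sum(t for t, yt in zip(range(1, T + 1), y) if yt)
        lam = (NY * D * A - 2 * B * D * s0) / (NY * D + sum_t)
        x = [(A - lam * (D + t) / D) / (2 * B) if yt else 0.0   # Eq. 2.15
             for t, yt in zip(range(1, T + 1), y)]
        if any(xt < 0 or xt > A / (2 * B) for xt in x):
            continue                      # step 2.2: infeasible pattern
        C = revenue(x)
        if C > best_C:                    # step 2.3
            best_x, best_C = x, C
    return best_x, best_C

x1, C1 = algorithm_2_1(A=200, B=10, D=10, T=10, s0=150)  # Example 1
x2, C2 = algorithm_2_1(A=500, B=5, D=10, T=10, s0=150)   # Example 2
print(round(C1, 2))   # 6687.71, with x_t = 10 for every t
print(round(C2, 1))
```

Since every feasible pattern is evaluated, the enumeration necessarily returns a revenue at least as large as the one produced by any particular pattern, such as the one listed in Example 2.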
2.2.4 Remarks

Three remarks can be made concerning this model:

• The main difficulty consists in establishing the deterministic relationship between the demand and the price per item. In fact, establishing such a relationship is a nightmare. Several approaches are usually used to reach this objective. One of them is to carry out a survey among a large population, asking customers the price they are prepared to pay for one item. Let n be the size of the population and s_p the number of customers who are prepared to pay p or more for one item; then s_p / n is the proportion of customers who will buy at price p. Then, evaluating at k the number of customers who demonstrate some interest in the item, we can consider that the demand is k × s_p / n when the price is p. Another approach is to design a “virtual shop” on the Internet and to play with potential customers to extract the same information as before. This is particularly efficient for products sold via the Internet. Ebay and other auction sites can often provide this initial function for price and demand surveying.
• In the model developed in this section, demands and prices are continuous. The problem becomes much more complicated if demands are discrete. Linear interpolation is usually enough to provide a near-optimal solution.
• In this model, we also assumed that the value of one item equals zero after time T. We express this situation by saying that there is no salvage value.
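The survey estimate of the first remark (demand ≈ k × s_p / n) can be sketched as follows; the survey answers and the market size k are invented for illustration:

```python
def demand_estimate(stated_prices, k, p):
    """Estimate the demand at price p from a survey: stated_prices[i]
    is the maximum price respondent i declares being prepared to pay,
    and k is the estimated number of interested customers."""
    s_p = sum(1 for w in stated_prices if w >= p)  # would pay p or more
    return k * s_p / len(stated_prices)

survey = [120, 95, 80, 150, 60, 110, 70, 130, 90, 100]  # n = 10 answers
print(demand_estimate(survey, k=5000, p=100))   # 5 of 10 accept -> 2500.0
```

Evaluating the estimate over a grid of prices yields an empirical (non-increasing) selling curve that can feed the deterministic model.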
2.3 Dynamic Pricing for Time-dated Products: a Stochastic Model

In this model, we assume that there is no salvage value, i.e., that the value of an item equals zero at time T.
We are in the case of imperfect competition, which means that the vendor has the monopoly of the items. The monopoly could be the consequence of a specificity of the items that requires a very special know-how, a technological special feature, a novelty or the existence of item differentiation that results in a very large spectrum of similar items. In the case of imperfect competition, customers respond to the price. Furthermore, this model is risk-neutral, which means that the objective is only to maximize the expected revenue at time T, without taking into account the risk of poor performance. This kind of model applies when the number of problem instances is large enough to annihilate risk, which is a consequence of the “large number” statistical rule. These hypotheses are the same as those introduced in the previous model. The differences will appear in the next subsection. This approach is presented in detail in (Gallego and Van Ryzin, 1994).
2.3.1 Problem Considered

To make the explanation simple, consider that possible customers appear at random. Each customer buys an item, or not, depending on the price and the maximum amount of money they are prepared to pay for it. We assume that a Poisson process generates the arrival of the customers.¹ Let δ be an “infinitely small” increment of time t. The probability that a customer appears during the period [ t, t + δ ) is λ δ, and at most one customer can appear during this period. In this model, we assume that λ is constant. In particular, λ depends neither on time nor on the number of unsold items. In other words, the arrival process of customers is steady.

After arriving in the system, a customer may buy an item. As mentioned before, this decision depends on the price of the item and the amount of money they are prepared to pay for it. We denote by f ( p ) the probability density reflecting the fact that a customer is prepared to pay p for one unit of product. The following characteristics hold:
¹ In this study, a Poisson process of parameter λ generates the arrival of one potential customer during an “infinitely small” period δ with the probability λ δ and does not generate any customer with the probability 1 − λ δ. In this process, the probability of the arrival of more than one potential customer is o(δ), which is practically equivalent to zero. Another way to express the Poisson process is to say that the probability that k potential customers arrive during a period [ 0, t ] is: P_k = exp( −λ t ) ( λ t )^k / k!
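The two descriptions in the footnote agree in the limit: splitting [0, t] into many slots of length δ = t/n, with one arrival per slot of probability λδ, gives a binomial count that converges to the Poisson probability P_k. A quick numeric check (the parameter values are arbitrary):

```python
from math import comb, exp, factorial

lam, t, k = 2.0, 1.5, 3

def poisson_pk(lam, t, k):
    # P_k = exp(-lam*t) * (lam*t)**k / k!
    return exp(-lam * t) * (lam * t) ** k / factorial(k)

def small_delta_pk(lam, t, k, n):
    # n slots of length delta = t/n, one arrival per slot w.p. lam*delta
    p = lam * t / n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

exact = poisson_pk(lam, t, k)
approx = small_delta_pk(lam, t, k, n=100000)
print(exact, approx)
```

With n = 100 000 slots the binomial value agrees with the Poisson formula to roughly four decimal places.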
• The density f ( p ) is a decreasing function of p, which means that the more expensive the product, the smaller the probability that a customer is prepared to pay this amount of money. • If the price of an item is p, then any customer who is prepared to pay p1 ≥ p will buy it. Thus, the probability of buying an item when the price is p is: P( p ) =
+∞
∫
u= p
f ( u ) du = 1 −
p
∫ f ( u ) du = 1 − F ( p )
u =0
where F ( p ) is the distribution function of the price. This probability tends to 0 when p tends to infinity and to 1 when p tends to 0. A set of n items are available at time 0. We define the value v ( k , t ) , k ∈ [ 0, n ] and t ∈ [ 0, T ] , as the maximum expected revenue we can obtain by time T from k items available at time t. We assume that v ( k , t ) is continuously differentiable with regard to t. Thus, v ( n, 0 ) is the solution to the problem. Indeed, v ( 0, t ) = 0 , ∀t ∈ [ 0, T ] and v ( k , T ) = 0 , ∀k ∈ [ 0, n ] . In other words, if the inventory is empty at time t, we cannot expect any further revenue. Also, if the inventory is not empty at time T, it is no longer possible to sell the items that are in inventory. Assume that k is the number of items available at time t. Three cases should be considered when the system evolves from time t to time t + δ : • No customer appears during the period [ t , t + δ ) . The probability of this nonevent is 1 − λ δ and the value associated with the state ( k, t + δ ) at time t + δ is v ( k , t + δ ) . • A customer appears during the period [ t , t + δ ) (probability λ δ ), but does not buy anything (probability F ( p ) ). Finally, the probability associated with this case is λ δ F ( p ) and the value associated with the system is still v ( k , t + δ ) at time t + δ . • A customer appears during the period [ t , t + δ ) (probability λ δ ) and buys one item (probability 1 − F ( p ) ). The probability associated with this case is λ δ [1 − F ( p ) ] and the value associated with the system at time t + δ is v ( k − 1, t + δ ) + p − c . In this expression, p is the price of one item and c is the marginal cost when selling one item (cost to invoice, packaging, transportation, for instance). The cost c depends neither on the inventory level nor on the time. It is assumed to be less than p. 
Figure 2.1 represents the evolution of the number of items during the elementary period [ t , t + δ ) if k items are available at time t.
Figure 2.1 Evolution of the number of items (from state (k,t), the inventory stays at level k with probability 1 − λδ[1 − F(p)], leading to v(k,t+δ), or drops to level k−1 with probability λδ[1 − F(p)], leading to v(k−1,t+δ) plus the margin p − c)
Let p* be the optimal price of one item at time t when the inventory level is k, and v(k,t) the maximum expected revenue for the state (k,t) of the system. At time t+δ, the maximum expected revenue becomes either v(k, t+δ), with probability 1 − λδ[1 − F(p*)], or v(k−1, t+δ), with probability λδ[1 − F(p*)]; in the latter case, the retailer has collected the amount p* − c when the item was sold. In terms of flow, we can consider that the flow p* − c of money exited the system during the elementary period [t, t+δ). Thus, writing the balance of the maximum expected revenues, we obtain:

\[ v(k,t) = \bigl[\,1-\lambda\delta\,[\,1-F(p^*)\,]\,\bigr]\, v(k,t+\delta) + \lambda\delta\,[\,1-F(p^*)\,]\,\bigl[\, v(k-1,t+\delta)+p^*-c \,\bigr] \]

As a consequence:

\[ v(k,t) = \max_{p\ge 0}\bigl\{ (1-\lambda\delta)\,v(k,t+\delta) + \lambda\delta\, F(p)\,v(k,t+\delta) + \lambda\delta\,[1-F(p)]\,[\,v(k-1,t+\delta)+p-c\,] \bigr\} \]

This equality can be rewritten as:

\[ v(k,t) = \max_{p\ge 0}\bigl\{ v(k,t+\delta) - \lambda\delta\,[1-F(p)]\,v(k,t+\delta) + \lambda\delta\,[1-F(p)]\,[\,v(k-1,t+\delta)+p-c\,] \bigr\} \]

which leads to:

\[ -\frac{v(k,t+\delta)-v(k,t)}{\delta} = \lambda \max_{p\ge 0}\bigl\{ -[1-F(p)]\,v(k,t+\delta) + [1-F(p)]\,[\,v(k-1,t+\delta)+p-c\,] \bigr\} \]

If δ tends to 0, the previous equality becomes (using λ max{−X} = −λ min{X}):

\[ v_t'(k,t) = \lambda \min_{p\ge 0}\bigl\{ [1-F(p)]\,v(k,t) - [1-F(p)]\,[\,v(k-1,t)+p-c\,] \bigr\} \tag{2.16} \]
The solution of the problem (i.e., maximizing the revenue) consists of finding, for each pair (k,t), the price p* that minimizes the expression between braces in (2.16), and then solving the differential equation (2.16). Thus, two values will be associated with the pair (k,t):

• The price p*(k,t) that should be assigned to a unit of product at time t if the inventory level is k.
• The maximum expected revenue on the period [t,T] if the inventory level is k at time t.

Unfortunately, it is impossible to find an analytic solution for a general function F(p). From this point onwards, we assume that:

\[ F(p) = 1 - e^{-\alpha p} \tag{2.17} \]

where α > 0.
2.3.2 Solution to the Problem

According to Relation 2.17, Equation 2.16 becomes:

\[ v_t'(k,t) = \lambda \min_{p\ge 0}\bigl\{ e^{-\alpha p}\,\bigl(\, v(k,t)-v(k-1,t)-p+c \,\bigr) \bigr\} \tag{2.18} \]

Since e^{−αp} > 0 for every p, the minimum of the right-hand side of (2.18) is obtained for the value p*(k,t) of p that makes its derivative with respect to p equal to 0. Thus, p*(k,t) is the solution of:

\[ e^{-\alpha p}\,\bigl[\, -\alpha\, v(k,t) + \alpha\, v(k-1,t) + \alpha p - \alpha c - 1 \,\bigr] = 0 \]
Finally:

\[ p^*(k,t) = v(k,t) - v(k-1,t) + c + \frac{1}{\alpha} \tag{2.19} \]

By replacing p by p*(k,t) in Equation 2.18, we obtain:

\[ v_t'(k,t) = -\frac{\lambda}{\alpha}\, e^{-\alpha\,[\,v(k,t)-v(k-1,t)+c+1/\alpha\,]} \tag{2.20} \]
Equation 2.20 holds for k > 1. As we can see, the derivative of v with respect to t is negative. This means that the maximum expected revenue decreases as time increases, whatever the inventory level. In other words, the closer the deadline, the smaller the maximum expected revenue for any given inventory level.

For k = 0, the differential equation is not needed, since we know that:

\[ v(0,t) = 0, \quad \forall t \in [0,T] \]

For k = 1, the differential equation (2.20) is rewritten as:

\[ v_t'(1,t) = -\frac{\lambda}{\alpha}\, e^{-\alpha\,[\,v(1,t)+c+1/\alpha\,]} \]

Solving these differential equations recursively, starting from k = 1, leads to:

\[ v(k,t) = \frac{1}{\alpha}\, \ln\left\{ \sum_{j=0}^{k} \frac{\lambda^j\, e^{-j(1+\alpha c)}\, (T-t)^j}{j!} \right\} \tag{2.21} \]

Let us set:

\[ A(k,t) = \sum_{j=0}^{k} \frac{\lambda^j\, e^{-j(1+\alpha c)}\, (T-t)^j}{j!} \]

With this definition, Relation 2.21 can be rewritten as:

\[ v(k,t) = \frac{1}{\alpha}\, \ln\bigl[\, A(k,t) \,\bigr] \]

and Relation 2.19 becomes:

\[ p^*(k,t) = \frac{1}{\alpha}\, \ln\!\left[ \frac{A(k,t)}{A(k-1,t)} \right] + c + \frac{1}{\alpha} \]

or:

\[ p^*(k,t) = \frac{1}{\alpha}\, \ln\!\left[ \frac{e^{\,1+\alpha c}\, A(k,t)}{A(k-1,t)} \right] \tag{2.22} \]
Finally, for any pair (k,t), we can compute A(k,t) and A(k−1,t), and thus the optimal price associated with this pair by applying Relation 2.22. Note that, when we refer to the state of the system, we refer to the pair (k,t).

In this model, the system evolves over time according to a frozen control characterized by the parameters α, which governs the probability that a customer buys an item, and λ, which defines the probability that a customer appears in the system during an elementary period. Thus, the behavior of customers, as well as their decision-making process, is frozen as soon as λ and α are selected. As a consequence, this model is not very useful in practice, but it is a good example to help the reader understand the objective of dynamic pricing, which consists in dynamically adjusting the price of the items to the state of the system.

Example

We illustrate the above presentation using an example defined by the following parameters:

• α = 0.8. Remember that the greater α, the faster the probability of buying a product decreases with the price.
• λ = 1.5. Remember that the greater λ, the greater the probability that a customer enters the system.
• The initial level of the inventory is 10.
• T = 20.

The probability of reaching inventory level k at time t is represented in Figure 2.2. At time 0, the inventory level is equal to 10 with probability 1. As time increases, the set of inventory levels with significant probabilities widens and the mean inventory level decreases. Figure 2.3 represents the optimal price of a product according to the time and the inventory level. For a given inventory level, the price decreases with time. Similarly, at a given time, the price increases when the inventory level decreases.
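The closed-form solution is easy to turn into code. The sketch below (ours, not the book's) evaluates A(k,t), v(k,t) and p*(k,t) from Relations 2.21 and 2.22 for the example parameters α = 0.8, λ = 1.5, T = 20; the marginal cost c is not specified in the example, so c = 0 is our assumption.

```python
import math

ALPHA, LAM, C, T = 0.8, 1.5, 0.0, 20.0   # c = 0 is an assumption of this sketch

def A(k, t):
    """A(k,t) = sum_{j=0}^{k} lambda^j e^{-j(1+alpha c)} (T-t)^j / j!"""
    return sum(LAM**j * math.exp(-j * (1 + ALPHA * C)) * (T - t)**j
               / math.factorial(j) for j in range(k + 1))

def v(k, t):
    """Maximum expected revenue, Relation 2.21: v(k,t) = (1/alpha) ln A(k,t)."""
    return math.log(A(k, t)) / ALPHA

def p_star(k, t):
    """Optimal price, Relation 2.22: (1/alpha) ln[e^{1+alpha c} A(k,t)/A(k-1,t)]."""
    return (1 + ALPHA * C + math.log(A(k, t) / A(k - 1, t))) / ALPHA
```

For instance, p_star(10, 0.0) is the price to post at time 0 with a full inventory; as stated in the text, p_star(k,t) decreases with t for a given k and increases when k decreases at a given t.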
Figure 2.2 Probability versus time and inventory level
Figure 2.3 Optimal price versus time and inventory level
2.3.3 Probability for the Number of Items at a Given Point in Time

Let n be the number of items available at time 0. We denote by r(k,t) the probability that k items are still available at time t. Clearly, r(n,0) = 1, since the initial state (n,0) is given, and r(k,0) = 0 for any k ∈ {0, 1, …, n−1}.

Result 1

The probability of having k unsold items at time t is:

\[ r(k,t) = \frac{\bigl[\,\lambda\, t\, e^{-(1+\alpha c)}\,\bigr]^{\,n-k}\, A(k,t)}{(n-k)!\ A(n,0)} \tag{2.23} \]

(Note that t appears as a multiplicative factor, so that r(k,0) = 0 for k < n, as required.)
Proof

Let dt be an elementary increment of t. To have k items at time t + dt, we must be in one of the following cases:

• The number of items at time t was k+1 (this case holds only if k < n), a customer appeared on the time interval [t, t+dt) (probability λ dt) and this customer bought an item (probability e^{−αp*(k+1,t)}).
• The number of items at time t was k, and either no customer appeared on the time interval [t, t+dt) (probability 1 − λ dt) or one customer appeared but did not buy anything (probability λ dt (1 − e^{−αp*(k,t)})).

As a consequence, we obtain the following relations:

\[ r(k,t+dt) = r(k+1,t)\,\lambda\, dt\, e^{-\alpha p^*(k+1,t)} + r(k,t)\,\bigl[\,1-\lambda\, dt\, e^{-\alpha p^*(k,t)}\,\bigr] \tag{2.24} \]

when k < n, and:

\[ r(n,t+dt) = r(n,t)\,\bigl[\,1-\lambda\, dt\, e^{-\alpha p^*(n,t)}\,\bigr] \tag{2.25} \]

Let us first consider Relation 2.25. It leads to:

\[ r_t'(n,t) = -\lambda\, r(n,t)\, e^{-\alpha p^*(n,t)} \]

Using Relation 2.22, we obtain ln r(n,t) = ln A(n,t) + W, where W is a constant. Since r(n,0) = 1, this leads to W = −ln A(n,0).
Finally:

\[ r(n,t) = \frac{A(n,t)}{A(n,0)} \tag{2.26} \]

Thus, Relation 2.23 holds for k = n. From Relation 2.24, and using (2.22), we derive:

\[ r_t'(k,t) = \lambda\, e^{-(1+\alpha c)} \left[\, r(k+1,t)\,\frac{A(k,t)}{A(k+1,t)} - r(k,t)\,\frac{A(k-1,t)}{A(k,t)} \,\right] \tag{2.27} \]

If we write Equation 2.27 for k = n−1, we can use Equation 2.26 to obtain a differential equation in r(n−1,t). Solving this equation leads to:

\[ r(n-1,t) = \lambda\, t\, e^{-(1+\alpha c)}\, \frac{A(n-1,t)}{A(n,0)} \]

In turn, this result used with Relation 2.27 leads to a differential equation in r(n−2,t). As soon as the general form of the solution is recognized, a recursion is applied to verify the result. Q.E.D.

Result 2 concerns the expected number of items sold by time t.

Result 2

The expected number of items sold by time t is:

\[ E_t = \lambda\, t\, e^{-(1+\alpha c)}\, \frac{A(n-1,0)}{A(n,0)} \]
Proof

Taking into account Result 1:

\[ E_t = \sum_{k=0}^{n}(n-k)\, r(k,t) = \sum_{k=0}^{n}(n-k)\, \frac{\bigl[\lambda t e^{-(1+\alpha c)}\bigr]^{\,n-k}\, A(k,t)}{(n-k)!\ A(n,0)} \]

This relation can be rewritten as:

\[ E_t = \lambda\, t\, e^{-(1+\alpha c)} \sum_{k=0}^{n-1} \frac{\bigl[\lambda t e^{-(1+\alpha c)}\bigr]^{\,n-k-1}\, A(k,t)}{(n-k-1)!\ A(n,0)} \]

or:

\[ E_t = \lambda\, t\, e^{-(1+\alpha c)}\, \frac{A(n-1,0)}{A(n,0)} \sum_{k=0}^{n-1} \frac{\bigl[\lambda t e^{-(1+\alpha c)}\bigr]^{\,n-k-1}\, A(k,t)}{(n-k-1)!\ A(n-1,0)} \]

The sum on the right-hand side is equal to 1, since its elements are the probabilities r(k,t) obtained when the initial number of items is n−1. This completes the proof. Q.E.D.

Example

As we can see in Figure 2.4:

• The number of items sold by time T is an increasing function of λ when α is fixed; in other words, the lower the average probability that a customer arrives in the system, the lower the number of items sold by time T.
• The number of items sold by time T is a decreasing function of α when λ is fixed; in other words, the higher the average price of a product, the lower the number of items sold by time T.
Figure 2.4 Number of products sold by time T with respect to λ (lambda) and α (alpha)
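Result 2 can be checked by simulating the pricing policy directly: customers arrive according to a Poisson process of intensity λ, and a customer facing price p*(k,t) buys with probability e^{−αp*(k,t)}. The sketch below is ours (α, λ, n, T as in the earlier example; c = 0 is assumed, since the example does not specify c); the empirical mean number of items sold should fall close to E_T.

```python
import math
import random

ALPHA, LAM, C, T, N = 0.8, 1.5, 0.0, 20.0, 10   # c = 0 is an assumption

def A(k, t):
    return sum(LAM**j * math.exp(-j * (1 + ALPHA * C)) * (T - t)**j
               / math.factorial(j) for j in range(k + 1))

def p_star(k, t):
    return (1 + ALPHA * C + math.log(A(k, t) / A(k - 1, t))) / ALPHA

def items_sold(rng):
    """One run: Poisson arrivals; a customer buys with prob. e^{-alpha p*(k,t)}."""
    k, t = N, 0.0
    while k > 0:
        t += rng.expovariate(LAM)                    # next customer arrival
        if t > T:
            break
        if rng.random() < math.exp(-ALPHA * p_star(k, t)):
            k -= 1                                   # item sold at price p*(k,t)
    return N - k

rng = random.Random(42)
runs = 10000
empirical = sum(items_sold(rng) for _ in range(runs)) / runs
theoretical = LAM * T * math.exp(-(1 + ALPHA * C)) * A(N - 1, 0) / A(N, 0)
```

The agreement between `empirical` and `theoretical` is a useful cross-check of both Result 2 and the optimal-price formula (2.22).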
2.3.4 Remarks

In the model presented in this section, it is assumed that the buying activity proceeds in two steps: first, a buyer enters the system with a probability that depends on the parameter λ and, second, he/she decides whether or not to buy an item, depending on the maximum amount of money he/she is prepared to pay for it; this buying probability depends on the parameter α. Furthermore, the value of one item is equal to zero at time T, the horizon of the problem; in other words, there is no salvage value.

The biggest drawback of this model is related to the following characteristics:

1. We have to compute, for each pair (k,t), the value p* of p that minimizes [1 − F(p)][v(k,t) − v(k−1,t) − p + c]. The problem is that, if we want to solve the differential equation (2.16) analytically, we must be able to express p* as a function of v.
2. We then have to be able to solve the differential equation in which p* is replaced by this function of v.

These two conditions usually make the computation of an analytical solution impossible, mainly if the problem at hand is a real-life problem, and especially if the probability density is not exponential.
2.4 Stochastic Dynamic Pricing for Items with Salvage Values

The difference from the previous model lies not only in the existence of salvage values, but also in the fact that there exists a one-to-one relationship between the demand intensity, denoted by λ, and the price of one item, denoted by p. Thus, the two-stage buying process that was the basis of the previous model vanishes. We assume that the demand follows a Poisson process of parameter λ. We also assume that only a finite number of prices can be chosen by the retailer and that each price is associated with one demand intensity. Some additional assumptions will be made; they will be presented in detail in the next section.
2.4.1 Problem Studied

The period available to sell the items is [0,T] and the number of items available at time 0 is n. We denote by P = {p1, p2, …, pN, p∞} the set of prices that can be chosen by the retailer and by Λ = {λ1, λ2, …, λN, λ∞} the corresponding demand intensities. Establishing the relationship between the elements of P and the elements of Λ is not an easy task; we assume that this task has been performed at this point of the process.
The available prices are arranged in decreasing order, i.e., p1 > p2 > … > pN > p∞, and thus λ1 < λ2 < … < λN < λ∞ since the greater the price, the lower the demand intensity: the price is a decreasing function of the demand intensity.

As mentioned before, salvage values are included in the model, which means that it is still possible to sell the unsold items in a secondary market after time T. We denote by w(r,T) the salvage value of r items unsold at the deadline T; in other words, w(r,T) is the selling price on the secondary market of the r items unsold at time T. The salvage value w(r,T) is assumed to be non-decreasing and concave in r. This means that:

1. The salvage value of the remaining items increases with the number of items, which is realistic.
2. The average price of one item is a non-increasing function of the number of items. It may also happen that the price per unit does not depend upon the number of remaining items: this is the borderline case.

Note that this second hypothesis is quite common. We denote by p(r,t) the price of one item at time t if the inventory level is r. We also assume that:

\[ \frac{1}{n}\, w(n,T) \le w(1,T) \le p(n,T) \]

where p(n,T) is the price of one item when the inventory is still full just before the end of the selling period. Indeed, according to the hypotheses made before, p(n,T) ≤ p(k,t) for k ∈ {1, 2, …, n} and t ∈ [0,T). In other words, selling one item for its salvage value is always worse than selling it before time T, whatever the inventory level.

We first consider the case where the price of one item depends on the inventory level only.
2.4.2 Price as a Function of Inventory Levels: General Case

2.4.2.1 Model

In practice, it is rare to assign a different price to each inventory level, except if the items under consideration are very expensive (cars, for instance). This case will be considered in Section 2.4.3. For the time being, we assume that the same price applies when the inventory level lies between two given limits. In the following, ki is the rank of the i-th item sold.

We recall the following convention: writing x ∈ (a,b] means that a does not belong to the interval (i.e., x cannot take the value a) while b does (i.e., x can take the value b). Also, to simplify the notations, we introduce:

\[ N_i = \sum_{j=1}^{i} n_j \quad \text{for } i = 1, 2, \ldots, s \]

where s is the number of levels, N0 = 0, and nj is the number of items sold at price pj. If n is the inventory level at time 0, we assume that:

• Price p1 applies to the k1-th item sold when k1 ∈ (N0, N1].
• Price p2 applies to the k2-th item sold when k2 ∈ (N1, N2].
…
• Price pi applies to the ki-th item sold when ki ∈ (N_{i−1}, N_i].
…
• Price ps applies to the ks-th item sold when ks ∈ (N_{s−1}, N_s].

Indeed:

\[ n = \sum_{j=1}^{s} n_j = N_s \]
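The layer bookkeeping above — cumulative counts N_i and price p_i for ranks in (N_{i−1}, N_i] — can be sketched as follows (our helper function, not the book's):

```python
from bisect import bisect_left
from itertools import accumulate

def price_of_rank(k, layer_sizes, prices):
    """Price applying to the k-th item sold (1-based): rank k in (N_{i-1}, N_i]
    is charged p_i, where N_i = n_1 + ... + n_i."""
    N = list(accumulate(layer_sizes))    # N_1, ..., N_s
    return prices[bisect_left(N, k)]     # index of the first i with N_i >= k
```

With five layers of 5 items and prices 20, 14, 10, 7, 5, items 1-5 are sold at 20, item 6 at 14, and item 25 at 5.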
At this point of the discussion, the goal is not to find the values of the parameters ni that optimize the mean value of the revenue. We just want to propose a tool that provides the mean value of the revenue when the values of the parameters are given.

Let ki ∈ {N_{i−1}+1, …, N_i} and let Pr(ki, [a,b] | r) be the probability that ki items are sold during period [a,b] if the inventory level is r at time a. The notation has been slightly extended to make the initial inventory explicit, which will be useful in the remainder of the section to avoid confusion; when the initial inventory is n, this information is omitted and we use the previous notation.

In Figure 2.5, we provide the structure that underlies the computation of Pr(ki, [0,T]). We first write that the probability that ki items are sold during period [0,T] is the integral on [0,T], with respect to t1, of the product of the two following factors (Bayes' theorem):

• The probability that n1 = N1 items are sold in period (0,t1], the last item being sold at time t1. This probability is:
Figure 2.5 Structure used to compute the probability to sell ki items in [0,T]
\[ \frac{(\lambda_1 t_1)^{n_1-1}}{(n_1-1)!}\, \exp(-\lambda_1 t_1)\, \lambda_1\, dt_1 \]

• The probability that ki − N1 items are sold during period (t1,T], knowing that the inventory level at time t1 is n − N1. This probability is:

\[ \Pr\bigl(k_i-N_1,\ (t_1,T]\ \big|\ n-N_1\bigr) \]

Finally, Pr(ki, [0,T]) is expressed as:

\[ \Pr\bigl(k_i,[0,T]\bigr) = \int_{t_1=0}^{T} \frac{(\lambda_1 t_1)^{n_1-1}}{(n_1-1)!}\, \exp(-\lambda_1 t_1)\,\lambda_1\, \Pr\bigl(k_i-N_1,(t_1,T]\ \big|\ n-N_1\bigr)\, dt_1 \]
We now compute Pr(ki − N1, (t1,T] | n − N1) as the integral on the interval (t1,T], with respect to t2, of the product of the following two factors:

1. The probability that n2 items are sold on period (t1,t2], the last item being sold at time t2. This probability is:

\[ \frac{[\lambda_2(t_2-t_1)]^{n_2-1}}{(n_2-1)!}\, \exp[-\lambda_2(t_2-t_1)]\, \lambda_2\, dt_2 \]

2. The probability that ki − N2 items are sold during period (t2,T], knowing that the inventory level at time t2 is n − N2. This probability is:

\[ \Pr\bigl(k_i-N_2,\ (t_2,T]\ \big|\ n-N_2\bigr) \]

Thus:

\[ \Pr\bigl(k_i-N_1,(t_1,T]\ \big|\ n-N_1\bigr) = \int_{t_2=t_1}^{T} \frac{[\lambda_2(t_2-t_1)]^{n_2-1}}{(n_2-1)!}\, \exp[-\lambda_2(t_2-t_1)]\,\lambda_2\, \Pr\bigl(k_i-N_2,(t_2,T]\ \big|\ n-N_2\bigr)\, dt_2 \]
We further extend this approach to the next lower layers. We obtain:

\[ \Pr\bigl(k_i-N_{i-3},(t_{i-3},T]\ \big|\ n-N_{i-3}\bigr) = \int_{t_{i-2}=t_{i-3}}^{T} \frac{[\lambda_{i-2}(t_{i-2}-t_{i-3})]^{n_{i-2}-1}}{(n_{i-2}-1)!}\, \exp[-\lambda_{i-2}(t_{i-2}-t_{i-3})]\,\lambda_{i-2}\, \Pr\bigl(k_i-N_{i-2},(t_{i-2},T]\ \big|\ n-N_{i-2}\bigr)\, dt_{i-2} \]

\[ \Pr\bigl(k_i-N_{i-2},(t_{i-2},T]\ \big|\ n-N_{i-2}\bigr) = \int_{t_{i-1}=t_{i-2}}^{T} \frac{[\lambda_{i-1}(t_{i-1}-t_{i-2})]^{n_{i-1}-1}}{(n_{i-1}-1)!}\, \exp[-\lambda_{i-1}(t_{i-1}-t_{i-2})]\,\lambda_{i-1}\, \Pr\bigl(k_i-N_{i-1},(t_{i-1},T]\ \big|\ n-N_{i-1}\bigr)\, dt_{i-1} \]

The last echelon of the formulation is slightly different from the previous ones:

\[ \Pr\bigl(k_i-N_{i-1},(t_{i-1},T]\ \big|\ n-N_{i-1}\bigr) = \int_{t_i=t_{i-1}}^{T} \frac{[\lambda_i(t_i-t_{i-1})]^{k_i-N_{i-1}-1}}{(k_i-N_{i-1}-1)!}\, \exp[-\lambda_i(t_i-t_{i-1})]\,\lambda_i\, \Pr\bigl(0,(t_i,T]\ \big|\ n-k_i\bigr)\, dt_i \]

and:

\[ \Pr\bigl(0,(t_i,T]\ \big|\ n-k_i\bigr) = \exp[-\lambda_i(T-t_i)] \]
This sequence of equalities can be rewritten as a unique relation (with t0 = 0):

\[ \Pr(k_i,[0,T]) = \prod_{j=1}^{i-1}\lambda_j^{\,n_j}\ \lambda_i^{\,k_i-N_{i-1}}\ \prod_{j=1}^{i-1}\frac{1}{(n_j-1)!}\ \frac{1}{(k_i-N_{i-1}-1)!}\ \int_{t_1=0}^{T}\int_{t_2=t_1}^{T}\!\cdots\!\int_{t_i=t_{i-1}}^{T} \left\{ \prod_{j=1}^{i-1}(t_j-t_{j-1})^{n_j-1} \right\} (t_i-t_{i-1})^{k_i-N_{i-1}-1} \left\{ \prod_{j=1}^{i} \exp[-\lambda_j(t_j-t_{j-1})] \right\} \exp[-\lambda_i(T-t_i)]\ dt_i\, dt_{i-1}\cdots dt_1 \tag{2.28} \]

This relation holds for ki = 1, 2, …, n−1. The probability that no item is sold by horizon T is:

\[ \Pr(0,[0,T]) = \exp(-\lambda_1 T) \]

Furthermore, the probability that all the items are sold by time T is:

\[ \Pr(n,[0,T]) = 1 - \sum_{k=0}^{n-1}\Pr(k,[0,T]) \]

Assuming that the probabilities are known, the mean value of the revenue is:

\[ v(n,T) = \sum_{k=0}^{n} \Pr(k,[0,T]) \left[\ \sum_{m=1}^{k} p(m) + w(n-k,T) \right] \tag{2.29} \]

where p(m) = pi if m ∈ {N_{i−1}+1, …, N_i} (i.e., the bracket adds the prices of the k items sold to the salvage value of the n−k remaining ones), and w(0,T) is taken equal to 0.

2.4.2.2 Computation of the Mean Value of the Revenue
An analytical expression of the integrals on the right-hand side of Relation 2.28 is possible only for very small values of the parameters i and ni, since the complexity of the solution increases exponentially. This is why a numerical approach is necessary; we chose the Monte-Carlo approach. In order to simplify the notations, we denote by qk the probability Pr(k, [0,T]) and by pi the price of one item when the rank k of the item sold belongs to {N_{i−1}+1, …, N_i}. Other notations are those introduced in the previous subsection.
Algorithm 2.2.

1. Compute q0 = exp(−λ1 T).
2. For k = 1 to n−1 do:
 2.1. Compute i such that N_{i−1} < k ≤ N_i.
 2.2. If i > 1, set K_j = n_j for j = 1, …, i−1.
 2.3. Set K_i = k − N_{i−1}.
 2.4. Set u = 1.
 2.5. For j = 1, …, i do u = u λ_j^{K_j}.
At this point, u contains the term ∏_{j=1}^{i−1}(λ_j^{n_j}) λ_i^{k_i−N_{i−1}} of Formula 2.28. The Monte-Carlo method starts below.
 2.6. Set q_k = 0.
 2.7. For Mc = 1 to M do (M is the number of iterations, around 10 000):
  2.7.1. Set t_0 = 0.
  2.7.2. For j = 1, …, i, generate t_j at random on [t_{j−1}, T].
  2.7.3. Set w = 1, s = 0, z = 1.
  2.7.4. For j = 1, …, i do:
   2.7.4.1. Compute v = (t_j − t_{j−1})^{K_j − 1} / (K_j − 1)!.
   2.7.4.2. Compute v = v (T − t_{j−1}).
   2.7.4.3. Compute w = w v.
   2.7.4.4. If (j < i), compute z = z exp[−λ_j (t_j − t_{j−1})].
   2.7.4.5. If (j = i) and (k < N_i) do:
    2.7.4.5.1. Compute z = z exp[λ_j t_{j−1}].
    2.7.4.5.2. If (Mc = 1), compute u = u exp[−λ_j T].
   2.7.4.6. If (j = i) and (k = N_i) do:
    2.7.4.6.1. Compute z = z exp[−λ_j (t_j − t_{j−1}) + λ_{j+1} t_j].
    2.7.4.6.2. If (Mc = 1), compute u = u exp[−λ_{j+1} T].
  2.7.5. End of loop j.
  2.7.6. Compute w = w z.
  2.7.7. Compute q_k = q_k + w / M.
 2.8. End of loop Mc.
 2.9. Compute q_k = q_k u.
3. End of loop k.

Computation of q_n:
4. Set u = 0.
5. For k = 0, …, n−1 do u = u + q_k.
6. Compute q_n = 1 − u.

Computation of the mean value of the revenue, denoted Ct:
7. Set Ct = 0.
8. For k = 0 to n do:
 8.1. Set cc = 0.
 8.2. Compute i such that N_{i−1} < k ≤ N_i.
 8.3. If i > 1, do cc = cc + p_j n_j for j = 1, …, i−1.
 8.4. Compute cc = cc + (k − N_{i−1}) p_i.
 8.5. Compute Ct = Ct + q_k [cc + w(n−k, T)].
9. End of loop k.
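Rather than integrating Relation 2.28 numerically as Algorithm 2.2 does, the layered sales process can also be simulated directly: the time to sell the next item is exponential with the intensity of the current layer. The sketch below is our alternative estimator, not the book's algorithm; `layers` lists (size, price, intensity) triples from the highest price downwards, and `w` is the salvage value of the remaining items.

```python
import random

def mean_revenue(n, layers, T, w, runs=20000, seed=1):
    """Monte-Carlo estimate of the expected revenue v(n,T) of Relation 2.29."""
    rng = random.Random(seed)
    # price and demand intensity seen by the 1st, 2nd, ... item sold
    seq = [(p, lam) for (ni, p, lam) in layers for _ in range(ni)]
    total = 0.0
    for _ in range(runs):
        t, revenue, sold = 0.0, 0.0, 0
        for p, lam in seq:
            t += rng.expovariate(lam)     # exponential time to sell this item
            if t > T:
                break                     # deadline reached: stop selling
            revenue += p
            sold += 1
        total += revenue + w(n - sold)    # add salvage value of leftovers
    return total / runs
```

With the data of Table 2.1 — five layers of 5 items and a linear salvage value of 2 per item — `mean_revenue(25, [(5,20,0.2),(5,14,0.4),(5,10,0.6),(5,7,0.8),(5,5,1.0)], T, lambda r: 2*r)` estimates the revenue of the initial solution (the horizon T of the book's numerical example is not stated, so a value must be chosen).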
2.4.2.3 Improvement of the Solution

We denote by ni, i = 1, 2, …, s, the initial sizes of the layers, from the upper to the lower one, and by λi (respectively, pi) the corresponding demand intensities (respectively, prices). Remember that λ1 < λ2 < … < λs and p1 > p2 > … > ps.

Since a numerical approach has been used to evaluate the probabilities of the states of the system at time T and the mean value of the revenue for given layers, we can also use a numerical approach to reach the layers that maximize the mean value of the revenue. We chose a simulated annealing algorithm to improve a given solution. This method is an iterative approach, and some layers may become empty (and thus disappear) during the process. This requires some additional notations.

We denote by r0 the number of layers, by n0i the size of the i-th layer, and by λ0i (respectively, p0i) the corresponding demand intensities (respectively, prices), for i = 1, 2, …, r0, at the beginning of an iteration of the simulated annealing algorithm or at the initial stage. Indeed, r0 ≤ s. At the beginning of the first iteration, r0 = s, n0i = ni, λ0i = λi and p0i = pi for i = 1, 2, …, s.

We denote by r1 the number of layers at the end of the iteration, by n1i the size of the i-th layer, and by λ1i (respectively, p1i) the corresponding demand intensities (respectively, prices), for i = 1, 2, …, r1. Indeed, r1 ≤ s. Furthermore, the corresponding mean value of the revenue is Copt1.
Figure 2.6 Links between the second and the initial iteration (in the illustrated example, s = 9 initial demand intensities λ1, …, λ9 are mapped through the vector TT onto r1 = 4 current intensities λ*1, …, λ*4)
We introduce a vector TT to link the initial demand intensities (and thus the initial prices) with the current demand intensities and prices. This linkage is illustrated in Figure 2.6. Algorithm 2.3 describes the simulated annealing mechanism that we apply to our problem. Note that Algorithm 2.3 calls Algorithm 2.2.

Algorithm 2.3.

1. Introduce s, n, λi, pi for i = 1, 2, …, s, and the salvage values w(k,T) for k = 1, 2, …, n.
2. Generate at random positive integers ni, i = 1, 2, …, s, such that Σ_{i=1}^{s} ni = n.
The first two steps of the algorithm provide the initial data.
3. Introduce KK, the number of iterations that will be made (for instance, 2000 or 3000).
4. Set r0 = s, λ0i = λi, p0i = pi and n0i = ni for i = 1, 2, …, s.
This set of values represents the initial solution, called S0.
5. Compute the mean value of the revenue corresponding to solution S0, denoted Copt0. This value is obtained by applying Algorithm 2.2.
6. Set S* ≡ S0 and Copt* = Copt0.
At each iteration, S* contains the best solution and Copt* the greatest mean value of the revenue obtained since the beginning of the algorithm. The simulated annealing process starts at this point.
7. For kkt = 1 to KK do:
 7.1. For i = 1 to s, set TTi = 0.
 7.2. Set i = 1 and j = 1.
 7.3. While (i ≤ r0):
  7.3.1. If (λ0i = λj) do: set TTj = i, then i = i + 1 and j = j + 1.
  7.3.2. If (λ0i ≠ λj) do: j = j + 1.
The instructions of Step 7.3 build the vector TT. In the following steps, we modify the layers.
 7.4. Generate an integer i at random on {1, 2, …, r0}.
 7.5. Generate an integer j at random on {1, 2, …, s}.
 7.6. If (TTj ≠ 0) do (one item will be added to a non-empty layer):
  7.6.1. Set i1 = TTj.
  7.6.2. Set n1_{i1} = n0_{i1} + 1.
 7.7. If (TTj = 0) do (one item will be added to an empty layer corresponding to the j-th initial layer; this layer is temporarily placed at the last position in the current solution):
  7.7.1. Set i1 = r0 + 1.
  7.7.2. Set r1 = r0 + 1.
  7.7.3. Set n1_{i1} = 1, p1_{i1} = pj, λ1_{i1} = λj.
 7.8. Set n1_i = n0_i − 1.
 7.9. If (n1_i = 0) do (one layer becomes empty and disappears):
  7.9.1. If (i < r0) do:
   7.9.1.1. For k = i to r0 − 1, set n1_k = n1_{k+1}, p1_k = p1_{k+1}, λ1_k = λ1_{k+1}.
   7.9.1.2. Set r1 = r0 − 1.
The next stage consists in putting the parameters into the increasing order of the demand intensities.
 7.10. For i = 1 to r1 − 1 do:
  7.10.1. For j = i + 1 to r1:
   7.10.1.1. If (λ1_j < λ1_i), permute λ1_j and λ1_i, p1_j and p1_i, n1_j and n1_i.
At this stage of the computation, a new solution S1 is available.
 7.11. Compute the mean value of the revenue corresponding to solution S1, denoted Copt1. This value is obtained by applying Algorithm 2.2.
 7.12. If Copt1 ≥ Copt0 do:
  7.12.1. If Copt1 > Copt*, set S* = S1 and Copt* = Copt1.
  7.12.2. Set S0 = S1.
 7.13. If Copt1 < Copt0 do:
  7.13.1. Compute y = exp[−(Copt0 − Copt1)/kkt].
  7.13.2. Generate x at random on [0,1] (uniform density).
  7.13.3. If (x ≤ y), set S0 = S1.
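Algorithm 2.3 can be condensed considerably if the layer structure is encoded simply as the vector (n_1, …, n_s), allowing empty layers. The sketch below is a simplified version of the book's procedure, not a faithful reimplementation: our neighborhood just moves one item between two random layers, while the acceptance probability exp[−(Copt0 − Copt1)/kkt] is the book's. The `revenue` argument is any evaluator of a candidate (e.g., Algorithm 2.2 or a direct simulation).

```python
import math
import random

def anneal(revenue, s, n, iters=2000, seed=0):
    """Search layer sizes (n_1,...,n_s) summing to n that maximize revenue(sizes)."""
    rng = random.Random(seed)
    sol = [n // s + (1 if i < n % s else 0) for i in range(s)]  # balanced start
    cur = best = revenue(sol)
    best_sol = sol[:]
    for kkt in range(1, iters + 1):
        i, j = rng.randrange(s), rng.randrange(s)
        if i == j or sol[i] == 0:
            continue                       # no feasible move proposed this round
        cand = sol[:]
        cand[i] -= 1                       # move one item from layer i...
        cand[j] += 1                       # ...to layer j
        val = revenue(cand)
        # accept improvements always, degradations with the book's probability
        if val >= cur or rng.random() <= math.exp(-(cur - val) / kkt):
            sol, cur = cand, val
            if val > best:
                best, best_sol = val, cand[:]
    return best_sol, best
```

The best solution visited is tracked separately from the current one, mirroring the roles of S* and S0 in Algorithm 2.3.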
2.4.2.4 Numerical Example
In the case presented hereafter, the number of items to be sold before time T is 25 (n = 25). A one-to-one relationship has been established between five prices and five demand intensities. These data are presented in Table 2.1.

Table 2.1 Price versus demand intensity

Price              20    14    10    7     5
Demand intensity   0.2   0.4   0.6   0.8   1

The salvage value is linear: each item can be sold on the secondary market for 2 monetary units. The computation starts with five layers, numbered from the upper to the lower layer; each of them initially contains 5 consecutive inventory levels. The number of iterations made in the simulated annealing process is 5000.

Remark: A large number of choices (see Appendix A) are available when applying simulated annealing, in particular for defining:

• the number of iterations;
• the evolution of the "temperature" that affects the selection of the next state;
• the "neighborhood" of a solution.

In Table 2.2, we give some intermediate results provided by the simulated annealing algorithm; the last one is the near-optimal solution. Each row gives the layer sizes associated with the prices (and demand intensities) of Table 2.1.

Table 2.2 Some intermediate steps of the simulated annealing process

Iteration   Layer sizes at prices 20 / 14 / 10 / 7 / 5   Mean value of the revenue (rounded)
1           5 / 5 / 5 / 5 / 5                            174
16          2 / 4 / 6 / 7 / 6                            184
993         0 / 11 / 1 / 10 / 3                          199
1243        0 / 13 / 1 / 10 / 1                          202
3642        0 / 1 / 15 / 1 / 8                           205
4006        0 / 1 / 17 / 7 / 0                           211
Figure 2.7 Probabilities at time T for the last structure of layers
Figure 2.7 provides the probabilities of the different inventory levels at time T for the last structure of layers.

2.4.2.5 How to Use the Approach?
The previous approach is used on a periodic basis. This strategy corresponds to the usual behavior of vendors: they choose a pricing policy on a given period (one or two weeks for instance) and they reconsider the pricing policy for the next period according to the inventory level at the end of the previous period, and so on. In other words, they work on a rolling-horizon basis.
2.4.3 Price as a Function of Inventory Levels: a Special Case

We assume that the demand intensity, and thus the price, is different from one inventory level to the next. This kind of situation happens when the items are expensive (cars, for instance). In this particular case, it is possible to express analytically the probability that k items are sold by time T. We denote by λi the demand intensity when the inventory level is i, and by pi the price of the next item. The initial inventory level is n. Clearly, λn < λn−1 < … < λ2 < λ1 and, as mentioned earlier, pn > pn−1 > … > p2 > p1.
As in the previous section, Pr(k, [t1,t2]) refers to the probability that k items are sold in period [t1,t2]. Since, at each inventory level, the demand is generated by a Poisson process:

\[ \Pr(0,[0,T]) = \exp(-\lambda_n T) \]

\[ \Pr(1,[0,T]) = \int_{t=0}^{T} \exp(-\lambda_n t)\,\lambda_n\, \exp[-\lambda_{n-1}(T-t)]\, dt = -\frac{\lambda_n}{\lambda_n-\lambda_{n-1}}\exp(-\lambda_n T) - \frac{\lambda_n}{\lambda_{n-1}-\lambda_n}\exp(-\lambda_{n-1}T) \]

\[ \Pr(2,[0,T]) = \int_{t=0}^{T} \Pr(1,[0,t])\,\lambda_{n-1}\,\Pr(0,[t,T])\, dt \]
\[ = -\frac{\lambda_n\lambda_{n-1}}{\lambda_n-\lambda_{n-1}} \int_{t=0}^{T} \exp(-\lambda_n t)\exp[-\lambda_{n-2}(T-t)]\, dt - \frac{\lambda_n\lambda_{n-1}}{\lambda_{n-1}-\lambda_n} \int_{t=0}^{T} \exp(-\lambda_{n-1}t)\exp[-\lambda_{n-2}(T-t)]\, dt \]
\[ = \lambda_n\lambda_{n-1}\left\{ \frac{\exp(-\lambda_n T)}{(\lambda_n-\lambda_{n-1})(\lambda_n-\lambda_{n-2})} + \frac{\exp(-\lambda_{n-1}T)}{(\lambda_{n-1}-\lambda_n)(\lambda_{n-1}-\lambda_{n-2})} + \frac{\exp(-\lambda_{n-2}T)}{(\lambda_{n-2}-\lambda_n)(\lambda_{n-2}-\lambda_{n-1})} \right\} \]

At this level of the computation, it appears that the formula could be:

\[ \Pr(k,[0,T]) = (-1)^k \prod_{i=0}^{k-1}\lambda_{n-i}\ \sum_{i=0}^{k} \frac{\exp(-\lambda_{n-i}T)}{\displaystyle\prod_{\substack{j=0\\ j\ne i}}^{k} (\lambda_{n-i}-\lambda_{n-j})} \tag{2.30} \]

for k = 1, …, n−1.
To complete the proof, we will show that if (2.30) holds for k, then it also holds for k +1. If we express Pr ( k + 1, [ 0, T ] ) according to Pr ( k , [ 0, t ] ) (Bayes’ theorem), we obtain:
2.4 Stochastic Dynamic Pricing for Items with Salvage Values
73
T
Pr ( k + 1, [ 0, T ] ) =
∫ Pr ( k , [ 0, t ] ) λ
Pr ( 0, [ t , T ] ) dt
n −k
t =0
= ( −1 ) k
T
k −1
∏ (λ ) ∫ n −i
i =0
t =0
⎡ ⎢ ⎢ k ⎢∑ ⎢ i =0 ⎢ ⎢ ⎣
⎤ ⎧ ⎫ ⎥ ⎪ ⎪ ⎥ ⎪⎪ exp ( − λ t ) ⎪⎪ n −i ⎬ λn−k exp [ − λn−k −1 ( T − t ) ] ⎥⎥ dt ⎨ k ⎪ ( λ n −i − λ n − j ) ⎪ ⎥ ⎪ ⎪∏ j =0 ⎥ ⎪⎩ j ≠i ⎭⎪ ⎦⎥
Developing this expression, we reach the following equality:

$$\Pr(k+1,[0,T]) = (-1)^{k+1}\prod_{i=0}^{k}\lambda_{n-i}\sum_{i=0}^{k}\frac{\exp(-\lambda_{n-i}T)}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})} - (-1)^{k+1}\prod_{i=0}^{k}\lambda_{n-i}\,\exp(-\lambda_{n-k-1}T)\sum_{i=0}^{k}\frac{1}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})} \qquad (2.31)$$

We now use the following equality, which can be verified by expanding its left-hand side:

$$\sum_{i=0}^{k+1}\frac{1}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})} = 0$$

This equality can be rewritten as:

$$-\sum_{i=0}^{k}\frac{1}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})} = \frac{1}{\prod_{j=0}^{k}(\lambda_{n-k-1}-\lambda_{n-j})}$$
Thus, Equation 2.31 becomes:

$$\Pr(k+1,[0,T]) = (-1)^{k+1}\prod_{i=0}^{k}\lambda_{n-i}\sum_{i=0}^{k}\frac{\exp(-\lambda_{n-i}T)}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})} + (-1)^{k+1}\prod_{i=0}^{k}\lambda_{n-i}\,\frac{\exp(-\lambda_{n-k-1}T)}{\prod_{j=0}^{k}(\lambda_{n-k-1}-\lambda_{n-j})}$$

$$= (-1)^{k+1}\prod_{i=0}^{k}\lambda_{n-i}\sum_{i=0}^{k+1}\frac{\exp(-\lambda_{n-i}T)}{\prod_{\substack{j=0 \\ j\neq i}}^{k+1}(\lambda_{n-i}-\lambda_{n-j})}$$
This completes the computation: the last expression is exactly (2.30) written for k+1. Result 3 is derived from the above development. In this result, according to the usual mathematical convention, we assume that if no factor remains in a product, then the product equals 1. For instance, $\prod_{i=n}^{m} a_i = 1$ if m < n.
Result 3. The probability that k items are sold in period [0, T] is given by (2.30) for k ∈ {1, …, n−1}. Furthermore,

$$\Pr(n,[0,T]) = 1 - \sum_{k=0}^{n-1}\Pr(k,[0,T])$$

The mean value of the revenue can then be obtained by applying (2.29) of Section 2.4.2.1.
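As a numerical illustration of Result 3, the closed form (2.30) can be checked against a simulation of the sales process. The following sketch is ours (the function names and the sample intensities are not from the text); it computes Pr(k, [0, T]) and compares it with a Monte Carlo estimate of the piecewise Poisson process in which the intensity drops after each sale:

```python
import math
import random

def pr_k_sold(k, T, lam):
    """Probability that exactly k of the n items are sold in [0, T]
    (Equation 2.30 and Result 3).

    lam[i] is the demand intensity at inventory level n - i, i.e.
    lam[0] = lambda_n, lam[1] = lambda_{n-1}, ...; the intensities must be
    pairwise distinct, as the closed form assumes.
    """
    n = len(lam)
    if k == 0:
        return math.exp(-lam[0] * T)
    if k == n:  # all items sold: complement of the other probabilities
        return 1.0 - sum(pr_k_sold(j, T, lam) for j in range(n))
    prod_lam = 1.0
    for i in range(k):
        prod_lam *= lam[i]
    total = 0.0
    for i in range(k + 1):
        denom = 1.0
        for j in range(k + 1):
            if j != i:
                denom *= lam[i] - lam[j]
        total += math.exp(-lam[i] * T) / denom
    return (-1) ** k * prod_lam * total

def simulate_sales(T, lam, runs=100_000, seed=0):
    """Monte Carlo estimate of the same probabilities: exponential
    inter-sale times whose rate depends on the current inventory level."""
    rng = random.Random(seed)
    counts = [0] * (len(lam) + 1)
    for _ in range(runs):
        t, sold = 0.0, 0
        while sold < len(lam):
            t += rng.expovariate(lam[sold])  # next sale at the current rate
            if t > T:
                break
            sold += 1
        counts[sold] += 1
    return [c / runs for c in counts]
```

For example, with n = 3, (λ_3, λ_2, λ_1) = (0.5, 1.0, 2.0) and T = 1, the closed-form probabilities for k = 0, …, n sum to 1 and agree with the simulated frequencies to within sampling error.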
2.5 Concluding Remarks

The goal of this chapter was to provide an insight into the domain of pricing models. We limited ourselves to time-dated items with no supply option, in a monopolistic environment with myopic customers. Although these assumptions drastically simplify the problem, many additional restrictive assumptions are required to obtain a mathematical model that is easy to analyze.

Nevertheless, the numerical development of the stochastic dynamic pricing model with salvage values is an interesting tool when integrated in a rolling-horizon approach, since it allows prices to be adjusted periodically according to the inventory level and time, as required in the case of sales. Unfortunately, establishing the one-to-one relationship between price and demand intensity remains the responsibility of the user, and this is not a risk-free task. In our opinion, these pricing models are tools that help to better understand what dynamic pricing is, rather than tools for solving real-life problems.
Reference

Gallego G, Van Ryzin G (1994) Optimal dynamic pricing of inventories with stochastic demand over finite horizons. Manag. Sci. 40:999–1020
Further Reading

Belobaba PP (1989) Application of a probabilistic decision model to airline seat inventory control. Oper. Res. 37:183–197
Bitran GR, Mondschein SV (1995) An application of yield management to the hotel industry considering multiple day stays. Oper. Res. 43:427–443
Bitran GR, Gilbert SM (1996) Managing hotel reservations with uncertain arrivals. Oper. Res. 44:35–49
Caroll WJ, Grimes RC (1995) Evolutionary change in product management: experiences in the car rental industry. Interfaces 25:84–104
Chen F, Federgruen A, Zheng YS (2001) Near-optimal pricing and replenishment strategies for a retail/distribution system. Oper. Res. 49(6):839–853
Chen F, Federgruen A, Zheng YS (2001) Coordination mechanisms for a distribution system with one supplier and multiple retailers. Manag. Sci. 47:693–708
Federgruen A, Heching A (1997) Combined pricing and inventory control under uncertainty. Oper. Res. 47:454–475
Federgruen A, Zipkin P (1986) An inventory model with limited production capacity and uncertain demands. Math. Oper. Res. 11:193–215
Feng Y, Gallego G (1995) Optimal starting times for end-of-season sales and optimal stopping times for promotional fares. Manag. Sci. 41:1371–1391
Gallego G, Van Ryzin G (1997) A multi-product dynamic pricing problem and its application to network yield management. Oper. Res. 45:24–41
Gaimon C (1988) Simultaneous and dynamic price, production, inventory and capacity decisions. Eur. J. Oper. Res. 35:426–441
Garcia-Diaz A, Kuyumcu A (1997) A cutting-plane procedure for maximizing revenues in yield management. Comput. Ind. Eng. 33:51–54
Gerchak Y, Parlar M, Yee TKM (1985) Optimal rationing policies and production quantities for products with several demand classes. Can. J. Adm. Sci. 2:161–176
Kimes SE (1989) Yield management: a tool for capacity constrained service firms. J. Oper. Manag. 8:348–363
Levin Y, McGill J, Nediak M (2007) Price guarantees in dynamic pricing and revenue management. Oper. Res. 55(1):75–97
McGill J, Van Ryzin G (1999) Revenue management: research overview and prospects. Transp. Sci. 33(2):233–256
Petruzzi NC, Dada M (2002) Dynamic pricing and inventory control with learning. Nav. Res. Log. Quart. 49:304–325
Raju CVL, Narahari Y, Ravikumar K (2006) Learning dynamic prices in electronic retail markets with customer segmentation. Ann. Oper. Res. 143(1):59–75
Talluri KT, Van Ryzin GJ (2004) The Theory and Practice of Revenue Management. Kluwer Academic Publishers, Norwell, MA
Van Mieghem J, Dada M (1999) Price versus production postponement: capacity and competition. Manag. Sci. 45(12):1631–1649
Vulcano G, Van Ryzin G, Maglaras C (2002) Optimal dynamic auctions for revenue management. Manag. Sci. 48(11):1388–1407
Zhao W, Zheng Y-S (2000) Optimal dynamic pricing for perishable assets with nonhomogeneous demand. Manag. Sci. 46(3):375–388
Chapter 3
Outsourcing
Abstract The factors and reasons that motivate companies to outsource are analyzed, especially the willingness to concentrate on core competencies. The most common benefits are examined, and the negative side effects are presented. The difference between outsourcing to countries having similar living standards and social welfare systems and to those with diverging levels is underlined. Then, the outsourcing process is analyzed in detail. This includes the choice of the activity to outsource and of the vendors, as well as the way collaborations are negotiated and monitored. If a vendor is located in another country, the buyer should pay attention to many factors: currency exchange rates, input-price uncertainties, foreign tax rules, etc. A detailed multicriteria vendor selection and evaluation model is proposed and a solution method suggested. Then, strategic outsourcing is analyzed in a duopoly market. In addition, the common pro and con arguments for outsourcing are explained. Finally, the chapter ends with an answer to the following question: is outsourcing a harmful strategy? Note that an important part of this chapter deals with outsourcing to China.
3.1 Introduction

Outsourcing is defined as purchasing services, semi-finished products and components from outside companies (referred to as "vendors") when these elements were traditionally provided in-house. A company that outsources is referred to as a "buyer". Outsourcing requires information exchange and coordination between the buyer and the vendor. Confusion is frequent between "outsourcing", "offshore outsourcing" and "offshoring".
When a vendor is located in another country, "outsourcing" becomes "offshore outsourcing". In both cases, the buyer moves an internal business to an external company or, to simplify, the move concerns a part of the production or service system. When a business process as a whole is relocated to another country, we use the term "offshoring".

Furthermore, outsourcing should not be confused with subcontracting. Subcontracting refers to tasks or services that are simply handed over to a company that has the specific skills and/or resources required to be efficient. A subcontractor works "for" a buyer, while a vendor works "with" a buyer.

Let us come back to outsourcing. In the US, about 80% of companies use some form of outsourcing, and they spend about 45% of their budget on it. According to many papers, it is possible to save between 25 and 30% in production cost if outsourcing is managed properly; however, badly managed customer services like some call centers or computer hot lines may lead to lost customers due to the questionable quality of the service. Furthermore, it appears that staff turnover is much higher abroad than in the buyer's country.

Surveys have shown that the primary reason for outsourcing is not the reduction of costs but the wish to focus on the core competencies of the company. In their article, Quinn and Hilmer (1994) define what, in their opinion, effective core competencies are:

1. Have "sets of skills that cut across traditional functions". These horizontal competencies can improve the efficiency of the activities performed in the company.
2. Have "flexible, long-term platforms" in order to follow the changes in customers' requirements.
3. Have a limited number of activities. Too many activities make the company difficult to manage, thus limiting competitiveness.
4. Be the only company able to fill a knowledge gap in the market.
5. Dominate other companies in at least one area.
6. Be close to customers.
7. Be able to maintain the skills of the company. This includes, in particular, continuing investment in research and development (R&D).

When core competencies are strong, outsourcing may improve the competitiveness of the buyer. The most common benefits that may be expected from outsourcing are the following:

• Improve company focus and access to world-class capabilities.
• Provide access to special know-how or operational platforms that are not in the competency domain of the buyer. This is often the consequence of reengineering conducted in order to improve the efficiency of the buyer's company.
• Receive a very specific service that is beyond the scope of the buyer such as, for instance, delivering products or developing advertising in foreign countries where the buyer company has no representative.
• Increase the flexibility of the buyer company: it is much easier to change a vendor than to fire employees, at least in countries with a strong social welfare system.
• Reduce the costs. This objective is common in activities that require significant manpower (production of cars or domestic appliances, for instance), or skilled and efficient employees who accept low wages (software development in India or China, call centers in Eastern Europe or North Africa, to quote just a few).
• Reduce the staff level, since outsourcing an internal business process leads to outsourcing the corresponding staff.

Nevertheless, offshore outsourcing requires particular attention. Despite the fact that offshore outsourcing usually offers cost advantages, we have to keep in mind that some negative side effects may arise. The following list is not exhaustive, but collects the aspects that must be checked when vendors are offshore:

1. Outsourcing should apply only to mature products, i.e., to products that are stabilized and do not require important production adjustments.
2. Design changes in outsourced activities may result in quality problems and the risk of a long adjustment period.
3. Since goods must be shipped back, an additional cost arises, and this cost may be significant, depending on the type of goods and the transportation resources used.
4. Reactivity to customers' requirements is lower than when activities are performed in-house, which reduces the competitiveness of the company.
5. Depending on the country of the vendor, other expenses have to be taken into account, such as customs fees and duties, export taxes and broker's fees, to quote just a few.
6. Additional inventories are necessary to smooth the flow of products between vendor and buyer. This requires investments and introduces a quality risk due to handling.
7. Randomness in deliveries is more likely to occur due to the increase in complexity. As a consequence, production scheduling becomes difficult to establish and, sometimes, useless at the operational level. The solution is to rely on inventories.
8. Communication problems (such as lack of understanding or difficulties in contacting the vendor due to the time difference between the vendor and buyer countries) are likely to occur.
9. Last but not least, outsourcing parts of the production to developing countries like India or China may encourage vendors to become competitors, given the difficulty the buyer has in enforcing confidentiality agreements.
There is a big difference between offshore outsourcing in countries having living standards and social welfare systems similar to those of the country of the buyer, and in developing countries where the production costs and social welfare systems are often negligible compared to those in the US and Europe. In the first case, outsourcing allows the buyer to utilize new knowledge produced elsewhere and to combine it with knowledge already available in-house or in its network, which leads to innovation and technological developments, thus improving competitiveness. Furthermore, if vendors evolve and become competitors of buyers, then the new situation improves efficiency for all. On the contrary, the second case may lead to harmful situations by improving the skills of vendors, turning them into competitors that may offer products and services at much lower costs, due to lower salaries and social insurance contributions in their countries. In this case, buyers often cannot handle the situation and shrink or even disappear. Some of these problems will be detailed in Section 3.6, when analyzing China's outsourcing strategy, and in Section 3.7, where offshore outsourcing is analyzed.

Michael Courbet (Fortune Magazine, 1995) has summarized the rationale for outsourcing as follows:

The traditional integrated firm is not the only, nor necessarily the best, way to create value – especially in the global economy of the 1990s. Today, almost any organization can gain access to resources. What differentiate companies now are their intellectual capital, their knowledge, and their expertise – not the size and scope of the resources they own and manage. As a result, outsourcing is being adopted by firms from across the corporate spectrum. No firm is too large or too small to consider outsourcing.

It should be noted that nothing is written on the consequences of outsourcing for employees in buyers' countries. We must finally point out that most of the publications encountered in the literature are interested in the short-term benefits of the companies that outsource offshore; few publications analyze the long-term consequences for the economies of the developed countries, and fewer still consider the consequences for the daily life of people living in countries that outsource offshore.
3.2 Outsourcing Process

The process that leads to outsourcing comprises the following steps:

1. Select activities or services that could be outsourced. Outsourcing applies to tasks and services that are outside the company's core business. As a consequence, core activities should be well defined and a strategy should be established with the aim of consolidating and improving these activities. Completing the first step of the outsourcing process then consists simply of listing the tasks and services for which few in-house competencies and resources are available.
2. Select the best candidate activities from the previous list. This step consists in going thoroughly through the elements of the list in order to:
   – Evaluate the importance of each activity with regard to the general strategy of the company and the objectives to achieve.
   – Define the necessary conditions for successful outsourcing.
   – Analyze the consequential effects of the outsourcing on the management of the company.
   – Evaluate the expected costs and benefits of outsourcing.
3. Identify and select the vendor. This step is certainly the most difficult, because it involves conflicting quantitative and qualitative criteria. Among the criteria buyers should look for when selecting a vendor are:
   – Commitment to continuous improvement. The question to answer is the following: does the vendor conduct research to improve its product or service efficiency?
   – Willingness to cooperate closely and in the long term with the buyer on technical improvements and employees' training.
   – Commitment to cost reduction. The objective consists in reducing the total cost, which is made of two types of components: the variable costs, which depend on the delivered quantities, and the fixed costs, also called setup costs, which reflect the costs of maintaining the commercial connection between buyer and vendor.
   – Dedication to quality. In particular, the buyer should check whether the vendor utilizes quality-management procedures. To measure the quality level of production, one usually measures the percentage of defects.¹
   – Consistency of deliveries with demands. This consistency can be measured as the percentage of the deliveries that correspond to orders. This criterion should take on a value that is as large as possible.¹
   – Reactivity, which is the ability to react immediately to unexpected changes that happen in the buyer company such as, for instance, a last-moment increase in demand level, urgent deliveries or unforeseen characteristics required for services. Reactivity is measured as the percentage of times the vendor is able to adapt itself to particular demands.
   – Ability to reduce lead times. The global lead time is usually defined as the sum of the terms (lead time multiplied by quantity) over all the demands.¹
4. Negotiate with the vendor the way in which the values of the outsourcing criteria are established. Suggestions have been made for quality, cost and lead-time reductions, reactivity and consistency of deliveries with demands. These suggestions can be used as the basis of the negotiations. Measuring continuous improvement and willingness to cooperate depends on the nature of the production or service under consideration and should be negotiated accordingly. Other criteria that are not listed above could be of importance in very particular cases. For example, in some manufacturing systems:
   – The order in which deliveries arrive in the buyer's company could be of utmost importance.
   – The quality of the packaging could be an important factor for the efficiency of the manufacturing process.
5. Evaluate periodically or, if possible, continuously the values of the criteria in order to compare them to standard values. The objective is quality control.
6. Hold a periodic review of the outsourcing evolution with the vendor.
7. If the vendor is located in another country, the buyer should pay attention to exchange-rate and input-price uncertainties, as well as to the tax rules that apply in the vendor's country. The typical way to face exchange-rate and input-price uncertainty is to use the financial markets. Currency options² are the most frequently used means of coping with currency risk (O'Brien, 1996). Tax rules are an important factor that deserves particular attention. For example, China has implemented a VAT (value added tax) refund system that may deeply influence the strategy used by the buyer.

¹ At the selection stage of the vendor, these measures can be established by interviewing other buyers or by analyzing similar situations with the same vendor. Since these measures will also be used to monitor the cooperation with the vendor, they should then be derived from observations of real performances.
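The measures suggested in steps 3 to 5 can be computed directly from raw delivery records. The sketch below is our own illustration (the record layout and field names are assumptions, not from the text); it derives the defect percentage, the consistency percentage, the reactivity percentage and the global lead time from a list of delivery records:

```python
def vendor_measures(deliveries):
    """Compute the vendor-monitoring measures suggested in the text.

    Each record is a dict with:
      'qty'        : quantity delivered
      'defects'    : number of defective units in the delivery
      'consistent' : True if the delivery corresponds to the order
      'reacted'    : True/False if an unexpected event occurred and the
                     vendor did/did not adapt, None if no event occurred
      'lead_time'  : lead time of the delivery
    """
    total_qty = sum(d['qty'] for d in deliveries)
    defect_pct = 100.0 * sum(d['defects'] for d in deliveries) / total_qty
    consistent_pct = (100.0 * sum(d['qty'] for d in deliveries if d['consistent'])
                      / total_qty)
    events = [d for d in deliveries if d['reacted'] is not None]
    reactivity_pct = (100.0 * sum(d['reacted'] for d in events) / len(events)
                      if events else 100.0)
    # Global lead time: sum of (lead time x quantity) over all demands
    global_lead_time = sum(d['lead_time'] * d['qty'] for d in deliveries)
    return {'defect_pct': defect_pct, 'consistent_pct': consistent_pct,
            'reactivity_pct': reactivity_pct,
            'global_lead_time': global_lead_time}
```

At the selection stage these inputs would come from interviews or past experience, as noted in the footnote; once the collaboration runs, the same function can be fed with observed performances.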
3.3 Vendor Selection and Evaluation Model

As mentioned above, selecting and evaluating the vendor is a multicriteria problem, and some criteria are conflicting. For example, reducing the costs may lead to a deterioration of quality or reactivity. In the model presented in this section, five criteria are taken into account: quality, consistency of deliveries with demands, reactivity, lead time and cost. We assume that one buyer aims at selecting a vendor from among a set of vendors. It is assumed that all the parameter values required to make a decision are known.
3.3.1 Model Formulation

3.3.1.1 Parameters

This section is devoted to the introduction of the problem parameters:

• pt = 1, …, PT are the product types to be outsourced.
• v = 1, …, V are the candidates (vendors) for the selection.

² Currency options are financial instruments that give a firm the right to sell or buy currency at a defined price.
• d_pt is the demand for products of type pt. This demand is supposed to be steady, either because we are at the design stage of the system and demands are forecast, or because we are monitoring the system and demands are known.
• l_{v,pt,i}, pt = 1, …, PT; v = 1, …, V; i = 0, 1, …, n_v − 1. This parameter is the minimum quantity of product type pt to order from vendor v in order to start paying the (i+1)-th price per unit, for i < n_v. We set l_{v,pt,0} = 0. Here, n_v is the number of different price levels for product type pt when bought from vendor v.
• Costs:
  – c_{v,pt,i}, pt = 1, …, PT; v = 1, …, V; i = 0, 1, …, n_v − 1, is the cost per unit when buying a quantity l_{v,pt,i} ≤ q_{v,pt} < l_{v,pt,i+1} of product type pt from vendor v. Note that l_{v,pt,n_v} = Cp_{v,pt}, which means that the last upper limit of an interval is equal to the production capacity, and that c_{v,pt,i} ≥ c_{v,pt,i+1}. Figure 3.1 represents a possible evolution of the total cost with regard to the ordered quantity.
[Figure 3.1 Total cost versus quantity ordered: the total cost is a piecewise-linear function of the quantity ordered, with breakpoints at l_{v,pt,0}, l_{v,pt,1}, l_{v,pt,2}, l_{v,pt,3}]
  – Su_v, v = 1, …, V, are the fixed costs associated with vendor v.
• Lead time:
  – θ_{v,pt}, v = 1, …, V; pt = 1, …, PT, is the lead time of the products of type pt when provided by vendor v.
• Quality:
  – def_{v,pt}, v = 1, …, V; pt = 1, …, PT, is the percentage of defects for products of type pt when provided by vendor v.
• Consistency of delivery with demand:
  – inc_{v,pt}, v = 1, …, V; pt = 1, …, PT, is the ratio of the number of inconsistencies to the number of deliveries of products of type pt when delivered by vendor v. More precisely:

$$inc_{v,pt} = \frac{\sum_{d \in \bar{D}} \{\text{number of products in delivery } d\}}{\sum_{d \in D} \{\text{number of products in delivery } d\}}$$

where $\bar{D}$ is the set of deliveries of products of type pt from vendor v that are inconsistent with the demand and D is the whole set of deliveries of products of type pt from vendor v.
• Reactivity:
  – re_pt, pt = 1, …, PT, is the ratio of unexpected events to the number of orders that concern products of type pt.
  – u_{v,pt}, v = 1, …, V; pt = 1, …, PT, is the ratio of the unexpected events that are taken into account by the vendor to the number of orders. This concerns the products of type pt delivered by vendor v.
• Vendor capacities:
  – Cp_{v,pt}, v = 1, …, V; pt = 1, …, PT, is the production capacity of vendor v relative to products of type pt.
Remarks:
1. To use this model to select a vendor, it is necessary to know the above parameters. This is done by interviewing companies that are already buyers from the candidate vendors, interviewing the vendors themselves, or evaluating these parameters based on previous or current experience with these vendors.
2. The model is also convenient for monitoring the characteristics of the vendors.
3. This model handles criteria that can be measured. As a consequence, any qualitative criterion must be translated into a quantitative criterion before using the model.
3.3.1.2 Decision Variables

Three decision variables will be used in this model. The tradeoff between the five criteria introduced below will be reached through the values assigned to the decision variables. For v = 1, …, V; pt = 1, …, PT and i = 0, …, n_v − 1:

• x_{v,pt,i} is the number of products of type pt provided by vendor v and paid at the i-th price level.
• y_{v,pt} = 1 if vendor v is selected to provide products of type pt, and y_{v,pt} = 0 otherwise.
• z_{v,pt,i} = 1 if products of type pt are sold by vendor v at the i-th price level, and z_{v,pt,i} = 0 otherwise. Note that z_{v,pt,n_v} = 0.
3.3.1.3 Criteria

As mentioned at the beginning of Section 3.3, we decided to take into account quality, consistency of deliveries with demands, reactivity, lead time and costs. We will express these five criteria mathematically.

Quality

The measure of quality is expressed as follows:

$$QAL = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\sum_{i=0}^{n_v-1} def_{v,pt}\times x_{v,pt,i} = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\left[ def_{v,pt}\times\left(\sum_{i=0}^{n_v-1} x_{v,pt,i}\right)\right] \qquad (3.1)$$

The second member of the equality is the sum of the percentages of defects weighted by the quantities that have been ordered. The smaller QAL, the better the global quality resulting from outsourcing.

Consistency of Deliveries with Demand

Remember that the parameters inc_{v,pt} refer to individual products, not to demands. As a consequence, the criterion that measures the consistency is similar to QAL:
$$CONS = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\sum_{i=0}^{n_v-1} inc_{v,pt}\times x_{v,pt,i} = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\left[ inc_{v,pt}\times\left(\sum_{i=0}^{n_v-1} x_{v,pt,i}\right)\right] \qquad (3.2)$$
The smaller the criterion CONS, the better the consistency of delivery with demand.

Reactivity

According to the notations of Section 3.3.1.1, the criterion that measures the reactivity is:

$$REA = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\sum_{i=0}^{n_v-1} re_{pt}\times u_{v,pt}\times x_{v,pt,i} = \sum_{pt=1}^{PT}\left\{ re_{pt}\times\sum_{v=1}^{V}\left[ u_{v,pt}\times\left(\sum_{i=0}^{n_v-1} x_{v,pt,i}\right)\right]\right\} \qquad (3.3)$$
The greater the criterion REA, the better the reactivity.

Lead Time

To measure the total lead time of the system, we use the following criterion:

$$LT = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\sum_{i=0}^{n_v-1} \theta_{v,pt}\times x_{v,pt,i} = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\left[\theta_{v,pt}\times\left(\sum_{i=0}^{n_v-1} x_{v,pt,i}\right)\right] \qquad (3.4)$$
The objective is to reduce LT as much as possible.

Total Cost

The total cost is the sum of:

• A variable cost that depends on the quantity sold:

$$VC = \sum_{v=1}^{V}\sum_{pt=1}^{PT}\sum_{i=0}^{n_v-1} c_{v,pt,i}\times x_{v,pt,i} \qquad (3.5.1)$$

• A constant cost that depends on the vendor:

$$CC = \sum_{v=1}^{V} Su_v\times\left(\sum_{pt=1}^{PT} y_{v,pt}\right) \qquad (3.5.2)$$
Finally, the total cost is:

$$TC = VC + CC \qquad (3.5)$$

Indeed, the objective is to reduce the total cost as much as possible.

3.3.1.4 Constraints

The feasible solutions of the problem must verify the following constraints.

Capacity Constraints

$$\sum_{i=0}^{n_v-1} x_{v,pt,i} \le Cp_{v,pt}\times y_{v,pt};\quad v = 1,\dots,V;\ pt = 1,\dots,PT \qquad (3.6)$$

Note that if vendor v is not selected to provide products of type pt, then y_{v,pt} = 0 and, as a consequence, x_{v,pt,i} = 0 for i = 0, …, n_v − 1.
Demand Constraints

$$\sum_{v=1}^{V}\sum_{i=0}^{n_v-1} x_{v,pt,i} = d_{pt};\quad pt = 1,\dots,PT \qquad (3.7)$$

Note that vendors that are not selected are not excluded from the first member of the equality; they are excluded by the capacity constraints.

Linearization

As shown in Figure 3.1, costs are not linear. To model this discontinuity, we introduce the following constraints:

$$\begin{cases} x_{v,pt,i} \le (l_{v,pt,i} - l_{v,pt,i-1})\,z_{v,pt,i} \\ x_{v,pt,i} \ge (l_{v,pt,i} - l_{v,pt,i-1})\,z_{v,pt,i+1} \end{cases}\quad v = 1,\dots,V;\ pt = 1,\dots,PT;\ i = 0,\dots,n_v-1 \qquad (3.8)$$
From these inequalities, we verify that:

• If x_{v,pt,i} = 0, then z_{v,pt,i+1} = 0: this means that if no product of type pt is sold by vendor v at the i-th price, then no product of type pt is sold by vendor v at the (i+1)-th price.
• If x_{v,pt,i} is maximal, that is to say x_{v,pt,i} = l_{v,pt,i} − l_{v,pt,i−1}, then z_{v,pt,i+1} is either equal to 0 or 1. In other words, if vendor v provides a maximum quantity of products of type pt at the i-th price, then it may also provide some products of the same type at the (i+1)-th price.
• If x_{v,pt,i} > 0 and x_{v,pt,i} < l_{v,pt,i} − l_{v,pt,i−1}, then z_{v,pt,i} = 1 and z_{v,pt,i+1} = 0. This means that if vendor v provides a quantity of products of type pt at the i-th price that is not maximal, then no product of type pt is sold by vendor v at the (i+1)-th price.

These remarks show that Inequalities 3.8 guarantee that the given characteristics of the costs hold.

Domains of the Decision Variables

The domains of the decision variables are as follows:

• x_{v,pt,i} ≥ 0; v = 1, …, V; pt = 1, …, PT; i = 0, …, n_v − 1, i.e., these variables take non-negative real values.
• y_{v,pt} ∈ {0, 1}; v = 1, …, V; pt = 1, …, PT, i.e., these are binary variables.
• z_{v,pt,i} ∈ {0, 1}; v = 1, …, V; pt = 1, …, PT; i = 0, …, n_v, are binary variables.

We will now present some possible approaches to this problem. In other words, we propose some methods to find the values to be assigned to the decision variables in order to reach the "best" tradeoff between the above criteria. The next section will be illustrated by a simple numerical example.
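All five criteria are linear in the decision variables, so evaluating a candidate sourcing plan amounts to a few weighted sums. The following sketch is ours (the dictionary-based data layout and function name are assumptions); it computes QAL, CONS, REA, LT and TC = VC + CC from Equations 3.1–3.5 for a given plan (x, y):

```python
def evaluate_criteria(x, y, params):
    """Evaluate the five criteria (3.1)-(3.5) for a candidate plan.

    x[(v, pt, i)] : quantity of product pt bought from vendor v at price level i
    y[(v, pt)]    : 1 if vendor v is selected for product pt, else 0
    params        : dict with 'def', 'inc', 'theta', 'u' keyed by (v, pt),
                    're' keyed by pt, 'c' keyed by (v, pt, i), 'Su' keyed by v
    """
    qal = sum(params['def'][v, pt] * q for (v, pt, i), q in x.items())   # (3.1)
    cons = sum(params['inc'][v, pt] * q for (v, pt, i), q in x.items())  # (3.2)
    rea = sum(params['re'][pt] * params['u'][v, pt] * q
              for (v, pt, i), q in x.items())                            # (3.3)
    lt = sum(params['theta'][v, pt] * q for (v, pt, i), q in x.items())  # (3.4)
    vc = sum(params['c'][v, pt, i] * q for (v, pt, i), q in x.items())   # (3.5.1)
    cc = sum(params['Su'][v] * sum(yv for (vv, pt), yv in y.items() if vv == v)
             for v in params['Su'])                                      # (3.5.2)
    return {'QAL': qal, 'CONS': cons, 'REA': rea, 'LT': lt, 'TC': vc + cc}
```

Such a routine is the building block of any of the solution approaches below, since each of them combines or constrains these same linear expressions.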
3.3.2 Solution Approaches

Since the problem at hand involves conflicting criteria, the objective is to find the "best" tradeoff between these criteria. Several approaches are available:

1. Convert the multicriteria problem into a single-criterion problem by treating all but one criterion as constraints.
2. Replace the criteria by a single criterion that is a weighted sum of the criteria.
3. Apply the so-called goal programming method, which consists in minimizing a weighted sum of the deviations of the criteria from predefined target values.
4. Apply so-called compromise programming, which consists in approaching some "ideal" solution as closely as possible.

We will develop these methods and illustrate them using the model presented in Section 3.3.1. To simplify the presentation, we consider only two criteria (cost and quality). The next subsection is devoted to the presentation of the numerical example.

3.3.2.1 Illustrative Model
Two types of products and two vendors are involved in this model. Also, two levels of variable costs are given for each type of product. We use the notations introduced in the previous section.

Capacities:

Cp_{1,1} = 800, Cp_{1,2} = 900, Cp_{2,1} = 600, Cp_{2,2} = 700

Cost parameters (remember that l_{v,pt,0} = 0 whatever v and pt):

(l_{1,1,1} = 100, c_{1,1,1} = 10), (l_{1,1,2} = 800, c_{1,1,2} = 8)
(l_{1,2,1} = 80, c_{1,2,1} = 12), (l_{1,2,2} = 900, c_{1,2,2} = 7)
(l_{2,1,1} = 50, c_{2,1,1} = 11), (l_{2,1,2} = 600, c_{2,1,2} = 6)
(l_{2,2,1} = 100, c_{2,2,1} = 14), (l_{2,2,2} = 700, c_{2,2,2} = 9)

Su_1 = 20, Su_2 = 15

Demands: d_1 = 600, d_2 = 1000

Quality: def_{1,1} = 6, def_{1,2} = 3, def_{2,1} = 5, def_{2,2} = 4
The Model

The simplified model, derived from the above data and Relations 3.1–3.8, is as follows. The quality criterion is:

$$QAL = 6\,(x_{1,1,1}+x_{1,1,2}) + 3\,(x_{1,2,1}+x_{1,2,2}) + 5\,(x_{2,1,1}+x_{2,1,2}) + 4\,(x_{2,2,1}+x_{2,2,2}) \qquad (3.9)$$

The smaller the value of this criterion, the better the result. The cost criterion is:

$$TC = 10x_{1,1,1} + 8x_{1,1,2} + 12x_{1,2,1} + 7x_{1,2,2} + 11x_{2,1,1} + 6x_{2,1,2} + 14x_{2,2,1} + 9x_{2,2,2} + 20\,(y_{1,1}+y_{1,2}) + 15\,(y_{2,1}+y_{2,2}) \qquad (3.10)$$

The smaller the value of this criterion, the better the result.
The capacity constraints are:

$$\begin{cases} x_{1,1,1}+x_{1,1,2} \le 800\,y_{1,1} \\ x_{1,2,1}+x_{1,2,2} \le 900\,y_{1,2} \\ x_{2,1,1}+x_{2,1,2} \le 600\,y_{2,1} \\ x_{2,2,1}+x_{2,2,2} \le 700\,y_{2,2} \end{cases} \qquad (3.11)$$

The demand constraints are:

$$\begin{cases} x_{1,1,1}+x_{1,1,2}+x_{2,1,1}+x_{2,1,2} = 600 \\ x_{1,2,1}+x_{1,2,2}+x_{2,2,1}+x_{2,2,2} = 1000 \end{cases} \qquad (3.12)$$

The linearization constraints are:

$$\begin{cases} x_{1,1,1} \le 100\,z_{1,1,1}, & x_{1,1,2} \le 700\,z_{1,1,2} \\ x_{1,1,1} \ge 100\,z_{1,1,2}, & x_{1,1,2} \ge 700\,z_{1,1,3} \\ x_{1,2,1} \le 80\,z_{1,2,1}, & x_{1,2,2} \le 820\,z_{1,2,2} \\ x_{1,2,1} \ge 80\,z_{1,2,2}, & x_{1,2,2} \ge 820\,z_{1,2,3} \\ x_{2,1,1} \le 50\,z_{2,1,1}, & x_{2,1,2} \le 550\,z_{2,1,2} \\ x_{2,1,1} \ge 50\,z_{2,1,2}, & x_{2,1,2} \ge 550\,z_{2,1,3} \\ x_{2,2,1} \le 100\,z_{2,2,1}, & x_{2,2,2} \le 600\,z_{2,2,2} \\ x_{2,2,1} \ge 100\,z_{2,2,2}, & x_{2,2,2} \ge 600\,z_{2,2,3} \end{cases} \qquad (3.13)$$

Note that, since z_{v,pt,n_v} = 0 holds for a number of variables z, some of the inequalities in (3.13) disappear and we obtain:

$$\begin{cases} x_{1,1,1} \le 100\,z_{1,1,1}, & x_{1,1,2} \le 700\,z_{1,1,2} \\ x_{1,1,1} \ge 100\,z_{1,1,2} \\ x_{1,2,1} \le 80\,z_{1,2,1}, & x_{1,2,2} \le 820\,z_{1,2,2} \\ x_{1,2,1} \ge 80\,z_{1,2,2} \\ x_{2,1,1} \le 50\,z_{2,1,1}, & x_{2,1,2} \le 550\,z_{2,1,2} \\ x_{2,1,1} \ge 50\,z_{2,1,2} \\ x_{2,2,1} \le 100\,z_{2,2,1}, & x_{2,2,2} \le 600\,z_{2,2,2} \\ x_{2,2,1} \ge 100\,z_{2,2,2} \end{cases}$$
3.3 Vendor Selection and Evaluation Model
The domains of the decision variables are defined as follows: variables xv,pt,i take non-negative real values, while variables zv,pt,i and yv,pt take either the value 0 or the value 1.

3.3.2.2 Treating All But One Criterion as Constraints
In this case, we assume that the budget is known and equal to 14 000. We thus replace Criterion 3.10 by the constraint:

TC = 10 x1,1,1 + 8 x1,1,2 + 12 x1,2,1 + 7 x1,2,2 + 11 x2,1,1 + 6 x2,1,2 + 14 x2,2,1 + 9 x2,2,2 + 20 ( y1,1 + y1,2 ) + 15 ( y2,1 + y2,2 ) ≤ 14 000    (3.14)

The optimal solution to this problem is:

x1,1,1 = 0, x1,1,2 = 0, x2,1,1 = 50, x2,1,2 = 550,
x1,2,1 = 80, x1,2,2 = 220, x2,2,1 = 100, x2,2,2 = 600

The cost corresponding to this solution is TC = 13 200.
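The reported solution is easy to sanity-check against the demand constraints and Criteria 3.9 and 3.10 in a few lines of plain Python (a verification sketch, not the book's solution method; the QAL value it reports is our own computation):

```python
# Solution reported for the budget-constrained problem,
# keyed by (vendor, product_type, price_level).
x = {(1, 1, 1): 0,   (1, 1, 2): 0,   (1, 2, 1): 80,  (1, 2, 2): 220,
     (2, 1, 1): 50,  (2, 1, 2): 550, (2, 2, 1): 100, (2, 2, 2): 600}
# y_{v,pt} = 1 iff vendor v actually supplies product pt.
y = {(v, pt): int(x[(v, pt, 1)] + x[(v, pt, 2)] > 0)
     for v in (1, 2) for pt in (1, 2)}

c   = {(1, 1, 1): 10, (1, 1, 2): 8, (1, 2, 1): 12, (1, 2, 2): 7,
       (2, 1, 1): 11, (2, 1, 2): 6, (2, 2, 1): 14, (2, 2, 2): 9}
su  = {1: 20, 2: 15}
dfc = {(1, 1): 6, (1, 2): 3, (2, 1): 5, (2, 2): 4}

# Demand constraints (3.12).
assert sum(x[(v, 1, i)] for v in (1, 2) for i in (1, 2)) == 600
assert sum(x[(v, 2, i)] for v in (1, 2) for i in (1, 2)) == 1000

# Total cost (3.10) and quality (3.9).
tc  = sum(c[k] * x[k] for k in x) + sum(su[v] * y[(v, pt)] for (v, pt) in y)
qal = sum(dfc[(v, pt)] * (x[(v, pt, 1)] + x[(v, pt, 2)]) for (v, pt) in dfc)
```

The check recovers TC = 13 200 ≤ 14 000; the corresponding quality value, QAL = 6 700, follows from Criterion 3.9.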
3.3.2.3 A Single Criterion that is a Weighted Sum of the Criteria

Constraint 3.14 is relaxed and we consider the weighted sum of Criteria 3.9 and 3.10. We choose a weight equal to 0.5 for each criterion. Thus, the criterion to minimize is:

WT1 = 8 x1,1,1 + 7 x1,1,2 + 7.5 x1,2,1 + 5 x1,2,2 + 8 x2,1,1 + 5.5 x2,1,2 + 9 x2,2,1 + 6.5 x2,2,2 + 10 ( y1,1 + y1,2 ) + 7.5 ( y2,1 + y2,2 )    (3.15)

The solution to this problem is:

x1,1,1 = 0, x1,1,2 = 0, x2,1,1 = 50, x2,1,2 = 550,
x1,2,1 = 80, x1,2,2 = 820, x2,2,1 = 100, x2,2,2 = 0

Remember that WT1 is a combination of total cost and quality; its value therefore has no direct interpretation when evaluating the solution.
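The two components of the weighted sum can nevertheless be recovered separately. A sketch that evaluates QAL, TC and WT1 = 0.5 QAL + 0.5 TC for the solution above (our own verification, not from the book):

```python
# Solution reported for the weighted-sum criterion (3.15),
# keyed by (vendor, product_type, price_level).
x = {(1, 1, 1): 0,   (1, 1, 2): 0,   (1, 2, 1): 80,  (1, 2, 2): 820,
     (2, 1, 1): 50,  (2, 1, 2): 550, (2, 2, 1): 100, (2, 2, 2): 0}
y = {(v, pt): int(x[(v, pt, 1)] + x[(v, pt, 2)] > 0)
     for v in (1, 2) for pt in (1, 2)}

c   = {(1, 1, 1): 10, (1, 1, 2): 8, (1, 2, 1): 12, (1, 2, 2): 7,
       (2, 1, 1): 11, (2, 1, 2): 6, (2, 2, 1): 14, (2, 2, 2): 9}
su  = {1: 20, 2: 15}
dfc = {(1, 1): 6, (1, 2): 3, (2, 1): 5, (2, 2): 4}

tc  = sum(c[k] * x[k] for k in x) + sum(su[v] * y[(v, pt)] for (v, pt) in y)
qal = sum(dfc[(v, pt)] * (x[(v, pt, 1)] + x[(v, pt, 2)]) for (v, pt) in dfc)

def weighted(w_qal, w_tc):
    """Weighted-sum criterion w_qal * QAL + w_tc * TC for this solution."""
    return w_qal * qal + w_tc * tc
```

Here QAL = 6 100, TC = 12 000 and WT1 = 9 050; changing the weights passed to `weighted` reproduces the WT2 experiment of the next paragraphs without rebuilding the model.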
Let us now assume that quality is much more important than cost. To reflect this fact, we assign weight 0.8 to criterion QAL and weight 0.2 to TC. With these weights, the criterion to optimize is:

WT2 = 6.8 x1,1,1 + 6.4 x1,1,2 + 4.8 x1,2,1 + 3.8 x1,2,2 + 6.2 x2,1,1 + 5.2 x2,1,2 + 6 x2,2,1 + 5 x2,2,2 + 4 ( y1,1 + y1,2 ) + 3 ( y2,1 + y2,2 )

This approach consists in solving a linear programming problem whose criterion is the weighted sum of the initial criteria. The solution depends on the weights. Unfortunately, it is rarely easy to find the "appropriate" weights. Another point should be outlined: if some of the initial criteria should be minimized and others maximized, then a specific approach should be used. Assume, for instance, that we have to consider two criteria K1 and K2, and that K1 should be maximized while K2 should be minimized. In this case, we define an upper bound M of criterion K1 and build w1 ( M − K1 ) + w2 K2, which is the criterion to minimize.

3.3.2.4 Goal Programming Method
This method consists in:

• Defining a goal for each of the criteria. For instance, we can choose a goal of 6 500 for quality ( GQAL = 6 500 ) and a goal of 35 000 for the cost ( GTC = 35 000 ).
• Formulating the problem that leads to the reduction of the gap between the values of the criteria and the related goals.

We apply the method to our example. We set:

QAL + EQ− − EQ+ = GQAL    (3.16)

or:

6 ( x1,1,1 + x1,1,2 ) + 3 ( x1,2,1 + x1,2,2 ) + 5 ( x2,1,1 + x2,1,2 ) + 4 ( x2,2,1 + x2,2,2 ) + EQ− − EQ+ = 6 500

and:

TC + EC− − EC+ = GTC    (3.17)
or:

10 x1,1,1 + 8 x1,1,2 + 12 x1,2,1 + 7 x1,2,2 + 11 x2,1,1 + 6 x2,1,2 + 14 x2,2,1 + 9 x2,2,2 + 20 ( y1,1 + y1,2 ) + 15 ( y2,1 + y2,2 ) + EC− − EC+ = 35 000

Finally, the problem consists of minimizing:

GP = w1 EQ+ + w2 EC+    (3.18)

under Constraints 3.11–3.17, with variables xv,pt,i taking non-negative real values and variables zv,pt,i and yv,pt taking either the value 0 or the value 1. Weights w1 and w2 are given. This method makes it possible to approach predefined objectives as closely as possible. It is particularly convenient when one has a good knowledge of the problem and some idea about the expected values of the criteria. Note that when the criterion X under consideration must be maximized, we replace EX+ by EX− in Criterion 3.18.
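The role of the deviation variables can be illustrated with a short sketch (the goal values are those chosen above; the criterion values fed to it are illustrative):

```python
def deviations(value, goal):
    """Split the gap between an achieved criterion value and its goal into
    under-achievement E_minus and overshoot E_plus, so that
    value + E_minus - E_plus == goal (as in Relations 3.16 and 3.17)."""
    return max(goal - value, 0), max(value - goal, 0)

def gp(qal, tc, g_qal=6500, g_tc=35000, w1=0.5, w2=0.5):
    """Criterion 3.18: since both QAL and TC are minimized, only the
    overshoots E_Q_plus and E_C_plus are penalized."""
    _, eq_plus = deviations(qal, g_qal)
    _, ec_plus = deviations(tc, g_tc)
    return w1 * eq_plus + w2 * ec_plus
```

In an actual goal program the deviations are decision variables of the linear program itself; the functions above only illustrate how they relate to the achieved criterion values.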
3.3.2.5 Compromise Programming

The compromise programming method aims at approaching the ideal solution as closely as possible. The ideal solution is the set of values taken by the criteria when the problem is solved for each criterion independently. In the above example, we first minimize Criterion 3.9 under Constraints 3.11–3.13, taking into account the feasible domains of the decision variables. The optimal value of this criterion is QAL*. We then minimize Criterion 3.10 under the same constraints. The optimal value of criterion TC is TC*. We then build a combined criterion that is the Lp distance:

Lp = [ w1^p ( QAL − QAL* )^p + w2^p ( TC − TC* )^p ]^(1/p)

and we minimize this criterion under the same constraints. Here w1 and w2 are the weights, with w1 + w2 = 1.
For p > 1, the problem is no longer a linear programming problem. For p = 1, it is the problem solved in Section 3.3.2.3.³
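A sketch of the Lp distance itself, with illustrative ideal values QAL* and TC* (the book does not report them numerically):

```python
def lp_distance(qal, tc, qal_star, tc_star, w1=0.5, w2=0.5, p=1):
    """L_p distance of (QAL, TC) to the ideal point (QAL*, TC*), with
    weights w1 + w2 = 1.  For p = 1 it is the weighted sum of the gaps,
    i.e. the criterion of Section 3.3.2.3 up to an additive constant."""
    assert abs(w1 + w2 - 1.0) < 1e-12
    return (w1 ** p * (qal - qal_star) ** p
            + w2 ** p * (tc - tc_star) ** p) ** (1.0 / p)
```

Evaluating the distance is cheap; the difficulty noted in the text is that for p > 1 minimizing it over the feasible set is no longer a linear program.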
3.4 Strategic Outsourcing

In this section, we consider the case of a duopoly market.⁴ The two companies that compose the duopoly are denoted by C1 and C2. We assume that we are in a situation of Nash equilibrium.⁵ The two companies are competitors and X is an activity that could be outsourced. If a company outsources activity X, its in-house production facilities are shut down, and the company cannot return to performing X in-house due to the investment and skill that this would require. Thus, it is practically impossible to go back to in-house activity. We define, for i = 1, 2:

xi = 0 if Ci does not outsource X, and xi = 1 otherwise.

Outsourcing activity X is more expensive than keeping it in-house for both companies (called buyers in this chapter), but the objective of both buyers is to increase their relative efficiency, and thus their relative competitiveness. We denote by Pi ( x1, x2 ) the equilibrium profit of Ci for state ( x1, x2 ). The competitiveness of C1 compared with that of C2 for state ( x1, x2 ) is measured by:

Dx1,x2 = P1 ( x1, x2 ) − P2 ( x1, x2 )

For state ( x1, x2 ), if Dx1,x2 > 0, the competitiveness of C1 is better than that of C2, and worse otherwise.
³ As mentioned above, other criteria can be introduced into the model. For instance, some researchers consider that, outsourcing being long-term oriented, contractual penalties should apply in the case of premature termination of a relationship. In their opinion, keeping a long-term relationship between vendor and buyer is justified when investments have been made in training, in developing new technologies or in common research.

⁴ A duopoly market is a market dominated by two firms that are large enough to influence market prices.

⁵ A Nash equilibrium is a market situation involving at least two companies in which no company can benefit from changing its strategy while the other companies keep their strategies unchanged.
Now, we analyze inequalities among the Dx1,x2 and their consequences on the strategies of the companies. The consequences are the decisions made by the companies to outsource or not. Indeed, the initial state is ( 0, 0 ): X is performed in-house in both companies.
3.4.1 Case D0,0 < D1,1

We are in the case where both companies outsourcing increases the competitiveness of C1 compared with the competitiveness of C2. We have to consider different situations that depend on the values of D1,0 and D0,1 compared with D0,0 and D1,1.
3.4.1.1 D1,0 < D0,0 < D1,1 and D0,1 < D0,0 < D1,1
According to inequalities D1,0 < D0,0 < D1,1, company C1 will not outsource activity X since, by doing so, it would decrease its competitiveness compared to C2. Furthermore, C2 will not outsource, keeping the change in competitiveness in its favor. According to inequalities D0,1 < D0,0 < D1,1, if company C2 outsources, it will increase its competitiveness vis-à-vis C1; but, in this case, C1 will also outsource, increasing its competitiveness to a level greater than in the situation where neither company outsources. As a consequence, neither company should outsource, and the equilibrium state is ( 0, 0 ) in this case.
3.4.1.2 D0,0 < D1,0 < D1,1 and D0,1 < D0,0 < D1,1
In this case, C1 outsources since D0,0 < D1,0 and D0,0 < D1,1, which guarantees that both states ( 1, 0 ) and ( 1, 1 ) are better for C1 than ( 0, 0 ); C2 will not outsource, since doing so would further increase the competitiveness of C1. Thus, if C1 outsources first, the equilibrium state is ( 1, 0 ) and the measure of relative competitiveness is D1,0.
If C2 outsources first, inequalities D0,1 < D0, 0 < D1,1 show that the relative competitiveness of C1 decreases, but C1 will outsource next, which leads to the measure of relative competitiveness D1,1 , which is greater than D1, 0 . Thus, C1 should outsource alone and the equilibrium state is ( 1, 0 ) .
3.4.1.3 D0,0 < D1,1 < D1,0 and D0,1 < D0,0 < D1,1
If C1 outsources first, its relative competitiveness increases, but C2 outsources next to reduce the relative competitiveness of C1 to D1,1 . If C2 outsources first, inequalities D0,1 < D0, 0 < D1,1 show that the relative competitiveness of C1 decreases, but C1 will outsource next, which leads to the measure of relative competitiveness D1,1 . Finally, either C1 or C2 outsources first and, in both cases, the equilibrium state is ( 1, 1 ) .
3.4.1.4 D1,0 < D0,0 < D1,1 and D0,0 < D0,1 < D1,1
According to D1, 0 < D0, 0 < D1,1 , company C1 will not outsource activity X first since, by doing so, it would decrease its competitiveness compared to C2 , and C2 would not outsource, keeping the change in competitiveness in its favor. If C2 outsources first, D0, 0 < D0,1 < D1,1 show that the relative competitiveness of C1 increases, and C1 would further increase its competitiveness by outsourcing X . Thus, C2 will not outsource first. Finally, neither C1 nor C2 should outsource since this would diminish their position. The equilibrium state is ( 0, 0 ) .
3.4.1.5 D0,0 < D1,0 < D1,1 and D0,0 < D0,1 < D1,1
If C1 outsources first it will increase its relative competitiveness since D0, 0 < D1, 0 and C2 will not outsource since this would further increase the relative competitiveness of C1 .
If C2 outsources first, the relative competitiveness of C1 increases since D0, 0 < D0,1 , and C1 would further improve its relative competitiveness by outsourcing next since D0,1 < D1,1 . Thus, C2 will not outsource first. Finally, only C1 outsources and the equilibrium state is ( 1, 0 ) .
3.4.1.6 D0,0 < D1,1 < D1,0 and D0,0 < D0,1 < D1,1
If C1 outsources first it will increase its relative competitiveness since D0, 0 < D1, 0 , but the strategy of C2 will be to reduce this competitiveness by outsourcing next (see D1,1 < D1, 0 ). As in the previous case, C2 should not outsource first. Finally, as in the previous case, C1 outsources first and the equilibrium state is (1,1 ) .
3.4.1.7 D1,0 < D0,0 < D1,1 and D0,0 < D1,1 < D0,1
According to D1, 0 < D0, 0 < D1,1 , company C1 will not outsource activity X first since, by doing so, it would decrease its competitiveness compared to C2 , and C2 would not outsource, keeping the change in competitiveness in its favor. C2 will not outsource first since it would increase the relative competitiveness of C1 ( D0, 0 < D0,1 ), and C1 will not outsource to keep the advantage. Finally, neither C1 nor C2 should outsource since this would diminish their position. The equilibrium state is ( 0, 0 ) .
3.4.1.8 D0,0 < D1,0 < D1,1 and D0,0 < D1,1 < D0,1
Company C1 will outsource ( D0,0 < D1,0 ), but C2 will not outsource afterwards, because doing so would further increase the advantage of C1 ( D1,0 < D1,1 ). According to D0,0 < D1,1 < D0,1, C2 will not outsource first either.
Finally, only C1 outsources and the equilibrium state is ( 1, 0 ) .
3.4.1.9 D0,0 < D1,1 < D1,0 and D0,0 < D1,1 < D0,1
Company C1 will outsource ( D0, 0 < D1, 0 ) and C2 will outsource next in order to reduce the relative competitiveness of C1 ( D1,1 < D1, 0 ). According to D0, 0 < D1,1 < D0,1 , C2 will not outsource first.
Finally, C1 outsources first and the equilibrium state is ( 1, 1 ).

These results are summarized in Table 3.1. In Tables 3.1 and 3.2, the element on the left of "/" is the company that outsources first and the element on the right is the equilibrium state of the system.

Table 3.1 Case D0,0 < D1,1

                     | D1,0 < D0,0 < D1,1 | D0,0 < D1,0 < D1,1 | D0,0 < D1,1 < D1,0
D0,1 < D0,0 < D1,1   | – / ( 0, 0 )       | C1 / ( 1, 0 )      | C1 / ( 1, 1 )
D0,0 < D0,1 < D1,1   | – / ( 0, 0 )       | C1 / ( 1, 0 )      | C1 / ( 1, 1 )
D0,0 < D1,1 < D0,1   | – / ( 0, 0 )       | C1 / ( 1, 0 )      | C1 / ( 1, 1 )
As we can see in Table 3.1, the results can be summarized as follows. When D0,0 < D1,1, i.e., when the advantage of C1 over C2 increases when both companies outsource, then:

• Neither company outsources if D1,0 < D0,0 < D1,1, and thus the equilibrium state is (0, 0).
• Only C1 outsources if D0,0 < D1,0 < D1,1, and thus the equilibrium state is (1, 0).
• Both companies outsource, and C1 outsources first, if D0,0 < D1,1 < D1,0. Thus, the equilibrium state is (1, 1).
3.4.2 Case D1,1 < D0,0

In this case, the analysis leads to the results given in Table 3.2.

Table 3.2 Case D1,1 < D0,0

                     | D1,0 < D1,1 < D0,0 | D1,1 < D1,0 < D0,0 | D1,1 < D0,0 < D1,0
D0,1 < D1,1 < D0,0   | C2 / ( 1, 1 )      | C2 / ( 1, 1 )      | C2 / ( 1, 1 )
D1,1 < D0,1 < D0,0   | C2 / ( 0, 1 )      | C2 / ( 0, 1 )      | C2 / ( 0, 1 )
D1,1 < D0,0 < D0,1   | – / ( 0, 0 )       | – / ( 0, 0 )       | – / ( 0, 0 )
As we can see in Table 3.2, the results can be summarized as follows. When D1,1 < D0,0, i.e., when the advantage of C1 over C2 is larger when both companies perform activity X in-house, then:

• Neither company outsources if D1,1 < D0,0 < D0,1, and thus the equilibrium state is (0, 0).
• Only C2 outsources if D1,1 < D0,1 < D0,0, and thus the equilibrium state is (0, 1).
• Both companies outsource, and C2 outsources first, if D0,1 < D1,1 < D0,0. Thus, the equilibrium state is (1, 1).
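The case analysis behind Tables 3.1 and 3.2 can be condensed into a small decision function. The sketch below is our own encoding of the two tables (assuming all the D values are distinct); it returns the first outsourcer, if any, and the equilibrium state:

```python
def equilibrium(d00, d10, d01, d11):
    """First outsourcer ('C1', 'C2' or None) and equilibrium state (x1, x2)
    as a function of the relative-competitiveness measures D_{x1,x2},
    following Tables 3.1 and 3.2."""
    if d00 < d11:
        # Table 3.1: the outcome depends only on the position of D_{1,0}.
        if d10 < d00:
            return None, (0, 0)
        if d10 < d11:
            return "C1", (1, 0)
        return "C1", (1, 1)
    # Table 3.2 (D_{1,1} < D_{0,0}): symmetric, driven by D_{0,1}.
    if d01 < d11:
        return "C2", (1, 1)
    if d01 < d00:
        return "C2", (0, 1)
    return None, (0, 0)
```

The function makes the symmetry of the two tables explicit: when joint outsourcing favors C1, only the position of D1,0 matters, and when in-house production favors C1, only the position of D0,1 matters.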
3.5 Pros and Cons of Outsourcing

In this section, we gather together the most common arguments for and against outsourcing. Firms that outsource put forward the following reasons:

1. Saving costs, which requires choosing a vendor that performs the outsourced function more efficiently than the buyer could.
2. Reducing staff, or minimizing the fluctuations in staffing due to changes in demand.
3. Freeing employees from tedious tasks in order to allow them to concentrate on core activities.
4. Gaining benefit by taking advantage of external expertise or outsourcing non-core activities.
5. Achieving greater financial flexibility by selling assets that were formerly used in the outsourced activity in order to improve the company's cash flow. Note, however, that the vendor may demand a long-term contract, which may penalize flexibility.
6. Gaining access to external skills and technologies.
7. Vendors are assumed to provide quality activities: paying for a service creates the expectation of performance (in costs, quality, flexibility, etc.).

Many arguments against offshore outsourcing to developing countries are developed in the next two sections. To summarize, we can mention loss of strategic alignment, risk to quality, decrease in company loyalty and impoverishment of innovation, to quote just a few.
3.6 A Country of Active Offshore Vendors: China

China is certainly one of the most active and involved countries as far as outsourcing is concerned. Nevertheless, outsourcing in China may be riskier in the medium and long terms than in other locations such as India, Eastern Europe or North Africa. This is due to the Chinese strategy of capturing the know-how and technology of the buyers in order to develop their own competencies, apparently with the goal of eventually competing with the former buyers. In this section, we assess the long-term implications of outsourcing in China.
3.6.1 Recent History

The recent history of outsourcing in China can be summarized in a few points:

• In the 1970s, outsourcing activities concerned low added-value products such as textiles, consumer electronics (TV sets, for example), toys, etc.
• In the 1980s, outsourcing incorporated car parts and even full assembly of cars.
• In the 1990s, more and more outsourcing requirements concerned products with high added value such as software, semiconductors (for IBM, Intel, Texas Instruments, …) and medical equipment, to mention just a few.
• Since the middle of the 1990s, outsourcing to China has concerned even higher added-value products and services: research and development (R&D), key auto parts (for Ford, Daimler-Chrysler and Volkswagen), key airplane components (for Boeing), machinery, and so on.
Since China's vendors are involved in value-added products and services, the related technologies migrate from Europe and the US to China. The reason is that outsourcing usually occurs through joint ventures, equity stakes and coproduction agreements, which leads to sharing know-how and, in addition, makes the buyers more and more dependent on the vendors. Most of the US and European giant companies currently outsource in China; to the companies already mentioned above we can add Philips Electronics, Thomson, Siemens, Airbus, General Motors, Renault, etc. This evolution has resulted in a financial domination of China over Europe and the US. For example, China holds some 260 billion dollars in US Treasury instruments. This situation is sometimes summarized by saying: "China supports developed countries as the rope supports the hanged man".
3.6.2 Consequences

Outsourcing in China may indeed reduce production costs and increase buyers' competitiveness in the short term, but it leads to important negative side effects such as:

1. competitive dilemmas;
2. loss of initiative in buyer companies;
3. acquisition by China of European and US high-tech companies, and thus of the related technologies and savoir-faire that go along with them.
3.6.2.1 Competitive Dilemmas
Since European and US companies outsource critical functions, vendors are learning vital know-how and technologies from the buyers, and thus gain access to core competencies of the buyer companies. This generates Chinese competitors who become able to require significant concessions in skills and technologies from foreign firms that are willing to outsource. We summarize this "fatal circle" in Figure 3.2: under the pressure of competition, the buyer outsources critical functions; the vendor gains access to core competencies and increases the pressure for further concessions, which in turn raises the competitive pressure on the buyer. The "fatal circle" summarizes how years of research efforts and billions spent on research and technology are offered to a country that will use its "social advantage" to destroy companies in developed countries.
Figure 3.2 The fatal circle
3.6.2.2 Loss of Initiative in Buyers’ Companies
The following question deserves an answer: does outsourcing handicap companies' initiative? In other words, does outsourcing penalize the willingness of companies to extend their competencies (i.e., to innovate)? To answer this question, we have to check whether outsourcing introduces constraints that may prevent companies from developing their own strategy. The unsurprising answer is "yes". First, excessive reliance on outsourcing introduces constraints that restrict the freedom to undertake new developments. Secondly, companies' ability to invest may be eroded by outsourcing: vendors impose constraints that, in turn, reduce the freedom of the buyer. Thirdly, the vendor who has gained know-how and technical competencies has an increased negotiating advantage over the buyer and can thus extract even more lucrative contracts from the buyer, reducing the level of investment accordingly. Figure 3.3 summarizes the reasons why innovation is penalized by outsourcing: the reduction of the buyer's freedom, the decrease in investments and the domination of the vendor over the buyer.
Figure 3.3 Penalization of innovation
3.6.2.3 Migration of Production and Services to China

Some economists claim that foreign direct investment in China reached 0.5 trillion dollars over the last 15 years. Thus, foreign companies have built up enormous manufacturing, service and research facilities in this country. Due to a rising base of skilled engineers and a huge investment capacity, Chinese companies have acquired the know-how and key current and former technologies, so that they can design and manufacture their own products to compete with those of their former buyers. The market segments concerned are, among others, software, advanced semiconductors, engineered plastics, electronics, automotive and telecom equipment. These aspects are summarized in Figure 3.4: outsourcing increases the vendor's know-how and investment capacity, vendors build local facilities, and competition with the buyer follows.
Figure 3.4 From outsourcing to competition
3.6.3 Chinese Strategy to Acquire Know-how and Technology

The first aspect of the strategy has been implicitly mentioned in the previous sections. It consists in encouraging outsourcing by giving competitive advantages to buyers. As soon as a company outsources in China, attracted not only by low costs but also by an incentive tax policy, its competitiveness increases, which forces its competitors to do the same in order to remain competitive. At this point, Chinese companies increase the pressure for more technology and more knowledge as payment for allowing outsourcing.

The second aspect of the strategy is to offer important advantages to Chinese people educated abroad (in the US, Europe, Taiwan, etc.) to encourage them to return to their homeland. The move started at the beginning of the 1980s. Numerous highly educated people accepted the deal since they felt that it was possible to build fruitful businesses. This support is of the utmost importance for making the most of the savoir-faire and technologies obtained from foreign companies as payment for the authorization to outsource.

The third aspect of the strategy is to focus investments on strategic industries. The objective is obviously to take advantage of the benefits resulting from outsourcing to improve and develop advanced technologies. Some Chinese economists claim that the objective is to boost national R&D spending to 2.5% of GDP (gross domestic product) by 2010. The number of technically trained persons could fall slightly short of what is needed to take full advantage of this financing: another reason to attract Chinese people who graduated abroad.

The fourth and last aspect of the strategy is to develop new products and services and commercialize them on the world market. This strategy is clear and will certainly be efficient. It can be summarized as follows: use the knowledge and technologies gained from foreign companies as the starting point of Chinese development, provide resources (money and skilled scientists and engineers) to support research and development in order to improve these technologies, and finally enter the world market taking advantage of low labor costs. Already, the Chinese automotive industry, which took root when Volkswagen and GM outsourced to China, entered the market in 2007 and plans to gain an important market share by 2010.
3.7 Offshore Outsourcing: a Harmful Strategy?

3.7.1 Introductory Remarks

In the previous section, we made some remarks about the consequences of outsourcing in China. In this section, we provide general comments about offshore outsourcing and examine whether or not we should follow the "unique thought" that claims that outsourcing is the salvation of the manufacturing sector of developed countries.

A number of economists claim that welfare effects can already be noticed as consequences of offshore outsourcing, and that developed countries cannot suffer long-term harm from innovation in developing countries in a world of free trade. They assert that outsourcing is a way to gain new markets and might open the door to new technologies. They point out the reduction of production costs and several short-term advantages such as close cooperation with other technical structures. Unfortunately, they do not specify who takes advantage of the new social welfare. The reality is that jobs migrate from developed countries to emerging countries day after day, resulting in increasing unemployment in the US and Europe. Another worrying aspect is that only some of the employees whose jobs were outsourced found new jobs, and only some of these new jobs offer the same wages. If one wishes to determine the desirability of offshore outsourcing, one must balance the costs and gains that the countries outsourcing offshore can expect. We also have to keep in mind that the supplementary benefits resulting from outsourcing improve the shareholders' welfare more than the employees' salaries. In fact, there is a strong correlation between the level of outsourcing and the number of layoffs in the developed countries concerned. Finally, the question to answer is simple: how harmful is offshore outsourcing to developed countries?
3.7.2 Risk of Introducing Innovations Abroad

When a company enlists a foreign company that does not possess the required skills to perform some of its functions, the outsourcer must furnish this know-how to its new partner. Even if the foreign company possesses the required knowledge at the beginning of the collaboration, technology evolves and knowledge transfer is still needed, with the help of the buyer, which is another way to introduce innovation abroad. Technology is not the only transfer from buyer to vendor: management as a whole, as well as research and production, are transferred. The only things that are slow to transfer are salary negotiation and benefits for employees. Thus, as already mentioned, offshore outsourcing often provides a large spectrum of resources for free to future competitors. Deardorff (2006) points out that offshore outsourcing, even if it does not induce innovation abroad, may be harmful to particular groups, to countries or even to the world.
3.7.3 How Could Offshore Outsourcing Be Harmful to Some Groups?

As mentioned before, offshore outsourcing leads to unemployment and to the reduction of wages. Displaced employees may indeed adapt to the changes in the medium term by moving to another city, changing their job, acquiring new skills, etc. But the phenomenon is developing so intensively that the adaptation of employees is already difficult and will become even more so.
3.7.4 How Could Offshore Outsourcing Be Harmful to a Country?

Offshore outsourcing can alter world prices and, as a consequence, hurt major economic sectors in developed countries. Many examples can be given; we restrict ourselves to two emblematic ones. In Europe, the automotive industry outsourced to East European countries the production of low-cost cars initially destined for developing countries. The cost of these cars is so low that they attracted customers in France, Germany, Spain, etc. The consequence was straightforward: sales of the original cars decreased, causing layoffs in several car companies in those countries. The irony of the situation is that these low-cost cars were launched by Renault, a French car builder that has suffered from competition from its own imports. The same phenomenon will soon happen in the airplane industry, since the European company EADS outsourced part of its production, with the related technology, to China; the argument was that China ordered airplanes from this company.

It should be noticed that altering world prices may also hurt countries that are neither buyers nor vendors, depending on their market position. For example, low-cost cars introduce strong competition, resulting in lower automobile prices. This, in turn, leads to lower profits for car builders and their suppliers and, as a consequence, to a decrease in research efforts and innovation, opening the door to manufacturing sector shrinkage. It is worth asking the following question: is it better not to outsource sensitive technology offshore and risk losing a market, or to outsource, winning a market in the short term but in the end losing all markets, including the home market?
3.7.5 How Could Offshore Outsourcing Be Harmful to the World?

The simplest way to answer this question is to quote Deardorff (2006), who wrote:

"The first answer to this (question) should be that offshore outsourcing cannot harm the world as a whole if there are no distortions anywhere in the world economy. We know from elementary general equilibrium theory that, in the absence of distortions (externalities, taxes, and the like) perfectly competitive markets maximize the value of world output as constrained by world factor endowments and technology. The introduction of the ability to do offshore outsourcing only adds to what it is technologically possible to do in the world economy, and it thus relaxes that constraint. It can therefore only increase or leave unchanged the value of world output. But of course the world does have distortions, lots of them. And in that case, as with free trade, examples abound in which the new opportunities make all distortions worse. In another place, Deardorff (2005), I have again suggested a couple of examples of how this might occur."

Again, it seems possible that offshore outsourcing to developing countries, which introduces important price distortions, may hurt the world economy.
3.8 Conclusion

Outsourcing remains a pivotal issue that divides managers, economists, workers, customers and the world at large. The main benefits expected from outsourcing have been listed in Section 3.1. To summarize, these benefits result from lower labor costs (especially considering undervalued local currencies), new intellectual capital and new competencies. Among the benefits, we can also mention access to world-class capabilities, company flexibility and the investment conditions offered to vendors and buyers.

Other potential advantages are often neglected by buyers. For instance, an export-oriented tax policy designed by the Chinese government favors foreign investments and purchases in China. Under this policy, a product that is exported, as well as the raw materials that are imported to make it, is exempted from VAT (value added tax). This is a powerful incentive to outsource in China. Thus, tax laws may deeply influence the cost of outsourcing in a positive way when outsourcing is done in developing countries, and they must be analyzed in detail by potential buyers.

Criticisms of outsourcing mainly focus on the quality of services and products, the consequences for the domestic workforce, loss of skills, security issues when giving sensitive information to vendors located abroad, and fraud risks, to quote just a few. But we can also point out that offshore outsourcing that introduces distortions in the world economy may lead to severe consequences in the near future. The explanations provided in Section 3.7 show that, if vendors belong to emerging countries, and especially to emerging superpowers, the consequences might be devastating for buyers and their countries. Note also that examples of companies that decided to stop outsourcing and produce in-house have become quite frequent. These changes of strategy often result from quality problems, transportation costs, difficulties in managing the outsourced activities, or simply prudence.
References

Deardorff AV (2005) Gains from trade and fragmentation. Research Seminar in International Economics, Discussion Paper No. 543. University of Michigan, July 21
Deardorff AV (2006) Comments on Mankiw and Swagel, "The politics and economics of offshore outsourcing". J. Monet. Econ. 53:1057–1061
Quinn J, Hilmer F (1994) Strategic outsourcing. Sloan Manag. Rev. Summer:43–55
O'Brien TJ (1996) Global Financial Management. Wiley, New York, NY
Further Reading Bhutta KS, Huq F (2002) Supplier selection problem: a comparison of the total cost of ownership and analytical hierarchy process. Suppl. Ch. Manag. 7(3/4):126–135 Buehler S, Haucap J (2004) Strategic outsourcing revisited. J. Econ. Beh. Organ. 61:325–338 Byrne PJ, Byrne J, O’Riordan A (2004) Outsourcing: a review of current trends and supporting tools. Log. Solut. 4:26–31 Chaudhry SS, Forst FG, Zydiak JL (1993) Supplier selection with price breaks. Eur. J. Oper. Res. 70:52–66 Chen Y (2005) Vertical disintegration. J. Econ. Manag. Strat. 14:209–229 Chen Y, Ishikawa J, Yu Z (2004) Trade liberalization and strategic outsourcing. J. Int. Econ. 12:551–570 Choi TY, Hartley JL (1996) An exploration of supplier selection practices across the supply chain. J. Oper. Manag. 14:333–343 De Boer L, Labro E, Marlacchi P (2001) A review of methods supporting supplier selection. Eur. J. Purch. Suppl. Manag. 7:75–89
108
3 Outsourcing
Deardorff AV (2001) Fragmentation in simple trade models. N. Amer. J. Econ. Fin. 12:121–137 Degraeve Z, Roodhooft F (1999) Effectively selecting suppliers using total cost of owner ship. J. Suppl. Ch. Manag. 35(1):5–10 Deavers KL (1997) Outsourcing: a corporate competitiveness strategy, not a search for low wages. J. Lab. Res. 18(4):503–519 Ellram LM (1990) The supplier selection decision in strategic partnership. J. Purch. Mater. Manag. 26(4):8–14 Erber G, Aida S-A (2005) Offshore outsourcing - A global shift in the present IT industry. Intereconomics 40(2):100–112 Feenstra R, Hanson G (1999) The impact of outsourcing and high-technology capital on wages: Estimates for the United States 1979 – 1990. Quart. J. Econ. 114:907–940 Gertler J (2009) A macro-economic analysis of the effect of offshoring and rehiring on the US economy. Ann. Rev. Contr. 33(1):94–111 Ghodsypour SH, O’Brien C (1998) A decision support system for supplier selection using an integrated analytic hierarchy process and linear programming. Int. J. Prod. Econ. 56–57:199– 212 Grossman S, Helpman E (2002) Integration versus outsourcing in industry equilibrium. Quart. J. Econ. 117:58–119 Karpak K, Kasuganti RR (1999) An application of visual interactive goal programming: a case in supplier selection decisions. J. Multicrit. Decis. Mak. 8:93–105 Kasilingam RG, Lee CP (1996) Selection of vendors – a mixed integer programming approach. Comput. Ind. Eng. 31:347–350 Lee M-S, Lee Y-H, Jeaong C-S (2003) A high-quality-supplier selection model for supply chain management and ISO 9001 system. Prod. Plann. Contr. 14(3):225–232 McDonald SM, Jacobs TJ (2005) Brand name ‘India’: The rise of Outsourcing. Int. J. Manag. Pract. 1(2):152–174 Min H (1994) International supplier selection: a multi-attribute utility approach. Int. J. Phys. Distr. Log. Manag. 24(5):24–33 Petroni A, Braglia M (2000) Vendor selection using principal component analysis. J. Suppl. Ch. Manag.: Glob. Rev. Purch. Suppl. 
36(2):63–69 Samuelson PA (2004) Where Ricardo and Mill rebut and confirm arguments of mainstream economists supporting globalization. J. Econ. Persp. 18(3):135–146 Shy O, Stenbacka R (2003) Strategic outsourcing. J. Econ. Beh. Organ. 50:203–224 Talluri S, Narasimhan R (2004) A methodology for strategic sourcing. Eur. J. Oper. Res. 154:236–250 Weber CA, Current JR, Benton WC (1994) Vendor selection criteria and methods. Eur. J. Oper. Res. 50:2–18
Chapter 4
Inventory Management in Supply Chains
Abstract The major industrial problems and various effective approaches of inventory control in supply chains are presented and analyzed. Considerable attention is devoted to the bullwhip effect and to methods for reducing its negative consequences along the supply chain. Many other topics are treated, including stochastic inventory control, echelon stock policies and their applications. Some original lot-sizing methods for single- and multiproduct cases are also suggested. Of special interest is the presentation of novel pull control strategies such as base stock policies, CONWIP, and generalized and extended Kanban. Numerous illustrative examples, algorithms and practical recommendations are provided.
4.1 Introduction

In production systems, holding items in stock does not add value. On the contrary, storage requires handling that may damage items, consumes costly resources and immobilizes items that consequently cannot be sold. Thus storage increases the production cost. Nevertheless, inventories are necessary to deal with unexpected events like machine breakdowns, unforeseen demands, random operation times, quality problems, etc.
For decades, academics have studied inventory management in order to reduce and even remove inventories from production systems, keeping in mind that products should flow unconstrained to minimize production cost, increase reactivity and improve quality. As a result, a large number of mathematical models are available in the literature. The main ingredients of these mathematical models are:
1. Inventory Level. Depending on the model, this level is assumed to be known precisely at any time or not; it may be allowed to become negative (backlog) or not; it may be upper bounded or not; and it may evolve continuously or not.
2. Demand. The demand may be stationary or dynamic, deterministic or random, continuous or discrete.
3. Number of Stages. For example, multiple-stage models are used for assembly systems, linear systems and supply chains, to quote just a few.
4. Production Capacity. This capacity is limited (capacitated system) or not. In some cases, particularly when the system belongs simultaneously to several supply chains, the production capacity may change from one period to the next.
5. Criterion. The criterion is a function that must be minimized or maximized, depending on the kind of problem at hand. In inventory management, it is usually the overall cost that should be minimized. Sometimes several criteria are involved. In these cases, different methods are used to reduce the set of criteria to a single criterion or to find good tradeoffs among the criteria.
6. Costs. Numerous costs can be taken into account in the model. Usually, the following costs are considered:
Inventory Cost (or Holding Cost) is incurred for keeping items in stock. This includes costs:
– resulting from the fact that inventories are capital assets;
– related to the risk of obsolescence;
– insurance fees;
– related to storage means (pallets, containers, etc.).
The sum of these costs is usually proportional to the inventory level. Other costs included within the inventory cost are, for instance, those:
– related to storage facilities (maintenance, depreciation, insurance, etc.);
– related to handling activities (salaries, handling facilities, etc.).
These costs are fixed and may be significant. The inventory cost is usually defined per unit of time or per elementary period. It is usually an increasing and concave function of the inventory level.
Setup Cost (or Ordering Cost, or Production Cost) is the cost incurred each time items are ordered. This is made up of:
– administrative costs (salaries in the department in charge of ordering and taking deliveries, costs for writing orders, managing information, etc.);
– transportation costs;
– costs incurred in the case of promotions;
– costs incurred for changing production (setup costs).
Usually, the setup cost is composed of a fixed cost independent of the number of items ordered and a variable cost that depends on the size of the order. The variable cost may be proportional to the number of items ordered or be an increasing and concave function of this number.
Backlogging Cost (or Shortage Cost) is incurred when an item is ordered and cannot be delivered due to a shortage. This cost may vary a lot, depending on whether the customer agrees to wait until the order can be filled or chooses another supplier. Furthermore, extra costs arise in the case of backlog, due to additional administrative work, price discounts for late delivery, handling and so on. Backlogging costs are usually difficult to estimate.
7. Items. The inventory may concern a single item or multiple items with correlated inventory decisions.
8. Constraints. They define the set of feasible solutions and thus the possible decisions at each elementary period or decision time.
9. Horizon of the Problem. It specifies the period of time over which the decisions should be optimized. This parameter is expressed as a number of elementary periods when decisions are made at the beginning of each elementary period (periodic review). The horizon may be infinite; in this case, a discount factor belonging to [ 0, 1 ] is introduced to reduce the importance of demands according to the "distance" between the current time and the time the demand appears. An infinite horizon is rarely of interest in real-life situations since forecasting demand to infinity does not make sense. This is why we ignore this aspect in this chapter.
In inventory management, randomness is modeled using well-known distributions. For example:
1. The number of customers that arrive in the system during a time t follows a Poisson distribution of parameter λ. Thus, the probability that n customers arrive during a period of length t is:

Pr( n ) = ( ( λt )^n / n! ) e^(−λt)  for n = 0, 1, …
where λ is a positive parameter.
2. It is often assumed that the demand size d of a customer has a logarithmic distribution:

Pr( D = d ) = −α^d / ( d ln( 1 − α ) )  for d = 1, 2, … and α ∈ ( 0, 1 )

or a geometric distribution:

Pr( D = d ) = ( 1 − α ) α^(d−1)  for d = 1, 2, … and α ∈ ( 0, 1 )
These two distributions concern discrete demands.
3. Continuous demands are often assumed to follow a Gaussian distribution of density:

f( x ) = ( 1 / ( σ √(2π) ) ) e^( −( x − m )² / ( 2σ² ) )  for −∞ < x < +∞
where m and σ are positive parameters, or a gamma distribution of density:

f( x ) = ( λ ( λx )^(r−1) / Γ( r ) ) e^(−λx)  for x ≥ 0
where λ and r are positive parameters. The gamma distribution is used when the Gaussian distribution leads to a high probability of negative demand, which is the case when σ / m is close to 1.
The assumptions made in inventory models rarely conform to real inventories. Several reasons support this claim:
• Production systems evolve in chaotic environments (unexpected events, continuous influence of the sales, marketing, delivery and procurement departments, which pursue conflicting short-term criteria). As a result, the values of the model variables, and even the architecture of the inventory system, change over time in an unpredictable manner, making mathematical models inadequate unless they are simple enough to be easily adapted to changes in the environment.
• Even when historical data are available, finding an adequate and tractable distribution that fits these data is usually impossible. Thus, assuming that demand sizes and arrivals follow specific distributions is simplistic.
• In practice, qualitative criteria are of great importance to companies. Unfortunately, they cannot be precisely converted into quantitative criteria and thus cannot be handled properly in a mathematical model.
• Inventory decisions often concern items correlated with others, and this correlation is often shaped by external factors like transportation constraints, priority among customers, etc. These external factors, which evolve dynamically, often make mathematical inventory models, which are static, inadequate.
Thus, the inventory models proposed in the literature are ever more sophisticated and/or tailored to ever more specific situations. However, this trend seems disconnected from practice. As a consequence, there is no evidence that inventory management based on mathematical models is efficient in practice. The question has been discussed by (Meredith, 2001), (Wagner, 2002) and (Zanakis et al., 1980), among others. Moreover, in (Gavirneni and Tiwari, 2007), the authors mention that they "could not find any evidence that managers based their decision on specific inventory models" and, in (Corbett and Van Wassenhove, 1993), the authors note that the gap is also obvious in operations research. Nevertheless, according to (Silver, 2004), researchers do treat problems of real interest to managers, and some robust and simple models may provide helpful guidelines when designing or redesigning an inventory system. In fact, it appears that improvements in inventory management have mostly been achieved thanks to technological and operational advances (automation, computers, RFID, just-in-time, supply chains) and rarely through mathematical model analysis.
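Even if closed-form analysis is rarely practical, the distributions introduced above remain useful for the simulation studies favored later in this chapter. The sketch below shows one way to sample them in Python; the function names are ours, not from the text. Poisson counts are drawn with Knuth's product-of-uniforms method, and the two discrete demand-size distributions are drawn by inverting their cumulative distributions.

```python
import math
import random

def poisson_arrivals(lam, t, rng=random):
    """Number of arrivals in an interval of length t for a Poisson
    process of rate lam (Knuth's product-of-uniforms method)."""
    limit = math.exp(-lam * t)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return n

def logarithmic_demand(alpha, rng=random):
    """Demand size with Pr(D=d) = -alpha**d / (d * ln(1-alpha)), d >= 1."""
    u, d = rng.random(), 1
    cum = -alpha / math.log(1.0 - alpha)
    while u > cum:
        d += 1
        cum += -(alpha ** d) / (d * math.log(1.0 - alpha))
    return d

def geometric_demand(alpha, rng=random):
    """Demand size with Pr(D=d) = (1-alpha) * alpha**(d-1), d >= 1."""
    u, d = rng.random(), 1
    cum = 1.0 - alpha
    while u > cum:
        d += 1
        cum += (1.0 - alpha) * alpha ** (d - 1)
    return d
```

Each sampler returns a positive integer (or a non-negative count for the Poisson case); for instance, the geometric sampler returns 1 with probability 1 − α.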
The next section (Section 4.2) deals with inventories in supply chains. The goal is to point out the characteristics of these inventories and to derive the type of approach that should be used in practice. Section 4.3 is devoted to the most popular inventory models, but we avoid sophisticated developments based on unrealistic assumptions. Particular attention is paid to robust models that can be used at the design level. Section 4.4 develops the main aspects of echelon stock policies, while Section 4.5 explores lot-sizing models. Section 4.6 is devoted to Kanban, CONWIP, generalized and extended Kanban. Conclusions are drawn in Section 4.7.
4.2 Inventories in Supply Chains

The basic question we are trying to answer is the following: what are the specific characteristics of an inventory that belongs to a supply chain?
4.2.1 Definition of a Supply Chain

Several definitions exist in the literature. We adopt the following one, which can be found in (Govil and Proth, 2002). A supply chain is a global network of organizations that cooperate to improve the flows of material and information between suppliers and customers at the lowest cost and with the highest speed. The objective of a supply chain is customer satisfaction.
Often, supply chains emerge from individual companies that form opportunistic alliances to compete for contracts that they could not obtain individually. The organizations involved in the supply chain perform the set of activities required to produce items. Some of these organizations may compete with each other. Although they may belong to different companies, the whole system follows a common strategy. The improvement of material flows in a supply chain is the aspect of interest in this chapter.
Figure 4.1 represents a supply chain as a network. In this figure, each node represents either a production unit (supplier, manufacturer), a transportation unit (distributor) or an inventory system. We assume that an inventory can be implemented in each production unit. Nodes 4, 6 and 10 represent assembly units, nodes 2, 5, 7 and 9 are transformation units, and nodes 1, 3 and 4 are units with divergent flows, such as a disassembly process or an inventory system (warehouse, retailer, etc.).
Figure 4.1 Representation of a supply chain (a network of ten numbered nodes)
In a supply chain, the unit at the lowest level (node 10 in Figure 4.1) is supplied by its immediate upstream units (nodes 6, 8 and 9) which, in turn, are supplied by their own immediate upstream units, and so on. At the same time, information flows in the opposite direction. Raw material or components are ordered by units 1 and 2, and these products may be delivered more or less randomly. As stated in the introduction, we make no assumption about the probability distributions that govern this randomness. The time spent by a semi-finished product in a production node is known but may be random; again, we make no particular assumption concerning operation-time randomness. We do not assign times to inventory systems, except perhaps for handling activities: the time an item spends in inventory depends on the management of the system. This time is an index that characterizes the fluidity of the material flow.
4.2.2 Inventory Problems in a Supply Chain

In (Chung and Leung, 2005), the authors stressed collaboration across the supply chain, especially for multiechelon distribution systems. Other studies, for example (Barbarosoglu, 2000) and (Zimmer, 2002), explored the two-echelon model of buyer–vendor systems. The idea of joint optimization for supplier and buyer was initiated in (Goyal, 1976), and the three-echelon model that includes the manufacturer, distribution centre and retailer was developed in (Kreng and Chen, 2007).
We will investigate the advantage of sharing information among the different nodes of the supply chain and generalize the concept of collaboration between the units (nodes) of the supply chain. We will also present the different decision-making approaches that are available. The goal is not to provide theoretical results derived from specific mathematical models since, as mentioned before, we know that such an approach
is questionable from a practical point of view. We will simply suggest some approaches to improve global inventory management.
The bullwhip effect (also called the whiplash or Forrester effect), which amplifies inventory variations as one moves upstream in the supply chain, should be reduced as much as possible: doing so is a necessary (but not sufficient) condition for adequate inventory management.
In this chapter, our objective is to present some models and simple strategies, illustrated by simulation. More precisely, we will focus on simulation models in which the way randomness is modeled is unconstrained. These simulation models make it possible to introduce various management strategies and evaluate them numerically. In any case, we know that even with probability distributions that "behave well", analytical results cannot be obtained for a general supply chain with more than two or three levels.
4.2.3 Bullwhip Effect

4.2.3.1 Introductory Example
Let us consider the three-level system given in Figure 4.2. The demand d is random and discrete. A demand appears at the beginning of each elementary period and takes a value k ∈ { 0, 1, …, K } with probability 1 / ( K + 1 ). Of course, any other probability distribution could be used to generate the demands. An order sent by the retailer to the wholesaler becomes effective after n1 elementary periods; in other words, such an order requires n1 elementary periods to arrive in the retailer's inventory. Similarly, an order sent by the wholesaler to the factory becomes effective after n2 elementary periods.
Figure 4.2 A three-level system (factory → wholesaler → retailer, with orders u and v, lead times n2 and n1, inventory levels yt and xt, and final demand d)
xt denotes the inventory level of the retailer and yt the inventory level of the wholesaler at the end of elementary period t. The question is: knowing that the demand at the beginning of elementary period t is d, what order v should the retailer send to the wholesaler, and what order u should the wholesaler send to the factory, to "optimize" the system? Note that no optimization criterion was specified in this question.
We propose two simulations over L elementary periods to provide some insight into this question. Simulation 1 is based on an "installation stock policy", see (Deuermeyer and Schwarz, 1981) and (Moinzadeh and Lee, 1986) among others. In this simulation, only local stock information is used to make a decision; in other words, inventories are managed in isolation. In Simulation 2, it is assumed that the wholesaler is informed about the demand made to the retailer and knows the level of the retailer's inventory in real time. When stock policies use inventory information from all the downstream levels, we call them "echelon stock policies".
In both simulations:
• n1 = n2 = 10.
• K = 40.
• L = 100. In other words, the simulations run over 100 elementary periods.
• x0, the initial retailer's inventory level, is equal to ⌊ K / 2 + 0.5 ⌋, where ⌊ a ⌋ represents the largest integer that is less than or equal to a.
• y0, the initial wholesaler's inventory level, is equal to ⌊ K / 2 + 0.5 ⌋.
• It is assumed that, initially, ⌊ K / 2 + 0.5 ⌋ units are in progress in the system for each elementary period of the past (equilibrium situation).
• Mx is the inventory threshold for the retailer (My for the wholesaler). When an inventory level exceeds the corresponding threshold, the replenishment is penalized to reduce the inventory level. In the two simulations shown hereafter, Mx = My = 40.
• It is assumed that the items provided by the factory are immediately available, whatever the quantity required.
Simulation 1

The state equation related to the retailer's inventory is:

xt = xt−1 + vt−n1 − dt  for t = 0, 1, 2, …   (4.1)

where:

vt = dt  if 0 ≤ xt ≤ Mx or xt−1 ≤ xt < 0 or xt−1 ≥ xt > Mx
vt = dt − xt  if xt < 0 ≤ xt−1
vt = dt + xt−1 − xt  if xt ≤ xt−1 < 0
vt = dt − Min( dt, xt − Mx )  if xt−1 ≤ Mx < xt
vt = dt − Min( dt, xt − xt−1 )  if Mx < xt−1 < xt   (4.2)
As shown in (4.2), the control vt, which should be equal to dt, is increased when the inventory is less than 0 and decreased when the inventory is greater than the threshold. The state equation related to the wholesaler's inventory is:

yt = yt−1 + ut−n2 − vt  for t = 0, 1, 2, …   (4.3)

where:

ut = vt  if 0 ≤ yt ≤ My or yt−1 ≤ yt < 0 or yt−1 ≥ yt > My
ut = vt − yt  if yt < 0 ≤ yt−1
ut = vt + yt−1 − yt  if yt ≤ yt−1 < 0
ut = vt − Min( vt, yt − My )  if yt−1 ≤ My < yt
ut = vt − Min( vt, yt − yt−1 )  if My < yt−1 < yt   (4.4)
Basically, the order ut sent by the wholesaler to the factory should be equal to the quantity vt required by the retailer from the wholesaler. But, as we did when computing vt, we correct it according to the level of the wholesaler's inventory, see Relations (4.4). As can be seen in Figure 4.3, the inventories exhibit large-amplitude fluctuations, and the inventory-level variability increases as one moves up from retailer to wholesaler: this is the bullwhip effect, which will be properly defined in the following section.
Figure 4.3 Simulation 1 (retailer and wholesaler inventory levels over 100 periods)
Simulation 2

In this simulation, we assume that the wholesaler knows the inventory level of the retailer at any time. The retailer's policy remains the same as before (in other words, (4.1) and (4.2) still hold). The wholesaler's state equation (4.3) also holds, but the control vt is no longer corrected according to the wholesaler's inventory level but according to the retailer's inventory level, as shown by Relations (4.5):

ut = vt  if 0 ≤ xt ≤ Mx or xt−1 ≤ xt < 0 or xt−1 ≥ xt > Mx
ut = vt − xt  if xt < 0 ≤ xt−1
ut = vt + xt−1 − xt  if xt ≤ xt−1 < 0
ut = vt − Min( dt, xt − Mx )  if xt−1 ≤ Mx < xt
ut = vt − Min( dt, xt − xt−1 )  if Mx < xt−1 < xt   (4.5)

The result is shown in Figure 4.4. As the reader can see, the bullwhip effect is significantly reduced. This simple example:
• illustrates the bullwhip effect;
• shows that the bullwhip effect can be reduced when the partners of the supply chain share information.
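The two simulations are easy to reproduce. The Python sketch below implements state equations (4.1)–(4.5) with the parameters listed earlier (uniform demand on { 0, …, K }, ten-period lags, Mx = My = 40). The function names and the random seed are our own choices, so any particular run will differ in detail from the runs reported in Figures 4.3 and 4.4.

```python
import random
import statistics

def corrected_order(base, cap, s_prev, s_now, M):
    """Order correction of Relations (4.2), (4.4) and (4.5): 'base' is the
    quantity passed through, 'cap' the largest allowed downward correction,
    and (s_prev, s_now) the monitored inventory levels."""
    if 0 <= s_now <= M or s_prev <= s_now < 0 or s_prev >= s_now > M:
        return base
    if s_now < 0 <= s_prev:
        return base - s_now
    if s_now <= s_prev < 0:
        return base + s_prev - s_now
    if s_prev <= M < s_now:
        return base - min(cap, s_now - M)
    return base - min(cap, s_now - s_prev)  # case M < s_prev < s_now

def simulate(echelon, n1=10, n2=10, K=40, L=100, M=40, seed=42):
    """Return the retailer and wholesaler inventory trajectories (xs, ys)."""
    rng = random.Random(seed)
    x = y = int(K / 2 + 0.5)          # x0 = y0 = 20
    v_pipe = [x] * n1                 # retailer orders in transit (equilibrium)
    u_pipe = [y] * n2                 # wholesaler orders in transit
    xs, ys = [], []
    for _ in range(L):
        d = rng.randint(0, K)         # uniform demand on {0, ..., K}
        x_prev, y_prev = x, y
        x = x + v_pipe.pop(0) - d     # retailer state equation (4.1)
        v = corrected_order(d, d, x_prev, x, M)       # Relations (4.2)
        y = y + u_pipe.pop(0) - v     # wholesaler state equation (4.3)
        if echelon:                   # Simulation 2: Relations (4.5)
            u = corrected_order(v, d, x_prev, x, M)
        else:                         # Simulation 1: Relations (4.4)
            u = corrected_order(v, v, y_prev, y, M)
        v_pipe.append(v)
        u_pipe.append(u)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs1, ys1 = simulate(echelon=False)
xs2, ys2 = simulate(echelon=True)
print("installation stock: var(x) =", round(statistics.pvariance(xs1)),
      " var(y) =", round(statistics.pvariance(ys1)))
print("echelon stock:      var(x) =", round(statistics.pvariance(xs2)),
      " var(y) =", round(statistics.pvariance(ys2)))
```

Comparing the printed variances for a few seeds shows how strongly the wholesaler's inventory fluctuates under each policy, which is the quantitative content of Figures 4.3 and 4.4.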
The bullwhip effect penalizes production. We will now systematically explore its causes and ways to reduce this phenomenon.
Figure 4.4 Simulation 2 (retailer and wholesaler inventory levels over 100 periods)
4.2.3.2 Bullwhip Effect: Definition, Causes, and Consequences
The bullwhip effect was first described by (Forrester, 1961) who wrote: (It has) been observed that a distribution system of cascaded inventories and ordering procedures seems to amplify small disturbances that occur at the retail level.
Later, in (Lee et al., 2000), the bullwhip effect was defined as: The phenomenon of demand variability amplification along a supply chain, from the retailers to distributors, manufacturer and the manufacturer’s suppliers, and so on.
In other words, when a supply chain is not managed optimally, demand variability (i.e., demand variance) increases as one moves upstream in the supply chain. As mentioned in numerous research works such as, for example, (Sterman, 1989), (Krane and Brawn, 1991) and (Blanchard, 1983), the distortion of demand when moving upstream in a supply chain shows three typical behaviors: oscillation, amplification and phase-lag. “Oscillation” refers to the fact that demand is not stable while “amplification” means that demand variability increases when one moves upstream in the supply chain. “Phase-lag” refers to the fact that the order rate tends to peak later as one moves upstream in the supply chain. Let us now mention some factors that contribute to the bullwhip effect.
Overreaction to Backlogs

This situation happens when an unexpectedly large demand arises, causing a backlog. In turn, this backlog may induce several consecutive large orders if the demand is not considered accidental, which may lead to an overstock. An overstock may then drive the inventory manager to excessively reduce the next orders, as a result of which a new backlog arises, and so on.
Forecast Errors

A forecast error consists of under- or overestimating a demand, which may lead to a backlog or an overstock. Forecast errors may play the same role as an overreacting inventory manager and thus trigger the same bullwhip mechanism.
Order Batching

The bullwhip effect may originate from significant fixed ordering costs. In this case, the "optimal" solution consists in aggregating several consecutive demands into a single order (lot-sizing) to spread the fixed cost over more items. As a consequence, a demand is followed by several periods without demand, which triggers the bullwhip mechanism.
Price Fluctuations

Price fluctuations encourage buying products at low prices and thus overstocking in anticipation of price changes. This triggers the bullwhip effect as order batching does, but usually with greater oscillation, variability and phase-lag.
Product Promotions

Promoting a product temporarily leads to a large increase in demand, followed by a period in which demand is lower than usual. The effects of product promotions are similar to those of price fluctuations, except that the period at which the disturbance takes place is chosen by the company rather than decided by the market.
Inventory Rationing

When a provider serves several retailers, the provider's inventory may run short. This is the case when, for some reason, the demands of the retailers are all high at the same time. Some of the retailers will then run out of stock, which will encourage them to inflate their next orders; if these orders are not optimally decided, they will trigger the bullwhip effect.
Lead Times

Material and information delays in a supply chain managed using common methods generate bullwhip effects, especially when lead times are random.
Free Return Policy

A free return policy allows customers to return a purchase within a given period of time after the sale. This disturbs the system and triggers the bullwhip effect.
Generous Order Cancellation

As with a free return policy, generous order cancellation introduces disturbances into the demands that, if large and repetitive, activate a bullwhip effect.
Ultimately, the bullwhip effect results from a discrepancy between the information available at the time a decision is made and the actual situation, a discrepancy due to one of the reasons listed above.
The consequences of the bullwhip effect are straightforward. First, the phenomenon requires provisions for higher inventories and higher capacities, see (Kilger, 2008), which are expensive. Secondly, the bullwhip effect causes frequent backlogs, which also lead to additional costs (more manpower and administrative work, penalties, high-speed transportation, etc.). Furthermore, since the bullwhip effect requires more material handling, it increases the risk of low quality and necessitates more employees. All of this may lead to huge additional costs. Finally, the image of the supply chain may suffer from delivery delays and poor quality, a cost that is real but difficult to evaluate.
4.2.3.3 How to Reduce the Bullwhip Effect?
(Lee et al., 1997) showed that the bullwhip effect is absent if the following conditions hold:
• Demand history (past demands) is not used for forecasting. In other words, there is no correlation between demands arising at two different periods.
• Inventory resupply is not limited in quantity. In addition, the lead time, that is to say the number of periods required to make an order effective, is constant. If this condition does not hold, inventory managers may engage in shortage gaming, which causes a bullwhip effect as mentioned before.
• There is no fixed order cost. As explained earlier, a fixed order cost may lead to batch ordering, which sets off the bullwhip effect.
• Purchase costs are stationary over time.
If these assumptions hold, the optimal order at the beginning of a period equals the demand of the previous period. Unfortunately, none of the above assumptions is realistic, but they suggest actions that reduce the bullwhip effect. According to the previous remarks, the bullwhip effect can be substantially reduced by applying the following rules:
1. Reduce the number of decision levels as much as possible. In other words, vertical integration reduces the variance of oscillations. Unfortunately, applying this rule is rarely possible in practice, but the designer of a supply chain should keep it in mind.
2. Reduce the amplitude and the frequency of promotions.
3. Launch smaller and more frequent orders. The ordering cost may increase, but inventory and backlogging costs will decrease significantly: thus the objective is to find the best tradeoff.
4. Another remedy is suggested in (Lee et al., 1997): "granting the manufacturer access to the demand data at retail outlet". This rule was illustrated by the example presented in Section 4.2.3.1; note that it requires close collaboration between the manufacturer and the retailer. In (McCullen and Towill, 2001), the authors suggest linking factory plans to real-time customer demand, which generalizes this rule. These approaches are referred to as methods based on information transparency or supply chain visibility. Vendor-managed inventory (VMI) is an example of this type of organization. VMI is a system in which the supplier, not the customer, decides whether to replenish the customer's inventory. In other words, a given level of the echelon stock initiates the replenishment of the inventory of the next lower level.
5. Some authors also suggest that increasing the forecasting horizon may reduce the upstream oscillation variance.
6. Similarly, it has been shown that the bullwhip effect can be reduced by centralizing demand information, which is another way to make information available to the right inventory manager.
7. Reduce as much as possible the manufacturing lead times and constrain randomness by introducing close collaboration with providers and, more generally, with the next upper level of a hierarchical system. 8. Regulate material flows using appropriate control systems (such as Kanban, for instance).
4.3 Stochastic Inventory Problems

In supply chains, demands are random. Thus, only stochastic problems are of interest, except at the design level, where parameters are usually deterministic. We will explore such problems in Section 4.4.
4.3.1 Newsvendor (or Newsboy) Problem [1]

This problem concerns a stock that is replenished only once to meet a random demand. Initially, the stock is empty, and the goal is to define the quantity I to introduce into the stock so as to maximize the expected profit. Let us consider a continuous model. The demand, denoted by D, is random. We assume that D is continuous, with probability density f( • ) and distribution function F( • ). The following parameters are needed:
• c: production cost (or purchasing cost) per item used to replenish the stock;
• p: selling price of one item;
• v: salvage value [2] per item.
We assume that the selling price is greater than the production cost, which, in turn, is greater than the salvage value:

p > c > v > 0   (4.6)
Since the demand is random, we maximize the expected value of the profit Q(I):

E[Q(I)] = E[p min(I, D)] + E[v max(0, I − D)] − c I

where the three terms are, respectively, the expected selling revenue, the expected salvage revenue and the production cost.
1 Initially, this model was introduced to optimize the situation where a newsvendor has to define the number of newspapers to buy in the morning and distribute during the day in order to maximize his/her profit. Newspapers that are not sold by the end of the day are obsolete.
2 A salvage value is the price at which one item can be sold after a given deadline. In the case of time-dated items, v = 0.
This relation can be rewritten as:

E[Q(I)] = p ∫_{x=0}^{I} x f(x) dx + p I ∫_{x=I}^{+∞} f(x) dx + v ∫_{x=0}^{I} (I − x) f(x) dx − c I

where the first two terms form the expected selling revenue and the third term is the expected salvage revenue.
The function E[Q(I)] is concave with regard to I. Thus, to maximize it, we set its derivative equal to 0:

p (1 − F(I)) + v F(I) − c = 0

Finally:

I = F^{−1}( (p − c) / (p − v) )   (4.7)
According to (4.6), 0 < (p − c)/(p − v) < 1, so the quantity defined by (4.7) always exists. If the stock initially contains a quantity I_0 ≥ 0, the result adjusts as follows: if I_0 ≥ F^{−1}( (p − c)/(p − v) ), then I = 0 is optimal; otherwise, I = F^{−1}( (p − c)/(p − v) ) − I_0 is optimal.
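Formula (4.7) is straightforward to check numerically. In the sketch below, the prices and the uniform demand distribution are invented for illustration; the critical-fractile quantity is computed and then compared, by Monte Carlo, with an arbitrary alternative order size.

```python
import random

random.seed(1)
p, c, v = 10.0, 6.0, 2.0     # selling price > production cost > salvage value
Dmax = 100.0                 # demand ~ Uniform(0, Dmax), so F^-1(q) = q * Dmax

# Equation (4.7): I = F^-1((p - c) / (p - v))
I_star = (p - c) / (p - v) * Dmax

def expected_profit(I, n=100_000):
    """Monte Carlo estimate of E[Q(I)]."""
    total = 0.0
    for _ in range(n):
        D = random.uniform(0.0, Dmax)
        total += p * min(I, D) + v * max(0.0, I - D) - c * I
    return total / n

print(I_star)                                          # 50.0
print(expected_profit(I_star), expected_profit(30.0))  # the first is larger
```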
4.3.2 Finite-horizon Model with Stochastic Demand

We consider the situation where demand is stochastic and the inventory level is reviewed periodically (i.e., at the beginning of each elementary period, which could be a day, a week or a month, depending on the type of system). We restrict ourselves to the finite-horizon problem. The horizon T is the number of elementary periods under consideration. Since demand is stochastic, it does not make sense to impose zero stock-out. For each elementary period t ∈ {1, 2, …, T}, we define an inventory cost h_t(x), which represents the cost incurred when the inventory level is x at the beginning of the elementary period, and a production cost (or ordering cost) c_t(v), which is the cost incurred when a quantity v is ordered (and released in production) at the beginning of the elementary period. Note that an inventory level x is a real number that can be negative, positive or equal to zero, while the quantity v released in production (or ordered) is greater than or equal to zero. We assume that the initial inventory (i.e., the inventory at the beginning of the first elementary period) is known. It is denoted by x_0. A random demand appears at the end of each elementary period. The demands are independent and denoted by η_t. We assume that E[η_t] ≤ C < +∞: the expected value of the demand is upper bounded, whatever the elementary period under consideration. In the theoretical presentation proposed in this subsection, it is assumed that there is a one-period delivery (or production) lag. These assumptions will be generalized in the next subsection. The production (or ordering) cost functions are lower semi-continuous3 and there exists a constant c̄_t such that c_t(v) ≥ c̄_t v. The inventory cost function h_t(x) is uniformly continuous and there exists h̄_t such that h_t(x) ≤ h̄_t (1 + |x|). When x < 0, h_t(x) is the backlogging cost; otherwise it is called the inventory cost.
3 A function f : X → R is lower semi-continuous if f^{−1}((a, +∞)) = {x ∈ X | f(x) > a} is an open set for every a ∈ R.
At the end of the horizon we introduce a penalty cost R(x) that is uniformly continuous with R(x) ≤ R̄(1 + |x|). When x > 0, the penalty cost is called the salvage cost. Indeed, costs are positive or nil. The state equation that models the fluctuations in inventory level is:

x_{t+1} = x_t + v_t − η_{t+1}  for t = 0, 1, …, T − 1   (4.9)

The control is the sequence of production (or ordering) decisions:

V = {v_0, v_1, …, v_{T−1}}

Using the control, the objective is to minimize the expected value of the total cost over the horizon T:

J_T(V, x_0) = Σ_{t=0}^{T−1} E[c_t(v_t) + h_t(x_t)] + E[R(x_T)]   (4.10)

It has been proven that the following backward dynamic programming equations lead to the optimal solution:

u_t(x) = h_t(x) + inf_{v ≥ 0} { c_t(v) + E[u_{t+1}(x + v − η_{t+1})] },  t = 0, 1, …, T − 1
u_T(x) = R(x)   (4.11)

and

Min_V J_T(V, x_0) = u_0(x_0)
In practice, it is difficult to apply this result because the number of cases to examine usually increases drastically. The analytical results available in the literature are based on assumptions that often do not fit with most real-life situations. This is why general control policies that depend on parameters have been developed. Usually, only simulation can help define the values of these parameters to reach near-optimal solutions. These policies apply to stochastic demand and/or to stochastic lead time.
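When the demand takes finitely many values, recursion (4.11) can nevertheless be applied literally on a small instance. The sketch below uses invented costs, a three-period horizon and a truncated inventory grid; it illustrates the backward recursion, not a general-purpose solver.

```python
from functools import lru_cache

T = 3
demand_pmf = {0: 0.25, 1: 0.5, 2: 0.25}   # distribution of eta_t (i.i.d. here)
x_min, x_max, v_max = -6, 8, 6            # truncated grids for state and control

def c(v):  return 0.0 if v == 0 else 2.0 + 1.0 * v   # ordering cost c_t(v)
def h(x):  return 1.0 * x if x >= 0 else -3.0 * x    # holding / backlogging cost h_t(x)
def R(x):  return h(x)                                # terminal penalty R(x)

@lru_cache(maxsize=None)
def u(t, x):
    """u_t(x) of Equation (4.11), computed backwards from u_T = R."""
    if t == T:
        return R(x)
    best = float("inf")
    for v in range(v_max + 1):
        exp_future = sum(prob * u(t + 1, max(x_min, min(x_max, x + v - eta)))
                         for eta, prob in demand_pmf.items())
        best = min(best, c(v) + exp_future)
    return h(x) + best

print(u(0, 0))   # minimal expected cost from an empty initial stock
```

Even this toy instance shows why the approach scales badly: the table of states grows with the horizon and the demand support, which is precisely the difficulty mentioned above.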
4.3.3 (R, Q) Policy

Let us first define the inventory position I_p of a monoproduct inventory problem with stochastic lead time and demand:

I_p = S + O − B

where:
• S is the real inventory level (i.e., the stock on hand);
• O is the sum of the outstanding quantities ordered (i.e., ordered quantities that are not yet available);
• B is the sum of the demands that have not been satisfied yet for lack of stock.

The inventory level I_l is defined as:

I_l = S − B
The ( R, Q ) policy applies to both periodic and continuous review situations. Assume that the inventory level is checked at the beginning of each elementary period and that a demand appears at the end of each elementary period. We still denote by T the horizon of the problem. At the beginning of each elementary period (each review point):
• If I p < R , then a quantity n Q is ordered, where n is the smallest integer such that I p + n Q ≥ R .
• If I p ≥ R , then nothing is ordered.
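The review rule above can be coded directly; the numbers in the calls below are arbitrary illustrations.

```python
import math

def rq_order(ip, R, Q):
    """Quantity to order at a review point under the (R, Q) policy:
    n*Q with the smallest n such that ip + n*Q >= R, or nothing."""
    if ip >= R:
        return 0
    n = math.ceil((R - ip) / Q)
    return n * Q

print(rq_order(94, 100, 40))   # 40  (one batch suffices)
print(rq_order(10, 100, 40))   # 120 (three batches are needed)
print(rq_order(120, 100, 40))  # 0   (position already above R)
```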
[Figure 4.5 An example of (R, Q) policy: the inventory position plotted over time against the levels R, R + Q and R + 2Q, with ordered quantities Q and 2Q and stochastic lead times L = 1, 3 and 1]
In the case of continuous review, an order is triggered as soon as I p becomes less than R , and a quantity n Q is ordered. In Figure 4.5, we represent an example of ( R, Q ) policy for a case of continuous demand, periodic review and stochastic lead time expressed in a finite number of elementary periods. Indeed, parameters R and Q remain to be defined. The following example shows how simulation can help.
Example

We consider the following simple problem with a horizon T = 1000 elementary periods. The initial inventory is S_0 = 50 and the problem is steady (the costs do not depend on the elementary periods). The inventory cost is h+ = 1 and the backlogging cost is h− = 2 per unit and per elementary period. When Q is the size of the ordered batch, the ordering (or production) cost is:

c(Q) = 0 if nothing is ordered;
c(Q) = 0.25 Q + 15 if Q ∈ (0, 40];
c(Q) = (5/24) Q + 50/3 if Q > 40 (the two expressions coincide at Q = 40).

The cost c(Q) applies each time a batch Q is ordered (i.e., even if several batches are ordered simultaneously). We use the penalty function:

R(x) = 2 max(0, S*) − min(0, S*)

where S* is the inventory level at the horizon T. The demands are uniformly distributed on [0, 60] and a demand appears at the end of each elementary period. The lead time is stochastic: an order made at the beginning of elementary period t can be delivered either at the end of period t (probability 0.1), at the end of period t + 1 (probability 0.2), at the end of period t + 2 (probability 0.4), at the end of period t + 3 (probability 0.2) or at the end of period t + 4 (probability 0.1).

Optimization Algorithm

Q* being given, we aim at defining R such that applying the (R, Q*) policy leads to a near-optimal solution.
The algorithm A(R, Q) is summarized as follows (see Algorithm 4.1): we start with R_0 defined at random; then, we use a straightforward gradient estimation. Let us denote by G(R | Q*) the average cost per elementary period when applying the (R, Q*) policy. This average cost is obtained by simulation, taking into account the two stochastic aspects introduced above: demand and lead time. We denote by G̃(R | Q*) the result of the simulation. We approximate the gradient using a finite difference:

g̃(R | Q*) = [ G̃(R + δ | Q*) − G̃(R − δ | Q*) ] / (2δ)

where δ is a parameter provided by the user.

Algorithm 4.1.
1. Introduce δ.
2. Initialize R = R_0.
3. Simulate G̃(R + δ | Q*) and G̃(R − δ | Q*).
4. Compute g̃(R | Q*).
5. If g̃(R | Q*) > 0, then set R = R − δ; otherwise set R = R + δ.
6. If the approximated gradient is not the first gradient computed and if the product of this gradient with the previous one is negative, set δ = δ × α. The parameter α is less than 1 and is used to reduce the speed of the evolution of R when one approaches a domain where the gradient is equal to zero.
7. If δ > l, then go to 3; otherwise stop the algorithm. The parameter l is given by the user. The smaller l, the greater the number of iterations, and thus the more precise the result.
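Algorithm 4.1 fits in a few lines. The sketch below replaces the supply-chain simulator by a noisy toy cost function (everything here is invented) merely to show the mechanics of the finite-difference search.

```python
import random

def tune_R(simulate_cost, Q, R0, delta=4.0, alpha=0.5, l=0.5):
    """Algorithm 4.1: one-dimensional search for R given Q, driven by a
    finite-difference gradient estimated from two simulation runs."""
    R, prev_g = R0, None
    while delta > l:
        g = (simulate_cost(R + delta, Q) - simulate_cost(R - delta, Q)) / (2 * delta)
        R = R - delta if g > 0 else R + delta
        if prev_g is not None and g * prev_g < 0:
            delta *= alpha            # slow down near the flat region
        prev_g = g
    return R

# Toy "simulator": a noisy convex cost whose minimum lies near R = 90.
random.seed(2)
cost = lambda R, Q: (R - 90.0) ** 2 / 100.0 + random.uniform(-0.1, 0.1)
print(tune_R(cost, Q=40, R0=60.0))    # settles in the neighbourhood of 90
```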
We applied this algorithm to the example described above for different values of Q*. The results are presented in Table 4.1. The values provided in lines 4 to 6 refer to one elementary period.

Table 4.1 “Optimal” state with regard to Q*

Q*                  20     30     40     50     60     70     80     90     100
R “optimal”         94.4   88.55  89.22  87.76  79.04  66.5   70.36  67.99  51.29
“Optimal” cost      70.66  63.97  61.88  60.44  59.80  61.11  62.94  63.65  65.29
Average inventory   25.05  25.49  27.52  28.80  30.89  27.99  31.20  35.28  27.20
Average backlog     8.04   8.17   7.80   7.69   7.35   10.09  9.66   8.31   13.55
Average order       29.58  29.46  30.07  30.03  29.14  28.99  29.81  29.68  29.23
[Figure 4.6 “Optimal” value of R with regard to Q*: R decreases roughly from 94 at Q* = 20 to 51 at Q* = 100; the dotted line is a second-degree smoothing]
Figure 4.6 represents the evolution of R “optimal” with regard to Q*. The dotted line is the result of smoothing (second degree).

Remarks

1. In this example, only one parameter R has been defined for each value of Q*. Thus, only one gradient had to be approximated at each iteration of Algorithm 4.1. If the costs differ from one elementary period to another, a pair (R_t, Q_t) should be taken into account for each t ∈ {1, 2, …, T}. As a consequence, 2 × T gradients must be approximated over the horizon T.
2. Figure 4.6 provides an “optimal” value of R conditional on Q*, but the optimal pair (R, Q), that is to say the pair that leads to the minimum average cost per elementary period, is (80.14, 54); it leads to a cost of 59.51, slightly better than the best cost (59.80) reached by the conditional approach.
4.3.4 (s, S) Policy

With this policy, an order is triggered as soon as the inventory position I_p becomes less than or equal to s. The ordered quantity is equal to S − I_p.4 Thus, the order is no longer a multiple of a batch quantity.

4 A variant of the (s, S) policy is the S policy (or base-stock policy), in which an order up to S is triggered at each review, even if I_p > s. This policy may apply in the case of periodic review.
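The difference with the (R, Q) rule is visible in a one-line sketch (the numerical values are invented; s = 50 and S = 104 echo the first column of Table 4.2):

```python
def ss_order(ip, s, S):
    """(s, S) policy: when the inventory position drops to s or below,
    order the quantity that brings it back up to S."""
    return S - ip if ip <= s else 0

print(ss_order(45, 50, 104))   # 59 (order up to S, not a multiple of a batch)
print(ss_order(50, 50, 104))   # 54 (triggered exactly at s)
print(ss_order(60, 50, 104))   # 0  (no order above s)
```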
Indeed, the parameters s and S should be defined. We can apply the same approach as for the previous policy. ss* being given, we need to define dd such that applying the (ss*, ss* + dd) policy leads to a near-optimal solution. We start with dd_0 defined at random. The algorithm B(ss, SS) is summarized as follows (see Algorithm 4.2). In this algorithm, we use a straightforward gradient estimation. Let us denote by G(dd | ss*) the average cost per elementary period when applying the (ss*, ss* + dd) policy. This average cost is obtained by simulation, taking into account the stochastic demand and lead time. We denote by G̃(dd | ss*) the result of the simulation. We approximate the gradient using a finite difference:

g̃(dd | ss*) = [ G̃(dd + δ | ss*) − G̃(dd − δ | ss*) ] / (2δ)

The parameter δ is provided by the user.

Algorithm 4.2.
1. Introduce δ.
2. Initialize dd = dd_0.
3. Simulate G̃(dd + δ | ss*) and G̃(dd − δ | ss*).
4. Compute g̃(dd | ss*).
5. If g̃(dd | ss*) > 0, then set dd = dd − δ; otherwise set dd = dd + δ.
6. If the approximated gradient is not the first gradient computed and if the product of this gradient with the previous one is negative, set δ = δ × α. The parameter α is less than 1 and is used to reduce the speed of the evolution of dd when one approaches a domain where the gradient is equal to zero.
7. If δ > l, then go to 3; otherwise stop the algorithm. The parameter l is given by the user. The smaller l, the greater the number of iterations, and thus the more precise the result.
Table 4.2 “Optimal” state with regard to ss*

ss*                 50     60     70     80     90
SS “optimal”        104    114    106    116    126
“Optimal” cost      69.1   65.09  63.77  61.80  65.04
Average inventory   16.7   23.39  20.76  28.36  37.01
Average backlog     19.79  14.44  13.71  9.58   6.43
Average order       29.55  29.49  30.36  29.83  29.80
We applied this algorithm to the example described above for different values of ss*. The results are presented in Table 4.2. The values provided in lines 4 to 6 refer to one elementary period. Figure 4.7 represents the evolution of SS “optimal” with regard to ss*. The dotted line is the result of smoothing (second degree). Note that in the test ss* evolves unit by unit. It is possible to refine this approach by dichotomy.

[Figure 4.7 “Optimal” value of SS with regard to ss*: SS grows roughly from 104 at ss* = 50 to 126 at ss* = 90; the dotted line is a second-degree smoothing]
In this example, the “optimal” pair ( ss, SS ) is ( 78, 126 ) and leads to a cost of 61.50 per elementary period.
4.4 Echelon Stock Policies

4.4.1 Introductory Remarks

The ordering policies in multilevel systems, and in particular in supply chains, can be classified into two groups: installation stock policies and echelon stock policies. In installation stock policies, only local stock information is available for managing a stock. In other words, the only information available is the information issued from the stock under consideration. For example, if the (R, Q) and (s, S) policies are used in such an environment, they can rely only on the current inventory level and its history. Echelon stock policies, on the contrary, enable the utilization of the information related to other stocks and, more generally, to the whole supply chain and its history. In the example given in Section 4.2.3.1 it has been shown that a simple echelon stock policy performs better than an installation stock policy when the second level of the system is informed about the demand appearing at the lower level. Echelon stock policies are of utmost importance in supply chains. Furthermore, we have to keep in mind that the stocks of a supply chain do not always belong to a single company, which implies that a necessary (but not sufficient) condition for successful management is a real-time data-exchange system among the partners of the supply chain. The following remarks are of importance:

• The structure of an optimal echelon stock policy is unknown.
• The objective when using an echelon stock policy is to coordinate local decisions using information issued from the downstream levels and, more generally, from the whole supply chain.

An approach frequently used to coordinate the decisions made at the different levels of the supply chain is MRP (material requirements planning), completed by MRP2 (manufacturing resources planning); both are presented hereafter.
4.4.2 Material Requirements Planning (MRP)

MRP is a tool for production planning and inventory control. This approach is used periodically, on a rolling-horizon basis.5 Applying MRP requires data on the components (raw material, semi-finished products) of each end item, the quantity of each component, the sequence in which the components are required and the operations are performed, as well as the operation times. The bill-of-material (BOM) is the key structure used to introduce these data.

4.4.2.1 Bill-of-material (BOM)
The BOM is the key input for the MRP system. It shows all of the components (both purchased and manufactured) at every level of the manufacturing process and the quantity of each component required. In the supply chain environment, the levels of the manufacturing process may take place in different companies. There is one BOM for each end item. It should be noted that purchased items do not require a BOM.
5 A decision is made on a rolling-horizon basis if it is made periodically (period h) for a period H (h ≤ H), taking into account the information available when the decision is made.
An example of a BOM describes how a tablet box (TB) is produced. Tablets are presented in blisters. The final product (i.e., the tablet box) is made with a box (BX), a blister pack of tablets (BPT) and the instructions for use (IU). A blister pack of tablets is obtained from a blister (BL) and tablets (TA). The numbers associated with the arrows define the number of components required to obtain the item of the next upper level. The BOM is presented in Figure 4.8. Another way to represent the BOM of the tablet box is Table 4.3. This BOM has three levels: level 0 (TB), level 1 (IU, BPT, BX) and level 2 (BL and TA).
[Figure 4.8 An example of a BOM: the tablet box — TB at level 0; IU, BPT and BX (one of each) at level 1; BL (×1) and TA (×20) under BPT at level 2]
Table 4.3 The tablet box BOM

Level  Quantity  Acronym  Identification
0      1         TB       Tablet box
1      1         IU       Instructions for use
1      1         BPT      Blister pack of tablets
1      1         BX       Box
2      1         BL       Blister
2      20        TA       Tablets
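The explosion of requirements through this BOM can be sketched in a few lines (the function name and data layout are ours, not the book's):

```python
# Single-level BOM of the tablet box: item -> list of (component, quantity)
bom = {
    "TB": [("IU", 1), ("BPT", 1), ("BX", 1)],
    "BPT": [("BL", 1), ("TA", 20)],
}

def explode(item, qty, req=None):
    """Accumulate the gross requirements of all components
    needed to produce `qty` units of `item`."""
    if req is None:
        req = {}
    for comp, n in bom.get(item, []):
        req[comp] = req.get(comp, 0) + n * qty
        explode(comp, n * qty, req)
    return req

print(explode("TB", 100))   # IU: 100, BPT: 100, BL: 100, TA: 2000, BX: 100
```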
4.4.2.2 Master Production Schedule (MPS)
The MPS is a production program for end items. It is a time-phased schedule of the production required to meet demand (in the case of a make-to-order supply chain) or to maintain inventory levels. The MPS is generated based, in particular, on the following information:

• forecasted demands;
• customers' orders;
• inventory levels (including backlogs);
• outstanding orders;
• the lead times (which are assumed to be constant).

The end items of the MPS are called "level 0" items.

4.4.2.3 Time-phased Gross Requirements
Time-phased gross requirements for components are obtained by exploding the MPS using the BOM of each item. Consider, for example, the BOM of the tablet box and assume that the operation times related to the components are given in Table 4.4. The operation times are given in terms of the number of elementary periods. Figure 4.9 provides the cumulative lead time of the tablet box. It is equal to 7 elementary periods.

Table 4.4 Component requirements for tablet box

Level  Operation                                Operation time  Identification
0      Assembling IU, BPT and BX                1               TB
1      Making IU available                      1               IU
1      Assembling BL and TA                     3               BPT
1      Producing and making box available       2               BX
2      Producing and making blister available   2               BL
2      Producing and making tablets available   3               TA
[Figure 4.9 Cumulative lead time for tablet box: the release of each component (TA, BL, BPT, BX, IU, TB) plotted over elementary periods −7 to −1 before the due date]
This shows that if the due date of a tablet box (TB) is given:

• The TA item must be released into production 7 elementary periods before the due date.
• The BL item must be released into production 6 elementary periods before the due date.
• The BPT item must be released into production 4 elementary periods before the due date.
• The BX item must be released into production 3 elementary periods before the due date.
• The IU item must be released into production 2 elementary periods before the due date.
• The TB item must be released into production 1 elementary period before the due date.
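These offsets follow mechanically from the operation times of Table 4.4. A small sketch (our own encoding of the data) recomputes them by walking up the BOM:

```python
# Operation times (elementary periods) and parent links from the tablet-box BOM
op_time = {"TB": 1, "IU": 1, "BPT": 3, "BX": 2, "BL": 2, "TA": 3}
parent = {"IU": "TB", "BPT": "TB", "BX": "TB", "BL": "BPT", "TA": "BPT"}

def release_offset(item):
    """Number of periods before the end-item due date at which
    `item` must be released into production."""
    off = op_time[item]
    while item in parent:          # add the operation times of all ancestors
        item = parent[item]
        off += op_time[item]
    return off

for it in ("TA", "BL", "BPT", "BX", "IU", "TB"):
    print(it, release_offset(it))  # 7, 6, 4, 3, 2, 1
```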
The operation times, given in terms of elementary periods, are rough and often overestimated. Furthermore, these times apply whatever the number of units processed and, last but not least, they are adjusted from time to time to respond to worker requirements, which often leads to increased operation times.

4.4.2.4 Adjusting Production and Inventory Levels
The forecasted demand and production are computed, before entering MRP, in the master production schedule. The MRP system adjusts the production objectives and inventory levels from these data, as shown below for the end item and for item BPT.

Table 4.5 concerns the end item TB. The first row deals with the elementary periods. The second and third rows provide the demands and production quantities, respectively, which are inputs of the MRP. The following rows are filled in by the MRP system. The fourth row presents the inventory levels corresponding to the two previous rows. Note that a stock is available at the end of an elementary period while a demand is due, and a quantity produced is available at the beginning of the corresponding period. The last row contains the corrective production introduced to suppress stock shortages. These orders are released taking into account the operation time, which is one elementary period in this example.

Table 4.5 Inventory and production management of the end item

Elementary period      1   2   3   4   5   6    7    8    9    10   11
Demand                 0   10  15  10  8   14   15   13   11   25   12
Production             0   0   30  0   0   0    0    10   20   15   12
Inventory level        30  20  35  25  17  3    −12  −15  −6   −16  −16
Corrective production  0   0   0   0   0   12   3    0    1    0    0
Table 4.6 Inventory and production management of the item BPT

Elementary period      1   2   3   4   5    6    7    8    9    10   11
Demand                 0   20  10  15  10   20   20   18   12   20   10
Corrective demand      0   0   0   0   0    12   3    0    1    0    0
Production             0   0   15  0   0    15   40   0    20   10   50
Inventory level        30  10  15  0   −10  −27  −10  −28  −21  −31  −9
Corrective production  0   10  17  0   1    0    3    0    0    0    0
Table 4.6 concerns item BPT. The mechanism is the same as in Table 4.5, except that an additional row is introduced to modify the demands in order to take into account the corrective production introduced for the end product and the fact that only one BPT is required to produce one end item. When scheduling the corrective production, the system takes into account the fact that three elementary periods are required to produce item BPT. In this table, we assume that item BPT is a component of the end item TB only.

The above example is simple. In real situations, the number of end items (i.e., level 0 items) can be very high and a component can be used by several end items. The computational burden is thus often very large: this explains why MRP is of utmost importance. It should be noted that MRP does not consider the capacity of the system. As a consequence, the solution given by MRP may not be feasible. In this case, the approach provides load profiles for the components of the system, and it is the responsibility of the user to modify the due dates of some items (load smoothing) based on these profiles and, if necessary, to launch the MRP system again, hoping that it will provide a feasible solution. The user may have to iterate several times before reaching a feasible solution. In the above example we used a lot-for-lot policy. More complex lot-sizing rules are often applied in practice; they will be presented in the next section.
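The inventory row of Table 4.5 is obtained by a simple forward recursion, x_{t+1} = x_t + production − demand. The sketch below (our own encoding of the table's inputs) reproduces it before any corrective order is added:

```python
# Inputs of Table 4.5 for the end item TB (periods 1..11)
demand     = [0, 10, 15, 10, 8, 14, 15, 13, 11, 25, 12]
production = [0, 0, 30, 0, 0, 0, 0, 10, 20, 15, 12]

def inventory_profile(x0, demand, production):
    """End-of-period inventory levels, without corrective orders."""
    levels, x = [], x0
    for d, p in zip(demand, production):
        x = x + p - d
        levels.append(x)
    return levels

print(inventory_profile(30, demand, production))
# [30, 20, 35, 25, 17, 3, -12, -15, -6, -16, -16] -- the fourth row of Table 4.5
```

The negative entries are exactly the shortages that the corrective-production row then eliminates, one lead time ahead.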
4.4.2.5 Some Remarks about MRP
The following advantages are usually mentioned:

1. reducing inventory and WIP levels;
2. reducing the number of late deliveries;
3. better adjustment to changes in the master production plan;
4. improvement of productivity;
5. better use of resources.
There are also several disadvantages when using MRP:

• A tendency to perpetuate the errors of the past, in particular to freeze overestimated operation times.
• The operation times are defined based on lot sizes that rarely coincide with the actual ones, which makes it difficult to reach feasible solutions.
• The scheduling function is not included in MRP.
• The necessity to carry out a parameterization phase using simulation to take into account the real lot-sizing rules and random factors.
• The nervousness of the system due to frequent changes of the production plan.

MRP is a widely accepted method for production planning, and MRP software solutions are readily employed. Most industrial decision makers have become aware of them through commercial production control software. MRP software has a well-developed information system and a proven track record. Nevertheless, MRP is founded on the hypothesis that the demand and the lead times are known. Release dates (replenishment order dates) are calculated for a series of discrete time intervals (time buckets) based on the demand and considering a fixed lead time (the release date is equal to the due date of the demand minus the lead time). This deterministic premise is questionable, since random events often occur: lead times and finished-product demands are rarely forecast reliably, because of machine breakdowns, transport delays, customer-demand variations, etc. Therefore, in real life, the deterministic assumptions embedded in MRP are frequently too limited. Fortunately, the MRP approach can be adapted for replenishment planning under uncertainty by determining the optimal values of its parameters. For random lead times, the planned lead time is obtained as the sum of the forecasted lead time and a safety lead time (the analog of a safety stock). These planned lead-time values are a compromise between overstocking and stock-out while optimizing the total cost.
This is called the MRP parameterization problem. State-of-the-art methods are reported and commented upon in (Dolgui and Prodhon, 2007).
4.4.3 Manufacturing Resources Planning (MRP2)

The MRP system is a reorder point system that is updated continuously. Unfortunately, it has an important flaw: as mentioned above, it does not take into account the capacity of the production system, which increases complexity and management cost. Manufacturing resources planning (MRP2) is MRP completed by modules that take care of the capacity problem. MRP2 deals with the capacity problem at two levels, as shown in Figure 4.10.
[Figure 4.10 Structure of MRP2: the MPS (production program for end items) is first checked by rough cut capacity planning (RCCP); if enough resources are available, MRP is run and then checked by capacity requirements planning (CRP); whenever resources are insufficient, the plan is revised and the loop is repeated]
At the MPS level, it is possible to forecast the production capacity required to perform the operations that have been scheduled. The module in charge of this function is usually called rough cut capacity planning (RCCP). This module is not very accurate since inventories are not considered. When the MRP process is completed, the required capacity is computed and adjusted. This function is referred to as capacity requirements planning (CRP).
4.5 Production Smoothing: Lot-size Models

To simplify the management of stocks, it is common to disregard the randomness of demands and to consider the deterministic lot-sizing problem. The random aspect of the demand is left to safety stocks, which are supposed to absorb the variations in demand. In this section, we propose models for the standard economic lot-sizing problems, starting with a general discrete problem on a finite horizon. We will also study continuous and multiproduct cases. The basic hypothesis is that the production and holding costs are known.
4.5.1 Discrete Monoproduct Problem

4.5.1.1 Model

We denote by H the horizon of the problem, by x_i the inventory level at the beginning of period i ∈ {0, 1, …, H}, by v_i the replenishment decided at time i that becomes available at time i + 1, and by d_i the demand that appears at time i. The demand is known (usually forecasted) and backlog is not permitted. The state equation is:

x_{i+1} = x_i + v_i − d_{i+1}  for i = 0, 1, …, H − 1   (4.12)

The following constraints apply:

v_i ≥ (d_{i+1} − x_i)^+  for i = 0, 1, …, H − 1   (4.13)

Remember that (a)^+ = Max(0, a).

Equation 4.13 guarantees that the inventory level is never negative. x_0 ≥ 0 is the initial inventory level, which is known. Indeed, d_i ≥ 0 for i = 1, …, H. A set V = {v_0, …, v_{H−1}} that verifies (4.13) is said to be an "admissible" or "feasible" control. We also consider two types of costs:

1. The inventory (or holding) cost f_i for i = 0, 1, …, H − 1. A function f_i is defined on ℜ+ and takes its values on ℜ+. Furthermore, such a function is concave, increasing and continuous on ℜ+ − {0}. f_i(x) is the cost corresponding to a quantity x ≥ 0 held in stock during the elementary period [i, i + 1).
2. The production (or setup) cost c_i for i = 0, 1, …, H − 1. c_i(v) is the cost incurred when deciding to produce v at time i. The functions c_i have the same properties as the functions f_i.

A control is optimal if it is admissible and minimizes the total cost:

K(V) = Σ_{i=0}^{H−1} { c_i(v_i) + f_i(x_i) }   (4.14)
4.5.1.2 Computation of the Optimal Solution
The backward dynamic programming formulation that leads to the optimal solution is:

K_H(x) = 0
K_i(x) = f_i(x) + inf_{v ≥ (d_{i+1} − x)^+} { c_i(v) + K_{i+1}(x + v − d_{i+1}) }  for i = H − 1, …, 0 and x ≥ 0   (4.15)

Indeed, (4.15) cannot be used as such, since the computation of K_i(x) requires considering an infinite number of replenishments v for each value of x. Bensoussan and Proth (1981) established that it is not necessary to consider all the values v ≥ (d_{i+1} − x)^+ to compute K_i(x), which, in turn, requires considering only a finite number of values for x. The results that make the dynamic programming approach usable are summarized in Theorem 4.1. To simplify the notations, we set:

σ_i^j = Σ_{k=i}^{j} d_k, keeping in mind that σ_i^j = 0 when j < i.

Theorem 4.1. Whatever i = 0, 1, …, H − 1:

1. If 0 ≤ x < d_{i+1}, then:

K_i(x) = f_i(x) + Min_{k = i+1, …, H} { c_i(σ_{i+1}^k − x) + K_{i+1}(σ_{i+2}^k) }

2. If σ_{i+1}^s ≤ x < σ_{i+1}^{s+1} with s ∈ {i + 1, …, H − 1}, then:

K_i(x) = f_i(x) + Min { c_i(0) + K_{i+1}(x − d_{i+1}),  Min_{k = s+1, …, H} [ c_i(σ_{i+1}^k − x) + K_{i+1}(σ_{i+2}^k) ] }

This step disappears if i = H − 1.

3. If x ≥ σ_{i+1}^H, then:

K_i(x) = f_i(x) + c_i(0) + K_{i+1}(x − d_{i+1})

To summarize Theorem 4.1, whatever i = 0, 1, …, H − 1:

• If the inventory level is less than the next demand (i.e., the demand d_{i+1}), then the optimal replenishment raises the inventory to a level σ_{i+1}^r for some r belonging to {i + 1, …, H − 1}. In other words, the optimal replenishment brings the inventory to a level equal to the sum of a number of consecutive demands, the first of the series being d_{i+1}.
• If the inventory level is greater than the next demand but less than the sum of the remaining demands, then the optimal replenishment is either 0 or brings the inventory to a level equal to the sum of a number of consecutive demands, the first of the series being the next one (i.e., d_{i+1}).
• If the inventory level is greater than or equal to the sum of the remaining demands, it is optimal not to replenish.

Note that a consequence of Theorem 4.1 can be expressed as follows: whatever i = 1, 2, …, H, the optimal inventory level x_i* belongs to the set:

{ (x_0 − σ_1^i)^+ } ∪ { σ_{i+1}^s | s = z, …, H }

where z is the lowest integer such that x_i ≤ σ_1^z.
Thus, the number of inventory levels to explore at time i is finite. This makes it possible to apply a backward dynamic programming approach in order to compute, for each of these inventory levels, the optimal replenishment. Then, starting from x_0 and using a forward process, it is easy to reconstruct the sequence of optimal replenishments. The following example presents the complete process in detail.

4.5.1.3 Numerical Example
We propose a short example to illustrate Theorem 4.1. In this example:

• The initial inventory level is x_0 = 3 and the horizon of the problem is H = 5.
• The production costs are:
  c_i(v) = 0 if v = 0 and c_i(v) = v + 3 if v > 0, for i = 0, 2, 4;
  c_i(v) = 0 if v = 0 and c_i(v) = 0.5 v if v > 0, for i = 1, 3.
• The holding cost is steady: f_i(x) = x for i = 0, 1, 2, 3, 4.
• The demands at the ends of the five periods are given in Table 4.7.

Table 4.7 Demands

Period   1   2   3   4   5
Demand   5   2   1   6   4
Since the sum of the demands is greater than the initial inventory, there exists an optimal solution such that the inventory level at the end of the last elementary period is equal to 0. We apply the results presented in Theorem 4.1 backwards and obtain (v_i^* denotes the optimal replenishment at each step):

   K_5(0) = 0
   K_4(0) = f_4(0) + c_4(4) = 7 and v_4^* = 4
   K_4(4) = f_4(4) + c_4(0) = 4 and v_4^* = 0
   K_3(0) = f_3(0) + Min{ c_3(6) + K_4(0), c_3(10) + K_4(4) } = 9 and v_3^* = 10
   K_3(6) = f_3(6) + Min{ c_3(0) + K_4(0), c_3(4) + K_4(4) } = 12 and v_3^* = 4
   K_3(10) = f_3(10) + c_3(0) + K_4(4) = 14 and v_3^* = 0
   K_2(0) = f_2(0) + Min{ c_2(1) + K_3(0), c_2(7) + K_3(6), c_2(11) + K_3(10) } = 13 and v_2^* = 1
   K_2(1) = f_2(1) + Min{ c_2(0) + K_3(0), c_2(6) + K_3(6), c_2(10) + K_3(10) } = 10 and v_2^* = 0
   K_2(7) = f_2(7) + Min{ c_2(0) + K_3(6), c_2(4) + K_3(10) } = 19 and v_2^* = 0
   K_2(11) = f_2(11) + c_2(0) + K_3(10) = 25 and v_2^* = 0
   K_1(0) = f_1(0) + Min{ c_1(2) + K_2(0), c_1(3) + K_2(1), c_1(9) + K_2(7), c_1(13) + K_2(11) } = 11.5 and v_1^* = 3
   K_1(2) = f_1(2) + Min{ c_1(0) + K_2(0), c_1(1) + K_2(1), c_1(7) + K_2(7), c_1(11) + K_2(11) } = 12.5 and v_1^* = 1
   K_1(3) = f_1(3) + Min{ c_1(0) + K_2(1), c_1(6) + K_2(7), c_1(10) + K_2(11) } = 13 and v_1^* = 0
   K_1(9) = f_1(9) + Min{ c_1(0) + K_2(7), c_1(4) + K_2(11) } = 28 and v_1^* = 0
   K_1(13) = f_1(13) + c_1(0) + K_2(11) = 38 and v_1^* = 0
   K_0(3) = f_0(3) + Min{ c_0(2) + K_1(0), c_0(4) + K_1(2), c_0(5) + K_1(3), c_0(11) + K_1(9), c_0(15) + K_1(13) } = 19.5 and v_0^* = 2
The backward process ends at this point: we have computed the optimal replenishment at each period and for each possible inventory level. The optimal cost is 19.5. We now start the forward process, the objective of which is to compute the optimal control.

The initial inventory is x_0^* = 3, the corresponding optimal replenishment is v_0^* = 2 and d_1 = 5. Thus, x_1^* = 3 + 2 − 5 = 0.
The optimal inventory level at period 2 is x_1^* = 0, the corresponding optimal replenishment is v_1^* = 3 and d_2 = 2. Thus, x_2^* = 0 + 3 − 2 = 1.
The optimal inventory level at period 3 is x_2^* = 1, the corresponding optimal replenishment is v_2^* = 0 and d_3 = 1. Thus, x_3^* = 1 + 0 − 1 = 0.
The optimal inventory level at period 4 is x_3^* = 0, the corresponding optimal replenishment is v_3^* = 10 and d_4 = 6. Thus, x_4^* = 0 + 10 − 6 = 4.
The optimal inventory level at period 5 is x_4^* = 4, the corresponding optimal replenishment is v_4^* = 0 and d_5 = 4. Thus, x_5^* = 4 + 0 − 4 = 0.

Finally, the optimal control is V* = (2, 3, 0, 10, 0).
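The backward/forward procedure above can be sketched in a few lines of code. The following Python fragment is only an illustrative sketch of the dynamic program applied to this numerical example; the restriction of the candidate replenishments to "order up to a sum of consecutive demands" is exactly the property given by Theorem 4.1 (here, `d[i]` plays the role of the demand d_{i+1}).

```python
from functools import lru_cache

d = [5, 2, 1, 6, 4]                 # d[i] is the demand d_{i+1} of the example
H = len(d)

def c(i, v):                        # production cost of period i
    if v == 0:
        return 0.0
    return v + 3 if i % 2 == 0 else 0.5 * v

def f(i, x):                        # holding cost of period i
    return float(x)

@lru_cache(maxsize=None)
def K(i, x):
    """Optimal cost-to-go from period i with inventory x; returns (cost, v*)."""
    if i == H:
        return (0.0, 0)
    candidates = set()
    if x >= d[i]:                   # not replenishing is feasible
        candidates.add(0)
    for k in range(i + 1, H + 1):   # order up to d_{i+1} + ... + d_k (Theorem 4.1)
        target = sum(d[i:k])
        if target >= x:
            candidates.add(target - x)
    v_best = min(candidates, key=lambda v: c(i, v) + K(i + 1, x + v - d[i])[0])
    return (f(i, x) + c(i, v_best) + K(i + 1, x + v_best - d[i])[0], v_best)

# backward pass (memoized) and forward reconstruction of the optimal control
total = K(0, 3)[0]
x, control = 3, []
for i in range(H):
    v = K(i, x)[1]
    control.append(v)
    x = x + v - d[i]

print(total, control)               # 19.5 [2, 3, 0, 10, 0]
```

Running it reproduces the optimal cost 19.5 and the control V* = (2, 3, 0, 10, 0) computed by hand above.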
4.5.1.4 Fixed-cost Case
The computation of the optimal solution is simplified when neither the production costs nor the holding costs depend on the elementary periods. We denote by c(·) and f(·) the production and holding costs, respectively. The computation of the optimal solution is based on Theorem 4.2.

Theorem 4.2. For any i = 0, 1, …, H − 1:

1. If 0 ≤ x < d_{i+1}, then

   K_i(x) = f(x) + Min_{k = i+1, …, H} { c(σ_{i+1}^k − x) + K_{i+1}(σ_{i+2}^k) }

2. If x ≥ d_{i+1}, then

   K_i(x) = f(x) + c(0) + K_{i+1}(x − d_{i+1})

In other words, a replenishment is made only if the inventory level is less than the next demand. When the holding and setup costs are linear, we call this the "Wagner–Whitin" model.
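Theorem 4.2 translates directly into a two-case recursion. The sketch below applies it to hypothetical stationary costs (a setup cost of 5 plus a unit cost of 1, and a linear holding cost; the demand data are illustrative, not taken from the book) and cross-checks the result against an exhaustive search over all feasible integer controls.

```python
from functools import lru_cache
from itertools import product

# hypothetical stationary costs: setup 5 plus unit cost 1, linear holding cost
d = [2, 3, 2]
H = len(d)
def c(v): return 0 if v == 0 else 5 + v
def f(x): return x

@lru_cache(maxsize=None)
def K(i, x):
    """Theorem 4.2 recursion (0-based periods; d[i] plays the role of d_{i+1})."""
    if i == H:
        return 0
    if x < d[i]:                    # case 1: order up to a sum of future demands
        return f(x) + min(c(sum(d[i:k]) - x) + K(i + 1, sum(d[i + 1:k]))
                          for k in range(i + 1, H + 1))
    return f(x) + c(0) + K(i + 1, x - d[i])    # case 2: do not order

def brute(x0):
    """Exhaustive search over all feasible integer controls, for cross-checking."""
    best = float("inf")
    for vs in product(range(sum(d) + 1), repeat=H):
        x, cost, ok = x0, 0, True
        for i in range(H):
            cost += f(x) + c(vs[i])
            x = x + vs[i] - d[i]
            if x < 0:               # demand must be satisfied
                ok = False
                break
        if ok:
            best = min(best, cost)
    return best

print(K(0, 0), brute(0))            # both equal 19
```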
4.5.2 Continuous Monoproduct Problem

This model was originally developed in (Bensoussan and Proth, 1984).

4.5.2.1 Model
The initial inventory level x_0 is known. The instantaneous demand d(t) is known over the period [0, H], where H is the horizon of the problem, and is measurable. The replenishment function v(t) is continuous. It is called admissible (or feasible) if:

1. It lies between a lower bound m and an upper bound M:

   0 ≤ m ≤ v(t) ≤ M

2. The inventory level x(t), t ∈ [0, H], remains greater than or equal to 0, where:

   x(t) = x_0 + ∫_0^t [ v(s) − d(s) ] ds     (4.16)

As in the discrete case, two costs are considered. First, an instantaneous inventory (or holding) cost f(x, t), x ≥ 0 and t ∈ [0, H]. At each instant t, the function f(x, t) is concave and nondecreasing in x, and integrable in t:

   ∫_0^H f(x, t) dt < +∞, ∀ x > 0

Secondly, an instantaneous replenishment (or production) cost c(v, t), v ≥ 0 and t ∈ [0, H]. This cost function has the same properties as the inventory cost function. We denote by V(x_0) the set of feasible controls when the initial inventory level is x_0. The total cost corresponding to v ∈ V(x_0) is:

   J(x_0, v) = ∫_0^H { c[v(t), t] + f[x(t), t] } dt

where x(t) is derived from v(t) by applying (4.16). The objective is to compute an optimal control v* defined by:

   J(x_0, v*) = Min_{v ∈ V(x_0)} J(x_0, v)
4.5.2.2 Results
The following results were proven in (Bensoussan and Proth, 1984).

Result 4.1. If the lower bound of the control is admissible, it is optimal.

Result 4.2. An optimal control exists if and only if the upper bound of the control is admissible.

The first two results are straightforward.

Result 4.3. There exists at least one optimal inventory level function x*(·) such that:
1. x*(t) = 0 for at least one t ∈ [0, H].
2. If x*(H) > 0, then there exists θ ∈ [0, H) such that v*(t) = m for t ∈ [θ, H].

We now introduce the basic result.

Result 4.4. If the instantaneous inventory and/or instantaneous production costs are strictly concave, and if V(x_0) ≠ ∅, there exists at least one optimal control v*(·) that is extremal for almost every t such that x*(t) > 0. (A control is extremal if it is equal to m or to M.) Result 4.4 can be rewritten as: there exists at least one optimal control v*(t), t ∈ [0, H], such that:

   x*(t) [ M(t) − v*(t) ] [ v*(t) − m(t) ] = 0, ∀ t ∈ [0, H]
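The intuition behind Result 4.4 is that a concave function over an interval attains its minimum at an endpoint, so, pointwise, the optimal replenishment is pushed to a bound whenever the inventory constraint is inactive. A minimal numeric illustration, with a hypothetical strictly concave cost:

```python
# Result 4.4 in miniature: a concave cost on an interval attains its minimum
# at an endpoint, which is why the optimal control is extremal (v = m or v = M)
# whenever the inventory is positive.  The quadratic below is a hypothetical
# strictly concave cost, not one taken from the book.
m, M = 0.0, 10.0
c = lambda v: 5 + 3 * v - 0.25 * v ** 2    # concave: second derivative -0.5
grid = [m + k * (M - m) / 1000 for k in range(1001)]
v_min = min(grid, key=c)
print(v_min in (m, M))                     # True: the minimizer is an endpoint
```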
The properties of the optimal control are summarized in Table 4.8. These results hold even when the costs are not strictly concave but simply concave.

Table 4.8 Optimal control, according to conditions on the demand d(t)

1. v_i^j = 0, if x_i^j > (x_0^j − σ_{1,i}^j)^+ and x_i^j ≥ d_{i+1}^j.
2. v_i^j ∈ { σ_{i+1,k}^j − (x_0^j − σ_{1,i}^j)^+, k = i+1, …, H }, if x_i^j < d_{i+1}^j.
3. v_i^j ∈ { 0 } ∪ { σ_{i+1,k}^j − (x_0^j − σ_{1,i}^j)^+, k = i+1, …, H }, if x_i^j = (x_0^j − σ_{1,i}^j)^+ and x_i^j ≥ d_{i+1}^j.
Condition 1 means that there is no replenishment if the inventory level is greater than the next demand and a replenishment has already happened. Condition 2 says that, if the inventory level is less than the next demand, the replenishment follows an order-up-to-Z policy, Z being an inventory level that satisfies exactly a number of consecutive demands, the next demand being the first of the series. Condition 3 shows that, if no replenishment happened before, then either there is no replenishment or the order-up-to-Z policy applies. Thus, it is possible to apply the backward dynamic programming equations (4.17) and then to reconstruct an optimal control forwards, as we did in the monoproduct case. If the stocks are empty at the beginning of this process (i.e., if x_0^j = 0 for j = 1, 2, …, M), it is easy to approximate the number of M-tuples to explore. This approximation is:

   Σ_{k=1}^{H} k^M
Since the volume of the computation is considerable, we developed a heuristic algorithm that leads to a near-optimal solution.

4.5.3.2 Heuristic Algorithm for the Multiproduct Problem
Let V^M be a feasible control of the multiproduct problem. We denote by P(V^M, i) the problem in which all the components of the control V^M are frozen except component v^i; thus, P(V^M, i) is a monoproduct problem. Its demands and holding costs are the same as those of the multiproduct problem. The production costs are also the same, except for those of product type i, which are replaced by:

   w^i(v) = c_i(V^M) − c_i(V^M_(i))

where V^M_(i) is obtained by replacing v^i by 0 in V^M. The heuristic can be summarized as follows.

Algorithm 4.3.
1. Generate at random a feasible control V^M.
2. While the total cost decreases:
   2.1. For i = 1, 2, …, M:
      2.1.1. Solve the monoproduct problem P(V^M, i).
      2.1.2. Modify V^M by replacing v^i with the optimal solution of P(V^M, i).
      2.1.3. Compute the total cost corresponding to the new control V^M.
   2.2. End of loop.
It has been proven that this algorithm leads to a local optimum that depends on the initial feasible control (first step of the algorithm). Furthermore, the algorithm converges very quickly. Thus, an efficient approach consists in rerunning it several times and keeping the solution that corresponds to the lowest cost.
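The structure of Algorithm 4.3 is that of a coordinate descent with random restarts. The sketch below reproduces that structure on a deliberately simple stand-in problem: each component is a scalar chosen from a finite candidate set, and the coupled total cost is a hypothetical function, not the lot-sizing cost above. It also illustrates the remark just made: different starting controls can end in different local optima.

```python
import random

def coordinate_descent(cost, candidates, n, restarts=5, seed=0):
    """Algorithm 4.3 sketch: cyclically re-optimize one component of the
    control while the others stay frozen; restart from random feasible
    controls and keep the best local optimum found.  `cost` and
    `candidates` are hypothetical stand-ins for the multiproduct total
    cost and the feasible replenishments of each product."""
    rng = random.Random(seed)
    best_v, best_c = None, float("inf")
    for _ in range(restarts):
        v = [rng.choice(candidates[i]) for i in range(n)]        # step 1
        current = cost(v)
        improved = True
        while improved:                                          # step 2
            improved = False
            for i in range(n):                                   # step 2.1
                x = min(candidates[i],
                        key=lambda cand: cost(v[:i] + [cand] + v[i + 1:]))
                new = cost(v[:i] + [x] + v[i + 1:])
                if new < current:                                # steps 2.1.2-2.1.3
                    v[i], current, improved = x, new, True
        if current < best_c:
            best_v, best_c = v[:], current
    return best_v, best_c

# toy coupled cost: per-product quadratic terms plus a shared interaction term
cost = lambda v: sum((x - 3) ** 2 for x in v) + 2 * max(v)
v, c = coordinate_descent(cost, [list(range(7))] * 3, 3)
print(v, c)     # a local optimum of the toy cost
```

Each pass can only decrease the total cost, so the loop terminates; the returned control is a local optimum (no single-component change improves it), which is exactly the guarantee stated for Algorithm 4.3.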
4.5.4 Economic Order Quantity (EOQ)

This model, also called the "Wilson model", was introduced by (Harris, 1913) and revisited by (Wilson, 1934). It is a simplified approach that may be useful at the planning level but remains far from reality. It is based on the following assumptions:
• The inventory level x remains positive or equal to zero.
• The demand d is constant and continuous.
• The ordering (or production, or setup) cost c and the holding (or inventory) cost f per item unit and time unit are constant.
• The horizon of the problem is infinite.
We also introduce the batch quantity (i.e., the replenishment) v and the total cost per time unit K. Under the above assumptions, the replenishment of the inventory is periodic and constant, as shown in Figure 4.11. Let T be the replenishment period. We compute the total cost on this period.
Figure 4.11 The EOQ model (sawtooth inventory level: each replenishment raises the inventory to v, which is then consumed at the constant demand rate d over each period of length T)
The holding cost on period T is T × (v/2) × f. The ordering cost on the same period is c. Thus:

   K = (v/2) f + c/T

Taking into account the fact that T = v/d, the total cost per unit of time becomes:

   K = (T d / 2) f + c/T

The derivative of K with respect to T is:

   K'_T = (d f)/2 − c/T²

and the second derivative is:

   K''_T = 2c/T³

The second derivative is positive for T > 0, which means that K is convex. As a consequence, the minimum of this function is reached when K'_T = 0. Thus, the optimal period is:

   T* = sqrt( 2c / (d f) )

The optimal replenishment v* is:

   v* = T* × d = sqrt( 2 c d / f )

As was expected, T* is an increasing function of the ordering cost and a decreasing function of the holding cost.
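The closed-form optimum is easy to check numerically. The data below (ordering cost 100 per order, holding cost 2 per item per time unit, demand rate 50) are hypothetical:

```python
import math

def eoq(c, f, d):
    """Optimal period T* = sqrt(2c/(d f)) and batch quantity v* = T* d."""
    T = math.sqrt(2 * c / (d * f))
    return T, T * d                        # v* also equals sqrt(2 c d / f)

def cost_per_time_unit(T, c, f, d):
    return (T * d / 2) * f + c / T         # holding + ordering cost per time unit

T_star, v_star = eoq(c=100, f=2, d=50)
print(round(T_star, 4), round(v_star, 2))  # 1.4142 70.71
# K is convex in T, so T* beats any perturbed period:
for T in (0.5 * T_star, 0.9 * T_star, 1.1 * T_star, 2 * T_star):
    assert cost_per_time_unit(T_star, 100, 2, 50) <= cost_per_time_unit(T, 100, 2, 50)
```

At T*, the holding and ordering costs per time unit are equal, a classical property of the EOQ optimum.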
4.6 Pull Control Strategies

4.6.1 Kanban Model

The Kanban approach controls the work-in-progress (WIP) in production systems and is sometimes referred to as the Toyota Production System. A Kanban is an "authorization card"; thus, the Kanban control system depends on one parameter per stage. In a Kanban system, the manufacturing process of a part type is decomposed into stages. Each stage is handled by a station followed by a buffer. A given number of tags (Kanbans) are associated with the semi-finished parts at each station.
4.6 Pull Control Strategies
153
A Kanban contains the reference of the part under consideration. Note that different types of Kanbans are used when the system manufactures different types of parts. A Kanban is stuck to a part when it enters a station and is released when the part leaves the buffer that follows the station. The set of Kanbans assigned to a station provides the upper bound of the WIP at the station and the corresponding buffer at any time. A Kanban system is shown in Figure 4.12. The buffer that follows each station allows parts to wait until adequate Kanbans are available in the next station. Initially, each buffer contains as many parts of each type as the number of Kanbans corresponding to this type.
Figure 4.12 A Kanban for a linear production system (stations 1, …, n, …, N, each followed by a buffer; lots flow downstream while Kanbans flow upstream)
When an order arrives at the system, there are two possibilities:

1. If a part of the required type is available in the last buffer, the part is assigned to the order and leaves the system after releasing the Kanban stuck to it. The Kanban thus made available allows a part of the same type waiting in the next upstream buffer (if any) to release its own Kanban and join the last station, where the available Kanban is stuck to it. The process continues upstream: whenever a Kanban is freed at any level, it allows a part of the corresponding type waiting in the next upstream buffer to move to the next station (after releasing its Kanban), where the Kanban made available in this station is stuck to it. Finally, either a Kanban related to the same type of part appears at the first station and is stuck to raw material dedicated to the corresponding type of part, thus triggering the manufacturing of a new part, or a free Kanban is blocked at the entrance of a station other than the first, waiting for a semi-finished product of the same type to be delivered to the next upstream buffer.
2. If no part is available in the last buffer, the order is backordered until a part of the required type is delivered to the last buffer. Then, the above backward process applies.

As we can see, the Kanban system is a "pull system", since it is triggered by the demand. The number and types of Kanbans to be assigned to each station still have to be defined; this depends on the operation times, the sizes of the buffers and the demands to meet. This model perfectly controls the WIP in each (station, buffer) pair. The Kanban approach also applies to assembly systems, as shown in Figure 4.13.

Figure 4.13 An assembly system (raw materials R1 and R2 enter stations S1 and S2; station S3 is fed from B1, station S4 from B1 and B2, and station S5 assembles the semi-finished parts issued from B3 and B4 into the finished product P1)
In this example, when an order related to part type P1 arrives at the system, either a part is available in the last buffer or not. In the latter case, the order is backordered until a part is delivered to the last buffer. In the former case, the part is assigned to the order and leaves the system, and its Kanban is freed. This Kanban gives the types of semi-finished parts issued from B3 and B4 that are required for assembling the finished part in S5. If both parts are available, they are transferred to station S5 and their Kanbans are freed in S3 and S4, while the Kanban that has been freed in S5 is attached to the pair of parts; otherwise, the available Kanban remains in S5, waiting for the arrival of the required semi-finished parts in B3 and B4. If a Kanban is available in S3 and a semi-finished part is waiting in B1, this semi-finished part moves to S3 (after releasing its Kanban in S1), where the available Kanban is stuck to it; otherwise, the Kanban available in S3 waits until a part is delivered to B1 (with its Kanban). Similarly, if a Kanban is available in S4 and semi-finished parts are available in B1 and B2, then one part of each type moves to S4 (after releasing their Kanbans in S1 and S2), where the available Kanban is stuck to the pair of parts. Finally, a Kanban available in S1 (respectively, S2) is stuck to raw material R1 (respectively, R2), which triggers the manufacturing of a new part.
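The card-counting logic of a linear Kanban line can be made concrete with a toy simulation. The fragment below is a deliberately simplified, single-part-type sketch (one processing step per station per time step; the kanban counts and the demand stream are hypothetical). Its point is the invariant stated above: the WIP of each stage, i.e., parts in the station plus parts in its buffer, never exceeds the number of Kanbans of that stage.

```python
def simulate_kanban(k, demands, T):
    """Toy discrete-time simulation of a single-product Kanban line
    (as in Figure 4.12): k[i] kanbans at stage i; each station pulls and
    processes at most one part per time step.  Returns the number of
    served demands and the maximum WIP observed at each stage."""
    N = len(k)
    buffer = k[:]              # initial safety stock: one part per kanban
    station = [0] * N          # parts being processed (kanban attached)
    free = [0] * N             # detached kanbans, i.e. pull authorizations
    backlog = served = 0
    max_wip = [0] * N
    for t in range(T):
        backlog += demands[t] if t < len(demands) else 0
        # satisfy demand from the finished-goods buffer, freeing kanbans
        take = min(backlog, buffer[N - 1])
        buffer[N - 1] -= take
        free[N - 1] += take
        backlog -= take
        served += take
        # a station holding a free kanban pulls one part from upstream
        for i in range(N - 1, -1, -1):
            upstream = buffer[i - 1] if i > 0 else 1   # raw material always available
            if free[i] > 0 and upstream > 0:
                if i > 0:
                    buffer[i - 1] -= 1
                    free[i - 1] += 1                   # upstream kanban released
                free[i] -= 1
                station[i] += 1                        # kanban stuck to the part
        # one time step of processing: station -> downstream buffer
        for i in range(N):
            if station[i] > 0:
                station[i] -= 1
                buffer[i] += 1
        for i in range(N):
            max_wip[i] = max(max_wip[i], station[i] + buffer[i])
    return served, max_wip

served, max_wip = simulate_kanban([2, 2, 3], [1, 0, 2, 1, 0, 3, 1, 0, 0, 0], 20)
print(served, max_wip)     # WIP per stage stays within the kanban counts
```

Since a kanban is either free, stuck to a part in the station, or stuck to a part in the buffer, `station[i] + buffer[i] + free[i]` is conserved at `k[i]`, which is the upper bound on WIP mentioned above.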
4.6.2 Base Stock Policy This control policy depends on a single parameter per stage. This parameter is the safety stock defined for each stage. When an order arrives in the system, it is transmitted to all production stages. The production starts at each level if the required semi-finished parts are available; otherwise the production is backordered (totally or partially). This system is reactive.
4.6.3 Constant Work-in-progress (CONWIP)

CONWIP is a pull alternative to Kanban, first introduced in (Spearman et al., 1990). It is a "long-pull" production model that controls the WIP by controlling the orders released to the shop floor, and can be seen as an extension of the Kanban model. Instead of assigning a set of cards to each station, in the CONWIP model cards are introduced only at the beginning of the line. Furthermore, the total amount of work is upper bounded, since a new part (or lot) is allowed to enter the system only when a part (or lot) of the same type is completed, which releases a card that is made available at the beginning of the line. Indeed, the total WIP is constant when the system is sufficiently loaded to work non-stop (hence the name CONWIP). Consider an order arriving at the shop floor (see Figure 4.14). If there are enough available cards on the bulletin board located at the beginning of the production line, then the required cards are attached to the order and the product is processed, visiting all the stations successively.
Figure 4.14 A CONWIP model (stations 1, …, n, …, N, each followed by a buffer; parts flow downstream, while cards return from the end of the line to the bulletin board at its beginning)
Cards are released when the finished product leaves the last station, and they are sent back to the bulletin board. In other words, the basic rule assigned to each station in a Kanban model applies to the whole line in the CONWIP model. Once raw material enters the system, the material flows freely and naturally accumulates in front of the bottleneck station. In real-life situations, CONWIP systems are used for mixes of parts having different operation times on the stations, and thus different bottleneck stations. It is easy to understand that there may be an opportunity to balance the system better than in a Kanban environment. Two fundamental questions are at stake:
• How to manage the orders waiting at the entrance of the system (the backlog) because demands temporarily exceed the production capacity? A simple solution consists in applying a FAFS (first arrived, first served) policy, except if other priorities are imposed by management.
• How many cards of each type should be introduced at the beginning of a line? This number depends mainly on the following factors: the demand (forecasted if not steady), the time spent by the lots in the stations (manufacturing time), and the priority policy that applies between the first and the last station.
It should also be noted that:
• It is easier to modify a part mix in a CONWIP system than in a Kanban system. This is due to the use of a single card for each lot, instead of one card for each lot at each station.
• The pacing protocol is flexible, since the management is open between the first and the last station. Thus, it is possible to introduce management rules that improve the line balancing and, as a consequence, lead to a better throughput than in a Kanban system.
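The line-level card loop is easy to sketch. The toy fragment below (hypothetical card count, arrival stream and flow time, single part type) applies the FAFS rule mentioned above: orders wait in the backlog until a card is free at the bulletin board, and each card returns when its lot leaves the line, so the WIP never exceeds the number of cards.

```python
from collections import deque

def simulate_conwip(cards, arrivals, line_time, T):
    """Toy CONWIP loop: an order enters the line only when a card is free
    at the bulletin board; the card returns when the finished product
    leaves the last station `line_time` steps later.  Orders wait in the
    backlog FAFS.  All parameters are hypothetical."""
    free_cards = cards
    backlog = deque()            # waiting orders, first arrived first served
    in_line = []                 # completion times of lots currently on the line
    done = max_wip = 0
    for t in range(T):
        # finished lots leave the line and release their cards
        finished = [ct for ct in in_line if ct <= t]
        in_line = [ct for ct in in_line if ct > t]
        free_cards += len(finished)
        done += len(finished)
        # new orders join the backlog
        for _ in range(arrivals[t] if t < len(arrivals) else 0):
            backlog.append(t)
        # admit backlogged orders while cards are available
        while backlog and free_cards > 0:
            backlog.popleft()
            free_cards -= 1
            in_line.append(t + line_time)
        max_wip = max(max_wip, len(in_line))
    return done, max_wip

done, max_wip = simulate_conwip(cards=3, arrivals=[2, 2, 2, 0, 0, 0],
                                line_time=2, T=12)
print(done, max_wip)             # 6 3: all orders served, WIP capped at 3 cards
```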
4.6.4 Generalized Kanban

This system combines the advantages of the base stock system, which reacts quickly to orders, and of the Kanban system, which coordinates and controls the WIP. It is defined by two parameters per stage: the safety stock, which allows a quick reaction to orders, and the number of production authorization cards (Kanbans), which manage the WIP. In the rest of this section, we call "demand" the order of a single part. We refer to Figure 4.12. At the beginning of the production, each buffer B_i, i = 1, 2, …, N − 1, contains n_i semi-finished parts (the safety stocks), while B_N contains n_N completed parts. None of these parts is associated with a Kanban since, as explained below, the Kanbans are released at each station exit. B_0 contains the raw material. When a demand arrives in the system, it is transmitted from the downstream to the upstream stations. If a part is available in buffer B_N, it is delivered to the customer and a demand signal is sent to the upstream stages. At stage i ∈ {N, N−1, …, 1}, if a Kanban is available and a semi-finished part (or raw material) is available in B_{i−1}, then the Kanban is stuck to the part (or raw material) and the production starts in station S_i. The Kanban is released as soon as the part leaves the station (and not when it leaves the buffer following the station, as in the Kanban system). A lack of Kanbans at stage i delays the transfer of the demand signal to the upstream stations. A lack of parts in B_{i−1} delays the production at station S_i. There are three major differences between a generalized Kanban system (GKS) and a simple Kanban system (SKS). First, in GKS, unlike SKS, the demand moves upstream separately from the release of a part downstream. Secondly, as aforementioned, the Kanban is released as soon as the part leaves the station in GKS, while it is only released when the part leaves the buffer following the station in SKS.
Finally, the third difference is that, in GKS, the parts assigned to the safety stocks are not associated with Kanbans. GKS was first introduced in (Buzacott, 1989).
4.6.5 Extended Kanban

Similarly to the GKS, the extended Kanban system (EKS) is defined by two parameters per stage: the safety stock and the number of production authorization cards (Kanbans). We refer to Figure 4.12. At the beginning of the production, each buffer B_i, i = 1, 2, …, N − 1, contains n_i semi-finished parts (the safety stocks), while B_N contains n_N completed parts. B_0 contains the raw material. When a demand arrives in the system, it is immediately transmitted to all the upstream stations. If a part is available in buffer B_N, it is delivered to the customer and the Kanban stuck to it is released, allowing a new semi-finished part waiting in B_{N−1} (if any) to enter S_N. The same process applies to any station i ∈ {N−1, …, 1} if a Kanban is available in station S_i and a semi-finished part (or raw material) is available in B_{i−1}. There are three principal differences between EKS and a simple Kanban system (SKS). First, when a demand arrives in an EKS, it is broadcast to every stage in the system, which is not the case in SKS. The second difference is that, as in GKS, Kanbans move upstream separately from the demand. Third, as in GKS, the EKS is defined by two parameters per station: the number of Kanbans, say k_i for stage i, and the number of semi-finished (or finished) parts in the safety stock, say s_i, with k_i ≥ s_i. It has been shown that the SKS and the EKS are equivalent if k_i = s_i for every stage i. The method is developed in (Baynat et al., 2001), where other related references can also be found.
4.7 Conclusion

Supply chains tend to provoke the bullwhip effect, mainly for the following reasons:
• Several decision modules often exist in such a system, despite the theoretical requirement for a single decision-making entity in a supply chain. This is due to the complexity of the production involved, which calls for a hierarchical and/or decentralized structure, and to the fact that several companies are involved in the supply chain.
• Batching is often in use, due to economic objectives or to conditions imposed on the buyer by the vendor.

Since the bullwhip effect is not only costly but also spreads confusion in the system, reducing reactivity and flexibility, it is important to diminish this phenomenon as much as possible. The main factors that can be used to this end are the following:
• Drastically improve communication in the supply chain, the goal being to guarantee the visibility of the whole system to each participant.
• Base decision making on the overall state of the supply chain, including real-time information on the inputs and outputs (demands, deliveries, etc.).
• Reduce the lead times by improving collaboration with providers.
• Standardize reactions to backlogs in order to reduce the system's nervousness.
• Limit, and even suppress, disturbing events like promotions, free-return policies and easy order cancellation.
• Improve demand forecasting.
• Use adequate inventory control and lot-sizing models.

Standard inventory control systems in a stochastic environment, that is to say the newsvendor, (R, Q) and (s, S) models, were analyzed. The complexity of manufacturing systems often requires a hierarchical approach; this is why MRP and MRP2 were mentioned and illustrated. The lot-sizing models that aim at smoothing production were developed in detail: continuous and discrete models, as well as multiproduct models, were proposed, analyzed and illustrated by several examples. Finally, production control systems that control the work-in-progress (WIP), like Kanban, CONWIP and base stock, were presented. Two extensions of the Kanban approach were explained at the end of this chapter: the generalized Kanban and the extended Kanban. They not only control the WIP but also increase the reactivity of the system. This panoramic view of inventory control problems and approaches in a supply chain is useful for a better understanding of this complex subject.
References

Barbarosoglu G (2000) An integrated supplier-buyer model for improving supply chain coordination. Prod. Plann. Contr. 11(8):732–741
Baynat B, Dallery Y, Di Mascolo M, Frein Y (2001) A multi-class approximation technique for the analysis of kanban-like control systems. Int. J. Prod. Res. 39(2):307–328
Bensoussan A, Proth J-M (1981) Gestion des stocks avec coûts concaves. RAIRO Aut. Syst. Anal. Contr. 15(3):201–220
Bensoussan A, Proth J-M (1984) Inventory planning in a deterministic environment: Continuous time model with concave costs. Eur. J. Oper. Res. 15:335–347
Blanchard OJ (1983) The production and inventory behavior of the American automobile industry. J. Polit. Econ. 91(3):365–400
Buzacott JA (1989) Queueing models of Kanban and MRP controlled production systems. Eng. Cost. Prod. Econ. 17:3–20
Chung WWC, Leung SWF (2005) Collaborative planning, forecasting and replenishment: a case study in copper clad laminate industry. Prod. Plann. Contr. 16(6):563–574
Corbett CJ, Van Wassenhove LN (1993) The natural drift: What happened to operations research? Oper. Res. 41(4):625–640
Deuermeyer BL, Schwarz LB (1981) A model for the analysis of system service level in warehouse–retailer distribution systems: the identical retailer case. In: Schwarz LB (ed) Multilevel Production/Inventory Control Systems, TIMS Studies in Management Science, vol 16, Elsevier, New York, NY, pp 163–195
Dolgui A, Prodhon C (2007) Supply planning under uncertainties in MRP environments: a state of the art. Ann. Rev. Contr. 31:269–279
Forrester JW (1961) Industrial Dynamics. MIT Press, Cambridge, MA
Gavirneni S, Tiwari V (2007) ASP, The art and science of practice: recoupling inventory control research and practice: guidelines for achieving synergy. Interfaces 37(2):176–186
Govil M, Proth J-M (2002) Supply Chain Design and Management: Strategic and Tactical Perspectives. Academic Press, San Diego, CA
Goyal SK (1976) An integrated inventory model for a single supplier – single customer problem. Int. J. Prod. Res. 15(1):107–111
Harris FW (1913) How Many Parts to Make at Once. Fact. Magaz. Manag. 10(152):135–136
Kilger C (2008) The definition of a supply chain project. In: Stadtler H, Kilger C (eds) Supply Chain Management and Advanced Planning, 4th edn. Springer-Verlag, Heidelberg, pp 287–307
Krane SD, Brawn SN (1991) Production smoothing evidence from physical product data. J. Polit. Econ. 99(3):558–581
Kreng VB, Chen F-T (2007) Three echelon buyer-supplier delivery policy – a supply chain collaboration approach. Prod. Plann. Contr. 18(4):338–349
Lee HL, Padmanabhan V, Whang S (1997) The bullwhip effect in supply chains. Sloan Manag. Rev. 38:93–102
Lee HL, So KC, Tang CS (2000) The value of information sharing in a two-level supply chain. Manag. Sci. 46(5):626–643
McCullen P, Towill D (2001) Achieving lean supply through agile manufacturing. Integr. Manuf. Syst. 12(6–7):524–533
Meredith JR (2001) Reconsidering the philosophical basis of OR/MS. Oper. Res. 49(3):325–333
Moinzadeh K, Lee HL (1986) Batch size and stocking levels in multi-echelon repairable systems. Manag. Sci. 32:1567–1581
Silver EA (2004) Process management instead of operations management. Manuf. Serv. Oper. Manag. 6(4):273–279
Spearman ML, Woodruff DL, Hopp WJ (1990) CONWIP: A pull alternative to KANBAN. Int. J. Prod. Res. 28(5):879–894
Sterman JD (1989) Modeling managerial behaviour: misperception of feedback in a dynamic decision making experiment. Manag. Sci. 35:321–339
Wagner HM (2002) And then there were none. Oper. Res. 50(1):217–226
Wilson RH (1934) A scientific routine for stock control. Harv. Bus. Rev. 13:116–128
Zanakis SH, Austin LM, Nowading DC, Silver EA (1980) From teaching to implementing inventory management: Problems of translation. Interfaces 10(6):103–110
Zimmer K (2002) Supply chain coordination with uncertain just-in-time delivery. Int. J. Prod. Econ. 77:1–15
Further Reading

Askin RG, Mitwasi MG, Goldberg JB (1993) Determining the number of kanbans in multiitem just-in-time systems. IIE Trans. 25(1):89–98
Axsäter S (2006) Inventory Control. 2nd edn, Springer, New York, NY
Bensoussan A, Crouhy M, Proth J-M (1983) Mathematical Theory of Production Planning. Advanced Series in Management, North Holland Publishing, Amsterdam
Biggs JA (1979) Heuristic lot-sizing and sequencing rules in a multi-stage production and inventory system. Decis. Sci. 10(1):96–115
Cachon G (2004) The allocation of inventory risk in a supply chain: push, pull and advanced purchase discount contracts. Manag. Sci. 50(2):222–238
Chaouiya C, Liberopoulos G, Dallery Y (2000) The extended Kanban control system for production coordination of assembly manufacturing systems. IIE Trans. 32:999–1012
Chen F, Drezner Z, Ryan JK, Simchi-Levi D (2000) Quantifying the bullwhip effect in a simple supply chain: the impact of forecasting, lead times and information. Manag. Sci. 46(3):436–443
Chen F, Zheng YS (1997) One-warehouse multiretailer systems with centralized stock information. Oper. Res. 45:275–287
Chiang WK, Feng Y (2007) The value of information sharing in the presence of supply uncertainty and demand volatility. Int. J. Prod. Res. 45(6):1429–1447
Collier DA (1981) Research issues for multi-level lot-sizing in MRP systems. J. Oper. Manag. 2(1):113–123
Dolgui A, Soldek J, Zaikin O (eds) (2005) Supply Chain Optimisation: Product/Process Design, Facility Location and Flow Control. Springer, New York, NY
Dolgui A, Louly M-A (2002) A model for supply planning under lead time uncertainty. Int. J. Prod. Econ. 78:145–152
Dolgui A, Pashkevich M (2008) On the performance of binomial and beta-binomial models of demand forecasting for multiple slow-moving inventory items. Comput. Oper. Res. 35(3):893–905
Duri C, Frein Y, Lee H-S (2000) Performance evaluation and design of a CONWIP system with inspections. Int. J. Prod. Econ. 64:219–229
Duri C, Frein Y, Di Mascolo M (2000) Comparison among three control policies: Kanban, Base Stock and Generalized Kanban. Ann. Oper. Res. 93:41–69
Framinan JM, Gonzalez PL, Ruiz-Usano R (2003) The CONWIP production control system: review and research issues. Prod. Plann. Contr. 14(3):255–265
Framinan JM, Gonzalez PL, Ruiz-Usano R (2006) Dynamic card controlling in a CONWIP system. Int. J. Prod. Econ. 99(1–2):102–116
Gaury EGA, Pierreval H, Kleijnen JPC (2000) An evolutionary approach to select a pull system among Kanban, Conwip and Hybrid. J. Intell. Manuf. 11(2):157–167
Grabot B, Geneste L, Reynoso-Castillo G, Vérot S (2005) Integration of uncertain and imprecise orders in the MRP method. J. Intell. Manuf. 16(2):215–234
Grubbström RW, Huynh TTT (2006) Analysis of standard ordering policies within the framework of MRP theory. Int. J. Prod. Res. 44:3759–3773
Hirakawa Y (1996) Performance of a multistage hybrid push/pull control system. Int. J. Prod. Econ. 44:129–135
Hennet J-C, Arda Y (2008) Supply chain coordination: a game-theory approach. Eng. Appl. Art. Intell. 21(3):399–405
Hennet J-C (2009) A globally optimal local inventory control policy for multistage supply chains. Int. J. Prod. Res. 47(2):435–453
Kahn JA (1987) Inventories and the volatility of production. Amer. Econ. Rev. 77(4):667–679
Karaesmen F, Dallery Y (2000) A performance comparison of pull type control mechanisms for multi-stage manufacturing systems. Int. J. Prod. Econ. 68:59–71
Further Reading
161
Lee HG, Na HB, Shin K, Jeong HI, Park J (2007) Performance improvement study for MRP part explosion in ERP environment. Int. J. Adv. Manuf. Techn. 35(3–4):309–324 Leopoulos VI, Proth J-M (1985) Le problème multi-produits avec coûts concaves et incitation aux lancements groupés: le cas général. RAIRO Aut. Prod. Inf. Ind. 19:117–130 Louly M-A, Dolgui A (2002) Newsboy model for supply planning of assembly systems. Int. J. Prod. Res. 40(17):4401–4414 Louly M-A, Dolgui A (2004) The MPS parameterization under lead time uncertainty. Int. J. Prod. Econ. 90:369–376 Machuca JAD, Barajas RP (2004) The impact of electronic data interchange on reducing bullwhip effect and supply chain inventory costs. Transp. Res. Part E 40:209–228 Melnyk SA, Piper CJ (1985) Lead time errors in MRP: the lot-sizing effect. Int. J. Prod. Res. 23(2):253–264 Metters R (1997) Quantifying the bullwhip effect in supply chains. J. Oper. Manag. 15(2):89– 100 Orlicky J (1975) Material Requirements Planning. McGraw-Hill, New York, NY Ovalle OR, Marquez AC (2003) Exploring the utilization of a CONWIP system for supply chain management. A comparison with fully integrated supply chains. Int. J. Prod. Econ. 83:195– 215 Simchi-Levi D, Chen X, Bramel J (1997) The logic of logistics. Theory, algorithms, and applications for logistics and supply chain management. 2nd edn, Springer Series in Operations Research, Springer, New York, NY Sterman JD (1992) Teaching takes off, fight simulators for management education. OR/MS Today 19:40–44 Sterman JD (1995) The beer distribution game. In: Heineke J and Meiles L (eds). Games and Exercises for Operations Management. Prentice-Hall, Englewood Cliffs, NJ, pp 101–112 Wagner HM, Whitin TM (1958) Dynamic version of the economic lot size model. Manag. Sci. 5:86–96 Wang S, Sarker BR (2006) Optimal model for a multi-stage supply chain system controlled by Kanban under just-in-time philosophy. Eur. J. Oper. Res. 
172:179–200 Yano CA, Lee HL (1995) Lot-sizing with random yields - a review. Oper. Res. 43(2):311–334
Chapter 5
Radio-frequency Identification (RFID): Technology and Applications
Abstract RFID is a big step forward when compared to bar codes. A technical description and the possible future progress of this technology are provided. The various factors to take into account when implementing RFID in a supply chain and most of the problems that can occur under these circumstances are outlined. Particular attention is paid to the use of RFID to improve inventory management. Practical applications, related to tracking the movement of goods in diverse industrial sectors, illustrate the importance of this technology. The advantages of using RFID in supply chains as well as expert opinions are highlighted. Some concepts to provide an economic evaluation are presented. Privacy concerns, which are socially important, are discussed. Moreover, some techniques to protect privacy are proposed. The chapter ends by discussing the RFID authentication issue. This includes the problem of misbehaving tags, particularly counterfeit ones.
5.1 Introduction RFID can be defined as an automatic identification technology composed of: 1. Tags incorporated into or attached to any kind of object (products, tools, animals, goods, human beings, etc.); 2. Specialized RFID readers that read the information stored in the tags and transfer it to a processing device (a computer, for instance). When an item to which a tag is attached (or in which it is incorporated) passes by a reader, the tag sends its information to the reader, which, in turn, passes it to the processing device. While an RFID system can be considered the successor of bar codes, there are several key differences between RFID and bar codes:
• Unlike bar codes, data are not gathered manually and, since companies rarely use identical product codes, RFID leads to a drastic reduction of the data-capture workload.
• With a bar-code system, the operator has to scan the items one by one, while an RFID reader can automatically receive information from a number of tags simultaneously. In other words, bar codes require a heavy human interface, which is not the case with RFID technology.
• RFID scanning can be done at a greater distance. The scanning distance of RFID depends on the type of system, as will be explained hereafter.
• RFID tags can be read flexibly over a large scanning area and reliably in environments polluted by moisture, noise and dirt.
• The constraints that apply to the positioning of tags are much weaker than those for bar codes. Tagged objects can be read at high speed even when oriented differently; orientation sensitivity depends on the antenna design.
• Tags can store more data. Furthermore, bar codes must be checked against a database to obtain product information, while RFID tags carry the information themselves and are evolving towards higher information capacities.
• RFID readers can communicate with multiple tags. As a consequence, it is possible to capture the information concerning an entire shipment.
• In an RFID tag, information can be added, deleted or modified to reflect the state of the corresponding item at any time. Data can be captured at various points of a supply chain, making it possible to follow the evolution of work in process. Bar codes are not programmable: limited information is printed once and cannot be modified, enriched or deleted. RFID technology is indispensable for tracking the physical movement of items.
• Active tags with advanced capabilities can be programmed to determine who is authorized to read certain parts of the data or to store new information in the tag.
• RFID technology is more expensive than bar codes.
• The standards related to bar codes are well established, while RFID standards are still emerging.
Numerous applications of RFID already exist and we will present some of them in this chapter. We will focus our attention on supply chains and show how RFID helps deal with process complexity, product variety, market uncertainty and data inaccuracy. This, in turn, results in labor-cost reduction, fewer discrepancies between invoices and deliveries, automated shipping notices, reduced product shrinkage (loss of products due to theft, misplacement of items in the storage area, inaccurate handling information, etc.) and fraud reduction through fewer counterfeits of high-value products. Nevertheless, detractors remain active. Most of their arguments concern the cost of RFID implementation and use, the difficulty of implementing the technology in an open-loop environment (a strong argument) and the fear that citizens might be tracked (the privacy issue).
In fact, RFID has been around for decades. The RFID concept was introduced during World War II to distinguish Allied aircraft from enemy aircraft by means of radar. But only recently has the convergence of low cost, increased capability and the creation of the electronic product code (EPC)1 made this technology more attractive.
5.2 Technical Overview 5.2.1 Global Description As mentioned above, an RFID system consists of a reader (also known as an interrogator) and a tag (or transponder), which is a silicon chip connected to an antenna. When a tag passes through a field covered by a reader, it transmits the information stored in it. A tag can be passive or active. An RFID system based on passive tags is presented in Figure 5.1. The antenna of the reader creates an electromagnetic field (EMF) with the antenna of the tag. The tag's antenna draws energy from this field and stores it in a capacitor (inductive coupling); the capacitor releases this energy into a coil embedded in the tag, which allows the tag to emit radio waves that are transformed into digital information representing the EPC. An RFID system based on active tags is represented in Figure 5.2. An active tag contains a battery that provides the antenna with the energy necessary to send encoded radio waves to the reader.
Figure 5.1 Passive RFID
1 The EPC (electronic product code) is the standard designed to assign a unique identifier to each item. An EPC consists of four sequences of binary digits: an eight-bit header, the EPC manager (28 bits), the product type (24 bits) and the serial number of the product (36 bits). The EPCglobal Inc. organization oversees the development of the standard. EPCglobal is a joint venture of the UCC, which regulates bar-code use in the US, and EAN, which is in charge of the regulation in the rest of the world.
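To make the footnote's field layout concrete, the following sketch packs the four fields described above (8 + 28 + 24 + 36 = 96 bits) into a single integer and unpacks them again. The function names and the sample field values are ours, purely for illustration; they are not part of any EPCglobal library.

```python
# Sketch of packing/unpacking a 96-bit EPC with the field widths given in
# the footnote: 8-bit header, 28-bit EPC manager, 24-bit product type,
# 36-bit serial number. Names are illustrative, not an official API.

FIELDS = [("header", 8), ("manager", 28), ("product_type", 24), ("serial", 36)]

def pack_epc(header, manager, product_type, serial):
    """Concatenate the four fields into a single 96-bit integer."""
    values = {"header": header, "manager": manager,
              "product_type": product_type, "serial": serial}
    epc = 0
    for name, width in FIELDS:
        value = values[name]
        if value >= 1 << width:
            raise ValueError(f"{name} does not fit in {width} bits")
        epc = (epc << width) | value
    return epc

def unpack_epc(epc):
    """Split a 96-bit integer back into its four fields."""
    out = {}
    for name, width in reversed(FIELDS):   # peel fields off the low end
        out[name] = epc & ((1 << width) - 1)
        epc >>= width
    return out
```

Because every field has a fixed width, the whole code fits in 12 bytes, which is consistent with the very small storage capacity of passive tags discussed below.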
Figure 5.2 Active RFID
A third type of tag, the semi-passive tag, contains a battery that powers the chip, but the reader field is still necessary for the transmission from tag to reader. Active and semi-passive tags are used to track high-value items that have to be scanned over longer distances. As outlined in (Prater et al., 2005) and (Smith, 2005), RFID technology makes automatic data capture and identification possible. As mentioned before, RFID technology is the successor of the well-known bar code. Keep in mind the two main advantages of RFID over bar codes: • RFID goes a step further than the bar code in the information related to an item: it emits a unique identifier for each item, distinguishing it from other identical items. • RFID tags are readable without precise positioning and line-of-sight contact, while bar codes require a carefully positioned item in line-of-sight contact with the reader.
5.2.2 Properties Passive tags are inexpensive (less than 0.2 euros each, with a target of 0.05 euros in the near future). Their data-storage capacity is low (less than 256 bytes), which is just enough to store an item identification and a limited history. An advantage of passive tags is their size, smaller than a 10-cent coin, which makes them easy to incorporate into or attach to items. These advantages, low cost and reduced size, come at the expense of read range, which does not exceed 5 m with stationary readers at frequencies ranging from 860 MHz to 930 MHz, and 20 cm with a hand-held reader. Furthermore, any metal surface presents an impassable barrier to reading a passive tag. The read range of a tag depends on both the power of the reader and the communication frequency. The higher the frequency, the longer the distance at which the tag can be read, but the greater the energy required from the reader.
A major issue with readers is the frequency at which they communicate with the tags. Radio frequencies are regulated by governments, and a frequency available for RFID in one country may be unavailable in another. For instance:
• 125 to 134 kHz, as well as 13.56 MHz, 2.400 to 2.500 GHz and 5.725 to 5.875 GHz, are operational in the US, Canada, Europe and Japan.
• 433.05 to 434.79 MHz is operational in Europe and the US, under registration constraints.
• 865 to 868 MHz is operational in Europe.
• 866 to 869 MHz and 923 to 925 MHz are operational in South Korea.
• 902 to 928 MHz is operational in the US.
• 952 to 954 MHz is operational in Japan.
Passive tags are the most widely used in RFID applications and their lifespan is much greater than that of active tags. Nevertheless, they are still too expensive to be used on a large scale. Therefore, passive tags are mainly attached to reusable pallets or cases, making it impossible, for the time being, to track every single item. Item-level tagging is the next step of RFID deployment; on the consumer side, security and privacy issues will create concerns at this stage, as explained in Section 5.9. Semi-passive tags have a built-in battery, which allows them to function with much lower power from the reader field. As a consequence, they have a read range of up to 100 m. Active tags are totally independent of the reader as far as energy is concerned. This allows them to communicate at distances of several kilometers. Furthermore, a metal surface is no longer an impassable barrier. Active tags can remain dormant until they arrive in the field of a reader, or they can constantly broadcast a signal. Currently, active tags have a memory capacity ranging from 32 to 128 kB. A drawback of these tags is that the quality of the embedded battery is difficult to evaluate, which results in a random life expectancy.
Another problem is the cost of active tags, resulting mainly from the built-in battery. For the same reason, their size is significant: they are about the size of a playing card. The chip included in a tag can be either read-write or read-only. Read-write tags are much more expensive than read-only ones, but their application range is broader. In particular, a read-write tag is convenient when the information concerning an item changes with the production stage (for example, after an assembly operation, or at a point where a given product can be completed in different ways). For a long time, readers were expected to handle a limited flow of tags containing low volumes of data. The current tendency is to develop readers that treat large numbers of tags, each containing a high volume of data. As mentioned before, the cost of tags is a parameter that will play a major role in the development of RFID. It is commonly accepted that 5 cents a tag is the threshold below which wide adoption of RFID technology will occur. The only way to reach
this threshold is to find alternative tag designs and more efficient tag-manufacturing processes. Most attempts at alternative tag design revolve around chipless tags. Among others, we can mention:
• Chipless tags that use surface acoustic wave (SAW) technology, which propagates radio-frequency acoustic waves on the surface of polished crystals.
• Chipless tags that use nanotechnology genomes.
Apart from reducing cost, these chipless technologies are more easily applicable near metals and liquids, which are impassable barriers for passive tags. Improving tag packaging is another research path, both for specific uses of tags and for reducing manufacturing cost.
5.2.3 Parameters of Importance when Selecting Tags The type of tag to use depends on the work to be done. Communication distance and maximum tag speed, measured by the reading and transfer rate, are certainly the first parameters to consider. The size of the tag is another important parameter; it is constrained by the type of item to which the tag should be attached (or in which it is incorporated). The environment is also an important aspect. The following questions should be answered: What will the temperature exposure be? Will the environment be harsh (corrosion, humidity, steam, dust, …)? Is the environment prone to disturbing communication between readers and tags (presence of other radio devices or electrical noise, proximity to other tags)? Will metal perturb communications? We also have to ask whether tags should be reusable and what constraints relate the orientation of the tags to the orientation of the reader. Other parameters to consider are the communication protocols (EPC) and the operating frequency (LF, HF or UHF), which depend on the application and may be constrained by regulation. The volume of data to be carried by the tags (the granularity) and the speed of the items carrying the tags are also of utmost importance. We also have to mention data security and anticollision aspects (How many tags are in the field of the reader at the same time? Do signals emitted by different readers interfere?). Anticollision is about avoiding confusion between data carried by different tags present in the reading environment of the same reader, as well as interference between the signals of different readers. The number of tags that can be identified “simultaneously” depends on the protocol and the frequency of the electromagnetic waves used; typically, this number ranges from 50 tags/s for HF to about 200 tags/s for UHF. Finally, the selection of the reader depends on the tags to be handled.
5.2.4 The Auto-ID Center at MIT The Auto-ID Center at MIT has developed the EPCglobal Network, an automatic identification, data capture and sharing (AIDCS) system that combines RFID with several other technologies in order to track items through a supply chain and share information among the participants of the supply chain using the Internet. To avoid interference between signals issued by different readers, this network uses time division multiple access (TDMA): the readers are programmed to read tags at different times instead of simultaneously. To handle the case where a reader has to read several tags in the same field, the Auto-ID Center developed a scheme in which tags respond only if their first digits match the digits emitted by the reader. To solve the frequency problem, the Auto-ID Center has designed reference specifications that allow chips of different frequencies to be read (agile readers). This saves the cost of having a separate reader for each frequency.
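The prefix-matching scheme just described amounts to a binary tree walk. The following sketch illustrates the idea with made-up 4-bit identifiers; real tags use far longer IDs and a hardware air-interface protocol, so this is only a conceptual model.

```python
# Simplified sketch of prefix-based (binary tree-walking) singulation:
# the reader broadcasts a bit prefix; every tag whose ID starts with that
# prefix answers. On a collision (more than one answer) the reader extends
# the prefix with 0 and with 1 and recurses. IDs below are made up.

def query(tags, prefix):
    """Tags that would answer a broadcast of `prefix`."""
    return [t for t in tags if t.startswith(prefix)]

def singulate(tags, prefix=""):
    """Return the IDs the reader identifies, one at a time."""
    responders = query(tags, prefix)
    if not responders:
        return []
    if len(responders) == 1:          # exactly one answer: tag identified
        return responders
    # collision: split the responding population by the next bit
    return singulate(tags, prefix + "0") + singulate(tags, prefix + "1")

tag_ids = ["0101", "0110", "1100", "1101", "0011"]
print(sorted(singulate(tag_ids)))     # every tag is read exactly once
```

The number of queries grows with the number of colliding tags, which is why the achievable read rate (50 tags/s for HF, about 200 tags/s for UHF, as mentioned in Section 5.2.3) depends on the protocol and frequency.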
5.3 Succinct Guideline for RFID Deployment Some information has already been given concerning the choice of tags, but effective use of RFID requires seamless integration of the technology into the environment of the company, particularly into its IT infrastructure. In this section, we emphasize some of the most important factors to consider when introducing RFID in a production system and, in particular, in a supply chain.
5.3.1 Choice of the Technology As mentioned in the RFID Journal of March 31, 2003, decision makers have to take three general aspects into account when making their choice: 1. The needs of their corporate environment. This relates to the evolution of the corporate environment regarding RFID, and thus to competitiveness. 2. The desires of their trading partners. For instance, the well-known example of Wal-Mart shows that a powerful retailer can force its suppliers to use RFID. 3. The needs of the industry concerned, and thus of the production performed by the company. For instance, valuable goods or time-dated products may require specific capabilities from the RFID system.
5.3.2 Analysis of Problems that May Happen Some problems concerning tags have already been mentioned in Section 5.2.3. Other problems may result from inappropriate handling of items; dealing with this kind of problem usually requires establishing efficient handling procedures. We also have to include here the collision problems already mentioned in Section 5.2.4, which is dedicated to the RFID system developed by the Auto-ID Center at MIT.
5.3.3 Matching RFID with IT As mentioned before, introducing RFID in a supply chain is not only a technical problem. RFID technology has characteristics that may oblige companies to reorganize and, possibly, redesign their IT systems. The first important characteristic is the volume of data to handle, due to:
• Real-time data exchange related to the use of RFID, which leads to the automation of some activities previously performed by employees. The expertise of these employees and managers should be analyzed and translated into software, which in turn drastically increases the volume of data to be processed.
• The tendency to increase granularity (i.e., the volume of information concerning an item) in order to develop more automated applications related to maintenance, security, quality, real-time management, etc.
• The integration of the new system with upstream and downstream applications, which will also require a great deal of analysis and programming.
This often requires redesigning the whole data-processing and communication system. Thus, the implementation of RFID applications requires expanded, and even new, IT infrastructure. New computers, application programs, tag readers and antennas will be deployed in distribution centers, transportation resources, factory floors and warehouses. This will create new services around IT. Finally, the objective is to create an IT architecture that can appropriately manage, analyze and handle the new wealth of data generated not only by existing applications but also by those made possible by the new technology. Processing this vast amount of data will take advantage of recent developments in grid computing, a form of distributed computing that is currently spreading. In our case, a grid can be a set of independent computing clusters that process data at different levels of the supply chain and cooperate among themselves.
5.4 RFID Applications 5.4.1 Application to Inventory Systems 5.4.1.1 Causes of Inventory Inaccuracy Retailers are rarely completely aware of the number of items in their inventory. At least once a year, a physical count of SKUs (stock keeping units) is conducted. Comparing the actual quantities stored with the quantities in the inventory records is often surprising. As mentioned in (Kang and Gershwin, 2005), the best situation encountered in companies is one where 75 to 85% of inventory records match the actual inventory perfectly. In some cases, however, 65% of the inventory records at retailers are inaccurate. The reasons for inventory inaccuracy usually fall into one of four categories:
• Stock loss, which is apparently the most common explanation for the discrepancy between actual and recorded inventory. Stock loss is called shrinkage. It may be due to theft, defects caused by inappropriate handling, out-of-date time-dated items, etc. Undetected shrinkage, resulting from theft or inappropriate handling, does not automatically lead to updated inventory records. As a consequence, decisions are made based on false data.
• Transaction errors. These may happen when shipment records are taken for granted and do not reflect the physical inputs to the inventory, as is the case when employees in charge of inventory management rely on shipment records to simplify their work. Other transaction errors are label errors and approximate checkouts. We can also mention in this category items that are introduced into the inventory without being recorded.
• Inaccurate location of items in the inventory, or inaccessible items (for instance, heavy items such as a truck engine stored behind other engines). This situation often results from inadequate storage facilities.
• Incorrectly labeled items. This may happen when the wrong tags are attached to items.
In Section 5.6, we show that the use of RFID helps reduce shrinkage, reduce the misplacement of products in storage facilities and suppress transaction errors due to incorrect scanning of products.
5.4.1.2 Illustrative Example In the following example, we show that inventory inaccuracy may lead to a substantial loss of bottom-line profit. For simplicity, this example concerns a single item type. Well in advance, the wholesaler orders a quantity q0 from the provider. The deadline accepted by the wholesaler is t. At the same time, the wholesaler makes a commitment to the retailers, informing them that a quantity q0 will be available at time t. The selling activity takes place over a single period (one month, for instance), and there is no supply option (i.e., no replenishment can take place during the selling period). The quantity q1 that is really available at time t differs from q0. We assume that q1 < q0, due to stock loss, inaccurate location or incorrectly labeled items. Nevertheless, the inventory costs should be computed based on q0 (and not q1), since q0 items have been delivered to the wholesaler. To keep the formulation simple, we set q1 = q0 × (1 − α), with 0 < α < 1. The total demand of the retailers is random and depends on q0. More precisely, the demand is the value taken by a random variable X with values in [0, q0]. The probability density of X is f_X(x) = exp(a × x) − 1, with a > 0. The demand is continuous. Indeed,
∫_0^{q0} f_X(x) dx = 1, which leads to the value of a by solving:

∫_0^{q0} [exp(a × x) − 1] dx = (1/a)[exp(a × q0) − 1] − q0 = 1
For instance, for q0 = 4, we obtain a = 0.107711 using Newton's method. The corresponding density function is represented in Figure 5.3.
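The computation of a can be sketched as follows (a minimal Newton iteration; the function name, starting point and tolerance are our choices, not specified in the text):

```python
# Sketch of solving (1/a)[exp(a*q0) - 1] - q0 = 1 for a with Newton's
# method, as done in the text for q0 = 4.
import math

def solve_a(q0, a=0.05, tol=1e-12):
    """Newton iteration on g(a) = (exp(a*q0) - 1)/a - q0 - 1."""
    for _ in range(100):
        e = math.exp(a * q0)
        g = (e - 1.0) / a - q0 - 1.0
        dg = (q0 * a * e - (e - 1.0)) / a**2   # g'(a)
        step = g / dg
        a -= step
        if abs(step) < tol:
            break
    return a

print(solve_a(4.0))   # close to the 0.107711 reported in the text
```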
Figure 5.3 Probability density for q0 = 4
Two costs should be taken into account:
• The inventory cost IC: IC = ic × [q0 + Max(q0 − x, 0)]/2, where x is the demand and ic is the inventory cost of one unit during the selling period. This expression is based on two assumptions: (i) the flow of items sold is constant during the selling period, and (ii) in the case of stock loss or inaccurate location, an inventory cost is still generated. These assumptions are common in inventory management.
• The backlogging cost BC: BC = −bc × Min(q1 − x, 0), where bc is the backlogging cost of one unit during the selling period. In this expression, −Min(q1 − x, 0) is the demand that is not satisfied at the end of the selling period.
For each value of α, 1000 simulations have been made, and the mean values of IC and BC have been computed, as well as the total average cost TC (the sum of the two means). The results are given in Table 5.1 and summarized in Figure 5.4 for α = 0, 0.01, ..., 0.19, 0.2. In this example, q0 = 10, the inventory cost is ic = 2 and the backlogging cost is bc = 25.
Table 5.1 Costs over α
α                   0      0.01   0.02   0.03   0.04   0.05   0.06
Inventory costs     13.47  13.31  13.39  13.21  13.26  13.21  13.23
Backlogging costs   0      0.02   0.09   0.29   0.44   0.60   0.97
Total costs         13.47  13.33  13.48  13.50  13.71  13.82  14.20

α                   0.07   0.08   0.09   0.10   0.11   0.12   0.13
Inventory costs     13.42  13.18  13.20  13.44  13.35  13.44  13.29
Backlogging costs   1.27   1.97   2.11   2.36   3.48   3.31   4.04
Total costs         14.70  15.16  15.31  15.80  16.73  16.75  17.33

α                   0.14   0.15   0.16   0.17   0.18   0.19   0.20
Inventory costs     13.23  13.37  13.28  13.25  13.23  13.34  13.36
Backlogging costs   5.42   5.54   6.73   7.49   8.57   9.31   9.59
Total costs         18.66  18.92  20.02  20.74  21.80  22.66  22.95
Figure 5.4 Representation of the results
As we can see, the greater α, the greater the backlogging cost, while the inventory cost remains the same: even if an item is lost, inaccurately located or incorrectly labeled, it still belongs to the store and contributes to the inventory cost. Assume now that a tag is attached to each item. In this case, it becomes hard to lose items: a reader located where the items should be stored would prevent misplacement, and theft would become very difficult with readers placed at the right locations. In fact, the only remaining error would be storing wrong information in the tags, which is unlikely if the initial information is double-checked. As expected, the backlogging cost increases as α increases: this represents the cost resulting from inaccurate inventory or, in other words, from shrinkage, transaction errors, inaccurate locations, incorrect labels, etc. Thus, introducing an RFID system would improve the retailers' service level, which is of utmost importance from a commercial point of view. The cost of introducing the RFID system, however, remains to be estimated. The objective of this simple example was to give an insight into RFID use in inventory management. Other applications are currently emerging, such as real-time shipment processing and automated inventory updating at distribution centers, as we explain below.
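The experiment behind Table 5.1 can be reproduced with a short Monte Carlo sketch. The text does not specify how demand was sampled, so the sampling method below (bisection on the CDF) is our own choice, and the resulting averages only approximate the table; the cost formulas and parameters (q0 = 10, ic = 2, bc = 25, 1000 runs) follow the text.

```python
# Monte Carlo sketch of the wholesaler example: demand X on [0, q0] with
# density exp(a*x) - 1, IC = ic*(q0 + max(q0 - x, 0))/2 and
# BC = -bc*min(q1 - x, 0) with q1 = q0*(1 - alpha).
import math, random

Q0, IC_UNIT, BC_UNIT, N_SIM = 10.0, 2.0, 25.0, 1000

def find_a(q0):
    """Solve (1/a)[exp(a*q0) - 1] - q0 = 1 by bisection."""
    lo, hi = 1e-6, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if (math.exp(mid * q0) - 1) / mid - q0 > 1:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

A = find_a(Q0)

def sample_demand(rng):
    """Invert the CDF F(x) = (exp(a*x) - 1)/a - x by bisection."""
    u = rng.random()
    lo, hi = 0.0, Q0
    for _ in range(60):
        mid = (lo + hi) / 2
        if (math.exp(A * mid) - 1) / A - mid < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def average_costs(alpha, seed=0):
    """Mean inventory and backlogging costs over N_SIM simulated periods."""
    rng = random.Random(seed)
    q1 = Q0 * (1 - alpha)
    ic = bc = 0.0
    for _ in range(N_SIM):
        x = sample_demand(rng)
        ic += IC_UNIT * (Q0 + max(Q0 - x, 0.0)) / 2
        bc += -BC_UNIT * min(q1 - x, 0.0)
    return ic / N_SIM, bc / N_SIM
```

Running average_costs for α between 0 and 0.2 reproduces the pattern of Table 5.1: the mean inventory cost stays near 13.3 while the mean backlogging cost climbs from 0 to roughly 9.5.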
5.4.2 RFID Systems in Supply Chains Inventory systems are parts of supply chains. In this section, we consider inventories belonging to supply chains, keeping in mind that the relationship between
RFID and inventories is only one aspect of the influence of RFID on supply chains. A more complete analysis of this influence is developed in Section 5.6.
5.4.2.1 A Three-echelon Supply Chain: Description and Management To illustrate the influence of RFID on supply chains, we consider a simple three-echelon supply chain composed of a manufacturing system (MS), a distribution centre (DC) and n retailers (see Figure 5.5). The daily production of the MS is shipped to the DC once a day. We assume that one day is required to transfer products from the MS to the DC. This includes packaging, transportation, physical assignment in the DC and, if applicable (that is, if RFID is not used), registration of the transfer. A retailer is composed of a storage facility (called the backroom, BR) and a shelf (SH) to display the products. Two days are required to transfer products from the DC to any one of the retailers' BRs. A shelf replenishment decided on day j takes effect on day j+1. For simplicity, only one product type is considered in this example. The dynamics and management of the supply chain can be summarized as follows:
• Every day, shelf i, i ∈ {1, 2, ..., n}, has to meet the demands of customers. The sum of the demands during one day is the value taken by a random variable that obeys a Poisson distribution of parameter λi.
• The replenishment of shelf i is based on an (si, Si) policy: as soon as the number xi of products on the shelf becomes less than si, a quantity Si − xi is taken from the BR and transferred to the shelf. This quantity will be available to customers the next day.
• The replenishment of backroom i is based on an (ssi, SSi) policy, and replenishments are ordered from the DC.
• Every evening, each retailer i informs the MS of the quantity di sold during the day. The MS will launch the total demand D = ∑_{i=1}^{n} di into production the next day. We assume that the MS is able to produce D units in one day.
The model is represented in Figure 5.5.
Figure 5.5 A three-stage chain
We assume that:
• At the MS level, manufacturing problems (lack of quality, for instance) are solved by the end of the same day. In other words, the efficiency of the MS can be considered perfect.
• The shrinkage in the DC is equal to a% of the items present in the DC at the beginning of the day. This percentage includes the shrinkage during the transfer between the MS and the DC.
• The shrinkage in the backroom of retailer i is equal to vi% of the inventory at the beginning of the day. This percentage includes the shrinkage during the transfer between the DC and the backroom.
• The shrinkage on the shelf of retailer i is equal to wi% of the items held on the shelf at the beginning of the day. This percentage includes the shrinkage during the transfer between the backroom and the shelf.
We assume that stock taking is done once every Js days by the retailers in the BRs and SHs, and every Jd days in the DC. When retailers take stock, they observe that the quantities on the shelves and in the backrooms are less than the theoretical quantities (i.e., the quantities registered in the computer). This is due to shrinkage. As a consequence, the numbers of items on the shelves and in the backrooms stored in a computer file should be reduced to match the real values provided by the stock taking. Due to the (s, S) management policy presented above, additional items are then automatically ordered by the shelves from the backrooms, and by the backrooms from the DC. The management of the DC is slightly different: when a stock taking happens, the total shrinkage since the previous stock taking is added to the quantity to be manufactured by the MS the next day. 5.4.2.2 Deployment of RFID We assume that RFID is deployed and that tags are introduced at the item level (in other words, one tag is attached to each item). Thus, each single item is tracked
5.4 RFID Applications
177
in the supply chain, which ensures that any item that disappears from the system is detected the same day. However, some shrinkage remains undetected (for instance, shrinkage due to lack of quality): items that physically belong to the inventory but are unusable because of damage or defects.
When RFID is not deployed, shrinkage is detected and corrected only periodically, at the times an inventory is performed. In the meantime, the system is managed based on the data stored in the computer, which are usually inaccurate. This leads to stock shortages and delays to customers. To deal with shrinkage, one solution is to increase the initial inventory levels of the shelves, backrooms and DC. The numerical examples presented in the next section illustrate these remarks.
5.4.2.3 Illustrative Examples
We consider the three-echelon supply chain presented in Figure 5.5, with three retailers. The daily demands to the retailers are identical and follow a Poisson distribution of parameter 2, which means that the mean daily demand to a retailer is 2 items. We assume that Js = 100 and Jd = 300. Simulations of 15 000 days have been performed to evaluate the average number of stock shortages at the different levels of the supply chain according to the percentage of undetected shrinkage. The ( s, S ) replenishment policy is (15, 20) for the shelves and (20, 30) for the backrooms. Note that the retailers' behaviors are identical on average. As you can see in Figures 5.6–5.8, when the proportion of shrinkage increases, the system is more often out of stock at each level of the supply chain.
Figure 5.6 Stock shortages in the DC according to the proportion of undetected shrinkages
178
5 Radio-frequency Identification (RFID): Technology and Applications
Figure 5.7 Stock shortages in the backrooms according to the proportion of undetected shrinkages
Figure 5.8 Stock shortages in the shelves according to the proportion of undetected shrinkages
To manage this situation when RFID is not in use, you have to postpone deliveries and/or hold larger inventories to avoid stock shortages. Another solution is to increase the frequency of stock taking. The supply chain behaves best when inventories are taken daily, which is what RFID makes possible, at least partially.
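To make the mechanism behind these curves concrete, the following sketch simulates a single shelf managed with the (s, S) = (15, 20) policy and Poisson demand of mean 2 used above. Replenishment decisions rely on the recorded inventory, which drifts away from the actual one between stock takings. The code is our own minimal illustration, not the simulator used for Figures 5.6–5.8.

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_shelf(days, s=15, S=20, mean_demand=2.0,
                   shrink=0.0, stocktake_every=100, seed=7):
    """Count shortage days for one shelf under an (s, S) policy.

    Ordering is driven by the *recorded* inventory; undetected
    shrinkage makes it overstate the *actual* stock until the
    periodic stock taking reconciles the two.
    """
    rng = random.Random(seed)
    actual = recorded = S
    shortages = 0
    for day in range(1, days + 1):
        # Undetected shrinkage: each item vanishes with probability `shrink`.
        actual -= sum(1 for _ in range(actual) if rng.random() < shrink)
        demand = poisson(rng, mean_demand)
        if demand > actual:
            shortages += 1          # at least one customer not served today
        served = min(demand, actual)
        actual -= served
        recorded -= served          # the computer only sees the sales
        if day % stocktake_every == 0:
            recorded = actual       # stock taking corrects the records
        if recorded <= s:           # (s, S) replenishment on recorded level
            order = S - recorded
            actual += order
            recorded += order
    return shortages
```

With shrink = 0, the recorded and actual levels coincide and shortage days are essentially absent; with a few percent of daily undetected shrinkage, the records overstate the stock between stock takings, orders are not triggered, and shortage days accumulate, mirroring the trend of Figures 5.6–5.8.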
5.4.3 Various Applications Related to Movement Tracking
Applications related to movement tracking are varied. In this section, we present some of these applications in different activity domains.
Wal-Mart, a US public corporation that runs a chain of large discount department stores, required its most important suppliers (more than 100) to place tags on all shipments in order to improve inventory management. This decision triggered waves of RFID applications in various industries. The US Defense Department used the same strategy. Unfortunately, the read rate does not exceed 80%, due to radio-wave attenuation caused by packaging and the environment.
In some hospitals, tags are attached to newborn babies in order to prevent kidnapping and to secure the identity of the babies. Another application in hospitals is to track equipment as it moves from room to room. This facilitates resource management, reduces theft and simplifies maintenance. A third application consists of attaching tags to surgical patients and recording in the tags the surgical procedures needed or performed: this prevents mistakes.
Some schools require children to wear bracelets in which tags are incorporated, to monitor attendance and locate children.
In Taiwan, RFID is under study to reduce delays in import cargo customs clearance at air-cargo terminals. This is motivated by a study of the International Air Transport Association (IATA) that found that flying accounts for only 8% of total transport time, while clearance requires the remaining 92%. Similar applications can be found in car-shipment yards, where RFID is used to build the shipment loads. The fact that information is provided in real time by the wireless tracking system drastically reduces vehicle dwell time, improves customer satisfaction and increases transportation resource utilization.
Numerous companies use RFID to track item movements.
Three well-known examples are mentioned hereafter:
• UNILEVER uses RFID to track its consumer products in warehouses.
• The port of Singapore implemented RFID in conjunction with an electronic data interchange (EDI) system to track containers in the port and to manage their arrivals and departures.
• The Mexican Ford Motor Company facility uses RFID to route vehicles through its automated manufacturing systems.
Some unusual applications are mentioned in the literature such as, for example, the following:
• In 2004, members of the Baja Beach Club of Barcelona received syringe-injected RFID tag implants, which enabled them to pay for their drinks automatically, without having to reach for their credit card or wallet.
• In some golf clubs, tags are incorporated into the balls, so that balls can be located easily by means of portable readers.
Numerous other applications already exist in our everyday life, such as:
1. Proximity cards (badges used to enter protected areas, for instance).
2. Automated toll payment transponders.
3. Ignition keys of some vehicles. They include RFID tags and thus reduce the risk of theft.
Putting imagination to work, the following applications are conceivable:
• If tags are incorporated in clothes, they could communicate with washing machines and transfer the characteristics of the clothing material so that the machine can automatically select the appropriate washing cycle.
• If tags are incorporated in the packages of some perishable food products (milk, soup, processed food, etc.), the products can communicate with the refrigerator and warn the customer if the product is out of date.
• If tags are incorporated at the item level in retail shops, invoices can be established automatically at the point-of-sale terminal and, even, charged automatically to the customers' payment devices.
• Still with item-level tagging, returning defective items would be simplified, and it would be easier for the store to verify that the merchandise was theirs.
• We already mentioned the medical usefulness of RFID technology. This technology can also be used to check whether medications are taken on time, or to help the elderly navigate in their homes, find them if they get lost, etc.
The cases mentioned in this section are a tiny part of the applications currently in use throughout the world. Larger applications are forecast in the near future in volume retailers.
5.5 Some Industrial Sectors that Apply RFID Industries that can take advantage of RFID technology to manage specific supply chains are mentioned hereafter. We restrict ourselves to the most significant industrial sectors.
5.5.1 Retail Industry
This industry currently uses RFID technology the most intensively. For example, one estimate put Wal-Mart's annual savings due to RFID at 8.35 billion US dollars. This total includes 6.7 billion from reducing the workforce required for scanning bar codes, 600 million from avoiding stock-outs, etc.
GAP Inc. is one of the world's largest specialty retailers, with more than 3100 stores. It operates four of the most recognized apparel brands in the world: GAP, Banana Republic, Old Navy and Piperlime. GAP uses high-frequency RFID technology for theft avoidance and item detection. Furthermore, GAP has tested the use of RFID tags to track sales of denim clothes and was able to monitor inventory precisely, keeping track of the sizes and types of clothes.
Gillette, the razor manufacturer, launched a large RFID project. It has installed tags on millions of individual items to keep track of them as they move along the supply chain. Several retailers have been involved in the project, some of which, like Wal-Mart or Metro, had already converted to RFID technology.
Woolworth's was a retail company that was one of the original American five-and-dime stores. It designed an RFID system that tracks items from the time they leave the distribution centre until the time they arrive at a specific store.
5.5.2 Logistics
FedEx, founded as Federal Express in 1971, greatly expanded in 1978 after airline deregulation. In this company, a reader is mounted at each of the four doors of the delivery vehicles and at the right side of the steering column. This system allows the driver to safely enter the vehicle, lock the doors and start the engine.
Metro Group Logistics (MGL) was, in 2007, a provider of logistics services in 25 countries employing 3800 people. The use of RFID technology in its stores and distribution centers reduced losses during transit by 11 to 14%.
YCH Group is the leading company for logistics and supply chain management in the Asia Pacific region. Employees attach tags to pallets of bonded goods when they arrive at a warehouse. This identifies products and their storage locations on arrival. In addition to product ID and location, the tags contain other information such as the next destination and spatial features. The RFID system allows better supply chain visibility, which can lead to higher profits.
5.5.3 Pharmaceutical Industry
This area is propitious for RFID technology since very expensive products are handled in this industry and the level of risk is often very high. We would like to mention a particular application that consists in identifying prescriptions for visually impaired veterans in the US: a special talking reader is used to inform the patient about the drug (name, instructions, warnings, etc.). In the pharmaceutical industry, RFID is also expected to reduce or even eradicate counterfeiting.
5.5.4 Automotive Industry
RFID technology is currently in use by Toyota (at the painting stations), Harley-Davidson (at the assembly stations) and BMW (tags provide all the information required for assembly), to mention just a few. It should be noted that RFID technology is pivotal in the automotive industry, due to the following factors, which require better and faster tracking ability as well as more information available "on the items":
• The increase in the number of different models and variants of each brand in an environment where car sales have remained static.
• The introduction of new regulations (end-of-life vehicle – ELV – and block exemption regulation – BER – in the European Communities). The ELV regulates the take-back and recycling of used cars, while the BER allows suppliers to sell spare parts on the market.
• Customers' requirement for better after-sales service, which calls for faster and safer access to genuine spare parts.
In the automotive industry, RFID technology provides immediate access to vehicle information such as, for example, registration number, owner ID, insurance information, shipping date, receiving date and destination. Thus, maintenance management is improved and stolen cars can be tracked and recovered.
5.5.5 Security Industry
Some departments of rehabilitation and correction use RFID technology to keep watch on inmates: an alert is sent to prison computers if a prisoner tries to remove their watch-size transmitter.
5.5.6 Finance and Banking Industry
Smartcards embedded with RFID chips are used as electronic payment cards all around the world. This tool is more or less successful, depending on the country.
5.5.7 Waste Management A number of applications are expected in this domain. One of them concerns waste bins: as a waste bin is emptied and parked, the information is stored in the
tag fixed to the bin. This allows managing waste collection and tracking the bin, if necessary.
5.5.8 Processed Food Industry
There are specific levels in processed food supply chains where significant exposure to risk exists. Admittedly, approaches such as control at critical points of the process, hazard analysis and employee training exist to prevent risk. Nevertheless, item recalls may happen. In the case of a recall, the use of RFID ensures that only items from specific production lines are withdrawn, instead of the whole production.
5.6 Advantages when Applying RFID Technology to Supply Chains
In this section, we sum up the main advantages of RFID technology when applied to supply chains, assuming that reader portals are installed at critical points of the chains.
Studies conducted in distribution centers show that bar-code reliability is below 80%, while RFID used by retailers and in airports achieved over 99% reliability. Readers (interrogators) can read multiple tags instantaneously. This drastically reduces the item-identification lag and allows the automation of many time-consuming tasks, such as scanning inventory inputs and outputs and checking the inventory state: these tasks can be done in real time. Automation of storage activities results in lower inventory levels (and thus buffer reduction) and fewer out-of-stock (shortage) occurrences, which, in turn, speeds up the physical flow. This aspect is of utmost importance in supply chains. Indeed, inventory and labor costs decrease accordingly.
Some other improvements can be mentioned:
• The data provided by the reader portals located at the entrance of the inventories can be matched against the purchase orders and, in the case of a discrepancy, action can be taken immediately (rejection of the delivery concerned, launching a new purchase order, etc.).
• A product arriving in a storage facility is identified through RFID and its placement is fixed automatically, thus avoiding misplacement. At the same time, the state of the inventory files in the computer is updated.
• The locations of the products in storage facilities are automatically stored in computer files. As a consequence, scanning bar-code identifiers becomes unnecessary when picking up products for production or for sale, and employees in
charge of replenishment will not have to search for products that are not at their specified locations. Thus, picking operations can be automated.
• Customer billing can be automated using the data provided by a reader portal located at the exit of the system and a database dedicated to orders and products.
RFID technology allows for tracking products throughout the supply chain, from supplier delivery to warehouses and points of sale, thus increasing competitiveness by faster response to customer demands and production problems, reduction of delivery disputes, reduction of uncertainty on factors causing fluctuations in the process, better readability of the quantity and quality of items produced, improvement of safety (in particular, reduction of counterfeiting) and reduction of waste and theft. A precise knowledge of the state of supply chains simplifies warranty-claim management and, more generally, allows making decisions based on secure data.
It should also be noted that RFID technology increases security when used in domains like food supplies (when goods are perishable) or potentially hazardous substances. Tracing food "from farm to fork", which is now mandatory for a large spectrum of food products (at least in Europe), is another domain of application of RFID technology.
Further RFID technology advantages include:
• Reducing administrative errors, due to the automation of data transfer.
• Simplifying production management, due to the ability to track work-in-progress in real time.
• Reducing rework, due to early information related to possible production problems.
• Providing access to major clients who require RFID use from their providers.
• Controlling the quality of items as they move along the supply chain. RFID technology permits the collection, in real time, of information that influences quality. For instance, tags can monitor temperature, bacteria density, degree of humidity, etc. Note that active or semi-active tags are required to perform such monitoring.
• Managing yards, warehouses and factories. For instance, tags may be used to direct trucks to the most efficient drop-off locations.
• Robustness of tags is another important aspect of RFID technology. RFID can work in dirty, wet, oily, warm or otherwise harsh environments. Also, passive tags can last for extensive periods, making the technology dependable.
To conclude, RFID technology leads to labor reduction, allows precise management, improves safety, allows efficient management of assets and thus leads to a drastic reduction of production costs. Reduction of the production cycle is another aspect that benefits from this technology.
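The first portal-side check listed above, matching reader-portal data against a purchase order, reduces to a multiset comparison. The helper below is a hypothetical sketch (the function name and data shapes are our assumptions), supposing the portal reports one product ID per tag read:

```python
from collections import Counter

def reconcile(purchase_order, portal_reads):
    """Compare ordered product IDs against IDs read at the inbound portal.

    Returns (missing, unexpected): quantities ordered but not read,
    and quantities read but not ordered, per product ID.
    """
    ordered = Counter(purchase_order)
    received = Counter(portal_reads)
    missing = dict(ordered - received)      # short-shipped items
    unexpected = dict(received - ordered)   # items not on the order
    return missing, unexpected
```

An empty pair of dictionaries means the delivery matches the order; anything else can trigger the immediate actions listed above (rejecting the delivery concerned, launching a new purchase order, etc.).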
5.7 Expert Opinion on the Matter
The most common opinions about the expansion of RFID technology in industry are given hereafter. The list is not exhaustive.
• Competitive pressure will be the most important factor for adopting this technology, as shown in the Wal-Mart and US Defense Department cases. Thus, cost evaluation will not be the major factor.
• At the beginning, RFID will simply be used to replace bar codes. In other words, working habits will not change just after implementing this new technology. We can see some similarities with the introduction of robots, or the implementation of computers in the 1960s.
• The dissemination of the technology requires a critical mass that has not been reached so far.
• This technology will have a very positive effect on quality control.
• RFID increases supply chain visibility which, in turn, favors management and, in particular, reduces negative side effects such as the bullwhip effect.
• The high cost of RFID will slow down the spread of this technology.
• Since companies continue to use internal item identification instead of the EPC code, the spread of RFID will be slower.
• The degree of concentration of RFID within an industry segment will affect the spread of this technology.
5.8 Economic Evaluation of the Use of RFID in Supply Chains
5.8.1 Current Situation
In Section 5.6, we mentioned several measurable benefits of using RFID in supply chains. Many studies have been conducted to evaluate the return on investment (ROI) based on the comparison between direct costs and benefits. Unfortunately, the costs and benefits resulting from the transformation of processes and production systems due to RFID are often difficult to ascertain. In other words, introducing RFID results in changes in production systems that depend on the way they are used, and these factors cannot usually be evaluated precisely. For example:
• The amount of work required to adapt the information system to RFID is difficult, or even impossible, to evaluate accurately. It may be very costly since, in some circumstances, it requires the design of a completely new information system that is usually much more complex than the previous one: the fact that RFID provides real-time data argues for the automation of new activities, which
requires new software developments and new resources. This aspect has been mentioned in Section 5.3.3.
• We are able to evaluate the direct loss due to inventory shrinkage, but it is much more difficult to evaluate the cost due to stock-outs resulting from shrinkage (indirect loss). A study conducted at MIT, see (Kang and Koh, 2002), concluded that indirect loss can be 30 times greater than direct loss.
• An increase in production efficiency is expected, but is difficult to evaluate in terms of benefits.
• Benefits resulting from product availability on retailers' shelves are usually evaluated in terms of sale improvements observed "a posteriori", but it is usually impossible to evaluate the consequences of these benefits for the image of the company, and thus their long-term effect on the company.
• The previous remark also holds for the reduction of loss associated with product obsolescence.
It should be noted that even "first range" benefits are not clearly evaluated. Studies conducted to evaluate the reduction of inventories when using RFID provide estimates that are neither precise nor consistent. Some estimates show inventory reductions of 10 to 30% in supply chains (Booth-Thomas, 2003), while others report 5% (Arens and Loebbecke, 1981), 8 to 12% (SAP, 2003) or 1 to 2%. The reduction of labor cost due to RFID is evaluated at 30% in distribution and, depending on the study, at 17% or 7.5% for retail stores. Other estimations claim that the saving in the receipt of products in inventory facilities is 7.5% or 5 to 40%, depending on the study. Other figures are 9% in manufacturing, 90% or 100% in physical inventory counting, and 0.9 to 3.4% in stores.
Concerning shrinkage and out-of-stock reduction, the results mentioned in white papers and reports are also varied. Some of these results are listed hereafter:
1. The reduction of errors when picking a product in inventory is 5%.
2. Thefts at retailers are reduced by 11 to 18%.
3. Shrinkage at retailers decreases from 1.69% to 0.78%.
4. Thefts from shelves decrease by 9 to 14%, while the reduction ranges from 40 to 59% in stores.
5. Stock availability increases by 5 to 10%.
To conclude, the least we can say is that the benefits resulting from the use of RFID are fuzzy and often simply qualitative. The situation is even worse when evaluating RFID system cost. We have little, and diverse, quantitative information on the cost of acquiring and implementing RFID systems. Admittedly, the costs of tags and readers, as well as the training costs, are known. But, as mentioned earlier, the cost of analyzing and designing an information system that fits with the RFID system is usually unpredictable and often very high.
5.8.2 How to Proceed?
As aforementioned, evaluating RFID technology is still an open issue. Similar situations occurred in the past when computers, and later robots, were introduced in production systems. At that time, nobody was really able to evaluate the consequences of introducing computers to help manage inventories, or robots to automate manufacturing tasks. In-house reports from that time show that they were tailored more to support decisions made "a priori" than technically sound. Today, everybody understands the usefulness of these technologies, despite the fact that many users were initially very disappointed and numerous projects failed when these technologies were introduced. The situation of RFID is similar: it is very risky to be among the first to try RFID, but using this technology is inevitable in the near future.
Three main approaches are in use to evaluate the introduction of new technologies in a specific environment:
1. Ask experts to make their evaluations, and then organize brainstorming until the experts share the same conclusions. Experts proceed by using their professional backgrounds, looking for similar situations in their past and extrapolating the conclusions related to those situations. Of course, the greater the gap between their work experience and the technology they are trying to evaluate, the more questionable the results. For the introduction of RFID in supply chains, comparison with bar codes is usually one of the bases experts use to build their evaluation.
2. Build a simulation model of the system under study and derive the evaluation of the technology from the results of the simulations. This approach, however, applies to specific problems. Furthermore, since simulation models do not capture all the characteristics of a real situation, some expertise should be put to work to infer the evaluation of the system.
3.
Make an accurate and systematic analysis of the effects of applying the technology (RFID in our case) to the problem at hand.
The first two approaches are quite reasonable for technological advances that are evolutionary in nature. In other words, using a rear-view mirror is acceptable to evaluate situations that are reasonable extensions of past situations. But a specific analysis is required to evaluate technologies that present a gap with the current ones. The analysis needs analytical models that link operational behavior to the decision-making system and, ultimately, to performance measures. It seems that RFID requires this type of approach. When applying the third approach, we have to keep in mind the following points:
• Inventory shrinkage is a major problem for retailers and manufacturers. RFID can help address the problem in two ways:
– RFID brings inventory records closer to the actual inventory: this results in a more accurate replenishment policy which, in turn, leads to fewer stock-outs.
– RFID prevents misplacements, avoids fraud and reduces transaction errors, which leads to more accurate inventory records and, as above, to fewer stock-outs.
• Reducing inventory shrinkage leads to stock reduction, inventory savings, out-of-stock reduction and sales increases. The benefits associated with these improvements are difficult to capture in terms of money. The best way to reach this goal is to build models (analytical and simulation models) designed for the specific problem under study. Of course, building such models requires making assumptions on the efficiency of RFID, the behavior of customers, the behavior of employees, the quality of products, etc.
• The use of RFID increases the visibility of the supply chain and, as a consequence, provides better forecasting and more efficient planning, services and inventory management. But how can the benefits associated with these improvements be evaluated? The answer is the same as the one given in the previous point.
• To evaluate labor cost savings, you have to examine the tasks that can be significantly shortened (in terms of operation times) or removed by RFID, but also the new tasks resulting from newly required competencies.
Finally, the use of RFID is not recommended if the supply chain is not "optimized" beforehand, taking into account RFID characteristics and functions; otherwise we may reach a situation worse than before and, at the least, we will be unable to identify the improvements due specifically to RFID.
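As a toy instance of such an analytical model, the helper below ties a handful of the quantities discussed in this section (direct shrinkage loss, the indirect-loss multiplier, labor savings, tag and integration costs) into a single net-benefit figure. Every parameter name and every number in the example is our own placeholder assumption, to be replaced by outputs of the problem-specific models advocated above:

```python
def annual_net_benefit(direct_shrink_loss, indirect_multiplier,
                       shrink_reduction, labor_cost, labor_saving_rate,
                       tag_unit_cost, items_per_year, integration_cost):
    """Back-of-the-envelope yearly net benefit of item-level RFID.

    Total shrinkage loss is the direct loss plus `indirect_multiplier`
    times the direct loss (stock-outs caused by shrinkage); RFID is
    assumed to remove a fraction `shrink_reduction` of that total.
    """
    shrink_saving = direct_shrink_loss * (1 + indirect_multiplier) * shrink_reduction
    labor_saving = labor_cost * labor_saving_rate
    rfid_cost = tag_unit_cost * items_per_year + integration_cost
    return shrink_saving + labor_saving - rfid_cost
```

With, say, a direct shrinkage loss of 100 000, the multiplier of 30 reported by (Kang and Koh, 2002), half the shrinkage removed, a 12.5% labor saving on a 400 000 payroll, 25-cent tags on one million items and 200 000 of integration cost, the model returns a net benefit of 1 150 000; as this section argues, though, the uncertainty on each input dwarfs the precision of the output.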
5.9 Privacy Concerns
5.9.1 Main Privacy Concerns
Considering the problems passive tags have to face (in particular, the impassable metal barrier and the cost), they are unlikely to appear on each consumer item for some years, but the objective of item-level tagging remains fresh in everybody's mind due to the numerous advantages of RFID technology mentioned in the previous sections.
Item-level tagging raises important privacy concerns. Basically, RFID tags respond to any reader request, provided the read range allows it, thus making clandestine tracking and inventorying possible. Admittedly, tags broadcast only a fixed serial number, but this serial number can be combined with personal information in some circumstances. For example, this may be the case when a customer pays for their purchases with a credit card: the vendor
may establish a link between the identity of the customer and the serial number, making it possible to use this information to compile a customer profile.
A tag may also contain information about the product to which it is attached. This may be of great interest to competitors, since they could be informed about volumes sold and stock turnover rates. Thus, RFID technology, which enhances supply chain visibility and, as a result, facilitates management and improves competitiveness, may also play a negative role.
Concerning the privacy of the individual person, clandestine reading may reveal the products they are carrying. If these products are medications, it is possible to derive from this information which illness the person suffers from, and this could be used at different levels of everyday life (insurance, hiring, etc.), or the illness of one family member could erroneously be attributed to the whole family. Clandestine tracking and reading are still quite limited, because the RFID infrastructure is not yet well developed. But privacy concerns will certainly become of utmost importance in the future. RFID privacy is already of concern in some cases:
• Recently, RFID-enabled passports became mandatory to enter some countries. Tags incorporated in these passports can be read clandestinely. They contain private information concerning the owner of the document and can be used by unauthorized persons.
• In some bookstores, due to the cost of books, tags are incorporated in books in order to control inventory, facilitate checkout and avoid theft. These may potentially be used for marketing purposes and, more precisely, for establishing the profiles of some consumers.
• Human implantation for medical purposes (an implanted tag that contains the medical record of the patient) is obviously of concern, as are the toll payment transponders already mentioned in this chapter as a common application.
5.9.2 How to Protect Privacy?
Several solutions are conceivable to protect privacy once item-level tagging is in common use and the RFID infrastructure is well developed:
• Tag killing. This solution consists in deactivating tags when their usefulness vanishes. To avoid accidental deactivation of tags, the killing command is PIN-protected. An example: in supermarkets, RFID tags on purchased items are deactivated automatically when the invoice is established and paid. In this case, tag killing is effective for privacy protection, but the benefits for after-sales services, like the management of item returns or maintenance, are lost. Also, it is impossible to use this solution in some domains, like rental activities (books, cars, etc.) or when tracking items is mandatory (food distribution, for instance).
• Putting tags to sleep is another type of solution. It consists in rendering tags temporarily inactive. Of course, this solution requires the ability to deactivate and reactivate tags in due course, which could be difficult to manage. Furthermore, a very effective control (using a PIN) should be put in place to avoid unauthorized reactivation of tags.
• Renaming tags is a third type of solution currently proposed. It consists in changing tag identifiers over time.
• Distance measurement consists in introducing additional (and low-cost) circuitry in tags so that they can make a rough measurement of the distance to the reader that tries to connect to them. Depending on the comparison of this distance with some predefined distances, the tag delivers more or less information to the reader.
• A blocking scheme is also available. It consists in introducing a so-called privacy bit in the tags. If the value of this bit is "0", scanning is limited to the identifier. Further information is delivered only if the privacy bit is equal to "1".
Other solutions exist but, whatever the solution selected to protect privacy, limitations appear. The higher the protection, the lower the benefits of RFID technology and/or the more complex the activity of the tag, which leads to growing costs.
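The last two schemes can be combined into a single tag-side read policy. The sketch below is purely illustrative (the function and field names are ours): the tag releases its full record only to a nearby reader when the privacy bit authorizes it, following the bit convention used in the text.

```python
def tag_response(tag_record, privacy_bit, reader_distance_m, near_range_m=0.1):
    """Decide what a tag reveals to a querying reader.

    - Readers beyond `near_range_m` get only the serial identifier.
    - Within range, the full record is released only when the privacy
      bit is 1; with the bit at 0, scanning stays limited to the
      identifier, as in the blocking scheme above.
    """
    if reader_distance_m > near_range_m or privacy_bit == 0:
        return {"id": tag_record["id"]}
    return dict(tag_record)  # full record: identifier plus product data
```

The same gate could sit in front of a sleep or kill command, with the PIN check playing the role of the privacy bit.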
5.10 Authentication

As cleverly mentioned in (Juels, 2006):

RFID privacy concerns the problem of misbehaving readers harvesting information from well-behaving tags. RFID authentication, on the other hand, concerns the problem of well-behaving readers harvesting information from misbehaving tags, particularly counterfeit ones.
Basic RFID tags, as well as EPC tags, are vulnerable to counterfeiting: replicating such tags requires little expertise. In principle, the EPC of a target tag can simply be skimmed and “pasted” into a counterfeit tag. Thus, counterfeit tags may appear on the market and facilitate counterfeiting, since tags may carry information that does not guarantee the authenticity of the item they are attached to. Blocking counterfeiting is a difficult task, but some possibilities exist. For instance, unique numbering of items can be effective in fighting counterfeiting in a “closed-loop” RFID system: if two items or pallets carry tags with the same serial number, counterfeiting is obvious. Another way to fight counterfeiting is to repurpose the kill function of the EPC: the kill PIN can be used to authenticate the tag to the reader.
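The duplicate-serial check mentioned above is straightforward to implement in a closed-loop system where all scans are collected centrally. A minimal sketch, with invented EPC values for illustration:

```python
from collections import Counter

# Sketch of the duplicate-serial check described above: in a closed-loop
# RFID system, two items or pallets reporting the same serial number
# reveal a counterfeit. The EPC strings below are made up for illustration.

def duplicated_serials(scanned_epcs):
    """Return the set of EPC serials seen more than once in a batch of scans."""
    counts = Counter(scanned_epcs)
    return {epc for epc, n in counts.items() if n > 1}

scans = [
    "0614141.107346.1001",
    "0614141.107346.1002",
    "0614141.107346.1001",   # same serial scanned twice: suspicious
]
print(duplicated_serials(scans))   # {'0614141.107346.1001'}
```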
To conclude, no well-established set of solutions yet exists to guarantee authenticity, but several research projects are under development.
5.11 Conclusion

RFID technology is profitable in “closed-loop” organizations. A “closed-loop” organization is a system (warehouse, supply chain, etc.) where the infrastructure that encapsulates RFID is well developed and able to connect the different components of the system. The main advantages of RFID technology in a supply chain are:

1. Real-time data on assets and goods.
2. Increased speed of the physical flows.
3. Drastic reduction of workload, and thus of labor cost.
4. Reduction of work in process (WIP) and inventories.
5. Easier management, due to real-time information.
6. Reduction of shrinkage, inaccurate locations of stored items, transaction errors and incorrectly labeled items. As a consequence, inventory records are closer to actual inventory.
7. Reduction of counterfeiting.
8. Overall visibility in supply chains, which helps maintain a competitive edge.
9. Reduction of theft and loss.

Numerous “indirect” advantages can be listed, such as reduction of rework, the possibility of becoming a supplier to major clients that require the use of RFID technology, reduction of internal administrative errors, simplification of warranty-claim management, better forecasting, etc. These advantages lead to a drastic reduction of production costs, but the benefits are usually very difficult to evaluate. As mentioned in Section 5.8.2, the only way is to establish analytical models to approximate these benefits. Other domains are taking advantage of RFID technology, as is the case of the food industry, where tracking some products “from farm to fork” is mandatory.

Nevertheless, many problems still await a solution. The cost of tags is still too high to adopt item-level tagging for consumer products. Reducing the cost of tags requires a new step forward in technology and, probably, alternative tag designs. Another open problem is the difficulty of implementing RFID technology in an “open-loop” environment. Furthermore, standards are still not stabilized, which slows down the dissemination of RFID technology. The day item-level tagging becomes economically acceptable, privacy concerns will arise, and solving this problem is of utmost importance.

Finally, the cost of implementing RFID technology in an organization depends on the development level of this organization. The cost of integrating RFID in a
coherent IT infrastructure (which must be adapted or even redesigned) and connecting it with upstream and downstream applications varies from one implementation to the next. Thus, the return on investment can probably not be achieved in the short term, but the dissemination of RFID technology in our everyday life is inevitable, as was the dissemination of data processing and communication systems in the 1970s. RFID will be a technical breakthrough of the twenty-first century, since it will have a major impact not only on almost every industry but also on our daily lives. As a consequence, RFID companies will certainly take aggressive actions toward acquiring exclusive rights through patent applications. RFID is a disruptive technology that will fundamentally change the competitive environment. Currently, RFID is still considered a sustaining technology but, according to the stand taken by several international companies, it is close to becoming part of the supply chain paradigm.
References

Arens AA, Loebbecke JK (1981) Applications of Statistical Sampling to Auditing. Prentice-Hall, Englewood Cliffs, NJ
Booth-Thomas C (2003) The see-it-all chip. Time Magazine, September 14
Juels A (2006) RFID security and privacy: A research survey. IEEE J. Select. Ar. Comm. 24(2):381–394
Kang Y, Gershwin SB (2005) Information inaccuracy in inventory systems: stock loss and stockout. IIE Trans. 37:843–859
Kang Y, Koh R (2002) MIT Auto-ID Center Research Report. Available on quintessenz.org/rfiddocs/www.autoidcenter.org/media/
Prater E, Frazier GU, Reyes PM (2005) Future impact of RFID on e-supply chains in grocery retailing. Suppl. Ch. Manag.: Int. J. 10(2):134–142
Smith AD (2005) Exploring radio frequency identification technology and its impact on business systems. Inform. Manag. Comput. Secur. 13(1):16–28
Further Reading

Baker P (2004) Aligning distribution center operations in Supply Chain strategy. Int. J. Log. Manag. 15(3):11–123
Brewer A, Sloan N, Landers TL (1999) Intelligent tracking in manufacturing. J. Intell. Manuf. 10:245–250
Brown KL, Imman RA, Calloway JA (2001) Measuring the effects of inventory inaccuracy in MRP inventory and delivery performance. Prod. Plann. Contr. 12(1):46–57
Caputo AC, Cucchiella F, Fratocchi L, Pelagagge PM, Scacchia F (2004) Analysis and evaluation of e-supply chain performances. Ind. Manag. Dat. Syst. 104(7):546–557
Chappell G, Durdon D, Gilbert G, Ginsburg L, Smith J, Tobolski J (2002) Auto-ID on delivery: the value of Auto-ID technology in the retail supply chain. Auto-ID Center. www.autoidcenter.org, November 1
Collins J (2003) Smart labels set to soar. RFID Journal, 23, Available at www.rfidjournal.com/article/articleprint/712/-1/1/
Collins J (2003) Estimating RFID’s pace of adoption. RFID Journal, 5, Available at www.rfidjournal.com/article/articleprint/675/-1/1/
Croft J (2004) High-tech baggage tracking wins in Vega. Air Transp. W. 41(3):12–13
Croft J (2004) Saving private luggage. Air Transp. W. 41(13):10–12
Erickson Research Group (ERG) (2004) Good Manufacturing Practice (GMPs) for the 21st century – food processing. Food Drug Admin. Stud., August 9
Ernst R, Guerrero J-L, Roshwalb A (1993) A quality control approach for monitoring inventory stock levels. J. Oper. Res. Soc. 44(11):1115–1127
Gunasekarana A, Ngai EWT (2005) Build-to-order Supply Chain management: A literature review and framework for development. J. Oper. Manag. 23(5):423–451
Hou J-L, Huang Ch-H (2006) Quantitative performance evaluation of RFID applications in the supply chain of the printing industry. Ind. Manag. Dat. Syst. 106(1):96–120
Hsieh C-T, Lin B (2004) Impact of standardization on EDI in B2B development. Ind. Manag. Dat. Syst. 104(4):68–77
Juels A, Pappu R (2003) Squealing Euros: Privacy protection in RFID-enabled banknotes. In: Wright R (ed), Proceedings Financial Cryptography, Lecture Notes in Computer Science, vol 2742. Springer-Verlag, New York, pp. 103–121
Juels A (2004) Minimalist cryptography for low-cost RFID tags. In: Blundo C, Cimato S (eds), Proceedings of the 4th International Conference on Security in Communication Networks. Lecture Notes in Computer Science, vol 3352. Springer-Verlag, New York, pp. 149–164
Juels A, Garfinkel S, Pappu R (2005) RFID privacy: an overview of problems and proposed solutions. IEEE Secur. Priv. 3(3):34–43
Kärkkäinen M (2003) Increasing efficiency in the Supply Chain for short shelf life goods using RFID tagging. Int. J. Ret. Distr. Manag. 31(10):529–536
Lee H, Özer Ö (2007) Unlocking the value of RFID. Prod. Oper. Manag. 16(1):40–64
Oudheusden DLV, Boey P (1994) Design of an automated warehouse for air cargo: the case of the Thai air cargo terminal. J. Bus. Log. 15(1):261–285
Petersen CG, Aase G (2004) A comparison of picking, storage, and routing policies in manual order picking. Int. J. Prod. Econ. 92:11–19
Saar S, Thomas V (2002) Towards trash that thinks. J. Ind. Ecol. 6(2):133–146
Sahin E (2004) A qualitative and quantitative analysis of the impact of the Auto ID technology on the performance of supply chains. Dissertation, Ecole Centrale de Paris, France
SAP (2003) SAP Auto-ID Infrastructure. SAP white paper
Selker T, Burleson W (2000) Context-aware design and interaction in computer systems. IBM Syst. J. 39(3–4):880–891
Selker T, Holmstrom J (2000) Viewpoint reaching the customer through grocery VMI. Int. J. Ret. Distr. Manag. 28(2):55–61
Sbihli S (2002) Developing a Successful Wireless Enterprise Strategy: a Manager’s Guide. John Wiley & Sons, New York, NY
Subramaniam C, Shaw MJ (2004) The effects of process characteristics on the value of B2B eprocurement. Inf. Techn. Manag. 5(1–2):161–180
Takaragi K, Usami M, Imura R, Itsuki R, Satoh T (2001) An ultra small individual recognition security chip. IEEE Micr. 21(6):42–49
Trappey AJC, Trappey CV, Hou J-L, Chen BJG (2004) Mobile agent technology and application for online global logistic services. Ind. Manag. Dat. Syst. 104(2):169–183
Violino B (2003) SAP takes RFID into the enterprise. RFID Journal, October 13, http://www.rfidjournalevents.com/
Westhues J (2005) Hacking the prox card. In: Garfinkel S, Rosenberg B (eds), RFID Appl. Secur. Priv., Addison-Wesley, Reading, MA, pp. 291–300
Wu W-Y, Chiag C-Y, Wu Y-J, Tu H-J (2004) The influencing factors of commitment and business integration on supply chain management. Ind. Manag. Dat. Syst. 104(4):322–333
Chapter 6
X-manufacturing Systems
Abstract In the title of this chapter, a generic X is used to designate various core concepts of manufacturing systems that are in use in the literature and industry. Mass production and the job-shop are characterized first; they are the oldest forms of manufacturing system. Flexible manufacturing systems (FMS) appeared more recently as a reaction to worldwide competition and changeable markets. Such a system can switch automatically from one type of product to another within a set of well-defined product types. The efficiency and limits of FMS are analyzed. Later, agile manufacturing systems (AMS) were introduced to deal with profound market changes as a matter of routine. AMS are compared with FMS. Reconfigurable manufacturing systems (RMS) are the most recent step towards adaptability to system configuration changes. They are discussed in detail. Another recent and popular concept is lean manufacturing systems (LMS), the emphasis of which is to systematically identify and eliminate waste from manufacturing processes, while increasing responsiveness to market changes and keeping manufacturing costs as low as possible. Six core methods to implement LMS are presented and illustrated. The ideas of LMS are compared with those of FMS, AMS and RMS.
6.1 Introduction

Until the 1960s, economy of scale was the major objective of manufacturing. It called for mass production and full utilization of plant capacity, which guaranteed moderate production costs. Mass production created enormous wealth. This kind of production was characterized mainly by large volumes with low product variety, but the corresponding manufacturing systems were inflexible, that is to say impossible to adapt to significant market changes at reasonable cost, and were associated with high WIP (work-in-progress) and inventory levels.
Mass-production characteristics are developed in Section 6.2.

Note that job-shops were already in use at that time. A job-shop is a set of facilities (lathes, milling, cutting and drilling machines, etc.) that process jobs consisting of series of operations that may differ from customer to customer. A job does not necessarily require all the facilities. A job-shop usually fills small customer orders or small batches. The job-shop problem consists in assigning jobs to machines in order to optimize a given criterion. Numerous criteria are considered for minimization, such as the makespan, maximum delay or total working time. The job-shop problem is generally NP-hard. We will not develop this type of standard manufacturing system further.

Due mainly to competition from Japanese firms, the objectives of production switched to customer satisfaction in terms of delivery time, quality and services, without abandoning the objective of reducing production costs. This new market environment required greater flexibility, drastic reduction of WIP and inventory levels, shortened production cycles, better quality in products and services, and reduction of waste. Flexible manufacturing systems (FMS), which appeared in the 1970s, were the first reaction to these changes. In Section 6.3, we list the most popular definitions of FMS and analyze the advantages and disadvantages of this type of system. The key terms that, at least partially, characterize FMS are: computer-controlled system, product quality, short manufacturing cycle times, reasonable manufacturing costs but high design cost, and limited flexibility.

Section 6.4 is devoted to agile manufacturing systems (AMS). AMSs have the ability to deal with profound market changes as a matter of routine and to quickly manufacture various types of customized high-quality products. Changes in the type of products may call for redesigning the manufacturing system.
The time needed to adapt a system to market changes and the percentage of the initial investment that can be reused when changing the system configuration are the two main measures of agility.

In Section 6.5 we introduce the most recent concept of manufacturing systems: reconfigurable manufacturing systems (RMS). RMSs are designed to permit quick changes in the system configurations, their machines and controls in order to adjust to market changes.

Another concept in manufacturing is lean manufacturing. In lean manufacturing systems (LMS), the emphasis is to systematically identify and eliminate waste from manufacturing processes, while increasing responsiveness to market changes and keeping manufacturing costs as low as possible. We will analyze in detail how waste elimination is achieved and the consequences of the lean philosophy in Section 6.6.
6.2 Mass Production

Mass production, also called flow production, was popularized by Henry Ford at the beginning of the twentieth century. Typically, in such a system, products move from worker to worker by means of conveyors, automated guided vehicles or some other kind of transportation system. A specific set of operations is assigned to each worker, and the times assigned to perform the different sets of operations are balanced. Operations are standardized, which guarantees high and constant production rates since workers perform simple and repetitive tasks. This, in turn, allows hiring workers with limited technical or mechanical skills after a short training period.

Mass production displays advantages and disadvantages. Among the advantages are the drastic reduction of non-productive activities such as preparing material and tools, diminished human error due to the standardization and repetitiveness of operations, reduced labor costs and a high production rate. Among the disadvantages, it is worth mentioning the rigidity of mass-production systems, which makes any production change costly. Furthermore, the diversity of the products that can be manufactured on the same mass-production system is strongly limited.

To conclude, mass production is capital intensive and justified only when the products will be required for quite a long period, which is becoming increasingly rare in the current market. This explains why more and more new production concepts are arising around flexibility and cost reduction.
6.3 Flexible Manufacturing Systems (FMS)

6.3.1 What Does Flexibility Mean?

Several aspects of flexibility are involved in the FMS concept.

6.3.1.1 Flexibility Aspects at the Tactical Level

The most important aspects of flexibility at this level are:

• Machine flexibility, which is the ability of a machine to perform several operations with little setup time when switching from one operation to the next.
• Material handling and transportation resource flexibility, which is the ability to easily transport and fix tools to machines, and to transport and position products at machines.
• Processing flexibility, which is the possibility to choose among several manufacturing processes to manufacture a product.
• Routing flexibility, which is the possibility to follow different routes in the manufacturing system to perform the same sequence of operations. This implies that identical or equivalent machines are included in the system, two equivalent machines being able to perform the same subset of operations, possibly with different operation times.
• Capacity flexibility, which is the ability to easily modify production volumes while keeping production costs at their lowest level.
• Mix flexibility, which is the possibility of changing the mix of product types manufactured during the same period without penalizing production costs and resource utilization.
• Maintenance flexibility, which refers to the possibility of performing most maintenance activities during working periods.

6.3.1.2 Flexibility at the Strategic Level

At the strategic level, the most common flexibility characteristics are:

1. Expansion flexibility, which refers to the possibility of extending a production system incrementally at a reasonable cost to meet market requirements. This type of flexibility is obtained by modular designs.
2. Program flexibility, which means that the programs that control the system are sophisticated enough to reduce human intervention as much as possible. This requires a precise analysis of the system, but introduces constraints that increase the rigidity of the system. In fact, the goal is to find a tradeoff between automation and flexibility.
3. Market flexibility, which reflects the ability of a production system to efficiently adapt itself to market changes. Market flexibility is feasible if expansion flexibility and efficient program flexibility are available.
6.3.2 Definition of FMS

Formally, the objective of an FMS is to approach the efficiency and low production cost that characterize mass production, while simultaneously producing a diversity of parts in varying proportions, also called a “mix of parts” or simply a “mix”. These parts belong to well-defined part families, which means that the variety of parts that can be produced is limited.
In short, an FMS can be defined as an automated production system that produces a mix of parts flexibly. Automation is general and obtained using computer numerical control (CNC), automated transportation, inventory and tool systems, as well as robots.

Another common definition suggests that an FMS is a set of machines as well as part and tool handling devices (robots, conveyors, wire-guided carts, pallets, overhead conveyors, etc.) arranged so that it can manufacture any mix of parts that belong to the part families for which the FMS has been designed. In this second definition, the expression “for which the FMS has been designed” emphasizes the fact that the variety of parts an FMS can manufacture is still limited.

However, we prefer a longer but more precise definition: an FMS is composed of a set of machines, a system for transporting and handling parts, and a system for transporting and handling tools and fixing tools to machines. Each component of the FMS has its own local computerized control system. Local control systems are coordinated by an overall computerized management system. An FMS automatically produces diverse parts in varying proportions to meet customers’ requirements. A change in customer requirements is automatically taken into account when needed. In other words, the system changes its task planning automatically in real time. An FMS also adapts itself in real time to resource breakdowns. The set of feasible mixes is limited.

A common FMS management system is summarized in Figure 6.1. This system has a two-level hierarchical management composed of local computerized control systems that are coordinated by a global computerized management system. Indeed, it is possible to find more than two levels in hierarchical management systems, in particular when simple flexible manufacturing modules or flexible manufacturing cells are used instead of machines.

Figure 6.1 A two-level management system for FMS (core computerized management system exchanging control and monitoring data with the local control systems of the machine, the part transportation and handling system, and the tool transportation, handling and fixing system)
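The two-level structure of Figure 6.1 can be sketched in a few lines of code: a core manager dispatches commands to local controllers and collects their monitoring data. This is a minimal illustrative sketch; all class and method names are assumptions, not part of any real FMS software:

```python
# Minimal sketch of the two-level management structure of Figure 6.1:
# a core manager coordinates local controllers and collects monitoring
# data from them. Names are illustrative assumptions only.

class LocalController:
    """Local level: controls one resource and logs what it has done."""
    def __init__(self, resource):
        self.resource = resource
        self.log = []

    def execute(self, command):
        self.log.append(command)
        return f"{self.resource}: done {command}"

    def monitor(self):
        return {"resource": self.resource, "commands_done": len(self.log)}

class CoreManager:
    """Global level: coordinates the local control systems."""
    def __init__(self, controllers):
        self.controllers = {c.resource: c for c in controllers}

    def dispatch(self, resource, command):
        return self.controllers[resource].execute(command)

    def monitoring_snapshot(self):
        return [c.monitor() for c in self.controllers.values()]

core = CoreManager([
    LocalController("machine"),
    LocalController("part transport"),
    LocalController("tool transport"),
])
core.dispatch("machine", "process part #1")
print(core.monitoring_snapshot())
```

A deeper hierarchy (flexible modules or cells instead of machines) would simply nest further `CoreManager` levels in the same way.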
Figure 6.2 A flexible manufacturing module (FMM): a machine fed by a robot from a buffer

Figure 6.3 A flexible manufacturing cell (FMC): flexible manufacturing modules FMM1, FMM2, …, FMMm linked by an automated transportation system (ATS) and transfer robots
In this section, we will give an example of an FMS having only one management level: a flexible packing system, presented later. Two flexible entities that are widespread in manufacturing systems are:

• The simple flexible manufacturing module (FMM). Such a flexible system is shown in Figure 6.2. It is composed of a buffer of limited capacity, a computer-controlled machine, and a robot that transfers parts from the buffer to the machine.
• The flexible manufacturing cell (FMC). This is a set of flexible manufacturing modules (FMM) linked by an automated transportation system (ATS) and robots that are in charge of transferring parts from the ATS to the FMMs and vice versa. An FMC is shown in Figure 6.3.

Numerous types of FMS are currently at work. We present two differing examples to give an insight into the diversity of this type of system.

A Flexible Job-shop

This system is shown in Figure 6.4. The particularity of this system is that it runs 24 hours a day, while two employees work two shifts of 8 hours each. They fix parts on pallets and load the pallets on the transportation system, which takes them to a cell, if a cell is idle, or lays them down on a storage plot otherwise. The stored pallets feed the machines during the shift when employees are absent. Loading–unloading robots are associated with each cell to transfer pallets from the transportation system to the cells and vice versa. When a part is completed, it is either sent to the loading–unloading station, if an employee is at work, or laid down on a storage plot awaiting an employee. The employee unloads the completed part from the pallet, and the pallet is then reused for another part.
Figure 6.4 A flexible job-shop: cells 1 to n served by a tool cart (Cart 1) linked to the tool magazine, a part cart (Cart 2) linked to the part magazine and loading–unloading station, and storage plots 1 to k
Cart 1 is devoted to tools. Automated loading–unloading systems connect the cart with the tool magazine and with the cells. Cart 2 is in charge of part transportation. Automated loading–unloading systems shuttle parts between the part magazine and the cart, between the cart and the storage plots, and between the cart, the storage plots and the cells. Each component of the system (cells, transportation systems, loading–unloading robots, tool magazine, etc.) has its own automated control system, and the main management system is in charge of coordinating the local control systems.

A Flexible Packing System

This system is depicted in Figure 6.5.
Figure 6.5 A packing system: pallets entering at “I” and circulating on a conveyor belt past packing stations S1, S2, …, Sn
202
6 X-manufacturing Systems
Products arrive randomly at the entrance “I”. Each product is fixed on a pallet and enters the packing system. Products are of various volumes and weights. Several packing stations S1, S2, …, Sn are available around the conveyor belt. Each station is equipped with specific resources according to the characteristics (volume, weight) of the products it is supposed to pack. When a pallet arrives at the entrance of a packing station that has the adequate resources, it enters the station if the station is idle; otherwise it continues on the conveyor belt. If the product enters the packing station, it is packed and leaves the station. Note that products on the conveyor belt take precedence over products that are candidates to enter the system; this priority has been introduced to avoid overloading the conveyor belt.

Only local computerized control systems are used to manage this FMS: one manages the entrance of the packing system and the other manages the entrances into the stations. Thus, there is no overall management, and only one management level is required for this system.
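The local station-entry rule just described ("enter the first compatible idle station, otherwise stay on the belt") can be written as a short routing function. This is a toy sketch: the station data, the weight-based compatibility test and all names are invented for illustration:

```python
# Toy version of the local routing rule of the packing system: a pallet
# enters the first idle station whose resources match the product,
# otherwise it continues on the conveyor belt. Compatibility is reduced
# here to a weight limit; data and names are invented for illustration.

def route_pallet(product_weight, stations):
    """stations: list of dicts with 'name', 'max_weight' and 'idle' keys.
    Return the chosen station name, or None (pallet stays on the belt)."""
    for s in stations:
        if product_weight <= s["max_weight"] and s["idle"]:
            s["idle"] = False          # the station becomes busy
            return s["name"]
    return None

stations = [
    {"name": "S1", "max_weight": 5,  "idle": False},
    {"name": "S2", "max_weight": 10, "idle": True},
    {"name": "S3", "max_weight": 50, "idle": True},
]
print(route_pallet(8, stations))    # 'S2': first idle station able to pack it
print(route_pallet(8, stations))    # 'S3': S2 is now busy
print(route_pallet(100, stations))  # None: keeps circulating on the belt
```

Note that this decision uses only local information about each station, consistent with the absence of an overall management level in this FMS.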
6.3.3 Advantages and Limitations of FMS

As previously emphasized, an FMS:

• Steadily guarantees high product quality, thanks to automation.
• Leads to an increased average working time of the machines. This is the consequence of the flexibility of FMSs and the possibility to work with few employees. Thus, FMS are more productive than job-shops. On average, the working time of the machines is between 10% and 20% of the working period in job-shops and jumps to 80–90% in FMS.
• Reduces indirect labor, since automation removes errors and rework.
• Reduces inventory and WIP: this is the consequence of computerized management.
• Quickly adjusts to market changes and breakdowns. This is the most important characteristic of FMSs and a powerful argument for their competitiveness.
• Reduces direct labor cost, due to fewer employees. Nevertheless, the technical and management levels of the employees in charge of an FMS should be significantly higher than those of workers in typical manufacturing systems.
• Reduces the number of machines required to perform the same amount of work. As a consequence, the floor space needed for a given production is drastically reduced compared to a job-shop, for example.
• Reduces waste, thanks to automation.

Unfortunately, some important limitations should be discussed:

1. As has been emphasized several times in this section, an FMS can only handle a relatively narrow range of part varieties. Thus, an FMS may become useless in the case of fundamental changes in customers’ demands, and only parts of the FMS may be recovered for further use.
2. The purchasing costs of the components of an FMS are very high. Also, the design, implementation and testing of the programs needed to control the system require a huge amount of time, which makes the implementation of an FMS very expensive.
3. The period needed for the development of an FMS is much longer than for traditional manufacturing systems. A two-year period between the time the choice of an FMS is made and the time it is fully operational is not uncommon.
4. Quite often, the technical level of the employees and the management skill of the managers are inadequate to deal with this new technology. As a consequence, additional expenditures are necessary to train employees and/or hire new ones.
5. Inadequate strategy or imprecise forecasting may lead to important financial losses, due to the limited flexibility of the FMS.
To conclude, an FMS increases the speed of responsiveness to markets, but choosing an FMS is always risky. Note also that machines equipped with CNC (computer numerical control) are a kind of FMS.
6.4 Agile Manufacturing Systems (AMS)

6.4.1 Definition

It is difficult to give an exhaustive definition of agile manufacturing systems (AMS). Let us start with a common definition that will be refined by comparing AMS with lean manufacturing systems (LMS, see Section 6.6) and flexible manufacturing systems (FMS).

An AMS is a manufacturing system that has the ability to manage market changes as a matter of routine. An AMS is able to quickly (that is, with short lead times) produce low-cost and high-quality customized products in various volumes. Note that market changes refer not only to changes in production volumes but also to changes in product characteristics and even in types of products.

A first measure of agility is the time required to switch from one set of characteristics to another. A second measure of agility is the percentage of the initial investment that can be reused when demand characteristics change: the greater this percentage, the more agile the system. Figure 6.6 illustrates this second measure.

Agile manufacturing systems usually need resources that are far beyond the abilities of most companies. Therefore, AMS require strong cooperation among several companies which, in turn, calls for a managerial structure to take care of the cooperation (resource sharing, contacts with clients, etc.).
Figure 6.6 Adaptation to market changes: adaptation period versus reused percentage of the initial investment for mass production, FMS and AMS
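The second agility measure is a simple ratio, which a toy computation makes concrete. All monetary figures below are invented for illustration and do not come from the book:

```python
# Toy computation of the second agility measure: the percentage of the
# initial investment that can be reused after a system reconfiguration.
# All monetary figures are invented for illustration.

def reuse_percentage(initial_investment, reused_value):
    """Share (in %) of the initial investment recovered when the system changes."""
    return 100.0 * reused_value / initial_investment

# Hypothetical comparison of three system types facing the same market change:
systems = {
    "mass production": reuse_percentage(10_000_000, 1_000_000),
    "FMS":             reuse_percentage(10_000_000, 4_000_000),
    "AMS":             reuse_percentage(10_000_000, 8_500_000),
}
for name, pct in systems.items():
    print(f"{name}: {pct:.0f}% of investment reused")
```

The ordering of the three figures mirrors Figure 6.6: the more agile the system, the larger the reused share (and the shorter the adaptation period).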
The aim of AMS is to combine organizations, people and technologies into a single structure able to manage, quickly and at low cost, a virtually infinite number of demand types. Obviously, these joint ventures will be successful only if they are supported by advanced information technologies and a collaboration framework that supports skilled people in their efforts to improve the production system. Knowledge is a major factor of success in AMS.

Another characteristic of AMS is their proximity to customers and suppliers, in the sense that they organize common projects with them to improve the efficiency of the whole system. These are usually technical projects to improve the manufacturing processes and/or the characteristics of the products and/or quality, or organizational projects to optimize internal functions.

The importance of people in AMS is becoming increasingly apparent, since they bring flexibility to the system. Several constraints must be satisfied to help workers become as efficient as possible. The main constraints concern the involvement of employees in daily system improvement, ergonomics and training.

6.4.1.1 Provide Ergonomic Working Places

Among other objectives, designers have to:

• Design working places that eliminate unnecessary effort.
• Make sure that information related to the tasks to be performed is provided in due time and is easy to read.
• Assign a limited diversity of tasks to each working place (and thus to each employee).
• Plan to include at each workplace a sound alert in case of breakdown, quality deficiency, starvation, etc.
• Manage the system so that components arrive at the workplaces on a just-in-time basis.
• Make sure that communication with adjacent workplaces and with the hierarchy is easy.
• Do their best to protect the health of the employees (low level of noise, optimal lighting, ergonomic seating, etc.).
6.4.1.2 Involvement of the Employees

Employees should be involved in the day-to-day improvement of the efficiency of the system. As a consequence, a system able to evaluate the performance of each workplace should be introduced.

6.4.1.3 Training

As mentioned before, knowledge is a major factor of success in AMS. Thus, training the employees is of utmost importance, mainly in the following domains:

1. quality improvement;
2. identification and suppression of unnecessary tasks;
3. improvement of production processes.
6.4.2 Agile Versus Lean
While an LMS meets a given demand quickly and at low cost, an AMS is able, in addition, to adapt itself to changes in demand characteristics and even in product types. The principal difference between AMS and LMS is whether or not market changes were taken into account when the system was designed.
6.4.3 Agile Versus Flexible
Let us now compare AMS with FMS. An FMS is able to switch rapidly from one task to another, but only if both tasks were foreseen at the design level, whereas an AMS is able to respond quickly to unanticipated market changes. Thus the difference between FMS and AMS is that an FMS covers a finite set of tasks, while the set of tasks an AMS can handle is virtually unlimited. AMS are proactive, in the sense of "acting before a situation becomes a source of confrontation or crisis".
6.4.4 Cost Stability During the Life of an AMS
The real costs observed during the life of an AMS often differ from those foreseen. The main reasons are the following:
• changes in the system, for instance an increase of capacity, the introduction of new resources as substitutes for old ones, etc.;
• evolution of demands;
• simplification of manufacturing processes;
• changes in the characteristics of some products, which call for changes in some processes;
• problems arising when starting the new system, which call for changes in the manufacturing system design;
• a breakdown ratio greater than foreseen, which also calls for changes in the design of the manufacturing system;
• hidden costs (unplanned training of the employees, resource costs greater than expected, etc.).
This list is not exhaustive. In an AMS, exceeding cost expectations has consequences that are less severe than in mass production. The main reason is that adjusting an AMS to new conditions is much easier than adjusting a mass-production system (and often an FMS). Thus, evaluation mistakes are rectified as soon as they are detected, at little extra cost. As a consequence, adjustments are more frequent in an AMS than in mass production, but they are smaller in scope. Figure 6.7 illustrates the relation between capacity and cost. Dotted lines represent actual events, while continuous lines represent foreseen situations. SP is the selling price; thus, as long as the production cost remains below the selling price, the system remains profitable. In the figure, "*" indicates the points in time at which the AMS is adjusted to face the demand, the goal being to keep the AMS profitable.
[Figure: production cost (vertical axis) versus capacity (horizontal axis) for mass production and AMS; SP marks the selling price, "*" the AMS adjustment points, and qf0, qf1, qr0, qr1 the profitability thresholds.]
Figure 6.7 Actual and expected production costs for mass production and AMS
Note that, in this illustration, only the volume of the demand changes: a mass-production system would be unable to adjust to a change in product type. As we can see:
• Let v be the expected production volume. When v is manufactured by a mass-production system, the actual production cost is greater than it would be if the foreseen situation held. Furthermore, the range of volumes for which production remains profitable actually shrinks compared with the foreseen situation ( [qr0, qr1] ⊂ [qf0, qf1] ).
• When production is done by an AMS, the increase of the production cost is smaller than the increase observed with a mass-production system; the explanation lies in the high frequency of adjustments in an AMS, which allows better control of the profitability of the system. In Figure 6.7, we can see that an AMS can always remain profitable thanks to frequent adjustments at low cost.
6.5 Reconfigurable Manufacturing Systems (RMS)
6.5.1 Motivation
Fundamental changes have arisen in the market during the past decades at an increasing pace. The causes of these changes are known: worldwide competition, well-informed and increasingly demanding customers, rapid changes in process technology, increasing frequency in the introduction of new products (marketing effect) or of new parts in existing products, government regulations in the fields of environment, safety and employee health that call for sophisticated ergonomics, new materials to cope with government regulations and cost reduction, to name only a few. As a consequence, new production systems should be able to react rapidly to profound market changes without reducing productivity and quality, while keeping costs low. Numerous improvements proposed in the past were mentioned in the previous sections.
Basic mass production is composed of dedicated manufacturing lines (DML). The objective is to produce high volumes at low cost, which does not fit the required characteristics mentioned above. A lean manufacturing system (LMS) is an enhancement of mass production. It aims at eliminating waste and improving quality, while keeping manufacturing costs low. Again, lean manufacturing does not meet the above requirements because of its lack of flexibility.
A flexible manufacturing system (FMS) handles changes in the production mix, work orders and tooling, but the set of products that can be made by this kind of system is well defined (it belongs to a single product family) and cannot be extended, except by redesigning the entire system. An FMS possesses rigid hardware and software architectures. A reconfigurable manufacturing system (RMS), as explained earlier, can be considered as an extension of the flexibility of FMS in two directions: hardware and software architectures.
An agile manufacturing system (AMS) is more a business philosophy than a new manufacturing approach. It aims at responding to the challenges posed by the market while keeping costs low, quality high and responsiveness efficient. To reach this goal, a business partnership (virtual production system) is established as the market dictates. Roughly speaking, an agile manufacturing system works toward the same objectives as a reconfigurable manufacturing system, but by using existing structures instead of a single system able to adapt itself to the market. Thus, an agile system does not compete with an RMS; they are complementary in some circumstances.
6.5.2 RMS Definition
The latest significant improvement of manufacturing systems appears in a new type of system: the reconfigurable manufacturing system (RMS). The definition of an RMS can be found in (Mehrabi et al., 2000):
A reconfigurable manufacturing system is designed for rapid adjustment of production capacity and functionality in response to new circumstances, by rearrangement or change of its components. Components may be machines and conveyors for entire manufacturing systems, mechanisms for individual machines, new sensors, and new controller algorithms. New circumstances may be changing product demand, producing a new product on an existing system, or integrating new process technology into existing manufacturing systems.
As mentioned in (Mehrabi et al., 2000), the key characteristics of RMS are:
1. Modularity: hardware and software components of an RMS are modular. In other words, the inputs, outputs and functionalities of each module are precisely defined and should be standardized. Among the components are control software modules, tooling systems and axes.
2. Integration ability: refers not only to the "plug and play" capability of the components, but also to their acceptance of new technologies. Indeed, integration requires well-specified interfaces and functions of components that, in turn, guarantee a precise knowledge of the activity spectrum of the system.
3. Convertibility: refers to the ability to quickly switch from one manufacturing activity to another, and to quickly adapt the architectures (software and hardware) to new market requirements. Conversion requires changing basic hardware and software resources (tools, fixtures, software modules, etc.) and may require a new initialization of the system to optimize its run-up time.
4. Diagnosis ability: means being able to quickly identify the sources of the quality and reliability problems that occur in large systems. This is ensured by the modularity.
5. Customization: refers to being able to produce all the parts of the part family around which the RMS is designed. It also concerns the ability to integrate control modules in the framework of open-architecture technology to reach the optimal control needed.
Table 6.1 proposes a short comparison among these concepts.

Table 6.1 Comparison among manufacturing concepts

DML
  Advantages: high production volumes; low production cost; good quality.
  Limitations: produces a single part (no flexibility); not scalable (fixed capacity); limited capacity.
LMS
  Advantages: high production volumes; low production cost; suppression of waste; high quality; fast changes of production mix and schedule.
  Limitations: limited variety of products; limited capacity; not scalable.
FMS
  Advantages: cost-effective product manufacturing; high quality.
  Limitations: expensive design and implementation; low throughput; limited variety of products.
AMS
  Advantages: high variety of products; high quality; short lead time; low production cost; can be created by connecting basic hardware and software modules.
  Limitations: a business approach that does not deal with the production system technology or operation; difficult to coordinate differing managements.
RMS
  Advantages: reconfigurable with basic modules to adjust to the market; quick integration of new technologies and functions.
  Limitations: the following aspects should be clarified: design and implementation costs, gap between available modules and required modules, relationship between market forecasting and efficiency of the RMS.
6.5.3 Reconfiguration for Error Handling
6.5.3.1 Motivations
Basically, reconfigurable manufacturing systems (RMS) quickly adjust production capacity and functions in response to unexpected changes in the market. An RMS is also supposed to quickly integrate new technologies to improve its efficiency. RMS are assumed to be the perfect tool for the new era of mass customization, which requires simultaneously the productivity of a dedicated system and the flexibility of agile manufacturing.
Recently (see (Bruccoleri et al., 2006) and the papers referenced in this publication), some studies have been conducted to promote reconfiguration as a means of error handling, since an RMS can easily be reconfigured at the system level (by changing the layout or introducing a new technology), at the machine level (by introducing a new automated tool magazine) and at the control level (by integrating a new software module).
An error is the effect of an exception, that is, a state of the production system that is different from the expected one. Examples of exceptions are machine breakdowns, unexpected customers' requirements, order cancellations and quality problems, to name just a few. Strategies are necessary to deal with exceptions, and such strategies already exist: routing flexibility is a way to handle machine breakdowns; an oversized system is often the solution to absorb unexpected demands; specialized stations are sometimes used for rework in the case of quality problems; etc. These solutions are expensive, and a quick reconfiguration would certainly be more cost effective.
6.5.3.2 Examples of Error Handling
Assume that a production system has to rectify quality problems, which leads to an unexpected volume of rework. An RMS is not naturally flexible, since its objective is to meet as closely as possible the production requirements concerning part types and production volume. Thus, the only way to face the problem is to reconfigure the system.
Note: managing an unexpected problem of this type is similar to dealing with an unexpected demand, which is the main objective of RMS. Consider now a sudden change in the production mix, which usually calls for a change of machine loads and the schedule. Reconfiguration is probably the solution when coupled with a scheduler able to adapt itself to the new configuration.
6.5.4 A Problem Related to RMS
6.5.4.1 Model for Capacity-extension Scheduling
Consider T elementary periods denoted by 1, 2, …, T, where T is the horizon of the problem. We assume that the RMS under consideration is in an expansion period; in other words, the demand increases on average. As a consequence, the manager decides that the only possible action is to increase the capacity of the system. We denote by:
• d_i, i = 1, 2, …, T, the demand during period i;
• v_i, i = 1, 2, …, T, the increase of capacity at the beginning of period i.
According to the above hypothesis, v_i ≥ 0. The cost of capacity extension during period i is composed of:
1. A concave cost c_i(v_i) that represents the cost of the resources bought to increase the capacity of the system and the cost of their implementation.¹
2. A concave cost z_i(Q_i − d_i), where Q_i is the capacity available for period i. Indeed, Q_i = Q_0 + v_1 + … + v_i, where Q_0 is the initial capacity of the system. This cost is due to the excess of capacity (additional maintenance, energy and workforce, frozen capital, etc.).
The problem to solve can be expressed as follows:

Min over {v_1, …, v_T} of Σ_{i=1..T} [ c_i(v_i) + z_i(Q_i − d_i) ]   (6.1)

subject to:

Q_i = Q_0 + Σ_{j=1..i} v_j, i = 1, …, T   (6.2)
Q_i ≥ d_i, i = 1, …, T   (6.3)
v_i ≥ 0, i = 1, …, T   (6.4)

¹ This assumption is supported in (Manne, 1961) and (Luss, 1982). Concavity reflects the economy of scale. Note also that a linear function belongs to the set of concave functions.
Thus, the objective is to minimize a concave function (see Equation 6.1) on a convex domain defined by Equations 6.2–6.4. It is well known that, in this case, there exists an optimal solution that is "extreme" (i.e., that saturates some constraints), as we will see in the next subsection.
6.5.4.2 Solution to the Problem
The initial capacity of the system is Q_0^1 = Q_0. Taking into account the fact that the solution is "extreme", the capacity of the system at the beginning of period i, after the possible increase of capacity, can be one of the following:

Q_i^1 = Max( Q_{i−1}^1, d_i )
Q_i^2 = d_{i_1}, where i_1 is the smallest index greater than i such that d_{i_1} > Q_i^1, if any
…
Q_i^k = d_{i_{k−1}}, where i_{k−1} is the smallest index greater than i_{k−2} such that d_{i_{k−1}} > Q_i^{k−1}, if any   (6.5)

As a consequence, the optimal cost and the optimal sequence of capacity increases can be obtained using a two-pass (forward/backward) dynamic programming approach. Let J(i) be the number of feasible capacities at the beginning of period i. For each capacity Q_i^k, k = 1, …, J(i), the cost is:

K_i( Q_i^k ) = Min over { Q_{i−1}^r : Q_{i−1}^r ≤ Q_i^k } of [ K_{i−1}( Q_{i−1}^r ) + c_i( Q_i^k − Q_{i−1}^r ) + z_i( Q_i^k − d_i ) ]   (6.6)
Algorithm 6.1.
Forward step:
1. Initialize the cost: K_0(Q_0^1) = 0.
2. For i = 1, …, T:
   2.1. Compute the feasible capacities at the beginning of period i after adjusting the capacity, by applying Relations 6.5. J(i) is the number of feasible capacities.
   2.2. For each capacity Q_i^k, k = 1, …, J(i), compute the optimal cost using Equation 6.6.
   2.3. Set R_i^k = Q_{i−1}^{r1}, where Q_{i−1}^{r1} is the capacity that achieves the minimum in (6.6).
3. K_T(Q_T^1) is the optimal cost.
Backward step:
4. Set k_T = 1.
5. Set v_T = Q_T^{k_T} − R_T^{k_T}.
6. For i = T − 1 down to 1:
   6.1. Select Q_i^{k_i} such that Q_i^{k_i} = R_{i+1}^{k_{i+1}}.
   6.2. Compute v_i = Q_i^{k_i} − R_i^{k_i}.
The sequence v_1, v_2, …, v_T is the optimal sequence of capacity increases.
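Algorithm 6.1 can be sketched compactly in code. The following Python implementation is our illustration, not the authors' program: it restricts the candidate capacity levels to the initial capacity and the demand values, as Relations 6.5 imply, and takes the cost functions c and z as arguments.

```python
import math

def capacity_plan(demands, q0, c, z):
    """Forward/backward dynamic program in the spirit of Algorithm 6.1.

    Candidate capacity levels are restricted to the initial capacity and
    the demand values, since an optimal "extreme" solution only ever raises
    capacity to one of these levels (Relations 6.5).
    Returns (optimal cost, list of capacity increases v_1..v_T).
    """
    levels = sorted(set([q0] + list(demands)))
    K = {q0: 0.0}        # cost of ending the previous period at capacity q
    parents = []         # parents[i][q] = capacity chosen for period i-1
    for d in demands:
        new_k, pred = {}, {}
        for q in levels:
            if q < d:
                continue          # constraint (6.3): capacity covers demand
            for qp, kp in K.items():
                if qp > q:
                    continue      # constraint (6.4): capacity never decreases
                cost = kp + c(q - qp) + z(q - d)
                if q not in new_k or cost < new_k[q]:
                    new_k[q], pred[q] = cost, qp
        parents.append(pred)
        K = new_k
    # backward step: follow the stored predecessors to recover the v_i
    q = min(K, key=K.get)
    best = K[q]
    plan = []
    for pred in reversed(parents):
        plan.append(q - pred[q])
        q = pred[q]
    return best, plan[::-1]
```

On the data of the numerical example of Section 6.5.4.3 (initial capacity 20, costs (6.7)), this sketch reproduces the reported optimum of 186.646, with increases at periods 3, 6, 10, 15, 19 and 23.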
6.5.4.3 Numerical Example
In this example T = 25 and the demands are given in Table 6.2.

Table 6.2 Demands

Period  1  2  3  4  5  6  7  8  9
Demand 10 15 30 15 20 35 45 40 25

Period 10 11 12 13 14 15 16 17 18
Demand 50 35 25 40 45 55 45 50 55

Period 19 20 21 22 23 24 25
Demand 60 55 45 60 70 65 60

The initial capacity is equal to 20. In this example:

c_i(v) = 30 × (1 − e^(−0.1 v)) and z_i(Q) = 0.5 × (Q − d_i)   (6.7)
The possible capacities at the beginning of a period, after adjustment, are:
Period 1: 20, 30, 35, 45, 50, 55, 60, 70; J(1) = 8.
Period 2: 20, 30, 35, 45, 50, 55, 60, 70; J(2) = 8.
Period 3: 30, 35, 45, 50, 55, 60, 70; J(3) = 7.
Period 4: 30, 35, 45, 50, 55, 60, 70; J(4) = 7.
Period 5: 30, 35, 45, 50, 55, 60, 70; J(5) = 7.
Period 6: 35, 45, 50, 55, 60, 70; J(6) = 6.
Period 7: 45, 50, 55, 60, 70; J(7) = 5.
Period 8: 45, 50, 55, 60, 70; J(8) = 5.
Period 9: 45, 50, 55, 60, 70; J(9) = 5.
Period 10: 50, 55, 60, 70; J(10) = 4.
Period 11: 50, 55, 60, 70; J(11) = 4.
Period 12: 50, 55, 60, 70; J(12) = 4.
Period 13: 50, 55, 60, 70; J(13) = 4.
Period 14: 50, 55, 60, 70; J(14) = 4.
Period 15: 55, 60, 70; J(15) = 3.
Period 16: 55, 60, 70; J(16) = 3.
Period 17: 55, 60, 70; J(17) = 3.
Period 18: 55, 60, 70; J(18) = 3.
Period 19: 60, 70; J(19) = 2.
Period 20: 60, 70; J(20) = 2.
Period 21: 60, 70; J(21) = 2.
Period 22: 60, 70; J(22) = 2.
Period 23: 70; J(23) = 1.
Period 24: 70; J(24) = 1.
Period 25: 70; J(25) = 1.
The optimal cost is K_25(70) = 186.646. Furthermore,
v_i = 0 for i = 1, 2, 4, 5, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25;
v_3 = 10, v_6 = 15, v_10 = 5, v_15 = 5, v_19 = 5, v_23 = 10.
The optimal progression of capacity is shown in Figure 6.8.
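The reported optimum can be checked by evaluating objective (6.1) directly for this sequence of increases. The short helper below is ours, written only for verification:

```python
import math

def plan_cost(demands, q0, v, c, z):
    """Evaluate objective (6.1) for a given sequence of capacity increases,
    enforcing constraints (6.2)-(6.4) along the way."""
    assert all(vi >= 0 for vi in v)        # (6.4)
    total, capacity = 0.0, q0
    for d, vi in zip(demands, v):
        capacity += vi                     # (6.2)
        assert capacity >= d               # (6.3)
        total += c(vi) + z(capacity - d)
    return total

demands = [10, 15, 30, 15, 20, 35, 45, 40, 25, 50, 35, 25, 40,
           45, 55, 45, 50, 55, 60, 55, 45, 60, 70, 65, 60]
v = [0] * 25
for period, inc in {3: 10, 6: 15, 10: 5, 15: 5, 19: 5, 23: 10}.items():
    v[period - 1] = inc
cost = plan_cost(demands, 20, v,
                 c=lambda x: 30 * (1 - math.exp(-0.1 * x)),
                 z=lambda x: 0.5 * x)
# cost is approximately 186.646, matching K_25(70)
```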
Figure 6.8 Optimal progression of capacity when costs are those given by (6.7)
Figure 6.9 Optimal progression of capacity when the cost of excess capacity is given by (6.8)
Let us now replace the second cost by:

z_i(Q) = 0.1 × (Q − d_i)   (6.8)

The other data are the same as in the previous example. The new optimal cost is equal to 88.4002. The optimal progression of capacity is shown in Figure 6.9. As we can see, the changes of capacity are less frequent than in the previous example. The explanation is that the cost of excess capacity is smaller, which results in larger capacity increases that take advantage of the concavity of the costs c_i(v_i).
6.5.4.4 Extension of the Model
Problem Setting
The model is the same as the one presented in Section 6.5.4.1, except that the variables v_i are now allowed to take negative values. The costs c_i(v) are:
• concave and decreasing if v < 0,
• concave and increasing if v > 0,
for every i ∈ {1, …, T}. Furthermore, c_i(0) = 0 for i = 1, …, T. Such a cost is represented in Figure 6.10.
[Figure: cost of a capacity change plotted against the change v, for v between −2.2 and 2.2.]
Figure 6.10 Cost c_i(v)
A Heuristic Approach
Since the costs c_i(v) are no longer concave, an optimal algorithm cannot be derived from the properties of the problem. Thus, we propose a heuristic algorithm that starts from a feasible solution and improves it iteratively and randomly. The process is performed several times and the best solution is kept. The notations are those of the previous problem.
The Algorithm
In this algorithm, Q_0 is the known capacity of the system when the management starts. The value of w is given by the user; in the examples presented below, we took w = 30. IT, the number of iterations, is also given by the user; in the following examples, IT = 30 000.

Algorithm 6.2.
1. For k1 = 1, …, IT:
   Generate a feasible solution:
   1.1. For i = 1, …, T:
      1.1.1. Compute x = d_i − Q_{i−1}.
      1.1.2. Generate y at random between 0 and w.
      1.1.3. Set v_i = x + y.
      1.1.4. Set Q_i = Q_{i−1} + v_i.
   1.2. End of loop i.
   Random adjustment of the feasible solution:
   1.3. Set V¹ = V and Q¹ = Q, where V and Q are the vectors of components v_i and Q_i, respectively.
   1.4. For i = 2, …, T:
      1.4.1. For j = 1, …, i − 1:
         1.4.1.1. Compute s = Min over k ∈ {i, …, T} of ( Q_k¹ − d_k ).
         1.4.1.2. Generate at random r ∈ [0, s].
         1.4.1.3. Set v_i¹ = v_i¹ − r.
         1.4.1.4. For k = i, …, T, set Q_k¹ = Q_k¹ − r.
      1.4.2. End of loop j.
   1.5. End of loop i.
   Cost associated with the adjusted solution:
   1.6. Compute the cost a1 related to the solution (V¹, Q¹) using the objective function (6.1) with the new costs c_i(v).
   Keeping the last solution if it is the best one so far:
   1.7. If k1 = 1, then set V² = V¹, Q² = Q¹, a2 = a1.
   1.8. If k1 > 1 and a1 < a2, then set V² = V¹, Q² = Q¹, a2 = a1.
2. End of loop k1.
3. Print the solution (V², Q², a2).
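A compact Python sketch of this heuristic follows. It is an interpretation rather than the authors' program: the inner loop on j is collapsed into a single random shaving pass per period, and the iteration count is scaled down for illustration.

```python
import math
import random

def heuristic_plan(demands, q0, c, z, w=30, iters=2000, seed=7):
    """Random-restart heuristic in the spirit of Algorithm 6.2 (capacity
    changes may be negative). Returns (best cost, best v, best Q)."""
    rng = random.Random(seed)
    T = len(demands)
    best = (float("inf"), None, None)
    for _ in range(iters):
        # Step 1.1: feasible solution -- jump to the demand plus random slack
        v, Q, prev = [], [], q0
        for d in demands:
            vi = (d - prev) + rng.uniform(0, w)
            prev += vi
            v.append(vi)
            Q.append(prev)
        # Step 1.4 (simplified): randomly shave excess capacity from period i on
        for i in range(1, T):
            s = min(Q[k] - demands[k] for k in range(i, T))
            r = rng.uniform(0, s) if s > 0 else 0.0
            v[i] -= r
            for k in range(i, T):
                Q[k] -= r
        # Step 1.6: cost of the adjusted solution, objective (6.1)
        a = sum(c(v[i]) + z(Q[i] - demands[i]) for i in range(T))
        if a < best[0]:
            best = (a, v[:], Q[:])
    return best
```

Shaving by at most the minimum remaining slack keeps every later period feasible, so each iteration ends with a solution that satisfies Q_i ≥ d_i throughout.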
Numerical Applications
In this example, T = 20 and:

c_i(v_i) = f × (1 − e^(0.2 v_i))   if v_i < 0
c_i(v_i) = g × (1 − e^(−0.1 v_i))  if v_i ≥ 0
z_i(Q_i − d_i) = h × (Q_i − d_i)

where f, g and h are parameters.
Figure 6.11 Demands and capacities for f = 40, g = 30, h = 0.5
Figure 6.12 Demands and capacities for f = 30, g = 80, h = 20
We applied the program for f = 40, g = 30, h = 0.5 (Figure 6.11) and for f = 30, g = 80, h = 20 (Figure 6.12). In the example presented in Figure 6.11, due to the exponential functions, decreasing the capacity is more expensive than increasing it. Furthermore, the costs z_i(Q_i − d_i) are low. As a consequence, the capacity curve does not "follow" the demand curve closely when the demand decreases. In the example presented in Figure 6.12, the capacity curve follows the demand curve. The reason is the high value of z_i(Q_i − d_i).
6.6 Lean Manufacturing Systems (LMS)
6.6.1 Definition
Lean manufacturing was originally developed by Toyota, and the concept then spread throughout the world. Making a manufacturing system lean consists in cutting waste in the manufacturing processes. In this definition, waste is anything that does not add value to the products, or anything customers are unwilling to pay for. The literature mentions seven types of waste:
1. Overproduction: manufacturing more than needed or earlier than needed. Overproduction is expensive due to product waste, surplus inventory, transportation and handling, and quality risks.
2. Waiting time, which includes the time semi-finished products wait for the next operation and the time machines and/or workers are idle, awaiting work. Both types of waiting periods are expensive.
3. Transporting products further than required. Since transportation is an operation that does not add value to the products, transporting further than required is a waste of time, money and effort.
4. Processing: if not carefully designed, processing may include operations that are too time consuming or excessively complicated. Lack of processing efficiency stems from the preliminary design of the manufacturing system, but also from a lack of clear communication with customers.
5. Excess inventory: surplus inventory is not only expensive, but can also generate product defects due to excessive handling.
6. Excess of workers' motions: unnecessary workers' motions lead to undue stress that may result in product defects and workers' injuries.
7. Scrap from manufactured products, which refers to total quality.
Eliminating waste (muda in Japanese) is achieved by working toward the goals presented in the next section. Indeed, the seven types of waste mentioned above are not independent of each other. For instance, excess inventory is not independent of scrap; overproduction leads to longer product waiting times; excess product transportation may lead to additional workers' motions; etc.
6.6.2 How to Eliminate Waste?
Several complementary ways are available to obtain lean manufacturing systems. The most significant are:
• Adopt a pull strategy, which means that a product is released into production only if it corresponds to a customer's demand. We refer to this strategy as pull processing. Using pull processing reduces waste 1 (overproduction) and contributes to reducing 3 (transport) and 5 (inventory).
• Eliminate activities that do not add value to the products. This reduces waste 3 (transport) and 5 (inventory), while favoring the reduction of 2 (waiting time) and of worker effort, as well as helping to promote process efficiency.
• Reduce the manufacturing cycle. This is a way to reduce types 2, 5 and 6 of waste. This goal can be reached by acting at the technological and management levels:
  – at the technological level, by automating some operations, improving the manufacturing processes (product design) and adequately choosing the resources;
  – at the management level, by improving planning and scheduling approaches.
• Deliver ever-higher product quality. This leads to less rework and thus facilitates management, which, in turn, helps reduce overproduction, inventory and scrap, among others. Finally, high quality leads to a reduction of 1, 5 and 7.
• Produce different mixes at low cost, which guarantees short due dates. We refer to this ability as flexibility. Flexibility influences mainly 1, 2 and 3.
• Improve the product flows, which aims at reducing flow variability and the congestion of the shop floor. Reducing flow variability can be attained by:
  – reducing setup, maintenance and shift-change times;
  – choosing a layout that fits the product flows, in order to reduce transportation and handling (this objective is illustrated in Figure 6.13);
  – using line-balancing methods to control flows in linear systems (see Chapters 7 and 8).
Figure 6.13 Improving layout: (a) inadequate layout; (b) layout after taking the flows into account
Reducing the congestion of the shop floor requires:
– launching production while taking into account the capacity of the bottleneck resource;
– using a real-time scheduling system.
6.6.3 Six Core Methods to Implement Lean Manufacturing
This section presents the most popular approaches used to implement lean manufacturing systems. They are usually applied in the order of their presentation, although some of them can be implemented concurrently. Note also that numerous other methods exist; we have restricted ourselves to the most popular ones.
6.6.3.1 Cellular/Flow Manufacturing
This approach can be defined as the cooperation between automated and manual operations to maximize the value of the products while minimizing waste. The most efficient cooperation includes the concept of process balancing (also known as cell balancing or line balancing, a line being a sequence of cells). In this kind of organization, workers are often responsible for several manual and/or automated operations. The assignment of operations to cells and, inside each cell, the assignment of operations to workers or robots is done in such a way that all resources have about the same workload in any situation. This guarantees the flexibility of the system and a smooth flow of items through the manufacturing system with minimal handling and transportation. The consequences are the elimination of waste, the reduction of WIP and floor space, shorter manufacturing cycle times and low production costs.
Most publications on lean manufacturing concern the management aspects: 5S, Kaizen, just-in-time (JIT), total productive maintenance (TPM), the six sigma method, SMED, etc. Only a few publications concern the design of lean manufacturing processes. Nevertheless, this is a crucial aspect for the success of a lean approach. Indeed, most of the waste can be eliminated at this step of the manufacturing system life cycle, by revealing and solving problems before production starts. The following schema for the design of a lean manufacturing process is suggested.
Takt Time Calculation
"Takt time" imposes the rate of fabrication needed to meet customer demand. It is equal to the available work time divided by the customer demand:

Takt time = Operating time / Customer demand

Operating time = Shifts × Effective time
"Effective time" takes into account the shifts worked, making allowances for stoppages, scheduled maintenance, planned team briefings, breaks, etc. The customer demand includes the anticipated average sales rate plus any extras such as spare parts, anticipated rework and defective pieces.
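The two formulas translate directly into code; the shift pattern and demand below are our own illustrative numbers, not from the text:

```python
def takt_time(shifts, effective_time, customer_demand):
    """Takt time = operating time / customer demand,
    with operating time = shifts x effective time."""
    operating_time = shifts * effective_time
    return operating_time / customer_demand

# Two shifts of 450 effective minutes each and a demand of 450 units per
# day give a takt time of 2 minutes per unit.
takt = takt_time(shifts=2, effective_time=450, customer_demand=450)
```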
Definition of Tasks (Operations)
The definition of tasks (operations) is the next step. The operations to consider are those with added value, i.e., those for which the product value after the operation is greater than the product value before it. For example, holding and transport operations have no added value and should be reduced (or eliminated if possible).
Choice of Process Plan
Choosing a process plan consists in analyzing several possible process plans and selecting the one that minimizes cost and production cycle. Remember that a process plan is defined by a set of operations, their sequence and the corresponding tools necessary to execute these operations. Usually, there is a great number of possible process plans. A graph-theory approach can help manufacturers analyze the possible plans and optimize a given criterion.
Figure 6.14 An example of a graph for manufacturing process analysis
Figure 6.14 is an example of the graph approach (Dolgui and Proth, 2006). In this graph, the arcs represent operations and the vertices precedence constraints. The letters identify the operations and the numbers are the operation durations. To calculate the production cycle, it is necessary to search for the critical path in this graph. To reduce the production cycle, an aforementioned objective of the lean approach, it is necessary to reduce the durations of the tasks on this critical path (or even to eliminate some of them). Note that after reducing an operation time or eliminating a task belonging to the critical path, the critical path may change and other operations may form a new critical path. The durations of these new operations should then also be reduced, and so on. In addition, if the approach is used to choose the best process plan, this type of graph should be designed for each potentially interesting process plan.
To reduce the number of operations on the critical path, additional studies can be made to alter the product. Reductions can also be obtained by improving the manufacturing equipment and production methods. The only difficulty of this approach is the time-consuming data preparation. Since all product data are known, the graph approach to calculating the critical path is simple to implement on a computer and holds promise. Nevertheless, it requires prior familiarity with manufacturing technologies, equipment and processes to adequately model the possible process plans as graphs.
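Since the drawing of Figure 6.14 cannot be reproduced here, the sketch below computes the production cycle and critical path on a small activity-on-arc graph of our own invention; arcs carry (operation, duration) pairs as in the figure.

```python
from collections import defaultdict, deque

def critical_path(arcs):
    """Longest path in an activity-on-arc DAG.
    arcs: list of (tail, head, operation, duration).
    Returns (production cycle, operations on the critical path)."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, op, dur in arcs:
        succ[u].append((v, op, dur))
        indeg[v] += 1
        nodes |= {u, v}
    queue = deque(n for n in nodes if indeg[n] == 0)   # Kahn's algorithm
    dist = {n: 0 for n in nodes}                       # longest distance so far
    pred = {}                                          # best incoming arc
    while queue:
        u = queue.popleft()
        for v, op, dur in succ[u]:
            if dist[u] + dur > dist[v]:
                dist[v] = dist[u] + dur
                pred[v] = (u, op)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    end = max(nodes, key=dist.get)
    path, n = [], end
    while n in pred:
        n, op = pred[n]
        path.append(op)
    return dist[end], path[::-1]
```

As the text notes, after shortening an operation on the critical path the function must be rerun, since the critical path may move elsewhere.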
Critical-path analysis of the manufacturing operations is a good technique to reduce the production cycle. Nevertheless, a production cycle also contains non-productive operations, so an additional analysis is necessary to identify them. An adequate support for this is the lead-time analysis chart (see, for example, Dolgui and Proth, 2006).
Line Balancing (Assignment of Tasks to Workstations)
After the operations and the process plan are chosen, the next step is line balancing. At this step, operations are assigned to workstations under the takt time, precedence and other constraints. The load of each station cannot exceed the takt time.
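As a deliberately naive illustration of this step (ours; the dedicated line-balancing methods of Chapters 7 and 8 are far more elaborate), the greedy sketch below scans tasks in a precedence-feasible order and opens a new station whenever the takt time would be exceeded:

```python
def greedy_balance(ordered_tasks, times, takt):
    """Assign tasks, given in a precedence-feasible order, to successive
    stations so that no station load exceeds the takt time."""
    if any(times[t] > takt for t in ordered_tasks):
        raise ValueError("a task longer than the takt time cannot be assigned")
    stations, current, load = [], [], 0.0
    for t in ordered_tasks:
        if load + times[t] > takt and current:
            stations.append(current)      # close this station, open a new one
            current, load = [], 0.0
        current.append(t)
        load += times[t]
    if current:
        stations.append(current)
    return stations

# Four tasks in precedence order, takt time of 8 time units:
stations = greedy_balance(["a", "b", "c", "d"],
                          {"a": 4, "b": 3, "c": 5, "d": 2}, takt=8)
# stations == [["a", "b"], ["c", "d"]]
```

Because tasks are scanned in a precedence-feasible order and stations are filled in sequence, every task lands in a station no earlier than its predecessors' stations, so the precedence constraints are respected.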
Equipment Selection
For any given line balance, several possible equipment configurations generally exist for each workstation. Selecting the appropriate equipment for each workstation can therefore reduce the total time and cost considerably.
Simulation of Flows and Cost Estimation

Finally, flow simulation and cost estimation are necessary to verify the feasibility of the project before its final acceptance and implementation. Line balancing and equipment selection are the key steps in a lean process design approach; this will be further explained in the next section. In the remainder of this book, line balancing is developed in detail.

6.6.3.2 Kaizen
Kaizen is usually translated as “continuous improvement”, in contrast with approaches that emphasize abrupt changes in manufacturing systems (reengineering being the most extreme example). Applying Kaizen introduces never-ending incremental changes that improve the manufacturing system. Kaizen is meant to humanize the workplace by eliminating hard work, training employees to make them more capable of taking an active part in the system, and encouraging them to propose improvements to the operating conditions of the system. As widely mentioned in the literature, the Kaizen approach is based on the following principles:
• Work as a team, that is, cooperate with other employees for the good of the company.
• Improve all the components of professional life. This includes knowledge improvement, attainment of social values, deep involvement in the success of the company, etc.
• Implement quality circles to reach perfect first-time quality.
• Encourage improvement suggestions.
• Introduce statistical and quantitative methodologies to measure improvement.
• Take care of employees’ training: human resources are the most important company asset.
• Make sure that processes evolve by gradual improvements (rather than by drastic changes).

As mentioned by Imai (1986): “Kaizen means improvement. Moreover it means continuing improvement in personal life, home life, social life, and working life. When applied to the workplace, Kaizen means continuing improvement involving everyone – managers and workers alike.”
6.6.3.3 5S
The goal of the 5S system is to reduce waste and improve productivity by making the workplace clean, ergonomic and rational. The 5S method is characterized by five Japanese words:
• Seiri, or tidiness. This refers to the practice of keeping only essential items in the working area.
• Seiton, or orderliness. A precise place should be dedicated to each item (parts, cutting tools, etc.), the goal being to make access to any item easy and thus to facilitate work.
• Seiso, or cleanliness. The workplace must be kept clean; ideally, it is cleaned and the items are restored to their places at the end of each shift.
• Seiketsu, or standardization. This refers to the standardization of each employee’s housekeeping activities.
• Shitsuke, or discipline. The goal is to maintain the standards of the workplace in order to be able to work efficiently day after day.

Routines that keep the workplace well organized and in order are of utmost importance for a smooth and efficient flow of activities; this is the goal of the 5S system. To summarize, the 5S system reduces waste and optimizes productivity through a workplace that is kept well organized and clean day after day.
6.6.3.4 Total Productive Maintenance (TPM)
The goal of TPM is to preserve the functions of the physical assets or, in other words, to make sure that resources are capable of doing what their users want them to do, when they want them to do it. To reach this goal, TPM organizes the systematic execution of maintenance by all employees, whatever their level in the hierarchy, through small-group activities. The overall goal of TPM can be broken down into five more detailed goals:
1. Design and install equipment that needs little or no maintenance. Continuously check the effectiveness of the equipment by measuring the waste that occurs, such as defect or downtime losses. In short, this first goal is to improve the effectiveness of the equipment.
2. Target autonomous maintenance. This objective is reached by training workers to take care of the equipment they use. “Taking care” means:
– repairing by using a service manual that contains carefully crafted instructions for solving problems;
– developing preventive actions;
– proposing improvements to avoid breakdowns.
3. Plan the maintenance in detail. This implies:
– identifying all possible breakdowns;
– standardizing the solutions to problems;
– defining precisely how preventive maintenance is to be organized.
4. Train employees to efficiently perform maintenance and repair operations on their equipment.
5. Focus on preventing breakdowns (preventive maintenance). This includes the use of methods that prevent questionable working habits.

To summarize, TPM emphasizes the importance of employees, who are supposed to be capable of repairing the equipment and improving the effectiveness of the manufacturing system, and of working together to reach this goal. Another view of TPM is that it focuses on the total life cycle of the equipment, on continuous improvement of production efficiency, on the participation of all employees and on a total-system approach.

6.6.3.5 Just-in-time (JIT)
The main objective of JIT is to reduce WIP and the associated costs. JIT is driven by Kanban, a visual system that controls the flow of products. A rough description of Kanban can be given in two points:
• Initially, a given set of Kanbans is assigned to each station of the line. A line is a sequence of cells that are visited in the order of the sequence. Each Kanban corresponds to a set of semi-finished products that a station is entitled to request from the next upstream station if any, or from the raw-material and/or component magazine otherwise.
• When a set of semi-finished products enters a station, the corresponding Kanban is attached to it and cannot be used to request another set of products from the next upstream station. The Kanban becomes active again when the set of products it is attached to is requested by the next downstream station. Indeed, a station can request a set of products from the next upstream station only if it has a corresponding active Kanban.

A consequence of these rules is that the number of products in a station is bounded above by the number of products represented by the Kanbans initially assigned to this station. By assigning to the stations sets of Kanbans that represent approximately the same amount of work (in terms of working time), the system reaches a smooth production flow.

In the previous description, we assumed that the production system is a sequence of cells. The Kanban approach also applies if different cells that provide different components are available at some manufacturing levels. Assume, for instance, that two cells denoted by A and B provide two components a and b at level k and that these components are assembled at level k + 1. Assume also that a Kanban at level k + 1 represents 10 products. This system is represented in Figure 6.15.
Figure 6.15 Assembly system (cells A and B at level k supply components a and b, which are assembled at level k + 1 into products (a, b) delivered to level k + 2)
When such a Kanban is active at level k + 1, this level orders 10 components a and 10 components b from cells A and B, respectively. If these components are available, they are delivered to level k + 1; at this point in time, the Kanban of level k + 1 is no longer usable and the Kanbans of level k that correspond to the delivered components become usable again. The 10 components a and b arrive at level k + 1, where they are assembled to provide 10 semi-finished products denoted by (a, b). When these products are ordered by level k + 2, the corresponding Kanban of level k + 1 becomes available again, while the corresponding Kanban of level k + 2 is frozen.
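The two Kanban rules above can be captured by a simple token count per station. The following sketch (station names and Kanban counts are arbitrary, one container per Kanban) shows how pulling a container consumes a free Kanban and how a downstream pull frees the upstream one again:

```python
class Station:
    """A station of a serial Kanban line, seen as a pool of tokens."""
    def __init__(self, name, n_kanbans):
        self.name = name
        self.free = n_kanbans   # kanbans not attached to a container
        self.stock = 0          # containers currently held at the station

    def pull_from(self, upstream):
        """Pull one container from upstream (or from the raw-material
        magazine if upstream is None), if a free Kanban is available."""
        if self.free == 0:
            return False        # rule: no free Kanban, no pull
        if upstream is not None:
            if upstream.stock == 0:
                return False    # upstream has nothing to deliver
            upstream.stock -= 1
            upstream.free += 1  # the upstream Kanban becomes active again
        self.free -= 1          # our Kanban is now attached to the container
        self.stock += 1
        return True

s1, s2 = Station("S1", 2), Station("S2", 1)
s1.pull_from(None)              # S1 pulls raw material
s1.pull_from(None)
assert not s1.pull_from(None)   # both S1 Kanbans attached: pull refused
s2.pull_from(s1)                # S2 pulls from S1; one S1 Kanban is freed
print(s1.free, s1.stock, s2.free, s2.stock)  # 1 1 0 1
```

The refused third pull shows the WIP bound in action: station stock can never exceed the number of Kanbans initially assigned to the station.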
6.6.3.6 Six Sigma Method
The six sigma methodology originated at Motorola in the 1980s. Its objective is to improve processes and to reduce the variation of product characteristics through a measurement-based strategy.
Mathematical Background

The performance of a process is measured by the “distance” between the design requirement, which is the set of required values of the characteristics of a type of product, and the actual values taken by these characteristics. Assume that only one characteristic is concerned, say X, and denote by x_1, x_2, …, x_n the values taken by X for n products. Denote by m the mean value of the characteristic; this value is defined at the design level. The values x_i, i = 1, 2, …, n are scattered around m. The measure of the deviation of these values around m is the standard deviation σ, defined as:

\sigma = \sqrt{ \lim_{n \to +\infty} \frac{1}{n} \sum_{i=1}^{n} ( x_i - m )^2 }
X is a random variable; in other words, X is a variable that takes a random value each time an experiment takes place. Here, an experiment consists in manufacturing a product, and the value taken by X is the value of the characteristic for this product. It is commonly assumed that X is Gaussian, which means that its probability density is:

f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ \frac{-(x-m)^2}{2\sigma^2} \right]

where x represents the value taken by X. Thus, the probability for X to take a value in the interval [ m − zσ, m + zσ ], where z is a real positive number, is:

I_z = \int_{m - z\sigma}^{m + z\sigma} \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ \frac{-(x-m)^2}{2\sigma^2} \right] dx

Setting y = (x − m)/σ, we obtain:

I_z = \frac{1}{\sqrt{2\pi}} \int_{-z}^{z} \exp\left[ \frac{-y^2}{2} \right] dy
The value of this integral is available in tables provided for the centered and reduced (standard) Gaussian probability density. Table 6.3 displays the value of I_z for several values of z.

Table 6.3 Some values of I_z as a function of z

z     1       1.5     1.8     2       2.5     3
I_z   0.6826  0.8664  0.9282  0.9545  0.9876  0.99735

The probability that X takes its values in the interval [ m − 3σ, m + 3σ ] is 0.99735. In other words, on average, 99 735 products out of 100 000 will have a characteristic value that belongs to this interval. Thus, 100 000 − 99 735 = 265 products out of 100 000 will have a characteristic value that is either greater than m + 3σ or less than m − 3σ.

Now, assume that two characteristics X and Y are attached to each product. We denote by m_X and σ_X the mean value and the standard deviation of X, and by m_Y and σ_Y the mean value and the standard deviation of Y. If X and Y are independent of each other, then the probability to have X ∈ [ m_X − 3σ_X, m_X + 3σ_X ] and Y ∈ [ m_Y − 3σ_Y, m_Y + 3σ_Y ] is (0.99735)² = 0.99471, which means that, on average, 100 000 − 99 471 = 529 products (out of 100 000) have at least one characteristic with a value outside its interval, i.e., twice as many as in the case of one characteristic.

More generally, if r independent characteristics are involved, the probability for all the characteristics to take their values inside an interval of length 6σ is (0.99735)^r, and the probability that at least one characteristic takes its value outside the related 6σ interval is 1 − (0.99735)^r. If the intervals of length 6σ related to the r characteristics represent the tolerances defined at the design level, then 1 − (0.99735)^r is the probability of a defective product when r characteristics are involved. Table 6.4 shows how this probability increases with r.

As we can see, a tolerance of ±3σ is not acceptable, since it leads to about 5% of defective parts if 20 characteristics are involved, which is not rare. This is why the value of z has been taken equal to 6, which explains the name of the method: six sigma. Thus, from this point onwards, we will consider an interval of length 12σ: [ m − 6σ, m + 6σ ].
Table 6.4 Average number of defective products according to the number of characteristics for a tolerance of ±3σ

r                                     1     2     3     4      5      6      7      10     20
Probability (multiplied by 10³)       2.65  5.29  7.92  10.56  13.18  15.79  18.40  26.18  51.68
Defective products (out of 100 000)   265   529   792   1056   1318   1579   1840   2618   5168
With this new constraint, a single characteristic would lead to 0.002 defective parts per million. Motorola, which initiated the method, assumed that the characteristic mean can drift by 1.5σ in either direction. This prompts us to adopt the worst case, that is, the interval [ m − 4.5σ, m + 4.5σ ], which corresponds to 3.4 defective parts per million on average when one characteristic is concerned. Table 6.5 provides the new results according to the number of characteristics. According to Table 6.5, only 68 defective products will be detected in a set of one million products (on average) if 20 characteristics are checked. In other words, 0.0068% of the products are defective on average if 20 characteristics are concerned: this result is acceptable.

Table 6.5 Average number of defective products (out of one million) according to the number of characteristics for a tolerance of ±4.5σ

r                                 1    2    3     4     5   6     7     10  20
Probability (multiplied by 10⁶)   3.4  6.8  10.2  13.6  17  20.4  23.8  34  68
Six Sigma Outline

In practice, the tolerance is given either at the design level (mechanical tolerance) or at the management level (when competitiveness is at stake). Assume, for instance, that [ a, b ] is the tolerance of a characteristic. This means that, ideally, the characteristic should take its value in this interval. In this case, the mean value of the related random variable is:

m = \frac{a+b}{2}

Let us consider the case of the ±6σ interval. The goal is to manage the manufacturing process in order to obtain a standard deviation σ such that:

\frac{a+b}{2} - 6\sigma = a \quad \left( \text{or} \quad \frac{a+b}{2} + 6\sigma = b \right)

Transforming one of these equations, we obtain:

\sigma = \frac{b-a}{12}    (6.9)

The goal of the six sigma method is to improve the manufacturing process so as to reach a standard deviation bounded above by the right-hand side of Equation 6.9. Two questions remain open:
• How can we improve the manufacturing process?
• How can we evaluate the standard deviation σ?
Improvement of the Manufacturing Processes

Two key methodologies are applied to reach standard deviations of the characteristic values that are small enough to guarantee a limited number of defective products, as explained in the previous subsection.
• The first method is used to improve an existing process or product. It is called DMAIC, where D stands for “define”, M for “measure”, A for “analyze”, I for “improve” and C for “control”. The five stages are as follows:
– Stage “define”: formally define the goals to reach in order to improve customers’ satisfaction.
– Stage “measure”: measure the initial values taken by the characteristics of the products (or processes) for future comparison.
– Stage “analyze”: the objective is to establish relationships between management or design decisions and the values taken by the characteristics.
– Stage “improve”: the goal is now to use the results of the previous stage to decide what changes are to be introduced in the system.
– Stage “control”: at this last stage, the effects of the changes are checked and evaluated. Sometimes, the process restarts at the second stage (“measure”) for further improvements.
• The second method aims at establishing the activities to perform in order to design a product (or a process) that meets customers’ requirements. This method is called DMADV, where D stands for “define”, M for “measure”, A for “analyze”, D for “design” and V for “verify”. The five stages are as follows:
– Stage “define”: the goal is to define the design process that is supposed to meet customers’ requirements and the enterprise strategy.
– Stage “measure”: define the key measurable characteristics of the product (or process) under consideration, the product (or process) capabilities, and the risks that may be encountered at the production and utilization levels. In other words, this stage is mainly concerned with the analysis of the design and utilization environment.
– Stage “analyze”: generate feasible processes, analyze these processes and select the best one.
– Stage “design”: the objective is now to translate the selected design process into design instructions and to check their effectiveness.
– Stage “verify”: the goal is mainly to implement the production process. It is possible to restart the process at stage “measure” for further improvements.

Other approaches exist; they are derived from the previous ones. In all these methods, the goal is to tend towards the process that meets customers’ requirements and the enterprise strategy.

Evaluation of the Mean Value and the Standard Deviation

As mentioned before, the six sigma method is based on the reduction of the standard deviation in order to make sure that, with high probability, it is bounded above by the right-hand side of Equation 6.9. As a consequence, we must be able to evaluate the standard deviation of the values taken by any of the characteristics of the products (or processes). In this section, we show how to evaluate the mean value and the standard deviation of a characteristic X, assuming that X is Gaussian.

Let x_1, x_2, …, x_n be the values taken by X, for instance on n manufactured products. The mean value m and the standard deviation σ of this population are unknown. An estimate of m is:

m^* = \frac{1}{n} \sum_{i=1}^{n} x_i    (6.10)

An estimate of the standard deviation is:

\sigma^* = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} ( x_i - m^* )^2 }    (6.11)

Indeed, m^* and σ^* tend, respectively, towards m and σ as n tends to infinity. For a finite value of n, m could be quite different from m^* and σ from σ^*. If σ is known and n is “large enough” (in practice greater than 30), then:

m \in \left[ m^* - t \frac{\sigma}{\sqrt{n}},\; m^* + t \frac{\sigma}{\sqrt{n}} \right]    (6.12)
where:
• t = 1 if the probability for m to belong to Interval (6.12) is 0.683;
• t = 2 if the probability for m to belong to Interval (6.12) is 0.955;
• t = 3 if the probability for m to belong to Interval (6.12) is 0.997.

If n is greater than 30, we can use σ^* instead of σ in Equation 6.12. The interval in Equation 6.12 is the confidence interval related to the mean value, and its width depends on the probability for m to belong to it. If this probability were equal to 1, then the confidence interval would be (−∞, +∞).

Example
Consider a characteristic that has been measured on 33 products. The results are gathered in Table 6.6.

Table 6.6 Values of a Gaussian characteristic

X      x1     x2    x3     x4   x5     x6    x7    x8    x9     x10    x11
Value  10.01  9.95  10.03  10   10.02  9.98  9.99  9.97  10.01  10.01  9.99

X      x12    x13   x14    x15    x16    x17   x18   x19   x20    x21    x22
Value  10.02  9.98  10.01  10.02  10.01  9.97  9.98  9.99  10.01  10.01  9.99

X      x23   x24    x25    x26    x27   x28   x29    x30   x31    x32    x33
Value  9.99  10.01  10.02  10.03  9.98  9.97  10.03  9.99  10.02  10.02  9.99

Applying Relations 6.10 and 6.11, we obtain m^* = 10 and σ^* = 0.020462. The confidence interval with a probability of 0.955 is:

[ 10 - 2 × 0.020462/√33, 10 + 2 × 0.020462/√33 ] = [ 9.992876, 10.007124 ]
Thus, we can claim that m ∈ [ 9.992876, 10.007124 ] with probability 0.955.

Now, we look for a confidence interval for the standard deviation. We know that

\sum_{i=1}^{n} \frac{( x_i - m^* )^2}{\sigma^2}

is a random variable that obeys the χ² (chi-square) distribution with n − 1 degrees of freedom. Assume that we want to define a confidence interval with probability 0.95; in other words, we want to define an interval such that the standard deviation belongs to it with probability 0.95. To reach this goal, we use a table of the χ² distribution and read the values a_1 and a_2 that correspond, respectively, to the probabilities (1 − 0.95)/2 = 0.025 and 1 − 0.025 = 0.975 for n − 1 degrees of freedom. Finally, we solve the following equations:

\sum_{i=1}^{n} \frac{( x_i - m^* )^2}{\sigma_1^2} = a_2 \quad \text{and} \quad \sum_{i=1}^{n} \frac{( x_i - m^* )^2}{\sigma_2^2} = a_1    (6.13)

and the interval [ σ_1, σ_2 ] is the confidence interval.
Example

In this example we use the same data as in the previous example. Since n = 33, the number of degrees of freedom is 32. The χ² distribution table provides a_1 = 18.291 and a_2 = 49.480. Furthermore,

\sum_{i=1}^{n} ( x_i - m^* )^2 = 0.0134

Solving Equations 6.13 leads to σ_1² = 0.0134/49.480 = 2.708 × 10⁻⁴ and σ_2² = 0.0134/18.291 = 7.326 × 10⁻⁴. Finally, [ σ_1, σ_2 ] = [ 1.646 × 10⁻², 2.707 × 10⁻² ] is the interval that contains the standard deviation with probability 0.95. Note that this interval contains the estimate σ^* = 0.020462, as expected.
6.7 Conclusion

In this chapter, the most popular concepts currently encountered in production have been covered. The choice among these approaches for designing a system depends on the production to be performed. It is often possible to mix different methods to fit the problem at hand. In other words, this chapter provides some new ideas, but potential users do not have to select one of the concepts and discard the others.
References

Bruccoleri M, Pasek ZJ, Koren Y (2006) Operation management in reconfigurable manufacturing systems: reconfiguration for error handling. Int. J. Prod. Econ. 100:87–100
Dolgui A, Proth J-M (2006) Les Systèmes de Production Modernes. Hermes Science Publications, London
Imai M (1986) Kaizen: The Key to Japan's Competitive Success. McGraw-Hill/Irwin, New York, NY
Luss H (1982) Operation research and capacity expansion problems: A survey. Oper. Res. 30(5):907–947
Manne AS (1961) Capacity expansion and probabilistic growth. Econometrica 29(4):632–649
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2000) Reconfigurable manufacturing systems: key to future manufacturing. J. Intell. Manuf. 11:403–419
Further Reading

ASQ Statistics Division (2000) Improving Performance through Statistical Thinking. ASQ Quality Press, Milwaukee, WI
Baudin M (1990) Manufacturing Systems Analysis with Application to Production Scheduling. Yourdon Press and Prentice Hall, Englewood Cliffs, NJ
Bensoussan A, Crouhy J, Proth J-M (1983) Mathematical Theory of Production Planning, Advanced Series in Management. North-Holland, Amsterdam
Black JT (1991) The Design of the Factory with a Future. McGraw-Hill, New York, NY
Bodek N, Tozawa B (2001) The Idea Generator: Quick and Easy Kaizen. PCS, Vancouver, WA
Bonczek RH (1981) Foundation of Decision Support Systems. Academic Press, New York, NY
Buzacott JA (1995) A perspective of new paradigms in manufacturing. J. Manuf. Syst. 4(2):118–125
Cho H, Jung M, Kim M (1996) Enabling technologies of agile manufacturing and its related activities in Korea. Comput. Ind. Eng. 30(3):323–334
Dennis P (2002) Lean Production Simplified: A Plain-Language Guide to the World's Most Powerful Production System. Productivity Press, New York, NY
De Toni A, Tonchia S (1998) Manufacturing flexibility: a literature review. Int. J. Prod. Res. 36(6):1587–1617
Dinero D (2005) Training within Industry: The Foundation of Lean. Productivity Press, New York, NY
Dove R (1995) Design principles for agile production. Prod. Magaz. 107(12):16–18
Emiliani ML, Stec D, Grasso L, Stodder J (2003) Better Thinking, Better Results: Using the Power of Lean as a Total Business Solution. The CLBM, Kensington, Conn
Fliedner G, Vokurka RJ (1997) Agility: competitive weapon of the 1990s and beyond. Prod. Inv. Manag. J. 38(3):19–24
Flinchbaugh J, Carlino A (2006) The Hitchhiker's Guide to Lean: Lessons from the Road. SME, Dearborn, MI
Forsythe C (1997) Human factors in agile manufacturing: a brief overview with emphasis on communications and information infrastructure. Hum. Fact. Ergon. Manuf. 7(1):3–10
Forsythe C, Ashby MR (1996) Human factors in agile manufacturing. Ergon. Des. 4(1):15–21
Gold B (1982) CAM sets new rules for production. Harv. Bus. Rev. 60(6):88–94
Goldratt EM, Fox RE (1986) The Race. North River Press, New York, NY
Gunasekaran A (1998) Agile manufacturing: enablers and an implementation framework. Int. J. Prod. Res. 36(5):1223–1247
Harry M (2000) Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. Random House, New York, NY
Hirano H (1995) 5 Pillars of the Visual Workplace. Productivity Press, Portland, OR
Hirano H, Makota F (2006) JIT is Flow: Practice and Principles of Lean Manufacturing. PCS Press, Vancouver, WA
Hutchinson GK, Holland JR (1983) The economic value of flexible automation. J. Manuf. Syst. 1(2):215–228
Imai M (1997) Gemba Kaizen: A Commonsense, Low-Cost Approach to Management. McGraw-Hill, New York, NY
Iyer S, Nagi D (1997) Automated retrieval and ranking of similar parts in agile manufacturing. IIE Trans. 29(10):859–876
Joshi SB, Smith JS (eds) (1994) Computer Control of Flexible Manufacturing Systems, Research and Development. Chapman & Hall, London
Jung M, Chung MK, Cho H (1996) Architectural requirements for rapid development of agile manufacturing systems. Comput. Ind. Eng. 31:551–554
Katayama H, Bennett D (1999) Agility, adaptability and leanness: a comparison of concepts and a study of practice. Int. J. Prod. Econ. 60:43–51
Kusiak A, He DV (1997) Design for agile assembly: an operational perspective. Int. J. Prod. Res. 35(1):157–178
Lee GH (1997) Reconfigurability consideration design of components and manufacturing systems. Int. J. Adv. Manuf. Techn. 13(5):376–386
Lee GH (1998) Designs of components and manufacturing systems for agile manufacturing. Int. J. Prod. Res. 36(4):1023–1044
Lee J (1997) Overview and perspectives in Japanese manufacturing strategies and production practices in machinery industry. ASME J. Manuf. Sci. Eng. 119:726–731
Leflar JA (2001) Practical TPM: Successful Equipment Management at Agilent Technologies. Productivity Press, Portland, OR
Liker J (2003) The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill, New York, NY
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and reconfigurable manufacturing systems. J. Intell. Manuf. 13:135–146
Mansfield E (1997) New evidence on the economic effects and diffusion of FMS. IEEE Trans. Eng. Manag. 40(1):76–79
Mason-Jones R, Towill DR (1999) Total cycle time compression and the agile supply chain. Int. J. Prod. Econ. 62:61–73
Monden Y (1983) Toyota Production System. Industrial Engineering and Management Press, Norcross, GA
Naylor JB, Naim MM, Berry D (1999) Leagility: Integrating the lean and agile manufacturing paradigms in the total supply chain. Int. J. Prod. Econ. 62:107–118
Nee AYC, Whybrew K, Senthil Kumar A (1997) Advanced Fixture Design for FMS. Springer Verlag, New York, NY
Ohno T (1988) Toyota Production System: Beyond Large-Scale Production. Productivity Press, Portland, OR
Papadopoulos HT, Heavey C, Browne J (1993) Queuing Theory in Manufacturing Systems: Analysis and Design. Chapman & Hall, London
Pritschow G, Sperling W (1997) Modular system platform for open control systems. Prod. Eng. 4(2):77–80
Proth J-M, Hillion HP (1990) Mathematical Tools in Production Management. Plenum Press, New York and London
Pyzdek TH (2003) The Six Sigma Handbook. McGraw-Hill, New York, NY
Rinehart J, Huxley Ch, Robertson D (1997) Just Another Car Factory? Lean Production and Its Discontents. ILR Press, Ithaca, NY
Roby D (1995) Uncommon sense: lean manufacturing speeds cycle time to improve low-volume production at Hughes. Nat. Product. Rev. 14(2):79–87
Rogers GG, Bottaci L (1997) Modular production systems: a new manufacturing paradigm. J. Intell. Manuf. 8:147–156
Santos J, Wysk RA, Torres JM (2006) Improving Production with Lean Thinking. John Wiley & Sons, New York, NY
Sethi AK, Sethi SP (1990) Flexibility in manufacturing: a survey. Int. J. Flex. Manuf. Syst. 2:289–328
Sharp JM, Irani Z, Desai S (1999) Working towards agile manufacturing in the UK industry. Int. J. Prod. Econ. 62:155–169
Spear S, Bowen HK (1999) Decoding the DNA of the Toyota Production System. Harv. Bus. Rev. 77(5):97–106
Struebing L (1995) New approach to agile manufacturing. Qual. Progr. 28(12):18–19
Syam SS (1997) Model for the capacitated p-facility location problem in global environments. Comput. Oper. Res. 24(11):1005–1016
Tchijov I (1992) The diffusion of flexible manufacturing systems. In: Ayres RU, Haywood W, Tchijov I (eds), Computer Integrated Manufacturing, vol 2, Chapman & Hall, London, pp. 197–248
Viswanadham N, Narhari Y (1992) Performance Modeling of Automated Manufacturing Systems. Prentice-Hall, Englewood Cliffs, NJ
Wadell W, Bodek N (2005) The Rebirth of American Industry. PCS Press, Vancouver, WA
Waguespack K, Cantor B (1996) Oil inventories should be based on margins, supply reliability. Oil Gas J. 94(28):39–41
Wang ZY, Rajurkar KP, Kapoor A (1996) Architecture for agile manufacturing and its interface with computer integrated manufacturing. J. Mater. Proc. Techn. 61(1):99–103
Womack JP, Jones DT (1996) Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster, New York, NY
Chapter 7
Design and Balancing of Paced Assembly Lines
Abstract Paced assembly lines increase productivity and minimize costs in mass production. This concept is also pivotal in lean manufacturing. Thus, the chapter is devoted to these simple lines that manufacture only one type of product (or several types of products close to each other in terms of manufacturing processes). Two types of problems are analyzed: SALB-1 and SALB-2. For the former, the objective is to minimize the number of stations required to reach a given cycle time (takt time). For the latter, where the number of stations is given, the goal is to minimize the takt time. Some standard simple algorithms, such as RPW and KW, are presented. Special interest is given to the heuristic COMSOAL and its extensions; this algorithm is especially effective and easy to implement. To complete the review of existing approaches, we also present techniques based on branch-and-bound, simulated annealing, tabu search and genetic algorithms. Moreover, a linear programming model is suggested, which can be straightforwardly used with standard solvers on the market. Finally, a detailed section that deals with the properties of line-balancing solutions and ways to evaluate them ends the chapter.
7.1 Simple Production Line (SPL) and Simple Assembly Line (SAL)

The concept of the paced assembly line was introduced into manufacturing systems by Henry Ford in order to increase productivity and minimize costs in mass production. For paced assembly lines, line-balancing problems are crucial. Moreover, in lean manufacturing, balancing is at the heart of the approach (see Chapter 6). This is why this chapter and the next deal extensively with these issues.

The lines studied in this chapter are simple, which means that they manufacture only one type of product, or products that are close to each other in terms of manufacturing processes (the same sequence of operations, with operation times close to each other when the same operations are concerned). A simple paced line is composed of several stations that are visited in a given order. Each station is dedicated to a set of tasks (assembly or transformation) that are performed by workers and/or robots and/or machines, etc. The transportation system (a conveyor, for example) carries the products from station to station. A product stays at a station for a time period C, and this period is the same for all stations. Thus, if N stations are involved in the line, then the production cycle (or lead time) for a product, i.e., the period between the arrival of a product in the assembly line and its completion, is equal to N × C, assuming that the transportation times between the stations can be neglected. C is called the cycle time or takt time. In a paced assembly line, products are moved from one station to the next simultaneously. Figure 7.1 summarizes the progress of the products in a simple line composed of three stations S1, S2 and S3 over five consecutive periods C. Time t0 is the starting time of the production and the Pi are the identical products that are manufactured.

The operations performed at the stations may be subject to precedence constraints. If operation A precedes operation B, then A is performed either at the same station as B or at a station that precedes B in the line. For instance, considering the line represented in Figure 7.1:
• If B is performed at station S3, then A is performed at S3, S2 or S1.
• If B is performed at station S2, then A is performed at S2 or S1.
• If B is performed at station S1, then A is performed at S1.
Period            S1   S2   S3   Completed products
[t0, t0+C]        P1   –    –    –
[t0+C, t0+2C]     P2   P1   –    –
[t0+2C, t0+3C]    P3   P2   P1   –
[t0+3C, t0+4C]    P4   P3   P2   P1
[t0+4C, t0+5C]    P5   P4   P3   P1, P2

Figure 7.1 Progress of production in a paced line
Ii: raw material or semi-finished product inventory; Sk (k = 1, …, n): stations; Io: finished product inventory

Figure 7.2 A production line
Note that an operation is always performed at a single station (it is not divisible, i.e., it is impossible to execute an operation on two consecutive stations). Another production line is represented in Figure 7.2. In a production line, the products visit stations S1, …, Sn successively. A set of operations is performed at each station. Each operation transforms the product, which results in added value. Examples of paced lines can be found in the following domains, among others:
• automotive and electronics industries;
• spare parts (for cars, household appliances, heating systems, etc.);
• some processed foods.
Figure 7.3 represents an assembly line.
Ii: raw material or semi-finished product inventory; Sk (k = 1, …, n): stations (components are introduced at some stations); Io: finished product inventory

Figure 7.3 An assembly line
The only difference between assembly and production lines is that in an assembly line components are introduced into the system at some stations to be assembled with the principal products. Thus, some operations performed at these stations are assembly operations. Usually, components are introduced on a just-in-time (JIT) basis. Numerous examples of assembly lines can be mentioned, for example in the manufacturing of:
• cars;
• household appliances;
• television sets;
• processed foods;
• etc.
In this chapter, we are interested in balancing simple assembly and production lines. The aim is to optimize the load of the stations with regard to different criteria. There is no difference between an assembly and a production line in this regard. This is why we will only refer to simple assembly lines in the remainder of this chapter.
7.2 Simple Assembly Line Balancing (SALB)

The SALB problem is defined as follows:
• All the parameters of the problem are known.
• Each operation is performed on a single station; in other words, an operation cannot start on one station and be completed on another.
• A partial order is usually attached to the set of operations (precedence constraints).
• All the operations required to complete a product must be performed on each single product.
• An operation can be executed on any station; this is the consequence of the presence of employees at the stations and the fact that the same resources are available at each station.
• The operation times are deterministic and do not depend on the station that performs the operation.
• The stations are visited in a given order.
• A single product type is processed on a simple assembly line.
• Either the cycle time (takt time) C or the number N of stations is given.
When C is given, the goal is to define the number N of stations in order to optimize a criterion. Conversely, when N is given, the goal is to define the value of C that optimizes a criterion. When C is given, the problem is denoted by SALB-1; when the number N of stations is known, it is denoted by SALB-2.
Let us set CS = N × C as the total time available to assemble a product. We also set T* = ∑_{i=1}^{N} Ti, where Ti is the sum of the operation times of the operations assigned to station Si. Indeed, Ti ≤ C, ∀i = 1, …, N. An objective can be to reduce the sum of the station idle times, that is:
IT = CS − T*    (7.1)
Another criterion is to minimize:

K = max_{i=1,…,N} (C − Ti)    (7.2)
Sometimes, C − Ti is called the “safety margin” of station i. Note that minimizing (7.2) is much more sensitive to assignment changes than minimizing (7.1) since the latter decreases only if the number of stations decreases. In the next two sections, we propose simple heuristics to reach a solution for SALB-1 and SALB-2, respectively.
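As a quick numerical illustration (an illustrative helper, not one of the book's algorithms), the two criteria can be computed from the station workloads Ti and the takt time C:

```python
def balancing_criteria(station_loads, C):
    """Return (IT, K): IT = N*C - sum(T_i) is Criterion 7.1,
    K = max_i (C - T_i) is Criterion 7.2."""
    assert all(0 <= t <= C for t in station_loads)
    N = len(station_loads)
    IT = N * C - sum(station_loads)
    K = max(C - t for t in station_loads)
    return IT, K

# Four stations with loads 38, 38, 38 and 39, takt time C = 40:
print(balancing_criteria([38, 38, 38, 39], 40))  # -> (7, 2)
```

Note how IT only improves when a whole station's worth of capacity is saved, whereas K reacts to any change in the least-loaded station.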
7.3 Problem SALB-1

7.3.1 Common Sense Approach

The goal is to optimize Criterion 7.1. In this section, we do not suggest an optimal algorithm, but only a common sense approach (a heuristic) to introduce the notion of line balancing.
The proposed algorithm is iterative. At each iteration, we consider the set W of unassigned operations that have no predecessor or whose predecessors have all been assigned to a station. The operations of W are examined in decreasing order of their operation times and assigned, if possible, to the station that is currently under consideration. If none of these operations can be assigned to the current station, a new station is opened and the assignment process is restarted. The algorithm stops when all the operations are assigned to stations. The basic idea behind this algorithm is that the operations that are the most difficult to assign are the ones with the greatest operation times. The SALB-1-1 algorithm is presented hereafter, see Algorithm 7.1.

Algorithm 7.1. (SALB-1-1)
1. Set i = 1. The value of variable i is the rank of the station currently under consideration.
2. Set Q = C, where Q is the remaining time available at the current station.
3. Build W as defined above.
4. Consider the operations of W in decreasing order of their operation times.
   4.1. If an operation having an operation time θ less than or equal to Q is found, then:
      4.1.1. Assign this operation to station i.
      4.1.2. Set Q = Q − θ.
      4.1.3. Build the new set W.
      4.1.4. If W is empty, then stop the algorithm, otherwise go to 4.
   4.2. If none of the operations of W can be assigned to station i, then:
      4.2.1. Set i = i + 1.
      4.2.2. Go to 2.
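Algorithm 7.1 can be transcribed compactly in Python. The sketch below follows the steps above; breaking ties between equal operation times alphabetically is our own assumption (the algorithm does not specify a tie-breaking rule), and we assume C is at least the largest operation time.

```python
def salb_1_1(times, preds, C):
    """Greedy SALB-1-1 heuristic: fill each station with the longest
    assignable operation that still fits, then open a new station."""
    assert C >= max(times.values()), "C must cover the longest operation"
    unassigned, done, stations = set(times), set(), []
    while unassigned:
        Q, station = C, []       # Q: remaining time at the current station
        while True:
            # W: unassigned operations whose predecessors are all assigned,
            # restricted here to those fitting in the remaining time Q
            fitting = sorted((op for op in unassigned
                              if set(preds[op]) <= done and times[op] <= Q),
                             key=lambda o: (-times[o], o))
            if not fitting:
                break            # nothing fits any more: open a new station
            op = fitting[0]      # longest candidate first
            station.append(op)
            done.add(op)
            unassigned.discard(op)
            Q -= times[op]
        stations.append(station)
    return stations

# Data of Table 7.1 (operation times and direct predecessors), C = 40
TIMES = dict(A=10, B=12, C=7, D=8, E=20, F=4, G=11, H=6, I=9,
             J=12, K=15, L=13, M=9, N=8, O=9)
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
print(salb_1_1(TIMES, PREDS, 40))
```

With C = 40 this reproduces the five stations of Table 7.2: {B, A, I, D}, {G, J, L, F}, {C, E, H}, {K, M, N}, {O}.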
Note that Criterion 7.1 does not appear in Algorithm 7.1. Thus, there is no direct reason to obtain a "good" solution to SALB-1 according to Criterion 7.1 using this algorithm. Nevertheless, since the strategy behind SALB-1-1 tends to reduce the number of stations, it indirectly leads to a small value of Criterion 7.1.

Numerical Example

Consider a manufacturing process composed of 15 operations denoted by A, B, …, O. The operation times and the direct predecessors of the operations (partial order) are given in Table 7.1. Furthermore, the cycle time (or takt time) is C = 40. The partial order on the set of operations is also represented in Figure 7.4.

Table 7.1 Operations and direct predecessors

Operation       A     B     C     D     E
Operation time  10    12    7     8     20
Predecessors    /     /     /     A     B, C

Operation       F     G     H        I     J
Operation time  4     11    6        9     12
Predecessors    /     D     D, E, F  /     I, G

Operation       K     L     M     N     O
Operation time  15    13    9     8     9
Predecessors    G, H  J     J, K  M     L, N
Figure 7.4 Graph of the partial order on the set of operations
Table 7.2 Application of algorithm SALB-1-1

Set W           Station number  Task assigned  Remaining time in the station
A, B, C, F, I   1               B              40 − 12 = 28
A, C, F, I      1               A              28 − 10 = 18
D, C, F, I      1               I              18 − 9 = 9
D, C, F         1               D              9 − 8 = 1
C, F, G         2               G              40 − 11 = 29
C, F, J         2               J              29 − 12 = 17
C, F, L         2               L              17 − 13 = 4
C, F            2               F              4 − 4 = 0
C               3               C              40 − 7 = 33
E               3               E              33 − 20 = 13
H               3               H              13 − 6 = 7
K               4               K              40 − 15 = 25
M               4               M              25 − 9 = 16
N               4               N              16 − 8 = 8
O               5               O              40 − 9 = 31
We applied algorithm SALB-1-1 to this example. The different steps of this algorithm are summarized in Table 7.2. If we take as a criterion K = max_{i=1,…,5} (C − Ti), then we obtain K = 31.
As we can see, the algorithm SALB-1-1 is not satisfactory for this example if Criterion 7.2 is considered. Since the value of the criterion is 31, 77.5% of the time available at station 5 remains untapped. In the next section, we show how Algorithm 7.1 (SALB-1-1) can be significantly improved by introducing randomness. As will be shown, the improvement of the algorithm results mainly from:
• A drastic increase in the number of random trials, which increases the probability of reaching the optimal solution.
• Relaxation of the search constraint (that requires examining the operations of W in decreasing order of their times), which enlarges the search domain.
This algorithm with random choices converges in probability to the optimal solution. In this book, we focus on methods for practical applications. Therefore, we selected simple but efficient algorithms and discarded more complex algorithms that did not significantly outperform those selected. Hereafter, we present COMSOAL, one of the most effective and simple heuristics for industrial applications available in the literature. We will also mention the algorithms RPW and KW, as well as branch and bound (B&B) approaches.
7.3.2 COMSOAL Algorithm

COMSOAL is the acronym of COmputer Method of Sequencing Operations for Assembly Lines (Arcus, 1966). The COMSOAL algorithm developed in this section differs from SALB-1-1 mainly in the following respects:
• The set of operations W is replaced by W1, the set of unassigned operations that have no predecessor or whose predecessors have already been assigned to a station, and whose processing time is less than or equal to the remaining available time in the station under consideration (current station).
• The task to be assigned to the station under consideration is chosen at random in W1.
• The algorithm is run many times and one keeps the best solution, i.e., the solution with the smallest criterion value.
Taking these remarks into account leads to the COMSOAL algorithm, see Algorithm 7.2. The same notations as previously are used. In this algorithm, Z is the number of trials decided by the user. The best solution will be stored in (S*, K*).

Algorithm 7.2. (COMSOAL)
1. For z = 1, …, Z do:
   1.1. Set i = 1.
   1.2. Set Q = C (Q is the remaining time available at the current station).
   1.3. Build W1 (defined above).
   1.4. If all the operations have been assigned (i.e., the z-th trial is completed), then:
      1.4.1. Compute the criterion value K corresponding to the assignment (the solution) S.
      1.4.2. If (z = 1), then:
         1.4.2.1. Set S* = S.
         1.4.2.2. Set K* = K.
      1.4.3. If (z > 1) and (K < K*), then:
         1.4.3.1. Set S* = S.
         1.4.3.2. Set K* = K.
   1.5. If W1 is empty and some operations are still not assigned, then:
      1.5.1. Set i = i + 1.
      1.5.2. Go to 1.2.
   1.6. If W1 is not empty, then:
      1.6.1. Select an operation at random in W1 and assign it to the current station (i.e., station i).
      1.6.2. Set Q = Q − θ, where θ is the operation time of the selected operation.
      1.6.3. Go to 1.3.
2. End of loop z.
3. Print S* and K*.
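A minimal Python sketch of Algorithm 7.2 follows (our own transcription; the seed, the trial count and the feasibility assumption C ≥ max θi are ours):

```python
import random

def comsoal(times, preds, C, Z, seed=0):
    """COMSOAL: Z random trials; keep the assignment with the smallest
    value of Criterion 7.2, K = max_i (C - T_i)."""
    assert C >= max(times.values())
    rng = random.Random(seed)
    best_S, best_K = None, None
    for _ in range(Z):
        unassigned, done, stations = set(times), set(), []
        Q, station = C, []
        while unassigned:
            # W1: assignable operations that fit in the remaining time Q
            W1 = [op for op in unassigned
                  if set(preds[op]) <= done and times[op] <= Q]
            if not W1:                     # Step 1.5: open a new station
                stations.append(station)
                Q, station = C, []
                continue
            op = rng.choice(W1)            # Step 1.6.1: random selection
            station.append(op)
            done.add(op)
            unassigned.discard(op)
            Q -= times[op]
        stations.append(station)
        K = max(C - sum(times[o] for o in s) for s in stations)
        if best_K is None or K < best_K:   # Steps 1.4.2 / 1.4.3
            best_S, best_K = stations, K
    return best_S, best_K

# Data of Table 7.1
TIMES = dict(A=10, B=12, C=7, D=8, E=20, F=4, G=11, H=6, I=9,
             J=12, K=15, L=13, M=9, N=8, O=9)
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
S, K = comsoal(TIMES, PREDS, C=40, Z=200, seed=1)
print(len(S), K)
```

On the Table 7.1 data, a few hundred trials typically suffice to find a four-station solution with K = 2, as reported in Table 7.4, although the exact result depends on the random draws.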
Numerical Example

We apply Algorithm 7.2 (COMSOAL) to the example presented in Section 7.3.1 with Criterion 7.2. The first trial leads to the solution presented in Table 7.3.

Table 7.3 Solution obtained at the first trial

Station  Operations assigned to the station  Remaining available time
1        C, I, F, A, D                       2
2        G, J, L                             4
3        B, E, H                             2
4        K, M, N                             8
5        O                                   31
The solution presented in Table 7.3 is not the same as that obtained using Algorithm SALB-1-1. The reason is that COMSOAL generates assignments randomly. We ran the program with Z = 100. The best solution was obtained at the third attempt. This solution is presented in Table 7.4 and Figure 7.5, where the reader can see that the precedence constraints are satisfied. Criterion 7.2 is equal to 2. Some essential remarks should be made at this point:
1. Since IT < C (see the definitions of IT and C in Section 7.2), we can claim that the above solution minimizes the number of stations for the given parameters. Inequality IT < C is a sufficient (but not necessary) condition for the number of stations to be minimal. Moreover, if we examine the distribution of the total idle time among the stations, we can see that this solution is also optimal for Criterion 7.1.
2. We observe that some operations have one or more predecessors in the same station. This is the case for H at Station 2, J and K at Station 3, and N and O at Station 4. These precedence constraints are managed at the station level.
3. It is possible to run the COMSOAL algorithm replacing Criterion 7.2 by Criterion 7.1. Minimizing Criterion 7.1 is equivalent to minimizing the number of stations.
4. In Algorithm COMSOAL, the best solution tends in probability toward the optimum as Z → +∞.
Table 7.4 Best solution

Station  Operations assigned to the station  Remaining available time
1        I, A, C, B                          2
2        E, D, F, H                          2
3        G, K, J                             2
4        M, L, N, O                          1
Figure 7.5 The final result
We also applied the COMSOAL algorithm to the same example with Criterion 7.1 for different values of C, Z remaining constant and equal to 200. The results are given in Table 7.5. As we can see, the three solutions are optimal for Criterion 7.1 since the total idle time is less than the takt time (i.e., the cycle time).
7.3.3 Improvement of COMSOAL

In Algorithm 7.2, a new station is opened when W1 is empty and some operations are still not assigned to a station (see Step 1.5 of the algorithm). When Criterion 7.2 is used, an improvement consists in allowing a new station to be opened even if W1 is not empty and some operations are still waiting for assignment. However, the probability of opening a new station under these conditions should be low (typically less than 0.05).
Table 7.5 Some results with different takt times

C   Station  Operations          Remaining time  Number of trials to obtain the solution
30  1        B, I, F             5               74
    2        A, C, D             5
    3        E, H                4
    4        G, K                4
    5        J, L                5
    6        M, N, O             4
50  1        I, C, B, E          2               184
    2        F, A, D, G, J       5
    3        H, K, M, N          12
    4        L, O                28
55  1        A, C, B, F, D, I    5               12
    2        E, H, G, K          3
    3        J, M, N, L, O       4
When Criterion 7.1 is used, it is possible to introduce another degree of freedom by authorizing the assignment of any task not only to the last station opened (current station), but to any available station, provided that the precedence constraints are satisfied. Other possibilities are mentioned in the literature:
• One of these improvements consists of modifying Step 1.6.1 of the COMSOAL algorithm by attaching a specific probability pi to each operation i ∈ W1. This is the probability of selecting operation i at random. Thus, the probability of randomly selecting an operation in W1 is no longer the same for all the operations (i.e., 1 / card(W1), where card(W1) is the number of operations in W1), as it was in Algorithm 7.2. For instance, pi can be proportional to the sum of the times of operation i and all its successors. The idea behind this choice is to give priority to the operations that may lead to an excessive number of stations due to the time necessary to perform all their successors. Of course, ∑_{i∈W1} pi = 1.
• Another enhancement consists of calling the previous assignments into question, in other words, performing backtracking. Nevertheless, applying this modification makes the algorithm more complex.
7.3.4 RPW Method

The acronym RPW stands for Ranked Positional Weight (Helgeson and Birnie, 1961). In most cases this basic approach is not as effective as COMSOAL, but it can be used manually for small-sized problems. That is why it is often used for teaching. RPW is a deterministic single-pass algorithm that assigns operations to stations in decreasing order of the weights wi of the operations i ∈ {1, …, n}. The weight wi is the sum of the operation time of i and the operation times of all the successors of i:

wi = θi + ∑_{s∈Sc(i)} θs    (7.3)

where Sc(i) is the set of successors of operation i and θs is the operation time of s. The RPW algorithm is derived from the algorithm SALB-1-1 by replacing Step 4 with:
"4. Consider the operations of W in decreasing order of their weights defined by Relation 7.3."

Numerical Example

We illustrate the RPW algorithm using the same example as in Section 7.3.1.

Table 7.6 Weights of operations
Operation  Successors                                                 Weight
A (10)     D(8), G(11), H(6), J(12), K(15), L(13), M(9), N(8), O(9)   101
B (12)     E(20), H(6), K(15), M(9), N(8), O(9)                       79
C (7)      E(20), H(6), K(15), M(9), N(8), O(9)                       74
D (8)      G(11), H(6), J(12), K(15), L(13), M(9), N(8), O(9)         91
E (20)     H(6), K(15), M(9), N(8), O(9)                              67
F (4)      H(6), K(15), M(9), N(8), O(9)                              51
G (11)     J(12), K(15), L(13), M(9), N(8), O(9)                      77
H (6)      K(15), M(9), N(8), O(9)                                    47
I (9)      J(12), L(13), M(9), N(8), O(9)                             60
J (12)     L(13), M(9), N(8), O(9)                                    51
K (15)     M(9), N(8), O(9)                                           41
L (13)     O(9)                                                       22
M (9)      N(8), O(9)                                                 26
N (8)      O(9)                                                       17
O (9)      /                                                          9
The weights of all the operations are computed and given in Table 7.6; the operation times are between parentheses. With the notations introduced previously:
• In the first iteration:
  – Q = 40.
  – W = {A, B, C, I, F}.
  – A having the greatest weight, it is assigned to Station 1.
• In the second iteration:
  – Q = 40 − 10 = 30.
  – W = {D, B, C, I, F}.
  – D having the greatest weight and its operation time being less than Q, it is assigned to Station 1.
The rest of the algorithm proceeds identically. We summarize the steps of the computation in Table 7.7. Figure 7.6 represents the solution obtained by applying the RPW algorithm. It should be noted that 4 stations are used, which is the same number as that obtained by applying COMSOAL with Criterion 7.2, see Section 7.3.2. Nevertheless, the value of Criterion 7.2 is better when using COMSOAL. This is due to the fact that RPW implicitly favors Criterion 7.1.

Table 7.7 Solution obtained by applying the RPW method

Set W           Station number  Task assigned  Remaining time in the station
A, B, C, F, I   1               A              40 − 10 = 30
B, C, D, F, I   1               D              30 − 8 = 22
B, C, F, G, I   1               B              22 − 12 = 10
C, F, I         1               C              10 − 7 = 3
E, F, G, I      2               G              40 − 11 = 29
E, F, I         2               E              29 − 20 = 9
F, I            2               I              9 − 9 = 0
F, J            3               F              40 − 4 = 36
H, J            3               J              36 − 12 = 24
H, L            3               H              24 − 6 = 18
K, L            3               K              18 − 15 = 3
M, L            4               M              40 − 9 = 31
N, L            4               L              31 − 13 = 18
N               4               N              18 − 8 = 10
O               4               O              10 − 9 = 1
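The RPW method is short enough to transcribe directly (a sketch; ties between equal weights, such as F and J at 51, are broken alphabetically here, which happens to match Table 7.7):

```python
def rpw(times, preds, C):
    """Ranked Positional Weight: SALB-1-1 with candidates taken in
    decreasing order of w_i = theta_i + sum of all successors' times."""
    succ = {op: set() for op in times}           # direct successors
    for op, ps in preds.items():
        for p in ps:
            succ[p].add(op)

    def descendants(op):                         # all transitive successors
        out, stack = set(), list(succ[op])
        while stack:
            s = stack.pop()
            if s not in out:
                out.add(s)
                stack.extend(succ[s])
        return out

    weight = {op: times[op] + sum(times[s] for s in descendants(op))
              for op in times}

    unassigned, done, stations = set(times), set(), []
    while unassigned:
        Q, station = C, []
        while True:
            fitting = sorted((op for op in unassigned
                              if set(preds[op]) <= done and times[op] <= Q),
                             key=lambda o: (-weight[o], o))
            if not fitting:
                break
            op = fitting[0]
            station.append(op)
            done.add(op)
            unassigned.discard(op)
            Q -= times[op]
        stations.append(station)
    return weight, stations

# Data of Table 7.1
TIMES = dict(A=10, B=12, C=7, D=8, E=20, F=4, G=11, H=6, I=9,
             J=12, K=15, L=13, M=9, N=8, O=9)
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
weight, stations = rpw(TIMES, PREDS, 40)
print(weight['A'], stations)
# -> 101 [['A', 'D', 'B', 'C'], ['G', 'E', 'I'], ['F', 'J', 'H', 'K'], ['M', 'L', 'N', 'O']]
```

The computed weights match Table 7.6 and the station contents match Table 7.7.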
Figure 7.6 Representation of the solution of Table 7.7
We ran the algorithm for the same values of C as those tested with COMSOAL. The results are collected in Table 7.8. The number of stations (Criterion 7.1) is the same as in Table 7.5 for each of the three given values of the cycle time C. Nevertheless, the assignments of operations to stations differ from those reported in Table 7.5 (the values of Criterion 7.2 are better for the solutions in Table 7.5).

Table 7.8 Some examples of RPW application

C   Station  Operations          Remaining time
30  1        A, B, D             0
    2        C, G, I             3
    3        E, F, H             0
    4        J, K                3
    5        L, M, N             0
    6        O                   21
50  1        A, B, C, D, G       2
    2        E, F, I, J          5
    3        H, K, L, M          7
    4        N, O                33
55  1        A, B, C, D, F, G    3
    2        E, H, I, J          8
    3        K, L, M, N, O       1
7.3.5 Kilbridge and Wester (KW)-like Heuristic

In the heuristic presented in (Kilbridge and Wester, 1961), operations are organized into levels. Level 1 is composed of the operations without predecessors. Level 2 is composed of the operations whose predecessors are at level 1. More generally, level i is composed of the operations having at least one direct predecessor at level i − 1. Levels are created until all the operations are assigned to a level. Table 7.9 presents the distribution of the operations over the levels for the example of Table 7.1 and Figure 7.4.

Table 7.9 Organization into levels

Level  Operations
1      A, B, C, F, I
2      D, E
3      G, H
4      J, K
5      L, M
6      N
7      O
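Under the reading that an operation's level is one more than the highest level among its predecessors (which reproduces Table 7.9), the levels can be computed as follows (an illustrative sketch):

```python
def kw_levels(preds):
    """Level 1: operations without predecessors; otherwise an operation
    sits one level above its highest-level predecessor."""
    level = {}

    def lv(op):
        if op not in level:
            level[op] = 1 + max((lv(p) for p in preds[op]), default=0)
        return level[op]

    for op in preds:
        lv(op)
    return level

# Direct predecessors of Table 7.1
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
levels = kw_levels(PREDS)
print(sorted(op for op in levels if levels[op] == 3))  # -> ['G', 'H']
```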
The assignment of operations to stations is done in increasing order of the levels. It is possible to complete a station with operations of the next level, provided that the precedence constraints are satisfied. At each level, if not all the operations of this level can be assigned to the station at hand, then they are selected using a rule (decreasing order of operation times, or decreasing order of the weights as in the RPW algorithm, for instance). A KW-like heuristic that assigns operations one by one was applied to the previous example. When there were several possibilities, operation weights were used (see Table 7.6). The sequence of iterations is presented in Table 7.10 for C = 40; the weights are mentioned between parentheses, and the last column gives the remaining time in the station after each assignment. The same solution is obtained as when using COMSOAL (see Table 7.4).
Table 7.10 Application of a KW-like heuristic

Station  Candidates                          Assigned operation  Remaining time in the station
1        A(101), B(79), C(74), F(51), I(60)  A                   40 − 10 = 30
1        B(79), C(74), F(51), I(60)          B                   30 − 12 = 18
1        C(74), F(51), I(60)                 C                   18 − 7 = 11
1        F(51), I(60)                        I                   11 − 9 = 2
2        F(51)                               F                   40 − 4 = 36
2        D(91), E(67)                        D                   36 − 8 = 28
2        E(67)                               E                   28 − 20 = 8
2        G(77), H(47)                        H                   8 − 6 = 2
3        G(77)                               G                   40 − 11 = 29
3        J(51), K(41)                        J                   29 − 12 = 17
3        K(41)                               K                   17 − 15 = 2
4        L(22), M(26)                        M                   40 − 9 = 31
4        L(22)                               L                   31 − 13 = 18
4        N(17)                               N                   18 − 8 = 10
4        O(9)                                O                   10 − 9 = 1

7.3.6 Branch and Bound (B&B) Approaches

At this point of the presentation, readers are assumed to be familiar with the philosophy of the B&B approach (see Appendix C).
Substantial research work has been done around the use of B&B to solve line-balancing problems. Algorithms like FABLE (Johnson, 1988), EUREKA (Hoffmann, 1992) or SALOME (Scholl and Klein, 1997) theoretically lead to an optimal solution, but they are usually stopped before reaching this solution due to the computational burden incurred, which turns these approaches into heuristics. Let us briefly introduce the EUREKA approach. The notations used hereafter are those of the previous sections.

T* = ∑_{i=1}^{n} θi is the sum of all the operation times. A lower bound of the number of stations required to reach the cycle time C is:

N* = ⌈T*/C⌉

If a solution composed of N* stations exists, then the sum of the idle times of the stations is equal to:

K* = N* × C − T*
In the B&B tree, each node represents a station. Assume that a node r has just been introduced (that is to say that a station has just been completed), and let R0 be the set of predecessors of r in the B&B tree (or, equivalently, the set of stations obtained prior to r). We set R1 = R0 ∪ {r}. We also denote by T1* the sum of the times of the operations assigned to R1 and by N1 the number of nodes in R1. With these definitions,

N1* = ⌈(T* − T1*) / C⌉

is a lower bound of the number of stations still required to accommodate the unassigned operations.
If N1 + N1* > N*, then we do not continue extending the B&B tree from r and return to an immediate predecessor r⁻ of r. If there exists a successor of r⁻ that has not been explored yet, then we deal with it as done with r, otherwise we consider the predecessor of r⁻, and so on. If N1 + N1* ≤ N*, then we deal with the successors of r as done with r. If a leaf of the tree is reached, then an optimal solution is obtained; otherwise the whole process is restarted after setting N* = N* + 1, and so on. Indeed, we assume that:

C ≥ max_{i∈{1,…,n}} θi

This condition is necessary and sufficient for a feasible solution to exist. Hoffmann (1992) introduced a heuristic rule to decide the order in which the tree branches are explored: if all the successors of a given node are built, the tree is extended from the node that represents the station having the minimum idle time.
7.3.7 Mathematical Formulation of a SALB-1 Problem

We use the notations introduced in Section 7.2. Assume that the goal is to minimize Criterion 7.2 and that the cycle time C is given. We do not know the number of stations required in order to reach this cycle time, but a lower bound N* on this number is known:

N* = ⌈T*/C⌉

where T* is the sum of all the operation times and ⌈x⌉ represents the smallest integer greater than or equal to x. Thus, the optimal solution to the problem will be obtained with either N* stations, or N* + 1 stations, or N* + 2 stations, etc. Assume that the number of required stations N is known. Let us introduce some additional notations:
• r is the total number of operations.
• θj, j ∈ {1, …, r}, is the time of operation j.
• Pd(j) is the set of direct predecessors of j ∈ {1, …, r}.
• xj,i = 1 if operation j is assigned to station i, and 0 otherwise, for j ∈ {1, …, r} and i ∈ {1, …, N}.
With these notations, the considered SALB-1 is formulated as follows:

Min [ max_{i=1,…,N} ( C − ∑_{j=1}^{r} xj,i θj ) ]    (7.4)

Expression 7.4 indicates that the objective is to minimize Criterion 7.2. The constraints are:

∑_{i=1}^{N} i xu,i ≤ ∑_{i=1}^{N} i xj,i   for any j ∈ {1, …, r} and u ∈ Pd(j)    (7.5)

Relations 7.5 guarantee that a predecessor of an operation j is assigned either to the same station as j or to a preceding station.

∑_{i=1}^{N} xj,i = 1   for j = 1, …, r    (7.6)

Relations 7.6 are introduced to make sure that each operation is assigned to exactly one station.

∑_{j=1}^{r} xj,i θj ≤ C   for i = 1, …, N    (7.7)

Relations 7.7 guarantee that the sum of the operation times related to a station does not exceed the cycle time.
Finally, Constraints 7.8 are introduced to make sure that the variables xj,i are binary:

xj,i ∈ {0, 1}   for i = 1, …, N and j = 1, …, r    (7.8)

Constraints 7.5 to 7.8 are linear, but Criterion 7.4 is not. This criterion can be rewritten as follows:

Minimize Z    (7.9)

with:

Z ≥ C − ∑_{j=1}^{r} xj,i θj   for i = 1, …, N

This can be rewritten as:

Z + ∑_{j=1}^{r} xj,i θj ≥ C   for i = 1, …, N    (7.10)

Finally, the optimization of the considered SALB-1 leads to the solution of the problem composed of Criterion 7.9, Constraints 7.5 to 7.8, and 7.10. This is a mixed integer linear programming (MIP) problem, usually solved using a branch and bound approach. We solve this MIP problem successively for N = N*, N = N* + 1, N = N* + 2, etc. The first solution obtained in this way is optimal for the given SALB-1 problem.
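To make the formulation concrete, the sketch below encodes the four-station solution of Table 7.4 as binary variables x[j][i] and verifies Constraints 7.5 to 7.7 directly (a checking helper, not a MIP solver):

```python
TIMES = dict(A=10, B=12, C=7, D=8, E=20, F=4, G=11, H=6, I=9,
             J=12, K=15, L=13, M=9, N=8, O=9)
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
C, N = 40, 4
# Station of each operation in the solution of Table 7.4
STATION = dict(I=1, A=1, C=1, B=1, E=2, D=2, F=2, H=2,
               G=3, K=3, J=3, M=4, L=4, N=4, O=4)
# Binary variables x[j][i] (Constraints 7.8)
x = {j: {i: int(STATION[j] == i) for i in range(1, N + 1)} for j in TIMES}

def station_index(j):                 # sum over i of i * x_{j,i}
    return sum(i * x[j][i] for i in range(1, N + 1))

# (7.5): a predecessor is never assigned to a later station
assert all(station_index(u) <= station_index(j)
           for j in TIMES for u in PREDS[j])
# (7.6): each operation is assigned to exactly one station
assert all(sum(x[j].values()) == 1 for j in TIMES)
# (7.7): every station workload fits within the cycle time
loads = [sum(x[j][i] * TIMES[j] for j in TIMES) for i in range(1, N + 1)]
assert all(load <= C for load in loads)
print(loads, max(C - load for load in loads))  # -> [38, 38, 38, 39] 2
```

The last line evaluates the linearized objective Z of (7.9)–(7.10) for this feasible point, confirming the value of Criterion 7.2 reported in Section 7.3.2.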
7.4 Problem SALB-2

This problem consists in minimizing the cycle time C, knowing the number of stations N. In SALB-1, the goal was to reach a given productivity with a minimal investment level. In SALB-2, the investment is known and the goal is to maximize the throughput. Actually, SALB-2 is undoubtedly less common than SALB-1.
7.4.1 Heuristic Algorithm

The common sense idea behind this heuristic is very simple. Each time a new station is created (including the first station), the sum of the times of the unassigned operations is divided by the number of stations that remain empty. Let X be this ratio. If the station under consideration is the first, then X is a lower bound of the cycle time required to perform all the operations with N stations. We assign to this station the operations that have no predecessors, or whose predecessors have already been assigned, as long as the sum of the operation times does not exceed X. The assignment is made in decreasing order of operation times. When no other candidate can be assigned to the station, a new station is created. If the new station under consideration is not the first, then we have to consider two possibilities:
• If the new value of X is less than or equal to the previous one, then we proceed as for the first station.
• Otherwise, we restart the whole process from the first station after increasing the value of the initial X by a "small" value δ.
The notations used in Algorithm SALB-2-1 (see Algorithm 7.3) are those introduced in Section 7.2.

Algorithm 7.3. (SALB-2-1)
1. Set s = 1. Variable s contains the rank of the station under consideration.
2. Set U = T*. Initially, we assign to U the sum of all operation times.
3. Compute X1 = U / (N − s + 1).
4. Set y = 0. Variable y contains the sum of the times of the operations assigned to the station of rank s.
5. Define the set W of operations without predecessors or whose predecessors have already been assigned.
6. Order the operations of W in decreasing order of their times.
7. Consider the operations of W in the previous order.
   7.1. If an operation k is such that y + θk ≤ Xs, then:
      7.1.1. Set y = y + θk.
      7.1.2. Assign operation k to station s.
      7.1.3. Go to 5.
   7.2. Otherwise:
      7.2.1. Set U = U − y.
      7.2.2. Set s = s + 1.
      7.2.3. If s > N, then go to 7.2.6.1.
      7.2.4. Compute Xs = U / (N − s + 1).
      7.2.5. If Xs ≤ Xs−1, then:
         7.2.5.1. Set Xs = Xs−1.
         7.2.5.2. Go to 4.
      7.2.6. If Xs > Xs−1, then:
         7.2.6.1. Set X1 = X1 + δ, where δ is a constant less than the smallest operation time.
         7.2.6.2. Set s = 1.
         7.2.6.3. Set U = T*.
         7.2.6.4. Go to 4.
Numerical Example
Consider again the example introduced by Table 7.1 and Figure 7.4. The results of applying Algorithm 7.3 are presented in Table 7.11.

Table 7.11 Examples with different numbers of stations

Number of stations  Station  Operations assigned  Workload  Cycle time
4                   1        A, B, D, I           39        41
                    2        F, G, J, L           40
                    3        C, E, H              33
                    4        K, M, N, O           41
5                   1        A, B, C, I           38        38
                    2        D, E, F, H           38
                    3        G, J, K              37
                    4        L, M, N              30
                    5        O                    9
6                   1        A, B, F              26        26
                    2        C, D, I              24
                    3        E, H                 26
                    4        G, K                 26
                    5        J, L                 25
                    6        M, N, O              26
7.4.2 Algorithm Based on Heuristics for SALB-1

SALB-2 can always be solved by means of an algorithm designed for SALB-1 (COMSOAL, for instance). In this case, the algorithm proceeds by dichotomy. Let N* be the number of stations that should be used. The algorithm SALB-2-Dicho is presented below.

Algorithm 7.4. (SALB-2-Dicho)
1. Introduce a "small" cycle time Cm and a "great" cycle time CM.
2. Apply an algorithm for SALB-1 (say AA, which could be COMSOAL, for instance) to compute Nm (respectively, NM), the number of stations required to reach the cycle time Cm (respectively, CM). Indeed, NM ≤ Nm.
3. If N* < NM, then increase CM and restart the computation at Step 2.
4. If N* > Nm, then decrease Cm and restart the computation at Step 2.
5. Compute C = (Cm + CM) / 2 and apply AA to solve SALB-1 for C. Let N be the minimum number of stations that leads to cycle time C.
6. If N ≤ N*, then set CM = C.
7. If N > N*, then set Cm = C.
8. If (CM − Cm < ε) and (N = N*), then we keep C as the cycle time and the last assignment of operations to N* stations is the solution; otherwise, go to Step 5.

In this algorithm, ε is given by the user, according to the precision required in the result. Indeed, this algorithm is a heuristic.
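Since the operation times in our examples are integers, Algorithm 7.4 can be sketched as an integer bisection, here using the greedy SALB-1-1 heuristic of Section 7.3.1 as the inner algorithm AA (an assumption; COMSOAL would serve equally well). As the station count returned by a heuristic is not strictly monotone in C, the result is itself heuristic, consistent with the remark above.

```python
def salb_1_1(times, preds, C):
    """Greedy SALB-1-1 heuristic of Section 7.3.1, used here as AA."""
    unassigned, done, stations = set(times), set(), []
    while unassigned:
        Q, station = C, []
        while True:
            fitting = sorted((op for op in unassigned
                              if set(preds[op]) <= done and times[op] <= Q),
                             key=lambda o: (-times[o], o))
            if not fitting:
                break
            op = fitting[0]
            station.append(op)
            done.add(op)
            unassigned.discard(op)
            Q -= times[op]
        stations.append(station)
    return stations

def salb_2_dicho(times, preds, n_target):
    """Bisection on the integer cycle time: smallest C (on the search
    path) for which the heuristic needs at most n_target stations."""
    lo = max(times.values())        # Cm: below this no line is feasible
    hi = sum(times.values())        # CM: one station always suffices
    while lo < hi:
        mid = (lo + hi) // 2
        if len(salb_1_1(times, preds, mid)) <= n_target:
            hi = mid                # mid is good enough: tighten CM
        else:
            lo = mid + 1            # mid is too small: raise Cm
    return lo

# Data of Table 7.1
TIMES = dict(A=10, B=12, C=7, D=8, E=20, F=4, G=11, H=6, I=9,
             J=12, K=15, L=13, M=9, N=8, O=9)
PREDS = dict(A=[], B=[], C=[], D=['A'], E=['B', 'C'], F=[],
             G=['D'], H=['D', 'E', 'F'], I=[], J=['I', 'G'],
             K=['G', 'H'], L=['J'], M=['J', 'K'], N=['M'], O=['L', 'N'])
print(salb_2_dicho(TIMES, PREDS, 4))
```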
7.4.3 Mathematical Formulation of Problem SALB-2

The problem consists in minimizing the cycle time C, knowing the number of stations N. It is easy to see that the mathematical formulation is: minimize C under Constraints 7.5 to 7.8. This problem is still a MIP problem. Numerous products are available on the software market to solve this type of problem, for instance Cplex, XpressMP, LINGO and Lindo.
7.5 Using Metaheuristics

Some authors have proposed heuristic algorithms for SALB-1 based on metaheuristics, which are presented in the appendices of this book. We briefly describe these approaches hereafter.
7.5.1 Simulated Annealing

Readers are encouraged to consult Appendix A of this book on simulated annealing before going through this section. When using simulated annealing, two problems must be solved:
• find an initial feasible solution;
• define the neighborhood of a solution.

7.5.1.1 Initial Feasible Solution

The initial solution can be obtained by applying one of the heuristics presented earlier such as, for instance, COMSOAL.

7.5.1.2 Building a Neighbor of a Given Solution

Two possibilities exist to obtain a neighbor of a solution:
• Permute two operations located in two neighboring stations.
• Transfer an operation to the next station. If the transfer applies to an operation assigned to the last station of the assembly line, then a new station is created. If a station of the line contains only one operation before the transfer, then this station disappears; this is called fusion.
Indeed, a permutation or a transfer is made only if it does not violate any precedence constraint. Figures 7.7 and 7.8 illustrate these transformations. Operations are represented by capital letters.
7.5.2 Tabu Search It may be wise to have a look at Appendix D devoted to the tabu search before reading this section. The tabu approach can be used to find a solution respecting a cycle time C while minimizing the number of stations. When applying the tabu search we have to: • Find an initial solution. • Define a process that leads to a neighbor of a given solution. • Define the tabu list: what will be the elements and maximal possible length of the list? • Choose a criterion.
Figure 7.7 Permutation of two operations (operations D and E, assigned to the adjacent stations i and i + 1, are exchanged)

Figure 7.8 Transfer of one operation (operation C is moved from station i to station i + 1)
7.5.2.1 Initial Feasible Solution
As when applying simulated annealing, COMSOAL or any SALB-1 algorithm can be used to obtain an initial solution. 7.5.2.2 Building a Neighbor of a Solution
The same approaches as in Section 7.5.1.2 (permutation and transfer) can be used. 7.5.2.3 Tabu List
The elements of the tabu list are of two types: • In the case of permutation: a pair of operations that have been permuted.
• In the case of transfer: a pair made with the transferred operation and one of its direct predecessors that remains in the initial station. In the first case, the two operations of a pair will not be allowed to permute again as long as the pair remains in the tabu list. In the second case, the two operations of a pair will not be allowed to belong again to the same station as long as the pair remains in the tabu list. The length of the tabu list is defined empirically. 7.5.2.4 Objective Function
Following Chiang (1998), we maximize the following criterion:

$$\sum_{i=1}^{N} \Bigl( \sum_{j \in S_i} \theta_j \Bigr)^{2}$$
where S_i is the set of operations assigned to station i and θ_j is the operation time of operation j. A short explanation is required. If the number of stations N is known, the goal is to decompose the set of operations into N subsets in order to maximize the above criterion (while satisfying the constraints). But we know that the greater the number of subsets having a sum of operation times close to C, the greater the above criterion. As a consequence, maximizing this criterion tends to maximize the number of stations to which no operation is assigned, which is equivalent to minimizing the number of stations used.
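Assuming θ_j denotes the operation time of operation j, the criterion is a one-liner; the toy numbers below are hypothetical and only show why packing operations together increases the criterion.

```python
def tabu_criterion(stations, times):
    """Sum over the stations of the squared sum of assigned operation times."""
    return sum(sum(times[op] for op in st) ** 2 for st in stations)

times = {'A': 4, 'B': 4, 'C': 4, 'D': 4}
spread = [['A'], ['B'], ['C'], ['D']]   # four stations: 4 * 4**2 = 64
packed = [['A', 'B'], ['C', 'D']]       # two stations:  2 * 8**2 = 128
# The same total workload scores higher when concentrated, so maximizing
# the criterion pushes the search toward solutions using fewer stations.
```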
7.5.3 Genetic Algorithms

Readers are encouraged to read Appendix E devoted to genetic algorithms before consulting this section. The notations are those introduced in Section 7.2. The use of a genetic algorithm to solve SALB-1 requires:
• A code (also called genetic representation) that defines an individual in an unambiguous manner. In the case of SALB-1, an individual is a distribution of the operations among N stations. Normally, N is chosen much larger than the number of stations required to find a solution.
• Two types of individuals will appear in the rest of this section:
– The "feasible individuals" that verify Ti ≤ C for i = 1, …, N and the precedence constraints.
– The "admissible individuals" that verify Ti ≤ C for i = 1, …, N but violate some of the precedence constraints.
• A population that is a set of feasible and admissible individuals. A genetic algorithm starts with a population (the initial population) that contains many more feasible than admissible individuals. Note that in an individual some stations may remain unused.
• A criterion (also called fitness function). It should be computable from the code of the corresponding individual. The value of the criterion measures the quality of the individual. In the present section, the goal will be to maximize this criterion.
• For each iteration, a proportion of the current population is selected to breed a new generation that becomes the population at the next iteration. The reproduction process is the way two individuals generate two new offspring by applying two genetic operators:
– crossover (also called recombination);
– mutation (applied with a very small probability).
In the remaining part of this section we will show how the previous elements are defined for a SALB-1. 7.5.3.1 Definition of the Code and Initial Population
To illustrate the way a code is built, consider the example introduced in Table 7.1 and Figure 7.4. Assume that N = 8, C = 40 and that an individual is defined as in Table 7.12.

Table 7.12 An individual

Station   Operations assigned   Sum of operation times
1         A, B, C               T1 = 29
2         D, I, F               T2 = 21
3         /                     T3 = 0
4         E, G, H               T4 = 37
5         J, K, M               T5 = 36
6         L, N                  T6 = 21
7         /                     T7 = 0
8         O                     T8 = 9
The code of the admissible individual presented in Table 7.12 is obtained as follows: • Arrange the operations in any order (at random, for instance). Alphabetical order was chosen in this example. • To each operation associate the rank of the station it has been assigned to. The code is the sequence of station ranks written in the order of the operations. For the individual represented in Table 7.12, we obtain the following code:
111242442556568

Of course, all the individuals of the population are built starting from the same order of the operations, and the near-optimal code obtained as the result of the genetic algorithm is translated into a near-optimal individual using this same order of operations.

Remarks:
• To generate a feasible individual, Algorithm 7.5 can be used.
• Generating an admissible individual can be done using the same algorithm, except that W is replaced by the set of operations that have not been assigned yet.
• The initial population is a mix of "feasible individuals" and "admissible individuals" with a large proportion of "feasible individuals". An initial population may even be made up of only feasible individuals.
• The size of the initial population is usually greater than 10 times the number of elements in the code.

In Algorithm 7.5, we assume that N is large enough to absorb all the operations; otherwise, increase N and/or decrease the probability mentioned in Step 2.3.

Algorithm 7.5. (Generate a feasible individual)
1. Set i = 1. Integer i will be the rank of the last station that opened.
2. While at least one operation is not assigned to a station:
2.1. Define the set W of unassigned operations whose predecessors (if any) are all already assigned to a station.
2.2. Choose at random an operation j in W.
2.3. If the remaining working time in station i is less than the operation time of j, or if the system randomly decides (with a very low probability) to create a new station, then:
2.3.1. Set i = i + 1.
2.3.2. Go to 2.2.
Otherwise, assign operation j to station i.
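A possible Python transcription of Algorithm 7.5 follows. It is slightly simplified: after a new station is opened, the already drawn operation is kept instead of being redrawn. The toy data are hypothetical, since the operation times of Table 7.1 are not reproduced here.

```python
import random

def feasible_individual(times, precedence, C, N, p_new=0.05, rng=random):
    """Randomly assign the operations to stations 1..N so that no station
    exceeds the cycle time C and every predecessor is assigned no later
    than its successors. Returns the code as {operation: station rank}."""
    code, assigned = {}, set()
    i, remaining = 1, C                  # rank and free time of the open station
    while len(assigned) < len(times):
        W = [op for op in times if op not in assigned
             and all(p in assigned for p in precedence.get(op, []))]
        op = rng.choice(W)
        while times[op] > remaining or rng.random() < p_new:
            i, remaining = i + 1, C      # open a new station
            if i > N:
                raise ValueError("N too small: increase N or decrease p_new")
        code[op] = i
        assigned.add(op)
        remaining -= times[op]
    return code

# hypothetical toy instance
times = {'A': 2, 'B': 3, 'C': 2}
precedence = {'B': ['A'], 'C': ['A']}
code = feasible_individual(times, precedence, C=5, N=10, rng=random.Random(0))
```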
7.5.3.2 Definition of the Criterion
We use the following criterion to evaluate an individual I:

$$K(I) = \begin{cases} L \times r^2 & \text{for an admissible individual} \\[6pt] L \times r^2 + \dfrac{r^2}{N^2 \times C} \displaystyle\sum_{i=1}^{N} \bigl( T_i \times Q^{-i+1} \bigr) & \text{for a feasible individual} \end{cases} \qquad (7.11)$$
In this criterion: • L is the number of precedence constraints that are satisfied by the individual I or, in other words, the number of arcs of the precedence graph (see Figure 7.4) that hold for the individual I. • r is the number of operations. • N is the number of stations. • C is the cycle time. • Ti is the sum of the times for the operations assigned to station i. • Q is a constant greater than 1. The value of L related to an admissible individual is always less than the value of L corresponding to a feasible individual. As a consequence, inequality K ( I1 ) > K ( I 2 ) always holds if I1 is feasible and I2 admissible. Since the selection of the individuals that are in charge of breeding the next generation is made at random with a probability proportional to the value of their criterion, the probability to select feasible individuals is greater than the probability to select admissible individuals.
Now consider the term:

$$\sum_{i=1}^{N} \bigl( T_i \times Q^{-i+1} \bigr)$$
The factor Q^{−i+1} is a decreasing function of i. As a consequence, the greater the ranks of the stations occupied by the operations that compose an individual, the lower the value of the criterion for this individual, and therefore the lower the probability of selecting this individual to participate in the breeding of the next generation. Since the set of stations used by the descendents tends to be close to the set of stations used by the parents, the algorithm tends progressively to select stations of lower rank, which is equivalent to reducing the number of stations used. This will become clearer in the next section devoted to the reproduction process. Concerning the value of the constant Q, applying the following rules is advisable:
• If T*/C ≤ 10, then take Q = 1.5.
• If 10 < T*/C ≤ 20, then take Q = 1.3.
• If T*/C > 20, then take Q = 1.1.
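Expression 7.11 can be coded directly. The helper below assumes the individual is given as a dictionary {operation: station rank}; individuals whose station loads exceed C, normally discarded by the algorithm, simply fall into the admissible branch here. The toy instance is hypothetical.

```python
def criterion(code, times, precedence, C, N, Q=1.5):
    """K(I) of Expression 7.11 for an individual coded as {op: station rank}."""
    r = len(times)
    arcs = sum(len(preds) for preds in precedence.values())
    # L = number of precedence constraints satisfied by the individual
    L = sum(code[p] <= code[op]
            for op, preds in precedence.items() for p in preds)
    loads = [0.0] * (N + 1)              # loads[i] = T_i, 1-indexed
    for op, st in code.items():
        loads[st] += times[op]
    K = L * r ** 2
    if L == arcs and all(T <= C for T in loads):   # feasible individual
        K += r ** 2 / (N ** 2 * C) * sum(
            loads[i] * Q ** (-i + 1) for i in range(1, N + 1))
    return K

# toy instance: grouping both operations in station 1 beats spreading them,
# because the factor Q**(-i+1) rewards stations of low rank
times, precedence = {'A': 2, 'B': 3}, {'B': ['A']}
```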
7.5.3.3 Reproduction Process
Consider the population available at the beginning of an iteration. The first action is to compute, for each individual I, the probability of being selected for breeding. This probability is:
$$p(I) = \frac{M - K(I)}{\displaystyle\sum_{J \in \mathit{Pop}} \bigl( M - K(J) \bigr)} \qquad (7.12)$$
where:
• M is an upper bound of the criterion.
• Pop is the population.

A possible upper bound is:

$$M = L \times r^2 + \frac{r^2}{N^2 \times C} \sum_{i=1}^{N} \bigl( z_i \times Q^{-i+1} \bigr) \qquad (7.13)$$

where the values of z_i are computed as follows:
1. $z_1 = \mathrm{Min}\,( T^*, C )$.
2. $z_k = \mathrm{Min}\,\Bigl( T^* - \sum_{s=1}^{k-1} z_s,\; C \Bigr)$ for $k = 2, \ldots, N$.
It is easy to see that Relation 7.13 is an upper bound of the criterion: it corresponds to concentrating the workload in the stations of lowest rank while relaxing the indivisibility of the operations. Now we explain how to generate the next generation from the current population. Let R(Pop) be the number of individuals in the current population. For the initial population, R(Pop) is given. We will see that the process used to generate the next generation results in a population of the same size; thus, R(Pop) is the size of the population at any stage of the genetic algorithm. The next generation is obtained by applying the following process R(Pop)/2 times:
1. Select two individuals at random in the current population, taking into account the probabilities defined by Relation 7.12. Let I1 and I2 be the selected individuals and u1 and u2 the codes of these individuals, respectively.
2. Generate at random an integer h ∈ {2, …, r}, where r is the number of elements of a code (one element per operation).
3. Build:
– A code v1 made of the h − 1 first elements of u1 followed by the r − h + 1 last elements of u2.
– A code v2 made of the h − 1 first elements of u2 followed by the r − h + 1 last elements of u1.
These are the codes of two individuals of the next generation (i.e., two successors, often called offspring). In the algorithm under consideration, if both successors are either feasible or admissible, they are integrated in the next generation,
otherwise they are eliminated and a new pair is generated (the eliminated pair is not counted). Since the process is performed R(Pop)/2 times, the size of the new generation will be R(Pop). The process we have just described is the crossover (or recombination).

An Example of Crossover
Consider the problem defined by Table 7.1 and Figure 7.4. Assume also that C = 40 and N = 8, and that two individuals I1 and I2 have been selected in the current population. The individual I1 is described in Table 7.13.

Table 7.13 Individual I1 selected in the current population

Station   Operations assigned   Sum of operation times
1         A, B                  T1 = 22
2         C, D, F               T2 = 19
3         /                     T3 = 0
4         /                     T4 = 0
5         E, G, H               T5 = 37
6         I, J, K               T6 = 36
7         L, M, N, O            T7 = 39
8         /                     T8 = 0

The code of I1 is: 112252556667777

The individual I2 is described in Table 7.14.

Table 7.14 Individual I2 selected in the current population

Station   Operations assigned   Sum of operation times
1         A, B, C, D            T1 = 37
2         /                     T2 = 0
3         E, F, H, I            T3 = 39
4         G, J, K               T4 = 38
5         L, M                  T5 = 22
6         N, O                  T6 = 17
7         /                     T7 = 0
8         /                     T8 = 0

The code of I2 is: 111133433445566
Assume that h = 6. Figure 7.9 provides the schema that explains how the descendents are derived from the pair ( I1, I2 ).
Figure 7.9 Building the descendents (with h = 6, the first h − 1 = 5 elements of each parent code are completed with the last r − h + 1 = 10 elements of the other parent's code)
Thus, the codes of the descendents are:

Descendent J1: 1 1 2 2 5 3 4 3 3 4 4 5 5 6 6
Descendent J2: 1 1 1 1 3 2 5 5 6 6 6 7 7 7 7

Individuals J1 and J2 are given in Tables 7.15 and 7.16, respectively. Individual J1 is neither feasible nor admissible:
• A precedence constraint is not satisfied, since H, a successor of E, is in a station that precedes the station to which E is assigned.
• T5 > C.
It is easy to verify that J2 is feasible.

Table 7.15 Descendent J1

Station   Operations assigned   Sum of operation times
1         A, B                  T1 = 22
2         C, D                  T2 = 15
3         F, H, I               T3 = 19
4         G, J, K               T4 = 38
5         E, L, M               T5 = 42
6         N, O                  T6 = 17
7         /                     T7 = 0
8         /                     T8 = 0

Table 7.16 Descendent J2

Station   Operations assigned   Sum of operation times
1         A, B, C, D            T1 = 37
2         F                     T2 = 4
3         E                     T3 = 20
4         /                     T4 = 0
5         G, H                  T5 = 17
6         I, J, K               T6 = 36
7         L, M, N, O            T7 = 39
8         /                     T8 = 0
The pair (J1, J2) cannot be integrated in the next generation because of J1. In the genetic algorithm, a new pair would be generated, and the pair (J1, J2) is neither counted nor included in the next generation. Another solution is to change some elements of the code of J1 to turn it into a feasible individual. When a pair of successors has been obtained by a crossover, another genetic operator, named mutation, may be applied to these elements. It consists, for each individual and with a low probability (usually less than 0.05), of choosing randomly one element of the code (a station rank) and replacing it by another element of {1, …, N}, also selected at random. Of course, the mutation should be performed before checking whether the offspring are feasible or admissible individuals.
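The crossover of Figure 7.9 and the mutation operator take only a few lines; the parent codes below are those of Tables 7.13 and 7.14.

```python
import random

def crossover(u1, u2, h):
    """Single-point crossover: the first h - 1 elements of one parent are
    completed with the remaining elements of the other parent."""
    return u1[:h - 1] + u2[h - 1:], u2[:h - 1] + u1[h - 1:]

def mutate(code, N, p=0.02, rng=random):
    """With probability p, replace one random element of the code by a
    random station rank in {1, ..., N}."""
    code = list(code)
    if rng.random() < p:
        code[rng.randrange(len(code))] = rng.randint(1, N)
    return code

u1 = [int(c) for c in "112252556667777"]   # code of I1 (Table 7.13)
u2 = [int(c) for c in "111133433445566"]   # code of I2 (Table 7.14)
j1, j2 = crossover(u1, u2, h=6)            # the descendents of Figure 7.9
```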
7.5.3.4 Overall Algorithm
Finally, the genetic algorithm is summarized as Algorithm 7.6.

Algorithm 7.6. (Genetic algorithm)
1. Generate an initial population Pop0 of size R(Pop).
2. Compute the value of the criterion for each individual of Pop0 using Expression 7.11.
3. Select I* ∈ Pop0, the individual having the greatest criterion.
4. For i = 1, …, G (G is the number of iterations chosen by the user; several thousand iterations are usually required):
4.1. Compute the probability associated with each individual of Pop0 using (7.12).
4.2. Apply the crossover and the mutation to pairs of codes selected from Pop0 according to the probabilities computed in Step 4.1, until a set of R(Pop)/2 pairs of feasible or admissible individuals is obtained. This set is the next generation Pop1.
4.3. Select I⁺ ∈ Pop1, the individual having the greatest criterion.
4.4. If K(I⁺) > K(I*), then set I* = I⁺.
4.5. Set Pop0 = Pop1.
5. End of loop i.
6. Print I*.
Example 1
We applied the genetic algorithm to the problem defined by Table 7.1 and Figure 7.4 with the cycle time C = 40 and the population size equal to 150.
Table 7.17 Result of the genetic algorithm with mutation probability 0.02

Station   Operations assigned to the station   Remaining available time
1         A, B, D, I                           1
2         C, E, G                              2
3         F, H, J, K                           3
4         L, M, N, O                           1

Table 7.18 Result of the genetic algorithm with mutation probability 0.04

Station   Operations assigned to the station   Remaining available time
1         B, C, E                              1
2         A, D, F, H, I                        3
3         G, J, K                              2
4         L, M, N, O                           1
Table 7.19 Three more examples

Cycle time C   Station   Operations           Remaining time
30             1         A, B, C              1
               2         E, I                 1
               3         D, F, G, H           1
               4         J, K                 3
               5         L, M, N              0
               6         O                    21
50             1         A, B, C, D, F, I     0
               2         E, G, H, J           1
               3         K, L, M, N           5
               4         O                    41
55             1         A, B, D, F, G, I     1
               2         C, E, J, L           3
               3         H, K, M, N, O        8
The genetic algorithm was run twice: • In the first run, the probability of mutation was 0.02. The result is given in Table 7.17. • In the second run, the probability of mutation was 0.04. The result is given in Table 7.18.
Both results are optimal since they use only 4 stations, which is the lower bound. Note, however, that these two solutions are different. In both cases, the number of iterations was 30 000.

Example 2
We ran the genetic algorithm successively for C = 30, C = 50 and C = 55, as we did with algorithm COMSOAL (see Table 7.5). The number of iterations was 30 000, the mutation probability was 0.02 and the size of the population was 150. The results are presented in Table 7.19. If we compare these results with those of Table 7.5, we can see that:
• The solutions are different but, for the same cycle time, the numbers of stations used are identical in both cases.
• In the results displayed in Table 7.19, the lowest values of the remaining time are concentrated in the stations of low rank, which clearly shows that this genetic algorithm works by progressively "pushing" the operations into the stations of lower rank, thus reducing the number of stations used.
7.6 Properties and Evaluation of a Line-balancing Solution

7.6.1 Relationship Cycle Time/Number of Stations/Throughput

Let T be the working period, D the demand during this period and C the cycle time. The demand is expressed in terms of number of products. Since one unit of product must be completed every C units of time and since we have to perform D units of product every T units of time, then:

$$C = \frac{T}{D} \qquad (7.14)$$
Of course, C cannot be less than the greatest operation time since, in this case, at least one operation could not be performed during a period of length C. Thus, the condition to be satisfied is that C must be greater than or equal to the greatest operation time: it is a necessary condition for reaching a solution. Let us denote by T* the sum of the operation times of a product. In our approach, the setup times are neglected. The demand being D, the total operation time over the working period T is D × T*. The number of stations N being an integer, we can consider:
$$N = \left\lceil \frac{D \times T^*}{T} \right\rceil \qquad (7.15)$$

where ⌈x⌉ is the smallest integer greater than or equal to x. Unfortunately, Equality 7.15 holds only if it is possible to distribute an operation time over several stations, which is not the case in our problems. Thus, ⌈D × T*/T⌉ is only a lower bound on the number of stations. From Equation 7.14 we derive T = C × D. Replacing T by C × D in the expression of the lower bound of N, we obtain:

$$\text{Lower bound of } N = \left\lceil \frac{T^*}{C} \right\rceil \qquad (7.16)$$

Note that if a heuristic algorithm provides a solution where the number of stations is equal to the lower bound, this solution is optimal for Criterion 7.1.
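Relation 7.16 is easy to check against the chapter's results: the station loads in Tables 7.17 and 7.19 always sum to T* = 153, and the number of stations reported for each cycle time equals ⌈T*/C⌉, so all those solutions are optimal. A two-line sketch:

```python
import math

def station_lower_bound(t_star, cycle):
    """Relation 7.16: ceil(T*/C), valid when the cycle time is at least
    the longest operation time."""
    return math.ceil(t_star / cycle)

# cycle times of Tables 7.17 and 7.19 and the number of stations obtained
for C, stations_used in [(30, 6), (40, 4), (50, 4), (55, 3)]:
    assert station_lower_bound(153, C) == stations_used
```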
7.6.2 Evaluation of a Line-balancing Solution

Until now, we considered two types of problems:
• Minimize the number of stations knowing the cycle time (SALB-1).
• Minimize the cycle time knowing the number of stations (SALB-2).
When several solutions that optimize the criterion under consideration exist, it is easy to understand that a secondary criterion is necessary to select a solution. Several common-sense coefficients are available to measure the quality of a solution.

7.6.2.1 Coefficient Measuring the Productivity Loss
The coefficient measuring the loss of productivity, known as the balance delay, is defined as follows:

$$BD = \frac{\text{Sum of the idle times over the } N \text{ stations}}{C \times N} \times 100 \qquad (7.17)$$
BD provides the mean percentage of time the stations remain idle. Note that this coefficient is calculated for the overall line; the percentage may vary from station to station. Consider, for instance, the result provided in Table 7.19 for C = 55. In this example, BD = 12/(3 × 55) × 100 = 7.27%, but the percentages of idle time at the individual stations are:
• Station 1: 1/55 × 100 = 1.82%.
• Station 2: 3/55 × 100 = 5.45%.
• Station 3: 8/55 × 100 = 14.55%.
Thus, a solution having a satisfactory BD can be unbalanced in the sense that the loads of the stations are very different from each other.

7.6.2.2 Coefficient Measuring the Effectiveness of the System
The effectiveness coefficient is defined as follows:

$$EC = \frac{\text{Production during period } T}{\text{Expected production during period } T} \times 100 \qquad (7.18)$$
Consider the example given in Table 7.1 and Figure 7.4. Assume that we want to complete 400 units of product in each period of 8 h. The minimum cycle time that allows this production level is, expressed in seconds:

$$C = \frac{8 \times 3600}{400} = 72$$

If a cycle time of 72 is chosen, an optimal solution with 3 stations is obtained. This solution is given in Table 7.20.

Table 7.20 Solution for C = 72

Station   Operations             Idle time
1         A, C, D, F, G, I, J    11
2         B, E, H, K, L          6
3         M, N, O                46

For this solution, EC = 109%: the maximum station load being 66, the line can actually be paced at a cycle time of 66, which gives a production of 8 × 3600/66 ≈ 436 units and thus EC = 436/400 × 100 ≈ 109%.
Assume now that the same solution is kept but that a cycle time equal to 80 is chosen. In this case:

$$EC = \frac{8 \times 3600 / 80}{400} \times 100 = 90\%$$
7.6.2.3 Maximum Deviation Coefficient
This coefficient is, in percentage, the maximal deviation over the stations between the station loads and the maximal possible load C. It is denoted by MDC. Let Y be the set of stations in a solution; thus:

$$MDC = \frac{\max_{y \in Y} \bigl( C - \text{load of station } y \bigr)}{C} \times 100 \qquad (7.19)$$
For the solution presented in Table 7.20 for C = 72, we obtain:

$$MDC = \frac{46}{72} \times 100 = 63.89\%$$
This value reflects the fact that the station loads are unbalanced: the idle time of the third station is equal to 73% of the total idle time for the line.
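The three coefficients of Section 7.6.2 can be computed from the list of station loads. One caveat: EC depends on the production actually achieved, so the sketch below takes the cycle time the line really runs at as a parameter (66, the maximum station load of Table 7.20, reproduces the 109%; the imposed cycle time 80 reproduces the 90%).

```python
def balance_delay(loads, C):
    """BD, Relation 7.17: mean idle percentage over the stations."""
    return 100 * sum(C - t for t in loads) / (C * len(loads))

def effectiveness(run_cycle, expected_units, period):
    """EC, Relation 7.18, with the production during the period taken as
    period / run_cycle."""
    return 100 * (period / run_cycle) / expected_units

def max_deviation(loads, C):
    """MDC, Relation 7.19: largest relative gap between C and a station load."""
    return 100 * max(C - t for t in loads) / C

loads_c72 = [61, 66, 26]   # Table 7.20: idle times 11, 6 and 46 for C = 72
loads_c55 = [54, 52, 47]   # Table 7.19 for C = 55: idle times 1, 3 and 8
```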
7.7 Concluding Remarks

The heuristics and metaheuristics mentioned in this chapter are only some of the algorithms of this type available in the literature. A general remark that applies to COMSOAL, simulated annealing, tabu search and genetic algorithms should be emphasized: since randomness is a component of these approaches, the results obtained by running the same example several times may differ from one run to another. More precisely, the distribution of the operations among the stations may differ, but the criterion remains close to the optimal value. Thus, applying the same algorithm several times usually leads to several "good" solutions, allowing the designer to select the best one with regard to a secondary criterion. Let us now review the three metaheuristics presented in this chapter:
• Simulated annealing can be used whatever the criterion under consideration. Thus, it can be used for solving both SALB-1 and SALB-2, provided that the criteria used are adequate, which implies the following requirement: a criterion must be sensitive to any change in the parameters of the problem. Furthermore, this approach is easy to apply: we only have to define an initial solution and a neighbor of any solution. We showed how to meet these requirements.
• Tabu search is similar to simulated annealing and can be used for both SALB-1 and SALB-2, assuming that the criteria used are adequate. An additional problem is the definition of the tabu list and of its maximal length; a solution has been proposed. With the criterion used in Section 7.5.2.4, the tabu search tends to minimize the number of stations but may provide a solution that poorly balances the workload of the stations.
• Genetic algorithms are much more sophisticated. The algorithm presented in Section 7.5.3.2 aims at minimizing the number of stations; thus, this approach is developed to solve SALB-1 (note that the stations of higher rank may be underloaded). The genetic approach presented can also be applied to SALB-2 if the criterion is modified accordingly.
References

Arcus AL (1966) COMSOAL: A COmputer Method of Sequencing Operations for Assembly Lines. Int. J. Prod. Res. 4:259–277
Chiang WC (1998) The application of a tabu search metaheuristic to the assembly line balancing problem. Ann. Oper. Res. 77:209–227
Helgeson WB, Birnie DP (1961) Assembly line balancing using the ranked positional weight technique. J. Ind. Eng. 12:394–398
Hoffmann TR (1992) EUREKA: a hybrid system for assembly line balancing. Manag. Sci. 38(1):39–47
Johnson RV (1988) Optimally balancing large assembly lines with "FABLE". Manag. Sci. 34:240–253
Kilbridge LD, Wester L (1961) A heuristic method for line balancing. J. Ind. Eng. 12:292–298
Scholl A, Klein R (1997) SALOME: A bidirectional branch-and-bound procedure for assembly line balancing. INFORMS J. Comput. 9(4):319–334
Further Reading

Anderson EJ, Ferris MC (1994) Genetic Algorithms for Combinatorial Optimization: The Assembly Line Balancing Problem. ORSA J. Comput. 6:161–173
Balas E (1965) An additive algorithm for solving linear programs with zero-one variables. Oper. Res. 13:517–546
Baybars I (1986) A survey of exact algorithms for the simple assembly line balancing problem. Manag. Sci. 32(8):909–932
Baybars I (1986) An efficient heuristic method for the simple assembly line balancing problem. Int. J. Prod. Res. 24:149–166
Betts J, Mahmoud KI (1989) A method for assembly line balancing. Eng. Costs Prod. Econ. 18:55–64
Boctor FF (1995) A multiple-rule heuristic for assembly line balancing. J. Oper. Res. Soc. 46:62–69
Bowman EH (1960) Assembly line balancing by linear programming. Oper. Res. 8(3):385–389
Boysen N, Fliedner M, Scholl A (2007) A classification of assembly line balancing problems. Eur. J. Oper. Res. 183(1):674–693
Dar-El EM (1973) MALB – a heuristic technique for balancing large single-model assembly lines. AIIE Trans. 5(4):343–356
Dolgui A (ed) (2006) Feature cluster on balancing assembly and transfer lines. Eur. J. Oper. Res. 168(3):663–951
Dolgui A, Finel B, Guschinsky N, Levin G, Vernadat F (2005) A heuristic approach for transfer lines balancing. J. Intell. Manuf. 16(2):159–171
Dolgui A, Finel B, Guschinsky N, Levin G, Vernadat F (2006) MIP approach to balancing transfer lines with blocks of parallel operations. IIE Trans. 38:869–882
Dolgui A, Guschinsky N, Levin G, Proth J-M (2008) Optimisation of multi-position machines and transfer lines. Eur. J. Oper. Res. 185(3):1375–1389
Dolgui A, Ihnatsenka I (2009) Branch and bound algorithm for a transfer line design problem: Stations with sequentially activated multi-spindle heads. Eur. J. Oper. Res. 197(3):1119–1132
Easton F, Faaland B, Klastorin TD, Schmitt T (1989) Improved network based algorithms for the assembly line balancing problem. Int. J. Prod. Res. 27:1901–1915
Eglese RW (1990) Simulated annealing: a tool for operational research. Eur. J. Oper. Res. 46:271–281
Fleszar K, Hindi KS (2003) An enumerative heuristic and reduction methods for the assembly line balancing problem. Eur. J. Oper. Res. 145:606–620
Graves SC, Lamar BW (1983) An integer programming procedure for assembly system design problems. Oper. Res. 31(3):522–545
Gutjahr AL, Nemhauser GL (1964) An algorithm for the line balancing problem. Manag. Sci. 11:308–315
Kao EPC, Queyranne M (1982) On dynamic programming methods for assembly line balancing. Oper. Res. 30:375–390
Keytack H (1997) Expert Line Balancing System (ELBS). Comput. Ind. Eng. 33(1–2):303–306
Klein R, Scholl A (1996) Maximizing the production rate in simple assembly line balancing – a branch and bound procedure. Eur. J. Oper. Res. 91:367–385
Leu Y, Matheson LA, Rees LP (1994) Assembly line balancing using genetic algorithms with heuristic-generated initial populations and multiple evaluation criteria. Dec. Sci. 25:581–606
McMullen PR, Frazier GV (1998) Using simulated annealing to solve a multiobjective assembly line balancing problem with parallel workstations. Int. J. Prod. Res. 36(10):2717–2741
McMullen PR, Tarasewich P (2003) Using ant techniques to solve the assembly line balancing problem. IIE Trans. 35(7):605–617
Nof SY, Wilhelm WE, Warnecke HJ (1997) Industrial Assembly. Chapman & Hall, London
Ponnambalam SG, Aravindan P, Naidu GM (2000) A multi-objective genetic algorithm for solving assembly line balancing problem. Int. J. Adv. Manuf. Techn. 16(5):341–352
Rachamadugu R, Talbot B (1991) Improving the equality of workload assignments in assembly lines. Int. J. Prod. Res. 29(3):619–633
Rekiek B, Dolgui A, Delchambre A, Bratcu A (2002) State of art of optimization methods for assembly line design. Ann. Rev. Contr. 26:163–174
Rubinovitz J, Levitin G (1995) Genetic algorithm for assembly line balancing. Int. J. Prod. Econ. 41:343–354
Sabuncuoglu I, Erel E, Tanyer M (2000) Assembly line balancing using genetic algorithms. J. Intell. Manuf. 11:295–310
Saltzman MJ, Baybars I (1987) A two-process implicit enumeration algorithm for the simple assembly line balancing problem. Eur. J. Oper. Res. 32:118–129
Scholl A (1999) Balancing and Sequencing of Assembly Lines. Physica-Verlag, Heidelberg
Sotskov Y, Dolgui A, Sotskova N, Werner F (2005) Stability of optimal line balance with given station set. In: Dolgui A, Soldek J, Zaikin O (eds) Supply Chain Optimisation: Product/Process Design, Facilities Location and Flow Control. Applied Optimization, vol 94, Springer, pp 135–149
Sotskov Y, Dolgui A, Portmann MC (2006) Stability analysis of optimal balance for assembly line with fixed cycle time. Eur. J. Oper. Res. 168(3):783–797
Sprecher A (1999) A competitive branch-and-bound algorithm for the simple assembly line balancing problem. Int. J. Prod. Res. 37:1787–1816
Suresh G, Vinod VV, Sahu S (1996) A genetic algorithm for assembly line balancing. Prod. Plann. Contr. 7(1):38–46
Talbot FB, Patterson JH, Gehrlein WV (1986) A comparative evaluation of heuristic line balancing techniques. Manag. Sci. 32(4):430–454
Chapter 8
Advanced Line-balancing Approaches and Generalizations
Abstract The objective of this chapter is to generalize the simple line-balancing problems handled in the previous chapter. Cases with several products in variable proportions and/or stochastic operation times are analyzed. Numerical solutions based on a triangular probability density are presented. This part of the chapter ends with an assembly-line-balancing algorithm for the most general probability distributions. Three models are proposed to rectify the situation when the actual loads of stations exceed the cycle time. Another section is devoted to the introduction of parallel stations and to equipment-selection problems. Some additional constraints, which arise frequently in real-life situations, are examined. Finally, two specific models are introduced: the bucket-brigade assembly line, which is self-balancing, and the U-shaped assembly line, which is able to adapt to frequent changes in demand. Numerous numerical examples illustrate and explain the approaches developed in this chapter.
8.1 Introduction

In Chapter 7, we made three strong assumptions:
• The operation times are deterministic.
• The assembly line is designed for a single type of product (or for products having the same manufacturing process and operation times close enough to each other to be considered identical).
• One worker is assigned to each station.
In this chapter, we generalize the balancing problem, first studying single-product lines with stochastic operation times, then multiproduct lines with a mix of products in varying proportions but with deterministic operation times, and finally,
the most general situation: multiproduct lines with a set of products in variable proportions and stochastic operation times. In addition to these models, we will consider three particular cases. In the first, some given operations are required to be grouped at the same station. In the second, some operations have to be assigned to different stations. In the third, some operations are allowed to use parallel stations. Another section is devoted to the introduction of parallel stations and to equipment-selection problems. Finally, two novel models are introduced: bucket-brigade assembly lines, which are self-balancing, and U-shaped assembly lines, which are able to adapt to frequent changes in demand. With some exceptions, especially for the aforementioned bucket brigades and U-lines, most of the assumptions made in this chapter are similar to those of Chapter 7:
• An operation cannot be assigned to a station as long as all its predecessors have not been assigned to a station.
• All the assembly-line workers are qualified to perform all the operations.
• The setup times are negligible.
• All the required resources are available at each station.
• The investment and operating costs are the same for each station.
• An operation is executed at a single station and cannot be shared among several stations.
Some generalizations are suggested in which it is possible to use as many parallel stations as desired, to install particular equipment at each station, or to share an operation (or worker) between two stations. These possibilities are studied at the end of the chapter.
8.2 Single Type of Product and Triangular Operation Times

8.2.1 Triangular Density of Probability

Except for highly automated production systems, it is often impractical to use a well-known probability density (such as the Gaussian or the negative exponential) to characterize operation times. Indeed, for manual production systems involving many employees, it is extremely difficult to estimate these densities. Nevertheless, in practice, managers are usually able to provide a rough estimate consisting of a usual operation time m and two values a and b that are, respectively, the smallest and largest possible operation times. Fortunately, characterizing an operation time by these three parameters proves to be sufficient for most management purposes. We denote by f(x) the corresponding triangular density, represented in Figure 8.1.
Figure 8.1 A triangular density of probability: f rises linearly from f(a) = 0 to its maximum 2/(b − a) at x = m, then decreases linearly to f(b) = 0
The maximum value of f(x) is obtained for x = m and is computed taking into account the relations:

∫ₐᵇ f(x) dx = 1,   f(a) = f(b) = 0

which give f(m) = 2/(b − a). As a consequence, the triangular density of probability is expressed as follows:

f(x) = 0                              if x ≤ a
f(x) = 2(x − a) / [(b − a)(m − a)]    if a ≤ x ≤ m        (8.1)
f(x) = 2(b − x) / [(b − a)(b − m)]    if m ≤ x ≤ b
f(x) = 0                              if x ≥ b

We derive the cumulative distribution function (c.d.f.) F(x) = ∫_{−∞}^{x} f(u) du from Equation 8.1:

F(x) = 0                                   if x ≤ a
F(x) = (x − a)² / [(b − a)(m − a)]         if a ≤ x ≤ m        (8.2)
F(x) = 1 − (b − x)² / [(b − a)(b − m)]     if m ≤ x ≤ b
F(x) = 1                                   if x ≥ b
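The piecewise formulas of Equations 8.1 and 8.2 translate directly into code. The sketch below is illustrative (the function names are ours, not the book's):

```python
def tri_pdf(x, a, m, b):
    """Triangular density of Equation 8.1, with mode m on [a, b]."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return 2.0 * (x - a) / ((b - a) * (m - a))
    return 2.0 * (b - x) / ((b - a) * (b - m))

def tri_cdf(x, a, m, b):
    """Cumulative distribution function of Equation 8.2."""
    if x <= a:
        return 0.0
    if x <= m:
        return (x - a) ** 2 / ((b - a) * (m - a))
    if x < b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - m))
    return 1.0
```

Note that F(m) = (m − a)/(b − a) and that the density peaks at 2/(b − a), which provides quick sanity checks on the implementation.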
8.2.2 Generating a Random Value

In the triangular case, the density of probability f(x) and the c.d.f. F(x) are both continuous. The inverse function of y = F(x) exists and is denoted by x = F⁻¹(y). It has been proven that y = F(x) is uniformly distributed on [0, 1]. As a consequence, generating x at random according to f(x) is straightforward: we draw y from the uniform distribution on [0, 1] (density unity) and compute x = F⁻¹(y). Numerous algorithms are available to generate pseudo-random numbers uniformly distributed on [0, 1]. We denote by Random_Triangular the algorithm used to generate x.

Algorithm 8.1. (Random_Triangular(a, b, m))
1. Generate y according to the uniform distribution on the interval [0, 1], using the random generator available on any computer.
2. Compute x:
   2.1. If y ≤ (m − a)/(b − a), then x = a + √( y (b − a)(m − a) ).
   2.2. If y > (m − a)/(b − a), then x = b − √( (1 − y)(b − a)(b − m) ).

Instructions 2.1 and 2.2 of Algorithm 8.1 provide F⁻¹(y).
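Algorithm 8.1 is a few lines of code. The following sketch follows the two branches of F⁻¹ literally (Python's standard library also provides random.triangular, which uses the same inverse-transform method):

```python
import random

def random_triangular(a, b, m, rng=random):
    """Algorithm 8.1: inverse-transform sampling from the triangular
    density with parameters a <= m <= b."""
    y = rng.random()                       # uniform on [0, 1)
    if y <= (m - a) / (b - a):             # left branch of F^-1
        return a + (y * (b - a) * (m - a)) ** 0.5
    return b - ((1.0 - y) * (b - a) * (b - m)) ** 0.5
```

Since the mean of a triangular density is (a + m + b)/3, a large sample of draws gives an easy empirical check of the generator.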
8.2.3 Assembly-line Balancing

8.2.3.1 Evaluation of the Probability of Overflow

Assembly-line balancing (ALB) is a tradeoff between the cost resulting from overflow and the sum of the investment and operating costs. Assume that the best tradeoff occurs when the upper limit of the probability of overflow is ε. When using COMSOAL, for instance, we have to calculate, for every station, the probability that the sum of its operation times exceeds the cycle time C, and check whether this probability exceeds ε. Consider the case of n operations and assume that the parameters of the triangular densities are aᵢ, bᵢ, mᵢ for i = 1, …, n. We execute nb simulations to evaluate the overflow probability; for instance, we can take nb = 50 000.
The algorithm Random_Triangular(a, b, m) introduced in Section 8.2.2 will be used. Let A, B and M denote the vectors A = [a₁, a₂, …, aₙ], B = [b₁, b₂, …, bₙ] and M = [m₁, m₂, …, mₙ]. These vectors provide the parameters of the triangular probability densities of all n operations. The algorithm that evaluates whether a set of n operations can be assigned to the same station is given hereafter.

Algorithm 8.2. (ACC(A, B, M, C, ε, nb, n))
1. Compute MIN = Σᵢ₌₁ⁿ aᵢ.
2. Compute MAX = Σᵢ₌₁ⁿ bᵢ.
3. Set tot = 0 (tot counts the number of times the sum of the operation times exceeds C).
4. Set j = 0 (j is the counter of the number of iterations).
5. While j < nb do:
   5.1. Set W = 0.
   5.2. Set j = j + 1.
   5.3. For i = 1, …, n do:
        5.3.1. Compute x = Random_Triangular(aᵢ, bᵢ, mᵢ).
        5.3.2. Set W = W + x.
   5.4. If W > C, then tot = tot + 1.
6. Compute tot = tot / nb.
7. If tot > ε, then ind = 0, otherwise ind = 1.

At the end of the algorithm, we decide that the considered set of operations can be assigned to the same station if and only if ind = 1. (Steps 1 and 2 give the range [MIN, MAX] of the possible total operation times: if MAX ≤ C the set always fits, while if MIN > C it never does.)

Example
Consider a set of 4 operations whose triangular parameters are presented in Table 8.1. As can be seen, the sum of the operation times belongs to the interval [MIN, MAX] = [35, 55].

Table 8.1 Definition of the triangular densities

Operation   1    2    3    4
a           8   13    6    8
b          12   17   11   15
m           9   15    8   10
The probability that the sum of the operation times exceeds C has been evaluated for different values of C. The results are reported in Table 8.2.

Table 8.2 Overflow probabilities for different values of C

C                      46        47       48        49       50        51
Overflow probability   0.18652   0.0934   0.03994   0.0142   0.00414   0.00098
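Table 8.2 can be reproduced (up to Monte Carlo noise) by simulating the core of Algorithm 8.2 on the data of Table 8.1. The sketch below uses our own helper names:

```python
import random

# Triangular parameters (a, b, m) of the four operations of Table 8.1.
PARAMS = [(8, 12, 9), (13, 17, 15), (6, 11, 8), (8, 15, 10)]

def draw_triangular(a, b, m, rng):
    """Inverse-transform sampling (Algorithm 8.1)."""
    y = rng.random()
    if y <= (m - a) / (b - a):
        return a + (y * (b - a) * (m - a)) ** 0.5
    return b - ((1 - y) * (b - a) * (b - m)) ** 0.5

def overflow_probability(C, nb=50_000, seed=0):
    """Fraction of nb simulated totals that exceed C (steps 3-6 of
    Algorithm 8.2)."""
    rng = random.Random(seed)
    tot = 0
    for _ in range(nb):
        w = sum(draw_triangular(a, b, m, rng) for a, b, m in PARAMS)
        if w > C:
            tot += 1
    return tot / nb
```

For instance, overflow_probability(48) comes out close to the 0.03994 reported in Table 8.2; the estimates shrink rapidly as C grows, as the table shows.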
8.2.3.2 Making COMSOAL Suitable for SALB with Stochastic Operation Times

According to the results presented in the previous subsection, COMSOAL (see Section 7.3.2) can be adapted to the case of random operation times by replacing the set W1 with W2, defined as follows: W2 is the set of unassigned operations that have no predecessors, or whose predecessors have already been assigned to a station, and such that, when an operation of W2 is added to the set of operations already assigned to the current station, the sum of the operation times exceeds the cycle time C only with a probability less than or equal to a given probability ε.
Taking this definition into account, COMSOAL can be rewritten as COMSOAL-S. It is assumed that the data concerning the manufacturing process and the operation times have already been introduced.

Algorithm 8.3. (COMSOAL-S)
1. Set nx = 0, where nx is the number of solutions found with the smallest number of stations.
2. For z = 1, …, Z do (Z is the number of trials):
   2.1. Set i = 1 and initialize station i with an empty set of operations; i is the rank of the station under consideration.
   2.2. Build W2.
   2.3. If W2 is empty and all the tasks are assigned, then do:
        2.3.1. Compute the criterion value K_z corresponding to the solution S_z obtained. This criterion is the number of stations in the solution.
        2.3.2. If [z = 1] or [(z > 1) and (K_z = K*)], then do:
               2.3.2.1. Set nx = nx + 1.
               2.3.2.2. Set S*_nx = S_z.
               2.3.2.3. Set K* = K_z.
               The character * indicates the current best solution; the first solution obtained is the best one at the first stage of the computation.
        2.3.3. If (z > 1) and (K_z < K*), then do:
               2.3.3.1. Set nx = 1.
               2.3.3.2. Set S*_nx = S_z.
               2.3.3.3. Set K* = K_z.
               For each iteration of rank greater than 1, we keep the solution if it is better than the current best solution.
   2.4. If W2 is empty and some tasks are still unassigned, then do:
        2.4.1. Set i = i + 1 (a new station is opened).
        2.4.2. Initialize the list of tasks assigned to this new station to ∅.
        2.4.3. Go to 2.2.
   2.5. If W2 is not empty, then do:
        2.5.1. Select one operation of W2 at random and add it to the list of operations already assigned to the current station.
        2.5.2. Go to 2.2.
3. End of loop z.
4. Print K* and S*_i for i = 1, …, nx.
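The logic of COMSOAL-S can be condensed into a short program. The sketch below is a minimal illustration, not the book's implementation: the feasibility test plays the role of ACC with a reduced number of simulations, the helper names are ours, and the 4-operation instance (parameters borrowed from Table 8.1, with an illustrative precedence chain) is only a toy example.

```python
import random

def overflow_prob(ops, params, C, nb, rng):
    """Estimate P(sum of triangular operation times > C) by simulation."""
    def draw(a, b, m):
        y = rng.random()
        if y <= (m - a) / (b - a):
            return a + (y * (b - a) * (m - a)) ** 0.5
        return b - ((1 - y) * (b - a) * (b - m)) ** 0.5
    over = sum(1 for _ in range(nb)
               if sum(draw(*params[i]) for i in ops) > C)
    return over / nb

def comsoal_s(params, preds, C, eps, trials=20, nb=2000, seed=1):
    """COMSOAL-S (Algorithm 8.3): repeated random list assignment,
    keeping the solutions with the fewest stations."""
    rng = random.Random(seed)
    best, best_k = [], None
    for _ in range(trials):
        unassigned = set(params)
        stations, current = [], []
        while unassigned:
            done = set(params) - unassigned
            # W2: assignable operations keeping overflow probability <= eps
            w2 = [i for i in unassigned
                  if preds[i] <= done
                  and overflow_prob(current + [i], params, C, nb, rng) <= eps]
            if not w2:
                if not current:        # a single operation never fits
                    raise ValueError("infeasible: operation exceeds C too often")
                stations.append(current)   # open a new station
                current = []
                continue
            i = rng.choice(w2)
            current.append(i)
            unassigned.remove(i)
        stations.append(current)
        if best_k is None or len(stations) < best_k:
            best, best_k = [stations], len(stations)
        elif len(stations) == best_k:
            best.append(stations)
    return best_k, best
```

Each trial draws operations at random from W2, exactly as in step 2.5.1, and only the solutions tying the best station count are retained.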
8.2.3.3 A Numerical Application

In this example, we chose:
• C = 40 (cycle time, or “takt time”);
• ε = 0.01: in any station, the probability that more than C units of time are required to perform all the operations assigned to this station is less than 1%;
• nb = 50 000: number of iterations made each time ACC(A, B, M, C, ε, nb, n) is called;
• Z = 200: number of trials.
The information concerning the manufacturing process is presented in Table 8.3.

Table 8.3 The manufacturing process

Operation                A    B    C    D    E      F    G    H        I    J     K     L    M     N    O
Minimum operation time   9    10   6    7    19     3    10   5        8    10    14    12   8     7    8
Maximum operation time   11   13   8    10   23     5    13   7        11   14    18    15   10    9    10
Usual operation time     10   12   7    8    20     4    11   6        9    12    15    13   9     8    9
Predecessors             /    /    /    A    B, C   /    D    D, E, F  /    I, G  G, H  J    J, K  M    L, N
We obtained 4 solutions, presented in Table 8.4. For each station in each solution, this table provides:
• the list of operations assigned to the station;
• the minimum value of the sum of the operation times;
• the maximum value of the sum of the operation times.

Table 8.4 Results

Station:       1             2           3           4           5
Trial 1        A, B, C, D    E, F, G     H, I, K     J, M, N     L, O
  min / max    32 / 42       32 / 41     27 / 36     25 / 33     20 / 25
Trial 2        A, B, F, I    C, D, G     E, H        J, K, M     L, N, O
  min / max    30 / 40       23 / 31     24 / 30     32 / 42     27 / 34
Trial 3        A, B, C, F    E, I        D, G, J     H, K, L     M, N, O
  min / max    28 / 37       27 / 34     27 / 37     31 / 40     23 / 29
Trial 4        A, B, F, I    C, D, E     G, H, J     K, L        M, N, O
  min / max    30 / 40       32 / 41     25 / 34     26 / 33     23 / 29
8.3 Particular Case: Gaussian Operation Times

In this section, we study the case where each operation time follows a Gaussian probability density f(x) of mean value m and standard deviation σ:

f(x) = [1 / (σ √(2π))] exp( −(x − m)² / (2σ²) )

In this case, the parameters m and σ are sufficient to characterize an operation time.
8.3.1 Reminder of Useful Properties

Assume that n operations are assigned to the same station and that operation i ∈ {1, …, n} is characterized by a Gaussian probability density of parameters mᵢ and σᵢ. We know that, in this case, the sum of the n operation times is ruled by a Gaussian probability density of mean value

m*ₙ = Σᵢ₌₁ⁿ mᵢ

and standard deviation

σ*ₙ = √( Σᵢ₌₁ⁿ σᵢ² )
In the rest of Section 8.3, we will have to compute the cumulative distribution function (c.d.f.):

F_{m,σ}(a) = ∫_{−∞}^{a} f(x) dx = [1 / (σ √(2π))] ∫_{−∞}^{a} exp( −(x − m)² / (2σ²) ) dx        (8.3)

where F_{m,σ} is the c.d.f. that corresponds to the Gaussian probability density of mean value m and standard deviation σ.
Setting y = (x − m)/σ, Equation 8.3 becomes:

F_{m,σ}(a) = F_{0,1}( (a − m)/σ ) = [1 / √(2π)] ∫_{−∞}^{(a−m)/σ} exp( −y²/2 ) dy

Since exp(−y²/2) is symmetrical about the y-axis, F_{m,σ}(a) can be rewritten as:

F_{m,σ}(a) = 0.5 + [1 / √(2π)] ∫₀^{(a−m)/σ} exp( −y²/2 ) dy    if a ≥ m        (8.4)
F_{m,σ}(a) = 0.5 − [1 / √(2π)] ∫₀^{(m−a)/σ} exp( −y²/2 ) dy    if a ≤ m

Equation 8.4 will be used each time Equation 8.3 has to be computed. An approximation of an integral with finite bounds can be obtained using Tchebycheff’s polynomials, as explained in the next subsection.
8.3.2 Integration Using Tchebycheff’s Polynomials

Tchebycheff’s polynomials¹ are defined as follows:

T₀(x) = 1
T₁(x) = x
T_{n+1}(x) = 2x Tₙ(x) − T_{n−1}(x)   for n ≥ 1

Assume that we want to calculate the integral:

I_{a,b} = ∫ₐᵇ f(x) dx

We can use Tchebycheff’s polynomials to interpolate f(x) on the interval [a, b], and then integrate the resulting linear combination of polynomials to obtain an approximation I*_{a,b} of I_{a,b}. If we choose M + 1 interpolation points,² we obtain:

I*_{a,b} = [(b − a)/(M + 1)] Σᵢ₌₀ᴹ H(xᵢ) f( (a + b)/2 + xᵢ (b − a)/2 )        (8.5)

with:

xᵢ = cos( (2i + 1)π / (2M + 2) )
H(xᵢ) = 1 − 2 Σ_{k=1}^{⌊M/2⌋} T_{2k}(xᵢ) / (4k² − 1)

Equation 8.5, applied with M = 500, provides a result that is correct to at least five decimal places when the function to integrate is a Gaussian probability density on the interval [0, 5].

¹ Tchebycheff’s polynomials are orthogonal.
² The interpolation points are the M + 1 roots of T_{M+1}(x) = 0.
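Equation 8.5 is short to implement. In the sketch below (names are ours) we use the identity T_{2k}(cos θ) = cos(2kθ) to evaluate the weights H(xᵢ), reading the weight denominator as 4k² − 1, which corresponds to the classical Fejér-type quadrature at Tchebycheff nodes; the accuracy claim for a Gaussian density on [0, 5] can then be checked against math.erf.

```python
import math

def fejer_integral(f, a, b, M=200):
    """Approximate the integral of f on [a, b] with Equation 8.5
    (quadrature at the M + 1 roots of T_{M+1})."""
    total = 0.0
    for i in range(M + 1):
        theta = (2 * i + 1) * math.pi / (2 * M + 2)
        x = math.cos(theta)          # interpolation point x_i
        # H(x_i) = 1 - 2 * sum_{k=1}^{M//2} cos(2 k theta) / (4 k^2 - 1)
        h = 1.0 - 2.0 * sum(math.cos(2 * k * theta) / (4 * k * k - 1)
                            for k in range(1, M // 2 + 1))
        total += h * f((a + b) / 2 + x * (b - a) / 2)
    return (b - a) / (M + 1) * total
```

Applied to the standard Gaussian density on [0, 5], the result agrees with 0.5·erf(5/√2) ≈ 0.4999997 to many decimal places, well beyond the five promised in the text.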
8.3.3 Algorithm Basis

The integral

J_{0,b} = [1/√(2π)] ∫₀ᵇ exp(−y²/2) dy

can be approximated using Tchebycheff’s polynomials as shown in the previous section. Thus, we can compute

F_{0,1}(b) = [1/√(2π)] ∫_{−∞}^{b} exp(−y²/2) dy

for b = Δ, 2Δ, …, nΔ by applying Equation 8.4. Choosing n = 1000 and Δ = 5/n guarantees that:
• F_{0,1}(b) is known with great precision for b ∈ [0, 5] (the result is correct to at least five decimal places);
• since F_{0,1}(b) is close to 1 for b = 5, we can consider that the integral is known for b ∈ [0, +∞).
Assume that a set of n operations is assigned to a station and that the total operation time θ*ₙ of this set is ruled by a Gaussian probability density of mean value m*ₙ and standard deviation σ*ₙ, determined as shown in Section 8.3.1. Assume also that the cycle time is C and that the probability that θ*ₙ exceeds C should be less than a given value ε. In other words:

[1 / (σ*ₙ √(2π))] ∫_C^{+∞} exp( −(x − m*ₙ)² / (2(σ*ₙ)²) ) dx ≤ ε

This inequality can be rewritten as:

[1/√(2π)] ∫_{(C − m*ₙ)/σ*ₙ}^{+∞} exp(−y²/2) dy ≤ ε        (8.6)

Since ε is always less than 0.5 (and usually less than 0.1), (C − m*ₙ)/σ*ₙ > 0.
Let b* be such that:

[1/√(2π)] ∫_{b*}^{+∞} exp(−y²/2) dy = ε

This can be rewritten as 1 − F_{0,1}(b*) = ε, or F_{0,1}(b*) = 1 − ε. The value b* belongs to the interval [iΔ, (i + 1)Δ] such that F_{0,1}(iΔ) ≤ 1 − ε and F_{0,1}((i + 1)Δ) > 1 − ε, and this interval is easily obtained using Tchebycheff’s approximation. Thus, considering F_{0,1}(x) as linear on [iΔ, (i + 1)Δ], a fine approximation of b* is:

b* ≈ iΔ + Δ ( 1 − ε − F_{0,1}(iΔ) ) / ( F_{0,1}((i + 1)Δ) − F_{0,1}(iΔ) )

Finally, if

(C − m*ₙ)/σ*ₙ > b*

then the probability that θ*ₙ exceeds C is less than or equal to ε; otherwise, this probability is greater than ε. In the first case, the set of operations can be assigned to the station under consideration. In the second case, the total operation time exceeds the cycle time too often for the set of operations to be assigned to the station.
As a consequence, we can use the algorithm COMSOAL-S after replacing W2 with W3, where W3 is the set of unassigned operations without predecessors, or whose predecessors have already been assigned to a station, and such that, when an operation of W3 is added to the set of operations already assigned to the current station, the sum of the operation times (which is ruled by a Gaussian probability density of mean value m*ₙ and standard deviation σ*ₙ) verifies (C − m*ₙ)/σ*ₙ > b*, with b* computed as above (i being chosen such that F_{0,1}(iΔ) ≤ 1 − ε and F_{0,1}((i + 1)Δ) > 1 − ε).
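The tabulate-and-interpolate scheme for b* is easy to reproduce. In the sketch below we substitute math.erf for the Tchebycheff-based table of F_{0,1} (an assumption of convenience; any accurate evaluation of the standard normal c.d.f. would do):

```python
import math

def F01(x):
    """Standard normal c.d.f. (math.erf stands in for the tabulated
    Tchebycheff approximation of the text)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def b_star(eps, delta=0.005, n=1000):
    """Find b* with F01(b*) = 1 - eps: scan the grid i * delta on [0, 5]
    and interpolate linearly on the bracketing interval (Section 8.3.3)."""
    target = 1.0 - eps
    for i in range(n):
        lo, hi = F01(i * delta), F01((i + 1) * delta)
        if lo <= target < hi:
            return i * delta + delta * (target - lo) / (hi - lo)
    raise ValueError("eps too small for the tabulated range [0, 5]")
```

For ε = 0.01 this gives b* ≈ 2.326, the familiar 99% quantile of the standard normal distribution; a candidate station then passes the test whenever (C − m*ₙ)/σ*ₙ > b*.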
8.3.4 Numerical Example

In this example, we chose:
• C = 40 (cycle time);
• ε = 0.01: in any station, the probability that more than C units of time are required to perform all the operations assigned to this station is less than or equal to 1%;
• 100 trials are made.
The information concerning the manufacturing process is presented in Table 8.5, which provides, for each operation, the mean value m and the standard deviation σ of its operation time, as well as its predecessors.

Table 8.5 The manufacturing process

Operation      A     B    C     D    E     F     G     H        I    J     K     L     M     N     O
m              10    12   7     8    20    4     11    6        12   12    15    13    9     8     9
σ              0.5   1    0.2   0.4  2     0.5   1.5   0.6      2    1     2     1.5   0.8   0.4   1
Predecessors   /     /    /     A    B, C  /     D     D, E, F  /    I, G  G, H  J     J, K  M     L, N
We obtained the 4 solutions presented in Table 8.6, having selected the solutions that require the minimum number of stations (5 stations in our case). For each station in each solution, the table gives the list of operations assigned to the station, together with the value of b* × σ*ₙ + m*ₙ, which should remain less than C.

Table 8.6 Assignments and maximum manufacturing times at probability 1 − ε

Station            S1            S2         S3            S4         S5
Solution 1         A, C, I, F    B, D, G    E, J          H, K, L    M, N, O
b* × σ*ₙ + m*ₙ     37.9568       35.2959    37.2019       39.981     29.1211
Solution 2         B, C, I       A, E, F    D, G, J       H, K, L    M, N, O
b* × σ*ₙ + m*ₙ     36.2227       38.9349    35.2959       39.981     29.1211
Solution 3         A, B, I       C, D, E    F, G, H, J    K, L       M, N, O
b* × σ*ₙ + m*ₙ     39.3303       39.7676    37.5706       33.8159    29.1211
Solution 4         A, B, C, D    E, I       F, G, H, J    K, L       M, N, O
b* × σ*ₙ + m*ₙ     39.8013       38.5799    37.5706       33.8159    29.1211
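The entries of Table 8.6 can be recomputed directly from the data of Table 8.5, since b* × σ*ₙ + m*ₙ only requires the per-station sums of Section 8.3.1. In the sketch below we obtain b* by bisection on math.erf instead of the book's tabulated F_{0,1} (an assumption of convenience); helper names are ours.

```python
import math

def z_quantile(p, lo=0.0, hi=8.0):
    """Inverse standard normal c.d.f. by bisection on math.erf."""
    F = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(80):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if F(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

# (m, sigma) per operation, from Table 8.5.
OPS = {"A": (10, 0.5), "B": (12, 1), "C": (7, 0.2), "D": (8, 0.4),
       "E": (20, 2), "F": (4, 0.5), "G": (11, 1.5), "H": (6, 0.6),
       "I": (12, 2), "J": (12, 1), "K": (15, 2), "L": (13, 1.5),
       "M": (9, 0.8), "N": (8, 0.4), "O": (9, 1)}

def station_limit(ops, eps=0.01):
    """b* x sigma*_n + m*_n for a set of operations: the total time that
    is respected with probability 1 - eps."""
    m = sum(OPS[o][0] for o in ops)
    s = math.sqrt(sum(OPS[o][1] ** 2 for o in ops))
    return m + z_quantile(1.0 - eps) * s
```

For example, station_limit("MNO") reproduces the 29.1211 of the last column, and station_limit("HKL") the 39.981 of stations S4.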
8.4 Mixed-model Assembly Line with Deterministic Task Times

8.4.1 Introduction

Consider the case when production is made by lots S of n parts each. There are different types of parts in each lot. Each part type (product) is characterized by its manufacturing process, that is to say a set of operations, a partial order on this set, and the task (operation) times. We assume that the partial order on the operations of any part of S is the same, or that it is possible to find a partial order that “covers” all the partial orders. For instance, Figure 8.2 shows a partial order X that covers partial orders 1, 2 and 3 defined on the 5 operations A, B, C, D and E.

Figure 8.2 Covering a set of partial orders
If an operation is not included in a manufacturing process, we set its operation time to zero in the “covering” process.
Let n_j be the number of parts of type j, j = 1, …, m, in a lot. Indeed,

n = Σ_{j=1}^{m} n_j

Let t_{i,j} be the time of operation i for a product of type j, with i = 1, …, Op; in other words, Op is the total number of operations. The total time required to perform operation i for all the parts of S is:

T_i = Σ_{j=1}^{m} n_j t_{i,j}

The ratio of products of type j in S is k_j = n_j / n.
First, we consider that the ratios are constant. Then, we study the case when the ratios are stochastic. In both cases, we assume that the operation times are constant.
8.4.2 Ratios are Constant

Since the operation times are constant, the times T_i, i = 1, …, Op, are also constant. Thus, balancing such an assembly line can be done using one of the algorithms presented in Chapter 7 for SALB.
8.4.3 Ratios are Stochastic

8.4.3.1 Introductory Remarks

We assume that the number n_j of parts of type j ∈ {1, …, m} in the set S is stochastic and that n_j ∈ {0, 1, …, N_j}. We also suppose that the probability of having n_j parts of type j in the set S is p_{j,n_j}. Indeed,

Σ_{n_j=0}^{N_j} p_{j,n_j} = 1

If n_j, j = 1, …, m, is the number of parts of type j in S, then the time required to perform operation i = 1, …, Op is:

T_i(n₁, …, n_m) = Σ_{j=1}^{m} n_j t_{i,j}        (8.7)

Assuming that the numbers of parts of the different types are independent from each other, the probability of T_i(n₁, …, n_m) is:

P(n₁, …, n_m) = Π_{j=1}^{m} p_{j,n_j}        (8.8)

Note that the probabilities p_{j,n_j} do not depend on i, but only on j. Indeed:

Σ_{(n₁, …, n_m) ∈ Ω} P(n₁, …, n_m) = 1

where Ω is the set of all the possible vectors (n₁, …, n_m). The set Ω contains Π_{j=1}^{m} (N_j + 1) vectors.
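Equations 8.7 and 8.8 can be checked by brute force on a small instance. The data and names below are illustrative, not from the book:

```python
from itertools import product

# Illustrative mix: two product types, N_j = 2 each.
P = [[0.2, 0.5, 0.3],   # p_{1,n}: probability of n parts of type 1, n = 0, 1, 2
     [0.4, 0.4, 0.2]]   # p_{2,n}
T = [3.0, 5.0]          # time of the operation on one part of each type (t_{i,j})

def exact_overflow(C):
    """Exact probability that T_i(n1, ..., nm) of Equation 8.7 exceeds C,
    obtained by summing Equation 8.8 over the whole set Omega."""
    q = 0.0
    for counts in product(*(range(len(pj)) for pj in P)):       # Omega
        prob = 1.0
        for j, n in enumerate(counts):
            prob *= P[j][n]                                     # Equation 8.8
        total = sum(n * T[j] for j, n in enumerate(counts))     # Equation 8.7
        if total > C:
            q += prob
    return q
```

Here card(Ω) = (2 + 1)(2 + 1) = 9, so full enumeration is instantaneous; with realistic numbers of product types and lot sizes Ω explodes, which is precisely what motivates the decreasing-order enumeration of the next subsection.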
Assume that we want to assign k operations (i₁, …, i_k) ⊂ (1, …, Op) to the same station. Then, the time required to perform this set of operations when the number of parts of type j in S is n_j, j = 1, …, m, is:

Σ_{r=1}^{k} T_{i_r}(n₁, …, n_m) = Σ_{r=1}^{k} { Σ_{j=1}^{m} n_j t_{i_r,j} } = Σ_{j=1}^{m} { n_j ( Σ_{r=1}^{k} t_{i_r,j} ) }        (8.9)

This time is denoted by T*{ (i₁, …, i_k) | (n₁, …, n_m) }. The probability of this occurrence is given by Equation 8.8. We now have to figure out whether the set of operations (i₁, …, i_k) can be assigned to the same station. In other words, we have to verify that the probability that T*{ (i₁, …, i_k) | (n₁, …, n_m) } exceeds the cycle time (“takt time”) C is less than or equal to a given (small) probability ε.
8.4.3.2 Probability That the Total Operation Time Exceeds the Cycle Time

Assume that the values of the times T*{ (i₁, …, i_k) | (n₁, …, n_m) } are arranged in increasing order and that their probabilities are arranged accordingly. To simplify the notations, these times are denoted by T*₁ ≤ T*₂ ≤ … ≤ T*_R and the related probabilities by q₁, q₂, …, q_R, where R = card(Ω). If Z is the smallest integer such that T*_z > C for z ≥ Z, then the probability that the total operation time exceeds C is

Q(i₁, …, i_k) = Σ_{z=Z}^{R} q_z

and the set (i₁, …, i_k) can be assigned to a single station only if Q(i₁, …, i_k) ≤ ε.
Unfortunately, the size of Ω (i.e., the value of R) prevents us from calculating all the times and the corresponding probabilities. We prefer:
• computing the times T*{ (i₁, …, i_k) | (n₁, …, n_m) } in decreasing order, as well as the related probabilities;
• stopping the computation when T*{ (i₁, …, i_k) | (n₁, …, n_m) } becomes less than C;
• adding up the probabilities of the times T*{ (i₁, …, i_k) | (n₁, …, n_m) } that are greater than C. This sum is denoted by Q(i₁, …, i_k).
If Q(i₁, …, i_k) ≤ ε, then the set of operations (i₁, …, i_k) can be assigned to the same station. We propose an approach to compute the times T*{ (i₁, …, i_k) | (n₁, …, n_m) } in decreasing order.

Table 8.7 Step 1 of the process

            T*{(i₁, …, i_k) | (N₁, …, N_m)} ≤ C           T*{(i₁, …, i_k) | (N₁, …, N_m)} > C
PT₁ ≤ ε     The set (i₁, …, i_k) can be assigned          No conclusion
PT₁ > ε     The set (i₁, …, i_k) can be assigned          The set (i₁, …, i_k) cannot be assigned

Step 1
According to Relation 8.9, the greatest time is obtained for n_j = N_j, j = 1, …, m. If T*{ (i₁, …, i_k) | (N₁, …, N_m) } ≤ C, then the set (i₁, …, i_k) of operations can be assigned to the same station; otherwise, we compute P(N₁, …, N_m) according to Equation 8.8. If P(N₁, …, N_m) > ε, then the set (i₁, …, i_k) cannot be assigned to the same station. If T*{ (i₁, …, i_k) | (N₁, …, N_m) } > C and P(N₁, …, N_m) ≤ ε, we set PT₁ = P(N₁, …, N_m) and assign (N₁, …, N_m) to an auxiliary set E that is initially empty. This first step is summarized in Table 8.7. We set h = 1, where h is the number of elements of E.

Step 2
When a conclusion cannot be drawn, the algorithm continues by obtaining the vector (n₁, …, n_m) that leads to the second greatest time. This vector is one of the m vectors (N₁, …, N_{r−1}, N_r − 1, N_{r+1}, …, N_m) for r = 1, …, m. Let r* be the value of r such that:

T*{ (i₁, …, i_k) | (N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m) } = Max_{r ∈ {1, …, m}} T*{ (i₁, …, i_k) | (N₁, …, N_{r−1}, N_r − 1, N_{r+1}, …, N_m) }

If T*{ (i₁, …, i_k) | (N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m) } ≤ C, then the set (i₁, …, i_k) of operations can be assigned to the same station, since PT₁ ≤ ε. The algorithm stops.
If T*{ (i₁, …, i_k) | (N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m) } > C, the probability P(N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m) is calculated using Equation 8.8. Then, PT₂ = PT₁ + P(N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m). If PT₂ > ε, then the set (i₁, …, i_k) of operations cannot be assigned to the same station, since the probability that the total operation time exceeds C is greater than ε. The algorithm stops. If PT₂ ≤ ε, then set h = h + 1 (as aforementioned, h is the counter of the number of elements of E) and assign (N₁, …, N_{r*−1}, N_{r*} − 1, N_{r*+1}, …, N_m) to E.

General Step (Step w)
At any step w, we know:
• PT_{w−1}, the probability that one of the total operation times corresponding to the elements of the set E occurs; remember that these operation times exceed C;
• h, the number of elements of E at the beginning of step w.
If the assignment of (i₁, …, i_k) to a single station has been neither rejected nor accepted (which means that PT_{w−1} ≤ ε), then apply Algorithm 8.4.

Algorithm 8.4.
1. For each element (n₁, …, n_m) ∈ E, we compute T*{ (i₁, …, i_k) | (n₁, …, n_{r*−1}, n_{r*} − 1, n_{r*+1}, …, n_m) } (see the definition above). We obtain h total operation times and keep the greatest one, denoted by T**{ (i₁, …, i_k) | (n₁, …, n_{r**−1}, n_{r**} − 1, n_{r**+1}, …, n_m) }. Furthermore, according to Relation 8.8, we calculate P(n₁, …, n_{r**−1}, n_{r**} − 1, n_{r**+1}, …, n_m).
2. If T**{ (i₁, …, i_k) | (n₁, …, n_{r**−1}, n_{r**} − 1, n_{r**+1}, …, n_m) } ≤ C, then the set (i₁, …, i_k) of operations can be assigned to the same station. The algorithm stops.
3. If T**{ (i₁, …, i_k) | (n₁, …, n_{r**−1}, n_{r**} − 1, n_{r**+1}, …, n_m) } > C, then:
   3.1. We compute PT_w = PT_{w−1} + P(n₁, …, n_{r**−1}, n_{r**} − 1, n_{r**+1}, …, n_m).
   3.2. If PT_w > ε, the set (i₁, …, i_k) of operations cannot be assigned to the same station. The algorithm stops.
   3.3. If PT_w ≤ ε, we go to step w + 1.
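The decreasing-order enumeration of Steps 1 to w can be sketched with a max-heap playing the role of the set E; a visited set prevents the same vector from being generated twice. The data and helper names below are illustrative:

```python
import heapq

def can_assign(unit_times, N, p, C, eps):
    """Decide assignability by enumerating the vectors (n1, ..., nm) in
    decreasing order of total time, accumulating the probability mass of
    the times that exceed C (Steps 1 to w)."""
    def total(v):
        return sum(n * t for n, t in zip(v, unit_times))
    def prob(v):
        out = 1.0
        for j, n in enumerate(v):
            out *= p[j][n]
        return out
    start = tuple(N)                        # Step 1: the greatest time
    heap = [(-total(start), start)]
    seen = {start}
    pt = 0.0                                # PT_w of the text
    while heap:
        neg_t, v = heapq.heappop(heap)
        if -neg_t <= C:                     # all remaining times are <= C
            return True
        pt += prob(v)
        if pt > eps:                        # overflow mass already too large
            return False
        for j in range(len(N)):             # decrease one component at a time
            if v[j] > 0:
                w = v[:j] + (v[j] - 1,) + v[j + 1:]
                if w not in seen:
                    seen.add(w)
                    heapq.heappush(heap, (-total(w), w))
    return True                             # every time exceeded C, yet pt <= eps
```

Because the total time is nondecreasing in each component, every unvisited vector is dominated by some vector still in the heap, so the vectors really are popped in decreasing total-time order and the early stops are valid.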
The algorithm presented above converges, since PT_w increases with w, PT₀ = 0 and PT_R = 1, which means that PT_w becomes greater than ε for some value of w. This algorithm can be incorporated in COMSOAL to decide whether a new operation can be assigned to a station or not. The example presented in the next section has been solved using this COMSOAL variant.

8.4.3.3 Numerical Example
Consider the case of 3 types of products. The manufacturing processes of these parts are represented in Figure 8.3.
Figure 8.3 Manufacturing processes for three types of products (the precedence graphs correspond to the data of Table 8.8)
The manufacturing process that “covers” the above three processes is represented in Figure 8.4. Table 8.8 provides the manufacturing processes and the operation times for the three product types. The number of products of each type that may appear in a mix, and the corresponding probabilities, are given in Table 8.9.
We computed two sets of solutions with C = 100. The first set was calculated with ε = 0.015; the results appear in Table 8.10. Nine solutions were obtained with 200 trials, each requiring 5 stations. The numbers provided in Table 8.10 are the ranks of the stations to which the operations are assigned. Solution number 4 is presented in Figure 8.5.

Figure 8.4 The “covering” manufacturing process
Table 8.8 Manufacturing processes and operation times

Product type 1
Operation      A  B  C  D  E     F  G  H     J  K  L  M     N  O
Op. time       1  3  2  5  7     4  1  3     2  3  4  1     2  6
Predecessors   /  /  /  A  B, C  /  D  E, F  G  H  J  J, K  M  L, N

Product type 2
Operation      A  B  C  D  E     G  H  I  J     K     L  M     N  O
Op. time       2  1  1  1  3     5  2  6  4     2     1  5     2  2
Predecessors   /  /  /  A  B, C  D  E  /  I, G  G, H  J  J, K  M  L, N

Product type 3
Operation      A  B  D  E  G  H     I  J     K     L  M  N  O
Op. time       5  3  2  1  4  6     1  3     1     3  3  5  3
Predecessors   /  /  A  B  D  D, E  /  I, G  G, H  J  K  M  L, N
Table 8.9 Mix data

Product type 1
Number of parts   0     1     2     3     4
Probability       0.1   0.3   0.3   0.2   0.1

Product type 2
Number of parts   0     1     2     3
Probability       0.2   0.6   0.1   0.1

Product type 3
Number of parts   0     1     2     3     4
Probability       0.1   0.1   0.4   0.3   0.1
Table 8.10 Line balancing if ε = 0.015

Operation:    A  B  C  D  E  F  G  H  I  J  K  L  M  N  O
Solution 1    1  3  1  1  3  1  2  3  1  2  4  2  4  4  5
Solution 2    2  1  1  2  1  2  3  4  1  3  4  3  4  5  5
Solution 3    1  2  2  1  3  1  2  4  1  2  4  3  4  5  5
Solution 4    1  1  1  2  2  1  2  3  1  3  4  3  4  4  5
Solution 5    1  1  1  2  3  2  2  3  1  4  4  5  4  5  5
Solution 6    1  1  1  2  3  1  2  3  1  2  4  4  4  5  5
Solution 7    1  1  1  2  2  1  3  3  3  4  4  5  4  5  5
Solution 8    1  2  1  1  2  1  3  2  1  3  3  4  4  4  5
Solution 9    1  1  2  2  3  1  2  4  1  3  4  3  4  5  5
Figure 8.5 Solution 4 when ε = 0.015
The second set was obtained with ε = 0.1; the results appear in Table 8.11. Nine solutions were again obtained with 200 trials, each requiring 4 stations. The numbers provided in Table 8.11 are the ranks of the stations to which the operations are assigned. Solution number 7 is represented in Figure 8.6.

Note on the “Covering” Manufacturing Process
In the previous explanations, we assumed that, if two operations exist in two manufacturing processes, they are executed in the same order. If this is not the case, introducing WIP (work-in-process) allows us to design a “covering” manufacturing process, and thus to apply the previous approach.

Table 8.11 Line balancing if ε = 0.1
Operation:    A  B  C  D  E  F  G  H  I  J  K  L  M  N  O
Solution 1    1  2  1  1  3  1  1  3  1  2  3  2  3  4  4
Solution 2    2  1  1  2  1  1  2  2  1  3  3  3  3  4  4
Solution 3    1  1  1  2  2  1  3  2  1  3  3  3  4  4  4
Solution 4    1  1  2  1  2  1  2  3  1  2  3  3  3  4  4
Solution 5    1  1  2  1  3  1  2  3  1  2  3  2  3  4  4
Solution 6    2  1  1  2  1  1  2  2  1  3  3  4  3  3  4
Solution 7    1  1  1  2  2  1  2  3  1  2  3  3  3  4  4
Solution 8    1  2  1  1  2  1  2  3  1  2  3  3  3  4  4
Solution 9    1  1  1  2  1  2  2  2  2  3  3  3  3  4  4
Figure 8.6 Solution 7 when ε = 0.1
8.5 Mixed-model Line Balancing: Stochastic Ratio and Operation Times

8.5.1 Introduction

In this section, we study the most general problem, which is also the most realistic one: the ratios of the product types in the lots to be manufactured, as well as the operation times, are stochastic. The notations are those used in Section 8.4.3, except that the operation times are stochastic: we assume that the time of an operation i ∈ {1, …, Op} belonging to the manufacturing process of type j ∈ {1, …, m} is ruled by a triangular probability density defined by three parameters a_{i,j} < m_{i,j} < b_{i,j}, as explained in Section 8.2. To check whether a given set of operations can be assigned to a given station, we have to simulate its total operation time, as explained in the next section. Remember that a set can be assigned to a station if the probability that its total operation time exceeds the cycle time C is less than ε.
8.5.2 Evaluation of an Operation Time

Assume that the number n_j of products of type j ∈ {1, …, m} in the lot (batch) S to be processed is stochastic and that n_j ∈ {0, …, N_j}. The probability of having n_j products of type j in S is p_{j,n_j}.
Let i ∈ {1, …, Op} be an operation. If this operation belongs to a process of type j ∈ {1, …, m}, then its manufacturing time t_{i,j} is a random variable following a triangular distribution with parameters a_{i,j} < m_{i,j} < b_{i,j}. To generate at random the total time of an operation i for a lot S in such a manufacturing system with m possible products, the following algorithm can be used.

Algorithm 8.5. ( Tot_Op( i, { a_{i,j}, m_{i,j}, b_{i,j}, N_j, p_{j,n_j} }_{j=1,…,m} ) )
1. Set TT_i = 0 (initialization of the operation time).
2. For j = 1, …, m do:
   2.1. Generate at random x ∈ [0, 1] (uniform density on [0, 1]).
   2.2. Compute n_j ∈ {0, …, N_j} such that Σ_{k=0}^{n_j − 1} p_{j,k} ≤ x < Σ_{k=0}^{n_j} p_{j,k} (if n_j = 0, the left-hand sum is empty and equal to 0).
   2.3. Compute t_{i,j} = Random_Triangular(a_{i,j}, b_{i,j}, m_{i,j}), see Section 8.2.2.
   2.4. Set TT_i = TT_i + n_j × t_{i,j}.
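Algorithm 8.5 combines the discrete inversion of the mix distribution (step 2.2) with the triangular sampler of Algorithm 8.1 (step 2.3). A self-contained sketch, with names of our choosing:

```python
import random

def tot_op(a, m, b, p, rng=random):
    """Algorithm 8.5 (Tot_Op): draw the total time of one operation over a
    random lot.  a[j], m[j], b[j] are the triangular parameters of the
    operation for product type j; p[j][n] is the probability of having n
    parts of type j in the lot (n = 0, ..., N_j)."""
    tt = 0.0
    for j in range(len(p)):
        # Step 2.2: invert the discrete distribution of n_j.
        x = rng.random()
        cum, n_j = 0.0, len(p[j]) - 1
        for n, pn in enumerate(p[j]):
            cum += pn
            if x < cum:
                n_j = n
                break
        # Step 2.3: Random_Triangular(a, b, m).
        y = rng.random()
        if y <= (m[j] - a[j]) / (b[j] - a[j]):
            t = a[j] + (y * (b[j] - a[j]) * (m[j] - a[j])) ** 0.5
        else:
            t = b[j] - ((1 - y) * (b[j] - a[j]) * (b[j] - m[j])) ** 0.5
        tt += n_j * t          # Step 2.4
    return tt
```

Since E[TT_i] = Σ_j E[n_j] (a_{i,j} + m_{i,j} + b_{i,j})/3, a large sample gives an easy empirical check of the generator.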
Thus, to evaluate whether a set I = {i₁, …, i_r} ⊂ {1, …, Op} of operations can be assigned to the same station, we apply the algorithm Eval_C presented below, knowing that the probability that the total operation time exceeds C should be less than ε. In this algorithm, Z is the number of iterations (20 000, for example).

Algorithm 8.6. ( Eval_C( ε, I, { a_{i,j}, m_{i,j}, b_{i,j}, N_j, p_{j,n_j} }_{i ∈ I, j=1,…,m} ) )
1. Set tot = 0 (this variable counts the number of times the total operation time exceeds C).
2. For z = 1, …, Z do:
   2.1. Set TT_I = 0 (this variable will contain the total operation time).
   2.2. For all i ∈ I do: TT_I = TT_I + Tot_Op( i, { a_{i,j}, m_{i,j}, b_{i,j}, N_j, p_{j,n_j} }_{j=1,…,m} ).
   2.3. If TT_I > C, then set tot = tot + 1.
3. Compute x = tot / Z.
4. If x > ε, then the set I of operations cannot be assigned to the same station (return value 0); otherwise, the assignment is possible (return value 1).
8.5.3 ALB Algorithm in the Most General Case

This algorithm is close to COMSOAL. It is denoted by COMSOAL-S-2 and is applied a number L of times (1000 times, for instance). The final solution is selected according to the criterion chosen (minimize the number of stations, minimize the maximal mean idle time over the stations, etc.).

Algorithm 8.7. (COMSOAL-S-2)
1. Let H = { 1, …, Op } (H is the set of all operations to be assigned).
2. Initialize N = 1 (N is the rank of the station currently under consideration).
3. Set A(N) = ∅, where A(N) is the set of operations already assigned to the current station.
4. Compute W, the set of unassigned operations without predecessors or whose predecessors have all been assigned to a station.
5. Select i ∈ W at random.
6. Set H = H \ { i }.
7. Set A1 = A(N) ∪ { i }.
8. Compute ind = Eval_C( ε, A1, { a_{i,j}, m_{i,j}, b_{i,j}, N_j, p_{j,n_j} }_{i∈A1, j=1,…,m} ).
9. If ind = 0, then do:
   9.1. Set N = N + 1.
   9.2. Set A(N) = { i }.
10. If ind = 1, set A(N) = A1.
11. If H ≠ ∅, then go to 4.
12. Display the contents A(M) of stations M = 1, …, N.
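The skeleton of one COMSOAL-S-2 pass can be sketched as follows (illustrative names; the stochastic feasibility test Eval_C is passed in as a function, and the sketch assumes, as Algorithm 8.7 does, that any single operation fits in a station):

```python
import random

def comsoal_s2(ops, preds, fits, rng=random):
    """One pass of a COMSOAL-S-2-style assignment (Algorithm 8.7), sketched.

    preds: dict op -> set of immediate predecessors; fits(station_ops)
    plays the role of Eval_C and decides whether the listed operations can
    share a station.  Returns the stations as lists of operations."""
    unassigned, assigned = set(ops), set()
    stations = [[]]
    while unassigned:
        # Step 4: operations whose predecessors are all assigned
        # (sorted only to make runs reproducible under a fixed seed).
        W = sorted(o for o in unassigned if preds.get(o, set()) <= assigned)
        o = rng.choice(W)                  # Step 5: random eligible choice
        trial = stations[-1] + [o]
        if fits(trial):                    # Steps 7-10: Eval_C decision
            stations[-1] = trial
        else:
            stations.append([o])           # overrun risk too high: new station
        unassigned.remove(o)
        assigned.add(o)
    return stations
```

With deterministic times `t` and cycle time `C`, `fits = lambda s: sum(t[o] for o in s) <= C` reproduces a classical COMSOAL pass; in practice the pass is repeated L times and the best solution is kept.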
8.5.4 Numerical Example

We consider the case of three types of products represented in Figure 8.3. The parameters of the triangular densities are given in Tables 8.12–8.14.

Table 8.12 Product type 1

Operation   a   m   b   Pred.
A           1   2   3   /
B           2   3   5   /
C           3   4   6   /
D           0   1   3   A
E           2   3   4   B, C
F           0   2   3   /
G           1   2   4   D
H           2   3   5   E, F
J           2   3   4   G
K           1   3   4   H
L           0   3   4   J
M           0   4   6   J, K
N           1   4   5   M
O           1   3   4   L, N

Table 8.13 Product type 2

Operation   a   m   b   Pred.
A           3   4   5   /
B           2   4   5   /
C           1   4   5   /
D           1   3   5   A
E           0   3   4   B, C
G           1   3   4   D
H           0   3   4   E
I           2   3   4   /
J           3   5   6   G, I
K           2   5   6   G, H
L           1   3   4   J
M           0   2   3   J, K
N           0   3   4   M
O           1   3   4   L, N

Table 8.14 Product type 3

Operation   a   m   b   Pred.
A           1   3   4   /
B           2   4   5   /
D           0   3   4   A
E           2   4   5   B
G           1   3   4   D
H           3   4   5   D, E
I           2   4   5   /
J           0   3   5   G, I
K           1   3   4   G, H
L           3   4   5   J
M           1   3   5   K
N           2   4   5   M
O           1   3   4   L, N
The number of parts of each product type that may appear in a mix and the corresponding probabilities are given in Table 8.15. The "covering" manufacturing process is given in Figure 8.4. COMSOAL-S-2 is applied with the following parameters:
• L = 100. L is the number of times COMSOAL-S-2 runs.
• The number of iterations used to evaluate the total time of a set I of operations (parameter Z in Algorithm 8.6, i.e., in the procedure Eval_C) is chosen equal to 10 000.
• Cycle time C = 50.

Table 8.15 Mix data

Product type 1:   Number of parts   0     1     2
                  Probability       0.2   0.7   0.1
Product type 2:   Number of parts   0     1     2     3
                  Probability       0.1   0.5   0.3   0.1
Product type 3:   Number of parts   0     1     2
                  Probability       0.3   0.5   0.2
We first run the program for ε = 0.01. In other words, a set of operations is assigned to the same station if the probability that the total operation time exceeds the cycle time C is less than 0.01. We obtained 9 solutions with 7 stations each. The numbers provided in Table 8.16 are the ranks of the stations to which the operations are assigned. Solution number 7 is represented in Figure 8.7.

Table 8.16 Case ε = 0.01

            Solution
Operation    1   2   3   4   5   6   7   8   9
A            1   2   3   2   1   1   3   2   1
B            2   1   1   1   1   3   2   1   4
C            1   2   1   2   2   1   1   1   2
D            3   3   3   3   2   2   3   3   2
E            3   3   2   3   3   3   2   3   4
F            2   1   1   1   2   2   1   1   1
G            3   4   4   3   3   2   4   4   2
H            4   4   5   4   4   4   4   5   5
I            2   1   2   1   2   2   1   2   1
J            5   5   4   4   5   4   5   4   3
K            4   5   5   5   4   5   6   5   5
L            6   6   6   5   6   5   5   6   3
M            5   6   6   6   5   6   6   6   6
N            6   7   7   6   6   6   7   7   6
O            7   7   7   7   7   7   7   7   7
Then, we run the program for ε = 0.1. In other words, a set of operations is assigned to the same station if the probability that the total operation time exceeds the cycle time C is less than 0.1. We again obtained 9 solutions, this time with 5 stations each. The numbers provided in Table 8.17 are the ranks of the stations to which the operations are assigned. The number of stations decreases drastically as ε increases. Obviously, if ε = 1, all the operations can be assigned to the same station. Table 8.18 provides some examples of the number of stations according to ε.
8.5 Mixed-model Line Balancing: Stochastic Ratio and Operation Times
Figure 8.7 Representation of solution 7 with ε = 0.01 (precedence graph of operations A–O grouped into stations 1–7)
Table 8.17 Case ε = 0.1

            Solution
Operation    1   2   3   4   5   6   7   8   9
A            1   1   1   1   1   1   1   1   1
B            2   1   2   2   2   3   2   2   3
C            1   2   1   1   1   1   1   2   1
D            1   2   2   1   2   1   1   1   2
E            3   2   3   3   2   3   3   3   3
F            2   1   1   1   2   1   1   1   1
G            2   3   2   2   3   2   2   1   2
H            3   3   4   3   3   4   4   3   4
I            2   1   1   2   1   2   2   2   1
J            3   3   3   3   4   2   3   3   2
K            4   4   4   4   3   4   4   4   4
L            5   5   3   4   4   3   3   4   3
M            4   4   4   4   4   4   4   4   4
N            4   4   5   5   5   5   5   5   5
O            5   5   5   5   5   5   5   5   5
Table 8.18 Number of stations versus ε

Probability ε        0.01   0.015   0.05   0.1   0.2   0.3
Number of stations   7      6       5      5     4     4
Solution number 5 of Table 8.17 is presented in Figure 8.8.

Remarks:
• In this chapter, the goal was to minimize the number of stations, knowing the cycle time. If the number of stations is given and the goal is to minimize the cycle time, a dichotomy approach can be used.
Assume, for instance, that the number of stations is 4, ε = 0.01, and the data are those of the previous example. Table 8.19 shows the successive steps of a dichotomy approach. If the computation is stopped at Step 15, we obtain an approximation of the minimal cycle time, C = 68.353271. Furthermore, compared with Step 14, we can see that the maximal error is 68.353271 − 68.347166 = 0.006105. Thus, the maximal error is less than 0.01%.
• Other strategies are also available for line balancing with random operation times. They consist in using the mean values of the operation times and/or the mix ratios, and loading the stations up to a given percentage of C (90%, for instance).
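The dichotomy on the cycle time can be sketched as a bisection; `n_stations` stands for a black-box run of the balancing algorithm (e.g. COMSOAL-S-2) for a given C, and its monotonicity in C is an assumption of this sketch:

```python
def min_cycle_time(n_stations, target, lo, hi, tol=0.01):
    """Bisection on the cycle time: smallest C such that the line can be
    balanced with at most `target` stations.

    n_stations(C) -> station count returned by the balancing algorithm for
    cycle time C.  `lo` must be infeasible (n_stations(lo) > target) and
    `hi` feasible; the station count is assumed non-increasing in C."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if n_stations(mid) <= target:
            hi = mid          # feasible: try a smaller cycle time
        else:
            lo = mid          # infeasible: need a larger cycle time
    return hi
```

Starting from lo = 50 (7 stations) and hi = 100 (3 stations) with target 4 mirrors the bisection pattern of Table 8.19.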
Table 8.19 Dichotomy approach

Step             Cycle time    Number of stations
1                50            7
2                100           3
3 = (1+2)/2      75            4
4 = (1+3)/2      62.5          5
5 = (3+4)/2      68.75         4
6 = (4+5)/2      65.625        5
7 = (5+6)/2      67.1875       5
8 = (5+7)/2      67.96875      5
9 = (8+5)/2      68.359375     4
10 = (8+9)/2     68.16403      5
11 = (9+10)/2    68.261703     5
12 = (9+11)/2    68.310539     5
13 = (9+12)/2    68.334957     5
14 = (9+13)/2    68.347166     5
15 = (9+14)/2    68.353271     4

Figure 8.8 Representation of solution 5 of Table 8.17 with ε = 0.1

8.6 How to React when the Loads of Stations Exceed the Cycle Time by Accident?

Three models are proposed to deal with this problem.
8.6.1 Model 1

In this model, all the stations switch simultaneously to the next set of operations only when the last station has completed its work. Using this model reduces the expected throughput of the assembly line: the real average cycle time C* is greater than C. Thus, if we want to complete a product every C units of time, we need a secondary assembly line to compensate for the loss of productivity. Assuming that C* = α × C (α > 1), the secondary assembly line should be able to complete, on average, α − 1 products every C units of time. C* can be evaluated by simulation.
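The simulation of C* can be sketched as follows, under the assumption (for illustration) that each station's load per cycle can be sampled independently:

```python
def average_real_cycle_time(station_samplers, Z=10_000):
    """Model 1, sketched: every station switches only when the slowest
    station finishes, so the real duration of one cycle is the maximum of
    the station loads.  C* is estimated as the mean of this maximum over Z
    simulated cycles.  station_samplers: one zero-argument load sampler per
    station."""
    total = 0.0
    for _ in range(Z):
        total += max(draw() for draw in station_samplers)
    return total / Z
```

With C* estimated, α = C*/C sizes the secondary line (α − 1 products every C units of time).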
8.6.2 Model 2

If a set of operations cannot be completed during a cycle time C in a given station n ∈ { 1, …, N }, the corresponding products are extracted from the assembly line and completed on a relief line, as explained in Figure 8.9. Indeed, the next stations of the assembly line, i.e., stations { n + 1, …, N }, will remain idle during one period C. In this kind of model, the tendency is to oversize the first stations of the line in order to reduce the mean total idle time.
Figure 8.9 Model 2 (stations n − 1, n and n + 1, with a relief assembly line)
8.6.3 Model 3

One or more semi-finished products (or lots) are stored at the exit of each station. If a semi-finished product (or lot) is not completed in station n ∈ { 1, …, N } by the end of the current cycle time C, a semi-finished product (or lot) stored at the exit of station n is released to station n + 1 at the beginning of the next cycle, and the delayed semi-finished product (or lot) currently being processed in station n is stored at the exit of this station as soon as it leaves the station. This model is represented in Figure 8.10.
Figure 8.10 Model 3 (stations n − 1, n and n + 1, with storage facilities S_{n−1} and S_n at the station exits)
8.7 Introduction to Parallel Stations

We use the notations introduced in the previous sections. Two main reasons are advanced to justify the introduction of parallel stations:
1. At least one operation time exceeds the cycle time C with a probability greater than ε (stochastic case), or one of the operation times exceeds C (deterministic case). In this case, we introduce K parallel stations defined as K = ⌈ θ / C ⌉, where:
– in the deterministic case, θ > C is the greatest operation time;
– in the stochastic case, θ > C is the smallest value such that the probability that an operation time exceeds θ is less than ε.
Note that increasing C in order to "absorb" the greatest operation time would reduce the throughput accordingly.
2. In the case of important variations of production volume, it is possible to take some of the parallel stations offline when demand is low. Thus, such an assembly line is flexible.
Note that parallel stations have the same mission. In other words, they perform the same set of operations during a given cycle time. Thus, K stations working in parallel with a cycle time C are equivalent to one station able to perform the same set of operations during a period C / K. Since the assembly lines under consideration are paced, all the stages of the assembly line should be reduced to a cycle time C / K. This makes it possible to use the algorithms of the previous chapter (Chapter 7) and the present chapter to balance an assembly line with parallel stations.
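The computation of K for reason 1 above can be sketched as follows (names illustrative); in the stochastic case, θ is estimated here as an empirical quantile of simulated operation times:

```python
import math

def parallel_stations_needed(C, times=None, sampler=None, eps=0.01, Z=20_000):
    """Number K = ceil(theta / C) of parallel stations, sketched.

    Deterministic case: theta is the greatest operation time in `times`.
    Stochastic case: theta is estimated as the (1 - eps)-quantile of the
    operation-time distribution sampled by `sampler`, i.e. the smallest
    value that an operation time exceeds with probability less than eps."""
    if times is not None:
        theta = max(times)                          # deterministic case
    else:
        draws = sorted(sampler() for _ in range(Z))
        theta = draws[min(Z - 1, math.ceil((1 - eps) * Z))]
    return math.ceil(theta / C)
```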
Finally, when demand increases, either we introduce parallel stations or we reduce the cycle time. The first solution is expensive, and the second is not always possible: if the operation times are deterministic, C cannot be less than the greatest operation time, and if the operation times are stochastic, reducing C increases the number of times the defined sets of operations will not be completed during the period C, which also leads to additional costs.

Remark:
Assume that a single station follows K parallel stations in an assembly line. The cycle time ("takt" time) of the K stations is C, while it is equal to C / K for the single station. To preserve the regularity of the production flow, it is recommended to shift the starting times of operations on each next parallel station C / K units of time forward, while maintaining the starts on a given parallel station every C units of time. Figure 8.11 illustrates this rule for a case of stochastic operation times with K = 3. In this example, none of the operation times exceeds the "takt" time C. Remember that if a completion time exceeds the "takt" time, some of the techniques given in Section 8.6 can be used.
Figure 8.11 A single station following three parallel stations (parallel stations at stage n, single station at stage n + 1)
8.8 Particular Constraints

Two types of constraints are quite common in assembly-line-balancing problems: gathering given operations together in the same station, or forcing the assignment of two given operations to different stations.
8.8.1 A Set of Operations Should be Assigned to the Same Station

A straightforward approach consists in considering this set of operations, say E, as a single operation O whose predecessors are the predecessors of the operations belonging to E. If operation times or mix ratios are stochastic, some additional simulations are necessary to evaluate the operation time of O. It may happen that, even though the operations belong to the same set E, it is possible to assign them to several stations, provided that additional resources are allocated to these stations, incurring an additional cost. In this case, this cost becomes a secondary criterion.
8.8.2 Two Operations Should be Assigned to Different Stations

In the algorithms developed in Chapter 7 and the current chapter, an additional condition should be added to the definition of W1 (or W2 or W3): these sets must not contain operations that cannot be assigned to the same station as an operation already assigned to the station under consideration.
8.8.3 Line Balancing with Equipment Selection

8.8.3.1 Introduction

Let EQ = { e_1, …, e_M } be a set of pieces of equipment. A piece of equipment is a set of devices that can perform a set of operations. For i = 1, …, M, we denote by c_i and O_i, respectively, the cost of e_i and the set of operations it can perform. We also denote by O the set of operations required to manufacture the type P of product under consideration, and we assume that O ⊆ O_1 ∪ … ∪ O_M, i.e., that the pieces of equipment available in the set EQ are sufficient to manufacture P. When a piece of equipment is used, it is not compulsory to execute all of its possible operations. For example, if we consider equipment e_i and if card( O_i ) = n_i (remember that card( O_i ) is the number of elements of O_i), we can theoretically perform 2^{n_i} − 1 sets of operations using e_i. In fact, due to production constraints, only k_i …

… 1 ) and ( K < K* ), then:
   1.4.3.1. Set S* = S.
   1.4.3.2. Set K* = K.
1.5. If W11 and W12 are empty and some operations are still not assigned, then:
   1.5.1. Set i = i + 1.
   1.5.2. Go to 1.2.
1.6. If W11 ∪ W12 is not empty, then:
   1.6.1. Select an operation at random in W11 ∪ W12 and assign it to the station under consideration. If the selected operation belongs to W11, the task will be located at the front of the U-line, otherwise at the back. Note: at this level, we may want to select operations alternately in W11 and W12, if possible.
   1.6.2. Set Q = Q − θ, where θ is the operation time of the selected operation.
   1.6.3. Go to 1.3.
2. End of loop z.
3. Print S* and K*.
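The candidate sets W11 and W12 used in the selection step can be sketched as follows (data layout illustrative): an unassigned operation enters W11 when all its predecessors are assigned (it can be placed from the entrance of the U-line) and W12 when all its successors are assigned (it can be placed from the exit).

```python
def candidate_sets(ops, preds, succs, assigned):
    """Candidate operations for a U-line station, sketched.

    W11: unassigned operations whose predecessors are all assigned.
    W12: unassigned operations whose successors are all assigned."""
    unassigned = [o for o in ops if o not in assigned]
    W11 = [o for o in unassigned if preds.get(o, set()) <= assigned]
    W12 = [o for o in unassigned if succs.get(o, set()) <= assigned]
    return W11, W12
```

For a simple chain A → B → C with nothing assigned yet, A is assignable from the front and C from the back.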
Numerical Example
Consider the example given in Section 7.3 with C = 40 . The objective is to minimize the maximum idle time among the stations. The number of times the optimization algorithm is launched is Z = 100 . The result is given in Figure 8.15 and Table 8.20. In Table 8.20, the number in brackets indicates the location of the operation: 1 means that the operation is assigned from the entrance of the U-line, while 2 means that the assignment is made starting from the exit.
Figure 8.15 A U-shape layout (operations A–O arranged around the U-line and grouped into stations S1–S4; see Table 8.20 for the assignment)
Table 8.20 Layout and location of operations

Station   Operations assigned to the station   Remaining available time
1         C(1), F(1), I(1), N(2), O(2)         3
2         A(1), H(2), K(2), M(2)               0
3         B(1), J(2), L(2)                     3
4         D(2), E(2), G(2)                     1
8.9.2.3 Binary Linear Programming Model
We consider a set S = { 1, 2, …, n } of operations and denote by t_i, i = 1, …, n, the time required to perform operation i. In the following, N_min is a lower bound on the number of stations required to accommodate the operations, and N_max is an upper bound. Remember that a possible value of N_min is:

N_min = ⌈ ( ∑_{i=1}^{n} t_i ) / C ⌉

Furthermore, N_max can be obtained by applying a heuristic algorithm (COMSOAL, for instance). We also define:

x_{i,j} = 1 if task i is assigned to station j from the entrance of the U-line, and 0 otherwise;
y_{i,j} = 1 if task i is assigned to station j from the end of the U-line, and 0 otherwise;
z_j = 1 if at least one task is assigned to station j, and 0 otherwise.

With these notations, if we minimize the number of stations, the criterion is:

Minimize ∑_{j=N_min+1}^{N_max} z_j    (8.15)
Thus, the objective is to reduce the number of additional stations above the minimal number required to accommodate the operations. The constraints are:

∑_{j=1}^{N_max} ( x_{i,j} + y_{i,j} ) = 1  for i = 1, …, n    (8.16)
Constraints 8.16 guarantee that each operation is assigned to one and only one station.

∑_{i=1}^{n} t_i ( x_{i,j} + y_{i,j} ) ≤ C  for j = 1, …, N_min    (8.17)
Constraints 8.17 guarantee that the sum of operation times in a station belonging to the minimal set of stations (i.e., the stations that are necessarily used) is less than or equal to the cycle time.

∑_{i=1}^{n} t_i ( x_{i,j} + y_{i,j} ) ≤ C z_j  for j = N_min + 1, …, N_max    (8.18)
Equations 8.18 concern the stations that exceed the minimal set of stations required to accommodate the operations. These constraints guarantee that the sum of times of the operations assigned to such a station does not exceed the cycle time.

∑_{j=1}^{N_max} ( N_max − j + 1 ) ( x_{a,j} − x_{b,j} ) ≥ 0    (8.19)
for all pairs ( a, b ) such that a immediately precedes b. Equations 8.19 ensure that the precedence constraints are not violated for the operations assigned starting from the U-line entrance.

∑_{j=1}^{N_max} ( N_max − j + 1 ) ( y_{b,j} − y_{a,j} ) ≥ 0    (8.20)
for all pairs ( a, b ) such that a immediately precedes b . Constraints 8.20 ensure that the precedence constraints are not violated for the operations assigned starting from the U-line exit.
Finally, as already mentioned:

x_{i,j}, y_{i,j}, z_j ∈ { 0, 1 } for all i, j    (8.21)
Remarks:
1. Solving this binary linear program leads to a solution that minimizes the number of stations, but to introduce other criteria, such as the minimization of the maximum idle time over the stations, it is necessary to rewrite the model. From this point of view, COMSOAL is much more flexible.
2. If the variables y_{i,j} and Constraints 8.20 are omitted, this model can be used for the simple assembly line balancing (SALB) problem when the objective is to minimize the number of stations.
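Constraints 8.16–8.20 can be transcribed directly, for instance as a checker of a candidate 0–1 solution (data layout illustrative; Constraints 8.21 hold by construction of the inputs):

```python
def check_uline_assignment(t, C, x, y, z, n_min, n_max, prec):
    """Verify Constraints 8.16-8.20, sketched.  t: dict i -> t_i;
    x[i][j], y[i][j], z[j] in {0, 1}; stations numbered 1..n_max;
    prec: pairs (a, b) with a an immediate predecessor of b."""
    ops, stations = list(t), range(1, n_max + 1)
    # (8.16) each operation assigned exactly once, front or back.
    if any(sum(x[i][j] + y[i][j] for j in stations) != 1 for i in ops):
        return False
    # (8.17)/(8.18) station loads within the cycle time.
    for j in stations:
        load = sum(t[i] * (x[i][j] + y[i][j]) for i in ops)
        cap = C if j <= n_min else C * z[j]
        if load > cap:
            return False
    # (8.19)/(8.20) precedence from the entrance and from the exit.
    for a, b in prec:
        if sum((n_max - j + 1) * (x[a][j] - x[b][j]) for j in stations) < 0:
            return False
        if sum((n_max - j + 1) * (y[b][j] - y[a][j]) for j in stations) < 0:
            return False
    return True
```

Feeding the same expressions to a generic MILP solver then yields the optimization model itself.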
8.9.2.4 Adjustment to Demand
When quantities demanded change, three types of techniques are available to rebalance the U-line:
• adding or removing worker(s) to increase or decrease the throughput (remember that workers are supposed to be able to perform several functions and are aware of several production processes);
• moving machines to create a new type of station when the changes in demand are substantial;
• changing the operation chart, which includes changing the paths workers have to follow to reach the equipment required to perform the operations they are in charge of.

A trial-and-error method to balance (or rebalance) a U-line can be summarized as follows:
1. Calculate the takt time, that is to say, the period between two consecutive product completion times required to meet customer demand.
2. Define the minimum number of workers required. This is done based on the sum of the operation times.
3. Define the number of stations. This is based on the minimum number of workers, corrected by a rule-of-thumb factor. In some special cases, designers may want to assign two workers to the same station, depending on the operations to be performed (some assemblies simply need two workers).
4. Assign workers and operations to stations. The times required to move from task to task should be taken into account. This step includes defining the crossover stations and the workers' paths.
If the balance is not satisfactory, restart at Step 2.

8.9.2.5 Comparing U-shaped Lines with Regular Lines
Many researchers have pointed out the following advantages of U-shaped assembly lines compared to regular assembly lines:
1. An increase in flexibility due to the presence of multiskilled operators.
2. More possibilities for grouping tasks in stations, due to the process applied for assigning tasks to stations. In U-shaped lines, a task can be assigned to a station when either its predecessors or its successors have already been assigned, while in a regular assembly line a task can be assigned to a station only if its predecessors are already assigned. As a consequence, the number of stations may be smaller in the case of a U-shaped line.
3. Distances are shorter in U-lines; this reduces moving times, facilitates communication between operators, and increases visibility in the line.
4. Quality control is improved, since a product is checked at least twice by the operator in charge of the first and last operations of the U-line.
5. Since a U-shaped line is more compact, the sense of belonging to the same team is usually stronger than in a straight line.
8.9.3 Concluding Remarks

Several generalizations of line-balancing problems were examined in this chapter, in particular line balancing with stochastic operation times and a stochastic number of products manufactured. Note also that, for these line-balancing problems, another approach is to introduce overflow costs in the criterion of the models.
The concept of a bucket brigade is quite new and is used mostly in distribution warehouses (to organize order picking), in simple assembly processes and in the apparel industry. Several industrial applications are mentioned in the literature, but most of them are very sensitive to the context and, therefore, distant from the theoretical models. Often, bucket brigades have been successfully used in assembly systems with large employee turnover and, as a consequence, heterogeneous and unstable workers' competences.
Japanese firms are reluctant to assign workers to specialized jobs. They prefer to train workers in order to develop them into multiskilled operators able to handle several types of jobs, which in turn favors U-shaped lines. This kind of line is used where the goal is to reduce the throughput time when low-skilled workers are at stake. These lines improve quality, increase the range of manufactured products, and also respond quickly to changes in demand, labor turnover or absenteeism.
References Bard JF, Feo TA (1991) An algorithm for the manufacturing equipment selection problem. IIE Trans. 23:83–92 Bartholdi JJ, Eisenstein DD (1996) A production line that balances itself. Oper. Res. 44(1):21–34 Bartholdi JJ, Bunimovich LA, Eisenstein DD (1999) Dynamics of 2- and 3-worker bucket brigade production lines. Oper. Res. 47(3):488–491 Bartholdi JJ, Eisenstein DD, Foley RD (2001) Performance of bucket brigades when work is stochastic. Oper. Res. 49(5):710–719 Belmokhtar S, Dolgui A, Guschinsky N, Levin G (2006) An integer programming model for logical layout design of modular machining lines. Comput. Ind. Eng. 51(3):502–518 Bratcu A, Dolgui A (2005) A survey of the self-balancing production lines (“Bucket Brigades”). J. Intell. Manuf. 16(2):139–158 Bukchin J, Tzur M (2000) Design of flexible assembly line to minimize equipment cost. IIE Trans. 32: 585-598 Dolgui A, Ihnatsenka I (2009) Balancing modular transfer lines with serial-parallel activation of spindle heads at stations. Discr. Appl. Math. 157(1):68–89 Gokcen H, Agpak K (2006) A goal programming approach to simple U-line balancing problem. Eur. J. Oper. Res. 171(2):577–585 Graves SC, Lamar BW (1983) An integer programming procedure for assembly system design problems. Oper. Res. 31(3):522–545 Graves SC, Redfield CH (1988) Equipment selection and task assignment for multiproduct assembly system design. Int. J. Flex. Manuf. Syst. 1(1):31–50 Miltenburg J, Wijngaard J (1994) The U-line balancing problem. Manag. Sci. 40(10):1378–1388 Miltenburg J (2001) One-piece flow manufacturing on U-shaped production lines: a tutorial. IIE Trans. 33(4):303–321 Nakade K, Ohno K (1997) Analysis and optimization of a U-shaped production line. J. Oper. Res. Soc. Jpn. 40(1):90–104 Urban T (1998) Optimal balancing of U-shaped assembly lines. Manag. Sci. 44(5):738–741
Further Reading Agpak K, Gokcen H (2007) A chance-constrained approach to stochastic line balancing problem. Eur. J. Oper. Res. 180:1098–1115 Amen M (2000) An exact method for cost-oriented assembly line balancing. Int. J. Prod. Econ. 64:187–195 Amen M (2001) Heuristic methods for cost-oriented assembly line balancing: a comparison on solution quality and computing time. Int. J. Prod. Econ. 69:255–264 Askin RG, Standridge CR (1993) Modeling and Analysis of Manufacturing Systems. John Wiley and Sons, New York, NY Askin RG, Zhou M (1998) Formation of independent flow-line cells based on operation requirements and machine capabilities. IIE Trans. 30:319–329 Berger I, Bourjolly JM, Laporte G (1992) Branch-and-bound algorithms for the multi-product assembly line balancing problem. Eur. J. Oper. Res. 58:215–222 Bratcu A, Dolgui A (2009) Some new results on the analysis and simulation of bucket brigades (self-balancing production lines). Int. J. Prod. Res. 47(2):369–388 Capacho L, Pastor R, Dolgui A, Guschinskaya O (2009) An evaluation of constructive heuristic methods for solving the alternative subgraphs assembly line balancing problem, J. Heuristics 15:109–132
Chakravarty AK, Shtub A (1986) A cost minimization procedure for mixed model production lines with normally distributed task times. Eur. J. Oper. Res. 23(1):25–36 Chow W-M (1990) Assembly Line Design: Methodology and Applications. Marcel Dekker, New York, NY Ding F-Y, Cheng L (1993) An effective mixed-model assembly line sequencing heuristic for just-in-time production systems. J. Oper. Manag. 11(1):45–50 Dolgui A, Finel B, Guschinsky N, Levin G, Vernadat F (2005) A heuristic approach for transfer lines balancing. J. Intell. Manuf. 16(2):159–172 Dolgui A (ed) (2006) Feature cluster on balancing assembly and transfer lines. Eur. J. Oper. Res. 168(3):663–951 Driscoll J, Abdel-Shafi AAA (1985) A simulation approach to evaluating assembly line balancing solutions. Int. J. Prod. Res. 23(5):975–985 Erel E, Sarin SC (1998) A survey of the assembly line balancing procedures. Prod. Plann. Contr. 9:414–434 Fleszar K, Hindi KS (2003) An enumerative heuristic and reduction methods for the assembly line balancing problem. Eur. J. Oper. Res. 145:606–620 Ghosh S, Gagnon RJ (1989) A comprehensive literature review and analysis of the design, balancing and scheduling of assembly systems. Int. J. Prod. Res. 27(4):637–670 Gokcen H, Baykoc OF (1999) A new line remedial policy for the paced lines with stochastic task times. Int. J. Prod. Econ. 58:191–197 Gokcen H, Erel E (1998) Binary integer formulation for mixed-model assembly line balancing problem. Comput. Ind. Eng. 34(2):451–461 Ignall E (1965) A review of assembly line balancing. J. Ind. Eng. 16(4):244–254 Jin M, Wu SD (2002) A new heuristic method for mixed model assembly line balancing problem. Comput. Ind. Eng. 44:159–169 Kottas JF, Lau HS (1973) A cost oriented approach to stochastic line balancing. AIIE Trans. 5(2):164–171 Kottas JF, Lau HS (1976) A total operating cost model for paced lines with stochastic task times. AIIE Trans. 8(2):234–240 Kottas JF, Lau HS (1981) A stochastic line balancing procedure. Int. J. Prod. Res. 
19(2):177– 193 Lau HS, Shtub A (1987) An exploratory study on stopping a paced line when incompletions occur. IIE Trans. 19(4):463–467 Matanachai S, Yano CA (2001) Balancing mixed-model assembly lines to reduce work overload. IIE Trans. 33:29–42 McMullen PR, Frazier GV (1997) A heuristic for solving mixed-model line balancing problems with stochastic task durations and parallel stations. Int. J. Prod. Econ. 51:177–190 Miltenburg J (2002) Balancing and scheduling mixed-model U-shaped production lines. Int. J. Flex. Manuf. Syst. 14(1):119–151 Minzu V, Henrioud JM (1998) Stochastic algorithm for tasks assignment in single or mixedmodel assembly lines. APII - J. Eur. Syst. Aut. 32:831–851 Monden Y (1994) Toyota Production System, 2nd edn. Industrial Engineering and Management Press, Institute of Industrial Engineers, Norcross, GA Moodie CL, Youg HH (1965) A heuristic method of assembly line balancing for assumptions of constant or variable work element times. J. Ind. Eng. 16(1):23–29 Nkasu MM, Leung KH (1995) A stochastic approach to assembly line balancing. Int. J. Prod. Res. 33(4):975–991 Nof SY, Wilhelm WE, Warnecke HJ (1997) Industrial Assembly, Chapman & Hall, London Ramsing K, Downing R (1970) Assembly line balancing with variable time elements. Ind. Eng. 2:41–43 Reeve NR, Thomas WH (1973) Balancing stochastic assembly lines. AIIE Trans. 5(3):223–229 Rekiek B, Dolgui A, Delchambre A, Bratcu A (2002) State of art of optimization methods for assembly line design. Ann. Rev. Contr. 26:163–174
Sarin SC, Erel E, Dar-El EM (1999) A methodology for solving single-model, stochastic assembly line balancing problem. Omega 27(5):525–535 Scholl A (1999) Balancing and Sequencing of Assembly Lines. Physica-Verlag, Heidelberg Shin D (1990) An efficient heuristic for solving stochastic assembly line balancing problem. Comput. Ind. Eng. 18(3):285–295 Silverman FN, Carter JC (1984) A cost-effective approach to stochastic line balancing with offline repairs. J. Oper. Manag. 4(2):145–157 Silverman FN, Carter JC (1986) A cost-based methodology for stochastic line balancing with intermittent stoppages. Manag. Sci. 32(4):455–463 Suresh G, Sahu S (1994) Stochastic assembly line balancing using simulated annealing. Int. J. Prod. Res. 32(8):1801–1810 Talbot FB, Patterson JH, Gehrlein WV (1986) A comparative evaluation of heuristic line balancing techniques. Manag. Sci. 32(4):430–454 van Zante-de Fokkert JI, De Kok TG (1997) The mixed and multi model line balancing problem: a comparison. Eur. J. Oper. Res. 100(3):399–412 Vrat P, Virani A (1976) A cost model for optimal mix of balanced stochastic assembly line and the modular assembly system for a customer oriented production system. Int. J. Prod. Res. 14(4):445–463 Yano CA, Rachamadugu R (1991) Sequencing to minimize work overload in assembly lines with product options. Manag. Sci. 37(5):572–586
Chapter 9
Dynamic Scheduling and Real-time Assignment
Abstract  The beginning of this chapter analyzes the progress of scheduling methods over time and explains why dynamic scheduling is superior to static scheduling in modern production systems; the rest of the chapter is therefore devoted exclusively to dynamic scheduling. First, the well-known priority rules are presented and analyzed. Then, repair-based approaches are introduced; they are used to adjust a given schedule when an unexpected event arises. The remainder of the chapter concerns some novel approaches developed by the authors of this book. The first one assigns a new job to resources in real time, without disturbing previous assignments. This approach is applied to linear and assembly production systems, is highlighted with numerical examples, and is shown to be usable for controlling WIP. The second technique permits some slight modifications of previous assignments. Interestingly, this approach has also been successfully applied to multifunction battlefield radars.
9.1 Introduction and Basic Definitions

Since World War II, production management has slowly evolved from local to global decision making. Initially, developments in the field were limited to improving the management of activities related to individual departments of production systems, such as inventory management, planning and scheduling of manufacturing activities, quality management and transportation management, to quote just a few. Most of the time, each of these activities was under the responsibility of a particular manager whose goal was to make decisions for a given period (the management horizon). Usually, manufacturing managers generated an operation schedule each week, the goal being either to balance the load of the
machines, or to meet intermediate deadlines, or to perform most of the operations during the week, etc.
Scheduling is defined as the process of assigning operations to a set of resources over time with the objective of optimizing a criterion. Some criteria will be mentioned hereafter. We will mainly consider the makespan¹, which is the common criterion in real-time situations. Four types of constraints may be part of a scheduling problem:
1. Operation precedence constraints. Such a constraint requires the completion time of an operation to be less than or equal to the start time of any succeeding operation.
2. Resource capacity constraints. Such a constraint requires that there is no overlap between the processing periods of two operations that share the same resource.
3. Processing time constraints. A processing time constraint asserts that the difference between the completion time and the start time of an operation is equal to the operation time.
4. Ready time constraints. A ready time requirement concerns a product. It states that the start time of the first operation of the product is greater than or equal to the release date of the product.
Several criteria may be involved in a scheduling problem, but various techniques are available to transform a multicriteria problem into a single-criterion problem. Among these techniques are the weighted sum of the criteria; treating all but one criterion as constraints; introducing a hierarchy among the criteria; the so-called goal programming method, which minimizes a weighted sum of deviations of the criteria from predefined values; and the so-called compromise programming, which approaches some "ideal" solution as closely as possible. These methods have been illustrated in the chapter dedicated to outsourcing.
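The four constraint types can be checked directly on a candidate schedule; the sketch below uses an illustrative data layout (operation → (duration, predecessors, ready time); processing-time constraints hold by construction, since completion = start + duration):

```python
def is_feasible(schedule, ops):
    """Check precedence, resource-capacity and ready-time constraints,
    sketched.  schedule: dict op -> (start, resource); ops: dict op ->
    (duration, predecessors, ready_time)."""
    for o, (start, _res) in schedule.items():
        dur, preds, ready = ops[o]
        if start < ready:                       # ready-time constraint
            return False
        for p in preds:                         # precedence constraints
            if schedule[p][0] + ops[p][0] > start:
                return False
    # resource-capacity constraints: no overlap on a shared resource.
    items = list(schedule.items())
    for i in range(len(items)):
        for k in range(i + 1, len(items)):
            (o1, (s1, r1)), (o2, (s2, r2)) = items[i], items[k]
            if r1 == r2 and s1 < s2 + ops[o2][0] and s2 < s1 + ops[o1][0]:
                return False
    return True
```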
The first schedules were worked out “by hand” using a Gantt chart.2 Furthermore, they were frozen until the end of the management horizon and, when unexpected events disturbed the schedule, modified according to the experience of the manager. In other words, the global adjustment of the different departments was done at the beginning of each period and adapted locally in case of disturbances in the meantime.
1 In manufacturing systems, the makespan of a set of products is the difference between the maximal completion time and the minimal start time of the operations required to complete the set of products.
2 A Gantt chart is a type of bar chart that illustrates a project schedule. It represents each operation by a bar on a time axis associated with the resource in charge of the operation. The origin (respectively, the end) of the bar is located at the abscissa that represents the start (respectively, the completion) time of the operation. Usually, the bars related to operations performed on the same product or batch of products have the same color.
When computers started appearing in firms, schedules switched from Gantt charts to computer screens. At the same time, computer programs were developed to compute schedules that “optimize” some criterion over the management horizon. Among the criteria mentioned in the literature are the minimization of the completion time of the set of operations under consideration (the makespan), the mean WIP (work-in-progress), the mean manufacturing time (the mean flow time), the mean tardiness and the mean processing cost, as well as the maximization of productivity, etc. These first attempts largely ignored the randomness inherent in production systems: they made the implicit assumption of a static environment without any kind of unexpected event that might disrupt the schedule. This kind of technique is called static scheduling. An extensive literature exists on this subject when operation times are deterministic, see for instance (Baker, 1974), (Wiers, 1997), (Blazewicz et al., 2001), (Pinedo, 2008). Static scheduling problems that call for an optimal solution are usually NP-hard.3 The explanation for such a restrictive decision support system is twofold: weak data processing and communication systems, combined with relatively limited competition until the 1970s. Remember that affordable, powerful processing and communication systems appeared quite recently. Furthermore, international trade agreements really started only at the end of World War II. For instance, the GATT (General Agreement on Tariffs and Trade) was signed in 1947 and put in place in 1948, but became really influential only at the beginning of the 1990s. In most real-world environments, unexpected events arise, which forces the schedule to be revised or adapted. Unexpected events are either resource related or operation related, see (Stoop and Wiers, 1996), (Cowling and Johansson, 2002).
Resource-related events are machine breakdowns, tool failures, unavailability of tools or employees, shortage of raw material or components, defective or inadequate material or components, etc. Operation-related events are, for instance, modifications of deadlines, order cancellations, late arrivals of orders and changes in manufacturing processes due to the replacement of some resources. Thus, a schedule often becomes obsolete almost the moment it is completed (Wu et al., 1999). Some authors have addressed the gap between scheduling theory, mainly focused on static decisions, and the needs of real-life environments that are often disturbed by unexpected events, see (MacCarthy and Liu, 1993), (Cowling and Johansson, 2002). This situation advocates promoting dynamic scheduling, a set of methods that are able to react to unexpected events by either adjusting the existing schedule or rescheduling the remaining operations. Despite competition among companies and an ever-changing market, the basic structure of production systems remained stable until the middle of the 1990s: a
3 A problem is NP-hard if an algorithm for solving it can be translated into one for solving any NP problem (non-deterministic polynomial time problem). NP-hard therefore means “at least as hard as any NP problem”. A problem belongs to the class NP if it is solvable in polynomial time by a non-deterministic Turing machine.
set of departments, each of them having its own manager whose goal was to optimize a criterion specific to the department over a given horizon, was the common organization of production systems. Recently, the pressure of the competitive market, encouraged by powerful data processing, communication systems and permissive international trade agreements, has affected the structure of production systems, calling for:
• Integration of the activities that cover the whole spectrum from customers’ requirements to payment.
• Flexibility with regard to demand changes.
The answer to these requirements is given by the supply chain paradigm, see (Govil and Proth, 2002), (Lee et al., 1997), (Poirier and Reiter, 1996). A supply chain is a global network of organizations that cooperate to improve the flows of material and information between suppliers and customers at the lowest cost and with the highest speed. Thus, a supply chain is organized by projects instead of departments. The scheduling objective in such a system is no longer to optimally schedule a set of tasks. The goal is now to schedule and reschedule operations on-line, in the order in which the corresponding demands appear in the system. We refer to this activity as real-time assignment. A decision is said to be made in real time if the period between the time the data required to make the decision become available and the time the decision must be made is greater than the time required to compute the decision. As a consequence, this period can vary drastically from case to case. For example, it is around 10⁻⁴ s when controlling multifunction radars, but may be 24 h or more in a civil engineering project. For a long time, real-time decisions were linked with data processing and communication. Now, this notion is common in the management of supply chains.
Real-time assignment in supply chains is motivated by two objectives:
• Being able to reschedule “on-line” the whole supply chain in the case of unexpected events such as machine breakdowns, strikes, rework, fundamental changes in the market, etc.
• Being able to react immediately to customers’ requirements. In particular, it is not unusual to guarantee the delivery date in less than two minutes, possibly over the phone.
Part of the work done on real-time assignment is mentioned in (Chauvet et al., 2000) and will be detailed in this chapter. It should be noted that dynamic scheduling is different from real-time assignment. Dynamic scheduling is in keeping with the classical production architecture that considers a production system as the concatenation of departments that are under the responsibility of different managers. A dynamic scheduling approach is used in the manufacturing departments, providing a static schedule at the beginning of the management period and adjusting this schedule in the case of disturbance during this period, as mentioned above. This schedule concerns a set of operations corresponding to different products that have to be completed on a given horizon (a week, for instance). Real-time assignment appeared when the supply chain paradigm started spreading. In a supply chain, each project covers the whole production cycle, from customers’ requirements to payment, and, in addition, the diversity of the products involved in a project is limited in terms of the number of types of operations. Nowadays, a project is a continuous flow of activities. In particular, manufacturing systems have progressively switched from job-shops to assembly lines that, adequately balanced, guarantee productivity as well as flexibility. Thus, real-time assignment consists in assigning the set of operations related to a product (or a small range of product types) to a set of resources as soon as the order arrives in the manufacturing system. In the case of a disturbance, previously scheduled operations that are not completed are rescheduled in the order of arrival of the corresponding demands. The goal is to reassign the operations in real time. In this chapter, we address dynamic scheduling and introduce real-time assignment, keeping in mind that the future will certainly favor real-time assignment. The next section deals with dynamic scheduling. The last two sections of this chapter relate some new advances in real-time assignment.
9.2 Dynamic Scheduling

9.2.1 Reactive Scheduling: Priority (or Dispatching) Rules

In a reactive scheduling approach, no schedule is generated in advance. Each time a conflict arises in the choice of the next operation to perform on a resource that becomes free, a priority rule (also called a dispatching rule) is applied to make the decision.

9.2.1.1 Priority Rules Based on Operation Times

In (Panwalkar and Iskander, 1977), 113 priority rules are enumerated. Some of these rules are recalled hereafter.

Rule 1: Priority to the product whose next operation has the minimal operation time (SPT, which stands for “shortest processing time”).
It is advisable not to apply this rule when setup times are significant or operation times are widely diversified. It is worth mentioning the variant that gives priority to the product whose next operation has the minimal weighted operation time
or WSPT (weighted shortest processing time). This rule minimizes the weighted mean flow time and the weighted percentage of tardy product completion times, see (French, 1982) and (Haupt, 1989). It is efficient when the system is highly loaded. In (Jayamohan and Rajendran, 2004), two types of weights (denoted, respectively, by Wi,1 and Wi,2) are considered for each product i:
• Wi,1 = 1/hi, where hi is the holding cost of a product of type i for one unit of time.
• Wi,2 = 1/ri, where ri is the tardiness cost of product i when the completion time of one unit of product is one unit of time late with regard to the deadline.
Wi,1 favors products having a high holding cost, while Wi,2 favors products with a high penalty cost.
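As a minimal sketch (the function names and the queue representation are our own illustration, not from the book), Rule 1 and its weighted variant reduce to selecting the candidate with the smallest (weighted) processing time of its next operation:

```python
def spt_pick(queue):
    """Rule 1 (SPT): pick the product whose next operation is shortest.

    queue: list of (product, next_op_time) tuples waiting at the machine."""
    return min(queue, key=lambda q: q[1])[0]

def wspt_pick(queue, weight):
    """WSPT: pick the product minimizing the weighted operation time,
    with e.g. weight[i] = 1/h_i (holding cost) or 1/r_i (tardiness cost)."""
    return min(queue, key=lambda q: weight[q[0]] * q[1])[0]
```

With weights Wi,1 = 1/hi, a product with a high holding cost gets a small weight and therefore a small weighted operation time, which gives it priority, as stated above.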
Rule 2: Priority to the product whose next operation has the minimal sum of operation time and setup time.
This rule generalizes the previous one and should be avoided when the sums of operation and setup times are widely diversified. These two rules reduce the length of the queues in front of the resources (machines, transportation resources, etc.). Two more rules are obtained by replacing “minimal” by “maximal” in Rules 1 and 2; they may lead to long queues in front of some machines.

Rule 3: Priority to the product for which the sum of the operation times of the operations not yet performed is minimal.
This rule gives priority to the products that are the closest (in terms of duration) to completion. Another rule is obtained by replacing “minimal” by “maximal” in Rule 3. It gives priority to the products that are at the beginning of their manufacturing cycle. As a consequence, it increases the WIP (work-in-progress) but tends to improve productivity. Note that these rules make it difficult to forecast product completion times.

9.2.1.2 Priority Rules Based on Deadlines
Rule 4: Priority to the most urgent products (EDD that stands for “earliest due date”).
The most urgent product is the one for which the difference between the deadline and the current time is minimal among the products that compete for the same assignment. This rule takes into account neither the number of operations that remain to be performed at the time of the decision nor the sum of their operation times. The primary goal of this rule is to meet the deadlines, but side effects may appear, such as delaying the manufacturing of products that require many operations and/or a long remaining total manufacturing time, which, in turn, leads to exceeding their deadlines.
Rule 5: Priority to the product having the greatest relative urgency. The relative urgency is measured by the ratio of the total manufacturing time of the operations that remain to be performed over the difference between the deadline and the current time. The greater this ratio is, the greater the relative urgency of the product. It should be noted that if the ratio is less than 1, then it might be possible to complete the product before the deadline, otherwise the delivery of the product will be late.
Rule 6: Priority based on the slack per remaining operation.
We define the slack si of a product i at time t by:

si = Di − t − ∑_{k=j}^{ni} ti,k

where:
Di is the deadline for product i,
ti,k is the processing time of the k-th operation of product i,
t is the time at which the decision is made (current time),
j is the rank of the next operation to perform,
ni is the number of operations required to complete product i.
The priority of product i is measured by:

Zi = si / (ni − j + 1) if si ≥ 0
Zi = si × (ni − j + 1) if si < 0

The lower Zi, the greater the priority of product i. Usually, this rule leads to a reduction of the mean tardiness. To conclude, Rules 4 and 5 tend to favor the criterion that satisfies the deadlines but, since these rules do not take into account the other products in process, unexpected side effects may arise.
9.2.1.3 Priority Rules Based on the Number of Operations The following rules are similar to the previous ones, except that we do not consider the manufacturing times of the remaining operations, only their number.
Rule 7: Priority to the product having the greatest number of remaining operations.
This rule is important when the setup times are not negligible compared to the operation times. A negative side effect is an increase in WIP (work-in-progress).
Rule 8: Priority to the product having the smallest number of remaining operations.
The primary objective of this rule is to clear from the production system the products that are close to completion (in terms of the number of remaining operations).

9.2.1.4 Priority Rules Based on the Costs
Rule 9: Priority to the product that has the greatest value or that guarantees the greatest benefit.
With the former, the goal is to avoid tying up capital; with the latter, it is to complete as soon as possible the products that provide the most new capital.
Rule 10: Priority to the product to which the greatest penalty applies when it is completed after its deadline.
Rule 11: Priority to the product to which the greatest relative penalty applies when it is completed after its deadline.
In Rule 11, the relative penalty is the product of the penalty and the relative urgency defined in Rule 5.

9.2.1.5 Priority Rules Based on the Setup Times
Rule 12: Priority to the product having the smallest setup time. This rule is particularly interesting when the setup times depend on the sequence of operations performed by the machine under consideration. The rule tends to locally minimize the sum of setup times related to each machine.
Rule 13: Priority to the product having the smallest relative setup time.
The relative setup time is the setup time divided by the operation time of the candidate operation. This rule is particularly interesting when the relative setup times vary widely and/or depend on the sequence of operations performed by the machine under consideration.
9.2.1.6 Priority Based on the Release Dates
Rule 14: Rule FIFO (first in, first out). The priority is given to the product that arrived in front of the machine first. The rule tends to reduce the standard deviation of the waiting times.
Rule 15: Priority to the product that arrived in the system first. In other words, the priority goes to the product that spent the greatest amount of time in the system (this rule is called AT that stands for “arrival time”). This rule tends to balance the manufacturing cycles of the products.
Rule 16: Rule LIFO (last in, first out). The priority is given to the product that arrived in front of the machine last. This rule is rarely used in practice, since when the system is very loaded, it may happen that some products remain “blocked”. 9.2.1.7 Priority Based on a Rough Evaluation of the Near Future
Rule 17: Priority to the product that will next visit the machine having the shortest queue in front of it.
Rule 18: Priority to the product that will next visit the machine having the lowest load or, in other words, the machine for which the sum of the operation times of the operations waiting in front of it is the lowest.
These rules are an attempt to take the near future into account. Rules that integrate some information concerning the future belong to the set of global dispatching rules. The COVERT rule that will be presented hereafter is also a global dispatching rule.
9.2.1.8 Various Priority Rules

Some other rules that have been studied are listed hereafter.

Rule 19: Mixed rule (FIFO + operation time).
In this rule, a threshold T is given by the user. Let E be the set of products that have been waiting in front of the machine for a time greater than T. If E is not empty, the priority is given to a product of E on a FIFO basis; otherwise Rule 1 applies. The goal of this rule is to avoid blocking some products in front of a machine. Of course, the consequences of this rule closely depend on the value of T.
Rule 20: In this rule, FIFO and Rule 1 are used alternately.
Rule 21: The user chooses a threshold u ∈ [0, 1]. Let F be the set of products whose relative urgency (see Rule 5) is greater than u. If F is not empty, Rule 1 applies to the products of F; otherwise, the same rule applies to the complete set of products.
Rule 22: Apply Rule 1 to the subset of products that will visit the least loaded machine after the current one.
Rule 23: COVERT (which stands for “cost over time”) rule (Russell et al., 1987).
For this rule, the priority index is defined as follows:

Zi = 1/ti,j if si < 0
Zi = (1/ti,j) × (WTi − si)/WTi if 0 ≤ si < WTi
Zi = 0 if si ≥ WTi

si and ti,j have been defined in Rule 6. WTi is the sum of the expected waiting times for the product’s uncompleted operations.4 Thus, WTi = ∑_{j=k}^{ni} WTi,j, where k is the first uncompleted operation and ni the last operation of product i.5
4 WTi is a look-ahead component of the dispatching rule. A way to evaluate this parameter, if the system is not expected to change drastically over the short run, is to consider that the current system status is a good representation of the future system status. This method is used in (Holthaus, 1999) and (Raghu and Rajendran, 1993).
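The piecewise COVERT index of Rule 23 can be sketched as follows (a minimal illustration with our own argument names; the slack s and the estimate WTi are assumed to be computed as in Rule 6 and footnote 4):

```python
def covert_priority(s, t_next, wt):
    """Rule 23 (COVERT) priority index.

    s      : slack s_i of the product (Rule 6)
    t_next : processing time t_{i,j} of the candidate operation
    wt     : estimated remaining waiting time WT_i
    The higher the returned Z_i, the higher the priority."""
    if s < 0:                          # already late: full urgency
        return 1.0 / t_next
    if s < wt:                         # slack may be absorbed by waiting
        return (wt - s) / wt / t_next
    return 0.0                         # ample slack: no urgency
```

A product that is already late gets the full index 1/ti,j; the index then decreases linearly to 0 as the slack approaches WTi.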
We observe that the greater Z i , the greater the priority for product i . It is difficult to anticipate the consequences of a dispatching rule when applied to a manufacturing system. Indeed, some rules are known to improve particular criteria. For example, FIFO is usually effective for minimizing the maximum flow time and the variance of flow times, Rules 1 and 2 tend to reduce WIP, etc. Nevertheless, before applying a dispatching rule to a specific production system, it is worth testing the rule by simulation, and to restart the test when some parameters like system load, ratios of product types to manufacture, mean percentage of machine breakdowns, etc., change significantly.
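Such a simulation test can be sketched even on a single machine. The toy example below (made-up job data, our own function name) compares the mean flow time obtained under FIFO and under SPT (Rule 1), illustrating why SPT is known to reduce the mean flow time:

```python
def mean_flow_time(proc_times):
    """Mean flow time of jobs processed in the given order on one machine,
    all jobs being released at time 0."""
    t, total = 0.0, 0.0
    for p in proc_times:
        t += p            # completion time of this job
        total += t        # accumulate flow times
    return total / len(proc_times)

jobs = [5, 1, 3, 2]                   # illustrative processing times, arrival order
fifo = mean_flow_time(jobs)           # FIFO: process in arrival order
spt = mean_flow_time(sorted(jobs))    # Rule 1: shortest processing time first
```

On these data FIFO yields a mean flow time of 7.75 against 5.25 for SPT; a full test would of course use a discrete-event simulation of the target system with its actual load and breakdown parameters.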
9.2.2 Predictive-reactive Scheduling

Predictive-reactive scheduling (also called the repair-based approach) is a two-step process:
• At the beginning of the working period (the week, for instance), a static scheduling process provides the optimal or near-optimal schedule that will be applied over the whole period if no unexpected events arise – which is usually not the case. Note that it is possible to start with an initially unfeasible schedule and transform it into a feasible schedule by applying one of the so-called repair heuristics introduced hereafter.
• When an unexpected event disturbs the schedule, the schedule is modified by using a repair heuristic.
There are three types of repair heuristics:
– The shifting/swapping repair heuristics.
– The match-up schedule repair heuristics, which reschedule the remaining operations to match up with the initial schedule at some point in the future. Indeed, it is not always possible to reach this goal: the manufacturing system must be flexible enough to absorb the disturbance on a short horizon.
– The partial-schedule repair heuristics, which are specific to the problem at hand.
9.2.2.1 Shifting/Swapping Repair Heuristics A conflict is the violation of a constraint belonging to one of the four types listed in the introduction of this chapter:
5 WTi,j is the estimated waiting time for operation j of product i. For real-life problems, we can write WTi,j = u × ti,j, where u is a constant multiplier that can be computed by applying regression analysis to the waiting times observed in the history of the system.
1. operation precedence constraints;
2. resource capacity constraints;
3. processing time constraints;
4. ready time constraints.
A conflict that violates a constraint of Type 1 or 2 involves two operations, while a conflict of Type 3 or 4 involves only one operation. The type of a conflict is the type of the constraint violated. The rule “earliest conflict first” provides the order in which conflicts are repaired. Three types of actions can be conducted:
• Action swap, which consists in swapping the start times of two operations in conflict. This action may be applied to conflicts of Type 2.
• Action left-shift, which consists in shifting an operation backward in time until the violated constraint is satisfied. This action can be applied to the first three types of conflicts.
• Action right-shift, which consists in shifting an operation forward in time until the violated constraint is satisfied. This action can be applied to any type of conflict.
As mentioned in the introduction, we are interested in minimizing the makespan. When a constraint is violated, we penalize this criterion according to the “intensity of the violation”. For example:
• For a conflict of Type 1, we penalize the criterion by adding u × (br − bs), where u is a positive constant, br is the greater start time and bs is the smaller start time of the two operations under consideration.
• For a conflict of Type 2, we penalize the criterion by adding u × Δ, where Δ is the overlap of the two operation periods.
• For a conflict of Type 3, we penalize the criterion by adding u × [ti,j − (zi,j − yi,j)]⁺ to the makespan, where zi,j is the completion time, yi,j the start time and ti,j the operation time of the operation under consideration, and u is a positive constant.
• For a conflict of Type 4, we penalize the makespan by adding u × (Si − yi,1), where Si is the release date of product i and yi,1 is the start time of the first operation of this product. Indeed, since a conflict exists that involves the first operation of product i, yi,1 < Si.
The decisions that require the values of the criteria are made based on the penalized values. Furthermore, there is a priority among the actions conducted to resolve a conflict: the highest priority is for action swap, the second priority is for action left-shift, and action right-shift has the lowest priority.
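The four penalty terms can be gathered into one function. The sketch below is illustrative only (the keyword names and the constant u are our own; the book does not prescribe an implementation):

```python
def conflict_penalty(kind, u, **kw):
    """Penalty added to the makespan for one conflict of Type 1-4.

    u : positive constant weighting the intensity of the violation."""
    if kind == 1:   # precedence: gap between the two start times
        return u * (kw["b_r"] - kw["b_s"])
    if kind == 2:   # capacity: overlap of the two busy periods
        return u * kw["overlap"]
    if kind == 3:   # processing time: positive part of the slot shortfall
        return u * max(0.0, kw["t_op"] - (kw["z"] - kw["y"]))
    if kind == 4:   # ready time: start before the release date
        return u * (kw["release"] - kw["y_first"])
    raise ValueError("conflict type must be 1-4")
```

For instance, two operations overlapping by 3 time units on the same machine (Type 2) with u = 2 add a penalty of 6 to the makespan.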
Example 1
We consider two machines M1 and M2 that are visited by two batches of products denoted by B1 and B2.
Manufacturing time of B1 on M1: 7 units of time. Period: [0, 7].
Manufacturing time of B1 on M2: 4 units of time. Period: [8, 12].
Manufacturing time of B2 on M1: 5 units of time. Period: [7, 12].
Manufacturing time of B2 on M2: 6 units of time. Period: [12, 18].
This initial schedule is represented in Figure 9.1.
[Figure 9.1 Initial schedule – Gantt chart of batches B1 and B2 on machines M1 and M2]
A breakdown arises at time 4 on machine M 1 and lasts 3 units of time. The new state of the system is represented in Figure 9.2. Indeed, this schedule is no longer feasible.
[Figure 9.2 Consequence of the breakdown: an unfeasible schedule]
Two conflicts appear in Figure 9.2: • A Type-1 conflict between the first manufacturing stage of B1 on M 1 and the second manufacturing stage of B1 on M 2 (period [ 8, 10 ] ). • A Type-2 conflict between B1 and B2 on M 1 (period [ 7, 10 ] ).
The Type-2 conflict is chronologically the first. We apply a right-shift action to the second part of B1 . The result of this action is represented in Figure 9.3. It leads to a schedule that is still unfeasible.
[Figure 9.3 Consequence of the first right-shift action]
A second right-shift action is necessary to obtain a feasible schedule. The result is presented in Figure 9.4.
[Figure 9.4 The feasible schedule]
The makespan corresponding to the schedule of Figure 9.4 is equal to 22. In this example, the schedule is adjusted at time 7: it is the instant at which the breakdown is over. Note that the makespan increased by 4 units of time.
Example 2
We consider again two machines M1 and M2 and three batches B1, B2 and B3. The initial schedule is defined as follows:
• B1 spends 3 units of time on M1 (period [0, 3]) and 8 units of time on M2 (period [3, 11]).
• B2 spends 2 units of time on M1 (period [3, 5]) and 3 units of time on M2 (period [12, 15]).
• B3 spends 10 units of time on M1 (period [5, 15]) and 4 units of time on M2 (period [15, 19]).
This schedule is represented in Figure 9.5. The makespan is equal to 19.
[Figure 9.5 Initial schedule]
A breakdown arises on machine M2 at time 3 and lasts 3 units of time. The resulting unfeasible schedule is represented in Figure 9.6.
[Figure 9.6 State after the breakdown]
The first conflict is of Type 2, between B1 and B2 on machine M2. We decide to apply the swap action. The result is represented in Figure 9.7, which shows that a new conflict of Type 2 (resource capacity constraint) arises between B1 and B3 on machine M2. We resolve this conflict by right-shifting B3. The final (and feasible) schedule is represented in Figure 9.8. The makespan is equal to 21 units of time.
[Figure 9.7 Unfeasible schedule after swapping]
[Figure 9.8 Final schedule]
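Example 2 can be replayed in a few lines of code. The sketch below (our own variable names) places the batches on M2 in the order produced by the swap action (B2, then B1, then B3), each starting no earlier than the end of the breakdown, the previous batch on M2, and its own completion on M1; the sequential placement realizes the right-shift of B3 and recovers the makespan of 21:

```python
# Completion times on M1 (unchanged by the breakdown on M2)
m1_done = {"B1": 3, "B2": 5, "B3": 15}
# Processing times on M2
m2_time = {"B1": 8, "B2": 3, "B3": 4}

free = 6                                # M2 available again after the breakdown [3, 6]
schedule = {}
for b in ["B2", "B1", "B3"]:            # order on M2 after the swap action
    start = max(free, m1_done[b])       # cannot start before M1 finishes the batch
    schedule[b] = (start, start + m2_time[b])
    free = schedule[b][1]
makespan = free                         # last completion on M2
```

This yields B2 on [6, 9], B1 on [9, 17] and B3 on [17, 21], hence a makespan of 21, consistent with Figure 9.8.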
9.2.2.2 Match-up Schedule Repair Heuristics
Introduction
Match-up schedule repair heuristics have been extensively developed in (Bean et al., 1991) and in the bibliography mentioned in that paper. In this model, it is assumed that an initial (optimal or near-optimal) schedule has been established and can be followed as long as no disruption occurs. We call this initial schedule the “preschedule”. A preschedule is often obtained heuristically. When a disruption occurs, the schedule is revisited to match up with the preschedule at some point in time T in the future. T is called the “match-up time”. Indeed, we assume that the preschedule is flexible enough to “absorb” the disruption before time T. In other words, the match-up schedule is the same as the preschedule from T onwards.
In (Bean et al., 1991), a theoretical development is proposed, which leads to interesting results under strong assumptions. Unfortunately, in real-life problems, setups are discrete, operations are usually not pre-emptive and tardiness concerns the whole order (and not only the tardy part). This is why we restrict ourselves to a presentation of this approach under realistic assumptions, which deprives us of the theoretical results. In this presentation, we assume that each scheduled job concerns a lot composed of a large number of products. Thus, a job can be divided into several subjobs that can be performed on different machines. Furthermore, we assume that setup times, which may depend on the sequence of jobs, are taken into account, that performing an operation requires both a machine and a set of tools, and that both resources are limited. A particular case that presents some restrictions compared to the general problem is proposed to illustrate this method. We consider a preschedule that concerns operations performed on two identical machines. We assume that the quantity of tools available for manufacturing is not limited. This preschedule is represented in Figure 9.9.
[Figure 9.9 Preschedule]
In this example, the setup time is equal to one time unit whatever the job. Assume that a breakdown occurs at time 6 and lasts 2 units of time. The match-up method may lead to the repaired schedule represented in Figure 9.10. The match-up time is 12 for this solution.
Remarks:
1. We mentioned above that the schedule must be flexible enough to “absorb” the disruption before the match-up instant. Flexibility depends on the number of slack periods and their amplitude. A remaining question is: how should the match-up time be defined? In actual situations, there is no algorithm to define this time; the only possible approach is to perform a sequence of trials, as shown in the next section.
2. The match-up method assumes that disruption times are far enough apart to allow a match-up before the next disruption. If this assumption is not valid, we do not recommend using this method.
[Figure 9.10 Repaired schedule (B: breakdown period)]
General Algorithm The idea behind this algorithm is to try to reassign the jobs to the disrupted machines considering successively increasing match-up times. If this first attempt does not succeed, the reassignment of the jobs is tried on all the machines that share operation compatibilities with the disrupted machines. The following notations are used in the general algorithm presented hereafter: • H1 ( T ) and H 2 ( T ) are two heuristics that will be explained in the next subsections. The goal of these heuristics is to reschedule jobs to “absorb” the disruption before match-up time T . • C is the value of the criterion (maximal tardiness or weighted tardiness). • L is a threshold: if C > L the solution is rejected. • Tmin is the minimal value of the match-up time that will be tried. • Tmax is the maximal value of the match-up time that will be tried. • Δ is the increase of the match-up time when the current match-up time did not lead to a solution. The threshold L is given by the user. Its value depends on the quality required for the expected solution: the smaller the value of L , the better the quality of the solution (if any). Tmin could be taken equal to the greatest value among the initial completion times of the jobs affected by a machine disruption. Tmax should be taken large enough to include enough slack to absorb disruptions. Algorithm 9.1. (General algorithm) 1. Define parameters Tmin , Tmax , L and Δ . First stage of the algorithm: 2. Set T = Tmin . 3. Apply H1 ( T
) to each disrupted machine.
9.3 Real-time Assignment with Fixed Previous Assignments
345
4. If a solution is found, keep this solution and stop the algorithm.
5. Otherwise:
5.1. Set T = T + Δ.
5.2. If T < Tmax go to 3, otherwise go to 6.
Second stage of the algorithm:
6. Set T = Tmin.
7. Apply H2(T). It concerns each disrupted machine and all machines that share job compatibilities with it.
8. If a solution is found, keep this solution and stop the algorithm.
9. Otherwise:
9.1. Set T = T + Δ.
9.2. If T < Tmax go to 7, otherwise there is no solution to the match-up problem.
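The two-stage search over increasing match-up times can be sketched as follows. Here `h1`, `h2` and `criterion` are hypothetical callables standing for the heuristics H1(T), H2(T) and the criterion C, which the text defines only abstractly; the names and calling conventions are assumptions for illustration.

```python
def match_up(h1, h2, criterion, t_min, t_max, threshold, step):
    """Sketch of Algorithm 9.1 (two-stage match-up search).

    h1(T) / h2(T) try to repair the schedule for match-up time T and
    return a candidate schedule or None; criterion(s) evaluates C for
    a candidate; threshold is L and step is the increment Delta."""
    # First stage uses h1 (disrupted machines only); second stage uses
    # h2 (disrupted machines plus machines sharing job compatibilities).
    for heuristic in (h1, h2):
        t = t_min
        while t < t_max:
            candidate = heuristic(t)
            # Reject candidates whose criterion exceeds the threshold L.
            if candidate is not None and criterion(candidate) <= threshold:
                return candidate  # feasible repair found: stop
            t += step  # no repair for this match-up time: increase T
    return None  # neither stage matched up before t_max
```

With a stub `h1` that always fails and a stub `h2` that succeeds from T = 12 on, the search exhausts the first stage and returns the first repair found in the second stage.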
Heuristics H1(T) and H2(T)
Several approaches are possible. Among them, an integer linear programming formulation can solve H2(T) for small problem sizes. For H1(T), the easiest and most efficient approach is to reschedule, for each disrupted machine, the jobs that start between the time of disruption and the match-up time T. To reschedule, we apply several priority rules successively and keep the first feasible solution obtained (if any). We take into account the release dates of the jobs and the tools required for performing the jobs. The following rules are recommended: SPT (Rule 1), EDD (Rule 4), Rule 6, Rule 12 and Rule 14 (see Section 9.2.1). In H2(T), we reschedule not only the jobs executed on the disrupted machines, but also the jobs performed on all machines that share job compatibilities with the disrupted machines. Again, we apply priority rules.
9.3 Real-time Assignment with Fixed Previous Assignments

Real-time assignment with frozen previous assignments consists in assigning the set of operations related to a product (or a batch) to a set of resources as soon as the order arrives in the manufacturing system. Previous assignments are frozen: the starting times of previously assigned operations cannot be modified. In the case of a disturbance, previously scheduled products or batches that are not completed are rescheduled one by one in the order of their arrival in the system. The goal is to reassign the operations in real time.
It is important to remember that the real-time concept is relative to the problem under consideration. A decision is made in real time if the period between the instant the data required to make the decision become available and the moment the decision must be taken is greater than the time required to compute the decision. As a consequence, this period varies drastically from case to case.
9.3.1 Problem Formulation

We formulate the first version of the real-time assignment problem under the following assumptions:
• Projects are scheduled in the order they arrive, i.e., first in, first scheduled. A project is a product or a batch to be manufactured. The manufacturing process is given.
• The operation time of each operation lies between two limits. We will show later in this chapter how this assumption can help in tackling the WIP issue.
• There is zero waiting time between two consecutive operations. Such a schedule is called a no-wait schedule.
• The production plan (i.e., the manufacturing process) of a project is unique.
• One resource cannot perform two or more different operations of the same product, whereas two or more identical resources may perform the same operations.
Let us assume that operation i can be performed by any of the identical resources m_i^1, m_i^2, m_i^3, ..., m_i^{K_i}, and that resource m_i^k, k ∈ {1, 2, ..., K_i}, is idle on the periods I_i^k = { [α_{i,q}^k, β_{i,q}^k] }, q = 1, 2, ..., Q_{k,i}. Thus, K_i is the maximum number of identical resources that are able to perform operation i, and Q_{k,i} is the number of idle periods available for operation i on resource m_i^k.
In the case of several resources, we group all the idle periods associated with one set of identical resources and order them as follows. For k1 ≠ k2, where k1, k2 ∈ {1, 2, ..., K_i}, the period [α_{i,q}^{k1}, β_{i,q}^{k1}], q ∈ {1, 2, ..., Q_{k1,i}}, precedes the period [α_{i,r}^{k2}, β_{i,r}^{k2}], r ∈ {1, 2, ..., Q_{k2,i}}, if:
1. α_{i,q}^{k1} < α_{i,r}^{k2}, or
2. α_{i,q}^{k1} = α_{i,r}^{k2} and β_{i,q}^{k1} < β_{i,r}^{k2}.
The sequence of these sorted periods is denoted by [α_i^s, β_i^s] for s = 1, 2, ..., Q_i, where Q_i = Σ_{k=1}^{K_i} Q_{k,i}.
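The ordering rule above is simply a lexicographic sort on (α, β). A minimal sketch, assuming each resource's idle periods are given as (alpha, beta) pairs (the function and argument names are illustrative):

```python
def merged_idle_periods(pools):
    """Merge the idle periods of a set of identical resources into the
    single ordered sequence [alpha_i^s, beta_i^s] used later by
    Algorithm S.  `pools` maps a resource name to its list of
    (alpha, beta) idle periods."""
    merged = [(a, b, res) for res, periods in pools.items() for a, b in periods]
    # Rule 1: sort by alpha; Rule 2: break ties by beta.
    merged.sort(key=lambda w: (w[0], w[1]))
    return merged
```

Since each merged window keeps its resource name, the resource to use is identified directly from the selected window, as the text notes below.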
In Figure 9.11, the operation time is the minimum time required to perform the operation. The extension of the operation time is introduced to make sure that the completion time of the operation is equal to the start time of the next operation; this extension cannot exceed the maximum overstay permitted for the corresponding operation. In the remaining part of this section, we will only use the notation [α_i^s, β_i^s] to denote the period in which operation i could be performed. Since each window is associated with a unique resource, it is easy to identify the resource that will be used. Figure 9.11 illustrates such a schedule.
[Gantt chart: operations O1–O4 with busy periods resulting from previous scheduling, operation times, and extensions of operation times]
Figure 9.11 A no-waiting assignment
9.3.2 Case of a Linear Production

This algorithm deals with linear operation sequences (sequences in which any operation is preceded by at most one operation). The following notations are used in the algorithm:
• s_i: rank of the idle period assigned to the i-th operation in the operation sequence.
• t_i: starting time of operation i.
• θ_i: processing time required to complete operation i.
• δ_i: maximum overstay permitted for operation i on the corresponding resource.
• m: number of operations to be performed in order to complete the product at hand.
• α_i^{s_i}: starting instant of the s_i-th idle period that could be assigned to operation i.
• β_i^{s_i}: finishing instant of the s_i-th idle period.
The following algorithm minimizes the completion time of the product at hand.
Algorithm 9.2. (Algorithm S)
1. Set s_i = 1 for i = 1, 2, ..., m.
2. Set p_1 = α_1^{s_1}.
3. Set p_i = Max( α_i^{s_i}, p_{i−1} + θ_{i−1} ) for i = 2, ..., m.
4. Set p_{m+1} = p_m + θ_m.
5. Set t_{m+1} = p_{m+1}.
6. Set t_i = Max( p_i, t_{i+1} − θ_i − δ_i ) for i = m, m−1, ..., 1.
7. If t_{i+1} ≤ β_i^{s_i} holds for all i = 1, 2, ..., m, stop: the optimum is reached. Else, for each i such that t_{i+1} > β_i^{s_i}, set s_i = s_i + 1 and go to 2.
The complexity of the above algorithm is O(m × n), where n = Σ_{i=1}^{m} Q_i is the total number of idle periods in the linear operation sequence. We propose an application of this algorithm below.
Example
In this example, four operations must be performed in sequence. To simplify the graphical representation, we assume that each operation is performed by only one machine. Thus, the product has to visit machines M1, M2, M3 and M4 in this order. According to the above definitions, θ1 = 3, θ2 = 4, θ3 = 5, θ4 = 5 and δi = 1 for i = 1, 2, 3, 4. The idle periods on the machines are:
Machine M1: [0, 2], [5, 15], [25, +∞].
Machine M2: [10, 15], [25, +∞].
Machine M3: [0, 5], [10, 25], [30, +∞].
Machine M4: [0, 10], [20, 27], [30, +∞].
This situation is represented in Figure 9.12.
[Gantt chart: busy periods of machines M1–M4 over the time axis 0–30]
Figure 9.12 Initial state of the schedule
We now apply Algorithm 9.2 (Algorithm S).
First Iteration
1. We set s1 = s2 = s3 = s4 = 1. This means that we first try to assign the operations to the first idle periods of the corresponding machines.
2. Building the forward sequence:
p1 = 0, p2 = Max(10, 0 + 3) = 10, p3 = Max(0, 10 + 4) = 14,
p4 = Max(0, 14 + 5) = 19, p5 = 19 + 5 = 24.
3. Building the backward sequence:
t5 = 24, t4 = Max(19, 24 − 5 − 1) = 19, t3 = Max(14, 19 − 5 − 1) = 14,
t2 = Max(10, 14 − 4 − 1) = 10, t1 = Max(0, 10 − 3 − 1) = 6.
4. Test. This step consists in checking whether the solution is feasible. If it is feasible, it is optimal. If it is not, the test identifies the operations we should try to assign to the next idle window:
t2 = 10 > β1^1 = 2, thus s1 = s1 + 1 = 2;
t3 = 14 < β2^1 = 15, thus s2 = 1;
t4 = 19 > β3^1 = 5, thus s3 = s3 + 1 = 2;
t5 = 24 > β4^1 = 10, thus s4 = s4 + 1 = 2.
At least one idle window is not suitable for performing the related operation. Thus, we have to restart the computation with the new idle windows.
Second Iteration
1. We set s1 = s3 = s4 = 2 and s2 = 1. This means that we try to assign the operations to the second idle periods of the corresponding machines, except the second operation, which we still try to assign to the first idle period of its machine.
2. Building the forward sequence:
p1 = 5, p2 = Max(10, 5 + 3) = 10, p3 = Max(10, 10 + 4) = 14,
p4 = Max(20, 14 + 5) = 20, p5 = 20 + 5 = 25.
3. Building the backward sequence:
t5 = 25, t4 = Max(20, 25 − 5 − 1) = 20, t3 = Max(14, 20 − 5 − 1) = 14,
t2 = Max(10, 14 − 4 − 1) = 10, t1 = Max(5, 10 − 3 − 1) = 6.
4. Test:
t2 = 10 < β1^2 = 15, thus s1 = 2;
t3 = 14 < β2^1 = 15, thus s2 = 1;
t4 = 20 < β3^2 = 25, thus s3 = 2;
t5 = 25 < β4^2 = 27, thus s4 = 2.
[Gantt chart: solution schedule on machines M1–M4 over 0–30, showing busy periods, operation times and extensions of the operation times]
Figure 9.13 Solution
9.3 Real-time Assignment with Fixed Previous Assignments
351
All the tests are positive, i.e., all the idle windows are suitable for performing the related operations. Furthermore, t_i is the start time and t_{i+1} the completion time of the i-th operation. The solution is represented in Figure 9.13. Thus, if the release date of the product is zero, its optimal makespan is 25.
Remark:
For each resource, the upper limit of the last window is always +∞ . As a consequence, applying Algorithm 9.2 always leads to a solution.
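Algorithm S can be sketched as a direct transcription of steps 1–7, assuming the idle windows of each operation are given in sorted order and the last window of each operation is unbounded (the remark above guarantees termination under that assumption). Names are illustrative.

```python
def algorithm_s(theta, delta, windows):
    """Sketch of Algorithm 9.2 (Algorithm S) for a linear sequence.

    theta[i]: processing time of operation i; delta[i]: maximum
    overstay; windows[i]: sorted list of (alpha, beta) idle periods.
    Returns the start times t_1..t_m and the makespan t_{m+1}."""
    m = len(theta)
    s = [0] * m  # 0-based rank of the idle window tried per operation
    while True:
        # Forward sweep (steps 2-4): earliest anchor times p_i.
        p = [0] * (m + 1)
        p[0] = windows[0][s[0]][0]
        for i in range(1, m):
            p[i] = max(windows[i][s[i]][0], p[i - 1] + theta[i - 1])
        p[m] = p[m - 1] + theta[m - 1]
        # Backward sweep (steps 5-6): latest no-wait start times.
        t = [0] * (m + 1)
        t[m] = p[m]
        for i in range(m - 1, -1, -1):
            t[i] = max(p[i], t[i + 1] - theta[i] - delta[i])
        # Test (step 7): each operation must finish inside its window.
        ok = True
        for i in range(m):
            if t[i + 1] > windows[i][s[i]][1]:
                s[i] += 1  # try the next idle window of operation i
                ok = False
        if ok:
            return t[:m], t[m]  # start times and makespan
```

Applied to the four-machine example above, the sketch reproduces the two iterations of the text: start times (6, 10, 14, 20) and makespan 25.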
9.3.3 Control of the Production Cycle

We are interested in managing only the storage duration of each product at the end of each operation. We denote by δ_i the maximal overstay permitted for operation i. We assume that the product spends the overstay period at the exit of the corresponding machine in order not to obstruct it. Thus, the maximal overstay period is the maximal storage time at the end of the corresponding operation.
We fix the maximum storage duration at the exit of each operation. We just have to add a "storage resource" at the end of each operation (i.e., at the exit of each pool of identical machines that are able to perform the same operations). These storage resources are supposed to be totally idle each time a new order appears in the production system. At this point, the previous algorithm applies. The time interval associated with operation i is reduced to the lower value θ_i. The time interval associated with the storage "operation" at the end of operation i is [0, δ_i].
To make it simpler, we apply the previous algorithms using [θ_i, θ_i + δ_i] as the operation time interval for operation i. In the resulting schedule, the operation time of a given product on i is θ_i + u_i, where u_i ∈ [0, δ_i]. We then just reduce the operation time to θ_i, and u_i is the storage period of the product at the end of operation i. For instance, in the example of Figure 9.13, we just remove the hatched bars to obtain the idle periods of the machines. For machine M1, the idle periods become [0, 2], [5, 6], [9, 15], [25, +∞].
The smaller the mean maximal storage time, the lower the average production cycle, but the more constrained the schedule: on average, the starting time of the schedule increases when the mean maximal storage time decreases. In other words, the smaller the mean maximal storage time selected, the lower the mean working time of the resources.
The goal of the manager is to find the best tradeoff between the WIP (work-in-progress), which is connected to the production cycles, and the productivity of the system. It is well known that productivity and production cycle increase on average as the WIP level increases, but the proposed approach
establishes the link between these two characteristics of the production dynamics. Finally, it appears that the set of parameters δ_i is an indirect measure of productivity, WIP level and production cycle: these three criteria are functions of this set of parameters. The only problem is that simulation software is required for each supply chain in order to express these functions, but having such software drastically reduces the diversity of measures, which is important to better control the system. Note again that the proposed approach does not limit the number of products waiting at the end of the operations, but only the storage times of the products. The production cycle of any product is upper bounded by Σ_{i=1}^{m} (θ_i + δ_i), where m is the number of operations.
Example
Four machines denoted by M 1 , M 2 , M 3 and M 4 are visited in this order by a sequence of products. The time spent on a machine is the value taken by a random variable uniformly distributed on [ 4, 20 ] , whatever the machine or the product. Thus, the mean operation time for any operation is 12. We define a period T and a probability p . At the end of each period T , we assume that a product arrives at the entrance of the system with the probability p . The probability p is defined in order not to overload the system. Thus, we choose p < T / 12 . In the simulation presented hereafter, T = 1 and thus p < 0.0833 . We choose p = 0.082 in both examples.
[Plot: percentage of busy time versus Delta; curves "controlling production cycle" and "controlling production cycle and WIP"]
Figure 9.14 Percentage of busy time versus Delta
We use the same maximal overstay permitted (called Delta) for all operations. We simulate the system over 30 000 units of time for Delta varying from 0.5 to 10. The results are represented in Figure 9.14, curve “controlling production cycle”. The continuous line joins the points representing the percentage of busy time obtained by simulation for each value of Delta and for the set of machines. As we can see, the percentage of the busy time of each machine increases with Delta.
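The arrival process of this simulation (at the end of each period T, one product arrives with probability p) can be sketched as follows; the function name and signature are illustrative, with the default values taken from the example (T = 1, p = 0.082 < T/12).

```python
import random

def arrivals(horizon, period=1.0, p=0.082, seed=0):
    """Arrival instants of products over the simulation horizon: at the
    end of each period T a product enters the system with probability
    p.  Choosing p < T/12 avoids overloading the four-machine line,
    whose mean operation time is 12."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    times = []
    t = period
    while t <= horizon:
        if rng.random() < p:
            times.append(t)
        t += period
    return times
```

Over 30 000 units of time this yields on the order of 30 000 × 0.082 ≈ 2460 arrivals, i.e., roughly one product every 12 time units on average.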
9.3.4 Control of the Production Cycle and the WIP

We are interested in managing both the storage duration and the maximal number of parts that are simultaneously stored at the end of each operation. This approach allows both the maximum WIP level and the production cycle to be controlled. In this case, we add as many "storage resources" as the number of WIP units allowed at the exit of an operation. Another difference from the previous approach is that we keep the busy periods of these "storage resources". In other words, the "storage idle windows" are managed in exactly the same way as the idle windows of the corresponding operation.
In this approach, productivity, WIP level and production cycle are functions of two sets of parameters: the parameters δ_i and the numbers of storage resources at the exits of the operations. In other words, each operation i is replaced by two operations:
• An operation i1 to which a fixed operation time θ_i is assigned (i.e., the operation time of i1 takes its value in the interval [θ_i, θ_i]). The identical resources assigned to operation i1 are the same as those assigned to operation i.
• An operation i2 whose operation time takes its value in [0, δ_i]. The identical resources assigned to this operation are the storage resources assigned to i.
As the reader can see, the size of the problem is approximately twice as large in this approach as in the previous one, where only the production cycle was at stake.
Example
We consider the example of the previous subsection and we assume that one storage resource is available at the exit of each machine. Thus, as in the previous subsection, the production cycle is upper bounded by the sum of the manufacturing times plus four times Delta (one Delta for each operation). Furthermore, the WIP is upper bounded by 8 (4 products in process and 4 products waiting at the exit of the resources). The result is represented in Figure 9.14 (curve "controlling production cycle and WIP").
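The doubling of operations described above is a simple data transformation, sketched below. The dictionary keys (`theta`, `delta`, `machines`, `buffers`) are illustrative names, not from the text.

```python
def split_operations(ops):
    """Replace each operation i by i1 and i2 as in Section 9.3.4:
    i1 has the fixed interval [theta, theta] on the machine pool,
    i2 has the interval [0, delta] on the storage resources."""
    doubled = []
    for op in ops:
        doubled.append({"interval": (op["theta"], op["theta"]),
                        "resources": op["machines"]})   # operation i1
        doubled.append({"interval": (0, op["delta"]),
                        "resources": op["buffers"]})    # storage step i2
    return doubled
```

The doubled list is then fed to the same assignment algorithm, which explains why the problem size roughly doubles compared with Section 9.3.3.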
9.3.5 Assembly Systems

The idea of the algorithm is to decompose each assembly process into linear processes and to iteratively adjust the solution in order to reach a situation where each assembly operation, which appears in several linear manufacturing processes, is performed during the same period in all of them.
Decomposition of an Assembly Process
Let us consider the assembly process represented in Figure 9.15. The assembly operations are gray tinted. If we consider a sequence of operations joining a leaf of the tree to the root, we obtain a linear process. For instance, the assembly process represented in Figure 9.15 can be decomposed into six linear processes, as shown in Figure 9.16.
[Assembly tree over operations A–O; assembly operations shaded]
Figure 9.15 Assembly process
[Six linear processes, each joining a leaf of the tree in Figure 9.15 to the root O]
Figure 9.16 Decomposition into linear processes
Indeed, a given assembly operation appears in at least two of the linear processes. If we schedule these linear processes independently of each other, there is no reason for the same assembly operation, belonging to different linear processes, to be performed during the same period in each of them. In the next subsection, we show how to adjust these periods.
Real-time Assignment Algorithm for Assembly Processes
We apply Algorithm S presented in Section 9.3.2 to each linear process. In the solution, three cases are possible for each assembly operation:
1. The operation is performed in different idle windows depending upon the linear process. In this case, we assign the highest rank of these windows to the assembly operation. In other words, if i is the assembly operation under consideration, we assign to s_i the highest rank of the windows where this operation is performed over all the linear processes containing it, and we restart the computation for each linear process.
2. The operation is performed in the same window for all the linear processes, but the starting times of the operation differ. In this case, the rank of this window is assigned to the assembly operation, but the lower bound of the window is provisionally replaced by the latest starting time. We then restart the computation for each linear process.
3. The operation is performed in the same window and the starting time is the same whatever the linear process. In this case, we just assign the rank of this window to the assembly operation.
If all the assembly operations are in the third case, the algorithm stops. Otherwise, we restart the algorithm keeping the assigned windows as initial windows. It has been proven that this algorithm minimizes the makespan (i.e., the completion time).
Example
We consider the assembly process given in Figure 9.17, in which the values written near the operation IDs are the operation times. The initial state of the system is given in Figure 9.18.
[Assembly tree: operations A and B feed C; operations C and D feed E (the root). Operation times: A = 4, B = 6, C = 4, D = 5, E = 6]
Figure 9.17 Assembly process
[Gantt chart: busy periods of the machines performing operations A–E over the time axis 0–30]
Figure 9.18 Initial state
The black bars of Figure 9.18 represent the busy periods of the machines (one machine is assigned to each operation). Three linear processes are derived from the decomposition of the assembly process:
• linear process 1: A, C, E;
• linear process 2: B, C, E;
• linear process 3: D, E.
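The decomposition of an assembly tree into its linear processes can be sketched as a depth-first traversal; for the tree of Figure 9.17 (A and B feed C; C and D feed E) it yields exactly the three processes listed above. The mapping-based representation is an assumption for illustration.

```python
def linear_processes(children, root):
    """Decompose an assembly tree into its root-to-leaf linear
    processes, each listed leaf first and root last.  `children` maps
    an assembly operation to the operations that feed it."""
    chains = []

    def walk(node, suffix):
        feeds = children.get(node, [])
        if not feeds:  # leaf reached: one linear process is complete
            chains.append([node] + suffix)
        for child in feeds:
            walk(child, [node] + suffix)

    walk(root, [])
    return chains
```

Each chain is then scheduled independently with Algorithm S, after which the iterative adjustment of the assembly operations begins.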
In the notations introduced hereafter, the lower indices denote the operations, and thus the machines performing these operations. The upper indices denote the linear processes. The upper index disappears when the variable concerns all the linear processes.
First Iteration
We start with the first idle period on each machine: s_A = s_B = s_C = s_D = s_E = 1.
Processing of Linear Process 1
We start with the first windows: s_A^1 = s_C^1 = s_E^1 = 1. We apply Algorithm S. The starting times of the operations are: t_A^1 = 5, t_C^1 = 9, t_E^1 = 13. The related idle windows are: s_A^1 = 1, s_C^1 = 2, s_E^1 = 2.
Processing of Linear Process 2
We start with the first windows: s_B^2 = s_C^2 = s_E^2 = 1. We apply Algorithm S. The starting times of the operations are: t_B^2 = 4, t_C^2 = 10, t_E^2 = 14.
The related idle windows are: s_B^2 = 1, s_C^2 = 2, s_E^2 = 2.
Processing of Linear Process 3
We start with the first windows: s_D^3 = s_E^3 = 1. We apply Algorithm S. The starting times of the operations are: t_D^3 = 17, t_E^3 = 22. The related idle windows are: s_D^3 = 1, s_E^3 = 3.
Applying the rules proposed above:
• We provisionally change the lower bound of the second window of C, which becomes equal to 10.
• We restart the computation with s_C = 2.
• We restart the computation with s_E = 3.
Second Iteration
We start with the first idle period on each machine: s_A = s_B = s_D = 1, s_C = 2 and s_E = 3. The lower bound of the window of rank s_C is 10.
Processing of Linear Process 1
We start with the windows: s_A^1 = 1, s_C^1 = 2, s_E^1 = 3. We apply Algorithm S. The starting times of the operations are: t_A^1 = 21, t_C^1 = 25, t_E^1 = 29. The related idle windows are: s_A^1 = 2, s_C^1 = 3, s_E^1 = 3.
Processing of Linear Process 2
We start with the windows: s_B^2 = 1, s_C^2 = 2, s_E^2 = 3. We apply Algorithm S. The starting times of the operations are: t_B^2 = 19, t_C^2 = 25, t_E^2 = 29. The related idle windows are: s_B^2 = 2, s_C^2 = 3, s_E^2 = 3.
Processing of Linear Process 3
We start with the first windows: s_D^3 = s_E^3 = 1.
We apply Algorithm S. The starting times of the operations are: t_D^3 = 17, t_E^3 = 22. The related idle windows are: s_D^3 = 1, s_E^3 = 3.
At the end of this second iteration, the starting time of operation C is the same in linear processes 1 and 2, but the starting time of operation E is greater in linear processes 1 and 2 than in linear process 3. We thus conduct a third iteration after provisionally increasing the lower limit of the third idle window of E to 29. We finally obtain the optimal solution represented in Figure 9.19.
[Gantt chart: optimal schedule of operations A–E over the time axis 0–30]
Figure 9.19 The optimal solution
Finally, we can say that the real-time assignment problem is solved, assuming that:
• If several machines are available to perform a given operation, they are identical in terms of operation times.
• The operation times are deterministic.
The first constraint can be relaxed at the expense of additional computation, which increases the complexity of the algorithm. The second constraint is not really a constraint since, for each operation, the value of δ can be selected such that [θ, θ + δ] encapsulates the stochastic operation time (at least with a high probability), at the expense of productivity. If an urgent demand appears, it is possible to launch the algorithm again after:
• Freezing the schedule of the operations that have been completed or that are in progress.
• Launching the urgent demand first.
• Launching the partial products that have not been completed in the order of arrival of the corresponding demands.
Since the algorithm is very fast, this action does not slow down the manufacturing activity.
9.4 Real-time Assignment with Possible Limited Adjustment of Previous Assignments

In this case, the starting times of the previously assigned operations can be slightly modified in order to allow a late assignment of a new operation. This may be necessary when an urgent and unexpected job appears in the system. As a consequence, it is not possible to guarantee the delivery of a product at a precise point in time, even in the absence of breakdowns. This kind of real-time assignment algorithm is not frequently used; we introduced it for the control of a battlefield radar.
9.4.1 Setting the Problem

We restrict ourselves to the case of a unique resource performing unrelated operations. A schedule of n operations is given on this resource. The processing time of operation i ∈ {1, ..., n} is denoted by θ_i and its starting time by μ_i. Operation times are deterministic and μ_i + θ_i ≤ μ_{i+1} for i ∈ {1, ..., n−1}. The due date of operation i is denoted by d_i. In the rest of this section, we use two more notations, Δ_i and f_i, for i ∈ {1, ..., n}, defined as follows:
• Δ_i = μ_{i+1} − (μ_i + θ_i) is the period between the completion time of operation i and the start time of operation i+1. In other words, Δ_i is the amount of time operation i can be postponed without overlapping operation i+1.
• f_i = [d_i − (μ_i + θ_i)]+, where [x]+ = Max(0, x), is the amount of time operation i can be postponed without increasing its delay.
Figure 9.20 illustrates these definitions.
[Timeline: operations i and i+1 with start times μ_i, μ_{i+1}, processing times θ_i, θ_{i+1} and due dates d_i, d_{i+1}; the slacks f_i and Δ_i are shown, with f_{i+1} = 0]
Figure 9.20 Illustration of the definitions
Assume that an unexpected operation arrives in the system at time 0 and that its duration θ* is known at the time of its arrival. This operation is urgent and must be completed by time D*. If there exists an idle period [μ_i + θ_i, μ_{i+1}] such that Δ_i = μ_{i+1} − (μ_i + θ_i) ≥ θ* and μ_{i+1} ≤ D*, then a possible start time of the unexpected operation is μ_i + θ_i; otherwise, one or more operations that are already scheduled must be postponed.
The criterion applied to the n operations that have been previously scheduled is:

C_n = Σ_{i=1}^{n} [ μ_i + θ_i − d_i ]+   (9.1)

C_n is the sum of the delays of the operations that have already been scheduled. The goal is to insert the new operation into an idle period of the existing schedule so as to minimize the increase of Criterion 9.1. In other words, the goal is to insert the unexpected operation into the existing schedule while preserving the quality of this schedule as much as possible. We start by giving some basic properties of the problem.
9.4.2 Basic Relations

We first evaluate the increase of (9.1) as a function of the duration θ* of the unexpected operation when this operation starts at μ_i + θ_i, the completion time of operation i. The meaning of the lower index of Z is given in (9.2).
As mentioned before, if Δ_i = μ_{i+1} − (μ_i + θ_i) ≥ θ*, then the increase of (9.1) is Z_{−1}(θ*) = 0.
If Δ_i < θ* ≤ Δ_i + Δ_{i+1}, then only operation i+1 is postponed and its start time becomes θ* + μ_i + θ_i. If operation i+1 is already late with respect to its due date, that is, if f_{i+1} = 0, then the increase of C_n is θ* − Δ_i; otherwise this increase is θ* − Δ_i − f_{i+1}. In brief, if Δ_i < θ* ≤ Δ_i + Δ_{i+1}, C_n increases by Z_0(θ*) = [θ* − Δ_i − f_{i+1}]+.
Let us now consider the case Δ_i + Δ_{i+1} < θ* ≤ Δ_i + Δ_{i+1} + Δ_{i+2}. Both operations i+1 and i+2 are delayed. The contribution of operation i+1 to the increase of C_n is still [θ* − Δ_i − f_{i+1}]+ and the contribution of operation i+2 is [θ* − Δ_i − Δ_{i+1} − f_{i+2}]+. Thus, Z_1(θ*) = [θ* − Δ_i − f_{i+1}]+ + [θ* − Δ_i − Δ_{i+1} − f_{i+2}]+.
More generally, if Σ_{k=0}^{p} Δ_{i+k} < θ* ≤ Σ_{k=0}^{p+1} Δ_{i+k}, then the increase of C_n is:

Z_p(θ*) = Σ_{j=0}^{p} [ θ* − Σ_{r=0}^{j} Δ_{i+r} − f_{i+j+1} ]+   for p = 0, 1, 2, 3, ...   (9.2)

and Z_{−1}(θ*) = 0 if Δ_i = μ_{i+1} − (μ_i + θ_i) ≥ θ*.
Example
Consider the schedule represented in Figure 9.21.
[Timeline: operations i, i+1, i+2 and i+3 scheduled on the resource, with the intervening idle periods and the due dates d_{i+1}, d_{i+2}, d_{i+3}]
Figure 9.21 Initial schedule
The due dates of operations i+1, i+2, i+3 are, respectively, d_{i+1} = 17, d_{i+2} = 28 and d_{i+3} = 35. Assume that the unexpected operation starts when operation i is completed. According to Relation 9.2:
• If θ* ∈ [0, 6], then Z_{−1}^i(θ*) = 0.
• If θ* ∈ (6, 11], then Z_0^i(θ*) = [θ* − 6]+ = θ* − 6.
• If θ* ∈ (11, 13], then Z_1^i(θ*) = θ* − 6 + [θ* − 11 − 1]+, i.e., Z_1^i(θ*) = θ* − 6 if θ* ∈ (11, 12] and Z_1^i(θ*) = 2θ* − 18 if θ* ∈ (12, 13].
• If θ* > 13, then Z_2^i(θ*) = 2θ* − 18 + [θ* − 13 − 3]+, i.e., Z_2^i(θ*) = 2θ* − 18 if θ* ∈ (13, 16] and Z_2^i(θ*) = 3θ* − 34 if θ* > 16.
This function is represented in Figure 9.22, curve 1.
[Plot: criterion increase versus the operation time θ*; curve 1 corresponds to a start after operation i, curve 2 to a start after operation i+1]
Figure 9.22 Criterion increase versus operation time
Assume now that the unexpected operation starts when operation i+1 is completed. According to Relation 9.2:
• If θ* ∈ [0, 5], then Z_{−1}^{i+1}(θ*) = 0.
• If θ* ∈ (5, 7], then Z_0^{i+1}(θ*) = [θ* − 5 − 1]+, i.e., Z_0^{i+1}(θ*) = 0 if θ* ∈ (5, 6] and Z_0^{i+1}(θ*) = θ* − 6 if θ* ∈ (6, 7].
• If θ* > 7, then Z_1^{i+1}(θ*) = θ* − 6 + [θ* − 7 − 3]+, i.e., Z_1^{i+1}(θ*) = θ* − 6 if θ* ∈ (7, 10] and Z_1^{i+1}(θ*) = 2θ* − 16 if θ* > 10.
This function is represented in Figure 9.22, curve 2.
Assume that an unexpected operation arises at time 0, its due date is D* = 40 and its operation time is θ* = 20. This operation can start either when i is completed or when i+1 is completed. Since Z_2^i(20) = 26 > Z_1^{i+1}(20) = 24, this operation will optimally start as soon as operation i+1 is completed.
Assume now that θ* = 16. In this case too, the unexpected operation can start either when i or when i+1 is completed. Since Z_2^i(16) = 14 < Z_1^{i+1}(16) = 16, the unexpected operation will optimally start as soon as operation i is completed.
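Relation 9.2 can be evaluated directly: each postponed operation i+j+1 contributes [θ* − Σ_{r≤j} Δ_{i+r} − f_{i+j+1}]+, and terms for operations that are not pushed vanish through the positive part. A sketch, where `deltas` lists Δ_i, Δ_{i+1}, ... and `slacks` lists f_{i+1}, f_{i+2}, ... (enough downstream operations are assumed to be supplied):

```python
def criterion_increase(theta_star, deltas, slacks):
    """Increase of criterion C_n (Relation 9.2) when an unexpected
    operation of duration theta_star starts at the completion time of
    operation i.  deltas[j] is Delta_{i+j}; slacks[j] is f_{i+j+1}."""
    total = 0
    cum = 0  # running sum Delta_i + ... + Delta_{i+j}
    for d, f in zip(deltas, slacks):
        cum += d
        # Operation i+j+1 is pushed by (theta_star - cum); its delay
        # grows only once its slack f is consumed.
        total += max(0, theta_star - cum - f)
    return total
```

On the example above (curve 1: Δ = 6, 5, 2 and f = 0, 1, 3; curve 2: Δ = 5, 2 and f = 1, 3), the sketch reproduces Z_2^i(20) = 26, Z_1^{i+1}(20) = 24, Z_2^i(16) = 14 and Z_1^{i+1}(16) = 16.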
9.4.3 Real-time Algorithm in the Case of Adjustment

The following algorithm is twofold: the off-line preparation of the real-time decision and the on-line decision process. The period R mentioned in the off-line preparation depends on the problem at hand. This period should be short enough that the probability of more than one unexpected event arising during it is negligible.
Algorithm 9.3.
1. Off-line preparation. At each period R, or as soon as a new operation has been integrated in the schedule:
1.1. Compute the new limits of the idle periods. We denote by t the time at which the adjustment is made.
1.2. For each idle period, compute the increase of Criterion 9.1 as a function of the operation time θ* using (9.2). Note that, in practice, only the idle periods that start before t + W are considered, where W is evaluated based on the characteristics of the problem. Note also that each function corresponds to an idle period. We denote by M the number of idle periods under consideration.
1.3. For each pair of functions, compute the operation times for which the functions take the same value.
1.4. Arrange these operation times in ascending order. Let θ_1* < ... < θ_K* be these ordered abscissas.
1.5. In each interval I_k = [θ_k*, θ_{k+1}*], k = 1, ..., K−1, arrange the functions in increasing order of their values. Since each function corresponds to an idle period, we obtain, in each interval I_k, the list of the idle periods arranged from the best to the worst (in terms of the increase of the criterion). Let P_1^k, ..., P_M^k be the ordered list of the idle periods for interval I_k.
2. On-line decision. When an unexpected operation arises, we know its due date D* and its duration θ*. Then:
2.1. Identify the idle period im having the smallest lower limit greater than or equal to t.
2.2. Identify the idle period am having the greatest lower limit less than or equal to D* − θ*. Note that 2.1 and 2.2 are possible only if D* − θ* ≥ t; otherwise the problem has no solution.
2.3. Identify the interval I_z = [θ_z*, θ_{z+1}*] such that θ* ∈ I_z.
2.4. Let P_r^z be the first idle period of the ordered list P_1^z, …, P_M^z that belongs to {i_m, …, a_m}.
2.5. Start the unexpected operation at the beginning of idle period P_r^z and modify the start times of the next operations accordingly if necessary.
This algorithm can make decisions in real time since an off-line computation is made either R units of time after the previous computation or immediately after introducing an unexpected operation in the schedule.
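The on-line decision (Steps 2.1–2.5) can be sketched in code. This is an illustrative implementation, not the authors' code; it assumes the off-line phase has produced the sorted lower limits of the idle periods, the ordered abscissas θ_1* < … < θ_K*, and, for each interval, a best-to-worst preference list of idle-period indices (all names are assumptions):

```python
import bisect

def online_decision(idle_starts, breakpoints, pref_lists, t, due_date, theta):
    """On-line part of Algorithm 9.3 (sketch).
    idle_starts: sorted lower limits of the idle periods, indexed 0..M-1;
    breakpoints: sorted abscissas theta*_1 < ... < theta*_K;
    pref_lists[z]: idle-period indices for interval I_z, best first."""
    if due_date - theta < t:
        return None  # Steps 2.1/2.2 infeasible: the problem has no solution
    # 2.1: first idle period whose lower limit is >= t
    i_m = bisect.bisect_left(idle_starts, t)
    # 2.2: last idle period whose lower limit is <= D* - theta*
    a_m = bisect.bisect_right(idle_starts, due_date - theta) - 1
    if i_m > a_m:
        return None
    # 2.3: interval I_z containing theta*
    z = min(max(bisect.bisect_right(breakpoints, theta) - 1, 0),
            len(pref_lists) - 1)
    # 2.4: best-ranked idle period lying in {i_m, ..., a_m}
    feasible = set(range(i_m, a_m + 1))
    for p in pref_lists[z]:
        if p in feasible:
            return p
    return None
```

For instance, with idle periods starting at times 2, 5 and 9, breakpoints [1, 3, 6] and preference lists [[2, 0, 1], [1, 2, 0]], an unexpected operation with t = 3, D* = 10 and θ* = 2 is assigned to idle period 1.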
9.4.4 Case of a Linear Production

Let us assume that an unexpected product has to be completed by time D* after visiting machines 1, …, m. We denote by θ_i* the time spent by the product on machine i. The goal is still to minimize the increase of the sum of the delays of the products that have already been scheduled. The problem is much more complicated than the previous one, since postponing an operation on a machine may lead to postponing operations on other machines. Such a problem cannot be solved optimally in real time, since meeting the real-time constraint precludes rescheduling the whole set of operations. The following heuristic algorithm leads to satisfactory results.

9.4.4.1 Step 1: Compute a Deadline for Each Operation
We consider the horizon D* and we denote by D_1*, D_2*, …, D_{m−1}*, D_m* = D* the deadlines we have to compute for operations 1, …, m, respectively. Let bs_i be the sum of the busy periods of machine i until horizon D*. We define the deadlines of the operations using Relation 9.3:

bs_i / ( Σ_{k=1}^{m} bs_k ) = ( D_i* − D_{i−1}* ) / D*   for i = 1, …, m   (9.3)

where D_0* = t, the current point in time. Note that the indices i of the D_i* reflect the order of the operations in the manufacturing process. Equation 9.3 is such that the busier the machine, the greater the interval in which to select an idle period to perform the operation assigned to this machine.
9.4.4.2 Step 2: Assign the Operations to Idle Periods
We apply to each operation the same approach as for the case of a unique resource. As explained before, we may have to postpone some of the operations of the existing schedule which, in turn, may lead to postponing operations related to the same product on other machines. To make the explanation clear, let us consider the case of two machines M1 and M2 on which three products P1, P2 and P3 are already scheduled, as shown in Figure 9.23. The manufacturing processes of these products are as follows (the machines are listed in the order they are visited by the product, and the numbers between parentheses are the operation times):
• Product P1: M1 (12), M2 (9).
• Product P2: M2 (3), M1 (4).
• Product P3: M2 (10), M1 (7).

Assume that an unexpected requirement arises at time 0 for a product P* characterized by the following manufacturing process:

• Product P*: M2 (4), M1 (5).

The due date of P*, which cannot be violated, is D* = 25.
Figure 9.23 The initial schedule (products P1, P2 and P3 on machines M1 and M2; time axis from 0 to 30)
Since P* starts on machine M2, we have to establish the due date of the operation on this machine. We first compute bs_1 = 25 − 3 = 22 and bs_2 = 25 − 2 − 4 = 19. According to (9.3), 19/41 = D_2*/25, which leads to D_2* = 11.58.
Finally, the operation of P* performed on M2 should be completed by time D_2*, and the operation performed on M1 should start after D_2* and finish by time D_1* = D* = 25.
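A minimal sketch of this deadline computation (illustrative code, not from the book, assuming the current time t = 0 as in the example; the busy sums are given in the order the machines are visited):

```python
def deadlines(busy_sums, due_date, t=0.0):
    """Equation 9.3: split the horizon D* among the operations in
    proportion to the busy time of the machine each one visits."""
    total = sum(busy_sums)
    d = [t]  # D*_0 = t
    for bs in busy_sums:
        d.append(d[-1] + (bs / total) * due_date)
    return d[1:]  # D*_1, ..., D*_m

# Worked example from the text: P* visits M2 (bs = 19) then M1 (bs = 22), D* = 25.
d2, d1 = deadlines([19, 22], 25)
# d2 = 19/41 * 25 ≈ 11.585 and d1 = 25, matching D2* = 11.58 and D1* = 25
```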
As we can see in Figure 9.24, assigning the operation performed on P* by M2 requires postponing operations performed on P2 and P3 which, in turn, requires postponing the operations performed on P2 and P3 by M1; this provides the room to insert the operation performed on P* by M1.

Figure 9.24 Schedule after inserting P* (products P1, P2, P3 and P* on machines M1 and M2; time axis from 0 to 30)
To summarize, assigning an operation to a machine may modify not only the start times of the operations already assigned to the same machine, but also the start times of operations assigned to other machines. A heuristic approach can be summarized in the following algorithm.

Algorithm 9.4.

1. Compute a deadline for each operation.
2. Assign the operations to the machines, without taking into account the precedence constraints over operations related to the same product. At this step, we apply the algorithm introduced to solve optimally the case of a unique resource.
3. Consider the overlaps between operations. Two types of overlaps are considered:
– The overlap of operations related to different products on the same machine. This situation does not happen at the beginning of Step 3 since Step 2 takes care of this kind of situation.
– The overlap between consecutive operations of the same product. An overlap of this type concerns operations performed on different machines.
We arrange the overlaps in increasing order of the start time of the operation that starts last (first type of overlap) or of the start time of the operation that is the successor of the other operation (second type of overlap).
4. Consider the first overlap of the list and postpone the operation whose start time has been used to order the overlaps until the overlap disappears.
5. If overlaps remain, return to Step 3; otherwise, stop.
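The overlap-repair loop of Steps 3–5 can be sketched as follows. This is an illustrative implementation under simplifying assumptions (each product visits a machine at most once, matching the limitation stated in the conclusion of this chapter); the data representation and names are not from the book:

```python
def find_overlaps(ops, seq):
    """Detect the two overlap types of Algorithm 9.4.
    ops: list of dicts {prod, mach, start, dur};
    seq[prod]: ordered list of machines visited by the product
    (each machine appears at most once per product).
    Returns (sort key, operation to postpone, new start) triples."""
    found = []
    for a in ops:
        for b in ops:
            if a is b:
                continue
            end_a = a['start'] + a['dur']
            # Type 1: different products overlapping on the same machine;
            # the operation that starts last is the one to postpone.
            if (a['prod'] != b['prod'] and a['mach'] == b['mach']
                    and a['start'] <= b['start'] < end_a):
                found.append((b['start'], b, end_a))
            # Type 2: the successor operation of the same product starts
            # before its predecessor (on another machine) finishes.
            if (a['prod'] == b['prod']
                    and seq[a['prod']].index(a['mach']) + 1
                        == seq[a['prod']].index(b['mach'])
                    and b['start'] < end_a):
                found.append((b['start'], b, end_a))
    return sorted(found, key=lambda x: x[0])

def repair(ops, seq):
    """Steps 3-5: repeatedly postpone the first overlapping operation."""
    while True:
        overlaps = find_overlaps(ops, seq)
        if not overlaps:
            return ops
        _, op, new_start = overlaps[0]
        op['start'] = new_start
```

On a toy instance where P1 visits M1 then M2 and P2 visits only M2, the loop first pushes P1's second operation after its first, then pushes P2's operation out of the way.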
Note that this heuristic does not necessarily lead to an optimal solution. Furthermore, if the system is overloaded, the computation time may be considerable.
9.5 Conclusion

Since reactivity is becoming the pivotal factor for competitiveness, static scheduling is slowly vanishing from industrial environments. It is being replaced by dynamic scheduling and real-time assignment approaches that are able to provide an optimal or near-optimal solution in real time, that is to say without delaying the physical process. Priority or dispatching rules are one of the dynamic scheduling approaches available. A dispatching rule is applied each time a scheduling decision has to be made, that is, when several products are ready to start their next operation on a resource that becomes idle. Several dispatching rules have been explained in this chapter. They are usually easy to apply and meet the real-time requirement. The drawback, when using a dispatching rule, is that unexpected side effects may penalize the resulting schedule. It is strongly recommended to test such a rule by simulation, keeping in mind that the results of the simulation are valid only as long as the characteristics of the manufacturing system and the customers' requirements remain steady. Other components of the dynamic scheduling approaches are the so-called repair-based approaches. Roughly speaking, such an approach consists in building a schedule at the beginning of a working period and adjusting it to keep it feasible and as close as possible to optimal when an unexpected event (breakdown, unforeseen demand, etc.) makes the initial schedule unfeasible. Several strategies have been mentioned in this chapter. Real-time assignment techniques are based on another philosophy. Their goal is to schedule a demand as soon as it arrives at the entrance of the system. Two types of approach have been proposed: assignment when previous assignments are fixed, and assignment when previous assignments can be slightly modified to make it possible to integrate an urgent task. When previous assignments are fixed, it is possible to provide the delivery date to customers in real time.
Two algorithms have been introduced for linear and assembly production systems. These algorithms are particularly efficient and have been applied in companies. Nevertheless, they have two limitations:
• When several machines can perform the same operation, they are supposed to be identical in terms of operation time. • A given product cannot use the same machine at different manufacturing stages.
In the case when previous assignments can be slightly modified, an optimal real-time algorithm has been proposed for the case of a unique machine. This algorithm has been applied to the control of a multifunction battlefield radar and works well. The idea behind this approach is to periodically prepare the decisions off-line. Finally, when a job-shop is concerned, the complexity of the problem increases drastically and only a heuristic approach is possible.
References Baker KR (1974) Introduction to Sequencing and Scheduling. John Wiley & Sons, New York, NY Bean JC, Birge JR, Mittenthal J, Noon CE (1991) Match up scheduling with multiple resources, release dates and disruptions. Oper. Res. 39(3):470–483 Blazewicz J, Ecker K, Pesch E, Schmidt G, Węglarz J (2001) Scheduling Computer and Manufacturing Processes, 2nd edn, Springer Verlag, Berlin Chauvet F, Levner E, Meyzin L, Proth J-M (2000) On-line scheduling in a surface treatment system. Eur. J. Oper. Res. 120:382–392 Cowling PI, Johansson M (2002) Using real time information for effective dynamic scheduling. Eur. J. Oper. Res. 139(2):230–244 French S (1982) Sequencing and Scheduling: An Introduction to the Mathematics of the Job Shop. Ellis Harwood, Chichester Govil M, Proth J-M (2002) Supply Chain Design and Management: Strategic and Tactical Perspectives. Academic Press, San Diego, CA Haupt R (1989) A survey of priority rule-based scheduling. OR Spectr. 11:3–16 Holthaus O (1999) Scheduling in job-shops with machine breakdowns: an experimental study. Comput. Ind. Eng. 36(1):137–162 Jayamohan MS, Rajendran C (2004) Development and analysis of cost-based dispatching rules for job shop scheduling. Eur. J. Oper. Res. 157:307–321 Lee HL, Padmanabhan V, Whang S (1997) Information distortion in a supply chain: the bullwhip effect. Manag. Sci. 43:546–558 MacCarthy BL, Liu J (1993) Addressing the gap in scheduling research: a review of optimization and heuristic methods in production scheduling. Int. J. Prod. Res. 31(1):59–79 Panwalkar S, Iskander W (1977) A survey of scheduling rules. Oper. Res. 25(1):45–63 Pinedo M (2008) Scheduling Theory Algorithms and Systems. 3rd edn, Springer, New York, NY Poirier CC, Reiter SE (1996) Supply Chain Optimization. Building the Strongest Total Business Network. Berret-Koehler Publishers, San Francisco, CA Raghu TS, Rajendran C (1993) An efficient dynamic dispatching rule for scheduling in a job shop. Int. J. Prod. Econ. 
32:301–313 Russell RS, Dar-El EM, Taylor III BV (1987) A comparative analysis of the COVERT job sequencing rule using various shop performance measures. Int. J. Prod. Res. 25(10):1523–1540 Stoop PPM, Wiers VCS (1996) The complexity of scheduling in practice. Int. J. Oper. Prod. Manag. 16(10):37–53 Vieira GE, Herrmann JW, Lin E (2003) Rescheduling manufacturing systems: A framework of strategies, policies and methods. J. Sched. 6(1):36–62 Wiers VCS (1997) A review of the applicability of OR and AI scheduling techniques in practice. Omega 25(2):145–153 Wu SD, Byeon ES, Storer RH (1999) A graph-theoretic decomposition of the job shop scheduling problem to achieve scheduling robustness. Oper. Res. 47(1):113–124
Further Reading Chan MY, Chin L (1992) General Schedulers for the Pinwheel Problem Based on Double Integer Reduction. IEEE Trans. Comput. 41(6):755–768 Chauvet F, Proth J-M (2001) On-line scheduling in assembly processes. INFOR 39(3):245–256 Chauvet F, Herrmann JW, Proth J-M (2003) Optimization of cyclic production systems: a heuristic approach. IEEE Trans. Rob. Aut. 19(1):150–154 Duron C, Proth J-M (2002) Multifunction radar: task scheduling. J. Math. Mod. Alg. 1(2):105– 116 Graham RL, Lawler EL, Lenstra JK, Rinnooy Kan AHG (1979) Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discr. Math. 5:287–326 Hillion HP, Proth J-M (1989) Performance evaluation of job-shop systems using timed event graphs. IEEE Trans. Aut. Contr. 34(1):3–9 Kutanoglu E, Sabuncuoglu I (1999) An analysis of heuristics in a dynamic job-shop with weighted tardiness objectives. Int. J. Prod. Res. 37(1):165–187 Laftit S, Proth J-M, Xie X-L (1992) Optimization of invariant criteria for event graphs. IEEE Trans. Aut. Contr. 37(5):547–555 Lee CY, Lei L, Pinedo M (1997) Current trends in deterministic scheduling. Ann. Oper. Res. 70:1–41 Nowicki E (1994) An approximation algorithm for a single-machine scheduling problem with release times, delivery times and controllable processing times. Eur. J. Oper. Res. 72(1):74–82 Orman AJ, Potts CN, Shahani AK, Moore AR (1996) Scheduling for a multifunction array radar system. Eur. J. Oper. Res. 90:13–25 Orman AJ (1998) Modelling for the control of a complex radar system. Comput. Oper. Res. 25(3):239–249 Proth J-M (2001) On-line scheduling in Supply Chain environment. In: Menaldi JL, Rofman E, Sulem A (eds), Optimal Control and Partial Differential Equations, IOS Press, Amsterdam Proth J-M (2007) Scheduling: new trends in industrial environment. Ann. Rev. Contr. 31(1):157– 166 Rajendran C, Holthaus O (1999) A comparative study of dispatching rules in dynamic flowshops and job-shops. Eur. J. Oper. Res. 
116(1):156–170 Ramamoorthy CV, Ho GS (1980) Performance evaluation of asynchronous concurrent systems using Petri nets. IEEE Trans. Soft. Eng. SE-6(5):440–449 Vidal CJ, Goetschalckx M (1997) Strategic production-distribution models: A critical review with emphasis on global supply chain models. Eur. J. Oper. Res. 98:1–18
Chapter 10
Manufacturing Layout
Abstract Designing a layout consists in optimally locating manufacturing facilities in order to reduce the required material handling resources and the movement of material. Evidently, this leads to cost reduction. Static layout models are presented in the first part of the chapter. They are used when the environment can be considered as steady. The basic static models and their characteristics are provided. K-mean analysis, often required for designing functional departments, as well as cross-decomposition, used to design cells, are both explained and carefully illustrated. Note that these two approaches are commonly used beyond layout design. The standard approaches (CORELAP, INRIA-SAGEP, CRAFT) to locate manufacturing entities on the available space are reviewed and highlighted. They close the first part of the chapter. Dynamic layout models make up the second part. These models are studied because they cope well with an ever-changing market environment. Dynamic facility layout approaches, which are flexible and easy to reconfigure, and robust layout techniques, which can be used efficiently over many product mixes and volumes, complete the chapter.
10.1 Introduction A manufacturing layout is performed when either a new manufacturing system is set up, important changes occur in flow volumes and routes, or new resources (robots, automated guided vehicles, etc.) are introduced in a manufacturing system. Designing an optimal layout consists in locating manufacturing facilities in order to reduce the movement of material and material handling. An optimal layout simplifies management (in particular scheduling), reduces manufacturing cycles, the number of employees and, last but not least, the surface required to set up the system. As a consequence, the quality of products increases due to the reduction
of the risks resulting from transportation and handling, system reactivity increases and work-in-progress (WIP) decreases. Research work on this subject dates from the 1950s, see (Kuhn, 1955), for instance, and the interest in layout problems grew drastically in the 1980s due to the increase in demand variety, the frequent changes in customers' requirements and ever-stronger worldwide competition. Until this point in time, the objective of layout problems was to optimize a layout, assuming that the environment remains basically steady (stable material flow and deterministic operation times). We refer to these problems as static facility layout (SFL) problems. Another growing interest in layout problems arose in the mid-1990s, when it became obvious that existing layout configurations were unable to meet the needs of enterprises facing multiproduct manufacturing in a fast-changing market over a reasonable horizon. The idea of adaptable layouts, that is to say layouts that change to remain optimal or close to the optimum when the demand varies drastically, was born at that time. We refer to layout problems that aim at dynamically adapting layouts to demands as dynamic facility layout (DFL) problems. In the rest of this chapter, we present SFL and DFL models. We will also provide some insights into robust layouts, which are another way to face unforeseen changes in the environment.
10.2 Static Facility Layouts

10.2.1 Basic Layout Models

Three types of manufacturing layouts are usually mentioned in the literature: linear layouts, functional department layouts and cellular layouts.

Figure 10.1 Linear layout (three stages in series: Stage 1 with Substations 1.1 and 1.2, Stage 2 with Substations 2.1, 2.2 and 2.3, Stage 3 with Substation 3.1)
A linear layout, such as the one represented in Figure 10.1, applies to a system that manufactures a limited variety of products in stable volumes. Such a system can be more or less automated. High automation guarantees high productivity at the expense of low agility, while low automation leads to high agility but low productivity. Note that a linear layout with low automation sends us back to line balancing since, in this case, all the stations are usually designed to perform any of the required operations. At some stages of the manufacturing system proposed in Figure 10.1, parallel stations that perform the same operations are available, either to offset breakdowns of some substations or to help adapt the production flow to the demands. We refer to a functional department layout when resources of the same type or, in other words, resources able to perform the same set of tasks, are gathered together in the same department. A functional department layout is an organization around jobs. Usually, functional department layouts are considered efficient for systems that manufacture a large variety of products in small volumes. This kind of organization is stable when production evolves in volume only. However, such a layout leads to a huge number of moves and handling activities in the system which, in turn, leads to complex scheduling and risks of damaging products. Furthermore, the tendency when using this kind of organization is to manage the functional departments independently from each other, going back to an organization in departments. We know the problems resulting from this kind of organization: increase in WIP, number of employees and production cycle, to quote just a few. This organization is the opposite of the modern tendency for supply chain organizations. We can consider functional department layouts a holdover from past organizations, but they remain necessary in the face of increasing requirements for customized products. An example of a functional layout is presented in Figure 10.2.
Figure 10.2 Functional department layout (departments for drilling, milling, assembly and packaging; the legend distinguishes resources from manufacturing processes)
In cellular layouts, the manufacturing system is partitioned into cells, each cell being designed to manufacture the major portion of a given family of products or a set of operations. This organization is based on the assumption that the product families will be manufactured over a horizon large enough to “absorb” the cost of designing a new layout, which is not common due to the frequent changes of customers' requirements. When such a situation does occur, a cellular layout is well known for simplifying product moves and handling activities. However, cellular manufacturing is inefficient if market requirements frequently fluctuate in volume and product types. Such a layout is represented in Figure 10.3. Note that some products may have to visit a cell other than their generic cell to undergo a particular operation, when the corresponding resource is too expensive to be duplicated and rarely needed by products of other cells.
Figure 10.3 Cellular layout (three cells, Cell 1 to Cell 3)
Note that once the cells are obtained, several decisions remain to be made: the location of the resources inside each cell and the location of the cells on the available surface. This last decision is necessary when products have to visit several cells to be completed, or when each cell is composed of a unique resource, which happens when a huge variety of manufacturing processes are at stake.
10.2.2 Selection of a Type of Layout

In this section we provide some insights to answer the following question: how should one choose among these three types of layouts, assuming that at least one of them is suitable?
The principal parameters to consider when making this selection are:

1. The period over which products are expected to be sold, also called the expected product life (PL). If this period is long enough, that is to say equal to several years without significant changes in products or technologies, then automated manufacturing systems can be considered at the expense of agility. Unfortunately, the tendency is product diversification in order to remain competitive with regard to developing countries. Nevertheless, several production domains, such as car production, remain highly automated.
2. The average volume (AV) to manufacture per unit of time. A high average volume advocates for high automation, assuming that high value-added products are concerned.
3. The variety of products (VP) at stake. Usually, a large variety of products that require specific operations advocates for production/assembly lines where the automation level is low and manpower involvement high. A large amount of manpower with a low automation level results in an agile system.
4. The average number of possible changes in the quantities to manufacture and in the characteristics of the products over time (CH). By characteristics we mean secondary functionalities, shape, color, etc. If changes are important, then agile production/assembly lines are obviously the solution.

Note that, in this section, we assume that a unique layout selection is made. The above comments are summarized in Table 10.1.

Table 10.1 Parameters for layout selection. The rows are the four parameters and their levels: product life PL (short, long); mean production volume AV (low, medium, high); variety of products VP (very low, medium, high); intensity of changes over time CH (low, medium, high). The columns are the three layout types (linear, functional department and cellular), each split into an agile variant and an automated variant; a mark indicates that the layout variant suits the corresponding parameter level.
We propose some examples that show how Table 10.1 can be used:

Example 1. If the life of products is short, the mean production volume medium, the variety of products high and the intensity of changes over time low, then Table 10.1 suggests the choice of a functional department layout with low automation, which guarantees agility.

Example 2. If the life of products is long, the mean production volume low, the variety of products very low and the intensity of changes over time medium, then Table 10.1 does not suggest anything. The reason is that a low production volume associated with a very low variety of products is probably not a promising niche.

Example 3. If the life of products is long, the mean production volume medium, the variety of products very low and the intensity of changes over time low, then Table 10.1 suggests choosing an automated cellular system.

Example 4. If the life of products is short, the mean production volume high, the variety of products very low and the intensity of changes over time low, then Table 10.1 does not suggest anything. As we will see in the next main section, this situation typically calls for a dynamic facility layout (DFL).

Example 5. If the life of products is long, the mean production volume high, the variety of products very low and the intensity of changes over time low, then Table 10.1 suggests choosing either a linear or a cellular layout, both with automation.
10.2.3 Layout Design A linear layout does not require any particular design technique: facilities are gathered together into well-balanced stations as explained in Chapters 7 and 8; some stations could be replaced by several substations in order to facilitate the adjustment of material flows with demands; stations are finally arranged in the order they will be visited. U-shaped lines may be used to allow an employee to manage several facilities at different stages of production.
Theoretically, a functional department layout does not require any particular technique either, since it consists in grouping together resources that perform related operations into functional departments and in locating these departments on the available surface. When each resource performs several operations, it may be difficult to select the resources to assign to the same department. In this case, K-mean analysis is a technique that may prove helpful; this technique will be presented hereafter. Furthermore, several techniques to locate resources on the available surface will be presented in this section. A cellular layout requires a particular technique to build the cells. This building task is twofold: first, select the product types that will be manufactured in the same cells; then, define the number of resources of each type to introduce in each cell to balance the system. Finally, the location of the resources inside functional departments or cells requires some particular approaches that will be presented later. To summarize, four types of activities are necessary for designing a layout:

1. The design of manufacturing entities, which concerns:
– linear layouts, where stations are designed simultaneously with line balancing;
– functional department layouts, which sometimes require K-mean analysis;
– cellular layouts, for which cross-decomposition is required; we will present the GP method, which is simple and efficient.
2. The location of the manufacturing entities on the available surface. This mainly concerns functional department layouts, where moves from department to department are unceasing. It may also concern cellular layouts when particular resources are located in one cell but used by most of the products; this situation happens when some resources are too expensive to be duplicated in several cells.
3. The location of resources inside manufacturing entities. This depends on several parameters: weights of the products, number of resources in the manufacturing entities, intensity of product flows, transportation system, etc.
4. The last stage of layout design consists in balancing the entities' workloads.

These four stages are summarized in Table 10.2.
10.2.4 Design of Manufacturing Entities

When linear layouts are concerned, the entities are designed during the balancing process. This problem has been studied in Chapters 7 and 8. As aforementioned, it may happen that the design of functional department layouts requires K-mean analysis when the similarities between resources are not obvious.
Table 10.2 Layout design steps

Design stage                              | Linear layout      | Functional department layout | Cellular layout
Design of manufacturing entities          | X (line balancing) | X (sometimes)                | X
Location of manufacturing entities        |                    | X                            | X (sometimes)
Location of resources inside the entities | X                  | X                            | X
Workload balancing                        | X (line balancing) | X                            | X
The design of cells requires a cross-decomposition approach; we propose the GP method. These two methods are presented hereafter.

10.2.4.1 K-mean Analysis

Introduction to the Method

Assume that a set of n objects (or individuals) I_1, …, I_n is such that each object is characterized by the same q parameters, denoted by C_1, …, C_q. These parameters take numerical values. We denote by c_1^k, …, c_q^k the numerical values that characterize individual I_k. Thus, individual I_k can be represented by a point, still denoted by I_k, whose coordinates in R^q are c_1^k, …, c_q^k. K-mean analysis aims at partitioning the set of individuals into clusters such that two individuals grouped in the same cluster are close to one another. The proximity of two individuals can be measured using the Euclidean distance, but also using a dissimilarity index. Let us recall some basics. d(I_k, I_r) is the Euclidean distance between individuals I_k and I_r if the following properties hold:

1. d(I_k, I_r) = 0 if and only if I_k ≡ I_r. The distance between two individuals is equal to zero if and only if these individuals are exactly the same.
2. d(I_k, I_r) = d(I_r, I_k). The distance between individual I_k and individual I_r is the same as the distance between individual I_r and individual I_k (symmetry property).
3. d(I_k, I_r) ≤ d(I_k, I_u) + d(I_u, I_r) whatever the individuals I_k, I_r and I_u. This property is called the triangular inequality.

The Euclidean distance between I_k and I_r is defined as:

d(I_k, I_r) = √( Σ_{i=1}^{q} ( c_i^k − c_i^r )² )
When the triangular property does not hold, then d is called a dissimilarity index.

The K-mean Algorithm

The K-mean algorithm iteratively converges toward a set of homogeneous clusters. The result may depend upon the starting state, which is generated at random.

Algorithm 10.1. (K-mean with quantitative parameters)

1. Generate at random h artificial individuals P_1, …, P_h in the domain of I_1, …, I_n. Usually, h ≈ n/5.
2. For i = 1, …, n do:
2.1. Find u such that d(I_i, P_u) = min_{s=1,…,h} d(I_i, P_s).
2.2. Assign I_i to cluster S_u.
3. Remove the empty clusters. The number of remaining clusters is g ≤ h.
4. Compute the gravity centers of the clusters. We denote these gravity centers by V_1, …, V_g.
5. If {V_1, …, V_g} ≠ {P_1, …, P_h} then:
5.1. Set h = g.
5.2. Set P_i = V_i for i = 1, …, h.
5.3. Go to 2.
6. End.
Let us recall the definition of a gravity center. Assume that we assign to each individual I_k an additional quantitative parameter w_k that we call the weight of I_k. This weight is introduced to give more or less importance to the individual. The gravity center of cluster S_u, u = 1, …, h, is the point of R^q whose coordinates s_1^u, …, s_q^u are defined as follows:

s_i^u = ( Σ_{k : I_k ∈ S_u} w_k c_i^k ) / ( Σ_{k : I_k ∈ S_u} w_k )   for i = 1, …, q
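As an illustrative sketch (not the book's code), Algorithm 10.1 with the weighted gravity centers above can be implemented as follows; the function and parameter names are assumptions:

```python
import math
import random

def kmean(points, h, weights=None, seed=0):
    """Sketch of Algorithm 10.1 (K-mean with quantitative parameters).
    points: list of q-dimensional tuples; h: number of artificial individuals;
    weights: optional individual weights (default 1). Returns the clusters."""
    rng = random.Random(seed)
    q = len(points[0])
    weights = weights or [1.0] * len(points)
    lo = [min(p[i] for p in points) for i in range(q)]
    hi = [max(p[i] for p in points) for i in range(q)]
    # Step 1: generate h artificial individuals at random in the data domain.
    centers = [tuple(rng.uniform(lo[i], hi[i]) for i in range(q))
               for _ in range(h)]
    while True:
        # Step 2: assign each individual to the closest center.
        clusters = {}
        for p, w in zip(points, weights):
            u = min(range(len(centers)), key=lambda s: math.dist(p, centers[s]))
            clusters.setdefault(u, []).append((p, w))
        # Steps 3 and 4: drop empty clusters, compute weighted gravity centers.
        new_centers = [tuple(sum(w * p[i] for p, w in clusters[u])
                             / sum(w for _, w in clusters[u]) for i in range(q))
                       for u in sorted(clusters)]
        # Step 5: stop when the gravity centers coincide with the current ones.
        if set(new_centers) == set(centers):
            return [[p for p, _ in clusters[u]] for u in sorted(clusters)]
        centers = new_centers
```

As noted below, different random starting states may yield slightly different clusters; fixing the seed makes a run reproducible.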
It has been proven that the above algorithm converges. Nevertheless, running this algorithm several times may lead to slightly different clusters. The reason is that the initial set P_1, …, P_h generated at random (see Step 1 of the algorithm) may differ from one run to the next. Individuals that do not remain in the same cluster in all the trials belong to a “gray” set of individuals, that is to say individuals that are not close to a particular cluster. Usually, a majority rule is used to assign these individuals to clusters.

An Example with Numerical Parameters

Due to the limited space available, we choose a small example with 5 parameters and 20 individuals. The weights of the individuals are equal to 1. The parameters are given in Table 10.3. We generated 4 artificial individuals at random to start. The K-mean analysis algorithm provides 3 clusters.

Table 10.3 Quantitative data for K-mean analysis
      I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
C1    12    0    1   20    1   13    1    1   17    1
C2    15    0    0   18    2   17    4    1   20    4
C3     2   20    0    1   18    0    2   20    2   20
C4     1   18   20    0   15    0   25   25    5   25
C5     1    1   25    0    0    0   30    1    3    1

      I11  I12  I13  I14  I15  I16  I17  I18  I19  I20
C1     3    1    1    7   19    5    5   20    3    2
C2     1    5   18    5   18    3    4   15    2    3
C3     2    1   17   20    1    6   21    1    5   20
C4    15   20   20   25    1   29   20    2   30   20
C5    18   20    1    1    1   25    2    1   18    1
Cluster 1: Individuals: I 2 , I 5 , I 8 , I10 , I13 , I14 , I17 , I 20 Gravity center: 2.25, 4.625, 19.5, 21.0, 1.0 Cluster 2: Individuals: I1 , I 4 , I 6 , I 9 , I15 , I18 Gravity center: 16.8333, 17.1667, 1.16667, 1.5, 1.0 Cluster 3: Individuals: I 3 , I 7 , I11 , I12 , I16 , I19 Gravity center: 2.33333, 2.5, 2.66667, 23.1667, 22.6667 This algorithm is used in group technology to build clusters of product types (part families) that spend approximately the same amount of time on the resources: this simplifies the planning. Assume, for instance, that a job-shop is composed of q machines designed to manufacture n products. Assume also that a
statistical analysis has shown that the production ratio of product i ∈ {1, …, n} is w_i. For each product type, we know the sequence of machines a product of this type has to visit in order to be manufactured, as well as the time such a product spends on each machine. We then set:

c_j^i = t_{i,j} if a product of type i has to visit machine j, and c_j^i = 0 otherwise,

for i ∈ {1, …, n} and j ∈ {1, …, q}, where t_{i,j} is the time a unit of product i has to spend on machine j. In this example, the product types are the individuals and the machines are the parameters. The K-mean analysis approach that is of interest for designing a layout is presented in the next subsection.
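The quantitative K-mean procedure just described (random artificial individuals, nearest-center assignment, weighted gravity centers) can be sketched in a few lines of Python; the function name and the toy data are illustrative, not part of the method:

```python
import random

def weighted_kmeans(points, weights, h, iters=100, seed=0):
    """One run of the K-mean procedure: h artificial centers drawn at
    random among the individuals, assignment of each individual to the
    closest center (Euclidean distance), and replacement of each center
    by the weighted gravity center of its cluster, until stable."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, h)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each individual to the closest current center.
        new_assign = [
            min(range(len(centers)),
                key=lambda u: sum((a - b) ** 2 for a, b in zip(p, centers[u])))
            for p in points
        ]
        if new_assign == assign:
            break
        assign = new_assign
        # Recompute the weighted gravity centers:
        # s_i = (sum_k w_k c_i^k) / (sum_k w_k) over the I_k in S_u.
        for u in range(len(centers)):
            members = [k for k, a in enumerate(assign) if a == u]
            if not members:
                continue  # empty cluster: keep the previous center
            W = sum(weights[k] for k in members)
            centers[u] = [sum(weights[k] * points[k][i] for k in members) / W
                          for i in range(len(points[0]))]
    return assign, centers
```

On two well-separated groups of individuals with unit weights, the sketch recovers the two groups whatever the random start, in line with the convergence remark above.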
An Example with Binary Parameters

We consider the department layout problem. In this framework, the individuals are the resources and the parameters are the operations the set of resources has to perform. Keeping the previous notations:
10 Manufacturing Layout
c_j^i = 1 if resource i can perform operation j, and c_j^i = 0 otherwise,

for i ∈ {1, …, n} and j ∈ {1, …, q}. We still denote by w_i the weight associated with resource i. This weight reflects the importance of the resource (running or buying cost, difficulty of maintenance, etc.). In this kind of problem, we no longer use the Euclidean distance but the dissimilarity index d1 defined as follows:

d1( I_k, I_r ) = (1/q) Σ_{j=1}^{q} ( c_j^k + c_j^r − 2 × c_j^k × c_j^r )
Other dissimilarity indices are available in the literature, but d1 is particularly efficient for the problem at hand since it groups in the same cluster (i.e., the same department) resources that can perform the same set of operations and that are not designed to perform operations outside this set. In other words, this dissimilarity index gathers together resources having the same behavior with regard to the operations they are able to perform or not. In Step 2.1 of Algorithm 10.1 (K-mean with quantitative parameters), we replace d with d1. A second change is necessary because the parameters are binary: it concerns the computation of the gravity centers. Let S ⊂ { I_1, …, I_n } be a cluster. The "gravity center" is obtained by applying Algorithm 10.2.

Algorithm 10.2.

1. Compute W = Σ_{k : I_k ∈ S} w_k. This is the sum of the weights of the resources belonging to S.
2. For j = 1, …, q do:
   2.1. Compute R_j = Σ_{k : I_k ∈ S} w_k × c_j^k.
   2.2. If R_j < W / 2 then s_j = 0, otherwise s_j = 1.
Indeed, s_1, …, s_q are the coordinates of the "gravity center" of cluster S. The example presented hereafter comprises 20 individuals (i.e., resources) and 8 parameters (i.e., operations). The data are reported in Table 10.4. All the weights are equal to 1. We generated 6 artificial individuals at random to start.
We obtain 6 clusters.
Cluster 1: Individuals: I2, I6, I10, I11. Gravity center: 0 0 1 1 1 1 0 0
Cluster 2: Individuals: I7, I14, I19. Gravity center: 0 0 0 1 0 1 1 1
Cluster 3: Individuals: I9, I13. Gravity center: 0 1 0 0 1 1 1 1
Cluster 4: Individuals: I3, I4, I15, I16, I17, I20. Gravity center: 1 1 1 0 0 1 1 1
Cluster 5: Individuals: I5, I18. Gravity center: 0 1 0 1 1 1 0 0
Cluster 6: Individuals: I1, I8, I12. Gravity center: 1 1 1 1 0 0 0 0

Table 10.4 K-mean analysis for binary data

      I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12 I13 I14 I15 I16 I17 I18 I19 I20
C1     1  0  1  1  0  0  0  1  0   0   0   1   0   0   1   1   1   0   0   1
C2     1  0  1  1  0  0  0  1  1   1   0   1   1   0   1   1   1   1   0   1
C3     1  1  0  1  0  1  0  1  0   1   0   0   0   0   0   1   0   0   1   1
C4     0  1  0  0  1  1  1  1  0   1   1   1   0   1   0   0   0   1   1   0
C5     0  1  0  0  1  1  0  0  0   1   1   0   1   0   0   0   0   1   0   0
C6     0  1  1  0  1  0  1  0  1   0   1   0   1   1   1   1   0   1   0   0
C7     0  0  1  1  0  0  1  0  1   0   0   0   1   1   1   1   1   0   1   0
C8     0  0  1  1  0  0  1  0  1   0   1   0   1   0   0   0   1   0   1   1
Thus, we would have 6 departments for this example, and the resources assigned to these departments would be the individuals mentioned above.

Remarks concerning K-mean analysis:

1. As mentioned before, the number of clusters obtained is never greater than the number of artificial individuals generated at the beginning of the process.
2. If we run a K-mean analysis program several times with the same number of initial artificial individuals generated at random, we may obtain different solutions.
3. Usually, we run the program several times and gather into the same cluster the individuals that are in the same cluster most of the time (for instance, 90% of the time). Individuals that do not belong to a cluster at the end of the process are elements of the "gray set".
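The binary-parameter machinery, the index d1 and the majority-rule gravity center of Algorithm 10.2, fits in a few lines of Python (a sketch; the function names are ours):

```python
def d1(ck, cr):
    """Dissimilarity index d1 between two binary individuals:
    the fraction of parameters on which they disagree."""
    q = len(ck)
    return sum(a + b - 2 * a * b for a, b in zip(ck, cr)) / q

def binary_gravity_center(cluster, weights):
    """Algorithm 10.2: coordinate s_j is 1 iff the weighted count R_j of
    1-values reaches half the total weight W of the cluster."""
    W = sum(weights)
    q = len(cluster[0])
    center = []
    for j in range(q):
        Rj = sum(w * ind[j] for w, ind in zip(weights, cluster))
        center.append(0 if Rj < W / 2 else 1)
    return center
```

With unit weights the gravity center is simply the parameter-wise majority vote of the cluster's members, which is why the resulting centers in the example above are themselves binary vectors.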
10.2.4.2 Cross-decomposition Methods
Introduction

Cross-decomposition methods are used to design cells. Let us consider a matrix A having n rows and m columns. The elements of this matrix are non-negative values. The cross-decomposition methods presented in this section consist in reordering the rows and columns of the matrix to obtain a set B_1, …, B_K of non-overlapping blocks that optimize a given criterion. Figure 10.4 illustrates such a method. In matrix A, rows represent product types while columns represent resources. There are n product types and m resources. Two criteria are associated with this problem: the weight criterion, which leads to manufacturing each type of product exclusively in one cell if possible, and the traffic criterion, which leads to a reduction in the traffic between cells.

Figure 10.4 A cross-decomposition (left: initial matrix A; right: rows R_1, …, R_K and columns C_1, …, C_K reordered into blocks B_1, …, B_K)
Cross-decomposition Based on Weight Criterion: The GP Method

In this approach, the elements a_{i,j}, i = 1, …, n; j = 1, …, m, of matrix A are binary:

a_{i,j} = 1 if product i uses resource j, and a_{i,j} = 0 otherwise.
The approach based on the weight criterion is the original GP method (GPM), see (Garcia and Proth, 1986). The objective is to reorder the rows and the columns of matrix A such that a weighted sum of the number of 1-values inside the blocks B_1, …, B_K (see Figure 10.4) and of 0-values outside these blocks is maximal. If w ∈ [0, 1] is the weight associated with the blocks, then 1 − w is the weight associated with the outside of the blocks. This weight is chosen by the user. Note that if w = 1, then an optimal solution is a single block covering the whole matrix, and this solution is optimal whatever the elements of the matrix. If w = 0, then an optimal solution consists in considering the whole matrix as the outside of the blocks. Thus, the weight should be carefully selected between these two limits. Since this algorithm is fast, several trials are possible. Note also that an initial number K of blocks is required. Usually, this initial number is chosen such that Max( n / K, m / K ) ≥ 4. We use the notations introduced in Figure 10.4.

Algorithm 10.3. (The GPM algorithm)

1. Generate at random a partition of {1, …, n} into K subsets R⁰ = { R⁰_1, …, R⁰_K }.
2. For j = 1, …, m do:
   2.1. Compute G_k( j ) = w Σ_{i ∈ R⁰_k} a_{i,j} + ( 1 − w ) Σ_{i ∉ R⁰_k} ( 1 − a_{i,j} ), for k = 1, …, K.
   2.2. Let k* be such that G_{k*}( j ) = Max_{k=1,…,K} G_k( j ).
   2.3. Assign column j to subset C⁰_{k*}.
   When Step 2 is completed, we obtain a partition of the columns C⁰ = { C⁰_1, …, C⁰_K }. Some of these subsets may be empty; they are removed and K is adjusted accordingly. To simplify the explanations, we do not integrate this adjustment in the algorithm; K always denotes the current number of elements of the partition.
3. For i = 1, …, n do:
   3.1. Compute H_k( i ) = w Σ_{j ∈ C⁰_k} a_{i,j} + ( 1 − w ) Σ_{j ∉ C⁰_k} ( 1 − a_{i,j} ), for k = 1, …, K.
   3.2. Let k* be such that H_{k*}( i ) = Max_{k=1,…,K} H_k( i ).
   3.3. Assign row i to subset R¹_{k*}.
   When Step 3 is completed, we obtain a partition of the rows R¹ = { R¹_1, …, R¹_K }. Some of these subsets may be empty; they are removed and K is adjusted accordingly. To simplify the explanations, we do not integrate this adjustment in the algorithm.
4. If R¹ ≡ R⁰ then:
   4.1. The solution to the problem is the set of blocks B* = { ( R⁰_1 × C⁰_1 ), …, ( R⁰_K × C⁰_K ) }.
   4.2. Compute the value of the criterion Z = w Σ_{(i,j) ∈ U} a_{i,j} + ( 1 − w ) Σ_{(i,j) ∉ U} ( 1 − a_{i,j} ), where U = ∪_{k=1}^{K} ( R⁰_k × C⁰_k ).
   4.3. Stop the algorithm.
5. If R¹ ≠ R⁰ then:
   5.1. Set R⁰ = R¹.
   5.2. Go to Step 2.
It has been proven that the GPM algorithm always converges (Garcia and Proth, 1986). The result says that the subset of product types R⁰_k is manufactured (mainly) in the cell composed of the resources of the subset C⁰_k.

Remark 1
Usually, we run the algorithm several times and keep the result that leads to the best value of the criterion.

Remark 2
We can also start the computation by generating at random a partition of {1, …, m} into K subsets C⁰ = { C⁰_1, …, C⁰_K }, derive R⁰ from C⁰ in the same way as C⁰ is derived from R⁰, and so on. This is how we proceed in the following example.

An Example

In this example, 15 product types P1, …, P15 are processed on 10 resources M1, …, M10. The data are given in Table 10.5.
Table 10.5 Data for GP method
      M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
P1     1  1  1  1  0  0  0  0  0  1
P2     0  0  0  1  1  1  1  0  0  0
P3     1  1  1  1  0  0  0  0  0  0
P4     1  1  1  0  0  0  0  0  0  1
P5     0  0  0  1  1  1  1  0  0  0
P6     1  1  0  0  0  0  0  1  1  1
P7     0  0  0  1  1  1  1  0  0  0
P8     0  0  0  1  1  1  1  0  1  0
P9     1  1  1  1  0  0  0  1  0  0
P10    0  0  0  0  1  1  0  0  1  1
P11    0  0  0  0  1  1  1  0  1  1
P12    1  1  1  0  0  0  0  0  0  1
P13    0  0  0  0  1  1  1  0  0  0
P14    0  0  0  1  1  1  1  0  1  1
P15    0  0  0  1  1  1  1  0  0  1
We first apply the GP method with w = 0.5 and obtain the solution represented in Table 10.6. This solution shows that 3 cells are possible:

• Cell 1, composed of resources { M8, M9, M10 }. The part family associated with this cell is { P6, P10 }.
• Cell 2, composed of resources { M7, M4, M5, M6 }. The part family associated with this cell is { P2, P7, P8, P5, P11, P13, P14, P15 }.
• Cell 3, composed of resources { M2, M3, M1 }. The part family associated with this cell is { P4, P12, P1, P9, P3 }.

The value of the criterion is 65. When we apply the GP method with w = 0.8, we obtain only 2 cells (see Table 10.7). The value of the criterion is 60.6.

Table 10.6 Solution with w = 0.5
      M8 M9 M10 M7 M4 M5 M6 M2 M3 M1
P6     1  1  1   0  0  0  0  1  0  1
P10    0  1  1   0  0  1  1  0  0  0
P2     0  0  0   1  1  1  1  0  0  0
P7     0  0  0   1  1  1  1  0  0  0
P8     0  1  0   1  1  1  1  0  0  0
P5     0  0  0   1  1  1  1  0  0  0
P11    0  1  1   1  0  1  1  0  0  0
P13    0  0  0   1  0  1  1  0  0  0
P14    0  1  1   1  1  1  1  0  0  0
P15    0  0  1   1  1  1  1  0  0  0
P4     0  0  1   0  0  0  0  1  1  1
P12    0  0  1   0  0  0  0  1  1  1
P1     0  0  1   0  1  0  0  1  1  1
P9     1  0  0   0  1  0  0  1  1  1
P3     0  0  0   0  1  0  0  1  1  1
Table 10.7 Solution with w = 0.8
      M4 M5 M6 M7 M9 M3 M1 M8 M2 M10
P2     1  1  1  1  0  0  0  0  0  0
P5     1  1  1  1  0  0  0  0  0  0
P7     1  1  1  1  0  0  0  0  0  0
P8     1  1  1  1  1  0  0  0  0  0
P10    0  1  1  0  1  0  0  0  0  1
P11    0  1  1  1  1  0  0  0  0  1
P13    0  1  1  1  0  0  0  0  0  0
P14    1  1  1  1  1  0  0  0  0  1
P15    1  1  1  1  0  0  0  0  0  1
P1     1  0  0  0  0  1  1  0  1  1
P6     0  0  0  0  1  0  1  1  1  1
P12    0  0  0  0  0  1  1  0  1  1
P3     1  0  0  0  0  1  1  0  1  0
P4     0  0  0  0  0  1  1  0  1  1
P9     1  0  0  0  0  1  1  1  1  0
As we can see, some parts have to use resources that do not belong to the cell to which their part types are assigned. For example, part type P8, which is associated with cell 2, uses resource M9, which belongs to cell 1. A solution to avoid the intercell moves of parts of type P8 is to introduce a resource M9 in cell 2, assuming that M9 is not too expensive and that the balance of the cells is not disturbed in doing so.

The Improved GP Method

It may happen that some product types have to visit many resources; such a product type cannot be assigned to a specific cell. Similarly, it may happen that some expensive resources, required by almost all the product types, are unique; such a resource cannot be included in a cell. We provide a simple procedure that can be added to the GP algorithm in order to extract the rows and the columns that cannot be associated with a specific cell in the resulting table. We use the notations introduced before.

Algorithm 10.4.

1. For i = 1, …, n do:
   1.1. Compute s0( i ) = w Σ_{j ∈ C⁰_{k*}} a_{i,j} + ( 1 − w ) Σ_{j ∉ C⁰_{k*}} ( 1 − a_{i,j} ), where k* is the cell to which part type i is assigned in the result provided by the GP algorithm.
   1.2. Compute s1( i ) = w Σ_{j=1}^{m} a_{i,j}.
   1.3. Compute s2( i ) = ( 1 − w ) Σ_{j=1}^{m} ( 1 − a_{i,j} ).
   1.4. If s0( i ) < s1( i ) or s0( i ) < s2( i ), then row i, which represents a product type, is no longer assigned to cell k*.
2. For j = 1, …, m do:
   2.1. Compute s0( j ) = w Σ_{i ∈ R⁰_{k*}} a_{i,j} + ( 1 − w ) Σ_{i ∉ R⁰_{k*}} ( 1 − a_{i,j} ), where k* is the cell to which resource j belongs in the result provided by the GP algorithm.
   2.2. Compute s1( j ) = w Σ_{i=1}^{n} a_{i,j}.
   2.3. Compute s2( j ) = ( 1 − w ) Σ_{i=1}^{n} ( 1 − a_{i,j} ).
   2.4. If s0( j ) < s1( j ) or s0( j ) < s2( j ), then column j, which represents a resource, is removed from cell k*.
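Step 1 of Algorithm 10.4 (the row side; the column side of Step 2 is symmetric) can be sketched as follows; the function name and argument layout are ours:

```python
def extract_atypical_rows(A, row_cell, C, w):
    """Flag product types (rows of the 0/1 matrix A) whose in-cell score
    s0 is beaten by either the all-inside score s1 or the all-outside
    score s2, as in Step 1 of Algorithm 10.4. row_cell[i] is the cell
    index of row i; C[k] is the set of columns of cell k."""
    m = len(A[0])
    extracted = []
    for i, k in enumerate(row_cell):
        s0 = (w * sum(A[i][j] for j in C[k])
              + (1 - w) * sum(1 - A[i][j] for j in range(m) if j not in C[k]))
        s1 = w * sum(A[i])
        s2 = (1 - w) * sum(1 - A[i][j] for j in range(m))
        if s0 < s1 or s0 < s2:
            extracted.append(i)
    return extracted
```

A row that uses nearly every resource scores higher under s1 than under its cell assignment, so it is extracted, which is exactly the "product type that visits many resources" case motivating the procedure.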
An Example

We consider the data given in Table 10.8. We applied the extended GP method to these data. The results are given in Table 10.9.

Table 10.8 Data for GP extended
      M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
P1     1  1  1  1  0  0  1  0  0  1
P2     1  1  1  1  1  0  1  1  1  0
P3     1  1  1  1  0  0  1  0  0  0
P4     1  1  1  0  0  0  1  0  0  1
P5     0  0  0  1  1  1  1  0  0  0
P6     1  1  0  0  0  0  1  1  1  1
P7     0  0  0  1  1  1  1  0  0  0
P8     0  0  0  1  1  1  1  0  1  0
P9     1  1  1  1  0  0  1  1  0  0
P10    0  0  0  0  1  1  0  0  1  1
P11    0  0  0  0  1  1  1  0  1  1
P12    1  1  1  0  0  0  0  0  0  1
P13    0  0  0  0  1  1  1  0  0  0
P14    0  0  0  1  1  1  1  0  1  1
P15    0  0  0  1  1  1  1  0  0  1

Table 10.9 Result related to data of Table 10.8 with w = 0.5

      M9 M3 M10 M6 M4 M5 M8 M2 M1 M7
P10    1  0  1   1  0  1  0  0  0  0
P4     0  1  1   0  0  0  0  1  1  1
P1     0  1  1   0  1  0  0  1  1  1
P12    0  1  1   0  0  0  0  1  1  0
P8     1  0  0   1  1  1  0  0  0  1
P5     0  0  0   1  1  1  0  0  0  1
P11    1  0  1   1  0  1  0  0  0  1
P7     0  0  0   1  1  1  0  0  0  1
P13    0  0  0   1  0  1  0  0  0  1
P14    1  0  1   1  1  1  0  0  0  1
P15    0  0  1   1  1  1  0  0  0  1
P3     0  1  0   0  1  0  0  1  1  1
P9     0  1  0   0  1  0  1  1  1  1
P6     1  0  1   0  0  0  1  1  1  1
P2     1  1  0   0  1  1  1  1  1  1
We obtain 4 cells, one cell being made of only one resource ( M9 ). One column and one row are not included in the cells: they concern resource M7 and product type P2.

Cross-decomposition Based on Traffic Criterion

This algorithm has two stages:
• Find a partition of the columns (each element of the partition will be a cell) such that the traffic between cells is minimal. As explained in the previous subsection, some columns may be ruled out of the partition.
• Find the set of product types to assign to each cell. As explained before, some product types may remain unassigned.

Partition of the Columns

This part of the algorithm is based on a matrix B of n rows, which represent the product types, and m columns, which represent the resources. The elements of this matrix are defined as follows:

b_{i,j} = 0 if a part of type i does not visit resource j, and b_{i,j} = k if resource j is the k-th resource visited by a part of type i,

for i = 1, …, n and j = 1, …, m. Let r, s ∈ {1, …, m}. If ( b_{i,s} = b_{i,r} + 1 and b_{i,r} > 0 ) or ( b_{i,r} = b_{i,s} + 1 and b_{i,s} > 0 ), then the traffic between resources r and s due to products of type i is t( i, r, s ) = w_i, the weight of products of type i; otherwise, t( i, r, s ) = 0. The weight of products of type i is proportional to the number of products of this type that enter the manufacturing system during a given period: it reflects the share of this product type in the overall traffic. The total traffic between resources r and s is:

t( r, s ) = Σ_{i=1}^{n} t( i, r, s )    (10.1)

We set t( s, s ) = +∞, ∀s ∈ {1, …, m}.
We derive the dissimilarity index d between columns r and s from the total traffic:

d( r, s ) = 1 / ( 1 + t( r, s ) )    (10.2)

From this definition, we can see that:
• d( s, s ) = 0, ∀s ∈ {1, …, m}.
• d( r, s ) = d( s, r ), ∀s, r ∈ {1, …, m}.
• d( r, s ) = 1 if t( r, s ) = 0: the dissimilarity index is equal to 1 when there is no traffic between two resources.
• d( r, s ) → 0 if t( r, s ) → +∞: the greater the traffic, the closer the dissimilarity index is to 0.

The algorithm that leads to the partition of the columns is based on two parameters:
• A parameter η ∈ [0, 1], chosen by the user, which is used to compute the densities of the columns. The density Δ_η( s ) of a column s ∈ {1, …, m} is the number of columns r ∈ {1, …, m} such that d( s, r ) < η.
• A parameter α ∈ [ mi, ma ], where mi is the smallest density among the columns and ma the greatest. This parameter, also chosen by the user, selects a set D of columns having high densities:

D = { r | r ∈ {1, …, m} and Δ_η( r ) > α }
The columns belonging to set D are candidate kernels of the cells we are looking for. The partition of the columns is then obtained using Algorithm 10.5. Algorithm 10.5.
1. We choose at random an element of D*⊂ D, where D * is the set of elements having the greatest density and that are not assigned to a cell yet. Let r * be this element that will be the kernel of a new cell denoted by C ( r * ) .
2. Assign to C ( r * ) the elements s that are not assigned yet and such that d ( s, r * ) < η . Note that s may be an unassigned element of D . 3. If some elements of D are not assigned, then go to 1.
Indeed, the result closely depends on parameters η and α . Several trials are usually required to obtain an acceptable result.
Numerical Example

We consider a set of 8 product types { P1, …, P8 } that are manufactured on 6 resources { M1, …, M6 }. The sequence of resources to be visited by each type of part, as well as the weight associated with the part type, are given hereafter:

P1: M2, M3, M1; w1 = 2
P2: M4, M6, M5; w2 = 1
P3: M4, M2, M3, M1; w3 = 1
P4: M6, M5; w4 = 2
P5: M2, M1, M3; w5 = 2
P6: M5, M4, M3, M2; w6 = 1
P7: M5, M6; w7 = 2
P8: M1, M3, M2; w8 = 2

We derive matrix B as shown in Table 10.10. The traffic between the resources is computed using (10.1) and given in Table 10.11.

Table 10.10 Matrix B

      M1 M2 M3 M4 M5 M6   wi
P1     3  1  2  0  0  0    2
P2     0  0  0  1  3  2    1
P3     4  2  3  1  0  0    1
P4     0  0  0  0  2  1    2
P5     2  1  3  0  0  0    2
P6     0  4  3  2  1  0    1
P7     0  0  0  0  1  2    2
P8     1  3  2  0  0  0    2
Table 10.11 Traffic
      M1  M2  M3  M4  M5  M6
M1    +∞   2   7   0   0   0
M2        +∞   6   1   0   0
M3            +∞   1   0   0
M4                +∞   1   1
M5                    +∞   5
M6                        +∞
Table 10.12 Values of the dissimilarity index
      M1   M2     M3     M4    M5    M6
M1     0   0.333  0.125   1     1     1
M2          0     0.143   0.5   1     1
M3                 0      0.5   1     1
M4                        0     0.5   0.5
M5                              0     0.167
M6                                    0
We derive the dissimilarity indices from the traffic by applying (10.2). The result is given in Table 10.12. Let us first choose η = 0.4. According to the definition of the density, we obtain:

Δη(1) = 3, Δη(2) = 3, Δη(3) = 3, Δη(4) = 1, Δη(5) = 2, Δη(6) = 2

If we choose α = 1.5, we obtain D = { 1, 2, 3, 5, 6 }.

Assume that column 3 has been chosen at random in D among the elements of density 3. The elements s such that d( 3, s ) < η are 1, 2 and 3. Thus { M1, M2, M3 } is the first cell. After removing the elements that have been assigned, D = { 5, 6 }. Choosing 5 or 6 at random, we obtain the second cell { M5, M6 }. Resource M4 is not assigned to a cell.

Partition of the Rows

We first build matrix A by replacing with 1 the elements of B that are strictly positive. For the above example, we obtain the matrix A represented in Table 10.13 (the unassigned column is in the last position). The partition of the rows is then performed using the GP approach. The resources that are not assigned to a cell ( M4 in this example) are ignored. We obtain the part family { P1, P3, P5, P6, P8 }, assigned to cell { M1, M2, M3 }, and the part family { P2, P4, P7 }, assigned to cell { M5, M6 }.
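The traffic computation (10.1), the dissimilarity index (10.2) and the kernel clustering of Algorithm 10.5 can be replayed on the numerical example above in a few lines of Python (variable names are ours):

```python
# Routings and weights of the numerical example (P1, ..., P8 on M1, ..., M6).
routes = {
    "P1": ["M2", "M3", "M1"], "P2": ["M4", "M6", "M5"],
    "P3": ["M4", "M2", "M3", "M1"], "P4": ["M6", "M5"],
    "P5": ["M2", "M1", "M3"], "P6": ["M5", "M4", "M3", "M2"],
    "P7": ["M5", "M6"], "P8": ["M1", "M3", "M2"],
}
weights = {"P1": 2, "P2": 1, "P3": 1, "P4": 2,
           "P5": 2, "P6": 1, "P7": 2, "P8": 2}
machines = ["M1", "M2", "M3", "M4", "M5", "M6"]

# Total traffic t(r, s) of (10.1): add the part-type weight for every
# pair of consecutive machines in a routing.
t = {(r, s): 0 for r in machines for s in machines}
for p, seq in routes.items():
    for a, b in zip(seq, seq[1:]):
        t[(a, b)] += weights[p]
        t[(b, a)] += weights[p]

def d(r, s):
    # Dissimilarity index (10.2); d(s, s) = 0 since t(s, s) = +infinity.
    return 0.0 if r == s else 1.0 / (1.0 + t[(r, s)])

# Densities for eta = 0.4, kernel set D for alpha = 1.5, then the
# greedy kernel clustering of Algorithm 10.5.
eta, alpha = 0.4, 1.5
density = {s: sum(1 for r in machines if d(s, r) < eta) for s in machines}
D = {s for s in machines if density[s] > alpha}
cells, assigned = [], set()
while D - assigned:
    r_star = max(D - assigned, key=lambda s: density[s])
    cell = {s for s in machines if s not in assigned and d(s, r_star) < eta}
    cells.append(cell)
    assigned |= cell
```

The script recovers the cells { M1, M2, M3 } and { M5, M6 } and leaves M4 unassigned, exactly as in the text.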
Table 10.13 Matrix A
      M1 M2 M3 M5 M6 M4
P1     1  1  1  0  0  0
P2     0  0  0  1  1  1
P3     1  1  1  0  0  1
P4     0  0  0  1  1  0
P5     1  1  1  0  0  0
P6     0  1  1  1  0  1
P7     0  0  0  1  1  0
P8     1  1  1  0  0  0
10.2.5 Location of Manufacturing Entities on an Available Space

The manufacturing entities having been designed, the next step consists in locating these entities on the available surface. Two types of approach exist: those that locate the entities on an unbounded surface (it is assumed that the building will be erected around the manufacturing system) and those that adapt the layout to an existing building. The CORELAP method belongs to the first type, while the INRIA-SAGEP method belongs to the second. We will also present CRAFT, another method that takes the layout constraints into account.

10.2.5.1 CORELAP (Computerized Relationship Layout Planning)
This approach is supposed to apply not only to manufacturing systems but also to any set of entities that exchange information or products. In this environment, "manufacturing entities" are called "departments". The method consists in locating departments on an unbounded surface, taking into account the "intensity" of the flows between departments (flows of information and/or flows of products or material). The first set of parameters reflects the intensity of the links between departments. Six intensity levels are considered, represented by six letters:

A: the departments must absolutely be close to each other;
E: it is very important that the departments are close to each other;
I: it is important that the departments are close to each other;
O: it is moderately important that the departments are close to each other;
U: it is unimportant whether the departments are close to each other or not;
X: the nearness between the departments should be avoided.

Some additional information can be linked to the previous ratings with the following codification:

1: the departments are linked by flows of material or products;
2: the departments are linked by flows of services;
3: the departments are linked by flows of documents;
4: the intensity of the links is due to the activities of the employees;
5: one of the departments should supervise the work done in the other;
6: the intensity of the links is due to the necessary contacts between employees;
7: the parameter of importance is noise;
8: various reasons.

This secondary set of information is not used to design the layout; it is only a reminder of the reasons why the main parameters have been selected. These two sets of information are usually represented as shown in Figure 10.5 for 7 departments.
Figure 10.5 Data representation (triangular relationship chart: each pair of the 7 departments carries a letter rating A/E/I/O/U/X and a numeric reason code)
In the CORELAP framework, each department:
• is made up of a given number of juxtaposed elementary squares;
• has a maximum number of elementary squares widthwise;
• has a given priority that suggests the order in which the departments should be placed.

Table 10.14 gathers this information for the example presented in Figure 10.5. Note that the shape of a department is only constrained by the number of elementary squares it is made of and by its width expressed in elementary squares.
Table 10.14 Priorities and sizes

Department                     1    2    3    4    5    6    7
Priority                      50   40   45   20   10   15   10
Number of elementary squares   9    5    6    6    6    6   10
Width                          3    2    3    2    2    2    5
The layout is obtained using the following algorithm.

Algorithm 10.6. (CORELAP)

1. We place the department of highest priority. For the above example, we place department 1 first. In CORELAP terminology, this department is called the "winner".
2. We check whether a department has a type-A link with the winner.
   2.1. If such a department exists (it is called "victorious"), it is placed as close as possible to the winner. In the current example, departments 2 or 3 could be selected; we select department 3, which has the higher priority.
   2.2. If no such department exists, we check whether a department has a type-A link with a victorious department.
      2.2.1. If such a department exists, it is placed as close as possible to the victorious department, which becomes a winner.
      2.2.2. If no such department exists, we apply the same process to links of type E (i.e., we restart the computation at Step 2 with the department of highest priority not yet placed, replacing A by E). When the computation with E is completed, we restart with I, then with O, and so on until all the departments are placed. For links of type X, the placements are made "as far as possible" instead of "as close as possible".
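Algorithm 10.6 mixes ordering and geometry; the ordering part alone can be sketched as follows. This is a deliberate simplification of ours: the geometric "as close as possible" placement is omitted, links are checked against any already-placed department, and the link data in the test are hypothetical:

```python
PRIORITY_LETTERS = ["A", "E", "I", "O", "U"]

def corelap_order(n_depts, priority, links):
    """Simplified placement order: start with the highest-priority
    department, then repeatedly place the highest-priority unplaced
    department linked (A first, then E, I, O, U) to a placed one.
    links[(i, j)] with i < j is the letter rating of the pair."""
    def rating(i, j):
        return links.get((min(i, j), max(i, j)), "U")
    placed = [max(range(1, n_depts + 1), key=lambda i: priority[i])]
    while len(placed) < n_depts:
        chosen = None
        for letter in PRIORITY_LETTERS:
            cands = [i for i in range(1, n_depts + 1) if i not in placed
                     and any(rating(i, p) == letter for p in placed)]
            if cands:
                chosen = max(cands, key=lambda i: priority[i])
                break
        if chosen is None:  # only X links remain: fall back to priority
            chosen = max((i for i in range(1, n_depts + 1) if i not in placed),
                         key=lambda i: priority[i])
        placed.append(chosen)
    return placed
```

With three departments where 3 has an A link to 1 and 2 only an E link to 3, the order is 1 (highest priority), then 3 (A link), then 2 (E link), mirroring Steps 1-2.2.2 above.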
A solution obtained for the example at hand is represented in Figure 10.6.
Figure 10.6 A solution
10.2.5.2 INRIA-SAGEP Method
The details of this method can be found in (Souilah, 1994). The problem consists in placing manufacturing entities (MEs), represented by rectangles of various dimensions, on a surface on which forbidden locations may exist (walls, supporting pillars, storage resources, power sources, etc.). The sides of a rectangle are parallel to two given orthogonal directions. The goal is to minimize a criterion that is the sum of the products (length of the path × product flow) over all pairs of MEs. Assume that 4 MEs have been placed as shown in Figure 10.7. The layout being given, the space is divided into strips (see R1, R2, … in the figure). Some strips are adjacent to each other, such as R1 and R3, or R11 and R12. A path that joins the exit of one ME to the input of another can be defined by a sequence of strips. For example, the path joining M2 to M3 can be expressed by the sequence R14, R12, R11, R7. A path is a sequence of horizontal and vertical segments.

Figure 10.7 A layout and a path (4 MEs M1-M4, forbidden locations, and the free space divided into strips R1-R16)
In the INRIA-SAGEP method, the shortest path between two MEs is computed using a branch-and-bound (B&B) approach (see the corresponding appendix). Thus, a layout being given, it is possible to compute the shortest paths corresponding to all the manufacturing processes. Since the product flows are known, the value of the criterion is straightforward to obtain. Let us now present the general approach. We use simulated annealing (see the appendix devoted to this method) to search for the optimal layout. This method requires the definition of a neighbor of a layout. A layout being given, two actions are available to generate a neighbor: permuting two MEs, or moving an ME toward an empty location. One of these two actions is chosen at random (with probability 0.5, for example).
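The general search loop can be sketched as a standard simulated-annealing skeleton, in which the two neighbor-generation actions described next are passed in as interchangeable move functions. All names and cooling parameters here are illustrative, not taken from the book:

```python
import math
import random

def simulated_annealing(initial_layout, cost, neighbors, T0=100.0,
                        cooling=0.95, steps_per_T=50, T_min=0.1, seed=0):
    """Generic simulated-annealing skeleton of the kind INRIA-SAGEP
    relies on. `neighbors` is a list of move functions (e.g. permute two
    MEs / move one ME to an empty location); one is drawn at random at
    each step. `cost` evaluates a layout (lower is better)."""
    rng = random.Random(seed)
    current, best = initial_layout, initial_layout
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            move = rng.choice(neighbors)       # e.g. probability 0.5 each
            candidate = move(current, rng)
            delta = cost(candidate) - cost(current)
            # Accept improvements always, degradations with prob e^(-delta/T).
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        T *= cooling
    return best
```

Replacing the abstract `cost` by the path-length × flow criterion and the moves by the two actions below yields the INRIA-SAGEP search; on a toy permutation problem the skeleton already finds the optimum.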
Permuting Two MEs

To perform this action, two MEs A and B are chosen at random. Let a and b be the intersections of the diagonals of the rectangles that represent these MEs. Permuting the MEs consists of exchanging a and b, and then choosing between the two possible orientations for each ME (see Figure 10.8).

Figure 10.8 Permuting two MEs (initial state, and the four possible states after permutation)
Four permutations are therefore possible. If one permutation meets the constraints that apply to the layout, it is selected; otherwise, we go back to the random choice between the two types of neighbor.
Moving an ME Toward an Empty Location

An elementary square is chosen at random on an empty part of the available surface. This square plays the role of one rectangle of a pair; the other rectangle is an ME chosen at random, and we proceed as for permuting two MEs. Finally, we need an initial feasible solution to start the simulated annealing process. Another possibility, which seems not to have been tried out by the authors, consists of applying simulated annealing without taking into account the constraints imposed by the available surface, but penalizing the criterion if these constraints are violated. The problem to solve in this case is to associate a penalty with each type of violation.

10.2.5.3 CRAFT (Computerized Relative Allocation of Facilities Technique)
CRAFT assumes that a feasible layout, that is to say a layout that meets the imposed constraints, is available. Note that various constraints may apply, such as nearness of MEs, compulsory location of MEs (clean rooms in the microprocessor industry, for example), or the necessity of keeping some MEs far apart from each other (because of noise or pollution). It is also assumed that a criterion is given and that a value of the criterion can be associated with each layout. CRAFT proceeds by progressive improvements of the initial situation. At each step, all the 2-by-2 and 3-by-3 permutations of MEs are tried out, and we keep the
best feasible solution if it is better than the best previous solution; otherwise, the algorithm stops. Note that if n resources are concerned then, for each iteration, C(n,2) + 6 × C(n,3) trials are required. This number can be expressed as:

n( n − 1 ) / 2 + n( n − 1 )( n − 2 )
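The two expressions for the number of trials per CRAFT iteration agree, as a two-line check confirms:

```python
from math import comb

def craft_trials(n):
    """Trials per CRAFT iteration as quoted in the text:
    all 2-exchanges plus 6 arrangements per triple, C(n,2) + 6*C(n,3)."""
    return comb(n, 2) + 6 * comb(n, 3)

def craft_trials_closed(n):
    """The equivalent closed form: n(n-1)/2 + n(n-1)(n-2)."""
    return n * (n - 1) // 2 + n * (n - 1) * (n - 2)
```

For n = 10 resources, for instance, each iteration examines 45 + 720 = 765 candidate layouts, which shows why CRAFT is restricted to moderate numbers of MEs.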
10.2.6 Layout Inside Manufacturing Entities

In this section, MEs refer to stations or cells. We use a three-step process to arrange the resources inside an ME:

1. selection of the transportation resource;
2. selection of a type of layout among some basic types;
3. arrangement of the resources on the available surface of the ME.
10.2.6.1 Selection of the Transportation Resource
There is usually only one type of transportation resource in an ME, since each ME is designed for a product family composed of product types that have similar manufacturing processes. Furthermore, the number of resources in an ME is small (fewer than 10 in most cases). The most common transportation resources are:

• overhead crane (small, medium or large);
• robot (articulated, or designed for pallets or heavy loads);
• automated guided vehicle (AGV) for parts or pallets;
• conveyor for parts or pallets;
• cart for parts or pallets.
A database and an expert system have been developed in (Hamann, 1992) to select the right transportation resource with regard to the characteristics of the ME. For example, an articulated robot is chosen if the following conditions are valid: (The number of machines in the ME is less than 5) & (The parts are not fixed on a pallet) & (The maximal weight of a part is 10 kg) & (The maximal length of a part is 1 m) & (The maximal width of a part is 1 m) & (The maximal height of a part is 1 m) & (The required transportation speed is less than 2.5 m/s) & (The
minimal positioning precision is 0.1 mm) & (The material flow is multidirectional) & (The transportation is possible with a 3-axis robot) & (The distance to cover is less than 2 m) & (The temperature of the environment remains below 70 °C) & (The vibration level is small) & (The required energy is available). This is only one example. Furthermore, the database should be refreshed in order to keep pace with technological evolution.

10.2.6.2 Basic Layout Types
The basic layout types are represented in Figure 10.9. Robots of different sizes are the transportation resources in a circular layout. The type of robot selected depends mainly on the weights and the dimensions of the products, as well as on the precision of the positioning of the products on the manufacturing resources and the number of rotations required to position the product. In a linear or bilinear layout, the transportation resource may either be a conveyor, overhead crane, cart, or AGV and depends on the weights, size and packaging of the products, but also on the number of manufacturing resources as well as the fact that product flows may be unidirectional. In a multilinear layout, it is possible to use either carts, conveyors, or AGV, depending on numerous factors such as the number of manufacturing resources, flexibility level of the transportation system, positioning of the input and output of the cell, type of machines as well as the fact that product flows are unidirectional or not, etc.
Figure 10.9 The basic types of intracell layouts (circular, linear, bilinear and multilinear); the figure distinguishes manufacturing resources from transportation resources
A database and an expert system have also been developed in (Hamann, 1992) to select the type of intracell layout. As an example, the rule that leads to a multilinear layout is the following:
If [(The transportation resource selected in the previous step is an AGV) & (The number of manufacturing resources in the cell is greater than or equal to 7 and less than or equal to 12) & (The flexibility level of the manufacturing system is high) & (The product flows are not unidirectional)]
Or [(The transportation resource selected in the previous step is an AGV) & (The number of manufacturing resources in the cell is greater than 12) & (The flexibility level of the manufacturing system is medium or high) & (The product flows are not unidirectional)]
Or [(The transportation resource selected in the previous step is an AGV) & (The number of manufacturing resources in the cell is greater than 16) & (The product flows are not unidirectional)].
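Rules of this kind are straightforward to encode as Boolean predicates. The sketch below is only an illustration: the function and parameter names are hypothetical, since the actual implementation in (Hamann, 1992) is not described here.

```python
# A sketch of the multilinear-layout rule above.
# Function and parameter names are hypothetical, not from (Hamann, 1992).
def multilinear_rule(resource, n_resources, flexibility, unidirectional):
    agv = (resource == "AGV")
    return (
        (agv and 7 <= n_resources <= 12 and flexibility == "high"
         and not unidirectional)
        or (agv and n_resources > 12 and flexibility in ("medium", "high")
            and not unidirectional)
        or (agv and n_resources > 16 and not unidirectional)
    )

print(multilinear_rule("AGV", 10, "high", False))       # first clause applies
print(multilinear_rule("conveyor", 10, "high", False))  # not an AGV
```

Encoding each rule as a pure predicate makes it easy to refresh the rule base as the technology evolves, which is exactly the maintenance requirement mentioned above.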
Note: It is sometimes necessary to introduce a layout that does not belong to one of the four basic layout types. An example of such a layout is given in Figure 10.10.
Figure 10.10 A particular layout
10.2.6.3 Arrangement of the Resources on the Available Surface of a ME
There is no particular method for arranging the manufacturing resources inside a ME. A layout type, if chosen, imposes some constraints on the arrangement of the manufacturing resources (in particular if the circular layout model is chosen). The flows between resources also provide useful information.
10.2.7 Balancing of the Manufacturing Entities

In this section, we do not discuss the case of the stations in assembly lines: this case is discussed in Chapters 7 and 8. We restrict ourselves to the case of manufacturing entities (cells and functional departments). The problem we are interested in is set hereafter.

$n$ types of operations have to be performed in the ME under consideration. $n_j$ is the number of operations of type $j \in \{1, \ldots, n\}$ to be executed during an elementary period of duration $T$. $r$ types of manufacturing resources are available to perform these operations, and $\theta_{i,j}$ is the time required by a resource of type $i \in \{1, \ldots, r\}$ to perform one operation of type $j \in \{1, \ldots, n\}$. If an operation of type $j$ cannot be executed by a resource of type $i$, we set $\theta_{i,j} = +\infty$.

We denote by $w_i$ the number of manufacturing resources of type $i$ available in the ME and by $x_{i,j}$ the number of operations of type $j$ that are performed by the resources of type $i$ during an elementary period $T$. We denote by $c_i$ the cost of one manufacturing resource of type $i$ and by $h_i$ the running cost of a manufacturing resource of type $i$ over one elementary period. We also denote by $\alpha$ the actualization rate and by $\beta$ the depreciation rate; both are assumed to be constant. $H$ is the number of elementary periods over which the study is conducted. This variable is called the "horizon of the problem". For $i = 1, \ldots, r$, the cost of one manufacturing resource of type $i$ at the horizon $H$ is:

$$K_i = c_i \left[ (1+\alpha)^H - (1-\beta)^H \right] + h_i \, \frac{1+\alpha}{\alpha} \left[ (1+\alpha)^H - 1 \right]$$

Thus, the criterion to minimize is:

$$C(X, W) = \sum_{i=1}^{r} w_i K_i \qquad (10.3)$$
where $W$ represents the set of variables $w_i$ and $X$ the set of variables $x_{i,j}$. The minimization process is subject to the following constraints. The first constraints guarantee that all the required operations will be performed:

$$\sum_{i=1}^{r} x_{i,j} = n_j \quad \text{for } j = 1, \ldots, n \qquad (10.4)$$
Constraints 10.5 force the system to perform the required operations during one elementary period:

$$\sum_{j=1}^{n} x_{i,j} \, \theta_{i,j} \leq w_i T \quad \text{for } i = 1, \ldots, r \qquad (10.5)$$
In addition, the variables $w_i$ and $x_{i,j}$ are integers. This problem is an integer linear programming problem that can be solved with commercially available software tools such as CPLEX or Xpress-MP. Solving this problem provides not only the number of resources of each type to integrate in the ME, but also the production planning.
Remark: Criterion 10.3 should be modified if the depreciation rate and/or the actualization rate depend on time (i.e., on the elementary period).
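For very small instances, the model (10.3)-(10.5) can even be solved by plain enumeration, which makes its structure explicit. The sketch below uses hypothetical data (2 resource types, 2 operation types, and invented values of $n_j$, $\theta_{i,j}$, $T$ and $K_i$); a real instance would of course go through an ILP solver.

```python
from itertools import product

# Hypothetical instance: r = 2 resource types, n = 2 operation types.
n_ops = [4, 3]            # n_j: operations of each type per period
theta = [[1.0, 2.0],      # theta[i][j]: time for one type-j operation
         [2.0, 1.0]]      #              on a type-i resource
T = 8.0                   # duration of an elementary period
K = [100.0, 120.0]        # K_i: cost of one type-i resource at horizon H

best = None
for w in product(range(5), repeat=2):          # candidate w_i (small bound)
    for x00 in range(n_ops[0] + 1):            # split type-1 operations
        for x01 in range(n_ops[1] + 1):        # split type-2 operations
            # Constraints 10.4 hold by construction: the rows sum to n_j.
            x = [[x00, x01], [n_ops[0] - x00, n_ops[1] - x01]]
            # Constraints 10.5: the workload fits into the period.
            feasible = all(
                sum(x[i][j] * theta[i][j] for j in range(2)) <= w[i] * T
                for i in range(2))
            if feasible:
                cost = sum(w[i] * K[i] for i in range(2))  # Criterion 10.3
                if best is None or cost < best[0]:
                    best = (cost, w, x)

print(best)  # minimal cost, resource counts w_i, and the operation split x
```

On this toy instance the optimum is to buy two type-1 resources and none of type 2, which also fixes the production planning (the split $x_{i,j}$).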
10.3 Facility Layout in a Dynamic Environment

10.3.1 Changes in the Needs of Manufacturing Systems

We now introduce dynamic facility layouts (DFL) and robust layouts (RL). In modern production systems, there is a need for a new generation of manufacturing layouts that are flexible and easy to reconfigure, which calls for modularity. The goal is to avoid redesigning the layout each time significant manufacturing changes occur. These manufacturing layouts are called DFL. Indeed, redesigning a layout is usually expensive, due to the workforce involved in this activity and the need for new resources, and it is also commercially risky since it requires slowing down, or even temporarily interrupting, production. This is why companies are also looking for layouts that can be used efficiently over many product mixes and volumes with satisfactory results; these layouts are referred to as robust layouts (RL).

Several factors advocate the use of DFL or RL. According to (Tompkins et al., 1996), 20 to 50% of manufacturing costs are due to material handling, and an effective layout design may reduce these costs by 10 to 30%. Another factor that makes DFL promising is the rapid advance of materials engineering and manufacturing technology. As mentioned in (Heragu and Kochhar, 1994), composites have become the primary choice for a number of manufactured components; for example, aluminum composites can replace cast iron and phenolics can replace aluminum, see (Arimond and Ayles, 1993). These materials are light and have excellent mechanical properties. A consequence is that manufacturing resources do not need strong foundations, making them easy to move. Manufacturing processes also progress, facilitating the design of light tools and reducing the time for mounting and dismounting manufacturing resources. Thus, the tendency of numerous manufacturing systems of the future is to be lightweight and to use transportation and handling equipment that is easy to reconfigure.

Note that the objective of DFL problems is quite limited. It consists of rearranging the manufacturing facilities (usually gathered together in "departments") when changes of material flows between departments and/or flow volumes occur; the criterion to minimize is the sum of the rearrangement and material handling costs. Similarly, the goal of RL problems is to find the locations of a given set of departments in the same number of sites that can deal, at low cost, with all the forecasted scenarios.

Several authors have addressed DFL design problems when product mix and production volume vary from period to period. Solving such a problem implies that all the possible mixes and volumes are known over each one of the periods under consideration, which is difficult and often impossible to obtain. It is also assumed that the cost for switching from one layout to another, whatever the pair of layouts concerned, as well as the cost for running the system over one period for each of the possible layouts, are known. In particular, this requires knowing, at least implicitly, all feasible layouts. If the cost for switching from one layout to another is high compared with the running costs, then it is often preferable to compute a layout that is satisfactory whatever the environment. In other words, the goal is to find a RL, that is to say a layout that minimizes the average cost, defined as the sum of the running costs for the different environments weighted by the probabilities of these environments.
Let us stress the fact that, in DFL or RL problems, the goal is to assign predefined departments to predefined locations in order to optimize some criterion, usually related to a cost. This is a way to implicitly define all layouts but, at the same time, it drastically reduces the number of possible layouts. Furthermore, departments are predefined, but no information is given about how to design these departments. To summarize, strong assumptions are made in DFL and RL problems:
• The so-called departments are frozen; redesigning a department is out of scope.
• Only the rearrangement and material handling costs are taken into account. This implies that only product mixes and volumes are allowed to change.
• Possible department locations are laid down in advance, which reduces the set of possible solutions.
Thus, DFL and RL problems are based on assumptions that situate them at a different conceptual level from static facility layout (SFL) problems.
10.3.2 Robust Layouts

Robust layouts are usually efficient when product mix and production volumes fluctuate only to a limited extent. Two models are proposed to illustrate this concept. The first one is derived from (Rosenblatt and Lee, 1987) and concerns discrete demand, while the second one illustrates situations where demand is continuous.

10.3.2.1 Case of Discrete Demand
Introduction

In this model, it is assumed that $n$ sites $S_1, \ldots, S_n$ are available to locate $n$ departments $D_1, \ldots, D_n$. We also know, for each of the $R$ types of products to manufacture:
• The weight $w_r$, $r \in \{1, \ldots, R\}$, of product type $r$. This parameter is proportional to the average number of demands (i.e., of orders) for this type of product.
• The quantities $q_r(1), \ldots, q_r(K_r)$ of products of type $r \in \{1, \ldots, R\}$ that may be ordered. The probability associated with demand quantity $q_r(k)$ is $p_r(k)$. In this model, it is assumed that demand is discrete. We may also assume that demand is continuous; in this case, it is characterized by a probability density.
• The sequence $D_r(1), \ldots, D_r(N_r)$ of departments a product of type $r$ has to visit in order to be completed.
The distances between the sites are known: $d(i, j)$ is the distance between sites $S_i$ and $S_j$, where $i, j \in \{1, \ldots, n\}$. Indeed, $d(i, i) = 0$.
The transportation cost is defined as follows: $c(i, j)$ is the cost incurred per unit of distance when transporting one unit of product between departments $D_i$ and $D_j$, where $i, j \in \{1, \ldots, n\}$.
In this model, the objective is to locate the departments in the sites so as to minimize the mean transportation cost. Exactly one department is located in each site. There are $n!$ possible assignments of departments to sites; for example, if $n = 10$ the number of possible assignments is $10! = 3\,628\,800$. Thus, it is difficult to investigate all the possible assignments if the size of the problem is greater than 10.

The Algorithm

Consider a permutation $i_1, \ldots, i_n$ of $1, \ldots, n$. This permutation means that $D_k$ is assigned to $S_{i_k}$. To investigate all the possible assignments, we have to generate all the permutations of $1, \ldots, n$. The following simple algorithm provides these permutations.

Algorithm 10.7. (Permu)
1. Set $s(i) = i$, $i = 1, \ldots, n$ (initial permutation).
2. Let $k$ be the greatest integer less than $n$ such that $s(k) < s(k+1)$. If $k$ does not exist, all the permutations have been found: the algorithm stops.
3. If $k > 1$, set $u(i) = s(i)$ for $i = 1, \ldots, k-1$.
4. Let $r$ be such that $s(r) = \min_{j = k+1, \ldots, n} \{\, s(j) : s(j) > s(k) \,\}$.
4.1. Set $u(k) = s(r)$.
4.2. Set $i = k$ and $j = k+1$.
4.3. While $j \leq n$ do:
4.3.1. If $i \neq r$, set $u(j) = s(i)$ and $j = j+1$.
4.3.2. Set $i = i+1$.
4.4. Arrange $u(k+1), \ldots, u(n)$ in increasing order.
4.5. Set $s(i) = u(i)$ for $i = 1, \ldots, n$.
4.6. Go to 2.
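Algorithm 10.7 is the classical lexicographic next-permutation procedure. A compact sketch (0-based indexing, so the bookkeeping differs slightly from the step numbering above):

```python
def next_permutation(s):
    """Return the lexicographic successor of permutation s, or None (step 2)."""
    s, n = list(s), len(s)
    # Step 2: largest k with s[k] < s[k+1].
    k = next((i for i in range(n - 2, -1, -1) if s[i] < s[i + 1]), None)
    if k is None:
        return None                      # s was the last permutation
    # Step 4: among positions right of k, take the smallest value above s[k].
    r = min((j for j in range(k + 1, n) if s[j] > s[k]), key=lambda j: s[j])
    s[k], s[r] = s[r], s[k]              # steps 4.1-4.3, done as a swap
    s[k + 1:] = sorted(s[k + 1:])        # step 4.4: sort the tail
    return s

perms = [[1, 2, 3]]
while (p := next_permutation(perms[-1])) is not None:
    perms.append(p)
print(len(perms))  # 3! = 6 permutations, generated in lexicographic order
```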
The layout optimization algorithm is obtained by introducing a simulation procedure just after instructions 1 (initial permutation) and 4.5. This simulation consists of applying the following process:

Algorithm 10.8. (Simulation)

1. Set $ct = 0$. The value of $ct$ will be the mean transportation cost corresponding to permutation $s(1), \ldots, s(n)$.
2. For $i = 1, \ldots, N$ do:
2.1. Set $g = 0$.
2.2. For $r = 1, \ldots, R$ do:
2.2.1. Select at random one of the values $q_r(1), \ldots, q_r(K_r)$ according to the given demand probabilities for product $r$. Let $Q_r$ be this value.
2.2.2. Compute the transportation cost $Z$ related to product type $r$, taking into account the ordered quantity $Q_r$, the weight $w_r$ and the departments visited (and thus their locations and the costs per unit of distance).
2.2.3. Compute $g = g + Z$.
2.3. Compute $ct = ct + g / N$.
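Combining Algorithms 10.7 and 10.8 amounts to an exhaustive search over permutations. A sketch is given below, applied to the data of the numerical example that follows, but with two simplifications: the random sampling of step 2.2.1 is replaced by the expected demand quantities, and the cost convention (weight × expected quantity × the sum of $c \cdot d$ along the route) is an assumption, since the text does not spell out exactly how the weight $w_r$ enters; the resulting value therefore need not reproduce the simulated cost reported in the example.

```python
from itertools import permutations

# Data of the numerical example below (n = 4 departments/sites, 0-based).
# d[k][r]: distance between sites S_{k+1} and S_{r+1}
d = [[0, 2, 6, 1], [2, 0, 3, 5], [6, 3, 0, 2], [1, 5, 2, 0]]
# c[i][j]: cost per unit distance between departments D_{i+1} and D_{j+1}
c = [[0, 5, 3, 7], [5, 0, 8, 4], [3, 8, 0, 9], [7, 4, 9, 0]]
# (weight, route of departments, [(quantity, probability), ...])
products = [
    (2, [0, 1, 2],    [(20, 0.3), (40, 0.6), (80, 0.1)]),
    (3, [0, 3, 1],    [(10, 0.4), (20, 0.3), (30, 0.3)]),
    (2, [0, 2, 3, 1], [(20, 0.2), (30, 0.6), (50, 0.2)]),
]

def mean_cost(site):
    """site[i] = index of the site holding department i (one permutation)."""
    total = 0.0
    for weight, route, demand in products:
        eq = sum(q * p for q, p in demand)         # expected quantity E[Q_r]
        per_unit = sum(c[i][j] * d[site[i]][site[j]]
                       for i, j in zip(route, route[1:]))
        total += weight * eq * per_unit            # assumed cost convention
    return total

best = min(permutations(range(4)), key=mean_cost)
print(best, mean_cost(best))
```

Using expected demands makes the evaluation deterministic; the simulated version of Algorithm 10.8 converges to the same ranking of permutations as $N$ grows.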
The positions of the departments in the sites are given by the permutation that leads to the lowest mean transportation cost.

A Numerical Example

We consider the case of 4 departments and sites ($n = 4$). In this case, $4! = 24$ solutions are possible. The distances between the sites (Table 10.15) and the transportation costs per unit of distance between departments (Table 10.16) are given hereafter. The data related to the three types of products concerned are:
Product 1: Weight: 2, departments visited: $D_1, D_2, D_3$. Possible demands and corresponding probabilities: (20, 0.3), (40, 0.6), (80, 0.1).
Product 2: Weight: 3, departments visited: $D_1, D_4, D_2$. Possible demands and corresponding probabilities: (10, 0.4), (20, 0.3), (30, 0.3).

Table 10.15 Distances between sites
      S1   S2   S3   S4
S1     0    2    6    1
S2     2    0    3    5
S3     6    3    0    2
S4     1    5    2    0
Table 10.16 Transportation costs
S1 S2 S3 S4 S1 0
5
3
7
S2 5
0
8
4
S3 3
8
0
9
S4 7
4
9
0
Product 3: Weight: 2, departments visited: $D_1, D_3, D_4, D_2$. Possible demands and corresponding probabilities: (20, 0.2), (30, 0.6), (50, 0.2).

We ran the program successively with 1000, 10 000 and 30 000 simulations; these are the values of $N$. Applying the algorithm leads to the optimal permutation 4, 1, 2, 3 and to the mean transportation cost 7141.65 when $N$ = 30 000. We obtain the same optimal permutation for $N$ = 1000 and $N$ = 10 000. Thus, the optimal layout consists of locating $D_1$ in $S_4$, $D_2$ in $S_1$, $D_3$ in $S_2$ and $D_4$ in $S_3$.

Simulated Annealing Approach

As mentioned at the beginning of this section, the above algorithm cannot be applied if $n > 10$ due to the computational burden. To overcome this problem, we developed a simulated annealing approach that is summarized hereafter. The details of the simulated annealing method are given in Appendix A.

Algorithm 10.9.
1. Introduce the temperature $T$, the factor $\alpha \in \,]0, 1[\,$ used to decrease the temperature, and the lower bound $m$ of the temperature.
2. Set $s(i) = i$ for $i = 1, \ldots, n$ (initial permutation).
3. Evaluate the cost $ct$ corresponding to the initial permutation. Simulation is used as in the previous algorithm.
4. Set $cte = ct$ and $se(i) = s(i)$ for $i = 1, \ldots, n$. At each step of the algorithm, these variables contain the previous solution.
5. Set $ctp = ct$ and $sp(i) = s(i)$ for $i = 1, \ldots, n$. At each step of the algorithm, these variables contain the best solution obtained so far.
6. Switch two elements of the permutation $s$, chosen at random. The new permutation is still called $s$.
7. Evaluate the corresponding cost $ct^*$.
8. If $ct^* < cte$, then:
8.1. If $ct^* < ctp$, then:
8.1.1. Set $ctp = ct^*$;
8.1.2. Set $sp(i) = s(i)$, $i = 1, \ldots, n$.
8.2. Set $cte = ct^*$.
8.3. Set $se(i) = s(i)$, $i = 1, \ldots, n$.
9. If $ct^* \geq cte$, then:
9.1. Set $\Delta = ct^* - cte$.
9.2. Compute $pp = \exp(-\Delta / T)$.
9.3. Generate at random $x \in [0, 1]$ (uniform distribution).
9.4. If $x < pp$, set $cte = ct^*$ and $se(i) = s(i)$ for $i = 1, \ldots, n$.
9.5. If $x \geq pp$, set $s(i) = se(i)$ for $i = 1, \ldots, n$.
10. Set $T = \alpha \times T$.
11. If $T > m$, go to 6; else print the solution $ctp$ and $sp(i)$, $i = 1, \ldots, n$.
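Algorithm 10.9 can be sketched directly in code. The instance below is hypothetical: to keep the sketch self-contained, the cost is a simple flow-weighted distance criterion rather than the simulated mean transportation cost, but the annealing loop (swap move, Metropolis acceptance, geometric cooling) follows the steps above.

```python
import math
import random

random.seed(0)  # reproducible run

n = 5
dist = [[abs(k - r) for r in range(n)] for k in range(n)]   # site distances
flow = [[(i * j) % 4 for j in range(n)] for i in range(n)]  # toy flows

def cost(s):
    # s[i] = site of department i; flow-weighted distance criterion
    return sum(flow[i][j] * dist[s[i]][s[j]]
               for i in range(n) for j in range(n))

T, alpha, m = 100.0, 0.99, 1.0          # step 1
s = list(range(n))                      # step 2: initial permutation
cur = cost(s)                           # step 3
best, best_cost = s[:], cur             # steps 4-5
while T > m:                            # step 11: stop when T <= m
    i, j = random.sample(range(n), 2)   # step 6: swap two elements
    s[i], s[j] = s[j], s[i]
    new = cost(s)                       # step 7
    # step 8 (improvement) or step 9 (Metropolis acceptance with prob pp)
    if new < cur or random.random() < math.exp(-(new - cur) / T):
        cur = new
        if new < best_cost:
            best, best_cost = s[:], new
    else:
        s[i], s[j] = s[j], s[i]         # reject: restore previous solution
    T *= alpha                          # step 10
print(best, best_cost)
```

With $T = 100$, $m = 1$ and $\alpha = 0.99$ the loop performs roughly 460 iterations, which is why this approach scales where exhaustive enumeration does not.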
We applied this algorithm with $T$ = 100, $m$ = 1 and $\alpha$ = 0.99 to the previous numerical example and obtained the same solution as above. This solution was reached at iteration 307.
Remark: As mentioned in Appendix A, several methods are possible for reducing the temperature.

10.3.2.2 Case of Continuous Demand
Let $y_{i,j}$ be the bidirectional stochastic flow between departments $D_i$ and $D_j$, $f_{i,j}(\cdot)$ the probability density of this flow, and $a_{i,j}$ (respectively, $b_{i,j}$) the minimum (respectively, maximum) value this flow can take, for $i, j \in \{1, \ldots, n\}$. We define:

$$X_{i,k} = \begin{cases} 1 & \text{if } D_i \text{ is assigned to site } S_k \\ 0 & \text{otherwise} \end{cases}$$

We know, for all $i, j \in \{1, \ldots, n\}$:
• $d_{i,j}$, the distance between sites $S_i$ and $S_j$;
• $c_{i,j}$, the transportation cost of one unit of product over one unit of distance between departments $D_i$ and $D_j$.
As in the previous model, it is assumed that exactly one department is assigned to each site. The total transportation cost depends on the flows and, as a consequence, is stochastic:

$$K(\{X_{u,v}\}_{u,v \in \{1, \ldots, n\}}) = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{r=1}^{n} X_{i,k} \, X_{j,r} \, d_{k,r} \, c_{i,j} \, y_{i,j} \qquad (10.6)$$
Since $y_{i,j}$ is a stochastic variable, we minimize the mean value of the cost, given by Relation 10.7:

$$K(\{X_{u,v}\}_{u,v \in \{1, \ldots, n\}}) = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{r=1}^{n} X_{i,k} \, X_{j,r} \, d_{k,r} \, c_{i,j} \, E[y_{i,j}] \qquad (10.7)$$

where $E[y_{i,j}] = \int_{a_{i,j}}^{b_{i,j}} y \, f_{i,j}(y) \, dy$,

subject to the constraints:

$$\sum_{k=1}^{n} X_{i,k} = 1, \quad i = 1, \ldots, n \qquad (10.8)$$
Constraints 10.8 guarantee that each department is assigned to a site.

$$\sum_{i=1}^{n} X_{i,k} = 1, \quad k = 1, \ldots, n \qquad (10.9)$$
Constraints 10.9 guarantee that each site contains one department. Furthermore, the variables $X_{i,k}$ are equal either to 0 or 1. This quadratic problem can be solved optimally only for small instances. A heuristic algorithm must be used for medium- or large-size problems. Simulated annealing (SA) works well (see Section 10.3.2.1 and Appendix A). As in the previous section, the SA algorithm starts with a feasible solution: in our case, a set of 0-1 values assigned to the variables that satisfies Constraints 10.8 and 10.9. At each iteration, we select at random two variables $X_{i,k}$ and $X_{j,r}$ whose values are equal to 1. The neighbor solution is obtained by setting $X_{i,r} = 1$, $X_{i,k} = 0$, $X_{j,k} = 1$, $X_{j,r} = 0$. The value of the criterion is obtained using Relation 10.7.
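The only stochastic ingredient of the model enters through the expectations $E[y_{i,j}]$, which can be computed once, analytically or numerically, before the optimization starts. A sketch with a (hypothetical) uniform density, using the midpoint rule:

```python
# E[y] = integral of y * f(y) dy over [a, b], evaluated by the midpoint rule.
def expected_flow(a, b, f, steps=10_000):
    h = (b - a) / steps
    mids = (a + (k + 0.5) * h for k in range(steps))  # subinterval midpoints
    return sum(y * f(y) * h for y in mids)

# Uniform density on [10, 30]: f(y) = 1/(b - a), so E[y] = (a + b)/2 = 20.
a, b = 10.0, 30.0
print(expected_flow(a, b, lambda y: 1.0 / (b - a)))
```

Once the matrix of expectations is precomputed, Relation 10.7 becomes a deterministic quadratic assignment criterion and the SA neighborhood described above applies unchanged.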
10.3.3 Dynamic Facility Layout

10.3.3.1 Introduction

The dynamic facility layout (DFL) problem is based on the forecasted changes in production flows during future elementary periods. An elementary period may be, for example, a month. It is assumed that the production flows remain constant throughout an elementary period. Assume that the horizon of the problem, that is to say the number of consecutive elementary periods over which we want to minimize the total cost defined below, is $H$. Solving the problem consists in finding a sequence of $H$ facility layouts, each of them associated with an elementary period, so as to minimize the sum of the material handling and rearrangement costs over all the elementary periods. Note that the same layout can be assigned to several consecutive periods.

Rearrangement costs result from moving departments between sites. These costs include production loss, labor cost, equipment cost, inventory cost (inventories are introduced in order to ensure a minimal selling activity during the rearrangement period), etc. When rearrangement costs are significant, the number of rearrangements is reduced as much as possible; as a consequence, the same layout is usually assigned to several consecutive elementary periods. Material handling costs mainly include transportation and labor costs. Remember that only the sum of the material handling costs is involved when looking for a robust layout (RL).

10.3.3.2 Mathematical Model
The model presented in this section was proposed in (McKendall et al., 2006). We denote by $n$ the number of departments and sites. The number of possible solutions for $H$ elementary periods is $(n!)^H$; thus, only small problems can be solved optimally in a reasonable computation time. To express the mathematical model, we need the following notations:

$$x_{h,i,j} = \begin{cases} 1 & \text{if department } i \text{ is assigned to site } j \text{ during elementary period } h \\ 0 & \text{otherwise} \end{cases}$$

$$y_{h,i,j,r} = \begin{cases} 1 & \text{if department } i \text{ is transferred from site } j \text{ to site } r \text{ at the beginning of period } h \\ 0 & \text{otherwise} \end{cases}$$

$a_{h,i,j,r}$ is the cost of transferring department $i$ from site $j$ to site $r$ at the beginning of period $h$.
$b_{h,i,j,r,s}$ is the cost of handling material flows between department $i$ located in site $j$ and department $r$ located in site $s$ during elementary period $h$.
With these notations, the total cost $CT$ to minimize is:

$$CT = \underbrace{\sum_{h=2}^{H} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{r=1}^{n} a_{h,i,j,r} \, y_{h,i,j,r}}_{\text{Rearrangement cost}} + \underbrace{\sum_{h=2}^{H} \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{r=1}^{n} \sum_{s=1}^{n} b_{h,i,j,r,s} \, x_{h,i,j} \, x_{h,r,s}}_{\text{Material handling cost}} \qquad (10.10)$$
subject to the following constraints:

$$\sum_{j=1}^{n} x_{h,i,j} = 1, \quad \text{for } h = 1, \ldots, H \text{ and } i = 1, \ldots, n \qquad (10.11)$$

Constraints 10.11 guarantee that each department is assigned to a site during each period.

$$\sum_{i=1}^{n} x_{h,i,j} = 1, \quad \text{for } h = 1, \ldots, H \text{ and } j = 1, \ldots, n \qquad (10.12)$$

Constraints 10.12 guarantee that each site contains a department during each period.

$$y_{h,i,j,r} = x_{h-1,i,j} \times x_{h,i,r}, \quad \text{for } i, j, r = 1, \ldots, n \text{ and } h = 2, \ldots, H \qquad (10.13)$$

Constraints 10.13 ensure that, if a department $i$ is transferred from site $j$ to site $r$ at the beginning of elementary period $h$, then it was in site $j$ during period $h-1$ and will be in site $r$ during period $h$.
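Relation 10.10 is easy to evaluate once the assignment variables are fixed. A toy check with $n = 2$ departments/sites and $H = 2$ periods, using flat (hypothetical) costs $a$ and $b$; a real instance would have cost tensors indexed as in the model, with $b_{h,i,j,r,s} = 0$ when $i = r$:

```python
n, H = 2, 2
# x[h][i][j] = 1 if department i occupies site j during period h (0-based h)
x = [[[1, 0], [0, 1]],   # period 1: D1 -> S1, D2 -> S2
     [[0, 1], [1, 0]]]   # period 2: the two departments are swapped
# y[h][i][j][r] = 1 if department i moves from site j to site r at start of h
y = [[[[0, 0], [0, 0]], [[0, 0], [0, 0]]],   # no moves before period 1
     [[[0, 1], [0, 0]], [[0, 0], [1, 0]]]]   # both departments move
a = 5.0   # flat transfer cost per move (assumption)
b = 1.0   # flat handling cost per occupied pair (assumption)

rearrangement = sum(a * y[h][i][j][r]
                    for h in range(1, H)
                    for i in range(n) for j in range(n) for r in range(n))
handling = sum(b * x[h][i][j] * x[h][r][s]
               for h in range(1, H)
               for i in range(n) for j in range(n)
               for r in range(n) for s in range(n))
CT = rearrangement + handling
print(CT)  # 2 moves * 5.0 + 4 occupied pairs * 1.0 = 14.0
```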
10.3.3.3 Simulated Annealing Approach
We present a simulated annealing (SA) approach for the dynamic facility layout (DFL) problem. We assume that the reader is familiar with the appendix dedicated to SA. In this kind of approach, the following aspects are pivotal:
• We must be able to define an initial feasible solution. With the notations introduced in the previous section, such a solution could be obtained by setting:
$x_{h,i,i} = 1$ for $i = 1, \ldots, n$ and $h = 1, \ldots, H$;
$x_{h,i,j} = 0$ for $i, j = 1, \ldots, n$, $i \neq j$, and $h = 1, \ldots, H$;
$y_{h,i,i,i} = 1$ for $i = 1, \ldots, n$ and $h = 1, \ldots, H$;
$y_{h,i,i,s} = 0$ for $i, s = 1, \ldots, n$, $i \neq s$, and $h = 1, \ldots, H$.
• We have to define the initial temperature and the way the temperature decreases with the iterations. Usually this is done experimentally, the goal being to guarantee an adequate number of iterations in order to explore a large number of solutions, and to allow great flexibility in the evolution of the solutions, that is to say the possibility of exploring solutions that are worse than those obtained previously.
• We also have to define the way in which a neighbor solution is derived from a given solution. In the problem at hand, the neighbor solution is generated as follows:
– Choose at random an elementary period $h \in \{1, \ldots, H\}$.
– Choose at random two departments $i$ and $j$ in the layout corresponding to period $h$.
– Assign department $i$ to the site of $j$ and $j$ to the site of $i$.
According to the notations of Section 10.3.3.2, these steps can be rewritten as follows:
– Choose at random an elementary period $h \in \{1, \ldots, H\}$.
– Choose at random $i, j \in \{1, \ldots, n\}$, $i \neq j$.
– For $r$ such that $x_{h,i,r} = 1$ and $s$ such that $x_{h,j,s} = 1$, set $x_{h,i,r} = 0$, $x_{h,i,s} = 1$, $x_{h,j,s} = 0$, $x_{h,j,r} = 1$. In addition, for $u$ such that $y_{h,i,u,r} = 1$, set $y_{h,i,u,r} = 0$ and $y_{h,i,u,s} = 1$; for $w$ such that $y_{h,j,w,s} = 1$, set $y_{h,j,w,s} = 0$ and $y_{h,j,w,r} = 1$.
• The computation of the criterion is straightforward by applying Relation 10.10.
10.3.3.4 Dynamic Programming Approach
General Formulation

Let $CT(L, h)$ be the minimal total cost, that is, the sum of the department transfer costs and the material handling costs over all elementary periods up to period $h$, where layout $L$ is the layout used during period $h$. Recall that $n!$ layouts can be associated with each elementary period; we denote by $LT$ this set of layouts, so $L \in LT$.
The general dynamic programming (DP) recursion is (see the appendix devoted to this method), for $h = 1, \ldots, H$:

$$CT(L^*, h) = \min_{L \in LT} \{\, CT(L, h-1) + R(L^*, L, h) \,\} + MH(L^*, h) \qquad (10.14)$$

where:
• $R(L^*, L, h)$ is the rearrangement cost incurred when replacing layout $L$ by layout $L^*$ at the beginning of period $h$. This cost is the sum of the $a_{h,i,j,r}$ over the departments $i$ that change location at the beginning of period $h$ (see Section 10.3.3.2).
• $MH(L^*, h)$ is the material handling cost for layout $L^*$ during elementary period $h$. This cost is the sum of the $b_{h,i,j,r,s}$ concerning the flows in $L^*$ (see Section 10.3.3.2).
Note that $R(L^*, L, 1)$ is the cost incurred for implementing the first layout $L^*$ and that $CT(L, 0) = 0$.
Unfortunately, it is impossible to use this general formulation for most real-life problems, due to the size of the set $LT$. Several approaches have been proposed to reduce this size, but these simplified dynamic programming approaches do not guarantee optimality.
Simplified Heuristic Formulations

Some authors have suggested replacing $LT$ by a subset $LT^*(h) \subset LT$ composed of layouts chosen at random for each elementary period. An improvement of this approach is to select the $N$ best static layouts of elementary period $h$ to compose $LT^*(h)$. A further improvement is to add to every $LT^*(h)$ the best static layout over all the elementary periods. Another approach, proposed in (Sweeney and Tatham, 1976), consists in introducing a budgetary constraint on layout rearrangements. As a consequence, the set $LT^*(h) \subset LT$ becomes $LT^*(h, L^*) \subset LT$: only the layouts $L$ such that the rearrangement cost to derive $L^*$ from $L$ is less than a given value are considered.
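For small $n$ and $H$, Recursion 10.14 can be run over the full set $LT$. The sketch below uses $n = 3$, $H = 3$, hypothetical per-period flows, a flat rearrangement cost of 10 per relocated department, and takes $R(L^*, L, 1) = 0$ (i.e., the first layout is implemented free of charge), all of which are assumptions for illustration:

```python
from itertools import permutations

n, H = 3, 3
layouts = list(permutations(range(n)))     # LT: L[i] = site of department i
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
# per-period flows between departments (hypothetical, change over time)
flows = [[[0, 4, 1], [4, 0, 0], [1, 0, 0]],
         [[0, 0, 5], [0, 0, 1], [5, 1, 0]],
         [[0, 1, 0], [1, 0, 6], [0, 6, 0]]]

def MH(L, h):
    """Material handling cost of layout L during period h."""
    return sum(flows[h][i][j] * dist[L[i]][L[j]]
               for i in range(n) for j in range(n))

def R(Lnew, Lold):
    """Rearrangement cost: 10 per relocated department (assumption)."""
    return 10 * sum(u != v for u, v in zip(Lnew, Lold))

# CT(L*, h) = min_L { CT(L, h-1) + R(L*, L, h) } + MH(L*, h)
CT = {L: MH(L, 0) for L in layouts}        # first period, R(L*, L, 1) = 0
for h in range(1, H):
    CT = {Ls: min(CT[L] + R(Ls, L) for L in layouts) + MH(Ls, h)
          for Ls in layouts}
print(min(CT.values()))  # optimal total cost over the horizon
```

The state space has $n! = 6$ layouts per period here; since it grows factorially, this exact DP only scales to small $n$, which motivates the restricted sets $LT^*(h)$ described above.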
10.4 Conclusion

As mentioned in Section 10.2.3, designing a layout requires constructing manufacturing entities, locating the entities on the available surface, locating the resources inside the manufacturing entities and, finally, balancing the loads of the manufacturing entities. Static layouts (SL) are obtained by following these four steps, for which several solutions have been provided in Section 10.2.

To cope with the fast pace of the market (worldwide competition, ever-changing customer demands), robust layouts (RL) and dynamic facility layouts (DFL) have been introduced. Most research works in these domains concern only the location of entities on the available surface, with two important restrictions: the entities under consideration are functional departments, and the possible locations are predefined (sites). Furthermore, some strong hypotheses hold:
• demands are supposed to be known over several consecutive elementary periods, that is to say several weeks or months, which is questionable due to the continual unexpected changes in the market;
• only changes in product-flow capacities and product mix are taken into account; nothing is done to adjust departments to new types of products;
• in most of the models it is assumed that rearrangement costs do not depend on time; otherwise it would be too difficult to establish these costs for each one of the elementary periods.

Rapid advances in materials and manufacturing engineering may lead to the possibility of real-time rearrangement in the near future. This would remove the problems resulting from demand forecasting. At the manufacturing level, important work on reconfigurable manufacturing systems (RMS) has been done and facilitates the rearrangement of manufacturing entities. An RMS has a modular structure of hardware and software. It is characterized by modular machines and open-architecture controllers, which allow removing or integrating software and/or hardware modules. These characteristics make an RMS able to adapt itself quickly to changes in capacities and product types (see (Koren et al., 1999) and (Mehrabi et al., 2002), as well as the corresponding references). Thus, RMS is a concept that is complementary to DFL.
References

Arimond J, Ayles WR (1993) Phenolics creep up on engine application. Adv. Mater. Proc. 143(6):34–36
Garcia H, Proth J-M (1986) A new cross-decomposition algorithm: the GPM. Comparison with the bond energy method. Contr. Cyber. 15(2):115–165
Hamann T (1992) Le problème d'agencement des ressources à l'intérieur des cellules des systèmes de production. Dissertation, University of Metz
Heragu SS, Kochhar JS (1994) Material handling issues in adaptive manufacturing systems. In: Malstrom EM, Pence IW Jr (eds) The Materials Handling Engineering Division 75th Anniversary Commemorative Volume, ASME, New York, NY
Koren Y, Heisel U, Jovane F, Moriwaki T, Pritschow G, Ulsoy G, Van Brussel H (1999) Reconfigurable manufacturing systems. Ann. CIRP 48(2):527–540
Kuhn HW (1955) The Hungarian method for the assignment problem. Nav. Res. Log. Quart. 2:83–97
McKendall AR Jr, Shang J, Kuppusamy S (2006) Simulated annealing heuristics for the dynamic facility layout problem. Comput. Oper. Res. 33(8):2431–2444
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and reconfigurable manufacturing systems. J. Intell. Manuf. 13:135–146
Rosenblatt MJ, Lee HL (1987) A robustness approach to facilities design. Int. J. Prod. Res. 25(4):479–486
Souilah A (1994) Les systèmes cellulaires de production : l'agencement inter-cellules. Dissertation, University of Metz
Tompkins JA, White JA, Bozer Y, Frazelle E, Tanchoco J, Trevino J (1996) Facilities Planning. 2nd edn, John Wiley & Sons, New York, NY
Further Reading

Afentakis P, Millen RA, Solomon MM (1990) Dynamic layout strategies for flexible manufacturing systems. Int. J. Prod. Res. 28(2):311–323
Agarwal A, Sarkis J (2001) Evaluating functional and cellular manufacturing systems: a model and case analysis. Int. J. Manuf. Techn. Manag. 3(6):528–549
Antonsson E, Sebastian H-J (1999) Fuzzy sets in engineering design. In: Zimmermann H-J (ed) Practical Applications of Fuzzy Technologies, Kluwer Academic Publishers, London, pp. 57–117
Arapoglu RA, Norman BA, Smith AE (2001) Locating input and output points in facilities design – a comparison of constructive, evolutionary, and exact methods. IEEE Trans. Evol. Comput. 5(3):192–203
Armour GC, Buffa ES, Vollmann TE (1964) Allocating facilities with CRAFT. Harv. Bus. Rev. 42:136–158
Balakrishman J, Jacobs FR, Venkataramanan MA (1992) Solutions for the constrained dynamic facility layout problem. Eur. J. Oper. Res. 57(2):280–286
Ballou RH (1968) Dynamic warehouse location analysis. J. Mark. Res. 5:271–275
Banerjee P, Zhou Y, Montreuil B (1997) Genetically assisted optimization of cell layout and material flow path skeleton. IIE Trans. 29:277–291
Bard JF, Feo TA (1989) Operations sequencing in discrete parts manufacturing. Manag. Sci. 35:249–255
Bazaraa MS (1975) Computerized layout design: a branch and bound approach. AIIE Trans. 7(4):432–437
Benjaafar S, Sheikhzadeh M (2000) Design of flexible plant layouts. IIE Trans. 32(4):309–322
Benjaafar S, Heragu SS, Irani SA (2002) Next generation factory layouts: research challenges and recent progress. Interfaces 32(6):58–76
Benson B, Foote BL (1997) DoorFAST: A constructive procedure to optimally layout a facility including aisles and door locations based on an aisle flow distance metric. Int. J. Prod. Res. 35(7):1825–1842
Beziat P (1990) Conception d'un système d'implantation d'ateliers de production: PLOOT. Dissertation, University of Languedoc
Braglia M, Zanoni S, Zavanella L (2003) Layout design in dynamic environment: strategies and quantitative indices. Int. J. Prod. Res. 41(5):995–1016
Drezner Z (1980) DISCON: a new method for the layout problem. Oper. Res. 28(6):1375–1384
Dolgui A, Proth J-M (2006) Les Systèmes de Production Modernes. Hermes Science Publications, London
Harhalakis G, Ioannou G, Minis I, Nagi R (1994) Manufacturing cell formation under random product demand. Int. J. Prod. Res. 32(1):47–64
Further Reading
417
Harhalakis G, Nagi R, Proth J-M (1990) An efficient heuristic in manufacturing cell formation for group technology applications. Int. J. Prod. Res. 28(1):185–198 Hassan MMD, Hogg GL, Smith DR (1986) SHAPE: A construction algorithm for area placement evaluation. Int. J. Prod. Res. 24(5):1283–1295 Herrmann JW, Ioannou G, Minis I, Nagi R, Proth J-M (1995) Design of material flow networks in manufacturing facilities. J. Manuf. Syst. 14(4):277–289 Kim J, Klein CM (1996) Location of departmental pickup and delivery points for an AGV system. Int. J. Prod. Res. 34(2):407–420 King JR (1980) Machine-component grouping in production flow analysis: An approach using rank order clustering algorithm. Int. J. Prod. Res. 18(2):213–232 Kouvelis P, Kiran AS (1991) Single and multiple period layout models for automated manufacturing systems. Eur. J. Oper. Res. 52(3):300–314 Kusiak A, Heragu SS (1987) The facility layout problem. Eur. J. Oper. Res. 29(3):229–251 Lee RC, Moore JM (1967) CORELAP: Computerized Relationship Layout Planning. J. Ind. Eng. 18:195–200 Meller RD, Gau KY (1996) Facility layout objective functions and robust layouts. Int. J. Prod. Res. 34(10):2727–2742 Meller RD, Narayanan V, Vance PH (1998) Optimal facility layout design. Oper. Res. Lett. 23(3–5):117–127 Meng G, Heragu SS, Zijm H (2004) Reconfigurable layout problem. Int. J. Prod. Res. 42(22):4709–4729 Montreuil B (2007) Layout and location of facilities, In: Don Taylor G (ed), Handbook on Logistics Engineering, CRC Press, Boca Raton, FL Norman BA, Arapoglu RA, Smith AE (2001) Integrated facilities design using a contour distance metric. IIE Trans. 33(4):337–344 Pierreval H, Caux C, Paris JL, Viguier F (2003). Evolutionary approaches to the design and organization of manufacturing systems. Comput. Ind. Eng. 44(3):339–364 Proth J-M (1992) Conception et Gestion des Systèmes de Production. 
Presses Universitaires de France, Paris Ramabhatta V, Nagi R (1998) An integrated formulation of manufacturing cell formation with capacity planning and multiple routings. Ann. Oper. Res. 77:79–95 Scott MJ, Antonsson EK (2000) Arrow’s theorem and engineering design decision making. Res. Eng. Des. 11(4):218–228 Sebastian H-J, Antonsson EK (eds) (1996) Fuzzy Sets in Engineering Design and Configuration. Kluwer Academic Publishers, London Sweeney DJ, Tatham RL (1976) An improved long-run model for multiple warehouse location. Manag. Sci. 22:748–758 Urban TL, Chiang WC, Russell RA (2000) The integrated machine allocation and layout problem. Int. J. Prod. Res. 38(13):2911–2930 Wang S-J, Bhadury J, Nagi R (2002) Supply facility and input/output point locations in the presence of barriers. Comput. Oper. Res. 29(6):685–699 Webster DB, Tyberghein MB (1980) Measuring flexibility of job-shop layouts. Int. J. Prod. Res. 18:21–29 Zhou L, Nagi R (2002) Design of distributed information systems for agile manufacturing virtual enterprises using CORBA and STEP standards. J. Manuf. Syst. 21(1):14–31
Chapter 11
Warehouse Management and Design
Abstract Many if not all goods pass through a warehouse at some stage. The main activity of a warehouse is material handling, but some operations (packaging, cleaning, assembling, painting, etc.) may also be performed during storage. These aspects should be included in any analysis of warehouse systems. The chapter begins with a description of warehouse types and their usefulness. The operations performed and the resources used are studied extensively. Special consideration is given to warehouse-management problems. Afterwards, the design stage is considered at length. The components of a warehouse are presented; in particular, storage in unit-load warehouses is covered. Then, the static warehouse-sizing problem is considered, modeled and solved, and a dynamic warehouse-sizing problem is discussed. Finally, the chapter examines in depth two major approaches to the problem of where to locate warehouses: the single-flow and multiflow hierarchical location models.
11.1 Introduction

The major activity in a warehouse is material handling: goods are stored for a limited period. The concept of a warehouse is somewhat paradoxical in contemporary production systems. The reduction of inventories and, more generally, the elimination of operations that do not add value is always on the agenda in order to lower production costs. On the other hand, warehouses remain inevitable, for the reasons given in Section 11.2.2. As a consequence, the management of warehouses requires a lot of attention: this aspect is developed in Section 11.4, after reviewing the basic operations and listing the mistakes that may arise when performing them (Section 11.3). Some aspects of the design of warehouses, in particular their sizing, are discussed in Section 11.5.
Finally, Section 11.6 is dedicated to warehouse-location models. It should be noted that numerous approaches and techniques used for warehouse design and management have already been mentioned in the previous chapters. This is the case for layout methods, RFID techniques and scheduling methods. We will not revisit them in the current chapter.
11.2 Warehouse Types and Usefulness

11.2.1 Warehouse Taxonomies

Various taxonomies are mentioned in the literature. Two of them are presented hereafter.

11.2.1.1 Common Taxonomy

Different types of warehouses exist to serve diverse customers. The most common type, called “retailer supply warehouses” (RSW), receives finished products from manufacturing systems located in the same country or from foreign suppliers and provides stocks for retail stores. Normally, such a warehouse routinely serves a given set of captive customers. The second sort of warehouse, called “spare part warehouses” (SPW), furnishes spare parts to clients and manufacturing systems. It is often associated with mass production (cars, household appliances, computer systems, etc.). The difference from RSWs is that some orders are highly random, the demand for specific types of parts may be relatively small and, last but not least, orders are usually very urgent. This requires that some of the parts be held in stock for years. The third type, denoted by “mail order selling” (MOS), warehouses and ships orders to individuals and informs customers through mail-order catalogs or Internet sites. Orders are placed via the Internet, by letter or by phone. Each individual order usually concerns a small number of items, but overall the variety and quantity of items at stake is commonly huge. The last type, referred to as “special warehouses” (SW), is relatively rare. These warehouses are usually rented for a long or short period of time. The products stored in such a warehouse are often expensive, bulky and seldom required. The characteristics of the aforementioned warehouses are reported in Table 11.1.
Table 11.1 Characteristics of the four warehouse types

Characteristic                  RSW        SPW              MOS        SW
Variety of products concerned   High       High             High       Low
Demand intensity                High       Medium           High       Low
Randomness of demand            Medium     High             High       Low
Response to demand              Quick      Quick            Medium     Medium
Associated production type      Variable   Mass production  Variable   Variable
11.2.1.2 Taxonomy Based on Warehouse Functionalities

Six types of warehouses can be identified on the basis of their functions:
• Warehouses that provide distribution services on behalf of their customers. These warehouses more often than not belong to a company that is also in charge of upstream and downstream transportation. They may serve several independent production systems. This type of warehouse is often referred to as a private warehouse. DHL, a well-known transport company, operates numerous private warehouses in its transportation network.
• Public warehouses are essentially spaces that can be leased for limited periods to deal with short-term storage needs. A public warehouse may occasionally be used as supplemental storage space for an overloaded private warehouse.
• Warehouses that receive products in large quantities and dispatch a large number of small lots. This is common in the food industry, for instance; “do it yourself” (DIY) centers are another example. Such warehouses are also called distribution centers.
• Warehouses that provide value-added services. They are usually part of production systems. Tasks performed in such entities are mainly repackaging (to make the products ready for sale or to prepare them for specific operations), labeling, assembling (computers, for instance), etc.
• Warehouses that store products for periodic delivery. This is the case when delivery must be made on a just-in-time basis. Examples of this type of warehouse can be found in assembly systems where the components are outsourced (car or domestic appliance manufacturing, for example).
• Warehouses for fresh food products. These warehouses are refrigerated and often called climate-controlled warehouses.
11.2.2 Warehouse Usefulness

Warehouses are almost inevitable:
• To cope with the discrepancy between the relatively slow supply chain response and rapid changes in the quantities ordered. A warehouse helps to react quickly when demand changes abruptly. Note that the low reactivity of supply chains usually results from their complexity (the number of companies involved and the multitude of stages in the production processes), the existence of quality problems (that lead to rework), the use of long-duration and/or unreliable transportation systems (mainly in the case of offshore outsourcing), etc.
• To favor upstream production systems by allowing them to increase lot sizes, thus simplifying management, increasing throughput and reducing production costs by reducing the number of setups. Transportation costs also decrease, since the disparity of the products loaded in the same truck or freight car decreases.
• To configure and finalize products as near as possible to the customer. This is the case when different products can be obtained by assembling components drawn from the same limited set. This situation is frequent in the computer and furniture industries, and more generally in firms whose strategy consists in postponing product differentiation, assuming that it is easy to perform (personalizing products through packaging, labeling or colors, for instance).
• To execute additional operations such as inbound inspections, part preparation, kitting or packaging. Inbound inspections mainly concern quality control. Depending on the kind of verification needed, inbound inspection may require a specific area and certain material-handling resources. Preliminary part preparation facilitates the manufacturing operations that follow. Kitting occurs when predetermined parts or components are removed from storage and gathered together to make up kits. These kits are then used for the next manufacturing or assembly operations.
• To recondition the products to meet customers’ requirements. The objective is to switch from production packaging to the packaging demanded by customers or retailers.
• To reorganize the lots for transportation purposes. This function includes sorting.
• To offer a wide assortment of items so that customers can purchase small quantities of many different products at reasonable cost. This is often the case in the food industry.
• To supply seasonal production to retailing groups as and when required. This concerns climate-controlled warehouses.
• To protect against technical glitches and security threats.
To summarize, the objective of a warehouse is to build a bridge between upstream and downstream activities.
Finally, price stabilization is another consequence of warehousing, since warehousing buffers against scarcity in the supply of goods, which would otherwise drive prices up.
11.3 Basic Warehousing Operations

Common warehousing operations are listed hereafter.
11.3.1 Receiving

Advance notification may or may not precede the arrival of products, components or material. In the former case, the notification should be compared with the corresponding order. In both cases, the differences between orders and deliveries should be checked and discrepancies, if any, resolved with the provider. Emphasis should be placed on partial deliveries, which penalize warehousing management. Once the product has arrived, a quality control is performed (this may be limited to a visual check) and any exception is noted. Then, the product is registered (bar codes or RFID devices are usually used to perform this operation).
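Quality control at receiving is often sample-based rather than exhaustive; as noted in Section 11.4.1, statistical methods then quantify the risk that a defective shipment is accepted. The sketch below is a minimal illustration of that idea (the function name and the numbers are ours, not the book's): it computes, via the hypergeometric distribution, the probability that a lot passes a plan that accepts at most `acceptance_number` defective items in the inspected sample.

```python
from math import comb

def accept_probability(lot_size: int, defectives: int, sample_size: int,
                       acceptance_number: int = 0) -> float:
    """Probability that a random sample of `sample_size` items drawn from
    a lot containing `defectives` bad items has at most `acceptance_number`
    bad items, i.e. that the lot is accepted (hypergeometric model)."""
    total = comb(lot_size, sample_size)
    return sum(
        comb(defectives, k) * comb(lot_size - defectives, sample_size - k)
        for k in range(min(acceptance_number, defectives) + 1)
    ) / total

# Illustrative numbers: a lot of 1000 items containing 2% defectives,
# inspected through a sample of 50 under a zero-defect plan.
p = accept_probability(lot_size=1000, defectives=20, sample_size=50)
print(f"P(lot accepted) = {p:.3f}")  # roughly one chance in three
```

A larger sample lowers this acceptance probability, which is the lever a receiving manager can tune against inspection cost.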
11.3.2 Storage

Storage includes sorting, transporting to storage facilities and placing in stock. Sorting consists in putting together entities that will be stocked at the same place and/or sent to the same customer. Transportation to the storage locations and the storage facilities themselves are described hereafter. The objectives when selecting storage equipment are to:
• reduce handling costs;
• shorten work cycles;
• reduce storage space;
• facilitate shipments and deliveries;
• simplify flows (in particular, avoid flow crossing);
• avoid damage due to transportation and handling;
• optimize the safety of resources;
• optimize the workforce;
• maximize resource utilization;
• minimize the amount of energy required to operate the storage system.
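These objectives partly conflict (for instance, automated systems save space but raise investment cost), so choosing equipment is a multi-criteria decision. One elementary way to make the trade-off explicit is a weighted-sum score; in the sketch below, the candidate list, the 0-10 ratings and the weights are purely hypothetical, not data from the book.

```python
# Hypothetical weighted-sum scoring of storage-equipment candidates.
# Ratings are on a 0-10 scale (higher is better for the objective).
WEIGHTS = {"handling_cost": 0.30, "cycle_time": 0.25,
           "space_use": 0.20, "safety": 0.15, "energy": 0.10}

CANDIDATES = {
    "gravity conveyor": {"handling_cost": 9, "cycle_time": 6,
                         "space_use": 5, "safety": 7, "energy": 10},
    "belt conveyor":    {"handling_cost": 6, "cycle_time": 8,
                         "space_use": 5, "safety": 7, "energy": 4},
    "AS/RS":            {"handling_cost": 4, "cycle_time": 9,
                         "space_use": 10, "safety": 9, "energy": 5},
}

def score(ratings: dict) -> float:
    """Weighted sum of the ratings over all criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name]))
for name, ratings in CANDIDATES.items():
    print(f"{name:17s} {score(ratings):.2f}")
print("best:", best)
```

More refined multi-criteria methods exist, but even a weighted sum makes the trade-offs between objectives explicit and auditable.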
11.3.2.1 Transportation Resources

Transportation resources used in warehouses depend on the type of products involved. Different sorts of conveyors are available to transport cases or pallets to various points in a warehouse (see, for example, Figure 11.1). If necessary, they may be complemented by an automated system that picks items from the conveyor when they arrive at a given position. The recognition of items is based either on bar codes or on RFID; the latter is mainly used for pallets, owing to the cost of RFID tags. In this case, the items are stored in a rack. The storage is performed directly by the automated system that unloads the conveyor (usually the case for simple pallet racks). Otherwise, particular equipment can be employed, selected according to the item itself and/or the number of levels in the rack.
Figure 11.1 Conveyor
Conveyors are mainly used for high-intensity item flows. Gravity conveyors, that is to say conveyors that do not require energy, are favored when suitable, due to their low investment and maintenance costs. Some of them are mobile. Gravity conveyors include skate-wheel, roller and belt conveyors, among others. They are selected according to the items to transport, the volume of item flows, the working surface area and the path to cover. Gravity conveyors can be equipped with curves and switches to follow various types of paths. Power conveyors can be found in systems that transport heavy items. These conveyors include:
• Belt conveyors, which can operate at inclines of thirty degrees or more (depending on the volume and weight of the items). However, a belt conveyor is usually unable to accumulate items, because they cannot slip on the belt.
• Live roller conveyors, which are able to accumulate items but have limited capability to operate in incline–decline situations. They can be line-shaft driven, powered via a shaft beneath the rollers; these conveyors are suitable for light applications up to 20 kg. Live roller conveyors can also be chain driven. They can be adapted to almost any path design, since curves, brakes and switches are available for this type of conveyor. Live roller conveyors are easy to install and quite inexpensive to maintain. Nevertheless, the items transported have to be of adequate size so that they cannot fall between the moving rollers.
Other classes of equipment include wire-guided vehicles, monorail conveyors and tow lines:
• Wire-guided vehicles are managed through preset commands and can follow any path defined by a network of wires embedded in the floor.
• Robotic transfer vehicles (RTV) bring items to the AS/RS (automated storage and retrieval systems, introduced below) and take them away.
• The monorail does not require floor space. In some industries (assembly systems for microcomputers, for example), designers install small robots on the carriers to continue assembling computers or to perform quality tests as they move. Some of these systems are managed through preset commands.
• Tow lines are similar to wire-guided vehicles as far as functionality is concerned. They are powered by chain drives under the floor.
These four types of equipment offer great flexibility. Pallet transporters and trailer trains (see Figure 11.2) are powered systems built to transport heavy items from point to point. Their flexibility (in terms of the path to follow) is maximal, but resources should be available at the two ends of the path to load and unload the cargo. The two-wheeled trolley (Figure 11.3), operated by an employee, is used to transport light cases over short distances.
Figure 11.2 Pallet transporter
Figure 11.3 Two-wheeled trolley
Figure 11.4 Forklift truck
Figure 11.5 Mobile tower
11.3.2.2 Putting Away Equipment

When items must be lifted to reach shelves, users may select one of the many resources available on the market, such as a forklift truck (see Figure 11.4) or a similar device like a pantograph reach truck, a stand-up truck, lift towers (mobile or rail-directed, see Figures 11.5 and 11.6) or an overhead forklift crane (see Figure 11.7).
Figure 11.6 Rail-directed tower
Figure 11.7 Forklift crane
11.3.2.3 Storage Equipment

Using pallets to move heavy items or lots is widespread. The simplest way of storing pallets is to arrange them in lanes on the floor. Unfortunately, this kind of pallet management consumes a lot of space, which encourages managers to use rack systems, from single-deep racks to more sophisticated arrangements. A single-deep rack is shown in Figure 11.8.
Figure 11.8 Single-deep rack
Among numerous other types of racks, we would like to mention:
• FIFO (first in – first out) racks, which retrieve first the pallet that was stored first in a lane. The base of the rack slopes so that when a pallet is removed from the back of the rack, gravity pulls the remaining pallets to the back, making the next pallet ready for retrieval. Rollers are used to facilitate the move. New pallets are stored at the front of the rack. This prevents storage and retrieval operations from interfering with each other. An example of the use of FIFO racks is the storage of truck engines in spare-part warehouses: these items are heavy and cannot be stored for more than one or two years, because the engine lubricants decompose, thus damaging the engine.
• LIFO (last in – first out) racks, which are similar, but with the slope reversed. Pallets are removed from the front of the rack and also stored from the front. High-throughput facilities often use this kind of system.
Cases can also be stored on shelves, while items are stored in bins or on shelves, depending on their size.
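The two retrieval disciplines correspond to two elementary data structures: a FIFO flow lane behaves like a queue, a LIFO lane like a stack. A minimal sketch, with hypothetical pallet identifiers:

```python
from collections import deque

# FIFO flow lane: pallets enter at one end and leave at the other,
# so the pallet stored first is retrieved first.
fifo_lane = deque()
for pallet in ("P1", "P2", "P3"):
    fifo_lane.append(pallet)          # load side of the lane
assert fifo_lane.popleft() == "P1"    # oldest pallet comes out first

# LIFO lane: pallets enter and leave by the same face,
# so the pallet stored last is retrieved first.
lifo_lane = []
for pallet in ("P1", "P2", "P3"):
    lifo_lane.append(pallet)          # push onto the front face
assert lifo_lane.pop() == "P3"        # newest pallet comes out first
```

This is why FIFO lanes suit ageing stock (such as the truck engines mentioned above), while LIFO lanes suit high-throughput items with no shelf-life constraint.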
11.3.3 Automated Systems

11.3.3.1 Automated Storage and Retrieval Systems (AS/RS)

AS/RS are used in warehouses to hold and buffer the flow of materials moving through the manufacturing process or waiting to be sent to customers. An AS/RS is a combination of equipment and controls that handles, stores and retrieves materials with precision, accuracy and speed. Large systems typically store and retrieve pallet loads; smaller systems handle products in totes, trays or cases. Automated storage and retrieval systems minimize the space and labor needed to warehouse materials. They also offer very short pick-and-put cycle times along with more accurate inventory management. These systems are particularly popular in countries where labor is expensive and space is limited. An AS/RS is made up of one or more aisles, each having a robotic crane to retrieve products from, and store products in, the racks along the aisle. The use of a robotic crane allows racks to be built up to fifteen meters high over almost any length, providing more storage density than almost any other solution. The cranes also bring the materials to the operator, which almost eliminates waiting and reduces cycle times. An AS/RS can also handle a variety of materials, from small part bins up to pallets. A pick-and-deposit station is generally located at the end of each aisle to transfer loads into or out of the AS/RS aisles. To conclude, the main benefits of using AS/RS are:
• consistent improvement in operator efficiency and storage density;
• reduction of work-in-progress (WIP) inventory;
• improvement of quality and just-in-time performance;
• real-time inventory control and instantaneous reporting functionality.
11.3.3.2 Carousels

A horizontal carousel is a rotatable circuit of shelving that brings the items in front of the order-picker. Vertical carousels also exist; the objective is identical: to bring the items in front of the order-picker. In a horizontal or vertical carousel, there is no need for an aisle to access a stock-keeping unit (SKU), as is the case in an AS/RS (an SKU is the smallest physical unit of a product that is tracked in a supply chain). As a consequence, there are fewer moves in the warehouse, and thus less space is required. The drawback is that only one SKU can be picked or stocked at a time, contrary to an AS/RS, where several picking and/or storing operations can be performed simultaneously in some circumstances. A carousel system consists of multiple levels of carousels in which items are stored in tote boxes or storage bins. Finding the optimal way to store or retrieve SKUs from a carousel requires solving a traveling-salesman-like problem (TSP).

11.3.3.3 A-frame System

Automated A-frame systems are dispensing machines that drop SKUs onto a conveyor or pick up SKUs from a conveyor, depending on the type of machine. Some of them are designed for high-volume picking applications and consist of multiple “dispensers”, each holding a vertical stack of product. Items are dispensed individually by removing the bottom item from the vertical stack, based on order requirements. Other A-frame systems are able to drop items onto a conveyor. Non-automated A-frame systems, such as the one presented in Figure 11.9, also exist. They are designed to carry plywood, sheets of metal and other large, bulky sheet or panel materials as they are moved through a warehouse, store, lumberyard or other areas.
Figure 11.9 Simple A-frame system
The advantage of an A-frame system is that SKUs (usually flat products) lean against the “A” and thus do not suffer any compression. They are not stacked up, so they cannot damage each other. The equipment mentioned in this section represents the main types available on the market, but our list is not exhaustive.
11.4 Warehouse Management

11.4.1 Warehouse Functions

The main functions of a warehouse are:
1. Checking the state of the packaging, if any, on the arrival of the ordered items. If a package is damaged, the instruction is usually to dispatch it to the internal office in charge of contact with the providers. The whole pack is provisionally rejected, pending instructions.
2. Checking the inbound shipments against bills of lading or the orders that have been placed. When RFID or bar codes are used with individual, item-by-item checking (rare, due to its cost), the number of items of each type is virtually guaranteed; nevertheless, the quality of each item still has to be verified. Sometimes, due to the volume of products to be stored, only samples of items are taken and their quality is checked. Statistical methods should then be used to measure the risk of having quality problems in the shipment as a whole. Note that inaccurate deliveries disrupt customers’ operations and generate returns, which is expensive. When the shipment is accepted, the corresponding information is recorded in the database (virtual inventory).
Remarks:
• When the receipt of products is not automated, the tendency is sometimes to take the bills of lading for granted and to record in the computer the data provided by these documents. This is a significant source of errors, in particular in the case of missing items.
• When information is typed into the virtual inventory, different types of mistakes may happen:
– Typing mistakes. Duplicate hand input or proofreading is a (costly) solution to this problem.
– Delayed input. Input is usually performed by the employees in charge of the storage, and typing in data often appears as being “useless”
or “secondary”. Employees thus often tend to postpone this operation to the end of the shift, or even to the next day. This can lead to a discrepancy between the real and virtual inventories, which results in management errors.
3. Stocking and retrieving the items. Stocking items consists of selecting the position to assign to the items in the storage facility, then transporting them from the warehouse entrance to that precise location and finally recording the identification of the location and the items in a computer (virtual inventory). When automated systems are used, one only has to move the items to the system and identify them; the automated system automatically selects the position and puts away the items. When bar codes or RFID systems are used, the identification of the items can be done automatically, thus avoiding worker input mistakes. Retrieving items consists in finding the required items and transporting them to the warehouse exit while modifying the virtual inventory accordingly. When the system is not automated, the following errors are possible:
– Unloading the items at the wrong location. This mainly happens when items vary frequently and transport is the responsibility of employees. It creates discrepancies between the physical and virtual inventories, because the misplaced items cannot be recovered, which in turn leads to inappropriate management decisions. The only way to handle this problem is to entrust employees with the management of these moves. RFID is, however, a solution to the problem when items are palletized or expensive; in these cases, the cost of the tags per item is proportionally low, which makes the use of RFID economically viable.
– Damaging items during the loading/unloading processes. Periodic physical inventories are necessary to cope with this problem.
4. Choosing the path used to retrieve an order from the storage facility.
This operation is of utmost importance, since it is time consuming and may affect customer service. It is a special case of the traveling salesman problem (TSP). Usually, a “good” solution to a TSP is obtained using simulated annealing, ant-colony algorithms, genetic algorithms, etc.
Moreover, some value-added operations may concern warehousing. These operations depend on the warehouse under consideration. For example:
• Repackaging. This kind of operation is introduced when items are delivered to the warehouse in large lots and sent to retailers in individual packages, or when the operations at the exit of the warehouse necessitate new packaging. This work is labor intensive.
• Performing some assembly or disassembly operations. This is frequent in the computer industry, where the characteristics of the products are often personalized.
• Performing overall quality control. An example is car engines. Such products are fabricated according to strict specifications at each step of the manufacturing process. Nevertheless, some problems may arise once the engine is completed, due to the tolerance ranges for dimensions at different levels of the manufacturing process: for example, a piston may be at the maximal margin while its corresponding cylinder is at the minimal margin, which may lead to unforeseen dysfunctions.
• Repairing damage that occurs during transport or handling, assuming that this damage is limited.
• Executing finishing operations.
• Ticketing and labeling to fit customers’ demands.
• Kitting, which consists in repackaging items to gather together the components that will be assembled to form the final products. This operation is usually time consuming.
• Making available a garage for maintaining and repairing handling and storage equipment. This garage is for simple repair and maintenance only.
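Function 4 above, choosing the retrieval path, is a TSP variant. Besides the metaheuristics cited (simulated annealing, ant-colony and genetic algorithms), even a greedy nearest-neighbor construction already yields a usable tour. The sketch below is illustrative only: the rectilinear distance metric and the pick coordinates are our assumptions, not data from the book.

```python
def manhattan(a, b):
    """Rectilinear travel distance, a common metric inside aisles."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_neighbor_tour(depot, picks):
    """Greedy picking tour: repeatedly visit the closest remaining
    pick location, then return to the depot."""
    tour, current = [depot], depot
    remaining = list(picks)
    while remaining:
        nxt = min(remaining, key=lambda p: manhattan(current, p))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    tour.append(depot)  # close the tour at the shipping dock
    return tour

def tour_length(tour):
    return sum(manhattan(a, b) for a, b in zip(tour, tour[1:]))

# Hypothetical (aisle, shelf) coordinates of the picks for one order:
depot = (0, 0)
picks = [(2, 5), (2, 1), (6, 3), (4, 4)]
tour = nearest_neighbor_tour(depot, picks)
print(tour, tour_length(tour))
```

Such a tour can then be improved by local moves (2-opt exchanges, for instance) or used as the starting point of a simulated-annealing run.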
11.4.2 Warehouse Management Systems (WMS)

Almost all of the functions of a warehouse can be supported by warehouse-management systems (WMS). The aim of a warehouse-management system is to furnish computer-aided procedures for the movements of stock to and from the storage facilities. Such a system controls the flows and slotting activities within a storage facility and processes the related transactions. One advantage of WMS is that they can optimize stock picking and put-away based on real-time information about warehouse utilization.
11.5 Design: Some Remarks

11.5.1 Warehouse Overview

In modern production systems, and in particular in warehouses, goods should be processed with minimal turnaround time. This is one of the basic objectives when designing a warehouse. Usually, four areas can be found in a warehouse:
1. A storage space for the products and the storage resources.
2. An office, usually quite small, for administrative work. This space also includes computer facilities.
3. Docks used for shipping and receiving products. They are usually well separated from each other to avoid confusion between inbound and outbound shipments.
4. A workshop with resources for simple, easy-to-perform operations like customizing products, repackaging, etc.
11.5.1.1 Storage Space

Objectives

The storage space is designed not only to stock items and hold storage facilities, but also to allow for the regular circulation of employees and handling vehicles, as well as of the towers and forklift cranes used for vertical storage. The main objectives when designing a storage space are the following:
• Guarantee the flexibility of the warehouse, that is to say, anticipate future changes in the flows and types of items to be treated, improvements in handling and storage equipment, changes in legal regulations, etc.
• Optimize the utilization of space by using AS/RS, vertical carousels, forklift cranes, etc., to store items of reasonable weight at heights of 10 m or more.
• Ensure an efficient circulation of vehicles and employees.
• Simplify the flows in the warehouse. In particular, separate the input and output flows and place adequate handling resources at the employees’ disposal.
• Make sure that product flows do not cross each other.
• Plan parking places for temporarily unused equipment.
• Make sure that security precautions for employees and material are properly implemented.
• Carefully select the type of floor and its surface, to bear the weight of the transportation resources, guarantee the traction of the wheels and prevent employee falls.
• Keep in mind that reducing energy consumption is essential as well.
• Last, but not least, pay particular attention to the health and well-being of workers: optimize lighting, temperature and ventilation, and conduct a study to implement ergonomic workstations, etc.

Dedicated Storage Versus Shared Storage

In dedicated storage, each location in the storage space is reserved for a specific type of item. Dedicated storage is expensive, since it requires more storage space, but the management of such a system is simple, due in particular to the fact that employees can easily learn the layout and thus are comfortable with it.
As mentioned in the literature, a dedicated storage facility is usually half-empty. The idea behind shared storage is to assign a type of item to more than one location. In other words, an empty location may be available for different types of
items. In such a system, employees cannot learn the storage layout, and the management software is more complicated to design; often it cannot take into account all the possible situations, which can open the door to management "by hand" in some circumstances. In shared storage, the location of each item in stock must be recorded and the optimization of the picking process is much more complicated. Two particular storage layouts are proposed in Section 11.5.2.

Put Away and Order Picking

Putting away an item requires determining its location and thus having an accurate overview of the state of the storage facility. A put-away operation may generate faults. For example, the location of the item may be incorrectly identified and/or recorded. In large storage facilities, this may make picking the item impossible when needed. RFID is a powerful technology for dealing with this kind of problem: the storage location of each item can be recorded together with its identification. This information is then used to construct an efficient picking list, that is to say paths that minimize the picking times. A process should be defined to handle and reduce the damage that may occur when putting away or packing items. Shipping documentation is produced by the management office, as well as all the required scheduling algorithms (in particular to provide picking paths). The continuous exchange of information within the storage space, and between the storage space and the management office, requires an efficient and pervasive IT network.

11.5.1.2 Office Space

The so-called office space consists of various subspaces such as, among others, management, reception, locker, meeting, file, mail and copier rooms.
The management office is in charge of the administrative work related to the reception and expedition of items, tracing the items in the warehouse, drawing up the rules that govern storage (defining an adequate process to periodically compare the physical and the virtual inventories and to adjust the virtual inventory to the physical one), managing the maintenance of the handling and storage equipment, scheduling work shifts for the employees, maintaining contacts with upstream and downstream parties, supervising the utilization of the meeting rooms and the light operations (if any), etc. The management office is also expected to continuously improve the information network so as to increase the efficiency of the warehouse. The office space should be flexible, to adjust to changes in needs, and comfortable (adequate temperature, ventilation and lighting, ergonomic workstations, etc.). Furthermore, the safety of employees is of paramount importance, and ergonomics promotes safety. Last but not least, energy consumption should be as low as possible.
11 Warehouse Management and Design
11.5.1.3 Docks for Shipping and Receiving

These docks are often extensions of the storage space. Receiving docks should provide easy access to trucks and offer the necessary handling facilities. In the neighborhood of these docks is a staging area used to store damaged or incomplete deliveries (missing or erroneous items, lacking shipping documentation, broken packaging, etc.). These problems are solved at the management-office level and the resulting instructions are carried out by the dock employees, under the responsibility of the dock manager. Indeed, inbound verification is associated with this dock. This activity may consist merely of glancing at the delivery or, on the contrary, of analyzing the items in detail by means of sophisticated techniques, with the necessary equipment installed in the staging area. The latter situation is quite commonly found in warehouses that supply systems assembling expensive items.

The characteristics of the shipping dock depend on the type of warehouse. In the case of a public warehouse, the handling resources available at this dock are similar to those for receiving, since nothing has been changed during storage. In the case of distribution centers, that is to say warehouses that receive products in large quantities and dispatch numerous small lots, the handling resources are usually specific. Note that the shipping and receiving docks should be well separated from each other, even if they handle identical items: the objective is to simplify and facilitate the flow of products. These docks often contain an office for the manager, which can provide an efficient communication link with the general management office. As in the rest of the warehouse, ventilation, lighting, ergonomic working conditions, temperature, circulation of handling resources and safety are important. Additionally, in the case of docks, some provision for the weather must be made for items and employees.
This requires special consideration since these spaces are a bridge between the exterior and the relatively safe confines of the warehouse.

11.5.1.4 Value-added Service Area

The light operations that can be performed in this space have been (partially) listed in Section 11.4. The necessary equipment is available in this workshop. The main characteristic of such a space is its low automation level, because the objective is to reach a high operational flexibility.

Exactly what are value-added services? Succinctly, they fulfil, for convenience, a function in the storage facility that would usually occur in-house at the client's factory or workplace. They let companies lower their inventory cost for finished goods by postponing the final assembly or labeling of their products until customers have ordered. This is both cost effective and increases inventory flexibility in a firm's supply chain. More and more warehousing firms are providing these services. Nevertheless, there is a widespread lack of understanding of how to design these services or incorporate them into storage processes, with no solutions to oversee the work or evidence of planning for this space. Perhaps even more importantly, improving the efficiency of the value-added services performed in warehouses is not on the agenda. It is difficult to say what this is costing the industry.
11.5.2 Storage in Unit-load Warehouse

11.5.2.1 Introductory Remarks

In a unit-load warehouse, a single storage unit is handled at a time. Usually, the storage unit is a pallet. Typically, a unit-load warehouse is a public warehouse that receives, stores and forwards pallets. A public warehouse earns revenue by charging rent by the pallet, while its expenses are proportional to the surface of the warehouse. Thus, revenue increases with the number of pallets stored per unit of surface. This can be achieved in two ways: storing vertically, or using deep lanes with narrow aisles.

11.5.2.2 Taking Advantage of the Vertical Space

Installing pallet racks means that pallets are stored independently of each other (see Figure 11.9). Racks require special handling resources such as:
• forklift trucks (see Figure 11.5);
• mobile and rail-directed towers (Figures 11.6 and 11.7);
• overhead forklift cranes (see Figure 11.8).
Other pieces of equipment that take advantage of vertical space are vertical carousels and AS/RS systems.

11.5.2.3 Using Deep Lanes

In this kind of system, we assume that the aisles are as narrow as possible while still providing adequate accessibility. The objective is to reduce the proportion of space lost to aisles.
Figure 11.10 Various lane depths (pallet locations and aisles for one-, two-, three- and four-deep lanes)
In Figure 11.10, we represent one-deep, two-deep, three-deep and four-deep lanes. We assume that the width of the lane is equal to the width of the pallets. As you can see, the proportions of wasted surface are:
• 50% for the one-deep lanes;
• 33.33% for the two-deep lanes;
• 25% for the three-deep lanes;
• 20% for the four-deep lanes.
Note that in a one-deep lane, each pallet is directly accessible and its location is available for reassignment as soon as the current pallet is removed. In a two-deep lane, by contrast, only half of the pallets are directly accessible, and a location is not available for reuse until the deepest location in the same lane becomes free. Thus, deeper lanes offer more pallet locations for the same surface, but these locations are of diminishing value due to reduced accessibility. Finally, the number of useful storage locations per unit of surface is not the only index for selecting the best layout in a unit-load warehouse. We must also take into account the turnover of item types and consider using a mixed layout, that is to say a layout that combines lanes of different depths with vertical storage facilities.
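With the geometry of Figure 11.10 (each aisle as deep as one pallet position, serving the lanes beside it), the proportion of surface lost to aisles for $k$-deep lanes is $1/(k+1)$. A quick sketch reproducing the figures above (the function name is ours, not the book's):

```python
def wasted_fraction(depth: int) -> float:
    # One aisle position serves a lane of `depth` pallet positions,
    # so 1 position out of (depth + 1) is lost to the aisle.
    return 1.0 / (depth + 1)

for k in range(1, 5):
    print(f"{k}-deep lanes: {wasted_fraction(k):.2%} of the surface wasted")
```

The returns diminish quickly: going from one- to two-deep lanes recovers 16.7 points of surface, while going from three- to four-deep recovers only 5, which echoes the accessibility trade-off discussed above.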
11.5.3 Warehouse Sizing

11.5.3.1 Some Models

An interesting review of storage-capacity models has been published in (Cormier and Gunn, 1992). The objective of most standard models consists in minimizing the overall cost, keeping in mind that if the private warehouse under study is insufficient, then the firm has to rent space in a public warehouse. Thus, the goal is to find the best (i.e., cheapest) tradeoff between in-house and external storage. An alternative aim is to reach the best service level. Following (Ballou, 1973), we first analyze the static warehouse-sizing problem when several storage scenarios are available. Then, the case when cost varies over time (dynamic problem) is considered (Rao and Rao, 1998).

11.5.3.2 Static Warehouse-sizing Problem

The horizon $H$ of the problem is the concatenation of $N$ elementary periods $h_1, h_2, \ldots, h_N$. The demand for warehouse space is estimated for each period. To take into account the randomness of the demand, several (say $M$) estimates are made for each period. A strong assumption is made at this level: if $d_{h_i,j}$, $i = 1, \ldots, N$, $j = 1, \ldots, M$, is the $j$-th estimate in elementary period $h_i$, then we also know its probability of occurrence, denoted by $p_{h_i,j}$. Indeed,

$\sum_{j=1}^{M} p_{h_i,j} = 1$ for $i = 1, \ldots, N$

This assumption is a weakness of the model, since it is unlikely that these values can be defined in practice. The following parameters and variables are used:
• $x$ is the surface of the private warehouse (to be optimized).
• $C_0$ is the sum of overheads and amortized capital expenditure per unit of storage surface per period.
• $y_{h_i,j}$, $i = 1, \ldots, N$, $j = 1, \ldots, M$, is the surface in the private warehouse required to store $d_{h_i,j}$.
• $C_{pv}$ is the holding cost per unit of storage surface per period in the private warehouse.
• $C_{pub}$ is the holding cost per unit of storage surface per period in the public warehouse.
• Ballou's model also introduces $\alpha$, $0 < \alpha < 1$, which is the fraction of the private warehouse that is available for storage.

The expected cost for the planning horizon $H$ is:

$C_H = \sum_{i=1}^{N} \sum_{j=1}^{M} p_{h_i,j} \left\{ C_0\, x + C_{pv}\, y_{h_i,j} + C_{pub}\, ( d_{h_i,j} - y_{h_i,j} ) \right\}$   (11.1)

where:

$y_{h_i,j} = \begin{cases} \alpha x & \text{if } d_{h_i,j} > \alpha x \\ d_{h_i,j} & \text{otherwise} \end{cases}$

We denote by $\bar{y}_{h_i}$ the expected value of the private warehouse surface used during period $h_i$, and by $\bar{d}_{h_i}$ the expected value of the private warehouse surface required during period $h_i$. With these notations, the cost (11.1) can be rewritten as:

$C_H = C_0\, N\, x + \sum_{i=1}^{N} \left\{ C_{pv}\, \bar{y}_{h_i} + C_{pub}\, ( \bar{d}_{h_i} - \bar{y}_{h_i} ) \right\}$   (11.2)

under the following constraints:

$\bar{y}_{h_i} \le \alpha x$   (11.3)

$\bar{y}_{h_i} \le \bar{d}_{h_i}$   (11.4)

$x \ge 0, \quad \bar{y}_{h_i} \ge 0$   (11.5)

Relations (11.2) to (11.5) constitute the LP formulation of the static problem, as proposed by (Hung and Fisk, 1984).

Another approach is possible. It is easy to verify that $y_{h_i,j}$ can be rewritten as:

$y_{h_i,j} = d_{h_i,j} - ( d_{h_i,j} - \alpha x )_+$

Remember that:

$(z)_+ = \begin{cases} z & \text{if } z > 0 \\ 0 & \text{otherwise} \end{cases}$

With this formulation, the total cost (11.1) becomes:

$C_H = \sum_{i=1}^{N} \sum_{j=1}^{M} p_{h_i,j} \left\{ C_0\, x + C_{pv}\, d_{h_i,j} + ( C_{pub} - C_{pv} )\, ( d_{h_i,j} - \alpha x )_+ \right\}$
$\quad\ = C_0\, N\, x + \sum_{i=1}^{N} \sum_{j=1}^{M} p_{h_i,j} \left\{ C_{pv}\, d_{h_i,j} + ( C_{pub} - C_{pv} )\, ( d_{h_i,j} - \alpha x )_+ \right\}$

Indeed, $C_{pub} > C_{pv}$ in real-life situations. The term $C_0\, N\, x$ increases with $x$, while the second part of the cost is a non-increasing function of $x$. As a consequence, a unique minimum exists and can be reached using a dichotomy approach, starting with:

$x_m = 0$ and $x_M = \max_{i,j} ( d_{h_i,j} ) / \alpha$
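Since $C_0 N x$ grows linearly in $x$ while the penalty term is convex and non-increasing, the expected cost is convex, and any bracketing search on $[x_m, x_M]$ finds the optimal surface. Here is a minimal sketch (it uses a ternary search rather than a derivative-based dichotomy, and all the numbers are invented for illustration):

```python
def expected_cost(x, d, p, C0, Cpv, Cpub, alpha):
    # Expected cost C_H of Eq. 11.1 after substituting y = d - (d - alpha*x)+
    cost = C0 * len(d) * x
    for di, pi in zip(d, p):            # loop over periods h_i
        for dij, pij in zip(di, pi):    # loop over estimates j
            over = max(dij - alpha * x, 0.0)   # overflow sent to public storage
            cost += pij * (Cpv * dij + (Cpub - Cpv) * over)
    return cost

def best_size(d, p, C0, Cpv, Cpub, alpha, tol=1e-7):
    # Bracketing search on the convex cost, between x_m = 0 and x_M = max d / alpha
    lo, hi = 0.0, max(max(row) for row in d) / alpha
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if expected_cost(m1, d, p, C0, Cpv, Cpub, alpha) < \
           expected_cost(m2, d, p, C0, Cpv, Cpub, alpha):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Invented instance: two periods, two demand estimates per period
d = [[100.0, 150.0], [120.0, 130.0]]
p = [[0.5, 0.5], [0.4, 0.6]]
x_star = best_size(d, p, C0=1.0, Cpv=2.0, Cpub=5.0, alpha=0.8)
print(round(x_star, 2))
```

On this instance the marginal cost of private surface ($C_0 N = 2$) falls below the marginal public-storage saving ($\alpha (C_{pub} - C_{pv})$ times the probability mass of demands exceeding $\alpha x$) exactly when $\alpha x$ reaches 130, so the search converges to $x^* = 130 / 0.8 = 162.5$.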
11.5.3.3 Dynamic Warehouse-sizing Problem
The previous costs are modified to introduce the notion of time:
• $C_{pv}^{h_i}$ is the holding cost per unit of storage surface during period $h_i$ in the private warehouse.
• $C_{pub}^{h_i}$ is the holding cost per unit of storage surface during period $h_i$ in the public warehouse.

We introduce the following new parameters and variables:
• $x_{h_i}$ is the warehouse size during period $h_i$. We denote by $x_0$ the initial warehouse size, i.e., the size at the beginning of $h_1$. This size is known.
• $e_{h_i}$ is the amount of expansion of the warehouse size during period $h_i$.
• $C_{ex}^{h_i}$ is the per unit expansion cost in period $h_i$.
• $r_{h_i}$ is the amount of restriction of the warehouse size during period $h_i$.
• $C_{re}^{h_i}$ is the per unit restriction cost in period $h_i$.

With these notations, the cost to minimize becomes:

$C_H = \sum_{i=1}^{N} \sum_{j=1}^{M} p_{h_i,j} \left\{ C_0\, x_{h_i} + C_{ex}^{h_i}\, e_{h_i} + C_{re}^{h_i}\, r_{h_i} + C_{pv}^{h_i}\, y_{h_i,j} + C_{pub}^{h_i}\, ( d_{h_i,j} - y_{h_i,j} ) \right\}$   (11.6)

under the following constraints that hold for $i = 1, \ldots, N$ and $j = 1, \ldots, M$:

$y_{h_i,j} \le \alpha\, x_{h_i}$   (11.7)

$y_{h_i,j} \le d_{h_i,j}$   (11.8)

$x_{h_i} = x_{h_{i-1}} - r_{h_i} + e_{h_i}$ (state equation)   (11.9)

$y_{h_i,j},\ e_{h_i},\ r_{h_i} \ge 0$   (11.10)

Equality (11.6) can be rewritten as:

$C_H = \sum_{i=1}^{N} \sum_{j=1}^{M} p_{h_i,j} \left\{ C_0\, x_{h_i} + C_{ex}^{h_i}\, e_{h_i} + C_{re}^{h_i}\, r_{h_i} + C_{pv}^{h_i}\, d_{h_i,j} + ( C_{pub}^{h_i} - C_{pv}^{h_i} )\, ( d_{h_i,j} - \alpha\, x_{h_i} )_+ \right\}$   (11.11)

This problem can be solved using either a dichotomy (on the variables $e_{h_i}$ and $r_{h_i}$) or a flow approach.

Remarks:
1. In the static and dynamic models, demands and the related probabilities are assumed to be known. They are sometimes difficult to estimate in a real-life situation.
2. These models concern one type of product, or products that are similar in terms of storage characteristics.
11.6 Warehouse-location Models

11.6.1 Introduction

Warehouse location is a strategic problem, which means that the nature of the decision has long-term impacts on the profitability of the system. This explains the great interest of researchers and practitioners in this issue. Among the most recent publications, it is worth mentioning:
• (Chen et al., 2007) consider the planning of a supply chain network that consists of several plants (already placed in fixed locations), customer zones (sites) and finally some warehouses and distribution centers that have to be located.
• (Mladenovic et al., 2007) study the p-median problem, an NP-hard discrete location problem.
• (Sahin and Süral, 2007) propose a review of hierarchical facility-location models.
• (Ding et al., 2009) address the design of production-distribution networks.

Location problems have mainly been studied for single-level systems, i.e., systems where warehouses are not connected to other warehouses. In (Francis et al., 1983), single-level systems are analyzed from an application point of view.
Figure 11.11 A single-flow hierarchical (2-level) model (level-2 warehouses W1(2) and W2(2), level-1 warehouses W1(1), W2(1) and W3(1), and customer sites h1 to h4)
Hierarchical systems, i.e., systems where warehouses interact within a multiple-layer configuration, are illustrated in Figure 11.11 for a 2-level system. In this model, the warehouses W1(2) and W2(2) (level 2) supply the warehouses of level 1 (W1(1), W2(1) and W3(1)), which, in turn, supply the customer sites. This is a single-flow 2-level structure: a warehouse feeds only entities belonging to the next lower level. In some more complex structures, a warehouse of level k is allowed to feed some entities of level k − i, where i > 1. This is called a "multiflow" system. Figure 11.12 shows a 2-level multiflow system; dotted arrows indicate warehouses that do not supply entities of the next lower level. In the next section, we suggest a model for the 2-level single-flow hierarchical location problem. This model can easily be extended to n levels.
11.6.2 Single-flow Hierarchical Location Problem The locations of the customer sites are given and we assume that a quantitative and qualitative study has been conducted to define candidate locations for the placements of warehouses. For simplicity, we restrict ourselves to the 2-level single-flow model.
Figure 11.12 A multiflow hierarchical (2-level) model (level-2 warehouses W1(2) and W2(2), level-1 warehouses W1(1), W2(1) and W3(1), and customer sites h1 to h4)
We denote by $J$ (respectively, $I$) the set of candidate locations for warehouses at level 2 (respectively, level 1) and by $H$ the set of customer sites. Remember that if $Z$ is a set, then $\mathrm{card}(Z)$ is the number of elements of $Z$.

$c_{i,j}$, $i \in I$, $j \in J$, is the cost charged for transferring one unit of product from $j$ to $i$. Similarly, $c_{h,i}$, $h \in H$, $i \in I$, is the cost charged for transferring one unit of product from $i$ to $h$. We denote by $d_h$ the known (or forecast) demand at site $h \in H$ during a standard period. $a_i$ (respectively, $a_j$) is the capacity of warehouse $i \in I$ (respectively, $j \in J$). $n_I$ (respectively, $n_J$) is the number of locations that can be opened in the set $I$ (respectively, $J$). Indeed, $n_I \le \mathrm{card}(I)$ and $n_J \le \mathrm{card}(J)$.

The decision variables that should be defined are the following:
• $x_j = 1$ if $j \in J$ is an open location, and $x_j = 0$ otherwise.
• Similarly, $y_i = 1$ if $i \in I$ is an open location, and $y_i = 0$ otherwise.
• $f_{i,j}$, $i \in I$, $j \in J$, is the quantity of products that should be transferred from $j$ to $i$ during a standard period.
• $g_{h,i}$, $h \in H$, $i \in I$, is the quantity of products that should be transferred from $i$ to $h$ during a standard period.

According to the previous notations, the cost to minimize is:

$C = \sum_{j \in J} \sum_{i \in I} c_{i,j}\, f_{i,j} + \sum_{i \in I} \sum_{h \in H} c_{h,i}\, g_{h,i}$   (11.12)

under the following constraints:

1. The demands at the customer sites must be satisfied:

$\sum_{i \in I} g_{h,i} = d_h$ for any $h \in H$   (11.13)

2. The flow that arrives in warehouse $i \in I$ from the warehouses of $J$ is equal to the flow that leaves $i$ to feed the customer sites:

$\sum_{j \in J} f_{i,j} = \sum_{h \in H} g_{h,i}$ for any $i \in I$   (11.14)

3. Capacities cannot be violated, which is expressed as:

– At level 2:

$\sum_{i \in I} f_{i,j} \le x_j\, a_j$ for any $j \in J$   (11.15)

– At level 1:

$\sum_{h \in H} g_{h,i} \le y_i\, a_i$ for any $i \in I$   (11.16)

4. The number of locations that are open cannot be greater than allowed:

– At level 2:

$\sum_{j \in J} x_j \le n_J$   (11.17)

– At level 1:

$\sum_{i \in I} y_i \le n_I$   (11.18)

Moreover, flows are non-negative:

$f_{i,j} \ge 0$ for any $i \in I$ and $j \in J$   (11.19)

$g_{h,i} \ge 0$ for any $h \in H$ and $i \in I$   (11.20)
Remarks:
1. Introducing $n_I$ and $n_J$ helps adjust the solution to the available transportation devices.
2. If two entities are not connected, the related transportation cost is set to infinity.
3. The model we have just presented is a MIP (mixed-integer programming) problem, since it includes binary as well as continuous variables.
4. This model is static in the sense that it concerns only one standard period and the system is empty at the beginning of the period. Time is not involved.
5. Since the model is static, it is useful in helping to design a supply chain network.
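Because the cost (11.12) contains no fixed opening charges, the optimum of a tiny instance can be recovered by brute force: enumerate which locations to open (respecting the limits of (11.17) and (11.18)) and, for each choice, solve the remaining flow problem (11.12)–(11.16) exactly as a minimum-cost flow. The self-contained sketch below does this on invented data; it is only a validation tool, since realistic instances require a MIP solver:

```python
import itertools

def min_cost_flow(n, edges, s, t, need):
    # Exact min-cost flow by successive shortest paths (Bellman-Ford handles
    # the negative-cost residual arcs). Suitable only for tiny instances.
    g = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        g[u].append([v, cap, cost, len(g[v])])       # forward arc
        g[v].append([u, 0, -cost, len(g[u]) - 1])    # residual arc
    total, sent, INF = 0, 0, float("inf")
    while sent < need:
        dist, par = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):
            changed = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for ei, (v, cap, cost, _rev) in enumerate(g[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], par[v] = dist[u] + cost, (u, ei)
                        changed = True
            if not changed:
                break
        if dist[t] == INF:
            break                                    # configuration infeasible
        push, v = need - sent, t
        while v != s:                                # bottleneck on the path
            u, ei = par[v]
            push = min(push, g[u][ei][1])
            v = u
        v = t
        while v != s:                                # augment along the path
            u, ei = par[v]
            g[u][ei][1] -= push
            g[v][g[u][ei][3]][1] += push
            v = u
        total += push * dist[t]
        sent += push
    return total, sent

# Invented toy instance: 2 level-2 candidates, 2 level-1 candidates, 2 customers
a2, a1 = [100, 100], [100, 100]        # capacities a_j and a_i
c21 = [[1, 5], [5, 1]]                 # c21[j][i]: unit cost j -> i
c1h = [[1, 5], [5, 1]]                 # c1h[i][h]: unit cost i -> h
d = [10, 10]                           # demands d_h
nJ, nI = 1, 2                          # opening limits (Eqs. 11.17, 11.18)

J, I, H = len(a2), len(a1), len(d)
lvl2 = [1 + j for j in range(J)]                 # node numbering
i_in = [1 + J + 2 * k for k in range(I)]         # each warehouse i is split in
i_out = [1 + J + 2 * k + 1 for k in range(I)]    # two nodes to enforce a_i
cust = [1 + J + 2 * I + h for h in range(H)]
sink, BIG = 1 + J + 2 * I + H, 10 ** 9

best = None
for kJ, kI in itertools.product(range(1, nJ + 1), range(1, nI + 1)):
    for openJ in itertools.combinations(range(J), kJ):
        for openI in itertools.combinations(range(I), kI):
            edges = [(0, lvl2[j], a2[j], 0) for j in openJ]
            edges += [(i_in[i], i_out[i], a1[i], 0) for i in openI]
            edges += [(lvl2[j], i_in[i], BIG, c21[j][i])
                      for j in openJ for i in openI]
            edges += [(i_out[i], cust[h], BIG, c1h[i][h])
                      for i in openI for h in range(H)]
            edges += [(cust[h], sink, d[h], 0) for h in range(H)]
            cost, sent = min_cost_flow(sink + 1, edges, 0, sink, sum(d))
            if sent == sum(d) and (best is None or cost < best):
                best = cost
print("optimal transportation cost:", best)
```

The multiflow variant of the next section only adds direct level-2-to-customer arcs with costs $c_{h,j}$, so the same enumeration scheme applies.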
11.6.3 Multiflow Hierarchical Location Problem

We still consider a two-level model to illustrate the problem. The multiflow model requires some additional parameters and variables:
• $c_{h,j}$, $h \in H$, $j \in J$, is the cost for transferring one unit of product from $j$ to $h$.
• $q_{h,j}$, $h \in H$, $j \in J$, is the quantity of products that should be transferred from $j$ to $h$.

The cost becomes:

$C = \sum_{j \in J} \sum_{i \in I} c_{i,j}\, f_{i,j} + \sum_{i \in I} \sum_{h \in H} c_{h,i}\, g_{h,i} + \sum_{j \in J} \sum_{h \in H} c_{h,j}\, q_{h,j}$   (11.21)

Compared with the single-flow constraints:

1. Constraint (11.13) is replaced with:

$\sum_{i \in I} g_{h,i} + \sum_{j \in J} q_{h,j} = d_h$ for any $h \in H$   (11.22)

2. Constraint (11.14), as well as (11.16) to (11.20), still holds.

3. Constraint (11.15) is replaced with:

$\sum_{i \in I} f_{i,j} + \sum_{h \in H} q_{h,j} \le x_j\, a_j$ for any $j \in J$   (11.23)

4. Furthermore:

$q_{h,j} \ge 0$ for any $h \in H$ and $j \in J$   (11.24)
11.6.4 Remarks on Location Models

The two warehouse-location models shown above are basic. Some extensions and/or improvements of these models are available in the literature. For example, (Sahin and Süral, 2007) present a model that differentiates products arriving at a customer site directly from a warehouse of level 2 from those arriving from the same warehouse after visiting level-1 warehouses.

Moreover, the reader can observe that the suggested quantitative single-criterion models are insufficient. The definition of the best locations for the warehouses is pivotal and therefore requires a more general strategic approach, as well as a dynamic view of a system that should be reconfigurable at low cost, etc.
These aspects cannot be expressed in terms of a single-criterion optimization problem. (Korpela and Lehmusvaara, 1999) propose a five-step approach:
1. A preliminary analysis that states the objectives of the warehouse-network design problem, defines the best and alternative solutions, and gathers information related to these solutions.
2. The second step defines the final evaluation problem using the information collected in the previous step.
3. The third step uses the analytic hierarchy process to analyze warehouse operators (functions) in order to meet customer satisfaction. The authors mention that "the analysis of warehouse operators is based on both factual operations and subjective judgments", the quality of which closely depends on the practical experience of the designers.
4. The definition of a MIP optimization problem based on the maximization of customer satisfaction.
5. The implementation and monitoring of the solution.
11.7 Conclusion

Warehousing is an essential function of logistics systems. The objective of warehouses is to build a bridge between upstream and downstream activities. Warehouses are used to cope with the discrepancy between the relatively slow supply chain response and rapid changes in the quantities ordered: a warehouse helps to react quickly when demand changes abruptly. Warehouses are also useful for reconditioning products to meet customers' requirements or for reorganizing lots for transportation purposes. With the addition of value-added service areas, they can be used to configure and finalize products as near as possible to the customer.

In this chapter we have provided an overview of warehousing problems and of typical warehouse devices, equipment and organization. Design and management aspects were covered. Models for the two principal problems of warehouse sizing and location were presented and discussed, and several ways forward for this domain were suggested.
References

Ballou HR (1973) Business Logistics Management. Prentice-Hall, Englewood Cliffs, NJ
Chen CL, Yuan TW, Lee WC (2007) Multi-criteria fuzzy optimization for locating warehouses and distribution centers in a supply chain network. J. Chin. Inst. Chem. Eng. 38:393–407
Cormier G, Gunn EA (1992) A review of warehouse models. Eur. J. Oper. Res. 58(1):3–13
Ding H, Benyoucef L, Xie X (2009) Stochastic multi-objective production-distribution network design using simulation-based optimization. Int. J. Prod. Res. 47(2):479–506
Francis RL, McGinnis LF, White JA (1983) Location analysis. Eur. J. Oper. Res. 12:220–252
Hung MS, Fisk JC (1984) Economic sizing of warehouses – a linear programming approach. Comput. Oper. Res. 11(1):13–18
Korpela J, Lehmusvaara A (1999) A customer oriented approach to warehouse network evaluation and design. Int. J. Prod. Econ. 59:135–146
Mladenovic N, Brimberg J, Hansen P, Moreno-Pérez JA (2007) The p-median problem: a survey of metaheuristic approaches. Eur. J. Oper. Res. 179:927–939
Rao AK, Rao MR (1998) Solution procedures for sizing of warehouses. Eur. J. Oper. Res. 108:16–25
Sahin G, Süral H (2007) A review of hierarchical facility location models. Comput. Oper. Res. 34:2310–2331
Further Reading

Ascheuer N, Grotschel M, Abdel-Hamid AA-A (1999) Order picking in an automatic warehouse: solving online asymmetric TSPs. Math. Meth. Oper. Res. 49(3):501–515
Bachers R, Dangelmaier W, Warnecke HJ (1988) Selection and use of order-picking strategies in a high-bay warehouse. Mater. Flow 5:233–245
Bartholdi JJ, Gue KR (2000) Reducing labor costs in an LTL crossdocking terminal. Oper. Res. 48(6):823–832
Bozer YA, Quiroz MA, Sharp GP (1988) An evaluation of alternative control strategies and design issues for automated order accumulation and sortation systems. Mater. Flow 4:265–282
Caron F, Marchet G, Perego A (2000) Optimal layout in low-level picker-to-part systems. Int. J. Prod. Res. 38(1):101–117
Chung WWC, Yam AYK, Chan MFS (2004) Network enterprise: a new business model for global sourcing. Int. J. Prod. Econ. 87:267–280
Daniels RL, Rummel JL, Schantz R (1998) A model for warehouse order picking. Eur. J. Oper. Res. 105:1–17
Egbelu PJ (1991) Framework for dynamic positioning of storage/retrieval machines in an automated storage/retrieval system. Int. J. Prod. Res. 29(1):17–37
Elsayed EA, Lee M-K, Kim S, Scherer E (1993) Sequencing and batching procedures for minimizing earliness and tardiness penalty of order retrievals. Int. J. Prod. Res. 31(3):727–736
Ioannou G, Prastacos GP, Skinzi G (2004) Inventory positioning in multiple product supply chains. Ann. Oper. Res. 126:195–213
Narasimhan R, Srinivas T, Das A (2004) Exploring flexibility and execution competencies in manufacturing firms. J. Oper. Manag. 22:91–106
Frazelle EH (2002) World-class Warehousing and Material Handling. McGraw Hill, New York, NY
Gademann N, van de Velde S (2005) Order batching to minimize total travel time in parallel-aisle warehouses. IIE Trans. 37(1):63–75
Gallego G, Queyranne M, Simchi-Levi D (1996) Single resource multi-item inventory system. Oper. Res. 44(4):580–595
Goetschalckx M, Ratliff HD (1988) Order picking in an aisle. IIE Trans. 20(1):53–62
Hackman ST, Rosenblatt MJ (1990) Allocating items to an automated storage and retrieval system. IIE Trans. 22(1):7–14
Hariga MA, Jackson PL (1996) The warehouse scheduling problem: formulation and algorithms. IIE Trans. 28(2):115–127
Hwang H, Song JY (1993) Sequencing picking operations and travel time models for man-on-board storage and retrieval system. Int. J. Prod. Econ. 29:75–88
Jane CC (2000) Storage location assignment in a distribution center. Int. J. Phys. Distr. Log. Manag. 30(1):55–71
Jarvis JM, McDowel ED (1991) Optimal product layout in an order picking warehouse. IIE Trans. 23(1):93–102
Kim B-I, Heragu SS, Graves RJ, Onge AS (2005) Clustering-based order-picking sequence algorithm for an automated warehouse. Int. J. Prod. Res. 41(15):3445–3460
Lee HS, Schaefer SK (1997) Sequencing methods for automated storage and retrieval system with dedicated storage. Comput. Ind. Eng. 32(2):351–362
Liu CH, Lu LY (1999) The procedure of determining the order picking strategies in distribution center. Int. J. Prod. Econ. 60–61:301–307
Meller RD (1997) Optimal order-to-lane assignments in an order accumulation/sortation system. IIE Trans. 29:293–301
Pan C-H, Liu S-Y (1995) A comparative study of order batching algorithms. Omega 23(6):691–700
Petersen CG (1999) The impact of routing and storage policies on warehouse efficiency. Int. J. Oper. Prod. Manag. 19(10):1053–1064
Petersen CG (2002) Consideration in order picking zone configuration. Int. J. Oper. Prod. Manag. 22(7):793–805
Petersen CG, Aase G (2004) A comparison of picking, storage, and routing policies in manual order picking. Int. J. Prod. Econ. 92(1):11–19
Roodbergen KJ, De Koster R (2001) Routing order pickers in a warehouse with a middle aisle. Eur. J. Oper. Res. 133:32–43
Rosenwein MB (1996) A comparison of heuristics for the problem of batching orders for warehouse selection. Int. J. Prod. Res. 34(3):657–664
Skintzi G, Ioannou G, Prastacos G (2008) Investigating warehousing policies. Int. J. Prod. Econ. 112:955–970
van den Berg JP (2002) Analytic expressions for the optimal dwell point in an automated storage/retrieval system. Int. J. Prod. Econ. 76(1):13–25
van den Berg JP, Gademann AJRMN (1999) Optimal routing in an automated storage/retrieval system with dedicated storage. IIE Trans. 31:407–415
Won J, Olafsson S (2005) Joint order batching and order picking in warehouse operations. Int. J. Prod. Res. 43(7):1427–1442
Zhang G, Xue J, Lai KK (2002) A class of genetic algorithms for multiple-level warehouse layout problems. Int. J. Prod. Res. 40(3):731–744
Appendix A
Simulated Annealing
A.1 Introduction

Simulated annealing can be considered an extension of local optimization methods because:
• The approach can be applied to criteria that are neither continuous nor continuously differentiable. It is only necessary to be able to compute the value of the criterion for any feasible solution. Thus, the criterion may be given by any type of function, or even by an algorithm that returns a numerical value starting from the values of the parameters that define a solution.
• The variables can be of a qualitative nature, it being only necessary to be able to derive from their "values" a quantitative value of the criterion.

Indeed, simulated annealing applies easily to combinatorial optimization problems. A problem P that belongs to the field of application of simulated annealing is expressed as any optimization problem:

Find $s^* \in S$ such that $f(s^*) = \mathrm{Opt}_{s \in S} f(s)$

where:
• $S$ is the set of feasible solutions. A feasible solution is one that satisfies all the constraints. We will see some examples in the next section.
• $f$ is a "function" in the broadest sense, as explained above. This function is the criterion.
• Opt refers either to minimization or to maximization, depending on the type of problem.
A.2 Basic Requirements

In order to apply simulated annealing, we must be able to:
• Compute the value of the criterion for any feasible solution.
• Define an initial feasible solution.
• Derive a neighboring feasible solution from any current solution.

The criterion depends on the problem to be solved. An initial feasible solution may be difficult to find; a heuristic is often used to generate one. Another possibility is to start with a solution that is not feasible and to penalize the criterion in order to move away from this solution as soon as possible. A neighboring solution is usually obtained by slightly altering the solution at hand. Again, the way the current solution is altered depends on the type of problem considered. Let us illustrate these basic requirements with the following four examples.
A2.1 Traveling Salesman Problem

A salesman has to visit shops located in n different towns. The objective is to find the shortest circuit passing once through each town. The circuit starts from the salesman's office, which is located in an (n + 1)-th town, and ends at that same office. For this problem, the criterion is the length of the circuit. Any circuit passing once through each town is a feasible solution. Note that the number of feasible solutions is equal to n!, even though n + 1 towns are concerned: the problem is indeed combinatorial. A neighboring solution of a given circuit is obtained by permuting two towns of the circuit, these towns being chosen at random among the n towns to visit.
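The three requirements above (criterion, initial solution, neighboring solution) are all the problem-specific code simulated annealing needs. The sketch below applies them to this traveling-salesman example, using random coordinates, the swap-two-towns neighborhood, geometric cooling and the standard Metropolis acceptance rule; all parameter values are arbitrary:

```python
import math, random

def tour_length(order, pts):
    # Length of the closed circuit office (index 0) -> towns -> office.
    path = [0] + list(order) + [0]
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(path, path[1:]))

def anneal_tsp(pts, T0=10.0, cooling=0.999, iters=20000, seed=1):
    rng = random.Random(seed)
    cur = list(range(1, len(pts)))           # initial solution: random order
    rng.shuffle(cur)
    cur_len = tour_length(cur, pts)
    best, best_len = cur[:], cur_len
    T = T0
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]  # neighbor: permute two towns
        cand_len = tour_length(cand, pts)
        delta = cand_len - cur_len
        # Metropolis rule: always accept improvements, sometimes accept others
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            cur, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = cur[:], cur_len
        T *= cooling                         # geometric cooling schedule
    return best, best_len

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(9)]  # office + 8 towns
tour, length = anneal_tsp(pts)
print(tour, round(length, 3))
```

Tracking the best solution found separately from the current one guarantees the returned circuit is never worse than any circuit visited during the search, even if the chain ends in an uphill region.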
A2.2 Balancing a Paced Assembly Line We refer to the notations introduced in Chapter 7. We consider that the cycle time C is given as well as a partial order over the set of operations and the operation times. A feasible solution is the assignment of the operations among N stations such that the partial order is satisfied and the sum of the operation times in any station is less than or equal to C.
The criterion could be (7.1) or (7.2) defined in Chapter 7, or any other criterion derived from the distribution of the operations among the stations. A way to obtain a neighboring solution has already been explained in Chapter 7. It consists either in permuting two operations located in two consecutive stations or in moving one operation to the next (or the previous) station, assuming that the capacity constraints and the partial order are satisfied by this new solution.
A2.3 Layout Problem This layout problem consists in arranging the resources (machines, equipment, etc.) in a shop in order to minimize a criterion that is often the sum of the products of the average flows between machines by the distance covered by these flows. A feasible solution should satisfy various constraints such as, for instance: • The resources must be located on a surface that is limited by the walls of the shop. • Some resources must be located near the entrance (or the exit) of the shop, for practical reasons. • The location of some resources is fixed. This is the case when the machines are particularly heavy and require a floor that is reinforced. • Some pairs of resources must be close to each other since they work together. • Some pairs of machines must be far from each other. One reason could be that a machine emits vibrations that disturb the functioning of another. These constraints are some of the most frequent when solving this kind of problem. A neighboring solution of a given solution is obtained either by shifting a resource to an idle location or by permuting two resources. An example is developed in Section 10.2.5.2.
A2.4 Establishing a School Timetable Several criteria may be proposed in such a problem such as, for instance: • Minimize the total time students have to stay at school. This criterion can be expressed as the sum over the classes and the days of the week of the difference between the time students leave school in the evening and the time they arrive at school in the morning. • Minimize the sum of the idle periods between the first and the last course of each day, for all the teachers and days of the week.
• Minimize the total number of courses that are taught outside a “normal” activity period. The “normal” activity period could be the period between 8 a.m. and 5 p.m. Thus, we can use one of these criteria or a weighted sum of two of them or all of them. A feasible timetable verifies constraints like: 1. Some special courses must be taught in specific rooms (chemistry, physics, language labs, etc.). 2. A professor cannot teach more than one course at a time. 3. The timetable should fit with the courses that must be taught in the classes. 4. Each teacher should teach courses corresponding to her/his specialty. 5. Each teacher should teach a given number of hours every week. A neighboring solution of a given solution can be obtained by: • moving a course to a free period chosen at random; • permuting two courses (with the corresponding teachers); • permuting two classes that are concerned by the same course, without permuting the teachers. The above lists are not exhaustive.
A.3 Simulated Annealing Algorithm A3.1 A Brief Insight into the Simulated Annealing Approach The basic idea behind a simulated annealing algorithm is to generate step by step a sequence of solutions, without requiring an improvement of the solution at each step. Simulated annealing can keep a solution that is worse than the previous one with a probability. This probability diminishes when the deterioration of the criterion grows and when the number of solutions already generated increases. The goal of this approach is to avoid being entrapped in a subset of feasible solutions, like when using a gradient method for a multimodal function. Consider Figure A.1, for instance, that corresponds to a minimization problem. There are three levels of contour lines visible. Each type of contour line corresponds to a value of the criterion: the thick lines represent solutions having a criterion value equal to 1000, the dotted lines represent solutions having a criterion value equal to 800, etc.
Figure A.1 The search path in the set of feasible solutions (local minima X1 to X5, initial solution X0; contour lines at criterion values 1000, 800 and 600)
The local minimum values are X1 to X5. If X0 represents the initial solution, then a gradient method would generate a sequence of solutions that tends towards the local minimum X3. Using the simulated annealing approach we can visit several "basins" and possibly find a solution whose criterion value is better than that of X3. We will now investigate how to decide whether we should keep or reject a solution that is generated as a neighboring solution of the previous one.
A3.2 Accepting or Rejecting a New Solution
Let Sn be the previous solution and U(Sn) the corresponding value of the criterion. A neighboring solution Sn+1 has been derived at random from Sn and U(Sn+1) is the corresponding criterion value. If Sn+1 is "better" than Sn, that is to say if U(Sn+1) ≤ U(Sn) in the case of a minimization problem and U(Sn+1) ≥ U(Sn) in the case of a maximization problem, we keep the solution Sn+1 as the next current solution of the sequence. If Sn+1 is "worse" than Sn, then we take the solution Sn+1 as the next current solution of the sequence with the probability calculated as follows:

pn = exp( −Δn / Tn )    (A.1)

where:

Δn = U(Sn+1) − U(Sn)
Tn is a decreasing function of the rank n of the solution. Tn is called the "temperature". Algorithm A.1 is used to accept or reject Sn+1 when this solution is "worse" than Sn.

Algorithm A.1.
1. Generate at random a real number x on the interval [0, 1] (uniform probability density).
2. Compute pn according to (A.1).
3. If pn ≥ x, then keep Sn+1 as the next current solution of the sequence, otherwise reject Sn+1 and keep Sn as the next current solution.
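Algorithm A.1 can be transcribed almost literally. This is a Python sketch (the function name is ours, not the book's):

```python
import math
import random

def accept_worse(delta, T, rng=random):
    """Algorithm A.1: keep a 'worse' solution with probability exp(-delta/T),
    where delta >= 0 is the deterioration of the criterion."""
    x = rng.random()            # step 1: uniform random number on [0, 1]
    p = math.exp(-delta / T)    # step 2: probability p_n from (A.1)
    return p >= x               # step 3: keep the new solution if p_n >= x
```

Note that for delta = 0 the probability is 1, so a solution that does not deteriorate the criterion is always kept.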
A3.3 Temperature
Two decisions should be made:
• Give an initial value T0 to the temperature.
• Define the way the temperature is reduced.
A3.3.1 Choice of the Initial Temperature
No general algorithm exists to define the initial temperature T0. Practically, several trials are necessary to achieve an acceptable value for T0. This is a value large enough to guarantee a sufficient number of iterations to reach a "good" solution, but limited to avoid a computational burden. One situation should be mentioned. Consider Relation A.1 at iteration 0:

p0 = exp( −Δ0 / T0 )

Therefore:

ln( p0 ) = −Δ0 / T0

Thus:

T0 = −Δ0 / ln( p0 )
Assume that we are able to define the maximum value Δmax of Δ0, or the maximum deterioration of the criterion when switching from a solution to a neighboring one. Suppose also that we choose p0 as the probability to keep a "worse" solution at the first iteration. Then:

T0 = −Δmax / ln( p0 )
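This formula is straightforward to apply once Δmax and p0 have been chosen. A small sketch (ours, under the assumption that both values are known):

```python
import math

def initial_temperature(delta_max, p0):
    """T0 such that a worst-case deterioration delta_max is still accepted
    with probability p0 at the first iteration: T0 = -delta_max / ln(p0)."""
    return -delta_max / math.log(p0)
```

For instance, with Δmax = 10 and p0 = 0.5 we obtain T0 = 10 / ln 2 ≈ 14.43; choosing a larger p0 yields a larger T0, i.e., a more permissive start.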
A3.3.2 Evolution of the Temperature
The temperature decreases every Kn iterations. In this notation, n represents the rank of the current solution. Different definitions of Kn are possible:
• (a) Kn = constant. In this case, the temperature decreases periodically. If Kn = 1 the temperature decreases at each iteration.
• (b) Kn = Kn−1 + constant, K0 being given. Thus, the size of the plateaus on which the temperature remains constant grows following an arithmetical progression.
• (c) Kn = Kn−1 / a with a < 1. (Indeed, we keep the nearest integer value.) In this case, the size of the plateaus evolves following roughly a geometrical progression.
• (d) Kn = ( Kn−1 )^(1/a) with a < 1. (The nearest integer value is kept.) Thus, the size of the plateaus grows roughly exponentially.
• (e) Kn = constant / ln( Tn ). (The nearest integer value is retained.) In this case, the size of the plateaus evolves roughly logarithmically. In this formula, Tn is the temperature at iteration n.
Rule (a) is the most often used and seems to be efficient for solving most problems. When the temperature decreases, several rules are possible:
1. Tn = Tn−1 − constant;
2. Tn = a Tn−1 with a < 1;
3. Tn = constant / ( 1 + n );
4. Tn = constant / ln( 1 + n ).
Rule 2 is the most popular. In Algorithm A.2 presented in Section A3.3.4, we use Kn = K (i.e., we keep the same temperature during K successive iterations) and Rule 2 to modify the temperature at the end of each plateau.
A3.3.3 How to End the Computation?
Three tests are possible to stop the computation:
• When the temperature becomes less than a given value ε.
• When the number of iterations exceeds a given value W.
• When no improvement occurs after a given number Z of iterations.
In the algorithm presented in the next section, we stop the computation when the temperature becomes less than a given value ε.

A3.3.4 Simulated Annealing Algorithm
The simulated annealing algorithm presented hereafter can be modified to change the rules that define Kn and Tn. We also presume that the goal is to minimize the criterion. In this algorithm, we chose Kn = K and Tn = a Tn−1 with a < 1, and T is the current temperature. In this algorithm, S* is the best solution in the sequence of solutions generated so far.

Algorithm A.2. (Simulated Annealing)
1. Introduce T, a, K, ε.
2. Generate at random a feasible solution S0, calculate the corresponding value U(S0) of the criterion and set S* = S0, U(S*) = U(S0).
3. Set k = 0.
4. Set k = k + 1.
5. Generate at random a feasible solution S1 in the neighborhood of S0 and compute U(S1).
6. Compute Δ = U(S1) − U(S0).
7. Test:
7.1. If Δ ≤ 0:
7.1.1. Set S0 = S1 and U(S0) = U(S1).
7.1.2. If U(S1) < U(S*), then set S* = S1 and U(S*) = U(S1).
7.2. If Δ > 0, then do:
7.2.1. Generate at random x ∈ [0, 1] (uniform distribution).
7.2.2. Compute p = exp( −Δ / T ).
7.2.3. If x ≤ p, then set S0 = S1 and U(S0) = U(S1).
8. If k < K, go to 4. Otherwise do:
8.1. Set T = a T.
8.2. Set k = 0.
8.3. If T ≥ ε, then go to 4.
9. Display S* and U(S*).
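As an illustration, Algorithm A.2 can be sketched for the traveling salesman problem of Section A2.1. This is a minimal Python sketch, not code from the book; the towns are indexed 0 to n − 1 and the function and parameter names are ours.

```python
import math
import random

def tour_length(tour, dist):
    """Criterion: length of the closed circuit through all towns."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, T, a, K, eps, rng):
    """Algorithm A.2: K iterations per temperature plateau, cooling T = a*T,
    stop when the temperature drops below eps."""
    n = len(dist)
    s0 = list(range(n))
    rng.shuffle(s0)                       # step 2: random initial feasible solution
    u0 = tour_length(s0, dist)
    s_best, u_best = s0[:], u0            # S*, U(S*)
    k = 0
    while T >= eps:
        k += 1
        i, j = rng.sample(range(n), 2)    # step 5: neighbor = swap two towns
        s1 = s0[:]
        s1[i], s1[j] = s1[j], s1[i]
        u1 = tour_length(s1, dist)
        delta = u1 - u0                   # step 6
        # step 7: keep an improving solution, or a worse one with prob. exp(-delta/T)
        if delta <= 0 or rng.random() <= math.exp(-delta / T):
            s0, u0 = s1, u1
            if u0 < u_best:
                s_best, u_best = s0[:], u0
        if k >= K:                        # step 8: end of plateau, cool down
            T *= a
            k = 0
    return s_best, u_best                 # step 9
```

Running the algorithm several times with different random seeds typically returns distinct circuits of similar length, which matches the behavior described in the conclusion below.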
If, in practice, it is impossible to generate a feasible initial solution because of the complexity of the constraints, we start with a solution that is not feasible and compensate by penalizing the criterion.
A.4 Conclusion
Two noteworthy advantages of simulated annealing are:
• It is easy to program, whatever the rules used to define Kn and Tn or the test chosen to stop the computation.
• The solutions obtained when running this algorithm several times with the same data are of similar quality (i.e., close criterion values), but they may differ from each other. This allows users to choose a solution among several "good" solutions according to their experience in the field.
A.5 Recommended Reading Azencott R (1992) Simulated Annealing Parallelization Techniques. John Wiley & Sons, New York, NY Cerny V (1985) A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm. J. Opt. Th. Appl. 45:41–51 Darema F, Kirkpatrick S, Norton VA (1987) Parallel algorithms for chip placement by simulated annealing. IBM J. Res. Dev. 31(3):391–402 Das A, Chakrabarti BK (eds) (2005) Quantum Annealing and Related Optimization Methods. Lecture Notes in Physics 679, Springer, Heidelberg De Vicente J, Lanchares J, Hermida R (2003) Placement by thermodynamic simulated annealing. Phys. Lett. A 317(5–6):415–423 Eglese RW (1990) Simulated annealing: a tool for operational research. Eur. J. Oper. Res. 46:271–281 Harhalakis G, Proth J-M, Xie XL (1990) Manufacturing cell design using simulated annealing: an industrial application. J. Intell. Manuf. 1(3):185–191 Johnson DS, Aragon CR, McGeoch LA, Schevon C (1989) Optimization by simulated annealing: an experimental evaluation; Part I: Graph partitioning. Oper. Res. 37(6):865–892 Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680 Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E (1953) Equation of state calculations by fast computing machine. J. Chem. Phys. 21:1087–1092 Proth J-M, Souilah A (1992) Near-optimal layout algorithm based on simulated annealing. Int. J. Syst. Aut.: Res. Appl. 2:227–243 Tam KY (1992) A simulated annealing algorithm for allocating space to manufacturing cells. Int. J. Prod. Res. 30(1):63–87 Ware JM, Thomas N (2003) Automated cartographic map generalisation with multiple operators: a simulated annealing approach. Int. J. Geogr. Inf. Sci. 17(8):743–769 Weinberger E (1990) Correlated and uncorrelated fitness landscapes and how to tell the difference. Biolog. Cybern. 63(5):325–336
Appendix B
Dynamic Programming
B.1 Dynamic Programming (DP) Formulation
B.1.1 Optimality Principle1
The optimality principle is the basis of dynamic programming (DP). It can be formulated as follows: Let C(A, B) be an optimal path between two points A and B and let X be a point belonging to this path. Then the part of the path joining X to B, denoted C(X, B), is an optimal path between X and B. To make the optimality principle easy to understand, assume that C(X, B) is not optimal and denote by C*(X, B) an optimal path joining X to B (see Figure B.1). In this case, C(A, X) ∘ C*(X, B) would be better than C(A, B) = C(A, X) ∘ C(X, B), where "∘" denotes the concatenation operator. This contradicts the initial assumption.
Figure B.1 Illustration of the optimality principle

1 Also known as the "Bellman principle".
A common expression of the optimality principle is: Every optimal control is composed of partial optimal controls.
Note: the reverse of this statement is not true: the concatenation of partial optimal controls is usually not an optimal control, except if the elements on which the partial controls apply are independent from each other. In other words: The global optimum is not the concatenation of local optima.
B.1.2 General DP Problem: Characteristics and Formulation
B.1.2.1 Recursive Problems and Definitions
Let P(x) be the set of predecessors of x and S(x) the set of successors of x. A dynamic programming approach applies to recursive problems that meet the following characteristics:
1. The system is made up of a finite number of states.
2. Two particular states should be mentioned:
– The initial state xI, characterized by P(xI) = ∅, which means that the initial state cannot be reached from another state (xI does not have predecessors).
– The final state xF, characterized by S(xF) = ∅, which means that no state can be reached starting from the final state (xF does not have successors).
3. For any state x ≠ xI and x ≠ xF we have P(x) ≠ ∅ and S(x) ≠ ∅.
4. Let x ≠ xF be a state. There exists a decision that, when applied to x, leads to any y ∈ S(x).
5. Any x ≠ xI results from a decision applied to a state z ∈ P(x).
6. Whatever the state x of the system, there does not exist a sequence of decisions that leads to x again.
If we represent the system by a set of nodes (a node represents a state), also called vertices, and a set of directed arcs (an arc joins a node x to a node y if y ∈ S(x)), we obtain a connected digraph. This digraph does not contain a circuit (directed circuit). Finally, starting from the initial state xI, a finite sequence of feasible decisions always exists to reach the final state xF. A positive real value, which can be a distance, a cost or any other characteristic, is associated with each decision.
Figure B.2 Illustration of the general dynamic programming problem. The arc weights, used in the examples below, are: w(xI, x1) = 3, w(xI, x2) = 8, w(xI, x4) = 14, w(xI, x5) = 17, w(x1, x3) = 13, w(x1, x4) = 5, w(x2, x4) = 9, w(x2, x5) = 10, w(x2, x6) = 11, w(x3, x7) = 12, w(x3, x9) = 15, w(x3, x10) = 20, w(x4, x7) = 11, w(x4, x8) = 2, w(x5, x6) = 7, w(x5, x8) = 12, w(x5, xF) = 25, w(x6, x8) = 10, w(x7, x9) = 11, w(x8, xF) = 4, w(x9, x10) = 5, w(x9, xF) = 5, w(x10, xF) = 12
As mentioned above, such a system can be represented by a connected digraph as in Figure B.2. The nodes of the digraph represent the states of the system. The directed arcs join the nodes to their successors. They represent the decisions that are made in the states represented by the origins of the arcs to reach the states represented by the ends of the arcs. The values associated with the arcs represent the "costs" of the decisions. A path is a sequence of nodes that starts with the initial state xI and ends with the final state xF. The length of a path is the sum of the values associated with the arcs that belong to the path. For instance, in Figure B.2 the length of the path { xI, x2, x4, x7, x9, xF } is 8 + 9 + 11 + 11 + 5 = 44. The objective is to find an optimal path: a path that has the minimum (or the maximum) length, depending on the type of problem at hand. Consider a node x ( x ≠ xI ) of the digraph and assume that we know the optimal path that joins xI to any predecessor y of x ( y ∈ P(x) ). Let K(y) be the length of this optimal path. According to the optimality principle:

K(x) = Opt over y ∈ P(x) of { K(y) + w(y, x) }    (B.1)

where w(y, x) is the "cost" associated with the directed arc (y, x). Note that K(xI) = 0. Indeed, (B.1) can be applied only if K(y) is known for all y ∈ P(x). Thus, a constraint applies to the order in which the optimal "costs" are computed. Furthermore, each time a new "cost" is computed, we preserve the predecessor that led to the optimum, as seen in the examples presented in the next subsection. This is necessary to build up the optimal path when K(xF) is computed. This approach is illustrated in Figure B.3.
Figure B.3 Forward dynamic programming approach
In the next subsection, we explain why it is described as the forward dynamic programming approach.

B.1.2.2 Forward Step
For the forward formulation, the solution is built starting from the initial node. The algorithm is given hereafter.

Algorithm B.1.
1. Set K(xI) = 0 (initialization).
2. Select x such that K(y) has already been computed for every y ∈ P(x).
3. Apply Equation B.1 to compute K(x) and denote by p(x) the predecessor of x that led to the optimum. Note that several such predecessors may exist, which means that several optimal solutions are available.
4. Test:
4.1. If x ≠ xF, then go to 2.
4.2. If x = xF, then do:
4.2.1. K(xF) is the value associated with the optimal sequence.
4.2.2. Build the optimal sequence backward: xF, x1* = p(xF), x2* = p(x1*), …, xn* = p(xn−1*), xI = p(xn*).
5. Display the optimal sequence xI, xn*, xn−1*, …, x2*, x1*, xF and the associated cost K(xF).
Remark: If several predecessors of a given node lead to the optimum of (B.1), the same number of optimal sequences can be built at Stage 4.2.2 (see Example 2).
Example 1: A Minimization Problem Using a Forward DP Approach
Consider the digraph presented in Figure B.2 and assume that we are interested in finding the shortest path between xI and xF. We start by setting K(xI) = 0. Then we consider the nodes whose predecessors have all been previously processed and apply (B.1). The node that provides the minimum value is given between parentheses.

K(x1) = K(xI) + w(xI, x1) = 0 + 3 = 3    (xI)
K(x2) = K(xI) + w(xI, x2) = 0 + 8 = 8    (xI)
K(x3) = K(x1) + w(x1, x3) = 3 + 13 = 16    (x1)
K(x4) = Min{ K(x1) + w(x1, x4), K(xI) + w(xI, x4), K(x2) + w(x2, x4) } = Min{ 3 + 5, 0 + 14, 8 + 9 } = 8    (x1)
K(x5) = Min{ K(xI) + w(xI, x5), K(x2) + w(x2, x5) } = Min{ 0 + 17, 8 + 10 } = 17    (xI)
K(x6) = Min{ K(x2) + w(x2, x6), K(x5) + w(x5, x6) } = Min{ 8 + 11, 17 + 7 } = 19    (x2)
K(x7) = Min{ K(x3) + w(x3, x7), K(x4) + w(x4, x7) } = Min{ 16 + 12, 8 + 11 } = 19    (x4)
K(x8) = Min{ K(x4) + w(x4, x8), K(x6) + w(x6, x8), K(x5) + w(x5, x8) } = Min{ 8 + 2, 19 + 10, 17 + 12 } = 10    (x4)
K(x9) = Min{ K(x3) + w(x3, x9), K(x7) + w(x7, x9) } = Min{ 16 + 15, 19 + 11 } = 30    (x7)
K(x10) = Min{ K(x3) + w(x3, x10), K(x9) + w(x9, x10) } = Min{ 16 + 20, 30 + 5 } = 35    (x9)
K(xF) = Min{ K(x10) + w(x10, xF), K(x9) + w(x9, xF), K(x8) + w(x8, xF), K(x5) + w(x5, xF) } = Min{ 35 + 12, 30 + 5, 10 + 4, 17 + 25 } = 14    (x8)
As shown, the length of the shortest path is 14. We now have to build the shortest path backward. The last node of the path is xF. The node kept when processing xF is x8: this node precedes xF in the shortest path. The node kept when processing x8 is x4, the node kept when processing x4 is x1, and the node kept when processing x1 is xI. Finally, the shortest path is < xI, x1, x4, x8, xF >.

Example 2: A Maximization Problem Using a Forward DP Approach
Assume now that we are interested in computing the longest path between xI and xF in the digraph represented in Figure B.2. The process is the same as previously after replacing Min by Max and keeping the node that led to the maximum each time a node is processed. We obtain:

K(xI) = 0
K(x1) = K(xI) + w(xI, x1) = 0 + 3 = 3    (xI)
K(x2) = K(xI) + w(xI, x2) = 0 + 8 = 8    (xI)
K(x3) = K(x1) + w(x1, x3) = 3 + 13 = 16    (x1)
K(x4) = Max{ K(x1) + w(x1, x4), K(xI) + w(xI, x4), K(x2) + w(x2, x4) } = Max{ 3 + 5, 0 + 14, 8 + 9 } = 17    (x2)
K(x5) = Max{ K(xI) + w(xI, x5), K(x2) + w(x2, x5) } = Max{ 0 + 17, 8 + 10 } = 18    (x2)
K(x6) = Max{ K(x2) + w(x2, x6), K(x5) + w(x5, x6) } = Max{ 8 + 11, 18 + 7 } = 25    (x5)
K(x7) = Max{ K(x3) + w(x3, x7), K(x4) + w(x4, x7) } = Max{ 16 + 12, 17 + 11 } = 28    (x3 and x4)
K(x8) = Max{ K(x4) + w(x4, x8), K(x6) + w(x6, x8), K(x5) + w(x5, x8) } = Max{ 17 + 2, 25 + 10, 18 + 12 } = 35    (x6)
K(x9) = Max{ K(x3) + w(x3, x9), K(x7) + w(x7, x9) } = Max{ 16 + 15, 28 + 11 } = 39    (x7)
K(x10) = Max{ K(x3) + w(x3, x10), K(x9) + w(x9, x10) } = Max{ 16 + 20, 39 + 5 } = 44    (x9)
K(xF) = Max{ K(x10) + w(x10, xF), K(x9) + w(x9, xF), K(x8) + w(x8, xF), K(x5) + w(x5, xF) } = Max{ 44 + 12, 39 + 5, 35 + 4, 18 + 25 } = 56    (x10)

The length of the longest path is 56. We now have to build this path backward. The last node of the path is xF. The node kept when processing xF is x10: this node precedes xF in the longest path. The node kept when processing x10 is x9, and the node kept when processing x9 is x7. When processing x7 we have to keep two nodes: x3 and x4. Thus, we obtain two longest paths.
Path 1: The node kept when processing x3 is x1, and the node kept when processing x1 is xI. Finally, the first longest path is < xI, x1, x3, x7, x9, x10, xF >.
Path 2: The node kept when processing x4 is x2, and the node kept when processing x2 is xI. Finally, the second longest path is < xI, x2, x4, x7, x9, x10, xF >.
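Under the assumption that the digraph of Figure B.2 is stored as a dictionary of arc weights, the forward recursion (B.1) and the backward rebuilding of the path can be sketched as follows. This is a Python illustration (ours, not code from the book):

```python
def forward_dp(w, x_i, x_f, opt=min):
    """Algorithm B.1: K(x) = opt over y in P(x) of K(y) + w(y, x),
    keeping in p(x) the predecessor that led to the optimum."""
    nodes = {u for u, v in w} | {v for u, v in w}
    pred = {x: [] for x in nodes}
    for u, v in w:
        pred[v].append(u)
    K, p = {x_i: 0}, {}
    pending = nodes - {x_i}
    while pending:
        # select a node whose predecessors have all been processed
        x = next(n for n in pending if all(y in K for y in pred[n]))
        K[x], p[x] = opt((K[y] + w[(y, x)], y) for y in pred[x])
        pending.discard(x)
    # build the optimal sequence backward from xF
    path = [x_f]
    while path[-1] != x_i:
        path.append(p[path[-1]])
    return K[x_f], path[::-1]

# Arc weights of the digraph of Figure B.2
w = {('xI', 'x1'): 3, ('xI', 'x2'): 8, ('xI', 'x4'): 14, ('xI', 'x5'): 17,
     ('x1', 'x3'): 13, ('x1', 'x4'): 5, ('x2', 'x4'): 9, ('x2', 'x5'): 10,
     ('x2', 'x6'): 11, ('x3', 'x7'): 12, ('x3', 'x9'): 15, ('x3', 'x10'): 20,
     ('x4', 'x7'): 11, ('x4', 'x8'): 2, ('x5', 'x6'): 7, ('x5', 'x8'): 12,
     ('x5', 'xF'): 25, ('x6', 'x8'): 10, ('x7', 'x9'): 11, ('x8', 'xF'): 4,
     ('x9', 'x10'): 5, ('x9', 'xF'): 5, ('x10', 'xF'): 12}
```

With opt=min the sketch returns the shortest path of Example 1 (length 14); with opt=max it returns length 56 as in Example 2. When several predecessors tie, it keeps only one of them, so only one of the two longest paths is reported.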
B.1.2.3 Backward Formulation
Consider a node x ( x ≠ xF ) of the digraph and assume that we know the optimal path that joins any y ∈ S(x) to xF. Let L(y) be the length of this optimal path. According to the optimality principle:

L(x) = Opt over y ∈ S(x) of { L(y) + w(x, y) }    (B.2)

where w(x, y) is the "cost" associated with the directed arc (x, y). Note that L(xF) = 0. In this approach, the "cost" related to a node can be computed only if the "costs" of all the successors of this node have been computed previously. The computation stops when L(xI) is obtained. Each time (B.2) is used to compute the "cost" L(x) of a node x, we store in s(x) the successor(s) of x that provide the optimal value: these nodes will be used to build the optimal sequence of nodes. This backward approach is illustrated in Figure B.4. The algorithm is given hereafter.

Algorithm B.2.
1. Set L(xF) = 0 (initialization).
2. Select x such that L(y) has already been computed for every y ∈ S(x).
3. Apply Equation B.2 to compute L(x) and denote by s(x) the successor of x that led to the optimum. Note that several such successors may exist, which means that several optimal solutions are available.
4. Test:
4.1. If x ≠ xI, then go to 2.
4.2. If x = xI, do:
4.2.1. L(xI) is the value associated with the optimal sequence.
4.2.2. Build the optimal sequence forward: xI, x1* = s(xI), x2* = s(x1*), …, xn* = s(xn−1*), xF = s(xn*).
5. Display the optimal sequence and the associated "cost" L(xI).
Figure B.4 Backward dynamic programming approach
Remark: If several successors of a given node lead to the optimum of (B.2), the same number of optimal sequences can be built at Stage 4.2.2 (see Example 3). In Example 3, the backward DP approach is applied to find the longest path in the digraph presented in Figure B.2.

Example 3: A Maximization Problem Using a Backward DP Approach
We obtain successively:

L(xF) = 0
L(x10) = w(x10, xF) = 12    (xF)
L(x9) = Max( w(x9, xF), w(x9, x10) + L(x10) ) = Max( 5, 5 + 12 ) = 17    (x10)
L(x8) = w(x8, xF) = 4    (xF)
L(x6) = w(x6, x8) + L(x8) = 10 + 4 = 14    (x8)
L(x7) = w(x7, x9) + L(x9) = 11 + 17 = 28    (x9)
L(x5) = Max( w(x5, xF), w(x5, x8) + L(x8), w(x5, x6) + L(x6) ) = Max( 25, 12 + 4, 7 + 14 ) = 25    (xF)
L(x4) = Max( w(x4, x7) + L(x7), w(x4, x8) + L(x8) ) = Max( 11 + 28, 2 + 4 ) = 39    (x7)
L(x3) = Max( w(x3, x7) + L(x7), w(x3, x9) + L(x9), w(x3, x10) + L(x10) ) = Max( 12 + 28, 15 + 17, 20 + 12 ) = 40    (x7)
L(x2) = Max( w(x2, x4) + L(x4), w(x2, x6) + L(x6), w(x2, x5) + L(x5) ) = Max( 9 + 39, 11 + 14, 10 + 25 ) = 48    (x4)
L(x1) = Max( w(x1, x3) + L(x3), w(x1, x4) + L(x4) ) = Max( 13 + 40, 5 + 39 ) = 53    (x3)
L(xI) = Max( w(xI, x1) + L(x1), w(xI, x4) + L(x4), w(xI, x2) + L(x2), w(xI, x5) + L(x5) ) = Max( 3 + 53, 14 + 39, 8 + 48, 17 + 25 ) = 56    (x1, x2)
There are two optimal sequences of nodes. One of them contains x1 and the second contains x2 . Both are generated forward and start with xI . We obtain: < x I , x1 , x3 , x7 , x9 , x10 , x F > and < x I , x2 , x4 , x7 , x9 , x10 , x F > .
Indeed, the optimal “cost” is 56.
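The backward recursion (B.2) admits a symmetric sketch, again a Python illustration of ours using the arc weights of Figure B.2:

```python
def backward_dp(w, x_i, x_f, opt=max):
    """Algorithm B.2: L(x) = opt over y in S(x) of L(y) + w(x, y),
    keeping in s(x) the successor that led to the optimum."""
    nodes = {u for u, v in w} | {v for u, v in w}
    succ = {x: [] for x in nodes}
    for u, v in w:
        succ[u].append(v)
    L, s = {x_f: 0}, {}
    pending = nodes - {x_f}
    while pending:
        # select a node whose successors have all been processed
        x = next(n for n in pending if all(y in L for y in succ[n]))
        L[x], s[x] = opt((L[y] + w[(x, y)], y) for y in succ[x])
        pending.discard(x)
    # build the optimal sequence forward from xI
    path = [x_i]
    while path[-1] != x_f:
        path.append(s[path[-1]])
    return L[x_i], path

# Arc weights of the digraph of Figure B.2
w = {('xI', 'x1'): 3, ('xI', 'x2'): 8, ('xI', 'x4'): 14, ('xI', 'x5'): 17,
     ('x1', 'x3'): 13, ('x1', 'x4'): 5, ('x2', 'x4'): 9, ('x2', 'x5'): 10,
     ('x2', 'x6'): 11, ('x3', 'x7'): 12, ('x3', 'x9'): 15, ('x3', 'x10'): 20,
     ('x4', 'x7'): 11, ('x4', 'x8'): 2, ('x5', 'x6'): 7, ('x5', 'x8'): 12,
     ('x5', 'xF'): 25, ('x6', 'x8'): 10, ('x7', 'x9'): 11, ('x8', 'xF'): 4,
     ('x9', 'x10'): 5, ('x9', 'xF'): 5, ('x10', 'xF'): 12}
```

backward_dp(w, 'xI', 'xF', max) returns length 56, as in Example 3; since the tie at xI is broken arbitrarily, only one of the two optimal sequences is reported.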
B.2 Illustrative Problems The difficulty encountered when facing a problem is to recognize its nature, that is to say the type of approach that could help to solve it. This is particularly true when DP is a possible approach. In this section, we develop some well-known problems that can be solved using DP.
B.2.1 Elementary Inventory Problem
A company wants to establish a production schedule for an item during the next H elementary periods, an elementary period being either a day or a week or a month, depending on the type of production. H is the horizon of the problem. The manufacturing time to produce a batch of items is negligible. The notations used to set and analyze the problem are the following, for i = 1, …, H:
• xi: Production scheduled during the i-th period. This production becomes available at the end of the period. The variables xi are the control of the system.
• di: Demand requirement at the end of period i. These demands are known.
• yi: Inventory level during period i + 1.

The following state equation characterizes the evolution of the system:

yi = yi−1 + xi − di,  i = 1, …, H    (B.3)

Two sets of constraints apply:
xi ≥ 0, i = 1, …, H    (B.4)

These constraints mean that the production cannot be negative.

yi ≥ 0, i = 1, …, H    (B.5)

These constraints mean that backlogging is not allowed. We also know y0, which is the inventory level at the beginning of the first period (initial inventory level). A feasible solution (or control) is given by X = { x1, …, xH } that verifies Relations B.3–B.5. Two sets of costs are taken into account:
• fi(yi−1), i = 1, …, H, which are the costs for keeping in stock a quantity yi−1 during the i-th period. These inventory costs are concave and non-decreasing.2
• ci(xi), i = 1, …, H, which are the costs for manufacturing a quantity xi during the i-th period. These production costs are also concave and non-decreasing.

Since the costs depend on the periods, we are in the non-stationary case (more general). Thus, the cost associated with a feasible control X = { x1, …, xH } is:

C(X) = sum for i = 1, …, H of [ fi(yi−1) + ci(xi) ]    (B.6)
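Before optimizing, it helps to be able to check feasibility (B.3)-(B.5) and to evaluate the cost (B.6) of a candidate control. A minimal sketch (ours, not the book's), instantiated with the cost functions of the example at the end of this section:

```python
def evaluate(x, d, y0, f, c):
    """Cost (B.6) of a control x under the state equation (B.3);
    returns None if (B.4) or (B.5) is violated."""
    y, total = y0, 0.0
    for i in range(len(d)):
        if x[i] < 0:
            return None                    # (B.4): production cannot be negative
        total += f[i](y) + c[i](x[i])      # y is y_{i-1}, the level during period i+1's predecessor
        y = y + x[i] - d[i]                # state equation (B.3)
        if y < 0:
            return None                    # (B.5): backlogging is not allowed
    return total

# Data of the example of Section B.2.1
d = [4, 1, 4, 2, 3, 2]
y0 = 6
f = [lambda y, k=k: (0.5 if k < 2 else 0.1) * y for k in range(6)]
c = [lambda q, k=k: 0.0 if q == 0 else q + (1 if k < 2 else 2) for k in range(6)]
```

For instance, the control X = (0, 0, 3, 2, 3, 2), which lets the initial stock cover the first two demands and then produces each demand exactly, is feasible with cost 22.1, while the all-zero control is infeasible.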
The objective is to find a feasible control X* = { x1*, …, xH* } such that:

C(X*) = Min over X ∈ E of C(X)

where E is the set of feasible controls. It is easy to see that there exists an infinite number of feasible solutions. X* is an optimal control. Note that if y0 ≥ σ_1^H = d1 + … + dH, then the optimal solution is xi = 0 for i = 1, …, H.

2 f(x) is concave and non-decreasing if f(x) increases by Δf when x increases by Δx and 0 ≤ Δf ≤ Δx.

Analyzing this problem led to fundamental properties that exempt some of the elements of E from consideration.

Fundamental Properties
There exists an optimal control that has the following properties, for i = 1, …, H:
1. If yi−1 < di, then xi ∈ { di − yi−1, di + di+1 − yi−1, …, σ_i^H − yi−1 }.
In this formulation, σ_r^s = dr + dr+1 + … + ds, and σ_r^s = 0 if s < r.
The first property can be expressed as follows: if the inventory level during a period is not large enough to meet the next demand, then the quantity to produce during the period must be such that the sum of the inventory level and the quantity produced meets exactly a sequence of successive demands, the first of them being the next one. Indeed, the optimal number of successive demands covered by this sum is not given. This property is illustrated in Figure B.5.
2. If yi−1 ≥ di and there exists j < i such that xj > 0, then xi = 0.
In other words, if the inventory level during a period is enough to satisfy the demand at the end of the period and if a production run has been made previously, then there is no production during this period. Note that if no production took place previously, then we may have to produce to reach the optimal solution.
3. If yi−1 ≥ di and xj = 0 for j = 1, …, i − 1, then:

xi ∈ { ( di − yi−1 )+, ( di + di+1 − yi−1 )+, ( σ_i^(i+2) − yi−1 )+, …, ( σ_i^(H−1) − yi−1 )+, ( σ_i^H − yi−1 )+ }

Remember that ( a )+ = 0 if a < 0, and ( a )+ = a otherwise.
This third property can be expressed as follows: if the inventory level during a period is large enough to satisfy the next demand, but if no production has been carried out previously, then the production during the period must be either equal to 0 or such that the sum of the inventory level and the quantity produced exactly meets a sequence of successive demands, the first of them being the next one. Indeed, the number of successive demands covered by this sum must be defined.
Figure B.5 Illustration of result 1 (inventory level versus period)
These three properties will allow us to introduce a DP formulation. Consider an elementary period i ∈ { 1, …, H }. The inventory level during this period is yi−1. According to the possible production levels mentioned above:

yi−1 ∈ { ( y0 − σ_1^(i−1) )+, σ_i^s, σ_i^(s+1), …, σ_i^H }

where s is the smallest integer such that σ_i^s > ( y0 − σ_1^(i−1) )+.
Let C_i( y ) denote the optimal value of the "cost" between elementary period i and elementary period H if the inventory level is y during period i. The backward DP formulation for the lowest inventory level is:

C_i[ ( y_0 − σ_1^{i−1} )^+ ] = f_i[ ( y_0 − σ_1^{i−1} )^+ ] + Min_{j = s, …, H} { c_i[ σ_i^j − ( y_0 − σ_1^{i−1} )^+ ] + C_{i+1}( σ_{i+1}^j ) }    (B.7)
The backward DP formulation for the other possible inventory levels is:

C_i( σ_i^j ) = f_i( σ_i^j ) + c_i( 0 ) + C_{i+1}( σ_{i+1}^j ) for j = s, s + 1, …, H    (B.8)
Assume that y_0 < σ_1^H; otherwise, the optimal solution is to produce nothing on horizon H. We set C_{H+1}( 0 ) = 0 since, according to the above properties and the previous assumption, the inventory level at the end of the last period is equal to 0.

Algorithm B.3. (Inventory optimization algorithm)
1. Set C_{H+1}( 0 ) = 0.
2. For i = H to 2 step −1 do:
   2.1. Compute (B.7).
   2.2. Set x_i = σ_i^{j*} − ( y_0 − σ_1^{i−1} )^+, where j* achieves the minimum in (B.7).
   2.3. Compute (B.8).
3. Compute (B.7) for i = 1.
4. Set x_1 = σ_1^{j*} − y_0, where j* achieves the minimum in (B.7). C_1( y_0 ) is the optimal cost.
5. Set x_1* = x_1.
6. Compute y_1* = y_0 − d_1 + x_1*.
7. For i = 2 to H do:
   7.1. If y_{i−1}* > ( y_0 − σ_1^{i−1} )^+ then x_i* = 0; otherwise x_i* = x_i.
   7.2. Compute y_i* = y_{i−1}* − d_i + x_i*.
Example

Consider an example defined by the following parameters:

H = 6, y_0 = 6, d_1 = 4, d_2 = 1, d_3 = 4, d_4 = 2, d_5 = 3, d_6 = 2

c_i( x ) = 0 if x = 0, and x + 1 if x > 0, for i = 1, 2
f_i( x ) = 0 if x = 0, and 0.5 x if x > 0, for i = 1, 2

c_i( x ) = 0 if x = 0, and x + 2 if x > 0, for i = 3, 4, 5, 6
f_i( x ) = 0 if x = 0, and 0.1 x if x > 0, for i = 3, 4, 5, 6
Applying Algorithm B.3, we obtain:

Step 1: i = H = 6. We search for s, the smallest integer such that σ_6^s > ( y_0 − σ_1^5 )^+. We obtain s = 6. As a consequence:

C6( 0 ) = f6( 0 ) + c6( 2 ) + C7( 0 ) = 0 + 4 + 0 = 4 and x6 = 2
C6( 2 ) = f6( 2 ) + c6( 0 ) + C7( 0 ) = 0.2 + 0 + 0 = 0.2

Step 2: i = 5. We search for s, the smallest integer such that σ_5^s > ( y_0 − σ_1^4 )^+. We obtain s = 5. As a consequence:

C5( 0 ) = f5( 0 ) + Min[ c5( 3 ) + C6( 0 ), c5( 5 ) + C6( 2 ) ] = 7.2 and x5 = 5
C5( 3 ) = f5( 3 ) + c5( 0 ) + C6( 0 ) = 4.3
C5( 5 ) = f5( 5 ) + c5( 0 ) + C6( 2 ) = 0.7
Step 3: i = 4. We search for s, the smallest integer such that σ_4^s > ( y_0 − σ_1^3 )^+. We obtain s = 4. As a consequence:

C4( 0 ) = f4( 0 ) + Min[ c4( 2 ) + C5( 0 ), c4( 5 ) + C5( 3 ), c4( 7 ) + C5( 5 ) ] = 9.7 and x4 = 7
C4( 2 ) = f4( 2 ) + c4( 0 ) + C5( 0 ) = 7.4
C4( 5 ) = f4( 5 ) + c4( 0 ) + C5( 3 ) = 4.8
C4( 7 ) = f4( 7 ) + c4( 0 ) + C5( 5 ) = 1.4
Step 4: i = 3. We search for s, the smallest integer such that σ_3^s > ( y_0 − σ_1^2 )^+. We obtain s = 3. As a consequence:

C3( 1 ) = f3( 1 ) + Min[ c3( 3 ) + C4( 0 ), c3( 5 ) + C4( 2 ), c3( 8 ) + C4( 5 ), c3( 10 ) + C4( 7 ) ] = 13.5 and x3 = 10
C3( 4 ) = f3( 4 ) + c3( 0 ) + C4( 0 ) = 10.1
C3( 6 ) = f3( 6 ) + c3( 0 ) + C4( 2 ) = 8
C3( 9 ) = f3( 9 ) + c3( 0 ) + C4( 5 ) = 5.7
C3( 11 ) = f3( 11 ) + c3( 0 ) + C4( 7 ) = 2.5
Step 5: i = 2. We search for s, the smallest integer such that σ_2^s > ( y_0 − σ_1^1 )^+. We obtain s = 3. As a consequence:

C2( 2 ) = f2( 2 ) + Min[ c2( 3 ) + C3( 4 ), c2( 5 ) + C3( 6 ), c2( 8 ) + C3( 9 ), c2( 10 ) + C3( 11 ) ] = 14.5 and x2 = 10
C2( 5 ) = f2( 5 ) + c2( 0 ) + C3( 4 ) = 12.6
C2( 7 ) = f2( 7 ) + c2( 0 ) + C3( 6 ) = 11.5
C2( 10 ) = f2( 10 ) + c2( 0 ) + C3( 9 ) = 10.7
C2( 12 ) = f2( 12 ) + c2( 0 ) + C3( 11 ) = 8.5
Step 6: i = 1. We search for s, the smallest integer such that σ_1^s > ( y_0 − σ_1^0 )^+. We obtain s = 3. As a consequence:

C1( 6 ) = f1( 6 ) + Min[ c1( 3 ) + C2( 5 ), c1( 5 ) + C2( 7 ), c1( 8 ) + C2( 10 ), c1( 10 ) + C2( 12 ) ] = 19.6 and x1 = 3

Thus, the optimal cost is 19.6.

Forward Process

Now, we start the forward process (Steps 4 to 7 of the algorithm) to build the optimal solution. This process is summarized in Table B.1.

Table B.1 Building the optimal solution
i                        0   1   2   3   4   5   6
( y_0 − σ_1^{i−1} )^+        6   2   1   0   0   0
d_i                          4   1   4   2   3   2
x_i*                         3   0   0   7   0   0
y_i*                     6   5   4   0   5   2   0
The solution is represented in Figure B.6.
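The total cost of the plan of Table B.1 can be checked directly against the data of the example. The sketch below is our own code, not the authors'; it charges the holding cost f_i on the inventory level at the start of period i, consistently with formulas (B.7) and (B.8), and recovers the cost 19.6:

```python
d = [4, 1, 4, 2, 3, 2]   # demands d_1..d_6
y0 = 6                   # initial inventory
x = [3, 0, 0, 7, 0, 0]   # production plan x_1*..x_6* from Table B.1

def c(i, q):             # production cost in period i (1-based)
    if q == 0:
        return 0.0
    return q + 1 if i <= 2 else q + 2

def f(i, y):             # holding cost charged during period i
    return (0.5 if i <= 2 else 0.1) * y

total, y = 0.0, y0
for i, (di, xi) in enumerate(zip(d, x), 1):
    total += f(i, y) + c(i, xi)  # f applies to the pre-production level
    y = y - di + xi              # y_i = y_{i-1} - d_i + x_i
    assert y >= 0                # the plan never backlogs

assert y == 0                    # final inventory is 0, as expected
assert abs(total - 19.6) < 1e-9  # the optimal cost of the example
```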
B.2.2 Capital Placement

A finance company decides to invest Q million euros in N projects. Each project can be developed at any of K different investment levels. Investing at level k_n ∈ { 1, …, K } in project n ∈ { 1, …, N } has a cost v_n( k_n ), and the future earnings are estimated to be b_n( k_n ). Naturally, b_n( k_n ) > v_n( k_n ).
Figure B.6 Optimal inventory level
The problem can be formulated as follows:

Maximize Σ_{n=1}^N b_n( k_n )

subject to:

Σ_{n=1}^N v_n( k_n ) = Q and k_n ∈ { 1, …, K }

A forward DP approach can be used to solve this problem. Let S_n( q ) be the maximum earnings for the first n projects if the investment is q. The DP formulation is:

S_n( q ) = Max_{k_n} { b_n( k_n ) + S_{n−1}( q − v_n( k_n ) ) }    (B.9)

Let k_n*( q ) denote the optimal investment level in project n if the investment is q for the first n projects. We also set:

S_0( q ) = 0, ∀ q ≥ 0
The only difficulty is to determine, at each step of the computation (i.e., for each n), the values of q that have to be considered.
Table B.2 Capital placement data

       n = 1            n = 2            n = 3
k      v1(k)   b1(k)    v2(k)   b2(k)    v3(k)   b3(k)
0      0       0        0       0        0       0
1      2       7        3       4        2       5
2      5       9        6       8        4       6
3      6       10       7       10       6       7
4      7       11       9       12       8       10
We illustrate this approach with a small example, where Q = 10 and the rest of the data is given in Table B.2. We apply Relation B.9 successively to n = 1, 2 and 3. The notation k_i*( z ) denotes the optimal investment level for project i if the total investment in the first i projects is z.

Step 1:
S1( 0 ) = 0 and k1*( 0 ) = 0
S1( 2 ) = 7 and k1*( 2 ) = 1
S1( 5 ) = 9 and k1*( 5 ) = 2
S1( 6 ) = 10 and k1*( 6 ) = 3
S1( 7 ) = 11 and k1*( 7 ) = 4

Step 2:
S2( 0 ) = 0 and k2*( 0 ) = 0
S2( 2 ) = b2( 0 ) + S1( 2 ) = 7 and k2*( 2 ) = 0
S2( 3 ) = b2( 1 ) + S1( 0 ) = 4 and k2*( 3 ) = 1
S2( 5 ) = Max[ b2( 1 ) + S1( 2 ), b2( 0 ) + S1( 5 ) ] = 11 and k2*( 5 ) = 1
S2( 6 ) = Max[ b2( 0 ) + S1( 6 ), b2( 2 ) + S1( 0 ) ] = 10 and k2*( 6 ) = 0
S2( 7 ) = Max[ b2( 0 ) + S1( 7 ), b2( 3 ) + S1( 0 ) ] = 11 and k2*( 7 ) = 0
S2( 8 ) = Max[ b2( 2 ) + S1( 2 ), b2( 1 ) + S1( 5 ) ] = 15 and k2*( 8 ) = 2
S2( 9 ) = Max[ b2( 1 ) + S1( 6 ), b2( 3 ) + S1( 2 ), b2( 4 ) + S1( 0 ) ] = 17 and k2*( 9 ) = 3
S2( 10 ) = b2( 1 ) + S1( 7 ) = 15 and k2*( 10 ) = 1
Step 3:
S3( 10 ) = Max[ b3( 0 ) + S2( 10 ), b3( 1 ) + S2( 8 ), b3( 2 ) + S2( 6 ), b3( 4 ) + S2( 2 ) ] = 20 and k3*( 10 ) = 1

Thus, the maximal sum of future earnings is 20. The optimal strategy is reconstructed backwards. Since k3*( 10 ) = 1, the investment in project 3 should be made at level 1. The maximum in Step 3 is obtained for S2( 8 ). Since k2*( 8 ) = 2, the investment in project 2 should be made at level 2. The maximum in Step 2 is obtained for S1( 2 ). Since k1*( 2 ) = 1, the investment in project 1 should be made at level 1.
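Relation B.9 and the reconstruction above can be mirrored in a few lines of code. The sketch below (variable names are ours) keeps, for every attainable invested amount q, the best earnings and the levels that achieve them:

```python
# data of Table B.2: v[n][k] and b[n][k] for projects n = 1..3, levels k = 0..4
v = [[0, 2, 5, 6, 7], [0, 3, 6, 7, 9], [0, 2, 4, 6, 8]]
b = [[0, 7, 9, 10, 11], [0, 4, 8, 10, 12], [0, 5, 6, 7, 10]]
Q = 10

S = {0: (0, ())}  # invested amount q -> (best earnings, chosen levels)
for vn, bn in zip(v, b):
    T = {}
    for q, (earn, levels) in S.items():
        for k, (cost, gain) in enumerate(zip(vn, bn)):
            nq = q + cost
            if nq <= Q and (nq not in T or earn + gain > T[nq][0]):
                T[nq] = (earn + gain, levels + (k,))
    S = T

earnings, levels = S[Q]  # the total investment must equal Q exactly
assert earnings == 20 and levels == (1, 2, 1)
```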
B.2.3 Project Management

B.2.3.1 Definitions and Examples

A project is a set of tasks on which a partial order applies. This partial order can be of different types:
1. Start task A after task X is completed.
2. Start task A t units of time after the completion of task X.
3. Start task A after the starting time of task X.

Table B.3 A project

Task   Duration   Constraints
A      6          /
B      11         /
C      8          /
D      7          Starts after C is completed
E      14         Starts after B is completed
F      7          Starts after B is completed
G      8          Starts after A is completed
H      2          Starts after A is completed
I      17         Starts after D and E are completed
J      5          Starts 7 units of time after D and E are completed
K      9          Starts after I and H are completed and 5 units of time after F and G are completed
Figure B.7 Graphic representation of the project
A task can be represented by a directed arc: the tail of the arc represents the starting time of the task, the head of the arc its completion time, and the length (or weight) of the arc the duration of the task. A whole project can thus be represented by a digraph. To illustrate the concept, let us consider the project described in Table B.3. It is represented in Figure B.7 by an acyclic directed graph. A forward DP approach is applied to find the longest path between nodes I and O; the results of the computation are the framed values in the figure. Thus, 51 units of time are necessary to complete this project. A backward approach shows that the critical path is B, E, I, K. Hence, to reduce the duration of the project, we have to reduce the duration of one or more of these 4 tasks, until another critical path appears.

B.2.3.2 Earliest and Latest Starting Times of Tasks
It is important to define these two limits since, as long as a task starts between its earliest and latest starting times, neither the schedule of the other tasks nor the duration of the whole project is disturbed. For a task to start, all the tasks that precede it must be completed. As a consequence, the earliest starting times of the tasks result from the computation of the longest path between nodes I and O (see Figure B.7). The computation of the latest starting times begins with the tasks without successors and proceeds backwards: the duration of the task under consideration is subtracted from the minimum among the latest starting times of its successors (decreased, where applicable, by the required time lag), the latest starting times of the successors having been computed earlier. Table B.4 provides these starting times for the example given in Table B.3 and Figure B.7.
Table B.4 Earliest and latest starting times

Task   Earliest starting time   Latest starting time
A      0                        Min( 29, 40 ) − 6 = 23
B      0                        Min( 30, 11 ) − 11 = 0
C      0                        18 − 8 = 10
D      8                        Min( 39, 25 ) − 7 = 18
E      11                       25 − 14 = 11
F      11                       42 − 5 − 7 = 30
G      6                        42 − 5 − 8 = 29
H      6                        42 − 2 = 40
I      25                       42 − 17 = 25
J      32                       51 − 5 = 46
K      42                       51 − 9 = 42
Note that the earliest and the latest starting times of the tasks belonging to the critical sequence are equal.

B.2.3.3 PERT (Program Evaluation and Review Technique) Method

The PERT method is a DP method that involves three time estimates for each task:
1. Optimistic, t_0.
2. Most likely, t_m.
3. Pessimistic, t_p.
These estimates are used to define the duration t* of the task:

t* = ( t_0 + 4 t_m + t_p ) / 6

The duration of the project is then computed using either the backward or the forward algorithm.

B.2.3.4 CPM (Critical Path Method)
This method finds a good tradeoff between the duration and basic cost of a project. The basic idea is that each task can be performed in one of two possible ways:
1. Using a standard mode, which leads to a medium cost but may result in a long task duration.
2. Using an urgent mode, which leads to a high cost (since additional resources are used) but a short task duration.
Assume that the cost is a linear function of time. The goal is to reduce the duration of some critical tasks. The following approach can be applied. The task with the greatest ratio (decrease of task duration) / (increase of cost) is selected, and its duration is reduced. Then the critical sequence of tasks is sought again (it may differ from the previous one), and so on. The process stops when the duration of the project is acceptable.
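For the project of Table B.3, the forward pass (earliest starting times, i.e., the longest path of Figure B.7) and the backward pass (latest starting times, Table B.4) can be sketched as follows; the data structures and identifier names are ours:

```python
# durations and precedence constraints of Table B.3; a lag is the delay
# imposed after the predecessor's completion before the successor may start
dur = {'A': 6, 'B': 11, 'C': 8, 'D': 7, 'E': 14, 'F': 7,
       'G': 8, 'H': 2, 'I': 17, 'J': 5, 'K': 9}
preds = {'D': [('C', 0)], 'E': [('B', 0)], 'F': [('B', 0)],
         'G': [('A', 0)], 'H': [('A', 0)],
         'I': [('D', 0), ('E', 0)], 'J': [('D', 7), ('E', 7)],
         'K': [('I', 0), ('H', 0), ('F', 5), ('G', 5)]}
order = 'ABCDEFGHIJK'  # a topological order of the tasks

# forward pass: earliest starting times (longest path from the initial node)
es = {}
for t in order:
    es[t] = max((es[p] + dur[p] + lag for p, lag in preds.get(t, [])), default=0)
duration = max(es[t] + dur[t] for t in order)

# backward pass: latest starting times
succs = {}
for t, ps in preds.items():
    for p, lag in ps:
        succs.setdefault(p, []).append((t, lag))
ls = {}
for t in reversed(order):
    ls[t] = min((ls[s] - lag for s, lag in succs.get(t, [])), default=duration) - dur[t]

critical = [t for t in order if es[t] == ls[t]]
assert duration == 51
assert critical == ['B', 'E', 'I', 'K']
```

The earliest and latest starting times computed this way coincide with the columns of Table B.4.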
B.2.4 Knapsack Problem

The term "knapsack" is derived from the activity that consists in selecting the items to pack in a knapsack, each item i being defined by two parameters:
• its value, denoted by u_i, which represents the usefulness of the item for the traveler;
• its weight, denoted by w_i.
The goal is to select, among a set of n items, the subset that maximizes the total value of the selected items while keeping their total weight no greater than a given value M. We introduce the following decision variables:

x_i = 1 if item i is selected, and 0 otherwise, for i = 1, …, n
The problem can be formulated as:

Maximize Σ_{i=1}^n x_i u_i

subject to:

Σ_{i=1}^n x_i w_i ≤ M

and:

x_i ∈ { 0, 1 } for i = 1, …, n
This is the binary (0–1) knapsack problem, since only one unit of each item is available. It can be expressed as a DP problem by applying the logic used in the capital placement problem. Let S_i( m ) be the maximum value obtainable from the first i items when the available weight is m. The dynamic programming formulation is then:

S_i( m ) = Max_{x_i ∈ {0,1}, x_i w_i ≤ m} { S_{i−1}( m − x_i w_i ) + x_i u_i } for i = 1, …, n
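A compact rendering of this recursion stores only the current row S_i( · ) of the table. The instance below is a small hypothetical example of ours, not taken from the book:

```python
def knapsack(u, w, M):
    """0-1 knapsack by DP: S[m] = best total value within weight capacity m."""
    S = [0] * (M + 1)
    for ui, wi in zip(u, w):
        # scan capacities downwards so that each item is used at most once
        for m in range(M, wi - 1, -1):
            S[m] = max(S[m], S[m - wi] + ui)
    return S[M]

# hypothetical instance: values u, weights w, capacity M
assert knapsack([10, 7, 4, 9], [5, 4, 2, 6], 10) == 17  # items of value 10 and 7
```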
B.3 Recommended Reading

Adda J, Cooper R (2003) Dynamic Economics. MIT Press, Cambridge, MA
Bensoussan A, Proth J-M (1984) Inventory planning in a deterministic environment: concave cost set-up. Larg. Scal. Syst. 6:177–184
Bensoussan A, Crouhy M, Proth J-M (1983) Mathematical Theory of Production Planning. North-Holland, Amsterdam
Bertsekas DP (2000) Dynamic Programming and Optimal Control, Vols. 1 & 2, 2nd edn. Athena Scientific
Buffa ES (1973) Modern Production Management, 4th edn. John Wiley & Sons, New York, NY
Buffa ES (1976) Operations Management: The Management of Production Systems. John Wiley & Sons, New York, NY
Chase RB, Aquilano NT (1981) Production and Operations Management: A Life Cycle Approach, 3rd edn. R.D. Irwin, Homewood, IL
Cormen TH, Leiserson CE, Rivest RL, Stein C (2001) Introduction to Algorithms, 2nd edn. MIT Press, Cambridge, MA
Dolgui A, Guschinsky N, Levin G, Proth J-M (2008) Optimisation of multi-position machines and transfer lines. Eur. J. Oper. Res. 185(3):1375–1389
Giegerich R, Meyer C, Steffen P (2004) A discipline of dynamic programming over sequence data. Sci. Comput. Progr. 51:215–263
Menipaz E (1984) Essentials of Production and Operations Management. Prentice-Hall, Englewood Cliffs, NJ
Proth J-M, Hillion H (1990) Mathematical Tools in Production Management. Plenum Press, New York and London
Stokey N, Lucas RE, Prescott E (1989) Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge, MA
Wagner HM (1975) Principles of Operations Research. Prentice-Hall, Englewood Cliffs, NJ
Appendix C
Branch-and-Bound Method
C.1 Introduction

Assume that a finite (but large) number of feasible solutions has to be examined to find the one with the maximal (or minimal) value of a criterion. The branch-and-bound (B&B) method enumerates the feasible solutions implicitly: by computing upper and lower bounds, a large number of solutions are eliminated from consideration, so that only a tiny fraction of them are examined explicitly, thus reducing the calculation time. In Section C.2, we introduce the basis of the approach. Section C.3 presents some applications that illustrate the method.
C.2 Branch-and-Bound Bases

C.2.1 Find the Solution that Minimizes a Criterion

Let S be the set of feasible solutions and f( • ) the criterion to minimize. The goal is to find x* ∈ S such that f( x* ) = Min_{x ∈ S} f( x ).
Assume that it is possible to define a set { S_1, S_2, …, S_n } of solution subsets (whose elements may be feasible or not) such that S ⊂ ∪_{i=1}^n S_i.
We also assume that:
1. Whatever i ∈ { 1, 2, …, n }, it is possible to find a lower bound of Min_{x ∈ S ∩ S_i} f( x ). Let b_i be this lower bound.
2. We know an upper bound U of f( x* ).
Basic remark: if b_i > U for some i ∈ { 1, 2, …, n }, then the optimal solution x* cannot belong to S_i ∩ S. As a consequence,

x* ∈ ∪_{i ∈ E_1} ( S_i ∩ S )

where E_1 = { i | i ∈ { 1, …, n } and b_i ≤ U }. In other words, x* belongs to one of the subsets S_i ∩ S for i ∈ E_1. At the end of the first iteration, there are card( E_1 ) subsets left to analyze, where card( E_1 ) denotes the number of elements of E_1; see Figure C.1. For each S_i, i ∈ E_1, we define a set { S_{i,1}, S_{i,2}, …, S_{i,n_i} } of subsets such that S_i ⊂ ∪_{j=1}^{n_i} S_{i,j}. Let b_{i,j} be a lower bound of Min_{x ∈ S_i ∩ S_{i,j}} f( x ). If U < b_{i,j}, then S_{i,j} is discarded from the list of subsets taken into account at the second iteration. If U ≥ b_{i,j}, then the optimal solution x* may belong to S_{i,j} and this subset is added to the list. Thus, at the end of the second iteration, we have card( E_2 ) candidate subsets for further investigation, where E_2 = { ( i, j ) | i ∈ { 1, …, n }, b_i ≤ U, j ∈ { 1, …, n_i }, b_{i,j} ≤ U }. We then apply to each S_{i,j}, ( i, j ) ∈ E_2, the same approach as the one applied to the S_i, i ∈ E_1. This leads to the results of the third iteration. This approach is illustrated in Figure C.2. The same process continues until we reach subsets that are small enough to make the computation of the optimal solution within each of them possible; the best of these solutions is then selected. In some cases, only one subset remains at the last iteration.
Figure C.1 The subsets S ∩ S_i, i = 1, …, n, at the first iteration (a subset is retained when U ≥ b_i)
U qk . In the first case, i will be stored at least until the beginning of qk . In the second case, backlogging is unavoidable. Finally, if the lead times li and the order dates (derived from the control parameters ui ) are known, the cost incurred for assembling product k is: C ( l1 , L, l n ; u1 , L , u n ) = X + Y + Z
where: X =
n
∑H
i
( u i − li ) +
i =1
Y = B Max ( l j − u j ) + j =1, L, n
Z=
n
∑H i =1
i
+ Max ( l j − u j )
j =1, L, n
(C.1)
Here, X is the total inventory cost until the beginning of period q_k, Y is the backlogging cost, if any, and Z is the inventory cost of the components that are kept in stock after the beginning of period q_k. Relation C.1 can be rewritten as:

C( l_1, …, l_n; u_1, …, u_n ) = X + W    (C.2)

where:

W = ( B + Σ_{i=1}^n H_i ) Max_{j=1,…,n} ( l_j − u_j )^+

In Relation C.2, the lead times are assumed to be known. They are, however, random variables. Therefore, we take the expectation of (C.2) and obtain the average cost:

C( u_1, …, u_n ) = R + S    (C.3)
where:

R = Σ_{i=1}^n H_i [ Σ_{k=1}^{u_i} ( u_i − k ) p_{i,k} ]

S = ( B + Σ_{i=1}^n H_i ) Σ_{w=1}^{v*} w × [ Σ_{(s_1, …, s_n) ∈ E_w} Π_{i=1}^n p_{i,s_i} ]

Here, p_{i,k} denotes the probability that the lead time of component i is equal to k periods, and n_i the maximum lead time of component i. In the latter expression:

v* = Max_{j=1,…,n} ( n_j − u_j )^+

E_w = { ( s_1, …, s_n ) | s_i ∈ { 1, …, n_i } for i = 1, …, n and Max_{j=1,…,n} ( s_j − u_j ) = w }

The objective is to find the set ( u_1, …, u_n ) ∈ { 1, …, n_1 } × … × { 1, …, n_n } that minimizes Expression C.3.
C.3 Applications of Branch-and-Bound
C.3.1.2 B&B Algorithm
Actually, it is possible to compute the value of Criterion C.3 for each feasible solution and keep the one corresponding to the minimal criterion value. However, this strategy may lead to a huge amount of computation. In contrast, as we will see in this section, the B&B approach can reach the optimal solution with a reasonable amount of computation. The following two remarks are useful to understand the suggested B&B approach:
1. Looking at Criterion C.3, it is easy to see that R is an increasing function of the parameters u_i, while S is a decreasing function of these parameters. This remark allows us to find a lower bound of the optimal solution in a subset ϖ of feasible solutions. Assume that this subset is defined as follows: u_i ∈ { m_i, …, M_i } for i = 1, …, n. Then we can choose b_ϖ = R_ϖ + S_ϖ as a lower bound of the optimal solution, where R_ϖ is the value of R for u_i = m_i, i = 1, …, n, and S_ϖ is the value of S for u_i = M_i, i = 1, …, n.
2. Assume that a new node ϖ (defined as above) is such that b_ϖ < U, where U is an upper bound of the criterion. In this case, we know that the optimal solution of the problem may belong to ϖ. Furthermore, if we compute the value U* of Criterion C.3 for u_i = ⌈( m_i + M_i ) / 2⌉ and if U* < U, then U* is a better upper bound of the optimal solution than U.
The following algorithm translates the B&B approach to the problem at hand.

Algorithm C.1.
1. Building the initial level.
   1.1. Set N = 1. The value of this variable is the number of nodes (i.e., of subsets) at the level under consideration.
   1.2. Set m_i^{1,0} = 1 and M_i^{1,0} = n_i for i = 1, …, n; this is the initial set of feasible solutions.
   1.3. Compute the value of Criterion C.3 for u_i = ⌈( m_i^{1,0} + M_i^{1,0} ) / 2⌉ and assign the result to U, the upper bound of the criterion.
2. Iterations:
   2.1. Set r = 0. This variable will contain the number of "active" nodes at the level under consideration.
   2.2. For k = 1, …, N do:
Recall that ⌈a⌉ is the smallest integer greater than or equal to a.
      2.2.1. Set r = r + 1.
      2.2.2. Set m_i^{r,1} = m_i^{k,0} and M_i^{r,1} = ⌊( m_i^{k,0} + M_i^{k,0} ) / 2⌋ for i = 1, …, n.
      2.2.3. Compute b^{r,1}, the lower bound of the criterion over the solutions that belong to the subset defined in Step 2.2.2 (see Remark 1 above).
      2.2.4. If b^{r,1} < U do (the objective is to update the upper bound):
         2.2.4.1. Compute the value U* of the criterion for u_i = ⌈( m_i^{r,1} + M_i^{r,1} ) / 2⌉ for i = 1, …, n.
         2.2.4.2. If U* < U, then set U = U*.
      2.2.5. If b^{r,1} ≥ U, then set r = r − 1.
      2.2.6. Set r = r + 1.
      2.2.7. Set m_i^{r,1} = ⌈( m_i^{k,0} + M_i^{k,0} ) / 2⌉ and M_i^{r,1} = M_i^{k,0} for i = 1, …, n.
      2.2.8. Compute b^{r,1}, the lower bound of the criterion over the solutions that belong to the subset defined in Step 2.2.7 (see Remark 1 above).
      2.2.9. Repeat Steps 2.2.4 and 2.2.5.
   2.3. For k = 1, …, r do m_i^{k,0} = m_i^{k,1} and M_i^{k,0} = M_i^{k,1} for i = 1, …, n.
   2.4. Set N = r.
   2.5. If N = 1 and m_i^{1,0} = M_i^{1,0} for i = 1, …, n, then u_i* = m_i^{1,0}, i = 1, …, n, is the optimal solution; otherwise go to Step 2.2.
C.3.1.3 Numerical Experiments
Consider the case of 3 components, whose maximum lead times are, respectively, 8, 7 and 6 elementary periods. The probabilities associated with the different lead times are presented in Table C.1. The inventory costs per period are 10, 5 and 15 for components 1, 2 and 3, respectively, and the backlogging cost is 0.05 per period. We apply the algorithm to this example. The initial set of solutions is { { 1, …, 8 }, { 1, …, 7 }, { 1, …, 6 } }. The initial upper bound is computed with u1 = 4, u2 = 4, u3 = 3; it is equal to U = 71.7857.
Recall that ⌊a⌋ is the greatest integer less than or equal to a.
Table C.1 Lead time probabilities for each component

Period        1      2      3      4      5      6      7      8
Component 1   0.2    0.3    0.4    0.02   0.02   0.02   0.02   0.02
Component 2   0.03   0.05   0.1    0.2    0.3    0.3    0.02
Component 3   0.4    0.5    0.03   0.03   0.02   0.02
At the second iteration, we consider 2 × 2 × 2 = 8 subsets. Only 5 of these subsets have a lower bound smaller than U; they are presented in Table C.2. According to the results in the fourth column, the new upper bound is U = 45.29.

Table C.2 Second iteration

Subsets at iteration 2              Lower bound   Solution A selected to update the upper bound   Criterion value for A
{ {1, …, 4}, {1, …, 3}, {1, …, 3} } 57.18         (2, 2, 2)                                       96.21
{ {1, …, 4}, {4, …, 7}, {1, …, 3} } 11.10         (2, 5, 2)                                       45.29
{ {5, …, 8}, {4, …, 7}, {1, …, 3} } 30.56         (6, 5, 2)                                       61.36
{ {1, …, 4}, {4, …, 7}, {4, …, 6} } 40.91         (2, 5, 5)                                       82.79
{ {5, …, 8}, {4, …, 7}, {4, …, 6} } 60.1          (6, 5, 5)                                       97.82

Among the 5 subsets reviewed at iteration 2, only 3 have a lower bound less than the new upper bound (see Table C.2). These 3 subsets are:

{ {1, …, 4}, {4, …, 7}, {1, …, 3} },  { {5, …, 8}, {4, …, 7}, {1, …, 3} }  and  { {1, …, 4}, {4, …, 7}, {4, …, 6} }

As a consequence, we have to consider 3 × 2 × 2 × 2 = 24 subsets at iteration 3. Only 7 of them display a lower bound less than U; they are presented in Table C.3. As can be seen in this table, the new upper bound becomes U = 35.6. Among the 7 subsets reviewed at iteration 3, only 2 have a lower bound less than this new upper bound (see Table C.3).
Table C.3 Third iteration

Subsets at iteration 3              Lower bound   Solution A selected to update the upper bound   Criterion value for A
{ {1, …, 2}, {4, …, 5}, {1, …, 1} } 43.26         (1, 4, 1)                                       64.52
{ {3, …, 4}, {4, …, 5}, {1, …, 1} } 41.72         (3, 4, 1)                                       55.37
{ {3, …, 4}, {6, …, 7}, {1, …, 1} } 42.97         (3, 6, 1)                                       45.69
{ {1, …, 2}, {4, …, 5}, {2, …, 3} } 39.17         (1, 4, 2)                                       67.16
{ {3, …, 4}, {4, …, 5}, {2, …, 3} } 32.87         (3, 4, 2)                                       53.67
{ {1, …, 2}, {6, …, 7}, {2, …, 3} } 39.48         (1, 6, 2)                                       64.23
{ {3, …, 4}, {6, …, 7}, {2, …, 3} } 29.40         (3, 6, 2)                                       35.60
These 2 subsets are:

{ {3, …, 4}, {4, …, 5}, {2, …, 3} }  and  { {3, …, 4}, {6, …, 7}, {2, …, 3} }

As a consequence, we have to consider 2 × 2 × 2 × 2 = 16 subsets at iteration 4. Only 1 subset among them displays a lower bound less than or equal to U; this subset is:

{ {3, …, 3}, {6, …, 6}, {2, …, 2} }

Thus, the optimal solution is u1 = 3, u2 = 6 and u3 = 2, and the optimal value of the criterion is 35.6.
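Since the solution space contains only 8 × 7 × 6 = 336 vectors ( u1, u2, u3 ), the result of the B&B search can be cross-checked by exhaustive enumeration of Criterion C.3. The sketch below is our own code; it evaluates the expectation in the S term by enumerating all lead-time triples, and recovers the optimum (3, 6, 2):

```python
p = [[0.2, 0.3, 0.4, 0.02, 0.02, 0.02, 0.02, 0.02],  # component 1 (Table C.1)
     [0.03, 0.05, 0.1, 0.2, 0.3, 0.3, 0.02],         # component 2
     [0.4, 0.5, 0.03, 0.03, 0.02, 0.02]]             # component 3
H = [10, 5, 15]  # inventory costs per period
B = 0.05         # backlogging cost per period

def criterion(u):
    # R: expected holding cost of components delivered early
    R = sum(Hi * sum((ui - k) * pi[k - 1] for k in range(1, ui + 1))
            for Hi, pi, ui in zip(H, p, u))
    # S term: (B + sum H_i) * E[ max_j (l_j - u_j)^+ ]
    em = sum(max(0, l1 - u[0], l2 - u[1], l3 - u[2]) * p1 * p2 * p3
             for l1, p1 in enumerate(p[0], 1)
             for l2, p2 in enumerate(p[1], 1)
             for l3, p3 in enumerate(p[2], 1))
    return R + (B + sum(H)) * em

best = min(((a, b, c) for a in range(1, 9) for b in range(1, 8) for c in range(1, 7)),
           key=criterion)
assert best == (3, 6, 2)
assert abs(criterion(best) - 35.6) < 0.01
assert abs(criterion((4, 4, 3)) - 71.7857) < 0.001  # the initial upper bound
```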
C.3.2 Assignment Problem

C.3.2.1 Problem Statement

Consider n resources and m tasks. Each task should be assigned to at most one resource, and each resource should perform at most one task.
Let c_{i,j} be the cost incurred when task i is performed by resource j. Furthermore, if there are fewer resources than tasks, then each resource must be assigned to a task; similarly, if there are fewer tasks than resources, then every task must be executed. We define:

x_{i,j} = 1 if task i is performed by resource j, and 0 otherwise
With this definition, the problem can be written as follows:

Minimize Σ_{i=1}^m Σ_{j=1}^n c_{i,j} x_{i,j}    (C.4)

subject to:

Σ_{i=1}^m x_{i,j} ≤ 1 for j = 1, …, n    (C.5)

Σ_{j=1}^n x_{i,j} ≤ 1 for i = 1, …, m    (C.6)

Σ_{i=1}^m Σ_{j=1}^n x_{i,j} = Min( m, n )    (C.7)

Expression C.4 is the total cost to be minimized. Constraints C.5 express that each resource takes care of at most one task. Constraints C.6 make sure that each task is assigned to at most one resource. Constraint C.7 guarantees that the maximum number of assignments is made. To apply a B&B approach to this problem, we have to:
• Build a B&B tree. One possibility is to choose a variable x_{i*,j*} and to build, at the first iteration, the subset of solutions where x_{i*,j*} = 0 and the subset where x_{i*,j*} = 1. Thus, the initial set of solutions is split up into two subsets. At the next iteration, select another variable and do the same, and so on. We can also select 2 variables x_{i*,j*} and x_{i**,j**} and generate 4 subsets at the first iteration:
– subset 1 is characterized by x_{i*,j*} = 0 and x_{i**,j**} = 0;
– subset 2 is characterized by x_{i*,j*} = 0 and x_{i**,j**} = 1;
– subset 3 is characterized by x_{i*,j*} = 1 and x_{i**,j**} = 0;
– subset 4 is characterized by x_{i*,j*} = 1 and x_{i**,j**} = 1.
Do the same for each subset that remains a candidate, whatever the iteration.
• Define an upper bound that can be updated at each iteration. Initially, a solution can be built by assigning task i1 to resource j1, where c_{i1,j1} = Min_{i,j} c_{i,j}, then task i2 to resource j2, where c_{i2,j2} = Min_{i ≠ i1, j ≠ j1} c_{i,j}, and so on, until either all the tasks are assigned to resources or all the resources to tasks. If the iteration is not the first one and we want to refresh the upper bound, the same process is applied, but the values of the variables that define the subset under consideration are kept.
• Define a lower bound for a given subset. The lower bound can be found by solving the problem as a continuous linear programming (LP) problem in which the values of the variables that define the subset are frozen.
C.3.2.2 Numerical Example
Consider an example with 3 tasks, T1, T2, T3, and 4 resources, R1, R2, R3, R4. The costs are given in Table C.4.

Table C.4 Costs

      R1   R2   R3   R4
T1    6    2    4    5
T2    6    3    8    9
T3    4    3    5    9
Iteration 1:
Compute the initial upper bound as explained in the previous subsection: we obtain successively x1,2 = 1, x3,1 = 1 and x2,3 = 1. Thus, the initial upper bound is 2 + 4 + 8 = 14. With the previous notations, U = 14.
The first subset is defined by x1,1 = 0 and x2,2 = 0. To find the lower bound, the following linear programming problem (LP) should be solved:
Minimize 2 x1,2 + 4 x1,3 + 5 x1,4 + 6 x2,1 + 8 x2,3 + 9 x2,4 + 4 x3,1 + 3 x3,2 + 5 x3,3 + 9 x3,4

subject to:

x2,1 + x3,1 ≤ 1,  x1,2 + x3,2 ≤ 1,  x1,3 + x2,3 + x3,3 ≤ 1,  x1,4 + x2,4 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 ≤ 1,  x2,1 + x2,3 + x2,4 ≤ 1,  x3,1 + x3,2 + x3,3 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 + x2,1 + x2,3 + x2,4 + x3,1 + x3,2 + x3,3 + x3,4 = 3
Evidently, the variables should be greater than or equal to 0. The optimal value of the criterion is 13, reached for x1,3 = x2,1 = x3,2 = 1, the other variables being equal to 0. If an upper bound is computed starting from the definition of this subset, we still obtain U = 14. Since the lower bound 13 is less than the upper bound, we keep this subset for further consideration.
The second subset is defined by x1,1 = 0 and x2,2 = 1. If we refresh the upper bound taking into account the fact that x1,1 = 0 and x2,2 = 1, applying the approach described in the previous section, we obtain x1,3 = 1, x2,2 = 1 and x3,1 = 1, the other variables being equal to 0. The value of the criterion for this solution is 11. Thus, the new value of the upper bound is U = 11, which rules the previous subset out for further consideration. To find the lower bound, the following LP has to be solved:
Minimize 3 + 2 x1,2 + 4 x1,3 + 5 x1,4 + 6 x2,1 + 8 x2,3 + 9 x2,4 + 4 x3,1 + 3 x3,2 + 5 x3,3 + 9 x3,4

subject to:

x2,1 + x3,1 ≤ 1,  x1,2 + x3,2 = 0,  x1,3 + x2,3 + x3,3 ≤ 1,  x1,4 + x2,4 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 ≤ 1,  x2,1 + x2,3 + x2,4 = 0,  x3,1 + x3,2 + x3,3 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 + x2,1 + x2,3 + x2,4 + x3,1 + x3,2 + x3,3 + x3,4 = 2

(the constant 3 in the objective is the cost c2,2 of the frozen assignment x2,2 = 1)
Again, the variables should be greater than or equal to 0. The optimal value of the criterion is 11, reached for x1,3 = x2,2 = x3,1 = 1, the other variables being equal to 0. The lower bound 11 being equal to the current upper bound, we keep this subset for further consideration; we would discard it only if the upper bound decreased during a future refreshment.
The third subset is defined by x1,1 = 1 and x2,2 = 0. If we try to refresh the upper bound taking into account the fact that x1,1 = 1 and x2,2 = 0, applying the approach described in the previous section, we obtain x1,1 = 1, x2,3 = 1 and x3,2 = 1, the other variables being equal to 0. The value of the
criterion for this solution is 17. Thus, the value of the upper bound remains U = 11. To find the lower bound, the following LP should be solved:

Minimize 6 + 2 x1,2 + 4 x1,3 + 5 x1,4 + 6 x2,1 + 8 x2,3 + 9 x2,4 + 4 x3,1 + 3 x3,2 + 5 x3,3 + 9 x3,4

subject to:

x2,1 + x3,1 = 0,  x1,2 + x3,2 ≤ 1,  x1,3 + x2,3 + x3,3 ≤ 1,  x1,4 + x2,4 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 = 0,  x2,1 + x2,3 + x2,4 ≤ 1,  x3,1 + x3,2 + x3,3 + x3,4 ≤ 1
x1,2 + x1,3 + x1,4 + x2,1 + x2,3 + x2,4 + x3,1 + x3,2 + x3,3 + x3,4 = 2
Once more, the variables should be greater than or equal to 0. The optimal value of the criterion is 17, reached for x1,1 = x2,3 = x3,2 = 1, the other variables being equal to 0. Thus, the third subset is discarded from future consideration.
The last subset is defined by x1,1 = 1 and x2,2 = 1. The optimal solution of the LP problem with continuous variables is straightforward: x1,1 = x2,2 = x3,3 = 1, the other variables being equal to 0. The corresponding value of the criterion is 14 > U. Thus, the last subset does not contain the optimal solution.

Iteration 2:
At this point, we know that the second subset is the only one that deserves further consideration. Furthermore, the lower bound related to this subset is equal to the
upper bound that is valid at the end of the first iteration. As a consequence, the corresponding solution is optimal. For the problem under consideration, the optimal value of the criterion is 11, obtained for x1,3 = x2,2 = x3,1 = 1, the other variables being equal to 0.
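Because the instance is tiny, the result of the B&B search can be confirmed by enumerating all assignments of the 3 tasks to 3 distinct resources (the code is ours):

```python
from itertools import permutations

c = [[6, 2, 4, 5],   # costs of Table C.4: rows = tasks, columns = resources
     [6, 3, 8, 9],
     [4, 3, 5, 9]]

# every injective assignment of tasks 0..2 to resources 0..3
best = min(permutations(range(4), 3),
           key=lambda a: sum(c[i][a[i]] for i in range(3)))
cost = sum(c[i][best[i]] for i in range(3))
assert cost == 11
assert best == (2, 1, 0)  # T1 -> R3, T2 -> R2, T3 -> R1
```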
C.3.3 Traveling Salesman Problem

C.3.3.1 Stating the Problem

A salesman has to visit n shops located in different towns denoted by 1, 2, …, n. The objective is to find the shortest circuit passing once through each town. There are (n − 1)! such circuits. We assume that the salesman's office is located in town 1. We define a variable xi,j as follows:

xi,j = 1 if town j follows town i in the circuit, and 0 otherwise.
We also denote by ci,j the distance between towns i and j. Finally, the traveling salesman problem can be expressed as follows:

Minimize Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} ci,j xi,j   (C.8)

subject to:

Σ_{i=1, i≠j}^{n} xi,j = 1 for j = 1, 2, …, n   (C.9)

Σ_{j=1, j≠i}^{n} xi,j = 1 for i = 1, 2, …, n   (C.10)

xi,j ∈ { 0, 1 } for i = 1, 2, …, n and j = 1, 2, …, n   (C.11)
Expression C.8 shows that the objective is to minimize the length of the circuit. Constraints C.9 express that exactly one town precedes each town in the circuit. Constraints C.10 state that exactly one town succeeds each town in the circuit. Finally, Constraints C.11 show that the problem is a binary linear programming problem.

C.3.3.2 Applying a B&B Approach
B&B Tree
We are dealing with a circuit; thus, we can start from any one of the towns. Let town 1 be the root of the tree. This town has n − 1 possible successors: 2, 3, …, n. Thus, we will have n − 1 nodes at the second level of the tree. Each node of the second level gives birth to n − 2 nodes at the third level, and so on. In practice, some nodes are discarded at some levels of the B&B tree when their local lower bound reaches or exceeds the global upper bound of the criterion (this is the case for a minimization problem). In the case of a maximization problem, the previous statement holds after exchanging "upper" and "lower".

Upper Bound
An initial upper bound can be obtained in several ways:

1. Construct a circuit step-by-step, starting with town 1 and selecting as the successor of a town the closest town among those not yet integrated in the circuit. The length of such a circuit is an upper bound.
2. Compute a "good" circuit by using a heuristic (a simulated annealing approach, for instance). The length of this circuit is an upper bound. Note that a simulated annealing approach usually leads to a near-optimal solution.
3. Generate several circuits at random and keep the best (i.e., the shortest) one. The length of this circuit is also an upper bound.

These approaches can be applied to refresh the upper bound at different levels of the tree. Another strategy to refresh the upper bound is the so-called in-depth strategy. It consists of partitioning one of the subsets obtained, then partitioning one of the last subsets obtained, and so on, until we reach a subset containing only one element: at this point, we have a circuit whose length is an upper bound. Different ways exist to select the node (i.e., the subset) from which an in-depth strategy is developed: lowest lower bound at the level under consideration, first subset generated at this level, etc.
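Option 1, the nearest-neighbour construction, can be sketched as follows. The code and the small distance matrix are ours, not from the book:

```python
def nearest_neighbour_tour(c, start=0):
    """Greedy circuit construction: from each town, go to the closest
    town not yet visited; returns the tour and its length, which is an
    upper bound for the B&B search."""
    n = len(c)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: c[last][j])
        tour.append(nxt)
        visited.add(nxt)
    # Close the circuit back to the starting town.
    length = sum(c[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, length

# Hypothetical symmetric distance matrix for 4 towns.
c = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
tour, ub = nearest_neighbour_tour(c)
print(tour, ub)
```

On this particular instance the greedy tour happens to be optimal; in general it only provides an upper bound.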
Computing Lower Bounds

We obtain a lower bound of the criterion for the solutions belonging to a subset by:

• freezing the values of the variables that define the subset;
• solving the linear programming problem obtained after relaxing the binary constraints on the variables that do not define the subset.
C.4 Conclusion

The goal of this appendix was to introduce the basis of the B&B approach. Three important aspects have been highlighted (we have restricted ourselves to the minimization problem):

• The design of the B&B tree.
• The computation of an upper bound that can be refreshed based on the information contained in the current subsets. Usually, an upper bound is computed either by generating one or more solutions at random and keeping the best one, or by applying a heuristic algorithm.
• The computation of a lower bound in each subset. Usually, a lower bound is obtained by computing the optimal solution of the problem on the subset under consideration after relaxing some constraints.

Also mentioned was the in-depth strategy, which consists in developing a branch of the tree until a leaf is reached. This provides an upper bound on the optimal criterion value. Several ways have been indicated to select the node from which the in-depth strategy is developed.

Note that the examples proposed in this appendix are only a small subset of the applications available in the literature. Other possible examples are:

• the knapsack problem;
• non-linear programming problems;
• the quadratic assignment problem;
• line-balancing problems;
• lot-sizing problems.
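As an illustration of the three ingredients above (tree design, bound computation, pruning), here is a compact B&B sketch for the first problem in the list, the 0-1 knapsack. The code and data are ours, not from the book; since knapsack is a maximization problem, the roles of upper and lower bounds are exchanged, and the optimistic bound is the LP relaxation (the fractional knapsack):

```python
def knapsack_bb(values, weights, cap):
    """Tiny B&B for the 0-1 knapsack: the tree branches on take/skip for
    each item; a node is pruned when its LP-relaxation bound cannot beat
    the incumbent solution."""
    # Sort items by value/weight ratio (best first) for a tight LP bound.
    items = sorted(zip(values, weights), key=lambda vw: -vw[0] / vw[1])
    best = [0]

    def bound(i, cap_left, val):
        # LP relaxation: fill greedily, taking a fraction of the last item.
        for v, w in items[i:]:
            if w <= cap_left:
                cap_left -= w
                val += v
            else:
                return val + v * cap_left / w
        return val

    def branch(i, cap_left, val):
        if val > best[0]:
            best[0] = val                          # refresh the incumbent
        if i == len(items) or bound(i, cap_left, val) <= best[0]:
            return                                 # prune this subtree
        v, w = items[i]
        if w <= cap_left:
            branch(i + 1, cap_left - w, val + v)   # take item i
        branch(i + 1, cap_left, val)               # skip item i

    branch(0, cap, 0)
    return best[0]

print(knapsack_bb([10, 13, 7, 8], [3, 4, 2, 3], 7))
```

Pruning is safe because the LP relaxation is a valid optimistic bound: if it does not exceed the incumbent, no solution in the subtree can strictly improve it.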
The main drawback of the B&B method is that it is impossible to predict the computational burden required to reach the optimal solution. If the bounds are far from the optimal criterion value, then we may be forced to enumerate most or all of the solutions to find the optimal one.
C.5 Recommended Reading

Agin N (1966) Optimum seeking with branch-and-bound. Manag. Sci. 13(4):176–185
Baker KR (1974) Introduction to Sequencing and Scheduling. John Wiley & Sons, New York, NY
Balas E, Ceria S, Cornuéjols G (1996) Mixed 0-1 programming by lift-and-project in a branch-and-cut framework. Manag. Sci. 42(9):1229–1246
Barnhart C, Johnson EL, Nemhauser GL, Savelsbergh MWP, Vance PH (1998) Branch and price: column generation for solving huge integer programs. Oper. Res. 46:316–329
Bazaraa MS, Shetty CM (1979) Nonlinear Programming. Theory and Algorithms. John Wiley & Sons, New York, NY
Brucker P, Hurink J, Jurisch B, Wöstmann B (1997) A branch and bound algorithm for the open-shop problem. Discr. Appl. Math. 76:43–59
Clausen J, Träff JL (1991) Implementation of parallel branch-and-bound algorithms – experiences with the graph partitioning problem. Ann. Oper. Res. 33(5):331–349
Climaco J, Ferreira C, Captivo ME (1997) Multicriteria integer programming: an overview of different algorithmic approaches. In: Climaco J (ed) Multicriteria Analysis. Springer, Berlin, pp. 248–258
Cordier C, Marchand H, Laundy R, Wolsey LA (1999) bc-opt: a branch-and-cut code for mixed integer programs. Math. Prog. 86(2):335–353
Dolgui A, Eremeev AV, Sigaev VS (2007) HBBA: hybrid algorithm for buffer allocation in tandem production lines. J. Intell. Manuf. 18(3):411–420
Dolgui A, Ihnatsenka I (2009) Branch and bound algorithm for a transfer line design problem: stations with sequentially activated multi-spindle heads. Eur. J. Oper. Res. 197(3):1119–1132
Dowsland KA, Dowsland WB (1992) Packing problems. Eur. J. Oper. Res. 56(1):2–14
Dyckhoff H (1990) A typology of cutting and packing problems. Eur. J. Oper. Res. 44(2):145–159
Gendron B, Crainic TG (1994) Parallel branch and bound algorithms: survey and synthesis. Oper. Res. 42:1042–1066
Hendy MD, Penny D (1982) Branch and bound algorithms to determine minimal evolutionary trees. Math. Biosci. 60:133–142
Horowitz E, Sahni S (1984) Fundamentals of Computer Algorithms. Computer Science Press, New York, NY
Kumar V, Rao VN (1987) Parallel depth-first search, part II: analysis. Int. J. Parall. Prog. 16:501–519
Louly MA, Dolgui A, Hnaien F (2008) Optimal supply planning in MRP environments for assembly systems with random component procurement times. Int. J. Prod. Res. 46(19):5441–5467
Martello S, Toth P (1990) Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons, New York, NY
Mitten LG (1970) Branch-and-bound methods: general formulation and properties. Oper. Res. 18(1):24–34
Mordecai A (2003) Nonlinear Programming: Analysis and Methods. Dover Publishing, Mineola, NY
Nemhauser GL, Wolsey LA (1988) Integer and Combinatorial Optimization. John Wiley & Sons, New York, NY
Proth J-M, Hillion HP (1990) Mathematical Tools in Production Management. Plenum Press, New York, NY
Rao VN, Kumar V (1987) Parallel depth-first search, part I: implementation. Int. J. Parall. Prog. 16:479–499
Senju S, Toyoda Y (1968) An approach to linear programming with 0-1 variables. Manag. Sci. 15:196–207
Sprecher A (1999) A competitive branch-and-bound algorithm for the simple assembly line balancing problem. Int. J. Prod. Res. 37:1787–1816
Sweeney PE, Paternoster ER (1992) Cutting and packing problems: a categorized, application-orientated research bibliography. J. Oper. Res. Soc. 43(7):691–706
Appendix D
Tabu Search Method
D.1 Introduction

Contrary to simulated annealing (see Appendix A), tabu search is a method with memory: once a solution has been visited, it is marked as a "tabu solution", which prevents the algorithm from visiting this solution again for a given number of iterations.

In the simulated annealing method, a solution is selected at random in the neighborhood [1] of the current one. This solution is kept if it is better than the current one; otherwise, it is kept with a probability that decreases with the number of iterations already performed and with the difference between the criterion value of the selected solution and that of the current one. The tabu search (TS) method is different; three rules apply:

1. The algorithm keeps track of the last N solutions that have been obtained (they constitute the tabu list), and these solutions cannot be revisited: they are tabu. A variation of the tabu list prohibits solutions that have certain attributes, called "tabu-active attributes".
2. The algorithm selects the best solution in the neighborhood of the current solution. It may select a solution worse than the current one when no better solution exists. In this case, the criterion value of the selected solution must be less than the value of the aspiration function [2]. In some situations, this rule may lead to the selection of a tabu solution.
3. When the number of solutions in the neighborhood is too large, the search is limited to a subset of the neighborhood. The design of this subset depends on the type of problem under consideration.
[1] The neighborhood of a solution S is the set of solutions obtained by disturbing S "slightly". The disturbance depends on the problem under consideration.
[2] An aspiration function could be, for instance, the product of the best criterion value obtained so far by a number greater than 1 (in the case of a minimization problem).
The initial solution required to start the tabu search is usually computed using a heuristic; the quality of this initial solution is unimportant. Furthermore, several rules to stop the algorithm are available:

• Stop the algorithm when the criterion does not improve for k consecutive solutions. The value of the parameter k is provided by the user.
• Stop the algorithm when the value of the current criterion is "close" to a known lower bound.
• Stop the algorithm when a given number of iterations is reached. This number is provided by the user.
D.2 Tabu Search Algorithm

In this section, we present the general tabu search algorithm. Keep in mind that the neighborhood depends on the type of problem under consideration. The length N of the tabu list, the aspiration function [3] and the criterion used to stop the algorithm should also be defined by the user.

[3] To start, it is advised to choose a multiplicative constant greater than 1.

We introduce two notions to facilitate the implementation of the tabu algorithm:

1. A realizable (or feasible) solution is a solution that satisfies all the constraints of the problem. E0 denotes the set of feasible solutions.
2. An admissible solution is a solution that satisfies a given subset of the constraints of the problem. The set of admissible solutions is denoted by E1. Clearly, E1 ⊃ E0.

f(s) denotes the value of the criterion when s ∈ E0. When s ∈ E1 \ E0, the criterion becomes f(s) + g(s), where g(s) > 0 if the goal is to minimize the criterion and g(s) < 0 otherwise. The function g is chosen by the user. Its objective is to prompt the algorithm to leave the subset E1 \ E0 and to preferably explore solutions belonging to E0.

Taking into account the previous information, the tabu algorithm can be summarized as follows.

Algorithm D.1. (Tabu)

Starting the computation:
1. Generate an admissible solution s0 ∈ E1.
2. Set m = 0. The variable m will contain the rank of the current iteration.
3. Introduce the value of k defined in Section D.1 (first rule to stop the algorithm).
4. Introduce the value of N (length of the tabu list).
5. Set T = ∅. T will contain the set of tabu solutions.
6. Introduce the aspiration function A. This function can be a real value greater than 1 that multiplies the value of the criterion under consideration.
7. Set s* = s0.
8. If s0 ∈ E0, set f* = f(s0); otherwise, set f* = f(s0) + g(s0).

Iterations:
In the following, F(s) refers to f(s) if s ∈ E0 and to f(s) + g(s) if s ∈ E1 \ E0.
9. While m < k do:
9.1. Set m = m + 1.
9.2. Generate a neighborhood V(s0) of s0.
9.3. Select in V(s0) the best solution s1 that is not tabu (s1 ∉ T) and such that F(s1) < A[F(s0)]. If no such s1 exists, then stop the computation.
9.4. If F(s1) < F(s*), set s* = s1, f* = F(s1) and m = 0.
9.5. If the number of solutions in T (tabu solutions) is equal to N, then remove the oldest solution from T.
9.6. Set T = T ∪ { s1 }.
9.7. Set s0 = s1.
10. Display s* and f*.
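As a rough illustration, Algorithm D.1 can be sketched in Python for the simple case E1 = E0 (so that F reduces to f). The toy problem — splitting a list of numbers into two subsets with sums as close as possible — and all names are ours, not from the book:

```python
def tabu_search(s0, f, neighbours, N=10, k=50, a=1.5):
    """Minimal sketch of Algorithm D.1 with an aspiration function that
    multiplies the current criterion value by a constant a > 1."""
    s_best, f_best = s0, f(s0)                     # steps 7-8
    s_cur, tabu, m = s0, [], 0                     # steps 2 and 5
    while m < k:                                   # step 9
        m += 1
        candidates = [(f(s), s) for s in neighbours(s_cur)
                      if s not in tabu and f(s) < a * f(s_cur)]  # step 9.3
        if not candidates:
            break                                  # no admissible move left
        f_cur, s_cur = min(candidates)             # best non-tabu neighbour
        if f_cur < f_best:                         # step 9.4
            s_best, f_best, m = s_cur, f_cur, 0
        if len(tabu) == N:                         # step 9.5
            tabu.pop(0)                            # forget the oldest tabu
        tabu.append(s_cur)                         # step 9.6
    return s_best, f_best                          # step 10

# Toy problem: split numbers into two subsets with sums as close as possible.
nums = (7, 11, 8, 14, 3, 9, 12, 5)
def f(s):            # |difference of the two subset sums|
    return abs(sum(x if side else -x for x, side in zip(nums, s)))
def neighbours(s):   # flip one element from one subset to the other
    return [s[:i] + (not s[i],) + s[i + 1:] for i in range(len(s))]

best, cost = tabu_search((True,) * len(nums), f, neighbours)
print(cost)
```

The search here is deterministic: starting from the partition where every number is on the same side, each iteration moves to the best non-tabu neighbour allowed by the aspiration test.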
D.3 Examples

Note that, in this book, tabu search has already been applied to a line-balancing problem (see Chapter 7). The following four examples provide additional insight into the use of tabu search.
D.3.1 Traveling Salesman Problem

This problem has already been presented in the previous appendices. A salesman has to visit shops located in n different towns. The objective is to find the shortest circuit passing once through each town. This circuit starts from the salesman's office and ends at the same office, which is located in the (n + 1)-th town. Let di,j denote the distance from town i to town j (or vice versa). To solve this problem using the tabu method, we use the following definitions:
1. The neighborhood of a given solution (circuit) s0 is the set of solutions obtained by permuting 2 towns of the circuit. The number of solutions in V(s0) is n(n − 1)/2; thus, we can keep the complete neighborhood to find the next "best" solution.
2. Each element of the tabu list is made up of the pair of towns that have been permuted, together with their ranks before permutation. Thus, each element of the tabu list consists of four integers (assuming that a town is represented by an integer).
3. The aspiration function is the product of the criterion value by a number greater than 1.

In this example, the set of feasible solutions is the same as the set of admissible solutions.
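The neighbourhood of item 1 is easy to generate, and its size n(n − 1)/2 easy to check (our sketch, not from the book):

```python
from itertools import combinations

def swap_neighbours(tour):
    """All circuits obtained by permuting (swapping) two towns of `tour`."""
    out = []
    for i, j in combinations(range(len(tour)), 2):
        t = list(tour)
        t[i], t[j] = t[j], t[i]
        out.append(tuple(t))
    return out

tour = (1, 2, 3, 4, 5)          # a circuit over n = 5 towns
nbh = swap_neighbours(tour)
print(len(nbh))                 # n(n - 1)/2 = 10
```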
D.3.2 Scheduling in a Flow-shop

D.3.2.1 Problem Studied

We manufacture K products denoted by P1, P2, …, PK. Each product has to visit n machines M1, M2, …, Mn in this order to be completed. The time required to perform the operation of Pi on Mj is denoted by ti,j. The order in which the products are launched into production is also the order in which they visit the machines. The objective is to find the release sequence that minimizes the makespan, that is to say the difference between the time the last product leaves Mn and the time the first product enters machine M1.
D.3.2.2 Computation of the Makespan when a Release Sequence is Given

Let i(k) be the index of the product of rank k. The computation is based on the following remark: for a product Pi(k) to enter a machine Mj, two conditions are necessary: (i) it should have left machine Mj−1 (if j > 1), and (ii) the product that precedes Pi(k), that is to say Pi(k−1) (if any), should have left Mj.
As a consequence:

• The time when Pi(k) leaves M1 is:

Θi(k),1 = Σ_{s=1}^{k} ti(s),1   (D.1)

• The time when Pi(1) leaves Mj is:

Θi(1),j = Σ_{r=1}^{j} ti(1),r   (D.2)

• When k > 1 and j > 1, the time when Pi(k) leaves Mj is:

Θi(k),j = Max( Θi(k−1),j , Θi(k),j−1 ) + ti(k),j   (D.3)

Finally, the makespan is Θi(K),n.

Example
Consider a small example involving 6 products and 4 machines. The operation times are given in Table D.1.

Table D.1 Operation times

      M1   M2   M3   M4
P1     8    2    4    7
P2     3    8    6    3
P3    10   11   10    9
P4     2   15    9   12
P5     7   11    8   14
P6    11    8   12   11

Table D.2 Times products leave the machines

      M1   M2   M3   M4
P1     8   10   14   21
P2    11   19   25   28
P3    21   32   42   51
P4    23   47   56   68
P5    30   58   66   82
P6    41   66   78   93

The times products leave the machines, under the assumption that the order in which products are launched into production is P1 → P2 → P3 → P4 → P5 → P6, are given in Table D.2. The first column is obtained by applying Relation D.1, the first row results from Relation D.2 and the other elements of the table are derived from Relation D.3.
In other words, the elements of the first column (row) of Table D.2 are obtained by adding the elements of the first column (row) of Table D.1 up to the position of the element. Once the first row and the first column are available, the element of row i and column j in Table D.2 is obtained by adding the element in the same position in Table D.1 to the maximum of the element of row i − 1, column j and the element of row i, column j − 1 in Table D.2. Note that the rows of Table D.1 must be organized in the order in which the products are launched into production. The makespan for this order is 93 (the element of the last row and column of Table D.2). We then restart the computation for the order: P1 → P3 → P2 → P5 → P4 → P6
We first reorder the rows of Table D.1 according to the release sequence (see Table D.3) and derive Table D.4 from Table D.3 in the same way that Table D.2 was derived from Table D.1. With this new order, the makespan is 95. This example shows that the makespan depends on the order in which products are put into production. The objective is to find the optimal order, that is to say the order that minimizes the makespan.

Table D.3 Operation times

      M1   M2   M3   M4
P1     8    2    4    7
P3    10   11   10    9
P2     3    8    6    3
P5     7   11    8   14
P4     2   15    9   12
P6    11    8   12   11

Table D.4 Times products leave the machines

      M1   M2   M3   M4
P1     8   10   14   21
P3    18   29   39   48
P2    21   37   45   51
P5    28   48   56   70
P4    30   63   72   84
P6    41   71   84   95

In the example presented above, the number of solutions is 6! = 720: it is possible to explore all the solutions to find the best one. In practice, the number of products to schedule in real-life problems may be greater than 50, which means that 50! solutions exist, and this number is greater than 3 × 10^64: this explains why heuristic algorithms are of utmost importance for this problem.
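The recurrence D.1–D.3 takes only a few lines of code. The sketch below (ours, not from the book) recomputes the two makespans of the example; the operation times are those of Table D.1, with the few entries lost in typesetting recovered from Table D.2 through the recurrence:

```python
def makespan(times):
    """times[k][j] = operation time of the product of rank k on machine j.
    Returns the time the last product leaves the last machine
    (Relations D.1-D.3)."""
    K, n = len(times), len(times[0])
    theta = [[0] * n for _ in range(K)]
    for k in range(K):
        for j in range(n):
            # A product enters machine j once it has left machine j-1 and
            # once its predecessor has left machine j.
            prev = max(theta[k - 1][j] if k > 0 else 0,
                       theta[k][j - 1] if j > 0 else 0)
            theta[k][j] = prev + times[k][j]
    return theta[-1][-1]

# Operation times of P1..P6 on M1..M4 (Table D.1).
t = {'P1': [8, 2, 4, 7], 'P2': [3, 8, 6, 3], 'P3': [10, 11, 10, 9],
     'P4': [2, 15, 9, 12], 'P5': [7, 11, 8, 14], 'P6': [11, 8, 12, 11]}

order1 = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6']
order2 = ['P1', 'P3', 'P2', 'P5', 'P4', 'P6']
print(makespan([t[p] for p in order1]))  # 93, as in Table D.2
print(makespan([t[p] for p in order2]))  # 95, as in Table D.4
```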
D.3.2.3 Tabu Approach
The neighborhood of a given solution can be defined in several ways. However, we have to keep in mind that the makespan must be computed for each solution belonging to the neighborhood. As a consequence, the neighborhood should have a reasonable size. For instance, we can decide that the neighborhood is the set of solutions obtained by permuting two products in the current solution. In this case, the size of the neighborhood is K(K − 1)/2. Another possibility is to define the neighborhood as the set of solutions obtained by permuting two consecutive elements of the current solution. In this case, the size of the neighborhood is only K − 1. Of course, the more constrained the design of the neighborhood, the greater the risk of missing the optimal solution.

The length N of the tabu list depends on K (the greater K, the greater N), but there is no rule to derive N from K. Furthermore, an element of the tabu list is made up of the indices and positions of the products that are permuted. Remember also that if N elements are already stored in the tabu list, we remove the oldest element of the list before introducing a new one.

Concerning the aspiration function, we suggest replacing F(s1) < A[F(s0)] in Step 9.3 of the algorithm by F(s1) < a × F(s0), where a is a real number greater than 1. Furthermore, since E1 \ E0 is empty, F can be replaced by f everywhere in the algorithm.
D.3.3 Graph Coloring Problem

D.3.3.1 Problem Statement

Consider a graph G = (V, E), where V is the set of vertices (or nodes) and E the set of edges that connect pairs of vertices. The objective is to find a coloring of the vertices that uses as few colors as possible, under the constraint that two connected vertices (that is to say, vertices linked by an edge) do not receive the same color. In other words, we are looking for a partition of V into K subsets V1, V2, …, VK that minimizes K and such that:

Σ_{k=1}^{K} Q(Vk) = 0

where Q(Vk) is the number of edges having both endpoints in Vk.
D.3.3.2 Application of Tabu Search
Upper Bound on the Number of Colors

An obvious upper bound on the number of colors is the number n of vertices. Another way to obtain an upper bound is to apply a heuristic algorithm.

Ingredients of Tabu Search

Assume that the number K of colors is given (K < n). Each element of the neighborhood S(P) of a partition P = { V1, V2, …, VK } is obtained as follows: select an edge having both endpoints in the same subset Vk, then select one of the endpoints, say v, at random and assign it to another subset Vs, s ≠ k, also selected at random. The number of elements of S(P) is:

Q(P) = Σ_{k=1}^{K} Q(Vk)

since each edge having both endpoints in the same subset generates one element of S(P).

If the vertex v is removed from Vk and assigned to Vs, then the pair (v, k) is assigned to the tabu list. As usual, the oldest pair of the tabu list is removed first if the list T is full (i.e., if the number of elements of T is equal to N). The size N of the tabu list is chosen by the user, applying a trial-and-error approach.

The aspiration function is A(P) = a Q(P), where a > 1. In this example, the set E0 of feasible solutions is not the same as E1, the set of admissible solutions, since, at Step 9 of the algorithm presented hereafter, the initial partition is obtained by assigning the vertices of the graph at random to K different subsets.

Graph-coloring Algorithm

This algorithm is denoted by COL-T.

Algorithm D.2. (COL-T)

1. Introduce the data that define the graph.
2. Introduce the value of MI. This value is the maximum number of iterations allowed to reach a solution when the number of colors is given.
3. Introduce N, the length of the tabu list (i.e., the number of elements in the tabu list).
4. Set K* = m. If we initially assign different colors to the vertices, then m = n; otherwise, m is the number of colors in the solution provided by a heuristic.
5. Set K = m − 1.
6. Introduce the value a > 1 that defines the aspiration function A(P) = a Q(P).
7. Set T = ∅ and t = 0. T will contain the tabu elements and t the number of elements in T.
8. Set mi = 0, where mi is the counter of the number of iterations for each value of K.
9. Assign the vertices of the graph at random to K subsets. We denote this partition by P = { V1, V2, …, VK }.
10. While mi ≤ MI do:
10.1. If t < N, then set t = t + 1.
10.2. Set mi = mi + 1.
10.3. Generate the neighborhood S(P) of P as explained above and compute, for each p ∈ S(P), Q(p) = Σ_{k=1}^{K} Qp(Vk), where Qp(Vk) is the number of edges of p ∈ S(P) having both endpoints in Vk.
10.4. Set HP = { p | p ∈ S(P), Q(p) < a Q(P), p ∉ T } and select p* ∈ HP such that Q(p*) = Min_{p ∈ HP} Q(p).
10.5. If Q(p*) = 0, then:
10.5.1. Set K* = K and P* = p*.
10.5.2. Set K = K − 1.
10.5.3. Go to 7.
10.6. If Q(p*) > 0, then:
10.6.1. If t = N, remove the oldest element from T.
10.6.2. If p* is obtained by removing vertex v from Vk and assigning it to Vs, then assign the pair (v, k) to the tabu list.
10.6.3. Set P = p*.
11. Display K* and P*.
[Figure D.1 An example of a graph: 11 vertices numbered 1 to 11; its edges are listed in Table D.5.]
Table D.5 Another definition of the graph from Figure D.1

Vertex   Adjacent vertices
1        2, 3
2        1, 4, 10
3        1, 5
4        2, 6, 9
5        3, 6, 7
6        4, 5, 7, 8, 9
7        5, 6, 8
8        6, 7, 11
9        4, 6, 10, 11
10       2, 9, 11
11       8, 9, 10
Example

Consider the graph of Figure D.1 (see also Table D.5). It includes 11 vertices denoted by 1, 2, …, 11. The algorithm COL-T was applied to this graph with MI = 30 (maximum number of iterations), N = 8 (length of the tabu list) and a = 3 (aspiration constant). The result is K* = 3, which means that three colors are enough to color the graph so that any two adjacent vertices always receive different colors. The colors assigned to the vertices are denoted by 1, 2, 3. The assignments are given in Table D.6.

Table D.6 Assignment of colors

Vertex   1  2  3  4  5  6  7  8  9  10  11
Color    2  3  3  1  2  3  1  2  2  1   3
We consider a set of N machines denoted by M 1 , M 2 , L, M N . They are used to produce K products P1 , P2 , L , PK . Each product has to visit a specific sequence of machines. Such a sequence is called a manufacturing process. We denote by ti , j the operation time of product Pi on machine M j . To illustrate the problem, we consider a system where N = K = 3 . The manufacturing processes are given hereafter. The operation times are put in brackets. P1 : M 3 ( 4 ) , M 2 ( 2 ) , M 1 ( 5 ) P2 : M 2 ( 3 ) , M 3 ( 6 ) , M 1 ( 4 ) P3 : M 3 ( 4 ) , M 2 ( 8 )
The objective is to find the order in which products visit the machines. Since 2 products visit machine M1, two orders are possible for this machine. Similarly, 3! = 6 orders are possible in front of each of the machines M2 and M3. The objective is to find the set of orders that minimizes the makespan.

D.3.4.2 Graph Model
We propose a graph model of the problem in Figure D.2.

[Figure D.2 The graph model: between the input vertex I and the output vertex O, one row of nodes per product — P1: M3,4 → M2,2 → M1,5; P2: M2,3 → M3,6 → M1,4; P3: M3,4 → M2,8 — where a node Mj,t represents an operation on machine Mj with operation time t. Solid arcs link consecutive operations of a product; dotted edges link operations sharing a machine.]
I is the input vertex and O the output vertex. The arcs (continuous lines) represent the manufacturing processes. These arcs are called conjunctive arcs and represent the order in which operations must be performed on each product. Moreover, changing the direction of a conjunctive arc is not allowed. The dotted lines are disjunctive edges. Disjunctive edges must be transformed into arcs by fixing the order in which products visit the machines. Thus, an admissible solution is obtained by transforming each disjunctive edge into an arc. It has been proven that a solution is feasible if the directed graph obtained after transforming edges into arcs does not contain a circuit. Thus, an algorithm that detects a circuit in such a graph is necessary to apply the tabu search approach.
D.3.4.3 Detecting a Circuit in a Directed Graph
The Cycle algorithm presented hereafter detects whether or not a directed graph contains a circuit. This algorithm is based on the fact that if a directed graph does not contain a circuit, then at least one of its vertices has no predecessor.
Algorithm D.3. (Cycle)

1. Set Q = ∅.
2. Assign to G the set of vertices.
3. While G ≠ ∅ do:
3.1. If G contains a node a without a predecessor, then do:
3.1.1. Q = Q ∪ { a }.
3.1.2. G = G \ { a }.
3.1.3. Remove a from the graph, as well as the arcs whose origin is a.
3.2. If all the vertices of G have at least one predecessor, then:
3.2.1. The graph contains at least one circuit.
3.2.2. End of the algorithm.
4. The graph does not contain a circuit.
5. End of the algorithm.
As an example, consider the model of Figure D.2 and mark each node with two integers: the first is the index of the product, the second the index of the machine. Furthermore, the disjunctive edges have been transformed into arcs. The resulting directed graph is given in Figure D.3.
[Figure D.3 The directed graph (solution): the nodes (i, j) — product i, machine j — of Figure D.2, with every disjunctive edge oriented so as to fix the order in which the products visit each machine.]
The changes in the directed graph when applying Algorithm D.3 are shown in Figures D.4–D.11. The last state of the graph consists merely of the output vertex. Thus, the directed graph presented in Figure D.3 represents a feasible solution. It is easy to verify that if we reverse the direction of arc [(3, 2), (2, 2)], the solution represented by the graph is no longer feasible.
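Algorithm D.3 and the longest-path evaluation of the makespan can be sketched as follows. The code is ours, not from the book, and the disjunctive-arc directions are our reading of Figure D.3 — machine orders M1: (1,1), (2,1); M2: (3,2), (2,2), (1,2); M3: (1,3), (3,3), (2,3) — a reading consistent with the makespan of 30 obtained for this solution:

```python
def has_cycle(nodes, arcs):
    """Algorithm D.3 (Cycle): repeatedly remove a vertex without a
    predecessor; a vertex always remains iff the graph has a circuit."""
    remaining = set(nodes)
    while remaining:
        free = [v for v in remaining
                if not any(u in remaining for u, t in arcs if t == v)]
        if not free:
            return True            # every remaining vertex has a predecessor
        remaining.discard(free[0])
    return False

def longest_path(nodes, arcs, weight):
    """Dynamic programming on an acyclic graph whose vertices carry the
    operation times; the longest I-to-O path is the makespan."""
    memo = {}
    def longest(v):
        if v not in memo:
            preds = [u for u, t in arcs if t == v]
            memo[v] = max((longest(u) for u in preds), default=0) + weight[v]
        return memo[v]
    return max(longest(v) for v in nodes)

# Nodes (i, j) = operation of product Pi on machine Mj; weights are the
# operation times of the job-shop example; I and O have weight 0.
w = {'I': 0, 'O': 0, (1, 3): 4, (1, 2): 2, (1, 1): 5,
     (2, 2): 3, (2, 3): 6, (2, 1): 4, (3, 3): 4, (3, 2): 8}
conjunctive = [('I', (1, 3)), ((1, 3), (1, 2)), ((1, 2), (1, 1)), ((1, 1), 'O'),
               ('I', (2, 2)), ((2, 2), (2, 3)), ((2, 3), (2, 1)), ((2, 1), 'O'),
               ('I', (3, 3)), ((3, 3), (3, 2)), ((3, 2), 'O')]
disjunctive = [((1, 3), (3, 3)), ((3, 3), (2, 3)),   # order on M3
               ((3, 2), (2, 2)), ((2, 2), (1, 2)),   # order on M2
               ((1, 1), (2, 1))]                     # order on M1
arcs = conjunctive + disjunctive
print(has_cycle(list(w), arcs))        # False: the solution is feasible
print(longest_path(list(w), arcs, w))  # 30
```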
[Figures D.4–D.11 First to eighth steps of the algorithm: each figure shows the graph after removing one more vertex without a predecessor, until only the output vertex O remains.]
The makespan associated with a feasible solution is found by applying a dynamic programming (longest-path) approach to the graph representing this solution. The weights associated with vertices I and O are equal to 0; the weights associated with the other nodes are the operation times. For the solution of Figure D.3, the makespan is equal to 30.

D.3.4.4 Application of the Tabu Approach
We first have to define a feasible solution. This is quite easy: each time a machine becomes idle, one of the products waiting in front of it (if any) is loaded on the machine. Thus, a complete schedule can be obtained by simulation and translated into disjunctive arcs to represent a feasible solution.
The neighborhood of a solution is obtained by changing the direction of one disjunctive arc. Thus, the number of elements in the neighborhood is equal to the number of disjunctive arcs. Of course, a solution belonging to the neighborhood is either feasible (i.e., belongs to E0) or merely admissible (i.e., belongs to E1 \ E0); the status of a solution is determined by applying Algorithm D.3 (Cycle). If a solution s belongs to E0, then the criterion is the makespan f(s). If s ∈ E1 \ E0, the notion of makespan does not make sense; in this case, we can give the criterion twice the value of the best criterion obtained so far. The tabu list contains the endpoints of the N last arcs whose directions have been changed.
D.4 Drawbacks

Some difficulties may arise when tabu search is used:

• Defining the length of the tabu list is a tradeoff between the efficiency of the algorithm and the computational burden; finding a good value is often not easy.
• Selecting a subset of the neighborhood when the number of elements is too large is difficult in some cases.
• The same remark holds for the definition of the aspiration function.
D.5 Recommended Reading

Aboudi R, Jörnsten K (1992) Tabu search for general zero-one integer programs using the pivot and complement heuristic. ORSA J. Comput. 6(1):82–93
Battiti R, Tecchiolli G (1994) The reactive tabu search. ORSA J. Comput. 6(2):126–140
Costa D (1994) A tabu search algorithm for computing an operational time table. Eur. J. Oper. Res. 76:98–110
Crainic TG, Toulouse M, Gendreau M (1997) Toward a taxonomy of parallel tabu search heuristics. INFORMS J. Comput. 9(1):61–72
Dell'Amico M, Trubian M (1993) Applying tabu search to the job-shop scheduling problem. Ann. Oper. Res. 41:231–252
Friden C, Hertz A, de Werra D (1989) STABULUS: a technique for finding stable sets in large graphs with tabu search. Computing 42:35–44
Friden C, Hertz A, de Werra D (1990) TABARIS: an exact algorithm based on tabu search for finding a maximum independent set in a graph. Comput. Oper. Res. 17:437–445
Gendreau M, Hertz A, Laporte G (1994) A tabu search heuristic for the vehicle routing problem. Manag. Sci. 40(10):1276–1290
Glover F (1989) Tabu search, part I. ORSA J. Comput. 1:190–206
Glover F (1990) Tabu search, part II. ORSA J. Comput. 2:4–32
Glover F, Kochenberger GA (2002) Handbook of Metaheuristics. Kluwer Academic Publishers, Boston
Glover F, Laguna M (1998) Tabu Search. Kluwer Academic Publishers, Boston
Hertz A, de Werra D (1990) The tabu search metaheuristic: how we used it. Ann. Math. Art. Intell. 1:111–121
Hertz A (1991) Tabu search for large scale timetabling problems. Eur. J. Oper. Res. 54(1):39–47
Hertz A, de Werra D (1987) Using tabu search techniques for graph coloring. Computing 39:345–351
Li VC, Curry GL, Boyd EA (2004) Towards the real time solution of strike force asset allocation problem. Comput. Oper. Res. 31(12):273–291
Lin S, Kernighan BW (1973) An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 21:498–516
Lokketangen A, Glover F (1998) Solving zero-one mixed integer programming problems using tabu search. Eur. J. Oper. Res. 106(2–3):624–658
Hanafi S, Freville A (1998) An efficient tabu search approach for the 0–1 multidimensional knapsack problem. Eur. J. Oper. Res. 106(2–3):659–675
Appendix E
Genetic Algorithms
E.1 Introduction to "Sexual Reproduction"

Genetic algorithms (GA) use techniques inspired by sexual reproduction. Each cell of an individual contains the same set of chromosomes, which are strings of DNA (deoxyribonucleic acid). This set defines the whole individual. A chromosome is made of genes, which are blocks of DNA. A gene encodes a characteristic of the individual and has its own position in the chromosome. The set of chromosomes is the genome.
During the reproduction process, a complete new chromosome is formed from the parents' chromosomes by means of two mechanisms:
• Recombination (or crossover) consists of taking parts of the chromosomes of both parents to form the new one.
• Mutation consists of changing elements of the DNA. Mutation is mainly caused by errors in copying genes from parents during recombination.
The fitness of an individual reflects its efficiency in its environment. The underlying idea is that the better the fitness of the parents, the higher the probability of elevated fitness for their descendants. In a genetic algorithm, an individual is a feasible or admissible solution.
E.2 Adaptation of Sexual Reproduction to Optimization Processes

Assume that we have to find an optimal solution of a combinatorial problem. In a genetic algorithm, we assign a code to each solution. This code plays the role of a genome and is supposed to characterize the solution in an unambiguous manner. Furthermore, the criterion associated with the problem takes a value for each solution (or code), and this value reflects the fitness of the solution. If the solution is just admissible, that is to say if some of the constraints are not satisfied, then the value of the criterion is penalized.
The algorithm is inspired by Darwin's theory of evolution. Assume that a set of solutions (feasible or admissible) has been generated, either at random or by means of a heuristic algorithm. This set constitutes the initial population. Let n be its size. The genetic algorithm can be summarized as follows.

Algorithm E.1. (Genetic algorithm)
1. Select n/2 pairs of solutions in the population. A solution is chosen at random with a probability that increases with its fitness. In other words, the probability of choosing a solution s is proportional to the criterion f(s) if the objective is to maximize the criterion, and to M − f(s), where M is an upper bound of the criterion, if the objective is to minimize it.
2. Generate 2 descendants for each pair by simulating the reproduction process. Thus, the size of the new population is still n. This new population is a generation issued from the previous population.
3. Check if the stopping test holds. If yes, stop the computation; otherwise go to step 1.
Several stopping tests are available, such as for instance:
– A fixed number of generations has been reached.
– The system has reached a plateau such that successive iterations no longer produce better solutions.
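Algorithm E.1 can be sketched in a few lines of Python. The toy criterion, the 7-bit integer coding and all names below are our own illustrative assumptions, not taken from the book:

```python
import random

def fitness(s):
    # toy criterion to maximize (an assumption of ours): peak at s = 42
    return -(s - 42) ** 2

def crossover(a, b, bits=7):
    # one-point crossover on the binary codes (see Section E.3.2.2)
    k = random.randint(1, bits - 1)
    mask = (1 << (bits - k)) - 1           # low (bits - k) bits come from the mate
    return (a & ~mask) | (b & mask), (b & ~mask) | (a & mask)

def genetic_algorithm(n=20, generations=50, seed=0):
    random.seed(seed)
    population = [random.randrange(128) for _ in range(n)]
    for _ in range(generations):           # stopping test: fixed number of generations
        fmin = min(fitness(s) for s in population)
        weights = [fitness(s) - fmin + 1 for s in population]  # shifted so all > 0
        nxt = []
        for _ in range(n // 2):            # step 1: n/2 pairs keep the size at n
            a, b = random.choices(population, weights=weights, k=2)
            nxt.extend(crossover(a, b))    # step 2: two descendants per pair
        population = nxt
    return max(population, key=fitness)

best = genetic_algorithm()
assert 0 <= best < 128
```

The selection in step 1 uses `random.choices` with fitness-proportional weights, which is exactly the roulette-wheel scheme formalized by Relations (E.1) and (E.2) below; the weights are shifted to be strictly positive because the toy criterion is negative.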
Let us now explore in detail the ingredients of a genetic algorithm.
E.3 Ingredients of Genetic Algorithms

A typical genetic algorithm requires:
1. a genetic representation of the solutions (code);
2. a reproduction process;
3. a fitness function to evaluate the solutions (criterion).
E.3.1 Code

As mentioned before, a code should characterize a solution and lead to the corresponding criterion value in an unambiguous manner. The code is a sequence of numbers or characters that belong to a finite set. In other words, a code can be a binary vector, a vector whose elements are integer values, or a dynamic structure such as a list or a tree. Let us consider some examples to clarify the concept.
E.3.1.1 Binary Code

Assume that a problem involves integers belonging to the set X = {0, 1, …, 99}. In base 2, the integer 99 is represented by the string [1 1 0 0 0 1 1]. As a consequence, any integer of X can be represented by a string of 7 binary digits. For example:
1. In base 2, the integer 2 is written [0 0 0 0 0 1 0].
2. In base 2, the integer 17 is written [0 0 1 0 0 0 1].
3. In base 2, the integer 63 is written [0 1 1 1 1 1 1].
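The 7-digit binary coding above can be sketched as follows (a small illustration of ours; the function names are hypothetical):

```python
# Seven binary digits suffice for X = {0, 1, ..., 99} since 2**7 = 128 > 99.
def encode(x, bits=7):
    """Integer -> zero-padded binary string, e.g. 17 -> '0010001'."""
    return format(x, f"0{bits}b")

def decode(code):
    """Binary string -> integer."""
    return int(code, 2)

assert encode(99) == "1100011"
assert encode(2) == "0000010"
assert encode(17) == "0010001"
assert encode(63) == "0111111"
assert decode(encode(56)) == 56
```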
E.3.1.2 Code Made with Integers

Assume that several ordinal parameters characterize a solution. For example, assume that for the problem under consideration the solutions are characterized by two parameters A and B and that:
• A takes the values "low", "medium" and "high".
• B takes the values "small" and "large".
In this case we can associate 1 with "low", 2 with "medium" and 3 with "high". Similarly, we can associate 1 with "small" and 2 with "large". If a solution is characterized by the pair ("medium", "small"), then its code is [2, 1].
Note: We should keep in mind that the code must be large enough to allow crossover and mutation. We should also be able to change a code "slightly" with limited consequences on the value of the criterion, which requires codes that are as large as possible and that cover all the characteristics of the solutions uniformly. This is not the case for the codes presented in this subsection.

E.3.1.3 Code is a List

This situation happens, in particular, when the parameters that define the solution take qualitative values. For example, the code [F, BL, B] may represent a female with blue eyes and blond hair. Note that, in this case, deriving the value of the criterion from the code is often not straightforward and may require a sophisticated algorithm.
It is sometimes difficult to maintain the consistency of the code as the reproduction process goes on. In other words, the code may become meaningless for some solutions. For example, the code may contain contradictory or incompatible elements.
E.3.2 Reproduction Process

In this section, we explore the choice of the parents used for reproduction, the recombination process and the mutation process.

E.3.2.1 Choice of the Parents

Let c_i, i = 1, …, n, be the value of the criterion for solution i. Assume also that we are looking for the solution that maximizes the criterion. In this case, we associate with solution i the following probability:

p_i = c_i / ∑_{k=1}^{n} c_k    (E.1)

As mentioned before, when the objective is to find the solution that minimizes the criterion, we set:

p_i = (M − c_i) / ∑_{k=1}^{n} (M − c_k), where M = max_{i ∈ {1, 2, …, n}} c_i    (E.2)
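Relations (E.1) and (E.2) can be turned into a small roulette-wheel selection routine (a sketch of ours; the function name is hypothetical):

```python
import random

def selection_probabilities(criteria, maximize=True):
    """Relations (E.1) and (E.2): fitness-proportional selection weights."""
    if maximize:
        total = sum(criteria)
        return [c / total for c in criteria]            # (E.1)
    M = max(criteria)                                    # upper bound of the criterion
    total = sum(M - c for c in criteria)                 # note: 0 if all criteria equal
    return [(M - c) / total for c in criteria]           # (E.2)

# Usage: draw a parent index with probability p_i (a standard roulette wheel).
probs = selection_probabilities([3.0, 1.0, 6.0])
assert abs(sum(probs) - 1.0) < 1e-12
assert probs[2] == 0.6                                   # 6 / 10
parent = random.choices(range(3), weights=probs, k=1)[0]
assert parent in {0, 1, 2}
```

Note that with M = max c_i, the worst individual of a minimization problem receives weight zero, so any strict upper bound of the criterion also works.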
Figure E.1 One-point crossover (the codes of the two parents are cut at a position k and the tail segments are exchanged to form the codes of the two descendants)
E.3.2.2 Recombination (or Crossover) Process

In this section we present the simplest recombination process (i.e., one-point crossover) and a two-point recombination process.
The simplest process is presented in Figure E.1. Let K be the number of elements in the code. We generate k at random in {2, …, K − 1} and:
• Construct the code of descendant 1 by concatenating the first k elements of the code of parent 1 with elements k + 1 to K of the code of parent 2.
• Construct the code of descendant 2 by concatenating the first k elements of the code of parent 2 with elements k + 1 to K of the code of parent 1.
Other recombination processes can be introduced such as, for example, the one represented in Figure E.2.

Figure E.2 Two-point crossover (the parents' codes are cut at positions k and l and the middle segments are exchanged to form the codes of the two descendants)
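Both crossover operators are one-liners on sequence codes (a sketch of ours; the function names are hypothetical, and we allow any cut position in {1, …, K − 1}):

```python
import random

def one_point(p1, p2, k=None):
    """One-point crossover (Figure E.1): cut after position k, swap tails."""
    K = len(p1)
    if k is None:
        k = random.randint(1, K - 1)
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def two_point(p1, p2, k, l):
    """Two-point crossover (Figure E.2): swap the middle segment [k, l)."""
    return p1[:k] + p2[k:l] + p1[l:], p2[:k] + p1[k:l] + p2[l:]

# Example 1 of the text: parents 56 and 97 in the 7-bit binary code, k = 3.
d1, d2 = one_point("0111000", "1100001", k=3)
assert d1 == "0110001" and int(d1, 2) == 49
assert d2 == "1101000" and int(d2, 2) == 104   # 104 is outside X = {0, ..., 99}
```

The last assertion reproduces the difficulty discussed in Example 1 below: recombination can leave the feasible set.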
As mentioned before, the recombination process may lead to a code that is neither feasible nor admissible. Such a situation has already been presented in Section 7.5.3.3. In this case we can restart the recombination process but, if the probability of reaching two feasible codes is very low, we have to introduce more sophisticated processes to obtain the codes of the descendants: such a case is shown in the rest of this appendix.
Example 1
Consider the set X = {0, 1, …, 99} introduced before and the binary code with 7 digits. Consider the integer 56, whose code is [0 1 1 1 0 0 0], and the integer 97, whose code is [1 1 0 0 0 0 1]. These codes are combined using the simplest recombination process described above with k = 3.
The code of descendant 1 is [0 1 1 0 0 0 1] and corresponds to the integer 49.
The code of descendant 2 is [1 1 0 1 0 0 0] and corresponds to the integer 104. As shown, the second descendant does not belong to X. Thus, the result of this recombination process is not acceptable.

Example 2
Consider K products P1, P2, …, PK that visit successively, and in the same order, N machines M1, M2, …, MN. We denote by t_{i,j} the operation time of Pi on Mj.
In the appendix dedicated to tabu search it was shown how to compute the makespan when the order is given. It is easy to verify that choosing, as the code of a solution, the order in which the products visit the machines leads to inconsistencies. For example, assume that K = 4 and consider 2 orders (i.e., 2 solutions). If the code is the sequence of product indices:
• If the order of a solution S1 is P1, P4, P3, P2, then the code is C1 = [1, 4, 3, 2].
• If the order of a solution S2 is P1, P2, P3, P4, then the code is C2 = [1, 2, 3, 4].
If we apply the simplest recombination with k = 2, we obtain two codes:
C3 = [1, 4, 3, 4] and C4 = [1, 2, 3, 2]
Both codes correspond to unfeasible orders because the same index appears twice in each code. Thus, a more sophisticated code is needed. For a given solution, we define:

x_{i,j} = 1 if Pi precedes Pj in the order (solution) under consideration, and x_{i,j} = 0 otherwise.

The code is defined as follows:

C = [x_{1,2}, …, x_{1,K}, …, x_{i,1}, …, x_{i,i−1}, x_{i,i+1}, …, x_{i,K}, …, x_{K,1}, …, x_{K,K−1}]
The number of elements in the code is K(K − 1). For example, the code corresponding to solution S1 is now:

C1 = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1]

The new code corresponding to S2 is:

C2 = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
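The precedence code of a given order can be computed directly (a sketch of ours; the function name is hypothetical):

```python
def precedence_code(order, K):
    """Code of a solution: x_{i,j} = 1 iff P_i precedes P_j, listed row by
    row with the diagonal omitted, i.e. K*(K-1) elements in total."""
    pos = {p: r for r, p in enumerate(order)}      # rank of each product
    return [int(pos[i] < pos[j])
            for i in range(1, K + 1)
            for j in range(1, K + 1) if j != i]

# The two solutions of Example 2:
C1 = precedence_code([1, 4, 3, 2], K=4)            # S1 = P1, P4, P3, P2
C2 = precedence_code([1, 2, 3, 4], K=4)            # S2 = P1, P2, P3, P4
assert C1 == [1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1]
assert C2 == [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
assert len(C1) == 4 * 3                            # K(K - 1) elements
```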
The algorithm used to obtain the codes of the descendants can be summarized as follows:
1. First apply the rule: "If product Pi precedes Pj in the solutions that define both parents, then Pi precedes Pj in the solutions that define both descendants." In other words, if digits are equal to 1 at the same positions in the codes of both parents, then the digits at these positions are equal to 1 in the codes of both descendants. Thus, for the above example:
C3 = [1, 1, 1, •, •, •, •, •, •, •, •, •]
C4 = [1, 1, 1, •, •, •, •, •, •, •, •, •]
2. If x_{i,j} = 1 in a code, then x_{j,i} = 0 in the same code. Thus, C3 and C4 become:
C3 = [1, 1, 1, 0, •, •, 0, •, •, 0, •, •]
C4 = [1, 1, 1, 0, •, •, 0, •, •, 0, •, •]
3. From now on, we explain the algorithm for the first descendant; the process is the same for the second one. Generate at random on {0, 1} the first digit that has not been defined yet (digit number 5 in this example). Assume that 1 is generated. The code becomes:
C3 = [1, 1, 1, 0, 1, •, 0, •, •, 0, •, •]
4. According to the last digit introduced (x_{2,3} = 1), the code can be enriched with x_{3,2} = 0:
C3 = [1, 1, 1, 0, 1, •, 0, 0, •, 0, •, •]
5. Now, apply the transitivity rule: "If Pi precedes Pj and Pj precedes Pk, then Pi precedes Pk." This rule can be rewritten as: if, in the code, x_{i,j} = 1 and x_{j,k} = 1, then x_{i,k} = 1. We apply this rule to the last 1-digit introduced, which is x_{2,3} in our case. Since none of the x_{3,•} is equal to 1, the transitivity rule does not apply.
Going back to step 3, we generate at random the value of x_{2,4}. Assume that 0 is obtained. Then, assign 0 to x_{2,4} and 1 to x_{4,2}:
C3 = [1, 1, 1, 0, 1, 0, 0, 0, •, 0, 1, •]
Since x_{4,2} = 1 and x_{2,3} = 1, we set x_{4,3} = 1:
C3 = [1, 1, 1, 0, 1, 0, 0, 0, •, 0, 1, 1]
Since x_{4,3} = 1, then x_{3,4} = 0. Finally, the code of the first descendant is:
C3 = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1]
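Steps 1–5 can be sketched as follows (our own implementation under the stated rules; `combine` and all names are hypothetical). Common precedences are kept, the remaining digits are drawn at random, and antisymmetry plus transitivity are restored after every assignment:

```python
import random

def combine(code_a, code_b, K, rng=None):
    """Build one descendant from two precedence codes (steps 1-5 above)."""
    rng = rng or random.Random(0)
    pairs = [(i, j) for i in range(1, K + 1)
                    for j in range(1, K + 1) if j != i]
    a, b = dict(zip(pairs, code_a)), dict(zip(pairs, code_b))
    x = {}
    for p in pairs:                          # step 1: common precedences
        if a[p] == 1 and b[p] == 1:
            x[p], x[p[::-1]] = 1, 0          # step 2: x_ji = 0
    def close():
        # step 5: transitivity x_ij = x_jk = 1 => x_ik = 1, until stable
        changed = True
        while changed:
            changed = False
            for (i, j) in [q for q, v in x.items() if v == 1]:
                for k in range(1, K + 1):
                    if k != i and k != j and x.get((j, k)) == 1 \
                            and x.get((i, k)) != 1:
                        x[(i, k)], x[(k, i)] = 1, 0
                        changed = True
    close()
    for p in pairs:                          # steps 3-4: random completion
        if p not in x:
            bit = rng.randint(0, 1)
            x[p], x[p[::-1]] = bit, 1 - bit
            close()
    return [x[p] for p in pairs]

C1 = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1]   # S1 = P1, P4, P3, P2
C2 = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # S2 = P1, P2, P3, P4
child = combine(C1, C2, K=4)
assert child[:3] == [1, 1, 1]                # common precedences are kept
```

Because undefined pairs are always incomparable in the current partial order, either random orientation stays consistent, so the completed code always represents a valid total order.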
E.3.2.3 Mutation
A mutation is not always necessary and is even not recommended in some circumstances. For instance, changing one element of a code in Example 2 above may transform a feasible code into an unworkable one. We just have to keep in mind that the probability of a mutation should remain very low and that we have to check whether a code resulting from a mutation remains feasible.
Note: a local optimization algorithm is often used to perform a mutation.
E.3.3 Criterion

A combinatorial problem usually imposes the criterion, but it may happen that this criterion does not have the characteristics required for an efficient application of a genetic algorithm. This is the case when it is not sensitive enough to small changes in the code: changing one element of the code should result in a perceptible change in the criterion value. For instance, we met this situation in Chapter 7 when a genetic algorithm was used to solve a line-balancing problem: instead of minimizing the number of cells, which evolves in a discrete way, a continuous function was introduced that penalizes the station loading according to the number of stations concerned. Indeed, this criterion also leads to the minimization of the number of cells.
It may also happen that the value of the criterion becomes excessive for a limited number of codes. In other words, some individuals (i.e., solutions) lead to criterion values that are well above those corresponding to most of the solutions. In this case, it is advisable to replace the criterion K with f(K) = K^α, where α < 1.
E.4 Preparing the Use of a Genetic Algorithm

The following decisions should be made when using a genetic algorithm.
E.4.1 Which Test Should Be Used to Stop the Search?

Usually one of the following tests is used:
• The user defines the number of iterations to be done, and the search is stopped when this number is reached.
• The search is stopped when all the elements of the population are identical.
• The search is stopped when it reaches a plateau, that is, when a given number of successive iterations no longer produces better solutions.
Indeed, this list is not exhaustive.
E.4.2 What Should Be the Size of the Population?

The size of the population depends on the number of elements in the code: it should be greater than this number.
E.4.3 What Should Be the Probability of Mutation?

As mentioned before, a mutation is not always necessary. In particular, mutations can be ignored when randomness is already used in the crossover process, as was the case in the second example of Section E.3.2.2. When the elements of a code can be changed without violating the constraints that apply to the solutions, the mutation consists of changing one element of the code with a probability that is usually very low (on the order of 0.001 to 0.01).
E.5 Examples

In Chapter 7 we showed how to use a genetic algorithm to solve a line-balancing problem. In this section, two more examples are provided.
E.5.1 Traveling Salesman Problem

This is a sequencing problem, since its objective is to find a visiting order for the cities. Thus, the definition of the code and the recombination process are the same as those given in Example 2 of Section E.3.2.2.
The criterion is the length of the circuit and does not pose a problem. Mutation is not used since randomness already occurs when generating the descendants.
E.5.2 Graph Coloring Problem

This problem was solved in Appendix D using a tabu search approach. Denote by n the number of vertices. The basic problem is to check whether K colors are enough to color all the vertices of the graph, taking into account that any two vertices connected by an edge must be colored differently. To use a genetic algorithm, we have to define the following elements: code, choice of parents, recombination process and mutation.

E.5.2.1 Code
Assume that the vertices are numbered from 1 to n. Each vertex must be assigned to one of K subsets, each of which represents a color. After assigning the vertices to the subsets, we obtain P = {V1, …, VK}, a partition of the vertices into K subsets; P is a solution. The code is a string <k1, k2, …, ki, …, kn>, where ki ∈ {1, …, K} represents the color of vertex i (or the index of the subset to which i belongs).

E.5.2.2 Choice of the Parents
A partition P (i.e., a solution) being given, we define:

Q(P) = ∑_{k=1}^{K} Q(Vk)

where Q(Vk) is the number of edges having both endpoints in the same subset Vk. In other words, Q(P) is the number of pairs of connected vertices having the same color in the partition (i.e., solution) P. We denote by {P1, P2, …, PW} the population (i.e., the set of solutions) under consideration. The probability of choosing Pi as a parent is (see Relation E.2):

q_i = [n − Q(Pi)] / ∑_{k=1}^{W} [n − Q(Pk)] = [n − Q(Pi)] / [nW − ∑_{k=1}^{W} Q(Pk)]
where n is an upper bound of the number of colors: this corresponds to the case where each vertex has a different color. Indeed, the objective is to maximize n − Q(P). An optimal solution P* is such that Q(P*) = 0: the two endpoints of an edge are never in the same subset (i.e., two vertices connected by an edge are not colored identically).

E.5.2.3 Recombination Process
For this problem, we just have to apply the simplest recombination (see Figure E.1) or the two-point crossover (see Figure E.2).

E.5.2.4 Mutation
For this problem, there is no risk of reaching an inadequate code when applying a mutation. A mutation process can be summarized as follows.
Algorithm E.2. (Mutation)
1. Decide first at random whether a mutation should be applied. The probability of applying a mutation should not exceed 0.01.
2. If the mutation is decided, then:
2.1. Choose at random an element e of the code.
2.2. Choose at random r ∈ {1, …, K}.
2.3. Set e = r.
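The two ingredients specific to this problem, Q(P) and Algorithm E.2, can be sketched as follows (an illustration of ours; the function names are hypothetical, and codes are lists indexed by vertex):

```python
import random

def Q(code, edges):
    """Q(P): number of edges whose two endpoints received the same color."""
    return sum(code[u - 1] == code[v - 1] for u, v in edges)

def mutate(code, K, rng, p=0.01):
    """Algorithm E.2: with probability p, recolor one randomly chosen vertex."""
    code = list(code)
    if rng.random() < p:                 # step 1: decide whether to mutate
        e = rng.randrange(len(code))     # step 2.1: position in the code
        code[e] = rng.randint(1, K)      # steps 2.2-2.3: new color
    return code

# A 4-cycle colored with K = 2 colors; codes are <k1, ..., kn> as above.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert Q([1, 2, 1, 2], edges) == 0       # proper 2-coloring: Q(P*) = 0
assert Q([1, 1, 1, 2], edges) == 2       # edges (1,2) and (2,3) clash
mutated = mutate([1, 2, 1, 2], K=2, rng=random.Random(0), p=1.0)
assert len(mutated) == 4 and all(c in (1, 2) for c in mutated)
```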
E.6 Concluding Remarks

When using genetic algorithms:
• It is sometimes not easy to define a code that fits the genetic approach. Such a code should neatly cover the characteristics of a solution, but it should also contain enough elements to facilitate recombination.
• The recombination should preserve, with a reasonable probability, the consistency of the codes of the descendants. In other words, the codes resulting from the recombination of parents should represent solutions.
• The criterion should be sensitive to small changes in the code. Furthermore, the values of the criterion for all possible codes (i.e., solutions) should remain within reasonable limits; otherwise, a correction is introduced as mentioned in Section E.3.3.
E.7 Recommended Reading

Alander J (1995) An indexed bibliography of genetic algorithms in manufacturing. In: Chambers L (ed) Practical Handbook of Genetic Algorithms: New Frontiers, Vol. II. CRC Press, Boca Raton, FL
Biegel J, Davern J (1990) Genetic algorithms and job-shop scheduling. Comput. Ind. Eng. 19:81–91
Borisovsky P, Dolgui A, Eremeev A (2009) Genetic algorithms for a supply management problem: MIP-recombination vs greedy decoder. Eur. J. Oper. Res. 195(3):770–779
Chu C, Proth J-M (1996) L'Ordonnancement et ses Applications. Sciences de l'Ingénieur, Masson, Paris
Davis L (ed) (1991) Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, NY
Dolgui A, Eremeev A, Kolokolov A, Sigaev V (2002) Buffer allocation in production line with unreliable machines. J. Math. Mod. Alg. 1(2):89–104
Falkenauer E (1993) The grouping genetic algorithms: Widening the scope of GAs. JORBEL 33(1–2):79–102
Falkenauer E (1996) A hybrid grouping genetic algorithm for bin packing. J. Heuristics 2:5–30
Falkenauer E (1998) Genetic Algorithms for Grouping Problems. John Wiley & Sons, Chichester, England
Garey MR, Johnson DS (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA
Goldberg DE (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA
Hill T, Lundgren A, Fredriksson R, Schiöth HB (2005) Genetic algorithm for large-scale maximum parsimony phylogenetic analysis of proteins. Biochim. Biophys. Acta 1725:19–29
Holland JH (1975) Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI
Homaifar A, Qi CX, Lai SH (1994) Constrained optimization via genetic algorithms. Simulation 62(4):242–253
Koza J (1992) Genetic Programming. MIT Press, Cambridge, MA
Laporte G (1992) The Traveling Salesman Problem: an overview of exact and approximate algorithms. Eur. J. Oper. Res. 59(2):231–247
Mühlenbein H (1997) Evolutionary algorithms: theory and applications. In: Aarts E, Lenstra JK (eds) Local Search in Combinatorial Optimization. John Wiley & Sons, New York, NY
Rubinovitz J, Levitin G (1995) Genetic algorithm for assembly line balancing. Int. J. Prod. Econ. 41:343–354
To CC, Vohradsky J (2007) A parallel genetic algorithm for single class pattern classification and its application for gene expression profiling in Streptomyces coelicolor. BMC Genomics 8:49
Wang S, Wang Y, Du W, Sun F, Wang X, Zhou C, Liang Y (2007) A multi-approaches-guided genetic algorithm with application to operon prediction. Art. Intell. Med. 41(2):151–159
Authors’ Biographies
Prof. Alexandre Dolgui is the Director of the Centre for Industrial Engineering and Computer Science at the Ecole des Mines de Saint-Etienne (France). His principal research focuses on manufacturing line design, production planning and supply chain optimization. The main results are based on exact mathematical programming methods and their intelligent coupling with heuristics and metaheuristics. He has coauthored 4 books, edited 11 additional books or conference proceedings, and published about 105 papers in refereed journals, 15 book chapters and over 250 papers in conference proceedings. He is an Area Editor of the Computers & Industrial Engineering journal, and an Associate Editor of Omega – the International Journal of Management Science and of IEEE Transactions on Industrial Informatics. A. Dolgui is also an Editorial Board Member of 10 other journals, such as Int. J. of Production Economics, Int. J. of Systems Science, J. Mathematical Modelling and Algorithms, J. of Decision Systems and Journal Européen des Systèmes Automatisés. He is a Board Member of the International Foundation for Production Research and a Member of IFAC Technical Committees 5.1 and 5.2. He has been a guest editor of Int. J. of Production Research, European J. of Operational Research and other journals, and Scientific Chair of several major events, including the symposiums INCOM 2006 and 2009. For further information, see www.emse.fr/~dolgui
Prof. Jean-Marie Proth is currently a Consultant, Researcher and Associate Editor of the IEEE Transactions on Industrial Informatics.
He has been Research Director at INRIA (National Institute for Computer Science and Automation), leader of the SAGEP (Simulation, Analysis and Management of Production Systems) team and of the Lorraine research centre of INRIA, Associate Member of the Laboratory of Mechanical Engineering at the University of Maryland, and University Professor in France, as well as at the European Institute for Advanced Studies in Management (Brussels). He has carried on a close collaboration with several US universities. His main research focuses on operations research techniques, Petri nets and data analysis for production management, especially facility layout, scheduling and supply chains. He has authored or coauthored 15 books (textbooks and monographs) and more than 150 papers in major peer-reviewed international journals. He is the author or coauthor of about 300 papers for international conferences and 7 book chapters, the editor of 8 proceedings of international conferences, and has been an invited speaker or invited professor 55 times throughout the world. J.-M. Proth supervised 28 PhD theses in France and the USA. He was also a coeditor of the journal Applied Stochastic Models and Data Analysis, an Associate Editor of IEEE Transactions on Robotics and Automation (1995–1998), and a guest editor of Engineering Costs and Production Economics and several other refereed international journals. He has been the Chairman of the Program Committee of various international conferences and an officer of several professional societies, for example: International Society for Productivity Enhancement, Vice-President of Flexible Automation, 1992–1995. For further information, see proth.jeanmarie.neuf.fr/
Index

5
5S, 221, 224
A Adjustable strategy, 5 Agile manufacturing system (AMS), 195, 196, 203, 208 Agility (Measure of –), 203 AIDCS (automatic identification, data capture and sharing), 169 Amplification, 119 Anticollision, 168 Assembly line balancing (ALB), 240, 277, 280, 307, 322 Particular constraints in –, 307 Assembly systems, 154, 323, 354, 421, 425 Mixed-model –, 290 Assignment – with adjustments, 359 Real-time –, 327, 330, 331, 345, 346, 355, 358, 359 Authentication, 190 Auto-ID center, 169 Automated storage and retrieval systems (AS/RS), 425, 427 Automatic identification, 163, 169
B Balancing of the manufacturing entities, 402 Bar code, 163–166, 180, 183, 185, 187, 423, 424, 429, 430 Base stock, 109, 130, 154, 156, 158 Bayes’ theorem, 72 Bellman principle (see Optimality principle) BER (block exemption regulation), 182 Bill-of-material (BOM), 133
Branch and bound (B&B), 244, 251, 255, 397, 483 Bucket brigades, 278, 311, 312, 316, 323 Bullwhip effect, 109, 115, 117-121, 157, 158, 185 – factors, 119 Buyer, 59, 77-82, 84, 94, 99-107, 114, 158
C Capacity extension scheduling, 211 Capacity requirements planning (CRP), 139 Capital placement, 474, 476, 481 Cluster, 1, 6, 29, 30, 32, 171, 378-384 Circuit (in a directed graph), 460, 513, 514 Code, 187, 261-263, 265-268, 519-529 Binary –, 521, 523 – made with integers, 521 List –, 521 Competition, 4, 17, 28, 35, 42, 50, 103, 105, 106, 195, 196, 207, 329, 372, 415 Imperfect –, 4, 50 Perfect –, 4 COMSOAL, 237, 243-251, 257-260, 270, 273, 280, 282, 288, 295, 300, 310, 318, 320, 322 Conflict, 331, 337-341 Conjoint measurement, 1, 6, 20, 21, 26, 37 Constraint, 5, 22, 25, 43, 44, 45, 87, 88, 90, 91, 93, 102, 106, 111, 140, 148, 164, 167, 168, 198, 204, 212, 223, 229, 243, 254, 255, 258, 261, 277, 307, 309, 321, 322, 328, 337, 358, 402, 403, 449, 451, 452, 461, 468, 469, 477, 493, 498, 499, 504, 509, 520, 527 Budgetary –, 414 Capacity –, 87, 328, 338, 341, 451 Demand –, 87 Integrity –, 8, 13, 265
534 Layout –, 394, 398, 401, 402, 410, 412 Precedence –, 222, 238, 240, 245, 247, 251, 259, 261, 262, 264, 267, 328, 338, 366 Production –, 308 Resource capacity –, 328 Processing time –, 328, 338 Ready time –, 328, 338 Real time –, 364 Transportation –, 112 Warehouse-sizing –, 438, 439, 442, 444 Convertibility, 208 CONWIP, 109, 113, 155, 156, 158 Core business (activity), 80 CORELAP (computerized relationship layout planning), 371, 394-396 Correspondence analysis, 7 Correlation coefficient, 19 Cost, 5, 7, 9, 10, 11, 16, 36, 37, 45, 49, 51, 52, 55, 65, 78, 79, 80-83, 85-89, 91, 92, 99, 101, 102, 110, 113, 120-126, 128-132, 138, 140, 144, 146, 148, 150, 151, 164-169, 173, 174, 185, 186, 188, 189, 190, 191, 193, 198, 203, 204219, 221-223, 225, 237, 278, 280, 307-311, 323, 328, 329, 330, 332, 334, 336, 371, 374, 382, 402-415, 419, 422-424, 429, 430, 435-439, 442, 443, 460-462, 466, 467, 469, 471, 472, 474, 479, 480, 482, 488, 490, 493, 494 Administrative –, 110 Average –, 3, 129, 130, 131, 173, 404, 488 Backlogging –, 110, 121, 125, 128, 173, 174, 487, 488, 490 Design –, 196 Fixed –, 3, 10, 11, 12, 13, 15, 81, 83, 110, 119, 145 Handling –, 404, 411, 412, 414, 423 Holding – (see also Inventory –), 110, 123, 139, 140, 143, 145, 146, 150, 152, 332, 437, 439 Incremental –, 16
Index Inventory – (see also Holding –), 3, 110, 125, 128, 146, 148, 151, 172-174, 411, 434, 469, 487, 488, 490 Labor –, 104, 106, 164, 183, 186, 188, 191, 197, 202, 411 Maintenance –, 424 Manufacturing –, 168, 195, 196, 207, 403 Marginal –, 3, 51 Ordering – (see Production –) Production – (i.e. Ordering –), 1, 2, 3, 10, 78, 80, 101, 104, 109, 110, 122, 125, 128, 142, 146, 147, 150, 184, 191, 195, 196, 198, 206, 207, 209, 221, 419, 422, 469 Rearrangement –, 411, 412, 414, 415 RFID implementation –, 164 Salvage –, 68, 126 Setup –, 81, 110, 123, 140, 145, 151 Shortage –, 110 Transportation –, 107, 110, 405-409, 422, 443 Variable –, 3, 10, 11, 12, 14, 15, 81, 86, 89 – stability, 205 Cost-plus method, 1, 15, 16, 37 Counterfeiting, 181, 184, 190, 191 COVERT rule, 336 CRAFT (computerized relative allocation of facilities technique), 371, 394, 398 Criterion, 43, 48, 81, 84-86, 88, 89, 91, 92, 93, 110, 196, 222, 240-247, 249, 250, 253-255, 259, 261-265, 268, 271, 273, 274, 282, 300, 308, 309, 318, 320, 323, 328-330, 333, 338, 344, 360, 362, 363, 384-387, 390, 397, 398, 402-404, 410, 413, 444, 445, 449, 454, 456, 457, 483, 485, 486, 489492, 495, 499, 503-505, 517, 519, 522, 526, 528, 529 Cost –, 89
Index Quality –, 89 Traffic –, 384, 390 Weight –, 384, 385 Critical – function, 101 – mass, 185 – point, 183 – path, 222, 223, 478, 479 – task, 480 Cross-decomposition, 371, 377, 378, 384, 385, 390 – GP method (GPM), 377, 378, 385, 387, 388 – with weight criterion, 385 – with traffic criterion, 390 Crossover process (see Recombination process) Crossover stations, 317, 322 Cumulative density function, 285 Currency – exchange, 77 – options, 82 Customization, 209, 210 Cycle time (takt time), 196, 221, 223, 237, 238, 240, 242, 246, 247, 250, 252, 253, 254-259, 264, 268-273, 277, 280, 282, 283, 287-289, 292, 299, 301, 303-307, 313-316, 321, 322, 427, 450
D Dedicated manufacturing line (DML), 207 Demand – curve, 4, 33, 218 – intensity, 60, 61, 70, 71, 75, 421 Design, 49, 79, 83, 102, 113, 122, 164, 185, 196, 198, 203-206, 209, 219, 221, 223, 225, 227-231, 233, 237, 298, 309, 371, 376378, 384, 403, 404, 419, 420, 424, 431, 433, 435, 440, 443, 445, 499, 503, 509 – change, 79
535 Tag –, 168 Diagnosis ability, 209 Discount strategy, 1, 7 Discount factor, 111 Dissimilarity index, 378, 379, 382, 391393 Distance Communication –, 168 Euclidian –, 29 Distribution Gamma –, 112 Gaussian –, 111, 112, 227, 228, 231, 232, 278, 284-288 Geometric –, 111 Logarithmic –, 111, 457 Poisson – (see Poisson process) Duopoly market, 4, 33, 37, 77, 94 Dynamic pricing, 41, 42, 49, 55, 60, 75 Dynamic programming, 126, 141, 142, 147, 149, 150, 212, 413, 414, 459-462, 466, 481, 516 Backward –, 126, 141, 142, 147, 149, 150, 466 Forward –, 462-464, 475, 478
E Echelon stock policy, 109, 113, 116, 121, 132, 133 Economic order quantity (EOQ), 151 EDI (electronic data interchange), 179 ELV (end-of-life vehicle) law, 182 EMF (electronic magnetic field), 165 EPC (electronic product code), 165, 185 Equilibrium – point, 12, 13 – profit, 94 – state, 95-99 – theory, 106 Equipment selection, 223, 277, 278, 308, 310 Ergonomics, 204, 207, 433 Error handling, 210 EUREKA, 252
536
Index
F FABLE, 252 Fitness function, 262, 520 Flexibility, 5, 79, 99, 106, 158, 164, 196198, 202, 204, 207-210, 219, 221, 323, 330, 331, 343, 400, 401, 413, 425, 432, 434, 435 Flexible job-shop, 200, 201 Flexible manufacturing cell (FMC), 199, 200 Flexible manufacturing module (FMM), 199, 200 Flexible manufacturing system (FMS), 195-197, 203, 208 Flexible packing system, 200, 201 Flow-shop scheduling, 506 Forecasting, 111, 121, 158, 188, 191, 203, 209, 415 Forrester effect (see Bullwhip effect) Frequency Operating –, 168 Fusion, 259
G
Gantt chart, 328, 329
GATT, 329
Genetic algorithm (GA), 237, 261-263, 265, 268-270, 273, 274, 430, 519, 520, 526-529
Granularity, 168, 170
Graph coloring, 509, 510, 528
Gravity center, 30-32, 379-383

H
High-price strategy, 4
Horizon, 43, 60, 65, 111, 121, 125-128, 130, 139, 330, 331, 337, 364, 372, 374, 402
  – of a problem, 60, 111, 127, 140, 142, 146, 148, 151, 211, 402, 411, 437, 468, 471
  Forecasting –, 121
  Management –, 327, 328, 329
  Planning –, 437
  Rolling –, 71, 75, 133

I
INRIA-SAGEP method, 371, 394, 397
Installation stock policy, 116, 133
Integration ability, 208
Interrogator (see RFID reader)
Inventory
  – level, 35, 41, 42, 48-56, 61-65, 70-75, 109, 110, 115-118, 125-128, 132, 134, 136, 137, 140, 142-148, 150, 151, 177, 183, 195, 196, 468-471, 475
  – position, 127, 130
  – control, 109, 133, 158, 428
  – cost, 3, 110, 125, 128, 146, 148, 151, 172-174, 411, 434, 469, 487, 488, 490

J
Job-shop
  – system, 195, 196, 202, 331, 368, 381, 512
  – scheduling problem, 196, 512
  Flexible –, 200, 201
Just-in-time (JIT), 112, 204, 221, 225, 239, 318, 421, 427

K
Kaizen, 221, 223, 224
Kanban, 109, 113, 122, 152-158, 225
  Generalized – (GKS), 109, 113, 156, 158
  Extended – (EKS), 109, 157, 158
Kilbridge and Wester (KW) heuristic, 251
K-mean analysis, 1, 6, 29, 37, 371, 377, 378, 380, 381, 383, 384
Knapsack problem, 480, 481, 499
Kuhn and Tucker algorithm, 44
L
Lagrangian, 44, 45
Layout, 210, 220, 316, 319, 320, 371-378, 381, 394-401, 403, 404, 405, 406, 408, 410-415, 420, 432, 433, 436, 451
  Adaptable –, 372
  Basic –, 400, 401
  Cellular –, 372, 374-378
  Department –, 273, 375, 377, 378
  Dynamic facility – (DFL), 371, 372, 376, 403, 410, 412, 415
  Functional department –, 372, 373, 375-378
  – design, 371, 376-378
  Linear –, 372, 373, 375-378, 400
  Manufacturing –, 371, 372, 403
  Multilinear –, 400, 401
  Robust – (RL), 371, 372, 403, 405, 411, 415
  Static facility – (SFL), 372, 404, 414
  U-shaped assembly line –, 316
Lean manufacturing system (LMS), 195, 196, 203, 207, 218, 219, 220
Left-shift, 338
Line balancing, 156, 220, 221, 223, 237, 240, 241, 270, 271, 277, 280, 297-299, 304, 307, 308, 310, 318, 322, 323, 373, 377, 378, 499, 505, 526, 527
  Mixed-model –, 290, 299
Linear interpolation, 18, 19, 49
Linear production, 153, 347, 364
Linear programming, 92, 94, 237, 255, 320, 345, 403, 494, 498, 499
Loss, 99, 101, 186, 191
  Direct –, 186
  Financial –, 8, 203
  Indirect –, 186
  – of initiative, 101
  – of product, 164
  – of profit, 172
  – of revenue, 124
  – of skill, 107
  Productivity –, 271, 305, 411
  Stock –, 171-173
Lot-sizing, 109, 113, 119, 138, 139, 158, 499
Low-price strategy, 5, 9
M
Makespan, 196, 328, 329, 338, 340, 341, 351, 355, 506-509, 513, 516, 517, 524
Manufacturing resource planning (MRP2), 133, 138, 139, 158
Margin, 1, 3, 5, 9-14, 37
Market
  Duopoly –, 4, 33, 37, 77, 94
  Monopoly –, 4, 6
  Oligopoly –, 1, 4, 32, 34
  – segmentation, 1, 6, 29, 35
  – share, 1, 2, 9, 16, 21, 35, 104
Mass production, 195-198, 204, 206, 207, 237, 420, 421
Master production schedule (MPS), 134-136, 139
Match-up schedule repair heuristic, 337, 342
Material requirement planning (MRP), 133, 136-139, 158
Maximum deviation coefficient, 273
Modularity, 208, 209, 403
Monte-Carlo method, 65, 66
MUDA, 219
Mutation, 262, 268-270, 519, 521, 522, 526-529
Myopic customer, 37, 41, 75
N
Nash equilibrium, 4, 94
Neighbor solutions, 70, 259, 260, 274, 397, 398, 410, 413, 450-453, 455, 456, 503-506, 509, 510, 511, 517
Newsboy (Newsvendor) problem, 122, 123, 158

O
Offshoring, 77, 78
Oligopoly market (see Market)
Operation (task) time, 109, 114, 133, 135, 136, 138, 154, 155, 188, 198, 222, 238, 240-242, 245, 248, 249, 251, 252, 254, 256, 257, 261, 262, 263, 267, 270, 271, 277, 278, 280-285, 287-292, 294, 296, 299, 300, 302, 304, 306-308, 310, 312, 318, 319, 321-323, 328, 329, 331-336, 338, 346, 347, 350-353, 355, 358, 359, 362, 363, 365, 367, 372, 507, 508, 512, 516, 524
  Deterministic –, 277, 312, 372
  Gaussian –, 284
  Stochastic –, 277, 278, 282, 307, 318, 323, 358
  – evaluation, 299
Optimality principle, 459-461, 465
Oscillation, 119-121
Outsourcing, 77-82, 85, 94, 96, 97, 99, 100-107, 328, 422
  – benefits, 77-81
  – in China, 100, 101, 104
  – negative effects, 77, 79, 101
  – process, 77, 80
  Cons –, 99
  Offshore –, 77-79, 80, 99, 104-107, 422
  Pros –, 99
  Strategic –, 77, 94
Overflow probability, 280, 282

P
Packaging, 6, 51, 82, 168, 175, 179, 373, 400, 419, 422, 429-432, 434
Parallel stations, 277, 278, 306, 307, 373
Partial schedule, 337
Part-worth, 1, 6, 21-32
PERT, 479
Phase-lag, 119, 120
Poisson process, 41, 50, 51, 60, 72, 175, 177
Policy
  Base stock –, 109, 130, 154
  Echelon stock –, 133
  Return –, 120, 158
  (R, Q) –, 127, 128
  (s, S) –, 130
Price
  Equilibrium –, 3
  – discrimination, 6, 8, 37, 42
  – elasticity, 3
  – skimming, 1, 8, 9, 37
  – strategy, 4, 5, 9, 32
  – testing, 1, 15, 16
  – war, 33, 35
Pricing
  Dynamic –, 41, 42, 49, 55, 60, 75
  Penetration –, 9, 37
Privacy
  – bit, 190
  – concern, 163, 188-191
Product life (PL), 375
Production
  – capacity, 5, 33, 35, 83, 84, 110, 155, 208, 210
  – cost (see Cost)
  – cycle, 184, 195, 222, 223, 238, 351-353, 373
Productivity loss (see Loss)
Profile method, 1, 21, 26, 28, 29
Profit
  Equilibrium – (see Equilibrium)
  Expected –, 124
Profit margin (see Margin)
Programming
  Compromise –, 88, 93, 309, 328
  Goal –, 88, 92, 328
  Linear –, 92, 94, 237, 255, 320, 345, 403, 494, 498, 499
Project management, 477

Q
Quality, 2, 4, 6, 21, 78, 79, 81, 82, 84, 85, 89, 91, 92, 99, 107, 109, 111, 120, 139, 167, 170, 176, 177, 184, 185, 188, 196, 202-205, 207-210, 218, 219, 224, 245, 262, 271, 316, 323, 327, 331, 344, 345, 360, 371, 422, 423, 425, 427, 429, 430
  – control, 82, 185, 323, 422, 423, 430
  – management, 81, 327
  Measure of –, 85
  – risk, 79, 99
R
Ranked positional weight (RPW), 237, 244, 248, 249, 251
Reactive system, 154
Reactivity, 79, 81, 82, 84-86, 109, 158, 367, 372, 422
Real-time decision, 330, 363
Recombination process, 262, 266, 519, 522-524, 527, 528, 529
Reconfigurable manufacturing systems (RMS), 195, 196, 207, 210, 415
Recursive problem, 460
Reengineering, 78, 223
Reproduction process, 262, 264, 519-522
Return-on-investment (ROI), 185, 192
Revenue, 1-3, 9, 33, 35, 37, 42, 43, 48, 49, 50-54, 62, 65, 67-70, 74, 122-124, 435
  Expected –, 42, 50-54
  Marginal –, 3
  Salvage –, 122, 123
  Selling –, 122
  – management, 1, 9, 37, 42
RFID (radio-frequency identification), 112, 163
  – advantages, 183, 184, 188, 191
  – cost, 164
  – reader, 163, 164, 183, 184, 186
Right-shift, 338, 340
Risk-neutral model, 50
Rolling horizon, 71, 75, 133
Rough Cut Capacity Planning (RCCP), 139
Rule
  Dispatching –, 331, 335-337, 367
  Priority –, 327, 331, 332, 334, 336, 345
S
Safety margin, 241
SAL (simple assembly line), 237, 240
SALB-1, 237, 240-245, 248, 253-255, 257, 258, 260-262, 271, 274
SALB-2, 237, 240, 241, 255-258, 271, 274
SALOME, 252
Salvage value, 41, 49, 60, 61, 70, 75, 122
Scheduling
  Dynamic –, 327, 329, 330, 331, 367
  Flow-shop –, 508
  Job-shop –, 512
  Predictive-reactive –, 337
  Production –, 79
  Reactive –, 331
  Static –, 327, 329, 337, 367
School timetable, 451
Seiketsu, 224
Seiri, 224
Seiso, 224
Seiton, 224
Selling curve, 1, 4, 15-17, 37
Shifting/swapping repair heuristic, 337
Shitsuke, 224
Shortest path, 397, 463-465, 497
Shrinkage, 164, 171, 174, 176-178, 186-188, 191
Simulated annealing, 67, 68, 70, 237, 259, 260, 273, 274, 397, 398, 408, 410, 412, 430, 449, 450, 452, 453, 456, 457, 498, 503
Six sigma method, 221, 227, 230, 231
SKU (stock keeping unit), 171, 428, 429
Solution
  Admissible –, 47, 504, 506, 510, 513, 519, 521
  Feasible –, 48, 87, 111, 137, 138, 216, 259, 260, 345, 398, 399, 410, 412, 449-453, 456, 469, 483, 485, 489, 504, 506, 510, 514, 516
SPL (simple production line), 237
Station, 152-157, 166, 182, 200-202, 210, 223, 225, 226, 237-274, 277, 278, 280-284, 287-289, 292-296, 298-312, 316-323, 372-374, 376, 377, 399, 402, 427, 432, 433, 450, 451, 526
  Parallel –, 277, 278, 306, 307, 373
Stimulus, 20-24, 26
Stock
  – loss, 171-173
  – out, 125, 138, 180, 186, 188
  – taking, 176
Storage equipment, 423, 426, 431-433
Subcontractor, 78
Supply chain, 2, 109, 110, 112-115, 118-122, 132-134, 157, 158, 163, 164, 169-171, 174, 175, 177, 178, 180, 181, 183-189, 191, 192, 330, 331, 352, 373, 422, 428, 435, 440, 443, 445
Surface acoustic wave (SAW) technology, 168
Swap, 338, 341
System
  – effectiveness, 272
  Pull –, 109, 152, 153, 155, 219
T
Tabu list, 259, 260, 261, 274, 503-506, 509-512, 517
Tabu search, 237, 259, 273, 274, 503-505, 510, 513, 517, 524, 528
Tag, 163-171, 174, 176, 179-184, 186, 188, 189-191
  – killing, 189, 191
  Active –, 164, 165, 167
  Passive –, 165-168, 184, 188
  Semi-passive –, 166, 167
Takt time (see Cycle time)
Tchebycheff's polynomials, 285, 286
TDMA (time division multiple access), 169
Time
  Cycle – (see Cycle time)
  Lead –, 81-83, 85, 86, 120-122, 126-129, 131, 135, 138, 158, 203, 209, 223, 238, 486-488, 490, 491
Time-phased gross requirement, 135
Total productive maintenance (TPM), 221, 225
Transponder, 165, 180, 189
Transportation resources, 79, 170, 179, 198, 332, 399, 400, 401, 424, 432
  Selection of –, 399
Traveling salesman problem (TSP), 428, 430, 450, 497, 505, 527
Triangular density of probability, 277-279, 299
Two-factor method, 1, 21, 26, 29
U
Utility function, 21
U-shaped assembly line, 277, 278, 311, 316-318, 323
V
Vendor selection, 77, 81, 82, 84, 99
Vendor-managed inventory (VMI), 121
W
Warehouse
  Mail order selling (MOS) –, 420, 421
  Retailer supply (RSW) –, 420, 421
  Spare part (SPW) –, 420, 421, 427
  Special (SW) –, 420, 421
  Unit-load –, 419, 435, 436
  – design, 419, 420, 431
  – location, 420, 440, 444
  – management, 419, 429, 431
  – management systems (WMS), 431
  – sizing, 419, 437, 439
  – taxonomies, 420
  – usefulness, 422
Warehousing
  – equipment, 427, 432
  – operations, 423
Waste, 182-184, 195, 196, 202, 207, 209, 218, 219, 221, 224, 225
  – elimination, 196, 219, 221
Whiplash effect (see Bullwhip effect)
Y
Yield management (see Revenue management)