English, XVI + 462 pages, 2020
Engineering Applications of Computational Methods 2
Xinyu Li Liang Gao
Effective Methods for Integrated Process Planning and Scheduling
Engineering Applications of Computational Methods Volume 2
Series Editors Liang Gao, State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China Akhil Garg, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
The book series Engineering Applications of Computational Methods addresses the numerous applications of mathematical theory and latest computational or numerical methods in various fields of engineering. It emphasizes the practical application of these methods, with possible aspects in programming. New and developing computational methods using big data, machine learning and AI are discussed in this book series, and could be applied to engineering fields, such as manufacturing, industrial engineering, control engineering, civil engineering, energy engineering and material engineering. The book series Engineering Applications of Computational Methods aims to introduce important computational methods adopted in different engineering projects to researchers and engineers. The individual book volumes in the series are thematic. The goal of each volume is to give readers a comprehensive overview of how the computational methods in a certain engineering area can be used. As a collection, the series provides valuable resources to a wide audience in academia, the engineering research community, industry and anyone else who are looking to expand their knowledge of computational methods.
More information about this series at http://www.springer.com/series/16380
Xinyu Li · Liang Gao
Effective Methods for Integrated Process Planning and Scheduling
Xinyu Li School of Mechanical Science and Engineering, HUST State Key Laboratory of Digital Manufacturing and Equipment Technology Wuhan, Hubei, China
Liang Gao School of Mechanical Science and Engineering, HUST State Key Laboratory of Digital Manufacturing and Equipment Technology Wuhan, Hubei, China
ISSN 2662-3366  ISSN 2662-3374 (electronic)
Engineering Applications of Computational Methods
ISBN 978-3-662-55303-9  ISBN 978-3-662-55305-3 (eBook)
https://doi.org/10.1007/978-3-662-55305-3

Jointly published with Science Press
The print edition is not for sale in China (Mainland). Customers from China (Mainland) please order the print book from: Science Press.

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020

This work is subject to copyright. All rights are reserved by the Publishers, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer-Verlag GmbH, DE part of Springer Nature.
The registered company address is: Heidelberger Platz 3, 14197 Berlin, Germany
Foreword
In academia and industry worldwide, the scheduling theory and methods of manufacturing systems form an interdisciplinary research direction involving systems engineering, operations research, artificial intelligence, control theory, computer technology, management engineering, and other disciplines. Process planning and shop scheduling are two highly important subsystems of the modern manufacturing system. Process planning decides the process route and the allocation of processing resources for the jobs, transforming raw materials into their finished form. The scheduling system assigns the jobs to the actual processing machines, linking process design with production. Integrating process planning and scheduling in the manufacturing procedure can optimize the production process, improving efficiency and reducing cost, and is a key to realizing the intellectualization of the manufacturing system. The book mainly addresses the flexible process planning problem, the shop scheduling problem, the integrated process planning and scheduling (IPPS) problem, and related problems. These problems exhibit great complexity: difficulty in modeling, computational complexity, multiple constraints, uncertainty, many local optima, large problem scales, multiple objectives, and the coexistence of discrete and continuous variables. Therefore, research on IPPS has important academic and engineering value. This book is a monograph on effective methods for IPPS. Based on many years of research, teaching, and engineering experience of the authors and their team, the book is organized around three aspects: flexible process planning, job shop scheduling, and integrated process planning and scheduling. Furthermore, the book also presents the Genetic Algorithm (GA), Genetic Programming (GP), the Particle Swarm Optimization (PSO) algorithm, and other intelligent algorithms for solving these problems.
In addition, this book systematically describes the design of these intelligent algorithms, which may provide effective methods for researchers solving practical engineering problems. The book is divided into five parts. The first part, Chaps. 1 to 3, reviews the IPPS problem, including its models, methods, and applications. The second part discusses the single-objective optimization of IPPS. The third part treats IPPS from the perspective of multi-objective optimization. The fourth part addresses the dynamic scheduling problem. Finally, the fifth part presents the IPPS simulation prototype system. In general, the main contents of the book are in the second to fourth parts, where process planning, scheduling, and IPPS are discussed in detail. The second part covers single-objective optimization in the order of Process Planning (PP), the Job shop Scheduling Problem (JSP), the Flexible Job shop Scheduling Problem (FJSP), and Integrated Process Planning and Scheduling (IPPS). In Chaps. 4 and 5, the GP algorithm and the PSO algorithm are used to solve the PP problem. Chapter 6 introduces the application of a hybrid PSO and Variable Neighborhood Search (VNS) algorithm to JSP. In Chap. 7, a modified GA is introduced to solve FJSP, including new Global Selection (GS) and Local Selection (LS) strategies that generate a high-quality initial population. Chapter 8 introduces a Multi-Swarm Collaborative Evolutionary Algorithm (MSCEA) to solve FJSP. In Chap. 9, the mathematical model of IPPS and the application of an evolutionary algorithm are presented. In Chap. 10, an agent-based approach is applied to solve IPPS. Chapter 11 introduces the application of a modified GA to IPPS. The third part is multi-objective optimization, introduced from FJSP to IPPS. Chapter 12 applies GA and the Tabu Search (TS) algorithm to the multi-objective FJSP. Chapter 13 applies PSO and TS to the multi-objective FJSP. Chapter 14 presents a Multi-Objective Genetic Algorithm (MOGA) based on the immune and entropy principles to solve the multi-objective FJSP. In Chap. 15, an effective genetic algorithm is proposed to optimize the multi-objective IPPS problem with various flexibilities in process planning. The research in Chap. 16 focuses on the multi-objective IPPS problem, using a game theory based approach to deal with the multiple objectives.
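The GA-based methods summarized above all share the same skeleton: a population of encoded schedules evolved by selection, crossover, and mutation. As a purely illustrative sketch (not any of the algorithms in this book), the following Python code applies a basic GA with the widely used operation-based encoding to a tiny, hypothetical 3-job, 3-machine JSP instance, minimizing makespan:

```python
import random

# Hypothetical 3-job x 3-machine JSP instance (illustration only):
# each job is a list of (machine, processing_time) operations in fixed order.
JOBS = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

def makespan(chrom):
    """Decode an operation-based chromosome (job indices with repetition)."""
    next_op = [0] * len(JOBS)      # next operation index of each job
    job_ready = [0] * len(JOBS)    # completion time of each job's last op
    mach_ready = [0] * 3           # completion time of each machine's last op
    for job in chrom:
        m, t = JOBS[job][next_op[job]]
        start = max(job_ready[job], mach_ready[m])
        job_ready[job] = mach_ready[m] = start + t
        next_op[job] += 1
    return max(job_ready)

def random_chromosome():
    chrom = [j for j, ops in enumerate(JOBS) for _ in ops]
    random.shuffle(chrom)
    return chrom

def crossover(p1, p2):
    """Job-preserving crossover: keep one random job's genes in place from p1,
    fill the remaining slots with the other jobs in p2's relative order."""
    keep = random.randrange(len(JOBS))
    filler = (j for j in p2 if j != keep)
    return [j if j == keep else next(filler) for j in p1]

def mutate(chrom):
    i, k = random.sample(range(len(chrom)), 2)
    chrom[i], chrom[k] = chrom[k], chrom[i]

def ga(pop_size=30, generations=100):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = [min(pop, key=makespan)]        # elitism
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1, p2 = (min(random.sample(pop, 3), key=makespan) for _ in range(2))
            child = crossover(p1, p2)
            if random.random() < 0.2:
                mutate(child)
            new_pop.append(child)
        pop = new_pop
    return min(makespan(c) for c in pop)
```

In the operation-based encoding, the k-th occurrence of job j in the chromosome denotes job j's k-th operation, so any permutation with repetition decodes to a feasible schedule; the crossover above preserves that property by construction.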
The fourth part focuses on the dynamic scheduling problem in Chaps. 17 to 20. Chapters 17 and 18 develop a hybrid genetic algorithm and tabu search for the dynamic job shop rescheduling problem, with Chap. 18 considering multiple objectives. In Chap. 19, the Dynamic Flexible Job Shop Scheduling Problem (DFJSSP) with job release dates is studied, and an approach based on GEP is proposed. In Chap. 20, a new dynamic IPPS model is formulated, and the combination of a hybrid algorithm and the rolling window technique is applied to solve the dynamic IPPS problem. The fifth part, Chap. 21, introduces an IPPS simulation prototype system developed from the practical requirements of the workshop and from theoretical research results. First, the application background of the system is introduced, and then the structure of the system is analyzed. Finally, the implementation and operation of the prototype system are demonstrated through an engineering example, verifying the availability and effectiveness of the prototype system.
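The reactive side of dynamic scheduling in the fourth part can be illustrated with a minimal, hypothetical sketch (not the book's algorithms): an event-driven simulator in which a dispatching rule picks the next job each time the machine becomes free, so decisions react to job release dates as they occur. The data and rule names below are assumptions for illustration.

```python
def simulate(jobs, rule):
    """Event-driven dispatching on a single machine.
    jobs: list of (release_time, processing_time); rule: key function that
    ranks the released jobs.  Returns the mean flow time."""
    pending = sorted(jobs)            # jobs ordered by release time
    ready, clock, total_flow, i = [], 0, 0, 0
    while i < len(pending) or ready:
        if not ready and pending[i][0] > clock:
            clock = pending[i][0]     # machine idles until the next arrival
        while i < len(pending) and pending[i][0] <= clock:
            ready.append(pending[i])  # admit every job released by now
            i += 1
        job = min(ready, key=rule)    # reactive decision at this event
        ready.remove(job)
        clock += job[1]
        total_flow += clock - job[0]  # flow time = completion - release
    return total_flow / len(jobs)

spt = lambda job: job[1]    # shortest processing time first
fifo = lambda job: job[0]   # earliest release first

# Hypothetical (release date, processing time) pairs:
jobs = [(0, 7), (1, 2), (2, 5), (3, 1)]
print(simulate(jobs, spt), simulate(jobs, fifo))  # prints 8.5 9.75
```

Even on this toy instance, the SPT rule beats FIFO on mean flow time; GEP-style approaches such as the one in Chap. 19 evolve composite priority rules of this kind automatically instead of fixing one in advance.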
The research in this book was funded by projects including the National Natural Science Foundation of China (Grant Nos. 51825502, 51775216, 51375004, 51005088, 50305008), the National Key R&D Program of China (Grant No. 2018AAA0101704), the Natural Science Foundation of Hubei Province (Grant No. 2018CFA078), the HUST Academic Frontier Youth Team (Grant No. 2017QYTD04), and the Program for New Century Excellent Talents in University (Grant No. NCET-08-0232). This book was written by Dr. Xinyu Li and Dr. Liang Gao of the State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology (HUST). In addition, postdoctoral researcher Chunjiang Zhang and graduate students including Guangchen Wang, Yingli Li, Jin Xie, Qihao Liu, Di Fang, Haoran Li, Lin Gui, Yang Li, and others participated in the relevant research work. Given the limited knowledge of the authors, the book will inevitably have limitations and even errors, and many contents could be improved and researched in more depth; readers are therefore invited to offer criticism and corrections.

Wuhan, China
Xinyu Li Liang Gao
Contents
1  Introduction for Integrated Process Planning and Scheduling  1
   1.1  Process Planning  1
   1.2  Shop Scheduling  3
        1.2.1  Problem Statement  3
        1.2.2  Problem Properties  4
        1.2.3  Literature Review  5
   1.3  Integrated Process Planning and Shop Scheduling  6
   References  11

2  Review for Flexible Job Shop Scheduling  17
   2.1  Introduction  17
   2.2  Problem Description  18
   2.3  The Methods for FJSP  18
        2.3.1  Exact Algorithms  20
        2.3.2  Heuristics  22
        2.3.3  Meta-Heuristics  24
   2.4  Real-World Applications  33
   2.5  Development Trends and Future Research Opportunities  33
        2.5.1  Development Trends  33
        2.5.2  Future Research Opportunities  34
   References  37

3  Review for Integrated Process Planning and Scheduling  47
   3.1  IPPS in Support of Distributed and Collaborative Manufacturing  47
   3.2  Integration Model of IPPS  48
        3.2.1  Non-Linear Process Planning  48
        3.2.2  Closed-Loop Process Planning  49
        3.2.3  Distributed Process Planning  50
        3.2.4  Comparison of Integration Models  51
   3.3  Implementation Approaches of IPPS  52
        3.3.1  Agent-Based Approaches of IPPS  52
        3.3.2  Petri-Net-Based Approaches of IPPS  54
        3.3.3  Algorithm-Based Approaches of IPPS  54
        3.3.4  Critique of Current Implementation Approaches  55
   References  56

4  Improved Genetic Programming for Process Planning  61
   4.1  Introduction  61
   4.2  Flexible Process Planning  62
        4.2.1  Flexible Process Plans  62
        4.2.2  Representation of Flexible Process Plans  64
        4.2.3  Mathematical Model of Flexible Process Planning  64
   4.3  Brief Review of GP  67
   4.4  GP for Flexible Process Planning  68
        4.4.1  The Flowchart of Proposed Method  68
        4.4.2  Convert Network to Tree, Encoding, and Decoding  69
        4.4.3  Initial Population and Fitness Evaluation  71
        4.4.4  GP Operators  72
   4.5  Case Studies and Discussion  74
        4.5.1  Implementation and Testing  74
        4.5.2  Comparison with GA  75
   4.6  Conclusion  78
   References  78

5  An Efficient Modified Particle Swarm Optimization Algorithm for Process Planning  81
   5.1  Introduction  81
   5.2  Related Work  82
        5.2.1  Process Planning  82
        5.2.2  PSO with Its Applications  84
   5.3  Problem Formulation  84
        5.3.1  Flexible Process Plans  84
        5.3.2  Mathematical Model of Process Planning Problem  85
   5.4  Modified PSO for Process Planning  86
        5.4.1  Modified PSO Model  86
        5.4.2  Modified PSO for Process Planning  88
   5.5  Experimental Studies and Discussions  94
        5.5.1  Case Studies and Results  94
        5.5.2  Discussion  102
   5.6  Conclusions and Future Research Studies  104
   References  104

6  A Hybrid Algorithm for Job Shop Scheduling Problem  107
   6.1  Introduction  107
   6.2  Problem Formulation  110
   6.3  Proposed Hybrid Algorithm for JSP  112
        6.3.1  Description of the Proposed Hybrid Algorithm  112
        6.3.2  Encoding and Decoding Scheme  114
        6.3.3  Updating Strategy  116
        6.3.4  Local Search of the Particle  116
   6.4  The Neighborhood Structure Evaluation Method Based on Logistic Model  117
        6.4.1  The Logistic Model  117
        6.4.2  Defining Neighborhood Structures  118
        6.4.3  The Evaluation Method Based on Logistic Model  119
   6.5  Experiments and Discussion  121
        6.5.1  The Search Ability of VNS  121
        6.5.2  Benchmark Experiments  122
        6.5.3  Convergence Analysis of HPV  124
        6.5.4  Discussion  128
   6.6  Conclusions and Future Works  128
   References  129

7  An Effective Genetic Algorithm for FJSP  133
   7.1  Introduction  133
   7.2  Problem Formulation  134
   7.3  Literature Review  135
   7.4  An Effective GA for FJSP  137
        7.4.1  Representation  137
        7.4.2  Decoding the MSOS Chromosome to a Feasible and Active Schedule  139
        7.4.3  Initial Population  140
        7.4.4  Selection Operator  143
        7.4.5  Crossover Operator  143
        7.4.6  Mutation Operator  145
        7.4.7  Framework of the Effective GA  146
   7.5  Computational Results  147
   7.6  Conclusions and Future Study  149
   References  153

8  An Effective Collaborative Evolutionary Algorithm for FJSP  157
   8.1  Introduction  157
   8.2  Problem Formulation  158
   8.3  Proposed MSCEA for FJSP  158
        8.3.1  The Optimization Strategy of MSCEA  158
        8.3.2  Encoding  159
        8.3.3  Initial Population and Fitness Evaluation  160
        8.3.4  Genetic Operators  160
        8.3.5  Terminate Criteria  161
        8.3.6  Framework of MSCEA  161
   8.4  Experimental Studies  163
   8.5  Conclusions  163
   References  165

9  Mathematical Modeling and Evolutionary Algorithm-Based Approach for IPPS  167
   9.1  Introduction  167
   9.2  Problem Formulation and Mathematical Modeling  168
        9.2.1  Problem Formulation  168
        9.2.2  Mathematical Modeling  169
   9.3  Evolutionary Algorithm-Based Approach for IPPS  173
        9.3.1  Representation  173
        9.3.2  Initialization and Fitness Evaluation  174
        9.3.3  Genetic Operators  174
   9.4  Experimental Studies and Discussions  178
        9.4.1  Example Problems and Experimental Results  178
        9.4.2  Discussions  187
   9.5  Conclusion  187
   References  188

10  An Agent-Based Approach for IPPS  191
    10.1  Literature Survey  191
    10.2  Problem Formulation  192
    10.3  Proposed Agent-Based Approach for IPPS  195
          10.3.1  MAS Architecture  195
          10.3.2  Agents Description  195
    10.4  Implementation and Experimental Studies  200
          10.4.1  System Implementation  200
          10.4.2  Experimental Results and Discussion  202
          10.4.3  Discussion  205
    10.5  Conclusion  205
    References  207

11  A Modified Genetic Algorithm Based Approach for IPPS  209
    11.1  Integration Model of IPPS  209
    11.2  Representations for Process Plans and Schedules  210
    11.3  Modified GA-Based Optimization Approach  212
          11.3.1  Flowchart of the Proposed Approach  212
          11.3.2  Genetic Components for Process Planning  213
          11.3.3  Genetic Components for Scheduling  217
    11.4  Experimental Studies and Discussion  223
          11.4.1  Test Problems and Experimental Results  223
          11.4.2  Comparison with Hierarchical Approach  231
    11.5  Discussion  232
    11.6  Conclusion  232
    References  232

12  An Effective Hybrid Algorithm for IPPS  235
    12.1  Hybrid Algorithm Model  235
          12.1.1  Traditional Genetic Algorithm  235
          12.1.2  Local Search Strategy  235
          12.1.3  Hybrid Algorithm Model  236
    12.2  Hybrid Algorithm for IPPS  237
          12.2.1  Encoding and Decoding  237
          12.2.2  Initial Population and Fitness Evaluation  239
          12.2.3  Genetic Operators for IPPS  239
    12.3  Experimental Studies and Discussions  243
          12.3.1  Test Problems and Experimental Results  243
    12.4  Discussion  245
    12.5  Conclusion  249
    References  249

13  An Effective Hybrid Particle Swarm Optimization Algorithm for Multi-objective FJSP  251
    13.1  Introduction  251
    13.2  Problem Formulation  252
    13.3  Particle Swarm Optimization for FJSP  255
          13.3.1  Traditional PSO Algorithm  255
          13.3.2  Tabu Search Strategy  256
          13.3.3  Hybrid PSO Algorithm Model  257
          13.3.4  Fitness Function  258
          13.3.5  Encoding Scheme  259
          13.3.6  Information Exchange  261
    13.4  Experimental Results  262
          13.4.1  Problem 4 × 5  262
          13.4.2  Problem 8 × 8  264
          13.4.3  Problem 10 × 10  264
          13.4.4  Problem 15 × 10  267
    13.5  Conclusions and Future Research  276
    References  276

14  A Multi-objective GA Based on Immune and Entropy Principle for FJSP  279
    14.1  Introduction  279
    14.2  Multi-objective Flexible Job Shop Scheduling Problem  281
    14.3  Basic Concepts of Multi-objective Optimization  283
    14.4  Handling MOFJSP with MOGA Based on Immune and Entropy Principle  283
          14.4.1  Fitness Assignment Scheme  283
          14.4.2  Immune and Entropy Principle  284
          14.4.3  Initialization  286
          14.4.4  Encoding and Decoding Scheme  286
          14.4.5  Selection Operator  287
          14.4.6  Crossover Operator  288
          14.4.7  Mutation Operator  289
          14.4.8  Main Algorithm  290
    14.5  Experimental Results  290
    14.6  Conclusions  294
    References  300

15  An Effective Genetic Algorithm for Multi-objective IPPS with Various Flexibilities in Process Planning  301
    15.1  Introduction  301
    15.2  Multi-objective IPPS Description  302
          15.2.1  IPPS Description  302
          15.2.2  Multi-objective Optimization  304
    15.3  Proposed Genetic Algorithm for Multi-objective IPPS  305
          15.3.1  Workflow of the Proposed Algorithm  305
          15.3.2  Genetic Components for Process Planning  307
          15.3.3  Genetic Components for Scheduling  310
          15.3.4  Pareto Set Update Scheme  311
    15.4  Experimental Results and Discussions  312
          15.4.1  Experiment 1  312
          15.4.2  Experiment 2  315
          15.4.3  Discussions  316
    15.5  Conclusion and Future Works  321
    References  321

16  Application of Game Theory-Based Hybrid Algorithm for Multi-objective IPPS  323
    16.1  Introduction  323
    16.2  Problem Formulation  325
    16.3  Game Theory Model of Multi-objective IPPS  328
          16.3.1  Game Theory Model of Multi-objective Optimization Problem  328
          16.3.2  Nash Equilibrium and MOP  329
          16.3.3  Non-cooperative Game Theory for Multi-objective IPPS Problem  329
    16.4  Applications of the Proposed Algorithm on Multi-objective IPPS  330
          16.4.1  Workflow of the Proposed Algorithm  330
          16.4.2  Nash Equilibrium Solutions Algorithm for Multi-objective IPPS  331
    16.5  Experimental Results  335
          16.5.1  Problem 1  335
          16.5.2  Problem 2  336
          16.5.3  Conclusions  341
    References  342

17  A Hybrid Intelligent Algorithm and Rescheduling Technique for Dynamic JSP  345
    17.1  Introduction  345
    17.2  Statement of Dynamic JSPs  347
          17.2.1  The Proposed Mathematical Model  347
          17.2.2  The Reschedule Strategy  350
          17.2.3  Generate Real-Time Events  351
    17.3  The Proposed Rescheduling Technique for Dynamic JSPs  353
          17.3.1  The Rescheduling Technique in General  353
          17.3.2  The Hybrid GA and TS for Dynamic JSP  355
    17.4  Experimental Environments and Results  360
          17.4.1  Experimental Environments  361
          17.4.2  Results and Discussion  362
    17.5  Conclusions and Future Works  372
    References  374

18  A Hybrid Genetic Algorithm and Tabu Search for Multi-objective Dynamic JSP  377
    18.1  Introduction  377
    18.2  Literature Review  378
    18.3  The Multi-objective Dynamic Job Shop Scheduling  379
    18.4  The Proposed Method for Dynamic JSP  381
          18.4.1  The Flow Chart of the Proposed Method  381
          18.4.2  Simulator  383
          18.4.3  The Hybrid GA and TS for Dynamic JSP  384
    18.5  Experimental Design and Results  387
          18.5.1  Experimental Design  387
          18.5.2  Results and Discussions  388
    18.6  Conclusions and Future Researches  400
    References  401

19  GEP-Based Reactive Scheduling Policies for Dynamic FJSP with Job Release Dates  405
    19.1  Introduction  405
    19.2  Problem Description  407
    19.3  Heuristic for DFJSSP  408
    19.4  GEP-Based Reactive Scheduling Policies Constructing Approach
          19.4.1  Framework of GEP-Based Reactive Scheduling Policies Constructing Approach
          19.4.2  Define Element Sets
          19.4.3  Chromosome Representation
          19.4.4  Genetic Operators
    19.5  Experiments and Results
          19.5.1  GEP Parameter Settings
          19.5.2  Design of the Experiments
          19.5.3  Analysis of the Results
    19.6  Conclusion and Future Work
    References
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
409 409 411 413 414 414 414 416 426 427
20 A Hybrid Genetic Algorithm with Variable Neighborhood Search for Dynamic IPPS . . . . . . . . . . . . . . . . . . . . . . . . . . 20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.3 Dynamic IPPS Problem Formulation . . . . . . . . . . . . . . 20.3.1 Problem definition . . . . . . . . . . . . . . . . . . . . . 20.3.2 Framework for DIPPS . . . . . . . . . . . . . . . . . . 20.3.3 Dynamic IPPS Model . . . . . . . . . . . . . . . . . . . 20.4 Proposed Hybrid GAVNS for Dynamic IPPS . . . . . . . . 20.4.1 Flowchart of Hybrid GAVNS . . . . . . . . . . . . . 20.4.2 GA for IPPS . . . . . . . . . . . . . . . . . . . . . . . . . 20.4.3 VNS for Local Search . . . . . . . . . . . . . . . . . . 20.5 Experiments and Discussions . . . . . . . . . . . . . . . . . . . . 20.5.1 Experiment 1 . . . . . . . . . . . . . . . . . . . . . . . . . 20.5.2 Experiment 2 . . . . . . . . . . . . . . . . . . . . . . . . . 20.5.3 Experiment 3 . . . . . . . . . . . . . . . . . . . . . . . . . 20.5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.6 Conclusion and Future Works . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
429 429 430 433 433 433 435 438 438 438 440 442 443 444 448 451 451 452
21 IPPS Simulation Prototype System . . . 21.1 Application Background Analysis 21.2 System Architecture . . . . . . . . . . 21.3 Implementation and Application . 21.4 Conclusion . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
455 455 456 457 461 462
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . . 409
Chapter 1
Introduction for Integrated Process Planning and Scheduling
1.1 Process Planning

Process planning determines how products are to be manufactured, so it is one of the most important tasks in production preparation and the basis of all production activities. Both industry and academia have devoted considerable effort to its study. The many definitions of process planning can be summarized as follows [1]:

(1) Process planning is a bridge between product design and manufacturing that transforms product design data into manufacturing information [2]. In other words, it is the activity that connects the design function with the manufacturing function and specifies the processing strategies and steps for product parts [3].

(2) Process planning is a complex process involving many tasks, which requires the process planner to have deep knowledge of both product design and manufacturing. Ramesh [4] considers these tasks to include part coding, feature recognition, mapping between machining methods and features, internal and external sequencing, clamping planning, in-process workpiece modeling, selection of machining equipment, tools, and the corresponding parameters, process optimization, cost assessment, tolerance analysis, test planning, path planning, NC programming, etc.

(3) Process planning is the activity of specifying the detailed operations that convert raw materials into final parts, or of preparing detailed documents for part machining and assembly [5].

(4) Process planning systematically identifies detailed manufacturing processes that meet design specifications within the available resources and capabilities [6].

The above definitions describe process planning from different engineering perspectives. They can be summarized as follows: process planning is an activity that connects product design and manufacturing, combines manufacturing
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_1
process knowledge with a specific design under the limitations of workshop or factory manufacturing resources, and produces specific operation instructions.

Traditionally, process planning has been done manually and empirically. This approach suffers from several problems: a shortage of experienced personnel, low efficiency in specifying process routes, inconsistent process routes caused by differences in the experience and judgment of process planners, slow reaction to changes in the actual manufacturing environment, and so on. To alleviate these problems, Computer-Aided Process Planning (CAPP) emerged in the mid-1960s [7, 8]. CAPP uses computers to formulate the machining processes of parts, i.e., to determine how raw materials are turned into the parts required by the design drawings. Given the geometric information (shape, dimensions, etc.) and process information (material, heat treatment, batch size, etc.) of a part, the computer automatically generates the part's process route and process content [9].

Research on CAPP began in the 1960s. CIRP established the first CAPP working group at its annual conference in 1966, which marked the beginning of CAPP research. In 1969, Norway introduced the world's first CAPP system, AUTOPROS [10], and in 1973 the commercial AUTOPROS system was officially launched. A milestone in the history of CAPP is the Automated Process Planning system introduced in 1976 by CAM-I (Computer-Aided Manufacturing-International). China began to study CAPP systems in the early 1980s; in 1983, Tongji University developed the first CAPP system in China, TOJICAP [11]. After years of research, CAPP has made great progress, and Zhang [12] surveyed 187 CAPP systems.
However, due to the complexity of process planning and the particularities of products and manufacturing environments, CAPP has been difficult to apply and generalize. In the field of manufacturing automation, CAPP is the last part to be developed. Even today, when CAD, CAE, CAM, MRPII, ERP, MES, and even e-commerce are mature and widely used, some key problems of CAPP remain unsolved and have become a key bottleneck of the manufacturing industry [13]. Since its birth, CAPP has been both a research hotspot and a difficulty in the field of advanced manufacturing technology. Since the 1960s, CAPP has made great progress at both the technical level and in practical application. Its main research focuses on the following aspects:

(1) Integration of CAPP with other systems [14]

Information integration is one of the development directions of advanced manufacturing technology and a technical means to shorten the product development cycle and respond quickly to the market. Therefore, the integration of CAPP with other systems is a hot research topic. As the system connecting design and manufacturing, CAPP should integrate not only with CAD and CAM, but also with Production Planning and Scheduling (PPS), ERP (Enterprise Resource Planning), PDM (Product Data Management), CATD (Computer-Aided Tolerance Design), and other systems.
(2) Optimization and selection of process routes

The traditional process planning system produces a single, fixed process route for a part without considering the dynamic information of the shop, which greatly reduces the feasibility and flexibility of the route in actual production. To adapt to the shop environment, the process planning system must produce a large number of flexible process routes for each part and optimize and select among them according to the production requirements. Therefore, the optimization and selection of process routes is one of the main research directions of process planning. It is an NP-complete problem, and it is difficult to solve using only traditional gradient-descent, graph-theoretic, or simulation methods [15]. Many researchers have therefore introduced artificial intelligence methods to solve the process route optimization problem. The main approaches include multi-agent systems [16], genetic algorithms [17–20], genetic programming [21], tabu search [22, 23], simulated annealing [24], particle swarm optimization [25], ant colony algorithms [26, 27], clonal algorithms [28], immune algorithms [29], neural networks [30], etc.
1.2 Shop Scheduling

1.2.1 Problem Statement

The shop scheduling problem can be generally described as follows: n jobs are to be processed on m machines. A job contains k operations, each of which can be processed on several machines and must be processed in some feasible sequence. Each machine can process several operations, and the set of operations may differ between machines. The goal of shop scheduling is to assign the jobs to the machines and to determine the processing order and starting time of the jobs so that the constraints are satisfied and some performance indicator is optimized [31].

In general, scheduling problems can be represented as "n/m/A/B", where n is the number of jobs, m is the number of machines, A represents the pattern in which jobs flow through the machines, and B represents the performance indicator. There are many ways to classify scheduling problems [32, 33]. Common flow patterns include [34]:

(1) Single Machine Scheduling Problem (SMP)
SMP is the most basic shop scheduling problem: each job consists of a single operation, the processing system has only one machine, and all jobs are processed on that machine.
(2) Parallel Machine Scheduling Problem (PMP)
The processing system has a set of machines with identical functions. Each job consists of a single operation, and any of the machines can be selected to process it.

(3) Job Shop Scheduling Problem (JSP)
The processing system has a set of machines with different functions. The processing order of each job's operations is given, and the process routes of the jobs may all differ. The task is to determine the processing order of the jobs on each machine and the starting time of each operation while satisfying the precedence constraints.

(4) Flow Shop Scheduling Problem (FSP)
The processing system has a set of machines with different functions. The processing order of each job's operations is given, and the process route of all jobs is the same. The task is to determine the processing order of the jobs on each machine and the starting times while satisfying the precedence constraints.

(5) Open Shop Scheduling Problem (OSP)
The processing system has a set of machines with different functions. The operations of each job are given, but their order is arbitrary. The task is to determine the processing order of the jobs on each machine and the starting times while satisfying the constraints.

The performance indicator B can take various forms, which fall roughly into the following categories:
(1) Indicators based on completion times, such as Cmax (maximum completion time, or makespan) and Fmax (maximum flow time);
(2) Indicators based on due dates, such as Lmax (maximum lateness) and Tmax (maximum tardiness);
(3) Multi-objective composite indicators, such as the maximum completion time combined with the maximum tardiness.
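These indicators are simple functions of the jobs' completion times. As a minimal illustration (the completion times, arrival times, and due dates below are hypothetical):

```python
# Hypothetical schedule result: job -> (completion time C, arrival time r, due date d)
jobs = {"J1": (10, 0, 12), "J2": (14, 2, 9), "J3": (9, 1, 15)}

cmax = max(c for c, r, d in jobs.values())              # maximum completion time
fmax = max(c - r for c, r, d in jobs.values())          # maximum flow time
lmax = max(c - d for c, r, d in jobs.values())          # maximum lateness (can be negative)
tmax = max(max(c - d, 0) for c, r, d in jobs.values())  # maximum tardiness

print(cmax, fmax, lmax, tmax)  # 14 12 5 5
```

Here J2 drives all the due-date indicators: it finishes 5 time units late, while the other two jobs are early.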
1.2.2 Problem Properties

The objects and objectives of shop scheduling give the problem the following characteristics [35]:

(1) Multi-objective
The optimization objectives of shop scheduling differ across enterprises and production environments. The most common requirement is the shortest production cycle; in some environments, the due dates of certain products must be guaranteed so that goods can be delivered as quickly as possible; to reduce costs, the utilization of production equipment and the work-in-process inventory must also be considered. In the actual production process, not only one
objective is considered; multiple objectives must be considered simultaneously. Since the objectives may conflict with one another, they must be weighed comprehensively when formulating a schedule.

(2) Multi-constraint
The research object and objectives of shop scheduling make it a multi-constraint problem. During scheduling, the constraints of the jobs themselves must be taken into account, including process route constraints and the machine constraints of each operation. Resource constraints must also be considered: results are feasible only if they respect the resource constraints of the shop. Meanwhile, shop scheduling is further constrained by operators, transport vehicles, cutting tools, and other auxiliary production resources.

(3) Dynamic randomness
The processing environment of a manufacturing system changes constantly, and a variety of random events occur, such as variable processing times, machine failures, shortages of raw materials, and the insertion of rush orders. The shop scheduling process is therefore a dynamic, stochastic process.

(4) Discreteness
A general manufacturing system is a typical discrete system, so shop scheduling is a discrete optimization problem. The starting times of jobs, the arrival of tasks, the addition and failure of equipment, and changes of orders are all discrete events. The shop scheduling problem can thus be studied with mathematical programming, discrete-event modeling and simulation, sequencing theory, and other methods.

(5) Computational complexity
Shop scheduling is a combinatorial optimization problem constrained by several equations and inequalities, and it has been proved to be NP-complete. As the scheduling scale increases, the number of feasible solutions grows exponentially. These characteristics make shop scheduling a very complex problem.
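The exponential growth is easy to make concrete: in a job shop, the processing order on each machine is one of the n! permutations of the n jobs, so there are up to (n!)^m candidate sequencings, most of them infeasible or dominated. A quick sketch:

```python
from math import factorial

# Each of the m machines fixes one of n! processing orders of the n jobs,
# so a job shop has up to (n!)**m candidate sequencings.
def candidate_schedules(n, m):
    return factorial(n) ** m

for n, m in [(3, 3), (5, 5), (10, 10)]:
    print(f"{n} jobs x {m} machines: {candidate_schedules(n, m):.3e} sequencings")
```

Already at 5 jobs and 5 machines the bound exceeds 10^10, which is why exhaustive search is hopeless beyond toy instances.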
This is why, over the years, research on this problem has attracted a large number of researchers from different fields. Many solution methods have been proposed to meet the needs of practical applications. However, these achievements still cannot fully meet those needs, so a more comprehensive study of the nature of the problem and of the relevant solution methods is required, in order to propose more effective theories, methods, and technologies for the practical needs of enterprises.
1.2.3 Literature Review

Research on the shop scheduling problem has developed essentially in step with operational research. As early as 1954, Johnson proposed an effective
optimization algorithm for the n/2/F/Cmax problem and some special n/3/F/Cmax problems, which kicked off research on scheduling problems [36]. In the 1960s, researchers aimed to design exact methods with polynomial time complexity to find optimal solutions to shop scheduling problems, including integer programming [37], dynamic programming, branch and bound [38], and backtracking [39]. But these algorithms can solve only small instances; experiments show that even today's mainframe computers cannot solve larger ones in an acceptable time. In the 1970s, researchers studied the computational complexity of scheduling problems in depth, proving that the vast majority of them are NP-complete [40, 41]. Instead of seeking optimal solutions with exact algorithms, researchers turned to approximate algorithms that find satisfactory solutions in an acceptable time, and heuristic methods were proposed. Panwalkar [42] summarized 113 scheduling rules and divided them into three categories: simple rules, compound rules, and heuristic rules. Since the 1980s, with the intersection of computer technology, life science, and engineering science, meta-heuristic algorithms developed by imitating the mechanisms of natural phenomena have been applied to scheduling problems and have shown potential for solving large-scale instances; they include genetic algorithms, tabu search, constraint satisfaction algorithms, particle swarm optimization, and ant colony algorithms. Researchers continue to improve these algorithms and to propose novel ones, and their emergence and development have greatly promoted research on scheduling problems.
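Johnson's 1954 algorithm for the two-machine flow shop is simple enough to state in a few lines: jobs whose machine-1 time is shorter than their machine-2 time come first, in ascending order of machine-1 time; the remaining jobs follow in descending order of machine-2 time. A sketch with made-up processing times:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop (n/2/F/Cmax).
    jobs: dict name -> (time on machine 1, time on machine 2)."""
    first = sorted((n for n, (a, b) in jobs.items() if a < b),
                   key=lambda n: jobs[n][0])               # ascending machine-1 time
    last = sorted((n for n, (a, b) in jobs.items() if a >= b),
                  key=lambda n: jobs[n][1], reverse=True)  # descending machine-2 time
    return first + last

def flow_shop_makespan(seq, jobs):
    """Cmax of a permutation schedule on two machines."""
    t1 = t2 = 0
    for n in seq:
        a, b = jobs[n]
        t1 += a                # machine 1 processes jobs back to back
        t2 = max(t2, t1) + b   # machine 2 waits for machine 1 and for itself
    return t2

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
seq = johnson_sequence(jobs)
print(seq, flow_shop_makespan(seq, jobs))  # ['J3', 'J1', 'J4', 'J5', 'J2'] 24
```

For this instance the makespan of 24 matches the lower bound (total machine-1 work of 22 plus the final job's 2 units on machine 2), illustrating why Johnson's rule is optimal for n/2/F/Cmax.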
1.3 Integrated Process Planning and Shop Scheduling

Research on the integration of process planning and shop scheduling began in the mid-1980s [43–45]. Chryssolouris and Chan [46, 47] first put forward the concept of integrating process planning and shop scheduling. Beckendorff [48] then used alternative process routes to add flexibility to the system. Khoshnevis [49] introduced the idea of dynamic feedback into the integration of process planning and shop scheduling. The integrated models proposed by Zhang [50] and Larsen [51] not only inherit the ideas of alternative process routes and dynamic feedback, but also embody, to a certain extent, the idea of hierarchical planning. In recent years, researchers at home and abroad have studied the integration of process planning and shop scheduling extensively and have put forward various integration models and research methods [43, 52]. The integration models proposed so far can be roughly grouped into three categories [53]: non-linear process planning, closed-loop process planning, and distributed process planning. Their common characteristic is to exploit the integration of process planning and shop scheduling,
and to give full play to the flexibility of the process planning system, improving it so as to increase the flexibility of the whole integrated system.

(1) Non-linear process planning
The Non-Linear Process Planning (NLPP) model is based on a static manufacturing environment. It generates all possible process routes before each part enters the shop. According to the optimization objective of process planning, each alternative route is given a certain priority, and the shop scheduling system then chooses the optimal route according to the specific resources and shop status. Most of the existing literature on the integration of process planning and shop scheduling adopts this integration model. Its advantage is that generating all possible process routes expands the optimization space of shop scheduling and helps to find the optimal route. Its disadvantage is that generating all possible routes increases the storage requirements of the system, and searching over all of them increases the computation time, so satisfactory solutions may not be found in an acceptable time. Its specific forms are Non-linear Process Planning (NPP), Multi-Process Planning (MPP), and Alternative Process Planning (APP); these forms differ only in description, not in nature.

Non-linear process planning: when planning, all process routes of the part are generated and stored in the database, and the routes are evaluated and ranked by priority according to the optimization objective. The route with the highest priority is selected first and checked against the current resource status; if it is not suitable, the route with the next priority is selected, and so on, until a satisfactory process route is found.
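This priority-driven selection loop can be sketched in a few lines; the route data, priorities, and feasibility check below are purely illustrative:

```python
def select_route(routes, is_feasible):
    """Try alternative process routes in descending priority until one fits
    the current resource status; return None if no route is feasible."""
    for route in sorted(routes, key=lambda r: r["priority"], reverse=True):
        if is_feasible(route):
            return route
    return None

# Hypothetical alternative routes for one part.
routes = [
    {"name": "R1", "machines": {"M1", "M3"}, "priority": 3},
    {"name": "R2", "machines": {"M2", "M4"}, "priority": 2},
    {"name": "R3", "machines": {"M2", "M5"}, "priority": 1},
]
available = {"M2", "M4", "M5"}   # current shop status: M1 and M3 are down
best = select_route(routes, lambda r: r["machines"] <= available)
print(best["name"])  # R1 needs an unavailable machine, so R2 is chosen
```

The real feasibility check would of course query live shop-floor data (machine availability, load, tooling) rather than a static set, but the control flow is the same.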
Literature [54] proposed an integrated model of non-linear process planning in FMS. Lee [55] proposed a non-linear process planning model based on a genetic algorithm, which can greatly reduce scheduling time and product delay. Literature [54] also proposed an FMS-oriented integration method for flexible process planning and production scheduling based on a unified resource database under the idea of concurrent engineering. Literature [56] proposed a non-linear process planning method based on Petri nets, which has been widely used in flexible production scheduling systems. Jablonski [57] introduced the concept of a flexible integrated system comprising three subsystems (a feature recognition system, a static process planning system, and a dynamic resource allocation system) and described some of its applications.

Multi-process planning [58]: it can generally be expressed by a tree structure called the process route tree. Nodes in the tree represent machining operations, edges represent the sequence relationships between operations, a path from the root to a leaf represents a complete process route, and a path from any non-root node to a leaf represents a sub-route. Processing a job is then equivalent to searching the process route tree. Literature [59, 60] used a co-evolutionary algorithm to realize multi-process route decision-making and the integration of process planning and production scheduling. Literature [60] proposed a scheduling system supporting multiple process routes based on a negotiation mechanism. Literature [61] designed a
system based on process planning features, which derives process routes from product shape-design features, machine tool manufacturing capabilities, manufacturing resource capability information, and process knowledge rules, allocates each process route according to machine load information, and evaluates the rationality of the various scheduling schemes. Literature [62, 63] formulated a mixed integer programming model of the shop scheduling problem based on multiple process routes and proposed two solution methods. Literature [64] proposed an integration mechanism for process planning and shop scheduling based on multiple process routes.

Alternative process planning: the basic idea is to produce a variety of alternative process routes subject to the constraints on the part's process route, so as to improve the flexibility of process planning and facilitate production scheduling. By analyzing the objective of integrating production scheduling, Literature [65] pointed out that these problems can be solved by reducing the optional factors of the alternative process routes and by optimizing and linearizing the alternative process planning decision-making process. Literature [66] proposed an optimization algorithm for the shop scheduling problem based on alternative process routes in a JIT production environment. Literature [67] used the branch and bound method to solve the integration problem of process planning and shop scheduling based on alternative process plans. Literature [68] proposed two meta-heuristic algorithms (a genetic algorithm and a tabu search algorithm) for the same problem, and References [69, 70] adopted simulated annealing and an improved genetic algorithm, respectively. NLPP is the most basic model for the integration of process planning and shop scheduling.
Because the integration idea of this model is simple and easy to operate, existing research on integration models has mainly focused on it.

(2) Closed-loop process planning
The Closed-Loop Process Planning (CLPP) model generates process routes according to the shop resource information fed back from the scheduling system, so it can better account for the state of shop resources, and the generated routes are feasible with respect to the current production environment. Real-time state data is the key to CLPP: dynamic process planning is carried out according to real-time feedback. Its specific forms are Closed-Loop Process Planning, Dynamic Process Planning, and On-Line Process Planning (OLPP).

Closed-loop process planning: a dynamic process planning system for the shop [71] that generates optimized process routes in real time through dynamic feedback of shop resource information. The shop scheduling system feeds the currently available equipment status back to the draft process route so that it can be changed and adjusted in time, improving the feasibility of the process plan. Process planning, operation planning, and the production scheduling system together form a closed loop for automatic adjustment of process routes. Such a dynamic system can obviously improve the real-time performance, guidance, and operability of the process planning system. Literature [72, 73] proposed a dynamic
optimization mechanism for the integration of process planning and shop scheduling in a batch manufacturing environment.

Dynamic process planning: process routes are generated on the basis of real-time dynamic scheduling information fed back from the shop scheduling system, so that the generated routes better reflect the real-time state of shop resources, improving their feasibility. Literature [74] studied a dynamic CAPP system and put forward its network architecture, functional modules, and flowchart; the system responds to the changing shop environment in time and adjusts process routes accordingly, providing correct guidance for production practice. Literature [75] studied an integrated model of a dynamic process planning system and realized the dynamic selection of machining resources with a BP neural network, producing process routes that meet the shop's production conditions. Literature [76] divided dynamic process planning into two functional stages, a static planning stage and a dynamic planning stage, and focused on the former. Literature [71] pointed out the problems of traditional process planning systems and introduced a dynamic process planning system to solve them; the routes it generates adapt better to the actual situation of the shop and the needs of scheduling. Literature [77] combined the functions of process planning and shop scheduling, using a priority allocation method combined with a parallel allocation algorithm, in which a time-window plan controls the allocation quantity at each stage. The dynamic process planning system of Literature [78] can not only produce the initial process route, but also revise it according to feedback from the integrated system.
The process module, decision-making algorithm, and control strategy of that system were also studied. Literature [79] proposed and constructed a non-cooperative-game decision-making model for multi-task process route optimization.

On-line process planning: integration is achieved by considering the specific real-time situation of the shop, which helps the process route adapt to the actual production state. The most important elements of this scheme are the real-time information of the shop and the dynamic feedback of scheduling. Literature [74] discussed the importance of integrating process planning and the shop scheduling system, proposed a concrete method for on-line process planning, and achieved good results in simulation. Literature [80] proposed a scheduling system that determines the process route through complete scheduling simulation on the selected machine tools; it designs on-line process routes considering the specific situation of equipment utilization, failures, and processing cost, and computes a scheduling coefficient for every operation from the purchase cost and usage cost of the machine tools, the number of operations, and the alternative machines, linking the scheduling and process planning systems.

CLPP makes use of the feedback mechanism in the integration principle and can better realize the integration of process planning and shop scheduling. However, since existing CLPP only provides interfaces for information and functions, the coupling of information and functions is not deep enough. How to improve
the coupling depth of CLPP information and function is a problem to be solved. Moreover, CLPP needs to collect real-time information about workshop resources; how to represent, transmit, and process this real-time information is also a problem to be solved.
(3) Distributed process planning [81]
Distributed Process Planning (DPP) appears in the literature in several forms: distributed process planning, concurrent process planning, and collaborative process planning.
Distributed process planning: it is also known as Just-in-Time Process Planning (JTPP) [45]. In this model, process planning and scheduling are completed synchronously and are divided into two stages. The first is the preliminary planning stage, in which the features of the parts and the relationships between them are analyzed; according to the part feature information, the initial processing methods are determined, and the required processing resources (such as raw materials and processing equipment) are preliminarily estimated. The second is the detailed planning stage, whose main task is to match the information of shop processing equipment with the production tasks and to generate a complete process route and scheduling plan. Literature [82] systematically elaborated the conceptual model of parallel distributed integration and adopted a multilevel distributed hierarchical structure, different from the traditional process planning system structure, so as to better realize the integration of process planning and scheduling and effectively solve existing production problems. The hierarchical planning idea and the integrated model proposed in literature [83] were specific forms of distributed process planning. Literature [45] described just-in-time process planning in more detail and gave a model frame diagram for it.
Literature [84] briefly mentioned the model of distributed process planning and pointed out that it is a decentralized integration method among multilevel functional modules. Literature [85] proposed a two-level hierarchical model to integrate process planning and workshop scheduling. A distributed process planning method was proposed in literature [86], and a Multi-Agent System (MAS) was adopted to construct the framework of the proposed method.
Parallel process planning: it uses the idea of concurrent engineering to divide process planning and production scheduling into several levels. The related parts of process planning and production scheduling are integrated at each level, so as to achieve mutual cooperation and joint decision-making. Literature [87] performed process planning based on concurrent engineering: the integration of a process planning system using staged process design with a production scheduling system based on an extended-time rescheduling strategy was proposed, and the effectiveness and feasibility of the system were proved by examples. Literature [88] proposed and verified a parallel integration model of process planning and production scheduling in a distributed virtual manufacturing environment. Literature [89] proposed a framework for a parallel integrated process planning system based on holons, which divided process planning into three stages: the preliminary design stage, the decision-making stage, and
the detailed design stage of process planning. It integrated the CAD, process planning, and production scheduling systems with concurrent engineering thought. An Integrated Process Planning/Production Scheduling system (IP3S) based on concurrent engineering was proposed in literature [90]. In literature [91], an integrated process planning and workshop scheduling system was designed and applied to the holonic manufacturing system.
Collaborative process planning: the resource condition of the workshop is taken into consideration together with the process planning, so that the process plan and the scheduling plan can be developed cooperatively. Literature [92] proposed an integrated model of a collaborative process planning and workshop scheduling system, including three modules: a shop resource evaluation module, a scheduling evaluation module, and a process planning evaluation module; the three modules are then integrated using a collaborative mechanism. A collaborative process planning system framework based on a real-time monitoring system was proposed in literature [93], and the design and use of the functional modules in the integrated system were introduced in detail.
The basic idea of DPP is hierarchical planning. This model considers the integration of the process planning system and the shop scheduling system from an early stage, and process planning and scheduling are always carried out in parallel; both sides interact, coordinate, and cooperate throughout the integrated decision-making process. However, its ability to optimize the process route and the schedule is limited, so this model can be combined with other models to improve the overall optimization capability.
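The two-stage (preliminary/detailed) decomposition underlying DPP can be sketched in a few lines of code. Everything below — the function names, the feature-to-method map, and the resource table — is an invented illustration of the idea, not taken from any of the systems surveyed above.

```python
# Hypothetical sketch of two-stage hierarchical planning: a preliminary stage
# reasons only about part features, and a detailed stage matches the candidate
# methods against the shop resources that are currently available.

def preliminary_planning(features):
    """Stage 1: derive candidate processing methods from part features alone."""
    method_map = {"hole": ["drilling", "boring"], "slot": ["milling"]}
    return {f: method_map.get(f, ["machining"]) for f in features}

def detailed_planning(candidates, available_resources):
    """Stage 2: pick, per feature, a method that a shop resource can execute now."""
    route = []
    for feature, methods in candidates.items():
        for m in methods:
            if m in available_resources:          # real-time shop feedback
                route.append((feature, m, available_resources[m]))
                break
    return route

candidates = preliminary_planning(["hole", "slot"])
route = detailed_planning(candidates, {"drilling": "M1", "milling": "M3"})
print(route)  # [('hole', 'drilling', 'M1'), ('slot', 'milling', 'M3')]
```

The point of the split is that Stage 1 needs no shop-floor data at all, so it can run off-line, while only the lightweight Stage 2 has to track the changing resource situation.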
References

1. Xu HZ, Li DP (2008) Review of process planning research with perspectives. Manufac Auto 30(3):1–7
2. Mahmood F (1998) Computer aided process planning for Wire Electrical Discharge Machining (WEDM). PhD Thesis, University of Pittsburgh
3. Pande SS, Walvekar MG (1989) PC-CAPP: a computer assisted process planning system for prismatic components. Comput Aided Eng J, 133–138
4. Ramesh MM (2002) Feature based methods for machining process planning of automotive power-train components. PhD Thesis, University of Michigan
5. Chang TC, Wysk RA (1985) An introduction to automated process planning systems. Prentice Hall, New Jersey
6. Deb S, Ghosh K, Paul S (2006) A neural network based methodology for machining operations selection in computer aided process planning for rotationally symmetrical parts. J Intell Manuf 17:557–569
7. Scheck DE (1966) Feasibility of automated process planning. PhD Thesis, Purdue University
8. Berra PB, Barash MM (1968) Investigation of automated planning and optimization of metal working processes. Report 14, Purdue Laboratory for Applied Industrial Control
9. Yang YT (2006) Study on the key technologies of cooperative CAPP system supporting bilingual languages. PhD Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing
10. Du P, Huang NK (1990) Principle of computer aided process design. Beihang University Press, Beijing
11. Wang XK (1999) Computer aided manufacturing. Tsinghua University Press, Beijing
12. Zhang H, Alting L (1994) Computerized manufacturing process planning systems. Chapman & Hall
13. Shao XY, Cai LG (2004) Modern CAPP technology and application. Machinery Industry Press, Beijing
14. Wu FJ, Wang GC (2001) Research and development of CAPP. J North China Ins Technol 22(6):426–429
15. Wang ZB, Wang NS, Chen YL (2004) Process route optimization based on genetic algorithm. J Tsinghua Uni 44(7):988–992
16. Zhang W, Xie S (2007) Agent technology for collaborative process planning: a review. Int J Adv Manuf Technol 32:315–325
17. Li WD, Ong SK, Nee AYC (2002) Hybrid genetic algorithm and simulated annealing approach for the optimization of process plans for prismatic parts. Int J Prod Res 40(8):1899–1922
18. Li L, Fuh JYH, Zhang YF, Nee AYC (2005) Application of genetic algorithm to computer-aided process planning in distributed manufacturing environments. Robot Comput Int Manufac 21:568–578
19. Kingsly D, Singh J, Jebaraj C (2005) Feature-based design for process planning of machining processes with optimization using genetic algorithms. Int J Prod Res 43(18):3855–3887
20. Zhang F, Zhang YF, Nee AYC (1997) Using genetic algorithm in process planning for job shop machining. IEEE Trans Evol Comput 1(4):278–289
21. Li XY, Shao XY, Gao L (2008) Optimization of flexible process planning by genetic programming. Int J Adv Manuf Technol 38:143–153
22. Li WD, Ong SK, Nee AYC (2004) Optimization of process plans using a constraint-based tabu search approach. Int J Prod Res 42(10):1955–1985
23. Veeramani D, Stinnes AH, Sanghi D (1999) Application of tabu search to process plan optimization for four-axis CNC turning centre. Int J Prod Res 37(16):3803–3822
24. Ma GH, Zhang YF, Nee AYC (2000) A simulated annealing based optimization algorithm for process planning. Int J Prod Res 38(12):2671–2687
25. Guo YW, Mileham AR, Owen GW, Li WD (2006) Operation sequencing optimization using a particle swarm optimization approach. Proc IMechE Part B: J Eng Manufac 220:1945–1958
26. Tiwari MK, Dashora Y, Kumar S, Shankar R (2006) Ant colony optimization to select the best process plan in an automated manufacturing environment. Proc IMechE Part B: J Eng Manufac 220:1457–1472
27. Krishna AG, Rao KM (2006) Optimization of operations sequence in CAPP using an ant colony algorithm. Int J Adv Manuf Technol 29:159–164
28. Dashora Y, Tiwari MK, Karunakaran KP (2008) A psycho-clonal-algorithm-based approach to solve the operation sequencing problem in a CAPP environment. Int J Prod Res 21(5):510–525
29. Chan FTS, Swarnkar R, Tiwari MK (2005) Fuzzy goal-programming model with an Artificial Immune System (AIS) approach for a machine tool selection and operation allocation problem in a flexible manufacturing system. Int J Prod Res 43(19):4147–4163
30. Ming XG, Mak KL (2000) A hybrid hopfield network-genetic algorithm approach to optimal process plan selection. Int J Prod Res 38(8):1823–1839
31. Zhang CY (2006) Research on the theory and application of job shop scheduling based on natural algorithm. PhD Thesis, School of Mechanical Science & Engineering, HUST, Wuhan
32. Pinedo M (2000) Scheduling: theory, algorithms, and systems, 2nd edn. Prentice-Hall, Inc
33. Graves SC (1981) A review of production scheduling. Oper Res 29(4):646–675
34. Blazewicz J, Ecker KH, Pesch E, Schmidt G, Weglarz J (2007) Handbook on scheduling: from theory to applications. Springer-Verlag
35. Pan QK (2003) Research on multiobjective shop scheduling in manufacturing system. PhD Thesis, School of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing
36. Johnson SM (1954) Optimal two and three-stage production scheduling with set-up times included. Naval Research Logistics Quarterly 1:64–68
37. Manne AS (1960) On the job-shop scheduling problem. Oper Res 8:219–223
38. Lomnicki Z (1965) A branch and bound algorithm for the exact solution of the three machine scheduling problem. Electr Eng 19(2):87–101
39. Golomb SW, Baumert LD (1965) Backtrack programming. J ACM 12:516–524
40. Garey MR, Graham RL, Johnson DS (1978) Performance guarantees for scheduling algorithms. Oper Res 26:3–21
41. Gonzalez T, Sahni S (1978) Flow shop and job shop schedules: complexity and approximation. Oper Res 26:36–52
42. Panwalkar SS, Iskander W (1977) A survey of scheduling rules. Oper Res 25(1):45–61
43. Tan W, Khoshnevis B (2000) Integration of process planning and scheduling—a review. J Intell Manuf 11:51–63
44. Kumar M, Rajotia S (2005) Integration of process planning and scheduling in a job shop environment. Int J Adv Manuf Technol 28(1–2):109–116
45. Wu DZ, Yan JQ, Jin H (1999) Research status and progress of CAPP and PPC integration. J Shanghai Jiaotong Uni 33(7):912–916
46. Chryssolouris G, Chan S, Cobb W (1984) Decision making on the factory floor: an integrated approach to process planning and scheduling. Robot Comput Int Manufac 1(3–4):315–319
47. Chryssolouris G, Chan S (1985) An integrated approach to process planning and scheduling. Annals CIRP 34(1):413–417
48. Beckendorff U, Kreutzfeldt J, Ullmann W (1991) Reactive workshop scheduling based on alternative routings. In: Proceedings of a conference on factory automation and information management. CRC Press, Florida, pp 875–885
49. Khoshnevis B, Chen QM (1989) Integration of process planning and scheduling function. In: IIE integrated systems conference & society for integrated manufacturing conference proceedings. Industrial Engineering & Management Press, Atlanta, pp 415–420
50. Zhang HC (1993) IPPM—a prototype to integrate process planning and job shop scheduling functions. Annals CIRP 42(1):513–517
51. Larsen NE (1993) Methods for integration of process planning and production planning. Int J Comput Integr Manuf 6(1–2):152–162
52. Wang L, Shen W, Hao Q (2006) An overview of distributed process planning and its integration with scheduling. Int J Comput Appl Technol 26(1–2):3–14
53. Deng C, Li PG, Luo B (1997) Research on the integration of job planning and process design. J Huazhong Uni Sci Technol 25(3):16–17
54. Zhang ZY, Tang CT, Zhang JM (2002) Application of genetic algorithm in flexible CAPP and production scheduling integration. Comput Int Manufac Sys 8(8):621–624
55. Lee H, Kim S (2001) Integration of process planning and scheduling using simulation based genetic algorithms. Int J Adv Manuf Technol 18:586–590
56. Li JL, Wang ZY (2003) Flexible process planning based on Petri net. J Yanshan Uni 27(1):71–73
57. Jablonski S, Reinwald B, Ruf T (1990) Integration of process planning and job shop scheduling for dynamic and adaptive manufacturing control. IEEE, pp 444–450
58. Sun RL, Xiong YL, Du RS (2002) Evaluation of process planning flexibility and its application in production scheduling. Comp Int Manufac Sys 8(8):612–615
59. Yang YG, Zhang Y, Wang NS (2005) Research on multi-process route decision based on coevolutionary algorithm. Mec Sci Technol 24(8):921–925
60. Kim KH, Song JY, Wang KH (1997) A negotiation based scheduling for items with flexible process plans. Comput Ind Eng 33(3–4):785–788
61. Yang YN, Parsaei HR, Leep HR (2001) A prototype of a feature-based multiple-alternative process planning system with scheduling verification. Comput Ind Eng 39:109–124
62. Kim K, Egbelu J (1998) A mathematical model for job shop scheduling with multiple process plan consideration per job. Produc Plan Cont 9(3):250–259
63. Kim K, Egbelu P (1999) Scheduling in a production environment with multiple process plans per job. Int J Prod Res 37(12):2725–2753
64. Jain A, Jain P, Singh I (2006) An integrated scheme for process planning and scheduling in FMS. Int J Adv Manuf Technol 30:1111–1118
65. Lan GH, Wang LY (2001) An alternative process planning decision process integrating CAD/CAPP/CAM/CAE with production scheduling. Mod Manufac Eng 10:24–25
66. Thomalla C (2001) Job shop scheduling with alternative process plans. Int J Prod Econ 74:125–134
67. Gan P, Lee K (2002) Scheduling of flexible sequenced process plans in a mould manufacturing shop. Int J Adv Manuf Technol 20:214–222
68. Kis T (2003) Job shop scheduling with processing alternatives. Eur J Oper Res 151:307–332
69. Li WD, McMahon C (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95
70. Shao XY, Li XY, Gao L, Zhang CY (2009) Integration of process planning and scheduling—a modified genetic algorithm-based approach. Comput Oper Res 36(6):2082–2096
71. Wang ZB, Chen YL, Wang NS (2004) Research on dynamic process planning system considering decision about machines. In: Proceedings of the 5th world congress on intelligent control and automation, June 15–19, Hangzhou, P.R. China, pp 2758–2762
72. Wang J, Zhang YF, Nee AYC (2002) Integrating process planning and scheduling with an intelligent facilitator. In: Proceedings of the 10th international manufacturing conference in China (IMCC2002), Xiamen, China, October
73. Zhang Y, Saravanan A, Fuh J (2003) Integration of process planning and scheduling by exploring the flexibility of process planning. Int J Prod Res 41(3):611–628
74. Shen B, Tao RH (2004) Research on dynamic CAPP system integrating process planning and production scheduling. Combin Mac Tool Auto Mac Technol 5:45–48
75. Wang ZB, Wang NS, Chen YL (2005) Research on dynamic CAPP system and processing resource decision method. CAD Network World
76. Usher JM, Fernandes KJ (1996) Dynamic process planning—the static phase. J Mater Process Technol 61:53–58
77. Khoshnevis B, Chen Q (1990) Integration of process planning and scheduling functions. J Intell Manuf 1:165–176
78. Seethaler RJ, Yellowley I (2000) Process control and dynamic process planning. Int J Mach Tools Manuf 40:239–257
79. Lin Y (2006) Research on process route optimization method for multi-manufacturing tasks. Chinese Mechanic Eng 17(9):911–918
80. Baker RP, Maropoulos PG (2000) An architecture for the vertical integration of tooling considerations from design to process planning. Rob Comp Int Manufac 6:121–131
81. Wang LH, Shen WM (2007) Process planning and scheduling for distributed manufacturing. Springer-Verlag
82. Wu DZ, Yan JQ, Wang LY (1996) Research on parallel distributed integration of CAPP and PPC. J Shanghai Jiaotong Uni 12(30):1–6
83. Tang DB, Li DB, Sun Y (1997) Research on integration of CAPP and job shop plan. Chinese Mechanic Eng 8(6):15–17
84. Li JL, Wang ZY, Wang J (2002) Formal logic expression of flexible process and its database structure. Manufact Auto 24:25–28
85. Brandimarte P, Calderini M (1995) A hierarchical bi-criterion approach to integrated process plan selection and job shop scheduling. Int J Prod Res 33(1):161–181
86. Wang L, Shen W (2003) DPP: an agent based approach for distributed process planning. J Intell Manuf 14:429–439
87. Hua GR, Zhao LX, Zhou XH (2005) Research on integration of CAPP and production scheduling based on concurrent engineering. Manufacturing Automation 27(3):45–47
88. Wu SH, Fuh JYH, Nee AYC (2002) Concurrent process planning and scheduling in distributed virtual manufacturing. IIE Trans 34:77–89
89. Zhang J, Gao L, Chan FTS (2003) A holonic architecture of the concurrent integrated process planning system. J Mater Process Technol 139:267–272
90. Sadeh N, Hildum D, Laliberty T, McAnulty J, Kjenstad D, Tseng A (1998) A blackboard architecture for integrating process planning and production scheduling. Concur Eng Res Appl 6(2):88–100
91. Sugimura N, Shrestha R, Tanimizu Y, Iwamura K (2006) A study on integrated process planning and scheduling system for holonic manufacturing. In: Process planning and scheduling for distributed manufacturing. Springer, pp 311–334
92. Kempenaers J, Pinte J, Detand J (1996) A collaborative process planning and scheduling system. Adv Eng Softw 25:3–8
93. Wang LH, Song YJ, Shen WM (2005) Development of a function block designer for collaborative process planning. In: Proceedings of CSCWD2005, Coventry, UK, pp 24–26
94. Saygin C, Kilic SE (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
95. Kim Y, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
Chapter 2
Review for Flexible Job Shop Scheduling
2.1 Introduction

Production scheduling is one of the most critical issues in manufacturing systems and has been extensively studied in the literature [1]. It is concerned with allocating available production resources to tasks and deciding the sequence of operations so that all constraints are met and the optimization objectives are achieved [2]. One of the most famous production scheduling problems is the Job shop Scheduling Problem (JSP), which is NP-hard [3]. In the JSP, a set of jobs is to be processed on a set of machines. Each job consists of a sequence of consecutive operations, each of which requires exactly one machine; the operations of each job, the machine processing each operation, and the processing times are predefined [4]. The Flexible Job shop Scheduling Problem (FJSP) is an extension of the JSP and is also NP-hard [5]. Different from the JSP, in the FJSP each operation can be processed by one machine selected from a set of available machines. This increases the flexibility of scheduling but also the complexity, making the FJSP harder than the JSP [6]. Moreover, the FJSP has many significant applications in real-world scenarios and has therefore attracted a lot of attention [4]. This chapter describes the FJSP briefly and then reviews existing solution methods in the recent literature from three aspects: exact algorithms, heuristics, and meta-heuristics. Furthermore, this chapter also presents future research trends and opportunities in detail. The rest of the chapter is organized as follows. Section 2.2 describes the FJSP. Section 2.3 reviews various solution approaches comprehensively. Section 2.4 surveys the real-world applications of the FJSP. Finally, Sect. 2.5 discusses development trends and future research opportunities.
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_2
2.2 Problem Description

The FJSP can be stated as follows [6]. A set of n jobs J = {J1, J2, ..., Jn} is to be processed on a set of m machines M = {M1, M2, ..., Mm}. Each job Ji has a sequence of pi operations Oi1, Oi2, ..., Oipi, which are processed in a given order. Each operation can be processed on several machines in M. The FJSP is to determine the most appropriate machine for each operation (called machine selection) and the sequence of the operations on the machines (called operation sequencing). The optimization objective of the FJSP is to minimize some indicator, e.g., makespan or maximum tardiness. Moreover, there are some assumptions and constraints for the FJSP, listed as follows:

(1) All machines are available at time zero;
(2) All jobs are available after their release dates;
(3) Each operation can only be processed on one machine at a time;
(4) Each machine can only perform one operation at a time;
(5) Each operation cannot be interrupted during processing;
(6) There are no precedence constraints among operations of different jobs, since the jobs are independent of each other;
(7) For each job, the sequence of operations is predefined.

According to whether all operations can be processed on all machines in M, the FJSP can be classified into two categories, i.e., Total FJSP (T-FJSP) and Partial FJSP (P-FJSP) [7, 8], described as follows:

(1) T-FJSP: each operation can be processed on all machines in M;
(2) P-FJSP: there is at least one operation that can be processed only on a proper subset of the machines in M.

The T-FJSP can be considered as a special case of the P-FJSP [7]. The FJSP with all kinds of optimization objectives has been widely studied in the literature. Some optimization objectives are summarized in Table 2.1. In Table 2.1, the first column gives the notation of each optimization objective, the second the corresponding objective function, the third its meaning, the fourth an interpretation, and the fifth some corresponding literature.
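The two coupled decisions (machine selection and operation sequencing) can be made concrete by decoding a candidate solution into a schedule. The sketch below uses an invented two-job instance (not from the text): `assignment` fixes the machine of each operation, `order` fixes a global operation sequence, and the decoder derives start times that respect both the job-precedence and the one-operation-per-machine constraints.

```python
# Minimal FJSP decoding sketch (illustrative instance):
# proc[j][o] maps each eligible machine to a processing time for operation o of job j.
proc = [
    [{"M1": 3, "M2": 5}, {"M2": 4}],           # job 0: O01, O02
    [{"M1": 2, "M2": 6}, {"M1": 4, "M2": 3}],  # job 1: O11, O12
]
assignment = {(0, 0): "M1", (0, 1): "M2", (1, 0): "M1", (1, 1): "M2"}
order = [(0, 0), (1, 0), (1, 1), (0, 1)]       # a global operation sequence

job_ready = {0: 0, 1: 0}        # when the previous operation of each job ends
mach_ready = {"M1": 0, "M2": 0} # when each machine becomes free
for j, o in order:
    m = assignment[(j, o)]
    start = max(job_ready[j], mach_ready[m])   # precedence + machine constraints
    end = start + proc[j][o][m]
    job_ready[j] = mach_ready[m] = end
makespan = max(job_ready.values())
print(makespan)  # 12
```

Meta-heuristics for the FJSP typically search over exactly these two encodings (the assignment and the sequence), using a decoder of this kind to evaluate each candidate.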
2.3 The Methods for FJSP

More than one hundred recent papers are reviewed in this chapter, covering many variants of the classical FJSP. Some variants differ considerably from the classical FJSP while being more suitable for practical production environments. This section presents a comprehensive survey of solution techniques for the classical FJSP and its variants. The main solution techniques are classified into three classes, i.e., exact algorithms, heuristics, and meta-heuristics, and summarized comprehensively.
Table 2.1 Some optimization objectives

Notation | Description    | Meaning                             | Interpretation                                                                                                           | References
Cmax     | max_j(C_j)     | Makespan or maximum completion time | The cost of a schedule depends on how long the entire set of jobs takes to finish processing                             | Pezzella et al. [5]; Zhang et al. [63]
Tmax     | max_j(T_j)     | Maximum tardiness                   | The maximum difference between the completion time and the due date of a single job                                      | Baykasoglu [162]
Tt       | Σ_j T_j        | Total tardiness                     | Positive difference between the completion time and the due date over all jobs; no reward for early jobs, only penalties for late jobs | Scrich et al. [36]; Brandimarte [78]
T̄       | Σ_j T_j / n    | Mean tardiness                      | Average difference between the completion time and the due date of a single job                                          | Tay et al. [43]; Chen et al. [64]
Lmax     | max_j(L_j)     | Maximum lateness                    | Checks how well the due dates are respected; there is a positive reward for completing a job early                       | Chen et al. [163]
It       | Σ_j I_j        | Total idle time                     | The difference between running time and processing time over all machines                                                | Chen et al. [64]
Ft       | Σ_j F_j        | Total flow time                     | The time that all jobs spent in the shop                                                                                 | Baykasoglu [162]
F̄       | Σ_j F_j / n    | Mean flow time                      | Average time a single job spent in the shop                                                                              | Tay et al. [43]; Chen et al. [64]
Wmax     | max_j(W_j)     | Maximum workload                    | The maximum working time among all machines                                                                              | Xia and Wu [1]; Zhang et al. [117]
Wt       | Σ_j W_j        | Total workload                      | The total working time on all machines                                                                                   | Kacem et al. [8]; Gao et al. [62]
Ot       | Σ_j O_j        | Total operation cost                | The cost value of all operations                                                                                         | Frutos et al. [81]; Rabiee et al. [131]
Et       | Σ_j E_j        | Total energy consumption            | The energy consumption of the whole production process                                                                   | Mokhtari and Hasani [159]; Lei et al. [160]
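The objectives in Table 2.1 all reduce to simple functions of the per-job completion times C_j, due dates d_j, and release times r_j. As a quick sketch with made-up numbers:

```python
# Computing several Table 2.1 objectives from job completion data
# (illustrative values; flow time is taken as F_j = C_j - r_j).
C = [8, 12, 15]   # completion times C_j
d = [10, 10, 16]  # due dates d_j
r = [0, 2, 3]     # release times r_j

Cmax = max(C)                                  # makespan
L = [c - dd for c, dd in zip(C, d)]            # lateness L_j (may be negative)
T = [max(0, l) for l in L]                     # tardiness T_j (never negative)
F = [c - rr for c, rr in zip(C, r)]            # flow times F_j

print(Cmax, max(L), sum(T), sum(F) / len(F))   # 15 2 2 10.0
```

Note the asymmetry the table describes: lateness rewards earliness (L_j can be negative), while tardiness only penalizes late completion.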
2.3.1 Exact Algorithms

In the literature, researchers have presented some exact algorithms to solve the FJSP, most of which are formulated as Integer Linear Programming (ILP) or Mixed ILP (MILP) models. Stecke [9] defined a set of five production planning problems for a Flexible Manufacturing System (FMS) and addressed the grouping and loading problems specifically; these two problems were first formulated in detail as nonlinear 0-1 mixed integer programs. Sawik [10] formulated a multi-level ILP model for FMS and proposed a hierarchical decision structure that includes part type selection, machine loading, part input sequencing, and operation scheduling. Werra and Widmer [11] presented some ILP models for FMS with tool management of the following type: the system works in time periods whose durations may or may not be fixed, and tools are loaded on the machines at the beginning of each time period and stay there for the whole period. Jiang and Hsiao [12] used 0-1 integer programming for FMS, considering the operational scheduling problem and the determination of production routing with alternate process plans simultaneously; the mathematical programming approach generates the optimal schedule for the selected criterion, rather than a near-optimal or merely better schedule. In order to optimize the allocation of workloads between a job shop and an FMS, Tetzlaff and Pesch [13] proposed some nonlinear optimization models, which allow optimizing performance parameters such as throughput, work-in-process inventory, utilization, and production lead time. Gomes et al. [14] formulated a new ILP model to schedule the flexible job shop operating on a make-to-order basis. The model considers groups of parallel homogeneous machines, limited intermediate buffers, and negligible setup effects, and they used commercial MILP software to solve it. Torabi et al. [15] developed a new Mixed Integer Nonlinear Program (MINLP) which simultaneously determines machine allocation, sequencing, lot-sizing, and scheduling decisions to address the common cycle multi-product lot-scheduling problem in deterministic flexible job shops, where the planning horizon is finite and fixed by management. Özgüven et al. [16] developed a Mixed Integer Linear Programming model (MILP-1) for FJSPs and compared it to an alternative model (Model F) to verify its superiority; they then modified MILP-1 to obtain MILP-2 for the same problem. Elazeem et al. [17] proposed a mathematical model of the primal problem of the FJSP, where the objective is to minimize the makespan, and introduced its dual problem (Abdou's problem). The optimal value of Abdou's problem
is a lower bound for the objective value of the primal problem. Özgüven et al. [18] formulated two mixed integer goal programming models for the FJSP, covering process plan flexibility and separable/non-separable sequence-dependent setup times in addition to routing flexibility; in the first model (Model A) the sequence-dependent setup times are non-separable, while in the second (Model B) they are separable. Jahromi and Tavakkoli-Moghaddam [19] presented a novel 0-1 ILP model for the problem of dynamic machine-tool selection and operation allocation with part and tool movement policies in FMS; the objective of this model is to determine a machine-tool combination for each operation of the part type by minimizing production costs. Roshanaei et al. [20] developed two novel, effective position-based and sequence-based MILP models to deal with the FJSP with the objective of minimizing makespan. Birgin et al. [21] proposed a MILP model for an extended version of the FJSP, in which the precedence between operations of a job may be given by an arbitrary directed acyclic graph rather than a linear order; the goal of the model is to minimize the makespan.
In addition to formulating ILP models, some researchers have employed the branch and bound approach to solve the FJSP. Berrada and Stecke [22] first discussed a nonlinear integer mathematical programming formulation of the loading problem in FMS, which involves assigning operations and the associated cutting tools to the machine tools; they used a branch and bound approach to balance the workload on all machines. Kim and Yano [23] developed an efficient branch and bound algorithm for the FMS problem of allocating operations to machine groups so as to maximize throughput while satisfying tool or component storage constraints. Zhou et al. [24] constructed a Petri net model for FMS; a firing sequence of the Petri net from an initial marking to the final one can be seen as a schedule of the modeled FMS, and an optimal schedule can be obtained by the branch and bound algorithm. Lloyd et al. [25] proposed an optimum scheduling algorithm for FMS using Petri net modeling and a modified branch and bound search; the scheduling algorithm implements a global search of the reachability tree, and the optimum makespan is obtained through the modified branch and bound search. For the FJSP with work centers (minimizing a variant of the makespan), Hansmann et al. [26] derived a MILP model to represent the problem and further proposed a branch and bound method to solve it.
Moreover, some researchers have considered Multi-Objective Flexible Job shop Scheduling Problems (MOFJSPs). Gomes et al. [27] presented a MILP model for the FJSP considering the re-entrant process (multiple visits to the same machine group) and the final assembly stage simultaneously; the optimization objective is to minimize a weighted sum of order earliness, order tardiness, and in-process inventory. Gran et al. [28] formulated a Mixed Integer Goal Programming (MIGP) model to solve the FJSP with two objectives (minimizing the makespan and the total machining time); they presented an optimal production job shop scheduling strategy based on the solution model and adopted a preemptive goal programming approach, using the Microsoft Excel Solver Add-In, to solve this problem.
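The position- and sequence-based MILP models surveyed above share a common skeleton. As a hedged sketch in generic textbook notation (not the exact constraints of any cited model), a sequence-based big-M formulation minimizing the makespan can be written as:

$$
\begin{aligned}
\min\quad & C_{\max}\\[2pt]
\text{s.t.}\quad & \sum_{k\in M_{ij}} x_{ijk} = 1 && \forall\, O_{ij}\\
& s_{i,j+1} \;\ge\; s_{ij} + \sum_{k\in M_{ij}} p_{ijk}\,x_{ijk} && \forall\, i,\; j < p_i\\
& C_{\max} \;\ge\; s_{i p_i} + \sum_{k\in M_{i p_i}} p_{i p_i k}\,x_{i p_i k} && \forall\, i\\
& s_{ij} \;\ge\; s_{i'j'} + p_{i'j'k} - L\bigl(3 - x_{ijk} - x_{i'j'k} - y_{ij,i'j'}\bigr)\\
& s_{i'j'} \;\ge\; s_{ij} + p_{ijk} - L\bigl(2 + y_{ij,i'j'} - x_{ijk} - x_{i'j'k}\bigr)
\end{aligned}
$$

where $x_{ijk}\in\{0,1\}$ selects machine $k\in M_{ij}$ for operation $O_{ij}$, $s_{ij}\ge 0$ is its start time, $p_{ijk}$ its processing time on machine $k$, $y_{ij,i'j'}\in\{0,1\}$ orders any two operations that share a machine, and $L$ is a sufficiently large constant. The first constraint enforces machine selection, the second job precedence, the third defines the makespan, and the last pair are the disjunctive machine-capacity constraints: if both operations choose machine $k$, exactly one of the two inequalities is active depending on $y_{ij,i'j'}$, and otherwise both are relaxed by $L$.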
2.3.2 Heuristics A lot of heuristics including dispatching rules have been applied to deal with the FJSP in the literature. For instance, Kripa and Tzen [29] compared heuristic methods with the exact mixed integer programming for minimizing the workload and balancing the sequence of jobs in a random flexible manufacturing system. A simulation model is proposed for solving the system performance for the FMS problem with four dispatching rules, i.e., FIFO, SPT, LPT, and MOPR. The search of Chang et al. [30] is to develop and evaluate a beam search heuristic method for addressing the F. The proposed beam search method is more sophisticated than traditional dispatching rules, yet is computationally feasible and yields improved system performance. Ro and Kim [31] discuss heuristics for solving the flexible manufacturing system with three traditional scheduling objectives and one constraint of system utilization. Robert and Kasirajan [32] combined nine dispatching rules with four next station selection rules to investigate a large dedicated flexible manufacturing system. Xiong et al. [33] presented a hybrid search algorithm for scheduling Flexible Manufacturing Systems (FMS). The algorithm combines a heuristic Best-First strategy with a controlled Backtracking strategy. Timed (place) Petri nets are used for problem representation. Their use allows to explicitly formulate concurrent activities, multiple resources sharing, precedence constraints, and dynamic routing in FMS operation. The hybrid heuristic search algorithm is combined with the execution of the timed Petri nets to search for an optimal or near-optimal and deadlock-fee schedule. The backtracking strategy is controllable. Jeong et al. [34] proposed a real-time scheduling method that used simulation and dispatching rules for flexible manufacturing systems. Mati et al. 
[35] used an integrated greedy heuristic to handle the assignment and sequencing problems simultaneously for the FJSP with more than two jobs. Scrich et al. [36] developed two tabu-search-based heuristics, a hierarchical procedure and a multiple start procedure, for solving the FJSP. The procedures use dispatching rules to obtain an initial solution and then search for improved solutions in neighborhoods generated by the critical paths of the jobs in a disjunctive graph representation. Mejia and Odrey [37] presented a new Petri net based algorithm, named Beam A* Search (BAS), which selectively expands portions of a Petri net reachability graph to find a near-optimal schedule. The main features of the BAS algorithm include intelligent pruning of the search space, a controlled search deepening to avoid marking explosion, and new heuristic evaluation functions. Since Petri net based algorithms had previously been tested only on relatively small problems with few machines and few jobs, their algorithm combines the A* search with an aggressive node-pruning strategy and improved evaluation functions to generate near-optimal schedules; extensive computational tests were conducted on a wide variety of scenarios ranging from the classical job shop to complex FMS scheduling problems. Alvarez-Valdés et al. [38] presented a heuristic to minimize the total cost corresponding to the completion times of jobs for the FJSP in a glass factory. The research of Pitts and Ventura [39] focuses on production
routing and sequencing of jobs within a flexible manufacturing cell (FMC). A two-stage algorithm that minimizes the manufacturing makespan is presented; during Stage I (the construction phase), two heuristics are utilized to generate an initial feasible sequence and an initial MS solution. Fattahi et al. [40] developed two heuristics, following integrated and hierarchical approaches, to solve the FJSP, and presented six different hybrid searching structures depending on the search approach and heuristics used. To cope with the complexities of FMS scheduling, Huang et al. [41] presented a hybrid heuristic search strategy in a Petri net framework, combining the heuristic A* strategy with the DF strategy based on the execution of the Petri nets; timed Petri nets provide an efficient method for representing the concurrent activities, shared resources, and precedence constraints encountered frequently in flexible manufacturing systems. They searched for an optimal or near-optimal schedule of a simple manufacturing system with multiple lot sizes for each job type; the method invokes quicker termination conditions, and the quality of the search result is controllable. Wang et al. [42] proposed a Filtered-Beam-Search-Based heuristic algorithm (FBSB) to obtain suboptimal schedules for the FJSP with multiple objectives, i.e., minimizing makespan, the total workload of machines, and the workload of the most loaded machine. The FBSB utilizes dispatching-rule-based heuristics and explores the search space intelligently to avoid useless search, which enhances the search speed. 
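The filtered beam search idea recurring in these works can be sketched in miniature. The toy below (illustrative only, not the cited FBSB algorithm) sequences jobs on a single machine: each state is a partial sequence scored by accumulated tardiness, and only the `beam_width` best states survive each expansion, trading guaranteed optimality for speed.

```python
# Hedged sketch of a beam search over job sequences: partial sequences
# are expanded by appending each unscheduled job, then filtered so only
# the most promising survive.

def beam_search(jobs, beam_width=3):
    # jobs: list of (processing_time, due_date)
    beam = [((), 0, 0)]  # (sequence of job indices, current time, tardiness)
    for _ in range(len(jobs)):
        candidates = []
        for seq, t, tard in beam:
            for j in range(len(jobs)):
                if j in seq:
                    continue
                p, d = jobs[j]
                finish = t + p
                candidates.append((seq + (j,), finish, tard + max(0, finish - d)))
        # filtering step: keep only the beam_width best partial sequences
        candidates.sort(key=lambda s: s[2])
        beam = candidates[:beam_width]
    return min(beam, key=lambda s: s[2])

seq, _, tardiness = beam_search([(3, 4), (2, 3), (4, 10)])
print(seq, tardiness)  # → (1, 0, 2) 1
```

Real FBSB-style methods score states with dispatching-rule-based evaluation functions rather than raw partial cost, but the expand-then-filter loop is the same.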
Tay and Ho [43] proposed and analyzed effective composite dispatching rules, discovered through a Genetic Programming (GP) approach, for solving the multi-objective FJSP. Wang and Yu [44] developed the FBSB heuristic algorithm to deal with a variant of the FJSP with maintenance activities. To schedule flexible manufacturing systems efficiently, Lee and Lee [45] proposed new heuristic functions for the A* algorithm based on a T-timed Petri net; in minimizing makespan, the proposed heuristic functions are usually more efficient than previous functions in the required number of states and computation time. Nie et al. [46] studied a heuristic to solve the dynamic FJSP with job release dates. Based on harmony search and large neighborhood search, Yuan and Xu [47] designed a hybrid two-stage search heuristic for solving the large-scale FJSP with the makespan criterion. Ziaee [48] proposed an efficient heuristic based on a constructive procedure to obtain high-quality schedules for the FJSP with the objective of minimizing makespan. Afterward, to deal with the distributed FJSP, which includes the scheduling of jobs in a distributed manufacturing environment, Ziaee [49] developed a heuristic based on a constructive procedure. Gema and Pastor [50] presented a dispatching algorithm to solve a real-world case of the flexible job shop scheduling problem with transfer batches and the objective of minimizing the average tardiness of production orders; the priority dispatching rules are applied in a predefined order and assigned probabilities based on a randomized variant. Baruwa and Piera [51] presented a simulation-optimization approach employing an anytime search method to optimize FMS scheduling problems modeled with the TCPN formalism. The underlying search graph is based on the CSS of TCPN models
which groups markings with equivalent untimed markings into a state class. The proposed approach combines the column search method with backtracking, which offers anytime behavior in a time-constrained environment. Subsequently, an anytime heuristic search method was proposed by Baruwa et al. [52] for the deadlock-free (DL-free) scheduling problem of FMS with shared resources, modeled as a discrete-event system. It finds optimal or near-optimal DL-free schedules for a given initial marking of the system based on the reachability graph of the TCPN model. The algorithm has been tested extensively on five different cases of deadlock-prone situations that take into account limitations arising in realistic manufacturing systems. Gao et al. [53] presented four ensembles of heuristics to schedule the FJSP with new job insertion. The objectives are to minimize makespan, average of earliness and tardiness (E/T), maximum machine workload, and total machine workload. Pérez et al. [54] proposed a new hierarchical heuristic to solve the MOFJSP, which is an adaptation of Newton's method for continuous multi-objective unconstrained optimization problems. Sobeyko et al. [55] devised an iterative local search approach, and then hybridized the shifting bottleneck heuristic with the iterative local search and variable neighborhood search to obtain high-quality solutions quickly for the FJSP with the objective of minimizing the total weighted tardiness. Miguel et al. [56] proposed a heuristic method based on tabu search for solving the FJSP with the constraint of lot streaming. Zadeh et al. [57] designed a heuristic algorithm for solving a dynamic FJSP with the objective of minimizing makespan; the model incorporates machine setup times into the dynamic rescheduling. The study of Miguel et al. [58] presents a dispatching rule used in the practical FJSP. 
As a heuristic approach, the proposed dispatching rule proved more effective than related swarm intelligence algorithms in practical textile manufacturing. Teymourifar et al. [59] solved the dynamic FJSP with a limited buffer using an effective dispatching rule; a right-shift heuristic and a Least Waiting Time (LWT) heuristic were proposed for the stochastic condition of machine breakdown. A hybrid priority scheduling rule was extracted by gene expression programming and simulation in the article of Ozturk et al. [60]; the proposed rule is combined with a cellular system for solving the multi-objective, dynamic FJSP. Azzedine et al. [61] proposed two greedy heuristics based on an iterated insertion technique for solving the FJSP with the constraint of transportation time between machines, based on a case study.
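The dispatching rules recurring throughout this subsection can be illustrated on a single machine. The toy comparison below (illustrative only, not drawn from any cited paper) shows how the choice of rule changes mean flow time when all jobs are available at time zero; SPT is known to be optimal for this criterion.

```python
# Toy comparison of classical dispatching rules (FIFO, SPT, LPT) on a
# single machine with all jobs available at time zero. Flow time here
# equals each job's completion time.

jobs = [{"id": 0, "p": 6}, {"id": 1, "p": 2}, {"id": 2, "p": 4}]

def mean_flow_time(seq):
    t = total = 0
    for job in seq:
        t += job["p"]   # machine processes jobs back to back
        total += t      # flow time = completion time (arrival at 0)
    return total / len(seq)

rules = {
    "FIFO": lambda j: j["id"],   # first in, first out
    "SPT": lambda j: j["p"],     # shortest processing time first
    "LPT": lambda j: -j["p"],    # longest processing time first
}
results = {name: mean_flow_time(sorted(jobs, key=key))
           for name, key in rules.items()}
for name, mft in results.items():
    print(name, round(mft, 2))  # SPT yields the smallest mean flow time
```

In an FJSP or FMS dispatcher, such a rule is applied locally: whenever a machine becomes idle, it picks the next operation from its queue according to the rule, rather than pre-sorting a global sequence as this sketch does.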
2.3.3 Meta-Heuristics

2.3.3.1 Population-Based Meta-Heuristics
Population-based meta-heuristics have been widely applied to the FJSP, among which the Genetic Algorithm (GA) is perhaps the most widely utilized. The GA has been used effectively to solve both the single-objective and the multi-objective FJSP.
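Many of the GAs surveyed below share a common solution representation: one vector assigns a machine to each operation, and a second orders operations as repeated job indices (the k-th occurrence of job j denotes job j's k-th operation). A minimal, hedged sketch of decoding such a two-vector chromosome into a schedule (all data hypothetical; for brevity it builds a semi-active schedule, whereas the cited papers typically decode into active schedules):

```python
# proc[j][k] maps machine id -> processing time of operation k of job j
proc = [
    [{0: 3, 1: 5}, {1: 2, 2: 4}],   # job 0: two operations
    [{0: 4, 2: 3}, {0: 2, 1: 6}],   # job 1: two operations
]

def decode(machine_vec, sequence_vec):
    # machine_vec[j][k] = machine chosen for operation k of job j
    # sequence_vec = job indices; k-th occurrence of j = j's k-th operation
    machine_free, job_free = {}, [0] * len(proc)
    next_op = [0] * len(proc)
    schedule, makespan = [], 0
    for j in sequence_vec:
        k = next_op[j]
        next_op[j] += 1
        m = machine_vec[j][k]
        # start when both the job's previous operation and the machine are free
        start = max(job_free[j], machine_free.get(m, 0))
        end = start + proc[j][k][m]
        job_free[j] = machine_free[m] = end
        schedule.append((j, k, m, start, end))
        makespan = max(makespan, end)
    return schedule, makespan

schedule, makespan = decode(
    machine_vec=[[0, 1], [2, 0]],
    sequence_vec=[0, 1, 0, 1],
)
print(makespan)  # → 5
```

Crossover and mutation then operate on the two vectors separately (swapping machine assignments, permuting the sequence), and every offspring decodes to a feasible schedule by construction.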
First, solution methods for the FJSP with a single objective are introduced; makespan is the most common objective, and many GA-based meta-heuristic algorithms have been proposed for it. For instance, Pezzella et al. [5] presented a GA for the FJSP that integrates different strategies for generating the initial population, selecting individuals for reproduction, and reproducing new solutions. Kacem [51] presented two approaches to solve the FJSP: the first is the approach by localization, and the second is genetic algorithms, applying genetic manipulations to enhance solution quality. Gao et al. [62] developed a hybrid GA with variable neighborhood descent to address the FJSP with three objectives: minimizing makespan, maximal machine workload, and total workload. An innovative two-vector representation scheme was proposed, together with an effective decoding method to interpret each chromosome into an active schedule. Zhang et al. [63] proposed an effective GA for solving the FJSP to minimize makespan; an improved chromosome representation scheme is proposed, and an effective decoding method interpreting each chromosome into a feasible active schedule is designed. Based on GA and the Grouping Genetic Algorithm (GGA), Chen et al. [64] developed a scheduling algorithm for the FJSP whose objectives are the minimization of multiple performance measures, including total tardiness, total machine idle time, and makespan. Chang et al. [65] proposed a GA that embeds the Taguchi method behind mating to increase its effectiveness for solving the FJSP. Nouri et al. [66] proposed a hybridization of two meta-heuristics within a holonic multi-agent model for solving the FJSP: the scheduler agent applies a Neighborhood-based Genetic Algorithm (NGA) for global exploration, and the cluster agent uses a local search technique. Jiang et al. 
[67] presented an improved GA in which a new initialization method is adopted to improve the quality of the initial population and accelerate the convergence of the algorithm. For the same problem, Huang et al. [68] also proposed an improved GA with two effective crossover methods and two mutation methods. Later, they proposed another improved GA with a new adaptive probability of crossover and mutation in the mating process [69], which greatly improves the convergence rate. Driss et al. [70] proposed a GA to solve the FJSP with the objective of minimizing makespan, using a new chromosome representation to conveniently represent the solution together with specially designed crossover and mutation operators. Purnomo et al. [71] developed an improved GA with a knowledge-based system to solve the FJSP, embedding several heuristics in the GA to improve its performance. Morinaga et al. [72] developed a GA that exploits knowledge contained in heuristic dispatching rules to solve the FJSP; machine selection and job selection are performed at once to relieve insufficient search of the solution space. To improve the exploitation ability of the GA, many researchers combine it with local search heuristics. Li et al. [73] proposed a hybrid algorithm combining the GA with Tabu Search (TS) for the FJSP. An Improved Simulated Annealing Genetic Algorithm (ISAGA) was presented by Gu et al. [74] to solve the FJSP; in the ISAGA, an X-conditional cloud generator from cloud model theory is used to generate the mutation probability, and a Simulated Annealing (SA) operation was designed for the variability of results. Zhang et al. [75] developed a Variable Neighborhood Search (VNS) based on GA to
deal with the FJSP; simple local search methods are used to balance exploration and exploitation. Ma et al. [76] proposed a Memetic Algorithm (MA) to solve the FJSP: a hybrid GA combined with two efficient local searches to exploit information in the search region. Cinar et al. [77] proposed a GA with a priority-based representation for solving the FJSP, where the priority of each operation is represented by a gene on the chromosome, and embedded Iterated Local Search (ILS) in the algorithm to obtain improved solutions. Other objectives have also been considered in the literature, e.g., tardiness. Brandimarte [78] proposed a hierarchical TS algorithm for the FJSP considering makespan and total weighted tardiness; hierarchical strategies have been proposed for complex scheduling problems, and the TS is used to cope with different hierarchical memory levels. A new algorithm hybridizing GA and VNS was developed by Türkyılmaz and Bulkan [79] to solve the FJSP with the objective of minimizing total tardiness; a parallel-executed VNS is used in the elitist selection phase of the GA to reduce execution time. For the same objective, Kaweegitbundit et al. [80] proposed a GA that incorporates heuristic rules, embedding combinations of five job selection and five machine selection heuristics. Moreover, much research has been devoted to GA-based approaches for MOFJSPs, whose basic forms can be classified by different objective combinations. Frutos et al. [81] proposed a MA based on NSGA-II acting on two chromosomes to solve the MOFJSP; a local search procedure (SA) is added to the genetic stage to prevent the algorithm from getting trapped in a local minimum. Yuan et al. [82] proposed a new MA based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for this problem with the objectives of minimizing the makespan, total workload, and critical workload. 
In their proposed algorithm, a hierarchical strategy is used to handle the three objectives. An algorithm based on a multi-population genetic-variable neighborhood search was proposed by Liang et al. [83] to solve the MOFJSP, aiming at minimizing the makespan, the maximum machine load, and the total machine load; compared with the traditional GA, it alleviates the inherent defects of poor local search ability, premature convergence, and long computation time. Ren et al. [84] proposed an Immune Genetic Algorithm (IGA) combining the artificial immune mechanism with GA to solve the MOFJSP with the objectives of maximizing due-date satisfaction and minimizing the total processing costs. Morinaga et al. [85] presented a GA using a TS strategy for the MOFJSP with weighted tardiness, setup worker load balance, and work-in-process as objectives. Liang et al. [86] developed a MA within the framework of an improved NSGA-II to deal with the FJSP; on the basis of NSGA-II, an elite-improvement strategy based on circular crowding distance was designed to increase the diversity of the population distribution, prevent the algorithm from being trapped in a local optimum, and avoid premature convergence. Recently, Deng et al. [87] presented a Bee Evolutionary Guiding Non-dominated Sorting Genetic Algorithm II (BEG-NSGA-II) for the MOFJSP with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. To solve the MOFJSP whose objectives are minimizing the longest makespan of workpieces, the load of
each machine, and the total machine load, Zhang et al. [88] put forward a Multi-Population Genetic Algorithm (MPGA). Ghasemi et al. [89] presented a classical Weighted Sum (WS) method and NSGA-II to solve the FJSP with multiple objectives: minimizing the completion time of jobs and maximizing machine utilization. To generate Pareto fronts, the algorithm uses variable weights and random selection to change directions in the search space. Teymourifar et al. [90] proposed two modified NSGA-II algorithms to solve the MOFJSP, integrating neighborhood structures defined for the problem into the algorithms to create better generations during the evolutionary process. Recently, many variants of the FJSP have appeared in the literature. Tayebi et al. [91] proposed a genetic-variable neighborhood search with an affinity function to solve the FJSP with Sequence-Dependent Setup Times (SDST), learning effects, and deterioration in jobs. In the same year, an FJSP with cyclic jobs, in which jobs must be delivered in determined batch sizes at definite time intervals, was studied by Jalilvand-Nejad and Fattahi [92]; they proposed a GA and a SA to solve this problem, with the GA achieving better performance. To deal with the FJSP considering lot-sizing and SDST, Rohaninejad et al. [93] proposed a novel Mixed Integer Programming (MIP) model based on big-bucket time, and developed a new hybrid algorithm combining GA, Particle Swarm Optimization (PSO), and a local search heuristic. For solving the flexible job shop just-in-time scheduling problem, Rey et al. [94] proposed two meta-heuristics, i.e., GA and PSO; the just-in-time objective is translated into the minimization of the Mean-Square due date Deviation (MSD), quadratically penalizing inventory (earliness) costs and backlogging (tardiness) costs. To handle a complex JSP with re-entrant and flexible characteristics, Zhang et al. 
[95] proposed an improved GA with a comprehensive search mechanism to overcome the trade-off between convergence rate and convergence accuracy. Li et al. [96] proposed an improved GA to solve the FJSP with small-batch customization; the standard GA was improved by designing a genetic operator based on dynamic procedure encoding, reserving the optimal individual, and meeting the requirements of the FJSP. For solving the distributed FJSP, Chang et al. [97] proposed a hybrid GA with a novel encoding mechanism to avoid invalid job assignments. For the same problem, Lu et al. [98] proposed an improved GA with a 1D-to-3D decoding method that converts a 1D chromosome into a 3D solution. Azzouz et al. [99] proposed a self-adaptive evolutionary algorithm combining GA with VNS and ILS to solve the FJSP with SDST and learning effects. Elgendy et al. [100] put forward a modified GA that incorporates the traditional GA procedures with a repair strategy to optimize the makespan of the dynamic FJSP. Chen et al. [101] proposed a modified meta-heuristic based on GA, in which gene encoding is divided into process encoding and machine encoding, to solve the FJSP with many product varieties and small batches; an initialization operation associated with the time matrix was introduced to accelerate convergence, and a generation gap coefficient was applied to guarantee the survival rate of superior offspring. For solving the dynamic FJSP considering machine failure, urgent job arrival, and job damage as disruptions, Wang et al. [102] proposed a dynamic rescheduling method
based on a Variable Interval Rescheduling Strategy (VIRS); meanwhile, they proposed an improved GA to solve the dynamic FJSP with the objective of minimizing makespan. Peng et al. [103] presented a GA to solve a double-resource FJSP, in which both machines and workers are considered in job shop scheduling; a well-designed three-layer chromosome encoding method is adopted, and effective crossover and mutation operators are designed. For the MOFJSP under random machine breakdown, Ahmadi et al. [104] applied two evolutionary algorithms, NSGA-II and NRGA, to improve makespan and stability simultaneously. For the bi-objective FJSP under stochastic processing times, Yang et al. [105] proposed an NSGA-II considering the completion time and the total energy consumption. Wang et al. [106] proposed an effective MA combining NSGA-II with a local search method to simultaneously optimize fuzzy makespan, average agreement index, and minimal agreement index; a variable neighborhood local search was specially developed to enhance exploitation ability. Owing to the growing importance of environmental protection in recent years, carbon emissions and energy consumption have been considered in the MOFJSP. Jiang et al. [107] proposed a modified NSGA-II to solve the MOFJSP considering energy consumption. Yin et al. [108] proposed a new low-carbon flexible job shop scheduling model and a Multi-Objective Genetic Algorithm (MOGA) based on a simplex lattice design to solve it. For the MOFJSP with the objectives of minimizing total carbon footprint and the total late work criterion, Piroozfard et al. [109] proposed an improved MOGA and compared it with NSGA-II and the Strength Pareto Evolutionary Algorithm to verify its superiority. For the FJSP with energy-saving measures, Wu et al. 
[110] considered when to turn machines on/off and which speed level to choose as two energy-saving measures, and proposed an energy consumption model to compute the energy consumed by a machine in different states; they then developed a Non-dominated Sorting Genetic Algorithm (NSGA) to solve this problem. Zhang et al. [111] proposed a low-carbon scheduling model for the FJSP; to quantify the carbon emission of different scheduling plans, they put forward three carbon efficiency indicators to estimate the carbon emission of parts and machine tools, and proposed a hybrid NSGA-II to solve the problem. Azzouz et al. [112] proposed a hybrid algorithm based on GA and VNS to solve the FJSP with sequence-dependent setup times under two kinds of objective functions: makespan and a bi-criteria objective. In addition to GA, other population-based meta-heuristics have also been applied, many of them to the classical FJSP with the objective of minimizing makespan. For instance, Marzouki et al. [113] proposed a new multi-agent model based on the Chemical Reaction Optimization (CRO) meta-heuristic to solve the FJSP with the objective of minimizing makespan; afterward, they hybridized the CRO with TS for the same problem [114]. Yang et al. [115] proposed an effective Modified Biogeography-Based Optimization (MBBO) algorithm with machine-based shifting to solve the FJSP with makespan minimization. Then, Lin et al. [116] developed a Hybrid Biogeography-Based Optimization (HBBO) algorithm for the FJSP with the makespan criterion; an insertion-based local search heuristic was incorporated into BBO to modify the mutation operator to
balance the exploration and exploitation abilities. PSO has also often been used to solve the FJSP with makespan minimization in recent years. Xia and Wu [1] developed an easily implemented hybrid approach combining PSO and SA for the FJSP; their computational study showed that the proposed algorithm is a viable and effective approach for the MOFJSP, especially for large-scale problems. Zhang et al. [117] combined a PSO algorithm and a TS algorithm to solve the MOFJSP with the objectives of minimizing makespan, maximal machine workload, and total workload of machines. Singh et al. [118] proposed a Quantum-Behaved Particle Swarm Optimization (QBPSO) for solving the FJSP, which overcomes the tendency of PSO to become trapped in a local optimum. Muthiah et al. [119] proposed a hybridization of PSO and the Artificial Bee Colony (ABC) algorithm to solve the FJSP with the objective of minimizing makespan. Yi et al. [120] proposed an effective MA combining TS and GA for the FJSP with the objective of minimizing makespan. Phu-ang and Thammano [121] proposed a new MA based on the Marriage in honey Bees Optimization (MBO) algorithm for solving the FJSP, incorporating an SA algorithm blended with a set of heuristics to enhance its local search capability. Ge et al. [122] proposed an efficient artificial fish swarm model with estimation of distribution (AFSA-ED) for the FJSP with the objective of minimizing makespan; a pre-principle and a post-principle arranging mechanism are designed to enhance population diversity. Wang et al. [123] designed a Bacterial Foraging Optimization (BFO) algorithm with an improved adaptive step and stop condition to counter local optima and premature convergence, and applied this improved algorithm to the FJSP. To overcome the disadvantage of slow convergence, Wu et al. 
[124] developed an Elitist Quantum-Inspired Evolutionary Algorithm (EQIEA) to solve the FJSP. Xu et al. [125] proposed an Improved Bat Algorithm (IBA) with a new encoding strategy to solve the FJSP. Wang et al. [126] developed an Improved Ant Colony Optimization (IACO) with high computational efficiency to optimize makespan for the FJSP. An improved Hybrid Immune Algorithm (HIA) with parallelism and adaptability was proposed by Liang et al. [127] to solve the FJSP with makespan as the objective, embedding an SA into the hybrid algorithm to avoid local optima. Buddala et al. [128] proposed a Teaching–Learning-Based Optimization (TLBO) to solve the FJSP using the integrated approach with the objective of minimizing makespan; a new local search technique followed by a mutation strategy is integrated into TLBO to avoid being trapped at a local optimum. Jiang et al. [129] proposed a Grey Wolf Optimization (GWO) algorithm for the FJSP with the objective of minimizing makespan; an adaptive mutation method is designed to maintain population diversity and avoid premature convergence. Recently, Gaham et al. [130] proposed an Effective Operations Permutation based Discrete Harmony Search (EOP-DHS) approach for tackling the FJSP with the makespan criterion. Some population-based meta-heuristics have also been used to solve the MOFJSP with multiple different criteria. For example, Rabiee et al. [131] proposed four multi-objective, Pareto-based meta-heuristic optimization methods to solve the MOFJSP with the objectives of minimizing makespan and total operation costs. Xue et al.
[132] proposed a Quantum Immune Algorithm (QIA) based on quantum and immune principles to solve the MOFJSP whose objectives are makespan, workload of machines, and workload of the critical machine. Ma et al. [133] proposed a MA based on the Non-dominated Neighbor Immune Algorithm (NNIA) to tackle the FJSP with the objectives of minimizing makespan and total operation cost. Then, Gong et al. [134] designed a MA with a well-designed chromosome encoding/decoding method to solve the MOFJSP whose objectives are to minimize the maximum completion time, the maximum workload of machines, and the total workload of all machines. Mekni et al. [135] used a Modified Invasive Weed Optimization (MIWO) algorithm to solve the MOFJSP with the criteria of minimizing makespan, the total workload of machines, and the workload of the critical machine. Karthikeyan et al. [136] developed a Hybrid Discrete Firefly Algorithm (HDFA) combined with local search to solve the MOFJSP with the objectives of minimizing the maximum completion time, the workload of the critical machine, and the total workload of all machines. As with the traditional FJSP with the makespan criterion, PSO has also often been used to solve the MOFJSP. For example, Kamble et al. [137] proposed a hybrid algorithm based on PSO and SA to solve the FJSP with five objectives to be minimized simultaneously: makespan, maximal machine workload, total workload, machine idle time, and total tardiness. Huang et al. [138] proposed a Multi-Objective PSO (MOPSO) integrated with VNS for solving the MOFJSP with three criteria: the makespan, the total workload, and the critical machine workload; then, they combined the MOPSO and SA with VNS to solve the same problem and obtained better solutions [139]. Zeng and Wang [140] hybridized Chaotic Simulated Annealing (CSA) with a Particle Swarm Improved Artificial Immune (PSIAI) algorithm for solving the MOFJSP considering makespan and processing cost. Zhu et al. 
[141] proposed a Modified Bat Algorithm (MBA) to solve the MOFJSP, hybridizing five neighborhood structures with the BA to improve its search ability. Moreover, many meta-heuristics have also been applied to non-traditional FJSP variants. For instance, Rossi [142] proposed an Ant Colony Optimization (ACO) algorithm based on a disjunctive graph model to deal with the FJSP with sequence-dependent setup and transportation times. An effective TLBO was proposed by Xu et al. [143] to solve the FJSP with fuzzy processing time (FJSPF); a bi-phase crossover scheme based on the teaching–learning mechanism and special local search operators are incorporated into the TLBO search framework to balance exploration and exploitation. Liu et al. [144] reduced a dynamic FJSP with fuzzy processing time to a traditional static FJSP with fuzzy processing time and proposed an Estimation of Distribution Algorithm (EDA) to solve the transformed problem. For the remanufacturing scheduling problem modeled as an FJSP, Gao et al. [145] proposed a Two-stage Artificial Bee Colony (TABC) algorithm for scheduling and for rescheduling with new job insertion. Furthermore, Gao et al. [146] addressed an Improved Artificial Bee Colony (IABC) algorithm to solve the FJSP with fuzzy processing time, whose objectives are to minimize the maximum fuzzy completion time and the maximum fuzzy machine workload. Meng et al. [147] presented a hybrid Artificial Bee Colony (hyABC) algorithm to minimize the total flow time for the FJSP with overlapping in operations; in the proposed hyABC, a dynamic
scheme is introduced to fine-tune the search scope adaptively. A Knowledge-Guided Fruit fly Optimization Algorithm (KGFOA) with a new encoding scheme was proposed by Zheng et al. [148] to solve the Dual-Resource Constrained Flexible Job shop Scheduling Problem (DRCFJSP) with the makespan minimization criterion. Liu et al. [149] formulated a multi-objective optimization model based on the FJSP, aimed at minimizing the carbon footprint of all products and the makespan, and designed a Hybrid Fruit fly Optimization Algorithm (HFOA) to solve the proposed model. Zandieh et al. [150] considered the FJSP with Condition-Based Maintenance (CBM) and proposed an improved Imperialist Competitive Algorithm (ICA) combined with SA to solve it. Nouiri et al. [151] developed a Two-Stage Particle Swarm Optimization (2S-PSO) to solve the FJSP under machine breakdowns, assuming a single breakdown. Jamrus et al. [152] developed a hybrid approach integrating PSO with a Cauchy distribution and genetic operators (HPSO + GA) for solving the FJSP, finding a job sequence that minimizes the maximum flow time under uncertain processing times. Singh et al. [153] proposed a multi-objective framework based on Quantum Particle Swarm Optimization (QPSO) to solve the FJSP with random machine breakdown. Reddy et al. [154] proposed a new evolutionary Multi-Objective TLBO (MOTLBO), combined with a local search technique, to solve the FJSP with machine breakdown as a real-time event. Zhang et al. [155] proposed an approach named HMA, which combines Multi-Agent System (MAS) negotiation and Ant Colony Optimization (ACO), to solve the FJSP in a dynamic environment. Azzouz et al. [156] proposed a Self-Adaptive Hybrid Algorithm (SAHA) for solving the FJSP with sequence-dependent setup times; the algorithm introduces a new adaptation strategy based on a similarity function and an archiving process. 
Aiming at minimizing the total cost of the FJSP with controllable processing times, Mokhtari et al. [157] developed a Scatter Search (SS) to find the best trade-off between processing cost and delay cost. Lu et al. [158] proposed a new Multi-Objective Discrete Virus Optimization Algorithm (MODVOA) for the FJSP with controllable processing times, whose objectives are minimizing both the makespan and the total additional resource consumption. Mokhtari and Hasani [159] developed an enhanced evolutionary algorithm incorporating the global criterion to solve the MOFJSP with three objective functions: minimizing the total completion time, maximizing the total availability of the system, and minimizing the total energy cost of both production and maintenance operations. Lei et al. [160] developed a Shuffled Frog-Leaping Algorithm (SFLA) based on a three-string coding approach to solve the FJSP with the objectives of balancing the workload and minimizing the total energy consumption. For the Multi-Objective Stochastic Flexible Job shop Scheduling Problem (MOSFJSSP), Shen et al. [161] proposed a modified multi-objective evolutionary algorithm based on decomposition (m-MOEA/D), which keeps elitists in an archive and preserves them in the child generation.
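Most of the multi-objective studies surveyed above evaluate candidate schedules on makespan, total machine workload, and critical (most-loaded) machine workload. The following sketch, with purely illustrative data and a common two-vector encoding (machine assignment plus operation sequence), shows how these three objectives can be computed from a decoded schedule; the instance and function names are our own assumptions, not taken from any cited paper.

```python
# Minimal sketch (illustrative data, not from any cited paper): decoding a
# two-vector FJSP solution and computing the three objectives most often
# used in the surveyed MOFJSP studies.

# proc[job][op] = {machine: processing time}  (flexible routing)
proc = [
    [{0: 3, 1: 5}, {1: 2, 2: 4}],  # job 0: two operations
    [{0: 4, 2: 3}, {0: 2, 1: 6}],  # job 1: two operations
]

def evaluate(assignment, sequence):
    """assignment[(job, op)] -> machine; sequence is a job-id list where
    the k-th occurrence of job j denotes operation k of job j."""
    op_counter = {j: 0 for j in range(len(proc))}
    machine_free = {}  # machine -> time it becomes free
    job_free = {}      # job -> completion time of its last scheduled operation
    workload = {}      # machine -> total busy time
    for j in sequence:
        k = op_counter[j]
        op_counter[j] = k + 1
        m = assignment[(j, k)]
        t = proc[j][k][m]
        # semi-active decoding: start as soon as machine and job are both free
        start = max(machine_free.get(m, 0), job_free.get(j, 0))
        machine_free[m] = job_free[j] = start + t
        workload[m] = workload.get(m, 0) + t
    makespan = max(job_free.values())
    return makespan, sum(workload.values()), max(workload.values())

assignment = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 0}
sequence = [0, 1, 0, 1]
print(evaluate(assignment, sequence))  # -> (5, 10, 5)
```

The operation-sequence vector with repeated job ids guarantees that any permutation remains precedence-feasible, which is why this encoding recurs across the GA, PSO, and ABC variants discussed above.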
2.3.3.2 Single Solution Based Meta-Heuristics
Besides population-based meta-heuristics, single solution based meta-heuristics have also often been utilized to solve the FJSP, among which Simulated Annealing (SA) is one of the most popular. For instance, Baykasoglu [162] presented an SA-based meta-heuristic to solve the FJSP considering makespan, mean flow time, number of tardy jobs, maximum tardiness, and total machine idle time. The approach uses grammars from linguistics to represent the FJSP data and dispatching rules for operation sequencing. Chen et al. [163] developed a scheduling algorithm for the FJSP comprising two major modules: a machine selection module and an operation scheduling module. The objectives of the algorithm are maximizing the on-time delivery rate and minimizing the makespan, the maximum lateness, and the average tardiness. Khalife et al. [164] developed an effective SA for solving the MOFJSP with overlapping in operations; the evaluation criteria are makespan, total machine work loading time, and critical machine work loading time. Shivasankaran et al. [165] proposed a new hybrid non-dominated sorting SA to solve the MOFJSP with the objectives of minimizing makespan, critical machine workload, total workload of the machines, and total operating cost. The critical or incapable machine is eliminated by sorting all the non-dominant operations, which effectively reduces the computational time complexity of the algorithm. Then, Shivasankaran et al. [166] devised a hybrid sorting immune SA algorithm to solve the FJSP, in which a critical machine isolating strategy is used to improve the local search ability. Bożejko et al. [167] developed doubly parallelized SA algorithms, including fine-grained vector processing and multiple-walk multi-core processing, to deal with the cyclic FJSP. Kaplanoğlu [168] developed an SA algorithm with an Object-Oriented (OO) approach for solving the MOFJSP.
The OO approach can reduce the complexity of problem coding by using UML class diagrams. Moreover, many other single solution based meta-heuristics have also been applied to solve the FJSP. For example, Karimi et al. [169] combined the VNS algorithm with a knowledge module to solve the FJSP with the objective of minimizing makespan. The VNS part searches for good solutions in the solution space, while the knowledge module extracts the knowledge of good solutions and feeds it back to the algorithm, making the search process more efficient. Meanwhile, Lei and Guo [170] devised a VNS algorithm, composed of two neighborhood search procedures and a restarting mechanism, for the dual-resource constrained FJSP with the objective of optimizing makespan. Vilcot and Billaut [171] studied two kinds of TS algorithms to minimize makespan, maximum lateness, and total tardiness simultaneously for the MOFJSP. Jia and Hu [172] presented a novel path-relinking TS for solving the multi-objective FJSP, whose objectives are minimizing makespan, total workload of all machines (in terms of processing time), and the workload of the most-loaded machine. Rajkumar et al. [173] proposed a Greedy Randomized Adaptive Search Procedure (GRASP) algorithm to solve the FJSP with limited resource constraints; the model objectives are the minimization of makespan, maximum workload, and total workload. Yulianty and Ma'ruf [174] proposed an algorithm for solving the FJSP considering controllable processing times and expected downtime by using a predictive approach.
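The SA variants above all share the same skeleton: perturb the current solution through a neighborhood move and accept worse neighbors with a temperature-controlled probability. A minimal, self-contained sketch for a toy FJSP instance follows; the instance, the two neighborhood moves, and the cooling schedule are illustrative assumptions, not the method of any particular cited paper.

```python
import math
import random

# Minimal SA sketch for a toy FJSP instance (all data and parameters assumed).
random.seed(1)

# proc[job][op] = {machine: processing time}
proc = [
    [{0: 3, 1: 5}, {1: 2, 2: 4}],
    [{0: 4, 2: 3}, {0: 2, 1: 6}],
    [{1: 3, 2: 2}, {0: 5, 2: 3}],
]
jobs = range(len(proc))

def makespan(assign, seq):
    """Semi-active decoding of (machine assignment, operation sequence)."""
    cnt = {j: 0 for j in jobs}
    mfree, jfree = {}, {}
    for j in seq:
        k = cnt[j]
        cnt[j] = k + 1
        m = assign[(j, k)]
        start = max(mfree.get(m, 0), jfree.get(j, 0))
        mfree[m] = jfree[j] = start + proc[j][k][m]
    return max(jfree.values())

def neighbor(assign, seq):
    """Either swap two sequence positions or reassign one operation."""
    assign, seq = dict(assign), list(seq)
    if random.random() < 0.5:
        i, k = random.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]
    else:
        j, k = random.choice(list(assign))
        assign[(j, k)] = random.choice(list(proc[j][k]))
    return assign, seq

# initial solution: lowest-indexed capable machine, jobs in index order
assign = {(j, k): min(proc[j][k]) for j in jobs for k in range(len(proc[j]))}
seq = [j for j in jobs for _ in proc[j]]
cur = best = makespan(assign, seq)
T = 10.0
while T > 0.01:
    cand = neighbor(assign, seq)
    cost = makespan(*cand)
    # accept improvements always, deteriorations with probability e^(-delta/T)
    if cost <= cur or random.random() < math.exp((cur - cost) / T):
        (assign, seq), cur = cand, cost
        best = min(best, cur)
    T *= 0.99  # geometric cooling
print(best)
```

The surveyed papers differ mainly in the pieces this sketch leaves generic: the neighborhood moves (critical-path moves, machine isolating strategies), the acceptance rule, and hybridization with VNS, TS, or immune operators.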
2.4 Real-World Applications
The FJSP has important applications in the real world. A number of complicated realistic problems have been modeled as the FJSP in the literature. Meanwhile, a series of solution methods have been specially developed to cope with these problems; some applications are listed in this section. Li et al. [175] proposed a modified GA which was well verified in scheduling decision support systems for the production of seamless steel tubes in the Baoshan Iron & Steel Complex. Chen et al. [137] studied a new algorithm based on GA and Grouping GA in a real weapon production factory. Gomes [14] presented a new ILP model to solve the FJSP, derived from a discrete-part, make-to-order manufacturing industry. Birgin et al. [21] studied an extended FJSP, where the precedence between the operations of a job is given by an arbitrary directed acyclic graph instead of a linear order. They proposed a new MILP model and used it, together with CPLEX, to solve instances from the literature as well as instances inspired by real data from the printing industry. Hansmann et al. [26] studied a MILP model with a B&B procedure for the FJSP with work centers, which exists in rail car maintenance, and further presented heuristics and exact solution methods. For the MOFJSP with sequence-dependent setup times, auxiliary resources, and machine downtime, Grobler et al. [176] developed four kinds of PSO-based heuristics. Comparison results on real customer data indicated that the priority-based PSO algorithm performed better than the existing rule-based algorithms commonly used for this problem. For the FJSP from the printing and boarding industry, two TS algorithms were proposed to obtain a set of non-dominated solutions [171]. Alvarez-Valdes et al. [38] devised a heuristic to solve the FJSP existing in a glass factory, which produces a large variety of manufactured glass objects in a complex process.
Calleja and Pastor [50] proposed a dispatching algorithm for an FJSP with transfer batches arising from a realistic auto parts manufacturing plant. To schedule customers' orders in factories of plastic injection machines, a case of the FJSP, Tanev et al. [178] presented a Hybrid Evolutionary Algorithm (HEA) combining Priority-Dispatching Rules (PDRs) with the GA. Hosseinabadi et al. [177] devised a gravitational emulation local search algorithm to deal with the multi-objective dynamic FJSP in small and medium enterprises.
2.5 Development Trends and Future Research Opportunities
2.5.1 Development Trends
With the rapid development of economy and society, the manufacturing industry has encountered more and more opportunities and challenges; e.g., mass customization, virtual enterprise, distributed manufacturing, and green manufacturing have become increasingly popular. Intelligent manufacturing is the main
theme arising from the demands of Industry 4.0. Advanced technologies such as big data and artificial intelligence can provide powerful tools for coping with these challenges, which can further guide new development trends of the manufacturing industry. The variety of FJSP tasks has increased substantially with the rapid development of the manufacturing industry.
2.5.2 Future Research Opportunities
(1) New Modes of FJSP
Mass customization, providing customers with products and services for specific needs, will create new requirements for FJSP modes, such as FJSP dynamic scheduling, FJSP online scheduling, FJSP real-time scheduling, and FJSP reverse scheduling. New kinds of optimization objectives in the FJSP, for example robustness, satisfaction degree, and system stability, are becoming significant issues. Thus, the FJSP with more than three objectives, i.e., the many-objective FJSP, should also receive more attention.
(2) New Models of FJSP
Actual workshops and personalized requirements are becoming increasingly complex, which makes practical production environments (e.g., the realistic production conditions and customer demands) harder to model. These complex factors should therefore be considered in the FJSP model. Moreover, more constraints, such as production resource restrictions, buffer size restrictions, due date restrictions, batch size restrictions, and cost restrictions, should be taken into account in the FJSP.
(3) New Solution Methods for FJSP
With the development of data analysis and artificial intelligence (including machine learning, deep learning, and so on), data-driven modeling methods provide a new and effective way to solve engineering optimization problems for which it is difficult to establish a mathematical model. Joint data-model-driven solution methods can be developed to solve complicated FJSPs. For example, the energy consumption model of the FJSP can be formulated based on a theoretical energy consumption mathematical model and the actual carbon emissions of the workshop. Big data and artificial intelligence technology can extract effective information from the production data in the workshop to guide the establishment of the FJSP model.
Meanwhile, these technologies can also be employed to predict the occurrence of uncertainties (e.g., machine breakdown, new order arrival) based on historical production data and feed them into the dynamic FJSP. In addition, with the rapid development of intelligent optimization algorithms, more research can focus on solving the FJSP with new intelligent optimization algorithms to obtain better schedules. Due to the complexity of the actual workshop production environment, it is difficult to fully simulate and optimize, and
it takes too long. Therefore, surrogate-model-based optimization methods can be used to solve the FJSP in an actual production environment and obtain a better schedule in a shorter time.
(4) New FJSPs under New Manufacturing Forms
Worldwide corporate restructuring significantly affects manufacturing modes. Distributed manufacturing has become a new mode of production with the increasing popularity of cooperative production among companies and mergers between enterprises. Distributed manufacturing can make full use of the resources of multiple enterprises or factories and realize rapid production at a reasonable cost through the rational allocation, optimal combination, and sharing of resources. With the progress of science and technology, the individualized and diversified development of consumer demand promotes the diversification of commodity supply. In order to satisfy the demands of consumers with low cost, high quality, personalization, and rapid response, mass-customized production models have emerged. Meanwhile, in order to cope with new opportunities in the market, enterprises with different resources and advantages have established mutually beneficial enterprise alliances based on data networks to share technology and information. The virtual enterprise, a mode of production and operation, provides a new way for manufacturing enterprises to quickly respond to market changes and establish competitive advantages. Due to these changes in manufacturing modes, the scheduling models and solution methods of the workshop also need to change accordingly. In order to solve the FJSP under the new manufacturing models, new scheduling models should be established in the context of cooperative production between different companies or factories, and new solution methods should be constructed to optimize scheduling indicators and enable enterprises to produce high-quality products with lower cost and risk.
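The energy consumption formulation mentioned under the new solution methods above can be made concrete with a toy workshop energy model. The sketch below (all power coefficients and the busy intervals are hypothetical assumptions) combines machining energy and idle energy per machine, of the kind a data-driven method could calibrate against measured carbon emissions:

```python
# Illustrative workshop energy model (all power coefficients and the busy
# intervals are hypothetical): total energy = machining energy while busy
# plus idle energy between a machine's first start and last completion.

P_RUN = {0: 5.0, 1: 3.5}   # kW while processing (assumed values)
P_IDLE = {0: 1.0, 1: 0.8}  # kW while idle (assumed values)

# busy intervals per machine, (start, end), from some decoded schedule
schedule = {
    0: [(0, 3), (5, 8)],
    1: [(0, 2), (2, 6)],
}

def energy(schedule):
    total = 0.0
    for m, intervals in schedule.items():
        busy = sum(end - start for start, end in intervals)
        span = max(end for _, end in intervals) - min(start for start, _ in intervals)
        total += P_RUN[m] * busy + P_IDLE[m] * (span - busy)
    return total

print(energy(schedule))  # machine 0: 5*6 + 1*2 = 32; machine 1: 3.5*6 = 21 -> 53.0
```

Adding such an energy term alongside makespan turns the FJSP into exactly the kind of green multi-objective model surveyed earlier (e.g., [149, 160]).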
(5) New Applications of the FJSP
It is necessary to develop Cyber-Physical System (CPS) or digital twin driven scheduling methods for the FJSP. In the context of new technological innovations, intelligent plants have emerged. An intelligent plant is a highly flexible, personalized, digital, and intelligent production system covering the whole life cycle of product procurement, design, production, sales, and service, built on many advanced technologies such as CPS, big data, virtual simulation, network communication, and artificial intelligence. These advanced technologies can be applied to the FJSP. For instance, CPS integrates advanced information technology and automatic control technology, including sensing, computing, communication, control, etc., to construct a complex system that closely combines the actual production environment with production data. This system can implement on-demand response, fast iteration, and dynamic optimization of resource allocation and operation assignment in the workshop. The digital twin constructs a virtual entity and system that can represent the actual production workshop in virtual (information) space. Researchers can read various real-time parameters of the control system, construct visual remote monitoring and
collect historical data, analyze the health of the production system, and use artificial intelligence technology to achieve production trend prediction. Based on the production results, the strategies of equipment maintenance and resource management can be optimized to reduce and avoid the losses caused by unexpected downtime. These new technologies can help to apply the theories and methods of the FJSP much better.
(6) Integration with Other Systems
Recently, the integration and collaboration of the shop scheduling system with logistics, the production environment, humans, and equipment has become a new research trend. In existing research, in order to adapt to the small-batch production mode, the combination of the shop scheduling system and the process planning system can effectively optimize the selection of process routes, shorten the production cycle, reduce the rework rate, and improve the flexibility of the manufacturing system. With the increasing complexity of the production environment, the shop scheduling system also needs to track logistics information in real time to grasp the current transportation status of products, to control and analyze the production environment so that operators can follow the standard process flow and avoid mistakes in the processing stage, and to monitor the operation, output, and performance of equipment to ensure the stability of the production process. Therefore, the integration of the job shop scheduling system with other systems can deal with abnormal conditions in the production process in a timely manner and further globally control the production environment of the workshop.
(7) Closed-Loop Scheduling Decisions
Open-loop scheduling means that once the scheduling strategy is made, the whole scheduling process is carried out in strict accordance with it and cannot be adjusted according to changes in the actual production environment.
In an environment where the production system can be accurately modeled, an open-loop scheduling strategy can achieve good performance. However, in the actual production process, the scheduling plan and customer needs often change, and it is difficult to establish an accurate shop scheduling model. Facing these problems, an open-loop scheduling strategy can hardly guarantee effective scheduling performance and also leads to deteriorating utilization of the resources in workshops. Closed-loop scheduling can dynamically adjust the scheduling scheme according to the real-time production status of the workshop to deal with the problems generated during the production process (e.g., new order arrival, machine breakdown, etc.) and achieve the goal of optimizing the scheduling targets. Integrating big data, CPS, digital twin, and other technologies, closed-loop scheduling can monitor the production situation of the workshop in real time. The real-time production data are used as a feedback signal to predict uncertain events in the workshop, and precautions can be taken in advance to guarantee the stability of production. Meanwhile, closed-loop scheduling can make full use of the resources in the workshop to effectively balance machine loading and production efficiency.
(8) Extensions of the FJSP
Besides the standard FJSP, a large number of scheduling problems exist in other fields, such as enterprise management, transportation, aerospace, health care, and network communications. The existing mathematical models and solution methods of the FJSP can be extended to scheduling resources in other fields, since there are similarities between the FJSP and these scheduling problems. For instance, in the logistics scheduling problem, products need to be assigned to different transport vehicles, which is similar to the FJSP, where jobs need to be assigned to different machines. In the nurse scheduling problem, it is necessary to consider the degree of fatigue of the nurses so that they do not continuously work night shifts, similar to the consideration of the machine workload in the FJSP. Therefore, extending the mathematical models and solution methods of the FJSP to other scheduling problems will also be one of the future research trends.
References
1. Xia W, Wu Z (2005) An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems. Comput Ind Eng 48(2):409–425 2. Akyol DE, Bayhan GM (2007) A review on evolution of production scheduling with neural networks. Comput Ind Eng 53(1):95–122 3. Gonçalves JF, de Magalhães Mendes JJ, Resende MG (2005) A hybrid genetic algorithm for the job shop scheduling problem. Eur J Oper Res 167(1):77–95 4. Çalış B, Bulkan S (2015) A research survey: review of AI solution strategies of job shop scheduling problem. J Intell Manuf 26(5):961–973 5. Pezzella F, Morganti G, Ciaschetti G (2008) A genetic algorithm for the flexible job-shop scheduling problem. Comput Oper Res 35(10):3202–3212 6. Chaudhry IA, Khan AA (2016) A research survey: review of flexible job shop scheduling techniques. Int Trans Operat Res 23(3):551–591 7. Gao K-Z, Suganthan PN, Pan Q-K, Chua TJ, Cai TX, Chong C-S (2014) Pareto-based grouping discrete harmony search algorithm for multi-objective flexible job shop scheduling. Inf Sci 289:76–90 8. Kacem I, Hammadi S, Borne P (2002) Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans Sys Man Cyber Part C (Applications and Reviews) 32(1):1–13 9. Stecke KE (1983) Formulation and solution of nonlinear integer production planning problems for flexible manufacturing systems. Manage Sci 29(3):273–288 10. Sawik T (1990) Modelling and scheduling of a flexible manufacturing system. Eur J Oper Res 45(2–3):177–190 11. de Werra D, Widmer M (1991) Loading problems with tool management in flexible manufacturing systems: a few integer programming models. Int J Flex Manuf Syst 3(1):71–82 12. Jiang J, Hsiao W-C (1994) Mathematical programming for the scheduling problem with alternate process plans in FMS. Comput Ind Eng 27(1–4):15–18 13. Tetzlaff UA, Pesch E (1999) Optimal workload allocation between a job shop and an FMS. IEEE Trans Robot Autom 15(1):20–32 14.
Gomes M, Barbosa-Povoa A, Novais A (2005) Optimal scheduling for flexible job shop operation. Int J Produc Res 43(11):2323–2353 15. Torabi S, Karimi B, Ghomi SF (2005) The common cycle economic lot scheduling in flexible job shops: The finite horizon case. Int J Prod Econ 97(1):52–65
16. Özgüven C, Özbakır L, Yavuz Y (2010) Mathematical models for job-shop scheduling problems with routing and process plan flexibility. Appl Math Model 34(6):1539–1548 17. Elazeem AEMA, Osman MSA, Hassan MBA (2011) Optimality of the flexible job shop scheduling problem. African J Math Comput Sci Res 4(10):321–328 18. Özgüven C, Yavuz Y, Özbakır L (2012) Mixed integer goal programming models for the flexible job-shop scheduling problems with separable and non-separable sequence dependent setup times. Appl Math Model 36(2):846–858 19. Jahromi M, Tavakkoli-Moghaddam R (2012) A novel 0-1 linear integer programming model for dynamic machine-tool selection and operation allocation in a flexible manufacturing system. J Manuf Sys 31(2):224–231 20. Roshanaei V, Azab A, ElMaraghy H (2013) Mathematical modelling and a meta-heuristic for flexible job shop scheduling. Int J Prod Res 51(20):6247–6274 21. Birgin EG, Feofiloff P, Fernandes CG, De Melo EL, Oshiro MT, Ronconi DP (2014) A MILP model for an extended version of the Flexible Job Shop Problem. Optimiz Lett 8(4):1417–1431 22. Berrada M, Stecke KE (1986) A branch and bound approach for machine load balancing in flexible manufacturing systems. Manage Sci 32(10):1316–1335 23. Kim Y-D, Yano CA (1994) A new branch and bound algorithm for loading problems in flexible manufacturing systems. Int J Flex Manuf Syst 6(4):361–381 24. Zhou M, Chiu H-S, Xiong HH (1995) Petri net scheduling of FMS using branch and bound method. Paper presented at the Proceedings of IECON’95–21st Annual Conference on IEEE Industrial Electronics 25. Lloyd S, Yu H, Konstas N (1995) FMS scheduling using Petri net modeling and a branch & bound search. Paper presented at the Proceedings. IEEE International Symposium on Assembly and Task Planning 26. Hansmann RS, Rieger T, Zimmermann UT (2014) Flexible job shop scheduling with blockages. Math Methods Oper Res 79(2):135–161 27. 
Gomes MC, Barbosa-Póvoa AP, Novais AQ (2013) Reactive scheduling in a make-to-order flexible job shop with re-entrant process and assembly: a mathematical programming approach. Int J Prod Res 51(17):5120–5141 28. Gran SS, Ismail I, Ajol TA, Ibrahim AFA (2015) Mixed integer programming model for flexible job-shop scheduling problem (FJSP) to minimize makespan and total machining time. Paper presented at the Computer, Communications, and Control Technology (I4CT), 2015 International Conference on 29. Shanker K, Tzen Y-JJ (1985) A loading and dispatching problem in a random flexible manufacturing system. Int J Prod Res 23(3):579–595 30. Chang Y-L, Matsuo H, Sullivan RS (1989) A bottleneck-based beam search for job scheduling in a flexible manufacturing system. Int J Produc Res 27(11):1949–1961 31. Ro I-K, Kim J-I (1990) Multi-criteria operational control rules in flexible manufacturing systems (FMSs). Int J Produc Res 28(1):47–63 32. O’Keefe RM, Kasirajan T (1992) Interaction between dispatching and next station selection rules in a dedicated flexible manufacturing system. Int J Produc Res 30(8):1753–1772 33. Xiong HH, Zhou M, Caudill RJ (1996) A hybrid heuristic search algorithm for scheduling flexible manufacturing systems. Paper presented at the Proceedings of IEEE International Conference on Robotics and Automation 34. Jeong K-C, Kim Y-D (1998) A real-time scheduling mechanism for a flexible manufacturing system: using simulation and dispatching rules. Int J Prod Res 36(9):2609–2626 35. Mati Y, Rezg N, Xie X (2001) An integrated greedy heuristic for a flexible job shop scheduling problem. Paper presented at the Systems, Man, and Cybernetics, 2001 IEEE International Conference on 36. Scrich CR, Armentano VA, Laguna M (2004) Tardiness minimization in a flexible job shop: A tabu search approach. J Intell Manuf 15(1):103–115 37. Mejía G, Odrey NG (2005) An approach using Petri Nets and improved heuristic search for manufacturing system scheduling.
J Manufac Sys 24(2):79–92
38. Alvarez-Valdés R, Fuertes A, Tamarit JM, Giménez G, Ramos R (2005) A heuristic to schedule flexible job-shop in a glass factory. Eur J Oper Res 165(2):525–534 39. Pitts Jr RA, Ventura JA (2007) A heuristic algorithm for minimizing makespan in a flexible manufacturing environment. Paper presented at the IIE Annual Conference. Proceedings 40. Fattahi P, Mehrabad MS, Jolai F (2007) Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. J Intell Manuf 18(3):331–342 41. Huang B, Sun Y, Sun Y (2008) Scheduling of flexible manufacturing systems based on Petri nets and hybrid heuristic search. Int J Prod Res 46(16):4553–4565 42. Shi-Jin W, Bing-Hai Z, Li-Feng X (2008) A filtered-beam-search-based heuristic algorithm for flexible job-shop scheduling problem. Int J Prod Res 46(11):3027–3058 43. Tay JC, Ho NB (2008) Evolving dispatching rules using genetic programming for solving multi-objective flexible job-shop problems. Comput Ind Eng 54(3):453–473 44. Wang S, Yu J (2010) An effective heuristic for flexible job-shop scheduling problem with maintenance activities. Comput Ind Eng 59(3):436–447 45. Lee J, Lee JS (2010) Heuristic search for scheduling flexible manufacturing systems using lower bound reachability matrix. Comput Ind Eng 59(4):799–806 46. Nie L, Gao L, Li P, Li X (2013) A GEP-based reactive scheduling policies constructing approach for dynamic flexible job shop scheduling problem with job release dates. J Intell Manuf 24(4):763–774 47. Yuan Y, Xu H (2013) An integrated search heuristic for large-scale flexible job shop scheduling problems. Comput Oper Res 40(12):2864–2877 48. Ziaee M (2014) A heuristic algorithm for solving flexible job shop scheduling problem. Int J Adv Manufac Technol 71(1–4):519–528 49. Ziaee M (2014) A heuristic algorithm for the distributed and flexible job-shop scheduling problem. J Supercomput 67(1):69–83 50. 
Calleja G, Pastor R (2014) A dispatching algorithm for flexible job-shop scheduling with transfer batches: an industrial application. Prod Plan Cont 25(2):93–109 51. Baruwa OT, Piera MA (2014) Anytime heuristic search for scheduling flexible manufacturing systems: a timed colored Petri net approach. Int J Adv Manufac Technol 75(1–4):123–137 52. Baruwa OT, Piera MA, Guasch A (2015) Deadlock-free scheduling method for flexible manufacturing systems based on timed colored Petri nets and anytime heuristic search. IEEE Trans Sys Man Cyber Sys 45(5):831–846 53. Gao KZ, Suganthan PN, Tasgetiren MF, Pan QK, Sun QQ (2015) Effective ensembles of heuristics for scheduling flexible job shop problem with new job insertion. Comput Ind Eng 90:107–117 54. Pérez MAF, Raupp FM (2016) A Newton-based heuristic algorithm for multi-objective flexible job-shop scheduling problem. J Intell Manuf 27(2):409–416 55. Sobeyko O, Mönch L (2016) Heuristic approaches for scheduling jobs in large-scale flexible job shops. Comput Oper Res 68:97–109 56. Romero MAF, García EAR, Ponsich A, Gutiérrez RAM (2018) A heuristic algorithm based on tabu search for the solution of flexible job shop scheduling problems with lot streaming. Paper presented at the Proceedings of the Genetic and Evolutionary Computation Conference 57. Shahgholi Zadeh M, Katebi Y, Doniavi A (2018) A heuristic model for dynamic flexible job shop scheduling problem considering variable processing times. Int J Produc Res, 1–16 58. Ortíz MA, Betancourt LE, Negrete KP, De Felice F, Petrillo A (2018) Dispatching algorithm for production programming of flexible job-shop systems in the smart factory industry. Ann Oper Res 264(1–2):409–433 59. Teymourifar A, Ozturk G, Ozturk ZK, Bahadir O (2018) Extracting new dispatching rules for multi-objective dynamic flexible job shop scheduling with limited buffer spaces. Cog Comput, 1–11 60. 
Ozturk G, Bahadir O, Teymourifar A (2018) Extracting priority rules for dynamic multiobjective flexible job shop scheduling problems using gene expression programming. Int J Produc Res, 1–17
Chapter 3
Review for Integrated Process Planning and Scheduling
3.1 IPPS in Support of Distributed and Collaborative Manufacturing

IPPS can be defined as follows: given a set of n parts that are to be processed on m machines with alternative process plans, manufacturing resources, and other technological constraints, select a suitable process plan and manufacturing resources for each part and sequence the operations so as to determine a schedule in which the technological constraints among operations are satisfied and the corresponding objectives are achieved [1]. The following sections survey the research on IPPS.

In the twenty-first century, to win the competition in a dynamic marketplace that demands short response times to changing markets and agility in production, manufacturers need to move from a centralized manufacturing environment to a distributed one [2]. Owing to recent business decentralization and manufacturing outsourcing, distributed and collaborative manufacturing has become very important for manufacturing companies. Distributed manufacturing systems face many unpredictable issues, such as job delays, urgent-order insertions, fixture shortages, missing tools, and even machine breakdowns, and engineers therefore demand adaptive planning and scheduling capabilities for daily operations in such an environment [2]. Process planning and scheduling are two of the most important subsystems of distributed manufacturing systems, and their development is essential to overcoming these uncertainties. Much research has focused on process planning and scheduling in order to improve the flexibility, dynamism, adaptability, agility, and productivity of distributed manufacturing systems [2]. While process planning and scheduling are each crucial in distributed manufacturing, their integration is also very important [3].
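The problem statement above can be made concrete with a minimal data model: parts, each with alternative process plans, whose operations may run on alternative machines with different processing times. This is only an illustrative sketch; the class and field names are not taken from the cited works.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """One machining operation; it may run on any of several alternative machines."""
    name: str
    machine_times: dict  # machine id -> processing time on that machine

@dataclass
class ProcessPlan:
    """One feasible ordering of a part's operations."""
    operations: list

@dataclass
class Part:
    """A part with alternative process plans, as in the IPPS definition."""
    name: str
    plans: list  # alternative ProcessPlan objects

# A toy instance: one part, two alternative plans, machines M1 and M2.
drill = Operation("drill", {"M1": 3, "M2": 5})
mill = Operation("mill", {"M2": 4})
part = Part("P1", [ProcessPlan([drill, mill]), ProcessPlan([mill, drill])])
```

An IPPS method then has to choose one plan per part, bind each operation to one of its candidate machines, and sequence the operations on each machine subject to the precedence constraints encoded in the chosen plans.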
Without IPPS, a true Computer-Integrated Manufacturing System (CIMS), which strives to integrate the various phases of manufacturing into a single comprehensive system, may not be effectively realized. IPPS can also improve the flexibility, adaptability, agility, and global optimization of distributed and collaborative manufacturing. Therefore, research on IPPS is necessary, and it can well support the distributed and collaborative manufacturing system.

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020. X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_3
3.2 Integration Model of IPPS

In the early studies of CIMS, IPPS was identified as very important to the development of CIMS [4, 5]. The preliminary idea of IPPS was first introduced by Chryssolouris et al. [6, 7]. Beckendorff et al. [8] used alternative process plans to improve the flexibility of manufacturing systems. Khoshnevis and Chen [9] introduced the concept of dynamic feedback into IPPS. The integration models proposed by Zhang [10] and Larsen [11] extended the concepts of alternative process plans and dynamic feedback and formalized the methodology of the hierarchical approach. Some earlier work on integration strategies was summarized by Tan and Khoshnevis [5] and by Wang et al. [12]. In recent years, several integration models of IPPS have been reported and several implementation approaches employed. The proposed models can be classified into three basic types: Non-Linear Process Planning (NLPP) [8], Closed-Loop Process Planning (CLPP) [9], and Distributed Process Planning (DPP) [10, 11].
3.2.1 Non-Linear Process Planning The methodology of NLPP is to make all alternative process plans for each part with a rank according to process planning optimization criteria. The plan with the highest priority is always ready for submission when the job is required. If the first-priority plan is not suitable for the current shop floor status, the second-priority plan will be provided to the scheduling system. NLPP can also be called as flexible process-planning, multi-process planning or alternative process planning. The basic flowchart of NLPP is shown in Fig. 3.1. On the basis of the basic flowchart of NLPP, the process planning system and scheduling system are separate. NLPP only uses alternative process plans to enhance the flexibility of the manufacturing system. Process Planning
Multi-process plans
Alternative process plans
Rank all alternative plans according to optimization criteria
Fig. 3.1 The basic flowchart of NLPP
Scheduling selecting the process plans base on the current shop floor status
Production
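The NLPP mechanism, rank all alternative plans offline, then fall back to the next-ranked plan whenever the preferred one does not fit the current shop floor status, can be sketched as below. The ranking criterion and feasibility test are illustrative placeholders, not taken from any cited work.

```python
def select_plan(plans, cost, feasible):
    """NLPP-style selection: rank the alternative plans by the process-planning
    criterion, then hand the scheduler the highest-ranked plan that is feasible
    under the current shop floor status."""
    ranked = sorted(plans, key=cost)   # first-priority plan comes first
    for plan in ranked:
        if feasible(plan):             # checked against current shop floor status
            return plan
    return None                        # no alternative fits the shop floor

# Toy example: a plan is just a list of required machines, and a plan is
# feasible only if every machine it needs is currently up.
plans = [["M1", "M3"], ["M2", "M3"], ["M2", "M4"]]
machine_up = {"M1": False, "M2": True, "M3": True, "M4": True}
best = select_plan(plans,
                   cost=lambda p: len(p),
                   feasible=lambda p: all(machine_up[m] for m in p))
# best == ["M2", "M3"]: the first-priority plan needs the broken M1, so the
# second-priority plan is the one submitted to the scheduling system.
```

Usher's observation [27] shows the cost of this design: the ranked list must be generated up front, so its size, and the planning effort, grows combinatorially while the marginal benefit of extra alternatives shrinks.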
NLPP is the most basic model of IPPS. Because its integration methodology is very simple, most current research on integration models focuses on improving and implementing it. Jablonski et al. [13] described the concept and prototype implementation of a flexible integrated production planning and scheduling system. Kim et al. [14] presented a negotiation-based scheduling system supported by flexible process plans. Kim and Egbelu [15] formulated a mixed integer programming model for job shop scheduling with multiple process plans. Kim and Egbelu [16] developed a further mixed integer programming model for the same problem and proposed two algorithms to solve it. Saygin and Kilic [17] presented a framework that integrated flexible process plans with off-line scheduling in a flexible manufacturing system, and proposed the dissimilarity maximization method for selecting appropriate process plans for a part mix in which parts have alternative process plans. Lee and Kim [18] presented an NLPP model based on the GA. Thomalla [19] investigated an optimization methodology for scheduling jobs with alternative process plans in a just-in-time environment. Yang et al. [20] presented a prototype of a feature-based multiple alternative process planning system. Gan and Lee [21] described a process planning and scheduling system that uses a branch and bound approach to optimize the priority-weighted earliness of jobs scheduled in a mold manufacturing shop. Kim et al. [22] used a symbiotic evolutionary algorithm for IPPS. Kis [23] developed two heuristic algorithms, a TS and a GA, for the JSP with alternative process plans. Jain et al. [24] proposed a scheme for IPPS that could be implemented in a company with existing process planning and scheduling departments when multiple process plans are available for each part type. Li and McMahon [25] used an SA-based approach for IPPS. Shao et al. [26] used a modified GA to solve the problem. However, through a number of experimental computations, Usher [27] concluded that the advantages gained by increasing the number of alternative process plans diminish rapidly once the number of plans reaches a certain level, and that computational efficiency needs to be improved when the approach is applied to a complex system with a large number of alternative solutions.
3.2.2 Closed-Loop Process Planning The methodology of CLPP is using a dynamic process planning system with a feedback mechanism. CLPP can be used to generate real-time process plans by means of a dynamic feedback from production scheduling system. The process planning mechanism generates process plans based on available resources. Production scheduling provides the information about which machines are available on the shop floor for an incoming job to process planning, so that every plan is feasible and respects to the current availability of production facilities. This dynamic simulation system can enhance the real-time, intuition, and manipulability of process planning system and also enhance the utilization of alternative process plans. CLPP can also be called as
50
3 Review for Integrated Process Planning and Scheduling Process Planning
Scheduling
Production Testing
Real-time Decision System
Fig. 3.2 The basic flowchart of CLPP
dynamic process planning or online process planning. The basic flowchart of CLPP is shown in Fig. 3.2. CLPP can bring the IPPS to a real integration system very well. Khoshnevis and Chen [28] developed an automated planning environment to treat the process planning and scheduling as a unified whole. This method could use a time window to control the planning quantity in every stage. Usher and Fernandes [29] divided the dynamic process planning to the static phase and the dynamic phase. Baker and Maropoulos [30] defined architecture to enable the vertical integration of tooling considerations from early design to process planning and scheduling. The architecture was based on a five-level tool selection procedure, which was mapped to a time-phased aggregate, management, and detailed process planning framework. Seethaler and Yellowley [31] presented a dynamic process planning system, which could give the process plans based on the feedback of the scheduling system. Wang et al. [32] and Zhang et al. [33] introduced a dynamic facilitating mechanism for IPPS in a batch manufacturing environment. Kumar and Rajotia [34] introduced a method of scheduling and its integration with CAPP, so that an online process plan could be generated taking into account the availability of machines and alternative routes. Wang et al. [35] introduced a kind of dynamic CAPP system integration model and used the Back Propagation (BP)-based neural network and relevant algorithm to make machine decision. And, the process plans from this dynamic CAPP system could accord with the production situation and the requirements of the job shop scheduling.
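The CLPP feedback loop can be sketched as follows: the scheduler reports which machines are currently available, and the planner generates only plans that respect that availability, so every plan is feasible by construction. The data and the fastest-machine rule are illustrative, not from the cited systems.

```python
def plan_for_available(operations, available):
    """CLPP-style planning: for each operation, pick the fastest machine among
    those the scheduling system currently reports as available.  Every plan
    produced this way is feasible on the current shop floor by construction."""
    plan = []
    for op, machine_times in operations:
        candidates = {m: t for m, t in machine_times.items() if m in available}
        if not candidates:
            raise ValueError(f"no available machine for {op}")
        machine = min(candidates, key=candidates.get)  # fastest available machine
        plan.append((op, machine))
    return plan

# Feedback from scheduling: M1 is occupied; M2 and M3 are free.
ops = [("drill", {"M1": 2, "M2": 4}), ("mill", {"M2": 3, "M3": 5})]
plan = plan_for_available(ops, available={"M2", "M3"})
# plan == [('drill', 'M2'), ('mill', 'M2')]: the nominally fastest machine M1
# is skipped because the current shop floor status rules it out.
```

The practical difficulty noted in Table 3.1 is visible here: `available` must be fresh at every planning call, which is exactly the real-time data burden of CLPP.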
3.2.3 Distributed Process Planning The methodology of DPP uses a concurrent engineering approach to perform both the process planning and the scheduling simultaneously. It divides the process of planning and scheduling tasks into two phases. The first phase is the initial planning phase. In this phase, the characteristics of parts and the relationship between the parts are analyzed. And, the primary process plans and scheduling plan are determined. The process resources are also evaluated simultaneously. The second phase is the detailed planning phase. It has been divided into two phases: the matching planning phase and the final planning phase. In this phase, the process plans are adjusted to the current status of the shop floor. The detailed process plans and scheduling plans are obtained simultaneously. DPP can also be called as just-in-time process planning, concurrent process planning, or collaborative process planning. The basic flowchart of DPP is shown in Fig. 3.3.
Fig. 3.3 The basic flowchart of DPP (the initial planning phase produces rough process plans and a preliminary scheduling plan based on the current status; the matching and final planning phases then deliver detailed process plans and scheduling plans by matching with the current shop floor status)
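The two DPP phases can be sketched as a rough initial plan that only fixes sequences and candidate machine groups, followed by a matching step that binds each operation to a machine using the current shop floor status. All data and the least-loaded rule are illustrative assumptions.

```python
def initial_planning(part_ops):
    """Phase 1 (initial planning): fix the operation sequence and record the
    candidate machine group per operation, without committing to a machine."""
    return [(op, sorted(machines)) for op, machines in part_ops]

def detailed_planning(rough_plan, machine_load):
    """Phase 2 (matching/final planning): bind each operation to the currently
    least-loaded candidate machine, yielding the detailed process plan and,
    implicitly, a load-balanced schedule."""
    detailed = []
    for op, candidates in rough_plan:
        machine = min(candidates, key=lambda m: machine_load[m])
        machine_load[machine] += 1   # shop floor status evolves as we commit
        detailed.append((op, machine))
    return detailed

rough = initial_planning([("turn", {"M1", "M2"}), ("grind", {"M2", "M3"})])
plan = detailed_planning(rough, machine_load={"M1": 2, "M2": 0, "M3": 0})
# plan == [('turn', 'M2'), ('grind', 'M3')]: 'turn' takes the idle M2, which
# then makes M3 the better match for 'grind'.
```

Because each phase optimizes with only partial information, the combined result is not guaranteed to be globally optimal, which is the hierarchical-approach drawback listed for DPP in Table 3.1.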
Brandimarte and Calderini [36] proposed a two-phase hierarchical method to integrate the two systems. In the first phase, a relaxed version of the problem was solved, yielding an approximation of the set of efficient process plans with respect to cost and load-balancing objectives. Each process plan was then considered and the corresponding scheduling problem solved by TS, and the process plan selection was improved by a two-level hierarchical TS algorithm. Kempenaers et al. [37] demonstrated three modules of a collaborative process planning system. Sadeh et al. [38] described an Integrated Process Planning/Production Scheduling (IP3S) system for agile manufacturing; IP3S was based on a blackboard architecture that supported concurrent development and dynamic revision of solutions, along with powerful workflow management functionalities for 'what-if' development and maintenance of multiple problem assumptions and associated solutions. Wu et al. [39] presented an integration model of IPPS for the distributed virtual manufacturing environment. Zhang et al. [40] presented a framework for Holon-based concurrent process planning. Wang and Shen [41] proposed a DPP methodology that used multi-agent negotiation and cooperation to construct the architecture of the new process planning method; their work also covered supporting technologies such as machining-feature-based planning and function-block-based control. Wang et al. [42] presented the framework of a collaborative process planning system supported by a real-time monitoring system. Sugimura et al. [43] developed an IPPS and applied it to holonic manufacturing systems. Li et al. [44] used a game-theory-based approach to solve IPPS.
3.2.4 Comparison of Integration Models

Every model has its advantages and disadvantages. A comparison of the integration models is given in Table 3.1.
Table 3.1 Comparison of integration models

NLPP
  Advantages: Provides all the alternative process plans of the parts, enhancing the flexibility and availability of process plans
  Disadvantages: Because all alternative process plans of the parts must be generated, the model suffers from a combinatorial-explosion problem

CLPP
  Advantages: Because plans are generated from the current shop floor status, all of them are directly usable
  Disadvantages: CLPP needs real-time data on the current status; if process plans have to be regenerated in every scheduling phase, keeping the real-time data reliable and up to date is difficult

DPP
  Advantages: The model works in an interactive, collaborative, and cooperative way
  Disadvantages: Because the basic integration principle of DPP is a hierarchical approach, it cannot optimize the process plans and scheduling plans as a whole
3.3 Implementation Approaches of IPPS

Various Artificial Intelligence (AI)-based approaches have been developed to solve IPPS. The following sections discuss several typical methods: agent-based approaches, Petri-net-based approaches, and optimization-algorithm-based approaches. A critique of the current approaches is also given.
3.3.1 Agent-Based Approaches of IPPS

The concept of an agent came from AI research [45]. A typical definition of an agent is given by Nwana and Ndumu [46]: "An agent is defined as referring to a component of software and/or hardware which is capable of acting exactly in order to accomplish tasks on behalf of its user". On the basis of this definition, one can conclude that an agent is a software system that communicates and cooperates with other software systems to solve a complex problem that is beyond the capability of any individual software system [12]. The agent-based approach has therefore been considered an important method for studying distributed intelligent manufacturing systems. In the area of IPPS, the agent-based approach has captured the interest of a number of researchers. Shen et al. [2] reviewed the research on manufacturing process planning and scheduling as well as their integration. Wang et al. [12] provided a literature review on IPPS, particularly on agent-based approaches to the problem, and discussed the advantages of the agent-based approach for scheduling. Zhang and Xie [45] reviewed agent technology for collaborative process planning, focusing on how agent technology can be further developed in support of collaborative process planning and on future research issues and directions in process planning.
Gu et al. [47] proposed a Multi-Agent System (MAS) in which the process routes and schedules of a part were accomplished through contract net bids. IDCPPS [48] was an integrated, distributed, and cooperative process planning system. Its process planning tasks were separated into three levels, namely initial planning, decision-making, and detail planning; the results of these three steps were general process plans, a ranked list of near-optimal alternative plans, and the final detailed linear process plans, respectively. The integration with scheduling was considered at each stage of process planning. Wu et al. [39] presented a computerized model that can integrate the manufacturing functions and resolve some of the critical problems in distributed virtual manufacturing; this integration model was realized through a multi-agent approach that provided a practical route to software integration in a distributed environment. Lim and Zhang [49, 50] introduced a multi-agent-based framework for IPPS. This framework could also be used to optimize the utilization of manufacturing resources dynamically, as well as to provide a platform on which alternative configurations of manufacturing systems could be assessed. Denkena et al. [51] developed a MAS-based approach to study IPPS in collaborative manufacturing. Wang and Shen [41] proposed a new DPP methodology, focusing on the architecture of the new approach, which used multi-agent negotiation and cooperation, and on other supporting technologies such as machining-feature-based planning and function-block-based control. Wong et al. [52, 53] developed an online hybrid-agent-based negotiation MAS for integrated process planning with scheduling/rescheduling; by introducing supervisory control into the decentralized negotiations, this approach was able to provide solutions with better global performance. Shukla et al. [54] presented a bidding-based MAS for IPPS.
The proposed architecture consisted of various autonomous agents capable of communicating (bidding) with each other and making decisions based on their own knowledge. Fuji et al. [55] proposed a new IPPS method in which a multi-agent-learning-based integration mechanism resolved the conflict between the optimality of the process plan and that of the production schedule. In this method, each machine made decisions about process planning and scheduling simultaneously and was modeled as a learning agent using evolutionary artificial neural networks to arrive at proper decisions through interactions with the other machines. Nejad et al. [56] proposed an agent-based architecture of an IPPS system for multiple jobs in flexible manufacturing systems. In the literature on agent-based manufacturing applications, much of the research applies simple algorithms, such as dispatching rules, that are suitable for real-time decision-making [57]. These methods are simple and applicable, but they do not guarantee effectiveness for complex problems in manufacturing systems. As efficiency becomes important in agent-based manufacturing, recent research works try to combine the agent-based approach with other techniques such as GA, neural networks, and mathematical modeling methods [57]. Therefore, one future research trend is to present more effective algorithms that improve the effectiveness of agent-based approaches. On the basis of the abovementioned comments, one can conclude that the agent-based approach is an effective method for solving IPPS. Because single-agent
environments cannot solve the problem effectively, MAS is more suitable for solving it [45]. Although the architecture and the negotiation among agents may be very complex, MAS has a promising future in solving this problem [45, 52, 57].
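The contract-net bidding used by several of the cited systems [47, 54] can be sketched in a few lines. The sketch below is illustrative rather than taken from any cited paper: the names `MachineAgent` and `announce`, and the bid rule (earliest completion time wins), are our assumptions.

```python
# Minimal contract-net round (illustrative): each machine agent bids its
# earliest completion time for an announced operation; the manager awards
# the operation to the lowest bidder.

class MachineAgent:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed          # processing-time multiplier (hypothetical)
        self.busy_until = 0         # time at which the machine becomes free

    def bid(self, base_time):
        # Bid = earliest completion time for this operation on this machine.
        return self.busy_until + base_time * self.speed

    def award(self, base_time):
        # Accept the operation and extend this machine's busy horizon.
        self.busy_until += base_time * self.speed
        return self.busy_until

def announce(operation_time, agents):
    # Manager announces an operation, collects bids, awards the best bidder.
    winner = min(agents, key=lambda a: a.bid(operation_time))
    finish = winner.award(operation_time)
    return winner.name, finish

agents = [MachineAgent("M1", 1.0), MachineAgent("M2", 1.5)]
print(announce(10, agents))  # ('M1', 10.0) -- M1 is faster, so it wins
print(announce(10, agents))  # ('M2', 15.0) -- M1 is now busy until t=10
```

Because awarded work raises a machine's future bids, load balances across agents automatically; this is the decentralized behavior the negotiation-based systems above exploit.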
3.3.2 Petri-Net-Based Approaches of IPPS

A Petri net is a mathematical description of a discrete, parallel system, well suited to describing asynchronous and concurrent computer systems. A Petri net has a strict mathematical formulation as well as an intuitive graphical expression. Petri nets have been widely used in Flexible Production Scheduling Systems (FPSS). They can also be used for IPPS: first, the flexible process plans are described by a Petri net; second, this net communicates with the Petri net of the dynamic scheduling system; the process planning system and the scheduling system can then be integrated through the Petri nets. Kis et al. [58] proposed and analyzed an integration model of IPPS based on a multilevel Petri net.
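To make the description concrete, the following toy sketch shows a token-marking view of processing flexibility. It is an illustration in Python, not the multilevel net of Kis et al. [58]; the dictionary-based encoding is our assumption.

```python
# Places hold tokens (job states), transitions are operations; a transition
# may fire only when every one of its input places is marked.

def enabled(marking, transition):
    # A transition is enabled when all its input places hold a token.
    return all(marking.get(p, 0) > 0 for p in transition["in"])

def fire(marking, transition):
    # Firing consumes one token per input place and produces one per output.
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1
    return m

# Two alternative operations (processing flexibility) leading to the same
# finished state -- an OR-choice between Oper_a and Oper_b.
net = {
    "Oper_a": {"in": ["raw"], "out": ["done"]},
    "Oper_b": {"in": ["raw"], "out": ["done"]},
}
m0 = {"raw": 1}
assert enabled(m0, net["Oper_a"]) and enabled(m0, net["Oper_b"])
m1 = fire(m0, net["Oper_a"])            # choosing Oper_a consumes the token
assert not enabled(m1, net["Oper_b"])   # the alternative is now disabled
```

The single token makes the two operations mutually exclusive, which is exactly how a Petri net expresses an alternative in a flexible process plan.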
3.3.3 Algorithm-Based Approaches of IPPS

The basic steps of the algorithm-based approach are as follows. First, the process planning system generates the alternative process plans for all jobs and selects a user-defined number of optimal plans based on the simulation results. Then, the algorithm in the scheduling system simulates scheduling plans based on the alternative process plans for all jobs. Finally, based on the simulation results, the process plan of each job and the scheduling plan are determined. This approach is workable; however, its biggest shortcoming is that the simulation time may be long, so the approach cannot be used in a real manufacturing system. Therefore, one important future research trend is finding an effective algorithm for IPPS and developing an effective system. Within this approach, most research has focused on evolutionary algorithms. Swarm intelligence, other meta-heuristic methods such as SA, TS, and the Artificial Immune System (AIS), and various hybrid algorithms have also been used to solve IPPS. Morad and Zalzala [59] described a GA-based algorithm that initially considered only the time aspect of the alternative machines, and then extended its scope to include the processing capabilities of alternative machines, different tolerance limits, and process costs. Lee and Kim [18] presented the NLPP model, which was based on a GA. Moon et al. [60] proposed an IPPS model for a multi-plant supply chain that behaved like a single company through strong coordination and cooperation toward mutual goals, and developed a GA-based heuristic approach to solve this model. Kim et al. [22] used a symbiotic evolutionary algorithm for IPPS. Zhao et al. [61] used a fuzzy inference system to select alternative machines for IPPS and used a GA to balance the load of every machine. Moon and Seo [62] proposed
an advanced process planning and scheduling model for the multi-plant case and used an evolutionary algorithm to solve it. Chan et al. [63, 64] proposed a GA with dominant genes to solve distributed scheduling problems. Park and Choi [65] designed a GA-based method to solve IPPS by taking advantage of the flexibility that alternative process plans offer. Choi and Park [66] designed a GA-based method to solve IPPS. Moon et al. [67] first developed mixed integer linear programming formulations for IPPS based on the NLPP model, and then used a GA to solve this model. Li et al. [68] presented a GA-based approach to solve this problem. Shao et al. [26] proposed a modified GA-based approach to solve IPPS based on the NLPP model. Rossi and Dini [69] proposed an ant-colony-optimization-based software system for solving FMS scheduling in a job shop environment with routing flexibility, sequence-dependent setup, and transportation time. Guo et al. [1] developed IPPS as a combinatorial optimization model and modified a Particle Swarm Optimization (PSO) algorithm to solve it. Yang et al. [70] and Zhao et al. [71, 72] used a fuzzy inference system to choose alternative machines for IPPS in a job shop manufacturing system and used hybrid PSO algorithms to balance the load of each machine. Palmer [73] proposed an SA-based approach to solve IPPS. Chen [74] used an SA algorithm to solve IPPS in mass customization. Li and McMahon [25] used an SA-based approach for IPPS. Chan et al. [75] proposed an Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve IPPS. Weintraub et al. [76] presented a procedure for scheduling jobs with alternative processes in a general job shop and proposed a TS-based scheduling algorithm to solve it. Chan et al. [77] proposed an AIS-based algorithm embedded with a Fuzzy Logic Controller (FLC), AIS-FLC, to solve the complex IPPS.
Some other methods, such as heuristic dispatching rules [78], neural networks [79], object-oriented integration approaches [80, 81], and web-based approaches [2], have also been applied to IPPS.
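The basic loop of the algorithm-based approach described at the start of this subsection can be sketched as follows. This is an illustrative enumeration with a greedy single-job makespan evaluator, not a specific cited algorithm; the function names, the (machine, time) encoding, and the sample times are our assumptions.

```python
# Generic algorithm-based loop: enumerate alternative process plans,
# evaluate each with a simple scheduling heuristic, keep the best pair.

import itertools

def makespan(plan):
    # Greedy evaluation: operations of one job run in sequence; track when
    # each machine becomes free and when the job finishes.
    machine_free, job_end = {}, 0
    for machine, t in plan:
        start = max(machine_free.get(machine, 0), job_end)
        job_end = start + t
        machine_free[machine] = job_end
    return job_end

def best_plan(alternatives):
    # alternatives: for each operation, a list of (machine, time) options.
    best, best_span = None, float("inf")
    for plan in itertools.product(*alternatives):
        span = makespan(plan)
        if span < best_span:
            best, best_span = plan, span
    return best, best_span

# Oper1 on M1 (41s) or M2 (38s); Oper2 on M5 (75s) or M7 (70s).
alts = [[("M1", 41), ("M2", 38)], [("M5", 75), ("M7", 70)]]
print(best_plan(alts))   # ((('M2', 38), ('M7', 70)), 108)
```

The exhaustive `itertools.product` illustrates why simulation time explodes with plan flexibility; the evolutionary and swarm methods surveyed above replace this enumeration with guided sampling of the same search space.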
3.3.4 Critique of Current Implementation Approaches

Of the three implementation approaches to IPPS described above, the first two (the agent-based approach and the Petri net) are used to model the integration system, while the third (the algorithm-based approach) is used to optimize it. The agent-based approach is a good method for solving IPPS. However, when the number of agents is large, the agents will spend more time processing messages than doing actual work, and it is often difficult to apply generic agent architectures directly to IPPS systems [45]. Therefore, one future research trend is to propose a simpler, more effective, and workable MAS approach for IPPS applications. The Petri net has been used in FPSS. If the flexible process plans can be denoted by a Petri net and linked with the Petri net for the FPSS, a Petri net for IPPS can be implemented.
Therefore, the key technique of Petri nets for IPPS is how to denote the flexible process plans by a Petri net; this is another future research trend for the modeling of IPPS. The algorithm-based approach is workable, but its biggest shortcoming is that the simulation time may be long, so it cannot be used in a real manufacturing system. Therefore, one important future research trend is finding an effective algorithm for IPPS and developing an effective system.
References

1. Guo Y, Li W, Mileham A, Owen G (2009) Applications of particle swarm optimization in integrated process planning and scheduling. Robot Comput Int Manufac 25:280–288
2. Wang Y, Zhang Y, Fuh J, Zhou Z, Xue L, Lou P (2008) A web-based integrated process planning and scheduling system. In: 4th IEEE Conference on Automation Science and Engineering, Washington DC, USA, pp 662–667
3. Wang L (2009) Web-based decision making for collaborative manufacturing. Int J Comput Integr Manuf 22(4):334–344
4. Kumar M, Rajotia S (2005) Integration of process planning and scheduling in a job shop environment. Int J Adv Manuf Technol 28(1–2):109–116
5. Tan W, Khoshnevis B (2000) Integration of process planning and scheduling—a review. J Intell Manuf 11:51–63
6. Chryssolouris G, Chan S, Cobb W (1984) Decision making on the factory floor: an integrated approach to process planning and scheduling. Robot Comput Int Manufac 1(3–4):315–319
7. Chryssolouris G, Chan S (1985) An integrated approach to process planning and scheduling. Annals of the CIRP 34(1):413–417
8. Beckendorff U, Kreutzfeldt J, Ullmann W (1991) Reactive workshop scheduling based on alternative routings. In: Conference on Factory Automation and Information Management, Florida, pp 875–885
9. Khoshnevis B, Chen Q (1989) Integration of process planning and scheduling function. In: IIE Integrated Systems Conference & Society for Integrated Manufacturing Conference, Atlanta, pp 415–420
10. Zhang H (1993) IPPM—a prototype to integrate process planning and job shop scheduling functions. Annals CIRP 42(1):513–517
11. Larsen N (1993) Methods for integration of process planning and production planning. Int J Comput Integr Manuf 6(1–2):152–162
12. Wang L, Shen W, Hao Q (2006) An overview of distributed process planning and its integration with scheduling. Int J Comput Appl Technol 26(1–2):3–14
13.
Jablonski S, Reinwald B, Ruf T (1990) Integration of process planning and job shop scheduling for dynamic and adaptive manufacturing control. Paper presented at the IEEE, pp 444–450
14. Kim K, Song J, Wang K (1997) A negotiation based scheduling for items with flexible process plans. Comput Ind Eng 33(3–4):785–788
15. Kim K, Egbelu J (1998) A mathematical model for job shop scheduling with multiple process plan consideration per job. Produc Plan Control 9(3):250–259
16. Kim K, Egbelu P (1999) Scheduling in a production environment with multiple process plans per job. Int J Prod Res 37(12):2725–2753
17. Saygin C, Kilic S (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
18. Lee H, Kim S (2001) Integration of process planning and scheduling using simulation based genetic algorithms. Int J Adv Manuf Technol 18:586–590
19. Thomalla C (2001) Job shop scheduling with alternative process plans. Int J Prod Econ 74:125–134
20. Yang Y, Parsaei H, Leep H (2001) A prototype of a feature-based multiple-alternative process planning system with scheduling verification. Comput Ind Eng 39:109–124
21. Gan P, Lee K (2002) Scheduling of flexible sequenced process plans in a mould manufacturing shop. Int J Adv Manuf Technol 20:214–222
22. Kim Y, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
23. Kis T (2003) Job shop scheduling with processing alternatives. Eur J Oper Res 151:307–332
24. Jain A, Jain P, Singh I (2006) An integrated scheme for process planning and scheduling in FMS. Int J Adv Manuf Technol 30:1111–1118
25. Li W, McMahon C (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95
26. Shao X, Li X, Gao L, Zhang C (2009) Integration of process planning and scheduling—a modified genetic algorithm-based approach. Comput Oper Res 36:2082–2096
27. Usher J (2003) Evaluating the impact of alternative plans on manufacturing performance. Comput Ind Eng 45:585–596
28. Khoshnevis B, Chen Q (1990) Integration of process planning and scheduling functions. J Intell Manuf 1:165–176
29. Usher J, Fernandes K (1996) Dynamic process planning—the static phase. J Mater Process Technol 61:53–58
30. Baker R, Maropoulos P (2000) An architecture for the vertical integration of tooling considerations from design to process planning. Robot Comput Int Manufact 16:121–131
31. Seethaler R, Yellowley I (2000) Process control and dynamic process planning. Int J Mach Tools Manuf 40:239–257
32. Wang J, Zhang Y, Nee A (2002) Integrating process planning and scheduling with an intelligent facilitator. In: 10th International Manufacturing Conference in China (IMCC2002), Xiamen, China, October 2002
33.
Zhang Y, Saravanan A, Fuh J (2003) Integration of process planning and scheduling by exploring the flexibility of process planning. Int J Prod Res 41(3):611–628
34. Kumar M, Rajotia S (2003) Integration of scheduling with computer aided process planning. J Mater Process Technol 138:297–300
35. Wang Z, Chen Y, Wang N (2004) Research on dynamic process planning system considering decision about machines. In: 5th World Congress on Intelligent Control and Automation, Hangzhou, China, 2004, pp 2758–2762
36. Brandimarte P, Calderini M (1995) A hierarchical bicriterion approach to integrated process plan selection and job shop scheduling. Int J Prod Res 33(1):161–181
37. Kempenaers J, Pinte J, Detand J (1996) A collaborative process planning and scheduling system. Adv Eng Softw 25:3–8
38. Sadeh N, Hildum D, Laliberty T, McANulty J, Kjenstad D, Tseng A (1998) A blackboard architecture for integrating process planning and production scheduling. Concur Eng Res Appl 6(2):88–100
39. Wu S, Fuh J, Nee A (2002) Concurrent process planning and scheduling in distributed virtual manufacturing. IIE Trans 34:77–89
40. Zhang J, Gao L, Chan F, Li P (2003) A holonic architecture of the concurrent integrated process planning system. J Mater Process Technol 139:267–272
41. Wang L, Shen W (2003) DPP: an agent based approach for distributed process planning. J Intell Manuf 14:429–439
42. Wang L, Song Y, Shen W (2005) Development of a function block designer for collaborative process planning. In: CSCWD2005, Coventry, UK, pp 24–26
43. Sugimura N, Shrestha R, Tanimizu Y, Iwamura K (2006) A study on integrated process planning and scheduling system for holonic manufacturing. In: Process planning and scheduling for distributed manufacturing, Springer, pp 311–334
44. Li W, Gao L, Li X, Guo Y (2008a) Game theory-based cooperation of process planning and scheduling. In: CSCWD2008, Xi'an, China, pp 841–845
45. Zhang W, Xie S (2007) Agent technology for collaborative process planning: a review. Int J Adv Manuf Technol 32:315–325
46. Nwana H, Ndumu D (1997) An introduction to agent technology. In: Software agents and soft computing: toward enhancing machine intelligence, concepts and applications, pp 3–36
47. Gu P, Balasubramanian S, Norrie D (1997) Bidding-based process planning and scheduling in a multi-agent system. Comput Ind Eng 32(2):477–496
48. Chan F, Zhang J, Li P (2001) Modelling of integrated, distributed and cooperative process planning system using an agent-based approach. Proceed Ins Mechan Eng Part B: J Eng Manufact 215:1437–1451
49. Lim M, Zhang D (2003) A multi-agent based manufacturing control strategy for responsive manufacturing. J Mater Process Technol 139:379–384
50. Lim M, Zhang D (2004) An integrated agent-based approach for responsive control of manufacturing resources. Comput Ind Eng 46:221–232
51. Denkena B, Zwick M, Woelk P (2003) Multiagent-based process planning and scheduling in context of supply chains. Lect Notes Art Int 2744:100–109
52. Wong T, Leung C, Mak K, Fung R (2006) Integrated process planning and scheduling/rescheduling—an agent-based approach. Int J Prod Res 44(18–19):3627–3655
53. Wong T, Leung C, Mak K, Fung R (2006) Dynamic shopfloor scheduling in multi-agent manufacturing system. Expert Syst Appl 31:486–494
54. Shukla S, Tiwari M, Son Y (2008) Bidding-based multi-agent system for integrated process planning and scheduling: a data-mining and hybrid tabu-SA algorithm-oriented approach. Int J Adv Manuf Technol 38:163–175
55. Fuji N, Inoue R, Ueda K (2008) Integration of process planning and scheduling using multi-agent learning. In: 41st CIRP Conference on Manufacturing Systems, pp 297–300
56. Nejad H, Sugimura N, Iwamura K, Tanimizu Y (2008) Agent-based dynamic process planning and scheduling in flexible manufacturing system. In: 41st CIRP Conference on Manufacturing Systems, pp 269–274
57.
Shen W, Wang L, Hao Q (2006) Agent-based distributed manufacturing process planning and scheduling: a state-of-the-art survey. IEEE Trans Sys Man Cyber Part C: App Rev 36(4):563–577
58. Kis J, Kiritsis D, Xirouchakis P (2000) A petri net model for integrated process and job shop production planning. J Intell Manuf 11:191–207
59. Morad N, Zalzala A (1999) Genetic algorithms in integrated process planning and scheduling. J Int Manufac 10:169–179
60. Moon C, Kim J, Hui S (2002) Integrated process planning and scheduling with minimizing total tardiness in multi-plants supply chain. Comput Ind Eng 43:331–349
61. Zhao F, Hong Y, Yu D (2004) A genetic algorithm based approach for integration of process planning and production scheduling. In: International Conference on Intelligent Mechatronics and Automation, Chengdu, China, 2004, pp 483–488
62. Moon C, Seo Y (2005) Evolutionary algorithm for advanced process planning and scheduling in a multi-plant. Comput Ind Eng 48:311–325
63. Chan F, Chung S, Chan L (2005) An adaptive genetic algorithm with dominated genes for distributed scheduling problems. Expert Sys App 29:364–371
64. Chan F, Chung S, Chan L (2008) An introduction of dominant genes in genetic algorithm for FMS. Int J Prod Res 46(16):4369–4389
65. Park B, Choi H (2006) A genetic algorithm for integration of process planning and scheduling in a job shop. Lect Notes Artific Intell 4304:647–657
66. Choi H, Park B (2006) Integration of process planning and job shop scheduling using genetic algorithm. In: 6th WSEAS International Conference on Simulation, Modeling and Optimization, Lisbon, Portugal, Sep 22–24, 2006, pp 13–18
67. Moon I, Lee S, Bae H (2008) Genetic algorithms for job shop scheduling problems with alternative routings. Int J Prod Res 46(10):2695–2705
68. Li X, Gao L, Zhang G, Zhang C, Shao X (2008) A genetic algorithm for integration of process planning and scheduling problem. Lect Notes Artific Int 5315:495–502
69. Rossi A, Dini G (2007) Flexible job shop scheduling with routing flexibility and separable setup time using ant colony optimisation method. Robot Comput Int Manufac 23:503–516
70. Yang Y, Zhao F, Hong Y, Yu D (2005) Integration of process planning and production scheduling with particle swarm optimization (PSO) algorithm and fuzzy inference systems. In: SPIE International Conference on Control Systems and Robotics, May 2005, vol 6042, no 60421W
71. Zhao F, Zhang Q, Yang Y (2006a) An improved particle swarm optimization (PSO) algorithm and fuzzy inference system based approach to process planning and production scheduling integration in holonic manufacturing system (HMS). In: 5th International Conference on Machine Learning and Cybernetics, Dalian, China, Aug 13–16, 2006, pp 396–401
72. Zhao F, Zhu A, Yu D (2006b) A hybrid particle swarm optimization (PSO) algorithm scheme for integrated process planning and production scheduling. In: 6th World Congress on Intelligent Control and Automation, Dalian, China, June 21–23, 2006, pp 6772–6776
73. Palmer G (1996) A simulated annealing approach to integrated production scheduling. J Intell Manuf 7:163–176
74. Chen Y (2003) An integrated process planning and production scheduling framework for mass customization. PhD dissertation, Hong Kong University of Science & Technology
75. Chan F, Kumar V, Tiwari M (2009) The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling model. Int J Prod Res 47(1):119–142
76. Weintraub A, Cormier D, Hodgson T, King R, Wilson J, Zozom A (1999) Scheduling with alternatives: a link between process planning and scheduling. IIE Trans 31:1093–1102
77.
Chan F, Kumar V, Tiwari M (2006) Optimizing the performance of an integrated process planning and scheduling problem: an AIS-FLC based approach. In: 2006 IEEE Conference on Cybernetics and Intelligent Systems, Bangkok, Thailand, June 7–9, 2006, pp 1–8
78. Gindy N, Saad S, Yue Y (1999) Manufacturing responsiveness through integrated process planning and scheduling. Int J Prod Res 37(11):2399–2418
79. Ueda K, Fuji N, Inoue R (2007) An emergent synthesis approach to simultaneous process planning and scheduling. Annals CIRP 56(1):463–466
80. Bang C (2002) Hybrid integration approach for process planning and shop floor scheduling in agile manufacturing. PhD dissertation, State University of New York at Buffalo
81. Zhang D, Zhang H (1999) A simulation study of an object-oriented integration test bed for process planning and production scheduling. Int J Flex Manuf Syst 11:19–35
82. Wang L, Shen W (eds) (2007) Process planning and scheduling for distributed manufacturing. Springer-Verlag, London
Chapter 4
Improved Genetic Programming for Process Planning
4.1 Introduction

A process plan specifies what raw materials or components are needed to produce a product, and what processes and operations are necessary to transform those raw materials into the final product. It is the bridge between product design and manufacturing. The outcome of process planning is the information on manufacturing processes and their parameters, together with the identification of the machine tools and fixtures required to perform those processes. The traditional manufacturing system research literature generally assumed that there was only one feasible process plan for each job, which implied that no flexibility was possible in the process plan. In a modern manufacturing system, however, most jobs may have a large number of flexible process plans, so flexible process plan selection in a manufacturing environment has become a crucial problem. Because it has a vital impact on manufacturing system performance, several researchers have examined the flexible process plan selection problem in recent years. Sormaz and Khoshnevis [1] described a methodology for the generation of alternative process plans in an integrated manufacturing environment. The procedure includes the selection of alternative machining processes, the clustering and sequencing of machining processes, and the generation of a process plan network. Kusiak and Finke [2] developed a model to select a set of process plans with the minimum cost of removing material and the minimum number of machine tools and other equipment. Bhaskaran and Kumar [3] formalized the selection of process plans with the objective of minimizing the total processing time and the total number of processing steps. Lee and Huy [4] presented a new methodology for flexible operation planning using the Petri net, which was used as a unified framework for both operation planning and plan representation.
Ranaweera and Kamal [5] presented a technique for evaluating processing plans generated by a cooperative intelligent image analysis framework, and this system was able to rank multiple processing plans. Seo and Egbelu [6] used tabu search
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_4
to select a plan based on product mix and production volume. Usher and John [7] used genetic algorithms to determine optimal or near-optimal operation sequences for parts of varying complexity. Tiwari [8] used a genetic algorithm to obtain a set of process plans for a given set of parts and production volumes. Rocha and Ramos [9] used a genetic algorithm approach to generate the sequence of operations and to select the machines and tools that minimize given criteria. Dereli and Filiz [10] introduced the GA-based optimization modules of a process planning system called the Optimized Process Planning System for PRIsmatic parts (OPPS-PRI). Most of these approaches proposed models to optimize flexible process plans; few of them used evolutionary algorithms, and none of them used genetic programming. Yet the evolutionary algorithm is becoming a useful and promising method for solving complex and dynamic problems [11]. This chapter presents a new methodology based on genetic programming that can optimize flexible process planning effectively. GP is one of the Evolutionary Algorithms (EA) [12]. In GP, a computer program is often represented as a tree (a program tree) [13], where the internal nodes correspond to a set of functions used in the program and the external nodes (terminals) indicate variables and constants used as the inputs of functions. Manufacturing optimization has been a major application field for evolutionary computation methods [14], but it has rarely been the subject of genetic programming research [15, 16]. One possible reason for the lack of GP applications in manufacturing optimization is the difficulty of evolving a direct permutation through GP. The remainder of this chapter is organized as follows. Section 4.2 introduces flexible process planning. GP is briefly reviewed in Sect. 4.3.
GP for flexible process planning is described in Sect. 4.4. Case studies and discussion are reported in Sect. 4.5. The last section is the conclusion.
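The program-tree representation just described can be illustrated with a minimal interpreter. This sketch is generic GP machinery, not the encoding developed later in this chapter; the particular function set and tuple-based node encoding are our assumptions.

```python
# Internal nodes hold functions, leaves hold terminals (variables or
# constants); evaluation proceeds recursively from the root.

import operator

FUNCTIONS = {"+": operator.add, "*": operator.mul}

def evaluate(node, env):
    if isinstance(node, tuple):          # internal node: (function, children...)
        f, *children = node
        args = [evaluate(c, env) for c in children]
        return FUNCTIONS[f](*args)
    return env.get(node, node)           # leaf: variable name or constant

# The program tree (+ (* x 2) 3) computes x*2 + 3.
tree = ("+", ("*", "x", 2), 3)
print(evaluate(tree, {"x": 5}))          # 13
```

In GP, crossover swaps subtrees between two such trees and mutation replaces a subtree, which is why evolving a direct permutation (an operation sequence with no natural subtree structure) is the difficulty noted above.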
4.2 Flexible Process Planning

4.2.1 Flexible Process Plans

There are three types of flexibility considered in flexible process planning [17, 18]: operation flexibility, sequencing flexibility, and processing flexibility [19]. Operation flexibility [20], also called routing flexibility [21], relates to the possibility of performing one operation on alternative machines, with possibly distinct processing times and costs. Sequencing flexibility is determined by the possibility of interchanging the sequence of the required operations. Processing flexibility is determined by the possibility of processing the same manufacturing feature with alternative operations or sequences of operations. Better performance on some criteria (e.g., production time) can be obtained by considering these flexibilities [20]. Figure 4.1 shows an example part that consists of three manufacturing features, and the technical specifications for the part are defined in Table 4.1. This part
Fig. 4.1 The example part
Table 4.1 The technical specifications for the part

Feature  Alternative operations   Alternative machines  Working time for each alternative machine (s)
F1       Turning (Oper1)          M1, M2                41, 38
F1       Turning (Oper11)         M3, M4                92, 96
F1       Turning (Oper12)         M5, M6                20, 23
         Fine turning (Oper13)    M1, M2                65, 70
F1       Turning (Oper12)         M5, M6                20, 23
         Grinding (Oper14)        M7, M9                68, 72
F2       Drilling (Oper3)         M2, M4                20, 22
         Reaming (Oper4)          M1, M2, M5            35, 29, 36
         Boring (Oper9)           M2, M3, M4            50, 45, 50
F2       Drilling (Oper6)         M2, M3, M4            25, 20, 27
         Reaming (Oper7)          M7, M8                54, 50
         Boring (Oper9)           M2, M3, M4            50, 45, 50
F2       Reaming (Oper8)          M5, M6                80, 76
         Boring (Oper9)           M2, M3, M4            50, 45, 50
F2       Reaming (Oper15)         M7, M8, M9            50, 56, 52
F3       Turning (Oper2)          M5, M7                75, 70
F3       Milling (Oper5)          M9, M10               49, 47
F3       Milling (Oper10)         M9, M10               70, 73
has three types of flexibility. From Table 4.1, it can be seen that every operation can be processed on alternative machines with distinct processing times (e.g., Oper1 can be processed on M1 or M2 with different processing times), that the manufacturing sequences of feature 1 and feature 3 can be interchanged (Oper1 and Oper10 can be interchanged), and, from the second column of Table 4.1, that every feature has alternative operations (feature 1 has four alternative operations, feature 2 has four, and feature 3 has three).
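As an illustration, the alternatives of feature F3 in Table 4.1 can be encoded as plain data and scanned for the fastest operation/machine pair. The dictionary structure and the helper name `fastest` are our assumptions; the working times come from Table 4.1.

```python
# Operation flexibility (alternative machines per operation) and processing
# flexibility (alternative operations per feature) for feature F3.

f3 = {
    "Oper2":  {"M5": 75, "M7": 70},   # turning
    "Oper5":  {"M9": 49, "M10": 47},  # milling
    "Oper10": {"M9": 70, "M10": 73},  # milling
}

def fastest(feature):
    # Scan every alternative operation and machine, keep the smallest time.
    return min(
        (time, op, machine)
        for op, machines in feature.items()
        for machine, time in machines.items()
    )

print(fastest(f3))   # (47, 'Oper5', 'M10')
```

Of course, when several jobs compete for machines, the locally fastest choice is not necessarily globally best, which is exactly why the scheduling side must be considered together with plan selection.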
4.2.2 Representation of Flexible Process Plans

Many methods have been used to describe the three types of flexibility [22], such as Petri nets [4], AND/OR graphs, and networks. The network representation proposed by Sormaz [1], Kim [20], and Ho [23] is used here. There are three node types in the network: starting node, intermediate node, and ending node [20]. The starting node and the ending node, which are dummy nodes, indicate the start and the end of the manufacturing process of a job. An intermediate node represents an operation; it contains the alternative machines that can perform the operation and the processing time required on each of those machines. The arrows connecting the nodes represent the precedence between them. OR relationships describe the processing flexibility by which the same manufacturing feature can be produced by different process procedures. If the links following a node are connected by an OR-connector, only one of them needs to be traversed (the links connected by the OR-connector are called OR-links). An OR-link path is an operation path that begins at an OR-link and ends where it merges with the other paths; its end is denoted by a JOIN-connector. All links that are not connected by OR-connectors must be visited [20]. Based on the technical specifications and precedence constraints, the flexible process plans of the part can be converted into a network. Figure 4.2 shows the flexible process plan network of the example part (see Fig. 4.1), converted from the technical specifications in Table 4.1; this network will be used in Sect. 4.5. In this network, paths {11}, {12, 13}, and {12, 14} are three OR-link paths. An OR-link path can, of course, contain other OR-link paths, e.g., paths {6, 7} and {8}.
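The node-and-connector structure just described can be sketched as a small data structure. The following Python sketch is an illustration only: the class names and fields are assumptions, not the authors' implementation. The data encode feature F1 from Table 4.1 as four alternative operation paths joined by an OR-connector.

```python
from dataclasses import dataclass

@dataclass
class OpNode:
    """Intermediate node: one operation with its alternative machines."""
    op_id: int
    machines: dict        # machine number -> working time (s)

@dataclass
class OrGroup:
    """OR-connector: exactly one of the alternative paths is traversed."""
    paths: list           # each path is an ordered list of OpNode

# Feature F1 of the example part (Table 4.1): four alternative routes.
f1 = OrGroup(paths=[
    [OpNode(1,  {1: 41, 2: 38})],                               # Oper1
    [OpNode(11, {3: 92, 4: 96})],                               # Oper11
    [OpNode(12, {5: 20, 6: 23}), OpNode(13, {1: 65, 2: 70})],   # Oper12 -> Oper13
    [OpNode(12, {5: 20, 6: 23}), OpNode(14, {7: 68, 9: 72})],   # Oper12 -> Oper14
])
```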
4.2.3 Mathematical Model of Flexible Process Planning

In this chapter, the optimization objective of the flexible process planning problem is to minimize the production time (which comprises working time and transmission time). In solving this problem, the following assumptions are made [20]:

(1) Each machine can handle only one job at a time.
(2) All machines are available at time zero.
Fig. 4.2 Flexible process plans network
(3) After a job is processed on a machine, it is immediately transported to the next machine in its process, and the transmission time between machines is constant.
(4) The different operations of one job cannot be processed simultaneously.

Based on these assumptions, the mathematical model of flexible process planning is described as follows. The notations used to explain the model are:
N — the total number of jobs;
G_i — the total number of flexible process plans of the ith job;
O_ijl — the jth operation in the lth flexible process plan of the ith job;
P_il — the number of operations in the lth flexible process plan of the ith job;
k — the alternative machine corresponding to O_ijl;
TW(i, j, l, k) — the working time of operation O_ijl on the kth alternative machine;
TS(i, j, l, k) — the starting time of operation O_ijl on the kth alternative machine;
TT(i, l, (j, k1), (j+1, k2)) — the transmission time between the k1th alternative machine of O_ijl and the k2th alternative machine of O_i(j+1)l;
TP(i) — the production time of the ith job.
The objective function is

min TP(i) = \sum_{j=1}^{P_{il}} TW(i, j, l, k) + \sum_{j=1}^{P_{il}-1} TT(i, l, (j, k_1), (j+1, k_2)),
    i ∈ [1, N], j ∈ [1, P_{il}], l ∈ [1, G_i]    (4.1)

Each machine can handle only one job at a time. This is the machine constraint:

TS(i, j_2, l, k) − TS(i, j_1, l, k) > TW(i, j_1, l, k),
    i ∈ [1, N], j_1, j_2 ∈ [1, P_{il}], l ∈ [1, G_i]    (4.2)

The different operations of one job cannot be processed simultaneously. This is the constraint between the different operations of one job:

TS(i, (j+1), l, k_2) − TS(i, j, l, k_1) > TW(i, j, l, k_1),
    i ∈ [1, N], j ∈ [1, P_{il}], l ∈ [1, G_i]    (4.3)
The objective function is Eq. (4.1), and the two constraints are in Eqs. (4.2) and (4.3).
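As a quick illustration of Eq. (4.1), the production time of one chosen plan is the sum of the working times of its operations plus the transmission times between consecutive machines. The function and data names in this sketch are illustrative assumptions; the transmission time used in the example is made up.

```python
# Sketch of Eq. (4.1) for one chosen plan: `plan` is a list of
# (operation, machine) pairs, `work` maps (operation, machine) -> TW,
# and `trans` maps (machine_from, machine_to) -> TT.
def production_time(plan, work, trans):
    tw = sum(work[(op, m)] for op, m in plan)                 # total working time
    tt = sum(trans[(plan[j][1], plan[j + 1][1])]              # total transmission time
             for j in range(len(plan) - 1))
    return tw + tt

# Toy example: Oper1 on M1 (41 s), then Oper5 on M9 (49 s),
# with an assumed transmission time of 8 between M1 and M9.
work = {(1, 1): 41, (5, 9): 49}
trans = {(1, 9): 8}
print(production_time([(1, 1), (5, 9)], work, trans))  # 41 + 49 + 8 = 98
```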
4.3 Brief Review of GP

Genetic Programming (GP) was introduced by Koza [13, 24] as a method that uses natural selection and genetics as a basis for automatically creating computer programs. For a given problem, the working steps of GP are as follows:

Step 1 Initialize the population with randomly generated computer programs (trees).
Step 2 Evaluate every individual in the population.
Step 3 Produce the next generation:
(1) Reproduction. Reproduce some excellent individuals and delete the same number of inferior individuals.
(2) Crossover. According to a user-defined probability, some individuals are selected for crossover. For each selected pair of trees, a crossover point is chosen randomly, two offspring (trees) are produced by the crossover operation, and they are placed into the new generation.
(3) Mutation. According to a user-defined probability, some individuals are selected for mutation. For each selected tree, a mutation point is chosen randomly, one offspring (tree) is produced by the mutation operation, and it is placed into the new generation.
Step 4 Repeat steps 2 and 3 until the terminating condition is satisfied.

There are a number of issues to be considered in a GP system [12]:
(1) Definition of the functions and terminals used in the generated trees.
(2) Definition of a fitness function for evaluating trees and of the way trees are evaluated.
(3) Generation of the initial population.
(4) Selection strategies for trees to be included in the next generation.
(5) How the reproduction, crossover, and mutation operations are carried out and how often they are performed.
(6) Criteria for terminating the evolution process and the way to check whether they are satisfied.
(7) Return of the final results.
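The four steps above can be sketched as a generic GP main loop. This is a minimal illustration, not the implementation used in this chapter: the operator functions are passed in as parameters, and the demo evolves a toy integer problem rather than process-plan trees.

```python
import random

def run_gp(init, fitness, reproduce, crossover, mutate,
           pop_size=30, max_gen=15, pc=0.5, pm=0.05):
    pop = [init() for _ in range(pop_size)]              # Step 1: initialize
    for _ in range(max_gen):                             # Step 4: iterate
        scored = [(fitness(ind), ind) for ind in pop]    # Step 2: evaluate
        nxt = reproduce(scored)                          # Step 3(1): reproduction
        while len(nxt) < pop_size:
            r = random.random()
            if r < pc:                                   # Step 3(2): crossover
                a, b = random.sample(pop, 2)
                nxt.extend(crossover(a, b))
            elif r < pc + pm:                            # Step 3(3): mutation
                nxt.append(mutate(random.choice(pop)))
            else:
                nxt.append(random.choice(pop))
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy demo: evolve an integer toward the target value 5.
random.seed(1)
best = run_gp(
    init=lambda: random.randint(0, 10),
    fitness=lambda x: -abs(x - 5),
    reproduce=lambda scored: [max(scored)[1]],           # keep the elite
    crossover=lambda a, b: [(a + b) // 2, (a + b + 1) // 2],
    mutate=lambda x: min(10, max(0, x + random.choice([-1, 1]))))
```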
4.4 GP for Flexible Process Planning

Using GP for flexible process planning has some advantages; in particular, GP provides a natural mathematical representation of flexible process plans. This section describes how GP can be used to optimize flexible process planning.
4.4.1 The Flowchart of Proposed Method

Fig. 4.3 Flowchart of GP: flexible process plans → initialize population (Gen = 0) → evaluate (fitness function) → GP operators (reproduction, crossover, mutation) → Gen = Gen + 1 → if the terminating condition is satisfied, output a near-optimal process plan; otherwise repeat

Figure 4.3 shows the flowchart of the proposed method (GP for flexible process
4.4 GP for Flexible Process Planning
69
planning). First, the CAPP system provides the flexible process plans. The search then begins with an initial population. Each individual consists of two parts: one part is the sequence of operations together with the set of machines used to perform them, and the other part is composed of discrimination values. A detailed description of an individual is given in Sect. 4.4.2. The remaining steps of the method are the same as in common GP.
4.4.2 Convert Network to Tree, Encoding, and Decoding

1. Convert network to tree

From Fig. 4.2, it is known that flexible process plans can be represented as a network, whereas in GP an individual is usually represented as a tree (see Sect. 4.3). The key to the proposed method is therefore how to convert the network into a tree. A conversion method has been devised: the first step is to delete the ending node of the network; the second step is to disentwine the JOIN-connectors; and the last step is to add the intermediate node that follows each JOIN-connector to the endpoint of every OR-link linked by that JOIN-connector. The network is thereby converted into a tree. A part of the job network (see Fig. 4.2) is taken as an example to explain the conversion (see Fig. 4.4). The procedure is as follows:
Fig. 4.4 How to convert the network to tree
Step 1 Delete the ending node.
Step 2 Disentwine the JOIN2-connector and the JOIN3-connector.
Step 3 Add operation 9 (the intermediate node that follows the JOIN2-connector and the JOIN3-connector) to the endpoints of paths {3, 4}, {6, 7}, and {8} (the OR-links linked by the JOIN2-connector and the JOIN3-connector), respectively.

2. Encoding and decoding

GP uses a tree hierarchy to express problems, and each tree within an individual produces one output. The nodes that make up a tree can be classified into two sets: the function set and the terminal set. A function node is a method; a terminal node is a value of the problem. Each node has zero or more inputs and uses those inputs to create its output, and a node can have any number of inputs. A terminal can be thought of as a zero-argument function; input features and constants are represented by terminal nodes. A node with one or more inputs is a function, and its output depends on its inputs; addition, subtraction, multiplication, and division are all functions. In this chapter, each tree of each individual is generated from the function set F = {switch-case, link} and the terminal set T = {discrimination value, gene}. Switch-case is a conditional expression. Link, which joins nodes together, is a user-defined function whose output is a list containing the nodes joined by this function; the sequence of the list runs from top to bottom. A discrimination value encodes an OR-connector as a decimal integer; it works together with the switch-case function to decide which OR-link is chosen. A gene is a structure made up of two parts: the first number is an operation, which can be any operation of the job, even one that may not be performed because of alternative operation procedures; the second is an alternative machine, namely the machine on which the operation in the same gene is processed.

The encoding scheme of a tree is a list with two parts: part I is made up of genes, and part II is made up of discrimination values. Figure 4.5 shows an example individual of the job (see Fig. 4.1). Taking gene (2, 5) as an example, 2 is an operation of the job, and 5 is the alternative machine corresponding to operation 2. In the encoding scheme of this individual, shown in Fig. 4.5, part I is made up of 19 genes and part II of five discrimination values. The encoding is decoded directly: the interpretation of part II of an individual's encoding scheme decides the selection of the OR-link paths, which contain operations and the corresponding machines, and the operations remaining in part I are then read, in order, as the operation sequence and the corresponding machine sequence for the job. In the above encoding example, the operation sequence together with the corresponding machine sequence is (1, 1)-(5, 9)-(6, 3)-(7, 8)-(9, 2).
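The decoding just described can be sketched as follows. The dictionary `or_choices` is an assumption, not the exact network of Fig. 4.2: it maps each discrimination value to the set of operations kept by the corresponding OR-link, and part I is shortened to six genes for readability. Filtering part I by the selected operations reproduces the example sequence (1, 1)-(5, 9)-(6, 3)-(7, 8)-(9, 2).

```python
# Hypothetical decoding sketch: part I holds one (operation, machine) gene
# per alternative operation in a fixed order; part II holds one
# discrimination value per OR-connector.
def decode(part1, part2, or_choices):
    keep = set()
    for value, choices in zip(part2, or_choices):
        keep |= choices[value]            # operations on the chosen OR-link
    return [(op, m) for op, m in part1 if op in keep]

part1 = [(1, 1), (5, 9), (6, 3), (7, 8), (8, 5), (9, 2)]
or_choices = [{1: {1, 5, 8, 9}, 2: {1, 5, 6, 7, 9}}]   # one OR-connector
print(decode(part1, [2], or_choices))
# [(1, 1), (5, 9), (6, 3), (7, 8), (9, 2)]
```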
Fig. 4.5 A tree of individual
4.4.3 Initial Population and Fitness Evaluation

1. Initial population

To run the evolutionary algorithm, an initial population is needed. In GP the initial population is usually generated randomly, but when generating individuals for an initial population of flexible process planning, a feasible operation sequence in each process plan has to be ensured. A feasible operation sequence means that the order of elements in the encoding does not violate the precedence relations of the operations [20]. A method is therefore used to generate random, feasible individuals as follows:

Step 1 Part I of the initial individual contains all the alternative operations, and the sequence of operations is fixed.
Step 2 The second number of each gene in part I is created by randomly assigning a machine from the set of machines that can perform the operation placed at the corresponding position in part I.
Step 3 Part II of the initial individual, which represents the OR-link paths, is initialized by randomly generating a decimal integer for each component of this part. The selection range of each discrimination value is decided by the number of OR-link paths controlled by that value. For example, if it
has three OR-link paths, the discrimination value is a random decimal integer in [1, 3].

2. Fitness evaluation

The objective of the flexible process planning problem is to minimize the production time (working time plus transmission time). The adjusted fitness, which is to be maximized, is used as the objective and is calculated as

f(i, t) = 1 / TP(i, t)    (4.4)

where
S — the size of the population;
M — the maximal number of generations;
t — the generation index, t = 1, 2, 3, …, M;
TP(i, t) — the production time of the ith job in the tth generation (see Eq. (4.1)).

The fitness function in Eq. (4.4) is calculated for each individual in the population.
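The initialization procedure above can be sketched as follows; the operation list, machine alternatives, and OR-connector sizes are illustrative assumptions, not the exact job of Fig. 4.2.

```python
import random

def random_individual(ops, machines_for, or_sizes):
    # Steps 1 and 2: all alternative operations in a fixed order, each
    # paired with a randomly chosen alternative machine.
    part1 = [(op, random.choice(machines_for[op])) for op in ops]
    # Step 3: one random discrimination value per OR-connector, drawn from
    # [1, number of OR-link paths controlled by that value].
    part2 = [random.randint(1, n) for n in or_sizes]
    return part1, part2

ops = [1, 5, 6, 7, 9]
machines_for = {1: [1, 2], 5: [9, 10], 6: [2, 3, 4], 7: [7, 8], 9: [2, 3, 4]}
p1, p2 = random_individual(ops, machines_for, or_sizes=[3, 2])
```

Any individual produced this way is feasible by construction: the operation order is fixed, and every machine and discrimination value is drawn from its valid range.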
4.4.4 GP Operators

It is important to employ good operators that deal with the problem effectively and efficiently lead to excellent individuals in the population. GP operators can generally be divided into three classes: reproduction, crossover, and mutation, and in each class a large number of operators have been developed [25].

1. Reproduction

A tournament selection scheme with a user-defined reproduction probability is used for the reproduction operation. In tournament selection, a number of individuals are selected at random from the population (depending on the tournament size, typically between 2 and 7), and the individual with the best fitness is chosen for reproduction. The tournament selection approach allows a tradeoff between exploration and exploitation of the gene pool [25], and the selection pressure can be modified by changing the tournament size. The scheme has two working steps:

Step 1 Randomly select a user-defined number (the tournament size) of individuals from the population to compose a group.
Step 2 Copy the best member of the group (the one with the best fitness value) to the following generation, and then apply the tournament selection scheme to the remaining individuals.
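The two steps of the tournament selection scheme can be sketched as follows; the population and fitness function here are illustrative.

```python
import random

def tournament_select(population, fitness, size=2):
    group = random.sample(population, size)   # Step 1: random group
    return max(group, key=fitness)            # Step 2: best of the group

pop = [3, 7, 1, 9, 4]
winner = tournament_select(pop, fitness=lambda x: x, size=3)
```

Increasing `size` raises the selection pressure; with `size` equal to the population size, the scheme degenerates to always picking the best individual.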
2. Crossover

Subtree exchange crossover is used as the crossover operator, and a fitness-proportional selection scheme with a user-defined crossover probability selects the individuals for the crossover operation. Subtree exchange crossover generates feasible offspring that satisfy the precedence restrictions and avoid duplication or omission of operations, as follows. A cut point is chosen randomly in the tree; the subtree before the cut point in one parent (parent 1) is passed to the same position in the offspring (child 1), and the rest of child 1 is made up of the subtree after the cut point in the other parent (parent 2). The other offspring (child 2) is made up of the subtree before the cut point in parent 2 and the subtree after the cut point in parent 1. An example of the crossover is presented in Fig. 4.6, where the cut point is marked with a black circle. The crossover operator produces feasible trees: both parents are feasible, and the offspring are created without violating the feasibility of the parents.
Fig. 4.6 Subtree exchange crossover
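On the list encoding of Sect. 4.4.2, a one-cut version of this exchange can be sketched as follows. It preserves feasibility because every individual lists the same operations in the same fixed order, so swapping tails can neither duplicate nor drop an operation. This is an illustrative simplification of the tree operator in Fig. 4.6, and the parents below are made up.

```python
import random

def crossover(parent1, parent2):
    cut = random.randint(1, len(parent1) - 1)      # random cut point
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

p1 = [(1, 1), (5, 9), (6, 3), (7, 8), (9, 2)]
p2 = [(1, 2), (5, 10), (6, 4), (7, 7), (9, 3)]
c1, c2 = crossover(p1, p2)
```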
Fig. 4.7 Point mutation
3. Mutation

Point mutation is used as the mutation operator, and a random selection scheme with a user-defined mutation probability selects the individuals for the mutation operation. Each selected individual is mutated as follows. First, the point mutation scheme is applied to change an alternative machine represented in a gene (see Fig. 4.5) of the tree: a gene is randomly chosen from the selected individual, and the second element of the gene is mutated by changing the machine number to another of the alternative machines at random. Second, another mutation is carried out to alter the OR-link path; this is associated with part II of the encoding scheme of the tree. A discrimination value is randomly chosen from the selected individual and mutated by changing its value randomly within its selection range. In the example depicted in Fig. 4.7, the mutation points are marked: gene (5, 9) has changed into (5, 10), and the selected discrimination value has changed from 1 to 2.
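The two-part point mutation can be sketched on the list encoding as follows: change the machine of one randomly chosen gene to another alternative machine, then re-draw one randomly chosen discrimination value within its selection range. The example data are illustrative assumptions, not the exact individual of Fig. 4.5.

```python
import random

def point_mutation(part1, part2, machines_for, or_sizes):
    part1, part2 = list(part1), list(part2)
    i = random.randrange(len(part1))
    op, m = part1[i]
    others = [x for x in machines_for[op] if x != m]
    if others:                                   # keep the gene valid
        part1[i] = (op, random.choice(others))
    j = random.randrange(len(part2))
    part2[j] = random.randint(1, or_sizes[j])    # new discrimination value
    return part1, part2

machines_for = {1: [1, 2], 5: [9, 10], 6: [2, 3, 4], 7: [7, 8], 9: [2, 3, 4]}
genes = [(1, 1), (5, 9), (6, 3), (7, 8), (9, 2)]
child1, child2 = point_mutation(genes, [1, 2], machines_for, or_sizes=[3, 2])
```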
4.5 Case Studies and Discussion

Experiments have been conducted to measure the adaptability and superiority of the proposed GP approach, and the algorithm has been compared with the Genetic Algorithm (GA), another popular heuristic algorithm. The experiments and the comparison show that the performance of the approach is satisfactory.
4.5.1 Implementation and Testing

To test the proposed approach, two jobs with flexible process plans have been generated. Job 1 has been given in Figs. 4.1 and 4.2, and job 2 is
changed from job 1 by assuming that the second machine is broken in the current shop status. There are ten machines on the shop floor, and the machine codes in job 2 are the same as in job 1. The transmission times between the machines (in the same time units as the processing times in Fig. 4.2) are given in Table 4.2. The objective is to optimize the flexible process plans by maximizing the objective function f(i, t) (Eq. (4.4)). The GP parameters for the two jobs are given in Table 4.3, and the terminating condition is reaching the maximum generation. The GP is coded in C++ and run on a PC (Pentium (R) 4, CPU 2.40 GHz). The experiments are carried out for the objective of minimizing the production time (see Sect. 4.2.3). The experimental results for the two jobs (fitness is the adjusted fitness), which include the Best Individual's Fitness (BIF), the Maximum Population's Average Fitness (MPAF), and the CPU time, are reported in Table 4.4. The best process plan for each job, derived from these results, is shown in Table 4.5, and Fig. 4.8 illustrates the convergence curves of GP for the two jobs; the curves show the search capability and evolution speed of the algorithm. Comparing job 1 with job 2 in Table 4.5, the only difference between them is that the second machine is broken, yet their best process plans are completely different. This reveals that an accident on the shop floor can completely change the best process plan, so how to optimize flexible process planning to respond quickly to the current shop status becomes a very important problem. The method proposed in this chapter uses genetic programming to optimize flexible process planning. The experimental results of Table 4.4 and Fig.
4.8 show that the GP-based approach can reach good solutions in a short time, so it is a promising method for solving the optimization of the flexible process planning problem. The results also show that the proposed method can reach near-optimal solutions in the early stage of evolution; hence, to respond to the current shop status, the proposed method can select near-optimal process plans quickly and effectively.
4.5.2 Comparison with GA

The algorithm has been compared with GA, with the objective of minimizing the production time. The GA is coded in C++ and run on the same PC as the GP. The GA parameters for the two jobs are given in Table 4.3; the fitness-proportional selection scheme, single-point crossover, and point mutation are used as the reproduction, crossover, and mutation operators, respectively. Figure 4.9 illustrates the convergence curves of the two algorithms for the two jobs. From Fig. 4.9, it can be observed that both approaches achieve good results. The GP-based approach usually takes less time (fewer than 30 generations) to find optimal solutions, while the GA-based approach is slower (nearly 60 generations). The GP-based approach can also find
Table 4.2 The transmission time between the machines (a 10 × 10 matrix of constant transmission times between machines M1–M10, with zeros on the diagonal; e.g., the transmission times from M1 to M2, M3, …, M10 are 5, 8, 12, 15, 4, 6, 10, 13, and 18 time units)
Table 4.3 The GP and GA parameters

Parameters                               | GP Job1 | GP Job2 | GA Job1 | GA Job2
The size of the population, S            | 400     | 400     | 400     | 400
Total number of generations, M           | 30      | 30      | 60      | 60
Probability of reproduction operation, p | 0.05    | 0.05    | 0.05    | 0.05
Probability of crossover operation, pc   | 0.50    | 0.50    | 0.50    | 0.50
Probability of mutation operation, pm    | 0.05    | 0.05    | 0.05    | 0.05
Tournament size, b                       | 2       | 2       | 2       | 2
Table 4.4 Experiment results

Job | BIF        | MPAF       | CPU time (s)
1   | 0.00467289 | 0.00444361 | 113.6
2   | 0.00444444 | 0.00437222 | 130.9

Table 4.5 Best process plan of each job

Job | Best process plan                  | Production time
1   | (1, 2)-(2, 7)-(3, 2)-(4, 2)-(9, 3) | 213
2   | (10, 9)-(11, 3)-(15, 7)            | 224
Fig. 4.8 Convergence curves of GP. (1) Job 1 (2) Job 2
near-optimal solutions more quickly than the GA-based approach, so when applied to large-scale problems in the real world, it can greatly reduce computation time with only a small loss in solution quality. Overall, the experimental results indicate that the GP-based approach is a more acceptable optimization approach for flexible process planning.
Fig. 4.9 The comparison between the convergence curves of GP and GA. (1) Job 1 (2) Job 2
4.6 Conclusion

A new approach using Genetic Programming (GP) is proposed to optimize flexible process planning. The flexible process plans and the mathematical model of flexible process planning have been described, and a network representation is adopted to describe the flexibility of process plans. To suit GP, the network has been converted into a tree, and efficient genetic representations and operator schemes have also been considered. Case studies have been used to test the algorithm, and a comparison has been made between this approach and GA, another popular evolutionary approach, to indicate the adaptability and superiority of the GP-based approach. The experimental results show that the proposed method is a promising and very effective method for the optimization of flexible process planning.
References

1. Sormaz D, Khoshnevis B (2003) Generation of alternative process plans in integrated manufacturing systems. J Intell Manuf 14:509–526
2. Kusiak A, Finke G (1988) Selection of process plans in automated manufacturing systems. IEEE J Robot Autom 4(4):397–402
3. Bhaskaran K (1990) Process plan selection. Int J Prod Res 28(8):1527–1539
4. Lee KH, Jung MY (1994) Petri net application in flexible process planning. Comput Ind Eng 27:505–508
5. Kamal R, Jagath S (2003) Processing plan selection algorithms for a cooperative intelligent image analysis system. In: Proceedings of the International Conference on Imaging Science, Systems and Technology, pp 576–582
6. Seo Y, Egbelu PJ (1996) Process plan selection based on product mix and production volume. Int J Prod Res 34(9):2369–2655
7. Usher JM, Bowden RO (1996) Application of genetic algorithms to operation sequencing for use in computer-aided process planning. Comput Ind Eng 30(4):999–1013
8. Tiwari MK, Tiwari SK, Roy D, Vidyarthi NK, Kameshewaran S (1999) A genetic algorithm based approach to solve process plan selection problems. In: IEEE Proceedings of the Second International Conference on Intelligent Processing and Manufacturing of Materials, vol 1, pp 281–284
9. Rocha J, Ramos C, Vale Z (1999) Process planning using a genetic algorithm approach. In: IEEE Proceedings of the International Symposium on Assembly and Task Planning, pp 338–343
10. Dereli T, Filiz HI (1999) Optimisation of process planning functions by genetic algorithms. Comput Ind Eng 36:281–308
11. Moriarty DE, Miikkulainen R (1997) Forming neural networks through efficient and adaptive coevolution. Evol Comput 5:372–399
12. Kramer MD, Zhang D (2000) GAPS: a genetic programming system. In: Proceedings of the 24th Annual International Computer Software and Applications Conference (IEEE COMPSAC), pp 614–619
13. Koza JR (1992) Genetic programming: on the programming of computers by means of natural selection and genetics. MIT Press, Cambridge, MA
14. Dimopoulos C, Zalzala AMS (2001) Investigating the use of genetic programming for a classic one-machine scheduling problem. Adv Eng Softw 32:489–498
15. Garces PJ, Schoenefeld DA, Wainwright RL (1996) Solving facility layout problems using genetic programming. In: Proceedings of the 1st Annual Conference on Genetic Programming, pp 182–190
16. McKay BM, Willis MJ, Hiden HG, Montague GA, Barton GW (1996) Identification of industrial processes using genetic programming. In: Proceedings of the Conference on Identification in Engineering Systems 1996, pp 510–519
17. Hutchinson GK, Flughoeft KAP (1994) Flexible process plans: their value in flexible automation systems. Int J Prod Res 32(3):707–719
18. Saygin C, Kilic SE (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
19. Benjaafar S, Ramakrishnan R (1996) Modeling, measurement and evaluation of sequencing flexibility in manufacturing systems. Int J Prod Res 34:1195–1220
20. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
21. Lin YJ, Solberg JJ (1991) Effectiveness of flexible routing control. Int J Flex Manuf Syst 3:189–211
22. Catron AB, Ray SR (1991) ALPS—a language for process specification. Int J Comput Integr Manuf 4:105–113
23. Ho YC, Moodie CL (1996) Solving cell formation problems in a manufacturing environment with flexible processing and routing capabilities. Int J Prod Res 34:2901–2923
24. Koza JR (1990) Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems. Tech Rep STAN-CS-90-1314, Stanford University Computer Science Department
25. Langdon WB, Qureshi A (1995) Genetic programming—computers using "Natural Selection" to generate programs. Tech Rep RN/95/76, University College London, Gower Street, London WC1E 6BT, UK
26. Banzhaf W, Nordin P (1998) Genetic programming: an introduction. Morgan Kaufmann, San Francisco, CA
Chapter 5
An Efficient Modified Particle Swarm Optimization Algorithm for Process Planning
5.1 Introduction

A process plan specifies what raw materials or components are needed to produce a product and what processes and operations are necessary to transform those raw materials into the final product. It bridges product design and the downstream manufacturing specifications by translating design features into machining process instructions [1]. The outcome of process planning is the information on the manufacturing processes and their parameters, together with the identification of the machine tools and fixtures required to perform those processes. In a modern manufacturing system, process planning contains two subproblems: operations selection and operations sequencing [2]. Operations selection is the act of selecting the operations needed to achieve the processing features of a part and determining the relevant manufacturing resources for each operation. Operations sequencing is the act of determining a sequence of operations subject to the precedence constraints among machining features. Consequently, process planning has several flexibilities. In the operations selection phase, different combinations of operations may be selected to achieve the same manufacturing feature (processing flexibility), and alternative resources may exist for each operation (such as machine flexibility); in the operations sequencing phase, different sequences of operations construct different process plans for the same part (sequence flexibility). Therefore, most parts may have a large number of alternative process plans because of the flexibilities of processing, resources, and sequences. However, the traditional manufacturing system research literature assumed that there was only one process plan for each part, i.e., that no flexibility was considered in the process plan, which does not match the status of current manufacturing systems. Process planning in a manufacturing environment has therefore become a crucial problem.
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020. X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_5

Because it has a vital impact on manufacturing system performance, process planning has received significant attention from many researchers and engineers over the past 20 years, and numerous approaches have been proposed to obtain
optimal or near-optimal solutions to this intractable problem. Unfortunately, the available machining resources in the workshop, the geometrical and technological requirements of the part, and the precedence relationships among all the operations make conducting operations selection and operations sequencing simultaneously a combinatorial optimization problem. Process planning can be modeled as a constraint-based Traveling Salesman Problem (TSP) [3]; because the TSP is proved to be NP-hard, process planning is also NP-hard. Effective methods are therefore needed to solve this problem in reasonable computational time. Recently, most works have applied meta-heuristic algorithms to the process planning problem, such as the Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), and hybrid algorithms. Among these meta-heuristic methods, PSO is very effective; however, the traditional PSO algorithm excels at continuous problems, whereas the process planning problem is a discrete combinatorial optimization problem. Therefore, this chapter modifies the PSO algorithm so that it can solve discrete problems, and considers operations selection and operations sequencing concurrently to achieve optimal or near-optimal solutions. The remainder of this chapter is organized as follows. Section 5.2 reviews related work. Section 5.3 formulates the problem. The modified PSO for process planning is proposed in Sect. 5.4. Experimental studies and discussions are reported in Sect. 5.5, and Sect. 5.6 gives conclusions and future research.
5.2 Related Work

5.2.1 Process Planning

The proposed methods for process planning can be divided into two categories: exact methods and approximation methods. The exact methods include mathematical programming and the Branch and Bound method; the main approximation methods are artificial-intelligence-based approaches. Xu et al. [2] have reviewed some current works. Kusiak [4] proposed an integer programming approach for the process planning problem. Gan et al. [5] presented a Branch and Bound algorithm-based process planning system for plastic injection mold bases. Xu et al. [6] proposed a clustering-based modeling scheme of the manufacturing resources for process planning. Xu et al. [7] also proposed a novel process planning schema based on the process of knowledge customization. However, exact algorithms are not effective for solving large-scale combinatorial optimization problems. Therefore, most presented methods for process planning have focused on artificial-intelligence-based methods, such as the holonic-based approach [8], the agent-based approach [9], the psycho-clonal-algorithm-based approach
[10], artificial immune systems [11], flower pollination with artificial bees [12], the symbiotic evolutionary algorithm [13], genetic programming [14], GA, Ant Colony Optimization (ACO), SA, tabu search [15, 16], the imperialist competitive algorithm [17], web-service-based approaches, hybrid algorithms, and so on. Zhang et al. [18] proposed a GA for a novel Computer-aided Process Planning (CAPP) model in a job shop manufacturing environment. Li et al. [19] applied a GA to CAPP in distributed manufacturing environments. Hua et al. [20] proposed a GA-based synthesis approach for machining scheme selection and operation sequencing optimization. Salehi et al. [21] used a GA for CAPP in preliminary and detailed planning. Musharavati and Hamouda [22] proposed a modified GA for manufacturing process planning in multiple-parts manufacturing lines. Salehi and Bahreininejad [23] applied intelligent search and a GA simultaneously to optimize process planning effectively. Krishna and Mallikarjuna Rao [24] used an ACO to optimize operation sequences in CAPP. Tiwari et al. [25] successfully applied an ACO algorithm to the complex process plan selection problem. Liu et al. [3] proposed an ACO algorithm for the process planning problem to cut down the total cost of the machining process. Ma et al. [26] developed an SA algorithm for process planning. Mishra et al. [27] adopted a fuzzy goal programming model with multiple conflicting objectives and constraints for the machine tool selection and operation allocation problem, and proposed a random search method termed quick converging SA to solve it. Musharavati et al. [28, 29] developed an enhanced SA-based algorithm and an SA with auxiliary knowledge to solve process planning problems in reconfigurable manufacturing. Li et al. [30] wrapped a process planning module, which optimizes the selection of machining resources, the determination of setup plans, and the sequencing of machining operations, as services deployed on the Internet to support distributed design and manufacturing analysis. Li [31] wrapped a process planning optimization module based on a tabu search approach as a web-enabled service, likewise deployed on the Internet. Most of the abovementioned works used only one algorithm to deal with process planning. However, every algorithm has its own advantages and disadvantages. Therefore, some researchers have combined several algorithms into effective hybrid methods for process planning. Ming and Mak [32] proposed a hybrid Hopfield network-GA approach as an effective near-global optimization technique for the process plan selection problem. Zhang and Nee [33] applied a hybrid GA and SA for process planning optimization. Li et al. [34] modeled process planning as a constrained combinatorial optimization problem and proposed a hybrid GA and SA approach to solve it. Huang et al. [35] proposed a hybrid graph and GA approach to optimize the process planning problem for prismatic parts.
5 An Efficient Modified Particle Swarm Optimization …
5.2.2 PSO with Its Applications

PSO is a very effective algorithm for optimization problems, and much research on PSO and its applications has been reported. Ali and Kaelo [36] modified the position update rule of the PSO algorithm to speed up convergence. Goh et al. [37] adopted a competitive and cooperative co-evolutionary approach to multi-objective PSO algorithm design. Shi et al. [38] proposed a cellular PSO, hybridizing cellular automata and PSO, for function optimization. Given PSO's effectiveness, some researchers have applied it to combinatorial optimization problems. Wang and Tsai [39] proposed a review course composition system that adopts a discrete PSO to quickly pick suitable materials. Wang and Liu [40] proposed a chaotic PSO approach to generate optimal or near-optimal assembly sequences of products. Moslehi and Mahnam [41] applied a Pareto approach based on PSO to the multi-objective flexible job shop scheduling problem. Zhang and Zhu [42] proposed a new version of the PSO algorithm for sequence optimization. Chen and Chien [43] proposed a genetic-simulated annealing ant colony system with PSO for the traveling salesman problem. The above survey shows that PSO has attracted many researchers in both theory and applications, and that it is very effective for combinatorial optimization problems. However, there are very few works on PSO for process planning: only Guo et al. [44] and Wang et al. [45] applied a PSO to the operation sequencing problem in process planning. Therefore, this chapter proposes a novel and efficient modified PSO to solve the process planning problem. The details of the proposed PSO are given in the following sections.
5.3 Problem Formulation

5.3.1 Flexible Process Plans

Three types of flexibility are considered in flexible process plans [14]: processing flexibility, machine flexibility, and sequencing flexibility. Processing flexibility is the possibility of processing the same manufacturing feature with alternative operations. Machine flexibility is the possibility of performing one operation on alternative machines, with distinct processing times or costs. Sequencing flexibility is the possibility of interchanging the sequence of the required operations. Exploiting these flexibilities can improve performance on criteria such as production time.
Table 5.1 The flexible process plan of an example part

| Feature | Operations | Alternative machines | Processing time | Precedence constraints of the features |
|---------|------------|----------------------|-----------------|----------------------------------------|
| F1 | O1 | M3, M8 | 8, 13 | Before F2, F3 |
| F2 | O2-O3 | M5, M6, M8 / M2 | 16, 12, 13 / 21 | Before F3 |
|    | O4-O5 | M1, M5, M10 / M9 | 13, 16, 18 / 17 | |
| F3 | O6 | M5, M8 | 46, 47 | |
| F4 | O7 | M3, M7, M13 | 44, 48, 49 | Before F5, F6, F7 |
| F5 | O8 | M5, M6, M13 | 17, 14, 10 | Before F6, F7 |
|    | O9 | M5, M15 | 16, 13 | |
| F6 | O10 | M3, M11, M15 | 28, 27, 30 | Before F7 |
| F7 | O11 | M10, M13 | 48, 50 | |
| F8 | O12 | M5, M13, M15 | 31, 32, 36 | Before F9, F10, F11 |
| F9 | O13 | M3, M6, M9 | 30, 28, 26 | Before F10, F11 |
|    | O14-O15 | M2 / M1, M14 | 11 / 16, 18 | |
| F10 | O16 | M4, M15 | 18, 19 | Before F11 |
| F11 | O17 | M3, M10, M14 | 36, 32, 35 | |
Table 5.1 shows the flexible process plan of an example part with 11 features and 17 operations. Each feature can be processed by one or more alternative operation routes. For example, feature F9 can be processed either by operation O13 or by operations O14 and O15; this represents processing flexibility. Operation O1 can be processed on machine M3 or M8; this represents machine flexibility. There is no sequence constraint between features F1 and F4, so either may be processed before the other; this represents sequencing flexibility. The process planning problem is therefore very complex: like the TSP, which has been proven NP-hard, it is an NP-hard problem. To solve it effectively, this chapter presents a novel modified PSO algorithm.
5.3.2 Mathematical Model of the Process Planning Problem

In this chapter, the optimization objective of the process planning problem is to minimize the production time (comprising processing time and transmission time). The following assumptions are made [14]:
1. The different operations of one part cannot be processed simultaneously.
2. All machines are available at time zero.
3. After a part is processed on a machine, it is immediately transported to the next machine in its process, and the transmission time is assumed to be constant.
4. Setup time for the operations on the machines is independent of the operation sequence and is included in the processing time.
For more details about the mathematical model of the process planning problem, refer to Li et al. [14].
5.4 Modified PSO for Process Planning

5.4.1 Modified PSO Model

5.4.1.1 Traditional PSO Algorithm
The PSO algorithm is initialized with a population of random candidate solutions called particles. The ith particle in the d-dimensional solution space is denoted as Xi = (xi1, xi2, …, xid). The ith particle is assigned a random velocity Vi = (vi1, vi2, …, vid) and iteratively moves in the solution space. During each iteration, the ith particle is updated using two best values: Pi = (pi1, pi2, …, pid), the best position that the ith particle has achieved so far, and Pg = (pg1, pg2, …, pgd), the best position that the population has achieved so far. Each particle is updated iteratively by the following equations:

vid = w × vid + c1 × rand() × (pid − xid) + c2 × Rand() × (pgd − xid)   (5.1)

xid = xid + vid   (5.2)
w is the inertia weight, which controls the influence of the previous velocity. c1 and c2 are positive constants: the personal (cognitive) and social learning factors. rand() and Rand() are two random numbers uniformly distributed in [0, 1]. To improve search efficiency, the traditional PSO has been modified by many researchers. Generally, research on PSO can be categorized into five parts: algorithms, topology, parameters, merging with other evolutionary computation techniques, and applications [46]. In traditional PSO, particles are updated according to the velocity-displacement model. In this model, each particle is an n-dimensional vector whose components are independent, with no sequence constraints among them. Therefore, particles in traditional PSO essentially fly in a continuous search space. However, combinatorial optimization problems usually carry many constraints; in a typical process planning problem, the constraints must be satisfied by the solution vector. Since the velocity-displacement model cannot represent such constraints effectively, it is not suited to the process planning problem. In the following, the optimization mechanism of the traditional PSO is analyzed, and then a modified PSO model is introduced.
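As a reference point, the update of Eqs. (5.1)-(5.2) for one particle can be sketched as follows; the function and parameter names are illustrative, not taken from the chapter.

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One velocity-displacement update, Eqs. (5.1)-(5.2), for one particle.

    x, v: current position and velocity; p_best: this particle's best
    position so far; g_best: the population's best position so far.
    """
    new_v = [w * v[d]
             + c1 * random.random() * (p_best[d] - x[d])
             + c2 * random.random() * (g_best[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

With c1 = c2 = 0 and w = 1 the particle simply drifts along its current velocity, which shows that the cognitive and social terms are what actually steer the search.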
In traditional PSO, the velocity equation (Eq. (5.1)) consists of three parts. w × vid is the "momentum" part, representing the influence of the last velocity on the current velocity; it provides the momentum for particles to fly through the search space. c1 × rand() × (pid − xid) is the "cognitive" part, representing the particle's own thinking; it encourages particles to move toward their own best positions found so far. c2 × Rand() × (pgd − xid) is the "social" part, representing cooperation among the particles.
5.4.1.2 Modified Particle Swarm Optimization Model
Through analyzing the optimization mechanism of the traditional PSO, a modified PSO model can be introduced by omitting the concrete velocity-displacement updating method of the traditional PSO. The pseudocode of the modified PSO model is shown in Fig. 5.1 [47]. According to this optimization model, each particle obtains updating information from its own experience and from the population. To keep the diversity of the population, a random search is implemented for each particle with a certain probability. In this model, the updating method for each particle is not fixed: particles can obtain updating information through various approaches, depending on the problem. Therefore, to deal with combinatorial optimization problems, effective encoding, updating, and random search methods suited to the problem can be developed to implement the PSO model. Based on the features of their specific problems, researchers can design all parts of the PSO algorithm.

Initialize the population randomly;
Evaluate each particle in the population;
Do {
    For each particle xi {
        Update the current particle using its own experience;
        Update the current particle using the whole population's experience;
        Search in balance with random search;
    }
} while maximum iterations or other stopping criteria are not attained

Fig. 5.1 The pseudocode of the modified PSO [47]
5.4.2 Modified PSO for Process Planning

According to the above pseudocode, the key step in implementing the modified PSO model is to develop effective encoding and updating methods for the particles; an effective random search method is also necessary. Following this optimization model, a modified PSO algorithm that borrows the successful operators of GA is proposed to solve the process planning problem. The following sections introduce the details of the modified PSO for the process planning problem.
5.4.2.1 The Modified PSO Framework
The workflow of the proposed modified PSO is shown in Fig. 5.2. Its framework can be described as follows:

Step 1 Set the parameters of PSO, including the size of the population (Psize), the size of the memory population (Msize), the maximum generations (maxG), the updating methods probability (Pum), and the random search probability (Prs);
Step 2 Initialize the population randomly and use the constraint adjustment method to rearrange the infeasible particles;
Step 3 Evaluate the fitness of each particle in the population and retain the best Msize particles in the memory population. Set the best solution that each particle achieves equal to its current value;
Step 4 Each particle in the current population performs the updating operation with a predefined probability (Pum). For each selected particle, select a particle from the memory population randomly. The current particle performs the updating operation with this particle and with the best solution it has achieved so far, respectively, creating two new solutions. Use the constraint adjustment method to rearrange the infeasible particles. Replace the current particle by the new solution with the better fitness;
Step 5 Evaluate the particles in the population once again. Update the current best solution in the population;
Step 6 Is the termination criterion satisfied? If yes, go to Step 10; else go to Step 7;
Step 7 Update the best solution that each particle achieves;
Step 8 Update the memory population by the following criterion: consider every particle in the population one by one. If its fitness is better than the fitness of the worst particle in the memory population and it does not already exist in the memory population, replace the worst particle of the memory population with the current particle;
Fig. 5.2 The workflow of the modified PSO
Step 9 Each particle in the current population performs a random search operation with a predefined probability (Prs). Use the constraint adjustment method to rearrange the infeasible particles. Replace the current particle with the particle obtained by random search; go to Step 4;
Step 10 Stop the algorithm and output the best solutions.
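The ten steps above can be condensed into the following sketch. The problem-specific pieces (particle construction, evaluation, the crossover-style update, the mutation-style random search, and the constraint repair) are passed in as callables; all names here are illustrative, not the chapter's implementation.

```python
import random

def modified_pso(init, evaluate, update, rand_search, repair,
                 p_size=400, m_size=20, max_g=100, p_um=0.8, p_rs=0.1):
    """Skeleton of the modified PSO framework (Steps 1-10).

    init() builds one particle; evaluate() returns the objective (lower is
    better); update(a, b) recombines two particles into a new one;
    rand_search(p) perturbs a particle; repair(p) enforces the constraints.
    """
    # Steps 1-3: build and evaluate the population and the memory population
    pop = [repair(init()) for _ in range(p_size)]
    fit = [evaluate(p) for p in pop]
    memory = sorted(pop, key=evaluate)[:m_size]
    own_best = list(pop)
    for _ in range(max_g):                      # Step 6: generation loop
        for i in range(p_size):                 # Step 4: updating operation
            if random.random() < p_um:
                mate = random.choice(memory)
                cands = [repair(update(pop[i], mate)),
                         repair(update(pop[i], own_best[i]))]
                best = min(cands, key=evaluate)
                if evaluate(best) < fit[i]:     # Step 5: keep the better one
                    pop[i], fit[i] = best, evaluate(best)
        for i in range(p_size):                 # Steps 7-8: update bests
            if fit[i] < evaluate(own_best[i]):
                own_best[i] = pop[i]
            worst = max(memory, key=evaluate)
            if fit[i] < evaluate(worst) and pop[i] not in memory:
                memory[memory.index(worst)] = pop[i]
        for i in range(p_size):                 # Step 9: random search
            if random.random() < p_rs:
                pop[i] = repair(rand_search(pop[i]))
                fit[i] = evaluate(pop[i])
    return min(memory, key=evaluate)            # Step 10
```

Because the memory population is only ever replaced by strictly better particles, the returned solution can never be worse than the best particle of the initial population.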
5.4.2.2 Encoding and Decoding of the Particles
Each particle in the population consists of three parts with different lengths, as shown in Fig. 5.3. The first part is the feature string: a sequence of all features of the part, with length equal to the number of features. The second part is the alternative operations string: the element in the ith position represents the selected alternative operation route for the ith feature, and its length also equals the number of features. The third part is the alternative machine string: the element in the jth position represents the selected alternative machine for the jth operation, and its length equals the total number of operations. Figure 5.3 shows a particle of the example part in Table 5.1. This part has 11 features and 17 operations, so the feature string and the alternative operations string each have 11 elements, and the alternative machine string has 17 elements. The feature string is a sequence of the features 1 to 11 that respects the precedence constraints between the features. In the alternative operations string, the second element is 2, meaning that the second feature (F2) uses its second alternative operation route, i.e., O4-O5 in Table 5.1. In the alternative machine string, the fourth element is 2, meaning that the fourth operation (O4) uses its second alternative machine, i.e., M5 in Table 5.1. The particle can be decoded by reversing the encoding. The basic working steps of the decoding method are as follows:

Step 1 Decode the alternative operations string first, scanning it from left to right to obtain the selected operations for each feature. For the particle in Fig. 5.3, the selected operations are F1(O1), F2(O4-O5), F3(O6), F4(O7), F5(O9), F6(O10), F7(O11), F8(O12), F9(O14-O15), F10(O16), and F11(O17);
Step 2 Decode the feature string, scanning it from left to right to obtain the sequence of the features. For the particle in Fig. 5.3, the sequence is F1-F4-F5-F8-F6-F7-F9-F10-F2-F3-F11; then obtain the sequence of the operations according to the selected operations for each feature in Step 1: O1-O7-O9-O12-O10-O11-O14-O15-O16-O4-O5-O6-O17;
Feature String: 1 4 5 8 6 7 9 10 2 3 11 (Feature No.)
Alternative Process String: 1 2 1 1 2 1 1 1 2 1 1 (Alternative Process No. for Each Feature)
Alternative Machine String: 1 3 1 2 1 1 3 2 2 3 2 2 2 1 1 2 1 (Alternative Machine No. for Each Operation)

Fig. 5.3 One particle of the process planning problem
Step 3 Decode the alternative machine string, scanning it from left to right to obtain the selected machine for each operation. For the particle in Fig. 5.3, the selected machines are O1(M3), O2(M8), O3(M2), O4(M5), O5(M9), O6(M5), O7(M13), O8(M6), O9(M15), O10(M15), O11(M13), O12(M13), O13(M6), O14(M2), O15(M1), O16(M15), and O17(M3);
Step 4 According to the sequence of the operations in Step 2 and the selected machine for each operation in Step 3, determine the feasible process plan (the sequence of the operations and the selected machine, with the corresponding processing time, for each operation) for this part.

Therefore, according to the decoding steps, the example in Fig. 5.3 decodes to a feasible process plan: O1(M3)-O7(M13)-O9(M15)-O12(M13)-O10(M15)-O11(M13)-O14(M2)-O15(M1)-O16(M15)-O4(M5)-O5(M9)-O6(M5)-O17(M3).
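Under this encoding, decoding reduces to a few table lookups. The sketch below hard-codes the Table 5.1 data as a nested structure (each feature maps to its alternative operation routes, each operation carrying its alternative machines); the data structure itself is an illustrative choice, not the chapter's implementation.

```python
# Flexible process plan of the example part (Table 5.1): feature ->
# list of alternative routes; a route is a list of (operation, machines).
PLAN = {
    1:  [[(1, ["M3", "M8"])]],
    2:  [[(2, ["M5", "M6", "M8"]), (3, ["M2"])],
         [(4, ["M1", "M5", "M10"]), (5, ["M9"])]],
    3:  [[(6, ["M5", "M8"])]],
    4:  [[(7, ["M3", "M7", "M13"])]],
    5:  [[(8, ["M5", "M6", "M13"])], [(9, ["M5", "M15"])]],
    6:  [[(10, ["M3", "M11", "M15"])]],
    7:  [[(11, ["M10", "M13"])]],
    8:  [[(12, ["M5", "M13", "M15"])]],
    9:  [[(13, ["M3", "M6", "M9"])],
         [(14, ["M2"]), (15, ["M1", "M14"])]],
    10: [[(16, ["M4", "M15"])]],
    11: [[(17, ["M3", "M10", "M14"])]],
}

def decode(feature_str, op_str, mach_str):
    """Turn the three strings of a particle into an (operation, machine)
    sequence; op_str[f-1] and mach_str[o-1] are 1-based choice indices."""
    plan = []
    for f in feature_str:
        route = PLAN[f][op_str[f - 1] - 1]      # Step 1: chosen route
        for op, machines in route:              # Step 2: operation order
            plan.append((op, machines[mach_str[op - 1] - 1]))  # Step 3
    return plan
```

Decoding the particle of Fig. 5.3 with this sketch reproduces the plan derived in the text, O1(M3)-O7(M13)-…-O17(M3).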
5.4.2.3 Updating Method
In traditional PSO, each particle learns from the best particle in the population. In this information exchange mechanism, particles obtain updating information greedily: if the global best particle is trapped in a local optimum, all particles tend to concentrate on that local optimum and the algorithm converges prematurely [47]. To overcome this limitation of the traditional information-sharing mechanism, a memory population is introduced to record better solutions, and particles obtain updating information from it. Because of the successful use of crossover in GA to generate new individuals, the proposed modified PSO uses crossover operations for the particles to exchange their information. Since a particle has three parts, three separate crossover operations are developed for the selected particles. The guiding principle is to avoid generating infeasible offspring. The procedure of the first crossover operation, for the feature string, is as follows:

Step 1 Select a pair of particles P1 and P2, and initialize two empty offspring, O1 and O2;
Step 2 Select two crossover points randomly to divide the two parents into three parts;
Step 3 The elements in the middle of each parent are appended to the same positions of the corresponding offspring;
Step 4 Delete the elements already in O1 from P2; the remaining elements of P2 are then appended to the remaining empty positions of O1 in order. O2 is obtained by the same process.

An example of the first crossover operation is presented in Fig. 5.4. The second crossover operation is implemented on the alternative operations string. Two crossover points are selected randomly at first, and then two new strings (the alternative operations strings in O1 and O2) are created by swapping the divided parts
P1: 1 4 5 8 | 6 7 9 10 | 2 3 11
P2: 4 8 1 5 | 9 6 7 10 | 2 11 3
O1: 4 1 5 8 | 6 7 9 10 | 2 11 3
O2: 4 8 1 5 | 9 6 7 10 | 2 3 11
(Crossover Point 1 after position 4; Crossover Point 2 after position 8)

Fig. 5.4 The first crossover operation for feature string

Fig. 5.5 The second crossover operation for alternative operations string

Fig. 5.6 The third crossover operation for alternative machine string
of the two parent strings. An example of the second crossover operation for the alternative operations string is presented in Fig. 5.5. The third crossover operation, implemented on the alternative machine string, follows the same procedure as the second. An example of the third crossover operation for the alternative machine string is presented in Fig. 5.6.
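Following one reasonable reading of Steps 1-4 above, and the two-point swap used for the other two strings, the crossover operators can be sketched as follows; the function names are illustrative, and the chapter's exact fill order for the offspring may differ in detail.

```python
import random

def feature_crossover(p1, p2):
    """Two-point order-preserving crossover for the feature string (Fig. 5.4):
    keep the middle segment of each parent, then fill the remaining slots
    with the other parent's missing elements in their original order."""
    n = len(p1)
    a, b = sorted(random.sample(range(1, n), 2))

    def make_child(keeper, filler):
        child = [None] * n
        child[a:b] = keeper[a:b]                 # Step 3: copy the middle
        rest = [x for x in filler if x not in child[a:b]]  # Step 4
        for i in list(range(0, a)) + list(range(b, n)):
            child[i] = rest.pop(0)
        return child

    return make_child(p1, p2), make_child(p2, p1)

def segment_crossover(s1, s2):
    """Two-point swap crossover for the alternative operations string and
    the alternative machine string (Figs. 5.5 and 5.6)."""
    a, b = sorted(random.sample(range(1, len(s1)), 2))
    return s1[:a] + s2[a:b] + s1[b:], s2[:a] + s1[a:b] + s2[b:]
```

The order-preserving variant is what keeps the feature string a permutation, so only the precedence constraints (not duplicates) need repairing afterwards.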
5.4.2.4 Random Search
The mutation operation of GA is used as the random search method for each particle. Mutation introduces some extra variability into the population; its function is to maintain the diversity of the population. In the modified PSO, a particle has three parts, so three separate mutation operations are developed for each selected particle. The guiding principle is to avoid generating infeasible offspring.

The first mutation operation, for the feature string, is described as follows:
Step 1 Select two positions in the feature string of the parent P and initialize one offspring O;
Step 2 Swap the elements in the two selected positions of P to generate the feature string of O.

The second mutation operation, for the alternative operations string, is described as follows:
Step 1 Select one position in the alternative operations string of the parent P;
Step 2 Change the element at this position to another alternative operation route in the alternative operations set.

The third mutation operation, for the alternative machine string, is described as follows:
Step 1 Select one position in the alternative machine string of the parent P;
Step 2 Change the element at this position to another alternative machine in the alternative machines set.

Examples of the mutation operations are presented in Fig. 5.7. Above the first dashed line is an example of the mutation operation for the feature string: the two selected points (8 and 10) are marked, and O is generated by swapping 8 and 10. Under the first dashed line is an example of the mutation operation for the alternative operations string: the marked element 2 (for feature 5) is changed to 1, meaning that the selected operations for feature 5 change from the second to the first route in the alternative operations set. Under the second dashed line is an example of the mutation operation for the alternative machine string: the marked element 3 (for operation 7) is changed to 1, meaning that the selected machine for operation 7 changes from the third to the first one in the alternative machines set.

Fig. 5.7 The mutation operations for particle
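The three mutation operators reduce to one swap and two re-picks. A minimal sketch follows; the names are illustrative, and `n_alternatives` is assumed to come from the process plan data (the number of routes per feature, or of machines per operation).

```python
import random

def mutate_feature(fs):
    """First mutation: swap two randomly chosen positions of the feature
    string (the precedence repair runs afterwards)."""
    child = list(fs)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def mutate_choice(s, n_alternatives):
    """Second/third mutation: re-pick one element of the alternative
    operations or machine string from its own alternative set;
    n_alternatives[i] is the number of choices available at slot i."""
    child = list(s)
    i = random.randrange(len(child))
    options = [k for k in range(1, n_alternatives[i] + 1) if k != child[i]]
    if options:                 # slots with a single alternative stay put
        child[i] = random.choice(options)
    return child
```

Both operators change at most one decision, which keeps the random search a small local perturbation rather than a restart.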
5.4.2.5 The Constraint Adjustment Method
For the feature string of an initially generated particle, or of a particle adjusted by the updating methods, the precedence constraints might not be satisfied. A constraint adjustment method that can handle complicated, multiple-constraint conditions is adopted from Li et al. [34] to rearrange the feature string according to the precedence constraints between the features; for details, refer to Li et al. [34]. This method is applied after each modified PSO operation on the particles (initialization, the updating methods, and the random search).
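The chapter uses the adjustment method of Li et al. [34], which is not reproduced here. As a simple stand-in, the following greedy repair illustrates the idea of rearranging a feature string so that every feature appears after all of its predecessors, while keeping the relative order of unconstrained features where possible.

```python
def repair(feature_str, before):
    """Greedy precedence repair (an illustrative stand-in, not the method
    of Li et al. [34]): repeatedly place the earliest feature in the string
    whose predecessors have all been placed.  before[f] lists the features
    that must precede f; e.g. for Table 5.1, before[3] = [1, 2]."""
    remaining = list(feature_str)
    placed, result = set(), []
    while remaining:
        for f in remaining:
            if all(p in placed for p in before.get(f, [])):
                result.append(f)
                placed.add(f)
                remaining.remove(f)
                break
        else:                       # no placeable feature: cyclic constraints
            raise ValueError("precedence constraints are cyclic")
    return result
```

A feasible string passes through unchanged, so applying the repair after every operation is harmless when the operator happens to produce a feasible offspring.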
5.4.2.6 Termination Criterion

The algorithm stops when the number of iterations of the proposed modified PSO reaches the maximum number of generations (maxG).
5.5 Experimental Studies and Discussions

5.5.1 Case Studies and Results

The proposed modified PSO algorithm is coded in C++ and run on a computer with a 2.0 GHz Core(TM) 2 Duo CPU. To verify the feasibility and performance of the proposed modified PSO algorithm, seven case studies have been conducted. The parameters of the modified PSO algorithm are set as follows: population size Psize = 400, memory population size Msize = 20, maximum generations maxG = 100, updating methods probability Pum = 0.8, and random search probability Prs = 0.1. The algorithm terminates when the number of generations reaches the maximum value (maxG). This chapter assumes that there are 15 machines in the workshop; the transmission time between each pair of machines is given in Table 5.2. Three algorithms have been used to solve these cases: the proposed modified PSO algorithm, a simple SA, and a simple GA. Each algorithm runs 20 times. The best result is the best of all the best results in the 20 runs; the average best result is the average of the 20 best results; the average convergent generation is the average of the convergent generations in the 20 runs.
Table 5.2 The transmission time between each machine (a 15 × 15 matrix of transmission times between machines M1-M15, with zeros on the diagonal)
Table 5.3 The results of case 1

| Algorithm | Best result | Average best result | Average convergent generation |
|-----------|-------------|---------------------|-------------------------------|
| Simple SA | 377 | 378.1 | 90.6 |
| Simple GA | 377 | 380.2 | 87.8 |
| Modified PSO | 377 | 377 | 47.2 |

Best process plan of the modified PSO: O7(M3)-O1(M3)-O8(M5)-O9(M5)-O12(M5)-O4(M5)-O5(M9)-O13(M9)-O16(M4)-O10(M3)-O17(M10)-O11(M10)-O6(M8)
5.5.1.1 Case 1

The data of case 1 are given in Table 5.1; the part contains 11 features. The results and the comparisons of the modified PSO with the simple SA and GA are given in Table 5.3. The process plan comprises the operation sequence with the corresponding machines. The results show that the proposed method performs better than the other two algorithms.
5.5.1.2 Case 2

Case 2 is adopted from Li and McMahon [48] and contains three parts. The manufacturing information (features, operations, and machines with processing times) can be found in Li and McMahon [48]. The alternative tools are not considered, and the processing time of every machine for the corresponding operation is rounded. The results and the comparisons with the simple SA and GA are given in Table 5.4.
5.5.1.3 Case 3

Case 3 is adopted from Kim et al. [49] and Kim [50] and contains 18 parts. The results and the comparisons with the simple SA and GA are given in Table 5.5, where SA, GA, and PSO denote the simple SA, simple GA, and modified PSO algorithm, respectively.
5.5.1.4 Case 4

Case 4 is adopted from Shin et al. [13, 51] and contains 18 parts. The alternative tools are not considered. Because the reference data for some parts are problematic, 13 parts (all except parts 12, 13, 15, 17, and 18) are selected in this chapter. The results and the comparisons with the simple SA and GA are given in Table 5.6, where SA, GA, and PSO denote the simple SA, simple GA, and modified PSO algorithm, respectively.
Table 5.4 The results of case 2

| Part | Algorithm | Best result | Average best result | Average convergent generation |
|------|-----------|-------------|---------------------|-------------------------------|
| 1 | Simple SA | 342 | 344.6 | 51.2 |
|   | Simple GA | 342 | 343.8 | 46.2 |
|   | Modified PSO | 341 | 341.5 | 34.2 |
| 2 | Simple SA | 187 | 190.2 | 46.3 |
|   | Simple GA | 187 | 188.5 | 41.1 |
|   | Modified PSO | 187 | 187 | 33.1 |
| 3 | Simple SA | 176 | 179.2 | 55.8 |
|   | Simple GA | 176 | 177.5 | 50.6 |
|   | Modified PSO | 176 | 176 | 39.5 |

Best process plan of the modified PSO, part 1: O1(M4)-O5(M4)-O18(M4)-O2(M4)-O6(M4)-O11(M4)-O12(M4)-O13(M4)-O14(M4)-O7(M4)-O4(M4)-O17(M4)-O15(M4)-O16(M4)-O8(M4)-O9(M4)-O10(M4)-O19(M4)-O20(M4)-O3(M4)
Best process plan of the modified PSO, part 2: O16(M4)-O3(M4)-O5(M4)-O1(M4)-O6(M4)-O7(M4)-O14(M4)-O11(M4)-O15(M4)-O9(M4)-O12(M4)-O4(M4)-O13(M4)-O2(M4)-O8(M4)-O10(M4)
Best process plan of the modified PSO, part 3: O1(M4)-O2(M4)-O9(M4)-O10(M4)-O11(M4)-O5(M4)-O3(M4)-O6(M4)-O4(M4)-O7(M4)-O12(M4)-O13(M4)-O8(M4)-O14(M4)
5.5.1.5 Case 5

Case 5 is adopted from Ma et al. [26]. The part, shown in Fig. 5.8, contains nine features; the data of case 5 are given in Table 5.7. The alternative tools are not considered. The results and the comparisons of the modified PSO with the simple SA and GA are given in Table 5.8. Because Ma et al. [26] took cost as the objective, which differs from this chapter, no comparison with their method is given.
5.5.1.6 Case 6

Case 6 is adopted from Wang et al. [45]. The part, shown in Fig. 5.9, contains seven features; the data of case 6 are given in Table 5.9. The alternative tools are not considered. The results and the comparisons of the modified PSO with the simple SA and GA are given in Table 5.10. Because Wang et al. [45] took cost as the objective, which differs from this chapter, no comparison with their method is given.
Table 5.5 The results of case 3 (SA, GA, and PSO denote the simple SA, simple GA, and modified PSO)

| Part | Best result (SA / GA / PSO) | Average best result (SA / GA / PSO) | Average convergent generation (SA / GA / PSO) |
|------|------------------------------|--------------------------------------|-----------------------------------------------|
| 1 | 303 / 303 / 303 | 303 / 303 / 303 | 21.1 / 15.4 / 12.3 |
| 2 | 359 / 359 / 359 | 361.2 / 360.7 / 359 | 55.2 / 48.3 / 39.5 |
| 3 | 502 / 502 / 498 | 504.9 / 503.6 / 500.3 | 72.4 / 60.2 / 49.1 |
| 4 | 314 / 314 / 314 | 315.7 / 314.8 / 314 | 48.6 / 40.1 / 33.7 |
| 5 | 314 / 314 / 314 | 315.8 / 315.1 / 314 | 49.1 / 41.5 / 34.5 |
| 6 | 409 / 409 / 408 | 411.3 / 410.3 / 408.8 | 63.8 / 56.7 / 49.9 |
| 7 | 304 / 304 / 304 | 304.5 / 304.2 / 304 | 45.3 / 38.5 / 30.8 |
| 8 | 358 / 358 / 358 | 358 / 358 / 358 | 29.6 / 25.2 / 20.4 |
| 9 | 393 / 392 / 391 | 395.7 / 394.1 / 391.9 | 121.7 / 102.1 / 90.1 |
| 10 | 264 / 264 / 264 | 264 / 264 / 264 | 25.1 / 22.3 / 19.2 |
| 11 | 271 / 271 / 271 | 271.3 / 271.1 / 271 | 55.8 / 52.1 / 45.3 |
| 12 | 442 / 442 / 442 | 442.8 / 443.1 / 442 | 47.3 / 40.2 / 35.8 |
| 13 | 216 / 216 / 216 | 216.1 / 216.2 / 216 | 40.2 / 33.7 / 30.2 |
| 14 | 269 / 269 / 269 | 270.5 / 269.6 / 269 | 44.5 / 40.1 / 36.6 |
| 15 | 358 / 357 / 357 | 361.3 / 359.2 / 357 | 76.4 / 60.2 / 48.1 |
| 16 | 248 / 248 / 248 | 248 / 248 / 248 | 30.2 / 24.1 / 20.7 |
| 17 | 314 / 314 / 314 | 315.1 / 314.8 / 314.6 | 60.3 / 48.7 / 40.2 |
| 18 | 361 / 361 / 360 | 363.5 / 363.7 / 361.3 | 74.5 / 60.4 / 49.2 |
Table 5.6 The results of case 4 (SA, GA, and PSO denote the simple SA, simple GA, and modified PSO)

| Part | Best result (SA / GA / PSO) | Average best result (SA / GA / PSO) | Average convergent generation (SA / GA / PSO) |
|------|------------------------------|--------------------------------------|-----------------------------------------------|
| 1 | 267 / 267 / 267 | 268.7 / 268.1 / 267 | 59.8 / 52.1 / 43.6 |
| 2 | 165 / 165 / 163 | 167.3 / 166.5 / 164.2 | 61.6 / 55.3 / 46.2 |
| 3 | 299 / 297 / 296 | 301.2 / 299.6 / 297.1 | 74.1 / 60.7 / 48.5 |
| 4 | 268 / 268 / 267 | 273.3 / 271.2 / 268.4 | 67.2 / 56.4 / 49.3 |
| 5 | 204 / 204 / 204 | 204 / 204 / 204 | 75.1 / 61.2 / 45.9 |
| 6 | 204 / 204 / 204 | 206.8 / 206.3 / 205.1 | 122.3 / 104.5 / 90.6 |
| 7 | 137 / 137 / 137 | 137 / 137 / 137 | 35.6 / 29.3 / 26.1 |
| 8 | 181 / 181 / 181 | 181.1 / 181.2 / 181 | 50.1 / 44.6 / 37.8 |
| 9 | 181 / 181 / 179 | 183.5 / 182.6 / 179.7 | 110.3 / 92.1 / 81.2 |
| 10 | 153 / 153 / 153 | 154.2 / 153 / 153 | 78.5 / 58.3 / 43.9 |
| 11 | 151 / 150 / 149 | 153.4 / 151.1 / 149.2 | 60.2 / 52.7 / 43.1 |
| 14 | 120 / 120 / 120 | 120 / 120 / 120 | 77.9 / 56.3 / 39.4 |
| 16 | 170 / 167 / 167 | 173.6 / 170.4 / 167 | 80.4 / 61.9 / 48.8 |
Fig. 5.8 The part in case 5 from Ma et al. [26]
Table 5.7 The data of case 5
Feature
Operations
Alternative machines
Processing time
Precedence constraints of the features
F1
O1
M1, M2, M4, M5
10, 12, 13, 8
Before F6
F2
O2
M1, M2, M4, M5
22, 21, 18, 25
Before F1
F3
O3
M1, M2, M4, M5
15, 16, 18, 20
F4
O4
M1, M2
8, 10
F5
O5-O6
M1, M2
19, 21
F6
O7-O8-O9
M1, M2, M3 M4, M5
8, 6, 7, 10, 12
M1, M2, M3, M5
12, 14, 18, 11
M1, M2
20, 23
M1, M2, M3 M4, M5
8, 6, 7, 9, 5
18, 20
F7
O10-O11
M1, M2
M1, M2
21, 24
F8
O12
M1, M2
31, 33
F9
O13
M1, M2, M4, M5
30, 33, 28, 34
Before F5, F6
Before F8
Before F1
5.5.1.7
Case 7
Case 7 is adopted from Zhang and Nee [33]. The part is shown in Fig. 5.10 and contains 14 features. The data of case 7 are given in Table 5.11. Alternative tools are not considered. The results and the comparisons of the modified
Table 5.8 The results of case 5
Algorithm
Best result
Average best result
Average convergent generation
Simple SA
222
222
70.3
Simple GA
222
222
45.2
Modified PSO
222
222
31.8
Best process plan of the modified PSO
O13(M1)-O3(M1)-O2(M1)-O4(M1)-O1(M1)-O5(M1)-O6(M1)-O7(M1)-O8(M1)-O9(M1)-O12(M1)-O10(M1)-O11(M1)
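As a check on Table 5.8, the objective value of the best process plan can be recomputed by summing the machine-M1 processing times from Table 5.7. This is an illustrative sketch: the M1 times assumed for O6 (8) and O11 (18) are not listed unambiguously in the continuation rows of Table 5.7, so they are assumptions.

```python
# Processing times on machine M1 for the operations of case 5 (Table 5.7).
# O6 = 8 and O11 = 18 are assumed values; the rest are taken directly
# from the table.
m1_time = {"O1": 10, "O2": 22, "O3": 15, "O4": 8, "O5": 19, "O6": 8,
           "O7": 8, "O8": 12, "O9": 20, "O10": 21, "O11": 18,
           "O12": 31, "O13": 30}

# Best process plan of the modified PSO from Table 5.8 (all on M1).
plan = ["O13", "O3", "O2", "O4", "O1", "O5", "O6", "O7",
        "O8", "O9", "O12", "O10", "O11"]

# With a single machine there are no machine changes, so the objective
# reduces to the sum of the processing times.
total = sum(m1_time[op] for op in plan)
print(total)  # 222, matching the best result reported in Table 5.8
```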
Fig. 5.9 The part in case 6 from Wang et al. [45]
Table 5.9 The data of case 6
Feature
Operations
Alternative machines
Processing time
Precedence constraints of the features
F1
O1
M1, M2, M4, M5
20, 18, 22, 25
Before F2, F3, F4, F5, F6, F7
F2
O2
M1, M2, M4, M5
30, 31, 25, 34
F3
O3
M1, M2
28, 24
F4
O4
M1, M2, M4, M5,
45, 50, 48, 39
F5
O5-O6
M1, M2, M3 M4, M5
10, 6, 7, 13, 16
M1, M2, M3 M4, M5
34, 39, 40, 45, 36
M1, M2, M3 M4, M5
20, 22, 28, 25, 19
F6 F7
O7-O8 O9
M1, M2, M3 M4, M5
27, 29, 24, 26, 30
M1, M2, M4, M5
12, 14, 15, 10
Table 5.10 The results of case 6
Algorithm
Best result
Average best result
Average convergent generation
Simple SA
212
212.2
60.7
Simple GA
212
212
51.3
Modified PSO
212
212
42.5
Best process plan of the modified PSO
O1(M2)-O3(M2)-O5(M2)-O6(M5)-O4(M5)-O9(M5)-O7(M5)-O8(M4)-O2(M4)
Fig. 5.10 The part in case 7 from Zhang and Nee [33]
PSO with the simple SA and GA are given in Table 5.12. Because Zhang and Nee [33] took the cost as the objective, which differs from the objective of this chapter, no comparison between the methods of this chapter and Zhang and Nee [33] is given.
5.5.2 Discussion
From the results of all the cases, the best results obtained by the modified PSO are better than or equal to the best results of the simple SA and GA, and the same holds for the average best results. Moreover, the average convergent generations of the modified PSO are smaller than those of the simple SA and GA. Based on these results, the modified PSO is more effective than the other two algorithms and requires less running time. The proposed modified PSO algorithm is a
Table 5.11 The data of case 7
Feature
Operations
Alternative machines
F1
O1
M1, M2
22, 25
Before F5, F7
F2
O2
M1, M2, M4, M5
20, 21, 15, 24
Before F1
F3
O3
M1, M2
18, 14
Before F4, F5
F4
O4
M1, M2
38, 39
F5
O5-O6-O7
M1, M2, M3 M4, M5
10, 6, 7, 13, 16
M1, M2, M3 M4, M5
24, 29, 30, 35, 26
M1, M2
15, 18
M1, M2, M3 M4, M5
10, 12, 18, 15, 9
M1, M2, M3 M4, M5
37, 39, 34, 36, 40
M1, M2
25, 21
F6
O8-O9-O10
Processing time
Precedence constraints of the features
Before F7
F7
O11
M1, M2
15, 18
F8
O12
M1, M2, M4, M5
8, 10, 8, 9
Before F1
F9
O13
M1, M2, M4, M5
18, 21, 24, 20
Before F13, F14
F10
O14
M1, M2
20, 22
F11
O15
M1, M2, M3 M4, M5
27, 29, 24, 26, 30
Before F10
F12
O16
M1, M2, M3 M4, M5
18, 20, 21, 17, 24
Before F10
F13
O17
M1, M2
22, 26
F14
O18
M1, M2, M4, M5
14, 17, 18, 20
Table 5.12 The results of case 7
Algorithm
Best result
Average best result
Average convergent generation
Simple SA
361
363.4
98.1
Simple GA
360
361.1
83.7
Modified PSO
359
360.5
72.3
Best process plan of the modified PSO
O12(M4)-O2(M4)-O15(M4)-O16(M4)-O3(M2)-O13(M1)-O8(M1)-O9(M1)-O10(M1)-O4(M1)-O1(M1)-O17(M1)-O14(M1)-O5(M1)-O6(M1)-O7(M1)-O11(M1)-O18(M1)
promising method for solving the process planning optimization problem and obtains better results in a reasonable time. We think there are two reasons. First, PSO is an effective optimization algorithm that has proved very effective in many other fields; this chapter modifies it to make it suitable for combinatorial optimization problems, and the results show that it is also very effective for the process planning problem. Other researchers can apply this method to other combinatorial optimization problems as well. Second,
we design the operators of the modified PSO based on the characteristics of the process planning problem, so the proposed algorithm reflects the essential features of the problem. Based on the results, we believe that designing an algorithm according to the characteristics of the combinatorial optimization problem at hand is a good principle. For the above reasons, we think that the proposed method is more effective than the other algorithms.
5.6 Conclusions and Future Research Studies
A new approach employing the particle swarm optimization algorithm is proposed to optimize the process planning problem. Based on the characteristics of the process planning problem, we design all parts of the modified PSO, including the encoding, updating, and random search methods. The results of the cases indicate that the proposed approach is an effective method for the process planning problem. The main contributions of this chapter are as follows. First, there are very few works on using PSO for process planning; in this chapter, a modified PSO has been proposed to solve it, and the results of the cases show that the proposed PSO achieves satisfactory improvement. Second, this work also provides a new way to solve other planning or sequencing problems in the manufacturing field, such as the assembly sequencing problem, and the experimental results suggest that this algorithm may solve these problems effectively because they share several aspects with the process planning problem. A limitation of this approach is that tool selection and Tool Approach Direction (TAD) selection are not considered in the model of the process planning problem. Future research can start from the following directions: (1) the above aspects of the process planning problem can be considered; (2) other effective algorithms, such as harmony search and the artificial bee colony algorithm, can be tried on the process planning problem.
References 1. Han JH, Han I, Lee E, Yi J (2001) Manufacturing feature recognition toward integration with process planning. IEEE Trans Syst Man Cybern B Cybern 31:373–380 2. Xu X, Wang LH, Newman ST (2011) Computer-aided process planning—a critical review of recent developments and future trends. Int J Comput Integr Manuf 24(1):1–31 3. Liu XJ, Yi H, Ni ZH (2010) Application of ant colony optimization algorithm in process planning optimization. J Intell Manuf. https://doi.org/10.1007/s10845-010-0407-2 4. Kusiak A (1985) Integer programming approach to process planning. Int J Adv Manuf Technol 1:73–83 5. Gan PY, Lee KS, Zhang YF (2001) A branch and bound algorithm based process-planning system for plastic injection mould bases. Int J Adv Manuf Technol 18:624–632 6. Xu HM, Li DB (2008) A clustering based modeling schema of the manufacturing resources for process planning. Int J Adv Manuf Technol 38:154–162
7. Xu HM, Yuan MH, Li DB (2009) A novel process planning schema based on process knowledge customization. Int J Adv Manuf Technol 44:161–172 8. Zhang J, Gao L, Chan FTS, Li PG (2003) A holonic architecture of the concurrent integrated process planning system. J Mater Process Technol 139:267–272 9. Nejad HTN, Sugimura N, Iwamura K, Tanimizu Y (2010) Multi agent architecture for dynamic incremental process planning in the flexible manufacturing system. J Intell Manuf 21:487–499 10. Dashora Y, Tiwari MK, Karunakaran KP (2008) A psychoclonal algorithm-based approach to solve the operation sequencing problem in a CAPP environment. Int J Comput Integr Manuf 21:510–525 11. Chan FTS, Swarnkar R, Tiwari MK (2005) Fuzzy goal-programming model with an artificial immune system (AIS) approach for a machine tool selection and operation allocation problem in a flexible manufacturing system. Int J Prod Res 43:4147–4163 12. Houshmand M, Imani DM, Niaki STA (2009) Using flower pollinating with artificial bees (FPAB) technique to determine machinable volumes in process planning for prismatic parts. Int J Adv Manuf Technol 45:944–957 13. Shin KS, Park JO, Kim YK (2011) Multi-objective FMS process planning with various flexibilities using a symbiotic evolutionary algorithm. Comput Oper Res 38:702–712 14. Li XY, Shao XY, Gao L (2008) Optimization of flexible process planning by genetic programming. Int J Adv Manuf Technol 38:143–153 15. Li WD, Ong SK, Nee AYC (2004) Optimization of process plans using a constraint-based tabu search approach. Int J Prod Res 42:1955–1985 16. Lian KL, Zhang CY, Shao XY, Zeng YH (2011) A multi-dimensional tabu search for the optimization of process planning. Sci China Ser E: Technol Sci 54:3211–3219 17. Lian KL, Zhang CY, Shao XY, Gao L (2012) Optimization of process planning with various flexibilities using an imperialist competitive algorithm. Int J Adv Manuf Technol 59:815–828 18. 
Zhang F, Zhang YF, Nee AYC (1997) Using genetic algorithms in process planning for job shop machining. IEEE Trans Evol Comput 1:278–289 19. Li L, Fuh JYH, Zhang YF, Nee AYC (2005) Application of genetic algorithm to computeraided process planning in distributed manufacturing environments. Robot Comput Integr Manuf 21:568–578 20. Hua GR, Zhou XH, Ruan XY (2007) GA-based synthesis approach for machining scheme selection and operation sequencing optimization for prismatic parts. Int J Adv Manuf Technol 33:594–603 21. Salehi M, Tavakkoli Moghaddam R (2009) Application of genetic algorithm to computer-aided process planning in preliminary and detailed planning. Eng Appl Artif Intell 22:1179–1187 22. Musharavati F, Hamouda ASM (2011) Modified genetic algorithms for manufacturing process planning in multiple parts manufacturing lines. Expert Syst Appl 38:10770–10779 23. Salehi M, Bahreininejad A (2011) Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining. J Intell Manuf 22:643–652 24. Krishna AG, Mallikarjuna Rao K (2006) Optimisation of operations sequence in CAPP using an ant colony algorithm. Int J Adv Manuf Technol 29:159–164 25. Tiwari MK, Dashora Y, Kumar S, Shankar R (2006) Ant colony optimization to select the best process plan in an automated manufacturing environment. Proc IMechE B J Eng Manuf 220:1457–1472 26. Ma GH, Zhang YF, Nee AYC (2000) A simulated annealing-based optimization algorithm for process planning. Int J Prod Res 38:2671–2687 27. Mishra S, Prakash Tiwari MK, Lashkari RS (2006) A fuzzy goal-programming model of machine-tool selection and operation allocation problem in FMS: a quick converging simulated annealing-based approach. Int J Prod Res 44:43–76 28. Musharavati F, Hamouda ASM (2012) Enhanced simulated annealing based algorithms and their applications to process planning in reconfigurable manufacturing systems. Adv Eng Softw 45:80–90 29. 
Musharavati F, Hamouda AMS (2012) Simulated annealing with auxiliary knowledge for process planning optimization in reconfigurable manufacturing. Robot Comput Integr Manuf 28:113–131
30. Li WD, Ong SK, Nee AYC (2005) A Web-based process planning optimization system for distributed design. Comput Aided Des 37:921–930 31. Li WD (2005) A Web-based service for distributed process planning optimization. Comput Ind 56:272–288 32. Ming XG, Mak KL (2000) A hybrid Hopfield network-genetic algorithm approach to optimal process plan selection. Int J Prod Res 38:1823–1839 33. Zhang F, Nee AYC (2001) Applications of genetic algorithms and simulated annealing in process planning optimization. In: Wang J, Kusiak A (eds) Computational intelligence in manufacturing handbook. CRC, Boca Raton, pp 9.1–9.26 34. Li WD, Ong SK, Nee AYC (2002) Hybrid genetic algorithm and simulated annealing approach for the optimization of process plans for prismatic parts. Int J Prod Res 40:1899–1922 35. Huang WJ, Hu YJ, Cai LG (2012) An effective hybrid graph and genetic algorithm approach to process planning optimization for prismatic parts. Int J Adv Manuf Technol 62:1219–1232 36. Ali MM, Kaelo P (2008) Improved particle swarm algorithms for global optimization. Appl Math Comput 196:578–593 37. Goh CK, Tan KC, Liu DS, Chiam SC (2010) A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design. Eur J Oper Res 202:42–54 38. Shi Y, Liu H, Gao L, Zhang G (2011) Cellular particle swarm optimization. Inf Sci 181:4460– 4493 39. Wang TI, Tsai KH (2009) Interactive and dynamic review course composition system utilizing contextual semantic expansion and discrete particle swarm optimization. Expert Syst Appl 36:9663–9673 40. Wang Y, Liu JH (2010) Chaotic particle swarm optimization for assembly sequence planning. Robot Comput Integr Manuf 26:212–222 41. Moslehi G, Mahnam M (2011) A Pareto approach to multi-objective flexible job-shop scheduling problem using particle swarm optimization and local search. Int J Prod Econ 129:14–22 42. 
Zhang WB, Zhu GY (2011) Comparison and application of four versions of particle swarm optimization algorithms in the sequence optimization. Expert Syst Appl 38:8858–8864 43. Chen SM, Chien CY (2011) Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst Appl 38:14439–14450 44. Guo YW, Mileham AR, Owen GW, Li WD (2006) Operation sequencing optimization using a particle swarm optimization approach. Proc IMechE B J Eng Manuf 220:1945–1958 45. Wang YF, Zhang YF, Fuh JYH (2009) Using hybrid particle swarm optimization for process planning problem. In: 2009 International Joint Conference on Computational Sciences and Optimization, 304–308 46. Eberhart RC, Shi Y (2004) Guest editorial: special issue on particle swarm optimization. IEEE Trans Evol Comput 8(3):201–203 47. Gao L, Peng CY, Zhou C, Li PG (2006) Solving flexible job shop scheduling problem using general particle swarm optimization. In: Proceeding of the 36th International Conference on Computers & Industrial Engineering, 3018–3027 48. Li WD, McMahon CA (2007) A simulated annealing based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95 49. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171 50. Kim YK (2003) A set of data for the integration of process planning and job shop scheduling. Available at http://syslabchonnam.ac.kr/links/data-pp&s.doc. April 2011 51. Shin KS, Park JO, Kim YK (2010) Test-bed problems for multi- objective FMS process planning using multi-objective symbiotic evolutionary algorithm. Available at http://syslab.chonnam.ac. kr/links/MO_FMS_PP_MOSEA.doc. March 2011
Chapter 6
A Hybrid Algorithm for Job Shop Scheduling Problem
6.1 Introduction
There are various types of scheduling problems in manufacturing systems, and effective scheduling methods can greatly improve the performance of a manufacturing system. Therefore, many researchers focus on proposing effective methods for different scheduling problems, and this is currently a hot topic in manufacturing systems research. The Job shop Scheduling Problem (JSP), which is widespread in real-world production systems, is one of the most general and important scheduling problems [1, 56]. In JSP, n jobs must be processed on m machines. All the operations of every job are processed in a predetermined sequence, and each operation has a specified machine with a specified processing time. The aim of JSP is to determine the operations' processing sequence on each machine and the start time of each operation by optimizing one or more objectives, such as makespan, due date, and so on. Compared with other scheduling types, its main feature is that the jobs have different process plans. JSP has been proved to be NP-hard [2]. Due to the large and complicated solution space and the process constraints, it is very difficult to find an optimal solution for JSP within a reasonable time, even for small instances. For example, the optimal solution of the well-known JSP benchmark FT10 was not found until a quarter of a century after the problem was originally proposed [3]. Because of the representativeness and complexity of JSP, many researchers have put effort into developing effective optimization methods for it. JSP was primarily handled by the branch and bound method, heuristic procedures based on priority rules, and the shifting bottleneck method. These methods are called exact methods. Their main disadvantage is that they cannot solve large-scale problems (those with more than 200 operations in total).
So, during the past decades, most researchers have turned their focus to approximation methods, including
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_6
numerous meta-heuristic algorithms, which have been used for JSP extensively. These algorithms can be divided into two categories: population-based algorithms and local search algorithms. The most popular and latest population-based algorithms include the Genetic Algorithm (GA), the Artificial Bee Colony algorithm (ABC), Particle Swarm Optimization (PSO), Biogeography-Based Optimization (BBO), the Teaching-Learning-Based Optimization algorithm (TLBO), and so on. The most popular local search algorithms include Tabu Search (TS), neighborhood search, Variable Neighborhood Search (VNS), and so on. Asadzadeh and Zamanifar proposed an agent-based parallel GA for JSP [4]. Yusof, Khalid, Hui, Yusof, and Othman proposed a hybrid parallel micro GA to solve the JSP [5]. Sels, Craeymeersch, and Vanhoucke presented a hybrid single and dual population GA for JSP [6]. Zhang, Song, and Wu proposed a novel ABC for solving JSP with the total weighted tardiness criterion [7]; in this algorithm, a tree search algorithm was devised to enhance the exploitation capability of ABC. Wang and Duan designed a hybrid BBO algorithm for JSP [8], which combined chaos theory and a 'searching around the optimum' strategy with the basic BBO. Nasad, Modarres, and Seyedhoseini presented a self-adaptive PSO for the lot-sizing JSP [9]; this algorithm adapts its own working parameters. Baykasoglu, Hamzadayi, and Kose designed a new TLBO algorithm for JSP [10]. From the above survey, we find that the main advantage of population-based algorithms is their powerful global searching ability, which comes from their multi-point searching process. However, for the same reason, their local searching ability is not good. Therefore, some researchers turned their attention to local search algorithms for JSP. Lei and Guo proposed a neighborhood search method for the dual-resource constrained interval JSP with an environmental objective [11].
Peng, Lu, and Cheng proposed a TS/path relinking algorithm for the JSP [12]. These algorithms have very powerful local searching ability, but because of their single-point searching process, their global searching ability is not very good. From the above analysis of population-based algorithms and local search algorithms, every algorithm has its own advantages and disadvantages, so hybrid algorithms that combine them have become more and more popular for JSP. Compared with local search, which uses a single-point search method, population-based algorithms have the characteristics of multi-point parallel search, sharing of current and historical information, and faster global convergence. For this reason, combining a population-based algorithm with an effective local search algorithm is a natural trend, and in recent years more and more researchers have designed hybrid algorithms for JSP. Goncalves, Mendes, and Resende utilized GA, a schedule generation procedure, and a local search procedure for JSP [13]. Zhang, Li, Rao, and Li developed an effective combination of GA and Simulated Annealing (SA) to solve the JSP [14]. Zhang, Rao, and Li proposed a hybrid GA based on a local search heuristic for the JSP [15]. Zhang, Li, Rao, and Guan developed a heuristic search approach combining SA and a TS strategy to provide a robust and efficient methodology for the JSP [15]. Ge, Sun, Liang, and Qian proposed a computationally effective algorithm combining PSO with the Artificial Immune System (AIS) for solving the minimum makespan problem of JSP [16]. Gao, Zhang, Zhang, and Li designed an efficient
hybrid evolutionary algorithm (memetic algorithm) with a novel local search to solve the JSP [17]. Eswaramurthy and Tamilarasi hybridized TS with ant colony optimization for solving JSP [18]. Zuo, Wang, and Tan proposed an Artificial Immune System (AIS) and TS-based hybrid strategy for JSP [19]. Ren and Wang proposed a hybrid GA based on a new algorithm for finding the critical path of a schedule and a local search operator [20]. Nasiri and Kianfar proposed a hybrid algorithm that combined global equilibrium search, path relinking, and TS to solve the JSP [21]. Ponsich and Coello (2013) hybridized differential evolution and TS for solving JSP [22]. Following the above survey, this chapter also proposes an effective hybrid algorithm for solving JSP, in which PSO, one of the most efficient population-based algorithms, is employed for the global search, and VNS, one of the most efficient local search algorithms, is designed for the local search. PSO, proposed by Kennedy and Eberhart (1995), is a population-based algorithm inspired by the behavior of bird flocks [23]. Due to its simple principle and good optimization ability (Vassiliadis & Dounias, 2009), PSO has attracted more and more attention in both continuous and combinatorial optimization problems (Schutte, Reinbolt, Fregly, Haftka, & George, 2004; Wang, Huang, Zhou, & Pang, 2003) [24–26]. Recently, many researchers have proposed hybrid algorithms based on PSO and local search methods to deal with different optimization problems, such as PSO-Tabu search (Gao, Peng, Zhou, & Li, 2006) and PSO-SA (Niknam, Amiri, Olamaei, & Arefi, 2009) [27, 28]. Some hybrid PSO algorithms have also been proposed to deal with JSP (Pongchairerks and Kachitvichyanukul, 2009; Sha & Hsu, 2006; Xia & Wu, 2006) [29–31].
VNS, proposed by Mladenovic and Hansen (1997), has quickly gained widespread successful utilization in many domains, such as extremal graphs (Caporossi & Hansen, 2000), the traveling salesman problem (Felipe, Ortuno, & Tirado, 2009), and the vehicle routing problem (Kuo & Wang, 2012) [32–35]. The research and application of VNS have also increased gradually in shop scheduling problems (Bagheri & Zandieh, 2011; Mehmet & Aydin, 2006; Yazdani, Amiri, & Zandieh, 2010) [36–38]. VNS was first applied to JSP by Mehmet and Aydin (2006) [37]. The crossover and mutation operators of GA were utilized as two neighborhood structures in this VNS, which used a single-point search method. Although the optimal solution could be obtained through multiple trials, the efficiency of the algorithm was not very good. VNS has two inherent limitations. The first is how to select and design efficient neighborhood structures, which greatly affect the performance of VNS. The design of the neighborhood structures has to be based on the features of the particular problem; in this chapter, for example, we need to design effective neighborhood structures based on the features of JSP. How to evaluate the performance of different neighborhood structures is a further problem; to solve it, this chapter proposes a new neighborhood structure evaluation method based on the logistic model. The other limitation is that VNS often gets trapped in a local optimum because of its single-point searching process. Some researchers use multiple trials to overcome this problem; in this chapter, we instead combine VNS with PSO, in which the PSO provides different searching points for VNS.
Almost all of these hybrid algorithms focus on the improvement of the local search procedure. As is well known, the neighborhood structure plays a very important role in local search: each time a good neighborhood structure has been proposed, better solutions to the JSP benchmarks could be found. However, most existing hybrid algorithms select the neighborhood structures randomly in the local search procedure and simply combine them with the original algorithm directly. To overcome this blind selection of neighborhood structures, a new neighborhood structure evaluation method based on the logistic model has been developed in this chapter to guide the selection process. This method is utilized to evaluate the performance of different neighborhood structures, and the neighborhood structures with good performance are selected as the main neighborhood structures in VNS. Based on this evaluation method, a hybrid PSO algorithm based on VNS (HPV) is proposed to solve JSP in this chapter. The remainder of this chapter is organized as follows: Sect. 6.2 formulates the JSP. Section 6.3 elaborates on the proposed hybrid algorithm. The neighborhood structure evaluation method based on the logistic model is given in Sect. 6.4. Experiments are presented in Sect. 6.5 to compare the performance of the proposed algorithm with some other state-of-the-art reported algorithms, and Sect. 6.6 gives conclusions and future research.
6.2 Problem Formulation
In this chapter, the JSP contains a set of jobs J = {1, …, j, …, n}, a set of machines M = {1, …, k, …, m}, and a set of operations O = {1, …, i, …, n × m}. The objective is to minimize the makespan, that is, the finish time of the last operation completed in the schedule. This objective, which represents the production efficiency of the workshop, is one of the classical and most used criteria in JSP. For the conceptual model of the JSP, the notations are described as follows [13]:
n: the number of jobs;
m: the number of machines (the number of operations of one job);
c_i: the completion time of operation i;
p_i: the processing time of operation i on its given machine;
d_i: the set of all predecessor operations of operation i;
f_{i,k}: f_{i,k} = 1 if operation i is processed by machine k, otherwise f_{i,k} = 0;
A(t): the set of operations being processed at time t.
The mathematical model of JSP can be defined as follows:

\text{Minimize } F = c_{n \times m} \quad (6.1)

subject to

c_l \le c_j - p_j, \quad l \in d_j \quad (6.2)

\sum_{i \in A(t)} f_{i,k} \le 1, \quad k = 1, \ldots, m, \; t \ge 0 \quad (6.3)

c_i \ge 0 \quad (6.4)
Equation (6.1) is the objective function F. The precedence relationship constraint is defined in Eq. (6.2), while Eq. (6.3) shows that each machine can process at most one job at a time and each job is processed by only one machine at a time. Finally, Eq. (6.4) forces the finish times to be nonnegative.
A 3 × 3 JSP example is given in Table 6.1. A scheduling scheme can be visualized by a Gantt chart; Fig. 6.1 is the Gantt chart of this example, in which the horizontal axis represents the processing time and the vertical axis represents the machines.

Table 6.1 A 3 × 3 JSP example: machining machine (processing time)

Job No. | Operation 1 | Operation 2 | Operation 3
1       | 1 (8)       | 2 (2)       | 3 (4)
2       | 3 (2)       | 2 (5)       | 1 (4)
3       | 2 (9)       | 3 (9)       | 1 (4)
Fig. 6.1 Gantt chart of a 3 × 3 JSP example
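The completion times behind such a Gantt chart can be computed by decoding an operation sequence into start and finish times. The following sketch (an illustrative decoder, not necessarily the exact procedure used later in the chapter) evaluates the 3 × 3 example of Table 6.1 with an operation-based encoding, in which each job index appears once per operation and operations are scheduled from left to right:

```python
def makespan(jobs, seq):
    # jobs[j] is a list of (machine, time) pairs, one per operation of
    # job j; seq is a list of job indices, each appearing once per
    # operation (operation-based encoding).
    job_ready = [0] * len(jobs)   # finish time of each job's last operation
    mach_ready = {}               # finish time of each machine
    next_op = [0] * len(jobs)     # next operation index of each job
    for j in seq:
        m, p = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(m, 0))
        job_ready[j] = mach_ready[m] = start + p
        next_op[j] += 1
    return max(job_ready)

# The 3 x 3 instance of Table 6.1: (machine, processing time) per operation.
jobs = [[(1, 8), (2, 2), (3, 4)],
        [(3, 2), (2, 5), (1, 4)],
        [(2, 9), (3, 9), (1, 4)]]

# Schedule the jobs in round-robin order J1, J2, J3, J1, J2, J3, ...
print(makespan(jobs, [0, 1, 2] * 3))  # 24
```

Different sequences decode to different schedules; the optimization task is to search over such sequences for the one with the smallest makespan.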
6.3 Proposed Hybrid Algorithm for JSP
In this chapter, the proposed hybrid algorithm is designed based on the General Particle Swarm Optimization (GPSO) model. In order to balance the diversification and intensification of the proposed hybrid algorithm, an individual extreme library is added to PSO, and VNS is utilized as the local search method. The corresponding updating method and random search strategy for JSP are also designed. The basic principles of PSO and VNS, the workflow, and the details of the hybrid algorithm are described in the following sections.
6.3.1 Description of the Proposed Hybrid Algorithm
To describe the proposed hybrid algorithm clearly, the basic principles of PSO and GPSO are given in Sects. 6.3.1.1 and 6.3.1.2. Then, the workflow of the proposed hybrid algorithm is provided in Sect. 6.3.1.3.
6.3.1.1
Traditional PSO Algorithm
In order to give a clearer description of traditional PSO, some definitions are first given as follows [39].
Definition 1 (Particle's position). The particle is the basic unit of PSO; a particle's position denotes a candidate solution in the solution space. In the D-dimensional search space, the ith particle's position can be expressed as X_i = (x_{i,1}, x_{i,2}, …, x_{i,D}).
Definition 2 (Particle's velocity). The ith particle's velocity can be expressed as V_i = (v_{i,1}, v_{i,2}, …, v_{i,D}), which denotes the change of the particle's position in one iteration; the components of the velocity correspond to those of the position.
Definition 3 (Personal best position). The personal best position, i.e., the best position the ith particle has found so far, can be expressed as P_i = (p_{i,1}, p_{i,2}, …, p_{i,D}).
Definition 4 (Global best position). The global best position, i.e., the best position discovered by the whole swarm, can be expressed as P_g = (p_{g,1}, p_{g,2}, …, p_{g,D}).
At each time step t, each particle's velocity and position are updated so that the particle moves to a new position. Based on these definitions, the velocity-position update formulas, the core idea of PSO, are employed to calculate the velocity and position:
6.3 Proposed Hybrid Algorithm for JSP
113
V_i^{t+1} = \omega V_i^t + C_1 \times Rand() \times (P_i^t - X_i^t) + C_2 \times Rand() \times (P_g^t - X_i^t) \quad (6.5)

X_i^{t+1} = X_i^t + V_i^{t+1} \quad (6.6)

In Eq. (6.5), ω is the inertia weight, P_i^t is the personal best position at time t, and P_g^t is the global best position at time t. Among the three parts of Eq. (6.5), the first part indicates the random search of the particle in the solution space, the second part is called the self-learning of the particle, and the last part is the social learning of the particle. C_1 and C_2 are the learning factors, defined as the constant 2, and Rand() is a uniform random number in (0, 1). C_1 × Rand() and C_2 × Rand() generate random perturbations, making each particle oscillate around its personal best position and the global best position. The main steps of traditional PSO are described as follows: Step 1 Initialize a population of N particles with random positions and random velocities with D dimensions in a given search space. Step 2 Repeat the following steps until a specified stop condition (the optimal solution is found or the maximal number of iterations is reached) is met. Step 2.1 Evaluate the fitness of each particle in the population according to the objective function of the problem. Step 2.2 Update the personal best position of each particle and the global best position of all particles. Step 2.3 Update the velocity and the position of each particle.
6.3.1.2 General PSO Model
In traditional PSO, particles are updated according to the velocity–displacement model, which was developed for continuous optimization problems. In order to extend PSO to discrete combinatorial optimization problems, Gao et al. (2006) proposed a general PSO (GPSO) model by omitting the concrete velocity–displacement updating method of traditional PSO [27]. The main steps of GPSO are described as follows:

Step 1 Initialize the population randomly.
Step 2 Evaluate each particle in the population.
Step 3 Repeat the following steps until a specified stop condition (the optimal solution is found or the maximal number of iterations is reached) is met.
Step 3.1 Update the current particle using its own experience.
Step 3.2 Update the current particle using the whole population's experience.
Step 3.3 Perform local search, balanced with random search.
6 A Hybrid Algorithm for Job Shop Scheduling Problem
In this model, different updating methods and local search methods can be designed for the specific combinatorial optimization problem at hand, which greatly extends the application area of PSO. Based on the features of different problems, different methods can be developed; in this chapter, several updating methods and local search methods are developed or employed based on the features of JSP.
6.3.1.3 Workflow of Hybrid Algorithm
According to the basic principles of GPSO and VNS, the main workflow of the hybrid PSO and VNS algorithm (HPV) for JSP is shown in Fig. 6.2. The main steps of the proposed algorithm for JSP are described as follows:

Step 1 Initialize the parameters of the HPV algorithm, including the maximum number of iterations and the population size.
Step 2 Generate the initial population. Initialize the population randomly according to the encoding scheme in Sect. 6.3.2. Evaluate the fitness of each particle in the initial population and retain the best 3 particles in the individual extreme library. At the same time, retain 20% of the initial population, chosen with distinct fitness values, in the population extreme library.
Step 3 Global search of the particles. For each particle in the population, apply the updating strategy until all particles have been updated; the new population is then generated. The details of the updating strategy are described in Sect. 6.3.3.
Step 4 Local search of the particles. VNS is used as the local search method of the hybrid algorithm for each particle. The best particle found by the local search replaces the current particle. The details of the strategy are described in Sect. 6.3.4.
Step 5 Evaluate and update the particles in the population. If a particle in the new population is better than the worst particle in the individual extreme library or the population extreme library, the particle is inserted into the corresponding library and the worst particle is removed, so the sizes of the two libraries remain constant.
Step 6 If the stop condition (the best solution is found or the maximal number of iterations is reached) is not met, go to Step 3; otherwise, output the best result.
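The constant-size library maintenance of Steps 2 and 5 can be sketched as follows (the (fitness, particle) tuple representation and the function name are illustrative assumptions; lower fitness, i.e., smaller makespan, is treated as better):

```python
def update_library(library, candidate, max_size):
    """Constant-size extreme library maintenance (Step 5 sketch).
    library: list of (fitness, particle) pairs; lower fitness is better.
    If the candidate beats the worst member, the worst is replaced, so the
    library size never grows beyond max_size."""
    library = sorted(library)              # best (smallest fitness) first
    if len(library) < max_size:
        library.append(candidate)
    elif candidate[0] < library[-1][0]:
        library[-1] = candidate            # replace the current worst
    return sorted(library)
```

The same routine serves both the individual extreme library (size 3) and the population extreme library (20% of the population) described in Step 2.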
6.3.2 Encoding and Decoding Scheme

For each particle in the population, the operation-based encoding method of JSP is used as the encoding strategy [40]. For an n-job, m-machine problem, each particle contains n × m elements, and each element is an integer. This representation uses an unpartitioned permutation with m repetitions of job numbers, so each job number appears m times in the particle. Scanning the particle from left to right, the jth occurrence of a job number refers to the jth operation in the technological sequence of that job; a particle can thus be decoded into a schedule directly. The decoding scheme in this chapter scans the elements from left to right and shifts the operations to the left as compactly as possible. Schedules fall into three classes: non-delay schedules, active schedules, and semi-active schedules. It has been verified that the set of active schedules contains an optimal schedule for regular objectives (including the makespan used in this chapter). Therefore, each particle is decoded into an active schedule in the decoding procedure to reduce the search space [17, 41].

Fig. 6.2 Workflow of the proposed HPV for JSP
Because the encoding methods of GPSO and the N5 neighborhood are different, the preference-list-based representation is used as the encoding strategy when the N5 neighborhood serves as the local search neighborhood [42].
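The left-to-right scan of the operation-based encoding can be sketched as a decoder (a minimal semi-active decoder; the chapter's decoder additionally shifts operations into earlier idle intervals to obtain active schedules, and all names and data structures here are illustrative):

```python
def decode(chromosome, routing, proc_time):
    """Decode an operation-based chromosome into a semi-active schedule.
    chromosome: list of job numbers, each appearing m times; the j-th
    occurrence of job i denotes operation O_{i,j}.
    routing[i][j]  : machine required by the j-th operation of job i
    proc_time[i][j]: its processing time
    Returns (start_times, makespan), with start_times[(i, j)] = start of O_{i,j}."""
    next_op = {}      # next operation index of each job
    job_ready = {}    # completion time of each job's latest operation
    mach_ready = {}   # release time of each machine
    start = {}
    for job in chromosome:
        j = next_op.get(job, 0)
        machine = routing[job][j]
        s = max(job_ready.get(job, 0), mach_ready.get(machine, 0))
        start[(job, j)] = s
        job_ready[job] = s + proc_time[job][j]
        mach_ready[machine] = job_ready[job]
        next_op[job] = j + 1
    return start, max(job_ready.values())
```

For a 2-job, 2-machine example with routing {0: [0, 1], 1: [1, 0]} and processing times {0: [3, 2], 1: [2, 4]}, the chromosome [0, 1, 0, 1] decodes to a schedule with makespan 7.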
6.3.3 Updating Strategy

The main steps of the updating strategy are described as follows:

Step 1 A parent particle that has a fitness value different from the current particle is selected from the individual extreme library. Two new particles are then created by crossover operations between the selected particle and the current particle.
Step 2 A parent particle is selected randomly from the population extreme library, and two new particles are generated by crossover operations between the selected particle and the current particle.
Step 3 A mutation operation is applied to the current particle.
Step 4 The best of the five created particles is selected to replace the current particle.

POX (Precedence Operation Crossover) is used as the crossover operation [15], and the mutation operation is the insertion mutation method, which can expand the current search range [15].
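A common form of the POX operator can be sketched as follows (the job-subset choice and the function signature are illustrative; only one child is shown, the second being obtained by swapping the roles of the parents):

```python
import random

def pox(parent1, parent2, job_set=None):
    """POX sketch for operation-based chromosomes: genes whose job is in
    job_set keep their positions from parent1; the remaining positions are
    filled with parent2's other genes in their original order."""
    if job_set is None:
        jobs = sorted(set(parent1))
        job_set = set(random.sample(jobs, max(1, len(jobs) // 2)))
    filler = iter(g for g in parent2 if g not in job_set)
    return [g if g in job_set else next(filler) for g in parent1]
```

Because both parents contain the same multiset of job numbers, the child is always a feasible chromosome: the operation order of the preserved jobs comes from one parent and that of the remaining jobs from the other.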
6.3.4 Local Search of the Particle

The design of VNS is based on regularities observed in combinatorial optimization problems: the local optimal solutions of different neighborhood structures are not necessarily the same; a global optimal solution is a local optimal solution with respect to all neighborhood structures; and the local optimal solutions of different neighborhood structures tend to be relatively close to each other [35]. The basic idea of VNS is to change the neighborhood structures systematically during the local search so as to improve the current solution. The main steps of VNS are given as follows:

Step 1 Generate an initial solution x. Define a series of neighborhood structures N_k, k = 1, 2, …, k_max.
Step 2 Set k = 1 and repeat the following steps until k = k_max:
Step 2.1 Shaking. Generate a neighborhood solution x' from the kth neighborhood N_k of x randomly.
Step 2.2 Local Search. Find the best neighbor x'' of x' in the current neighborhood N_k.
Step 2.3 Move or Not. If x'' is better than x, let x = x'' and k = 1, restarting the local search from N_1; otherwise, set k = k + 1.

Two effective neighborhood structures (N1 and N2) are selected by a new neighborhood structure evaluation method. N1 serves as the principal neighborhood and N2 as the subordinate neighborhood in VNS. The details of the neighborhood structure evaluation method are given in Sect. 6.4.
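The VNS steps above can be sketched generically as follows (representing each neighborhood as a function returning a list of candidate solutions, with a fixed round budget, are illustrative assumptions):

```python
import random

def vns(x, neighborhoods, fitness, max_rounds=100):
    """VNS skeleton following Steps 1-2.3 in the text.
    neighborhoods: list of functions N_k(x) returning a list of neighbours of x;
    fitness: objective to minimize. max_rounds is an illustrative budget."""
    for _ in range(max_rounds):
        k = 0
        while k < len(neighborhoods):
            x_shake = random.choice(neighborhoods[k](x))          # Step 2.1: shaking
            x_best = min(neighborhoods[k](x_shake), key=fitness)  # Step 2.2: local search
            if fitness(x_best) < fitness(x):                      # Step 2.3: move or not
                x, k = x_best, 0                                  # restart from N_1
            else:
                k += 1                                            # next neighborhood
    return x
```

For instance, minimizing x² over the integers with N_1(x) = {x − 1, x + 1} and N_2(x) = {x − 2, x + 2} drives the solution toward 0.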
6.4 The Neighborhood Structure Evaluation Method Based on Logistic Model

6.4.1 The Logistic Model

The solution space of a scheduling problem has intrinsic properties. If these intrinsic properties are ignored, an algorithm easily falls into a local optimum [43]. Studying the solution space can guide the design of algorithms to avoid this situation. After Manderick, Weger, and Spiessens first used fitness landscape theory to analyze GA, more and more researchers turned their attention to the solution spaces of different problems and applied the concept of the fitness landscape to describe problem characteristics and analyze algorithm performance [44–49]. Wen, Gao, and Li introduced the logistic model, a core theory in population ecology, into JSP [50]. It was found that the cumulative distribution of makespan matches the S curve very closely, and that this previously neglected distribution property reflects the structure of the fitness landscape of JSP accurately and stably, whether for a static randomly generated space or a dynamic algorithm search space. Compared with previous methods, the logistic model reflects the intrinsic properties well and distinguishes the difficulty level of a problem exactly. The logistic model proved to be a simple and accurate way to describe problem characteristics, and it can be used to guide the design and evaluation of algorithms. The following form of the logistic model is used:

y = k / (1 + e^{−(a + bx)})   (6.7)
In this chapter, the logistic model proposed by Wen [50] is extended to describe the distribution of solution fitness during random neighborhood search, and it is then applied to evaluate and select the neighborhood structures.
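Eq. (6.7) and its inflection point can be expressed directly: setting a + bx = 0 gives x* = −a/b, where the curve reaches y = k/2 (a minimal sketch; the fitting of a, b, and k to data, e.g. by least squares, is not shown):

```python
import math

def logistic(x, k, a, b):
    """Logistic model of Eq. (6.7): y = k / (1 + exp(-(a + b*x)))."""
    return k / (1.0 + math.exp(-(a + b * x)))

def inflection_point(a, b):
    """Characteristic point x* of the S curve: the inflection of Eq. (6.7).
    Setting a + b*x = 0 gives x* = -a/b, where the curve reaches y = k/2."""
    return -a / b
```

This x* is the characteristic point used below to compare neighborhood structures.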
Fig. 6.3 Trim of the critical block in N5 neighborhood
6.4.2 Defining Neighborhood Structures

There are several definitions of neighborhood structures in JSP. In this research, five neighborhood structures are considered as candidates in the VNS design: the N5 neighborhood based on critical blocks, the random whole neighborhood, the random insert neighborhood, the random reverse neighborhood, and the two-point exchange neighborhood. These five neighborhood structures are described as follows.
6.4.2.1 N5 Neighborhood
The N5 neighborhood was proposed by Nowicki and Smutnicki [1]. It is constructed from the following moves on the critical blocks of the critical path: swapping the last two successive operations of the first critical block, swapping the first two successive operations of the last block, and swapping both the first two and the last two operations of every middle block. A critical block is a set of successive critical operations processed by the same machine. Figure 6.3 shows the moves of the N5 neighborhood on a critical block; it covers all the possible swaps within the critical block.
6.4.2.2 Random Whole Neighborhood
The Random Whole Neighborhood (RWN), derived from the local-search-based mutation operation proposed by Cheng [55], evaluates the full permutation of λ different elements of the particle [45]. In this way, the aggregation of optimal solutions in the neighborhood can be exploited; on the other hand, this neighborhood makes it easy to jump out of local optima. Figure 6.4 gives an example of the random whole neighborhood with λ = 3; the shaded parts are the 3 selected operations.
6.4.2.3 Random Reverse Neighborhood
The Random Reverse Neighborhood (RRN) is a new neighborhood structure, which can be seen as a combination of multiple exchange and insert moves. Its construction can be described as follows:
Fig. 6.4 Random whole neighborhood
Step 1 Select multiple pairs of operations randomly.
Step 2 Reverse the selected operations in the encoding sequence to form a new sequence.
6.4.2.4 Two-Point Exchange Neighborhood
The Two-Point Exchange Neighborhood (TEN) is a simple neighborhood that randomly exchanges two selected operations in the encoding sequence.
6.4.2.5 Random Insert Neighborhood
The Random Insert Neighborhood (RIN) is a simple neighborhood. It is constructed by randomly selecting one element of the particle and inserting it into another position of the encoding sequence.
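The three simple neighborhoods can be sketched as moves on the encoded sequence (reading RRN as reversing the segment between each selected pair of positions is an assumption; all three moves preserve the multiset of job numbers, so the operation-based encoding stays feasible):

```python
import random

def two_point_exchange(seq):
    """TEN: swap two randomly chosen positions."""
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def random_insert(seq):
    """RIN: remove one element and re-insert it at another position."""
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def random_reverse(seq, pairs=1):
    """RRN (one reading): for each selected pair of positions, reverse the
    segment between them."""
    s = list(seq)
    for _ in range(pairs):
        i, j = sorted(random.sample(range(len(s)), 2))
        s[i:j + 1] = reversed(s[i:j + 1])
    return s
```

Each function returns a new sequence, leaving the original particle unchanged.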
6.4.3 The Evaluation Method Based on Logistic Model

It is difficult to evaluate the performance of neighborhood structures. Traditionally, the improvement rate between two neighborhoods during the calculation process has been used to obtain a preliminary judgment, but this method is time-consuming and not necessarily accurate. In this chapter, the logistic model proposed by Wen et al. [50] is extended to evaluate and select the neighborhood structures.
The workflow of the evaluation method based on the logistic model is described as follows:

Step 1 Generate initial solutions randomly. In this section, 100 initial solutions are generated for FT10 as an example.
Step 2 Define the five neighborhood structures described in Sect. 6.4.2 as N_k.
Step 3 Select s* from the 100 solutions as the initial solution of the local search.
Step 4 Find a neighbor s' ∈ N_k(s*) that satisfies F(s') < F(s*), where F(s) is the fitness of solution s. Then set s* = s'.
Step 5 The termination condition of the iteration is ∀s ∈ N_k(s*), F(s) ≥ F(s*). If it is not satisfied, go to Step 4; otherwise, go to Step 6.
Step 6 Plot the makespan cumulative distribution curve from the initial solutions and the solutions updated during the iterations; it follows the logistic distribution. The horizontal axis represents the normalized makespan and the vertical axis the normalized cumulative count of makespan values. The characteristic point x* is the inflection point of the S curve.

Figure 6.5 compares the search efficiency of the various neighborhood structures. Different neighborhood structures show different performance under the same search strategy. If a majority of the solutions updated during the iterations of neighborhood A are better than those of neighborhood B, neighborhood A will reach the optimal solution more quickly; in that case, the peak of neighborhood A lies to the left of that of neighborhood B in the histogram, and the x* of neighborhood A is smaller than that of neighborhood B on the S curve. From Fig. 6.5, the descending order of efficiency is: N5, RWN (λ = 5), RIN, RRN, and TEN. To illustrate the credibility of this result, the best neighborhood, N5, is set as the benchmark to test the neighborhood improvement rate; for example,
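The curve of Step 6 can be built from the collected makespans, for instance with min-max normalization on both axes (the normalization scheme is an assumption; the chapter does not spell it out):

```python
def cumulative_distribution(makespans):
    """Normalized cumulative distribution of makespan values (Step 6 sketch).
    Min-max normalization on both axes is an assumption. Returns (x, y)
    pairs sorted by makespan, ready for fitting Eq. (6.7)."""
    values = sorted(makespans)
    lo, hi = values[0], values[-1]
    n = len(values)
    xs = [(v - lo) / (hi - lo) for v in values]   # normalized makespan
    ys = [(i + 1) / n for i in range(n)]          # normalized cumulative count
    return list(zip(xs, ys))
```

Fitting Eq. (6.7) to these points then yields the characteristic point x* used to rank the neighborhood structures.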
Fig. 6.5 Search efficiency comparison of different neighborhood structures
Table 6.2 Search result comparison of different neighborhood structures

Neighborhood   N5       RWN      RIN      RRN      TEN
x*             0.2185   0.2386   0.3053   0.3004   0.3313
MIR            /        0.1824   0.1350   0.1276   0.0936
if 10 of the 100 local optimal solutions produced by N5 are improved by another neighborhood structure, the Makespan Improvement Rate (MIR) is 10%. The results are shown in Table 6.2, which implies that the S curve generated by the logistic model reflects the search efficiency of the neighborhood structures well. Based on the S curves obtained by the new neighborhood structure evaluation method, N5 and RWN show better performance for solving FT10 than the other three neighborhood structures. Therefore, N5 and RWN are selected as the main neighborhood structures (N1 and N2) in VNS. Although a larger value of λ in RWN leads to better search ability, it makes it harder to balance the breadth of global search against the depth of local search, and it may even take up the computation time of the other neighborhood. As a result, λ is set to 4 when the problem size is small (fewer than 200 operations) and to 5 for large-scale problems.
6.5 Experiments and Discussion

In order to verify the effectiveness of the neighborhood structure evaluation method in algorithm design, Sect. 6.5.1 compares the search ability of PSO hybridized with VNS, guided by the neighborhood structure evaluation method, against PSO hybridized with a single randomly selected neighborhood structure. Then, a set of famous JSP benchmark instances is used to evaluate the performance of the proposed algorithm, and comparisons with other state-of-the-art reported algorithms are presented in Sect. 6.5.2. To further investigate the performance of the proposed HPV algorithm, the convergence rates of HPV on 3 instances of different scales are compared with those of other benchmark algorithms in Sect. 6.5.3. The proposed algorithm is implemented in C++ and runs on a PC with an Intel(R) Core(TM)2 2.16 GHz processor and 4 GB of memory.
6.5.1 The Search Ability of VNS

First, two hybrid PSO algorithms are designed by selecting a single neighborhood structure randomly, with N5 and RWN chosen as the neighborhood, respectively. Because FT10 is the most classical benchmark in JSP and represents the features of the landscape well, it is selected as the example instance to test the search abilities
Fig. 6.6 The search ability comparison among different hybrid algorithms
Table 6.3 Efficiency of the neighborhood structures

                 PSO + N5           PSO + VNS(λ = 4)    PSO + RWN(λ = 4)
Indicator        Step     Time(s)   Step     Time(s)    Step      Time(s)
Average          58.4     37.7      13.2     31.45      235       103.3
Variance         1269.93  522.32    52.69    260.47     3873.22   2319.43
of the HPV and the two hybrid PSO algorithms. Figure 6.6 compares the search ability of HPV with the two single-neighborhood hybrid PSO algorithms; in RWN, λ is set to 4. Table 6.3 lists the average and variance of the iteration steps and the consumed time over 20 trials to reach the optimum. The results show that when the VNS designed with the new neighborhood structure evaluation method is used as the local search algorithm, the number of iterations is smaller, the search is faster, and the search ability is more stable than with the other two methods. It can be said that the neighborhood evaluation method based on the logistic model plays a key role in the design of the hybrid algorithm.
6.5.2 Benchmark Experiments

The JSP benchmark problems are taken from the OR-Library to verify the effectiveness of HPV [51]. Instances FT06, FT10, FT20, LA01–LA40, and ORB01–ORB10 are considered in the numerical experiments. These instances are the most famous benchmark problems of JSP, and most reported methods used these JSP benchmark
Table 6.4 Experimental results of FT and LA benchmarks (Cmax represents the makespan, RD%)
problems to evaluate their performance. The experimental results, compared with those of other state-of-the-art reported algorithms, are listed in Tables 6.4 and 6.5 (bold values mark the cases where the proposed HPV obtains the best result among all compared algorithms). 10 independent runs are conducted for the proposed algorithm on every problem, and the reported solution is the best of the 10 runs. The parameters of the proposed HPV are set as follows: the population size is 200 and the maximum number of iterations is 50. The results for instances FT06, FT10, FT20, and LA01–LA40 are compared with those of other state-of-the-art reported algorithms, including MPSO [52], HIA [16], GASA [14], PGA [4], HGA [20], MA [17], HABC [7], HBBO [8], and TLBO [10]. The results for instances ORB01–ORB10 are compared with those of other state-of-the-art reported algorithms, including HGA' [41], GRASP [53], TSSB [54], TSAB' [1], PGA [4], and HBBO [8]. The columns of the two tables include the name of each benchmark problem (Problem), the problem size (Size), and the value of the best-known solution of the instance (BKS). Tables 6.4 and 6.5 give the best solutions of makespan (Cmax)

Table 6.5 Experimental results of ORB benchmarks (Cmax represents the makespan, RD%)
Fig. 6.7 Gantt chart of an optimal solution of FT10 (makespan = 930)
obtained by the compared algorithms. Tables 6.4 and 6.5 also display, for the compared algorithms, the relative deviation with respect to the best-known solution, RD% = (makespan − BKS) / BKS × 100. The CPU time (s) of HPV is also provided in Tables 6.4 and 6.5.

From Table 6.4, it can be seen that HPV finds the best-known solutions of 41 of the 43 experiment problems, i.e., of 95% of the problem instances. The results obtained by the proposed HPV are much better than those of MPSO, HIA, PGA, HGA, MA, HABC, HBBO, and TLBO. Compared with the GASA algorithm, the result of the proposed HPV for LA29 is worse, but its results for LA36 and LA39 are better; the two algorithms therefore perform comparably on these benchmark problems, and both HPV and GASA are effective for JSP. Figures 6.7 and 6.8 show the Gantt charts of the optimal solutions obtained by the proposed HPV for FT10 and LA40.

From Table 6.5, HPV finds all the best-known solutions of the ORB benchmarks, i.e., of 100% of the problem instances. Among the compared algorithms, only HPV obtains all of the best-known solutions of the ORB benchmarks. Figure 6.9 shows the Gantt chart of the optimal solution obtained by the proposed HPV for ORB02.
6.5.3 Convergence Analysis of HPV

To further investigate the performance of the proposed HPV algorithm, the convergence rates of HPV on 3 instances of different scales are compared with those of other benchmark algorithms, including PSO, BBO, and HBBO [8]. Due to the stochastic nature of these meta-heuristic algorithms, 10 independent runs are
Fig. 6.8 Gantt chart of an optimal solution of LA40 (makespan = 1224)
Fig. 6.9 Gantt chart of an optimal solution of ORB02 (makespan = 888)
executed on each instance for the HPV algorithm. To ensure fair comparisons, the population sizes and the numbers of iterations of all four methods are set to be the same. For the first and second instances, the population size is set to 50 and the number of generations to 1000. For the third, large-scale instance, the population size and the number of generations are set to 100 and 2500, respectively. The first instance is a 6 × 6 JSP benchmark [8]. Figure 6.10 presents the convergence curves obtained by the four algorithms. To evaluate the quality of the solutions obtained by each algorithm in this experiment, we use the best fitness value to
Fig. 6.10 Convergence curves of all the algorithms for the 6 × 6 JSP instance
measure the convergence performance of each algorithm, and we use the generation number to track the trend of the fitness value during the search. Figure 6.10 shows that the proposed HPV algorithm converges quickly to the optimal solution compared with the other algorithms, which indicates that HPV has good convergence performance.

The second instance is a 10 × 5 JSP benchmark [8]. Figure 6.11 presents the convergence curves obtained by the four algorithms. It can be observed that HPV and HBBO find the optimal solution, while PSO and BBO both fail to obtain the best result. Note that HPV reaches the optimal point in fewer than 50 generations, while HBBO needs more than 200 generations; HPV is thus superior to its rivals in terms of convergence performance.

The third instance is a 20 × 10 JSP benchmark [8]. Figure 6.12 likewise presents the convergence curves obtained by the four algorithms. From this figure, we can clearly observe that HPV significantly outperforms the other optimization algorithms in

Fig. 6.11 Convergence curves of all the algorithms for the 10 × 5 JSP instance
Fig. 6.12 Convergence curves of all the algorithms for 20 × 10 JSP instance
terms of convergence performance, since HPV finds a new best-so-far solution for this instance. Figure 6.13 shows the Gantt chart of the optimal solution obtained by the proposed HPV for the 20 × 10 JSP instance. To make the results more convincing and clearer, the best makespan values of the four methods on the above three instances are listed in Table 6.6 (bold values mark the cases where the proposed HPV obtains the best result among all compared algorithms). The results of CPLEX, PSO, BBO, and HBBO are taken from reference [8]. From Table 6.6, it is obvious that HPV is better than CPLEX, PSO, BBO, and HBBO, as it finds the best results for all three instances; in particular, for the 20 × 10 instance, HPV finds a new best solution. The results imply that HPV is more powerful than CPLEX, PSO, BBO, and HBBO for solving JSP. The CPU time (s) of HPV provided in Table 6.6 shows that HPV can solve
Fig. 6.13 Gantt chart of an optimal solution of 20 × 10 JSP instance (makespan = 1297)
Table 6.6 The best makespan values of different algorithms for 3 instances

Instance   CPLEX   PSO    BBO    HBBO   HPV    HPV CPU(s)
6 × 6      55      55     55     55     55     2.41
10 × 5     630     685    644    630    630    3.67
20 × 10    1325    1694   1423   1307   1297   40.42
these 3 instances in a short time. The 6 × 6 JSP benchmark in Table 6.6 is the same as FT06 in Table 6.4; because the algorithm parameters of HPV are set differently, the CPU time (s) of HPV differs between Tables 6.6 and 6.4.
6.5.4 Discussion

Based on the above experiments, HPV is effective for solving JSP and has good convergence performance. The proposed algorithm balances diversification and intensification well, which is the reason for HPV's prominent performance on JSP compared with other state-of-the-art reported algorithms. Diversification is ensured by adding the individual extreme library to the PSO algorithm: when the algorithm has strong local search ability, the updated particle is often identical to the individual extreme, so the crossover operation becomes meaningless; adding the individual extreme library is therefore necessary to let the particles learn from previous search experience. Intensification is realized by applying VNS as the local search strategy. The new neighborhood structure evaluation method plays a key role in the design of the VNS: the neighborhood structures used in the VNS are selected through this method, which ensures that the VNS used in HPV is tailored to JSP.
6.6 Conclusions and Future Works

In this chapter, a hybrid PSO and VNS algorithm called HPV is proposed to solve JSP. In order to overcome the blind selection of neighborhood structures during hybrid algorithm design, a new neighborhood structure evaluation method based on the logistic model has been developed to guide the selection of neighborhood structures. The experimental results show that using VNS as the local search strategy, designed with the new neighborhood structure evaluation method, is more effective than using a single randomly selected neighborhood structure, and that the proposed hybrid algorithm performs well on JSP compared with other state-of-the-art reported algorithms, which also verifies the effectiveness and efficiency of the new neighborhood structure evaluation method in hybrid algorithm design.
The contributions of this chapter can be summarized as follows:

(1) To overcome the lack of a neighborhood structure evaluation method in previous hybrid algorithm designs, a new neighborhood structure evaluation method based on the logistic model is designed. The experimental results verify the effectiveness of this evaluation method, which can be used in other hybrid algorithm designs for solving scheduling problems, including JSP.
(2) Based on the proposed evaluation method, HPV is proposed to solve JSP. The experimental results show that the proposed algorithm performs better than the compared algorithms except for GASA; both HPV and GASA perform well on JSP. The method can be applied to other scheduling problems, such as the flexible job shop scheduling problem, the open shop scheduling problem, and so on.

Although the proposed hybrid algorithm succeeds in solving JSP, it also has some limitations. First, the evaluation method is designed on the basis of a preliminary study of the essential features of JSP, so it is necessary to deepen the research on the essential characteristics of scheduling problems. Second, only five neighborhood structures are discussed in this chapter; more neighborhood structures could be explored in the future. Another future work is to design more scientific and efficient intelligent optimization algorithms based on fitness landscape theory.
References

1. Nowicki E, Smutnicki C (1996) A fast taboo search algorithm for the job shop problem. Manage Sci 42:797–813
2. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1:117–129
3. Adams J, Egon B, Zawack D (1988) The shifting bottleneck procedure for job shop scheduling. Manag Sci 34:391–401
4. Asadzadeh L, Zamanifar K (2010) An agent-based parallel approach for the job shop scheduling problem with genetic algorithms. Math Comput Model 52:1957–1965
5. Yusof R, Khalid M, Hui GT, Yusof SM, Othman MF (2011) Solving job shop scheduling problem using a hybrid parallel micro genetic algorithm. Appl Soft Comput 11:5782–5792
6. Sels V, Craeymeersch K, Vanhoucke M (2011) A hybrid single and dual population search procedure for the job shop scheduling problem. Eur J Oper Res 215:512–523
7. Zhang R, Song S, Wu C (2013) A hybrid artificial bee colony algorithm for the job shop scheduling problem. Int J Prod Econ 141:167–178
8. Wang X, Duan H (2014) A hybrid biogeography-based optimization algorithm for job shop scheduling problem. Comput Ind Eng 73:96–114
9. Nasad MK, Modarres M, Seyedhoseini SM (2015) A self-adaptive PSO for joint lot sizing and job shop scheduling with compressible process times. Appl Soft Comput 27:137–147
10. Baykasoglu A, Hamzadayi A, Kose SY (2014) Testing the performance of teaching-learning based optimization (TLBO) algorithm on combinatorial problems: flow shop and job shop scheduling cases. Inf Sci 276:204–218
11. Lei D, Guo X (2015) An effective neighborhood search for scheduling in dual-resource constrained interval job shop with environmental objective. Int J Prod Econ 159:296–303
130
6 A Hybrid Algorithm for Job Shop Scheduling Problem
12. Peng B, Lu Z, Cheng TCE (2015) A tabu search/path relinking algorithm to solve the job shop scheduling problem. Comput Oper Res 53:154–164
13. Goncalves JF, Mendes JJ, Resende MGC (2005) A hybrid genetic algorithm for the job shop scheduling problem. Eur J Oper Res 167(1):77–95
14. Zhang CY, Li PG, Rao YQ, Li S (2005) A new hybrid GA/SA algorithm for the job shop scheduling problem. Lecture Notes in Computer Science, 246–259
15. Zhang CY, Li PG, Rao YQ, Guan ZL (2008) A very fast TS/SA algorithm for the job shop scheduling problem. Comput Oper Res 35(1):282–294
16. Ge H, Sun L, Liang Y, Qian F (2008) An effective PSO and AIS-based hybrid intelligent algorithm for job-shop scheduling. IEEE Trans Syst Man Cybern Part A: Syst Humans 38(2):358–368
17. Gao L, Zhang GH, Zhang LP, Li XY (2011) An efficient memetic algorithm for solving the job shop scheduling problem. Comput Ind Eng 60:699–705
18. Eswaramurthy VP, Tamilarasi A (2009) Hybridizing tabu search with ant colony optimization for solving job shop scheduling problems. Int J Adv Manuf Technol 40:1004–1015
19. Zuo X, Wang C, Tan W (2012) Two heads are better than one: an AIS- and TS-based hybrid strategy for job shop scheduling problems. Int J Adv Manuf Technol 63:155–168
20. Ren Q, Wang Y (2012) A new hybrid genetic algorithm for job shop scheduling problem. Comput Oper Res 39:2291–2299
21. Nasiri MM, Kianfar FA (2012) GES/TS algorithm for the job shop scheduling. Comput Ind Eng 62:946–952
22. Ponsich A, Coello CAC (2013) A hybrid differential evolution—tabu search algorithm for the solution of job shop scheduling problems. Appl Soft Comput 13:462–474
23. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia (pp 1942–1948)
24. Schutte JF, Reinbolt JA, Fregly BJ, Haftka RT, George AD (2004) Parallel global optimization with the particle swarm algorithm. Int J Numer Methods Eng 61(13):2296–2315
25. Vassiliadis V, Dounias G (2009) Nature-inspired intelligence: a review of selected methods and applications. Int J Artif Intell Tools 18(4):487–516
26. Wang K, Huang L, Zhou C, Pang W (2003) Particle swarm optimization for traveling salesman problem. In: Proceedings of the international conference on machine learning and cybernetics (pp 1583–1585)
27. Gao L, Peng CY, Zhou C, Li PG (2006) Solving flexible job-shop scheduling problem using general particle swarm optimization. In: The 36th CIE conference on computers and industrial engineering (pp 3018–3027)
28. Niknam T, Amiri B, Olamaei J, Arefi A (2009) An efficient hybrid evolutionary optimization algorithm based on PSO and SA for clustering (pp 512–519). Zhejiang University Press, Springer
29. Pongchairerks P, Kachitvichyanukul V (2009) A two-level particle swarm optimization algorithm on job-shop scheduling problems. Int J Oper Res 4:390–411
30. Sha DY, Hsu C (2006) A hybrid particle swarm optimization for job shop scheduling problem. Comput Ind Eng 51:791–808
31. Xia W, Wu Z (2006) A hybrid particle swarm optimization approach for the job shop scheduling problem. Int J Adv Manuf Technol 29:360–366
32. Caporossi G, Hansen P (2000) Variable neighborhood search for extremal graphs: 1 the AutoGraphiX system. Discrete Math 212(1–2):29–44
33. Felipe A, Ortuno MT, Tirado G (2009) The double traveling salesman problem with multiple stacks: a variable neighborhood search approach. Comput Oper Res 36(11):2983–2993
34. Kuo Y, Wang C (2012) A variable neighborhood search for the multi-depot vehicle routing problem with loading cost. Expert Syst Appl 39(8):6949–6954
35. Mladenovic N, Hansen P (1997) Variable neighborhood search. Comput Oper Res 24(11):1097–1100
36. Bagheri A, Zandieh M (2011) Bi-criteria flexible job-shop scheduling with sequence-dependent setup times—variable neighborhood search approach. J Manuf Syst 30(1):8–15
References
131
37. Mehmet S, Aydin ME (2006) A variable neighbourhood search algorithm for job shop scheduling problems. Lecture Notes in Computer Science, 261–271 38. Yazdani M, Amiri M, Zandieh M (2010) Flexible job-shop scheduling with parallel variable neighborhood search algorithm. Exp Sys Appl 37(1):678–687 39. Shi Y, Liu HC, Gao L, Zhang GH (2011) Cellular particle swarm optimization. Inf Sci 181(20):4460–4493 40. Fang HL, Ross P, Corne D (1993) A promising genetic algorithm approach to job-shop scheduling, rescheduling, and open-shop scheduling problems. In Proceedings of the fifth international conference on genetic algorithms, 1993 (pp 375–382). San Mateo, California: Morgan Kaufmann Publishers 41. Zhang CY, Rao YQ, Li PG (2008) An effective hybrid genetic algorithm for the job shop scheduling problem. Int J Adv Manufac Technol 39:965–974 42. Falkenauer E, Bouffouix S (1991) A genetic algorithm for job shop. In: The proceedings of the IEEE international conference on robotics and automation, Sacremento, California (pp 824–829) 43. Kauffman SA, Strohman RC (1994) The origins of order: Self-organization and selection in evolution. Int Physiol Behav Sci 29(2):193–194 44. Bierwirth C, Mattfeld D, Watson JP (2004) Landscape regularity and random walks for the job-shop scheduling problem. Evol Comput Combin Opt, 21–30 45. Darwen PJ (2001) Looking for the big valley in the fitness landscape of single machine scheduling with batching, precedence constraints, and sequence- dependent setup times. In Proceedings of the fifth Australasia–Japan joint workshop (pp 19–21) 46. Manderick B, Weger MD, Spiessens P (1991) The genetic algorithms and the structure of the fitness landscape. In: Proceedings of the fourth international conference on genetic algorithms, San Mateo (pp 143–150) 47. Smith-Miles K, James R, Giffin J, Tu Y (2009) A knowledge discovery approach to understanding relationships between scheduling problem structure and heuristic performance. 
Lecture Notes in Computer Science, 89–103 48. Watson J (2010) An introduction to fitness landscape analysis and cost models for local search. International Series in Operations Research and Management Science, 599–623 49. Watson JP, Beck JC, Howe AE, Whitley LD (2003) Problem difficulty for tabu search in job-shop scheduling. Art Int 143(2):189–217 50. Wen F, Gao L, Li XY (2011) A logistic model for solution space of job shop scheduling problem. Int J Adv Comput Technol 3(9):236–245 51. Beasley JE (1990) OR-library: distributing test problems by electronic mail. J Operat Res Soc 41(11):1069–1072 52. Lin T, Horng S, Kao T, Chen Y, Run R, Chen R, Kuo I (2010) An efficient job- shop scheduling algorithm based on particle swarm optimization. Exp Sys App 37(3):2629–2636 53. Aiex RM, Binato S, Resende MGC (2003) Parallel GRASP with path-relinking for job shop scheduling. Par Comput 29:393–430 54. Pezzella F, Merelli EA (2000) Tabu search method guided by shifting bottleneck for the job shop scheduling problem. Eur J Oper Res 120:297–310 55. Cheng R (1997) A study on genetic algorithms-based optimal scheduling techniques. PhD thesis. Tokyo Institute of Technology 56. Park BJ, Choi HR, Kim HS (2003) A hybrid genetic algorithm for the job shop scheduling problems. Comput Ind Eng 45:597–613
Chapter 7
An Effective Genetic Algorithm for FJSP
7.1 Introduction
Scheduling is one of the most important issues in the planning and operation of manufacturing systems [1], and it has gained increasing attention in recent years [2]. The classical Job shop Scheduling Problem (JSP) is one of the most difficult problems in this area. It consists of scheduling a set of jobs on a set of machines with the objective of minimizing a certain criterion. Each machine is continuously available from time zero and processes one operation at a time without preemption. Each job has a specified processing order on the machines, which is fixed and known in advance; the processing times are also fixed and known. The Flexible Job shop Scheduling Problem (FJSP) is a generalization of the classical JSP for flexible manufacturing systems [3]. Each machine may be able to perform more than one type of operation, i.e., each operation may be associated with more than one machine, and must be associated with at least one. The FJSP can be decomposed into two subproblems: the routing subproblem, which assigns each operation to a machine selected out of a set of capable machines, and the scheduling subproblem, which sequences the assigned operations on all machines in order to obtain a feasible schedule minimizing a predefined objective function. Unlike the classical JSP, where each operation is processed on a predefined machine, each operation in the FJSP can be processed on one out of several machines. This makes the FJSP more difficult to solve, because both the routing of jobs and the scheduling of operations must be considered. Moreover, it is a complex combinatorial optimization problem: the JSP is known to be NP-hard [4], and the FJSP is therefore NP-hard too. In this chapter, we propose an effective GA to solve the FJSP. Global Selection (GS) and Local Selection (LS) are designed to generate a high-quality initial population in the initialization stage, which accelerates convergence.
In order to assist the initialization method and ensure that the algorithm performs well, we design an improved chromosome representation method, "Machine Selection and Operation Sequence". In this method, we try to find an efficient coding scheme for the individuals that respects all constraints of the FJSP. At the same time, different strategies for the crossover and mutation operators are employed. Computational results show that the proposed algorithm obtains good solutions.
The chapter is organized as follows. Section 7.2 gives the formulation of the FJSP and shows an illustrative instance. An overview of the relevant literature on the subject is provided in Sect. 7.3. Section 7.4 presents the machine selection and operation sequence approach, the encoding and decoding scheme, global selection, local selection, and the genetic operators. Section 7.5 presents and analyzes the performance of the proposed genetic algorithm on common benchmarks from the literature. Concluding remarks and future research directions are given in Sect. 7.6.
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020. X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_7
7.2 Problem Formulation
The flexible job shop scheduling problem can be formulated as follows. There is a set of N jobs J = {J1, J2, ..., Ji, ..., JN} and a set of M machines M = {M1, M2, ..., Mk, ..., MM}. Each job Ji consists of a predetermined sequence of operations. Each operation requires one machine selected out of a set of available machines; this is the first subproblem, the routing subproblem. In addition, the FJSP sets the starting and ending time of each operation on its machine; this is the second subproblem, the scheduling subproblem. The FJSP is thus to determine an assignment and a sequence of the operations on the machines so that the given criteria are satisfied. The FJSP is more complex and challenging than the classical JSP because it additionally requires a proper selection of a machine from a set of available machines to process each operation of each job [2]. The symbols used in this chapter are listed as follows.
Ω—the set of all machines
N—the number of jobs
M—the number of machines
k, x—indices of machines in an alternative machine set
i—the index of job Ji
j—the index of the jth operation of job Ji
Oij—the jth operation of job Ji
Jio—the number of operations of job Ji
Ωij—the set of available machines for Oij
Pijk—the processing time of operation Oij on machine k
Sijk—the start time of operation Oij on machine k
Eijk—the end time of operation Oij on machine k
Sx—the start time of an idle time interval on machine Mx
Ex—the end time of an idle time interval on machine Mx
L = Σ_{i=1}^{N} Jio—the total number of operations of all jobs
Table 7.1 Processing time table of an instance of P-FJSP

Job  Operation  M1  M2  M3  M4  M5
J1   O11         2   6   5   3   4
     O12         –   8   –   4   –
J2   O21         3   –   6   –   5
     O22         4   6   5   –   –
     O23         –   7  11   5   8
Kacem et al. [5] classified the FJSP into P-FJSP and T-FJSP as follows: if Ωij ⊂ Ω, the problem has partial flexibility and is a Partial FJSP (P-FJSP); each operation can be processed on one machine of a subset of Ω. If Ωij = Ω, the problem has total flexibility and is a Total FJSP (T-FJSP); each operation can be processed on any machine of Ω. With the same numbers of machines and jobs, the P-FJSP is more difficult to solve than the T-FJSP [5]. The hypotheses considered in this chapter are summarized as follows:
(1) All machines are available at time 0;
(2) All jobs are released at time 0;
(3) Each machine can process only one operation at a time;
(4) Each operation is processed without interruption on one of a set of available machines;
(5) Recirculation occurs when a job visits a machine more than once;
(6) The order of operations for each job is predefined and cannot be modified.
For simplicity of presentation, we designed a sample instance of the FJSP which is used throughout the chapter. In Table 7.1, there are 2 jobs and 5 machines, where rows correspond to operations and columns correspond to machines. Each cell gives the processing time of the operation on the corresponding machine. The symbol "–" means that the machine cannot execute the corresponding operation, i.e., it does not belong to the alternative machine set of that operation, so this instance is a P-FJSP.
7.3 Literature Review
Brucker and Schlie [6] were the first to address this problem in 1990. They developed a polynomial graphical algorithm for a two-job problem. However, exact algorithms are not effective for solving large FJSP instances [3]. Several heuristic procedures, such as dispatching rules, Tabu Search (TS), Simulated Annealing (SA), and Genetic Algorithms (GA), have been developed for the FJSP in recent years. They can produce reasonably good schedules in a reasonable computational time and easily obtain near-optimal solutions. Moreover, these meta-heuristic methods have
led to better results than traditional dispatching rules or greedy heuristic algorithms [3, 7, 8].
The FJSP reduces to a job shop scheduling problem once the routing is fixed, so both hierarchical approaches and integrated approaches have been used to solve it. A hierarchical approach reduces the difficulty by decomposing the FJSP into a sequence of subproblems. Brandimarte [9] was the first to apply the hierarchical approach to the FJSP. Paulli [10] solved the routing subproblem using existing dispatching rules, and then solved the scheduling subproblem by different tabu search methods. An integrated approach can achieve better results, but it is rather difficult to implement in real operations. Hurink et al. [11] and Dauzère-Pérès et al. [12] proposed different tabu search heuristics that solve the FJSP in an integrated way. Mastrolilli and Gambardella [13] proposed neighborhood functions for the FJSP that can be used in meta-heuristic optimization techniques and achieved better computational results than any other heuristic developed so far, both in terms of computational time and solution quality. Amiri et al. [14] then proposed a parallel variable neighborhood search algorithm that solves the FJSP to minimize the makespan.
GA is an effective meta-heuristic for combinatorial optimization problems and has been successfully adopted to solve the FJSP; recently, more and more papers have addressed this topic. They differ from each other in the encoding and decoding schemes, the initial population method, and the offspring generation strategy. Chen et al. [1] used an integrated approach to solve the FJSP: the genes of the chromosomes describe, respectively, a concrete allocation of operations to each machine and the sequence of operations on each machine. Yang [15] proposed a GA-based discrete dynamic programming approach.
Zhang and Gen [16] proposed a multistage operation-based genetic algorithm that treats the problem from the point of view of dynamic programming. Jia et al. [17] presented a modified GA that is able to solve distributed scheduling problems and the FJSP. Kacem et al. [18, 19] used a task-sequencing list coding scheme that combines both routing and sequencing information in a single chromosome representation, and developed an approach by localization (AL) to find a promising initial assignment; dispatching rules were then applied to sequence the operations. Tay and Wibowo [20] compared four different chromosome representations, testing their performance on several problem instances. Ho et al. [2] proposed an architecture for learning and evolving the FJSP called the Learnable Genetic Architecture (LEGA), which provides an effective integration between evolution and learning within a random search process. Pezzella et al. [3] integrated different strategies for generating the initial population and for selecting the individuals that reproduce new individuals. Gao et al. [20] combined the GA with a Variable Neighborhood Descent (VND) for solving the FJSP.
7.4 An Effective GA for FJSP
The advantage of a GA over other local search algorithms is that several strategies can be adopted together to find good individuals to add to the mating pool, both in the initial population phase and in the dynamic generation phase [3]. In this chapter, the proposed GA adopts an improved chromosome representation and a novel initialization approach, which balance the workload of the machines well and converge to a suboptimal solution in a short time.
7.4.1 Representation
Better efficiency of a GA-based search can be achieved by modifying the chromosome representation and its related operators so as to generate feasible solutions and avoid repair mechanisms. Ho et al. [2] provided an extensive review and an insightful investigation of chromosome representations for the FJSP. Mesghouni et al. [21] proposed a parallel job representation for solving the FJSP: the chromosome is a matrix in which each row is the ordered operation sequence of one job, and each element of a row contains two terms, the machine processing the operation and the starting time of this operation. The approach requires a repair mechanism, and decoding the representation is complex. Chen et al. [1] divided the chromosome into two parts, A-string and B-string: the A-string denotes the routing policy of the problem, and the B-string denotes the sequence of the operations on each machine; however, this method needs to consider the order of operations and requires a repair mechanism. Kacem et al. [5] represented the chromosome by an assignment table. Such a data structure must describe the set of all machines, which increases the overall computational complexity due to the presence of redundant assignments. Ho et al. [2] also divided the chromosome into two strings, one representing the operation order and the other representing the machines by an array of binary values. This structure represents the problem clearly and conveniently, but the binary coding increases the memory requirement and is inconvenient to manipulate, so when the scale of the problem grows, the memory space and computational time increase tremendously [22]. Based on the analysis of the approaches in the above literature, we design an improved chromosome representation that reduces the cost of decoding; owing to its structure and encoding rule, it requires no repair mechanism.
Our chromosome representation has two components: Machine Selection and Operation Sequence (called MSOS) (see Fig. 7.1).
Fig. 7.1 Structure of the proposed MSOS chromosome
Machine Selection part (MS): We use an array of integer values to represent the machine selection; its length equals L. Each integer value is an index into the array of alternative machines of the corresponding operation. For the problem in Table 7.1, one possible encoding of the machine selection part is shown in Fig. 7.2. For instance, M2 is selected to process operation O12 since the value in the array of alternative machines is 1; the value could also be 2, since operation O12 can be processed on the two machines M2 and M4, so the valid values are 1 and 2. The representation also covers FJSPs with recirculation, in which more than one operation of the same job may be processed on the same machine; for example, O11 and O12 both belong to J1 and may both be processed on machine M2 or M4. If only one machine can be selected for an operation, the corresponding value in the machine selection array is 1. The MS representation is therefore flexible enough to encode the FJSP: the same structure easily represents both the P-FJSP and the T-FJSP. This property improves the search process by requiring less memory and ignoring unused data, especially for the P-FJSP.
Fig. 7.2 Machine selection part
Operation Sequence part (OS): We use the operation-based representation, which denotes all operations of a job by the same symbol (the job index) and then interprets them according to their order of occurrence in a given chromosome. Its length also equals L. The index i of job Ji appears Jio times in the operation sequence part to represent its Jio ordered operations. This avoids generating infeasible schedules, because each occurrence is replaced by the corresponding operation of that job. For the instance in Table 7.1, one possible encoding of the operation sequence part is shown in Fig. 7.3. J1 has two operations, O11 and O12; J2 has three operations, O21, O22, and O23. Reading the data from left to right and increasing the operation index of each job, the operation sequence 2-2-1-1-2 translates into the list of ordered operations O21-O22-O11-O12-O23.
Fig. 7.3 Operation sequence part
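The two-part encoding above can be sketched in code. The following is an illustrative Python sketch (not the authors' C++ implementation), using the data of Table 7.1; the MS chromosome 4-1-2-2-4 is one assignment consistent with the machine matrix (7.1) of the next section.

```python
# Alternative machine sets of Table 7.1, stored as (machine, processing
# time) pairs ordered by machine index.
ALT = {
    (1, 1): [(1, 2), (2, 6), (3, 5), (4, 3), (5, 4)],
    (1, 2): [(2, 8), (4, 4)],
    (2, 1): [(1, 3), (3, 6), (5, 5)],
    (2, 2): [(1, 4), (2, 6), (3, 5)],
    (2, 3): [(2, 7), (3, 11), (4, 5), (5, 8)],
}
OPS = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)]  # fixed operation order of MS

def decode_ms(ms):
    """Map each MS allele (a 1-based index into the alternative machine
    set) to the selected machine."""
    return [ALT[op][allele - 1][0] for op, allele in zip(OPS, ms)]

def decode_os(os_part):
    """Translate the job-index sequence into ordered operations Oij."""
    count = {}
    ops = []
    for job in os_part:
        count[job] = count.get(job, 0) + 1
        ops.append((job, count[job]))
    return ops

machines = decode_ms([4, 1, 2, 2, 4])   # machines for O11, O12, O21, O22, O23
order = decode_os([2, 2, 1, 1, 2])      # O21-O22-O11-O12-O23
```

Note that the OS decoding can never produce an infeasible operation order: the jth occurrence of a job index is always interpreted as that job's jth operation.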
7.4.2 Decoding the MSOS Chromosome to a Feasible and Active Schedule
In the literature [23], schedules are categorized into three classes: non-delay schedules, active schedules, and semi-active schedules. It has been shown, and depicted in a Venn diagram in [23], that the set of active schedules contains an optimal schedule, so only active schedules are considered in our decoding approach in order to reduce the search space. The steps for decoding an MSOS chromosome into a feasible and active schedule for an FJSP are as follows.
Step 1: The Machine Selection part is read from left to right. Each integer gene is transferred into a machine matrix and a time matrix according to the processing time table. For instance, Fig. 7.2 is transferred into machine matrix (7.1) and time matrix (7.2) according to Table 7.1. Rows of each matrix correspond to jobs, and columns correspond to the operations of each job. For example, in the machine matrix, the integer 4 denotes that operation O11 is processed on M4; the corresponding processing time is 3 in the time matrix.
Machine = [ 4  2    ]
          [ 3  2  5 ]        (7.1)

Time =    [ 3  8    ]
          [ 6  6  8 ]        (7.2)
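Step 1 can be sketched as follows (an illustrative Python sketch, not the authors' code), transferring an MS part into the machine and time matrices of Eqs. (7.1) and (7.2) using the data of Table 7.1:

```python
# Alternative machine sets of Table 7.1 as (machine, processing time)
# pairs, ordered by machine index.
ALT = {
    (1, 1): [(1, 2), (2, 6), (3, 5), (4, 3), (5, 4)],
    (1, 2): [(2, 8), (4, 4)],
    (2, 1): [(1, 3), (3, 6), (5, 5)],
    (2, 2): [(1, 4), (2, 6), (3, 5)],
    (2, 3): [(2, 7), (3, 11), (4, 5), (5, 8)],
}

def to_matrices(ms):
    """Turn MS alleles into the machine matrix and time matrix
    (rows = jobs, columns = operations of each job)."""
    ops = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)]
    machine = {1: [], 2: []}
    time = {1: [], 2: []}
    for (i, j), allele in zip(ops, ms):
        m, p = ALT[(i, j)][allele - 1]
        machine[i].append(m)
        time[i].append(p)
    return machine, time

machine, time = to_matrices([4, 1, 2, 2, 4])
```

For the MS part 4-1-2-2-4 this reproduces the rows of (7.1) and (7.2).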
Step 2: The Operation Sequence part is also read from left to right; the selected machines and processing times are taken from the machine matrix and time matrix built in Step 1.
(a) Decode each integer into the corresponding operation Oij;
(b) Refer to the machine matrix and time matrix to obtain the selected machine Mk and the processing time Pijk;
(c) Let Oi(j+1) be processed on machine Mx with processing time Pi(j+1)x, and let Tx be the end time of the last operation on Mx. Then scan the idle time intervals on Mx, i.e., intervals [Sx, Ex] beginning at Sx and ending at Ex. Because of the precedence constraints among operations of the same job, operation Oi(j+1) can only start after its immediate job predecessor Oij has been completed. The earliest possible starting time tb of operation Oi(j+1) is therefore given by Eq. (7.3);

tb = max{Eijk, Sx}        (7.3)
(d) According to Eq. (7.4), the time interval [Sx, Ex] is available for Oi(j+1) if there is enough time from the start of Oi(j+1) until the end of the interval to complete it. As can be seen from Fig. 7.4, if Eq. (7.4) holds, Oi(j+1) is assigned to Mx starting at tb (Fig. 7.4a); otherwise, operation Oi(j+1) is allocated at the end of the last operation on Mx, i.e., it starts at Tx (Fig. 7.4b).

max{Eijk, Sx} + Pi(j+1)x ≤ Ex        (7.4)

Fig. 7.4 Finding enough interval and inserting Oi(j+1)
(e) Go to (c) until each operation of Operation Sequence Part is processed on the corresponding machine.
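The full decoding procedure (Steps 1-2, including the idle-interval insertion of Eqs. (7.3) and (7.4)) can be sketched as follows. This is an illustrative Python version, not the authors' C++ implementation, using the data of Table 7.1:

```python
# Alternative machine sets of Table 7.1: (machine, processing time) pairs.
ALT = {
    (1, 1): [(1, 2), (2, 6), (3, 5), (4, 3), (5, 4)],
    (1, 2): [(2, 8), (4, 4)],
    (2, 1): [(1, 3), (3, 6), (5, 5)],
    (2, 2): [(1, 4), (2, 6), (3, 5)],
    (2, 3): [(2, 7), (3, 11), (4, 5), (5, 8)],
}
OPS = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)]

def decode(ms, os_part):
    """Decode an MSOS chromosome into an active schedule; return the makespan."""
    sel = {op: ALT[op][a - 1] for op, a in zip(OPS, ms)}  # Oij -> (machine, p)
    busy = {m: [] for m in range(1, 6)}  # sorted (start, end) per machine (5 here)
    job_end = {}                         # end time of the previous op of each job
    seen = {}
    makespan = 0
    for job in os_part:
        seen[job] = seen.get(job, 0) + 1
        op = (job, seen[job])
        m, p = sel[op]
        ready = job_end.get(job, 0)      # Eijk of the job predecessor
        start = None
        prev_end = 0
        for s, e in busy[m]:             # scan idle intervals [prev_end, s)
            if max(ready, prev_end) + p <= s:   # Eq. (7.4)
                start = max(ready, prev_end)    # Eq. (7.3)
                break
            prev_end = e
        if start is None:                # no gap fits: append after last op
            start = max(ready, prev_end)
        busy[m].append((start, start + p))
        busy[m].sort()
        job_end[job] = start + p
        makespan = max(makespan, start + p)
    return makespan

cmax = decode([4, 1, 2, 2, 4], [2, 2, 1, 1, 2])
```

Inserting operations into idle intervals whenever Eq. (7.4) allows is what makes the resulting schedule active rather than merely semi-active.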
7.4.3 Initial Population
Population initialization is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution [24]. In this section, we present two methods for solving the first subproblem by assigning each operation to a suitable machine. These methods take into account both the processing times and the workload of the machines.
Global Selection (GS): We define a stage as the process of selecting a suitable machine for one operation. GS records the accumulated processing time of each machine over all stages, and at every stage selects the machine with the minimum accumulated time. The first job, and each subsequent job, is selected randomly. The detailed steps are as follows.
Step 1: Create a new array to record the processing time of all machines, and initialize each element to 0;
Step 2: Select a job randomly, ensuring each job is selected only once, and take the first operation of that job;
Step 3: For each machine in the alternative machine set of the operation, add its processing time to the corresponding element of the time array;
Fig. 7.5 The process of GS
Step 4: Compare the summed times and select the index k of the machine with the shortest one. If several machines tie, select one of them randomly;
Step 5: Set the allele corresponding to the current operation in the MS part to k;
Step 6: Add the selected machine's processing time to the corresponding element of the time array, thereby updating it;
Step 7: Take the next operation of the current job and execute Steps 3 to 6 until all operations of the current job have been handled, then go to Step 8;
Step 8: Go to Step 2 until every job has been selected once.
The implementation of GS is illustrated in Fig. 7.5. We assume that the first selected job is J1 and the next is J2. From Fig. 7.5 we see that the processing time on M1 is the shortest in the alternative machine set of operation O11, so machine M1 is selected to process operation O11 of job J1, the corresponding allele in MS is set to the index of M1, and the processing time is added to the corresponding position in the time array. Finally, the selected machines of all operations may be M1-M4-M1-M3-M2, and the corresponding chromosome representation is 1-2-1-3-1.
Local Selection (LS): This method differs from GS in that it records the processing time of the machines only while a single job is being processed; it records the stages belonging to a single job instead of all stages, and selects the machine with the minimum accumulated time in the current stage. The detailed procedure of LS is as follows.
Step 1: Create a new array (called the time array) to record the processing time of all machines; its length equals the number of machines M, and each element is set to 0;
Step 2: Select the first job and its first operation;
Step 3: Set every element of the array to 0;
Step 4: For each machine in the alternative machine set, add its processing time to the corresponding element of the array;
Step 5: Compare the summed times to find the index k of the machine with the shortest one. If several machines tie, select one of them randomly;
Step 6: Set the allele corresponding to the current operation in the MS part to k;
Step 7: Add the selected machine's processing time to the corresponding element of the time array, thereby updating it;
Step 8: Take the next operation of the current job and go to Step 4 until all operations of the current job have been handled, then go to Step 9;
Step 9: Select the next job and take its first operation;
Step 10: Go to Step 3 until every job has been selected once.
We again take the data in Table 7.1 as an instance. The implementation of LS is illustrated in Fig. 7.6. We assume that the first selected job is J1 and the next is J2. From Fig. 7.6 we see that the processing time on M1 is the shortest in the alternative machine set of operation O11, so machine M1 is selected to process operation O11 of job J1, and its processing time is added to the corresponding position in the time array. When all operations of job J1 have been assigned to suitable machines, each element of the time array is reset to 0, and the next job J2 is processed in the same way. Finally, the selected machines of all operations may be M1-M4-M1-M3-M4, and the corresponding chromosome representation is 1-2-1-3-3.
LS could find the machine which has the shortest processing time machine in an alternative machine set of each job. Without loss of generality and enhancing randomicity, we adopt a randomly selecting machine to generate initial assignments. For example, 60% of the initial population could be generated by GS, 30% by LS, and 10% by RS.
Fig. 7.6 The process of LS
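GS and LS share the same loop and differ only in when the time array is reset, so both can be sketched in one function (an illustrative Python sketch with the Table 7.1 data; ties are broken by taking the first machine found instead of randomly, and the job order is passed explicitly):

```python
# Alternative machine sets of Table 7.1: (machine, processing time) pairs.
ALT = {
    (1, 1): [(1, 2), (2, 6), (3, 5), (4, 3), (5, 4)],
    (1, 2): [(2, 8), (4, 4)],
    (2, 1): [(1, 3), (3, 6), (5, 5)],
    (2, 2): [(1, 4), (2, 6), (3, 5)],
    (2, 3): [(2, 7), (3, 11), (4, 5), (5, 8)],
}
JOB_OPS = {1: [(1, 1), (1, 2)], 2: [(2, 1), (2, 2), (2, 3)]}

def build_ms(job_order, reset_per_job):
    """GS when reset_per_job is False, LS when it is True."""
    load = {m: 0 for m in range(1, 6)}       # time array over the 5 machines
    ms = {}
    for job in job_order:
        if reset_per_job:                    # LS: clear the array per job
            load = {m: 0 for m in range(1, 6)}
        for op in JOB_OPS[job]:
            alts = ALT[op]
            # pick the alternative minimizing accumulated time + processing time
            k = min(range(len(alts)),
                    key=lambda idx: load[alts[idx][0]] + alts[idx][1])
            m, p = alts[k]
            ms[op] = k + 1                   # 1-based allele
            load[m] += p
    return [ms[op] for op in sorted(ms)]

gs = build_ms([1, 2], reset_per_job=False)   # GS, jobs taken as J1 then J2
ls = build_ms([1, 2], reset_per_job=True)    # LS
```

With jobs selected in the order J1, J2, this sketch reproduces the chromosomes 1-2-1-3-1 (GS) and 1-2-1-3-3 (LS) discussed above.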
Once the assignments are settled, we have to determine how to sequence the operations on the machines. The schedule is feasible if it respects the precedence constraints among operations of the same job, i.e., operation Oi(j+1) cannot be processed before operation Oij.
7.4.4 Selection Operator
Selection is the task of choosing individuals for reproduction; the chosen individuals are moved into a mating pool, where they may reproduce one or more times. Roulette wheel selection, a fitness-proportionate approach, was popular in the past; with it, the objective value obtained by decoding a chromosome must be transformed into a fitness value, and the parameters of this transformation may influence the selection results. The tournament approach overcomes the scaling problem of the direct fitness-based approach and gives good individuals more "survival" opportunities. At the same time, it uses only the relative ranking of fitness values as the selection criterion instead of using fitness proportions directly, so it avoids both the influence of "super individuals" and premature convergence. In this chapter, the tournament approach is adopted: three individuals are randomly chosen from the parent population, their objective values are compared, and the best individual is moved into the mating pool.
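The size-3 tournament can be sketched in a few lines (illustrative Python; `evaluate` stands for the decoding procedure returning a makespan):

```python
import random

def tournament(population, evaluate, k=3):
    """Size-k tournament: pick k individuals at random and return the one
    with the best (smallest) objective value, e.g. the makespan."""
    candidates = random.sample(population, k)
    return min(candidates, key=evaluate)

# usage: mating_pool = [tournament(pop, evaluate) for _ in range(len(pop))]
```

Note that only comparisons between objective values are used, never their magnitudes, which is what makes the operator insensitive to fitness scaling.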
7.4.5 Crossover Operator
The goal of crossover is to obtain better chromosomes, improving the result by exchanging the information contained in the current good ones. In our work, we use two kinds of crossover operators for the chromosomes.
Machine Selection part: The MS crossover is performed on two machine selection parts and generates two new ones, each corresponding to a new allocation of operations to machines. Every machine in the new machine selection parts must be valid, i.e., included in the alternative machine set of the corresponding operation. We adopt two different crossover operators: the first is the two-point crossover, whose detailed process is described in [25]; the second is the uniform crossover [26]. The MS crossover only changes some alleles while leaving their locations in each individual, and hence the precedence constraints, unchanged. Therefore, the individuals after crossover are still feasible. The procedure is illustrated in Fig. 7.7.
Fig. 7.7 MS crossover operator
Operation Sequence part: The OS crossover differs from that of MS. During the past decades, several crossover operators have been proposed for permutation representations, such as the order crossover, the partially mapped crossover, and
so on. Here we apply the Precedence Preserving Order-based crossover (POX) to the operation sequence [27]. The detailed steps of POX are as follows.
Step 1: Generate two sub-jobsets Js1/Js2 from all jobs and randomly select two parent individuals p1 and p2;
Step 2: Copy every allele of p1/p2 that belongs to Js1/Js2 into the child individuals c1/c2, retaining the same positions;
Step 3: Delete the alleles that belong to Js1/Js2 from p2/p1;
Step 4: Fill the empty positions of c1/c2, in order, with the remaining alleles of p2/p1 in their previous sequence.
In Table 7.1 there are only two jobs, which makes it difficult to present the process of POX clearly, so in Fig. 7.8 we use five jobs to illustrate the generation of the two child individuals.
Fig. 7.8 OS crossover operator
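The POX steps above can be sketched as follows (an illustrative Python sketch; the sub-jobset Js1 is passed explicitly rather than generated randomly, and the five-job example mirrors the spirit of Fig. 7.8 rather than its exact data):

```python
def pox(p1, p2, js1):
    """Precedence Preserving Order-based crossover on two OS parts.
    c1 keeps p1's alleles of the jobs in js1 in place and fills the
    remaining positions with p2's other alleles in their original order;
    c2 is built symmetrically with the complementary job set."""
    def child(keep_parent, fill_parent, keep):
        rest = iter(j for j in fill_parent if j not in keep)
        return [j if j in keep else next(rest) for j in keep_parent]
    js2 = set(p1) - set(js1)
    return child(p1, p2, set(js1)), child(p2, p1, js2)

# Five jobs, one operation each, Js1 = {1, 2}:
c1, c2 = pox([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], {1, 2})
```

Because the filled alleles preserve their order in the other parent, both children keep the same multiset of job indices as the parents, so the offspring are always feasible OS parts.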
7.4.6 Mutation Operator
Mutation introduces extra variability into the population to enhance its diversity. Mutation is usually applied with a small probability, since a large probability may destroy good chromosomes.
Machine Selection part: The MS mutation operator only changes the assignment property of a chromosome. We select the machine with the shortest processing time from the alternative machine set in order to balance the workload of the machines. Taking the chromosome of Fig. 7.1 as an example, MS mutation proceeds as follows.
Step 1: Select one individual from the population;
Step 2: Read the genes of the individual from left to right, generating a random probability value for each; if all genes have been read, end the procedure;
Step 3: If the probability value is less than or equal to the mutation probability, go to Step 4; otherwise, go to Step 2;
Step 4: Select the machine with the shortest processing time from the alternative machine set and assign its index to the mutation position.
An illustrative instance is shown in Fig. 7.9. Suppose the mutated operation is O23. Before the mutation, O23 is processed on M5, the fourth machine in its alternative machine set, so the allele is 4. The mutation selects the machine with the shortest processing time, so M4 is selected and the allele in the chromosome changes to 3.
Operation Sequence part: The OS mutation probability is the same as the MS mutation probability. If one position in the OS chromosome is to be mutated, another position is randomly generated in [0, L − 1] to exchange with it. With our chromosome representation, the new individual is still a feasible solution.
Fig. 7.9 MS mutation operator
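Both mutation operators can be sketched as follows (an illustrative Python sketch with the Table 7.1 data; the per-gene probability test is omitted and the mutation positions are passed explicitly):

```python
# Alternative machine sets of Table 7.1: (machine, processing time) pairs.
ALT = {
    (1, 1): [(1, 2), (2, 6), (3, 5), (4, 3), (5, 4)],
    (1, 2): [(2, 8), (4, 4)],
    (2, 1): [(1, 3), (3, 6), (5, 5)],
    (2, 2): [(1, 4), (2, 6), (3, 5)],
    (2, 3): [(2, 7), (3, 11), (4, 5), (5, 8)],
}
OPS = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)]

def mutate_ms(ms, pos):
    """MS mutation: reassign the operation at `pos` to the machine with
    the shortest processing time in its alternative machine set."""
    alts = ALT[OPS[pos]]
    best = min(range(len(alts)), key=lambda k: alts[k][1])
    return ms[:pos] + [best + 1] + ms[pos + 1:]

def mutate_os(os_part, a, b):
    """OS mutation: swap two positions; feasibility is preserved because
    the multiset of job indices is unchanged."""
    os2 = list(os_part)
    os2[a], os2[b] = os2[b], os2[a]
    return os2

# O23 (position 4): allele 4 (M5) becomes allele 3 (M4), as in Fig. 7.9
mutated = mutate_ms([4, 1, 2, 2, 4], 4)
```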
7 An Effective Genetic Algorithm for FJSP
Fig. 7.10 Framework of the proposed effective GA
7.4.7 Framework of the Effective GA The framework of the proposed genetic algorithm is illustrated in Fig. 7.10. Each individual of the MS part of the initial population is generated by GS, LS, or RS. This procedure makes it possible to assign each operation to a suitable machine by taking into account the processing times and the workload of the machines. Each individual of the OS part of the initial population is randomly generated according to the chromosome representation of Sect. 7.4.1. The adopted MS + OS representation improves the chromosome encoding: it always produces feasible schedules and avoids repair mechanisms. The proposed effective genetic algorithm terminates when a maximal number of iterations is reached, and the best individual, together with the corresponding schedule, is output.
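The overall loop of the framework can be sketched generically (an illustrative Python skeleton; `fitness` is minimized, and the selection, crossover, and mutation operators are passed in as functions, so this is a sketch of the control flow rather than the book's C++ implementation):

```python
import random

def run_ga(init_pop, fitness, crossover, mutate, max_gen=100, pc=0.7, pm=0.01):
    """Minimal GA main loop matching the framework: initialize, then
    repeat crossover/mutation until max_gen, keeping the best
    individual found so far (minimization of `fitness`)."""
    pop = [list(ind) for ind in init_pop]
    best = min(pop, key=fitness)
    for _ in range(max_gen):
        nxt = []
        while len(nxt) < len(pop):
            a, b = random.sample(pop, 2)       # random mating selection
            if random.random() < pc:
                a, b = crossover(a, b)
            nxt.append(mutate(list(a), pm))
            nxt.append(mutate(list(b), pm))
        pop = nxt[:len(pop)]
        best = min(pop + [best], key=fitness)  # keep the incumbent
    return best
```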
7.5 Computational Results The proposed effective Genetic Algorithm (eGA) was implemented in C++ on a Pentium IV running at 1.8 GHz and tested on a large number of instances from the literature. The test problems include both the P-FJSP and the T-FJSP. As shown above, the P-FJSP is more complex than the T-FJSP in terms of search space and computational cost, so the approach used to solve it is very important. In our experiments, the approach matches the P-FJSP well because the chromosome representation adopts real-coded alleles based on the alternative machine set; moreover, it maintains feasibility without any repair mechanism. We compare our computational results with results obtained by other researchers. First, we tested the performance of the initialization method (GS + LS + RS) on the Mk04 test problem with 15 jobs and 8 machines, selected randomly from Brandimarte [9]. The results are shown in Table 7.2 and Fig. 7.11; they show that GS + LS + RS can improve the quality of the initial population efficiently. In Fig. 7.12, we plot the decrease of the average best makespan over five runs for the Mk04 problem when initialized by GS + LS + RS and by RS, respectively. Comparing the two initialization methods, the two curves show a gap before generation 84: GS + LS + RS is the lower one and reaches the optimal solution at generation 40, whereas random initialization does not reach the same solution until generation 84. Clearly, GS and LS are effective methods for generating a high-quality initial population; they reduce the average best makespan throughout the whole computation. In order to obtain meaningful results, we ran our algorithm five times on each instance. The parameters of the effective GA were chosen experimentally in order to obtain a satisfactory solution in an acceptable time span.
Table 7.2 The comparison of the initialization methods

| Population size | GS + LS + RS average of best makespan | GS + LS + RS average of makespan | RS average of best makespan | RS average of makespan |
|---|---|---|---|---|
| 50 | 75.0 | 91.7 | 93.6 | 116.0 |
| 100 | 73.2 | 92.8 | 88.8 | 116.5 |
| 150 | 73.0 | 92.8 | 88.4 | 115.9 |
| 200 | 73.4 | 92.1 | 87.6 | 115.9 |
| 250 | 73.2 | 92.5 | 88.2 | 116.3 |
| 300 | 72.8 | 92.8 | 87.0 | 116.3 |
| 350 | 72.8 | 92.4 | 87.0 | 115.5 |
| 400 | 73.8 | 92.6 | 87.8 | 115.5 |
| 450 | 71.4 | 92.5 | 86.4 | 115.2 |
| 500 | 70.8 | 92.4 | 86.8 | 115.5 |

Fig. 7.11 The comparison of the initialization methods

Fig. 7.12 Decrease of the makespan (Mk04)

According to the complexity of the problems, the population size of the effective GA ranges from 50 to 300. Through experimentation, the other parameter values were chosen as follows: number of generations: 100; rate of initial assignments with GS: 0.6; rate of initial assignments with LS: 0.3; rate of initial assignments with RS: 0.1; crossover probability: 0.7; rate of two-point crossover in MS: 0.5; rate of uniform crossover in MS: 0.5; mutation probability: 0.01.
The proposed effective GA is first tested on Brandimarte's data set (BRdata) [9]. The data set consists of ten problems, with the number of jobs ranging from 10 to 20, the number of machines ranging from 4 to 15, and the number of operations for each job ranging from 5 to 15. The second data set is a set of 21 problems from Barnes and Chambers [28] (BCdata). It was constructed from three of the most challenging classical job shop scheduling problems (mt10, la24, la40) [29, 30] by replicating machines; the processing times of operations on replicated machines are identical to those on the original. The number of jobs ranges from 10 to 15, the number of machines from 11 to 18, and the number of operations for each job from 10 to 15. The third data set is a set of 18 problems from Dauzère-Pérès and Paulli [12] (DPdata). The number of jobs ranges from 10 to 20, the number of machines from 5 to 10, and the number of operations for each job from 15 to 25. The set of machines capable of performing an operation was constructed by letting a machine be in that set with a probability ranging from 0.1 to 0.5. In Tables 7.3, 7.4 and 7.5, n × m denotes the number of jobs and machines of each instance. Flex. denotes the average number of equivalent machines per operation. To denotes the total number of operations of all jobs. (LB, UB) denotes the optimal makespan if known; otherwise, the best lower and upper bounds found to date. Cm denotes the makespan. * indicates the best-known solution to date. AV(Cm) stands for the average makespan over five runs. t denotes the average running time over five runs. M&G is the approach proposed by Mastrolilli and Gambardella [13]. The computational results in Table 7.3 show that our algorithm searches for an optimal solution very quickly. Among the 10 test problems, Mk03 and Mk08 reached the optimal solution in the first generation with eGA.
On 8 of the 10 problems, eGA obtained results as good as those of M&G, and compared with GENACE, eGA obtained better solutions on 8 problems. From Tables 7.4 and 7.5, we found in total 33 new best solutions, in terms of the best result out of five runs, among the 39 test problems. Moreover, the average makespan of eGA over five runs is better than that of M&G on 34 test problems, and the running time of eGA is also much shorter than that of M&G. Obviously, the proposed algorithm is effective and efficient. Figure 7.13 shows the Gantt chart of the optimal solution of the seti5xy test problem.
7.6 Conclusions and Future Study In this chapter, we proposed an effective genetic algorithm for solving the Flexible Job shop Scheduling Problem (FJSP). An improved chromosome representation scheme is proposed and an effective decoding method interpreting each chromosome into a feasible active schedule is designed. In order to enhance the quality of
Table 7.3 Results of BRdata

| Problem | n × m | To | Flex. | LB, UB | GENACE Pop | GENACE Cm | M&G Cm | M&G AV(Cm) | M&G t | eGA Pop | eGA Cm | eGA AV(Cm) | eGA t |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mk01 | 10 × 6 | 55 | 2.09 | 36, 42 | 200 | 40 | 40* | 40.0 | N/A | 100 | 40* | 40 | 15.7 |
| mk02 | 10 × 6 | 58 | 4.01 | 24, 32 | 200 | 32 | 26* | 26.0 | 6.1 | 300 | 26* | 26 | 17.3 |
| mk03 | 15 × 8 | 150 | 3.01 | 204, 211 | N/A | N/A | 204* | 204.0 | 7.2 | 50 | 204* | 204 | 2.2 |
| mk04 | 15 × 8 | 90 | 1.91 | 48, 81 | 200 | 67 | 60* | 60.0 | 10.7 | 100 | 60* | 60 | 30.2 |
| mk05 | 15 × 4 | 106 | 1.71 | 168, 186 | 200 | 176 | 173* | 173.0 | 6.9 | 200 | 173* | 173 | 36.6 |
| mk06 | 10 × 15 | 150 | 3.27 | 33, 86 | 200 | 67 | 58* | 58.4 | 19.1 | 200 | 58* | 58 | 1.6 |
| mk07 | 20 × 5 | 100 | 2.83 | 133, 157 | 200 | 147 | 144* | 147.0 | 20.4 | 200 | 144* | 145 | 2.6 |
| mk08 | 20 × 10 | 225 | 1.43 | 523 | 200 | 523 | 523* | 523.0 | 27.6 | 50 | 523* | 523 | 1.3 |
| mk09 | 20 × 10 | 240 | 2.53 | 299, 369 | 200 | 320 | 307* | 307.0 | 3.2 | 300 | 307* | 307 | 6.2 |
| mk10 | 20 × 15 | 240 | 2.98 | 165, 296 | 200 | 229 | 198* | 199.2 | 3.6 | 300 | 198* | 199 | 7.3 |
Table 7.4 Results of BCdata

| Problem | n × m | To | Flex. | LB, UB | M&G Cm | M&G AV(Cm) | eGA Pop | eGA Cm | eGA AV(Cm) | eGA t |
|---|---|---|---|---|---|---|---|---|---|---|
| mt10c1 | 10 × 11 | 100 | 1.10 | 655, 927 | 928 | 928 | 200 | 927* | 928 | 105.25 |
| mt10cc | 10 × 12 | 100 | 1.20 | 655, 914 | 910* | 910 | 200 | 910* | 910 | 70.47 |
| mt10x | 10 × 11 | 100 | 1.10 | 655, 929 | 918* | 918 | 1000 | 918* | 918 | 70.56 |
| mt10xx | 10 × 12 | 100 | 1.20 | 655, 929 | 918* | 918 | 1000 | 918* | 918 | 23.25 |
| mt10xxx | 10 × 13 | 100 | 1.30 | 655, 936 | 918* | 918 | 1000 | 918* | 918 | 19.27 |
| mt10xy | 10 × 12 | 100 | 1.20 | 655, 913 | 906 | 906 | 300 | 905* | 906 | 21.45 |
| mt10xyz | 10 × 13 | 100 | 1.30 | 655, 849 | 847* | 850 | 1000 | 847* | 847 | 20.38 |
| setb4c9 | 15 × 11 | 150 | 1.10 | 857, 924 | 919 | 919.2 | 1000 | 914* | 914 | 25.39 |
| setb4cc | 15 × 12 | 150 | 1.20 | 857, 909 | 909* | 911.6 | 1000 | 909* | 910 | 24.37 |
| setb4x | 15 × 11 | 150 | 1.10 | 846, 937 | 925* | 925 | 200 | 925* | 925 | 30.24 |
| setb4xx | 15 × 12 | 150 | 1.20 | 847, 930 | 925* | 926.4 | 300 | 925* | 925 | 12.81 |
| setb4xxx | 15 × 13 | 150 | 1.30 | 846, 925 | 925* | 925 | 1000 | 925* | 925 | 20.16 |
| setb4xy | 15 × 12 | 150 | 1.20 | 845, 924 | 916* | 916 | 1000 | 916* | 916 | 8.92 |
| setb4xyz | 15 × 13 | 150 | 1.30 | 838, 914 | 905* | 908.2 | 1000 | 905* | 908.1 | 54.06 |
| seti5c12 | 15 × 16 | 225 | 1.07 | 1027, 1185 | 1174* | 1174.2 | 1000 | 1174* | 1174 | 62.81 |
| seti5cc | 15 × 17 | 225 | 1.13 | 955, 1136 | 1136* | 1136.4 | 1000 | 1136* | 1136.2 | 27.78 |
| seti5x | 15 × 16 | 225 | 1.07 | 955, 1218 | 1201* | 1203.6 | 1000 | 1209 | 1209 | 40.26 |
| seti5xx | 15 × 17 | 225 | 1.13 | 955, 1204 | 1199* | 1200.6 | 1000 | 1204 | 1204 | 70.69 |
| seti5xxx | 15 × 18 | 225 | 1.20 | 955, 1213 | 1197* | 1198.4 | 1000 | 1204 | 1204 | 69.53 |
| seti5xy | 15 × 17 | 225 | 1.13 | 955, 1148 | 1136* | 1136.4 | 1000 | 1136* | 1136.3 | 67.56 |
| seti5xyz | 15 × 18 | 225 | 1.20 | 955, 1127 | 1125* | 1126.6 | 1000 | 1125* | 1126.5 | 78.29 |
Table 7.5 Results of DPdata

| Problem | n × m | To | Flex. | LB, UB | M&G Cm | M&G AV(Cm) | eGA Pop | eGA Cm | eGA AV(Cm) | eGA t |
|---|---|---|---|---|---|---|---|---|---|---|
| 01a | 10 × 5 | 100 | 1.13 | 2505, 2530 | 2518 | 2528 | 300 | 2516* | 2518 | 164.75 |
| 02a | 10 × 5 | 100 | 1.69 | 2228, 2244 | 2231* | 2234 | 1000 | 2231* | 2231 | 112.02 |
| 03a | 10 × 5 | 100 | 2.56 | 2228, 2235 | 2229* | 2229.6 | 1000 | 2232 | 2232.3 | 109.26 |
| 04a | 10 × 5 | 100 | 1.13 | 2503, 2565 | 2503* | 2516.2 | 300 | 2515 | 2515 | 154.89 |
| 05a | 10 × 5 | 100 | 1.69 | 2189, 2229 | 2216 | 2220 | 300 | 2208* | 2210 | 133.05 |
| 06a | 10 × 5 | 100 | 2.56 | 2162, 2216 | 2203 | 2206.4 | 500 | 2174* | 2175 | 396.44 |
| 07a | 15 × 8 | 150 | 1.24 | 2187, 2408 | 2283 | 2297.6 | 200 | 2217* | 2218.4 | 388.53 |
| 08a | 15 × 8 | 150 | 2.42 | 2061, 2093 | 2069* | 2071.4 | 500 | 2073 | 2073 | 390.22 |
| 09a | 15 × 8 | 150 | 4.03 | 2061, 2074 | 2066* | 2067.4 | 300 | 2066* | 2066 | 373.32 |
| 10a | 15 × 8 | 150 | 1.24 | 2178, 2362 | 2291 | 2305.6 | 500 | 2189* | 2191 | 405.95 |
| 11a | 15 × 8 | 150 | 2.42 | 2017, 2078 | 2063* | 2065.6 | 300 | 2063* | 2065 | 332.44 |
| 12a | 15 × 8 | 150 | 4.03 | 1969, 2047 | 2034 | 2038 | 1000 | 2019* | 2022 | 522.77 |
| 13a | 20 × 10 | 225 | 1.34 | 2161, 2302 | 2260 | 2266.2 | 500 | 2194* | 2194 | 555.15 |
| 14a | 20 × 10 | 225 | 2.99 | 2161, 2183 | 2167 | 2168 | 200 | 2167* | 2168.2 | 765.48 |
| 15a | 20 × 10 | 225 | 5.02 | 2161, 2171 | 2167 | 2167.2 | 1000 | 2165* | 2166 | 494.16 |
| 16a | 20 × 10 | 225 | 1.34 | 2148, 2301 | 2255 | 2258.8 | 500 | 2211* | 2212.6 | 645.38 |
| 17a | 20 × 10 | 225 | 2.99 | 2088, 2168 | 2141 | 2144 | 200 | 2109* | 2110 | 735.13 |
| 18a | 20 × 10 | 225 | 5.02 | 2057, 2139 | 2137 | 2140.2 | 300 | 2089* | 2089 | 98.35 |
Fig. 7.13 The Gantt chart of seti5xy
the initial solution, a new initialization method (GS + LS + RS) is designed to generate a high-quality initial population; it integrates different strategies to improve the convergence speed and the quality of the final solutions. Different strategies for the selection, crossover, and mutation operators are then adopted, which makes it possible to handle the trade-off in resource allocation. Benchmark problems taken from the literature are solved, and the computational results show that the proposed effective genetic algorithm reaches the same level as, or even better results than, other genetic algorithms in computational time and solution quality. These results not only prove that the proposed method is effective and efficient for solving the flexible Job shop scheduling problem, but also demonstrate that the initialization method is crucial in a genetic algorithm: it can enhance both the convergence speed and the quality of the solution, so we expect to gain more insight into research on effective initialization methods. In the future, it will be interesting to investigate the following issues: (1) adjusting the proportions of GS, LS, and RS used in the initialization in order to generate better results; (2) maintaining the best individual of the mating pool in successive generations of chromosomes; (3) combining the algorithm with a good local search method to enhance the capability of global and local search.
References 1. Chen H, Ihlow J, Lehmann C (1999) A genetic algorithm for flexible job-shop scheduling. In: IEEE international conference on robotics and automation, Detroit, vol 2, pp 1120–1125 2. Ho NB, Tay JC, Edmund MK Lai (2007) An effective architecture for learning and evolving flexible job shop schedules. Eur J Oper Res 179:316–333
3. Pezzella F, Morganti G, Ciaschetti G (2007) A genetic algorithm for the flexible job-shop scheduling problem. Comput Oper Res 35(10):3202–3212 4. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1:117–129 5. Kacem I, Hammadi S, Borne P (2002) Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans Syst Man Cybernet 32(1):1–13 6. Brucker P, Schile R (1990) Job-shop scheduling with multi-purpose machines. Computing 45(4):369–375 7. Mastrolilli M, Gambardella LM (1996) Effective neighborhood functions for the flexible job shop problem. J Sched 3:3–20 8. Najid NM, Dauzère-Pérès S, Zaidat A (2002) A modified simulated annealing method for flexible job shop scheduling problem. IEEE Int Conf Syst Man Cybernet 5:6–12 9. Brandimarte P (1993) Routing and scheduling in a flexible job shop by taboo search. Ann Oper Res 41:157–183 10. Paulli J (1995) A hierarchical approach for the FMS scheduling problem. Eur J Oper Res 86(1):32–42 11. Hurink E, Jurisch B, Thole M (1994) Tabu search for the job shop scheduling problem with multi-purpose machines. Oper Res Spektrum 15:205–215 12. Dauzère-Pérès Paulli E (1997) An integrated approach for modeling and solving the general multi-processor job-shop scheduling problem using tabu search. Ann Oper Res 70:281–306 13. Mastrolilli M, Gambardella LM (2000) Effective neighborhood functions for the flexible job shop problem. J Sched 3(1):3–20 14. Amiri M, Zandieh M, Yazdani M, Bagheri A (2010) A parallel variable neighborhood search algorithm for the flexible job-shop scheduling problem. Expert Syst Appl 37(1):678–687 15. Yang JB (2001) GA-based discrete dynamic programming approach for scheduling in FMS environments. IEEE Trans Syst Man Cybernet Part B 31(5):824–835 16. Zhang HP, Gen M (2005) Multistage-based genetic algorithm for flexible job-shop scheduling problem. J Complexity Int 48:409–425 17. 
Jia HZ, Nee AYC, Fuh JYH, Zhang YF (2003) A modified genetic algorithm for distributed scheduling problems. Int J Intell Manuf 14:351–362 18. Kacem I (2003) Genetic algorithm for the flexible job-shop scheduling problem. IEEE Int Conf Syst Man Cybernet 4:3464–3469 19. Kacem I, Hammadi S, Borne P (2002) Pareto-optimality approach for flexible job-shop scheduling problems: Hybridization of evolutionary algorithms and fuzzy logic. Math Comput Simul 60:245–276 20. Tay JC, Wibowo D (2004) An effective chromosome representation for evolving flexible job shop schedules, GECCO 2004. In: Lecture notes in computer science, vol 3103. Springer, Berlin, pp 210–221 21. Mesghouni K, Hammadi S, Borne P. Evolution programs for job-shop scheduling (1997). In: Proceedings of the IEEE international conference on computational cybernetics and simulation, vol 1, pp 720–725 22. Liu HB, Abraham A, Grosan C (2007) A novel variable neighborhood particle swarm optimization for multi-objective flexible job-shop scheduling problems. In: 2 nd international conference on digital information management (ICDIM) on 2007, pp 138–145 23. Pinedo M (2002) Scheduling theory, algorithms, and systems. Prentice-Hall, Englewood Cliffs, NJ (Chapter 2) 24. Shahryar R, Hamid RT, Magdy MAS (2007) A novel population initialization method for accelerating evolutionary algorithms. Compu Math Appl 53:1605–1614 25. Watanabe M, Ida K, Gen M (2005) A genetic algorithm with modified crossover operator and search area adaptation for the job-shop scheduling problem. Comput Ind Eng 48:743–752 26. Gao J, Sun LY, Gen M (2008) A hybrid genetic and variable neighborhood descent algorithm for flexible job shop scheduling problems. Comput Oper Res 35(9):2892–2907
27. Lee KM, Yamakawa T, Lee KM (1998) A genetic algorithm for general machine scheduling problems. Int J Knowl Based Electronic 2:60–66 28. Barnes JW, Chambers JB. Flexible job shop scheduling by tabu search. Graduate program in operations research and industrial engineering, The University of Texas at Austin 1996; Technical Report Series: ORP96-09 29. Fisher H, Thompson GL (1963) Probabilistic learning combinations of local job shop scheduling rules. Prentice-Hall, Englewood Cliffs, NJ, pp 225–251 30. Lawrence S (1984) Supplement to resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques. GSIA, Carnegie Mellon University, Pittsburgh, PA
Chapter 8
An Effective Collaborative Evolutionary Algorithm for FJSP
8.1 Introduction Production scheduling is one of the most important issues in the planning and operation of modern manufacturing systems [1]. Optimizing production scheduling can bring significant improvements to manufacturing efficiency by eliminating or reducing scheduling conflicts, reducing flow time and work-in-process, improving the utilization of production resources, and adapting to irregular shop floor disturbances. The classical Job shop Scheduling Problem (JSP) is one of the most difficult problems in this area and has been proved to be NP-hard [2]. It assumes that there is no flexibility of the resources (including machines and tools) for each operation of every job, which may suit traditional manufacturing systems. In modern manufacturing enterprises, however, many flexible manufacturing systems and NC machines have been introduced to improve production efficiency. These machines and systems can process many types of operations, so the JSP assumption that each operation can be processed by only one machine no longer matches the current state of manufacturing, and JSP research results cannot be applied directly to modern manufacturing systems. For this reason, the Flexible Job shop Scheduling Problem (FJSP) has attracted more and more attention from researchers and engineers. FJSP is an extension of the classical job shop scheduling problem: it allows an operation to be processed by any machine from a given set, and it is also NP-hard. After Brucker [3] first presented this problem in 1990, many methods have been proposed to solve it. The current approaches for solving FJSP mainly include the Genetic Algorithm (GA) [4], Tabu Search (TS) [5, 6], the Variable Neighborhood Search algorithm (VNS) [7, 8], and some hybrid algorithms [9–11]. The FJSP can be decomposed into two subproblems: the Machine Selection problem (MS) and the Operation Sequencing problem (OS).
Most current approaches use an integrated encoding method to solve the FJSP. However, these two subproblems are different, and the integrated encoding method may make the solution space more complex, which can prevent the algorithm from exerting its full search ability. © Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_8
Therefore, this chapter proposes a Multi-Swarm Collaborative Evolutionary Algorithm (MSCEA) to solve FJSP. In this method, the two subproblems are evolved in different swarms. They interact with each other in every generation of the algorithm. Through experimental studies, the merits of the proposed approach can be shown clearly. The remainder of this chapter is organized as follows. Problem formulation is discussed in Sect. 8.2. MSCEA-based approach for FJSP is proposed in Sect. 8.3. Experimental studies are reported in Sect. 8.4. Section 8.5 shows the conclusions.
8.2 Problem Formulation The n × m FJSP can be formulated as follows [12]. There is a set of n jobs J = {J 1 , J 2 , J 3 , …, J n } and a set of m machines M = {M 1 , M 2 , M 3 , …, M m }. Each job J i consists of a sequence of operations {Oi1 , Oi2 , Oi3 , …, Oini }, where ni is the number of operations that J i comprises. Each operation Oij (i = 1, 2, …, n; j = 1, 2, …, ni ) has to be processed by one machine out of a set of given machines M ij ⊆ M. The problem is thus to determine both an assignment of the operations to the machines and a sequence of the operations on the machines so that the given criteria are satisfied. In this chapter, the scheduling objective is to minimize the maximal completion time of all the operations, i.e., the makespan. The mathematical model can be found in Fattahi et al. [13].
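In the usual notation, with Cij the completion time of operation Oij and pijk the processing time of Oij on machine Mk (notation assumed here for illustration; the complete model is given in Fattahi et al. [13]), the objective can be written as:

```latex
\min\; C_{\max}, \qquad C_{\max} = \max_{1 \le i \le n,\; 1 \le j \le n_i} C_{ij},
```

subject, among other constraints, to the precedence condition \(C_{ij} \ge C_{i,j-1} + p_{ijk}\) whenever \(O_{ij}\) is assigned to a machine \(M_k \in M_{ij}\).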
8.3 Proposed MSCEA for FJSP 8.3.1 The Optimization Strategy of MSCEA The proposed optimization strategy explores the solution spaces of the MS and OS subproblems of FJSP separately and collaboratively. In other words, when one of the two components is being optimized, the other component remains unchanged. This optimization process is repeated until the termination criteria are reached. It should be mentioned that the two subproblems of FJSP are combined together to evaluate the fitness of a solution. Under this optimization strategy, different optimization algorithms such as GA, TS, and VNS could be employed to search the spaces of MS and OS individually. In this study, an evolutionary algorithm is used for each component; therefore, the method is called the Multi-Swarm Collaborative Evolutionary Algorithm (MSCEA). This optimization strategy for FJSP is shown in Fig. 8.1. The overall procedure of the proposed optimization strategy for FJSP is described as follows: Step 1: Initialization. Assume there are N jobs. Generate N + 1 swarms, including N MS swarms and 1 OS swarm;
Fig. 8.1 The optimization strategy of MSCEA for FJSP
Step 2: Evaluation. Evaluate every individual in each swarm, where the individual is combined with collaborative partners selected from all of the other swarms; Step 3: Are the termination criteria satisfied? If yes, stop and output the results; otherwise go to Step 4; Step 4: Collaborative evolution. Update the swarms: generate the new individuals for each swarm by the genetic operators. Go to Step 2.
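The collaborative evaluation of Step 2 can be sketched as follows (an illustrative Python sketch; `fitness` is assumed to take one individual per swarm and return the makespan of the assembled solution):

```python
import random

def evaluate_swarms(swarms, fitness):
    """Collaborative evaluation sketch: each individual is scored by
    combining it with one randomly chosen partner from every other
    swarm and evaluating the assembled solution. Returns one list of
    fitness values per swarm."""
    scores = []
    for qi, swarm in enumerate(swarms):
        row = []
        for ind in swarm:
            # Assemble a full solution: this individual plus partners.
            partners = [ind if q == qi else random.choice(s)
                        for q, s in enumerate(swarms)]
            row.append(fitness(partners))
        scores.append(row)
    return scores
```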
8.3.2 Encoding Chromosomes correspond to solutions of the FJSP, and the encoding of individuals is very important in an evolutionary algorithm. In this chapter, the encoding method of Gao et al. [12] is adopted. Because the algorithm has two types of swarms, the MS and OS subproblems have different encoding methods. The encoding method for the OS swarm is the operation-based representation. Each MS
swarm represents the MS for the corresponding job. The length of the chromosome for an MS swarm equals the number of operations of the corresponding job.
8.3.3 Initial Population and Fitness Evaluation The encoding principle of the OS subproblem in this chapter is the operation-based representation. The important feature of this representation is that any permutation of the chromosome can be decoded into a feasible schedule; it never violates the precedence constraints among the operations. The MS and OS swarms are generated based on this encoding principle. In this chapter, the makespan is used as the objective.
8.3.4 Genetic Operators

8.3.4.1 Selection
The OS swarm and the MS swarms use the same selection operator: random selection, in which the algorithm randomly selects the individuals that undergo the crossover and mutation operators.
8.3.4.2 Crossover
The Precedence Operation crossover (POX) has been adopted as the crossover operator for the OS swarm, since it can effectively pass the good characteristics of the parents to the offspring. The basic working procedure of POX is described as follows (the two parents are denoted as P1 and P2 , the two offspring as O1 and O2 ): Step 1: The jobset J = {J 1 , J 2 , J 3 , …, J n } is randomly divided into two groups, Jobset 1 and Jobset 2 ; Step 2: Every element of P1 that belongs to Jobset 1 is appended to the same position in O1 and deleted from P1 ; every element of P2 that belongs to Jobset 2 is appended to the same position in O2 and deleted from P2 ; Step 3: The remaining elements of P2 are appended one by one to the remaining empty positions in O1 , and the remaining elements of P1 are appended one by one to the remaining empty positions in O2 . A two-point crossover has been adopted as the crossover operator for the MS swarms. In this operation, two positions are selected randomly at first; then two child strings are created by swapping all elements between these positions of the two parent strings.
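The two-point crossover for the MS swarms can be sketched as follows (illustrative Python; the chromosomes are the machine-index vectors of one job):

```python
import random

def two_point_crossover(p1, p2):
    """Two-point crossover sketch for the MS swarm: pick two random
    cut points and swap the segment between them."""
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    c1 = p1[:i] + p2[i:j] + p1[j:]
    c2 = p2[:i] + p1[i:j] + p2[j:]
    return c1, c2
```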
8.3.4.3 Mutation
For the OS swarm, the neighborhood mutation method is used as the mutation operator. Its working procedure is as follows: Step 1: Select 3 elements in the parent and generate all the neighborhood chromosomes; Step 2: Randomly choose one of the neighborhood chromosomes and set it as the current chromosome. For the MS swarms, a mutation operator is designed as follows: Step 1: Select r positions in the parent (r is half of the length of the chromosome); Step 2: For each selected position (corresponding to one operation), change the value of the position to another machine in the machine set of the corresponding operation.
8.3.5 Termination Criteria If the number of iterations that the proposed MSCEA has run reaches the maximum number of generations (maxGen), the algorithm stops.
8.3.6 Framework of MSCEA The workflow of the proposed MSCEA is shown in Fig. 8.2. The basic procedure of the algorithm is described as follows: Step 1: Set the parameters of MSCEA, including the population size of the OS swarm (Popsize1 ), the population size of the MS swarms (Popsize2 ), the maximum number of generations (maxGen), the crossover probability of the OS swarm (pc1 ), the crossover probability of the MS swarms (pc2 ), the mutation probability of the OS swarm (pm1 ), and the mutation probability of the MS swarms (pm2 ); Step 2: Initialization. Assume there are N jobs. Use the encoding methods to generate N + 1 swarms, including N MS swarms with Popsize2 individuals and 1 OS swarm with Popsize1 individuals. Set Gen = 1, where Gen is the current generation; Step 3: Evaluation. Evaluate every individual in each swarm, where the individual is combined with collaborative partners randomly selected from all of the other swarms. If Gen = 1, record the best fitness f best together with the solution made up of the corresponding individuals from each swarm; else, update f best with the solution; Step 4: Is the termination criterion satisfied (Gen > maxGen)? If yes, go to Step 6; else, go to Step 5;
Fig. 8.2 Workflow of the proposed MSCEA
Step 5: Collaborative evolution. Update the swarms: generate the new individuals for each swarm by the genetic operators. Let q be the serial number of each swarm (q = 1, 2, 3, …, i, …, N, N + 1), initialized as q = 1. Step 5.1: For the qth swarm, use the corresponding genetic operators to generate a new swarm to replace the old one, and set the new swarm as the current qth swarm; Step 5.2: Is q < N + 1? If yes, go to Step 5.3; else, set Gen = Gen + 1 and go to Step 3; Step 5.3: Set q = q + 1 and go to Step 5.1; Step 6: Output the best fitness f best together with the solution.
8.4 Experimental Studies The proposed MSCEA is coded in C++ and run on a computer with a 2.0 GHz Core (TM) 2 Duo CPU. To illustrate the effectiveness and performance of the proposed algorithm, a famous benchmark set adapted from Brandimarte [14] has been selected; it contains 10 problems. The parameters of MSCEA for these problems are set as follows: the population size of the OS swarm Popsize1 = 200; the population size of the MS swarms Popsize2 = 200; the maximum number of generations maxGen = 200; the crossover probability of the OS swarm pc1 = 0.9; the crossover probability of the MS swarms pc2 = 0.9; the mutation probability of the OS swarm pm1 = 0.1; the mutation probability of the MS swarms pm2 = 0.1. Table 8.1 shows the results of the proposed algorithm and the comparisons with some previous methods; TS1, TS2, LEGA, GA, VNS, PVNS, and HTS denote the algorithms in [14, 5, 10, 4, 7, 8, 9], respectively. Figure 8.3 illustrates the Gantt chart of problem MK06. The results marked by * are the best results among these 8 algorithms. From Table 8.1, the proposed MSCEA obtains 8 best results among these algorithms. The numbers of best results obtained by the other algorithms (except TS2) are smaller than that of the proposed MSCEA; moreover, for the MK07 problem, the result of TS2 is worse than that of MSCEA, which means that the proposed approach obtains more good results than TS2. The experimental results reveal that the proposed method can solve the FJSP effectively. The reason is as follows: in the proposed algorithm, the two subproblems evolve in different swarms and interact with each other in every generation, which helps the algorithm exert its full search ability. Therefore, the proposed algorithm can solve the FJSP effectively.
8.5 Conclusions FJSP is a very important problem in the modern manufacturing system. This chapter developed a multi-swarm collaborative evolutionary algorithm to solve FJSP. Experimental studies have been used to test the performance of the proposed approach. The results show that the proposed approach has achieved significant improvement.
Table 8.1 The experimental results and the comparisons with other methods

| Problem | (LB, UB) | TS1 [14] | TS2 [5] | LEGA [10] | GA [4] | VNS [7] | PVNS [8] | HTS [9] | MSCEA |
|---|---|---|---|---|---|---|---|---|---|
| MK01 | (36, 42) | 42 | *40 | *40 | *40 | *40 | *40 | *40 | *40 |
| MK02 | (24, 32) | 32 | *26 | 29 | *26 | *26 | *26 | *26 | *26 |
| MK03 | (204, 211) | 211 | *204 | N/A | *204 | *204 | *204 | *204 | *204 |
| MK04 | (48, 81) | 81 | *60 | 81 | *60 | *60 | *60 | 62 | *60 |
| MK05 | (168, 186) | 186 | 173 | 186 | 173 | 173 | 173 | *172 | 173 |
| MK06 | (33, 86) | 86 | *58 | 86 | 63 | 59 | 60 | 65 | *58 |
| MK07 | (133, 157) | 157 | 144 | 157 | *139 | 140 | 141 | 140 | 141 |
| MK08 | (523) | *523 | *523 | *523 | *523 | *523 | *523 | *523 | *523 |
| MK09 | (299, 369) | 369 | *307 | 369 | 311 | *307 | *307 | 310 | *307 |
| MK10 | (165, 296) | 296 | *198 | 296 | 212 | 207 | 208 | 214 | *198 |

N/A means the result was not given by the author.
Fig. 8.3 Gantt chart of problem MK06 (Makespan = 58)
References

1. Chen H, Ihlow J, Lehmann C (1999) A genetic algorithm for flexible job-shop scheduling. In: IEEE international conference on robotics and automation, vol 2, pp 1120–1125
2. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1:117–129
3. Brucker P, Schlie R (1990) Job-shop scheduling with multi-purpose machines. Computing 45(4):369–375
4. Pezzella F, Morganti G, Ciaschetti G (2008) A genetic algorithm for the flexible job shop scheduling problem. Comput Oper Res 35:3202–3212
5. Mastrolilli M, Gambardella LM (2000) Effective neighbourhood functions for the flexible job shop problem. J Sched 3:3–20
6. Mehrabad MS, Fattahi P (2007) Flexible job shop scheduling with tabu search algorithms. Int J Adv Manuf Technol 32:563–570
7. Amiri M, Zandieh M, Yazdani M, Bagheri A (2010) A variable neighbourhood search algorithm for the flexible job shop scheduling problem. Int J Prod Res 48(19):5671–5689
8. Yazdani M, Amiri M, Zandieh M (2010) Flexible job shop scheduling with parallel variable neighborhood search algorithm. Expert Syst Appl 37:678–687
9. Li JQ, Pan QK, Suganthan PN, Chua TJ (2010) A hybrid tabu search algorithm with an efficient neighborhood structure for the flexible job shop scheduling problem. Int J Adv Manuf Technol
10. Ho NB, Tay JC, Lai EMK (2007) An effective architecture for learning and evolving flexible job shop schedules. Eur J Oper Res 179:316–333
11. Bozejko W, Uchronski M, Wodecki M (2010) Parallel hybrid metaheuristics for the flexible job shop problem. Comput Ind Eng 59:323–333
12. Gao L, Peng CY, Zhou C, Li PG (2006) Solving flexible job shop scheduling problem using general particle swarm optimization. In: The 36th CIE conference on computers and industrial engineering, pp 3018–3027
13. Fattahi P, Mehrabad MS, Jolai F (2007) Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. J Intell Manuf 18:331–342
14. Brandimarte P (1993) Routing and scheduling in a flexible job shop by taboo search. Ann Oper Res 41(3):157–183
Chapter 9
Mathematical Modeling and Evolutionary Algorithm-Based Approach for IPPS
9.1 Introduction

Process planning and scheduling, which link product design and manufacturing, are two of the most important functions in a manufacturing system. A process plan specifies what manufacturing resources and technical operations/routes are needed to produce a product (a job). The outcome of process planning includes the identification of machines, tools, and fixtures suitable for a job, and the arrangement of its operations. Typically, a job may have one or more alternative process plans. With the process plans of jobs as inputs, the scheduling task is to schedule the operations of all the jobs on machines while satisfying the precedence relationships in the process plans.

Although there is a close relationship between process planning and scheduling, their integration is still a challenge in research and applications [21]. In traditional approaches, process planning and scheduling were carried out sequentially: scheduling was conducted after the process plans had been generated. Those approaches have become an obstacle to improving the productivity and responsiveness of manufacturing systems and cause the following problems in particular [10, 20]:

(1) In practical manufacturing, process planners plan jobs individually. For each job, manufacturing resources on the shop floor are usually assigned without considering the competition for those resources from other jobs [23]. This leads process planners to select the most desirable machines for each job repeatedly. The generated process plans are therefore somewhat unrealistic and cannot be readily executed on the shop floor for a group of jobs [12]. Accordingly, the resulting "optimal" process plans often become infeasible when they are carried out in practice at a later stage.

(2) Scheduling plans are often determined after process plans. In the scheduling phase, scheduling planners have to take the determined process plans as given.
Fixed process plans may drive scheduling plans to end up with severely unbalanced resource loads and create superfluous bottlenecks.

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020
X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_9
(3) Even if process planners consider the restrictions of the current resources on the shop floor, the constraints considered in the process planning phase may have already changed due to the time delay between the planning phase and the execution phase. This can make the optimized process plan infeasible [9]. Investigations have shown that 20–30% of the total production plans in a given period have to be rescheduled to adapt to dynamic changes in the production environment [10].

(4) In most cases, both for process planning and for scheduling, a single-criterion optimization technique is used to determine the best solution. However, the real production environment is best represented by considering more than one criterion simultaneously [10]. Furthermore, process planning and scheduling may have conflicting objectives: process planning emphasizes the technological requirements of a job, while scheduling attaches importance to the timing aspects and resource sharing of all jobs. Without appropriate coordination, conflicts between the two arise.

To overcome these problems, there is an increasing need for deep research on and application of Integrated Process Planning and Scheduling (IPPS) systems. IPPS introduces significant improvements to the efficiency of manufacturing by eliminating or reducing scheduling conflicts, reducing flow time and work-in-process, improving the utilization of production resources, and adapting to irregular shop floor disturbances [12]. Without IPPS, a true Computer-Integrated Manufacturing System (CIMS), which strives to integrate the various phases of manufacturing in a single comprehensive system, cannot be effectively realized.

The remainder of this chapter is organized as follows. Problem formulation and mathematical modeling are discussed in Sect. 9.2. An Evolutionary Algorithm (EA)-based approach for IPPS is proposed in Sect. 9.3. Experimental studies and discussions are reported in Sect. 9.4. Section 9.5 concludes the chapter.
9.2 Problem Formulation and Mathematical Modeling

9.2.1 Problem Formulation

The IPPS problem can be defined as follows [4]: given a set of N parts to be processed on machines, with operations that may use alternative manufacturing resources, select suitable manufacturing resources and sequence the operations so as to determine a schedule in which the precedence constraints among operations are satisfied and the corresponding objectives are achieved.
In the manufacturing systems considered in this study, a set of process plans for each part is designed and maintained. The generation of a scheduling plan and the selection of one process plan per job from its set of process plans are determined jointly so as to minimize the objectives. If there are N jobs, each with G_i alternative process plans, the number of possible process plan combinations is G_1 × G_2 × ··· × G_i × ··· × G_N. This problem is NP-hard.

Three types of flexibility are considered in process planning [5, 20]: operation flexibility, sequencing flexibility, and processing flexibility [1]. Operation flexibility, also called routing flexibility [15], relates to the possibility of performing one operation on alternative machines, with possibly distinct processing times and costs. Sequencing flexibility is determined by the possibility of interchanging the sequence of the required operations. Processing flexibility is determined by the possibility of processing the same manufacturing feature with alternative operations or sequences of operations. Considering these flexibilities can yield better performance on some criteria [8].
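The multiplicative growth of the plan-selection space can be illustrated with a short sketch (our own, not from the book; `plan_combinations` is a hypothetical helper):

```python
from math import prod

def plan_combinations(plans_per_job):
    """Number of possible process-plan combinations G1 x G2 x ... x GN."""
    return prod(plans_per_job)

# With 6 jobs and 3 alternative plans each (as in Experiment 1 below),
# the plan-selection space alone already contains 3**6 combinations,
# before any operation sequencing or machine assignment is decided.
print(plan_combinations([3, 3, 3, 3, 3, 3]))  # 729
```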
9.2.2 Mathematical Modeling

The mathematical model of IPPS is defined here. The most popular criteria for scheduling include makespan, job tardiness, and the balance of machine utilization, while manufacturing cost is the major criterion for process planning [13]. In this chapter, scheduling is assumed to be job shop scheduling, and the mathematical model of IPPS is based on the mixed integer programming model of the Job shop Scheduling Problem (JSP) [16]. The following assumptions are made:

(1) Jobs are independent. Job preemption is not allowed and each machine can handle only one job at a time.
(2) The different operations of one job cannot be processed simultaneously.
(3) All jobs and machines are available at time zero simultaneously.
(4) After a job is processed on a machine, it is immediately transported to the next machine in its process plan, and the transportation time is assumed to be negligible.
(5) Setup time for the operations on the machines is independent of the operation sequence and is included in the processing times.

Based on these assumptions, the mathematical model of IPPS is described as follows [14]. The notation used in the model is:

N        the total number of jobs;
M        the total number of machines;
d_i      the due date of job i;
ω_i      the weight of job i, basically a priority factor denoting the importance of job i relative to the other jobs in the system;
G_i      the total number of alternative process plans of job i;
o_ijl    the jth operation in the lth alternative process plan of job i;
P_il     the number of operations in the lth alternative process plan of job i;
k        the alternative machine corresponding to o_ijl;
t_ijlk   the processing time of operation o_ijl on machine k, t_ijlk > 0;
c_ijlk   the earliest completion time of operation o_ijl on machine k;
c_i      the completion time of job i;
L_i      the lateness of job i;
T_i      the tardiness of job i;
E_i      the earliness of job i;
v_ijlk   the processing cost of operation o_ijl on machine k;
A        a very large positive number;
U_i      = 1 if c_i > d_i (the unit penalty of job i); 0 otherwise;
X_il     = 1 if the lth alternative process plan of job i is selected; 0 otherwise;
Y_ijlpqsk = 1 if operation o_ijl precedes operation o_pqs on machine k; 0 otherwise;
Z_ijlk   = 1 if machine k is selected for o_ijl; 0 otherwise.
Objectives:

(1) Minimizing the makespan, which is the completion time of the last operation of all the jobs:

Min makespan = max { c_ijlk × X_il × Z_ijlk }
∀i ∈ [1, N], ∀j ∈ [1, P_il], ∀l ∈ [1, G_i], ∀k ∈ [1, M]    (9.1)

(2) Minimizing the total processing cost:

Min Σ_{i=1}^{N} Σ_{j=1}^{P_il} v_ijlk × X_il × Z_ijlk
∀i ∈ [1, N], ∀j ∈ [1, P_il], ∀l ∈ [1, G_i], ∀k ∈ [1, M]    (9.2)

(3) Minimizing the maximum lateness:

L_i = c_i − d_i;  Min L_max = max{L_1, L_2, …, L_N}  ∀i ∈ [1, N]    (9.3)

(4) Minimizing the total weighted tardiness:

T_i = max{c_i − d_i, 0} = max{L_i, 0};  Min Σ_i ω_i × T_i  ∀i ∈ [1, N]    (9.4)

(5) Minimizing the weighted number of tardy jobs:

Min Σ_i ω_i × U_i  ∀i ∈ [1, N]    (9.5)

(6) Minimizing the total earliness plus the total tardiness:

E_i = max{d_i − c_i, 0};  Min Σ_i E_i + Σ_i T_i  ∀i ∈ [1, N]    (9.6)
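The objectives can be sketched in code. The following is a minimal illustration (function and argument names are our assumptions, not the book's): `c`, `d`, and `w` are the per-job completion times, due dates, and weights; the cost objective (9.2) is a sum over the selected operations and is taken here as an already computed total.

```python
# A sketch of objectives (9.1) and (9.3)-(9.6) given per-job quantities.
def objectives(c, d, w, total_cost):
    lateness = [ci - di for ci, di in zip(c, d)]             # L_i = c_i - d_i
    tardiness = [max(li, 0) for li in lateness]              # T_i = max(L_i, 0)
    earliness = [max(di - ci, 0) for ci, di in zip(c, d)]    # E_i = max(d_i - c_i, 0)
    tardy = [1 if ci > di else 0 for ci, di in zip(c, d)]    # U_i
    return {
        "makespan": max(c),                                                  # (9.1)
        "total_cost": total_cost,                                            # (9.2)
        "max_lateness": max(lateness),                                       # (9.3)
        "weighted_tardiness": sum(wi * ti for wi, ti in zip(w, tardiness)),  # (9.4)
        "weighted_tardy_jobs": sum(wi * ui for wi, ui in zip(w, tardy)),     # (9.5)
        "earliness_plus_tardiness": sum(earliness) + sum(tardiness),         # (9.6)
    }

print(objectives([92, 80], [90, 85], [1, 2], 0)["makespan"])  # 92
```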
Subject to:

(1) For the first operation in the alternative process plan l of job i:

(c_i1lk × Z_i1lk × X_il) + A(1 − X_il) ≥ t_i1lk × Z_i1lk × X_il
∀i ∈ [1, N], ∀l ∈ [1, G_i], ∀k ∈ [1, M]    (9.7)

(2) For the last operation in the alternative process plan l of job i:

c_iP_il,lk × Z_iP_il,lk × X_il − A(1 − X_il) ≤ makespan
∀i ∈ [1, N], ∀l ∈ [1, G_i], ∀k ∈ [1, M]    (9.8)

(3) The different operations of one job cannot be processed simultaneously:

c_ijlk × Z_ijlk × X_il − c_i(j−1)lk1 × Z_i(j−1)lk1 × X_il + A(1 − X_il) ≥ t_ijlk × Z_ijlk × X_il
∀i ∈ [1, N], ∀j ∈ [1, P_il], ∀l ∈ [1, G_i], ∀k1 ∈ [1, M]    (9.9)

(4) Each machine can handle only one job at a time:

c_pqsk × Z_pqsk × X_ps − c_ijlk × Z_ijlk × X_il + A(1 − X_il) + A(1 − X_ps)
  + A(1 − Y_ijlpqsk) × Z_ijlk × Z_pqsk × X_il × X_ps ≥ t_pqsk × Z_pqsk × X_ps    (9.10)

c_ijlk × Z_ijlk × X_il − c_pqsk × Z_pqsk × X_ps + A(1 − X_il) + A(1 − X_ps)
  + A × Y_ijlpqsk × Z_ijlk × Z_pqsk × X_il × X_ps ≥ t_ijlk × Z_ijlk × X_il
∀i, p ∈ [1, N], ∀j ∈ [1, P_il], ∀q ∈ [1, P_ps], ∀l ∈ [1, G_i], ∀s ∈ [1, G_p], ∀k ∈ [1, M]    (9.11)

(5) Only one alternative process plan can be selected for job i:

Σ_l X_il = 1  ∀i ∈ [1, N]    (9.12)

(6) Only one machine should be selected for each operation:

Σ_{k=1}^{M} Z_ijlk = 1  ∀i ∈ [1, N], ∀j ∈ [1, P_il], ∀l ∈ [1, G_i]    (9.13)

(7) There is only one precedence relation between two operations in a scheduling plan:

Y_ijlpqsk × X_il × X_ps × Z_ijlk ≤ 1    (9.14)

Y_ijlpqsk × X_il ≤ Z_ijlk × X_il    (9.15)

Y_ijlpqsk × X_ps ≤ Z_pqsk × X_ps
∀i, p ∈ [1, N], ∀j ∈ [1, P_il], ∀q ∈ [1, P_ps], ∀l ∈ [1, G_i], ∀s ∈ [1, G_p], ∀k ∈ [1, M]    (9.16)

Σ_{i=1}^{N} Σ_{j=1}^{P_il} Σ_{l=1}^{G_i} Y_ijlpqsk × X_il × Z_ijlk = Σ_{o_k1}^{o_pqsk − 1} Z_okm
∀p ∈ [1, N], ∀q ∈ [1, P_il], ∀s ∈ [1, G_i], ∀k ∈ [1, M]    (9.17)

where Σ_{o_k1}^{o_pqsk − 1} Z_okm means the total number of operations processed before o_pqs on machine k; o_k1 means the first operation on machine k; o_pqsk means the current operation on machine k.

(8) The completion time of each operation should be either positive or zero:

c_ijlk × Z_ijlk × X_il ≥ 0  ∀i ∈ [1, N], ∀j ∈ [1, P_il], ∀l ∈ [1, G_i], ∀k ∈ [1, M]    (9.18)
The objective functions are Eqs. (9.1)–(9.6). In this chapter, mono-objective problems are considered; multi-objective problems can also be handled with the weighted-sum method or a Pareto strategy. The constraints are Eqs. (9.7)–(9.18). Constraint (9.9) expresses that different operations of one job cannot be processed simultaneously; this is the precedence constraint between the operations of a job. Constraints (9.10) and (9.11), the machine constraints, express that each machine can handle only one job at a time. Constraints (9.7)–(9.11) are called disjunctive constraints. Constraint (9.12) ensures that only one alternative process plan is selected for each job. Constraint (9.13) ensures that only one machine is selected for each operation. Constraints (9.14)–(9.17) ensure that there is only one precedence relation between two operations in a scheduling plan. Constraint (9.18) ensures that the completion time of each operation is nonnegative.

The mixed integer programming model of IPPS developed above can, in principle, be solved by mixed integer programming methods. However, JSP has been proved to be one of the most difficult NP-complete problems [3], and the IPPS problem, also NP-complete, is more complicated than JSP [7]. For large problems, it is difficult to find optimal solutions in a reasonable time. Therefore, in this chapter, an evolutionary algorithm-based approach has been developed to solve it.
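What the disjunctive constraints enforce can be illustrated by checking a decoded schedule directly. This is our own sketch, not the book's code; an operation is represented as a hypothetical (job, machine, start, end) tuple.

```python
# Check the two disjunctive families: operations of one job must not overlap
# in time (9.9), and a machine processes one operation at a time (9.10)-(9.11).
def is_feasible(ops):
    by_job, by_machine = {}, {}
    for job, machine, start, end in ops:
        by_job.setdefault(job, []).append((start, end))
        by_machine.setdefault(machine, []).append((start, end))

    def no_overlap(intervals):
        intervals.sort()  # sort by start time, then check consecutive pairs
        return all(a_end <= b_start
                   for (_, a_end), (b_start, _) in zip(intervals, intervals[1:]))

    return (all(no_overlap(v) for v in by_job.values())
            and all(no_overlap(v) for v in by_machine.values()))

# Two operations of job 1 overlapping in time violate constraint (9.9):
print(is_feasible([(1, 1, 0, 5), (1, 2, 3, 8)]))  # False
```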
9.3 Evolutionary Algorithm-Based Approach for IPPS

9.3.1 Representation

Each chromosome in the scheduling population consists of two parts of different lengths, as shown in Fig. 9.1. The first part of the chromosome is the alternative process plan string (B-string). The positions 1 to N in the B-string represent jobs 1 to N, and the number in the ith position represents the selected alternative process plan of job i. The second part of the chromosome is the scheduling plan string (A-string). In this chapter, the scheduling encoding principle is the operation-based representation, made up of Genes in which the first number is the job number and the second number is the alternative machine. This representation uses an unpartitioned permutation with P_il repetitions of job numbers (the first numbers of the Genes) [2]; each job number occurs P_il times in the chromosome. Scanning the chromosome from left to right, the fth occurrence of a job number refers to the fth operation in the technological sequence
Fig. 9.1 Chromosome of integration
of this job. An important feature of the operation-based representation is that any permutation of the chromosome can be decoded to a feasible schedule [2]. The first numbers of the Genes represent a permutation of the operations: different appearances of a job number in the chromosome represent different operations of that job, and the order of these appearances is the same as the order of the operations of the job. Consequently, all offspring formed by crossover are feasible solutions [2].

It is assumed that there are N jobs, and that q_i is the number of operations of the plan with the most operations among all the alternative process plans of job i. The length of the A-string is then equal to the sum of the q_i over all jobs. The number of occurrences of i (as the first number of a Gene) in the A-string is equal to the number of operations of the selected lth alternative process plan. Based on this principle, the composition elements of the A-string are determined. If the number of these elements is less than the A-string length, the remaining positions are filled with (0, 0). An A-string is generated by arranging all the elements randomly, and the B-string is generated by selecting an alternative process plan randomly for every job.

The second numbers of the Genes represent the alternative machines of the operations; they denote the machines selected for the corresponding operations of all jobs. Assume the hth operation in the selected lth alternative process plan of job i can be processed by a machine set S_ilh = {m_ilh1, m_ilh2, …, m_ilhc_ilh}. The Gene can then be denoted as (i, g_ilh), where g_ilh is an integer between 1 and c_ilh, meaning that the hth operation in the lth selected alternative process plan of job i is assigned to the g_ilh-th machine m_ilhg_ilh in S_ilh.
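The decoding implied by this representation can be sketched as follows. This is a minimal, assumption-laden version (data layouts are ours, not the book's): `b_string[i-1]` is job i's selected plan, and `proc[job-1][plan][h]` lists the (machine, time) alternatives of the job's hth operation.

```python
# Greedily decode a (B-string, A-string) chromosome into a schedule and
# return its makespan.
def decode(b_string, a_string, proc):
    job_ready = {}      # completion time of each job's latest operation
    machine_ready = {}  # completion time of each machine's latest operation
    op_index = {}       # occurrences of each job number seen so far
    makespan = 0
    for job, machine_choice in a_string:
        if job == 0:    # (0, 0) filler gene, skip it
            continue
        plan = b_string[job - 1] - 1
        h = op_index.get(job, 0)          # f-th occurrence -> f-th operation
        machine, time = proc[job - 1][plan][h][machine_choice - 1]
        start = max(job_ready.get(job, 0), machine_ready.get(machine, 0))
        job_ready[job] = machine_ready[machine] = start + time
        op_index[job] = h + 1
        makespan = max(makespan, start + time)
    return makespan

# Two single-operation jobs competing for machine 1 (5 and 7 time units):
proc = [[[[(1, 5)]]], [[[(1, 7)]]]]
print(decode([1, 1], [(1, 1), (2, 1)], proc))  # 12
```

Because every permutation of the A-string yields some operation order respecting each job's internal sequence, this decoding always produces a feasible schedule, matching the feasibility property claimed above.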
9.3.2 Initialization and Fitness Evaluation

The A-string of the chromosome uses the operation-based representation described above, whose important feature is that any permutation of the chromosome can be decoded to a feasible schedule without violating the precedence relations of the operations [2]. The initial population is generated randomly according to this encoding principle. In this chapter, makespan is used as the objective (see Eq. (9.1)).
9.3.3 Genetic Operators

(1) Selection: The tournament selection scheme is used as the selection operator. In tournament selection, a number of individuals (dependent on the tournament size, typically between 2 and 7) are selected randomly from the population, and the individual with the best fitness is chosen. The tournament selection
approach allows a tradeoff to be made between exploration and exploitation of the gene pool [11]; the selection pressure can be modified by changing the tournament size.

(2) Crossover: The crossover procedure for scheduling is as follows:

Step 1: Select a pair of chromosomes P1 and P2 by the selection scheme and initialize two empty offspring, O1 and O2.
Step 2: The B-strings of O1 and O2 are generated as follows:
Step 2.1: Compare the B-string of P1 with the B-string of P2 element by element; whenever an element of P1 is the same as that of P2, record its value and position. This process is repeated until all elements of the B-strings have been compared.
Step 2.2: The elements of P1's B-string recorded in Step 2.1 are copied to the same positions in O1's B-string, while the recorded elements of P2's B-string are copied to the same positions in O2's B-string. The other elements (those that differ between the B-strings of P1 and P2) of P2 are copied to the same positions in O1's B-string, while the other elements of P1 are copied to the same positions in O2's B-string.
Step 3: To match the B-strings of O1 and O2 and avoid producing invalid offspring, the A-strings of P1 and P2 are crossed over as follows:
Step 3.1: Genes in the A-string of P1 whose first numbers match the recorded positions in the B-string (including (0, 0)) are copied to the same positions in the A-string of O1 and deleted from the A-string of P1. Likewise, Genes in the A-string of P2 whose first numbers match the recorded positions (including (0, 0)) are copied to the same positions in the A-string of O2 and deleted from the A-string of P2.
Step 3.2: Get the numbers of the remaining Genes in the A-strings of P1 and P2, n1 and n2.
If n1 ≥ n2, then for O1 the number of empty positions in its A-string is larger than the number of remaining elements in the A-string of P2. Therefore, n1 − n2 empty positions in the A-string of O1 are selected randomly and filled with (0, 0); the remaining elements of P2's A-string are then appended to the remaining empty positions of O1's A-string in order. For O2, n1 ≥ n2 implies that the number of empty positions in its A-string is smaller than the number of remaining elements in the A-string of P1. Therefore, n1 − n2 Genes (0, 0) in the A-string of O2 are selected randomly and set to empty; the remaining elements of P1's A-string are then appended to the empty positions of O2's A-string in order. If n1 < n2, the procedure is reversed.
Step 4: Two valid offspring O1 and O2 are obtained.
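The steps above can be sketched in code. This is a simplified reading of Steps 2–3 (our own, not the book's implementation), which assumes both parents carry the same number of Genes per job, so the (0, 0) padding repair of Step 3.2 is not needed; only offspring O1 is built, O2 being symmetric.

```python
def crossover(p1_b, p1_a, p2_b, p2_a):
    # Step 2.1: jobs whose selected plans agree in both parents.
    same = {i + 1 for i, (x, y) in enumerate(zip(p1_b, p2_b)) if x == y}
    # Step 2.2: O1 keeps P1's agreeing plans and takes the rest from P2.
    o1_b = [x if i + 1 in same else y for i, (x, y) in enumerate(zip(p1_b, p2_b))]
    # Step 3: keep P1's Genes of agreeing jobs in place; fill the remaining
    # positions with P2's other Genes, preserving P2's left-to-right order.
    fill = iter(g for g in p2_a if g[0] not in same)
    o1_a = [g if g[0] in same else next(fill) for g in p1_a]
    return o1_b, o1_a

o1_b, o1_a = crossover([1, 2], [(1, 1), (2, 1), (2, 2)],
                       [1, 3], [(2, 1), (1, 1), (2, 2)])
print(o1_b, o1_a)  # [1, 3] [(1, 1), (2, 1), (2, 2)]
```

Keeping the agreeing jobs' Genes in place while importing the mate's remaining Genes in order is what preserves each job's internal operation sequence, so the offspring stays decodable.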
Fig. 9.2 Crossover operator
One example is shown in Fig. 9.2.

Step 1: Select a pair of chromosomes P1 and P2 by the selection scheme and initialize two empty offspring, O1 and O2.
Step 2: The B-strings of O1 and O2 are generated as follows:
Step 2.1: Comparing the B-string of P1 with that of P2, the second and third elements are the same in both and are recorded.
Step 2.2: The second and third elements of P1's B-string are copied to the same positions in O1's B-string, while the second and third elements of P2's B-string are copied to the same positions in O2's B-string. The other elements, i.e., the first and fourth elements of P2's B-string, are copied to the same positions in O1's B-string, while the first and fourth elements of P1's B-string are copied to the same positions in O2's B-string.
Step 3: To match the B-strings of O1 and O2 and avoid producing invalid offspring, the A-strings of P1 and P2 are crossed over as follows:
Step 3.1: The Genes whose first numbers in the A-string of P1 equal 2 or 3 (including (0, 0)) are copied to the same positions in the A-string of O1 and deleted from the A-string of P1. The Genes whose first numbers in the A-string of P2 equal 2 or 3 (including (0, 0)) are copied to the same positions in the A-string of O2 and deleted from the A-string of P2.
Step 3.2: In this example, n1 = 6, n2 = 7, so n1 < n2 and n2 − n1 = 1. For O1, one (0, 0) in O1 is selected randomly and set to empty, as marked in the A-string of O1 in Fig. 9.2 (here, the fifth position). The remaining elements of P2's A-string are then appended to the empty positions of O1's A-string in order. For O2, one empty position is selected randomly and filled with (0, 0), as marked in the A-string of O2 in Fig. 9.2 (here, the 13th position).
Then, the remaining elements in A-string of P1 are appended to the empty positions in A-string of O2
seriatim.
Step 4: Two valid offspring O1 and O2 are obtained (see Fig. 9.2).

(3) Mutation: In this chapter, three mutation operators are used: two-point swapping mutation, changing one job's alternative process plan, and mutation of alternative machines. In the evolution procedure, one operator is chosen randomly in every generation.

The procedure of two-point swapping mutation is as follows:
Step 1: Select one chromosome P by the selection scheme.
Step 2: Select two points in the A-string of P randomly.
Step 3: Generate a new chromosome O by interchanging these two elements.

The procedure of the second mutation (changing one job's alternative process plan) is as follows:
Step 1: Select one chromosome P by the selection scheme.
Step 2: Select one point in the B-string of P randomly.
Step 3: Change the value of the selected element to another one within its range (the number of alternative process plans).
Step 4: Compare the number of operations of the newly selected alternative process plan with that of the old one. If it increases, a new A-string of chromosome O is generated by changing randomly selected filler Genes (0, 0) in the A-string of P to the job number one by one; if it decreases, a new A-string of O is generated by changing randomly selected surplus job-number Genes in the A-string of P to (0, 0) one by one.

The mutation of alternative machines changes the machine assigned to one operation. One element of the A-string is randomly chosen from the selected individual and mutated by altering its machine number (the second number of the Gene) to another of the alternative machines in the machine set at random.

One example of mutation is shown in Fig. 9.3. Above the first broken line, it
Fig. 9.3 Mutation operator
is an example of two-point swapping mutation: the two selected points, (4, 1) and (3, 2), are marked, and O is generated by interchanging them. Below the first broken line is an example of the mutation of changing one job's alternative process plan. The selected element, the fourth element (for job 4) in the B-string of P, marked in the figure, is changed from 3 to 1. Because the first alternative process plan of job 4 has more operations than the third one, one (0, 0) selected randomly in the A-string of P is changed to (4, 2) in the A-string of O. Below the second broken line is an example of the mutation of alternative machines: the selected point (1, 1) in the A-string of P is marked, and O is generated by changing (1, 1) to (1, 2). This means that the machine selected for the third operation of job 1 is changed from the first to the second machine in S_113 (see Sect. 9.4.1).
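The three operators can be sketched as follows (our own minimal versions, not the book's code): the A-string padding repair of Step 4 of the plan-change mutation is omitted, and the per-position alternative counts `n_plans` and `n_machines` are assumed inputs.

```python
import random

def two_point_swap(a_string):
    # Operator 1: swap two randomly chosen Genes of the A-string.
    i, j = random.sample(range(len(a_string)), 2)
    a_string = list(a_string)
    a_string[i], a_string[j] = a_string[j], a_string[i]
    return a_string

def change_plan(b_string, n_plans):
    # Operator 2: re-pick one job's alternative process plan
    # (the A-string padding repair of Step 4 is omitted in this sketch).
    b_string = list(b_string)
    i = random.randrange(len(b_string))
    b_string[i] = random.choice(
        [p for p in range(1, n_plans[i] + 1) if p != b_string[i]])
    return b_string

def mutate_machine(a_string, n_machines):
    # Operator 3: re-pick the alternative machine of one operation Gene.
    a_string = list(a_string)
    i = random.randrange(len(a_string))
    job, g = a_string[i]
    if job:  # skip (0, 0) filler Genes
        others = [m for m in range(1, n_machines[i] + 1) if m != g]
        if others:
            a_string[i] = (job, random.choice(others))
    return a_string
```

Note that all three operators return a new list rather than mutating the parent, so the parent can survive into the next generation unchanged.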
9.4 Experimental Studies and Discussions

9.4.1 Example Problems and Experimental Results

Five experiments are presented to illustrate the effectiveness and performance of the proposed mathematical model and EA-based approach for IPPS. The first and second examples are constructed from various jobs with several alternative process plans; the other examples are adopted from the literature. In this chapter, makespan is used as the objective. After formulating the mixed integer programming model to represent the examples mathematically, solutions are obtained by the proposed EA-based method. The integration model is compared with the no-integration model, in which there is only one process plan for each job (in this chapter, each job selects its first alternative process plan as its process plan in the no-integration model).

The proposed EA-based approach was coded in C++ and implemented on a computer with a 2.0 GHz Core 2 Duo CPU. For experimental comparisons under the same conditions, the parameters of the proposed evolutionary search approach were set as follows: population size pop_size = 200, total number of generations max_gen = 50, tournament size b = 2, probability of the reproduction operator pr = 0.1, probability of the crossover operator pc = 0.8, and probability of the mutation operator pm = 0.1. Altogether 20 independent runs were executed to reduce the effect of randomness in the search, and the solution that appeared most often was chosen. The algorithm terminated when the number of generations reached max_gen.
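The reporting protocol can be summarized schematically (the makespan values below are hypothetical placeholders, not experimental data, and `PARAMS` simply restates the settings above):

```python
from collections import Counter

# Parameter settings used in the experiments above.
PARAMS = dict(pop_size=200, max_gen=50, tournament_size=2,
              p_reproduction=0.1, p_crossover=0.8, p_mutation=0.1)

# 20 independent runs; the value appearing most often is reported.
hypothetical_runs = [92] * 17 + [102] * 3   # placeholder makespans
reported = Counter(hypothetical_runs).most_common(1)[0][0]
print(reported)  # 92
```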
9.4.1.1 Experiment 1
Experiment 1 involves six jobs and five machines. The alternative process plans and processing times are given in Table 9.1. Table 9.2 shows the experimental results, and Fig. 9.4 illustrates the Gantt chart of this experiment. The results in Table 9.2 show that the process plans selected by the integration approach are not all the first plans of Table 9.1; for example, the selected process plan of job 1 is its third plan. Comparing the integration model with the no-integration model, the selected process plans of the jobs differ, and the makespan of the no-integration schedule is worse than that of the integration model.

Table 9.1 Alternative process plans of 6 jobs

Job 1: 1(10)–3(15)–2(10)–5(20)–4(10)
       1(10)–3(22)–4(21)–5(12)
       2(10)–3(20)–5(20)–4(15)
Job 2: 1(10)–3(18)–4(12)–5(15)
       3(10)–2(16)–4(8)–5(8)
       2(10)–4(13)–3(18)–5(14)
Job 3: 3(12)–1(16)–5(10)–4(12)
       1(10)–2(8)–3(14)–4(6)–5(10)
       2(6)–1(12)–3(12)–4(8)–5(10)
Job 4: 1(6)–3(12)–2(8)–5(12)–4(10)
       3(10)–1(8)–2(9)–4(12)–5(10)
       2(8)–3(12)–1(6)–5(14)–4(8)
Job 5: 3(8)–2(12)–1(14)–4(13)–5(8)
       1(10)–2(15)–4(9)–5(10)
       4(6)–3(10)–2(8)–1(10)–5(8)
Job 6: 5(6)–2(16)–3(10)–4(10)
       1(9)–2(7)–4(8)–5(8)–3(9)
       5(6)–1(10)–2(8)–3(8)–4(9)

The numbers outside parentheses are machine numbers; the numbers in parentheses are processing times.
Table 9.2 Experimental results of experiment 1

Job           Integration                   No integration
1             2(10)–3(20)–5(20)–4(15)       1(10)–3(15)–2(10)–5(20)–4(10)
2             2(10)–4(13)–3(18)–5(14)       1(10)–3(18)–4(12)–5(15)
3             3(12)–1(16)–5(10)–4(12)       3(12)–1(16)–5(10)–4(12)
4             3(10)–1(8)–2(9)–4(12)–5(10)   1(6)–3(12)–2(8)–5(12)–4(10)
5             4(6)–3(10)–2(8)–1(10)–5(8)    1(10)–2(15)–4(9)–5(10)
6             5(6)–1(10)–2(8)–3(8)–4(9)     5(6)–2(16)–3(10)–4(10)
Makespan      92                            102
CPU time (s)  3.23                          3.31
Fig. 9.4 Gantt chart of experiment 1 (J1.1 denotes the first operation of job 1; makespan = 92)
9.4.1.2 Experiment 2
Experiment 2 involves six jobs and five machines. The alternative process plans and processing times are given in Table 9.3. In this experiment, each job selects its first alternative process plan as its process plan in the no-integration model. Table 9.4 shows the experimental results and Fig. 9.5 illustrates the Gantt chart of this experiment. Comparing the integration model with the no-integration model, the selected process plans of the jobs differ, and the makespan of the no-integration model is worse than that of the integration model.
9.4.1.3 Experiment 3
Experiment 3 is adopted from Jain [6]. In this experiment, six problems are constructed with 8–18 jobs and four machines. Table 9.5 shows the experimental results, and Fig. 9.6 illustrates the Gantt chart of the first problem in experiment 3. The results of experiment 3 show that the no-integration model performs worse than the integration model, and that the integration model obtains better scheduling plans. This implies that research on IPPS is necessary.
9.4.1.4 Experiment 4
Problem 1. There are two problems in experiment 4. The first problem is adopted from Nabil [18]. In this problem, there are four jobs and six different machines in the system; the alternative machines and processing times are given in Table 9.6. Using the proposed approach, the best value obtained is 17, compared with the value of 18 obtained by the heuristic reported by Nabil [18]. Figure 9.7 illustrates the Gantt chart of this problem.
Table 9.3 Job-related information and alternative operation sequences (each alternative resource is given as (operation no., machine no., processing time))

Job 1, operations O1–O2:
  O1: (O1, 1, 5), (O1, 2, 5), (O1, 4, 6)
  O2: (O2, 2, 5), (O2, 5, 6)
  Alternative operation sequences: O1–O2

Job 2, operations O3–O4:
  O3: (O3, 3, 6), (O3, 5, 6)
  O4: (O4, 2, 5), (O4, 4, 5), (O4, 5, 6)
  Alternative operation sequences: O3–O4

Job 3, operations O5–O7:
  O5: (O5, 1, 4), (O5, 3, 3), (O5, 4, 5)
  O6: (O6, 1, 8), (O6, 3, 7), (O6, 5, 8)
  O7: (O7, 2, 5)
  Alternative operation sequences: O5–O6–O7; O6–O5–O7

Job 4, operations O8–O10:
  O8: (O8, 1, 4), (O8, 3, 4)
  O9: (O9, 2, 8), (O9, 3, 7)
  O10: (O10, 4, 5), (O10, 5, 5)
  Alternative operation sequences: O8–O9–O10; O8–O10–O9

Job 5, operations O11–O14:
  O11: (O11, 3, 4), (O11, 5, 3)
  O12: (O12, 1, 7), (O12, 4, 7)
  O13: (O13, 1, 8), (O13, 2, 7), (O13, 5, 8)
  O14: (O14, 1, 5), (O14, 4, 6)
  Alternative operation sequences: O11–O12–O13–O14; O12–O11–O13–O14; O11–O13–O12–O14

Job 6, operations O15–O18:
  O15: (O15, 1, 8), (O15, 2, 7), (O15, 4, 8)
  O16: (O16, 3, 5), (O16, 4, 5)
  O17: (O17, 2, 8)
  O18: (O18, 1, 7), (O18, 3, 8), (O18, 5, 8)
  Alternative operation sequences: O15–O16–O17–O18; O16–O15–O17–O18
Problem 2. In the second problem, the benchmark problem by Nabil is extended to confirm the effectiveness of the proposed method for large-scale and more complex problems. Eight problem sizes containing different numbers of jobs and operations are considered. Table 9.7 shows the experimental results, and Fig. 9.8 illustrates the Gantt chart of size 1 of this problem. These results reveal that the proposed method can solve complex and large-scale problems effectively.
9.4.1.5 Experiment 5
Problem 1. There are two problems in experiment 5. The first problem is adopted from a benchmark problem proposed by Sundaram and Fu [22], which is constructed with five jobs and five machines. Each job undergoes four different operations in a specified order. The alternative machines for processing the parts are given in Table 9.8, along with the respective processing times.
9 Mathematical Modeling and Evolutionary Algorithm …

Table 9.4 Experimental results of experiment 2

| Job | Alternative process plans: Integration | No integration |
|---|---|---|
| 1 | O1M4–O2M5 | O1M1–O2M2 |
| 2 | O3M3–O4M4 | O3M3–O4M4 |
| 3 | O6M3–O5M1–O7M2 | O5M3–O6M5–O7M2 |
| 4 | O8M1–O9M3–O10M5 | O8M1–O9M2–O10M4 |
| 5 | O11M5–O13M5–O12M1–O14M4 | O11M5–O12M4–O13M5–O14M4 |
| 6 | O15M2–O16M4–O17M2–O18M1 | O15M1–O16M3–O17M2–O18M3 |
| Makespan | 27 | 33 |
| CPU time (s) | 3.20 | 3.20 |

O1 means operation 1; M1 means machine 1
Fig. 9.5 Gantt chart of experiment 2 (makespan = 27)
Using the simulated annealing [19], the genetic algorithm [17], and the proposed approach (see Fig. 9.9), the best value obtained by all three approaches is 33, compared with the value of 38 obtained using the heuristic algorithm developed in [22]. The advantage of the GA and the proposed approach compared with the SA approach is the availability of multiple solutions, which introduces more flexibility
Table 9.5 Experimental results of experiment 3

| Problem | Number of jobs | Makespan (Integration) | Makespan (No integration) | CPU time (s) (Integration) | CPU time (s) (No integration) |
|---|---|---|---|---|---|
| 1 | 8 | 520 | 615 | 3.28 | 3.42 |
| 2 | 10 | 621 | 831 | 3.38 | 3.72 |
| 3 | 12 | 724 | 934 | 3.67 | 3.91 |
| 4 | 14 | 809 | 1004 | 3.69 | 4.14 |
| 5 | 16 | 921 | 1189 | 3.73 | 4.39 |
| 6 | 18 | 994 | 1249 | 4.09 | 4.69 |
Fig. 9.6 Gantt chart of the first problem in experiment 3 (makespan = 520)

Table 9.6 Experiment data of problem 1 in experiment 4 (entries are processing times on alternative machines 1–6; "/" means the machine cannot process the operation)

| Operation | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Job 1: O11 | 2 | 3 | 4 | / | / | / |
| O12 | / | 3 | / | 2 | 4 | / |
| O13 | 1 | 4 | 5 | / | / | / |
| Job 2: O21 | 3 | / | 5 | / | 2 | / |
| O22 | 4 | 3 | / | / | 6 | / |
| O23 | / | / | 4 | / | 7 | 11 |
| Job 3: O31 | 5 | 6 | / | / | / | / |
| O32 | / | 4 | / | 3 | 5 | / |
| O33 | / | / | 13 | / | 9 | 12 |
| Job 4: O41 | 9 | / | 7 | 9 | / | / |
| O42 | / | 6 | / | 4 | / | 5 |
| O43 | 1 | / | 3 | / | / | 3 |
Fig. 9.7 Gantt chart of Problem 1 in experiment 4 (makespan = 17)
Table 9.7 Experimental results of problem 2 in experiment 4

| Size | Jobs | Job 1 | Job 2 | Job 3 | Job 4 | Operations | Makespan | CPU time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 | 8 | 2 | 2 | 2 | 2 | 24 | 20 | 1.27 |
| 2 | 32 | 10 | 8 | 8 | 6 | 96 | 67 | 9.34 |
| 3 | 64 | 24 | 10 | 20 | 10 | 192 | 136 | 69.92 |
| 4 | 100 | 25 | 25 | 25 | 25 | 300 | 219 | 249.83 |
| 5 | 150 | 30 | 30 | 40 | 50 | 450 | 358 | 1091.44 |
| 6 | 200 | 50 | 50 | 60 | 40 | 600 | 453 | 2148.69 |
| 7 | 300 | 80 | 80 | 70 | 70 | 900 | 658 | 7559.00 |
| 8 | 400 | 100 | 100 | 100 | 100 | 1200 | 901 | 15328.15 |

(Job 1–Job 4 give the production amount in each job.)
into the IPPS system. In the SA approach, only one solution is produced for every execution. Problem 2 In the second problem, the benchmark problem by Sundaram and Fu is extended to confirm the effectiveness of the proposed method for large-scale and more complex problems. With this extension, the search space becomes much larger than that of the original benchmark. Table 9.9 shows the setting of the production demands, in which eight terms of production are used, together with the experimental results. Figure 9.10 illustrates the Gantt chart of size 1 of Problem 2 in experiment 5.
Fig. 9.8 Gantt chart of size 1 of Problem 2 in experiment 4 (makespan = 20)

Table 9.8 Sundaram and Fu data (the numbers in parentheses are the machine numbers)

| Job | Operation 1 | Operation 2 | Operation 3 | Operation 4 |
|---|---|---|---|---|
| 1 | 5(M1), 3(M2) | 7(M2) | 6(M3) | 3(M4), 4(M5) |
| 2 | 7(M1) | 4(M2), 6(M3) | 7(M3), 7(M4) | 10(M5) |
| 3 | 4(M1), 5(M2), 8(M3) | 5(M4) | 6(M4), 5(M5) | 4(M5) |
| 4 | 2(M2), 6(M3) | 8(M3) | 3(M3), 8(M4) | 7(M4), 4(M5) |
| 5 | 3(M1), 5(M3) | 7(M3) | 9(M4), 6(M5) | 3(M5) |
Fig. 9.9 Gantt chart of Problem 1 in experiment 5 (makespan = 33)
Table 9.9 Setting of production demand changes and experimental results

| Size | Jobs | Job 1 | Job 2 | Job 3 | Job 4 | Job 5 | Operations | Makespan | CPU time (s) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 50 | 10 | 10 | 10 | 10 | 10 | 200 | 252 | 75.11 |
| 2 | 100 | 20 | 20 | 20 | 20 | 20 | 400 | 500 | 592.47 |
| 3 | 100 | 25 | 25 | 25 | 25 | 0 | 400 | 467 | 437.78 |
| 4 | 100 | 15 | 15 | 20 | 25 | 25 | 400 | 521 | 523.27 |
| 5 | 150 | 30 | 25 | 30 | 35 | 30 | 600 | 757 | 1149.67 |
| 6 | 200 | 40 | 40 | 30 | 50 | 40 | 800 | 1021 | 4045.25 |
| 7 | 300 | 60 | 70 | 50 | 50 | 70 | 1200 | 1537 | 15404.79 |
| 8 | 400 | 80 | 100 | 80 | 60 | 80 | 1600 | 2028 | 42705.31 |

(Job 1–Job 5 give the production amount in each job.)
Fig. 9.10 Gantt chart of size 1 of problem 2 in experiment 5 (makespan = 252)
These results reveal that the proposed method can solve complex and large-scale problems effectively. We can also conclude that the proposed method is effective in generating near-optimal results under various order environments with precedence constraints, and that it can solve the IPPS problem effectively.
9.4.2 Discussions

Overall, the experimental results indicate that the proposed approach is an effective and practical approach for IPPS, and that the integrated model of process planning and scheduling yields better scheduling plans than the non-integrated model. The reasons are as follows. First, the proposed approach considers all the conditions of process planning and scheduling synthetically. Second, in some experiments, the proposed approach attains better results than other previously developed approaches, which means that it has more possibilities of finding the best solutions of IPPS problems.
9.5 Conclusion

Considering the complementarity of process planning and scheduling, this research has developed a mathematical model with an evolutionary algorithm-based approach to facilitate the integration and optimization of these two systems. Process planning and scheduling functions are carried out simultaneously. To improve the optimization performance of the proposed approach, efficient genetic representation and operator schemes have been developed. To verify the feasibility of the proposed approach, a number of experimental studies have been carried out to compare this approach with other previously developed approaches. The experimental results show that the proposed approach is very effective for the IPPS problem
and achieves better overall optimization results. With the model developed in this work, it would be possible to increase the efficiency of manufacturing systems. One direction of future work is to apply the proposed method to practical manufacturing systems; its increased use will likely enhance the performance of future manufacturing systems.
References

1. Benjaafar S, Ramakrishnan R (1996) Modeling, measurement and evaluation of sequencing flexibility in manufacturing systems. Int J Prod Res 34:1195–1220
2. Bierwirth C (1995) A generalized permutation approach to job shop scheduling with genetic algorithms. OR Spektrum 17:87–92
3. Garey EL, Johnson DS, Sethi R (1976) The complexity of flow-shop and job-shop scheduling. Math Oper Res 1:117–129
4. Guo YW, Li WD, Mileham AR, Owen GW (2009) Applications of particle swarm optimization in integrated process planning and scheduling. Robot Comput Integr Manuf 25:280–288
5. Hutchinson GK, Flughoeft KAP (1994) Flexible process plans: their value in flexible automation systems. Int J Prod Res 32(3):707–719
6. Jain A, Jain PK, Singh IP (2006) An integrated scheme for process planning and scheduling in FMS. Int J Adv Manuf Technol 30:1111–1118
7. Kim KH, Egbelu PJ (1998) A mathematical model for job shop scheduling with multiple process plan consideration per job. Prod Plann Control 9(3):250–259
8. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
9. Kuhnle H, Braun HJ, Buhring J (1994) Integration of CAPP and PPC—interfusion manufacturing management. Integr Manuf Syst 5(2):21–27
10. Kumar M, Rajotia S (2003) Integration of scheduling with computer aided process planning. J Mater Process Technol 138:297–300
11. Langdon WB, Qureshi A (1995) Genetic programming—computers using "Natural Selection" to generate programs. Technical report RN/95/76, Gower Street, London WC1E 6BT, UK
12. Lee H, Kim SS (2001) Integration of process planning and scheduling using simulation based genetic algorithms. Int J Adv Manuf Technol 18:586–590
13. Li WD, Gao L, Li XY, Guo Y (2008) Game theory-based cooperation of process planning and scheduling. In: Proceedings of CSCWD 2008, Xi'an, China, pp 841–845
14. Li XY, Shao XY, Gao L (2008) Optimization of flexible process planning by genetic programming. Int J Adv Manuf Technol 38:143–153
15. Lin YJ, Solberg JJ (1991) Effectiveness of flexible routing control. Int J Flex Manuf Syst 3:189–211
16. Michael P (2005) Scheduling: theory, algorithms, and systems, 2nd edn. Pearson Education Asia Limited and Tsinghua University Press
17. Morad N, Zalzala A (1999) Genetic algorithms in integrated process planning and scheduling. J Intell Manuf 10:169–179
18. Nabil N, Elsayed EA (1990) Job shop scheduling with alternative machines. Int J Prod Res 28(9):1595–1609
19. Palmer GJ (1996) A simulated annealing approach to integrated production scheduling. J Intell Manuf 7(3):163–176
20. Saygin C, Kilic SE (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
21. Sugimura N, Hino R, Moriwaki T (2001) Integrated process planning and scheduling in holonic manufacturing systems. In: Proceedings of the IEEE international symposium on assembly and task planning, vol 4, Soft Research Park, Fukuoka, Japan, pp 250–254
22. Sundaram RM, Fu SS (1988) Process planning and scheduling. Comput Ind Eng 15(1–4):296–307
23. Usher JM, Fernandes KJ (1996) Dynamic process planning—the static phase. J Mater Process Technol 61:53–58
Chapter 10
An Agent-Based Approach for IPPS
10.1 Literature Survey

In the early studies of CIMS, it was identified that IPPS is very important for the development of CIMS [1]. The preliminary idea of IPPS was first introduced by Chryssolouris et al. [2, 3]. Beckendorff [4] used alternative process plans to improve the flexibility of manufacturing systems. Khoshnevis and Chen [5] introduced the concept of dynamic feedback into IPPS. The integration model proposed by Zhang and Larsen [6, 7] extended the concepts of alternative process plans and dynamic feedback, and gave a formal expression of the hierarchical approach.

In recent years, the agent-based approach to IPPS has captured the interest of a number of researchers. Zhang and Xie [8] reviewed agent technology for collaborative process planning, focusing on how agent technology can be further developed in support of collaborative process planning, as well as on future research issues and directions in process planning. Wang and Shen [9] provided a literature review on IPPS, particularly on agent-based approaches to the problem, and discussed the advantages of the agent-based approach for scheduling. Shen and Wang [10] reviewed the research on manufacturing process planning and scheduling, as well as their integration. Gu et al. [11] proposed a multi-agent system in which the process routes and schedules of a part are established through contract net bids. IDCPPS [12] is an integrated, distributed, and cooperative process planning system. Its process planning tasks are separated into three levels: initial planning, decision-making, and detail planning. The results of these three steps are general process plans, a ranked list of near-optimal alternative plans, and the final detailed linear process plans, respectively; integration with scheduling is considered at each stage of process planning.
Wu and Fuh [13] presented a computerized model that can integrate the manufacturing
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_10
functions and resolve some of the critical problems in distributed virtual manufacturing. This integration model is realized through a multi-agent approach that provides a practical path for software integration in a distributed environment. Lim and Zhang [14, 15] introduced a multi-agent-based framework for the IPPS problem. This framework can also be used to optimize the utilization of manufacturing resources dynamically, and it provides a platform on which alternative configurations of manufacturing systems can be assessed. Wang and Shen [16] proposed a new methodology of distributed process planning, focusing on the architecture of the new approach, which uses multi-agent negotiation and cooperation, and on supporting technologies such as machining feature-based planning and function block-based control. Wong and Leung [17, 18] developed an online hybrid agent-based negotiation multi-agent system (MAS) to integrate process planning with scheduling/rescheduling. By introducing supervisory control into the decentralized negotiations, this approach provides solutions with better global performance. Shukla and Tiwari [19] presented a bidding-based multi-agent system for solving IPPS, whose architecture consists of autonomous agents capable of communicating (bidding) with each other and making decisions based on their knowledge. Fuji and Inoue [20] proposed a new method for IPPS: a multi-agent-learning-based integration method devised to resolve the conflict between the optimality of the process plan and that of the production schedule. In this method, each machine makes decisions about process planning and scheduling simultaneously and is modeled as a learning agent using evolutionary artificial neural networks, so that proper decisions emerge from interactions with the other machines.
Nejad et al. [21] proposed an agent-based architecture of an IPPS system for multiple jobs in flexible manufacturing systems. In the literature on agent-based manufacturing applications, much research has applied simple algorithms, such as dispatching rules, that are suitable for real-time decision-making [10]. These methods are simple and applicable, but they do not guarantee effectiveness for complex problems in manufacturing systems. As efficiency becomes more important in agent-based manufacturing, recent research tries to combine the agent-based approach with other techniques, such as genetic algorithms, neural networks, and mathematical modeling methods [10]. In this research, an agent-based approach with an optimization agent and a mathematical model is introduced to improve the generated process plans and scheduling plans.
10.2 Problem Formulation

The mathematical model of IPPS is defined here. The most popular criteria for scheduling include makespan, job tardiness, and the balanced level of machine utilization, while manufacturing cost is the major criterion for process planning [22].
In this chapter, scheduling is assumed to be job shop scheduling, and the mathematical model of IPPS is based on the mixed-integer programming model of the job shop scheduling problem. In order to solve this problem, the following assumptions are made:

(1) Jobs are independent. Job preemption is not allowed and each machine can handle only one job at a time.
(2) The different operations of one job cannot be processed simultaneously.
(3) All jobs and machines are available at time zero simultaneously.
(4) After a job is processed on a machine, it is immediately transported to the next machine in its process plan, and the transportation time is assumed to be negligible.
(5) Setup time for the operations on the machines is independent of the operation sequence and is included in the processing times.

Based on these assumptions, the mathematical model of the integration of process planning and scheduling is described as follows. The notations used to explain the model are:

N: the total number of jobs;
M: the total number of machines;
Gi: the total number of alternative process plans of job i;
oijl: the jth operation in the lth alternative process plan of job i;
Pil: the number of operations in the lth alternative process plan of job i;
k: the alternative machine corresponding to oijl;
tijlk: the processing time of operation oijl on machine k, tijlk > 0;
cijlk: the earliest completion time of operation oijl on machine k;
ci: the completion time of job i;
Li: the lateness of job i;
Ti: the tardiness of job i;
Ei: the earliness of job i;
vijlk: the processing cost of operation oijl on machine k;
A: a very large positive number;
Ui = 1 if ci > di (the unit penalty of job i), 0 otherwise;
Xil = 1 if the lth flexible process plan of job i is selected, 0 otherwise;
Yijlpqsk = 1 if operation oijl precedes operation opqs on machine k, 0 otherwise;
Zijlk = 1 if machine k is selected for oijl, 0 otherwise.

Objectives:
(1) Minimizing the makespan, which is the completion time of the last operation of all jobs:

Min makespan = Max cijlk,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi], k ∈ [1, M]   (10.1)
Subject to:

(1) For the first operation in the alternative process plan l of job i:

ci1lk + A(1 − Xil) ≥ ti1lk,  i ∈ [1, N], l ∈ [1, Gi], k ∈ [1, M]   (10.2)
(2) For the last operation in the alternative process plan l of job i:

ciPil lk − A(1 − Xil) ≤ makespan,  i ∈ [1, N], l ∈ [1, Gi], k ∈ [1, M]   (10.3)

(3) The different operations of one job cannot be processed simultaneously:

cijlk − ci(j−1)lk1 + A(1 − Xil) ≥ tijlk,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi], k, k1 ∈ [1, M]   (10.4)
(4) Each machine can handle only one job at a time:

cijlk − cpqsk + A(1 − Xil) + A(1 − Xps) + AYijlpqsk ≥ tijlk,
  i, p ∈ [1, N], j ∈ [1, Pil], q ∈ [1, Pps], l ∈ [1, Gi], s ∈ [1, Gp], k ∈ [1, M]   (10.5)
(5) Only one alternative process plan can be selected for job i:

Σ_l Xil = 1,  l ∈ [1, Gi]   (10.6)

(6) Only one machine should be selected for each operation:

Σ_{k=1}^{M} Zijlk = 1,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi]   (10.7)
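As a rough illustration of what constraints (10.4) and (10.5) demand of a solution, the sketch below checks a decoded schedule for job-internal precedence and one-job-per-machine, and evaluates the makespan objective (10.1). The tuple layout (job, op_index, machine, start, duration) is an invented encoding, not the chapter's.

```python
# Hedged illustration of constraints (10.4)-(10.5): verify that a decoded
# schedule respects job-internal precedence and one-job-per-machine.
# The tuple layout (job, op_index, machine, start, duration) is invented here.

def is_feasible(schedule):
    by_job, by_machine = {}, {}
    for job, op, mach, start, dur in schedule:
        by_job.setdefault(job, []).append((op, start, dur))
        by_machine.setdefault(mach, []).append((start, dur))
    # Constraint (10.4): operations of one job run one after another,
    # in op_index order.
    for ops in by_job.values():
        ops.sort()
        for (_, s1, d1), (_, s2, _) in zip(ops, ops[1:]):
            if s2 < s1 + d1:
                return False
    # Constraint (10.5): each machine handles one operation at a time.
    for slots in by_machine.values():
        slots.sort()
        for (s1, d1), (s2, _) in zip(slots, slots[1:]):
            if s2 < s1 + d1:
                return False
    return True

def makespan(schedule):
    # Objective (10.1): latest completion time over all operations.
    return max(start + dur for *_, start, dur in schedule)
```

A full MIP solver would enforce these constraints symbolically via the big-A terms; the checker only validates a candidate solution after decoding.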
Fig. 10.1 Developed MAS architecture (job agents 1…n and machine agents 1…m negotiating through the optimization agent, supported by the resource and knowledge databases)
10.3 Proposed Agent-Based Approach for IPPS

10.3.1 MAS Architecture

The architecture of the MAS developed in this study, and the relationships between the agents and their subagents, are illustrated in Fig. 10.1. In this framework, there are three types of agents and several databases. The job agents and machine agents represent jobs and machines, respectively. The optimization agent optimizes the alternative process plans and scheduling plans. Considering the scheduling requirements and the availability of manufacturing resources, these agents negotiate with each other to establish the actual process plan of every job and the scheduling plans for all jobs. Detailed descriptions of the three types of agents are provided in the next section.
10.3.2 Agents Description

10.3.2.1 Job Agent
Job agents represent the jobs to be manufactured on the shop floor. Each agent contains the detailed information of a particular job, including the job ID, job type, quantity, due date, quality requirements, CAD drawing, and tolerance and surface finish requirements. This agent also tracks the job status. In this study, the following statuses are considered for the job agents: (1) idle: the job agent is idle and waiting for the next manufacturing operation; and (2) manufacturing operation: the
job agent is under a manufacturing process on a machine. Based on the assumption in Sect. 10.3, when the job is under a manufacturing process on a machine, it cannot be processed by other machines. The function of this agent is to provide the jobs' information to the MAS. The job agents use the rules from the knowledge database and negotiate with the machine agents to generate all the alternative process plans of each job, so they contain the information of the alternative process plans.

There are three types of flexibility considered in process plans [23, 24]: operation flexibility, sequencing flexibility, and processing flexibility [23]. Operation flexibility [25], also called routing flexibility, relates to the possibility of performing one operation on alternative machines, with possibly distinct processing times and costs. Sequencing flexibility is determined by the possibility of interchanging the sequence of the required operations. Processing flexibility is determined by the possibility of processing the same manufacturing feature with alternative operations or sequences of operations. Better performance in some criteria (e.g., production time) can be obtained by considering these flexibilities [25].

There are many methods for describing these types of flexibility [26], such as Petri nets, and/or graphs, and networks. In this research, the network representation proposed by Kim [25] and Sormaz [27] has been adopted. There are three node types in the network: starting node, intermediate node, and ending node [25]. The starting node and the ending node, which are dummy nodes, indicate the start and the end of the manufacturing process of a job. An intermediate node represents an operation; it contains the alternative machines that can perform the operation and the corresponding processing times. The arrows connecting the nodes represent the precedence between them.
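The network representation just described can be encoded as a small data structure. In this sketch, intermediate nodes carry alternative machines with processing times, and an OR-connector offers mutually exclusive operation paths; the operation numbers, machines, and times are invented, not copied from Fig. 10.2.

```python
# Hedged sketch of an operation network with alternative machines and one
# OR-connector. All identifiers and times here are illustrative inventions.
NETWORK = {
    "op1": {"machines": {"M1": 18, "M2": 22}},
    "op2": {"machines": {"M3": 12, "M5": 15}},
    "op3": {"machines": {"M1": 11, "M2": 10}},
    "op4": {"machines": {"M4": 31, "M5": 34}},
}
# OR1: exactly one of these paths is traversed; they merge again at a JOIN.
OR_PATHS = [["op2", "op3"], ["op4"]]

def path_time(path):
    """Minimum total time of a path, picking the fastest machine per node."""
    return sum(min(NETWORK[op]["machines"].values()) for op in path)

# Processing flexibility: choose the faster OR-link path after op1.
best_or_path = min(OR_PATHS, key=path_time)
```

Here the first OR-link path costs 12 + 10 = 22 time units against 31 for the second, so the connector resolves to the two-operation path.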
OR relationships are used to describe processing flexibility, in which the same manufacturing feature can be processed by different process procedures. If the links following a node are connected by an OR-connector, only one of them needs to be traversed (the links connected by the OR-connector are called OR-links). An OR-link path is an operation path that begins at an OR-link and ends when it merges with the other paths; its end is denoted by a JOIN-connector. All links that are not connected by OR-connectors must be visited [25].

Figure 10.2 shows the alternative process plan networks of two jobs (jobs 1 and 2) [18]. In the network of Fig. 10.2(2), paths {5, 6} and {7, 8} are two OR-link paths. Links not connected by OR-connectors, such as {6, 7} and {8} in Fig. 10.2(1), must all be visited, but they have no precedence constraints among themselves, which means that both {6, 7, 8} and {8, 6, 7} are feasible.

In this research, the objective of the process planning problem is the minimization of the production time of each job, and an adjusted fitness is used as the objective. To calculate the fitness [28], the following notations are used:

N: the total number of jobs;
Gi: the total number of flexible process plans of the ith job;
S: the size of the population;
M: the maximal number of generations;
Fig. 10.2 Alternative process plan network: (1) process plans of job 1; (2) process plans of job 2 (starting node S and ending node E; intermediate nodes list operation numbers with alternative machines and processing times; machines comprise lathes 1–2, milling machines 3–4, drilling machine 5, and grinding machine 6; OR- and JOIN-connectors mark the alternative paths)
t: 1, 2, 3, …, M generations;
oijl: the jth operation in the lth flexible process plan of the ith job;
Pil: the number of operations in the lth flexible process plan of the ith job;
k: the alternative machine corresponding to oijl;
TW(i, j, l, k): the working time of operation oijl on the kth alternative machine;
TS(i, j, l, k): the starting time of operation oijl on the kth alternative machine;
TT(i, l, (j, k1), (j + 1, k2)): the transportation time between the k1th alternative machine of oijl and the k2th alternative machine of oi(j+1)l;
TP(i, t): the production time of the ith job in the tth generation.

Then the production time is calculated as
TP(i, t) = Σ_{j=1}^{Pil} TW(i, j, l, k) + Σ_{j=1}^{Pil−1} TT(i, l, (j, k1), (j + 1, k2)),
  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi]   (10.8)

Because each machine can handle only one job at a time, the constraint is

TS(i, j2, l, k) − TS(i, j1, l, k) > TW(i, j1, l, k),  i ∈ [1, N], j1, j2 ∈ [1, Pil], l ∈ [1, Gi]   (10.9)

Because different operations of one job cannot be processed simultaneously, the constraint between consecutive operations of one job is

TS(i, (j + 1), l, k2) − TS(i, j, l, k1) > TW(i, j, l, k1),  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi]   (10.10)

The objective function is

max f(i, t) = 1/TP(i, t)   (10.11)
The fitness function is calculated for each individual in the population as described in Eq. (10.11), and the two constraints are Eqs. (10.9) and (10.10).
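Equations (10.8) and (10.11) reduce to simple sums and a reciprocal once a plan is fixed. The numeric sketch below uses invented working and transportation times: TW holds the Pil working times of one plan and TT the Pil − 1 transportation times between consecutive machines.

```python
# Hedged numeric sketch of Eqs. (10.8) and (10.11); the times are invented.

def production_time(tw, tt):
    # Eq. (10.8): TP = sum of working times + sum of transportation times.
    assert len(tt) == len(tw) - 1  # one transport per consecutive pair
    return sum(tw) + sum(tt)

def fitness(tw, tt):
    # Eq. (10.11): max f(i, t) = 1 / TP(i, t)
    return 1.0 / production_time(tw, tt)
```

For a three-operation plan with working times 5, 3, 4 and transportation times 1, 2, the production time is 15 and the fitness is 1/15, so shorter plans receive larger fitness values.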
10.3.2.2 Machine Agent

The machine agents represent the machines. They read their information from the resource database. Each agent contains the information of a particular machine, including the machine ID, the manufacturing features this machine can process, the processing times, and the machine status. After the job agents are created, the machine agents negotiate with the job agents to determine the jobs' operations to be processed on each machine; the processing times of these operations are determined at the same time. In this research, the following statuses are considered for the machine agents: (1) idle: the machine agent is idle and waiting for the next machining operation; (2) manufacturing operation: the machine is processing one job; and (3) breakdown: the machine has broken down and cannot process any jobs. Based on the assumption in Sect. 10.3, when the machine is processing one job, it cannot process other jobs. Each machine agent negotiates with the optimization agent and the job agents to get the information that includes the IDs of the operations to be processed on it, the processing sequence of these operations, and the starting and ending time of each operation. A scheduling plan is then determined. A scheduling plan determines when and how many jobs have to be manufactured within a given period of time. Therefore, this plan has to be carried out according to the
current shop floor status. If there are any changes in the shop floor and the determined scheduling plan cannot be carried out, the machine agents need to negotiate with other agents (including job agents and optimization agents) to trigger a rescheduling process.
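The machine-agent statuses and the rescheduling trigger described above can be sketched as follows; the class and method names are invented for illustration, and the real system performs the notification through agent negotiation rather than a callback.

```python
# Hedged sketch of machine-agent statuses (idle, manufacturing, breakdown)
# and the rescheduling trigger on breakdown; names are invented.
from enum import Enum

class Status(Enum):
    IDLE = "idle"
    MANUFACTURING = "manufacturing operation"
    BREAKDOWN = "breakdown"

class MachineAgent:
    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.status = Status.IDLE
        self.finished = []

    def start_operation(self, operation_id):
        # A machine processes one job at a time (assumption in Sect. 10.3).
        if self.status is not Status.IDLE:
            raise RuntimeError("machine is busy or broken")
        self.status = Status.MANUFACTURING
        self.current = operation_id

    def finish_operation(self):
        self.finished.append(self.current)
        self.status = Status.IDLE

    def break_down(self, trigger_rescheduling):
        # A breakdown invalidates the current scheduling plan; the agent
        # notifies the other agents so rescheduling can start.
        self.status = Status.BREAKDOWN
        trigger_rescheduling(self.machine_id)
```

The guard in `start_operation` mirrors the one-job-at-a-time assumption, and `break_down` models the change in shop floor status that forces renegotiation.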
10.3.2.3 Optimization Agent

The optimization agent is a very important part of the proposed MAS. It optimizes the process plans and scheduling plans to obtain more effective solutions. To accomplish this task, the optimization agent explores the search space with the aid of an evolutionary algorithm, a Modified Genetic Algorithm (MGA). More details of the MGA can be found in Shao and Li [29]. The optimization agent applies the MGA to the IPPS problem in the following steps:

Step 1: Set up the parameters of the MGA for the optimization of process planning; set i = 1;
Step 2: Based on the generation rules, randomly initialize a population of PPsize alternative process plan individuals of job i;
Step 3: Evaluate the population: calculate the objective function presented in Eq. (10.11), max f(i, t) = 1/TP(i, t);
Step 4: Generate a new population:
  Step 4.1: Selection: the tournament selection scheme is used. In tournament selection, a number of individuals (dependent on the tournament size, typically between 2 and 7) are selected randomly from the population, and the individual with the best fitness is selected;
  Step 4.2: Crossover: the crossover operator with a user-defined crossover probability is applied for process planning;
  Step 4.3: Mutation: the mutation operator with a user-defined mutation probability is applied for process planning;
Step 5: Is the termination condition satisfied? If yes, go to Step 6; else, go to Step 3;
Step 6: Save s near-optimal alternative process plans of job i in PP_i[s], and set i = i + 1;
Step 7: If i > N, go to Step 8; else, go to Step 2;
Step 8: Set up the parameters of the MGA for the optimization of IPPS;
Step 9: Select the li th alternative process plan from PP_i[s] of job i, i ∈ [1, N];
Step 10: Based on the generation rules, randomly initialize a population of SCsize individuals of IPPS;
Step 11: Evaluate the population: calculate the objective function presented in Eq. (10.1);
Step 12: Generate a new population of IPPS:
  Step 12.1: Selection: use the same selection operator as in Step 4.1;
  Step 12.2: Crossover: the crossover operator with a user-defined crossover probability is applied for IPPS;
  Step 12.3: Mutation: the mutation operator with a user-defined mutation probability is applied for IPPS;
Step 13: Is the termination condition satisfied? If yes, go to Step 14; else, go to Step 11;
Step 14: Output the individuals from PP_i[li], i ∈ [1, N], and output the individual of IPPS;
Step 15: Decode the individuals to obtain the process plan of each job and the scheduling plan.

After applying the above procedure, the optimization agent generates the optimal or near-optimal process plan of each job and the scheduling plan.
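The two-stage structure of Steps 1-15 can be compressed into a toy-scale sketch: a first GA selects near-optimal plans per job, then a second GA sequences the jobs for the selected plans. The data, the random-reset stand-ins for crossover/mutation, and the greedy schedule decoder are all invented for illustration; they are not the book's MGA operators or encoding.

```python
# Hedged, toy-scale sketch of the two-stage MGA procedure (Steps 1-15).
import random

random.seed(0)  # reproducibility of the sketch

# Invented data: each job has alternative plans; a plan is a list of
# (machine, processing_time) steps.
PLANS = {
    "J1": [[("M1", 5), ("M2", 7)], [("M2", 3), ("M3", 6)]],
    "J2": [[("M1", 7), ("M3", 4)], [("M3", 5), ("M2", 5)]],
}

def tournament(pop, fit, k=2):
    # Steps 4.1 / 12.1: sample k individuals, keep the fittest.
    return max(random.sample(pop, k), key=fit)

def ga(init, fit, mutate, pop_size=20, gens=40):
    # Steps 2-5 / 10-13 compressed: evolve, return the fittest survivor.
    pop = [init() for _ in range(pop_size)]
    for _ in range(gens):
        pop = [mutate(tournament(pop, fit)) for _ in range(pop_size)]
    return max(pop, key=fit)

def plan_time(plan):                 # Eq. (10.8) without transport times
    return sum(t for _, t in plan)

# Stage 1 (Steps 1-7): per job, maximize fitness 1/TP (Eq. (10.11)).
best_plan = {}
for job, plans in PLANS.items():
    idx = ga(init=lambda: random.randrange(len(plans)),
             fit=lambda i: 1.0 / plan_time(plans[i]),
             mutate=lambda i: random.randrange(len(plans))
                              if random.random() < 0.15 else i)
    best_plan[job] = plans[idx]

# Stage 2 (Steps 8-15): search job orders; a greedy decoder turns an order
# into a schedule where each machine handles one operation at a time.
def makespan(order):
    machine_free, job_free = {}, {}
    for job in order:
        for mach, t in best_plan[job]:
            start = max(machine_free.get(mach, 0), job_free.get(job, 0))
            machine_free[mach] = job_free[job] = start + t
    return max(job_free.values())

order = ga(init=lambda: random.sample(list(best_plan), len(best_plan)),
           fit=lambda o: 1.0 / makespan(o),
           mutate=lambda o: random.sample(o, len(o)))
```

The sketch keeps the two-phase flow of the agent's procedure; a faithful implementation would use the MGA's real representation and operators [29] and iterate Stage 2 over the s stored plans per job.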
10.4 Implementation and Experimental Studies

10.4.1 System Implementation

The proposed approach has been implemented in a simulated manufacturing system, illustrated in Fig. 10.3. The Microsoft Visual C++ programming language has been used to implement the framework developed in this study, and Microsoft Access is used as the database to store information (e.g., the resource and knowledge databases). The agents are executed on three hosts. Inter-agent communication is point-to-point over the TCP/IP protocol and is managed by the Knowledge Query and Manipulation Language (KQML) [30]. All messages in this research comply with a set of standard performatives of KQML. Windows Sockets, supported by the Microsoft Foundation Classes (MFC), are used to implement the inter-agent communication.

The job agents and machine agents represent jobs and machines. The optimization agent optimizes the alternative process plans and scheduling plans. Considering the scheduling requirements and the availability of manufacturing resources, these agents negotiate with each other to establish the actual process plan of every job and the scheduling plans. The optimization agent is shown in Fig. 10.4.
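The point-to-point messaging described above exchanges KQML-style performatives between agents. The toy encoder/decoder below shows the shape of such a message; real KQML uses an s-expression syntax, so the JSON serialization and all field values here are simplifications invented for the sketch.

```python
# Hedged sketch of a KQML-style performative message; real KQML is
# s-expression based, and JSON is used here only to keep the sketch short.
import json

def make_message(performative, sender, receiver, content):
    return {"performative": performative, "sender": sender,
            "receiver": receiver, "content": content}

def encode(msg):
    return json.dumps(msg).encode("utf-8")   # bytes ready for a TCP socket

def decode(raw):
    return json.loads(raw.decode("utf-8"))
```

A job agent might, for example, send an "ask-one" performative to a machine agent to query its availability for an operation, and receive a "reply" performative back over the same socket.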
Fig. 10.3 Simulation manufacturing system (job agents for jobs 1…n, machine agents for machines 1…m, and the optimization agent on an agent platform, connected via Internet/Intranet to the resource and knowledge databases)

Fig. 10.4 Optimization agent of the simulation system
10.4.2 Experimental Results and Discussion

In order to illustrate the effectiveness and performance of the proposed agent-based approach, we consider three experimental case studies. The MGA parameters for process planning and scheduling are given in Table 10.1. The algorithm terminates when the number of generations reaches the maximum value.
10.4.2.1 Experiment 1

Experiment 1 has been adopted from Fuji et al. [20]. In this experiment, there are two problems. The first problem, adopted from a benchmark problem proposed by Sundaram and Fu [31], consists of five jobs and five machines. The makespan is used as the objective. Each job undergoes four different operations in a specified order. The alternative machines for processing the parts are given in Table 10.2, along with the respective processing times.

Using Simulated Annealing (SA) [32], a genetic algorithm (GA) [33], and the proposed approach, the best value obtained by all three approaches is 33, compared with the value of 38 obtained using the heuristic algorithm developed in [31]. The advantage of the GA and the proposed approach compared with the SA approach is

Table 10.1 MGA parameters

| Parameters | Process planning | Scheduling |
|---|---|---|
| The size of the population | 100 | 100 |
| Total number of generations | 100 | 100 |
| Tournament size | 2 | 2 |
| Probability of reproduction operation | 0.10 | 0.10 |
| Probability of crossover operation | 0.90 | 0.80 |
| Probability of mutation operation | 0.15 | 0.10 |
Table 10.2 Sundaram and Fu data (The number in parenthesis are the machine numbers) Job
Operation 1
Operation 2
Operation 3
Operation 4
1
5(M1), 3(M2)
7(M2)
6(M3)
3(M4), 4(M5)
2
7(M1)
4(M2), 6(M3)
7(M3), 7(M4)
10(M5)
3
4(M1), 5(M2),8(M3)
5(M4)
6(M4), 5(M5)
4(M5)
4
2(M2), 6(M3)
8(M3)
3(M3), 8(M4)
7(M4), 4(M5)
5
3(M1), 5(M3)
7(M3)
9(M4), 6(M5)
3(M5)
10.4 Implementation and Experimental Studies
203
the availability of multiple solutions that introduces more flexibility into the IPPS system. In the SA approach, only one solution is produced for every execution. In the second problem, the benchmark problem by Sundaram and Fu [31] has been extended to confirm the effectiveness of the proposed method for more complex and large-scale problems. The search space becomes much larger than the benchmark problem by the extension of the setting. Table 10.3 shows the setting of the production demand in which three terms of production are used. Table 10.4 shows the experimental results and the comparisons with the methods in Fuji et al. [20]. Figure 10.5 illustrates Gantt charts of Term 1, and Fig. 10.6 shows the convergence curves of the proposed approach in three terms. Based on the experimental results of Experiment 1, the best solution found by the proposed method is not detected using the method in Fuji et al. [20]. The proposed method can get better scheduling plans. And from Fig. 10.6, we can conclude that this method has very fast evolution speed and good search capacity. These results reveal that the proposed method is more effective in more complex and larger scale problems than other methods. Table 10.3 Setting of production demand changes Term
Jobs
Production amount in each job Job 1
Job 2
Job 3
Job 4
Job 5
1
5
20
20
20
20
20
2
4
25
25
25
25
0
3
5
15
15
20
25
25
Table 10.4 Experimental results of this problem (the data which is marked by * is adopted from Fuji et al. [20]) Terms
Makespan Proposed method
Method in Fuji*
Morad*
Integrated*
1
499
500
509
513
2
468
481
481
488
3
521
522
526
534
Fig. 10.5 Gantt charts of term 1 (Makespan = 499)
Fig. 10.6 Convergence curves of the proposed method in three terms
10.4.2.2 Experiment 2
Experiment 2 has been adopted from Chan et al. [34]. In this experiment, the problem consists of five jobs and three machines. The lot size of each job is 10, and makespan is used as the objective. Table 10.5 shows the experimental results, and Fig. 10.7 illustrates the Gantt chart of this problem.

Table 10.5 Experimental results of experiment 2 (the data marked * are adopted from Chan et al. [34])

           Proposed method    Chan*    Kumar [35]    Lee [36]
Makespan   350                360      420           439
Fig. 10.7 Gantt chart of experiment 2 (Makespan = 350)
Based on the experimental results of Experiment 2, the best solution found by the proposed approach cannot be obtained by the approach in Chan et al. [34]. Therefore, the proposed approach yields better scheduling plans.
10.4.2.3 Experiment 3
Experiment 3 has also been adopted from Chan et al. [34]. In this experiment, the problem consists of one hundred jobs and ten machines, with makespan as the objective. Table 10.6 shows the experimental results, and Fig. 10.8 illustrates the Gantt chart of this problem. Based on these results, the best solution found by the proposed approach cannot be obtained by the approach in Chan et al. [34], so the proposed approach yields better scheduling plans.
10.4.3 Discussion

Overall, the experimental results indicate that the proposed approach is well suited to IPPS, for two reasons. First, it considers all the conditions of process planning and scheduling synthetically. Second, it obtains better results than other previously developed approaches.
10.5 Conclusion

Considering the complementary roles of process planning and scheduling, this research has developed an agent-based approach to facilitate the integration and optimization of these two systems. Process planning and scheduling functions are carried out simultaneously. An optimization agent based on an evolutionary algorithm has been developed to optimize and realize the proper decisions resulting from the interactions between the agents. To verify the feasibility of the proposed approach, a number of experimental studies have been carried out to compare it with other previously developed approaches. The experimental results show that the proposed approach is very effective for the IPPS problem and achieves better overall optimization results. With the new method developed in this work, it should be possible to increase the efficiency of manufacturing systems. One direction for future work is to apply the proposed method to practical manufacturing systems; wider use of this approach is likely to enhance the performance of future manufacturing systems.
Table 10.6 Experimental results of experiment 3 (the data marked * are adopted from Chan et al. [34])

Method                            Makespan
Proposed method                   169
SGA* (crossover/mutation rate)
  Single                          267
  1%                              259
  2%                              272
  3%                              276
  5%                              298
  10%                             290
  25%                             300
  50%                             291
  80%                             286
GADG*                             229
Fig. 10.8 Gantt chart of experiment 3 (Makespan = 169)
References

1. Tan W, Khoshnevis B (2000) Integration of process planning and scheduling—a review. J Intell Manuf 11:51–63
2. Chryssolouris G, Chan S (1985) An integrated approach to process planning and scheduling. Ann CIRP 34(1):413–417
3. Chryssolouris G, Chan S, Cobb W (1984) Decision making on the factory floor: an integrated approach to process planning and scheduling. Robot Comput Integr Manuf 1(3–4):315–319
4. Beckendorff U, Kreutzfeldt J, Ullmann W (1991) Reactive workshop scheduling based on alternative routings. In: Proceedings of a conference on factory automation and information management, pp 875–885
5. Khoshnevis B, Chen QM (1989) Integration of process planning and scheduling function. In: Proceedings of IIE integrated systems conference and society for integrated manufacturing conference, pp 415–420
6. Larsen NE (1993) Methods for integration of process planning and production planning. Int J Comput Integr Manuf 6(1–2):152–162
7. Zhang HC (1993) IPPM—a prototype to integrate process planning and job shop scheduling functions. Ann CIRP 42(1):513–517
8. Zhang WJ, Xie SQ (2007) Agent technology for collaborative process planning: a review. Int J Adv Manuf Technol 32:315–325
9. Wang LH, Shen WM, Hao Q (2006) An overview of distributed process planning and its integration with scheduling. Int J Comput Appl Technol 26(1–2):3–14
10. Shen WM, Wang LH, Hao Q (2006) Agent-based distributed manufacturing process planning and scheduling: a state-of-the-art survey. IEEE Trans Syst Man Cybern Part C: Appl Rev 36(4):563–577
11. Gu P, Balasubramanian S, Norrie D (1997) Bidding-based process planning and scheduling in a multi-agent system. Comput Ind Eng 32(2):477–496
12. Chan FTS, Zhang J, Li P (2001) Modelling of integrated, distributed and cooperative process planning system using an agent-based approach. Proc Inst Mech Eng Part B: J Eng Manuf 215:1437–1451
13. Wu SH, Fuh JYH, Nee AYC (2002) Concurrent process planning and scheduling in distributed virtual manufacturing. IIE Trans 34:77–89
14. Lim MK, Zhang Z (2003) A multi-agent based manufacturing control strategy for responsive manufacturing. J Mater Process Technol 139:379–384
15. Lim MK, Zhang DZ (2004) An integrated agent-based approach for responsive control of manufacturing resources. Comput Ind Eng 46:221–232
16. Wang LH, Shen WM (2003) DPP: an agent-based approach for distributed process planning. J Intell Manuf 14:429–439
17. Wong TN, Leung CW, Mak KL, Fung RYK (2006) Integrated process planning and scheduling/rescheduling—an agent-based approach. Int J Prod Res 44(18–19):3627–3655
18. Wong TN, Leung CW, Mak KL, Fung RYK (2006) Dynamic shopfloor scheduling in multiagent manufacturing systems. Expert Syst Appl 31:486–494
19. Shukla SK, Tiwari MK, Son YJ (2008) Bidding-based multi-agent system for integrated process planning and scheduling: a data-mining and hybrid tabu-SA algorithm-oriented approach. Int J Adv Manuf Technol 38:163–175
20. Fuji N, Inoue R, Ueda K (2008) Integration of process planning and scheduling using multi-agent learning. In: Proceedings of the 41st CIRP conference on manufacturing systems, pp 297–300
21. Nejad HTN, Sugimura N, Iwamura K, Tanimizu Y (2008) Agent-based dynamic process planning and scheduling in flexible manufacturing system. In: Proceedings of the 41st CIRP conference on manufacturing systems, pp 269–274
22. Li WD, Gao L, Li XY, Guo Y (2008) Game theory-based cooperation of process planning and scheduling. In: Proceedings of CSCWD, pp 841–845
23. Li WD, McMahon CA (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95
24. Saygin C, Kilic SE (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
25. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
26. Catron AB, Ray SR (1991) ALPS—a language for process specification. Int J Comput Integr Manuf 4:105–113
27. Sormaz D, Khoshnevis B (2003) Generation of alternative process plans in integrated manufacturing systems. J Intell Manuf 14:509–526
28. Li XY, Shao XY, Gao L (2008) Optimization of flexible process planning by genetic programming. Int J Adv Manuf Technol 38:143–153
29. Shao XY, Li XY, Gao L, Zhang CY (2009) Integration of process planning and scheduling—a modified genetic algorithm-based approach. Comput Oper Res 36:2082–2096
30. Liu SP, Zhang J, Rao YQ, Li PG (2000) Construction of communication and coordination mechanism in a multi-agent system. In: Proceedings of the 17th international conference on computer aided production engineering, pp 28–30
31. Sundaram RM, Fu SS (1988) Process planning and scheduling. Comput Ind Eng 15(1–4):296–307
32. Palmer GJ (1996) A simulated annealing approach to integrated production scheduling. J Intell Manuf 7(3):163–176
33. Morad N, Zalzala A (1999) Genetic algorithms in integrated process planning and scheduling. J Intell Manuf 10:169–179
34. Chan FTS, Chuang SH, Chan LY (2008) An introduction of dominant genes in genetic algorithm for FMS. Int J Prod Res 46(16):4369–4389
35. Kumar R, Tiwari MK, Shankar R (2003) Scheduling of flexible manufacturing systems: an ant colony optimization approach. Proc Inst Mech Eng Part B: J Eng Manuf 217:1443–1453
36. Lee DY, DiCesare F (1994) Scheduling flexible manufacturing systems using Petri nets and heuristic search. IEEE Trans Robot Autom 10(2):123–132
Chapter 11
A Modified Genetic Algorithm Based Approach for IPPS
11.1 Integration Model of IPPS

In this section, the IPPS model is introduced; it is illustrated in Fig. 11.1. The basic integration methodology of this model is to combine the advantages of NLPP (alternative process plans) and DPP (the hierarchical approach). The model is based on the concurrent engineering principle: the Computer-Aided Process Planning (CAPP) and scheduling systems work simultaneously. Throughout the integration decision-making phase, the model supports interactive, collaborative, and cooperative working, and it exploits the flexibility of alternative process plans. The detailed working steps of this model are as follows:

Step 1: The CAPP system works from the ideal shop floor resources and generates all the initial alternative process plans for each job. The shop floor resource module provides the current shop floor status to the CAPP system.
Step 2: Based on the shop floor resource information and the given optimization criteria, the CAPP system optimizes all the alternative process plans and selects s near-optimal process plans for each job, where s is set by the user of the system and is typically in the range 3–5 [1]. Because there may be many alternative process plans for each job, this step filters out poor process plans without much loss of flexibility, and it effectively avoids the combinatorial explosion.
Step 3: The integration of process planning (using the selected process plans) and scheduling is optimized based on the current shop floor status, generating the optimal scheduling plan and selecting one optimal process plan for each job from the plans selected in Step 2.
Step 4: The CAPP system generates the detailed process plan for each job, and the scheduling system generates the detailed scheduling plan.
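A minimal sketch of Steps 2 and 3 under simplifying assumptions: here plans are ranked by total processing time in Step 2, and Step 3 is solved exhaustively with a crude machine-load estimate of makespan (the book uses a GA and richer criteria at both steps); all function names and data are illustrative.

```python
from itertools import product

# Toy sketch of Steps 2-3 of the integration model. A process plan is a list
# of (operation, machine, time) triples; the plans, the time-based ranking,
# and the load-based makespan estimate are illustrative assumptions only.
def plan_time(plan):
    return sum(t for _, _, t in plan)

def select_near_optimal(plans, s):
    # Step 2: keep only the s shortest alternative plans per job (s is 3-5).
    return sorted(plans, key=plan_time)[:s]

def load_makespan(assignment):
    # Rough evaluation used in Step 3 here: the busiest machine's total load.
    load = {}
    for plan in assignment:
        for _, machine, t in plan:
            load[machine] = load.get(machine, 0) + t
    return max(load.values())

def optimize_integration(candidates):
    # Step 3: pick one candidate plan per job, minimizing the load estimate
    # (exhaustive here; the book uses a GA for this step).
    return min(product(*candidates), key=load_makespan)

jobs = [
    [[("O1", "M1", 4), ("O2", "M2", 3)], [("O1", "M2", 5), ("O2", "M3", 2)]],
    [[("O1", "M1", 6)], [("O1", "M3", 7)]],
]
candidates = [select_near_optimal(plans, s=3) for plans in jobs]
chosen = optimize_integration(candidates)
```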
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_11
Fig. 11.1 Integration model
Through these working steps, the advantages of NLPP and DPP are used effectively in the integration model: the CAPP and scheduling systems work simultaneously, and the flexibility of alternative process plans remains.
11.2 Representations for Process Plans and Schedules

Three types of flexibility are considered in process plans [2, 3]: operation flexibility, sequencing flexibility, and processing flexibility. Operation flexibility [4], also called routing flexibility, relates to the possibility of performing one operation on alternative machines, with possibly distinct processing times and costs. Sequencing flexibility is determined by the possibility of interchanging the sequence of the required operations. Processing flexibility is determined by the possibility of processing the same manufacturing feature with alternative operations or sequences of operations. Considering these flexibilities can yield better performance in criteria such as production time [4].
Many methods can describe the types of flexibility explained above [5], such as Petri nets and AND/OR graphs and networks. In this research, the network representation proposed by Kim [4] and Sormaz [6] is adopted. There are three node types in the network: starting node, intermediate node, and ending node [4]. The starting and ending nodes, which are dummy nodes, indicate the start and the end of the manufacturing process of a job. An intermediate node represents an operation; it records the alternative machines that can perform the operation and the processing time of the operation on each machine. The arrows connecting the nodes represent the precedence between them. OR relationships describe the processing flexibility in which the same manufacturing feature can be processed by different process procedures. If the links following a node are connected by an OR-connector, only one of the OR-links needs to be traversed (the links connected by the OR-connector are called OR-links). An OR-link path is an operation path that begins at an OR-link and ends where it merges with the other paths; its end is denoted by a JOIN-connector. Links that are not connected by OR-connectors must all be visited [4]. Figure 11.2 shows the alternative process plan networks of three jobs (jobs 1, 3, and 5; jobs 2, 4, and 6 will be given in Sect. 11.4), which are used in Sect. 11.4. In the network of Fig. 11.2(1), paths {2, 3, 4}, {5, 6, 7}, and {5, 8} are three OR-link paths. An OR-link path can itself contain other OR-link paths, e.g., paths {6, 7} and {8}.
In this chapter, scheduling is assumed to be job shop scheduling [7]. In solving this scheduling problem, the following assumptions are made [2, 4]:
(1) Job preemption is not allowed, and each machine can handle only one job at a time.
(2) The different operations of one job cannot be processed simultaneously.
(3) All jobs and machines are available at time zero.
(4) After a job is processed on a machine, it is immediately transported to the next machine in its process, and the transportation time among machines is constant.
(5) Setup time for the operations on the machines is independent of the operation sequence and is included in the processing times (see Fig. 11.2).
A Gantt chart is commonly used to represent a schedule of a group of parts: it lays out the order in which the parts and their operations are carried out and captures the dependencies among tasks. The X-axis of the Gantt chart represents time; each row on the Y-axis represents a machine and the specific arrangement of the parts' operations on that machine. In this chapter, the Gantt chart is used to represent the schedule.
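The OR-connector semantics of the network representation can be sketched as follows. The segment structure and the enumeration routine are illustrative assumptions; the three path sets mirror those named for Fig. 11.2(1).

```python
from itertools import product

# Sketch of the OR-connector idea. A job is a list of segments: a bare
# operation is required, while a dict segment is an OR-connector whose "or"
# entry lists the alternative OR-link paths, exactly one of which must be
# traversed. This structure is an illustrative assumption.
def expand(segments):
    # Enumerate every operation sequence the OR-connectors permit.
    choice_sets = []
    for seg in segments:
        if isinstance(seg, dict):
            choice_sets.append(seg["or"])     # pick one OR-link path
        else:
            choice_sets.append([[seg]])       # required operation
    return [sum(combo, []) for combo in product(*choice_sets)]

job = [1, {"or": [[2, 3, 4], [5, 6, 7], [5, 8]]}, 9]
routes = expand(job)
# One alternative operation sequence results per OR-link path.
```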
Fig. 11.2 Alternative process plans networks
11.3 Modified GA-Based Optimization Approach

11.3.1 Flowchart of the Proposed Approach

Figure 11.3 shows the flowchart of the proposed method. First, the CAPP system gives the alternative process plans, which are optimized by GA to find near-optimal process plans. The next step is to select s near-optimal process plans. And
Fig. 11.3 Flowchart of the proposed approach
then the integration of process planning and scheduling is optimized by GA. Finally, the optimized process plan for each job and the scheduling plan are determined.
11.3.2 Genetic Components for Process Planning

11.3.2.1 Encoding and Decoding
Each chromosome in the process planning population consists of two parts of different lengths, as shown in Fig. 11.4. The first part of the chromosome is the process plan string, made up of Genes. A Gene is a structure containing two numbers. The first number is an operation; the process plan string lists all the operations of a job, even those that may not be performed because of alternative operation procedures. The second number is the alternative machine: the machine number in the ith Gene identifies the machine on which the operation in the ith Gene is processed. The second part of the chromosome is the OR string, made up of discrimination values. A discrimination value encodes an OR-connector (see Fig. 11.2) as a decimal integer; together with the process plan string, it decides which OR-link is chosen. Figure 11.4 shows an example individual for job 1 (see Fig. 11.2). Taking Gene (2, 8) as an example, 2 is an operation of job 1, and 8 is the alternative machine corresponding to operation 2. Part 1 (the process plan string) of the chromosome
Fig. 11.4 Individual of process plan
shown in Fig. 11.4 is made up of nine Genes; part 2 (the OR string) is made up of two discrimination values. Decoding is straightforward: the interpretation of part 2 of the chromosome decides which OR-link paths, and hence which operations and corresponding machines, are selected, and the order in which they appear in part 1 is then read as the operation sequence and the corresponding machine sequence for the job. In the encoding example above, the operation sequence together with the corresponding machine sequence is (1, 2)-(2, 8)-(3, 1)-(4, 6)-(9, 3).
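A small sketch of this decoding, assuming a hypothetical table that maps each discrimination value to the operations on the OR-link path it selects; with discrimination value 1 it reproduces the example sequence above.

```python
# Sketch of decoding the two-part chromosome. The OR table below is a
# hypothetical stand-in for job 1's network: it maps each discrimination
# value to the operations on the OR-link path that value selects.
def decode(genes, or_string, or_tables, required):
    # genes: list of (operation, machine); or_string: one value per OR-connector.
    selected = set(required)
    for value, table in zip(or_string, or_tables):
        selected |= set(table[value])         # keep the chosen OR-link path only
    # Operations keep the order they have in the process plan string.
    return [(op, m) for op, m in genes if op in selected]

genes = [(1, 2), (2, 8), (3, 1), (4, 6), (5, 3), (6, 7), (7, 7), (8, 4), (9, 3)]
or_tables = [{1: [2, 3, 4], 2: [5, 6, 7], 3: [5, 8]}]    # one OR-connector
plan = decode(genes, [1], or_tables, required=[1, 9])
# With discrimination value 1, plan is (1,2)-(2,8)-(3,1)-(4,6)-(9,3).
```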
11.3.2.2 Initial Population and Fitness Evaluation
Initial population: To run the evolutionary algorithm, an initial population is needed. In GA, the initial population is usually generated randomly. However, when generating the individuals of the initial population, a feasible operation sequence in each process plan has to be guaranteed: the order of elements in the encoding must not break the precedence constraints among operations [4]. A method has therefore been used to generate random, feasible chromosomes [1]:
Step 1: The process plan string of the initial chromosome contains all the alternative operations, and the sequence of operations is fixed.
Step 2: The second number of each Gene in the process plan string is created by randomly assigning a machine from the set of machines that can perform the operation at the corresponding position.
Step 3: The OR string of the initial chromosome, which represents the OR-link paths, is initialized by randomly generating an integer for each component. The selection range of each discrimination value is decided by the number of OR-link paths controlled by that value; for example, with three OR-link paths, the discrimination value is a random integer in [1, 3].
Fitness evaluation: The objective of alternative process planning is to minimize the production time (working time plus transportation time) under the given conditions. Adjusted fitness is used as the objective. To calculate the fitness [1], the following notations are used:
N: the total number of jobs
G_i: the total number of flexible process plans of the ith job
S: the size of the population
M: the maximal number of generations
t: the generation index, t = 1, 2, ..., M
o_ijl: the jth operation in the lth flexible process plan of the ith job
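The initialization steps above can be sketched as follows; the operation list, machine-option table, and OR ranges are toy assumptions standing in for a real job's data.

```python
import random

# Sketch of Steps 1-3 for building one random, feasible chromosome; the data
# below are toy assumptions.
def random_chromosome(operations, machine_options, or_ranges, rng=random):
    # Step 1: operations appear in a fixed, precedence-feasible order.
    # Step 2: assign each operation a random capable machine.
    plan_string = [(op, rng.choice(machine_options[op])) for op in operations]
    # Step 3: one random discrimination value per OR-connector, drawn from
    # [1, number of OR-link paths it controls].
    or_string = [rng.randint(1, n_paths) for n_paths in or_ranges]
    return plan_string, or_string

ops = [1, 2, 3, 4, 5, 6, 7, 8, 9]
machines = {op: [1, 2, 3] for op in ops}      # toy: every op can run on M1-M3
plan, ors = random_chromosome(ops, machines, or_ranges=[3, 2])
```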
P_il: the number of operations in the lth flexible process plan of the ith job
k: the alternative machine corresponding to o_ijl
TW(i, j, l, k): the working time of operation o_ijl on the kth alternative machine
TS(i, j, l, k): the starting time of operation o_ijl on the kth alternative machine
TT(i, l, (j, k_1), (j+1, k_2)): the transportation time between the k_1th alternative machine of o_ijl and the k_2th alternative machine of o_i(j+1)l
TP(i, t): the production time of the ith job in the tth generation
The production time is calculated as

TP(i, t) = \sum_{j=1}^{P_{il}} TW(i, j, l, k) + \sum_{j=1}^{P_{il}-1} TT(i, l, (j, k_1), (j+1, k_2)),
i ∈ [1, N], j ∈ [1, P_il], l ∈ [1, G_i]  (11.1)

Because each machine can handle only one job at a time, the constraint is

TS(i, j_2, l, k) − TS(i, j_1, l, k) > TW(i, j_1, l, k),
i ∈ [1, N], j_1, j_2 ∈ [1, P_il], l ∈ [1, G_i]  (11.2)

Because the different operations of one job cannot be processed simultaneously, the constraint between successive operations of one job is

TS(i, j+1, l, k_2) − TS(i, j, l, k_1) > TW(i, j, l, k_1),
i ∈ [1, N], j ∈ [1, P_il], l ∈ [1, G_i]  (11.3)

The objective function is

max f(i, t) = 1 / TP(i, t)  (11.4)

The fitness function is calculated for each individual in the population as described in Eq. (11.4), subject to the two constraints in Eqs. (11.2) and (11.3).
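For a single job with a fixed chosen plan, Eqs. (11.1) and (11.4) reduce to the sketch below; the per-operation working times and inter-machine transportation times are illustrative numbers, not data from the book.

```python
# Sketch of Eqs. (11.1) and (11.4) for one job with a fixed chosen plan;
# the numbers are illustrative.
def production_time(working_times, transport_times):
    # Eq. (11.1): sum of working times plus sum of transportation times
    # between consecutive operations.
    return sum(working_times) + sum(transport_times)

def fitness(tp):
    # Eq. (11.4): maximizing f = 1 / TP minimizes the production time.
    return 1.0 / tp

tw = [5, 7, 6, 3]      # TW(i, j, l, k) for a 4-operation plan
tt = [1, 2, 1]         # TT between the three consecutive machine pairs
tp = production_time(tw, tt)
```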
11.3.2.3 Genetic Operations for Process Planning
To keep excellent individuals in the population, it is important to employ good operators that handle the problem effectively and efficiently. GA operators can generally be divided into three classes: reproduction, crossover, and mutation, and a large number of operators have been developed in each class.
Reproduction: The tournament selection scheme is used for the reproduction operation. In tournament selection, a number of individuals (the tournament size, typically between 2 and 7) are selected randomly from the population, and the individual with the best fitness is chosen for reproduction. Tournament selection allows a tradeoff between exploration and exploitation of the gene pool [8]; the selection pressure can be adjusted by changing the tournament size.
Crossover: Point crossover is used as the crossover operator in this study. Each part of the selected chromosomes executes the crossover operation separately. This crossover generates feasible offspring that respect the precedence restrictions and avoid duplication or omission of operations, as follows. A cut point is chosen randomly, and the substring before the cut point in one parent (P1) is passed to the same position in the offspring (O1); the rest of O1 is made up of the substring after the cut point in the other parent (P2). The other offspring (O2) is made up of the substring before the cut point in P2 and the substring after the cut point in P1. An example of the crossover is presented in Fig. 11.5, where, for clarity, P1 is shown twice and the cut point is marked. The crossover operator produces feasible individuals, since both parents are feasible and the offspring are created without violating the feasibility of the parents.
Mutation: Point mutation is used as the mutation operator in this study. Each selected chromosome is mutated as follows. First, a point mutation is applied to change the alternative machine stored in a Gene (see Fig. 11.4): a Gene is randomly chosen from the selected chromosome, and its second element is mutated by altering the machine number to another of the alternative machines at random. Second, another mutation alters the OR-link path; this is associated with part 2 of the chromosome. A discrimination value is randomly chosen from the selected chromosome. Then, it
Fig. 11.5 Crossover for process planning
Fig. 11.6 Mutation for process planning
is mutated by changing its value randomly within its selection range. In the example depicted in Fig. 11.6, the mutation points are marked: Gene (5, 3) has changed into (5, 5), and the selected discrimination value has changed from 1 to 2.
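These two operators on the process plan string can be sketched as follows; the parent chromosomes are illustrative, and the feasibility of the crossover follows from the fixed operation order shared by all chromosomes.

```python
import random

# Sketch of point crossover and the machine mutation on the process plan
# string; the parents are illustrative. Because every chromosome lists the
# operations in the same fixed order, swapping tails at a common cut point
# cannot duplicate or omit an operation.
def point_crossover(p1, p2, cut):
    o1 = p1[:cut] + p2[cut:]
    o2 = p2[:cut] + p1[cut:]
    return o1, o2

def mutate_machine(plan_string, idx, machine_options, rng=random):
    # Replace the machine of the Gene at idx with another capable machine.
    op, old = plan_string[idx]
    choices = [m for m in machine_options[op] if m != old]
    if choices:
        plan_string = plan_string[:idx] + [(op, rng.choice(choices))] + plan_string[idx + 1:]
    return plan_string

p1 = [(1, 2), (5, 3), (8, 7), (9, 8)]
p2 = [(1, 4), (5, 3), (8, 7), (9, 3)]
o1, o2 = point_crossover(p1, p2, cut=2)
```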
11.3.3 Genetic Components for Scheduling

11.3.3.1 Encoding and Decoding
Each chromosome in the scheduling population consists of two parts of different lengths, as shown in Fig. 11.7. The first part of the chromosome is the scheduling plan string. In this study, the scheduling encoding is an operation-based representation using job numbers; it is a permutation of the operations. This representation is natural for expressing an operation sequence, and the crucial information in the parents can easily be passed on to the offspring. A job number stands for an operation of that job: the different appearances of the number in the chromosome represent the different operations of the job, and the order of these appearances is the order of the job's operations. Assume there are n jobs, and let q be the largest number of operations in any alternative process plan of the n jobs. The length of the scheduling plan string is then n × q. The second part of the chromosome is the process plan string. The positions 1 to n in this string represent jobs 1 to n, and the number in the ith position indicates which alternative process plan of the ith job is chosen. The number of appearances of i in the scheduling plan string equals the number of operations of the chosen alternative process plan. Based on this principle, the elements of the scheduling plan string are determined; if their count is less than n × q, the remaining positions are filled with 0. The scheduling plan string is thus made up of job numbers and 0s, and one scheduling plan string is generated by arranging all these elements randomly. The process plan string is generated by choosing an alternative process plan randomly for every job.
Fig. 11.7 Individual of scheduling plan
Table 11.1 An example of 6 jobs with 3 alternative process plans for each job

Job 1: (1,2)-(5,3)-(8,7)-(9,8); (1,2)-(5,3)-(8,7)-(9,3); (1,4)-(5,3)-(8,7)-(9,8)
Job 2: (1,2)-(5,3)-(8,7)-(9,8); (1,2)-(5,3)-(8,7)-(9,3); (1,2)-(6,1)-(7,7)-(8,8)-(9,3)
Job 3: (1,3)-(2,6)-(3,2)-(9,2); (1,2)-(2,3)-(3,2)-(9,3); (1,2)-(2,3)-(3,1)-(9,3)
Job 4: (6,1)-(8,5)-(9,2); (6,1)-(7,6)-(9,2); (1,3)-(4,6)-(5,1)-(9,2)
Job 5: (1,2)-(4,6)-(8,7)-(9,8); (1,2)-(2,3)-(4,6)-(5,7)-(9,3); (1,3)-(4,6)-(5,1)-(9,2)
Job 6: (1,2)-(2,3)-(3,2)-(9,3); (1,3)-(2,4)-(3,3)-(9,2); (1,2)-(2,3)-(4,8)-(9,3)
Table 11.1 shows an example of 6 jobs, each with 3 alternative process plans. Figure 11.7 shows an individual scheduling plan for this example, in which n = 6 and q = 5, so the scheduling plan string has 30 elements and the process plan string has 6 elements. For job 1 the first alternative process plan is chosen, with four operations, so four elements of the scheduling plan string are 1. For job 2 the second plan is chosen (four operations), so four elements are 2. For job 3 the third plan is chosen (four operations), so four elements are 3. For job 4 the second plan is chosen (three operations), so three elements are 4. For job 5 the second plan is chosen (five operations), so five elements are 5. For job 6 the first plan is chosen (four operations), so four elements are 6. The scheduling plan string is therefore made up of four 1s, four 2s, four 3s, three 4s, five 5s, and four 6s; the remaining elements are 0, and the number of 0s is 30 − 4 − 4 − 4 − 3 − 5 − 4 = 6. All these elements are arranged randomly to generate a scheduling plan string.
The permutation can be decoded into semi-active, active, non-delay, and hybrid schedules; the active schedule is adopted in this research. Recall that at the decoding stage a particular individual of the scheduling population has been determined, i.e., a fixed alternative process plan for each job is given. The notations used to explain the procedure are:
m: the total number of machines
o_ij: the jth operation of the ith job
as_ij: the allowable starting time of operation o_ij
s_ij: the earliest starting time of operation o_ij
p_ij: the processing time of operation o_ij
t_xyi: the transportation time between machine x of the pre-operation and machine y of the current operation of the ith job
c_ij: the earliest completion time of operation o_ij, i.e., c_ij = s_ij + p_ij
The procedure of decoding is as follows:
Step 1: Generate the chromosome of machines based on the job chromosome.
Step 2: Determine the set of operations for every machine: M_a = {o_ij}, 1 ≤ a ≤ m.
Step 3: Determine the set of machines for every job: JM_d = {machine}, 1 ≤ d ≤ N.
Step 4: Compute the allowable starting time of every operation: as_ij = c_i(j−1) + t_xyi (o_ij ∈ M_a; x, y ∈ JM_d), where c_i(j−1) is the completion time of the pre-operation of o_ij in the same job.
Step 5: Check the idle time of the machine of o_ij and get its idle ranges [t_s, t_e]. Check these ranges in turn: if max(as_ij, t_s) + p_ij ≤ t_e, the earliest starting time is s_ij = max(as_ij, t_s); otherwise, check the next range. If no range satisfies this condition, s_ij = max(as_ij, c(o_ij − 1)), where c(o_ij − 1) is the completion time of the pre-operation of o_ij on the same machine.
Step 6: Compute the completion time of every operation: c_ij = s_ij + p_ij.
Step 7: Generate the sets of starting and completion times for every operation of each job: T_d(s_ij, c_ij), 1 ≤ d ≤ N.
Through this procedure, the starting and completion times of every operation of each job are obtained; these sets constitute one scheduling plan for the job shop.
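A simplified sketch of this decoding: it walks the operation sequence and starts each operation at the earliest time after its job predecessor finishes (plus a constant transport time, per assumption (4)) and its machine becomes free. The idle-gap insertion of Step 5 is omitted for brevity, so this is closer to a simple dispatch than a full active-schedule builder; the jobs, machines, and times are illustrative.

```python
# Simplified sketch of the decoding procedure (Step 5's gap insertion is
# omitted); the data below are illustrative.
def decode_schedule(sequence, proc, transport=1):
    # sequence: job ids in operation order; proc[job] = list of (machine, time).
    next_op = {}        # index of each job's next operation
    job_ready = {}      # completion time of each job's previous operation
    machine_free = {}   # time at which each machine becomes free
    schedule = []
    for job in sequence:
        k = next_op.get(job, 0)
        machine, t = proc[job][k]
        ready = job_ready.get(job, 0)
        if k > 0:
            ready += transport              # constant transport between machines
        start = max(ready, machine_free.get(machine, 0))
        end = start + t
        schedule.append((job, k, machine, start, end))
        next_op[job] = k + 1
        job_ready[job] = end
        machine_free[machine] = end
    return schedule, max(e for *_, e in schedule)

proc = {1: [("M1", 3), ("M2", 2)], 2: [("M1", 2), ("M2", 4)]}
sched, cmax = decode_schedule([1, 2, 1, 2], proc)
```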
11.3.3.2 Initial Population and Fitness Evaluation
The encoding principle in this study is an operation-based representation, so it cannot violate the precedence constraints among operations. The initial population is generated based on this encoding principle. In this chapter, two objective functions of the scheduling problem are calculated. The first objective is the makespan:

Object1 = max(c_ij), c_ij ∈ T_d(s_ij, c_ij)    (11.5)

The second objective adds a machine-load balancing term:

Object2 = Object1 + Σ_{a=1..m} | Σ_{o_ij ∈ M_a} p_ij − avgmt |    (11.6)

where Σ_{o_ij ∈ M_a} p_ij is the total working time of machine a, avgmt = (Σ_{a=1..m} Σ_{o_ij ∈ M_a} p_ij) / m is the average working time of all the machines, and the second term of Eq. (11.6) is the summation of the absolute differences between the total working time and avgmt for every machine. This objective function rests on the synthetic consideration of the makespan and the balance of machine utilization.
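As a small illustration, the two objectives can be computed from a decoded schedule as follows; the (job, machine, start, end) tuple layout is an assumption for this sketch:

```python
def objectives(schedule, m):
    """Compute Object1 (Eq. 11.5) and Object2 (Eq. 11.6), a sketch.
    schedule: list of (job, machine, start, end) tuples; m: number of machines."""
    makespan = max(end for _, _, _, end in schedule)           # Object1
    load = [0.0] * m                                           # working time per machine
    for _, mach, start, end in schedule:
        load[mach] += end - start
    avgmt = sum(load) / m                                      # average machine working time
    object2 = makespan + sum(abs(w - avgmt) for w in load)     # Eq. (11.6)
    return makespan, object2
```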
11.3.3.3 Genetic Components for Scheduling
The reproduction operator for scheduling is the same as in process planning. In this section, the crossover and mutation operators are introduced in detail.

(1) Crossover: The procedure of crossover for scheduling is described as follows:

Step 1: Select a pair of chromosomes P1 and P2 by the selection scheme and initialize two empty offspring, O1 and O2.
Step 2: First, crossover the process plan strings of P1 and P2 to get the process plan strings of O1 and O2.
Step 2.1: Compare the process plan string of P1 with that of P2; if an element of P1 is the same as the corresponding element of P2, record the value and position of this element. This process is repeated until all the elements of the process plan string have been compared.
Step 2.2: The elements of P1 recorded in Step 2.1 are appended to the same positions in O1, while the recorded elements of P2 are appended to the same positions in O2. The other elements (those that differ between P1 and P2) of the process plan string of P2 are appended to the same positions in O1, while the other elements of the process plan string of P1 are appended to the same positions in O2.
Step 3: Secondly, to match the process plan strings of O1 and O2 and avoid generating infeasible O1 and O2, the scheduling plan strings of P1 and P2 are crossed over as follows:
Step 3.1: If the value of an element in the scheduling plan string of P1 equals one of the positions recorded in the process plan string, this element (including the 0 elements) is appended to the same position in O1 and deleted from P1. Likewise, if the value of an element in the scheduling plan string of P2 equals one of the recorded positions, this element (including the 0 elements) is appended to the same position in O2 and deleted from P2.
Step 3.2: Get the numbers of the remaining elements in the scheduling plan strings of P1 and P2; they are n1 and n2.
If n1 ≥ n2, then for O1 the number of empty positions is larger than the number of remaining elements in P2. Therefore, n1 − n2 empty positions in O1 are selected randomly and filled with 0. Then, the remaining elements in the scheduling plan string of P2 are appended to the remaining empty positions in O1 seriatim. For O2, n1 ≥ n2 implies that the number of empty positions is smaller than the number of remaining elements in P1, so n1 − n2 0s in O2 are selected randomly and set to empty. Then, the remaining elements in the scheduling plan string of P1 are appended to the empty positions in O2 seriatim. If n1 < n2, the procedure is reversed.
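A minimal sketch of this scheduling-string crossover is given below, assuming the set of jobs whose selected process plans coincide in both parents (`same_jobs`) has already been determined from the process plan strings; the function and variable names are illustrative:

```python
import random

def crossover_scheduling(p1_sched, p2_sched, same_jobs):
    """Crossover of two scheduling plan strings (lists of job numbers, 0 = filler).
    same_jobs: jobs whose selected process plan is identical in both parents."""
    keep = set(same_jobs) | {0}

    def build(keeper, donor):
        # Step 3.1: copy elements of jobs with identical plans (and the 0s) in place
        child = [e if e in keep else None for e in keeper]
        remaining = [e for e in donor if e not in keep]
        empties = [i for i, e in enumerate(child) if e is None]
        # Step 3.2: balance the hole count against the donor's remaining elements
        if len(empties) > len(remaining):      # pad the extra holes with 0s
            for i in random.sample(empties, len(empties) - len(remaining)):
                child[i] = 0
            empties = [i for i, e in enumerate(child) if e is None]
        elif len(empties) < len(remaining):    # free some of the copied 0s
            zeros = [i for i, e in enumerate(child) if e == 0]
            for i in random.sample(zeros, len(remaining) - len(empties)):
                child[i] = None
            empties = [i for i, e in enumerate(child) if e is None]
        for i, e in zip(empties, remaining):   # fill the holes seriatim
            child[i] = e
        return child

    return build(p1_sched, p2_sched), build(p2_sched, p1_sched)
```

Each offspring thus keeps every job's operation count consistent with its (crossed-over) process plan string, which is exactly why Step 3 is needed.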
Fig. 11.8 Crossover for scheduling
Step 4: Then, two valid offspring O1 and O2 are obtained.

Take the six jobs in Table 11.1 as an example. An example of the crossover is presented in Fig. 11.8:

Step 1: Select a pair of chromosomes P1 and P2 and initialize two empty offspring, O1 and O2 (see Fig. 11.8).
Step 2: First, crossover the process plan strings of P1 and P2 to get the process plan strings of O1 and O2.
Step 2.1: Compare the process plan string of P1 with that of P2, and record the second, fourth, and sixth elements in P1 and P2.
Step 2.2: The second, fourth, and sixth elements of the process plan string of P1 are appended to the same positions in O1, while the second, fourth, and sixth elements of the process plan string of P2 are appended to the same positions in O2. The other elements (the first, third, and fifth) of the process plan string of P2 are appended to the same positions in O1, while the other elements of the process plan string of P1 are appended to the same positions in O2.
Step 3: Secondly, to match the process plan strings of O1 and O2 and avoid generating infeasible O1 and O2, the scheduling plan strings of P1 and P2 are crossed over as follows:
Step 3.1: The elements that equal 2, 4, or 6 (including 0) in the scheduling plan string of P1 are appended to the same positions in O1 and deleted from P1; the elements that equal 2, 4, or 6 (including 0) in the scheduling plan string of P2 are appended to the same positions in O2 and deleted from P2.
Step 3.2: In this example, n1 = 13, n2 = 12, so n1 > n2 and n1 − n2 = 1. For O1, one empty position is selected randomly and filled with 0 (marked in O1 in Fig. 11.8); then the remaining elements in the scheduling plan string of P2 are appended to the remaining empty positions in O1 seriatim. For O2, one 0 is selected randomly and set to empty (marked in O2 in Fig. 11.8); then the remaining elements in the scheduling plan string of P1 are appended to the empty positions in O2 seriatim.
Step 4: Then, two valid offspring O1 and O2 are obtained (see Fig. 11.8).
(2) Mutation: Two mutation operators are used in this chapter: one is two-point swapping mutation, and the other changes one job's alternative process plan. During the evolution, one of the two operators is chosen randomly in every generation.

The procedure of two-point swapping mutation for scheduling is described as follows:

Step 1: Select one chromosome P by the selection scheme.
Step 2: Select two points in the scheduling plan string of P randomly.
Step 3: Generate a new chromosome O by interchanging these two elements.

The procedure of the other mutation (changing one job's alternative process plan) is described as follows:

Step 1: Select one chromosome P by the selection scheme.
Step 2: Select one point in the process plan string of P randomly.
Step 3: Change the value of the selected element to another one in the selection range (the number of alternative process plans).
Step 4: Compare the numbers of operations of the old and new alternative process plans of the selected job. If the number increases, a new chromosome O is generated by changing the required number of randomly selected 0s in the scheduling plan string of P to the job number seriatim. If it decreases, a new chromosome O is generated by changing the required number of randomly selected job numbers in the scheduling plan string of P to 0 seriatim.

An example of the mutation is presented in Fig. 11.9. Above the dashed line is an example of two-point swapping mutation: the two selected points (2 and 4) are marked, and O is generated by interchanging 2 and 4. Below the dashed line is an example of changing one job's alternative process plan. The selected element 1 (for job 6), marked in the process plan string, is changed to 2. Because the number of operations of the second alternative process plan of job 6 is greater than that of the first, one randomly selected 0 in the scheduling plan string of P is changed to 6 in O.
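Both mutation operators can be sketched as follows; the list-based chromosome layout and the `num_ops` table (operation counts per alternative plan) are illustrative assumptions:

```python
import random

def swap_mutation(sched, rng=random):
    """Two-point swapping mutation on the scheduling plan string."""
    s = list(sched)
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def plan_change_mutation(sched, plan_sel, num_ops, rng=random):
    """Change one job's alternative process plan and repair the scheduling plan
    string by converting 0s to the job number (or back) as needed.
    num_ops[i][l]: operation count of plan l of job i+1; plan_sel is 0-based."""
    s, sel = list(sched), list(plan_sel)
    job = rng.randrange(len(sel))
    new = rng.choice([l for l in range(len(num_ops[job])) if l != sel[job]])
    delta = num_ops[job][new] - num_ops[job][sel[job]]
    sel[job] = new
    jid = job + 1
    if delta > 0:        # plan grew: turn spare 0s into this job's number
        for i in rng.sample([i for i, e in enumerate(s) if e == 0], delta):
            s[i] = jid
    elif delta < 0:      # plan shrank: turn surplus job numbers into 0s
        for i in rng.sample([i for i, e in enumerate(s) if e == jid], -delta):
            s[i] = 0
    return s, sel
```

The repair in `plan_change_mutation` keeps the string length fixed while matching each job's number of appearances to its newly selected plan.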
Fig. 11.9 Mutation for scheduling
11.4 Experimental Studies and Discussion
Some experiments have been conducted to evaluate the adaptability and superiority of the proposed GA-based integration approach. The approach is compared with a hierarchical approach and other methods. The experimental results and comparisons show that the performance of the approach is satisfactory.
11.4.1 Test Problems and Experimental Results

The proposed modified GA procedure was coded in C++ and run on a computer with a 2.40 GHz Pentium IV CPU. To illustrate the effectiveness and performance of the method in this chapter, five problem instances are considered. The GA parameters for process planning and scheduling are given in Table 11.2. The algorithm terminates when the number of generations reaches the maximum value.
11.4.1.1 Experiment 1
(1) Test problem: To test the proposed approach, six jobs with flexible process plans have been generated. Jobs 1, 3, and 5 are given in Fig. 11.2, and jobs 2, 4, and 6 are given in Fig. 11.10. There are eight machines on the shop floor, and the machine codes are the same across these six jobs. The transportation times between the machines (in the same time units as the processing times in Fig. 11.2) are given in Table 11.3. The objective for process planning is to maximize the objective function f(i, t) (Eq. 11.4). The objectives for the IPPS are to minimize the makespan Object1 (Eq. 11.5) and Object2, the synthetic consideration of the makespan and the balance of machine utilization (Eq. 11.6).

Table 11.2 GA parameters
Parameters                                  Process planning   Scheduling
The size of the population, S               40                 500
Total number of generations, M              30                 100
Tournament size, b                          2                  2
Probability of reproduction operation, pr   0.10               0.10
Probability of crossover operation, pc      0.60               0.80
Probability of mutation operation, pm       0.10               0.10
Fig. 11.10 Alternative process plans networks

Table 11.3 Transportation time between the machines

Machine code    1    2    3    4    5    6    7    8
1               0    3    7   10    3    5    8   12
2               3    0    4    7    5    3    5    8
3               7    4    0    3    8    5    3    5
4              10    7    3    0   10    8    5    3
5               3    5    8   10    0    3    7   10
6               5    3    5    8    3    0    4    7
7               8    5    3    5    7    4    0    3
8              12    8    5    3   10    7    3    0
(2) Experimental results: The experiments of process planning were carried out for one objective: minimizing the production time. Table 11.4 shows the experimental results, where the adjusted fitness (Eq. 11.4) is applied for all the jobs. The results include three near-optimal alternative process plans with their fitness and production time for each job (see Sect. 11.3.2).

Table 11.4 Experimental results of process planning

Job   Alternative process plans        Fitness       Production time
1     (1,2)-(5,3)-(8,7)-(9,8)          0.00862069    116
      (1,2)-(5,3)-(8,7)-(9,3)          0.00847458    118
      (1,4)-(5,3)-(8,7)-(9,8)          0.00833333    120
2     (1,2)-(5,3)-(8,7)-(9,8)          0.00862069    116
      (1,2)-(4,6)-(8,7)-(9,8)          0.00854701    117
      (1,2)-(5,3)-(8,7)-(9,3)          0.00847458    118
3     (1,3)-(2,6)-(3,2)-(9,2)          0.01052632    95
      (1,3)-(2,4)-(3,3)-(9,2)          0.01010101    99
      (1,3)-(4,6)-(5,1)-(9,2)          0.01000000    100
4     (6,1)-(8,5)-(9,2)                0.01075269    93
      (6,1)-(7,6)-(9,2)                0.01063830    94
      (1,3)-(4,6)-(5,1)-(9,2)          0.01000000    100
5     (1,2)-(2,3)-(3,2)-(9,3)          0.00862069    116
      (1,2)-(2,3)-(4,6)-(5,7)-(9,3)    0.00847458    118
      (1,2)-(2,3)-(3,1)-(9,3)          0.00833333    120
6     (1,2)-(2,3)-(3,2)-(9,3)          0.00862069    116
      (1,2)-(2,3)-(4,8)-(9,3)          0.00854701    117
      (1,2)-(6,1)-(7,7)-(8,8)-(9,3)    0.00826446    121

Figure 11.11 illustrates the convergence curve of job 1 for process planning. The curve shows the search capability and evolution speed of the algorithm. The computation time for job 1 is 1015 ms.

Fig. 11.11 Convergence curve of job 1 for process planning

The experimental results in Table 11.4 and Fig. 11.11 show that GA-based process planning reaches a good solution in a short time. This means that the first optimization step (the optimization of process planning) in the proposed model needs very little computation time; even for more complex systems, with the continuing growth of computing power, the computation time remains small. Therefore, the proposed model with two optimization steps is a promising approach for the IPPS.

The experiments on IPPS were carried out for two objectives: minimizing the makespan Object1 (Eq. 11.5) and the synthetic consideration of the makespan and the balance of machine utilization Object2 (Eq. 11.6). Table 11.5 shows the experimental results, which include the selected process plan of each job.

Table 11.5 Experimental results of the selected process plan for every job

Job   Object1                          Object2
1     (1,4)-(5,3)-(8,7)-(9,8)          (1,2)-(5,3)-(8,7)-(9,8)
2     (1,2)-(4,6)-(8,7)-(9,8)          (1,2)-(4,6)-(8,7)-(9,8)
3     (1,3)-(2,4)-(3,3)-(9,2)          (1,3)-(2,4)-(3,3)-(9,2)
4     (6,1)-(8,5)-(9,2)                (6,1)-(8,5)-(9,2)
5     (1,2)-(2,3)-(3,1)-(9,3)          (1,2)-(2,3)-(4,6)-(5,7)-(9,3)
6     (1,2)-(2,3)-(3,2)-(9,3)          (1,2)-(2,3)-(4,8)-(9,3)

Figure 11.12 illustrates the Gantt charts for scheduling. The schedule in Fig. 11.12(1) yields a makespan of 162 time units, and Object2 is equal to 712. In Fig. 11.12(2), the makespan is equal to 165 and Object2 is equal to 640. This means that if the makespan and the balance of machine utilization are considered synthetically, some optimality of the makespan may be lost; if only the makespan is considered, some balance of machine utilization may be lost. Therefore, which objective to choose should be based on the goal of the operator.
11.4.1.2 Experiment 2
Experiment 2 is adopted from Moon et al. [9]. In this experiment, the problem was constructed with five jobs with 21 operations and six machines in two plants, and the makespan was used as the objective. In this chapter, the transportation times between machines and the lot sizes are not considered. Table 11.6 shows the comparison of the resulting operation sequences with machine selection between the evolutionary algorithm [9] and the modified GA, and Fig. 11.13 illustrates the Gantt chart of this problem.
Fig. 11.12 (1) Gantt chart of experiment 1 based on Object1 (Makespan = 162, Object2 = 712). (2) Gantt chart of experiment 1 based on Object2 (Makespan = 165, Object2 = 640)

Table 11.6 Comparison of the results of operation sequences with machine selection (the data marked with * are adopted from Moon et al. [9])

Job   Evolutionary algorithm*                Modified GA
1     1(M4)-3(M3)-2(M1)-4(M6)                1(M1)-2(M4)-4(M6)-3(M3)
2     6(M1)-5(M2)-7(M1)                      5(M2)-6(M1)-7(M1)
3     9(M6)-11(M6)-8(M3)-10(M2)-12(M5)       9(M6)-11(M6)-8(M3)-10(M2)-12(M5)
4     13(M3)-14(M5)-15(M3)-16(M6)-17(M5)     13(M3)-15(M3)-14(M2)-16(M6)-17(M1)
5     19(M2)-18(M4)-20(M5)-21(M4)            18(M4)-19(M5)-20(M5)-21(M4)
Fig. 11.13 Gantt chart of experiment 2 (P1 means Operation 1, Makespan = 28)
11.4.1.3
Experiment 3
Experiment 3 is adopted from Morad and Zalzala [10]. In this experiment, the problem was constructed with five jobs and five machines, and the makespan was used as the objective. Each part undergoes four different operations in a specified order. Alternative machines for processing the parts are given in Table 11.7, along with the respective processing times.

Table 11.7 Sundaram and Fu data in Ref. [10] (the numbers in parentheses are the machine numbers)

Job   Operation 1            Operation 2     Operation 3     Operation 4
1     5(M1) 3(M2)            7(M2)           6(M3)           3(M4) 4(M5)
2     7(M1)                  4(M2) 6(M3)     7(M3) 7(M4)     10(M5)
3     4(M1) 5(M2) 8(M3)      5(M4)           6(M4) 5(M5)     4(M5)
4     2(M2) 6(M3)            8(M3)           3(M3) 8(M4)     7(M4) 4(M5)
5     3(M1) 5(M3)            7(M3)           9(M4) 6(M5)     3(M5)

Table 11.8 gives the experimental results of experiment 3. Using the SA, GA, and modified GA-based approaches, the best value obtained is 33, compared with the value of 38 obtained using the heuristic.

Table 11.8 Experimental results for experiment 3 (the data marked with * are adopted from Morad and Zalzala [10])

Solution methodology   Heuristic*   Simulated annealing*   GA-based approach*   Modified GA
Makespan               38           33                     33                   33

The advantage of the GA and modified GA approaches compared with the SA approach is the availability of multiple solutions, which introduces some flexibility into the IPPS system. In the SA approach, only one solution is produced for every run. Figure 11.14 illustrates the Gantt chart of this problem (modified GA).

Fig. 11.14 Gantt chart of experiment 3 (Makespan = 33)
11.4.1.4 Experiment 4
Experiment 4 is also adopted from Morad and Zalzala [10]. In this experiment, the problem was constructed with four jobs and three machines, and the makespan was used as the objective. Table 11.9 gives a comparison among the approaches, and Fig. 11.15 illustrates the Gantt chart of experiment 4.

Table 11.9 Comparison among the approaches (the data marked with * are adopted from Morad and Zalzala [10])

           Traditional method*   Approach in [4]*   Modified GA
Makespan   2030                  1265               1100

Fig. 11.15 Gantt chart of experiment 4 (Pt: Part; Op: Operation, Makespan = 1100)

As indicated in Table 11.9 and Fig. 11.15, the best solution found by the modified GA is not detected by the method in Morad and Zalzala [10]. It may not be the global optimum. However, the result implies that the proposed approach is more effective at escaping from local optima and obtains a better solution more easily than the method in Morad and Zalzala [10].
11.4.1.5 Experiment 5
Experiment 5 is adopted from Moon et al. [11]. In this experiment, the problem was constructed with five jobs and five machines, and the makespan is used as the objective. Table 11.10 shows the experimental results, and Fig. 11.16 illustrates the Gantt chart of this problem. Based on the experimental results of experiment 5, the best solution found by the modified GA is not detected by the method in Moon et al. [11]; the modified GA can obtain better scheduling plans.

Table 11.10 Experimental results of experiment 5 (the data marked with * are adopted from Moon et al. [11]; v1 means operation 1, M1 means machine 1)

           Moon et al.*                      Modified GA
Job 1      v1M2-v2M2                         v1M1-v2M2
Job 2      v3M4-v4M5                         v3M4-v4M5
Job 3      v5M1-v6M3-v7M2                    v7M2-v5M2-v6M3
Job 4      v8M3-v9M4                         v8M3-v9M4
Job 5      v10M1-v11M3-v12M1-v13M5           v12M3-v10M1-v11M1-v13M5
Makespan   16                                14

Fig. 11.16 Gantt chart of experiment 5 (Makespan = 14)
Fig. 11.17 Gantt chart of the hierarchical approach based on Object 1 (Makespan = 250, Object2 = 808)
11.4.2 Comparison with Hierarchical Approach

The proposed integration approach has been compared with a hierarchical approach in terms of the optimized results. The hierarchical approach is coded in C++ and implemented on the same computer as the GA. Hierarchical approaches have been widely used to solve an aggregated problem combining several interlinked subproblems. In this approach, the process planning problem is solved first, and then the scheduling problem is considered under the constraint of that solution. For the process planning problem, minimizing the production time is used as the objective, i.e., maximizing f(i, t) (Eq. 11.4). The scheduling objective of the hierarchical approach is to minimize the makespan Object1 (Eq. 11.5). To solve the process planning and scheduling problems hierarchically, GA is adopted in this chapter. The parameters of the hierarchical approach are the same as those of the proposed method (see Table 11.2). The solution of the process planning problem becomes an input to the scheduling problem; the selected process plan for each job is the optimal process plan. For the first experiment, these are the first plans of the 6 jobs in Table 11.4. Figure 11.17 illustrates the Gantt chart of the hierarchical approach, which yields a makespan of 250 time units with Object2 equal to 808.

The experimental results in Table 11.5 show that the selected process plans in the proposed approach are not all the optimal plans of Table 11.4; for example, the selected process plan of job 1 is the third plan. Comparing the proposed method with the hierarchical approach, the selected process plans of each job differ, and the optimized scheduling result of the hierarchical approach is not as good as the result from the integration model. The reason is that the hierarchical approach lacks integration and flexibility: there is no connection between process planning and scheduling. Therefore, the IPPS is necessary, and it can largely enhance the productivity of the manufacturing system.
11.5 Discussion

Overall, the experimental results indicate that the proposed approach is a more acceptable approach for the IPPS, for the following reasons. First, in the proposed approach, the selected process plans are not all the individually optimal ones; the approach considers all the conditions synthetically. Second, in some experiments, the modified GA-based approach obtains better results than the other methods. This means that the proposed approach is more likely to obtain the best results.
11.6 Conclusion

In the traditional approach, process planning and scheduling were regarded as two separate tasks and performed sequentially. However, the functions of the two systems are usually complementary, so research on the integration of process planning and scheduling is necessary. The research presented in this chapter developed a new integration model, and a modified GA-based approach has been developed to facilitate the integration and optimization of these two systems. With the integration model, the process planning and scheduling systems work simultaneously. To improve the optimization performance of the proposed method, more efficient genetic representations and operator schemes have been developed. Experimental studies have been used to test the method, and comparisons have been made between this method and other methods to indicate the superiority and adaptability of the proposed approach. The experimental results show that the proposed method is a promising and very effective method for the integration of process planning and scheduling. With the new method developed in this work, it would be possible to increase the efficiency of manufacturing systems. One future work is to apply the proposed method to practical manufacturing systems. The increased use of this approach will most likely enhance the performance of future manufacturing systems.
References

1. Li XY, Shao XY, Gao L (2008) Optimization of flexible process planning by genetic programming. Int J Adv Manuf Technol 38:143–153
2. Li WD, McMahon CA (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95
3. Saygin C, Kilic SE (1999) Integrating flexible process plans with scheduling in flexible manufacturing systems. Int J Adv Manuf Technol 15:268–280
4. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
5. Catron AB, Ray SR (1991) ALPS—a language for process specification. Int J Comput Integr Manuf 4:105–113
6. Sormaz D, Khoshnevis B (2003) Generation of alternative process plans in integrated manufacturing systems. J Intell Manuf 14:509–526
7. Fattahi P, Mehrabad MS, Jolai F (2007) Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. J Intell Manuf 18(3):331–342
8. Langdon WB, Qureshi A (1995) Genetic programming—computers using natural selection to generate programs. Technical report RN/95/76, Gower Street, London WC1E 6BT, UK
9. Moon C, Seo Y (2005) Evolutionary algorithm for advanced process planning and scheduling in a multi-plant. Comput Ind Eng 48:311–325
10. Morad N, Zalzala A (1999) Genetic algorithms in integrated process planning and scheduling. J Intell Manuf 10:169–179
11. Moon C, Lee YH, Jeong CS, Yun YS (2008) Integrated process planning and scheduling in a supply chain. Comput Ind Eng 54:1048–1061
Chapter 12
An Effective Hybrid Algorithm for IPPS
12.1 Hybrid Algorithm Model

12.1.1 Traditional Genetic Algorithm

GA is one of the evolutionary algorithms. It was developed by Holland and Rechenberg [1]. By imitating the basic principles of natural evolution, they created an optimization algorithm which has been applied successfully in many areas. GA is able to search very large solution spaces efficiently at a modest computational cost, since it uses probabilistic transition rules instead of deterministic ones. It is easy to implement and is increasingly used to solve inherently intractable (NP-hard) problems.
12.1.2 Local Search Strategy

Tabu search (TS) [2] is a meta-heuristic method which has been successfully applied to many scheduling problems and other combinatorial optimization problems. TS allows the search process to explore solutions that do not decrease the objective function value, provided these solutions are not forbidden. This is usually achieved by keeping track of recent solutions in terms of the action used to transform one solution into the next. TS consists of several elements: the neighborhood structure, the move attributes, the tabu list, the aspiration criteria, and the termination criteria. TS has emerged as one of the most efficient local search strategies for scheduling problems. In this study, it has been adopted as the local search strategy for every individual, and the neighborhood structure of the job shop scheduling problem used in Nowicki and Smutnicki [3] has been adopted here. Because of the differences between the job shop scheduling problem and the IPPS problem, it also has been

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_12
Fig. 12.1 Basic flowchart of TS: set the parameters, empty the tabu list, and generate the initial solution; until the termination criterion is satisfied, generate the neighborhood solutions by the neighborhood structure; if a neighbor satisfies the aspiration criterion, set it as the current and best solution, otherwise set the best non-tabu neighbor as the current solution; update the tabu list; finally, output the best solution.
modified to avoid infeasible solutions. The basic flowchart of TS is shown in Fig. 12.1. In the proposed HA, when an individual is to perform a local search, it is first converted to a feasible schedule, which is then used as the initial solution of TS. After the local search, the output solution of TS is encoded back into a feasible individual.
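A generic tabu search skeleton matching Fig. 12.1 might look like the sketch below; for brevity, the move attribute stored in the tabu list is the whole neighbor solution (as a tuple), whereas the book's implementation tracks moves of the Nowicki-Smutnicki neighborhood structure:

```python
from collections import deque

def tabu_search(sol, evaluate, neighbors, max_iter=100, tenure=7):
    """Generic TS skeleton (a sketch of the flow in Fig. 12.1)."""
    best = cur = list(sol)
    best_f = evaluate(sol)
    tabu = deque(maxlen=tenure)              # fixed-length tabu list
    for _ in range(max_iter):
        cand, cand_f = None, None
        for nb in neighbors(cur):
            f = evaluate(nb)
            if f < best_f:                   # aspiration criterion: accept at once
                cand, cand_f = nb, f
                break
            if tuple(nb) not in tabu and (cand_f is None or f < cand_f):
                cand, cand_f = nb, f         # best non-tabu neighbor so far
        if cand is None:                     # whole neighborhood is tabu: stop
            break
        cur = cand
        if cand_f < best_f:
            best, best_f = cur, cand_f
        tabu.append(tuple(cur))
    return best, best_f

# Toy usage: a bit-flip neighborhood, minimizing the number of 1s
def flips(sol):
    for i in range(len(sol)):
        nb = list(sol)
        nb[i] = 1 - nb[i]
        yield nb
```

Note that accepting the best non-tabu neighbor even when it is worse than the current solution is what lets TS climb out of local optima.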
12.1.3 Hybrid Algorithm Model

GA can search very large solution spaces efficiently, but its local search ability is weak. TS, as a local search algorithm, can explore the local space very well. By analyzing the optimization mechanisms of these two algorithms, a hybrid algorithm model that synthesizes the advantages of GA and TS has been proposed to solve the IPPS problem. The basic procedure of the HA model is described as follows:

Step 1: Initialize the population randomly and set the parameters;
Step 2: Evaluate the whole population;
Step 3: Set Gen = Gen + 1;
Step 4: Generate a new population through reproduction, crossover, and mutation;
Step 5: Perform a local search by TS for every individual;
Step 6: Repeat Steps 2-5 cyclically until the termination criteria are satisfied.

According to this model, every individual first evolves by the genetic operators and then undergoes the local search. To implement the HA model effectively, an efficient encoding scheme of individuals, genetic operators, and a local search strategy are necessary. In the proposed algorithm, TS is adopted as the local search method. In the following sections, the details of the HA are presented.
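The HA model (Steps 1-6) can be sketched as a memetic loop; the toy bit-string problem and the greedy local search below are illustrative stand-ins for the IPPS chromosome and TS, used only to make the skeleton runnable:

```python
import random

def hybrid_ga(pop, evaluate, crossover, mutate, local_search,
              generations=30, pc=0.8, pm=0.1, rng=random):
    """Skeleton of the HA model: GA evolution (Steps 3-4), then a local
    search per individual (Step 5), repeated until termination (Step 6)."""
    for _ in range(generations):
        scored = sorted(pop, key=evaluate)
        nxt = [scored[0]]                              # keep the best (reproduction)
        while len(nxt) < len(pop):
            a, b = rng.sample(scored[: max(2, len(pop) // 2)], 2)
            child = crossover(a, b) if rng.random() < pc else list(a)
            if rng.random() < pm:
                child = mutate(child)
            nxt.append(local_search(child, evaluate))  # Step 5: e.g. TS
        pop = nxt
    return min(pop, key=evaluate)

# Toy usage on a bit-string minimization problem:
def one_point(a, b):
    cut = random.randrange(1, len(a))
    return list(a[:cut]) + list(b[cut:])

def flip(ind):
    s = list(ind)
    i = random.randrange(len(s))
    s[i] = 1 - s[i]
    return s

def greedy(ind, evaluate):          # simple stand-in for the TS local search
    cur, improved = list(ind), True
    while improved:
        improved = False
        for i in range(len(cur)):
            nb = cur[:]
            nb[i] = 1 - nb[i]
            if evaluate(nb) < evaluate(cur):
                cur, improved = nb, True
    return cur

best = hybrid_ga([[random.randint(0, 1) for _ in range(12)] for _ in range(8)],
                 evaluate=sum, crossover=one_point, mutate=flip, local_search=greedy)
```

The design point of the model is visible in the loop: the population-level operators explore the solution space broadly, while the per-individual local search exploits each region intensively.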
12.2 Hybrid Algorithm for IPPS

12.2.1 Encoding and Decoding

Each chromosome in the scheduling population consists of three parts with different lengths, as shown in Fig. 12.2. The first part of the chromosome is the alternative process plan string. The positions 1 to N in this string represent jobs 1 to N, and the number in the ith position represents the selected alternative process plan of job i.

The second part of the chromosome is the scheduling plan string. In this chapter, the scheduling encoding, made up of the jobs' numbers, is the operation-based representation. This representation uses an unpartitioned permutation with P_il repetitions of the job numbers: each job number appears P_il times in the chromosome, and by scanning the chromosome from left to right, the f th appearance of a job number refers to the f th operation in the selected process plan of this job. The important feature of this representation is that any permutation of the chromosome can be decoded into a feasible solution. It is assumed that there are N jobs, and q_i is the number of operations of the process plan which has the most operations among all the alternative process plans of job i. The length of the scheduling plan string is then equal to Σ q_i. The number of appearances of i in the scheduling plan string is equal to the number of operations of the selected alternative process plan. Based on this principle, the composition elements of the scheduling plan string are determined; if the number of these elements is less than Σ q_i, all the remaining elements are filled with 0. Therefore, the scheduling plan string is made up of job numbers and 0s. One scheduling plan string is generated by arranging all the elements randomly.

Fig. 12.2 The chromosome of the integration: scheduling plan string 5 3 4 1 2 6 0 4 3 2 1 6 5 0 2 3 4 5 6 1 0 2 1 3 4 0 5 1 6 0 2 4 5 1 3; machine string 1 1 1 1 1 1 1 1 1 1 2 0 2 3 3 1 1 0 1 1 1 1 1 1 0 2 1 5 5 2 1 1 1 2 0; alternative process plan string 1 4 5 3 1 5
The alternative process plan string is generated by choosing an alternative process plan randomly for each job.

The third part of the chromosome is the machine string. It denotes the selected machines of the corresponding operations of all jobs. The length of this string is equal to the length of the scheduling plan string. It contains N parts, and the length of the ith part is q_i; the ith part denotes the selected machines of the corresponding operations of job i. Assume the hth operation in the selected lth alternative process plan of job i can be processed by a machine set S_ilh = {m_ilh1, m_ilh2, …, m_ilhc_ilh}. The ith part of this string can be denoted as {g_il1, g_il2, …, g_ilh, …, g_ilqi}, where g_ilh is an integer between 1 and c_ilh, meaning that the hth operation in the lth selected alternative process plan of job i is assigned to the g_ilh-th machine m_ilhg_ilh in S_ilh. If the number of operations p of the selected alternative process plan of job i is less than q_i, then {g_il(p+1), …, g_ilqi} are equal to 0.

Figure 12.2 shows an example of an individual scheduling plan. In this example, N is equal to 6, q1 = 6, q2 = 6, q3 = 6, q4 = 7, q5 = 5, and q6 = 5, so Σ q_i is equal to 35. The scheduling plan string and the machine string are therefore made up of 35 elements, and the process plan string is made up of 6 elements. For job 1, the first alternative process plan has been chosen, and there are 6 operations in the selected process plan, so 6 elements of the scheduling plan string are 1. In the first part of the machine string, the first element is 1, which means that the first operation in the selected alternative process plan of job 1 is assigned to the first machine m_1111 in S_111. The other elements in the chromosome can be interpreted in the same way. The number of 0s in the scheduling plan string and in the machine string is equal to 5 (= 35 − 6 − 5 − 4 − 6 − 5 − 4).
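Generating a random three-part chromosome of this form can be sketched as follows; the `alt_plans` structure (per job, per plan, the list of machine-set sizes c_ilh) is an assumption for illustration:

```python
import random

def random_chromosome(alt_plans, rng=random):
    """Build the three-part chromosome of Fig. 12.2 (a sketch).
    alt_plans[i][l]: list of machine-set sizes c_ilh for each operation h
    of plan l of job i (job numbers in the strings are 1-based)."""
    q = [max(len(ops) for ops in plans) for plans in alt_plans]
    plan_sel = [rng.randrange(len(plans)) for plans in alt_plans]
    # scheduling plan string: job i+1 repeated once per operation, padded with 0s
    sched = []
    for i, l in enumerate(plan_sel):
        sched += [i + 1] * len(alt_plans[i][l])
    sched += [0] * (sum(q) - len(sched))
    rng.shuffle(sched)
    # machine string: one machine index per operation, each job part padded with 0s
    mach = []
    for i, l in enumerate(plan_sel):
        ops = alt_plans[i][l]
        mach += [rng.randrange(c) + 1 for c in ops] + [0] * (q[i] - len(ops))
    # alternative process plan string (1-based plan numbers)
    return sched, mach, [l + 1 for l in plan_sel]
```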
The permutations can be decoded into semi-active, active, non-delay, and hybrid schedules; the active schedule is adopted in this chapter. Recall that at this decoding stage a particular individual is given, that is, a fixed alternative process plan for each job. The notation used to explain the procedure is as follows:
M: the total number of machines;
o_ijl: the jth operation in the lth alternative process plan of the ith job;
as_ijl: the allowable starting time of operation o_ijl;
s_ijl: the earliest starting time of operation o_ijl;
k: the alternative machine corresponding to o_ijl;
t_ijlk: the processing time of operation o_ijl on machine k, t_ijlk > 0;
c_ijl: the earliest completion time of operation o_ijl, i.e., c_ijl = s_ijl + t_ijlk.
The decoding procedure is as follows:
Step 1: Generate the machine of each operation based on the machine string of the chromosome;
Step 2: Determine the set of operations for every machine: Ma = {o_ijl}, 1 ≤ a ≤ M;
Step 3: Determine the set of machines for every job: JMd = {machine}, 1 ≤ d ≤ N;
Step 4: Compute the allowable starting time of every operation: as_ijl = c_i(j-1)l (o_ijl ∈ Ma), where c_i(j-1)l is the completion time of the preceding operation of o_ijl in the same job;
Step 5: Check the idle time of the machine of o_ijl and obtain its idle areas [t_s, t_e]; check these areas in turn: if max(as_ijl, t_s) + t_ijlk ≤ t_e, the earliest starting time is s_ijl = max(as_ijl, t_s); otherwise, check the next area. If no area satisfies this condition, s_ijl = max(as_ijl, c(o_ijl - 1)), where c(o_ijl - 1) is the completion time of the preceding operation on the same machine;
Step 6: Compute the completion time of every operation: c_ijl = s_ijl + t_ijlk;
Step 7: Generate the sets of starting and completion times for every operation of every job: Td(s_ijl, c_ijl), 1 ≤ d ≤ N.
From the above procedure, the sets of starting and completion times of every operation of every job are obtained; together they constitute a schedule for the shop.
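The gap-filling decoding steps above can be sketched as follows. The function and the tiny instance are illustrative: each operation is given directly as a (job, machine, processing time) tuple in scheduling-string order, with operations of the same job already in precedence order, as the encoding guarantees:

```python
def decode(sequence):
    """Active-schedule decoding sketch.

    sequence - operations in scheduling-string order, each (job, machine, ptime).
    Returns the list of (job, machine, start, end) and the makespan.
    """
    job_ready = {}   # completion time of the job's previous operation (Step 4)
    busy = {}        # machine -> sorted list of (start, end) busy intervals
    schedule = []
    for job, mach, p in sequence:
        allow = job_ready.get(job, 0.0)         # allowable starting time as_ijl
        intervals = busy.setdefault(mach, [])
        start = None
        prev_end = 0.0
        for s, e in intervals:                  # Step 5: scan idle areas in turn
            if max(allow, prev_end) + p <= s:   # the idle area [prev_end, s] fits
                start = max(allow, prev_end)
                break
            prev_end = e
        if start is None:                       # no idle area fits: append at the end
            start = max(allow, prev_end)
        end = start + p                         # Step 6: completion time
        intervals.append((start, end))
        intervals.sort()
        job_ready[job] = end
        schedule.append((job, mach, start, end))
    makespan = max(e for _, _, _, e in schedule)
    return schedule, makespan
```

In the usage below, job 2's operation on M1 is inserted into the idle gap left before job 1's delayed operation, which is exactly what makes the schedule active rather than merely semi-active.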
12.2.2 Initial Population and Fitness Evaluation
The encoding of the scheduling plan string in the chromosome is an operation-based representation. An important feature of this representation is that any permutation of the chromosome can be decoded into a feasible schedule; it cannot violate the precedence constraints among operations. The initial population is generated according to this encoding principle. In this chapter, the makespan is used as the objective.
12.2.3 Genetic Operators for IPPS
It is important to employ good operators that can effectively deal with the problem and efficiently lead to excellent individuals in the population. Genetic operators can generally be divided into three classes: reproduction, crossover, and mutation, and in each class a large number of operators have been developed.
(1) Reproduction
The tournament selection scheme with a user-defined reproduction probability has been used for the reproduction operation. In tournament selection, a number of individuals are selected at random from the population (depending on the tournament size, typically between 2 and 7), and the individual with the best fitness is chosen for reproduction. The tournament selection approach allows a trade-off between exploration and exploitation of the gene pool [4]; the selection pressure can be modified by changing the tournament size.
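A minimal sketch of the tournament selection scheme described above, assuming a minimized fitness such as makespan (names are illustrative):

```python
import random

def tournament_select(population, fitness, size=2):
    """Pick the best of `size` randomly drawn individuals.

    Smaller fitness wins, since makespan is minimized; a larger
    tournament size raises the selection pressure.
    """
    contestants = random.sample(range(len(population)), size)
    best = min(contestants, key=lambda idx: fitness[idx])
    return population[best]
```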
(2) Crossover
There are three parts in a chromosome, and two separate crossover operations are applied to each pair of selected chromosomes. The procedure of the first crossover, for the scheduling and process plan strings, is as follows:
Step 1: Select a pair of chromosomes P1 and P2 by the selection scheme and initialize two empty offspring O1 and O2;
Step 2: First, cross over the process plan strings of P1 and P2 to obtain the process plan strings of O1 and O2:
Step 2.1: Generate a crossover point randomly;
Step 2.2: The elements of the process plan strings of P1 and P2 on the left side of the crossover point are appended to the same positions in O1 and O2, respectively; the elements on the right side of the crossover point are appended to the same positions in O2 and O1, respectively.
Step 3: Second, in order to match the process plan strings of O1 and O2 and avoid producing invalid offspring, the scheduling plan strings of P1 and P2 are crossed over as follows:
Step 3.1: The elements of the scheduling plan string of P1 whose values correspond to positions (jobs) on the left side of the crossover point in the process plan string, together with 0s, are appended to the same positions in O1 and deleted from P1; the scheduling plan string of P2 is treated in the same way for O2;
Step 3.2: Count the remaining elements in the scheduling plan strings of P1 and P2, n1 and n2. If n1 ≥ n2, then for O1 the number of empty positions (n1) is larger than the number of remaining elements in P2 (n2). So n1 - n2 empty positions in O1 are selected randomly and filled with 0; then the remaining elements of the scheduling plan string of P2 are appended to the remaining empty positions in O1 seriatim.
For O2, n1 ≥ n2 implies that the number of empty positions in O2 (n2) is smaller than the number of remaining elements in P1 (n1). So n1 - n2 0s in O2 are selected randomly and set to empty, and then the remaining elements of the scheduling plan string of P1 are appended to the empty positions in O2 seriatim; if n1 < n2, the procedure is reversed.
Step 4: Two valid offspring O1 and O2 are thereby generated.
An example of the first crossover operation is presented in Fig. 12.3.
Step 1: Select a pair of chromosomes P1 and P2 and initialize two empty offspring O1 and O2 (see Fig. 12.3);
Step 2: First, cross over the process plan strings of P1 and P2 to obtain the process plan strings of O1 and O2:
Step 2.1: Generate a crossover point randomly; in this example it is the third position;
Step 2.2: The elements of the process plan strings of P1 and P2 on the left side of the third position are appended to the same positions in O1 and O2, respectively; the elements on
Fig. 12.3 The crossover operation for scheduling and alternative process plan strings
the right side of the third position are appended to the same positions in O2 and O1, respectively.
Step 3: Second, in order to match the process plan strings of O1 and O2 and avoid producing invalid offspring, the scheduling plan strings of P1 and P2 are crossed over as follows:
Step 3.1: The elements equal to 0, 1, 2, or 3 in the scheduling plan string of P1 are appended to the same positions in O1 and deleted from P1; the elements equal to 0, 1, 2, or 3 in the scheduling plan string of P2 are appended to the same positions in O2 and deleted from P2;
Step 3.2: In this example, n1 = 14, n2 = 12, n1 > n2, and n1 - n2 = 2. For O1, two empty positions are selected randomly and filled with 0 (marked in O1 in Fig. 12.3). Then, the remaining elements of the scheduling plan string of P2 are appended to the remaining empty positions in O1 seriatim. For O2, two 0s are selected randomly and set to empty (marked in O2 in Fig. 12.3), and the remaining elements of the scheduling plan string of P1 are appended to the empty positions in O2 seriatim.
Step 4: Two valid offspring O1 and O2 are thereby generated (see Fig. 12.3).
Second, a two-point crossover is implemented on the machine string. Two positions are selected by randomly generating two numbers, and two new strings (the machine strings of O1 and O2) are generated by swapping all characters between these positions of the two parent strings (the machine strings of P1 and P2). After this procedure, in order to match the process plan strings of O1 and O2 and avoid producing invalid offspring, the machine strings of O1 and O2 have to be checked as follows:
Step 1: Record the number of occurrences of each job in the scheduling plan strings of O1 and O2 (excluding 0s), n1i and n2i;
Step 2: For O1, compare n1i with the number of elements (excluding 0s) in the ith part of the machine string of O1 (r1i).
If n1i ≥ r1i, the n1i - r1i 0s from the (r1i + 1)th to the n1i th position of this part of the machine string are set to randomly chosen valid values g_ilh (see Sect. 12.2.1). The elements from the first to the r1i th position are left unchanged, and the other elements of this part are set to 0. The machine string of O1 then matches the other two parts of O1. For O2, the same procedure is applied. If n1i < r1i, the procedure is reversed.
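The two-point crossover on the machine string itself (before the repair step described above) can be sketched as:

```python
import random

def two_point_crossover(m1, m2):
    """Two-point crossover of the machine strings: swap all elements
    between two randomly chosen cut positions. The subsequent repair
    against the offspring's process plan strings is omitted here."""
    a, b = sorted(random.sample(range(len(m1) + 1), 2))
    c1 = m1[:a] + m2[a:b] + m1[b:]
    c2 = m2[:a] + m1[a:b] + m2[b:]
    return c1, c2
```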
Fig. 12.4 The crossover operation for machine string

Fig. 12.5 Mutation operations (two-point swapping of the scheduling plan string, changing one job's alternative process plan, and mutation of the machine string, shown per job part for jobs 1-6)
One example of this crossover procedure is presented in Fig. 12.4.
(3) Mutation
In this chapter, three mutation operations are used: a two-point swapping mutation, changing one job's alternative process plan, and mutation of the machine string. In the evolution procedure, one operator is chosen randomly in every generation. Figure 12.5 gives an example of the three mutation operators.
The procedure of the two-point swapping mutation for scheduling is as follows:
Step 1: Select one chromosome P by the selection scheme;
Step 2: Select two points in the scheduling plan string of P randomly;
Step 3: Generate a new chromosome O by interchanging these two elements.
The procedure of the second mutation (changing one job's alternative process plan) is as follows:
Step 1: Select one chromosome P by the selection scheme;
Step 2: Select one point in the process plan string of P randomly;
Step 3: Change the value of the selected element to another value within its range (the number of alternative process plans of that job);
Step 4: Compare the number of operations of the job's newly selected alternative process plan with that of the old one. If it has increased, a new chromosome O is generated by changing randomly selected surplus 0s in the scheduling plan string of P to the job's number seriatim; if it has decreased, a new chromosome O is generated by changing randomly selected surplus occurrences of the job's number in the scheduling plan string of P to 0 seriatim.
The mutation of the machine string is applied to change the alternative machine represented in the machine string of the chromosome. One element of the machine string is randomly chosen from the selected individual and mutated by altering the machine number to another of the alternative machines at random.
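Two of the three mutation operators can be sketched as follows; the repair of the scheduling plan string after a plan change (Step 4 above) is omitted, and all names are illustrative:

```python
import random

def swap_mutation(sched):
    """Two-point swapping mutation on the scheduling plan string."""
    s = sched[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def machine_mutation(machines, candidates):
    """Mutate one machine gene: pick a non-padding position and reassign
    it to a different candidate machine. candidates[pos] is the number of
    alternative machines c_ilh for the operation at that position."""
    m = machines[:]
    pos = random.choice([p for p, g in enumerate(m) if g != 0])
    alternatives = [v for v in range(1, candidates[pos] + 1) if v != m[pos]]
    if alternatives:
        m[pos] = random.choice(alternatives)
    return m
```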
12.3 Experimental Studies and Discussions

12.3.1 Test Problems and Experimental Results

The proposed HA (GA + TS) procedure was coded in C++ and run on a computer with a 2.0 GHz Core(TM) 2 Duo CPU. To illustrate the effectiveness and performance of the proposed HA, three instances from other papers are adopted here. The objective is to minimize makespan. The HA parameters for these problem instances are given in Table 12.1.

Table 12.1 The HA parameters
  The size of the population, PopSize: 200
  Total number of generations, MaxGen: 100
  The permitted maximum step size with no improvement, MaxStagnantStep: 20
  The maximum iteration size of TS, MaxIterSize: 200 × (CurIter/MaxGen)
  Tournament size, b: 2
  Probability of reproduction operation, pr: 0.05
  Probability of crossover operation, pc: 0.8
  Probability of mutation operation, pm: 0.1
  Length of tabu list, maxT: 9

In HA, the GA terminates when the number of generations reaches the maximum value (MaxGen); the TS terminates when the number of iterations reaches the maximum size (MaxIterSize, where CurIter is the current generation of the GA) or when the permitted maximum number of steps without improvement (MaxStagnantStep) is reached. From the equation MaxIterSize = 200 × (CurIter/MaxGen), the MaxIterSize becomes larger
along with CurIter as the computation proceeds. In the early stage of the evolution of HA, the GA cannot yet supply good initial individuals for the TS, so the TS is unlikely to find good solutions; the MaxIterSize of TS is therefore kept small, which saves computation time. In the late stage, the GA can provide good initial individuals for the TS, and enlarging the MaxIterSize helps the TS find good solutions. Therefore, the maximum iteration size of TS is adjusted adaptively during the evolution process. This balances the exploitation and exploration of HA well and saves computation time.
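The adaptive iteration budget can be written directly from the equation above (the function name is illustrative):

```python
def max_iter_size(cur_gen, max_gen=100, cap=200):
    """Adaptive TS budget: MaxIterSize = 200 * (CurIter / MaxGen).
    Early GA generations get a small TS budget; late generations, when
    the GA supplies good starting points, get up to the full 200."""
    return int(cap * cur_gen / max_gen)
```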
12.3.1.1 Experiment 1

Experiment 1 is adopted from Chan et al. [5]. In this experiment, one problem is constructed with 8 jobs and 5 machines; the outsourcing machine of the original paper is not considered, and only its data are used. The integration model is compared with the no-integration model, in which there is no integration between process planning and scheduling. The purpose of this experiment is to show that research on IPPS is necessary. Table 12.2 shows the experimental results, and Fig. 12.6 illustrates the Gantt chart of this problem. The results show that the no-integration model performs worse than the integration model, which obtains better scheduling plans. This implies that research on IPPS is necessary.

Table 12.2 The experimental results of experiment 1
  Model    | No integration | Integration
  Makespan | 28             | 24
Fig. 12.6 Gantt chart of experiment 1 (makespan = 24)
12.3.1.2 Experiment 2
Experiment 2 is adopted from Kim et al. [6] and Kim [7]. In this experiment, 24 test-bed problems are constructed from 18 jobs and 15 machines. In designing the problems, the complexity as well as the number of jobs was divided into five levels. Table 12.3 shows the experimental results, together with comparisons between the proposed HA and the methods in Kim et al. [8]. Figure 12.7 illustrates the Gantt chart of problem 3 in this experiment. Based on the results in Table 12.3, only one solution of HA (problem 5) is worse than that of SEA, a few solutions of HA equal those of SEA, and almost all solutions of HA (18 problems) are better than those of the other methods. The merits of HA on large-scale problems are obvious, which means the proposed HA-based approach is more effective in obtaining optimal solutions.
12.3.1.3 Experiment 3

Experiment 3 is adopted from Leung et al. [9]. In this experiment, one problem is constructed with 5 jobs and 3 machines. Each job undergoes four different operations, and each operation can be processed on one or more machines with respective processing times. Table 12.4 shows the experimental results, together with comparisons between the proposed HA and previous methods [9]. Figure 12.8 illustrates the Gantt chart of this problem. The results show that the proposed method obtains better results and can solve the IPPS problem effectively.
12.4 Discussion

Overall, the experimental results indicate that research on IPPS is necessary and that the proposed approach is an effective and practical approach to the IPPS problem, for the following reasons. First, in the proposed HA approach, the selected process plans are not necessarily the individually optimal ones; the approach considers all conditions jointly. Second, in most experiments, the HA-based approach obtains better results than other previously developed methods, which means the proposed approach is more likely to obtain better results for IPPS problems.
Table 12.3 The experimental results and comparisons of experiment 2

Problem | Jobs | Job number                                    | Hierarchical approach* | CCGA* | SEA* | HA
1       | 6    | 1-2-3-10-11-12                                | 483 | 458 | 428 | 427
2       | 6    | 4-5-6-13-14-15                                | 383 | 363 | 343 | 343
3       | 6    | 7-8-9-16-17-18                                | 386 | 366 | 347 | 345
4       | 6    | 1-4-7-10-13-16                                | 328 | 312 | 306 | 306
5       | 6    | 2-5-8-11-14-17                                | 348 | 327 | 319 | 322
6       | 6    | 3-6-9-12-15-18                                | 506 | 476 | 438 | 429
7       | 6    | 1-4-8-12-15-17                                | 386 | 378 | 372 | 372
8       | 6    | 2-6-7-10-14-18                                | 376 | 363 | 343 | 343
9       | 6    | 3-5-9-11-13-16                                | 507 | 464 | 428 | 427
10      | 9    | 1-2-3-5-6-10-11-12-15                         | 504 | 476 | 443 | 430
11      | 9    | 4-7-8-9-13-14-16-17-18                        | 413 | 410 | 369 | 369
12      | 9    | 1-4-5-7-8-10-13-14-16                         | 361 | 360 | 328 | 327
13      | 9    | 2-3-6-9-11-12-15-17-18                        | 505 | 498 | 452 | 436
14      | 9    | 1-2-4-7-8-12-15-17-18                         | 423 | 420 | 381 | 380
15      | 9    | 3-5-6-9-10-11-13-14-16                        | 496 | 482 | 434 | 427
16      | 12   | 1-2-3-4-5-6-10-11-12-13-14-15                 | 521 | 512 | 454 | 446
17      | 12   | 4-5-6-7-8-9-13-14-15-16-17-18                 | 474 | 466 | 431 | 423
18      | 12   | 1-2-4-5-7-8-10-11-13-14-16-17                 | 417 | 396 | 379 | 377
19      | 12   | 2-3-5-6-8-9-11-12-14-15-17-18                 | 550 | 535 | 490 | 476
20      | 12   | 1-2-4-6-7-8-10-12-14-15-17-18                 | 473 | 450 | 447 | 432
21      | 12   | 2-3-5-6-7-9-10-11-13-14-16-18                 | 525 | 501 | 477 | 446
22      | 15   | 2-3-4-5-6-8-9-10-11-12-13-14-16-17-18         | 560 | 567 | 534 | 518
23      | 15   | 1-4-5-6-7-8-9-11-12-13-14-15-16-17-18         | 533 | 531 | 498 | 470
24      | 18   | 1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18  | 607 | 611 | 587 | 544

SEA, symbiotic evolutionary algorithm; CCGA, cooperative co-evolutionary genetic algorithm. The data marked by * are adopted from Kim et al. [8]
Fig. 12.7 Gantt chart of problem 3 in experiment 2 (makespan = 345)
Table 12.4 The experimental results of experiment 3
  Solution method | Petri net* | ACO_03* | ACO_Agent06* | ACO_Agent09* | HA
  Makespan        | 439        | 420     | 390          | 380          | 360
The results marked by * are adopted from Leung et al. [9]
Fig. 12.8 Gantt chart of experiment 3 (makespan = 360)
12.5 Conclusion

Considering the complementarity of process planning and scheduling, this research has developed a hybrid algorithm-based approach to facilitate the integration and optimization of these two functions, which are carried out simultaneously. To improve the optimization performance of the proposed approach, efficient genetic representations, operators, and a local search strategy have been developed. To verify its feasibility, three experimental studies have been carried out to compare this approach with previous methods. The experimental results show that research on IPPS is necessary and that the proposed approach achieves significant improvement.
References
1. Rothlauf F (2006) Representations for genetic and evolutionary algorithms. Springer, Berlin, Heidelberg
2. Glover F, Laguna M (1997) Tabu search. Kluwer Academic Publishers
3. Nowicki E, Smutnicki C (1996) A fast taboo search algorithm for the job shop scheduling problem. Manage Sci 42(6):797–813
4. Langdon WB, Qureshi A (1995) Genetic programming—computers using "Natural Selection" to generate programs. Technical Report RN/95/76, Gower Street, London WC1E 6BT, UK
5. Chan FTS, Kumar V, Tiwari MK (2006) Optimizing the performance of an integrated process planning and scheduling problem: an AIS-FLC based approach. In: Proceedings of CIS, IEEE
6. Kim KH, Song JY, Wang KH (1997) A negotiation based scheduling for items with flexible process plans. Comput Ind Eng 33(3–4):785–788
7. Kim YK (2003) A set of data for the integration of process planning and job shop scheduling. Available at http://syslab.chonnam.ac.kr/links/data-pp&s.doc
8. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
9. Leung CW, Wong TN, Mak KL, Fung RYK (2009) Integrated process planning and scheduling by an agent-based ant colony optimization. Comput Ind Eng. https://doi.org/10.1016/j.cie.2009.09.003
Chapter 13
An Effective Hybrid Particle Swarm Optimization Algorithm for Multi-objective FJSP
13.1 Introduction

The Job Shop Scheduling Problem (JSP), which is among the hardest combinatorial optimization problems [1], is a branch of production scheduling; it is well known to be NP-hard [2]. The classical JSP schedules a set of jobs on a set of machines with the objective of minimizing a certain criterion, subject to the constraint that each job has a specified processing order through all machines, which is fixed and known in advance. The Flexible Job Shop Problem (FJSP) is an extension of the classical JSP that allows an operation to be processed on one machine out of a set of alternative machines; it is closer to the real manufacturing situation. Because of the additional need to determine the assignment of operations to machines, the FJSP is more complex than the JSP and incorporates all of its difficulties and complexities. The FJSP can be decomposed into two sub-problems: a routing sub-problem, which assigns each operation to a machine out of a set of capable machines, and a scheduling sub-problem, which sequences the assigned operations on all selected machines in order to obtain a feasible schedule with optimized objectives [3]. The scheduling problem in a flexible job shop is at least as hard as the job shop problem, which is considered one of the most difficult problems in combinatorial optimization [4]. Although an optimal solution algorithm for the classical JSP has not been developed yet, there is a trend in the research community to model and solve the much more complex FJSP [5]. Research on the multi-objective FJSP is much scarcer than on the mono-objective FJSP. Brandimarte [6] was the first to apply the decomposition approach to the FJSP: he solved the routing sub-problem using existing dispatching rules and then focused on the scheduling sub-problem, which was solved by a tabu search heuristic. Hurink et al. [7] proposed a tabu search heuristic. They
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_13
considered the reassignment and rescheduling as two different types of moves. Mastrolilli and Gambardella [8] proposed neighborhood functions for the FJSP that can be used in meta-heuristic optimization techniques. The most important issue in employing meta-heuristics for combinatorial optimization problems is to develop an effective "problem mapping" and "solution generation" mechanism; if these two mechanisms are devised successfully, it is possible to find good solutions to the given optimization problem in an acceptable time. Parsopoulos and Vrahatis [9] conducted the first research on the particle swarm optimization method for multi-objective optimization problems. Kacem et al. [10, 11] proposed a genetic algorithm controlled by the assignment model generated by the Approach of Localization (AL) for mono-objective and multi-objective FJSP; they used an integrated approach considering assignment and scheduling at the same time. Rigao [12] developed two heuristics based on tabu search: a hierarchical procedure and a multiple-start procedure. Recently, Xia and Wu [3] proposed a hybrid algorithm using Particle Swarm Optimization (PSO) for assignment and Simulated Annealing (SA) for scheduling to optimize the multi-objective FJSP. To our knowledge, research on the multi-objective FJSP is rather limited, and most traditional approaches used only one optimization algorithm. In this chapter, full advantage is taken of the search mechanisms of particle swarm optimization and tabu search, and an effective solution approach is proposed for the multi-objective FJSP. The proposed approach uses PSO to assign operations to machines and to schedule the operations on each machine, while TS is applied as a local search on the scheduling sub-problem originating from each obtained solution.
The objectives considered in this chapter are to minimize the maximal completion time (makespan), the workload of the critical machine, and the total workload of machines simultaneously. The remainder of this chapter is organized as follows. The formulation and notation of the multi-objective FJSP are introduced in Sect. 13.2. Section 13.3 describes the traditional PSO algorithm, summarizes the local search scheme by TS, and then presents the model of the proposed hybrid algorithm (PSO + TS), along with how it is applied to the multi-objective FJSP. Section 13.4 shows four representative instances of the multi-objective FJSP and the corresponding computational results of the proposed hybrid algorithm; the results are compared with those of other heuristic algorithms. Finally, concluding remarks and further research are given in Sect. 13.5.
13.2 Problem Formulation

A general multi-objective minimization problem can be defined as minimizing a function f(χ) with P (P > 1) decision variables and Q (Q > 1) objectives, subject to several equality or inequality constraints. More precise definitions of the terms of multi-objective optimization can be found in Deb [13].

Minimize f(χ) = (f1(χ), f2(χ), ..., fq(χ), ..., fQ(χ))    (13.1)
where χ = {χ1, χ2, ..., χp, ..., χP} (1 ≤ p ≤ P) is the vector of decision variables (continuous, discrete, or integer) drawn from the feasible solution space, i.e., a possible solution of the considered problem, and fq(χ) is the qth objective function (1 ≤ q ≤ Q). In general, no single solution optimizes all objectives at once; the multi-objective optimization problem must optimize the different objective functions simultaneously.
The FJSP can be formulated as follows. There is a set of n jobs and a set of m machines, with M denoting the set of all machines. Each job i consists of a sequence of ni operations. Each operation Oi,j (i = 1, 2, ..., n; j = 1, 2, ..., ni) of job i has to be processed on one machine Mk out of a set of given compatible machines Mi,j (Mk ∈ Mi,j, Mi,j ⊆ M). The execution of each operation requires one machine selected from the set of available machines. In addition, the FJSP needs to set each operation's starting and ending time. The FJSP must therefore determine both an assignment and a sequence of the operations on the machines so as to satisfy the given criteria. If Mi,j ⊂ M for at least one operation, the problem is a partial-flexibility FJSP (P-FJSP); if Mi,j = M for every operation, it is a total-flexibility FJSP (T-FJSP) [10, 11]. In this chapter, the following criteria are to be minimized:
(1) The maximal completion time of machines, i.e., the makespan;
(2) The maximal machine workload, i.e., the maximum working time spent on any machine;
(3) The total workload of machines, which represents the total working time over all machines.
During the process of solving this problem, the following assumptions are made:
(1) Each operation cannot be interrupted during its performance (non-preemptive condition);
(2) Each machine can perform at most one operation at any time (resource constraint);
(3) The precedence constraints of the operations in a job can be defined for any pair of operations;
(4) Setup times of machines and move times between operations are negligible;
(5) Machines are independent of each other;
(6) Jobs are independent of each other.
The notation used in this chapter is summarized as follows:
n: total number of jobs;
m: total number of machines;
ni: total number of operations of job i;
Oi,j: the jth operation of job i;
Mi,j: the set of available machines for the operation Oi,j;
Pijk: processing time of Oi,j on machine k;
tijk: start time of operation Oi,j on machine k;
Ci,j: completion time of the operation Oi,j;
i, h: index of jobs, i, h = 1, 2, ..., n;
k: index of machines, k = 1, 2, ..., m;
j, g: index of operation sequence, j, g = 1, 2, ..., ni;
Ck: the completion time of Mk;
Wk: the workload of Mk;
χijk = 1 if machine k is selected for the operation Oi,j, and 0 otherwise.

The model is then given as follows:

min f1 = max_{1≤k≤m} (Ck)    (13.2)
min f2 = max_{1≤k≤m} (Wk)    (13.3)
min f3 = Σ_{k=1}^{m} Wk    (13.4)

s.t.
Cij − Ci(j−1) ≥ Pijk χijk, j = 2, ..., ni; ∀i, j    (13.5)
[(Chg − Cij − thgk) χhgk χijk ≥ 0] ∨ [(Cij − Chg − tijk) χhgk χijk ≥ 0], ∀(i, j), (h, g), k    (13.6)
Σ_{k∈Mi,j} χijk = 1, ∀i, j    (13.7)
Inequality (13.5) ensures the operation precedence constraint. Constraint (13.6) ensures that each machine processes only one operation at a time. Equation (13.7) states that exactly one machine is selected from the set of available machines for each operation.
Many studies have been devoted to multi-objective meta-heuristic optimization, and the developed methods can generally be classified into three types [14]:
(1) Transformation into a mono-objective problem, combining the different objectives into a weighted sum; methods in this class are based on utility functions, the ε-constraint method, or goal programming;
(2) Non-Pareto approaches, which use operators that process the different objectives separately;
(3) Pareto approaches, which are directly based on the Pareto-optimality concept. They aim at two goals: converging toward the Pareto front and obtaining diversified solutions scattered all over the Pareto front.
The objective function of this chapter is based on the first type: the weighted sum of the above three objective values is taken as the objective function (see Sect. 13.3.4).
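A sketch of this weighted-sum scalarization of the three criteria (13.2)-(13.4); the weight values and the interval-based schedule representation below are illustrative, not taken from the chapter:

```python
def weighted_objective(machine_intervals, weights=(0.5, 0.3, 0.2)):
    """Scalarize the three FJSP criteria by a weighted sum.

    machine_intervals maps each machine to its list of (start, end)
    processing intervals; the weights are illustrative placeholders.
    """
    completion = {k: max(e for _, e in iv)
                  for k, iv in machine_intervals.items() if iv}
    workload = {k: sum(e - s for s, e in iv)
                for k, iv in machine_intervals.items()}
    f1 = max(completion.values())   # makespan (13.2)
    f2 = max(workload.values())     # critical machine workload (13.3)
    f3 = sum(workload.values())     # total workload (13.4)
    w1, w2, w3 = weights
    return w1 * f1 + w2 * f2 + w3 * f3
```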
13.3 Particle Swarm Optimization for FJSP

Particle swarm optimization is an evolutionary computation technique proposed by Kennedy and Eberhart [15, 16]. The main principle of PSO is based on an observation of natural scientists: to search for food, each member of a flock of birds determines its velocity based on its personal experience as well as on information gained through interaction with other members of the flock. Each bird, called a particle, flies through the solution space of the optimization problem searching for the optimal solution, and thus its position represents a potential solution to the problem. The PSO algorithm simulates the behavior of flying birds and their means of information exchange to solve optimization problems.
13.3.1 Traditional PSO Algorithm

The PSO algorithm is initialized with a population of random candidate solutions, conceptualized as particles. Each individual flies through the search space with a velocity that is dynamically adjusted according to its own flying experience and that of its companions. Each individual is treated as a volumeless particle (a point) in the p-dimensional search space. The tth particle is denoted by χt = (χt1, χt2, ..., χtd), and the rate of position change (velocity) of particle t is represented as Vt = (vt1, vt2, ..., vtd). During each iteration, the tth particle is updated using two best values: Pl (also called pbest) and Pg (also called gbest). Pl = (pl1, pl2, ..., pld) is the best solution (local best) that the tth particle has achieved so far, and Pg = (pg1, pg2, ..., pgd) is the best solution (global best) obtained in the swarm so far. Each particle is updated iteratively according to the following equations:

vtd = w × vtd + c1 × rand() × (pld − χtd) + c2 × Rand() × (pgd − χtd)    (13.8)
χtd = χtd + vtd    (13.9)
where w is the inertia weight. It controls the influence of the previous velocity and thereby balances the global exploration and local exploitation abilities of the swarm. c1 and c2 are two positive constants; they represent the weights of the stochastic acceleration
13 An Effective Hybrid Particle Swarm Optimization Algorithm …
terms that pull each particle toward the Pl and Pg positions. rand() and Rand() are two independent random functions in the range [0, 1]. In Eq. (13.8), the velocity update of traditional PSO consists of three parts. w × vtd is the "momentum" part, which represents the influence of the last velocity on the current velocity. c1 × rand() × (pld − χtd) is the "cognitive" part, which represents the particle's own thinking. c2 × Rand() × (pgd − χtd) is the "social" part, which represents the cooperation among the particles. The process of implementing the traditional PSO algorithm is summarized as follows.
Step 1: Initialize a swarm of particles with random positions Xt and velocities Vt in the d-dimensional problem search space;
Step 2: Evaluate each particle by the fitness function;
Step 3: Update the pbest and gbest values by comparing each particle's fitness value with its pbest and with gbest. If the current value is better than the old pbest, set pbest to the current value and the pbest position to the current position in the d-dimensional space; in the same way, if the current value is better than the old gbest, reset gbest to the current particle's fitness value and position;
Step 4: Update the velocity and position of each particle according to Eqs. (13.8) and (13.9), respectively;
Step 5: If the termination criterion (usually a sufficiently good fitness or a specified number of generations) is not met, go to Step 2; otherwise, go to Step 6;
Step 6: Output solutions.
The traditional PSO embodies the basic optimization concept that individuals evolve through cooperation and competition to accomplish a common goal. In the search process, each particle of the swarm shares information globally and benefits from the discoveries and previous experiences of all the other particles.
The velocity–displacement model in the traditional PSO algorithm is just one concrete implementation of the optimization concept. Due to the limitations of the velocity–displacement model, it is difficult for the traditional PSO to address combinatorial optimization problems without modification.
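To make the velocity–displacement model concrete, the following sketch applies the updates of Eqs. (13.8) and (13.9) to a continuous toy problem (the sphere function). This is a hedged illustration, not the chapter's C++ implementation: the test function, bounds, and parameter values (w = 0.729, c1 = c2 = 1.494) are assumptions chosen for the example.

```python
import random

def pso_sphere(dim=2, swarm_size=20, generations=100,
               w=0.729, c1=1.494, c2=1.494, seed=1):
    """Minimize the sphere function sum(x_d^2) with the traditional PSO
    velocity-displacement updates of Eqs. (13.8)-(13.9)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [x[:] for x in X]                    # P_l: best position of each particle
    pval = [f(x) for x in X]
    g = min(range(swarm_size), key=lambda t: pval[t])
    gbest, gval = pbest[g][:], pval[g]           # P_g: best position of the swarm
    for _ in range(generations):
        for t in range(swarm_size):
            for d in range(dim):
                # Eq. (13.8): momentum + cognitive + social parts
                V[t][d] = (w * V[t][d]
                           + c1 * rng.random() * (pbest[t][d] - X[t][d])
                           + c2 * rng.random() * (gbest[d] - X[t][d]))
                X[t][d] += V[t][d]               # Eq. (13.9): position update
            val = f(X[t])
            if val < pval[t]:                    # update pbest, then gbest
                pval[t], pbest[t] = val, X[t][:]
                if val < gval:
                    gval, gbest = val, X[t][:]
    return gval
```

This model assumes continuous positions, which is exactly why it cannot be applied to the combinatorial FJSP without the modifications introduced in the following sections.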
13.3.2 Tabu Search Strategy

Tabu search (TS) [17] is a meta-heuristic algorithm that has been successfully applied to a variety of scheduling problems, among other combinatorial problems. Tabu search allows the search to explore solutions that do not decrease the objective function value, provided these solutions are not forbidden. This is usually achieved by keeping track of the most recent solutions in terms of the action used to transform one solution into the next. Its main elements are the neighborhood structure, the move attributes, the tabu list length, the aspiration criterion, and the stopping rules.
TS is one of the most efficient local search strategies for scheduling problems. In this study, it is adopted as a local search strategy for every particle. At the same time, we choose the neighborhood used in [8]. This neighborhood function is very efficient and very easy to implement. It is based on small displacements of the operations on a critical path in the disjunctive graph. The main contribution of [8] is the reduction of the set of neighbors of a solution while proving that the resulting subset contains the best neighbors. Besides, an efficient approach to calculate every possible neighborhood subset is also developed in this chapter. The parameters of TS are characterized as follows: a couple (v, k) is the tabu element, where v is the operation being moved and k is the machine to which v is assigned before the move. T is the tabu list. Tl is the tabu status length; it is equal to the number of operations on the current critical path plus the number of alternative machines available for operation v. s denotes a feasible solution of the FJSP. V(s) is the neighborhood of s. ObjFun(s) is the objective function value of s. ObjFun* is the current minimum value. CurIterNum is the current iteration number. MaxIterNum is the number of iterations that the tabu search performs. s* is the best solution that the TS procedure has achieved. The pseudocode can be summarized as follows.
Step 1: Initialization; //set the MaxIterNum value; set CurIterNum = 0, T = Ø, ObjFun* = ObjFun(s*), s = s*;
Step 2: CurIterNum = CurIterNum + 1. Get the current neighbors V(s);
Step 3: Evaluate the current solutions; //get the best move v ∈ V(s), the new solution s' and the new tabu list T'; set s = s', T = T';
Step 4: If ObjFun(s) < ObjFun*, set ObjFun* = ObjFun(s) and s* = s; then go to Step 5;
Step 5: If CurIterNum ≤ MaxIterNum, go to Step 2; else stop the iteration and return the best solution s*.
In the proposed hybrid algorithm, when a particle is to perform a local search, it is first converted to a feasible solution of the FJSP. This solution is then used as the initial solution of TS. Finally, the output solution achieved by TS is encoded back into a feasible particle.
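The TS loop of Steps 1–5 can be sketched generically as below. This is a simplified, hedged sketch: it stores recently visited solutions in the tabu list rather than the (v, k) move attributes used in the chapter, uses a fixed tabu length instead of the adaptive Tl, and the toy objective and neighborhood at the bottom are assumptions for illustration, not the FJSP critical-path neighborhood of [8].

```python
def tabu_search(start, objfun, neighbors, max_iter=100, tabu_len=7):
    """Generic tabu search mirroring Steps 1-5 of Sect. 13.3.2: moves may
    worsen the objective; recently visited solutions are forbidden unless
    they satisfy the aspiration criterion (improving on the best so far)."""
    s = best = start
    best_val = objfun(start)
    tabu = []                                  # tabu list T (visited solutions)
    for _ in range(max_iter):                  # Step 2: CurIterNum += 1
        cand = [n for n in neighbors(s)
                if n not in tabu or objfun(n) < best_val]  # aspiration criterion
        if not cand:
            break
        s = min(cand, key=objfun)              # Step 3: best admissible move
        tabu.append(s)
        if len(tabu) > tabu_len:               # fixed-length tabu list
            tabu.pop(0)
        if objfun(s) < best_val:               # Step 4: update s* and ObjFun*
            best, best_val = s, objfun(s)
    return best, best_val                      # Step 5: return s*

# Toy instantiation (an assumption for the example): minimize (x - 3)^2
# over the integers, moving by +/- 1 at each step.
sol, val = tabu_search(10, lambda x: (x - 3) ** 2, lambda x: [x - 1, x + 1])
```

Note how the search keeps moving even after reaching the optimum, relying on the tabu list to avoid cycling, while s* retains the best solution found.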
13.3.3 Hybrid PSO Algorithm Model

PSO possesses high search efficiency by combining local search (by self-experience) and global search (by neighboring experience). TS, as a local search algorithm, helps the swarm avoid becoming trapped in local optima. By reasonably hybridizing these two methodologies, a hybrid particle swarm optimization algorithm model can be proposed that omits the concrete velocity–displacement
updating method of traditional PSO for the multi-objective FJSP. The hybrid PSO algorithm model can be represented by the following pseudocode.
Step 1: Set the parameter values; //set the population size (denoted by Ps), the maximum number of generations (denoted by Gen), w, c1, c2, and the MaxIterNum value; set CurIterNum = 0, T = ∅;
Step 2: Initialize the population stochastically according to the encoding scheme;
Step 3: Evaluate each particle's objective value in the swarm and compare them; set each pbest position to a copy of the particle itself and the gbest position to the particle with the lowest fitness in the swarm;
Step 4: If the termination criterion (usually a sufficiently good fitness or a specified number of generations) is not met, go to Step 5; otherwise, go to Step 9;
Step 5: Set CurIterNum = CurIterNum + 1;
Step 6: Information exchange;
Step 7: Convert the current particle to an FJSP scheduling solution; start the TS procedure from this solution for a certain number of steps; replace the current particle with the new solution s* obtained by TS;
Step 8: Evaluate the particles in the swarm once again and update the current particle; //update the current particle using its own experience; //update the current particle using the whole population's experience; then go to Step 4;
Step 9: Output computational results.
According to the implementation process above, each particle first obtains updated information from its own and the swarm's experience, and is then converted to an FJSP scheduling solution for local search using tabu search. PSO uses the best solutions to continue the search until the termination condition is met. From the pseudocode above, the key to implementing the hybrid particle swarm optimization algorithm model is defining an effective encoding scheme for the particles. An effective information exchange method and local search method are needed too.
Besides, a proper strategy should be defined to balance exploitation and exploration. In the proposed algorithm, tabu search is adopted as the local search method, as presented in Sect. 13.3.2. In the following sections, the details of the hybrid algorithm are introduced separately, in particular the fitness function, the encoding scheme, and the information exchange.
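The overall control flow of Steps 1–9 can be sketched as a skeleton in which the problem-specific pieces are passed in as functions. Everything below the skeleton is a toy instantiation chosen only so the sketch runs: bit-string particles, a ones-counting objective, uniform crossover as the information exchange, and a greedy bit-flip descent as the local search. These stand-ins are assumptions; the chapter's actual operators (A/B-string crossover and TS) are described in Sects. 13.3.2 and 13.3.6.

```python
import random

def hybrid_pso(init_pop, evaluate, exchange, local_search, generations=50, seed=3):
    """Skeleton of the hybrid PSO model (Steps 1-9): no velocity-displacement
    update; each particle exchanges information with its pbest and the swarm's
    gbest via `exchange`, and is then improved by `local_search`."""
    rng = random.Random(seed)
    swarm = [p[:] for p in init_pop]
    pbest = [p[:] for p in swarm]
    pval = [evaluate(p) for p in swarm]
    g = min(range(len(swarm)), key=lambda t: pval[t])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(generations):
        for t in range(len(swarm)):
            # Step 6: information exchange with own and swarm experience
            cand = exchange(exchange(swarm[t], pbest[t], rng), gbest, rng)
            cand = local_search(cand, evaluate)     # Step 7: local search
            v = evaluate(cand)
            swarm[t] = cand
            if v < pval[t]:                         # Step 8: update pbest/gbest
                pval[t], pbest[t] = v, cand[:]
                if v < gval:
                    gval, gbest = v, cand[:]
    return gbest, gval

# Toy instantiation (assumptions for illustration only):
def evaluate(p):                 # minimize the number of ones
    return sum(p)

def exchange(p, partner, rng):   # uniform crossover with a partner
    return [b if rng.random() < 0.5 else q for b, q in zip(p, partner)]

def local_search(p, evaluate):   # greedy single-bit-flip descent
    best, improved = p, True
    while improved:
        improved = False
        for i in range(len(best)):
            q = best[:]
            q[i] = 1 - q[i]
            if evaluate(q) < evaluate(best):
                best, improved = q, True
                break
    return best

rng0 = random.Random(0)
pop = [[rng0.randint(0, 1) for _ in range(12)] for _ in range(8)]
best, val = hybrid_pso(pop, evaluate, exchange, local_search)
```

The design point is that the skeleton itself contains no problem-specific code: swapping in the FJSP encoding, crossover, and TS of this chapter changes only the three passed-in functions.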
13.3.4 Fitness Function

The fitness function is used to evaluate each particle in the swarm. Each individual in the swarm is evaluated by the combined objective function of f1, f2, and f3,
according to the description in Sect. 13.2. The first method, in which the different objectives are transformed into a single objective, is adopted in this chapter.
Combined objective function:

Minimize F = w1 × f1 + w2 × f2 + w3 × f3   (13.10)

subject to: w1 ∈ (0, 1), w2 ∈ (0, 1), w3 ∈ (0, 1), and w1 + w2 + w3 = 1
where w1, w2, and w3 are weights that can be set to different values as required. In every generation, after evaluating each particle, the particle with the lowest fitness value is regarded as superior to the other particles and is preserved into the next generation during the search process.
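Eq. (13.10) can be written out directly; the particular weight values below (0.5, 0.3, 0.2) are example assumptions, since the chapter leaves them to be set "as required".

```python
def combined_objective(f1, f2, f3, w1=0.5, w2=0.3, w3=0.2):
    """Eq. (13.10): F = w1*f1 + w2*f2 + w3*f3, with each weight in (0, 1)
    and w1 + w2 + w3 = 1.  The default weights are example assumptions."""
    ws = (w1, w2, w3)
    assert all(0 < w < 1 for w in ws) and abs(sum(ws) - 1) < 1e-9
    return w1 * f1 + w2 * f2 + w3 * f3

# Evaluating the best 4 x 5 solution reported in Sect. 13.4.1
# (makespan 11, critical workload 10, total workload 32):
F = combined_objective(11, 10, 32)   # → 14.9
```

Because the three objectives have different scales (the total workload dominates here), the chosen weights implicitly prioritize the objectives, which is exactly the difficulty of weighted-sum scalarization noted in Sect. 13.2.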
13.3.5 Encoding Scheme

Chromosome representation is the first important task for the successful application of PSO to the multi-objective FJSP. Mesghouni et al. [18] used a parallel machine and parallel job representation, in which a chromosome is represented by a matrix where each row consists of a set of ordered operations; the offspring require a repair mechanism, and decoding the representation is complex. Chen et al. [19] divided the chromosome into two parts, an A-string and a B-string; this representation must consider the order of operations, and a repair mechanism is required to maintain feasibility. Kacem et al. [10, 11] used an assignment table representation. In this chapter, we use an improved A-string (machines) and B-string (operations) representation that can be used by PSO to solve the multi-objective FJSP efficiently while avoiding the use of a repair mechanism. The A-string defines the routing policy of the problem, and the B-string defines the sequence of the operations on the machines. The elements of the two strings, respectively, describe a concrete allocation of operations to machines and the sequence of operations on each machine. We use an array of integer values to represent the A-string. The integer value for each operation Oi,j is an index into its array of alternative machines Mi,j, so it lies between 1 and the number of alternative machines of Oi,j. The length of the array is equal to the total number of operations of all jobs. To explain this approach, we choose the following example (with total flexibility, i.e., each machine can be selected for every operation) in Table 13.1 (see Sect. 13.4.1). The problem is to execute 4 jobs on 5 machines. One possible encoding of the A-string part is shown in Fig. 13.1. The length is 12 (i.e., 3 + 3 + 4 + 2). For instance, M4 is selected to process O1,1, because the value in the array of the alternative machine
Table 13.1 Problem 4 × 5 with 12 operations (total flexibility)

Job  Oi,j   M1  M2  M3  M4  M5
J1   O1,1    2   5   4   1   2
     O1,2    5   4   5   7   5
     O1,3    4   5   5   4   5
J2   O2,1    2   5   4   7   8
     O2,2    5   6   9   8   5
     O2,3    4   5   4  54   5
J3   O3,1    9   8   6   7   9
     O3,2    6   1   2   5   4
     O3,3    2   5   4   2   4
     O3,4    4   5   2   1   5
J4   O4,1    1   5   2   4  12
     O4,2    5   1   2   1   2

f*1 = 16, f*2 = 7, f*3 = 32

Fig. 13.1 A-string encoding
set of O1,1 is 4. The value could be any integer from 1 to 5, because operation O1,1 can be processed on any of the 5 machines M1, M2, M3, M4, M5; the valid values thus lie in [1, 5]. This representation also covers an FJSP with recirculation, in which more than one operation of the same job is processed on the same machine: for example, if O1,1 and O1,3, which both belong to job 1, are processed on the same machine M2, recirculation occurs. If only one machine can be selected for some operation, its value in the A-string array is 1. The B-string has the same length as the A-string. It consists of a sequence of job numbers in which job number i occurs ni times, where ni is the number of operations of job i. Representing each operation by the corresponding job index in this way avoids creating infeasible schedules. Take Table 13.1 for instance: one B-string may be 2-1-3-3-2-4-1-3-4-2-1-3. Reading the data from left to right, the B-string can be translated into a list of ordered operations:
O2,1–O1,1–O3,1–O3,2–O2,2–O4,1–O1,2–O3,3–O4,2–O2,3–O1,3–O3,4. When a particle is decoded, the B-string is first converted to a sequence of operations. Then each operation is assigned to a processing machine according to the A-string. Through this coding scheme, the same structure can easily represent total flexibility or partial flexibility, and every particle can be decoded into a feasible FJSP schedule effectively.
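The decoding procedure above can be sketched for the Table 13.1 instance. Two caveats: the A-string below is a hypothetical example (Fig. 13.1 is not reproduced here), and the semi-active scheduler (each operation starts as early as its job and machine allow) is a common decoding choice assumed for illustration, not necessarily the chapter's exact decoder. With total flexibility the index into the alternative machine set coincides with the machine number.

```python
# Processing times P[(i, j)][k] of operation O_{i,j} on machine M_{k+1},
# taken from Table 13.1 (problem 4 x 5, total flexibility).
P = {
    (1, 1): [2, 5, 4, 1, 2], (1, 2): [5, 4, 5, 7, 5], (1, 3): [4, 5, 5, 4, 5],
    (2, 1): [2, 5, 4, 7, 8], (2, 2): [5, 6, 9, 8, 5], (2, 3): [4, 5, 4, 54, 5],
    (3, 1): [9, 8, 6, 7, 9], (3, 2): [6, 1, 2, 5, 4], (3, 3): [2, 5, 4, 2, 4],
    (3, 4): [4, 5, 2, 1, 5],
    (4, 1): [1, 5, 2, 4, 12], (4, 2): [5, 1, 2, 1, 2],
}

def decode(a_string, b_string):
    """Decode a particle: the B-string gives the operation sequence (job i
    occurs n_i times), the A-string gives, per operation, the index into its
    alternative machine set.  Returns (makespan, critical machine workload,
    total workload) of the resulting semi-active schedule."""
    jobs = {i for (i, _) in P}
    seen = dict.fromkeys(jobs, 0)
    order = []                       # translate B-string into ordered operations
    for i in b_string:
        seen[i] += 1
        order.append((i, seen[i]))
    ops = sorted(P)                  # O_{1,1}, O_{1,2}, ... in A-string order
    assign = {op: a_string[ops.index(op)] - 1 for op in order}
    job_ready = dict.fromkeys(jobs, 0)
    mach_ready = [0] * 5
    workload = [0] * 5
    for op in order:                 # schedule each operation as early as possible
        i, k = op[0], assign[op]
        t = P[op][k]
        start = max(job_ready[i], mach_ready[k])
        job_ready[i] = mach_ready[k] = start + t
        workload[k] += t
    return max(mach_ready), max(workload), sum(workload)

# A hypothetical particle (the A-string is an assumption, not Fig. 13.1);
# the B-string is the example from the text.
a = [4, 2, 1, 1, 5, 3, 3, 2, 1, 4, 1, 2]
b = [2, 1, 3, 3, 2, 4, 1, 3, 4, 2, 1, 3]
print(decode(a, b))                  # → (15, 10, 32)
```

Note that any B-string with the correct job multiplicities decodes to a feasible schedule, which is exactly why no repair mechanism is needed.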
13.3.6 Information Exchange

In the hybrid algorithm model proposed above, each particle's updating method has not yet been defined. Particles can obtain information from the population in various ways; therefore, in order to deal with combinatorial optimization problems, effective updating methods can be designed to implement the particle swarm optimization model. In the proposed hybrid algorithm, each particle exchanges information by performing a crossover operation, and the two strings of each particle perform crossover separately. A two-point crossover is applied to the A-string: two positions are first selected by randomly generating two numbers, and then two new strings are created by swapping all characters between the positions of the two parent strings. The procedure is illustrated in Fig. 13.2. The crossover operation of the B-string is different from that of the A-string. Two parent strings selected from the swarm are denoted p1 and p2, and their two children strings are c1 and c2. According to the previous coding scheme, the procedure can be described as follows: all jobs are randomly divided into two groups JG1 and JG2. Any element in p1/p2 that belongs to JG1/JG2 is retained in the same position in c1/c2. Then the elements that belong to JG1/JG2 are deleted from p2/p1, and the empty positions in c1/c2 are filled in order with the remaining elements of p2/p1 in their previous sequence. Figure 13.3 illustrates the procedure. In order to maintain the diversity of the swarm and ensure feasible solutions, we adopt the mutation operation of genetic algorithms. Mutation is only
Fig. 13.2 A-string crossover operation
Fig. 13.3 B-string crossover operation
performed on the B-string of each particle. In this operation, two positions are selected by randomly generating two numbers, the substring between the positions is inverted to obtain a new B-string, and the new B-string replaces the previous one.
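The three operators described above can be sketched as follows. This is a hedged reading of Figs. 13.2 and 13.3 (which are not reproduced here): the parent strings at the bottom are example assumptions, and the random job split uses half the jobs for JG1.

```python
import random

def a_string_crossover(p1, p2, rng):
    """Two-point crossover on A-strings (Fig. 13.2): swap the segment
    between two randomly chosen cut points of the two parents."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def b_string_crossover(p1, p2, rng):
    """Job-group crossover on B-strings (Fig. 13.3): jobs are split into two
    groups; each child keeps one parent's genes of its group in place and
    fills the gaps with the other parent's remaining genes in order."""
    jobs = sorted(set(p1))
    g1 = set(rng.sample(jobs, len(jobs) // 2))    # JG1; JG2 is the rest
    def child(keep_from, fill_from, group):
        rest = iter(x for x in fill_from if x not in group)
        return [x if x in group else next(rest) for x in keep_from]
    return child(p1, p2, g1), child(p2, p1, set(jobs) - g1)

def b_string_mutation(p, rng):
    """Inversion mutation: reverse the substring between two random points."""
    i, j = sorted(rng.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

rng = random.Random(7)
b1 = [2, 1, 3, 3, 2, 4, 1, 3, 4, 2, 1, 3]     # B-string from the text
b2 = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 3]     # hypothetical second parent
c1, c2 = b_string_crossover(b1, b2, rng)
m = b_string_mutation(b1, rng)
```

Because both parents contain each job number exactly ni times, every child and mutant again contains each job exactly ni times, so feasibility is preserved without any repair step.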
13.4 Experimental Results

To illustrate the effectiveness and performance of the hybrid algorithm (PSO+TS) proposed in this chapter, the algorithm was implemented in C++ on a personal computer with a Pentium IV 1.8 GHz CPU. Four representative instances based on practical data have been selected. Each instance is characterized by the following parameters: the number of jobs (n), the number of machines (m), and the processing times of each operation Oi,j of job i on its alternative machines. The four problem instances (problem 4 × 5, problem 8 × 8, problem 10 × 10, and problem 15 × 10) are all taken from Kacem et al. [10, 11]. The parameters of the hybrid algorithm are as follows: the particle swarm size PS = 100, the maximum number of iterations Gen = 50, the number of iterations that the tabu search performs MaxIterNum = 50, and c1 and c2 are both set to 2. The inertia weight is chosen as in Xia and Wu [3].
13.4.1 Problem 4 × 5

First, we use a small-scale instance, taken from Kacem et al. [11] and shown in Table 13.1, to display the optimizing ability and evaluate the effectiveness of our hybrid algorithm. The best values obtained by our hybrid algorithm (PSO+TS) are as follows:
Solution: Makespan = 11, Max(Wtw) = 10, Wtw = 32.
Table 13.2 Comparison of results on problem 4 × 5

            AL+CGA   PSO+TS
Makespan        16       11
Max(Wtw)        10       10
Wtw             34       32
where makespan is the maximum completion time, Max(Wtw) represents the critical machine workload, and Wtw represents the total workload of all machines. The comparison of our hybrid algorithm with the other algorithm is shown in Table 13.2. f*1, f*2, and f*3 denote the lower bounds of the solutions, respectively, according to the literature [11]. In Fig. 13.4, the solution is shown as a Gantt chart. J1,1 (job, operation) denotes the first operation of job 1, and so on, and blocks with shadow represent the machine idle periods. The computational time of the proposed approach is 0.34 s. The column labeled "AL+CGA" refers to Kacem et al. [11], and the next column refers to the computational results of our algorithm. From Table 13.2 we can clearly see that the proposed hybrid algorithm has reduced two objectives under the same value of Max(Wtw): the makespan (11 instead of 16) and the Wtw (32 instead of 34). Comparing the data in these two columns clearly shows that the proposed hybrid algorithm is effective.
Fig. 13.4 Optimization solution of problem 4 × 5 (makespan = 11, Max(Wtw) = 10, Wtw = 32)
13.4.2 Problem 8 × 8

Table 13.3 displays the instance of the middle-scale problem 8 × 8. The computational results of the proposed hybrid algorithm are as follows:
Solution 1: Makespan = 14, Max(Wtw) = 12, Wtw = 77.
Solution 2: Makespan = 15, Max(Wtw) = 12, Wtw = 75.
Solution 1 is shown in Fig. 13.5. The average computational time is 1.67 s under the mentioned computer configuration. In Table 13.4, the column labeled "Temporal Decomposition" refers to Chetouane's method [10], and the next column, labeled "Classic GA", refers to the classical genetic algorithm. "Approach by Localization" and "AL+CGA" are two algorithms by Kacem et al. [10, 11], and the column labeled "PSO+SA" refers to Xia and Wu [3]. Table 13.4 clearly shows the effectiveness of our proposed hybrid algorithm in comparison with the other algorithms.
13.4.3 Problem 10 × 10

A middle-scale instance taken from Kacem et al. [11], shown in Table 13.5, is used to evaluate the efficiency of our proposed hybrid algorithm. The computational results obtained by the proposed hybrid algorithm are as follows:
Solution: Makespan = 7, Max(Wtw) = 6, Wtw = 43.
The Gantt chart of the schedule corresponding to this solution is shown in Fig. 13.6. The computational time is 2.05 s. The comparison of our hybrid algorithm with other algorithms is shown in Table 13.6. The column labeled "Temporal Decomposition" refers to Chetouane's method [10], and the next column, labeled "Classic GA", refers to the classical genetic algorithm. "Approach by Localization" and "AL+CGA" are two algorithms by Kacem et al. [10, 11], and the column labeled "PSO+SA" refers to Xia and Wu [3]. From Table 13.6, it can be seen that, compared with "PSO+SA", the proposed hybrid algorithm has reduced one objective under the same values of makespan and Max(Wtw): the Wtw (43 instead of 44). Table 13.6 shows our hybrid algorithm's effectiveness by comparison with the other algorithms.
Table 13.3 Problem 8 × 8 with 27 operations (partial flexibility): processing times of operations O1,1–O8,4 on machines M1–M8, where "–" indicates that the machine cannot process the operation. (The table data were garbled during extraction and are not reproduced here.) f*1 = N/A, f*2 = N/A, and f*3 = N/A
Fig. 13.5 Optimization solution of problem 8 × 8 (makespan = 14, Max(Wtw) = 12, Wtw = 77)
13.4.4 Problem 15 × 10

To evaluate the efficiency and validity of our algorithm on larger scale problems, we have chosen the instance shown in Table 13.7, which is taken from Kacem et al. [11]. The best computational results obtained by our hybrid algorithm are as follows:
Solution: Makespan = 11, Max(Wtw) = 11, Wtw = 93.
The schedule of this result is represented by a Gantt chart in Fig. 13.7. The computational time is 10.88 s under the mentioned computer configuration. The comparison of our hybrid algorithm with other algorithms is shown in Table 13.8. From Table 13.8, we know that the proposed hybrid algorithm has reduced the makespan (11 instead of 23) and the Wtw (93 instead of 95) under the same values
Table 13.4 Comparison of results on problem 8 × 8 (two solutions are listed for methods reporting two)

                           Makespan   Max(Wtw)   Wtw
Temporal decomposition         19         19      91
Classic GA                     16         11      77
Approach by localization       16         13      75
AL+CGA                         15         13      79
                               16         13      75
PSO+SA                         15         12      75
                               16         13      73
PSO+TS                         14         12      77
                               15         12      75
Table 13.5 Problem 10 × 10 with 30 operations (total flexibility): processing times of operations O1,1–O10,3 on machines M1–M10. (The table data were garbled during extraction and are not reproduced here.) f*1 = 7, f*2 = 5, f*3 = 41
Fig. 13.6 Optimization solution of problem 10 × 10 (makespan = 7, Max(Wtw) = 6, Wtw = 43)

Table 13.6 Comparison of results on problem 10 × 10

                           Makespan   Max(Wtw)   Wtw
Temporal decomposition         16         16      59
Classic GA                      7          7      53
Approach by localization        8          6      46
AL+CGA                          7          5      45
PSO+SA                          7          6      44
PSO+TS                          7          6      43
of Max(Wtw) (11) compared with "AL+CGA". Table 13.8 shows our hybrid algorithm's effectiveness and validity on a larger scale by comparison with other algorithms. In this chapter, we have applied our hybrid algorithm to problems ranging from small scale to large scale. The testing results show that the proposed algorithm performs at the same level as or better than the alternative solution methods with respect to the three objective functions in almost all instances, and all results are obtained in reasonable computational time. This demonstrates that our hybrid algorithm is efficient and effective.
Table 13.7 Problem 15 × 10 with 56 operations (total flexibility): processing times of operations O1,1–O15,4 on machines M1–M10. (The table data were garbled during extraction and are not reproduced here.) f*1 = 23, f*2 = 10, and f*3 = 91
Fig. 13.7 Optimization solution of problem 15 × 10 (makespan = 11, Max(Wtw) = 11, Wtw = 93)

Table 13.8 Comparison of results on problem 15 × 10

           Makespan   Max(Wtw)   Wtw
AL+CGA         23         11      95
AL+CGA         24         11      91
PSO+SA         12         11      91
PSO+TS         11         11      93
13.5 Conclusions and Future Research

In this chapter, an effective hybrid particle swarm optimization algorithm is proposed to solve the multi-objective flexible job shop scheduling problem. The performance of the presented approach is evaluated by comparison with the results obtained by other authors' algorithms on four representative instances. The obtained computational results and times demonstrate the effectiveness of the proposed approach, although a more comprehensive computational study should be made to test the efficiency of the proposed solution technique. Future research directions include the following: (1) designing more efficient information sharing mechanisms and more effective local search strategies for solving the multi-objective FJSP; (2) developing effective theory and algorithms for this complex combinatorial optimization problem; (3) applying PSO to other useful extensions of the problem.
References

1. Sonmez AI, Baykasoglu A (1998) A new dynamic programming formulation of (n × m) flow shop sequencing problems with due dates. Int J Prod Res 36:2269–2283
2. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1:117–129
3. Xia WJ, Wu ZM (2005) An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems. Comput Ind Eng 48(2):409–425
4. Lawler EL, Lenstra JK, Rinnooy Kan AHG, Shmoys DB (1993) Sequencing and scheduling: algorithms and complexity. In: Graves SC et al (eds) Logistics of production and inventory. North-Holland, Amsterdam, pp 445–522
5. Baykasoglu A, Ozbakir L, Sonmez AI (2004) Using multiple objective tabu search and grammars to model and solve multi-objective flexible job-shop scheduling problems. J Intell Manuf 15(6):777–785
6. Brandimarte P (1993) Routing and scheduling in a flexible job shop by tabu search. Ann Oper Res 41(3):157–183
7. Hurink E, Jurisch B, Thole M (1994) Tabu search for the job shop scheduling problem with multi-purpose machines. Oper Res Spektrum 15:205–215
8. Mastrolilli M, Gambardella LM (2000) Effective neighborhood functions for the flexible job shop problem. J Sched 3(1):3–20
9. Parsopoulos KE, Vrahatis MN (2002) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1(2–3):235–306
10. Kacem I, Hammadi S, Borne P (2002) Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans Syst Man Cybern Part C 32(1):1–13
11. Kacem I, Hammadi S, Borne P (2002) Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic. Math Comput Simul 60:245–276
12. Rigao C (2004) Tardiness minimization in a flexible job shop: a tabu search approach. J Intell Manuf 15(1):103–115
13. Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, Chichester, UK
14. Hsu T, Dupas R, Jolly D, Goncalves G (2002) Evaluation of mutation heuristics for the solving of multiobjective flexible job shop by an evolutionary algorithm. In: Proceedings of the 2002 IEEE international conference on systems, man and cybernetics, vol 5, pp 655–660
15. Kennedy J (1997) Particle swarm: social adaptation of knowledge. In: Proceedings of the 1997 IEEE international conference on evolutionary computation, Indianapolis, USA, pp 303–308
16. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the 1995 IEEE international conference on neural networks, vol 4, pp 1942–1948
17. Glover F, Laguna M (1997) Tabu search. Kluwer Academic Publishers
18. Mesghouni K, Hammadi S, Borne P (1997) Evolution programs for job-shop scheduling. In: Proceedings of the 1997 IEEE international conference on systems, man and cybernetics, vol 1, pp 720–725
19. Chen H, Ihlow J, Lehmann C (1999) A genetic algorithm for flexible job-shop scheduling. In: Proceedings of the IEEE international conference on robotics and automation, Detroit, pp 1120–1125
Chapter 14
A Multi-objective GA Based on Immune and Entropy Principle for FJSP
14.1 Introduction

Most research on the FJSP has concentrated on a single objective. However, in real-world production, several objectives must be considered simultaneously, and these objectives often conflict with each other. Within an enterprise, different departments have different expectations in order to maximize their own interests. For example, the manufacturing department expects to reduce costs and improve work efficiency, corporate executives want to maximize the utilization of existing resources, and the sales department hopes to better meet the delivery requirements of customers. Ignoring the interests of any one department is detrimental to the overall development of the enterprise, so it is important to seek a reasonable compromise for scheduling decision-making. Recently, the multi-objective FJSP has gained the attention of researchers. Kacem et al. [1–3] used an approach by localization and multi-objective evolutionary optimization, and proposed a Pareto approach based on the hybridization of Fuzzy Logic (FL) and Evolutionary Algorithms (EAs) to solve the FJSP. Baykasoğlu et al. [4] presented a linguistic-based meta-heuristic modeling and multiple objective tabu search algorithm to solve the flexible job shop scheduling problem. Xia and Wu [5] proposed a practical hierarchical solution approach for the multi-objective FJSP, which utilizes Particle Swarm Optimization (PSO) to assign operations to machines and a Simulated Annealing (SA) algorithm to schedule the operations on each machine. Liu et al. [6] proposed Variable Neighborhood Particle Swarm Optimization (VNPSO), a combination of Variable Neighborhood Search (VNS) and PSO, for solving multi-objective flexible job shop scheduling problems. Ho and Tay [7] presented an efficient approach for solving the multi-objective flexible job shop by combining
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_14
an evolutionary algorithm and guided local search. They also solved multi-objective flexible job shop problems using dispatching rules discovered through genetic programming [8]. Gao et al. [9] developed a hybrid Genetic Algorithm (GA) for the FJSP with three objectives: minimizing the makespan, the maximal machine workload, and the total workload. Zhang et al. [10] combined the PSO algorithm and the Tabu Search (TS) algorithm for the multi-objective flexible job shop problem. Xing et al. [11] proposed an efficient search method for multi-objective flexible job shop scheduling problems. Obviously, a multi-objective problem generally has more than one optimal solution; the optimal solutions are trade-offs between the objectives, which makes multi-objective problems more difficult than single-objective ones. Among the above studies, some consider the objectives with priorities, and most of the algorithms solve the multi-objective FJSP by transforming it into a mono-objective one through assigning each objective a different weight, with the attendant difficulty of choosing a suitable weight for each objective. If the objectives are instead optimized concurrently, the challenge is to design an effective search algorithm, which requires extra steps and a considerable increase in time complexity [12]. Moreover, a key issue for any multi-objective evolutionary algorithm is how to preserve diversity in the population. In this chapter, an improved GA based on the immune and entropy principle is used to solve the multi-objective flexible job shop scheduling problem. In this improved MOGA, a fitness scheme based on Pareto-optimality is applied. The improved GA utilizes a novel chromosome structure in order to prevent the loss or destruction of elite solutions. Advanced crossover and mutation operators are proposed to adapt to this special chromosome structure. The immune and entropy principle is used to keep the diversity of individuals and overcome premature convergence.
The remainder of the chapter is organized as follows. In Sect. 14.2, the multi-objective flexible job shop scheduling problem is introduced. Some basic concepts of multi-objective optimization are introduced in Sect. 14.3. In Sect. 14.4, the scheme of the multi-objective GA based on the immune and entropy principle for the MOFJSP is elaborated. In Sect. 14.5, the computational results and comparisons with other approaches are presented. The final conclusions are given in Sect. 14.6. The notation used in this chapter is summarized in the following:
n          total number of jobs
m          total number of machines
n_i        total number of operations of job i
O_ij       the jth operation of job i
M_i,j      the set of available machines for operation O_ij
C_j        completion time of job J_j
p_ijk      processing time of O_ij on machine k
x_ijk      a decision variable
F_1        makespan (the maximal completion time)
F_2        total workload of machines (the total working time of all machines)
F_3        critical machine workload (the workload of the machine with the biggest workload)
n_i        the number of individuals that are dominated by individual i
N          population size of antibodies
M          length of antibody (fixed length)
S          size of the symbolic set, i.e., the total number of symbols that may appear at a locus of an individual
p_km       probability that the kth symbol appears at the mth locus
H_m(N)     entropy of the mth locus of the individuals for N antibodies
H_k(2)     entropy of the kth locus of the individuals for two antibodies
H_i,j(2)   average entropy of individual i and individual j
A_i,j      similarity of individual i and individual j
C_i        density of antibody i
f_i        initial (usual) fitness of antibody i
K          positive, regulative coefficient
f_i'       aggregation fitness
p_u        processing time of operation u
ST_u       start time of operation u
14.2 Multi-objective Flexible Job Shop Scheduling Problem

The FJSP is described as follows: there are n jobs (J_i, i in {1, 2, ..., n}) that need to be processed on m machines (M_k, k in {1, 2, ..., m}). Job J_i consists of one or more operations (O_ij, j in {1, 2, ..., n_i}, where n_i is the total number of operations of job J_i), and operation O_ij may be executed by one machine out of a given set M_i,j, which consists of all the machines capable of processing the jth operation of job J_i. The processing time of the jth operation of job J_i on machine k is denoted by p_ijk. The task of scheduling is to assign each operation to one of its capable machines and to sequence the operations on all the machines in order to optimize some objectives. In addition, the following restrictions must be met:

(1) One machine can process only one job at a time;
(2) One job can be processed on only one machine at any moment, and an operation cannot be interrupted once it has started;
(3) There are no precedence restrictions between the operations of different jobs;
(4) All jobs have equal priorities.

For example, an FJSP with four jobs and five machines is illustrated in Table 14.1; the numbers in the table are processing times, and the symbol "-" means the operation cannot be processed on the corresponding machine.
Table 14.1 An FJSP with four jobs and five machines

Job  Operation   M1   M2   M3   M4   M5
J1   O11          4    5    -    7    -
     O12          -    3    2    -    2
     O13          3    2    -    2    4
J2   O21          2    -    3    -    4
     O22          -    7    9    6    5
     O23          7    -    5    -    6
J3   O31          -    4    4    8    7
     O32          5    6    7    -    6
     O33          -    4    -    5    5
J4   O41          -    4    5    7    -
     O42          6    -    6    7    8
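For illustration, an instance like the one in Table 14.1 can be held in a plain dictionary keyed by operation, with machines that cannot process an operation simply omitted. The structure and names below are our own, not from the chapter; only the first two operations are shown.

```python
# Hypothetical in-memory layout for the first two operations of Table 14.1;
# "-" entries (incapable machines) are simply left out of each inner dict.

fjsp = {
    ("J1", "O11"): {"M1": 4, "M2": 5, "M4": 7},
    ("J1", "O12"): {"M2": 3, "M3": 2, "M5": 2},
}

# Capable machines and processing times for O11 of job J1.
print(sorted(fjsp[("J1", "O11")].items()))  # [('M1', 4), ('M2', 5), ('M4', 7)]
```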
In this chapter, we aim to minimize the following three objectives:

(1) Makespan (the maximal completion time):

F_1 = \max\{ C_j \mid j = 1, \ldots, n \}    (14.1)

(2) Total workload of machines (the total working time of all machines):

F_2 = \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{m} p_{ijk} x_{ijk}    (14.2)

(3) Critical machine workload (the workload of the machine with the biggest workload):

F_3 = \max_{1 \le k \le m} \sum_{i=1}^{n} \sum_{j=1}^{n_i} p_{ijk} x_{ijk}    (14.3)
where n is the total number of jobs to be processed; m is the total number of machines; C_j is the completion time of job J_j; and x_ijk is a decision variable such that x_ijk = 1 if the jth operation of job J_i is processed on machine k, and x_ijk = 0 otherwise.
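Given a machine assignment and the job completion times, the three objectives can be computed directly. The sketch below is illustrative; the data structures `assign`, `proc`, and `completion` are our own, not from the chapter.

```python
# Sketch (not the chapter's code): computing F1-F3 for a candidate schedule.
# `assign` maps each operation (i, j) to its machine k; `proc` holds p_ijk;
# `completion` holds the job completion times C_j.

def objectives(assign, proc, completion, m):
    F1 = max(completion.values())        # makespan, Eq. (14.1)
    work = [0.0] * m                     # per-machine workload
    for (i, j), k in assign.items():
        work[k] += proc[(i, j, k)]
    F2 = sum(work)                       # total workload, Eq. (14.2)
    F3 = max(work)                       # critical machine workload, Eq. (14.3)
    return F1, F2, F3

# Tiny example: two jobs on two machines.
assign = {(0, 0): 0, (0, 1): 1, (1, 0): 1}
proc = {(0, 0, 0): 4.0, (0, 1, 1): 3.0, (1, 0, 1): 2.0}
completion = {0: 7.0, 1: 2.0}
print(objectives(assign, proc, completion, 2))  # -> (7.0, 9.0, 5.0)
```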
14.3 Basic Concepts of Multi-objective Optimization

The general multi-objective optimization problem is described in the following form:

Minimize y = f(x) = (f_1(x), f_2(x), \ldots, f_q(x))    (14.4)

where x \in R^p, y \in R^q, p is the number of dimensions of the variable x, and q is the number of sub-objectives. The following two basic concepts, often used in multi-objective optimization, are adopted in this chapter as well.

Non-dominated solutions: A solution a is said to dominate solution b if and only if

(1) f_i(a) \le f_i(b), \quad \forall i \in \{1, 2, \ldots, q\}    (14.5)

(2) f_i(a) < f_i(b), \quad \exists i \in \{1, 2, \ldots, q\}    (14.6)
Pareto-optimality: A feasible solution is called Pareto-optimal when it is not dominated by any other solution in the feasible space. The Pareto-optimal set, also called the efficient set, is the collection of all Pareto-optimal solutions, and their images in the objective space form the Pareto-optimal frontier.
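The dominance test and the extraction of the non-dominated set can be written compactly. The following is a minimal sketch under the minimization convention used above; the function names are our own.

```python
# Minimal sketch of the dominance test (Eqs. 14.5-14.6) and Pareto filtering;
# all objectives are minimized.

def dominates(a, b):
    """True if a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(11, 34, 9), (12, 32, 8), (11, 32, 10), (12, 35, 10)]
print(pareto_front(pts))  # (12, 35, 10) is dominated by (12, 32, 8)
```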
14.4 Handling MOFJSP with MOGA Based on Immune and Entropy Principle

14.4.1 Fitness Assignment Scheme

In this chapter, the fitness of each individual is determined by the factors of dominance and being dominated. It resembles the fitness assignment scheme in SPEA2, first proposed by Zitzler [13]. However, different from SPEA2, our scheme does not consider the strength niche, because a diversity strategy based on the immune and entropy principle is already applied. For the non-dominated individuals, the fitness of individual i is defined as follows:

fitness(i) = n_i / (N + 1)    (14.7)

where N is the size of the population and n_i is the number of individuals that are dominated by individual i. For the dominated individuals, the fitness of individual j is defined as follows:
fitness(j) = 1 + \sum_{i \in NDSet,\, i \succ j} fitness(i)    (14.8)

In this equation, NDSet is the set of all non-dominated individuals, and i \succ j denotes that individual i dominates individual j.
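The fitness assignment of Eqs. (14.7) and (14.8) can be sketched as follows. This is a small illustrative implementation, not the authors' code; lower fitness is better.

```python
# Sketch of the SPEA2-like fitness assignment (Eqs. 14.7-14.8); all objectives
# are minimized and lower fitness means higher reproduction probability.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def assign_fitness(objs):
    N = len(objs)
    nd = [i for i in range(N)
          if not any(dominates(objs[q], objs[i]) for q in range(N) if q != i)]
    fit = {}
    for i in nd:                                   # Eq. (14.7): fitness < 1
        n_i = sum(dominates(objs[i], objs[j]) for j in range(N))
        fit[i] = n_i / (N + 1)
    for j in range(N):                             # Eq. (14.8): fitness >= 1
        if j not in fit:
            fit[j] = 1 + sum(fit[i] for i in nd if dominates(objs[i], objs[j]))
    return fit

objs = [(11, 32, 10), (12, 32, 8), (12, 35, 10)]
print(assign_fitness(objs))  # {0: 0.25, 1: 0.25, 2: 1.5}
```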
14.4.2 Immune and Entropy Principle

According to Mahfoud [14], EAs based on a finite population tend to converge to a single solution because of three effects: selection pressure, selection noise, and operation disruption. However, the goal of multi-objective optimization is to find a set of non-dominated solutions, or to approximate the Pareto front of the problem, rather than to obtain a single solution. A non-niching strategy based on the immune and entropy principle, using a special fitness calculation, was proposed by Cui et al. [15]: the more individuals similar to a given individual are present in the current population, the more that individual's reproduction probability is degraded. This strategy does not require any distance parameter; instead, it uses an exponential fitness rescaling method based on the genetic similarity between an individual and the rest of the population. Immunity is a physiological reaction of a living system: the immune system generates antibodies to defend against attacks from alien agents, which are called antigens. After antibodies combine with antigens, a sequence of reactions takes place that destroys the antigens by immune phagocytosis. Despite its complexity, the immune system exhibits a self-adaptive ability to defend against antigens. The antigens and antibodies can be regarded as the objectives of the optimization problem and the solution individuals, respectively, so evolutionary algorithms can be improved by utilizing this self-adaptive mechanism. The self-adaptive mechanism of the immune system keeps the immune balance by controlling the density of antibodies, inhibiting or boosting them as needed. This inhibition and boosting of antibodies can be considered as the inhibition and boosting of individual reproduction in MOEAs. Here, an antibody of the immune system is taken as an individual in the MOEA.
Suppose that N denotes the population size, M is the length of the antibody (fixed length), and S denotes the size of the symbolic set. The strategy is described as follows:

Step 1: Information-theoretic entropy of an antibody. If a random vector X = \{x_1, x_2, \ldots, x_n\} denotes the status feature of an uncertain system, and the probability distribution of X is P = \{p_1, p_2, \ldots, p_n\}, with 0 \le p_i \le 1 for i = 1, 2, \ldots, n and \sum_{i=1}^{n} p_i = 1, the information-theoretic entropy of the system is defined as

H = -\sum_{k=1}^{n} p_k \ln(p_k)    (14.9)
An individual generated by an evolutionary process can be regarded as an uncertain system in the entropy optimization principle, and the entropy of the mth locus of the individuals is defined as

H_m(N) = -\sum_{k=1}^{S} p_{km} \ln(p_{km})    (14.10)

where p_{km} denotes the probability that the kth symbol appears at the mth locus; it is calculated as p_{km} = (number of times the kth symbol appears at the mth locus among the N individuals)/N.

Step 2: Similarity of antibodies. The similarity of antibodies indicates the extent to which individual i and individual j are similar:

A_{i,j} = \frac{1}{1 + H_{i,j}(2)}    (14.11)

where H_{i,j}(2) is the average entropy of individual i and individual j, calculated according to formula (14.12):

H_{i,j}(2) = \frac{1}{M} \sum_{k=1}^{M} H_k(2)    (14.12)
The range of A_{i,j} is within [0, 1]. The higher the value of A_{i,j}, the more similar individual i is to individual j; A_{i,j} = 1 means that the genes of the two individuals are exactly the same.

Step 3: Density of antibody. The density of antibody i is the ratio of the number of antibodies similar to antibody i to the population size, denoted by C_i: C_i = (number of antibodies in the population whose similarity to individual i exceeds \lambda)/N, where \lambda is a similarity constant, generally in the range 0.9 \le \lambda \le 1 [15-17].

Step 4: Aggregation fitness. We define the aggregation fitness of an individual as a trade-off between two evaluations:

f_i' = f_i \times \exp(K \times C_i)    (14.13)
where f_i is the initial (usual) fitness of antibody i, which directly reflects the objective of the problem being solved, and K is a positive, regulative coefficient determined by the population size and by experience. Note that f_i and f_i' are optimized under the minimization principle here; that is, the lower the fitness, the higher the reproduction probability. If they are optimized by maximization, K must take a negative value.
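Steps 1-4 can be sketched as follows. This is an illustrative implementation assuming a fixed-length integer encoding; the values lambda = 0.95 and K = 1.0 are placeholders, not the chapter's settings.

```python
import math

# Illustrative sketch of Steps 1-4 (locus entropy, similarity, density and
# aggregation fitness) for a fixed-length integer encoding.

def locus_entropy(pop, m, S):
    """H_m over a set of individuals, Eq. (14.10)."""
    N = len(pop)
    H = 0.0
    for k in range(S):
        p = sum(1 for ind in pop if ind[m] == k) / N
        if p > 0:
            H -= p * math.log(p)
    return H

def similarity(a, b, S):
    """A_ij = 1 / (1 + H_ij(2)), Eqs. (14.11)-(14.12)."""
    M = len(a)
    H = sum(locus_entropy([a, b], m, S) for m in range(M)) / M
    return 1.0 / (1.0 + H)

def density(i, pop, S, lam=0.95):
    """Share of antibodies whose similarity to individual i exceeds lambda."""
    return sum(1 for j in range(len(pop))
               if j != i and similarity(pop[i], pop[j], S) > lam) / len(pop)

def aggregation_fitness(f, C, K=1.0):
    """f'_i = f_i * exp(K * C_i), Eq. (14.13); minimization assumed."""
    return f * math.exp(K * C)

pop = [[0, 1, 1], [0, 1, 1], [1, 0, 1]]
print(similarity(pop[0], pop[1], S=2))  # identical genes -> 1.0
print(density(0, pop, S=2))             # only one of the other two is similar
```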
14.4.3 Initialization

The quality of the initial population has a great effect on the performance of an algorithm. At present, the approach by localization proposed by Kacem [1, 2] is often used to generate initial solutions; it takes into account the processing times and the workloads of the machines to which operations have already been assigned, but it has the drawback of lacking diversity and possibly leading to premature convergence. In this chapter, a simple and practical strategy is used to generate the initial population. First, the operation sequence is generated randomly, and two machines are selected from the set of capable machines for each operation. Second, if a randomly generated number Random (between 0 and 1) is less than 0.8, the machine with the shorter processing time is chosen; otherwise, the machine with the longer processing time is chosen.
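A sketch of this initialization rule follows, assuming each operation's capable machines and times are stored in a dictionary; the data structure is our own, not the chapter's.

```python
import random

# Sketch of the initialization rule described above: sample two candidate
# machines per operation, then take the faster one with probability 0.8.
# `machine_options` maps each operation to {machine: processing_time}.

def init_assignment(machine_options, rng=random):
    assign = {}
    for op, options in machine_options.items():
        if len(options) >= 2:
            k1, k2 = rng.sample(list(options), 2)
        else:
            k1 = k2 = next(iter(options))
        faster, slower = sorted((k1, k2), key=lambda k: options[k])
        assign[op] = faster if rng.random() < 0.8 else slower
    return assign

opts = {("J1", 1): {"M1": 4, "M2": 5, "M4": 7},
        ("J1", 2): {"M2": 3, "M3": 2, "M5": 2}}
random.seed(0)
print(init_assignment(opts))
```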
14.4.4 Encoding and Decoding Scheme

For the flexible job shop problem, the representation of each solution in the GA consists of two parts, as illustrated in Fig. 14.1. The first part determines the processing sequence of all the operations, and the other part assigns a suitable machine to each operation. By integrating the two parts of the representation, a feasible solution can be generated. In the operation sequence vector, the number of genes equals the total number of operations. The operations of each job are denoted by the corresponding job number; the kth occurrence of a job number refers to the kth operation of that job. For example, the operation sequence [1 2 1 2 1 3 2 3 3] represents the operation sequence [O11 O21 O12 O22 O13 O31 O23 O32 O33]. In the machine assignment vector, each number represents the machine assigned to the corresponding operation in turn. The chromosome in Fig. 14.1 thus represents the following operation sequence and assigned machines: (O21, M1), (O11, M1), (O22, M4), (O12, M3), (O13, M4), (O31, M3), (O23, M5), (O32, M2), (O41, M2), (O33, M2), (O42, M3). According to the processing times in Table 14.1, the sequence of processing times is [2 4 6 2 2 4 6 6 4 4 6]. Decoding is the process of transferring the chromosomes into scheduling solutions. Let u denote one operation O_ij in the chromosome, with processing time p_u and start time ST_u; its completion time is then ST_u + p_u. For operation u, the job predecessor and the machine predecessor are
Fig. 14.1 Illustration of the chromosome representation
Fig. 14.2 Gantt chart corresponding to the chromosome in Fig. 14.1 by this decoding method
denoted by JP[u] and MP[u], respectively; if they exist, the starting time of u is determined by the maximal completion time of its job predecessor and machine predecessor. Letting all jobs be ready for processing at time 0, the starting time of u can be calculated by Eq. (14.14):

ST_u = \max( ST_{JP[u]} + p_{JP[u]},\ ST_{MP[u]} + p_{MP[u]} )    (14.14)
During the decoding stage, we first choose the machine for each operation according to the machine assignment vector, and then determine the sequence of operations according to the operation sequence vector. After transferring the chromosome into the operation sequence and machine assignment, we decode according to Eq. (14.14). In this chapter, we introduce an insertion-based greedy algorithm to determine the sequence of operations on each machine, so that only active schedules are produced. This decoding method works as follows: first, assign the first operation according to the sequence of all the operations; then, assign the second operation on the corresponding machine at the earliest capable time, and repeat in this way until all the operations are assigned. The Gantt chart obtained by decoding the representation in Fig. 14.1 with this approach is shown in Fig. 14.2.
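The decoding scheme can be sketched as follows. The gap-insertion step is our reading of the "earliest capable time" rule, and the data layout is illustrative, not the chapter's implementation.

```python
# Simplified decoder sketch for the two-part chromosome: operations are taken
# in sequence-vector order and inserted into the earliest idle interval on the
# assigned machine that also respects the job-precedence rule of Eq. (14.14).

def decode(op_seq, machines, proc):
    """op_seq: job numbers (kth occurrence = kth operation of that job);
    machines[idx]: machine of the idx-th gene; proc[idx]: its processing time.
    Returns (start_times, makespan)."""
    busy = {}        # machine -> sorted list of (start, end) busy intervals
    job_ready = {}   # job -> completion time of its last scheduled operation
    starts = []
    for idx, job in enumerate(op_seq):
        m, p = machines[idx], proc[idx]
        t = job_ready.get(job, 0)   # cannot start before the job predecessor
        for s, e in busy.get(m, []):
            if t + p <= s:
                break               # the operation fits in this idle gap
            t = max(t, e)
        busy.setdefault(m, []).append((t, t + p))
        busy[m].sort()
        job_ready[job] = t + p
        starts.append(t)
    return starts, max(job_ready.values())

seq = [2, 1, 2, 1]                 # O21 O11 O22 O12
mach = ["M1", "M1", "M4", "M3"]
time = [2, 4, 6, 2]
print(decode(seq, mach, time))     # -> ([0, 2, 2, 6], 8)
```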
14.4.5 Selection Operator

In our algorithm, the selection strategy includes two parts: keeping the best individuals and tournament selection. The first part copies the best 1% of individuals in the parent population to the children. The tournament selection strategy was proposed by Goldberg [18] and works as follows: first, two solutions are randomly selected as candidate parents; if a random number generated between 0 and 1 is smaller than the probability r, which is usually set to 0.8, the better one is selected, otherwise the other one is selected; and
then, the selected individual is put back into the population and can be selected as a parent chromosome again.
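The selection scheme can be sketched as follows, assuming minimization (lower fitness is better); the 1% elitism fraction follows the text, while the helper names are our own.

```python
import random

# Sketch of the selection scheme above: 1% elitism plus binary tournaments in
# which the better of two randomly chosen parents wins with probability r = 0.8.

def select(pop, fitness, rng=random, r=0.8):
    a, b = rng.randrange(len(pop)), rng.randrange(len(pop))
    better, worse = (a, b) if fitness[a] <= fitness[b] else (b, a)
    winner = better if rng.random() < r else worse
    return pop[winner]

def next_parents(pop, fitness, n, elite_frac=0.01, rng=random):
    k = max(1, int(elite_frac * len(pop)))   # keep at least one elite
    best = sorted(range(len(pop)), key=fitness.__getitem__)[:k]
    return [pop[i] for i in best] + [select(pop, fitness, rng) for _ in range(n - k)]

random.seed(1)
pop = ["A", "B", "C", "D"]
fit = [0.2, 1.5, 0.4, 2.0]
parents = next_parents(pop, fit, n=4)
print(parents[0])  # the best individual "A" is always kept by elitism
```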
14.4.6 Crossover Operator

In the FJSP, it is necessary not only to determine the sequence of the operations but also to assign an appropriate machine to each operation. In our algorithm, the crossover of the two parts of the chromosome is implemented separately: the crossover of the operation sequence uses the Improved Precedence Operation Crossover (IPOX), while the crossover of the machine assignment vector uses the Multipoint Preservative Crossover (MPX) suggested by Zhang et al. [19]. IPOX is a modification of the Precedence Operation Crossover (POX) proposed by Zhang et al. [20]; it only crosses the sequence of the operations and keeps the machine assignment unchanged, while MPX only crosses the machine assignment vector and keeps the operation sequence unchanged. For example, let P1 and P2 be the parent chromosomes and C1 and C2 their children obtained by crossover. The IPOX crossover operator works as in Fig. 14.3, and the MPX crossover operator works as in Fig. 14.4. In our algorithm, the crossover operation works as follows:
Fig. 14.3 IPOX crossover operation for the operation sequence (with job sets J1 = {2, 3}, J2 = {1, 4})
Fig. 14.4 MPX crossover operation for the machine assignment
Step 1: Select the operation sequence vectors of the parents P1 and P2, and randomly divide all the jobs into two sets J1 and J2.
Step 2: Copy the elements of P1 that belong to J1 to C1 in the same positions, and copy the elements of P2 that belong to J2 to C2 in the same positions.
Step 3: Copy the elements of P2 that belong to J2 to C1 in the same order, and copy the elements of P1 that belong to J1 to C2 in the same order.
Step 4: Select the machine assignment vectors of the parents P1 and P2.
Step 5: Generate a set Rand0_1 consisting of the integers 0 and 1, with the same length as the chromosomes.
Step 6: Exchange the machine assignment numbers of P1 and P2 at each position where Rand0_1 contains a 1.
Step 7: Copy the remaining machine numbers in the same positions to the children C1 and C2.
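The two crossovers can be sketched as follows; details beyond the listed steps (for example, how the fill order is implemented) are our own reading, not the chapter's code.

```python
# Sketch of IPOX (operation-sequence crossover with a random job split) and
# MPX (machine-assignment crossover driven by a 0/1 mask).

def ipox(p1, p2, jobs1):
    """Each child keeps one job set in place and fills the remaining positions
    with the other parent's genes in their original order (Steps 1-3)."""
    jobs1 = set(jobs1)

    def child(keep_from, fill_from, keep):
        fill = iter(g for g in fill_from if g not in keep)
        return [g if g in keep else next(fill) for g in keep_from]

    return child(p1, p2, jobs1), child(p2, p1, set(p1) - jobs1)

def mpx(m1, m2, mask):
    """Swap machine genes wherever the mask holds a 1 (Steps 5-7)."""
    c1 = [b if bit else a for a, b, bit in zip(m1, m2, mask)]
    c2 = [a if bit else b for a, b, bit in zip(m1, m2, mask)]
    return c1, c2

p1 = [1, 2, 1, 2, 3, 3]
p2 = [2, 3, 1, 3, 1, 2]
c1, c2 = ipox(p1, p2, jobs1={2, 3})
print(c1, c2)                                       # both remain valid sequences
print(mpx([1, 1, 4, 3], [2, 3, 1, 3], [0, 1, 0, 1]))
```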
14.4.7 Mutation Operator

In order to improve the local search ability and to keep the diversity of the population, we adopt a mutation operation in our algorithm. For the operation sequence, the mutation operator is implemented as shown in Fig. 14.5; for the machine assignment, it is implemented as shown in Fig. 14.6. The mutation operation works as follows:
Fig. 14.5 Mutation of the operation sequence vector
Fig. 14.6 Mutation of the machine assignment vector
Step 1: Select the operation sequence vector of one parent chromosome.
Step 2: Choose a gene randomly and insert it at a position before a randomly chosen operation.
Step 3: Select the machine assignment vector.
Step 4: Choose two genes randomly, and replace each machine number with another machine from the set of capable machines for the corresponding operation.
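The two mutation moves can be sketched as follows; the data structures are illustrative, not from the chapter.

```python
import random

# Sketch of the two mutation moves: an insertion move on the operation
# sequence, and a machine re-selection on the assignment vector.

def mutate_sequence(seq, rng=random):
    s = list(seq)
    g = s.pop(rng.randrange(len(s)))          # remove a random gene...
    s.insert(rng.randrange(len(s) + 1), g)    # ...and re-insert it elsewhere
    return s

def mutate_assignment(assign, options, rng=random):
    a = list(assign)
    for idx in rng.sample(range(len(a)), 2):  # two random positions
        alts = [m for m in options[idx] if m != a[idx]]
        if alts:
            a[idx] = rng.choice(alts)         # switch to another capable machine
    return a

random.seed(3)
print(mutate_sequence([1, 2, 1, 2, 3]))
print(mutate_assignment(["M1", "M3"], [["M1", "M2"], ["M3", "M5"]]))
```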
14.4.8 Main Algorithm

The algorithm keeps a fixed population size and is executed in the following steps:
Step 1: Initialize the parameters and the initial antibodies.
Step 2: Calculate the density and the aggregation fitness of each antibody.
Step 3: Find all the non-dominated solutions in the current population.
Step 4: Evaluate each individual by its aggregation fitness.
Step 5: If the termination condition is met, terminate the search; otherwise, go to the next step.
Step 6: Copy the 1% of individuals with the best fitness, and select the remaining parents with the tournament selection strategy.
Step 7: If the fitness values of two parent solutions are not equal, perform crossover with probability pc and select the two best individuals as the child solutions; then perform mutation with probability pm. After this step, the children solutions are generated.
Step 8: If the termination condition is satisfied, the algorithm ends; otherwise, go to Step 2.
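The overall loop of Steps 1-8 can be sketched as a skeleton in which every problem-specific piece is passed in as a stub; the toy instantiation below simply minimizes the number of ones in a bit vector and is not an FJSP.

```python
import random

# High-level skeleton of Steps 1-8; this is an outline of the control flow,
# not the chapter's implementation.

def run_moga(init, evaluate, aggregate, variation, n_gen=200, pop_size=20):
    pop = [init() for _ in range(pop_size)]                  # Step 1
    for _ in range(n_gen):
        fit = [aggregate(ind, pop) for ind in pop]           # Steps 2-4
        elite_n = max(1, pop_size // 100)
        order = sorted(range(pop_size), key=fit.__getitem__)
        children = [pop[i] for i in order[:elite_n]]         # Step 6: elitism
        while len(children) < pop_size:                      # Step 7: variation
            children.append(variation(pop, fit))
        pop = children                                       # Step 8: next loop
    return pop, [evaluate(ind) for ind in pop]

# Toy instantiation: minimize the number of ones in a bit vector.
random.seed(0)
pop, fit = run_moga(
    init=lambda: [random.randint(0, 1) for _ in range(8)],
    evaluate=sum,
    aggregate=lambda ind, _pop: sum(ind),
    variation=lambda pop, fit: [min(g, random.randint(0, 1))
                                for g in random.choice(pop)],
    n_gen=30, pop_size=10)
print(min(fit))  # the all-zero vector is found quickly
```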
14.5 Experimental Results

To test the performance of the algorithm, four representative instances (problem 4 × 5, problem 8 × 8, problem 10 × 10, and problem 15 × 10) taken from Kacem et al. [1-3] without release dates, and ten instances MK01-MK10 taken from Brandimarte [21], are used in this experiment. In our algorithm, the population size is 200, the maximal number of generations is 200, the crossover probability is 0.8, and the mutation probability is 0.1.
Table 14.2 Comparison of results on problem 4 × 5

           PSO+SA   HPSO     SM                          MOGA
F1         -        11       11       12       11       11      12
F2         -        32       34       32       32       34      32
F3         -        10       9        8        10       9       8
Times (s)  -        0.34     11.219   13.625   12.343   5.8
To illustrate the efficiency of our algorithm, we compare the results with PSO+SA (taken from Xia [5]), HPSO (taken from Zhang [10]), and SM (taken from Xing [11]). Our algorithm is named MOGA for short. The computational results and comparisons are given in Tables 14.2, 14.3, 14.4, 14.5, and 14.6. The system was implemented in C++ and run on a PC with a 2 GHz CPU and 2 GB RAM. The reported time of our algorithm is the average computational time over ten runs, and the reported time of SM is the minimum time for the obtained solution, taken from Xing [11]. The symbol "-" means the time was not given in the corresponding paper. Compared with HPSO, our algorithm performs better than or the same as it on all the problems, although at a relatively larger computational cost. Compared with SM, the computational time of our algorithm is much lower. In Table 14.4, for problem 10 × 10, the results are almost the same as those of SM, but we obtain a different Pareto-optimal solution. In Table 14.3, for problem 8 × 8, although two of the Pareto-optimal solutions obtained by SM are also found by our algorithm, it obtains one new Pareto-optimal solution. For problem 15 × 10, two new Pareto-optimal solutions are obtained in Table 14.5. In Table 14.6, except for MK03, our algorithm obtains at least one solution dominating the solution obtained by SM for problems MK01-10, and some other Pareto-optimal solutions are found simultaneously. For problem MK03, our algorithm obtains one solution that is better on objectives F1 and F3 but slightly worse on objective F2, along with some other Pareto-optimal solutions. For all the problems, the computational time is much lower. In summary, these results show the efficiency of our algorithm: it can obtain good solutions at low computational cost.
Moreover, we test our algorithm on the problems from Dauzère-Pérès and Paulli [22], which aim to minimize the makespan. The results are shown in Table 14.7. M&G is the approach proposed by Mastrolilli and Gambardella [23]; no other multi-objective optimization algorithm is available for comparison on these problems. From the data in Table 14.7, our algorithm can find several Pareto-optimal solutions simultaneously with a low computational cost. By applying the fitness scheme based on Pareto-optimality to the genetic algorithm, our algorithm is capable of finding several optimal solutions simultaneously. Because the genetic algorithm is by nature a multi-point stochastic search method, it is well suited to multi-objective problems, although it can be time-consuming. Moreover,
Table 14.3 Comparison of results on problem 8 × 8

           PSO+SA   HPSO             SM                                    MOGA
F1         15       16      15      14       15       16       16        14      15      15      16
F2         75       73      75      77       75       73       77        77      81      75      73
F3         12       13      12      12       12       13       11        12      11      12      13
Times (s)  -        1.67            70.765   74.219   76.937   70.672    9.5
Table 14.4 Comparison of results on problem 10 × 10

           PSO+SA   HPSO    SM                          MOGA
F1         7        7       8        8        8        8       7       7       7
F2         44       43      42       42       41       42      42      41      45
F3         6        6       5        6        7        5       6       7       5
Times (s)  -        2.05    57.297   53.422   54.812   14.2
Table 14.5 Comparison of results on problem 15 × 10

           PSO+SA   HPSO    SM       MOGA
F1         12       11      11       11      12      11
F2         91       93      91       91      95      98
F3         11       11      11       11      10      10
Times (s)  -        10.88   194.98   87.5
the immune and entropy principle is used to keep the diversity of individuals, and advanced crossover and mutation operators are proposed to improve the efficiency of our multi-objective genetic algorithm.
14.6 Conclusions

Recently, the multi-objective flexible job shop scheduling problem has attracted many researchers' attention. The complexity of this problem has led to many heuristic approaches, with research mainly concentrated on hybrid and evolutionary algorithms. In this chapter, we put forward an efficient modified multi-objective genetic algorithm based on the immune and entropy principle for solving multi-objective flexible job shop scheduling problems. In our algorithm, a fitness scheme based on Pareto-optimality is applied, and efficient crossover and mutation operators are proposed to adapt to the special chromosome structure. Meanwhile, the selection pressure among similar individuals is decreased by incorporating the immune and entropy principle. The numerical experiments indicate the effectiveness of the proposed approach. However, a number of further works still need to be considered. In the proposed algorithm, an external archive could be constructed to store some of the non-dominated solutions produced during the search. Objectives beyond the three considered here can also be included in the FJSP. Furthermore, other heuristic algorithms, such as PSO and Ant Colony Optimization (ACO), could substitute for the GA to generate more efficient multi-objective optimization algorithms based on the immune and entropy principle.
Table 14.6 Experiment results on problems MK01–10 SM MK01
MK02
MK03
MK04
MK05
MOGA
F1
F2
F3
Times (min)
F1
F2
F3
Times (min)
42
162
42
4.78
42
158
39
0.49
44
154
40
43
155
40
28
204
68
177
155
852
352
702
28
204
67
177
3.02
26.14
17.74
8.26
40
169
36
26
151
26
27
146
27
29
145
27
29
143
29
31
141
31
33
140
33
204
855
199
204
871
144
204
882
135
204
884
133
213
850
199
214
849
210
221
847
199
222
848
199
231
848
188
230
848
177
66
345
63
65
362
63
63
371
61
62
373
61
61
382
60
60
390
59
73
350
55
74
349
54
74
348
55
90
331
76
173
683
173
175
682
175
183
677
183
185
676
185
179
679
179
0.75
4.75
1.76
2.34
(continued)
Table 14.6 (continued) SM MK06
MK07
MK08
MK09
MK10
MOGA
F1
F2
F3
Times (min)
F1
F2
F3
Times (min)
75
431
67
18.79
62
424
55
1.93
65
417
54
60
441
58
62
440
60
76
362
60
76
356
74
78
361
60
73
360
72
72
361
72
150
523
311
227
717
2524
2374
1989
150
523
299
221
5.68
67.67
77.76
122.52
100
330
90
139
693
139
140
686
138
144
673
144
151
667
151
157
662
157
162
659
162
166
657
166
523
2524
515
523
2534
497
524
2519
524
578
2489
578
587
2484
587
311
2290
299
310
3514
299
311
2287
301
314
2315
299
315
2283
299
332
2265
302
329
2266
301
328
2259
308
325
2275
299
224
1980
219
225
1976
211
233
1919
214
4.92
12.04
19.48
17.87
(continued)
Table 14.6 (continued) SM F1
MOGA F2
F3
Times (min)
F1
F2
F3
235
1897
218
235
1895
225
240
1905
215
240
1888
216
242
1913
214
246
1896
215
252
1884
224
256
1919
211
260
1869
215
266
1864
254
268
1858
264
276
1857
256
281
1854
268
217
2064
207
214
2082
204
Times (min)
Table 14.7 Experiment results on problems 01–18a M&G 01a
MOGA
Time(s)
F1
F1
F2
F3
2518
2568
11137
2505
2572
11137
2568
2594
11137
2554
02a
2231
2289
11137
2263
2313
11137
2238
03a
2229
2287
11137
2248
2256
11137
2252
2550
11090
2503
2569
11076
2565
2579
11080
2552
04a
05a
2503
2216
3095
11064
2727
2292
11077
2252
2293
11091
2242
2297
11054
2255
2315
11063
2272
2343
11050
2298
2358
11038
2322
122.5
153.4 174.0 124.2
142.4
(continued)
Table 14.7 (continued) M&G F1
06a
07a
2203
2283
MOGA
Time(s)
F1
F2
F3
2376
11022
2243
2904
10941
2620
2945
10941
2571
3056
10941
2507
2250
11009
2233
2254
10994
2223
2398
10973
2219
2437
10988
2280
2744
10850
2448
2902
10847
2439
2967
10839
2840
2450
16485
2413
2457
16485
2299
2484
16485
2289
16485
2102
08a
2069
2187 2171
16485
2104
09a
2066
2157
16485
2113
2144
16485
2119
10a
11a
12a
2291
2063
2034
2158
16485
2102
2461
16505
2433
2470
16537
2310
2478
16533
2330
2482
16499
2360
2501
16547
2265
2501
16528
2312
2547
16490
2476
3064
16464
2734
2182
16449
2170
2202
16476
2114
2210
16442
2113
2337
16377
2185
2874
16247
2389
2894
16247
2330
2962
16247
2312
2161
16295
2107
2168
16220
2130
2191
16355
2084
185.6
457.8
496.0 609.6
452.8
608.2
715.4
(continued)
Table 14.7 (continued) M&G F1
MOGA
Time(s)
F1
F2
F3
2210
16331
2103
2315
16292
2125
2366
16237
2105
2493
16124
2297
2631
16112
2309
2637
16113
2303
2683
16104
2397
13a
2260
2408
21610
2326
1439.4
14a
2167
2340
21610
2251
1743.2
2334
21610
2258
15a
2167
2285
21610
2247
2287
21610
2218
2447
21602
2354
2450
21590
2380
2487
21584
2454
2492
21576
2417
2540
21547
2396
2550
21545
2492
2568
21540
2428
3013
21478
2588
3106
21478
2548
2322
21433
2240
2322
21362
2280
2323
21454
2238
2343
21420
2224
2480
21344
2285
2528
21313
2231
2789
21198
2448
2808
21200
2303
2816
21197
2370
2267
21483
2235
2269
21408
2206
2320
21354
2208
2437
21311
2221
2531
21285
2310
2545
21282
2305
16a
17a
18a
2255
2141
2137
1997.1 1291.4
1708.0
1980.4
References

1. Kacem I, Hammadi S, Borne P (2002) Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans Syst Man Cybern Part C 32(1):1-13
2. Kacem I, Hammadi S, Borne P (2002) Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans Syst Man Cybern Part C 32(2):172
3. Kacem I, Hammadi S, Borne P (2002) Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic. Math Comput Simul 60(3-5):245-276
4. Baykasoğlu A, Özbakir L, Sönmez A (2004) Using multiple objective tabu search and grammars to model and solve multi-objective flexible job shop scheduling problems. J Intell Manuf 15(6):777-785
5. Xia WJ, Wu ZM (2005) An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems. Comput Ind Eng 48(2):409-425
6. Liu HB, Abraham A, Choi O, Moon SH (2006) Variable neighborhood particle swarm optimization for multi-objective flexible job-shop scheduling problems. Lect Notes Comput Sci 4247:197-204
7. Ho NB, Tay JC (2007) Using evolutionary computation and local search for solving multi-objective flexible job shop problems. In: Genetic and Evolutionary Computation Conference, GECCO 2007, London, pp 821-828
8. Tay JC, Ho NB (2008) Evolving dispatching rules using genetic programming for solving multi-objective flexible job-shop problems. Comput Ind Eng 54(3):453-473
9. Gao J, Gen M, Sun LY, Zhao XH (2007) A hybrid of genetic algorithm and bottleneck shifting for multiobjective flexible job shop scheduling problems. Comput Ind Eng 53(1):149-162
10. Zhang GH, Shao XY, Li PL, Gao L (2009) An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem. Comput Ind Eng 56(4):1309-1318
11. Xing LN, Chen YW, Yang KW (2009) An efficient search method for multi-objective flexible job shop scheduling problems. J Intell Manuf 20:283-293
12. Lei DM (2009) Multi-objective production scheduling: a survey. Int J Adv Manuf Technol 43(9-10):926-938
13. Zitzler E (1999) Evolutionary algorithms for multiobjective optimization: methods and applications. Dissertation, Swiss Federal Institute of Technology
14. Mahfoud SW (1995) Niching methods for genetic algorithms. Dissertation, University of Illinois at Urbana-Champaign
15. Cui XX, Li M, Fang TJ (2001) Study of population diversity of multiobjective evolutionary algorithm based on immune and entropy principles. In: Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, pp 1316-1321
16. Shimooka T, Shimizu K (2004) Artificial immune system for personal identification with finger vein pattern. Lect Notes Comput Sci 3214:511-518
17. Xiao RB, Cao PB, Liu Y (2007) Engineering immune computation. Science Press, Beijing (in Chinese)
18. Goldberg DE, Deb K (1991) A comparative analysis of selection schemes used in genetic algorithms. In: Rawlins G (ed) Foundations of genetic algorithms. Morgan Kaufmann, San Mateo, pp 69-93
19. Zhang CY, Rao YQ, Li PG, Shao XY (2007) Bilevel genetic algorithm for the flexible job-shop scheduling problem. Chin J Mech Eng 43(4):119-124 (in Chinese)
20. Zhang CY, Li PG, Rao YQ, Li SX (2005) A new hybrid GA/SA algorithm for the job shop scheduling problem. Lect Notes Comput Sci 3448:246-259
21. Brandimarte P (1993) Routing and scheduling in a flexible job shop by taboo search. Ann Oper Res 41:157-183
22. Dauzère-Pérès S, Paulli J (1997) An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search. Ann Oper Res 70(3):281-306
23. Mastrolilli M, Gambardella LM (2000) Effective neighborhood functions for the flexible job shop problem. J Sched 3(1):3-20
Chapter 15
An Effective Genetic Algorithm for Multi-objective IPPS with Various Flexibilities in Process Planning
15.1 Introduction

Decision makers cannot obtain a satisfactory result for the whole manufacturing system if process planning and scheduling are optimized independently. Integrated Process Planning and Scheduling (IPPS) can overcome these problems well. IPPS can bring significant improvements to manufacturing efficiency by removing resource conflicts, decreasing flow time and work-in-process, improving the utilization of production resources, and adapting to irregular shop-floor disturbances [1]. Therefore, it is important to integrate process planning and scheduling more closely to achieve a global optimum in a manufacturing system.

With the development of the market economy, competition among manufacturers has become increasingly intense. To enhance their competitiveness, manufacturers often need to meet the diverse needs of customers, such as faster processing and better quality. Meanwhile, enterprise managers want to reduce manufacturing costs and improve the utilization of machines. Considering only a single objective cannot meet the demands of real-world production [2]. Many objectives exist in IPPS, such as makespan, total workload of machines, maximal machine workload, and lateness. Decision makers always need to make a trade-off among different objectives when determining a final schedule.

The remainder of this chapter is organized as follows: the problem description of multi-objective IPPS is given in Sect. 15.2. The workflow of the proposed algorithm and its detailed components are described in Sect. 15.3. Experiments and discussions are given in Sect. 15.4, and Sect. 15.5 presents the conclusion and future work.
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_15
15.2 Multi-objective IPPS Description

15.2.1 IPPS Description

Suppose there are n jobs to be produced on m machines. Each job has various operations and alternative manufacturing resources. The aim of IPPS is to select suitable manufacturing resources for each job and to determine the processing sequence of the operations and the start time of each operation on each machine, while satisfying the precedence constraints among operations and achieving the corresponding objectives [3]. To describe the mathematical model clearly, the following assumptions are made first:

(1) Jobs and machines are independent of each other, and all jobs have the same priority.
(2) Each machine can handle only one operation at a time.
(3) Different operations of one job cannot be processed at the same time.
(4) An operation cannot be interrupted once its processing has started.
(5) All jobs and machines are available at time zero.
(6) Transport time is negligible.
(7) The setup time of an operation is independent of the operation sequence and is included in the processing time.

Based on these assumptions, the mathematical model of the multi-objective IPPS problem addressed in this chapter, adapted from [1], is stated as follows. The maximal completion time of machines (makespan), the Maximal Machine Workload (MMW), and the Total Workload of Machines (TWM) are taken into account, and the aim is to minimize these three objectives simultaneously. The notations used in the model are:

$n$: total number of jobs;
$m$: total number of machines;
$g_i$: number of alternative process plans of job $i$;
$p_{il}$: number of operations in the $l$th alternative process plan of job $i$;
$O_{ijl}$: the $j$th operation in the $l$th alternative process plan of job $i$;
$k$: index of an alternative machine for $O_{ijl}$;
$t_{ijlk}$: processing time of operation $O_{ijl}$ on machine $k$;
$c_{ijlk}$: earliest completion time of operation $O_{ijl}$ on machine $k$;
$w_k$: workload of machine $k$;
$A$: a very large positive number.

The decision variables are:

$X_{il} = 1$ if the $l$th flexible process plan of job $i$ is selected, and 0 otherwise;
$Y_{ijlpqsk} = 1$ if operation $O_{ijl}$ precedes operation $O_{pqs}$ on machine $k$, and 0 otherwise;
$Z_{ijlk} = 1$ if machine $k$ is selected for $O_{ijl}$, and 0 otherwise.
The following three objectives are optimized simultaneously.

(1) $f_1$: minimizing the maximal completion time of machines (makespan):

$\min f_1 = \text{makespan} = \max c_{ijlk}, \quad i \in [1, n],\ j \in [1, p_{il}],\ l \in [1, g_i],\ k \in [1, m]$  (15.1)

(2) $f_2$: minimizing the Maximal Machine Workload (MMW):

$\min f_2 = \text{MMW} = \max w_k, \quad k \in [1, m]$  (15.2)

(3) $f_3$: minimizing the Total Workload of Machines (TWM):

$\min f_3 = \text{TWM} = \sum_{k=1}^{m} w_k$  (15.3)

Subject to:

(1) Operation constraint: different operations of one job cannot be processed at the same time.

$(c_{ijlk_0} Z_{ijlk_0} X_{il}) - (c_{i(j-1)lk_1} Z_{i(j-1)lk_1} X_{il}) + A(1 - X_{il}) \ge t_{ijlk_0} Z_{ijlk_0} X_{il},$
$i \in [1, n],\ j \in [1, p_{il}],\ l \in [1, g_i],\ k_0, k_1 \in [1, m]$  (15.4)

(2) Machine constraint: each machine can handle only one operation at a time.

$(c_{pqsk} Z_{pqsk} X_{ps}) - (c_{ijlk} Z_{ijlk} X_{il}) + A(1 - X_{il}) + A(1 - X_{ps}) + A(1 - Y_{ijlpqsk} Z_{pqsk} X_{ps} Z_{ijlk} X_{il}) \ge t_{pqsk} Z_{pqsk} X_{ps}$
$(c_{ijlk} Z_{ijlk} X_{il}) - (c_{pqsk} Z_{pqsk} X_{ps}) + A(1 - X_{il}) + A(1 - X_{ps}) + A(Y_{ijlpqsk} Z_{pqsk} X_{ps} Z_{ijlk} X_{il}) \ge t_{ijlk} Z_{ijlk} X_{il},$
$i, p \in [1, n],\ j \in [1, p_{il}],\ q \in [1, p_{ps}],\ l \in [1, g_i],\ s \in [1, g_p],\ k \in [1, m]$  (15.5)

(3) Process plan constraint: only one alternative process plan can be selected for each job $i$.

$\sum_{l=1}^{g_i} X_{il} = 1, \quad i \in [1, n]$  (15.6)
Table 15.1 Processing information for 3 jobs machined on 5 machines (alternative operation sets of the same feature are listed on separate rows)

Jobs   Features  Candidate operations  Candidate machines   Process time      Precedence constraints
Job 1  F1        O1                    M1, M2, M3           4, 3, 6           Before F2, F3
                 O2-O3                 M2, M3 / M1, M2      2, 3 / 3, 5
       F2        O4                    M2, M3, M4           8, 10, 9
                 O5-O6                 M3, M5 / M3, M4      4, 3 / 5, 7
       F3        O7                    M1, M4               8, 9              Before F4
       F4        O8-O9                 M3, M5 / M1, M5      7, 9 / 5, 8
                 O10                   M4, M5               14, 19
       F5        O11                   M1, M3, M4, M5       20, 17, 19, 23
       F6        O12                   M1, M4, M5           18, 13, 17
Job 2  F1        O1-O2                 M2, M3 / M4          3, 6 / 4          Before F3
       F2        O3                    M1, M2               3, 5              Before F3
       F3        O4-O5                 M1, M3 / M2, M5      10, 9 / 7, 12
                 O6                    M3, M4               7, 12
                 O7-O8                 M1, M3 / M2          5, 6 / 10
Job 3  F1        O1                    M2, M4               4, 7              Before F3, F4
       F2        O2                    M1, M5               10, 8             Before F4
       F3        O3-O4                 M3, M4 / M1, M2      14, 15 / 13, 16
       F4        O5                    M4, M5               12, 14
                 O6-O7                 M1, M3 / M2, M3      17, 19 / 20, 16
Table 15.1 gives the processing information for 3 jobs machined on 5 machines. Each job has various features, alternative operations, and alternative machines, and precedence constraints exist among different features. The outcome of process planning is a specific process plan for each job. The scheduling system then arranges the jobs over time according to the specific process plan of each job.
15.2.2 Multi-objective Optimization

The general Multi-objective Optimization Problem (MOP) is defined as follows:

Minimize $f(x) = \{f_1(x), f_2(x), \ldots, f_k(x)\}$
Subject to: $g_j(x) \le 0,\ j = 1, 2, \ldots, m$, with $x \in X$ and $f(x) \in Y$  (15.7)
where $k$ is the number of objectives, $m$ is the number of inequality constraints, $x$ is the decision variable vector, $f(x)$ is the objective vector, $X$ is the decision space, and $Y$ is the objective space. In an MOP, with all objectives minimized, for decision variables $a$ and $b$, $a$ dominates $b$ iff

$f_i(a) \le f_i(b)\ \forall i \in \{1, 2, \ldots, k\}$ and $f_j(a) < f_j(b)$ for at least one $j \in \{1, 2, \ldots, k\}$  (15.8)

$a$ and $b$ are non-dominated with each other iff neither of them dominates the other, i.e.,

$\exists i \in \{1, \ldots, k\}: f_i(a) < f_i(b)$ and $\exists j \in \{1, \ldots, k\}: f_j(b) < f_j(a)$  (15.9)

A solution $x^*$ is called Pareto-optimal if no solution in the decision space $X$ dominates $x^*$. All Pareto-optimal solutions form the Pareto-optimal set. The target of an MOP is thus to find a set of Pareto-optimal solutions instead of the single optimum of a single-objective optimization problem.
15.3 Proposed Genetic Algorithm for Multi-objective IPPS

15.3.1 Workflow of the Proposed Algorithm

An effective genetic algorithm is proposed to solve multi-objective IPPS problems with various flexibilities in process planning. The workflow of the proposed algorithm is given in Fig. 15.1, and its main steps are as follows.

Step 1: Set the parameters of the proposed algorithm, including the size of the process planning population (PopSizePP), the size of the scheduling population (PopSizeS), the size of the Pareto set (ParetoSet), the maximum generations for IPPS (MaxGenIPPS), for process planning (MaxGenPP), and for scheduling (MaxGenS), the crossover probabilities for process planning (PPc) and scheduling (SPc), and the mutation probabilities for process planning (PPm) and scheduling (SPm).
Step 2: Generate n initial populations of flexible process plans, one for each of the n jobs.
Step 3: Generate a new population for each job by GA.
Step 4: For each job, select a process plan from the corresponding population randomly.
Step 5: According to the determined process plan of each job, generate the initial population for scheduling.
Step 6: Optimize the schedule by GA and output the best scheduling solution.
Fig. 15.1 Workflow of the proposed algorithm
Step 7: Compare the obtained solution with the solutions in the Pareto set, and use the Pareto set update scheme (given in Sect. 15.3.4) to update them.
Step 8: If the termination criterion is satisfied, output the solutions in the Pareto set; otherwise, go to Step 2.

The detailed genetic components for process planning and scheduling are given in Sects. 15.3.2 and 15.3.3.
Fig. 15.2 One individual in process planning population
15.3.2 Genetic Components for Process Planning

15.3.2.1 Encoding and Decoding Scheme
The aim of process planning in this research is to provide various near-optimal process plans to the scheduling system. To deal with the three kinds of flexibility in process planning effectively, each individual in the process planning population contains three parts of different lengths [4]. The first part is the feature sequence, i.e., the machining sequence of all features of one job. The second part is the selected operation sequence: the element in the ith position represents the selected candidate operation set of the ith feature of this job. The third part is the selected machine sequence: the element in the jth position represents the selected candidate machine of the jth operation of this job. Figure 15.2 gives one feasible individual for Job 1 in Table 15.1. This job has 6 features and 12 operations, so the feature sequence and the candidate operation sequence have length 6, and the candidate machine sequence has length 12. From the feature sequence, the machining sequence of the features of Job 1 is F1-F3-F4-F2-F6-F5. In the candidate operation sequence, the second element is 2, which means the second feature (F2) chooses its second candidate operation set (O5-O6). In the candidate machine sequence, the first element is 3, which means the first operation (O1) chooses its third candidate machine (M3). Based on this encoding scheme, the individual can be decoded easily. From the candidate operation sequence, the selected operations of each feature are F1(O1), F2(O5-O6), F3(O7), F4(O10), F5(O11), F6(O12). From the candidate machine sequence, the selected machines of these operations are O1(M3), O5(M5), O6(M3), O7(M1), O10(M4), O11(M5), and O12(M5). So the process plan determined by this individual is O1(M3)-O7(M1)-O10(M4)-O5(M5)-O6(M3)-O12(M5)-O11(M5).
15.3.2.2 Initial Population and Fitness Evaluation
Using the encoding scheme described above, the feature sequence is randomly arranged, and the candidate operation sequence and candidate machine sequence are randomly determined from the corresponding candidates. Since there are precedence
constraints among features, some feature sequences in the initial population may be infeasible. In this chapter, the constraint adjustment method proposed in [5] is adopted to regulate an infeasible feature sequence into a feasible one. The processing time of one job is used directly as the fitness: the shorter the processing time, the better the individual. After the optimization of process planning, the total workload of machines and the maximal machine workload are determined.
15.3.2.3 Genetic Operators for Process Planning
Crossover operator: an individual in the process planning population has three parts, so three crossover operators are developed for the feature sequence, the candidate operation sequence, and the candidate machine sequence. The crossover operator for the feature sequence works as in Fig. 15.3. First, select two individuals P1 and P2 from the current population and initialize two empty offspring O1 and O2. Second, select two crossover points randomly to divide P1 and P2 into three parts. Third, copy the middle segments of P1 and P2 to the same positions in O1 and O2. Finally, delete from P2 the elements already present in O1, and append the remaining elements of P2 to the rest of the positions in O1; O2 is obtained in the same way. This crossover operator maintains the precedence constraints among features, so the new individuals it produces are always feasible solutions.

Fig. 15.3 Crossover operator for feature sequence

The crossover operator for the candidate operation sequence is shown in Fig. 15.4: select two crossover points randomly, then create two offspring O1 and O2 by swapping the middle parts of P1 and P2. The crossover operator for the candidate machine sequence follows the same procedure, as shown in Fig. 15.5.

Fig. 15.4 Crossover operator for selected operation sequence

Fig. 15.5 Crossover operator for selected machine sequence

Mutation operator: for the feature sequence, the mutation operation selects two positions at random and swaps their elements. If the new sequence is infeasible, the constraint adjustment method regulates it into a feasible one. For the candidate operation sequence and the candidate machine sequence, the mutation operation selects a position randomly and changes its element to another alternative operation or machine from the corresponding candidate set.

Selection operator: tournament selection is used. Select two individuals from the population randomly and generate a random value between 0 and 1; if the value is less than a given probability, select the better individual, otherwise select the other one. In this chapter, this probability is set to 0.8.
15.3.3 Genetic Components for Scheduling

15.3.3.1 Encoding and Decoding

For each chromosome in the scheduling population, the operation-based encoding method is used. As in the example described in Sect. 15.2, suppose that after the optimization of process planning job 1 has 6 operations, job 2 has 4 operations, and job 3 has 5 operations. One feasible scheduling solution can then be encoded as [1 1 2 3 2 1 1 2 3 2 1 1 3 3 3]. The second element of the chromosome is 1, and 1 has appeared twice up to that position, so this element represents the second operation of job 1. Each chromosome is decoded into an active schedule in the decoding procedure [6].
15.3.3.2 Initial Population and Fitness Evaluation

After the optimization of process planning, the number of operations of each job is determined, and each individual in the population is encoded randomly according to the process planning results. The maximal machine workload and the total workload of machines are already fixed after process planning, so in the scheduling optimization makespan is used directly as the fitness criterion. The makespan is obtained after decoding an individual into an active schedule.
15.3.3.3 Genetic Operators for Scheduling

Crossover operator: the crossover operator is Precedence Operation Crossover (POX), which can be referred to from [6] and works as in Fig. 15.6. First, select two individuals P1 and P2 from the current population and initialize two empty offspring O1 and O2. Second, the 3 jobs are divided into two subsets: Job 2 forms JobSet 1, and Jobs 1 and 3 form JobSet 2. Third, copy the JobSet 1 elements of P1 to the same positions in O1, and the JobSet 1 elements of P2 to the same positions in O2. Finally, fill the remaining positions of O1 with the JobSet 2 elements of P2 in their order of appearance, and the remaining positions of O2 with the JobSet 2 elements of P1 in their order of appearance.
Fig. 15.6 POX crossover operation
Fig. 15.7 Mutation operations for scheduling
Mutation operator: the mutation operator works as in Fig. 15.7. First, select an individual P from the population randomly. Second, select a pair of different elements in P. Finally, the offspring O is obtained by swapping the selected elements.

Selection operator: the selection operator for scheduling is the same as the selection operator for process planning.
15.3.4 Pareto Set Update Scheme

The result of a multi-objective optimization problem is not a single solution but a Pareto-optimal set. A Pareto set (archive) is used to store and maintain the solutions obtained during the optimization procedure; the solutions in it are mutually non-dominated. When a new solution is obtained, the following operations update the Pareto set:

(1) If the new solution is dominated by any solution in the Pareto set, it is discarded.
(2) If some solutions in the Pareto set are dominated by the new solution, they are removed from the Pareto set and the new solution is added.
(3) If the new solution is non-dominated with all the solutions in the Pareto set and the set is not full, it is added. If the set is full, the solution with the minimum crowding distance is removed from the archive and the new solution is added.
The crowding distance of each solution in the Pareto set can be computed by the method used in NSGA-II [7].
15.4 Experimental Results and Discussions

To evaluate the performance of the proposed algorithm, two different experiments are conducted. To compare with other algorithms, three instances of different scales from the literature are used in Experiment 1. Due to the lack of benchmark instances of multi-objective IPPS with various flexibilities in process planning, Experiment 2 is designed from six typical parts with such flexibilities taken from the previous literature. The proposed algorithm was coded in C++ and run on a computer with a 2.0 GHz Core(TM) 2 Duo CPU. Its parameters, selected after extensive trials, are shown in Table 15.2.
15.4.1 Experiment 1

There are three instances of different scales in Experiment 1. The first instance, obtained from [8], has 5 jobs and 5 machines with 20 operations. The second instance, obtained from [9], has 8 jobs and 8 machines with 37 operations. The third instance, also obtained from [9], has 20 jobs and 5 machines with 80 operations. The detailed part data of the three instances can be referred to from [8] and [9].

The comparisons among the grammatical approach, GRASP, and the proposed algorithm for the first instance are shown in Table 15.3. The comparisons between GRASP and the proposed algorithm for the second and third instances are shown in Tables 15.4 and 15.5, respectively. It is clear that the proposed algorithm obtains several Pareto-optimal solutions instead of the single solution obtained by the methods in the literature. The results of the proposed algorithm are obtained by running the algorithm 20 times; the results of the grammatical approach are obtained from [8], and the results of GRASP are obtained from [9].

Table 15.2 Parameters of the proposed algorithm

Parameter    Value   Parameter   Value
PopSizePP    100     PopSizeS    200
MaxGenPP     10      MaxGenS     100
PPc          0.80    SPc         0.80
PPm          0.10    SPm         0.05
MaxGenIPPS   100     ParetoSet   10
Table 15.3 Comparisons among three algorithms for the first instance

Algorithm             Makespan  MMW  TWM
Grammatical approach  394       328  770
GRASP algorithm       242       217  750
Proposed algorithm    212       188  721
                      198       193  722
                      207       187  737
                      238       172  730
                      210       182  735
                      211       199  718
                      191       172  745
                      218       187  731
                      233       197  719
                      226       181  739

Table 15.4 Comparisons between two algorithms for the second instance

Algorithm           Makespan  MMW  TWM
GRASP algorithm     253       237  1189
Proposed algorithm  234       211  1146
                    214       199  1163
                    218       200  1159
                    236       181  1164
                    233       187  1153
                    213       207  1149
                    236       200  1142
                    251       189  1137
                    228       221  1139
                    236       203  1135

Table 15.5 Comparisons between two algorithms for the third instance

Algorithm           Makespan  MMW  TWM
GRASP algorithm     924       889  2963
Proposed algorithm  806       806  2836
                    708       708  2960
                    747       747  2923
                    784       784  2856
                    871       871  2835
                    883       883  2830
Table 15.6 Process plans of the second Pareto-optimal solution for the first instance

Jobs   Operation sequence  Machine sequence
Job 1  5-4-2-3             1-4-2-5
Job 2  1-2-4               3-2-5
Job 3  3-5-1-4             2-2-3-4
Job 4  4-3-5-2             4-5-2-4
Job 5  4-3-1               4-5-3

Table 15.7 Process plans of the second Pareto-optimal solution for the second instance

Jobs   Operation sequence  Machine sequence
Job 1  4-5-2-3             4-6-2-2
Job 2  1-2-4               6-2-7
Job 3  1-5-3-4             3-3-8-1
Job 4  5-3-2-4             2-8-1-5
Job 5  1-4-3               3-4-5
Job 6  5-3-1-4             5-8-1-4
Job 7  4-3-5-2             5-4-2-2
Job 8  4-1-3               7-6-5

Table 15.8 Process plans of the second Pareto-optimal solution for the third instance

Jobs    Operation sequence  Machine sequence
Job 1   4-5-2-3             4-2-2-1
Job 2   1-4-2               3-4-2
Job 3   1-5-3-4             3-5-2-5
Job 4   3-2-5-4             5-4-2-4
Job 5   4-1-3               4-3-5
Job 6   3-5-2-4             1-2-2-4
Job 7   4-2-1               5-2-3
Job 8   3-5-1-4             2-2-3-2
Job 9   3-2-5-4             5-2-4-5
Job 10  4-1-3               2-3-3
Job 11  3-5-2-4             1-3-2-4
Job 12  2-1-4               2-3-5
Job 13  5-3-1-4             5-2-1-4
Job 14  3-2-5-4             5-5-2-4
Job 15  4-3-1               4-5-3
Job 16  4-5-2-3             4-2-2-4
Job 17  2-1-4               2-3-5
Job 18  5-3-1-4             5-2-1-3
Job 19  5-3-2-4             2-5-4-4
Job 20  4-1-3               1-3-5
From the comparisons in Tables 15.3, 15.4, and 15.5, all the Pareto-optimal solutions obtained by the proposed algorithm dominate the solutions obtained by the grammatical approach and the GRASP algorithm. The detailed process plans of the second Pareto-optimal solution for the first, second, and third instances are given in Tables 15.6, 15.7, and 15.8, respectively. The corresponding Gantt charts are shown in Figs. 15.8, 15.9, and 15.10, respectively.
Fig. 15.8 Gantt chart of the second Pareto-optimal solution for the first instance
Fig. 15.9 Gantt chart of the second Pareto-optimal solution for the second instance
15.4.2 Experiment 2

Due to the lack of benchmark instances of the multi-objective IPPS problem with various flexibilities in process planning, we design a problem based on 6 typical jobs selected from the previous literature. Job 1, Job 2, and Job 3 are acquired from [10].

Fig. 15.10 Gantt chart of the second Pareto-optimal solution for the third instance

The detailed processing information of Job 1, Job 2, and Job 3 used in this experiment is given in Tables 15.9, 15.10, and 15.11, respectively. Job 4, obtained from [11], contains 9 features and 13 operations. Job 5, obtained from [12], contains 7 features and 9 operations. Job 6, acquired from [13], contains 15 features and 16 operations. The detailed machining information of Job 4, Job 5, and Job 6 can be referred to from [4]. Suppose that these 6 jobs are machined on 5 machines in a workshop. Table 15.12 shows the Pareto-optimal solutions obtained by the proposed algorithm for Experiment 2 over 20 runs. The detailed process plans of the second and the last Pareto-optimal solutions for Experiment 2 are given in Tables 15.13 and 15.14, respectively. The corresponding Gantt charts are shown in Figs. 15.11 and 15.12, respectively.
15.4.3 Discussions

From all of the above experimental results, it is clear that the three objectives of the IPPS problem considered in this chapter are conflicting. In Experiment 2, Tables 15.9, 15.10, and 15.11 show that almost all the operations of Job 1, Job 2, and Job 3 have a shorter processing time on machine 4. If all the operations were machined on machine 4, TWM would be smaller, but makespan would be longer. For example, the second Pareto-optimal solution has a smaller TWM than the last Pareto-optimal solution in Experiment 2, and Figs. 15.11 and 15.12 show that more operations are machined on machine 4 in the second solution than in the last one, so more operations are waiting to be processed on machine 4.
Table 15.9 Processing information of Job 1 with 14 features and 20 operations

Features  Candidate operations  Candidate machines  Processing time   Precedence constraints
F1        O1           M2, M3, M4          40, 40, 30        Before all features
F2        O2           M2, M3, M4          40, 40, 30        Before F10, F11
F3        O3           M2, M3, M4          20, 20, 15
F4        O4           M1, M2, M3, M4      12, 10, 10, 8
F5        O5           M2, M3, M4          35, 35, 27        Before F4, F7
F6        O6           M2, M3, M4          15, 15, 12        Before F10
F7        O7           M2, M3, M4          30, 30, 23        Before F8
F8        O8-O9-O10    M1, M2, M3, M4 /    22, 18, 18, 14 /
                       M2, M3, M4 /        10, 10, 8 /
                       M2, M3, M4, M5      10, 10, 8, 12
F9        O11          M2, M3, M4          15, 15, 12        Before F10
F10       O12-O13-O14  M1, M2, M3, M4 /    48, 40, 40, 30 /  Before F11, F14
                       M2, M3, M4 /        25, 25, 19 /
                       M2, M3, M4, M5      25, 25, 19, 30
F11       O15-O16      M1, M2, M3, M4 /    27, 22, 22, 17 /
                       M2, M3, M4          20, 20, 15
F12       O17          M2, M3, M4          16, 16, 12
F13       O18          M2, M3, M4          35, 35, 27
F14       O19-O20      M2, M3, M4 /        12, 12, 9 /       Before F4, F12
                       M2, M3, M4, M5      12, 12, 9, 15
However, the other machines have a lot of idle time in Fig. 15.11. As a result, the makespan of the second solution is longer than that of the last solution. The proposed algorithm can optimize these conflicting objectives simultaneously and help decision makers make a trade-off among them when determining a final schedule.

Based on the results of Experiment 1, the proposed algorithm obtains more and better Pareto-optimal solutions than the grammatical approach and the GRASP algorithm, which shows a satisfactory improvement over previous research works. The problem presented in Experiment 2 considers various flexibilities in process planning simultaneously during the whole optimization procedure. Each job has many different process plans according to its processing information, so this problem is much more complex than the instances in Experiment 1; on the other hand, it is much closer to a realistic production process. The proposed algorithm can also obtain good solutions to Experiment 2 effectively. The reasons for the proposed algorithm's superior performance in solving the multi-objective IPPS problem are as follows.
Table 15.10 Processing information of Job 2 with 15 features and 16 operations

Features  Candidate operations  Candidate machines  Processing time   Precedence constraints
F1        O1     M1, M2, M3, M4      12, 10, 10, 8     Before F2
F2        O2     M2, M3, M4          20, 20, 15
F3        O3     M2, M3, M4          18, 18, 14        Before F4
F4        O4     M2, M3, M4          16, 16, 12
F5        O5     M2, M3, M4          15, 15, 11        Before F7
F6        O6-O7  M1, M2, M3, M4 /    30, 25, 25, 19 /  Before F7
                 M2, M3, M4          25, 25, 19
F7        O8     M1, M2, M3, M4      14, 12, 12, 9
F8        O9     M2, M3, M4          15, 15, 11
F9        O10    M1, M2, M3, M4      10, 8, 8, 6
F10       O11    M2, M3, M4          10, 10, 8         Before F11
F11       O12    M2, M3, M4          10, 10, 8         Before F9
F12       O13    M1, M2, M3, M4      10, 8, 8, 6
F13       O14    M2, M3, M4          16, 16, 12        Before F14
F14       O15    M1, M2, M3, M4      10, 8, 8, 6
F15       O16    M1, M2, M3, M4      36, 30, 30, 23    Before all features
Table 15.11 Processing information of Job 3 with 11 features and 14 operations

Features  Candidate operations  Candidate machines  Processing time   Precedence constraints
F1        O1          M2, M3, M4          20, 20, 15        Before all features
F2        O2          M2, M3, M4          20, 20, 15        Before F3-F11
F3        O3          M2, M3, M4          15, 15, 11        Before F10, F11
F4        O4          M1, M2, M3, M4      15, 15, 11, 18    Before F10, F11
F5        O5          M2, M3, M4          15, 15, 11        Before F10, F11
F6        O6          M2, M3, M4          15, 15, 11        Before F10, F11
F7        O7          M2, M3, M4          15, 15, 11
F8        O8          M2, M3, M4          25, 25, 19
F9        O9-O10-O11  M1, M2, M3, M4 /    30, 25, 25, 19 /
                      M2, M3, M4 /        20, 20, 15 /
                      M2, M3, M4, M5      20, 20, 15, 24
F10       O12-O13     M1, M2, M3, M4 /    10, 8, 8, 6 /
                      M2, M3, M4          8, 8, 6
F11       O14         M1, M2, M3, M4      6, 5, 5, 4        Before F7, F8
Table 15.12 Pareto-optimal solutions obtained by the proposed algorithm for Experiment 2

Algorithm           Makespan  MMW  TWM
Proposed algorithm  617       617  1522
                    599       599  1528
                    540       520  1534
                    520       511  1541
                    511       511  1551
                    494       484  1552
                    494       469  1562
                    502       453  1568
                    460       439  1569
                    459       432  1579
Table 15.13 Process plans for the second Pareto-optimal solution in Experiment 2

Jobs    Operation sequence                                   Machine sequence
Job 1   1-11-5-7-6-18-4-3-2-12-13-14-19-20-8-9-10-15-16-17   4-4-4-4-2-4-2-4-4-4-4-4-2-4-1-4-4-3-4-4
Job 2   16-5-13-9-14-6-7-11-3-15-12-8-1-2-4-10               4-4-4-4-4-4-4-2-4-4-3-2-2-4-4-2
Job 3   1-2-5-3-6-9-10-11-8-4-14-7-12-13                     2-4-3-4-2-4-4-4-4-1-4-2-4-2
Job 4   4-5-6-13-2-3-10-11-1-7-8-9-12                        1-1-2-1-2-1-1-2-5-5-1-2-1
Job 5   1-3-5-6-7-8-4-2-9                                    2-2-3-5-5-4-5-4-5
Job 6   2-12-3-14-1-5-6-7-11-18-15-13-4-16-17-8-9-10         4-1-1-1-2-2-5-1-1-2-3-5-1-1-1-1-1-2
Table 15.14 Process plans for the last Pareto-optimal solution in Experiment 2

Jobs    Operation sequence                                   Machine sequence
Job 1   1-3-18-6-17-11-5-7-4-2-12-13-14-15-16-8-9-10-19-20   4-3-2-3-3-2-4-3-2-4-4-2-2-2-4-2-3-4-4-4
Job 2   16-6-7-14-1-2-3-13-9-8-5-15-11-12-10-4               4-4-3-4-2-4-4-3-2-4-4-4-3-3-4-3-2
Job 3   1-2-5-6-9-10-11-7-8-3-4-12-13-14                     4-4-4-4-4-2-2-4-4-4-3-2-4-1
Job 4   2-4-13-1-7-8-9-3-5-6-12-10-11                        2-1-4-1-2-1-5-1-1-3-1-1-1
Job 5   1-5-6-4-2-7-8-9-3                                    5-2-1-5-4-2-1-1-2
Job 6   14-15-8-9-10-13-16-3-4-12-18-17-2-5-6-7-11-1         1-3-5-3-2-1-1-2-1-2-1-1-5-3-5-2-1-1
Fig. 15.11 Gantt chart of the second Pareto-optimal solution for Experiment 2
Fig. 15.12 Gantt chart of the last Pareto-optimal solution for Experiment 2
Firstly, effective genetic operations based on the characteristics of IPPS are employed in the proposed algorithm. This makes the proposed algorithm suitable for solving the multi-objective IPPS problem.
Secondly, in the framework of the proposed algorithm, the process planning system dynamically provides the scheduling system with many different process plans for the jobs, based on the various flexibilities in process planning; this ensures that the algorithm explores the IPPS solution space fully. Finally, the Pareto set stores and maintains the solutions obtained during the search procedure, so the proposed algorithm can obtain several Pareto-optimal solutions in a single search process.
15.5 Conclusion and Future Works

This chapter has presented an effective genetic algorithm for solving multi-objective IPPS problems with various flexibilities in process planning. Makespan, maximal machine workload, and total workload of machines are considered as optimization objectives simultaneously. To compare with the other algorithms, three instances of different scales have been employed to test the performance of the proposed algorithm. The experimental results show that the proposed algorithm achieves a satisfactory improvement. Owing to the lack of benchmark instances for the multi-objective IPPS problem with various flexibilities in process planning, a problem was constructed based on six typical jobs with various flexibilities in process planning from the literature. The proposed algorithm can also settle this problem effectively. There are also some limitations to the proposed algorithm: only three objectives are optimized in this study, and more objectives of IPPS can be taken into account in future works. Exploring more effective algorithms for solving multi-objective IPPS problems is another direction for future work.
References

1. Li XY, Gao L, Li WD (2012) Application of game theory based hybrid algorithm for multi-objective integrated process planning and scheduling. Expert Syst Appl 39:288–297
2. Li XY, Gao L, Zhang CY, Shao XY (2010) A review on integrated process planning and scheduling. Int J Manuf Res 5:161–180
3. Guo YW, Li WD, Mileham AR, Owen GW (2009) Applications of particle swarm optimisation in integrated process planning and scheduling. Robot Comput-Integr Manuf 25:280–288
4. Li XY, Gao L, Wen XY (2013) Application of an efficient modified particle swarm optimization algorithm for process planning. Int J Adv Manuf Technol 67:1355–1369
5. Li WD, Ong SK, Nee AYC (2002) Hybrid genetic algorithm and simulated annealing approach for the optimization of process plans for prismatic parts. Int J Prod Res 40:1899–1922
6. Zhang C, Li P, Rao Y, Li S (2005) A new hybrid GA/SA algorithm for the job shop scheduling problem. Lect Notes Comput Sci 3448:246–259
7. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
8. Baykasoğlu A, Özbakır L (2009) A grammatical optimization approach for integrated process planning and scheduling. J Intell Manuf 20:211–221
9. Rajkumar M, Asokan P, Page T, Arunachalam S (2010) A GRASP algorithm for the integration of process planning and scheduling in a flexible job-shop. Int J Manuf Res 5:230–251
10. Li WD, McMahon CA (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20:80–95
11. Ma GH, Zhang YF, Nee AYC (2000) A simulated annealing-based optimization algorithm for process planning. Int J Prod Res 38:2671–2687
12. Wang YF, Zhang YF, Fuh JYH (2009) Using hybrid particle swarm optimization for process planning problem. In: Proceedings of the computational sciences and optimization, 2009, pp 304–308
13. Zhang YF, Nee AYC (2001) Applications of genetic algorithms and simulated annealing in process planning optimization. In: Wang J, Kusiak A (eds) Computational intelligence in manufacturing handbook
Chapter 16
Application of Game Theory-Based Hybrid Algorithm for Multi-objective IPPS
16.1 Introduction

In traditional approaches, process planning and scheduling were carried out sequentially. Those methods have become an obstacle to improving the productivity and responsiveness of manufacturing systems and cause the following problems [13].

In a traditional manufacturing organization, process planners plan jobs separately. For each job, manufacturing resources on the shop floor are usually assigned without considering the competition for those resources from other jobs [29]. This may lead to process planners repeatedly selecting the most desirable resources for each job. Therefore, the resulting optimal process plans often become infeasible when they are carried out in practice at a later stage [15]. Even if process planners consider the restrictions of the current resources on the shop floor, the constraints considered in the planning phase may have changed greatly by the execution phase because of the time delay between the two; this may render the optimal process plans infeasible [12].

Traditionally, scheduling plans are determined after process plans. In the scheduling phase, scheduling planners have to work with the already-determined process plans. Fixed process plans may drive scheduling plans toward severely unbalanced resource loads and create superfluous bottlenecks.

In most cases, both for process planning and for scheduling, a single-criterion optimization technique is used to determine the best solution. However, the real production environment is best represented by considering more than one criterion simultaneously [13]. Furthermore, process planning and scheduling may have conflicting objectives: process planning emphasizes the technological requirements of a task, while scheduling involves its timing aspects. Without appropriate coordination, conflicts may arise.
© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_16
To overcome these problems, there is an increasing need for deep research into, and application of, the IPPS system. It can introduce significant improvements to the efficiency of manufacturing by eliminating or reducing scheduling conflicts, reducing flow-time and work-in-process, improving the utilization of production resources, and adapting to irregular shop floor disturbances [15]. Without IPPS, a true Computer Integrated Manufacturing System (CIMS), which strives to integrate the various phases of manufacturing in a single comprehensive system, may not be effectively realized. Therefore, in a complex manufacturing situation, it is ideal to integrate process planning and scheduling more closely to achieve a global optimum in manufacturing and to increase the flexibility and responsiveness of the systems [16].

In early research on CIMS, some researchers found that IPPS is very important to the development of CIMS [27]. The preliminary idea of IPPS was introduced by Chryssolouris et al. [4, 5]. Beckendorff [2] used alternative process plans to improve the flexibility of manufacturing systems. Khoshnevis et al. [9] introduced the concept of dynamic feedback into IPPS. The integration models proposed by Zhang [33] and Larsen [14] extended the concepts of alternative process plans and dynamic feedback and defined an expression for the methodology of the hierarchical approach. Some earlier works on IPPS have been summarized by Tan et al. [27] and Wang et al. [31]. In recent years, several IPPS models have been reported, and they can be classified into three basic types [18]: nonlinear process planning [10, 18], closed-loop process planning [29], and distributed process planning [30, 34]. In the past decades, the optimization approaches for IPPS problems have also achieved several improvements.

In particular, several optimization methods have been developed based on modern meta-heuristic algorithms and artificial intelligence technologies, such as evolutionary algorithms, the Simulated Annealing (SA) algorithm, the Particle Swarm Optimization (PSO) algorithm, and Multi-Agent System (MAS)-based approaches. Kim et al. [11] used a symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Shao et al. [23] used a modified Genetic Algorithm (GA) to solve the IPPS problem. Li et al. [19] proposed mathematical models of IPPS and an evolutionary algorithm-based approach to solve it. Chan et al. [3] proposed an enhanced swift converging simulated annealing algorithm for the IPPS problem. Guo et al. [6, 7] proposed PSO-based algorithms for the IPPS problem. Shen et al. [24] provided a literature review on IPPS, particularly on agent-based approaches; the advantages of the agent-based approach for scheduling were also presented. Wong et al. [32] presented an online hybrid agent-based negotiation multi-agent system for integrating process planning with scheduling/rescheduling. Shukla et al. [25] presented a bidding-based multi-agent system for solving IPPS. Li et al. [20] developed an agent-based approach to facilitate IPPS.

Most current research on IPPS has concentrated on a single objective. However, different departments in a company have different expectations in order to maximize their own interests: for example, the manufacturing department expects to reduce costs and improve work efficiency, the managers want to maximize the utilization of existing resources, and the sales department hopes
to better meet the delivery requirements of the customers. In this case, considering only a single objective cannot meet the requirements of real-world production. Therefore, further studies on IPPS are required, especially on the multi-objective IPPS problem. However, only a few papers have focused on the multi-objective IPPS problem. Morad et al. [22] proposed a GA based on the weighted-sum method to solve the multi-objective IPPS problem. Li et al. [16] proposed a simulated annealing-based approach for the multi-objective IPPS problem. Baykasoglu et al. [1] proposed an approach that made use of a grammatical representation of generic process plans within a multiple-objective Tabu Search (TS) framework to solve multi-objective IPPS effectively. Zhang et al. [35] proposed a multi-objective GA approach for solving process planning and scheduling problems in a distributed manufacturing system.

In this chapter, a novel approach has been developed to facilitate the multi-objective IPPS problem: a game theory-based hybrid algorithm has been applied to solve it, and experimental results are presented to verify the effectiveness of the approach. The remainder of this chapter is organized as follows: the problem formulation is discussed in Sect. 16.2; the game theory model of the multi-objective IPPS problem is presented in Sect. 16.3; the proposed algorithm for solving the multi-objective IPPS problem is given in Sect. 16.4; experimental results are reported in Sects. 16.5.1 and 16.5.2; and Sect. 16.5.3 concludes the chapter.
16.2 Problem Formulation

In this research, scheduling is assumed to be job shop scheduling, and the mathematical model of IPPS is based on the Mixed Integer Programming model of the Job shop Scheduling Problem (JSP). The following three criteria are considered: to improve work efficiency, the maximal completion time of machines, i.e., the makespan, is selected as one objective; to improve the utilization of the existing resources, especially the machines, the Maximal Machine Workload (MMW), i.e., the maximum working time spent on any machine, and the Total Workload of Machines (TWM), i.e., the total working time of all machines, are selected as the other two objectives. In order to solve this problem, the following assumptions are made:

(1) Jobs are independent. Job preemption is not allowed, and each machine can handle only one job at a time.
(2) The different operations of one job cannot be processed simultaneously.
(3) All jobs and machines are available at time zero simultaneously.
(4) After a job is processed on a machine, it is immediately transported to the next machine in its process, and the transportation time is assumed to be negligible.
(5) Setup time for the operations on the machines is independent of the operation sequence and is included in the processing time.
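As a concrete illustration of the three criteria, the sketch below computes makespan, MMW, and TWM from a completed schedule. The (job, machine, start, end) tuple representation and the helper name `schedule_objectives` are assumptions made for this example only; they are not data structures from the model itself.

```python
from collections import defaultdict

def schedule_objectives(ops):
    """Compute makespan, MMW, and TWM for a finished schedule.

    `ops` is a list of (job, machine, start, end) tuples; this flat
    representation is an assumption for illustration.
    """
    workload = defaultdict(int)        # Wk: busy time of machine k
    completion = defaultdict(int)      # ci: completion time of job i
    for job, machine, start, end in ops:
        workload[machine] += end - start
        completion[job] = max(completion[job], end)
    makespan = max(completion.values())   # f1: maximal completion time
    mmw = max(workload.values())          # f2: maximal machine workload
    twm = sum(workload.values())          # f3: total workload of machines
    return makespan, mmw, twm

# Tiny two-job, two-machine example.
ops = [("J1", "M1", 0, 4), ("J1", "M2", 4, 9),
       ("J2", "M2", 0, 3), ("J2", "M1", 4, 10)]
print(schedule_objectives(ops))  # (10, 10, 18)
```

Note that MMW and TWM depend only on machine busy time, while the makespan also depends on how the operations are sequenced.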
Based on these assumptions, the mathematical model of the multi-objective IPPS problem is described as follows [19]. The notation used to explain the model is described below:

N      Total number of jobs
M      Total number of machines
Gi     Total number of alternative process plans of job i
Pil    Number of operations in the lth alternative process plan of job i
oijl   The jth operation in the lth alternative process plan of job i
k      Alternative machine corresponding to oijl
tijlk  The processing time of operation oijl on machine k
cijlk  The earliest completion time of operation oijl on machine k
Wk     The workload of machine k
ci     The completion time of job i
A      A very large positive number

Xil = 1 if the lth flexible process plan of job i is selected; 0 otherwise
Yijlpqsk = 1 if operation oijl precedes operation opqs on machine k; 0 otherwise
Zijlk = 1 if machine k is selected for oijl; 0 otherwise
Objectives:

Minimizing the maximal completion time of machines (makespan):

Min f1 = Makespan = Max cijlk,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi], k ∈ [1, M]   (16.1)

Minimizing the Maximal Machine Workload (MMW):

Min f2 = MMW = Max Wk,  k ∈ [1, M]   (16.2)

Minimizing the Total Workload of Machines (TWM):

Min f3 = TWM = Σ_{k=1}^{M} Wk   (16.3)
Subject to:

(1) For the first operation in the alternative process plan l of job i:

ci1lk + A(1 − Xil) ≥ ti1lk,  i ∈ [1, N], l ∈ [1, Gi], k ∈ [1, M]   (16.4)
(2) For the last operation in the alternative process plan l of job i:

makespan ≥ ciPil lk − A(1 − Xil),  i ∈ [1, N], l ∈ [1, Gi], k ∈ [1, M]   (16.5)

(3) The different operations of one job cannot be processed simultaneously:

cijlk − ci(j−1)lk1 + A(1 − Xil) ≥ tijlk,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi], k, k1 ∈ [1, M]   (16.6)
(4) Each machine can handle only one job at a time:

cpqsk − cijlk + A(1 − Xil) + A(1 − Xps) + A(1 − Yijlpqsk) ≥ tpqsk   (16.7)

cijlk − cpqsk + A(1 − Xil) + A(1 − Xps) + A·Yijlpqsk ≥ tijlk   (16.8)

i, p ∈ [1, N], j, q ∈ [1, Pil], l, s ∈ [1, Gi], k ∈ [1, M]

(5) Only one alternative process plan can be selected for job i:

Σ_l Xil = 1,  l ∈ [1, Gi]   (16.9)

(6) Only one machine should be selected for each operation:

Σ_{k=1}^{M} Zijlk = 1,  i ∈ [1, N], j ∈ [1, Pil], l ∈ [1, Gi]   (16.10)
The objective functions are Eqs. (16.1)–(16.3); in this research, multiple objectives are considered for the IPPS problem. The constraints are Eqs. (16.4)–(16.10). Constraint (16.6) expresses that the different operations of a job cannot be processed simultaneously; this is the constraint between the processes of a job. Constraints (16.7) and (16.8) show that each machine can handle only one job at a time; this is the machine constraint. Constraint (16.9) ensures that only one alternative process plan can be selected for each job in one schedule, and constraint (16.10) ensures that only one machine is selected for each operation.

Many studies have been devoted to multi-objective optimization. The developed methods can generally be classified into the following three types [8]:

• The first type uses the weighted-sum method to transform the multi-objective problem into a mono-objective problem.
• The second type is the non-Pareto approach, which utilizes operators to process the different objectives in a separate way.
• The third type is the Pareto approach, which is directly based on the Pareto-optimality concept.

In this chapter, a game theory-based approach has been used to deal with the multiple objectives. After dealing with the multiple objectives, a hybrid algorithm has been used to solve the multi-objective IPPS problem.
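The difference between the weighted-sum and Pareto approaches can be sketched in a few lines. Here `dominates`, `pareto_front`, and `weighted_sum` are illustrative helpers, and the objective triples reuse the (makespan, MMW, TWM) values reported later in Table 16.3.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all minimized):
    `a` is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated vectors (the Pareto approach)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def weighted_sum(point, weights):
    """The weighted-sum approach collapses the vector to one scalar."""
    return sum(w * x for w, x in zip(weights, point))

# (makespan, MMW, TWM) triples taken from Table 16.3.
pts = [(165, 159, 764), (170, 158, 740), (394, 328, 770)]
print(pareto_front(pts))  # the third point is dominated by both others
```

The first two vectors are mutually non-dominated (each is better on a different objective), which is exactly the situation a weighted-sum scalarization hides behind a single number.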
16.3 Game Theory Model of Multi-objective IPPS

Game theory is a good method for analyzing the interaction of several decision makers and a very important tool in the modern economy. Recently, it has also been used to solve complex engineering problems, such as power systems and collaborative product design [17]. In this chapter, non-cooperative game theory is applied to deal with the conflict and competition among the multiple objectives of the multi-objective IPPS problem. In this approach, the objectives of the problem are seen as the players in a game, and the Nash equilibrium solutions are taken as the optimal results.
16.3.1 Game Theory Model of Multi-objective Optimization Problem

A general Multi-objective Optimization Problem (MOP) contains n variables, k objectives, and m constraints. The mathematical definition of the MOP is:

Max/Min y = f(x) = {f1(x), f2(x), …, fk(x)}   (16.11)

s.t. e(x) = {e1(x), e2(x), …, em(x)} ≤ 0
     x = (x1, x2, …, xn) ∈ X
     y = (y1, y2, …, yk) ∈ Y

where x is the variable vector, y is the objective vector, X is the variable space, Y is the objective space, and e(x) ≤ 0 are the constraints.

In order to apply game theory to the multiple objectives, the mapping between the MOP and game theory should be presented; that is, the game theory model of the MOP should be constructed. Comparing the MOP with game theory, the MOP can be described as follows: the k objectives in the MOP can be described as the k players in game
theory; X in the MOP can be described as the decision space S in game theory; fi(x) in the MOP can be described as the utility function ui in game theory; and e(x) in the MOP can be described as the constraints in game theory.

By defining the mapping φi: X → Si as the decision strategy space of the ith player, with ∪_{i=1}^{k} Si = X, and the mapping ϕi: fi → ui as the decision strategy set of the ith player, the game theory model of the MOP can be defined as follows:

G = {S; U} = {S1, S2, …, Sk; u1, u2, …, uk}   (16.12)
16.3.2 Nash Equilibrium and MOP

The Nash equilibrium is a very important concept in non-cooperative game theory. In a Nash equilibrium, the strategy of each player is the best strategy given the strategies of the other players. If the number of players is finite, the game has at least one Nash equilibrium solution. The Nash equilibrium can be defined as follows: s* = {s1*, s2*, …, sk*} is a strategy profile of the game in Eq. (16.12). If si* is the best strategy for the ith player given the strategies s*−i of the other players, i.e., if for any player i and any si ∈ Si Eq. (16.13) holds, then s* is a Nash equilibrium solution of this game:

ui(si*, s*−i) ≥ ui(si, s*−i)   (16.13)

s*−i = {s1*, s2*, …, s*i−1, s*i+1, …, sk*}

Therefore, for an MOP (Eq. 16.11), {f1(x), f2(x), …, fk(x)} can be seen as k players in a game. The decision strategy space S equals the variable space X, and the utility function of each player is fi(S). The Nash equilibrium solution s* = {s1*, s2*, …, sk*} can be seen as one solution of the MOP (Eq. 16.11). In a Nash equilibrium, each objective has its own effect on the whole decision of the MOP, and no single objective can dominate the decision-making process.
16.3.3 Non-cooperative Game Theory for Multi-objective IPPS Problem

In order to use non-cooperative game theory to deal with the multiple objectives of the multi-objective IPPS problem, the game theory model of the problem should be constructed. In this chapter, the multi-objective IPPS problem has three objectives, which can be seen as the three players in the game. The utility function
of the first player is the first objective function (u1 = f1); the utility function of the second player is the second objective function (u2 = f2); and the utility function of the third player is the third objective function (u3 = f3) (for f1, f2 and f3, see Sect. 16.2). The game theory model of the multi-objective IPPS problem can be described as follows:

G = {S; u1, u2, u3}   (16.14)
The Nash equilibrium solution of this model is taken as the optimal result of the multi-objective IPPS problem.
16.4 Applications of the Proposed Algorithm on Multi-objective IPPS

16.4.1 Workflow of the Proposed Algorithm

In order to solve the game theory model of the multi-objective IPPS problem effectively, an algorithm based on a hybrid algorithm (a hybrid of GA and TS) has been proposed. The workflow of the proposed algorithm is shown in Fig. 16.1. The basic procedure is described as follows:

Step 1: Set the parameters of the algorithm, including the parameters of the hybrid algorithm and of the Nash equilibrium solutions algorithm;
Step 2: Initialize the population randomly;
Step 3: Evaluate the whole population, and calculate all three objectives of every individual;
Step 4: Use the Nash equilibrium solutions algorithm to find the Nash equilibrium solutions in the current generation, and record them;
Step 5: Is the termination criterion satisfied? If yes, go to Step 8; else, go to Step 6;
Step 6: Generate the new population by the hybrid algorithm;
Step 6.1: Generate the new population by the genetic operations, including reproduction, crossover, and mutation;
Step 6.2: Perform local search by TS for every individual;
Step 7: Go to Step 3;
Step 8: Use the Nash equilibrium solutions algorithm to compare all the recorded Nash equilibrium solutions of every generation, and select the best solutions;
Step 9: Output the best solutions.
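Steps 1–9 above can be sketched as an outer loop. Every argument in the sketch (`init_pop`, `evaluate`, `genetic_ops`, `tabu_search`, `nash_filter`) is an assumed interface standing in for a component described in the surrounding text, not the book's implementation.

```python
def nash_ga(init_pop, evaluate, genetic_ops, tabu_search, nash_filter,
            max_gen=200):
    """Skeleton of the Fig. 16.1 workflow. `evaluate` returns the three
    objective values of an individual; `genetic_ops` applies reproduction,
    crossover, and mutation; `tabu_search` is the TS local search; and
    `nash_filter` stands in for the Nash equilibrium solutions algorithm."""
    population = init_pop()                         # Step 2
    archive = []                                    # recorded Nash solutions
    for _ in range(max_gen):                        # Step 5 termination test
        scored = [(ind, evaluate(ind)) for ind in population]   # Step 3
        archive.extend(nash_filter(scored))         # Step 4: record equilibria
        population = [tabu_search(ind)              # Step 6.2: local search
                      for ind in genetic_ops(population)]       # Step 6.1
    return nash_filter(archive)                     # Steps 8-9
```

The key design point is that Nash solutions are archived every generation, so the final selection (Step 8) compares candidates from the whole run, not only from the last population.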
Fig. 16.1 Workflow of the proposed algorithm
The Nash equilibrium solutions algorithm and the hybrid algorithm are presented in the next subsections.
16.4.2 Nash Equilibrium Solutions Algorithm for Multi-objective IPPS

In a Nash equilibrium, the strategy of each player is the best strategy given the strategies of the other players. The main purpose of the Nash equilibrium solution is to let every objective approach its own best result as closely as possible without damaging the benefits of the other objectives. In a Nash equilibrium, each objective has its own effect on the whole decision of the MOP, and no single objective can dominate the decision-making process. A criterion for judging the solutions is defined as follows:
NashEj = Σ_{i=1}^{3} DOBJji   (16.15)

DOBJji = (CurrentObjectiveji − BestObjectivei) / BestObjectivei   (16.16)
NashEj is the Nash equilibrium criterion of the jth individual. DOBJji is calculated by Eq. (16.16); CurrentObjectiveji is the value of the ith objective of the jth individual, and BestObjectivei is the best value of the ith objective. The workflow of the Nash equilibrium solutions algorithm is shown in Fig. 16.2. The basic procedure is described as follows:

Step 1: Use the hybrid algorithm [21] to optimize the mono-objective IPPS problem and get the best result for every objective;
Step 2: Calculate NashEj for every individual in the current population;
Step 3: Find the best NashEj, and set it as NashEbest;
Step 4: Compare each NashEj with NashEbest;
Step 5: Is j ≤ Popsize?

Fig. 16.2 Workflow of the Nash equilibrium solutions algorithm
If yes, go to Step 6; else, go to Step 9;
Step 6: Is NashEj − NashEbest ≤ ε? (ε is the Nash equilibrium solution factor.) If yes, go to Step 7; else, go to Step 8;
Step 7: Record this solution, set j = j + 1, and go to Step 4;
Step 8: Set j = j + 1, and go to Step 4;
Step 9: Output the Nash equilibrium solutions for this generation.

This algorithm is also used to select the final results from all the recorded Nash equilibrium solutions of every generation. From the workflow of this algorithm, we can see that every objective approaches its own best result as closely as possible, and none of them can dominate the whole decision of the multi-objective IPPS problem.

The job shop scheduling problem has been proved to be NP-hard, and the IPPS problem, which is more complicated than the JSP, is also NP-hard. For large-scale problems, conventional algorithms (including the exact algorithms) are often incapable of optimizing nonlinear multi-modal functions in reasonable time. To address this problem effectively, a modern optimization algorithm has been used to quickly find a near-optimal solution in a large search space through evolutionary or heuristic strategies. In this research, the Hybrid Algorithm (HA), a hybrid of the genetic algorithm and the tabu search, has been applied to facilitate the search process. This algorithm has been successfully applied to solve the mono-objective IPPS problem [21]; here, it has been developed further to solve the multi-objective IPPS problem. The working steps of this algorithm are explained here for illustration. The HA is used in this research to generate new generations; therefore, there is no separate fitness function to evaluate the population. The workflow of the hybrid algorithm is shown in Fig. 16.3.
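Before turning to the hybrid algorithm, the criterion of Eqs. (16.15) and (16.16) and the ε-filter of Steps 2–9 can be sketched as follows. The single-objective best values and the population triples below are invented for illustration only.

```python
def nash_criterion(objectives, best):
    """NashE_j of Eqs. (16.15)-(16.16): sum of relative gaps between an
    individual's objective values and the single-objective best values."""
    return sum((cur - b) / b for cur, b in zip(objectives, best))

def nash_solutions(population, best, eps=0.1):
    """Keep every individual whose NashE is within eps of the best NashE,
    mirroring Steps 2-9 of the Nash equilibrium solutions algorithm."""
    scores = [nash_criterion(obj, best) for obj in population]
    best_score = min(scores)
    return [ind for ind, s in zip(population, scores) if s - best_score <= eps]

# Step 1 would obtain `best` by three mono-objective runs of the hybrid
# algorithm; these numbers are invented for the example.
best = (450, 430, 1500)
pop = [(459, 432, 1579), (502, 453, 1568), (617, 617, 1522)]
print(nash_solutions(pop, best, eps=0.1))  # [(459, 432, 1579)]
```

Because the gaps are normalized by each objective's own best value, no single objective can dominate the criterion, which is exactly the equilibrium property described above.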
The basic procedure of the proposed algorithm is described as follows:

Step 1: Set the parameters of the HA, including the size of the population (Popsize), the maximum number of generations (maxGen), the reproduction probability (pr), the crossover probability (pc), the mutation probability (pm), and the parameters of the tabu search;
Step 2: Initialize the population randomly, and set Gen = 1;
Step 3: Is the termination criterion satisfied? If yes, go to Step 7; else, go to Step 4;
Step 4: Generate a new generation by the genetic operations;
Step 4.1: Selection: the random selection scheme is used for the selection operation;
Fig. 16.3 Workflow of the hybrid algorithm
Step 4.2: Reproduction: reproduce Popsize × pr individuals from the parent generation into the offspring generation;
Step 4.3: Crossover: the crossover operation with a user-defined crossover probability (pc) is used for the IPPS crossover operation;
Step 4.4: Mutation: the mutation operation with a user-defined mutation probability (pm) is used for the IPPS mutation operation;
Step 5: Perform local search by TS for every individual;
Step 6: Set Gen = Gen + 1, and go to Step 3;
Step 7: Use the Nash equilibrium solutions algorithm to compare all the recorded Nash equilibrium solutions of every generation, and select the best solutions;
Step 8: Output the best solutions.

According to this algorithm, every individual first evolves through the genetic operations and then focuses on local search. More details of the hybrid algorithm can be found in [21].
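One generation of Step 4 might look like the following sketch, where `crossover` and `mutate` are assumed problem-specific callables (encoding-dependent, so not shown), and the default probabilities mirror the values later listed in Table 16.1.

```python
import random

def next_generation(parents, crossover, mutate, pr=0.05, pc=0.6, pm=0.1):
    """One pass of Step 4 under assumed operator interfaces: copy
    Popsize * pr individuals directly (Step 4.2), then fill the rest with
    randomly selected parents (Step 4.1) that undergo crossover with
    probability pc (Step 4.3) and mutation with probability pm (Step 4.4)."""
    popsize = len(parents)
    offspring = parents[: max(1, int(popsize * pr))]   # Step 4.2 reproduction
    while len(offspring) < popsize:
        a, b = random.choice(parents), random.choice(parents)   # Step 4.1
        child = crossover(a, b) if random.random() < pc else a  # Step 4.3
        if random.random() < pm:                                # Step 4.4
            child = mutate(child)
        offspring.append(child)
    return offspring
```

In the full HA, every individual returned here would additionally pass through the TS local search of Step 5 before the next generation begins.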
16.5 Experimental Results

In this chapter, the proposed algorithm was coded in C++ and run on a computer with a 2.0 GHz Core (TM) 2 Duo CPU. To illustrate the effectiveness and performance of the proposed algorithm, two instances have been selected. The first instance was adopted from a previous paper; because of the lack of benchmark instances for the multi-objective IPPS problem, we present the second instance ourselves. The parameters of the proposed algorithm for these problem instances are given in Table 16.1. The proposed algorithm terminates when the number of generations reaches the maximum value (maxGen); TS terminates when the number of iterations reaches the maximum size (maxIterSize, where CurIter is the current generation of the GA) or the permitted maximum number of steps with no improvement (maxStagnantStep) is reached.
16.5.1 Problem 1

Problem 1 was adopted from Baykasoglu et al. [1]. It consists of 5 jobs and 5 machines; the data are shown in Table 16.2. Table 16.3 shows the experimental results and the comparison with the other algorithm, and Table 16.4 shows the selected operation sequence for each job. Figures 16.4 and 16.5 illustrate the Gantt charts of solutions 1 and 2 of the proposed algorithm. From the experimental results of problem 1 (Table 16.3), the solutions of the proposed algorithm dominate those of the grammatical approach. This indicates that the proposed approach obtains good solutions to the multi-objective IPPS problem more effectively.

Table 16.1 The parameters of the proposed algorithm

Parameter | Value
Size of the population, Popsize | 400
Total number of generations, maxGen | 200
Permitted maximum number of steps without improvement, maxStagnantStep | 20
Maximum iteration size of TS, maxIterSize | 200 × (CurIter/maxGen)
Probability of reproduction operation, pr | 0.05
Probability of crossover operation, pc | 0.6
Probability of mutation operation, pm | 0.1
Length of tabu list, maxT | 10
Nash equilibrium solution factor, ε | 0.1
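The adaptive TS budget in Table 16.1 (maxIterSize = 200 × CurIter/maxGen) can be computed as below, so that early GA generations get a cheap local search and later ones a more thorough one. The integer flooring and the floor of one iteration are our assumptions about how the fraction is handled, not stated in the table.

```python
def ts_iter_size(cur_iter, max_gen=200, base=200):
    """maxIterSize = base * (CurIter / maxGen): the TS budget grows
    linearly with the current GA generation (floored to an integer,
    minimum of 1 iteration -- both details are assumptions)."""
    return max(1, base * cur_iter // max_gen)
```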
16 Application of Game Theory-Based Hybrid Algorithm …
Table 16.2 The data of problem 1 [1]

Job | Operation | M1 | M2 | M3 | M4 | M5 | Alternative operation sequences
1 | O1 | 57 | 40 | 88 | 62 | 77 | O3-O4-O1-O2
1 | O2 | 7 | 10 | 11 | 10 | 5 | O4-O3-O1-O2
1 | O3 | 95 | 74 | 76 | 71 | 93 | O2-O4-O1-O3
1 | O4 | 24 | 18 | 22 | 28 | 26 | O3-O1-O4-O2
2 | O1 | 84 | 76 | 68 | 98 | 84 | O1-O2-O3
2 | O2 | 20 | 10 | 15 | 20 | 19 | O1-O3-O2
2 | O3 | 91 | 88 | 98 | 87 | 90 | O3-O2-O1; O2-O1-O3
3 | O1 | 65 | 87 | 58 | 80 | 74 | O2-O4-O1-O3
3 | O2 | 46 | 21 | 38 | 52 | 18 | O1-O4-O2-O3
3 | O3 | 19 | 22 | 19 | 14 | 22 | O1-O4-O5
3 | O4 | 73 | 56 | 64 | 72 | 60 | O4-O2-O1-O3
3 | O5 | 96 | 98 | 96 | 95 | 98 |
4 | O1 | 13 | 7 | 13 | 12 | 11 | O1-O2-O4-O3
4 | O2 | 52 | 64 | 97 | 47 | 40 | O2-O1-O4-O3
4 | O3 | 20 | 30 | 17 | 11 | 14 | O4-O2-O1-O3
4 | O4 | 94 | 66 | 80 | 79 | 95 | O3-O2-O4-O1
5 | O1 | 94 | 97 | 55 | 78 | 85 | O3-O1-O2
5 | O2 | 31 | 23 | 19 | 42 | 17 | O3-O4
5 | O3 | 88 | 65 | 76 | 64 | 80 | O3-O2-O1
5 | O4 | 88 | 74 | 90 | 92 | 75 | O1-O3-O2

(Columns M1–M5 give the processing time of each operation on the alternative machines; the last column lists the jobs' alternative operation sequences.)
Table 16.3 Experimental results of problem 1

Criteria | Grammatical approach^a | Solution 1 | Solution 2
Makespan | 394 | 165 | 170
Maximal Machine Workload (MMW) | 328 | 159 | 158
Total Workload of Machines (TWM) | 770 | 764 | 740

^a Data adopted from [1]
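The three criteria in Table 16.3 can be computed from any schedule as follows; the schedule layout and the toy numbers are illustrative, not the problem 1 data.

```python
def schedule_objectives(schedule):
    """Makespan, Maximal Machine Workload (MMW) and Total Workload of
    Machines (TWM) for a schedule given as {machine: [(start, duration),
    ...]}.  Makespan is the latest completion time, MMW the busiest
    machine's total processing time, TWM the sum over all machines."""
    makespan = max(start + dur for ops in schedule.values() for start, dur in ops)
    workloads = [sum(dur for _, dur in ops) for ops in schedule.values()]
    return makespan, max(workloads), sum(workloads)

# invented two-machine example, not the problem 1 data
example = {"M1": [(0, 5), (7, 3)], "M2": [(0, 4), (5, 6)]}
print(schedule_objectives(example))  # (11, 10, 18)
```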
16.5.2 Problem 2

Owing to the lack of benchmark instances for the multi-objective IPPS problem, we constructed problem 2. Its data are shown in Table 16.5; it consists of 8 jobs and 8 machines. Table 16.6 shows the experimental results.
Table 16.4 Selected operation sequence for each job of problem 1

Job | Solution 1 | Solution 2
1 | O3-O4-O1-O2 | O3-O4-O1-O2
2 | O1-O2-O3 | O3-O2-O1
3 | O2-O4-O1-O3 | O2-O4-O1-O3
4 | O4-O2-O1-O3 | O4-O2-O1-O3
5 | O3-O4 | O3-O4
Fig. 16.4 Gantt chart of solution 1 of problem 1
Fig. 16.5 Gantt chart of solution 2 of problem 1
Table 16.5 The data of problem 2 (jobs 1–5): for each operation of each job, the processing times on the alternative machines M1–M8, and each job's alternative operation sequences
Table 16.5 (continued) The data of problem 2 (jobs 6–8): for each operation of each job, the processing times on the alternative machines M1–M8, and each job's alternative operation sequences
Table 16.6 Experimental results of problem 2

Criteria | Solution 1 | Solution 2 | Solution 3
Makespan | 122 | 122 | 123
Maximal Machine Workload (MMW) | 106 | 102 | 107
Total Workload of Machines (TWM) | 751 | 784 | 750
Table 16.7 shows the selected operation sequence for each job. Figures 16.6, 16.7, and 16.8 illustrate the Gantt charts of solutions 1, 2, and 3 of the proposed algorithm. The experimental results of problem 2 (Table 16.6) show that the proposed approach can effectively obtain good solutions to the multi-objective IPPS problem.

Table 16.7 Selected operation sequence for each job of problem 2

Job | Solution 1 | Solution 2 | Solution 3
1 | O1-O2-O3-O4 | O1-O2-O3-O4 | O1-O2-O3-O4
2 | O2-O3-O4-O1 | O2-O3-O4-O1 | O2-O1-O3-O4
3 | O2-O3-O1-O4 | O2-O3-O1-O4 | O2-O3-O1-O4
4 | O1-O2-O3-O4 | O1-O2-O3-O4 | O1-O2-O4-O3
5 | O1-O3-O4-O5-O2 | O1-O3-O4-O5-O2 | O1-O2-O4-O5-O3
6 | O5-O1-O2-O4-O3 | O5-O1-O2-O4-O3 | O5-O1-O3-O2-O4
7 | O4-O5-O3-O2-O1 | O4-O5-O3-O2-O1 | O1-O2-O3-O4-O5
8 | O2-O1-O3-O4-O5 | O2-O1-O3-O4-O5 | O3-O2-O1-O4-O5
Fig. 16.6 Gantt chart of solution 1 of problem 2
Fig. 16.7 Gantt chart of solution 2 of problem 2
Fig. 16.8 Gantt chart of solution 3 of problem 2
16.5.3 Conclusions

Considering the complementarity of process planning and scheduling, and the multiple-objective requirements of real-world production, this research developed a game theory-based hybrid algorithm for the multi-objective IPPS problem. In the proposed approach, the Nash equilibrium from game theory is used to handle the multiple objectives, and a hybrid algorithm is used to optimize the IPPS problem. Experimental studies have been used to test the performance of the proposed approach, and the results show that it achieves significant improvement. The contributions of this research include the following:

• Game theory has been used to handle the multiple objectives of the IPPS problem. This is a new idea for multi-objective manufacturing problems, and it can also be applied to the multiple objectives of other problems in the manufacturing field, such as the process planning problem, the assembly sequencing problem, and the scheduling problem.
• To find optimal or near-optimal solutions efficiently in the vast search space, a hybrid algorithm has been applied to the multi-objective IPPS problem. Experiments have been conducted, and the results show the effectiveness of this approach.
References

1. Baykasoglu A, Ozbakir L (2009) A grammatical optimization approach for integrated process planning and scheduling. J Intell Manuf 20:211–221
2. Beckendorff U, Kreutzfeldt J, Ullmann W (1991) Reactive workshop scheduling based on alternative routings. In: Proceedings of a conference on factory automation and information management, pp 875–885
3. Chan FTS, Kumar V, Tiwari MK (2009) The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling model. Int J Prod Res 47(1):119–142
4. Chryssolouris G, Chan S, Cobb W (1984) Decision making on the factory floor: an integrated approach to process planning and scheduling. Robot Comput-Integr Manuf 1(3–4):315–319
5. Chryssolouris G, Chan S (1985) An integrated approach to process planning and scheduling. Ann CIRP 34(1):413–417
6. Guo YW, Li WD, Mileham AR, Owen GW (2009) Optimisation of integrated process planning and scheduling using a particle swarm optimization approach. Int J Prod Res 47(14):3775–3796
7. Guo YW, Li WD, Mileham AR, Owen GW (2009) Applications of particle swarm optimisation in integrated process planning and scheduling. Robot Comput-Integr Manuf 25(2):280–288
8. Hsu T, Dupas R, Jolly D, Goncalves G (2002) Evaluation of mutation heuristics for the solving of multiobjective flexible job shop by an evolutionary algorithm. In: Proceedings of the 2002 IEEE international conference on systems, man and cybernetics, vol 5, pp 655–660
9. Khoshnevis B, Chen QM (1989) Integration of process planning and scheduling function. In: Proceedings of IIE integrated systems conference & society for integrated manufacturing conference, pp 415–420
10. Kim KH, Song JY, Wang KH (1997) A negotiation based scheduling for items with flexible process plans. Comput Ind Eng 33(3–4):785–788
11. Kim YK, Park K, Ko J (2003) A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Comput Oper Res 30:1151–1171
12. Kuhnle H, Braun HJ, Buhring J (1994) Integration of CAPP and PPC—interfusion manufacturing management. Integr Manuf Syst 5(2):21–27
13. Kumar M, Rajotia S (2003) Integration of scheduling with computer aided process planning. J Mater Process Technol 138:297–300
14. Larsen NE (1993) Methods for integration of process planning and production planning. Int J Comput Integr Manuf 6(1–2):152–162
15. Lee H, Kim SS (2001) Integration of process planning and scheduling using simulation based genetic algorithms. Int J Adv Manuf Technol 18:586–590
16. Li WD, McMahon CA (2007) A simulated annealing-based optimization approach for integrated process planning and scheduling. Int J Comput Integr Manuf 20(1):80–95
17. Li WD, Gao L, Li XY, Guo Y (2008) Game theory-based cooperation of process planning and scheduling. In: Proceedings of the 12th international conference on computer supported cooperative work in design, China, pp 841–845
18. Li XY, Gao L, Zhang CY, Shao XY (2010) A review on integrated process planning and scheduling. Int J Manuf Res 5(2):161–180
19. Li XY, Gao L, Shao XY, Zhang CY, Wang CY (2010) Mathematical modeling and evolutionary algorithm based approach for integrated process planning and scheduling. Comput Oper Res 37:656–667
20. Li XY, Zhang CY, Gao L, Li WD, Shao XY (2010) An agent-based approach for integrated process planning and scheduling. Expert Syst Appl 37:1256–1264
21. Li XY, Shao XY, Gao L, Qian WR (2010) An effective hybrid algorithm for integrated process planning and scheduling. Int J Prod Econ. https://doi.org/10.1016/j.ijpe.2010.04.001
22. Morad N, Zalzala AMS (1999) Genetic algorithms in integrated process planning and scheduling. J Intell Manuf 10:169–179
23. Shao XY, Li XY, Gao L, Zhang CY (2009) Integration of process planning and scheduling—a modified genetic algorithm-based approach. Comput Oper Res 36:2082–2096
24. Shen WM, Wang LH, Hao Q (2006) Agent-based distributed manufacturing process planning and scheduling: a state-of-the-art survey. IEEE Trans Syst Man Cybern Part C Appl Rev 36(4):563–577
25. Shukla SK, Tiwari MK, Son YJ (2008) Bidding-based multi-agent system for integrated process planning and scheduling: a data-mining and hybrid tabu-SA algorithm-oriented approach. Int J Adv Manuf Technol 38:163–175
26. Sugimura N, Hino R, Moriwaki T (2001) Integrated process planning and scheduling in holonic manufacturing systems. In: Proceedings of IEEE international symposium on assembly and task planning, Soft Research Park, vol 4, pp 250–254
27. Tan W, Khoshnevis B (2000) Integration of process planning and scheduling—a review. J Intell Manuf 11:51–63
28. Thomalla CS (2001) Job shop scheduling with alternative process plans. Int J Prod Econ 74:125–134
29. Usher JM, Fernandes KJ (1996) Dynamic process planning—the static phase. J Mater Process Technol 61:53–58
30. Wang LH, Song YJ, Shen WM (2005) Development of a function block designer for collaborative process planning. In: Proceedings of CSCWD 2005, Coventry, UK, pp 24–26
31. Wang LH, Shen WM, Hao Q (2006) An overview of distributed process planning and its integration with scheduling. Int J Comput Appl Technol 26(1–2):3–14
32. Wong TN, Leung CW, Mak KL, Fung RYK (2006) Integrated process planning and scheduling/rescheduling—an agent-based approach. Int J Prod Res 44(18–19):3627–3655
33. Zhang HC (1993) IPPM—a prototype to integrate process planning and job shop scheduling functions. Ann CIRP 42(1):513–517
34. Zhang J, Gao L, Chan FTS (2003) A holonic architecture of the concurrent integrated process planning system. J Mater Process Technol 139:267–272
35. Zhang WQ, Gen M (2010) Process planning and scheduling in distributed manufacturing system using multiobjective genetic algorithm. IEEJ Trans Electr Electron Eng 5:62–72
Chapter 17
A Hybrid Intelligent Algorithm and Rescheduling Technique for Dynamic JSP
17.1 Introduction

Job shop Scheduling Problems (JSPs) are concerned with the allocation of resources over time to perform a collection of tasks. This is an important aspect of production management with a significant effect on the performance of the shop floor [15, 22]. JSPs have been proved to be NP-hard and are among the hardest combinatorial optimization problems. Various methods, such as mathematical techniques, dispatching rules, and artificial intelligence, have been successfully used to solve static JSPs [17]. However, static JSPs ignore real-time events such as random job arrivals and machine breakdowns, which does not match the status of real manufacturing systems. In most real manufacturing environments, scheduling systems operate in highly dynamic and uncertain conditions in which unexpected disruptions (such as random job arrivals, machine breakdowns, and due date changes) prevent production schedules from being executed exactly as developed. Such real-time events may render the predictive optimal schedule neither feasible nor optimal. Therefore, dynamic scheduling is of great importance for successful implementation in real-world scheduling systems [14]. Dynamic JSPs, which take real-time events into account, attract more and more attention from researchers and engineers. Since static JSPs are already NP-hard and the dynamic feature makes dynamic JSPs much more complex, dynamic JSPs are strongly NP-hard and even more difficult to solve. The first study of dynamic JSPs was published by Holloway and Nelson. Since then, dynamic JSPs have attracted much research because they arise in most manufacturing systems in various forms.
A rescheduling method that constructs a new schedule from scratch has been widely used in dynamic JSPs [4, 9, 30, 32, 33]. However, because the problem condition evolves continuously over the planning horizon, constructing each new schedule from scratch can reduce the performance of the method. To prevent this problem, Zhou et al. [33] applied the ant colony algorithm to two dynamic JSPs, exploiting its unique property of seeking solutions through the adaptation of its pheromone matrix; the inspiration is the phenomenon that ants do not go back to their nest and restart the search for a new route when the existing one becomes unavailable. Renna [19] created the schedule with a pheromone-based approach carried out by a multi-agent architecture; two pheromone approaches were proposed, one based on a moving average and the other on an exponential moving average.

In recent years, with the development of scheduling theory, dynamic JSPs have attracted more and more attention. In the literature, the approaches to solving dynamic JSPs include meta-heuristics [6], dispatching rules [13], multi-agent systems [10], etc. Dispatching rules have been widely used in dynamic JSPs. For example, Subramaniam et al. [24] proposed three machine selection rules and evaluated their effectiveness through a simulation study of a dynamic job shop. Singh et al. [23] considered several dispatching rules simultaneously for selecting a job for processing while continuously monitoring the attained values of the performance measures. Nie et al. [13] proposed a gene expression programming-based constructor of scheduling rules.

Meta-heuristic methods, such as the Genetic Algorithm (GA), Tabu Search (TS), and Simulated Annealing (SA), have been widely used to solve static deterministic scheduling problems [12]. Ouelhadj and Petrovic [14] pointed out that only a few research works had addressed the use of meta-heuristics in dynamic scheduling. Chryssolouris and Subramaniam [2] developed a GA method that was significantly superior to the common dispatching rules. Vinod and Sridharan [27] studied the dynamic job shop production system. Rangsaritratsamee et al. [18] studied dynamic JSPs that simultaneously consider efficiency and stability, based on a genetic local search algorithm and a periodic rescheduling policy. Liu et al. [8] considered a two-stage multi-population GA to solve bi-objective single-machine scheduling with random machine breakdowns. Malve and Uzsoy [11] proposed a family of iterative improvement heuristics combined with a genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals. Adibi et al. [1] presented a Variable Neighborhood Search (VNS) method with high effectiveness and efficiency in various shop floor conditions.

Combining meta-heuristic algorithms with other scheduling techniques is a promising direction for solving dynamic JSPs. Xiang and Lee [29] built efficient ant colony intelligence into a multi-agent system for dynamic scheduling. Wu et al. [28] proposed a novel threefold approach to solving dynamic JSPs with an artificial immune algorithm. Zandieh and Adibi [30] developed VNS with an artificial neural network to solve dynamic JSPs with high efficiency and effectiveness. GA and TS have been successfully used to find optimal solutions to static scheduling problems [3]. Recently, several researchers have suggested using a hybrid GA consisting of a pure genetic algorithm with another search procedure [18]. Hybrid GA and TS methods combine the advantages of both GA and TS: they are high-level heuristics that guide local search heuristics to escape from local optima.

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020. X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_17
Moreover, most published research on dynamic JSPs constructs each new schedule from scratch; the rescheduling technique that constructs a new schedule from the latest information has not been well studied. Finally, to our knowledge, real-time events are difficult to express and take into account in a mathematical model. In this research, a hybrid GA and TS rescheduling technique is proposed to generate new schedules for dynamic JSPs with random job arrivals and machine breakdowns. A new initialization method is proposed to improve the performance of the hybrid intelligent algorithm. To overcome the difficulty of expressing the unexpected disruptions in a mathematical model, a simulator is designed to handle the complexity of the problem. The performance measures investigated are the mean flow time, maximum flow time, mean tardiness, maximum tardiness, and number of tardy jobs. The proposed rescheduling technique is evaluated in various job shop environments where jobs arrive over time and machine breakdowns and repairs occur. The remainder of this chapter is organized as follows: Sect. 17.2 describes the dynamic JSPs. Section 17.3 presents the framework based on the hybrid GA and TS with a simulator. Section 17.4 provides the experimental environments and results. Finally, Sect. 17.5 presents the conclusions and several promising research directions.
17.2 Statement of Dynamic JSPs

17.2.1 The Proposed Mathematical Model

A typical JSP can be formulated as follows. There are $n$ jobs, each composed of several operations that must be processed on $m$ machines. Each operation uses one of the $m$ machines for a fixed duration. Each machine can process at most one operation at a time, and once an operation starts on a given machine, it must be completed on that machine without interruption. The operations of a given job have to be processed in a given order. The problem consists of finding a schedule of the operations on the machines that respects the precedence constraints and optimizes the objectives [20].

In this section, the dynamic mathematical model for the considered problem is developed from the static JSP model. It is assumed that a set of $n$ jobs is scheduled at the beginning of the schedule and a set of $n'$ new jobs arrives after the start of the schedule. There are $m$ machines used to execute the jobs' operations. Throughout the chapter, let $i$ and $i'$ denote the indexes of old and new jobs, $j$ and $j'$ the indexes of their operations, and $k$ the index of machines. Each job consists of a sequence of operations $O_{ij}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n_i$ (and $O_{i'j'}$, $i' = 1, 2, \ldots, n'$, $j' = 1, 2, \ldots, n_{i'}$).
The following notation is used in the problem formulation:

$a_i$  Arrival time of job $i$;
$a_{i'}$  Arrival time of job $i'$;
$d_i$  Due date of job $i$;
$d_{i'}$  Due date of job $i'$;
$C_{ijk}$  Completion time of operation $O_{ij}$ on machine $k$;
$C_{i'j'k'}$  Completion time of operation $O_{i'j'}$ on machine $k'$;
$p_{ijk}$  Processing time of operation $O_{ij}$ on machine $k$;
$p_{i'j'k'}$  Processing time of operation $O_{i'j'}$ on machine $k'$.

The other parameters and variables are defined where they are first used in the equations. In dynamic JSP, minimizing makespan is of less interest because the scheduling horizon is open and the makespan gives no credit for jobs that finish well before the last one finishes [7]. Therefore, five other performance measures are considered, defined as follows.

Mean flow time:

$$\bar{F} = \frac{1}{n + n'} \times \left( \sum_{i=1}^{n} \left( C_{i,n_i,k} - a_i \right) + \sum_{i'=1}^{n'} \left( C_{i',n_{i'},k'} - a_{i'} \right) \right) \qquad (17.1)$$

Maximum flow time:

$$F_{\max} = \max\left( \max_{1 \le i \le n} \left( C_{i,n_i,k} - a_i \right),\ \max_{1 \le i' \le n'} \left( C_{i',n_{i'},k'} - a_{i'} \right) \right) \qquad (17.2)$$

Mean tardiness:

$$\bar{T} = \frac{1}{n + n'} \times \left( \sum_{i=1}^{n} \max\left( C_{i,n_i,k} - d_i,\ 0 \right) + \sum_{i'=1}^{n'} \max\left( C_{i',n_{i'},k'} - d_{i'},\ 0 \right) \right) \qquad (17.3)$$

Maximum tardiness:

$$T_{\max} = \max\left( \max_{1 \le i \le n} \max\left( C_{i,n_i,k} - d_i,\ 0 \right),\ \max_{1 \le i' \le n'} \max\left( C_{i',n_{i'},k'} - d_{i'},\ 0 \right) \right) \qquad (17.4)$$

Number of tardy jobs:

$$NT = \sum_{i=1}^{n} \delta_i + \sum_{i'=1}^{n'} \delta_{i'} \qquad (17.5)$$

The above equations describe the objective functions: the mean flow time, the maximum flow time, the mean tardiness, the maximum tardiness, and the number of tardy jobs, respectively.
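Equations (17.1)–(17.5) pool the old and new jobs; the helper below evaluates all five measures on such a pooled list. The job tuples are invented sample data.

```python
def performance_measures(jobs):
    """The five measures of Eqs. (17.1)-(17.5) over a pooled list of old
    and new jobs, each a (completion, arrival, due_date) tuple: mean and
    maximum flow time, mean and maximum tardiness, number of tardy jobs."""
    flows = [c - a for c, a, _ in jobs]
    tards = [max(c - d, 0) for c, _, d in jobs]
    return (sum(flows) / len(jobs), max(flows),
            sum(tards) / len(jobs), max(tards),
            sum(1 for t in tards if t > 0))

# invented sample: three jobs given as (completion, arrival, due date)
sample = [(30, 0, 25), (42, 5, 50), (60, 10, 55)]
```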
Subject to:

$$C_{hlk} - p_{hlk} + M(1 - A) \ge C_{ijk} \qquad (17.6)$$

$$C_{h'l'k'} - p_{h'l'k'} + M(1 - A) \ge C_{i'j'k'} \qquad (17.7)$$

$$A = \begin{cases} 1, & O_{ij} \prec O_{hl} \ \text{or} \ O_{i'j'} \prec O_{h'l'} \\ 0, & \text{otherwise} \end{cases} \qquad (17.8)$$

Constraints (17.6) and (17.7) enforce the precedence constraints: each operation can be executed only after its preceding operation has been executed, and for each job the operations must be executed in a predetermined sequence. $A$ is a 0–1 variable. $C_{ijk}$ denotes the completion time of operation $O_{ij}$ on machine $k$, and $p_{ijk}$ denotes the processing time of operation $O_{ij}$ on machine $k$. $n$, $n_i$, and $m$ denote the numbers of jobs, operations, and machines, respectively. $i$, $j$, $h$, $l$, and $k$ denote the indexes of jobs, operations, and machines, where $i, h = 1, 2, \ldots, n$; $j = 1, 2, \ldots, n_i$; $l = 1, 2, \ldots, n_h$; $i', h' = 1, 2, \ldots, n'$; $j' = 1, 2, \ldots, n_{i'}$; $l' = 1, 2, \ldots, n_{h'}$; and $k$ denotes the index of the machine selected by the corresponding operation.

$$M > \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{m} p_{ijk} + \sum_{i'=1}^{n'} \sum_{j'=1}^{n_{i'}} \sum_{k=1}^{m} p_{i'j'k} - \min\left( \min_{i,j,k} p_{ijk},\ \min_{i',j',k} p_{i'j'k} \right) \qquad (17.9)$$

$M$ is a large number, following Van Hulle [25]. To fit the dynamic scheduling environment, this constraint is modified so that the large number $M$ is greater than the total processing time of all jobs minus the smallest single processing time at each rescheduling point.

$$C_{hlk} - p_{hlk} + M(1 - B) \ge C_{ijk} \qquad (17.10)$$

$$C_{h'l'k'} - p_{h'l'k'} + M(1 - B) \ge C_{i'j'k'} \qquad (17.11)$$

$$B = \begin{cases} 1, & O_{ij} \ \text{is processed on} \ M_k \ \text{before} \ O_{hl} \\ 1, & O_{i'j'} \ \text{is processed on} \ M_{k'} \ \text{before} \ O_{h'l'} \\ 0, & \text{otherwise} \end{cases} \qquad (17.12)$$

Constraints (17.10) and (17.11) enforce that each operation can be performed only when the related machine is idle: an operation can start only after the preceding operation on that machine has finished. $B$ is a 0–1 variable, where $i, h = 1, 2, \ldots, n$; $j = 1, 2, \ldots, n_i$; $l = 1, 2, \ldots, n_h$; $i', h' = 1, 2, \ldots, n'$; $j' = 1, 2, \ldots, n_{i'}$; $l' = 1, 2, \ldots, n_{h'}$; $k$ denotes the machine selected by operation $O_{ij}$ or $O_{hl}$, and $k'$ the machine selected by operation $O_{i'j'}$ or $O_{h'l'}$.

$$\sum_{j=1}^{n_i} X_{ijk} = 1 \qquad (17.13)$$

$$\sum_{j'=1}^{n_{i'}} X_{i'j'k'} = 1 \qquad (17.14)$$

$$X_{ijk} = \begin{cases} 1, & \text{if machine } k \text{ is selected for } O_{ij} \\ 0, & \text{otherwise} \end{cases} \qquad (17.15)$$

$$X_{i'j'k'} = \begin{cases} 1, & \text{if machine } k' \text{ is selected for } O_{i'j'} \\ 0, & \text{otherwise} \end{cases} \qquad (17.16)$$

Constraints (17.13) and (17.14) enforce that each operation is performed exactly once on one machine; at any time a machine can perform only one operation of any job. $X_{ijk}$ and $X_{i'j'k'}$ are 0–1 variables, where $i = 1, 2, \ldots, n$; $i' = 1, 2, \ldots, n'$; $k = 1, 2, \ldots, m$; and $k' = 1, 2, \ldots, m$.

$$\delta_i = \begin{cases} 1, & \text{if } C_{i,n_i,k} > d_i \\ 0, & \text{otherwise} \end{cases} \qquad (17.17)$$

$$\delta_{i'} = \begin{cases} 1, & \text{if } C_{i',n_{i'},k'} > d_{i'} \\ 0, & \text{otherwise} \end{cases} \qquad (17.18)$$

Constraints (17.17) and (17.18) indicate whether a job is tardy, where $i = 1, 2, \ldots, n$ and $i' = 1, 2, \ldots, n'$. The above equations define the objective functions and the constraints. Because the machine available times and the rescheduling jobs depend on the rescheduling strategy, the details are introduced in the following text.
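A value for the big number M satisfying Eq. (17.9) can be computed directly; the function below returns the right-hand side (total processing time minus the smallest single processing time), so any M strictly greater than this bound works. The nested-list instance is made up.

```python
def big_m_bound(processing_times):
    """Right-hand side of Eq. (17.9): the sum of all processing times
    over every job, operation and alternative machine, minus the smallest
    single processing time.  Any M strictly greater than this bound is
    large enough for the M(1 - A) term to deactivate a constraint."""
    flat = [p for job in processing_times for op in job for p in op]
    return sum(flat) - min(flat)

# made-up instance: two jobs, each operation given as a per-machine time list
instance = [[[3, 5], [2, 4]], [[6, 1]]]
```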
17.2.2 The Rescheduling Strategy

Predictive-reactive scheduling modifies the created schedule in response to unexpected disruptions [20, 21]. It is the most common dynamic scheduling approach used in manufacturing systems [14]. In this research, we study predictive-reactive scheduling in the JSP with random job arrivals and machine breakdowns. Predictive-reactive scheduling needs to address two issues: when and how to react to real-time events.

Regarding the first issue, a hybrid periodic and event-driven rescheduling policy is selected for continuous processing in a dynamic environment. Under this hybrid policy, schedules are generated at a regular rescheduling frequency and also when a key unexpected disruption appears. A machine is selected randomly at the beginning of the simulation; the breakdown of this key machine is the key unexpected disruption, and a rescheduling is triggered when it breaks down.

Regarding the second issue, the complete rescheduling strategy, which regenerates a new schedule from scratch, is used at each rescheduling point. All the rescheduling jobs, which comprise all unprocessed jobs and new jobs, are scheduled. The right-shift rescheduling strategy is used when other machines break down. In the following, some important factors in dynamic JSPs are discussed to explain how the unexpected disruptions are simulated and how the due dates are calculated.

1. Job arrivals: it has been observed that the job arrival process closely follows a Poisson distribution; hence the interarrival time between two job arrivals is exponentially distributed [18]. The mean interarrival time between two job arrivals is obtained using the following relationship [26]:

$$t = \frac{\mu_p \mu_g}{U m} \qquad (17.19)$$

where $U$ denotes the shop utilization, $\mu_p$ the mean processing time per operation, $\mu_g$ the mean number of operations per job, and $m$ the number of machines. In this research, it is assumed that $U$ = 0.95, 0.90, 0.85, 0.80, and 0.75.

2. Machine breakdowns and repairs: the Mean Time Between Failures (MTBF) and the Mean Time To Repair (MTTR) are assumed to follow exponential distributions [30]. When a machine suffers an unpredictable breakdown, the distribution of operations on the machines must be revised; after the machine has been repaired, the distribution of operations must be modified again.

3. Due dates of jobs: the Total Work Content (TWK) method has been widely used for due date assignment. The due date of a job is determined using the following equation [26]:

$$d_i = a_i + k \times p_i \qquad (17.20)$$

where $p_i$ denotes the total processing time of job $i$ and $k$ is the tightness factor. This tightness factor is assigned from a uniform distribution and will be introduced in Sect. 17.4.1.
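The two formulas above translate directly into code. Reading "Um" in Eq. (17.19) as U × m (utilization times number of machines) is our interpretation, consistent with the usual shop-utilization relationship; the sample values are invented.

```python
def mean_interarrival(mu_p, mu_g, utilization, machines):
    """Eq. (17.19): mean interarrival time t = (mu_p * mu_g) / (U * m)
    that produces a target shop utilization U on m machines."""
    return (mu_p * mu_g) / (utilization * machines)

def twk_due_date(arrival, total_processing, tightness):
    """Eq. (17.20), TWK due-date assignment: d_i = a_i + k * p_i."""
    return arrival + tightness * total_processing
```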
17.2.3 Generating Real-Time Events

Random job arrivals and machine breakdowns are considered simultaneously in this study. This subsection describes in detail how the real-time events are generated, covering the job arrivals simulator and the machine breakdowns simulator. The following notation is used:

(1) JAEL denotes the interarrival time between two jobs. JAEL is assigned again when a new job arrives.
(2) MBEL denotes the interval of time between two failures of each machine. MBEL is assigned again when a machine breakdown event occurs.
(3) MREL denotes the interval of time between the failure and the repair of each machine. It is assumed that every machine can be repaired. MREL is assigned again when a machine repair event occurs.
(4) RP represents the set of all rescheduling points. RP is initialized with a fixed rescheduling frequency.
17.2.3.1 The Job Arrivals Simulator

The job arrivals simulator has the following steps:

Step 1: Initialize JAEL with exponentially distributed random numbers, where $t$ denotes the average interarrival time between two job arrivals:

$$JAEL_i = \exp\_rand(t), \quad \text{for all new jobs } i = 1, 2, \ldots \qquad (17.21)$$

Step 2: Calculate the arrival time of job $i$:

$$a_i = \begin{cases} JAEL_i + a_{i-1}, & i > 1 \\ JAEL_i, & i = 1 \end{cases} \qquad (17.22)$$

Step 3: Repeat Steps 1 and 2 until the rescheduling point is met.
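The steps above amount to accumulating exponential gaps. The sketch below uses Python's `random.expovariate` (parameterized by a rate, i.e. 1/mean) and a fixed horizon standing in for the rescheduling point.

```python
import random

def simulate_arrivals(mean_gap, horizon, seed=0):
    """Job arrivals simulator: draw exponential interarrival gaps (the
    JAEL values of Step 1) and accumulate them into arrival times
    (Step 2) until the horizon -- standing in for the rescheduling point
    of Step 3 -- is reached."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_gap)  # JAEL_i = exp_rand(t)
        if t > horizon:
            return arrivals
        arrivals.append(t)
```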
17.2.3.2 The Machine Breakdowns Simulator

The machine breakdowns and repair simulator has the following steps:

Step 1: Generate MBEL with exponentially distributed random numbers and calculate the failure time of each machine, where $mt_k$ denotes the breakdown time of machine $k$ and $ma_k$ denotes the available time of machine $k$:

$$MBEL_k = \exp\_rand(MTBF), \quad \text{for all machines } k = 1, 2, \ldots, m \qquad (17.23)$$

$$mt_k = MBEL_k + ma_k, \quad \text{for all machines } k = 1, 2, \ldots, m \qquad (17.24)$$

Step 2: If $M_k$ is the key machine, a rescheduling is triggered: add $mt_k$ to the RP set. Generate an exponential random number and calculate the repair time of each machine:

$$MREL_k = \exp\_rand(MTTR), \quad \text{for all machines } k = 1, 2, \ldots, m \qquad (17.25)$$

$$ma_k = MREL_k + mt_k, \quad \text{for all machines } k = 1, 2, \ldots, m \qquad (17.26)$$

Step 3: Repeat Steps 1 and 2 until the next rescheduling point is met.

In this study, the simulator generates unexpected disruptions until the planning horizon is reached, as in a real manufacturing environment.
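For a single machine, the alternation of up-times and repairs in Eqs. (17.23)–(17.26) can be sketched as follows; the rescheduling-point bookkeeping of Step 2 is omitted, and the horizon cutoff stands in for it.

```python
import random

def simulate_breakdowns(mtbf, mttr, horizon, seed=0):
    """Breakdown/repair simulator for one machine: alternate exponential
    up-times (MBEL, Eqs. 17.23-17.24) and repair durations (MREL,
    Eqs. 17.25-17.26), returning (failure, repaired) pairs within the
    horizon."""
    rng = random.Random(seed)
    events, available = [], 0.0        # ma_k starts at time 0
    while True:
        failure = available + rng.expovariate(1.0 / mtbf)  # mt_k
        if failure > horizon:
            return events
        available = failure + rng.expovariate(1.0 / mttr)  # new ma_k
        events.append((failure, available))
```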
17.3 The Proposed Rescheduling Technique for Dynamic JSPs

A hybrid GA and TS rescheduling technique with a new initialization method, integrated with a simulator, is presented to solve the dynamic JSPs. The new initialization method will be introduced in Sect. 17.3.1 and tested in Sect. 17.4.2. This section introduces the rescheduling technique in general and the hybrid GA and TS in a dynamic JSP environment.
17.3.1 The Rescheduling Technique in General A successful implementation of a scheduling system in a dynamic environment usually requires either a real manufacturing facility or a simulation environment. In this research, we use the simulation approach since it is a cost-effective way of creating dynamic conditions. In the dynamic job shop framework, the proposed scheduling system consists of three major components: prediction schedule, simulator, and the rescheduling technique. The process of the proposed method is summarized in Fig. 17.1. The prediction schedules are generated and implemented for the production process successfully at each rescheduling point. The prediction schedules are generated by considering machines, sequence, and jobs. At the beginning of the scheduling system, all machines are available at time 0. The initial information on the machines and jobs is determined. The prediction schedule at time 0 is generated by the initial information. The main task of the simulator is to generate the real-time events until the planning horizon is not met. The simulator uses two sets of input data: job-related data
17 A Hybrid Intelligent Algorithm …
Fig. 17.1 The proposed rescheduling technique for solving dynamic JSPs (the simulator generates new job arrivals, machine breakdowns, and machine repairs; at each rescheduling point, reached on a key machine breakdown or at the next periodic point, the problem condition is updated and the hybrid algorithm generates and implements a new schedule until the planning horizon is exceeded)
and machine-related data. How to generate the real-time events has been introduced in Sect. 17.2.3. The rescheduling technique used in schedule generation is a hybrid intelligence algorithm based on the GA and TS. According to the real-time events generated by the simulator and the shop condition, the problem condition is updated; the hybrid intelligence algorithm then optimizes the new problem and generates a new prediction schedule for the production process. The problem condition contains the machine available times, denoted AT_k^α, where k is the machine index and α the index of the rescheduling point, the rescheduling job set, and so on. Each machine falls into one of three states: broken down, busy (an operation is being processed on it), or idle. If the machine is broken down, its available time is taken as the time at which the machine is repaired, i.e., AT_k^α = mak. If the machine is busy, its available time is taken as the completion time of the operation on the machine, i.e., AT_k^α = CTk, where CTk denotes the completion time of the operation being processed on machine k at the αth rescheduling point. If the machine is idle, its available time is the rescheduling point itself, i.e., AT_k^α = RPα.
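The three cases for the machine available time can be condensed into a small helper; a minimal sketch (the function and argument names are illustrative, not from the original code):

```python
def machine_available_time(state, rp_alpha, ma_k=None, ct_k=None):
    """AT_k^alpha at rescheduling point alpha for one machine.

    'broken': available when the repair finishes (ma_k).
    'busy':   available when the current operation completes (ct_k).
    'idle':   available immediately at the rescheduling point (rp_alpha).
    """
    if state == 'broken':
        return ma_k
    if state == 'busy':
        return ct_k
    return rp_alpha
```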
17.3.2 The Hybrid GA and TS for Dynamic JSP

The proposed hybrid GA and TS rescheduling technique is invoked whenever a rescheduling is triggered. It balances diversification and intensification well in order to find high-quality solutions to the optimization problem: the TS is applied to each child with a certain probability to search for a better solution. The steps of the proposed rescheduling technique are shown in Fig. 17.2. The details of the proposed approach, including chromosome encoding and decoding, initialization, the crossover operator, the mutation operator, the neighborhood structure, the tabu list and move selection, and the termination criterion, are introduced in the following subsections.
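The overall loop can be sketched as follows: a minimal Python sketch of the hybrid GA + TS of Fig. 17.2, with the genetic operators passed in as functions. The parameter defaults follow Sect. 17.4.2; this is not the authors' C++ implementation, and for brevity the crossover here returns a single child.

```python
import random

def hybrid_ga_ts(init_pop, fitness, select, crossover, mutate, tabu_search,
                 p_c=0.9, p_m=0.1, p_t=0.5, max_iter=50, rng=random):
    """Hybrid GA + TS skeleton (smaller fitness is better). The TS procedure
    is applied to each child with probability p_t."""
    pop = init_pop()
    best = min(pop, key=fitness)
    for _ in range(max_iter):
        nxt = []
        while len(nxt) < len(pop):
            a, b = select(pop), select(pop)
            child = crossover(a, b) if rng.random() < p_c else list(a)
            if rng.random() < p_m:
                child = mutate(child)
            if rng.random() < p_t:        # improve the child with tabu search
                child = tabu_search(child)
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=fitness)
    return best
```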
17.3.2.1
Chromosome Encoding and Decoding
The operation-based representation is adopted as the encoding method because any permutation of the chromosome can be decoded into a feasible schedule sequence. Each chromosome contains a number of genes equal to the total number of operations of the rescheduling jobs. Each gene corresponds to the index of a job i, and each job i appears ni times in the chromosome, where ni is the number of its operations among the rescheduling jobs. For example, at one rescheduling point, assume the unprocessed jobs set = {O12 O13 O14 O23 O24} and the new jobs set = {O31 O32 O33 O41 O42 O43}, where Oij denotes the jth operation of job i. The rescheduling jobs set is then {O12 O13 O14 O23 O24 O31 O32 O33 O41 O42 O43}. Given this rescheduling jobs set, the chromosome {2, 1, 3, 4, 2, 1, 3, 4, 1, 4, 3} is interpreted under the operation-based representation as {O23 O12 O31 O41 O24 O13 O32 O42 O14 O43 O33}. Because all the objectives considered are nonregular, the semiactive schedule is adopted in the decoding approach to enlarge the search space.

Step 1: Generate the initial population. Set the parameters, including population size, maximum iterations, mutation probability, and crossover probability, and encode every initial solution into a chromosome.

Step 2: Decode each individual to obtain its fitness value and compare the individuals to find the best solution. Check the termination criterion; when it is met, stop the rescheduling technique and output the best solution. Otherwise, go to Step 3.

Step 3: Generate the population of the next generation. Three operators, selection, crossover, and mutation, are applied to create offspring. In this study, the tournament approach [16] is adopted to perform the selection.

Step 4: Randomly generate one probability value for each individual. When this value is less than the given probability, apply the TS procedure to improve the quality of the individual. Repeat this step until the number of individuals equals the population size; then the algorithm goes back to Step 2.

Fig. 17.2 The steps of the proposed rescheduling technique
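The decoding of the example above can be sketched as follows. This is a minimal illustration, not code from the chapter; the `first_op` mapping, which records the index of each job's first unprocessed operation, is an assumption about the bookkeeping.

```python
def decode(chromosome, first_op):
    """Translate a job-index chromosome into operations O_(i,j): the k-th
    occurrence of job i denotes its k-th remaining operation."""
    count = {}
    ops = []
    for i in chromosome:
        j = first_op[i] + count.get(i, 0)   # next unprocessed operation of i
        count[i] = count.get(i, 0) + 1
        ops.append((i, j))
    return ops

chrom = [2, 1, 3, 4, 2, 1, 3, 4, 1, 4, 3]
ops = decode(chrom, {1: 2, 2: 3, 3: 1, 4: 1})
# first entries: (2, 3) = O23, (1, 2) = O12, (3, 1) = O31, (4, 1) = O41
```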
17.3.2.2
Initialization
Two initialization methods, named random initialization and partial initialization, are proposed to initialize the population at each rescheduling point. This subsection introduces the two methods; how they are combined is discussed in Sect. 17.4.2.

Random initialization discards the original population and constructs the new population from scratch. The procedure is demonstrated in Fig. 17.3: the genes between positions 1 and 5 are the best sequence of the original schedule before initialization, and the genes between positions 6 and 11 are the new job indexes before initialization. The chromosome is generated randomly, and it must be guaranteed that each job i occurs ni times in every chromosome.

Fig. 17.3 An example of the random initialization procedure (before initialization: 2 1 2 1 1 3 3 3 4 4 4)

Partial initialization reserves the original job sequence in the new population and randomly inserts the new jobs into it. The procedure is demonstrated in Fig. 17.4. Again, the genes between positions 1 and 5 are the best sequence of the original schedule and the genes between positions 6 and 11 are the new job codes; the relative order of the first five genes remains unchanged, while the genes of the new jobs are randomly inserted among them.

Fig. 17.4 An example of the partial initialization procedure (before initialization: 2 1 2 1 1 3 3 3 4 4 4)
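Both procedures can be sketched as follows (illustrative Python, assuming genes are plain job indices; the function names are not from the chapter):

```python
import random

def random_init(rescheduling_genes, rng=random):
    """Random initialization: shuffle all genes into a fresh permutation,
    so each job i still occurs n_i times."""
    chrom = list(rescheduling_genes)
    rng.shuffle(chrom)
    return chrom

def partial_init(original_best, new_job_genes, rng=random):
    """Partial initialization: keep the relative order of the original best
    sequence and insert each new job's genes at a random position."""
    chrom = list(original_best)
    for gene in new_job_genes:
        chrom.insert(rng.randrange(len(chrom) + 1), gene)
    return chrom
```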
17.3.2.3
Crossover Operator
The crossover operator tends to increase the quality of the population. The precedence operation crossover [31] is adopted; the procedure is demonstrated in Fig. 17.5. A job subset J1 = {1, 3} is randomly selected from the job set J = {1, 2, 3, 4}. Child 1 preserves the positions of the genes of the jobs in J1 from parent 1 and fills the remaining positions with the genes of the jobs in J − J1 in the order they appear in parent 2. Symmetrically, child 2 preserves the positions of the genes of the jobs in J1 from parent 2 and fills the remaining positions in the order of parent 1.

Fig. 17.5 An example of the crossover procedure
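A compact sketch of the precedence operation crossover (illustrative; the helper names are not from the chapter):

```python
def pox(parent1, parent2, j1):
    """Precedence operation crossover: each child keeps the positions of the
    genes of the jobs in j1 from one parent and fills the remaining positions
    with the other jobs in the order they appear in the other parent."""
    def child(keeper, filler):
        rest = iter(g for g in filler if g not in j1)
        return [g if g in j1 else next(rest) for g in keeper]
    return child(parent1, parent2), child(parent2, parent1)
```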
Fig. 17.6 An example of the flip mutation procedure (before mutation: 2 1 3 4 2 1 3 4 1 4 3)

Fig. 17.7 An example of the insert mutation procedure (before mutation: 2 1 3 4 2 1 3 4 1 4 3)
17.3.2.4
Mutation Operator
Mutation is used to produce perturbations on chromosomes in order to maintain the diversity of the population. Two types of mutation, named flip mutation and insert mutation, are selected at random in this research. Flip mutation reverses the substring between two different random positions. The procedure is demonstrated in Fig. 17.6: positions 3 and 8 are randomly selected as the mutation points, and the genes between them are flipped to generate the new child. Insert mutation randomly selects two positions and inserts the gene of the back position before the front one. The procedure is demonstrated in Fig. 17.7: positions 3 and 8 are randomly selected as the mutation points, and the gene of position 8 is inserted before position 3 to generate the new child.
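Both mutations can be sketched as follows. Positions are 0-based here, so the chapter's positions 3 and 8 correspond to indices 2 and 7; the function names are illustrative.

```python
def flip_mutation(chrom, i, j):
    """Reverse (flip) the genes between positions i and j inclusive."""
    child = list(chrom)
    child[i:j + 1] = reversed(child[i:j + 1])
    return child

def insert_mutation(chrom, i, j):
    """Insert the gene of the back position j before the front position i."""
    child = list(chrom)
    child.insert(i, child.pop(j))
    return child

before = [2, 1, 3, 4, 2, 1, 3, 4, 1, 4, 3]
flip_mutation(before, 2, 7)    # -> [2, 1, 4, 3, 1, 2, 4, 3, 1, 4, 3]
insert_mutation(before, 2, 7)  # -> [2, 1, 4, 3, 4, 2, 1, 3, 1, 4, 3]
```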
17.3.2.5
Neighborhood Structure
A neighborhood structure can obtain a new set of neighbor solutions by applying a small perturbation to a given solution. At each rescheduling point, the new problem can be represented with a disjunctive graph. Table 17.1 describes an example of the new problem at a rescheduling point. The rescheduling job set contains {O12 O13 O14 O23 O24 O31 O32 O33 O41 O42 O43 }. The disjunctive graph is shown in Fig. 17.8. Any operation on the critical path is called a critical operation [33]. In Fig. 17.9, the critical path is {0, O12 , O31 , O32 , O14 , 12} and the length of the critical path is 17. A block is a maximal sequence of adjacent critical operations that is processed on the same machine. In Fig. 17.9, the critical operations are {O12 , O31 , O32 , O14 },
Table 17.1 An example of the problem

Jobs    Machine sequence    Processing times
J1      2-3-4               4-3-6
J2      4-3                 5-2
J3      2-4-1               2-5-4
J4      3-1-2               4-6-2
Fig. 17.8 The disjunctive graph of the example in Table 17.1
Fig. 17.9 A feasible solution for the disjunctive graph of Fig. 17.8 (the critical path is marked)
and the critical path is divided into two blocks, block 1 = {O12, O31} and block 2 = {O32, O14}. In this research, neighborhood structures based on the critical path are adopted. The neighborhood structure swaps two randomly selected operations within every critical block that contains at least two operations; if a critical block contains only one operation, no exchange is made.
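The block-based neighborhood can be sketched as follows (illustrative; operations are represented by opaque labels, and the function name is not from the chapter):

```python
import random

def block_swap_moves(critical_blocks, rng=random):
    """One candidate move per critical block: randomly pick two operations of
    the block and propose swapping them; single-operation blocks yield no
    move, as described above."""
    moves = []
    for block in critical_blocks:
        if len(block) < 2:
            continue
        x, y = rng.sample(range(len(block)), 2)
        moves.append((block[x], block[y]))
    return moves
```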
17.3.2.6
Tabu List and Move Selection
The purpose of the tabu list is to prevent the search process from returning to solutions visited in previous steps. The tabu list has a fixed length; when it is full, the oldest element is replaced by the new one. More precisely, if the swap (x, y) yields the best neighbor of a chromosome, the swap (x, y) is added to the tabu list. It is assumed that the neighborhood of the current solution is not empty; otherwise, all critical blocks contain no more than one operation and the TS method terminates. The move selection picks the best neighbor that is either non-tabu or satisfies the aspiration criterion, which accepts a tabu move provided its fitness value is better than that of the best solution found so far. If all neighbors are tabu and none satisfies the aspiration criterion, a neighbor is selected randomly among all neighbors.
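Move selection with the tabu list and aspiration criterion can be sketched as follows (minimizing fitness; an illustrative sketch with the fixed-length tabu list kept as a `deque`, not the authors' code):

```python
from collections import deque

def select_move(neighbors, tabu, best_fitness):
    """neighbors: list of (move, fitness), smaller fitness being better.
    Return the best move that is non-tabu or satisfies the aspiration
    criterion (beats the best fitness found so far); return None when every
    move is tabu and none aspirates, so the caller picks one at random."""
    allowed = [(m, f) for m, f in neighbors
               if m not in tabu or f < best_fitness]
    if not allowed:
        return None
    return min(allowed, key=lambda mf: mf[1])

tabu = deque(maxlen=10)   # when full, the oldest swap is dropped automatically
```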
17.3.2.7
Termination Criterion
The termination criterion determines when the proposed method stops. In this study, the TS stops when the number of iterations exceeds the maximum (iterTS) or all critical blocks contain no more than one operation. When the number of iterations exceeds the maximum (iterGATS), the proposed rescheduling technique stops and outputs the best solution for the next rescheduling. When no new jobs arrive or the number of new jobs reaches the preset quantity in the simulation, the planning horizon is met and the simulator stops generating disturbances.
17.4 Experimental Environments and Results

The proposed rescheduling technique was implemented in C++ and run on a PC with an Intel Core 2 Duo 2.0 GHz processor and 2.00 GB of RAM. The simulation starts from a 6 × 6 static job shop problem (FT06) [5] with makespan = 55, taken from the literature.
17.4.1 Experimental Environments

The experimental conditions of the studied dynamic JSPs are given in Table 17.2. All machines have the same MTTR and MTBF. Ag = MTTR/(MTBF + MTTR) denotes the breakdown level of the shop floor, i.e., the percentage of time during which the machines have failures. For example, Ag = 0.1 with MTTR = 5 time units gives MTBF = 45: on average, a machine is available for 45 time units and then breaks down, with a mean time to repair of 5 time units. The interarrival time between two jobs on the shop floor is an independent exponential random variable. The average interarrival time is determined by the shop utilization, the average processing time per operation, the average number of operations per job, and the number of machines on the shop floor. The processing time of an operation on each machine is assumed to follow a uniform distribution. The average shop utilization has five levels: 0.75, 0.80, 0.85, 0.90, and 0.95. New jobs arrive one by one, with the total quantity of new jobs ranging from 20 to 200. Table 17.3 gives the average shop utilization and the average interarrival time under the heavy and moderate shop load levels.

Table 17.2 The experimental conditions

Dimension              Characteristic                    Specification
Shop floor             Size                              6 machines
                       Machine breakdown level           0.025
                       MTTR                              5
                       MTBF, MTTR                        Exponential distribution
Jobs                   Random arrival                    Poisson distribution
                       The quantity of new jobs          [20, 200]
                       Job release policy                Immediate
                       Processing time of an operation   U[1, 10]
                       Schedule frequency                12
Performance measures   Mean flow time, mean tardiness, maximum flow time,
                       maximum tardiness, number of tardy jobs

Table 17.3 The average interarrival time under different shop load levels

Shop load                   Heavy                 Moderate
Average interarrival time   5.26   5.55   5.88   6.25   6.66
Average shop utilization    0.95   0.90   0.85   0.80   0.75
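The interarrival times of Table 17.3 are consistent with the standard utilization relation. A sketch, assuming 6 operations per job and an effective mean processing time of 5 time units per operation (both assumptions; the chapter states U[1, 10] processing times but does not give these two values explicitly):

```python
def mean_interarrival(mean_proc, ops_per_job, machines, utilization):
    """Mean job interarrival time that loads `machines` machines to the
    target utilization: each job brings ops_per_job * mean_proc units of
    work, spread over the machines."""
    return (ops_per_job * mean_proc) / (machines * utilization)

# With 6 machines, 6 operations per job, and mean processing time 5, the
# values of Table 17.3 are reproduced up to rounding (e.g. 5.26 at U = 0.95).
table = [round(mean_interarrival(5.0, 6, 6, u), 2)
         for u in (0.95, 0.90, 0.85, 0.80, 0.75)]
```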
Table 17.4 The range of due date tightness for different shop load levels Shop load
Heavy
Due date tightness k
Loose
Tight
Loose
Moderate Tight
(2,6)
(1.5,4.5)
(1,5)
(0.75,3.75)
Since due date tightness has been shown to influence scheduling decisions [22], two levels of due date tightness, namely loose and tight due dates, are considered. When a new job arrives, its tightness factor k is drawn from a uniform distribution. Table 17.4 shows the range of the due date tightness factor k for the different shop load levels.
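The chapter does not spell out the due-date rule itself. A common choice compatible with a tightness factor k is the total-work-content (TWK) rule, sketched here as an assumption (both function names are illustrative):

```python
import random

def draw_tightness(lo, hi, rng=random):
    """Draw the tightness factor k uniformly from a range of Table 17.4."""
    return rng.uniform(lo, hi)

def twk_due_date(arrival_time, processing_times, k):
    """TWK rule (an assumption, not stated in the chapter): the due date is
    the arrival time plus k times the job's total processing time."""
    return arrival_time + k * sum(processing_times)
```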
17.4.2 Results and Discussion

It is worth mentioning that several parameters of the proposed rescheduling technique must be determined. They are set as follows: population size (popsize = 20), maximum iterations of the hybrid GA and TS method (iterGATS = 50), maximum iterations of the TS (iterTS = 20), crossover probability (pc = 0.9), mutation probability (pm = 0.1), the TS application probability (pt = 0.5), and the length of the tabu list (lt = 10). Simulation for each setting continues until the number of new jobs that have arrived on the shop floor reaches the preset quantity (from 20 to 200). The proposed rescheduling technique runs 10 times for each problem condition, and all initialization methods are compared under the same experimental conditions (i.e., the quantity of new job arrivals, the number of machines, the schedule frequency, and the machine breakdown level).
17.4.2.1
The Effect of Several Initializations
Tables 17.5 and 17.6 summarize the results for the different initializations under the heavy and moderate shop load levels, respectively. The average shop utilization is 0.95 (heavy) or 0.75 (moderate), and the number of new jobs is 20 in both cases. We report the average value over the 10 runs. Loose and tight due dates have no influence on the mean flow time and the maximum flow time. Take partial (p = 0.1) as an example: each individual is initialized by partial initialization with probability 0.1 and by random initialization with probability 0.9. Random means that the whole population is initialized by random initialization, and partial (p = 1) that it is initialized entirely by partial initialization. The results in Tables 17.5 and 17.6 reveal that the initialization has a significant effect on the performance of the hybrid intelligence algorithm. The performance of the hybrid intelligence algorithm is degraded when the probability of the partial
Table 17.5 The results for different initializations under heavy shop load levels
Random
Loose
Mean flow time
Maximum flow time
Mean tardiness
Maximum tardiness
Number of tardy jobs
94.0
138.9
2.63
29.5
5.5
7.29
29.6
5.4
29.6
5.7
Tight Partial ( p = 0.1)
Loose
Partial ( p = 0.2)
Loose
Partial ( p = 0.3)
Loose
Partial ( p = 0.4)
Loose
Partial ( p = 0.5)
Loose
Partial ( p = 0.6)
Loose
Partial ( p = 0.7)
Loose
Partial ( p = 0.8)
Loose
Partial ( p = 0.9)
Loose
Partial ( p = 1)
Loose
94.0
139.6
3.17 7.20
30.1
5.8
89.5
138.9
3.58
33.8
5.9
6.93
30.4
4.2
2.63
27.0
5.1
8.49
29.6
5.8
26.4
6.0
Tight Tight 93.2
140.5
Tight 91.6
141.1
3.45 6.79
24.4
5.2
93.0
140
3.08
28.6
5.4
6.60
27.8
4.8
2.77
27.7
6.9
7.13
24.5
6.8
38.6
5.3
Tight Tight 93.0
142
Tight 92.9
145.1
3.16 6.13
30.0
6.8
92.2
143.2
3.19
30.3
6.1
7.49
33.0
6.6
3.89
37.5
6.9
6.74
34.0
5.4
3.57
34.0
6.2
6.95
36.2
6.6
Tight Tight 95.1
144.2
Tight Tight
93.0
139.7
Data in bold show the best objective values under different probabilities of the partial initialization
initialization is high. This shows that a high probability of partial initialization may reduce population diversity, so the hybrid intelligence algorithm easily falls into local optima, whereas a low probability of partial initialization improves its performance: it preserves population diversity while still steering the search toward global optima. As illustrated in Tables 17.5 and 17.6, partial initialization with a probability of 0.2 is statistically better than the other settings for mean flow time and maximum flow time under both shop load levels. The probability of partial initialization at which the best mean tardiness, maximum tardiness, and number of tardy jobs are obtained varies with the shop load level and due date tightness. In a statistical sense, the better values of mean tardiness are obtained with partial initialization probabilities from 0.1 to 0.3 under different conditions. The better values of maximum tardiness are obtained with the
Table 17.6 The results for different initializations under the moderate shop load level
Random
Loose
Mean flow time
Maximum flow time
77.1
116.6
Tight Partial ( p = 0.1)
Loose
77.9
119.9
Tight Partial ( p = 0.2)
Loose
74.8
112.5
Tight Partial ( p = 0.3)
Loose
78.0
118.7
77.5
115
77.1
122.5
76.4
119.5
78.4
119.9
Tight Partial ( p = 0.4)
Loose Tight
Partial ( p = 0.5)
Loose
Loose Tight
Partial ( p = 0.7)
Loose
Partial ( p = 0.8)
Loose
116.8
Tight Partial ( p = 0.9)
Loose
79.9
118.3
80.8
124.2
Tight Partial ( p = 1)
Loose Tight
5.1
22.2 22.2
7.0
2.22
22.6
5.4
10.12
24.2
4.8
2.10
19.9
5.8
10.97
22.8
5.1
2.3
22.8
6.3
10.87
19.8
5.0
2.70
26.7
6.3
2.12
23.3
5.4
24.7
6.2
11.66
23.5
6.4
2.50
25.9
5.9
24.6
4.8
20.0
5.7
23.1
5.8
2.36
22.0
5.9
11.38
23.8
5.8
2.34
24.6
5.8
2.42 12.6
77.3
Number of tardy jobs
2.81
10.4
Tight
Maximum tardiness
10.77
10.8
Tight Partial ( p = 0.6)
Mean tardiness
10.93
27.9
5.8
2.16
24.1
5.3
11.78
23.9
5.8
Data in bold show the best objective values under different probabilities of the partial initialization
probability of the partial initialization from 0.2 to 0.4 in a statistical sense, and the better values of the number of tardy jobs with the probability from 0.1 to 0.3. Consequently, the probability of the partial initialization is selected to be 0.2 when testing the proposed rescheduling technique on the performance measures of mean flow time, maximum flow time, mean tardiness, and the number of tardy jobs, and to be 0.3 for the performance measure of maximum tardiness.
17.4.2.2
The Performance of the Proposed Rescheduling Technique
In order to illustrate the potential of the proposed rescheduling technique for dynamic JSPs, it is compared with several dispatching rules that are widely used in the literature [9, 33] and with several meta-heuristic algorithms: (1) the Shortest Processing Time (SPT) dispatching rule; (2) the Longest Processing Time (LPT) dispatching rule; (3) the Most Work Remaining (MWKR) dispatching rule; (4) the Least Work Remaining (LWKR) dispatching rule; (5) the First In First Out (FIFO) dispatching rule; (6) the Last In First Out (LIFO) dispatching rule; (7) the Earliest Due Date (EDD) dispatching rule; (8) the genetic algorithm with random initialization (GA − I), i.e., the genetic algorithm of this chapter without the tabu search; (9) the genetic algorithm with the new initialization (GA + I), using partial initialization with a probability of 0.3; and (10) the Variable Neighborhood Search (VNS) from the literature [8]. The parameter settings of the VNS approach are: the number of outer loop iterations N = 20, the number of inner loop iterations qmax = 150, and the predetermined constant dr = 8. Following the steps of the VNS approach, it is implemented in the same experimental environment and on the same computer as the proposed method. The experimental results obtained by these rescheduling techniques, averaged over 10 replications, are shown in the following tables. The comparisons on the five performance measures are presented in Tables 17.7, 17.8, 17.9, 17.10, and 17.11, respectively.

(1) Flow time-based analysis

The results for the flow time-based performance measures are presented in Tables 17.7 and 17.8, which compare the results of all rescheduling techniques under the objective functions of mean flow time and maximum flow time, respectively.
As illustrated in Tables 17.7 and 17.8, among these rescheduling techniques, the proposed rescheduling technique improves the mean flow time and the maximum flow time for dynamic JSPs, and it performs better than the other rescheduling techniques under different quantities of new jobs and shop load levels. As the number of new job arrivals increases, the mean or maximum flow time increases sharply for the dispatching rules, but only slightly for the meta-heuristic algorithms, especially the proposed rescheduling technique. Tables 17.7 and 17.8 also show that GA + I is slightly better than GA − I. This illustrates that the new initialization method is an effective approach to improve
U75
U80
U85
U90
U95
74.8
83.6
S
L
119
L
164.3
64.3
L
S
90.5
126.2
S
80.3
L
137.9
L
S
89.5
S
GATS
85.8
74.6
122.1
77.9
177.8
88.3
127.3
81.5
134.6
101.3
VNS
126.6
86.4
187.5
69
272.2
97.9
238.5
86.8
214.3
103.9
GA + I
133.9
83.4
193.7
70.1
277.9
99.9
238.6
87.9
214.5
104.9
GA − I
871.7
205.3
959.4
223
915.6
214.7
1011
214
1009
229.3
FIFO
Table 17.7 Comparison of all rescheduling techniques for the mean flow time
2069
282.2
2130
301.8
2129
320
2266
304.2
2275
303.2
LIFO
1496
249.3
1548
182.7
1680
252
1647
197.3
1561
250.1
EDD
1124
232.5
1204
220.3
1180
204.8
1327
208.3
1192
252.2
SPT
1413
247.9
1494
228.1
1748
220
1730
281.6
1565
304.2
LPT
7689
1557
6941
1271
6336
1465
5023
1017
8194
1237
MWRK
871.7
205.3
959.4
223
915.6
214.7
1011
214
1009
229.3
LWRK
U75
U80
U85
U90
U95
112.5
177.2
S
L
256
L
287.2
102.2
L
S
143.9
224
S
109
L
300.7
L
S
138.9
S
GATS
235
143.7
362.9
116
432.8
162.2
339.3
125.5
422
160.5
VNS
215.8
134
294.8
114.8
356.1
159.7
300.7
133
401.6
162.1
GA + I
215
136.7
321.5
115.9
379.2
163
319
126.6
408.4
167.5
GA − I
2189
351
2434
450
2240
383
2568
418
2572
412
FIFO
Table 17.8 Comparison of all rescheduling techniques for the maximum flow time
4412
545
4385
561
4484
621
4709
614
4663
613
LIFO
3342
503
3331
394
3550
489
3520
480
3438
497
EDD
2085
395
2214
357
2113
328
2428
332
2303
439
SPT
2426
412
2505
375
2599
339
2787
416
2615
458
LPT
3185
390
3053
352
3728
336
2917
452
4127
377
MWRK
870
249
1223
224
1135
240
1139
381
1022
317
LWRK
U75
U80
U85
U90
U95
10.1
2.1
7.4
7.3
LT
LL
19.3
LL
SL
19.6
LT
ST
2.6
SL
42.6
LL
3
40
LT
ST
12.5
SL
17.6
LL
9.9
18.8
LT
ST
0.9
30
LL
1
34.2
LT
SL
2.6
SL
ST
6.9
ST
GATS
10.2
10.8
3.1
14.7
39.8
40.1
3.9
4.1
92.7
90.6
12.6
13.0
43.6
40.3
1.3
1.1
50.8
49.9
2.6
7.0
VNS
14.5
14.7
5
16.4
47.7
44.4
4.4
5.3
136.9
139.1
18.7
18.5
38.8
43.7
1.9
1.6
83.7
93
10.1
15.7
GA + I
15.6
15.6
5.2
17.6
45.1
51.9
4.9
4.1
141.6
145.5
18
18.9
51.4
53.4
2.6
2.6
87.7
92.9
11
16.2
GA − I
770.8
781.9
98.8
130.7
860.2
885.7
122.6
151.3
816.1
842.5
115
145.3
908.6
958.2
80.6
125.8
909
932.7
104.6
127.4
FIFO
Table 17.9 Comparison of all rescheduling techniques for the mean tardiness
1974
1993
177.7
211
2031
2043
203.6
230.6
2031
2057
224.3
255
2164
2182
175.2
214.7
2176
2179
185.3
206.3
LIFO
1395
1340
137.7
177.7
1448
1418
77.2
143.4
1580
1586
146
154
1542
1428
55.8
125.9
1460
1482
114.6
148.3
EDD
1028
1040
129
161
1107
1102
126.2
153.8
1085
1109
108
137.2
1231
1226
73.8
120.4
1096
1103
136.9
155.3
SPT
1317
1339
141.7
174.9
1400
1422
132.3
162.9
1651
1688
126.9
151.3
1630
1645
156.5
197.2
1472
1477
177.4
203.4
LPT
1135
1156
101.4
131.3
1220
1240
69.9
92.2
1523
1564
108.9
138.1
940.3
989.4
142.6
190
1543
1553
98.3
117.5
MWRK
90.7
108.1
32.8
55.8
144.3
171.8
33
50.2
132.9
157.9
57.9
87.6
106
128.8
83.6
125.9
130.4
144.4
62
74.4
LWRK
U75
U80
U85
U90
U95
19.8
19.9
96.7
100.6
LT
LL
147.2
LL
SL
148.6
LT
ST
25
SL
190.5
LL
24.8
188.3
LT
ST
53.3
SL
139.7
LL
46.2
149.8
LT
ST
21.8
223.7
LL
25.2
223
LT
SL
26.4
SL
ST
24.4
ST
GATS
151.2
166.6
36.4
58.4
260.2
233.4
32.3
31.4
333
332
61.9
66
247.8
243.8
19.3
19.8
353.8
345.4
44.2
60.2
VNS
160.2
155.3
46.8
43.1
226.5
223.4
34.9
32.4
301.7
305.7
72.2
69.8
236.4
224
26.3
27.4
324.7
327.4
54.9
66.9
GA + I
165.8
152.8
43.1
41.1
217.5
237
33.9
32
291.6
306.1
77.1
79.6
217.6
234
28.9
28.3
338
344
56.6
58
GA − I
2099
2144
285
304
2380
2356
349
407
2169
2178
328
319
2483
2515
327
343
2487
2484
238
321
FIFO
Table 17.10 Comparison of all rescheduling techniques for the maximum tardiness
4297
4350
448
501
4288
4341
464
517
4382
4435
524
577
4591
4619
514
542
4558
4586
513
541
LIFO
3104
2930
336
386
3197
3151
224
313
3402
3373
315
365
3366
3228
228
324
3257
3418
266
361
EDD
2049
2044
294
348
2149
2159
307
308
2058
2069
264
282
2366
2363
265
265
2249
2244
312
376
SPT
2326
2392
362
366
2471
2486
325
315
2566
2556
302
291
2713
2760
315
352
2565
2574
331
366
LPT
3121
1202
271
328
2853
1125
233
290
3636
957
291
309
2789
1170
349
393
4008
1291
285
293
MWRK
821
814
167
174
1094
1160
135
176
989
1083
171
207
1068
1114
281
309
940
947
224
219
LWRK
U75
U80
U85
U90
U95
4.8
5.1
71.7
72.8
LT
LL
116.8
LL
SL
118.8
LT
ST
4.9
SL
139.8
LL
5.1
142
LT
ST
9.4
SL
117.8
LL
10.1
117.4
LT
ST
1.9
114.4
LL
2.4
114.4
LT
SL
5.1
SL
ST
4.2
ST
GATS
174.8
178.6
13.8
21.4
190.6
189.6
11.6
10.4
201
199.2
16.6
16.8
201.6
201
7.8
7.4
192.4
191
14.2
19.8
VNS
93.4
91
6.8
7.8
141.8
139.8
6.1
5.5
167
165.7
10.8
11.6
149.6
148.1
2.8
3.9
135.8
140.5
7.1
8.1
GA + I
95.8
88.2
7.1
7.2
143.8
140.9
5.6
5.5
166.1
165.9
11.8
11.8
149.9
150
3.3
3.7
141.9
138.1
8.1
8.1
GA − I
202
205
22
25
199
205
22
25
203
205
23
25
201
203
20
23
203
203
22
23
FIFO
Table 17.11 Comparison of all rescheduling techniques for the number of tardy jobs
182
205
21
25
196
205
23
25
190
205
21
25
194
203
17
23
192
203
20
23
LIFO
203
205
24
25
203
205
23
25
202
205
24
25
203
206
16
25
202
204
21
23
EDD
171
180
18
22
186
189
18
23
180
186
19
24
163
183
17
20
177
178
19
20
SPT
188
191
20
22
185
190
21
21
197
197
19
23
192
198
21
22
184
191
24
24
LPT
194
199
19
23
191
194
18
21
192
196
21
25
180
187
21
22
195
195
18
19
MWRK
141
159
13
22
149
161
14
20
156
163
20
23
141
158
21
20
147
161
17
20
LWRK
the performance of the genetic algorithm for solving dynamic JSPs under the mean or maximum flow time objective. Moreover, from Tables 17.7 and 17.8, it can be found that the mean or maximum flow time under the shop load level of 0.85 is larger than under the other shop load levels, and the mean flow time is smaller at low shop load levels than at high ones. This illustrates that the shop load level has an effect on the mean or maximum flow time.

(2) Tardiness-based analysis

The results for the tardiness-based performance measures are presented in Tables 17.9 and 17.10, which compare the results of all rescheduling techniques under the objective functions of mean tardiness and maximum tardiness, respectively. As illustrated in Tables 17.9 and 17.10, among these rescheduling techniques, the proposed rescheduling technique performs best for dynamic JSPs with regard to the tardiness criteria, giving better results than the other rescheduling techniques under different quantities of new jobs and shop load levels. It can also be found that, as the number of new job arrivals increases, the mean or maximum tardiness increases sharply for the dispatching rules, but only slightly for the meta-heuristic algorithms, especially the proposed rescheduling technique. Tables 17.9 and 17.10 also show that GA + I is slightly better than GA − I: the new initialization method is an effective approach to improve the performance of the genetic algorithm for solving dynamic JSPs under the mean or maximum tardiness objective. Moreover, Tables 17.9 and 17.10 illustrate that the mean or maximum tardiness under the shop load level of 0.85 is larger than under the other shop load levels.
The mean or maximum tardiness is smaller at low shop load levels than at high ones. These results reveal that the shop load level has an effect on the tardiness.

(3) Number of tardy jobs-based analysis

The results for the performance measure based on the number of tardy jobs are presented in Table 17.11, which compares the results of all rescheduling techniques under the objective function of the number of tardy jobs. As illustrated in Table 17.11, the proposed rescheduling technique reduces the number of tardy jobs for dynamic JSPs. According to the results of GA + I and GA − I in Table 17.11, GA + I outperforms GA − I, which illustrates that the new initialization method is an effective approach to improve the performance of the genetic algorithm for solving dynamic JSPs under the number of tardy jobs criterion. Moreover, Table 17.11 also reveals that the number of tardy jobs under the shop load level of 0.85 is larger than under the other shop load levels, and that it is smaller at low shop load levels than at high ones. These results show that the shop load level has an effect on this performance measure.
Based on the results of the computational study, the proposed rescheduling technique appears to be an effective approach for dynamic JSPs. The computational results show that it is superior to the other rescheduling techniques with respect to the five objectives, different shop load levels, and different due date tightness. These results also illustrate that the proposed rescheduling technique is robust in a dynamic manufacturing environment.
17.4.2.3 ANOVA Analysis
An ANalysis Of VAriance (ANOVA) has been carried out for each performance measure using the commercial statistical software Excel 2010. Simulation results are obtained for two-factor experiments: the six quantities of new job arrivals (20, 40, 60, 80, 100, 200) form the first factor, and the five shop load levels (0.95, 0.90, 0.85, 0.80, and 0.75) form the second factor. Table 17.12 shows the results of the two-factor ANOVA. Effects are considered significant if the P-value is less than 0.05. The results indicate that, for all performance measures, the shop load level and the interaction between the shop load level and the quantity of new jobs have a statistically significant impact, whereas the quantity of new jobs has a statistically significant impact only on the mean tardiness and the number of tardy jobs.
Table 17.12 ANOVA results for two-way analysis

Source of variation   | Mean flow time | Maximum flow time | Mean tardiness | Maximum tardiness | Number of tardy jobs
                      | F      P       | F      P          | F      P       | F      P          | F      P
Main effects          |                |                   |                |                   |
A: Shop load level    | 101.1  0.000   | 58.6   0.000      | 21.29  0.000   | 31.43  0.000      | 20.49  0.000
B: Quantity of jobs   | 0.04   0.828   | 3.81   0.066      | 21.00  0.000   | 0.33   0.56       | 4.41   0.042
Interaction AB        | 14.51  0.000   | 30.64  0.000      | 27.15  0.000   | 26.62  0.000      | 26.56  0.000

17.5 Conclusions and Future Works

In a real manufacturing system, an effective rescheduling technique is very important for decision makers, and selecting an appropriate scheduling rule improves system performance under different manufacturing environments. To this end, this chapter proposed a new rescheduling technique to solve dynamic JSPs with random job arrivals and machine breakdowns. A simulator is designed to generate the disruptions. Five performance measures, namely mean flow time, maximum flow time, mean tardiness, maximum tardiness, and number of tardy jobs, are applied in the scheduling process. At each rescheduling point, the simulator generates the disturbances for the next step, and the scheduling scheme is optimized by the hybrid intelligent algorithm. The main conclusions of this chapter are as follows:

• The new initialization method is an effective approach to improve the performance of the hybrid intelligent algorithm for solving dynamic JSPs. The low probability of partial initialization can keep the population diversity and also make the searching process tend toward global optima.
• The proposed rescheduling technique is superior to other rescheduling techniques with respect to the five objectives, different shop load levels, and different due date tightness. It is robust in a dynamic manufacturing environment.
• The performance measures are generally smaller at low shop load levels than at high ones. A marked inflection can be found at the shop load level of 0.85 for most of the performance measures. These results show that the shop load level has an effect on the performance measures.
• The shop load level and the interaction between the shop load level and the quantity of jobs have a statistically significant impact on all performance measures. The quantity of new job arrivals has a statistically significant impact on the mean tardiness and the number of tardy jobs.

The results presented in this chapter should be interpreted with reference to the assumptions and experimental conditions described earlier. Considering the efficiency and the stability of schedules simultaneously is an important topic for future work. Moreover, different breakdown levels of the shop floor should be tested in further research, as should the choice of the key real-time event. Finally, the simulation study can be extended to cover other combinations of experimental factors (e.g., processing time variations and due dates).
References

1. Adibi MA, Zandieh M, Amiri M (2010) Multi-objective scheduling of dynamic job shop using variable neighborhood search. Expert Syst Appl 37:282–287
2. Chryssolouris G, Subramaniam V (2001) Dynamic scheduling of manufacturing job shops using genetic algorithms. J Intell Manuf 12:281–293
3. Damodaran P, Hirani NS, Velez-Gallego MC (2009) Scheduling identical parallel batch processing machines to minimize makespan using genetic algorithms. Eur J Ind Eng 3:187–206
4. Dominic PDD, Kaliyamoorthy S, Kumar MS (2004) Efficient dispatching rules for dynamic job shop scheduling. Int J Adv Manuf Technol 24:70–75
5. Gao L, Zhang GH, Zhang LP, Li XY (2011) An efficient memetic algorithm for solving the job shop scheduling problem. Comput Ind Eng 60:699–705
6. Lei D (2011) Scheduling stochastic job shop subject to random breakdown to minimize makespan. Int J Adv Manuf Technol 55:1183–1192
7. Lin SC, Goodman ED, Punch WF (1997) A genetic algorithm approach to dynamic job shop scheduling problems. In: The 7th international conference on genetic algorithms. Morgan Kaufmann, San Francisco
8. Liu L, Gu HY, Xi YG (2007) Robust and stable scheduling of a single machine with random machine breakdowns. Int J Adv Manuf Technol 31:645–654
9. Liu SQ, Ong HL, Ng KM (2005) A fast tabu search algorithm for the group shop scheduling problem. Adv Eng Softw 36:533–539
10. Lou P, Liu Q, Zhou Z, Wang H, Sun SX (2012) Multi-agent-based proactive–reactive scheduling for a job shop. Int J Adv Manuf Technol 59:311–324
11. Malve S, Uzsoy R (2007) A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Comput Oper Res 34:3016–3028
12. Megala N, Rajendran C, Gopalan R (2008) An ant colony algorithm for cell-formation in cellular manufacturing systems. Eur J Ind Eng 2:298–336
13. Nie L, Shao X, Gao L, Li W (2010) Evolving scheduling rules with gene expression programming for dynamic single-machine scheduling problems. Int J Adv Manuf Technol 50:729–747
14. Ouelhadj D, Petrovic S (2009) A survey of dynamic scheduling in manufacturing systems. J Sched 12:417–431
15. Pan QK, Wang L (2008) A novel differential evolution algorithm for the no-idle permutation flow shop scheduling problems. Eur J Ind Eng 2:279–297
16. Park BJ, Choi HR, Kim HS (2003) A hybrid genetic algorithm for the job shop scheduling problems. Comput Ind Eng 45(4):597–613
17. Pessan C, Bouquard JL, Neron E (2008) An unrelated parallel machines model for an industrial production resetting problem. Eur J Ind Eng 2:153–171
18. Rangsaritratsamee R, Ferrel JWG, Kurtz MB (2004) Dynamic rescheduling that simultaneously considers efficiency and stability. Comput Ind Eng 46:1–15
19. Renna P (2010) Job shop scheduling by pheromone approach in a dynamic environment. Int J Comput Integr Manuf 23:412–424
20. Sabuncuoglu I, Bayiz M (2000) Analysis of reactive scheduling problems in a job shop environment. Eur J Oper Res 126:567–586
21. Sabuncuoglu I, Goren S (2009) Hedging production schedules against uncertainty in manufacturing environment with a review of robustness and stability research. Int J Comput Integr Manuf 22(2):138–157
22. Shafaei R, Brunn P (1999) The performance of heuristic scheduling rules in a dynamic job shop environment using a rolling time horizon approach. Int J Prod Res 37:3913–3925
23. Singh A, Mehta NK, Jain PK (2007) Multicriteria dynamic scheduling by swapping of dispatching rules. Int J Adv Manuf Technol 34:988–1007
24. Subramaniam V, Lee GK, Ramesh T, Hong GS, Wong YS (2000) Machine selection rules in a dynamic job shop. Int J Adv Manuf Technol 16(12):902–908
25. Van Hulle MM (1991) A goal programming network for mixed integer linear programming: a case study for the job-shop scheduling problem. Int J Neural Netw 2(3):201–209
26. Vinod V, Sridharan R (2008) Dynamic job-shop scheduling with sequence-dependent setup times: simulation modeling and analysis. Int J Adv Manuf Technol 36(3–4):355–372
27. Vinod V, Sridharan R (2011) Simulation modeling and analysis of due-date assignment methods and scheduling decision rules in a dynamic job shop production system. Int J Prod Econ 129:127–146
28. Wu SS, Li BZ, Yang JG (2010) A three-fold approach to solve dynamic job shop scheduling problems by artificial immune algorithm. Adv Mater Res 139–141:1666–1669
29. Xiang W, Lee HP (2008) Ant colony intelligence in multi-agent dynamic manufacturing scheduling. Eng Appl Artif Intell 21:73–85
30. Zandieh M, Adibi MA (2010) Dynamic job shop scheduling using variable neighbourhood search. Int J Prod Res 48:2449–2459
31. Zhang CY, Rao YQ, Li PG (2008) An effective hybrid genetic algorithm for the job shop scheduling problem. Int J Adv Manuf Technol 39:965–974
32. Zhou R, Lee HP, Nee AYC (2008) Applying ant colony optimization (ACO) algorithm to dynamic job shop scheduling problems. Int J Manuf Res 3(3):301–320
33. Zhou R, Nee AYC, Lee HP (2009) Performance of an ant colony optimization algorithm in dynamic job shop scheduling problems. Int J Prod Res 47:2903–2920
Chapter 18
A Hybrid Genetic Algorithm and Tabu Search for Multi-objective Dynamic JSP
18.1 Introduction

The Job shop Scheduling Problem (JSP) is the process of allocating and timing resource usage in a manufacturing system to complete jobs over time according to some desired criteria. It has attracted many researchers and engineers because it still appears in most manufacturing systems in various forms [2]. Even the simplified problems (with deterministic and static assumptions) are NP-hard or analytically intractable [10]. A schedule that is feasible in a deterministic manufacturing environment may become infeasible when real-time events, such as random job arrivals and machine breakdowns, occur. Taking these real-time events into account, the JSP is termed the dynamic JSP, which is very important and complicated for the successful implementation of real-world scheduling systems [20]. The dynamic nature of real manufacturing systems can be seen as the major source of the gap between scheduling theory and practice. Cowling and Johansson [6] addressed this important gap and stated that scheduling models and algorithms were unable to make use of real-time information. Ouelhadj and Petrovic [20] defined the problem and provided a review of the state of the art of current research; they also discussed and compared scheduling techniques, rescheduling policies, and rescheduling strategies in detail. Recently, a great deal of effort has been spent on developing methods to cope with unexpected disruptions in real manufacturing scheduling, for example deciding how and when to reschedule [4, 20]. Moreover, many researchers also try to propose mathematical or simulation models to simplify the problem. The dynamic nature of real manufacturing systems also affects the choice of performance measures.
Sabuncuoglu and Goren [26] pointed out that developing a bi-criteria approach considering both stability and robustness measures simultaneously was a direction for further research.

© Springer-Verlag GmbH Germany, part of Springer Nature and Science Press, Beijing 2020 X. Li and L. Gao, Effective Methods for Integrated Process Planning and Scheduling, Engineering Applications of Computational Methods 2, https://doi.org/10.1007/978-3-662-55305-3_18

Zandieh and Adibi [34] took the mean flow time as the performance measure. Renna [25] investigated five performance measures: throughput time, throughput, work-in-process, machine average utilization, and tardiness. In general, published work on the dynamic JSP has rarely dealt with schedule efficiency and schedule stability simultaneously. This chapter tries to improve the schedule efficiency and maintain the schedule stability through a method that uses a hybrid genetic algorithm and tabu search with a multi-objective performance measure for the dynamic JSP. Moreover, real-time events are difficult to express and take into account in a mathematical model, so a simulator is also proposed to tackle the complexity of the problem. The remainder of this chapter is organized as follows. Section 18.2 gives a literature review. The multi-objective dynamic JSP is defined in Sect. 18.3. In Sect. 18.4, the proposed method is presented. In Sect. 18.5, the experimental design and results are discussed. Finally, Sect. 18.6 gives the conclusions and future works.
18.2 Literature Review

The dynamic JSP was first studied by Holloway and Nelson [11], who developed a multi-pass heuristic scheduling procedure that generates schedules periodically for the dynamic JSP with random processing times. Subsequently, a simple heuristic dispatching rule, called shift from standard rules, was designed by Pierreval and Mebarki [21]. In the same year, Kouiss et al. [19] first proposed a scheduling strategy based on a multi-agent architecture for the dynamic JSP. Dispatching rules have been widely used in the dynamic JSP. Rajendran and Holthaus [23] proposed three dispatching rules and considered a total of 13 dispatching rules for the dynamic JSP. Subramaniam et al. [29] proposed three machine selection rules. Dominic et al. [8] provided several efficient dispatching rules for the dynamic JSP by combining different dispatching rules. Malve and Uzsoy [16] considered a family of iterative improvement heuristics to minimize the maximum lateness on parallel identical batch processing machines with dynamic job arrivals. Alpay and Yuzugullu [3] pointed out that their due date assignment model was very successful for improving the missed due date performance and that their dispatching rule was very successful for meeting the assigned due dates. To construct effective scheduling rules for dynamic single-machine scheduling problems, Nie et al. [17] proposed a gene expression programming-based scheduling rule constructor. Although many studies discussed rescheduling, there were no standard definitions or classifications of the strategies, policies, and methods in the rescheduling literature. Vieira et al. [30] and Ouelhadj and Petrovic [20] presented definitions appropriate for most applications of dynamic manufacturing systems. Sabuncuoglu and Kizilisik [27] proposed several reactive scheduling policies. They
found that the full scheduling scheme generally performed better than partial scheduling. Liu et al. [15] presented a framework to model the dynamic JSP as a static group-shop-type scheduling problem and applied a meta-heuristic proposed for the static JSP to a number of dynamic JSP benchmark problems. Sabuncuoglu and Goren [26] discussed the major issues involved in scheduling decisions and analyzed the basic approaches to tackle these problems in manufacturing environments. Multi-agent approaches have also been widely used in the dynamic JSP. Dewan and Joshi [7] proposed combining auctions with Lagrangian relaxation. Xiang and Lee [33] built an efficient agent-based dynamic scheduling system for real-world manufacturing systems with various products. Renna [25] created a pheromone-based approach carried out by a multi-agent architecture. Ouelhadj and Petrovic [20] pointed out that only a few research works had addressed the use of meta-heuristics in dynamic scheduling. A GA scheduling approach produced better scheduling performance than several common dispatching rules [5]. Zhou et al. [37] studied ant colony optimization for the dynamic JSP. Adibi et al. [2] and Zandieh and Adibi [34] presented a Variable Neighborhood Search (VNS) method for a multi-objective dynamic JSP. Goren and Sabuncuoglu [10] showed that a beam search heuristic was capable of generating robust schedules with little average deviation from the optimal objective function value and that it performed significantly better than a number of heuristics available in the literature. High schedule efficiency implies high machine utilization and can improve shop efficiency. However, a schedule that considers only efficiency may deviate significantly from the original schedule, which can seriously impact other planning activities based on the original schedule and may lead to poor performance.
In summary, many studies investigate the dynamic JSP. However, the multi-objective dynamic JSP, which considers schedule efficiency and schedule stability simultaneously, has not been studied well. Moreover, to our knowledge, real-time events are difficult to express and take into account in a mathematical model. Constructing a new schedule using only classic performance measures such as makespan or tardiness may induce instability or other very undesirable effects in shop floor control [24]. Therefore, it is very important to study the dynamic JSP by considering the schedule efficiency and the schedule stability simultaneously. The goal of this chapter is to improve the schedule efficiency and maintain the schedule stability through a method using a hybrid Genetic Algorithm (GA) and Tabu Search (TS) for the dynamic JSP.
18.3 The Multi-objective Dynamic Job Shop Scheduling

The dynamic JSP is subject to the following technological constraints and assumptions:

(1) Each machine can perform only one operation of any job at a time.
(2) An operation of a job can be performed by only one machine at a time.
(3) All machines are available at time 0.
(4) Once an operation has been processed on a machine, it must not be interrupted except by machine breakdown. If an operation is
interrupted, the remaining processing time equals the total processing time minus the completed processing time.
(5) An operation of a job cannot be performed until its preceding operations have been completed.
(6) There is no flexible routing for each job.
(7) Operation processing times and the number of operable machines are known in advance.

In practical manufacturing systems, there are many real-time events, such as random job arrivals and machine breakdowns. The effects of these dynamic factors are discussed in detail below:

(1) Jobs arrive at the system dynamically over time. In dynamic job shops, the job arrival process closely follows a Poisson distribution, so the time between job arrivals closely follows an exponential distribution [13, 24, 31]. When rescheduling is triggered, there are four types of job sets: the finished job set, the being-processed job set, the unprocessed job set, and the new job set. When new jobs arrive, they are pushed into the new job set.

(2) Machine breakdowns and repairs: the time between two machine failures and the repair time are assumed to follow exponential distributions. The Mean Time Between Failures (MTBF) and the Mean Time To Repair (MTTR) are the two parameters related to machine breakdown [34].

Note that production orders are generated after the schedule, and the shop floor then begins executing the processes. In practice, large deviations or changes to the sequence occur when real-time events disrupt the initial schedule [30]. In this research, a hybrid periodic and event-driven rescheduling policy is presented for continuous processing in a dynamic environment: schedules are generated at a regular scheduling frequency and also when a key real-time event appears. A key machine, whose breakdown is treated as the key real-time event, is selected randomly at the beginning of the simulation. A rescheduling is triggered when the key machine breaks down.
A right-shift rescheduling strategy is used when other machines break down. The complete rescheduling strategy, which regenerates a new schedule from scratch, is used at each rescheduling point. The complete rescheduling strategy may result in instability and lack of continuity in detailed plant schedules, leading to additional production costs attributable to what has been termed shop floor nervousness [20]. This chapter therefore attempts to keep a balance between schedule stability and schedule efficiency: a multi-objective performance measure is applied as the objective function to construct schedules, and both the schedule efficiency and the schedule stability are considered in evaluating solutions. The schedule efficiency factor is evaluated by the makespan (C_max); a smaller C_max implies higher machine utilization [12]. Traditionally, the makespan is defined as the total time required to process a group of jobs. To fit the dynamic scheduling environment, this definition is modified so that the group of jobs includes all jobs scheduled at each scheduling point. The schedule stability factor is measured by the starting time deviations of all jobs between the new schedule and the original schedule [20]. The objective function can be described as
f = {min makespan, min starting time deviations}   (18.1)
Various weights for these objectives have been studied, and a weight of 5 for the makespan and 1 for the starting time deviations was found to outperform other weightings [24]. The objective function can be formulated as:

min f = 5 × C_max + Σ_(all i) |ST_i − ST'_i|   (18.2)

where ST_i and ST'_i denote the starting time of job i in the original schedule and the new schedule, respectively.
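Equation (18.2) is straightforward to compute once both schedules are known. The sketch below assumes starting times are stored as dictionaries keyed by job id (this representation, the function name, and the weight defaults are illustrative, not from the text); jobs that are new arrivals have no counterpart in the original schedule and so contribute no deviation term.

```python
def rescheduling_objective(makespan, original_starts, new_starts,
                           w_efficiency=5, w_stability=1):
    """Weighted objective of Eq. (18.2): schedule efficiency (makespan)
    plus schedule stability (sum of starting time deviations).

    `original_starts` and `new_starts` map job id -> starting time.
    Jobs absent from the original schedule (new arrivals) are skipped.
    """
    deviation = sum(
        abs(new_starts[j] - original_starts[j])
        for j in new_starts
        if j in original_starts
    )
    return w_efficiency * makespan + w_stability * deviation
```

For example, if the makespan is 20 and only job J1 shifts by 2 time units, the objective is 5 × 20 + 2 = 102.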
18.4 The Proposed Method for Dynamic JSP

In this research, a hybrid GA and TS method integrated with a simulator is presented to solve the multi-objective dynamic JSP.
18.4.1 The Flow Chart of the Proposed Method

A successful implementation of a scheduling system in a dynamic environment usually requires updating the problem condition and generating and implementing schedules for the production process at each rescheduling point. The general process of the proposed approach is summarized in Fig. 18.1. At the beginning, all machines are available at time 0 and a predictive schedule is predetermined at time 0. The shop floor then executes the processes according to this schedule until the next rescheduling point. If the rescheduling point is smaller than the planning horizon, rescheduling is triggered: the problem condition, which contains the machine available times, the rescheduling job set, and so on, is updated; the multi-objective genetic algorithm is executed to generate the optimal solution; and the shop floor executes the processes according to the optimal schedule. This process is repeated until the planning horizon is met. The planning horizon in this work is defined such that no new job arrives or the number of new jobs has reached a preset value. At each rescheduling point, a machine is in one of two states: busy (an operation is being processed on it) or idle. For a busy machine, the machine available time is assumed to be the completion time of the operation on the machine; for an idle machine, it is assumed to be the rescheduling point. The proposed hybrid GA and TS method is executed when a rescheduling is triggered. Its steps are as follows:
Fig. 18.1 Flowchart of the hybrid GA and TS method
Step 1: Set the parameters, including the population size (popsize), the maximum iteration of the hybrid GA and TS (iter_GATS), the maximum iteration of TS (iter_TS), the crossover probability (pc), the mutation probability (pm), the length of the tabu list (lt), etc.
Step 2: Randomly generate a population of popsize individuals. Decode each individual to obtain its fitness value, and compare them to obtain the initial solution x.
Step 3: Set n = 1.
Step 4: Generate the new population for the next generation. Genetic evolution with three operators (selection, crossover, and mutation) is applied to create the offspring population. Let chi, i = 1, 2, …, popsize, be the new population. The roulette wheel strategy is used to perform the selection.
Step 5: Let each individual chk, k = 1, 2, …, popsize, of the new population be the initial solution of TS, respectively. Repeat Steps 6 to 8 to improve the quality of each individual until all individuals have been processed. Then go to Step 9.
Step 6: Set j = 1.
18.4 The Proposed Method for Dynamic JSP
383
Step 7: Find the neighborhoods of the current solution of TS. Select the best neighbor that is non-tabu or satisfies the aspiration criterion.
Step 8: Update the tabu list and check the termination criterion. If the termination criterion (j = iter_TS) is met or all critical blocks contain no more than one operation, stop TS and output the best solution found; the individual chk is replaced by this solution. Otherwise, set j = j + 1 and go back to Step 7.
Step 9: Check the termination criterion. If the termination criterion (n = iter_GATS) is met, stop the hybrid GA and TS method and output the optimal solution. Otherwise, set n = n + 1 and go back to Step 4.

In the following subsections, the simulator and the hybrid GA and TS will be discussed.
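The control flow of these steps can be sketched as a generic skeleton. Everything below is illustrative: the operator arguments are placeholders for the encoding-specific procedures of Sect. 18.4.3, the selection is simplified to random pairing rather than the roulette wheel of Step 4, and the TS inner loop (Steps 6 to 8) is abstracted into a single `ts_improve` callback.

```python
import random

def hybrid_ga_ts(init_individual, fitness, crossover, mutate, ts_improve,
                 popsize=20, iter_gats=10, pc=0.8, pm=0.1):
    """Skeleton of Steps 1-9: GA evolution, with every offspring refined
    by a tabu-search procedure before entering the next generation.
    Fitness is minimized."""
    pop = [init_individual() for _ in range(popsize)]
    best = min(pop, key=fitness)                     # Step 2
    for _ in range(iter_gats):                       # Steps 3 and 9
        # Step 4: create offspring via crossover and mutation
        # (parent selection simplified here to uniform random pairs).
        offspring = []
        while len(offspring) < popsize:
            p1, p2 = random.sample(pop, 2)
            child = crossover(p1, p2) if random.random() < pc else p1[:]
            if random.random() < pm:
                child = mutate(child)
            offspring.append(child)
        # Steps 5-8: refine each offspring with tabu search.
        pop = [ts_improve(ind) for ind in offspring]
        best = min(pop + [best], key=fitness)
    return best
```

Because `best` is carried across generations, the returned fitness never degrades over iterations.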
18.4.2 Simulator

Since job arrivals and machine breakdowns occur randomly, a simulator is required to reproduce these situations. The simulator, which covers job arrivals as well as machine breakdowns and repairs, is used to generate the disturbances. The following notations are used:

1. The Jobs Event List (JEL) holds the interval between a pair of adjacent job arrivals. When a job arrives, the JEL is assigned again. AT represents the job arrival time; initialize AT with zero.
2. The Machines Event List (MEL) holds the interval between failures for each machine. For each machine, when a breakdown event occurs, the MEL is assigned again. MT represents the time when a machine breaks down; initialize MT with zero.
3. The Machines Repair Event List (MREL) holds the interval to repair for each machine. Assume that every machine can be repaired. For each machine, when a repair event occurs, the MREL is assigned again. MR represents the time when a machine is repaired; initialize MR with zero.
4. The RP set represents the rescheduling point set. Initialize RP with the periodic rescheduling points.

The job arrivals simulator has the following steps:

Step 1 Initialize the JEL with exponentially distributed random numbers, where α corresponds to the mean time between a pair of adjacent job arrivals:

JEL[J_i] = exp_rand(α), for all new jobs J_i, i = 1, 2, …, n   (18.3)
Step 2 Calculate the arrival time of J_i:

AT[J_i] = JEL[J_i] + AT[J_(i−1)], if i > 1; AT[J_i] = JEL[J_i], if i = 1   (18.4)
Step 3 Repeat Step 1 and Step 2 until the rescheduling point is met.

In the case of a machine breakdown, a machine repair must follow. Hence, the machine breakdowns and repairs simulator has the following steps:

Step 1 Generate the MEL with exponentially distributed random numbers, where the MTBF corresponds to the mean value, and calculate the failure time of each machine:

MEL[M_k] = exp_rand(MTBF), for all machines M_k, k = 1, 2, …, m   (18.5)

MT[M_k] = MEL[M_k] + MR[M_k], for all machines M_k, k = 1, 2, …, m   (18.6)
Step 2 If M_k is the key machine, a rescheduling is triggered and MT[M_k] is added to the RP set. Generate an exponentially distributed random number, where the MTTR corresponds to the mean value, and calculate the repair time of each machine:

MREL[M_k] = exp_rand(MTTR), for all machines M_k, k = 1, 2, …, m   (18.7)

MR[M_k] = MREL[M_k] + MT[M_k], for all machines M_k, k = 1, 2, …, m   (18.8)

Step 3 Repeat Step 1 and Step 2 until the rescheduling point is met.

Using the job arrivals simulator and the machine breakdowns and repairs simulator together, all disturbances between two adjacent rescheduling points can be obtained.
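Equations (18.3)-(18.8) can be sketched in Python using the standard exponential generator. The function and parameter names mirror the text's notation but are otherwise illustrative; for brevity, the breakdown simulator below handles a single machine, and the key-machine/RP bookkeeping of Step 2 is omitted.

```python
import random

def simulate_job_arrivals(n_jobs, alpha, seed=None):
    """Eqs. (18.3)-(18.4): inter-arrival times JEL[J_i] ~ Exp(mean alpha);
    the arrival time AT[J_i] is the running sum of the intervals."""
    rng = random.Random(seed)
    at, arrivals = 0.0, []
    for _ in range(n_jobs):
        at += rng.expovariate(1.0 / alpha)   # exp_rand(alpha), Eq. (18.3)
        arrivals.append(at)                  # AT[J_i], Eq. (18.4)
    return arrivals

def simulate_breakdowns(horizon, mtbf, mttr, seed=None):
    """Eqs. (18.5)-(18.8) for one machine: breakdown times MT = MEL + MR
    alternate with repair times MR = MREL + MT until the horizon."""
    rng = random.Random(seed)
    mr, events = 0.0, []
    while True:
        mt = mr + rng.expovariate(1.0 / mtbf)   # Eqs. (18.5)-(18.6)
        if mt > horizon:
            break
        mr = mt + rng.expovariate(1.0 / mttr)   # Eqs. (18.7)-(18.8)
        events.append((mt, mr))                 # (breakdown, repaired)
    return events
```

Note that `random.expovariate` takes the rate λ, so a mean of α corresponds to `expovariate(1/alpha)`.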
18.4.3 The Hybrid GA and TS for Dynamic JSP

The hybrid GA and TS is performed to generate an optimal schedule for continuous processing in a dynamic environment at each rescheduling point. It comprises the chromosome representation and decoding, the crossover operator, the mutation operator, the neighborhood structure, the tabu list and move selection, and the termination criterion.
18.4.3.1 The Hybrid GA
1. Chromosome representation and decoding An inappropriate encoding may lead to infeasible solutions in generations after performing crossover and mutation. Therefore, the design of a chromosome tries to
18.4 The Proposed Method for Dynamic JSP
385
avoid generating infeasible solutions after the crossover and mutation operations. The operation-based representation ensures that any permutation of the chromosome can be decoded into a feasible schedule [32]. For example, consider the chromosome [3 1 2 2 1 3 1 2 1], where {1 2 3} denote the jobs {J1 J2 J3}, which contain 4, 3, and 2 operations, respectively. Reading from left to right, the first gene 3 means that the first operation of job 3 is processed first on its corresponding machine; the second gene 1 denotes the first operation of job 1, and so on. The chromosome [3 1 2 2 1 3 1 2 1] is therefore decoded as [O31 O11 O21 O22 O12 O32 O13 O23 O14], where Oij denotes the jth operation of job i. In this research, the operation-based representation is used to encode a schedule. Because jobs arrive randomly, the rescheduling job set at each rescheduling point consists of an unprocessed job set and a new job set. A matrix ϑ represents the rescheduling job set, where entry Oij means that the jth operation of job i, with j ranging from the first unprocessed operation of job i to its last operation, is assigned to a resource. For example, when a rescheduling is triggered with unprocessed job set {O12 O13 O23} and new job set {O31 O32 O33}, the rescheduling job set matrix is

$$\vartheta = \begin{pmatrix} O_{12} & O_{13} & \\ O_{23} & & \\ O_{31} & O_{32} & O_{33} \end{pmatrix} \tag{18.9}$$
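As an illustration, the operation-based decoding described above can be sketched as follows. This is a minimal Python sketch (the chapter's implementation is in C++); for brevity it builds a semi-active schedule, whereas the chapter decodes into active schedules, and the job data used in it are hypothetical.

```python
from collections import defaultdict

def decode(chromosome, ops):
    """Decode an operation-based chromosome into a (semi-active) schedule.

    ops[job] lists the (machine, processing_time) pairs of that job in
    technological order; the chromosome lists job numbers, and the k-th
    occurrence of a job number denotes the job's k-th operation.
    Returns (schedule, makespan), where schedule maps (job, op_index)
    to the (start, end) interval of that operation.
    """
    next_op = defaultdict(int)     # next unscheduled operation of each job
    job_ready = defaultdict(int)   # completion time of each job's last op
    mach_ready = defaultdict(int)  # release time of each machine
    schedule = {}
    for job in chromosome:
        k = next_op[job]
        machine, p = ops[job][k]
        start = max(job_ready[job], mach_ready[machine])
        schedule[(job, k + 1)] = (start, start + p)
        job_ready[job] = mach_ready[machine] = start + p
        next_op[job] = k + 1
    return schedule, max(end for _, end in schedule.values())
```

Every permutation of job numbers decodes to a feasible schedule because precedence within each job is enforced by construction.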
Therefore, each job i appears in the chromosome exactly ni times, where ni is the number of its unprocessed operations. For instance, the chromosome [2 1 3 3 1 3] is decoded as [O23 O12 O31 O32 O13 O33]. Under this representation, every possible chromosome represents a feasible operation sequence. It has been verified, and illustrated with a Venn diagram in Pinedo [22], that the set of active schedules contains an optimal schedule. Therefore, only active schedules are considered in the decoding approach, which reduces the search space.

2. Crossover operator

The crossover operator determines which parents produce children and tends to improve the quality of the population. Zhang et al. [36] proposed a crossover operator named Precedence Operation Crossover (POX), which preserves the characteristics of the parents and guarantees the feasibility of the children. In this research, POX [36] is adopted, and its procedure is as follows:

Step 1: Randomly select a nonempty proper subset J1 of the job set {1, 2, …, n}.
Step 2: Copy the genes of the jobs in J1 from parent 1 to child 1 and from parent 2 to child 2, preserving their loci.
Step 3: Copy the genes of the jobs not in J1 from parent 2 to child 1 and from parent 1 to child 2, preserving their order.
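The three POX steps above can be sketched as follows. This is an illustrative Python version (the chapter's implementation is in C++), and it assumes at least two distinct jobs.

```python
import random

def pox(parent1, parent2, jobs):
    """Precedence Operation Crossover (POX) on operation-based chromosomes.

    A random nonempty proper subset J1 of the jobs keeps its loci from one
    parent; the remaining genes are filled in the order they appear in the
    other parent, so each child stays a valid operation permutation.
    Requires at least two distinct jobs.
    """
    jobs = sorted(jobs)
    j1 = set(random.sample(jobs, random.randint(1, len(jobs) - 1)))

    def make_child(keep_from, fill_from):
        # genes of jobs in J1 keep their loci; the gaps are filled with the
        # other parent's remaining genes in order of appearance
        filler = iter(g for g in fill_from if g not in j1)
        return [g if g in j1 else next(filler) for g in keep_from]

    return make_child(parent1, parent2), make_child(parent2, parent1)
```

Because both parents contain each job the same number of times, the filler never runs out and both children remain feasible.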
3. Mutation operator

Mutation perturbs chromosomes in order to maintain the diversity of the population. In this research, two types of mutation are applied. The first reverses the substring between two randomly chosen positions. The second randomly selects two genes and inserts one immediately before the other.
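The two mutation types can be sketched as follows (illustrative Python; both preserve the multiset of genes, so feasibility under the operation-based encoding is maintained):

```python
import random

def reverse_mutation(chrom):
    """First type: reverse the substring between two random positions."""
    i, j = sorted(random.sample(range(len(chrom)), 2))
    return chrom[:i] + chrom[i:j + 1][::-1] + chrom[j + 1:]

def insert_mutation(chrom):
    """Second type: remove a random gene and reinsert it at another
    random position."""
    chrom = list(chrom)
    i, j = random.sample(range(len(chrom)), 2)
    chrom.insert(j, chrom.pop(i))
    return chrom
```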
18.4.3.2 Tabu Search
• Neighborhood structure

A neighborhood structure is a mechanism that obtains a new set of neighboring solutions by applying a small perturbation to a given solution, and it directly affects the efficiency of the TS algorithm [35]. In this research, the neighborhood structure N5 proposed by Nowicki and Smutnicki [18] is adopted because it is effective for the JSP. Nowicki and Smutnicki [18] considered only a single (arbitrarily selected) critical path; here, in order to extend the search space, all critical paths are considered. N5 is a well-known neighborhood based on critical blocks. A critical operation is any operation on a critical path, i.e., the longest path from start to end in the directed graph, whose length equals the makespan. A critical block is a maximal sequence of adjacent critical operations processed on the same machine. Consider a critical path u with blocks B1, B2, …, Br defined on u. In every interior block B2, …, Br−1 that contains at least two operations, the first two and the last two operations are swapped. In the first block B1 only the last two operations are swapped and, symmetrically, in the last block Br only the first two operations are swapped. Figure 18.2 illustrates the neighborhood moves.
Fig. 18.2 The neighborhood of moves
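The move-generation rule of N5 on one critical path can be sketched as follows (illustrative Python; the operation identifiers are hypothetical):

```python
def n5_moves(blocks):
    """Candidate swaps of the N5 neighborhood for one critical path.

    blocks is the ordered list of critical blocks, each a list of the
    operations processed consecutively on one machine.  Interior blocks
    contribute their first and last adjacent pairs; the first block only
    its last pair; the last block only its first pair.
    """
    moves = set()
    last = len(blocks) - 1
    for idx, block in enumerate(blocks):
        if len(block) < 2:
            continue  # a block with fewer than two operations has no move
        if idx > 0:          # not the first block: swap its first two ops
            moves.add((block[0], block[1]))
        if idx < last:       # not the last block: swap its last two ops
            moves.add((block[-2], block[-1]))
    return moves
```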
• Tabu list and move selection

The purpose of the tabu list is to prevent the search from returning to solutions visited in previous steps. The tabu list has a fixed length, and the elements stored in it are the attributes of moves. When the list is full, the oldest element is replaced by the new one. More precisely, if the swap (x, y) yields the best neighbor of a chromosome, the swap (x, y) is added to the tabu list. The neighborhood of the current solution is assumed to be nonempty; otherwise, i.e., if every critical block contains fewer than two operations, the TS method terminates. The move selection picks the best neighbor that is either non-tabu or satisfies the aspiration criterion, which accepts a tabu move whose fitness value is better than that of the best solution found so far. If all neighbors are tabu and none satisfies the aspiration criterion, a neighbor is selected at random.

• Termination criterion

The termination criterion determines when the proposed method stops. In this research, TS terminates when the number of iterations reaches the maximum (iter_TS) or every critical block contains fewer than two operations. The hybrid GA and TS method terminates when the number of iterations reaches the maximum (iter_GATS). The simulation terminates when the planning horizon is reached, no further jobs arrive, and the simulator generates no more disturbances.
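The tabu list, aspiration criterion, and random fallback described above can be sketched as follows (illustrative Python; lower fitness is assumed better):

```python
import random
from collections import deque

def select_move(neighbors, tabu, best_fitness):
    """Pick the best admissible neighbor.

    neighbors is a list of (move_attribute, fitness) pairs.  A move is
    admissible if it is non-tabu, or tabu but better than the best
    solution found so far (aspiration criterion).  If nothing is
    admissible, fall back to a random neighbor.
    """
    admissible = [(m, f) for m, f in neighbors
                  if m not in tabu or f < best_fitness]
    if not admissible:
        return random.choice(neighbors)  # all tabu, none aspires
    return min(admissible, key=lambda mf: mf[1])

def update_tabu(tabu, move, length=12):
    """Fixed-length tabu list: the oldest attribute leaves when full."""
    tabu.append(move)
    while len(tabu) > length:
        tabu.popleft()
```

A `deque(maxlen=length)` achieves the same fixed-size behavior directly; the explicit version above mirrors the description in the text.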
18.5 Experimental Design and Results

18.5.1 Experimental Design

The proposed method was implemented in C++ on a computer with an Intel Core 2 Duo 2.0 GHz processor and 2.00 GB of memory. Vinod and Sridharan [31] pointed out that most studies have used between four and ten machines to test the performance of a scheduling system; the example considered in this work is a job shop with six machines. The simulation starts from the 6 × 6 static JSP instance FT06 [9], whose makespan is 55. To illustrate the potential of the proposed method for the multi-objective dynamic JSP, it is compared with common dispatching rules [8, 32] and meta-heuristic algorithms that have been widely used in the literature. The dispatching rules are: (1) Shortest Processing Time (SPT); (2) Longest Processing Time (LPT); (3) Most Work Remaining (MWKR); (4) Least Work Remaining (LWKR); (5) First In First Out (FIFO); (6) Last In First Out (LIFO). The meta-heuristic algorithms are: (1) the GA without tabu search, performed in this chapter; (2) the Variable Neighborhood Search (VNS), which
comes from the literature [34]. The VNS parameters are: the number of outer-loop iterations N = 20, the number of inner-loop iterations qmax = 150, and the predetermined constant dr = 8. Following the steps of the VNS approach, it is implemented in the same experimental environment and on the same computer as the proposed method.

All machines have the same MTTR and MTBF. Ag = MTTR/(MTBF + MTTR) denotes the breakdown level of the shop floor, i.e., the percentage of time that machines are under failure. For example, Ag = 0.1 with MTTR = 5 time units gives MTBF = 45: on average a machine is available for 45 time units and then breaks down with a mean time to repair of 5 time units. The parameter λ (the job arrival rate) is the mean of the Poisson random variable; in this research it denotes the average number of new jobs arriving per unit time. For example, λ = 0.25 means that on average 0.25 new jobs arrive per unit time, i.e., a new job arrives at the shop floor every 4 time units on average. To evaluate the performance of the proposed method, 4 different job arrival rates λ and 6 different numbers of job arrivals are used, for a total of 4 × 6 = 24 experiments. Since the job arrival rate has been shown to influence scheduling performance [28], two levels of job arrival rate, namely sparse and dense arrivals, are considered, together with three scales (small, medium, and large) of job arrivals. The simulation for each setting continues until the number of new jobs that have arrived at the shop floor reaches its target, ranging from 20 to 200. Makespan and the starting time deviations over the planning horizon are selected as the performance measures. To conduct the simulation experiments, we created identical experimental conditions (i.e., the number of job arrivals, the number of machines, the schedule frequency, and the machine breakdown level).
The schedule frequency is equal to 4 for all experiments. Table 18.1 illustrates the experimental conditions.
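The shop-floor parameters above can be generated as in the following sketch (illustrative Python): Ag fixes MTBF once MTTR is chosen, and Poisson job arrivals correspond to exponential inter-arrival times with mean 1/λ.

```python
import random

def mtbf_from_breakdown_level(ag, mttr):
    """Ag = MTTR / (MTBF + MTTR)  =>  MTBF = MTTR * (1 - Ag) / Ag."""
    return mttr * (1.0 - ag) / ag

def job_arrival_times(lam, n_jobs, seed=0):
    """Arrival instants of n_jobs jobs from a Poisson process of rate lam
    (i.e., exponential inter-arrival times with mean 1/lam)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        times.append(t)
    return times
```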
18.5.2 Results and Discussions

Several parameters must be set in the proposed method: population size (popsize = 100), maximum number of iterations of the hybrid GA and TS method (iter_GATS = 20), maximum number of TS iterations (iter_TS = 20), crossover probability (pc = 0.9), mutation probability (pm = 0.1), and tabu list length (lt = 12). The proposed method is run ten times for each considered problem, and the average objective values found by the proposed method and the other methods are reported in the following tables and figures.
Table 18.1 The experimental conditions

| Dimension | Characteristic | Specification |
|---|---|---|
| Floor shop | Size | 6 machines |
| | Machine breakdown level | 0.025 |
| | MTTR | 5 |
| | MTBF, MTTR | Exponential distribution |
| | Key machine | 1 |
| Jobs | Random arrival | Poisson distribution |
| | Job arrivals rate | [0.125, 0.25, 0.5, 1] |
| | The number of new jobs | [20, 40, 60, 80, 100, 200] |
| | Job release policy | Immediate |
| | Processing time of an operation | U[1, 10] |
| | Schedule frequency | 4 |
| Performance measures | | Makespan; the starting time deviations |

18.5.2.1 The Experimental Results
(1) Sparse job arrivals

Tables 18.2 and 18.3 summarize the results of the experiments with sparse job arrivals, where the job arrival rate λ equals 0.125 and 0.25, respectively. Table 18.2 compares the performance measure of the proposed method with the other methods; because high schedule efficiency enhances machine utilization, the corresponding makespans are given in Table 18.3. Table 18.2 reveals that the proposed method and the VNS outperform the other methods. Compared with the VNS, the proposed method achieves the optimal solutions in most cases. Table 18.3 also shows that the makespans of the proposed method are the shortest or nearly the shortest among all methods. These results illustrate that the proposed method performs best among the compared methods and improves the performance of the schedule system by maintaining schedule stability and schedule efficiency simultaneously when job arrivals are sparse.

(2) Dense job arrivals

Tables 18.4 and 18.5 summarize the results of the experiments with dense job arrivals, where the job arrival rate λ equals 0.5 and 1, respectively. Table 18.4 compares
Table 18.2 The performance measure of all methods with sparse job arrivals

| λ | Jn | GATS | GA | VNS | SPT | LPT | MWKR | LWKR | FIFO | LIFO |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.125 | 20 | 1243 | 1350 | 1244 | 3953 | 3871 | 3475 | 3451 | 2520 | 3754 |
| 0.125 | 40 | 2529 | 2561 | 2556 | 5421 | 5415 | 5632 | 5255 | 3850 | 7072 |
| 0.125 | 60 | 3875 | 3911 | 3876 | 9180 | 8505 | 8046 | 8237 | 5520 | 11080 |
| 0.125 | 80 | 4593 | 4639 | 4594 | 10544 | 11778 | 9967 | 9571 | 7190 | 18748 |
| 0.125 | 100 | 5464 | 5555 | 5454 | 17425 | 14191 | 14524 | 13575 | 9705 | 24833 |
| 0.125 | 200 | 11081 | 11111 | 11095 | 31617 | 28237 | 28091 | 27016 | 18505 | 56166 |
| 0.25 | 20 | 1361 | 1609 | 1284 | 5176 | 4906 | 4629 | 4394 | 2520 | 4550 |
| 0.25 | 40 | 2026 | 2276 | 2089 | 11598 | 8789 | 8217 | 8468 | 4460 | 8911 |
| 0.25 | 60 | 2858 | 3132 | 2874 | 13658 | 11591 | 10736 | 12193 | 6335 | 15086 |
| 0.25 | 80 | 3454 | 3649 | 3461 | 19881 | 20127 | 15334 | 15520 | 8200 | 19645 |
| 0.25 | 100 | 4813 | 5169 | 4889 | 27148 | 24468 | 20256 | 20328 | 9960 | 34150 |
| 0.25 | 200 | 8587 | 9148 | 8597 | 74287 | 65433 | 46437 | 42361 | 19285 | 60421 |

S: The scale of new job arrivals is small; M: The scale of new job arrivals is medium; L: The scale of new job arrivals is large.
Table 18.3 The makespan of all methods with sparse job arrivals

| λ | Jn | GATS | GA | VNS | SPT | LPT | MWKR | LWKR | FIFO | LIFO |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.125 | 20 | 173 | 185 | 176 | 203 | 187 | 195 | 191 | 504 | 543 |
| 0.125 | 40 | 399 | 401 | 398 | 404 | 419 | 413 | 510 | 770 | 1189 |
| 0.125 | 60 | 611 | 615 | 611 | 629 | 633 | 623 | 627 | 1104 | 1868 |
| 0.125 | 80 | 728 | 729 | 728 | 747 | 738 | 734 | 738 | 1438 | 3025 |
| 0.125 | 100 | 825 | 838 | 824 | 870 | 834 | 857 | 847 | 1941 | 4023 |
| 0.125 | 200 | 1714 | 1715 | 1714 | 1741 | 1714 | 1714 | 1737 | 2701 | 9174 |
| 0.25 | 20 | 172 | 189 | 164 | 190 | 196 | 184 | 184 | 504 | 564 |
| 0.25 | 40 | 274 | 287 | 273 | 321 | 308 | 311 | 309 | 892 | 1274 |
| 0.25 | 60 | 384 | 407 | 389 | 448 | 398 | 416 | 422 | 1267 | 2235 |
| 0.25 | 80 | 482 | 491 | 482 | 556 | 523 | 514 | 492 | 1640 | 2861 |
| 0.25 | 100 | 667 | 670 | 668 | 731 | 724 | 700 | 710 | 1992 | 5018 |
| 0.25 | 200 | 1195 | 1240 | 1185 | 1333 | 1312 | 1234 | 1203 | 3857 | 3600 |
Table 18.4 The performance measure of all methods with dense job arrivals

| λ | Jn | GATS | GA | VNS | SPT | LPT | MWKR | LWKR | FIFO | LIFO |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.5 | 20 | 1711 | 2065 | 1601 | 5822 | 5298 | 5802 | 4934 | 2725 | 5697 |
| 0.5 | 40 | 2891 | 3901 | 2814 | 12623 | 12275 | 12149 | 11975 | 5010 | 13062 |
| 0.5 | 60 | 4297 | 5894 | 3862 | 21366 | 20415 | 18383 | 16670 | 6710 | 15314 |
| 0.5 | 80 | 6641 | 8807 | 5679 | 35454 | 36925 | 35138 | 31210 | 10040 | 20898 |
| 0.5 | 100 | 6914 | 9444 | 6656 | 47359 | 45175 | 41005 | 37687 | 12755 | 24384 |
| 0.5 | 200 | 14111 | 30548 | 24529 | 173676 | 182092 | 171799 | 144650 | 25410 | 75728 |
| 1 | 20 | 2297 | 3164 | 2223 | 5347 | 5768 | 6112 | 5293 | 2715 | 5693 |
| 1 | 40 | 4401 | 5911 | 4457 | 13573 | 15556 | 13773 | 12101 | 5555 | 15661 |
| 1 | 60 | 5958 | 7969 | 5790 | 24874 | 22837 | 25052 | 20248 | 7215 | 15818 |
| 1 | 80 | 8870 | 11906 | 8933 | 35265 | 35137 | 39076 | 29244 | 10830 | 33604 |
| 1 | 100 | 10589 | 15594 | 14365 | 51406 | 55719 | 53515 | 41770 | 13040 | 44236 |
| 1 | 200 | 22105 | 43970 | 33933 | 167651 | 186295 | 193685 | 144998 | 26175 | 86685 |

S: The scale of new job arrivals is small; M: The scale of new job arrivals is medium; L: The scale of new job arrivals is large.
Table 18.5 The makespan of all methods with dense job arrivals

| λ | Jn | GATS | GA | VNS | SPT | LPT | MWKR | LWKR | FIFO | LIFO |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.5 | 20 | 165 | 177 | 165 | 189 | 138 | 174 | 126 | 545 | 571 |
| 0.5 | 40 | 271 | 321 | 297 | 267 | 246 | 290 | 326 | 1002 | 1262 |
| 0.5 | 60 | 330 | 409 | 369 | 405 | 340 | 355 | 297 | 1342 | 1382 |
| 0.5 | 80 | 478 | 536 | 490 | 421 | 352 | 483 | 395 | 2008 | 2030 |
| 0.5 | 100 | 497 | 666 | 600 | 653 | 460 | 604 | 624 | 2551 | 2606 |
| 0.5 | 200 | 1002 | 1285 | 1018 | 1271 | 890 | 1141 | 771 | 5082 | 7455 |
| 1 | 20 | 151 | 193 | 168 | 142 | 146 | 176 | 173 | 543 | 396 |
| 1 | 40 | 284 | 321 | 281 | 281 | 302 | 292 | 212 | 1111 | 1104 |
| 1 | 60 | 366 | 407 | 359 | 401 | 292 | 382 | 259 | 1443 | 1292 |
| 1 | 80 | 475 | 554 | 508 | 503 | 452 | 502 | 297 | 2166 | 2467 |
| 1 | 100 | 531 | 688 | 537 | 623 | 499 | 574 | 299 | 2608 | 3405 |
| 1 | 200 | 995 | 1292 | 930 | 645 | 610 | 1137 | 743 | 5235 | 6783 |
the performance measure of the proposed method with the other methods; the corresponding makespans are given in Table 18.5. Tables 18.4 and 18.5 reveal that the proposed method and the VNS outperform the other methods. As the number of job arrivals increases, the proposed method performs better than the VNS on the dynamic job shop problems. These results illustrate that the proposed method effectively balances schedule stability and schedule efficiency, especially at large scales of job arrivals, and therefore performs best among the compared methods. In contrast with the sparse case, when job arrivals are dense the proposed method improves the performance of the schedule system by maintaining schedule stability while sacrificing schedule efficiency only to a small extent.
18.5.2.2 Analysis of the CPU Time
It is worth mentioning that the required CPU time plays a critical role in a prompt and effective dynamic schedule. Tables 18.6 and 18.7 list the CPU time required by all methods, reported as the mean value per schedule frequency. The tables show that the CPU time required by the dispatching rules is much shorter than that of the meta-heuristic algorithms; however, the dispatching rules degrade the performance measure considerably, especially for large and dense job arrivals. The proposed method and the VNS require approximately the same CPU time. Compared with the dispatching rules, the meta-heuristic algorithms take longer, but they significantly enhance the schedule efficiency and schedule stability of the schedule system. Moreover, the CPU time required by the proposed method is acceptable for a real manufacturing system.
18.5.2.3 Analysis of the Performance under Different Job Arrival Rates λ
Figure 18.3 compares the proposed method with the other methods under different job arrival rates λ. The x-axis represents the number of job arrivals and the y-axis the performance measure; the job arrival rates of Fig. 18.3a, b, c, and d are 0.125, 0.25, 0.5, and 1, respectively. As Fig. 18.3 indicates, the performance measure grows with the number of job arrivals at every arrival rate. The performance measures of SPT, LPT, MWKR, LWKR, and LIFO increase sharply, whereas FIFO increases most steadily among the dispatching rules. According to Tables 18.2–18.5, the schedule stability of FIFO is always equal to zero; FIFO thus obtains better schedule stability but worse schedule efficiency. Figure 18.3a–d shows that the proposed method performs better for large numbers of job arrivals under all arrival rates. Among these methods, the proposed method delivers good
Table 18.6 The CPU time of all methods with sparse job arrivals

| λ | Jn | GATS | GA | VNS | FIFO | LIFO | SPT | LPT | MWKR | LWKR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.125 | 20 | 0.97 | 0.53 | 0.54 | 0.002 | 0.006 | 0.001 | 0.008 | 0.001 | 0.004 |
| 0.125 | 40 | 0.58 | 0.52 | 0.21 | 0.006 | 0.006 | 0 | 0.003 | 0 | 0.003 |
| 0.125 | 60 | 0.61 | 0.55 | 0.21 | 0.002 | 0.009 | 0.045 | 0.062 | 0.033 | 0.048 |
| 0.125 | 80 | 0.71 | 0.58 | 0.25 | 0.008 | 0.009 | 0.029 | 0.042 | 0.020 | 0.033 |
| 0.125 | 100 | 0.76 | 0.62 | 0.51 | 0.002 | 0.005 | 0.002 | 0.006 | 0.001 | 0.005 |
| 0.125 | 200 | 0.61 | 0.67 | 0.55 | 0.002 | 0.006 | 0 | 0.006 | 0 | 0.006 |
| 0.25 | 20 | 1.89 | 0.65 | 1.39 | 0.004 | 0.005 | 0.010 | 0.014 | 0.006 | 0.013 |
| 0.25 | 40 | 1.46 | 0.74 | 1.26 | 0.007 | 0.007 | 0.007 | 0.010 | 0.006 | 0.008 |
| 0.25 | 60 | 1.91 | 0.86 | 0.98 | 0.001 | 0.007 | 0.002 | 0.007 | 0.001 | 0.011 |
| 0.25 | 80 | 1.39 | 0.93 | 0.74 | 0.002 | 0.003 | 0.001 | 0.003 | 0 | 0.003 |
| 0.25 | 100 | 1.61 | 0.90 | 1.55 | 0.008 | 0.004 | 0.014 | 0.020 | 0.010 | 0.017 |
| 0.25 | 200 | 1.41 | 1.18 | 1.68 | 0.008 | 0.010 | 0.007 | 0.011 | 0.004 | 0.009 |
Table 18.7 The CPU time of all methods with dense job arrivals

| λ | Jn | GATS | GA | VNS | FIFO | LIFO | SPT | LPT | MWKR | LWKR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.5 | 20 | 4.46 | 0.85 | 4.68 | 0.004 | 0.009 | 0.002 | 0.005 | 0.001 | 0.006 |
| 0.5 | 40 | 9.10 | 1.03 | 8.59 | 0.007 | 0.007 | 0.001 | 0.007 | 0 | 0.010 |
| 0.5 | 60 | 20.0 | 1.36 | 18.6 | 0.001 | 0.005 | 0.009 | 0.009 | 0.003 | 0.009 |
| 0.5 | 80 | 31.9 | 1.86 | 27.5 | 0.002 | 0.004 | 0.009 | 0.008 | 0.004 | 0.002 |
| 0.5 | 100 | 39.1 | 2.02 | 16.4 | 0.008 | 0.006 | 0.002 | 0.008 | 0.001 | 0.010 |
| 0.5 | 200 | 155 | 3.83 | 163 | 0.008 | 0.003 | 0 | 0.002 | 0 | 0.006 |
| 1 | 20 | 7.3 | 0.97 | 8.17 | 0.004 | 0 | 0.007 | 0.005 | 0 | 0.009 |
| 1 | 40 | 23.1 | 1.35 | 24.3 | 0.002 | 0.003 | 0.005 | 0.003 | 0.002 | 0.005 |
| 1 | 60 | 39.4 | 1.75 | 45.1 | 0.003 | 0.004 | 0.003 | 0.003 | 0.001 | 0.003 |
| 1 | 80 | 64 | 2.11 | 33.3 | 0.003 | 0.002 | 0 | 0.002 | 0 | 0.002 |
| 1 | 100 | 88.1 | 2.37 | 52.2 | 0.002 | 0.005 | 0.003 | 0.012 | 0.007 | 0.011 |
| 1 | 200 | 195 | 4.82 | 221 | 0.002 | 0.003 | 0.004 | 0.007 | 0.003 | 0.011 |
performance in solving the multi-objective dynamic JSP, especially for large numbers of job arrivals. It is also observed in all four figures that the performance measures of the proposed method and FIFO change approximately linearly with the number of new jobs. Figure 18.3a–d also shows that the angle between the curves of the proposed method and FIFO decreases as job arrivals become dense, which illustrates that the proposed method sacrifices some schedule efficiency in order to maintain schedule stability as the job arrival rate λ grows.

Fig. 18.3 a–d Comparison of the proposed method and the other methods for λ = 0.125, 0.25, 0.5, and 1, respectively
18.5.2.4 The ANOVA for the Performance Measure
To analyze how the job arrival rate λ affects schedule efficiency and schedule stability, an ANalysis Of Variance (ANOVA) is performed using the commercial statistical software Excel 2010. The ANOVA, given in Table 18.8, tests the statistical significance of the factor λ, the job arrival rate. In this study, an effect is considered significant if its P-value is less than 0.05. As Table 18.8 indicates, the job arrival rate has a statistically significant impact on the performance measures of schedule efficiency and schedule stability for all problems.
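The quantities in Table 18.8 are those of a standard one-way ANOVA (Excel's "Anova: Single Factor" tool computes the same values). A minimal sketch, applied here to hypothetical sample data:

```python
def one_way_anova(groups):
    """One-way ANOVA over a list of groups of observations.

    Returns (ss_between, ss_within, ms_between, ms_within, F); the effect
    is judged significant when the P-value of F falls below the chosen
    threshold (0.05 in this chapter).
    """
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ss_between, ss_within, ms_between, ms_within, ms_between / ms_within
```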
Table 18.8 The ANOVA for the performance measures

| Problem | Factor | Source of variation | Sum of squares | Mean square | F |
|---|---|---|---|---|---|
| J20 | λ | Between groups | 5461719.06 | 5461719.06 | 48.80 |
| | | Within groups | 671464.45 | 111910.74 | |
| J40 | λ | Between groups | 17538373.2 | 17538373.2 | |
| | | Within groups | 3139347.20 | | |
| J60 | λ | Between groups | 36066055.3 | 36066055.3 | |
| | | Within groups | 4997726.45 | | |
| J80 | λ | Between groups | 70978399.7 | 70978399.7 | |
| | | Within groups | 17526921.2 | 2921153.5 | |
| J100 | λ | Between groups | 94566487.6 | 94566487.6 | |
| | | Within groups | 20091305.2 | | |
| J200 | λ | Between groups | 390351486 | 390351486 | |
| | | Within groups | 103521112.4 | | |