Helmut A. Sedding

Time-Dependent Path Scheduling
Algorithmic Minimization of Walking Time at the Moving Assembly Line

Institute of Theoretical Computer Science, Ulm University, Ulm, Germany
Dissertation, Ulm University, Germany, 2019
ISBN 978-3-658-28414-5
ISBN 978-3-658-28415-2 (eBook)
https://doi.org/10.1007/978-3-658-28415-2

Springer Vieweg
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer Vieweg imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany.
Acknowledgements
This monograph is my dissertation thesis in partial fulfillment of the doctoral degree of natural sciences at Ulm University, Germany. It was only possible to get to this point with the support of several other persons. I am indebted to Prof. Dr. Uwe Schöning for generously offering the supervision of my dissertation and appointing me to a doctoral position at the Institute of Theoretical Computer Science at Ulm University. I am highly grateful to Prof. Dr. Nils Boysen for his support and his review of my thesis. Likewise, I would like to warmly recognize Prof. Dr. Enno Ohlebusch for his effort in refereeing my thesis. For their committee membership, let me thank Prof. Dr. Birte Glimm, Prof. Dr.-Ing. Franz Hauck, and PD Dr. Friedhelm Schwenker. In addition, I especially thank Prof. Dr. Stanisław Gawiejnowicz for his invitation to Poznań and the valuable discussions about time-dependent scheduling. Let me further take the opportunity to thank the Initiative Wissenschaft und Automobilindustrie in Jena for awarding me their research prize of the year 2016. Last but not least, I am highly pleased to thank my family, friends, and colleagues for their enduring support in devising this thesis.

Helmut A. Sedding
Abstract
Moving assembly lines are the stepping stone of automobile mass production. Here, every second counts. Therefore, nonproductive walking time of workers is meticulously optimized by production planners. This is difficult at moving assembly lines because the walking distances between the workpiece and the line side are inherently time-dependent. Consequently, only a few computational approaches to optimizing them exist. In this work, we introduce a core model of the problem settings, analyze combinatorial properties and computational complexity, and provide means for algorithmically optimizing walking time. We optimize the sequence of all operations, and the position of the corresponding line side material containers at each work station. The performance of our algorithms is evaluated in numerical experiments followed by statistical analyses. This shows that our results provide the basis for decision support systems, thus enabling the computational planning of moving assembly lines while minimizing time-dependent walking time.
Zusammenfassung
Moving assembly lines form the basis of automotive mass production. Here, every second counts. Nonproductive walking times of workers are therefore carefully optimized by production planners. At moving assembly lines, however, this is challenging, because the walking distances between workpiece and line side are inherently time-dependent. Hence, hardly any computational optimization methods exist for this task. This work introduces a core model for the problem settings, analyzes combinatorial properties as well as the computational complexity, and provides means for the algorithmic optimization of walking times. Optimized are, on the one hand, the sequence of all operations and, on the other hand, the position of the material containers at each work station. The performance of the developed algorithms is evaluated in numerical experiments followed by statistical analyses. This shows that the results of this work provide a foundation for decision support systems and thus enable the computational planning of moving assembly lines while minimizing time-dependent walking times.
Contents

List of Definitions
List of Algorithms
List of Statements
List of Figures
List of Tables

I  Introduction
1  Introduction
   1.1  Introduction
   1.2  Organization
2  Modeling
   2.1  Modeling
   2.2  Related literature
   2.3  Assumptions
      2.3.1  Variable assumptions
      2.3.2  Common assumptions
   2.4  Walking strategies

II  Operation sequencing
3  Operation sequencing
   3.1  Introduction
   3.2  Problem definition
   3.3  Related literature
      3.3.1  Assembly line balancing
      3.3.2  Classic scheduling
      3.3.3  Time-dependent scheduling
   3.4  Polynomial cases
   3.5  Lower bound
   3.6  Dominance rule
   3.7  Solution algorithms
      3.7.1  Dynamic programming
      3.7.2  Mixed integer program
      3.7.3  Basic heuristics
      3.7.4  Branch and bound algorithm
   3.8  Numerical results
      3.8.1  Instance generation
      3.8.2  Exact algorithms
      3.8.3  Heuristics
   3.9  Conclusion
4  Operation sequencing with a single box position
   4.1  Introduction
   4.2  Related literature
   4.3  Computational complexity
   4.4  Dynamic programming algorithm
   4.5  Fully polynomial time approximation scheme
   4.6  Polynomial algorithm for a variable global start time

III  Box placement
5  Box placement for one product variant
   5.1  Introduction
   5.2  Related literature
   5.3  Problem definition
   5.4  Polynomial cases
   5.5  Computational complexity
   5.6  Lower bound
   5.7  Dominance rule
      5.7.1  Exact dominance rule
      5.7.2  Heuristic dominance rule
   5.8  Solution algorithms
      5.8.1  Mixed integer program
      5.8.2  Basic heuristics
      5.8.3  Branch and bound algorithm
      5.8.4  Truncated branch and bound heuristic
   5.9  Numerical results
      5.9.1  Instance generation
      5.9.2  Test setup
      5.9.3  Exact algorithms
      5.9.4  Heuristics
   5.10 Conclusion
6  Box placement for multiple product variants
   6.1  Introduction
   6.2  Problem definition
   6.3  Computational complexity
   6.4  Mixed integer programs
      6.4.1  Disjunctive sequencing
      6.4.2  Space-indexing
   6.5  Lower bound
      6.5.1  Lagrangian relaxation of processing times
      6.5.2  Subgradient search
      6.5.3  Solving the Lagrangian relaxation
      6.5.4  Determining box positions
      6.5.5  Determining walk times
      6.5.6  Greedily determining walk times
   6.6  Solution algorithms
      6.6.1  Basic heuristics
      6.6.2  Branch and bound algorithm
      6.6.3  Truncated branch and bound algorithm
   6.7  Numerical results
      6.7.1  Instance generation
      6.7.2  Test setup
      6.7.3  Exact algorithms
      6.7.4  Heuristics
   6.8  Conclusion

IV  Conclusion
7  Conclusion
   7.1  Conclusion
   7.2  Future steps
8  Summary of major contributions
   8.1  Sequencing assembly operations
   8.2  Line side placement

Bibliography
Publications
List of Definitions
Definition 2.1  Walk time ϖ_j(t)
Definition 3.1  Problem S
Definition 4.1  Problem Ŝ
Definition 4.2  Problem Ĉ
Definition 4.3  Even Odd Partition (Garey et al., 1988)
Definition 5.1  Problem P
Definition 5.2  Three Partition (3P) (Garey and Johnson, 1979)
Definition 6.1  Problem Pm
List of Algorithms
Algorithm 1  Dynamic programming algorithm for Ŝ
Algorithm 2  FPTAS for Ŝ
Algorithm 3  Combinatorial lower bound for P
Algorithm 4  Dominance rule for P
Algorithm 5  Weighted nearest identity sequence heuristic for Pm
List of Statements
Lemma 3.1, Lemma 3.2, Property 3.3, Theorem 4.1, Theorem 4.2, Remark 4.3, Property 4.4, Property 4.5, Property 4.6, Theorem 4.7, Corollary 4.8, Theorem 4.9, Remark 4.10, Remark 4.11, Lemma 5.1, Lemma 5.2, Lemma 5.3, Lemma 5.4, Theorem 5.5, Property 5.6, Property 5.7, Property 5.8, Theorem 6.1, Corollary 6.2, Remark 6.3, Lemma 6.4, Remark 6.5, Lemma 6.6, Property 6.7, Corollary 6.8, Lemma 6.9, Property 6.10, Remark 6.11
List of Figures
Figure 1   An example assembly station
Figure 2   Comparison of walking strategies
Figure 3   Example instance for S
Figure 4   Time-dependent scheduling models with piecewise-linear processing times
Figure 5   An example for a polynomial case of S
Figure 6   Another example for a polynomial case of S
Figure 7   Lower bound on an S instance
Figure 8   S: MIP, B&B runtime in box and scatter plots
Figure 9   Example Ŝ instance
Figure 10  Example instance for P
Figure 11  Example instance for P with uniform assembly times and uniform box widths, respectively
Figure 12  Example of a polynomial case of P
Figure 13  Corresponding P instance in the pseudopolynomial reduction of an example 3P instance
Figure 14  Lower bound on a partial P schedule
Figure 15  Dominance rule on a partial P schedule
Figure 16  P: MIP, B&B runtime in box and scatter plots
Figure 17  Example instance for Pm
Figure 18  Pm: MIP, B&B runtime in box and scatter plots for each n
Figure 19  Pm: MIP, B&B runtime in box and scatter plots for each m
Figure 20  Pm: MIP, B&B runtime in a product plot for each n and m pair
List of Tables
Table 1   Slope values by walking velocity and walking strategy
Table 2   Complexity results on related classic objectives for single machine scheduling
Table 3   S: MIP, B&B runtime and performance
Table 4   S: Mean walking time percentage by assembly time and box width setting
Table 5   S: Mean walking time percentage by assembly time and box width setting
Table 6   S: Heuristics' runtime and performance
Table 7   P: MIP, B&B runtime and performance
Table 8   P: B&B runtime by walking velocity and walking strategy
Table 9   P: B&B runtime by assembly time and box width setting
Table 10  P: Heuristics' runtime and performance
Table 11  Pm: MIP, B&B runtime and performance
Table 12  Pm: MIP, B&B runtime and performance for each n and m pair
Table 13  Pm: B&B runtime by walking velocity and walking strategy
Table 14  Pm: B&B runtime by assembly time and box width setting
Table 15  Pm: Heuristics' runtime and performance
Table 16  Pm: Heuristics' runtime and performance for each n and m pair
Part I Introduction
1 Introduction
1.1 Introduction Production systems possess a high number of variables that influence their productivity. This is a major motivation for production planners to utilize proficient planning software that assists them in making informed decisions. A key component for improving productivity is the elimination of nonproductive time. Particularly in automotive assembly, worker walking times are a significant contributive factor: Scholl et al. (2013) indicate that about 10–15% of total production time at a major German car manufacturer is spent on fetching parts from the production line side. Although this walking time is optimized manually since the advent of moving assembly lines in 1913 (Ford and Crowther, 1922, p. 80), there are only few computational approaches in the literature. The main difficulty for analyzing, estimating, and minimizing walking time arises from the movement of the conveyor line, which induces time-dependency for walk distances to the line side. There are two main factors that decide on the walking time (Scholl et al., 2013): (a) the sequence of assembly operations, and (b) the line side placement of parts. In practice, assembly operation sequencing (a) is performed by the experienced assembly line worker on his or her own. At the same time, the goal of an assembly line is to distribute workload equally among workers for improved productivity. As the walking time contributes significantly to the workload, it is thus vital to individually assess the walking time optimization potential by changing the order of a worker’s assembly operations. © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020 H. A. Sedding, Time-Dependent Path Scheduling, https://doi.org/10.1007/978-3-658-28415-2_1
4
1 Introduction
Recently emerging decision support software already shows a preview of incurred walking time. Nonetheless, the sequence optimization still needs to be performed manually. We assume this is due to the problem’s distinctness from known sequencing problems, which are restricted to fixed walking distances, e. g., at nonmoving assembly lines, for which Scholl et al. (2013) devise an optimization algorithm. The line side placement of parts (b) is largely planned in a collaborative effort of all involved parties, visualized by placing full-size card boxes. In recent years, dedicated software planning tools introduce digital decision support for placing the boxes by previewing resulting worker paths. Although such software allows to move boxes virtually for a manual placement, an automated optimization as, e.g., described in Klampfl et al. (2006) is still in its infancy, in particular for moving conveyor lines (Boysen et al., 2015). The objective of this work is to create computational approaches to optimize walking time, in particular regarding these two main factors, (a) and (b). The methodology is to model the contained problem settings adequately in terms of mathematical optimization, analyze their computational complexity, devise algorithms for solving them, and test their computational performance. The main factor which differentiates results in this work from existing research on planning walking paths is the continuous movement of the assembly line. As we essentially optimize schedules of either jobs or part containers, our research is anchored in the field of time-dependent scheduling. The devised model of the assembly line abstracts from reality such that it still describes the real problem, but becomes suitable for mathematical optimization. Moreover, the quintessential problem setting addressing (a) and (b) is common to many variations of the real world problem. This problem core and its properties are described in this work, being readily available for derivations, which extend the problem setting, or impose restrictions on it. Addressing the optimization of these models computationally in a structured way requires to first assess their computational complexity, searching
1.2 Organization
5
for the edge between easy and hard problem instances. The complexity is shown analytically by finding polynomial time algorithms for easy problems, and by reducing NP-complete problems to difficult problems. Moreover, a polynomial time approximation algorithm is introduced for instances that are not NP-hard in the strong sense. To solve the models, the contained problem settings are analyzed to devise properties that describe optimal solutions. Furthermore, dominance relations are devised, allowing to compare some partial solutions. In the search for an optimal solution, such relations allow to remove some search trees in favor of provable better ones. Combinatorial properties are also utilized in calculating a lower bound on the walking time even if only parts of the solution are available. This enables to incrementally construct and remove suboptimal solutions in branch and bound search algorithms. For finding heuristic solutions quickly, truncated branch and bound searches are derived, additionally employing greedy algorithms and metaheuristics. Performance of the devised solution algorithms is tested in a computational experiment on generated instances with varying difficulty and structure, being either artificial or close to reality. A statistical analysis evaluates and compares the novel algorithms to the performance of a state of the art mathematical optimization solver and a metaheuristic simulated annealing approach. The assessment shows that our algorithms outperform other approaches with respect to runtime and solution quality. Hence, they provide a means for quickly solving problem instances and assisting planners in reducing walking time at moving conveyor lines. 1.2 Organization The work is organized as follows. It begins with the introduction of a model that allows to optimize walking time at moving assembly lines in Chapter 2, and reviews generally related literature. The focus lies on two strategies: optimizing (a) the assembly operation sequence, and (b) the line side part placement. For each problem setting, we devise combinatorial properties,
6
1 Introduction
study its computational complexity, and introduce fast exact and heuristic solution algorithms, and test them in a computational experiment. In Part II, the sequencing problem is studied with respect to several aspects. The definition, a literature review and polynomial cases are introduced in Chapter 3. For this general case we provide a fast exact algorithm that is evaluated in a computational test. Chapter 4 considers the special case with one common place for all parts and identify its computational complexity with a NP-completeness proof and a fully polynomial time approximation scheme. The placement problem is considered in Part III. Chapter 5 begins with a study on deciding the part placement for a single product variant. First, the model is defined, and a literature review is provided. Then, we analyze its computational complexity, observe polynomial cases, and provide exact and heuristic algorithms, which are assessed in computational tests. This model is extended to intermix the placement of several product variants in Chapter 6. We study this problem’s computational complexity, provide a Lagrangian based lower bound that enables exact and heuristic algorithms, and assess them in a computational test on an extended instance set. To wrap up, Part IV summarizes the major results of this work, describes their implications, and motivates further research steps to extend the model and build upon the results.
2 Modeling
Parts of this chapter have previously been published in Sedding (2019).

2.1 Modeling

To provide a base for optimizing walking time, we devise a model that strikes a balance between closely depicting reality and enabling a fast combinatorial optimization of all variables. Moreover, we aim for a model that captures the problem core, such that it enables the derivation of further models at later points in time. In this chapter, we first describe our devised model, give an overview of the literature on the practical problem in Section 2.2, and discuss our model assumptions in Section 2.3. With these, we show how to calculate walking time for several walking strategies along the assembly line in Section 2.4.

We approach the two main problems of optimizing the sequence of assembly operations and the line side placement of parts with one model that captures the quintessential planning problem, taking time-dependent walking time into account. The assembly line manufactures either only one product variant, or intermixes multiple product variants in given production rates. In essence, we are given a workpiece that continuously moves along a straight conveyor line. We focus on the cycle time of one given worker at this workpiece, which corresponds to the working time of the worker at this workpiece until he or she visits the next workpiece. If workpieces are of different product variants, we optimize the sequence for each variant separately, and the part placement for all variants at once such that it minimizes the sum of cycle times, weighted by production rate.

In the cycle time of a worker at a workpiece, the worker is given n operations that solely depend on the workpiece's product variant. The worker
performs these operations one after another in a certain sequence. In the process, he or she leaves the workpiece before each assembly operation, walks along the assembly line to the corresponding box, picks up the required parts for this operation, and returns to the workpiece. We call this a walk. At the workpiece, the worker continues by performing the assembly operation. This takes a fixed assembly time, specific to the operation. Together, we use the term job to refer to an assembly operation in conjunction with its preceding walk. Accordingly, the job's processing time function sums a time-dependent walk time and an assembly time.

All parts for an operation are stored in a separate box at the line side, each with a certain width. Hence, there are n boxes. They are placed side-to-side in a single row along the conveyor line. Thus, the worker walks along this row of boxes in parallel to the conveyor to gather parts. The sum of all walk and assembly times yields the cycle time. The objective is then to reduce the cycle time by minimizing the (total) walking time, which we define as the sum of all walk times. On the one hand, we permute the sequence of operations, and on the other hand, we permute the order of boxes at the line side. A depiction of our model and both approaches is given in Figure 1.

For a walking time minimization, each box should be placed close to the point where its operation is performed. Intuitively, it should suffice to order the boxes in the same sequence as the jobs. Although this is a common practice, it is often better to accept a longer walk time for some operations to obtain a global minimum. This gives rise to a combinatorial optimization approach.

2.2 Related literature

Assembly lines are a classic method to distribute work equally among workers. They are considered a stepping stone of Taylorized production, as they greatly reduced lead times. To the present day, this production method is the standard in car factories. Assembly lines operate with a fixed cycle time, producing one product in each cycle. Thus, productivity can be measured by the inverse of the cycle time.
Figure 1: This figure shows an example assembly station for the right rear wheel of a car. The conveyor line is shown at four points in time; each is at the start of an assembly operation (marked with a dashed line). Before each operation, the worker walks to pick up the required parts from the corresponding box at the line side. An intuitive placement strategy is to order the assembly operations and boxes in the same sequence ⟨1, 2, 3, 4⟩, as shown in (a). By changing the sequence of assembly operations to ⟨1, 4, 2, 3⟩ in (b), the walking time is reduced by 23%. In (c), the box placement sequence is instead changed, to ⟨1, 3, 4, 2⟩, which reduces walking time by 42%. Clearly, it can be better to accept a longer walk for some of the operations as it may result in a much shorter total walking time.
Particularly in automobile production, assembly lines are usually the bottleneck of the factory, and all other systems such as part logistics meet its demands. Therefore, each reduction of the cycle time directly increases output. A reduction is classically achieved by a high degree of automation. However, there is a limit on the benefits of automation; thus some manufacturers retain or even decrease the level of automation (Gorlach and Wessel, 2008; Salmi et al., 2016). Therefore, weight is placed on meticulously optimizing the worker's workload and work station.

A well-known optimization problem in this area is the assembly line balancing problem. It concerns the assignment of operations to work stations such that at each station, the assigned workload is below a given cycle time. This problem is one of the first mathematical optimization problems and is formulated in Salveson (1955). Surveys on this problem are found in Battaïa and Dolgui (2013); Baybars (1986); Scholl and Becker (2006). As the available time at each station is limited, it generalizes the bin-packing problem (Wee and Magazine, 1982). In addition to the limited time, available space is often scarce at work stations. However, in the assembly of a workpiece, space is required to store assembly parts in boxes at the line side. Therefore, Bautista and Pereira (2007) introduce space constraints of work stations to the line balancing procedure, while space utilization is optimized in Bukchin and Meller (2005) to reduce line stoppage from late part replenishments.

As production stops happen not only if an assembly worker starves from missing parts, but also if the workload repeatedly exceeds the available time, it is essential to properly anticipate and plan each worker's workload. The workload is classically estimated with the established system of Methods-Time Measurement, which averages time measurements for several workers (Maynard et al., 1948). In particular, nonproductive operations like walking may take up a considerable amount (Boysen et al., 2015). To consider this, the models in Andrés et al. (2008); Scholl et al. (2013) factor in sequence-dependent working time for, e.g., fetching parts. They assume fixed walk distances. However, this assumption is inaccurate for walks between a moving assembly line and static locations.
To design and plan an assembly line for maximum efficiency, it is important to obtain detailed time estimates. This necessitates a model that incorporates time-dependent walking distances. Such time-dependent walks are, e.g., modeled in Gusikhin et al. (2003). On optimizing time-dependent walking times, we are aware of only one previous work by other authors that takes this into account (Klampfl et al., 2006). They provide a quadratic integer program to optimize the box placement. Their approach is limited to rather small instances, as they report slow performance already for a tiny case study of five operations. Although manufacturers demand sophisticated, intelligent decision support systems in this area, we are not aware of further computational approaches in the literature. This research gap is moreover explicitly identified in Boysen et al.'s (2015) recent review on automotive part logistics: "While constituting important initial steps, the existing papers leave plenty of space for future research". This motivates our development of improved optimization approaches that include a detailed time measurement.

An analogy can be seen in the history of the traveling salesman problem, which is to find a shortest tour through all cities (Garey and Johnson, 1979). The classic model assumes constant travel distances. Similarly, it was later extended to time-dependent travel times to model delay, e.g., during rush hours (Alfa, 1987; Beasley, 1981). Continuous travel time functions are first considered in Ahn and Shin (1991) and Malandraki and Daskin (1992). The former remark that this function needs to ensure that, by pausing at a node, the arrival time must not decrease. This requirement is present in our model as well, and it is fulfilled if the conveyor velocity is smaller than the worker velocity, which is usually the case. The variant in Helvig et al. (2003) is called the moving traveling salesman problem, and requires visiting cities that continuously move along a line. Each city is given a start point and a velocity. With one uniform velocity for all cities, this problem corresponds to our assembly operation sequencing problem, albeit with zero assembly times, by modeling each box as a city. A recent literature review on the time-dependent traveling salesman problem is given in Gendreau et al. (2015). We aim for a similar extension from classic optimization with fixed
distances to time-dependent distances for a greatly improved accuracy in optimizing walking time at moving conveyor lines.

2.3 Assumptions

For modeling the problem setting in its quintessence, we take several assumptions which, we suppose, on the one hand subsume the practical problem core, and on the other hand allow for extensions and derivative work.

2.3.1 Variable assumptions

We study several different models. They differ in the following assumptions.

A1 (Single product variant). In practice, a production mix of multiple product variants is nowadays the norm. This is achieved either by placing a separate part container for each variant, or by kitting parts of several variants in the same container. Kitting effectively requires considering a single variant only, because the same part container is used in every variant. Kitting is a common strategy in practice, and it allows us to assume a single product variant which has only one list of operations and no further containers. Hence, by depicting only a single product variant, we can already cover some real-world cases with multiple product variants.

A2 (Fixed operation sequence). In practice, workers may autonomously change the order of some operations. With this assumption, we can abstain from predicting this and assume an immutable list of operations.

A3 (Fixed box positions). Changing box positions involves several parties, among them logistics and assembly planners, assembly and part replenishment workers. Hence, this considerable effort is often eschewed. We can model this by assuming fixed box positions.
2.3.2 Common assumptions

Assumptions that are the same in all of our models are listed as follows.

A4 (Single station). In practice, an assembly line is divided into a row of equal-sized, separately operated stations. As the station's length equals the distance between succeeding workpieces, it contains exactly one workpiece at all times. Moreover, the station's line side is not shared with other stations. Therefore, we focus on one work station and one workpiece.

A5 (Single worker). In practice, usually one worker is assigned to the same station, although this assignment can be extended to include multiple workers at a station (Becker and Scholl, 2009). Therefore, our model considers the operations and placement area of one worker. However, our model can even depict the multiple worker case by assuming that no interference occurs and each worker's material is placed on a separate, contiguous area.

A6 (Single cycle). A cycle starts when the workpiece enters, and ends when it leaves the station. In practice, a worker might finish a cycle early or late depending on product variations and his or her current condition. Thus, he or she might float up- or downstream the conveyor line. However, this is hard to predict. Therefore, we focus on average assembly times and disregard floating. Hence, we only model a single cycle. Then, each operation always happens at the same time and takes the same time.

A7 (Single work point). In practice, larger workpieces have several work points with significant walking distance in between. However, such operations are usually marked incompatible during line balancing and thus get assigned to different workers anyhow (Becker and Scholl, 2009). Therefore, in most cases all assembly work happens at one spatial point of the workpiece on the moving conveyor, which is depicted in our model.

A8 (One box per operation). In practice, an operation sometimes requires parts from several containers. However, even then it is advisable to store them side-by-side to avoid further walking time. We assume that this is the case and therefore subsume these containers by an imaginary box that
encompasses them. This also comprises stacked containers in a rack or on a dolly.

A9 (One-dimensional box placement). In practice, larger containers are placed on pallets or dollies on the floor, while smaller containers are placed in racks along the line. Our model indeed allows stacking of containers within a box as long as they correspond to the same operation. Then, placement reduces to a single row of boxes at the line side.

A10 (No space between boxes). In practice, there is commonly a lack of space at the line side. Therefore, we disallow space between boxes entirely in our model and place boxes side-to-side without spacing.

A11 (Stationary box positions). In practice, container positions remain fixed during production, even if larger containers are sometimes on wheels or mounted on dollies or overhead cranes. Accordingly, we assume fixed positions, and optimize them offline.

A12 (Uniform walking velocity). In practice, a worker requires time for acceleration and deceleration. Moreover, heavier parts reduce the velocity. In this model, we approximate walk times by using a constant average velocity for all parts. Then, walking time is a linear function of the distance.

A13 (One-dimensional walking). In practice, containers are placed as close to the conveyor as possible to reduce walking time. Therefore, we assume that the containers are within gripping distance. As a result, the worker only walks in parallel to the conveyor belt. Hence, we calculate walking time by measuring distance in just one dimension along the line.

A14 (One walk per operation). In practice, parts from the line side are usually fetched in a single walk directly before starting the corresponding operation. Hence, we model a walk at the start of each operation. After that, each assembly time is constant. Sometimes, the worker may bring along parts for several operations at once. Then, we encapsulate these parts in one larger container. If, by this, one of the corresponding operations no longer requires a separate walk, we join it with its preceding operation into one longer operation. Hence, it again suffices to model one walk per operation.
With this approach, it is also possible to depict gathering all small parts like screws in the walk of, e.g., the first operation, and additional larger parts in later walks. If an operation requires no parts at all, we also append it to its preceding operation, or, if it is the first, adjust the worker's start time.

A15 (No picking time). In practice, the case study in Finnsgård et al. (2011) reports that the time for picking parts accounts for only 6% of the nonproductive time (mean picking time 1.6 seconds, mean walking time 26.4 seconds). Therefore, we ignore the picking time.

A16 (Picking at upstream side). In practice, depending on the spatial arrangement of parts within a box, the pick point at larger boxes may vary. We eschew modeling such detail and thus settle on picking at a single point, the upstream (left) side of the box.

A17 (One operation per box). In practice, it occurs that small parts like screws are required for several operations. However, it is then usually possible to gather all of them at once for all operations in this cycle. Therefore, we assume that each box is visited only once per cycle, assigned to exactly one operation.

A18 (One product variant per box). In practice, some parts can be the same in multiple product variants. Moreover, similar but differing parts may be stored in the same box. In this case, the same box is shared between operations of several product variants. This decreases the number of boxes at a station. This reduction is not considered in our model.

A19 (No precedence constraints). In practice, sometimes precedence between assembly operations needs to be respected, for example if one operation obstructs another (Scholl et al., 2008). This decimates the number of feasible solutions. Nonetheless, a precedence graph is seldom available in practice, which motivates Klindworth et al.'s (2012) study of learning it. With this status quo, we neglect precedence constraints and consider the complete set of solutions in our optimization.
2.4 Walking strategies

A one-dimensional walk distance measurement suffices to model walking time along a moving conveyor line (see Assumption A13). Moreover, we show in the following that the walking time can be calculated by just a piecewise-linear function with two pieces.

The conveyor moves linearly along the assembly line with constant velocity. A distance is a quantity that can be measured by the amount of time it takes the conveyor to travel it. Indeed, we base all measurements on conveyor velocity v_conv = 1, and scale all distance measures accordingly to this unit (for example, box widths). Moreover, we equate the workpiece's position with time: at time t, the workpiece is at t · v_conv = t. Furthermore, we express a box position by the time it is passed by the workpiece. For the worker, we assume a constant walking velocity v > v_conv (see Assumption A12).

Let us estimate, for some job j ∈ J, the worker's walk time from the workpiece to box position π_j and back if the walk starts at time t. If t = π_j, the walk time is zero. Otherwise, we distinguish whether (i) the workpiece moves toward the box: t < π_j; or (ii) the workpiece moves away from the box: t > π_j. In each case, the walk time is proportional to the distance between t and π_j. This leads to

Definition 2.1 (Walk time ϖ_j(t)). The walk time of a job j ∈ J if starting to walk at time t is calculated by ϖ_j(t) = max{a (π_j − t), b (t − π_j)}, with a linear factor a ∈ (0, 1) for case (i), and a linear factor b ∈ (0, ∞) for case (ii).

We show in the following that by choosing a, b accordingly, it is possible to cover all common walking strategies from reality. If the floor is fixed and just the workpiece moves (as in Klampfl et al. (2006)), strategy (A) applies. If the floor plates move together with the workpiece (Jaehn and Sedding (2016) consider this case), strategy (B) applies. Their combination (C) applies if the worker can freely alternate between a fixed floor and moving floor plates. These strategies are listed below. Additionally, Figure 2 depicts the displacement of the worker, the workpiece, and a box for an example job.

Figure 2: In this example, we show the displacement of the worker during one job (drawn with a solid thick line) in walking strategies (A), (B), and (C). The worker commences with the job by leaving the workpiece (dotted line) at time t̂, visits the box at time t′, returns to the workpiece at time t″, remains there for assembly time l̂, and completes the job at time Ĉ. Additionally, the diagram shows the displacement of the corresponding box (dashed line) relative to the workpiece on the moving conveyor in the course of the job; they pass each other at time π̂. It is visible that the walk time t″ − t̂ is the same in (A) and (B). However, in (C), the walk time is smaller, hence t″ and Ĉ are smaller as well (as can be seen from the vertical, dashed reference lines).
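To make Definition 2.1 concrete, the following minimal Python sketch (ours, not part of the thesis; the function name and the sample slope values are chosen purely for illustration) evaluates the walk time for a given start time and box position:

    def walk_time(t, pi_j, a, b):
        # Definition 2.1: zero at t = pi_j, slope a while the workpiece still
        # approaches the box (t < pi_j), slope b once it has passed it (t > pi_j).
        return max(a * (pi_j - t), b * (t - pi_j))

    # Hypothetical slope values for illustration only:
    a, b = 0.25, 0.5
    print(walk_time(5.0, 5.0, a, b))  # 0.0, the walk starts exactly at the box position
    print(walk_time(3.0, 5.0, a, b))  # 0.5, case (i): workpiece moves toward the box
    print(walk_time(7.0, 5.0, a, b))  # 1.0, case (ii): workpiece moves away from the box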
Walking strategy (A). The worker walks beside the conveyor or underneath an overhead conveyor. Here, the walk time if starting at time t is ϖ_j(t) from Definition 2.1 with factors a = 2/(v + 1) ∈ (0, 1) and b = 2/(v − 1) ∈ (0, ∞) for worker velocity v > 1.

Proof. Here, we fix the coordinate system on the floor. The target box has the constant position f(t) = π_j. Let us consider case (i). To walk to the box, starting at time t̂ at position t̂ · v_conv = t̂, the worker's position is given by g(t) = (t − t̂) v + t̂ = t v + t̂ (1 − v) for t ≥ t̂. Then, the box visit time t′ satisfies g(t′) = f(t′) ⟺ t′ v + t̂ (1 − v) = π_j ⟺ t′ = π_j / v + t̂ (1 − 1/v). For returning, the worker's movement function is h(t) = −(t − t′) v + f(t′) for t ≥ t′. It meets the conveyor, described by q(t) = t, at return time t″, which satisfies h(t″) = q(t″) ⟺ −(t″ − t′) v + π_j = t″ ⟺ t″ = (π_j + t′ v)/(1 + v). Substituting t′, the walk time then is t″ − t̂ = (π_j − t̂) · 2/(1 + v) = (π_j − t̂) a. The walk time thus depends proportionally on the distance between the box and the work point. Case (ii) is calculated similarly. Here, the worker's movement is g(t) = −(t − t̂) v + t̂ = −t v + t̂ (1 + v). Then, t′ satisfies g(t′) = f(t′) ⟺ t′ = −π_j / v + t̂ (1 + 1/v). Returning, the worker is at h(t) = (t − t′) v + f(t′), and meets the conveyor at t″, which satisfies h(t″) = q(t″) ⟺ t″ = (π_j − t′ v)/(1 − v). The walk time in this second case is t″ − t̂ = (π_j − t̂) · 2/(1 − v) = (t̂ − π_j) b. Both cases combined yield the stated walk time.

Walking strategy (B). The worker walks upon the conveyor, which has mounted floor plates. Here, the walk time at t is ϖ_j(t) from Definition 2.1 with a = 2/(v + 1) ∈ (0, 1) and b = 2/(v − 1) ∈ (0, ∞) for v > 1.

Proof. Here, we fix the coordinate system on the workpiece (moved by the conveyor), easing the calculation of the worker's walk time. Again, this is depicted in Figure 2. We begin with case (i). The box movement function then is f(t) = −t + π_j. The worker starts walking at time t̂. The worker's forward movement is described by g(t) = (t − t̂) v for t ≥ t̂. The worker visits the box at time t′, thus g(t′) = f(t′) ⟺ (t′ − t̂) v = −t′ + π_j ⟺ t′ = (π_j + t̂ v)/(1 + v). After that, the worker returns to
the workpiece by walking the same path backward. Therefore, the walk time is 2(t′ − t̂) = (π_j − t̂) · 2/(1 + v). Case (ii) is calculated similarly. The worker's backward movement function is g(t) = −(t − t̂) v for t ≥ t̂. Then, g(t′) = f(t′) ⟺ t′ = (π_j − t̂ v)/(1 − v). Again doubling the distance to include the return, the walk time is 2(t′ − t̂) = (π_j − t̂) · 2/(1 − v) = (t̂ − π_j) b. Combining both cases yields the stated walk time.

Walking strategy (C). When walking forward, the worker velocity adds to the conveyor velocity. Therefore, we let the worker walk atop the moving conveyor floor plates in the forward direction (strategy (B)). Backward, we instead let the worker walk beside the conveyor on the stationary floor, such that the opposing conveyor velocity does not reduce his or her velocity (strategy (A)). In summary, the forward velocity is 1 + v, and the backward velocity is v. Then, the walk time at t is ϖ_j(t) from Definition 2.1 with a = (2v + 1)/(1 + v)² and b = (2v + 1)/v² for v > 1.

Proof. Shown by combining the proofs of strategies (A) and (B).
These results show that walking strategies (A) and (B) yield the same walk time. Therefore, neither walking beside nor walking atop the conveyor is superior. However, their combination (C) allows reducing the walk time significantly: for worker velocity v = 13.6 as in Klampfl et al. (2006), factor a reduces by 3.5% and b by 4.1%. Hence, it can be advantageous to install both moving conveyor floor plates and stationary edges at assembly lines, as this enables workers to apply walking strategy (C). All three described walking strategies are covered by the same piecewise-linear function of Definition 2.1. Moreover, as a < b in each strategy, the walk time is shorter if the workpiece moves toward the box (case (i)). Exemplary a, b values for different worker velocities v are shown in Table 1.
Table 1: Numeric a, b slope values in walking strategies (A), (B), (C) for several worker velocities v.

           (A) and (B)          (C)
    v      a        b           a        b
    2      0.667    2.000       0.556    1.250
    4      0.400    0.667       0.360    0.562
    8      0.222    0.286       0.210    0.266
   16      0.118    0.133       0.114    0.129
   32      0.061    0.065       0.060    0.063
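As a cross-check of the slope formulas of strategies (A), (B), and (C), the following Python sketch (ours, not from the thesis) computes a and b from the worker velocity v; rounded to three decimals, it reproduces the values of Table 1:

    def slopes_ab(v):
        # Strategies (A) and (B): a = 2/(v+1), b = 2/(v-1), for v > 1.
        return 2 / (v + 1), 2 / (v - 1)

    def slopes_c(v):
        # Strategy (C): forward velocity 1+v, backward velocity v,
        # hence a = (2v+1)/(1+v)^2 and b = (2v+1)/v^2.
        return (2 * v + 1) / (1 + v) ** 2, (2 * v + 1) / v ** 2

    for v in (2, 4, 8, 16, 32):
        (a1, b1), (a2, b2) = slopes_ab(v), slopes_c(v)
        print(f"v={v:2d}  (A)/(B): a={a1:.3f} b={b1:.3f}  (C): a={a2:.3f} b={b2:.3f}")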
Part II Operation sequencing
3 Operation sequencing
3.1 Introduction

This chapter considers the problem of optimizing a worker's sequence of assembly operations at a workpiece to minimize the total walking time between the moving workpiece and parts at the line side. Each assembly operation is given a static line side box position along the workpiece line. Then, each assembly operation is preceded by a walk to this box position. The walk together with the assembly operation is subsumed as a job. The worker's walk time in this job is minimized if the job starts close to the point in time at which the workpiece passes its box position. At this point in time, the walk time is minimum, hence we call it the job's ideal start time. The objective is to find a sequence for the jobs such that the total distance to the ideal start times, or equivalently, the total time-dependent processing time (commonly called the makespan) is minimized. With a given fixed start time, minimizing the makespan is also equivalent to minimizing the last job's completion time. This is a common objective in time-dependent scheduling research. Notably, the employed V-shaped processing time function yields another stepping stone in scheduling research: it introduces nonmonotonicity to the previously only monotone piecewise-linear processing time functions. Therefore, we intensively study this problem's computational properties to advance foundational scheduling theory in addition to the more practical walking time minimization problem.

In this chapter, we first introduce a formal definition of the time-dependent job sequencing problem in Section 3.2. Then, we review related literature on this topic, both from a practical and an algorithmic view, in Section 3.3. We introduce polynomial cases that occur if a certain job sorting order results
in start times that are all not after, or all not before, each job's ideal start time. These polynomial cases are used in a lower bound that gives rise to a branch and bound search for exactly solving the sequencing problem. We test its performance computationally on a set of generated instances with varying properties. Moreover, we compare its performance to an exact mixed integer program and several heuristics. In the subsequent Chapter 4, we consider the problem variant with just a single common ideal start time for all jobs and study its computational complexity.

3.2 Problem definition

Let us formally define the problem of sequencing jobs.

Definition 3.1 (Problem S). We are given rational slopes a ∈ [0, 1], b ∈ [0, ∞), and a set of n jobs J = {1, …, n}. Each job j ∈ J is given an assembly time l_j ∈ ℚ≥0 and a box at position π_j ∈ ℚ. We decide on the job sequence S : J → {1, …, n}, which is a permutation of the jobs J that assigns each job to a distinct position. Its inverse is denoted by S⁻¹. For each job j ∈ J, we calculate the start time t_j = C_{S⁻¹(S(j)−1)}, iteratively from the given global start time t_min = C_0 (usually zero), and the completion time C_j = t_j + p_j(t_j) with start time dependent processing time p_j(t) = l_j + ϖ_j(t), and time-dependent walk time ϖ_j(t) as of Definition 2.1. The objective is to find a job sequence S that minimizes the makespan φ(S) = C_{S⁻¹(n)} − t_{S⁻¹(1)} = C_max − t_min.

In the three-field notation for scheduling problems (Graham et al., 1979), problem S can be stated as 1 | p_j = l_j + max{a (π_j − t), b (t − π_j)} | C_max.

A further word on notation: we often denote a sequence S in the form ⟨S⁻¹(1), …, S⁻¹(n)⟩. Hence, ⟨3, 1, 2⟩ indicates a sequence S with S(3) = 1, S(1) = 2, S(2) = 3.

The processing time p_j = p_j(t) of job j ∈ J is shortest if j starts at t = π_j. As t decreases below π_j, p_j(t) increases with slope a; as t increases above π_j, it increases with slope b; thus p_j(t) is asymmetric. Hence, p_j ≥ l_j ≥ 0. Moreover, p_j = l_j ⟺ t_j = π_j.
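The iterative computation of start and completion times in Definition 3.1 can be sketched in a few lines of Python (ours, not part of the thesis; the instance data below is a made-up toy example and the names are illustrative):

    def makespan(sequence, l, pi, a, b, t_min=0.0):
        # Evaluate phi(S) of Definition 3.1: jobs run back to back, and each
        # job's processing time is its assembly time plus the walk time of
        # Definition 2.1 evaluated at the job's start time.
        t = t_min
        for j in sequence:
            walk = max(a * (pi[j] - t), b * (t - pi[j]))
            t += l[j] + walk  # completion time C_j becomes the next start time
        return t - t_min

    # Made-up toy instance:
    l  = {1: 1.0, 2: 2.0, 3: 1.0}
    pi = {1: 0.0, 2: 3.0, 3: 5.0}
    print(makespan([1, 2, 3], l, pi, a=0.2, b=0.5))  # approx. 4.72
    print(makespan([2, 1, 3], l, pi, a=0.2, b=0.5))  # approx. 5.92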
Figure 3: Example instance of Figure 1 in terms of Definition 3.1: factors a = b = 0.1, n = 4 jobs, assembly times l_1 = ⋯ = l_4 = 2 all equal, box positions π_1 = 0, π_2 = 4, π_3 = 5, π_4 = 6, and fixed start time t_min = 0. The optimal job sequence S = ⟨1, 3, 4, 2⟩ reaches C_max = 6.595, while the intuitive S′ = ⟨1, 2, 3, 4⟩ yields C′_max = 6.732. The calculation of completion times is done iteratively: e.g., in S, we have C_1 = t_min + p_1(t_min) = p_1(0), C_4 = C_1 + p_4(C_1), and so on. The visualization shows sequence S. The dashed line in each job indicates the value of ϖ_j, i.e., the time of return from walking.
The completion time t + p_j(t) is monotonically increasing in t because a ≤ 1 and b ≥ 0. Therefore, idle time between jobs can only increase the objective; it is thus excluded by definition. As an example, Figure 3 visualizes the instance from Figure 1 in terms of the formal definition of S.

3.3 Related literature

Let us first look at related work on the practical side, then review classic scheduling problems and their extension to time-dependent processing times, which our problem setting corresponds to.

3.3.1 Assembly line balancing

The common simple assembly line balancing problem as introduced in Salveson (1955) assigns assembly operations to stations such that the workload is distributed equally among all stations and all operation precedence constraints are respected. Here, the sequence of operations within stations
is ignored. This is a valid approach if the sequence has no influence on the sum of processing times. In many real-world assembly lines, however, the sequence does influence the total processing time. For example, the model in Scholl et al. (2008) assumes sequence-dependent processing times of operations. Their motivation is to consider mutually prolonging operations: e.g., while it is possible to install a seat belt even after mounting the seat, it then takes longer. This allows lifting some of the precedence constraints, which increases the number of feasible solutions. Arcus (1965) notes that each operation may require a different tool. If this is the case, they add a tool handling time for obtaining a tool, and a tool handling time for setting it aside. Furthermore, Arcus (1965) remarks that gathering parts takes time. However, they assume this time is constant. Hence, they add it to the standard operation time. The studies of Andrés et al. (2008); Pastor et al. (2010) note that the time to gather an operation's parts from a distant container can depend on the previous operation. We assume this mainly arises from operation-specific work points, extending our Assumption A7. Therefore, they model a sequence-dependent setup time between operations, which is also able to depict tool handling times between operations. Their heuristics are refined by the results, e.g., in Martino and Pastor (2010); Seyed-Alagheband et al. (2011). The study in Nazarian et al. (2010) adds varying inter-product setup times for multiple product variants. The model in Scholl et al. (2013) integrates both types of setup times in a holistic model for the single product variant case. Its extension to multiple product variants is studied in Akpınar and Baykasoğlu (2014a,b); Akpınar et al. (2013, 2017).

A shortcoming of sequence-dependent setup times is their constant duration, independent of time. By this, the described models are restricted to depicting non-moving conveyors. With a moving assembly line, they are not able to portray the highly time-dependent walking time correctly. As this walking time can contribute a significant amount of nonproductive working time (Boysen et al., 2015; Scholl et al., 2013), it is important to estimate it adequately in order to minimize it when deciding on an operation sequence at a station.
3.3.2 Classic scheduling
Finding an optimum sequence of jobs is the traditional topic of machine scheduling research. Relevant to our study is the single machine case. There, all jobs are permuted in one single sequence. This sequence decides on the job execution order on the machine. In an optimization, one typically measures the completion time C_j of each job j ∈ J and calculates a cost value to be minimized, like the total completion time Σ_{j∈J} C_j. In classic scheduling, each job is given a constant processing time (Brucker, 2007). This provides the nice property that any set of adjacent jobs has a constant sum of processing times, irrespective of their order. Thus, changing their order has no impact on other jobs, which speeds up solution algorithms. Hence, minimizing the total completion time is solved by sorting the jobs by nondecreasing processing time (Smith, 1956). A classic scheduling objective is to sequence the jobs such that they minimize the total tardiness cost Σ_{j∈J} max{0, C_j − d_j}, which adds each job's deviation from its due date d_j if it completes after its due date. Hence, if a job is late, we linearly add its tardiness to the total tardiness cost, and if it is early or on time, the job incurs no cost. With all-equal due dates d_j = d for all j ∈ J, this problem is solved in polynomial time by sorting the jobs by nondecreasing processing time (Lawler and Moore, 1969). With arbitrary due dates however, the problem is NP-complete in the ordinary sense. This means that, unless P = NP, the problem is NP-hard only if the input is binary encoded. This is shown by the existence of a pseudopolynomial time algorithm, which solves the problem in time polynomial in the much longer unary input encoding. Such an algorithm is introduced in Lawler (1977). In Lawler (1982), it is refined to a fully polynomial time approximation scheme (FPTAS) with runtime O(n^7/ε) for any ε > 0 such that its objective value φ is at most (1 + ε) φ* for the optimum objective φ*. Later, Du and Leung (1990) show ordinary NP-hardness of the problem by reduction from the Even-Odd Partition Problem (see Definition 4.3). Hence, both results establish a sharp boundary on the problem's complexity. A review on the total tardiness problem is found in Koulamas (2010).
An extension of the total tardiness problem adds earliness costs, which symmetrically increase on early completion. This problem minimizes the total earliness and tardiness cost Σ_{j∈J} |C_j − d_j|, i. e., the sum of each job's completion time deviation from its due date. Hence the cost function has a V-shape, where a job incurs zero cost if and only if it completes just in time; thus the problem is often denoted as the just-in-time scheduling problem. This problem is as well sharply characterized in the common due date case d_j = d for all j ∈ J: it is NP-hard in the ordinary sense, as shown by reduction from the Even Odd Partition Problem, and allows for a pseudopolynomial time algorithm (Hall et al., 1991; Hoogeveen and van de Velde, 1991). Fully polynomial time approximation schemes (FPTAS) are introduced in (Kahlbacher, 1993; Kellerer and Strusevich, 2010; Kovalyov and Kubiak, 1999). However, if the global start time is a variable, the problem becomes polynomially solvable (Kanet, 1981). On the other hand, for arbitrary due dates, the problem was only recently proven to be strongly NP-complete (Wan and Yuan, 2013). This implies that the problem is NP-hard even with unary input encoding, and rules out the existence of a pseudopolynomial algorithm for the problem unless P = NP. This is shown by a polynomial reduction of the Three Partition Problem (see Definition 5.2). An overview on the computational complexity with the total tardiness objective and the total earliness and tardiness objective is given in Table 2. For solving the total earliness and tardiness scheduling problem, the currently fastest solution methods involve branch and bound schemes. Here, typically a depth-first search schedules the jobs subsequently, adding one job per search node. By employing a lower bound procedure in each node, it is possible to fathom nodes where the lower bound on the objective is higher than some upper bound on the objective value. The upper bound is typically the objective value returned by a heuristic. A first lower bound is described for the problem in Tahboub (1986, Theorem 12), and, probably independently, in Ow and Morton (1989). It splits each job j ∈ J into a number of unit-size jobs, usually p_j many given that processing times are integer.
Table 2: Complexity results on related classic objectives for single machine scheduling.

              | total tardiness                                     | total earliness and tardiness
d_j = d       | polynomial (Lawler and Moore, 1969)                 | NP-hard (Hall et al., 1991; Hoogeveen and van de Velde, 1991); FPTAS (Kahlbacher, 1993; Kellerer and Strusevich, 2010; Kovalyov and Kubiak, 1999)
d_j arbitrary | NP-hard (Du and Leung, 1990); FPTAS (Lawler, 1982)  | strongly NP-hard (Wan and Yuan, 2013)
Then, the jobs are assigned to unit-time intervals by solving a minimum weight bipartite matching problem, which takes polynomial time. This yields a lower bound on the total costs attained by jobs that are not yet sequenced in a branch and bound node. The principle of assigning the processing of each job into a series of time intervals is a classic way to formulate scheduling problems; it is called time-indexing (Pritsker et al., 1969). Lower bound methods based on this principle, often in combination with a Lagrangian relaxation, are also used in today's fastest exact algorithms, as described in Bülbül et al. (2007); Sourd (2009); Sourd and Kedad-Sidhoum (2008); Tanaka and Fujikuma (2012); Tanaka et al. (2009). Another lower bound approach is described in Kim and Yano (1994), and further used in Chang (1999): it considers the number of overlapping jobs in any time interval if completing each job at its due date. Then, incurred costs are at least the number of excess jobs in each such time interval multiplied by the length of this time interval. Hoogeveen and van de Velde (1996) systematically relax given constraints to devise further lower bounds. A comparison of some lower bounds is provided in Schaller (2007). In addition, dominance conditions, e. g. as described in Fry et al. (1996); Kim and Yano (1994); Szwarc (1993), allow comparing branch nodes and removing dominated nodes. Reviews on solving the total earliness and tardiness scheduling problem are
found in Baker and Scudder (1990); Józefowska (2007); Kanet and Sridharan (2000); a recent review on heuristics in Kramer and Subramanian (2019). In the total earliness and tardiness scheduling problem, delaying the execution of an early job decreases its earliness costs. Hence, delaying a job can decrease the objective. Wan and Yuan (2013) show that the problem remains NP-hard in the strong sense when idle time is permitted. Therefore, if idle time is permitted, solving the scheduling problem involves deciding on idle time in addition to deciding on the job sequence. This can be solved with a linear program (Fry et al., 1987, 1996). A dedicated algorithm for setting idle times in O(n log n) time is described in Garey et al. (1988). A review on algorithms that decide on idle times is found in Kanet and Sridharan (2000). As described in Section 3.2, idle times are superfluous in S because they only increase the objective; thus we can skip idle time insertion in our problem.
3.3.3 Time-dependent scheduling
The classic way of scheduling with fixed processing times is extended by scheduling models with start time dependent processing times, as reviewed in Alidaee and Womer (1999); Cheng et al. (2004a), and treated in-depth in Gawiejnowicz (2008). Here, the processing time p_j of each job j ∈ J is a function of the job's start time t. Classic, constant processing times provide the nice property that a partial sequence of adjacent jobs does not influence the start times of any other job. With time-dependent processing times however, the effect of, e. g., swapping two jobs at the start of the schedule not only changes their respective processing times, but additionally entails a timing change of all subsequent jobs. Thus, already a change of a small part of the sequence requires reoptimizing all subsequent jobs accordingly. This implication of time-dependent processing times usually adds another layer of complexity to the problem. Hence, the time-dependency commonly poses a challenge already when minimizing the makespan C_max, which is most often trivial for fixed processing times.
The earliest work in time-dependent scheduling dates back to Melnikov and Shafransky (1979). They consider a processing time function of the form p_j(t) = l_j + g_j(t) for job j ∈ J with a monotonic, job-uniform added function g_j = g. In fact, most of the literature until today considers monotonic processing time functions. These are either nondecreasing (↗) or nonincreasing (↘), as visualized in Figure 4. In the nondecreasing category (↗), Melnikov and Shafransky (1979) show that problems with a job-uniform, nondecreasing g = g_j are solvable in polynomial time by sorting the jobs nondecreasingly by l_j. The job-specific, increasing linear case of g_j(t) = b_j t with b_j ≥ 0 is also solved by sorting the jobs, in this case nondecreasingly by l_j/b_j (jobs with b_j = 0 last), shown independently in Browne and Yechiali (1990); Gawiejnowicz and Pankowska (1995); Gupta and Gupta (1988); Tanaev et al. (1994, p. 189); and, according to Gawiejnowicz (2008), in Wajs (1986, in Polish language). An extension is to add a point in time until which the processing time is constant. This yields the piecewise-linear, job-specific, nondecreasing function g_j(t) = max{0, b_j (t − π)} for a given common point in time π. Then, the problem still permits an FPTAS (Cai et al., 1998; Kovalyov and Kubiak, 1998), but the decision version of the problem becomes NP-hard (Kononov, 1997; Kubiak and van de Velde, 1998). Instead allowing arbitrary π_j for a common slope b_j = b, the problem remains NP-hard (Kononov, 1997). These results are similar in the symmetric, nonincreasing category (↘), although it necessitates conditions on l_j to ensure p_j ≥ 0 for each job j ∈ J. Again, sorting solves the job-independent nonincreasing case g_j = g (Melnikov and Shafransky, 1979), and the job-specific linear case g_j(t) = −a_j t with a_j ∈ [0, 1] (Ho et al., 1993).
Figure 4: Time-dependent scheduling models with piecewise-linear processing times in the literature are mostly monotonic, e.g., as depicted in (a), (b). In this work, we study the nonmonotonic case with slopes as in (c).
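For concreteness, the three slope patterns of Figure 4 can be written down directly. The following lines are only an illustrative paraphrase of the formulas in this section, not code from the thesis:

```python
# (a) nondecreasing:   g(t) = max(0, b * (t - pi))
# (b) nonincreasing:   g(t) = max(a * (pi - t), 0)
# (c) nonmonotonic:    g(t) = max(a * (pi - t), b * (t - pi)), as in Definition 2.1
def processing_time(t, l_j, pi_j, a, b):
    """Start-time dependent processing time p_j(t) = l_j + g_j(t), case (c)."""
    return l_j + max(a * (pi_j - t), b * (t - pi_j))
```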
The piecewise-linear nonincreasing case g_j(t) = max{a_j (π − t), 0} with a_j ∈ [0, 1] is in turn NP-hard and has an FPTAS (Cheng et al., 2003; Ji and Cheng, 2007). In this study, we join these known, monotonic forms into one, nonmonotonic form. In its basic form, g_j consists of two linear pieces: one decreasing and one increasing piece. Hence, g_j(t) = max{a (π_j − t), b (t − π_j)} for an ideal start time π_j of each job j ∈ J, and slopes a ∈ [0, 1], b ∈ [0, ∞), as in Definition 2.1. The two monotonic cases and their combination are depicted in Figure 4. This is a novel category, first treated for the problem variant of a variable global start time, a common ideal start time π_j = π, and a = b in Farahani and Hosseini (2013). They provide a polynomial time algorithm and show that in optimal schedules, one job starts exactly at π, the jobs before π are sorted nonincreasingly by l_j, and the jobs after π are sorted nondecreasingly by l_j. In this work, we introduce a novel algorithm that confirms this result and extends it to the case of arbitrary slopes a, b in Section 4.6. The problem with equal slopes a = b and a fixed global start time is first introduced in Sedding and Jaehn (2014). They sketch two lower bounds and a branch and bound scheme. The study in Jaehn and Sedding (2016) uses a nonmonotonic piecewise-linear time-dependent processing time function as well. However, they state a problem that calculates the job's processing time not by measuring each job's start time difference to its ideal start time. Instead, they measure the job's absolute difference between its so-called
mid-time m and a given ideal mid-time M_j, j ∈ J. The mid-time is when exactly half of the job is processed, hence m = (t + C)/2 for the job's actual start time t and completion time C. This has the advantage that a job's processing time is completely symmetric around M_j. Therefore, optimal schedules exhibit a V-shape for a common M_j = M: the jobs before M are sorted nonincreasingly by l_j, and the jobs after M are sorted nondecreasingly by l_j. Then, it is to decide which jobs go before M_j, and which after. Jaehn and Sedding (2016) show that this problem is NP-hard by reduction from Even Odd Partition. Moreover, they show that the variant with a variable global start time is polynomial. For arbitrary M_j, they provide a lower bound and dominance rules that are utilized in a branch and bound procedure and as well in a truncated, heuristic branch and bound search. They as well establish a dynamic programming search that builds on Held and Karp's (1962) dynamic programming scheme. Moreover, Jaehn and Sedding (2016) formulate a mixed integer program, which uses positional assignment variables to assign each job to a position in the sequence. A numerical experiment compares the performance of these procedures on instances with up to n = 64 jobs, with convincing results. The problem in this work studies start time dependency of processing times with asymmetric slopes a, b, and the schedule starts at a given global start time. Although it is possible to transfer some lessons learned from Sedding and Jaehn (2014); Jaehn and Sedding (2016), the novel problem setting requires new insights on polynomial cases for constructing a lower bound and a branch and bound algorithm, as well as a novel NP-hardness proof.
3.4 Polynomial cases
Let us introduce two polynomial cases of S, which are moreover utilized in lower bounds for a branch and bound search. We first state and prove the lemmata, then give several examples for an illustration.
Lemma 3.1. Given an instance of S and a job sequence S. If each job starts at or after its ideal start time, t_j ≥ π_j for all j ∈ J, then
    C_max = t_min (1 + b)^n + Σ_{j∈J} (l_j − b π_j) (1 + b)^{n−S(j)},   (1)
and
    t_min = C_max (1 + b)^{−n} − Σ_{j∈J} (l_j − b π_j) (1 + b)^{−S(j)}.   (2)
If S sorts the jobs nondecreasingly by l_j − b π_j, then C_max in Equation 1 is minimum.

Proof. Given S as specified above and t_min = 0. Without loss of generality, S(j) = j for all j ∈ J. We first express φ(S) in closed form. First note that C_j = Σ_{k=1,...,j} p_k. Accordingly, φ(S) = C_n = Σ_{k=1,...,n} p_k. Given the condition π_j ≤ t_j, the max-term in the processing time function is simplified. Let p_0 = l_0 = π_0 = t_0 = 0 for a virtual job 0 and compare p_j to p_{j−1} for j ∈ J:
    p_j − p_{j−1} = [l_j + b (Σ_{k=1,...,j−1} p_k − π_j)] − [l_{j−1} + b (Σ_{k=1,...,j−2} p_k − π_{j−1})]
⇔  p_j − p_{j−1} = l_j − l_{j−1} − b π_j + b π_{j−1} + b p_{j−1}
⇔  p_j − (1 + b) p_{j−1} = l_j − l_{j−1} − b π_j + b π_{j−1}
⇔  p_j / (1 + b)^j − (1 + b) p_{j−1} / (1 + b)^j = (l_j − l_{j−1} − b π_j + b π_{j−1}) / (1 + b)^j
⇔  p_j / (1 + b)^j − p_{j−1} / (1 + b)^{j−1} = (l_j − l_{j−1} − b π_j + b π_{j−1}) / (1 + b)^j.
Define ψ_j = p_j / (1 + b)^j. Then, ψ_0 = 0,
    ψ_j − ψ_{j−1} = (l_j − l_{j−1} − b π_j + b π_{j−1}) / (1 + b)^j,  and
    ψ_j − ψ_0 = Σ_{k=1,...,j} (ψ_k − ψ_{k−1})
⇔  ψ_j = Σ_{k=1,...,j} (l_k − l_{k−1} − b π_k + b π_{k−1}) / (1 + b)^k
⇔  p_j = Σ_{k=1,...,j} (l_k − l_{k−1} − b π_k + b π_{k−1}) (1 + b)^{j−k}.
We append a virtual job n + 1 with l_{n+1} = π_{n+1} = 0. It has
    p_{n+1} = Σ_{k=1,...,n+1} (l_k − l_{k−1} − b π_k + b π_{k−1}) (1 + b)^{n+1−k}
            = Σ_{k=1,...,n+1} [(l_k − b π_k) (1 + b)^{n+1−k} − (l_{k−1} − b π_{k−1}) (1 + b)^{n+1−k}]
            = Σ_{k=1,...,n} (l_k − b π_k) [(1 + b)^{n+1−k} − (1 + b)^{n−k}]
            = b Σ_{k=1,...,n} (l_k − b π_k) (1 + b)^{n−k}.
As job n + 1 starts at C_n, its processing time is p_{n+1} = b C_n and
    C_n = Σ_{k=1,...,n} (l_k − b π_k) (1 + b)^{n−k}.
The factors (1 + b)^{n−k} are positive and decreasing for k = 1, . . . , n. By requirement, l_j − b π_j ≥ l_{j−1} − b π_{j−1}. By Hardy et al. (1923, Theorem 368, p. 261), sequence S minimizes C_n. A value t_min ≠ 0 is accounted for by prepending a virtual job 0 with p_0 = l_0 = t_min, which can be negative, and t_0 = 0. Then, Equation 1 follows. A multiplication of Equation 1 with (1 + b)^{−n} and basic arithmetic operations yield Equation 2.
Lemma 3.2. Given an instance of S and a job sequence S. If each job starts at or before its ideal start time, t_j ≤ π_j for all j ∈ J, then
    C_max = t_min (1 − a)^n + Σ_{j∈J} (l_j + a π_j) (1 − a)^{n−S(j)},   (3)
and
    t_min = C_max (1 − a)^{−n} − Σ_{j∈J} (l_j + a π_j) (1 − a)^{−S(j)}.   (4)
If S sorts the jobs nonincreasingly by l_j + a π_j, then C_max is minimum.

Proof. Analogous to Lemma 3.1.
Both polynomial cases of S require certain sort orders with respect to l_j and π_j. In the first case of Lemma 3.1, if sorting the jobs nondecreasingly with respect to l_j − b π_j yields a sequence where each start time is not before π_j, then the sequence is optimal. For example, given an instance with π_j = t_min for all jobs j ∈ J: ordering the jobs in this way yields an optimal sequence because each job's start time is not before its position π_j. An example with distinct π_j is given in Figure 5. The second polynomial case in Lemma 3.2 emerges from the symmetric variant: the sequence is optimal if each job starts not after its corresponding position when sorting the jobs nonincreasingly with respect to l_j + a π_j. Consider an instance with slope a = 0.3, π_j = 20, and l_j = j for j = 1, . . . , n with n = 4. Then, sorting the jobs nonincreasingly by l_j + a π_j yields sequence ⟨4, 3, 2, 1⟩ with C_max = 20.44, and each job starts not after its position (the last starts at t_1 = 19.2). Therefore, the sequence attains a minimum objective C_max. An example with distinct π_j is given in Figure 6. In addition to the optimality criteria, Lemma 3.1 enables calculating C_max with one closed formula for any sequence with t_j ≥ π_j for all j ∈ J, and similarly Lemma 3.2 with one other closed formula if t_j ≤ π_j for all j ∈ J.
Figure 5: Example instance for S with b = 0.2, n = 4, π_j = j − 1, and l_j = j for j = 1, . . . , 4 except for l_4 = 3. Sorting these jobs nondecreasingly by l_j − b π_j results in sequence ⟨1, 2, 4, 3⟩. This sequence has t_j ≥ π_j for all j = 1, . . . , 4. Therefore, it satisfies the conditions in Lemma 3.1. Hence, its objective C_max = 9.8 is optimal. Note that sequence ⟨1, 2, 3, 4⟩ has a slightly higher objective of C_max = 9.84; it is suboptimal, which is counterintuitive at first glance.
Figure 6: Example instance for S with a = 0.2, n = 4, π_j = j, and l_j = 1 for j = 1, . . . , 4. Sorting these jobs nonincreasingly by l_j + a π_j results in sequence ⟨1, 2, 3, 4⟩. This sequence has t_j ≤ π_j for all j = 1, . . . , 4. Therefore, it satisfies the conditions in Lemma 3.2. Hence, its objective C_max = 4.5904 is optimal.
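As an illustration (not part of the thesis), the following Python sketch evaluates a sequence iteratively with the processing-time form reconstructed above, p_j(t) = l_j + max{a(π_j − t), b(t − π_j)}, and compares the result with the closed form of Equation 1 on the instance of Figure 5. All function names are chosen freely for this sketch.

```python
def completion(t, l, pi, a, b):
    # C_j(t) = t + l_j + max{a(pi_j - t), b(t - pi_j)}
    return t + l + max(a * (pi - t), b * (t - pi))

def makespan(seq, l, pi, a, b, t_min=0.0):
    t = t_min
    for j in seq:                      # jobs are 0-indexed here
        t = completion(t, l[j], pi[j], a, b)
    return t

def makespan_late_closed_form(seq, l, pi, b, t_min=0.0):
    # Equation (1): valid if every job starts at or after its ideal start time
    n = len(seq)
    return t_min * (1 + b) ** n + sum(
        (l[j] - b * pi[j]) * (1 + b) ** (n - k)
        for k, j in enumerate(seq, start=1))

# Instance of Figure 5: b = 0.2, l = (1, 2, 3, 3), pi = (0, 1, 2, 3).
l, pi, a, b = [1, 2, 3, 3], [0, 1, 2, 3], 0.1, 0.2   # a is irrelevant: all jobs start late
seq = [0, 1, 3, 2]                                   # sequence <1, 2, 4, 3>
print(makespan(seq, l, pi, a, b))                    # ~ 9.8 (cf. Figure 5)
print(makespan_late_closed_form(seq, l, pi, b))      # ~ 9.8, matching Equation 1
```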
3.5 Lower bound
We construct a lower bound on the minimum objective that can be attained by completing a given partial solution of a S instance. A partial solution is given by a possibly empty job sequence S that starts at time 0 and completes at C. Jobs sequenced by S are denoted by the set J_F, the fixed jobs. The remaining open jobs are denoted by the set J_O = J \ J_F. A trivial lower bound adds all open job base lengths to C:
    LB_0 = C + Σ_{j∈J_O} l_j.
This bound is tight if a sequence S' can be constructed from S that has t_j = π_j for all j ∈ J_O.
We utilize the polynomial cases in Section 3.4 to construct an improved lower bound. It partitions J_O into three sets: set J_O^{(y)} is still treated as in LB_0, set J_O^{(x)} incorporates Lemma 3.1, and set J_O^{(z)} uses Lemma 3.2. Assuming we already know such a partition, let us first consider J_O^{(x)} and J_O^{(y)}.
• If it is possible to sequence the jobs of J_O^{(x)} nondecreasingly by l_j − b π_j in a sequence S', starting S' at C, and we get t_j ≥ π_j for all j ∈ J_O^{(x)}, then we improve LB_0 by using Equation 1 to
    LB_1 = C (1 + b)^{|J_O^{(x)}|} + Σ_{j∈J_O^{(x)}} (l_j − b π_j) (1 + b)^{|J_O^{(x)}|−S'(j)} + Σ_{j∈J'} l_j,
  where J' = J_O^{(y)}. This bound is tight if J_O^{(x)} = J.
• Secondly, let us look at set J_O^{(z)}. If it is possible to sequence the jobs of J_O^{(z)} nonincreasingly by l_j + a π_j in a sequence S'', starting S'' at LB_1, and we get t_j ≤ π_j for all j ∈ J_O^{(z)}, then we improve LB_1 by using Equation 3 to
    LB_2 = LB_1 (1 − a)^{|J_O^{(z)}|} + Σ_{j∈J_O^{(z)}} (l_j + a π_j) (1 − a)^{|J_O^{(z)}|−S''(j)}.
  This bound is tight if J_O^{(y)} = ∅.
Above, we assume a given partition of J_O into J_O^{(x)}, J_O^{(y)}, and J_O^{(z)}. To complete the lower bound calculation, we obtain it as follows by employing several linear time greedy procedures. For J_O^{(x)}, we require a list of all open jobs J_O sorted nondecreasingly by l_j − b π_j, which is obtained in linear time by subsetting a presorted list of all jobs J. We iterate this list of open jobs and append a job j to sequence S' as described above if t_j ≥ π_j, and insert it into J_O^{(x)}. Job set J_O^{(z)} is a subset of the remaining jobs J_O \ J_O^{(x)}. To obtain it, we require an upper bound value UB for the given problem instance, attained by, e. g., a heuristic. In any optimal solution, no job completes later than UB.
Figure 7: An example instance for S with a = b = 0.2, n = 6, π = (0, 1, 2, 7, 8, 9), and l = (2, 2, 2, 1, 1, 1). The upper chart displays the job arrangement that yields the sets J_O^{(x)} and J_O^{(z)}, whereas J_O^{(y)} = ∅ here. The lower chart shows the calculation of the lower bound LB_2 = 9.7952. It also indicates LB_1 = 9.6 assuming J_O^{(y)} = {4, 5, 6}, and shows LB_0 = 9 assuming J_O = J.
Therefore, we construct a sequence S'' that completes at UB, as in Equation 4. For this, we reversely iterate the remaining jobs sorted nonincreasingly by l_j + a π_j. Hence, each iteration step prepends a job j to S'' if t_j ≤ π_j, and inserts it into J_O^{(z)}. Finally, we set J_O^{(y)} = J_O \ (J_O^{(x)} ∪ J_O^{(z)}). The resulting lower bound is LB_2.
3.6 Dominance rule
Dominance rules speed up algorithms like branch and bound search by allowing for a comparison of partial solutions. We say that a partial solution S is dominated by a partial solution S' if the objective for S' is smaller than the objective for S for any possible placement of the remaining open jobs. The corresponding search branch for S can then be eliminated, thereby speeding up the search. In S, it is possible to establish a dominance condition between partial solutions that share the same set of fixed jobs.
Property 3.3. Given a S instance and partial solutions S and S' with the same set of fixed jobs J_F. If for the respective completion times C', C, there is C' ≤ C, then S' dominates S.

Proof. Completion time C_j(t_j), j ∈ J, monotonically increases with t_j. A composition of such functions is still monotonically increasing. Therefore, the completion time of any sequence of open jobs J_O = J \ J_F is minimized if they start as early as possible.

A similar dominance rule is used for classic scheduling problems with nondecreasing cost functions. There, of two partial solutions with the same set of fixed jobs, the one with smaller costs dominates the other.
3.7 Solution algorithms
We introduce several methods to solve S exactly and heuristically.
3.7.1 Dynamic programming
Property 3.3 allows us to establish a dynamic programming scheme for exactly solving S instances. Assume j is the last job in an optimum sequence S. By Property 3.3, its completion time is minimum if and only if it starts at the minimum completion time of the preceding job set J \ {j}, which is found recursively. As it is not known in advance which of the jobs in J is the last, we probe each of them and return the one with minimum completion time. This principle is described by the recurrence equation
    C*(J) = t_min,                        if J = ∅,
    C*(J) = min_{j∈J} C_j(C*(J \ {j})),   else,                            (5)
to determine the minimum objective value C_max = C*(J) for a given instance of S. In a recursive evaluation, the same subsets of J occur multiple times. Therefore, dynamic programming can be used, similar to Held and Karp (1962). Given n jobs, the resulting runtime is in O(n·2^n).
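A minimal Python sketch of this scheme (illustrative only), memoizing C* over job subsets encoded as bitmasks; the completion-time helper assumes the processing-time form of Definition 2.1, and all names are made up for this sketch:

```python
from functools import lru_cache

def solve_dp(l, pi, a, b, t_min=0.0):
    """Minimum makespan via recurrence (5), memoized over subsets."""
    n = len(l)

    def completion(t, j):
        # C_j(t) = t + l_j + max{a(pi_j - t), b(t - pi_j)}
        return t + l[j] + max(a * (pi[j] - t), b * (t - pi[j]))

    @lru_cache(maxsize=None)
    def c_star(subset):
        if subset == 0:                      # empty set: the schedule starts at t_min
            return t_min
        return min(completion(c_star(subset & ~(1 << j)), j)
                   for j in range(n) if subset & (1 << j))

    return c_star((1 << n) - 1)              # C*(J) over all n jobs
```

Each of the 2^n subsets is evaluated once with O(n) work, which matches the stated O(n·2^n) bound.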
3.7.2 Mixed integer program
A given instance of S can be stated as a mixed integer program (MIP) to
    minimize  C_n                                                                           (6a)
subject to
    C_0 = 0,                                                                                (6b)
    C_k ≥ C_{k−1} + Σ_{j=1,...,n} l_j x_{jk} − a (C_{k−1} − Σ_{j=1,...,n} π_j x_{jk}),   k = 1, . . . , n,   (6c)
    C_k ≥ C_{k−1} + Σ_{j=1,...,n} l_j x_{jk} + b (C_{k−1} − Σ_{j=1,...,n} π_j x_{jk}),   k = 1, . . . , n,   (6d)
    Σ_{j=1,...,n} x_{jk} = 1,   k = 1, . . . , n,                                           (6e)
    Σ_{k=1,...,n} x_{jk} = 1,   j = 1, . . . , n,                                           (6f)
with assignment variables x_{jk} ∈ {0, 1} for j, k = 1, . . . , n, each of which is one if job j is sequenced at S(j) = k and zero otherwise, and completion time variables C_k ∈ R for sequence position k = 0, . . . , n. By Constraint 6e and Constraint 6f, each job is assigned to a distinct sequence position. The objective is to choose a feasible assignment that minimizes C_n, the completion time of the job at the last sequence position. Constraint 6b sets the start time for the first job to zero. Constraint 6c and Constraint 6d limit the completion time from below, iteratively from the preceding completion time, which is either larger or smaller than π_j of the assigned job j, thus requiring two inequations. Note that these two constraints can be written more concisely as
    C_k ≥ (1 − a) C_{k−1} + Σ_{j=1,...,n} (l_j + a π_j) x_{jk},   k = 1, . . . , n,   (6g)
    C_k ≥ (1 + b) C_{k−1} + Σ_{j=1,...,n} (l_j − b π_j) x_{jk},   k = 1, . . . , n.   (6h)
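As an illustration only (not part of the thesis), the following sketch states the model with the concise constraints (6g) and (6h) in Python using the PuLP modeling library; all variable and function names are chosen freely here.

```python
import pulp

def build_mip(l, pi, a, b):
    n = len(l)
    prob = pulp.LpProblem("operation_sequencing", pulp.LpMinimize)
    # x[j][k] = 1 if job j is placed at sequence position k+1
    x = [[pulp.LpVariable(f"x_{j}_{k}", cat="Binary") for k in range(n)]
         for j in range(n)]
    c = [pulp.LpVariable(f"C_{k}", lowBound=0) for k in range(n + 1)]

    prob += c[n]                       # objective (6a): minimize C_n
    prob += c[0] == 0                  # (6b)
    for k in range(1, n + 1):
        # (6g) and (6h): completion time bounded by the early and the late linear form
        prob += c[k] >= (1 - a) * c[k - 1] + pulp.lpSum(
            (l[j] + a * pi[j]) * x[j][k - 1] for j in range(n))
        prob += c[k] >= (1 + b) * c[k - 1] + pulp.lpSum(
            (l[j] - b * pi[j]) * x[j][k - 1] for j in range(n))
        prob += pulp.lpSum(x[j][k - 1] for j in range(n)) == 1   # each position holds one job
    for j in range(n):
        prob += pulp.lpSum(x[j][k] for k in range(n)) == 1       # each job appears once
    return prob, x, c
```

For small n, the model can be handed to any MIP solver via prob.solve().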
Such a positional assignment based model is similarly constructed for classic scheduling models with constant processing times in Lasserre and Queyranne (1992), and further discussed in Keha et al. (2009); Queyranne and Schulz (1994).
3.7.3 Basic heuristics
An intuitive job sequence is in nondecreasing order of the box positions π_j. We call this the linear sequence (LS) heuristic. This linear sequence is mostly far from optimum, see our test results in Section 3.8.3. Therefore, it is reasonable to improve this solution. To this end, we apply a steepest descent hill climbing search to improve this initial box sequence; a sketch follows below. Repeatedly, the best neighbor of a sequence is chosen as the next sequence, until arriving at a local optimum. The neighborhood consists of swaps between all pairs of two jobs. We call this the linear sequence with a neighborhood search (LSNS) heuristic. The resulting solution can be improved even more with a second metaheuristic that tries to evade local minima. An example is the simulated annealing (SA) method (Kirkpatrick et al., 1983). It is effective in solving many combinatorial problems, most notably the traveling salesman problem. The origin lies in the analysis of crystallizing fluids. The idea is that a perfect crystal corresponds to a global minimum. Slower cooling of a fluid most often results in a better formed crystal. The simulated annealing method tries to replicate this process. For changing the solution, one can use the same neighborhood as before: swapping arbitrary job pairs. In contrast to a descending hill climbing search, this metaheuristic allows ascending to worse solutions to a certain degree. This enables leaving local minima for finding the global minimum.
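A minimal sketch (not the thesis implementation) of the swap-neighborhood steepest descent that underlies the LSNS heuristic; the makespan evaluation assumes the processing-time form of Definition 2.1, and the function names are illustrative:

```python
def makespan(seq, l, pi, a, b, t_min=0.0):
    t = t_min
    for j in seq:
        t += l[j] + max(a * (pi[j] - t), b * (t - pi[j]))
    return t

def hill_climb(seq, l, pi, a, b):
    """Steepest-descent search over the pairwise-swap neighborhood (LSNS)."""
    current = list(seq)
    current_val = makespan(current, l, pi, a, b)
    while True:
        best_move, best_val = None, current_val
        for i in range(len(current)):
            for k in range(i + 1, len(current)):
                cand = current[:]
                cand[i], cand[k] = cand[k], cand[i]
                val = makespan(cand, l, pi, a, b)
                if val < best_val:
                    best_move, best_val = cand, val
        if best_move is None:            # local optimum reached
            return current, current_val
        current, current_val = best_move, best_val
```

The LS starting point is the box-position order, e.g. sorted(range(n), key=lambda j: pi[j]); a simulated annealing variant would use the same swap moves but accept worsening swaps with a temperature-dependent probability.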
3.7.4 Branch and bound algorithm
Let us describe a branch and bound (B&B) algorithm to attain an exact solution for a given S instance. This solution procedure is a common strategy for solving optimization problems (Schöning, 2001). It searches the solution space by branching, and only visits those branches whose lower bound on the objective is smaller than the upper bound. The upper bound value is set to the objective value of the best known solution. It is usually initialized by a heuristic. We initialize the upper bound with the LSNS heuristic described in Section 3.7.3. Our branching strategy is to append the jobs one after another to an initially empty sequence. The root node thus has the empty sequence, open jobs J_O = J, and fixed jobs J_F = ∅. From an arbitrary node with J_O ≠ ∅, we branch out to each of the open jobs. Hence, for each j ∈ J_O, a new node emerges by appending j to the existing sequence, removing j from J_O, and adding j to J_F. For each of the new nodes, we carry out a bounding and a dominance check. The bounding check succeeds if the lower bound of the partial solution, as described in Section 3.5, is less than the current upper bound. The dominance check succeeds if we know of no dominant other sequence, as described below. If one of them fails, the node is eliminated. If we reach a node with J_O = ∅, the sequence is complete. If, moreover, its objective is lower than the current upper bound, its sequence is saved as the currently best, providing a new upper bound value.
Dominance Check In the dominance check for a partial solution given by sequence S, we perform two tests. The first test swaps the last job j in S with a preceding job. If the resulting sequence S' has a lower completion time than S, it is, by Property 3.3, dominant. To speed up this test in longer sequences, we only execute swaps of j with the last k = max{|J_O|, n − |J_O|} but one jobs. Note that we only consider swapping the last job to avoid repetition of tests that already happened earlier in the search tree. For the second test, we memoize the completion time of S in a hash table indexed by the set of fixed jobs J_F. If, later in the search, a node with the same J_F set occurs, we
recall the completion time C'. If C' is smaller than the current completion time C, the current node is dominated. Else, we reduce the stored value to C, the new best completion time for the fixed job set J_F. The number of stored values, however, grows to a high number, similar to the dynamic programming approach in Section 3.7.1. To reduce the memory consumption, we limit the number of stored values and employ a first-in first-out replacement strategy for the memoization hash table.
Traversing Order We employ a depth first search for traversing the search tree. The order in which children nodes are visited affects the total computation time. It is reduced if promising nodes are sorted to the front. We rank nodes by their lower bound value from Section 3.5 and visit nodes with a smaller value first.
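For illustration, a compact sketch (not the thesis implementation) of such a depth-first branch and bound; for brevity it uses only the trivial bound LB_0 instead of LB_2, starts the upper bound at infinity instead of the LSNS value, and ranks children by their completion time as a simple stand-in for the lower-bound ranking. The completion-time step assumes the processing-time form of Definition 2.1.

```python
def branch_and_bound(l, pi, a, b, t_min=0.0):
    n = len(l)
    best = {"val": float("inf"), "seq": None}
    memo = {}                                   # frozenset of fixed jobs -> best completion time

    def step(t, j):                             # completion time of job j started at t
        return t + l[j] + max(a * (pi[j] - t), b * (t - pi[j]))

    def visit(seq, fixed, c):
        if len(seq) == n:                       # complete sequence: update the upper bound
            if c < best["val"]:
                best["val"], best["seq"] = c, list(seq)
            return
        key = frozenset(fixed)
        if memo.get(key, float("inf")) <= c:    # dominated by an equal fixed-job set (Property 3.3)
            return
        memo[key] = c
        open_jobs = [j for j in range(n) if j not in fixed]
        children = sorted(open_jobs, key=lambda j: step(c, j))   # promising children first
        for j in children:
            c2 = step(c, j)
            lb0 = c2 + sum(l[k] for k in open_jobs if k != j)    # trivial bound LB_0
            if lb0 < best["val"]:                                # bounding check
                visit(seq + [j], fixed | {j}, c2)

    visit([], set(), t_min)
    return best["seq"], best["val"]
```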
Polynomial Cases Note that if it occurs that J_O^{(y)} = ∅, then a polynomial case as of Lemma 3.1 or 3.2 is present. By the described traversing order, the corresponding nodes are already visited first. Moreover, we can ignore all other branches in the current and descendant nodes.
3.8 Numerical results
In a numerical experiment, we assess the advantage of optimizing job sequences. Also, we would like to analyze the performance of the algorithms from Section 3.7. We test them on a variety of generated instances and statistically compare their results. As a main criterion, we utilize the median runtime and quartile deviation on instance groups of similar parameters, to find which of the algorithms quickly and robustly yield exact solutions. For the heuristics, we additionally compare solution quality by counting optimally solved instances and calculating the mean error. We use these criteria to evaluate which heuristic provides the best tradeoff between runtime and solution quality.
3.8.1 Instance generation
We generate instances of a variety of types to evaluate performance in different settings. A problem instance is characterized by its size, the factors a and b, and each assembly time and position. As instance size, we test for the number of jobs n ∈ {8, 12, 16, 20, 24, 28}. The worker velocity v is a multiple of the conveyor velocity. Therefore, we let v ∈ {2, 4, 8, 16}. In Section 2.4, we distinguish between three walking strategies. As strategies (A) and (B) are effectively equivalent, we obtain two variants for setting the factors a, b for each v:
(S1) a = 2/(v + 1) and b = 2/(v − 1), as in (A) and (B),
(S2) a = (2v + 1)/(v + 1)/2 and b = (2v + 1)/v/2, as in (C).
The resulting factors for both variants are listed in Table 1. The assembly time generation follows an established scheme (Jaehn and Sedding, 2016):
(L1) all equal assembly times l_1, . . . , l_n = 1,
(L2) all distinct assembly times {l_1, . . . , l_n} = {1, . . . , n}, randomly permuting the integers 1, . . . , n,
(L3) assembly times drawn uniformly from {1, . . . , 10},
(L4) assembly times drawn from a geometric distribution with the random variable X = ⌈−λ ln U⌉ for U uniformly distributed in [0, 1] and λ = 2.
Additionally, we generate n box widths in four variants:
(W1) all equal box widths w_1, . . . , w_n = 1,
(W2) all distinct box widths {w_1, . . . , w_n} = {1, . . . , n} in a random permutation,
(W3) box widths drawn uniformly from {2^0, . . . , 2^3} ∪ {3 · 2^0, . . . , 3 · 2^2}, reflecting seven divisions of ISO1-pallets,
(W4) box widths drawn from rounded-up gamma variates with shape 1.25 and unit scale, representing measured box width distributions at automotive assembly lines.
Then, we assign job j ∈ J to position π_j = Σ_{i=1,...,j−1} w_i. To fill the station (here, we let its width s = 10 · n), all positions of an instance are normalized to a total width of s by scaling each by s/Σ_{i=1,...,n} w_i, then rounding to integers. As processing times and positions are initially of different scale, we perform a harmonization. Ideally, the station width s equals the last completion time C_max. The linear sequence heuristic in Section 3.7.3 orders the jobs by box position. For harmonizing assembly times and positions, we use this sequence and scale the assembly times linearly by a positive rational factor. The appropriate factor minimizes the absolute difference |C_max − s| and is determined with the algorithm of Brent (1971) for finding a zero of a univariate function. In summary, the described scheme considers six numbers of jobs n, four times two variants for a and b, four assembly time variants, and four box width variants. For each parameter combination, we generate 10 instances. This yields a total of 6 · (4 · 2) · 4 · 4 · 10 = 7 680 test instances.
3.8.2 Exact algorithms
In Table 3, measured MIP and B&B runtimes are grouped by instance size n. The MIP runtime grows exponentially with n, as is expected for an NP-complete problem. Instances of size n = 24 are for the most part (75%) solved by the MIP within the time limit, but this number drops to only 38% for size n = 28. Similarly, the quartile deviation increases with n, except for the largest runtimes; here it decreases because we terminate solving an instance after the time limit. In contrast, the B&B manages to solve all given instances within the time limit. Its median runtime is 0.00 seconds even for the largest instances with n = 28. As is visible from Figure 8, there are at least some instances
Table 3: MIP, B&B runtime and performance.

n    | MIP Md   | MIP QD  | MIP solved | B&B Md | B&B QD | B&B solved
8    | 0.04     | 0.01    | 100%       | 0.00   | 0.00   | 100%
12   | 0.32     | 0.17    | 100%       | 0.00   | 0.00   | 100%
16   | 3.41     | 2.73    | 100%       | 0.00   | 0.00   | 100%
20   | 36.00    | 29.94   | 98%        | 0.00   | 0.00   | 100%
24   | 179.14   | 269.26  | 75%        | 0.00   | 0.00   | 100%
28   | ≥ 600.00 | 170.31  | 38%        | 0.00   | 0.00   | 100%
all  | 6.33     | 70.99   | 85%        | 0.00   | 0.00   | 100%

Md: median runtime in seconds; QD: quartile deviation in seconds; solved: percentage of instances solved in 10 minutes.
Figure 8: The exact MIP and B&B algorithms' runtimes in seconds are shown on a logarithmic scale in box plots, and a scatter plot with a normal confidence ellipse at 95% for each n.
Table 4: Mean walking time percentage MW for instances with n = 24 in optimum solutions for all pairs of walking strategy and worker velocity settings.

     | v = 2 | v = 4 | v = 8 | v = 16
S1   | 50%   | 33%   | 18%   | 10%
S2   | 43%   | 28%   | 18%   | 10%
MW   | 46%   | 31%   | 18%   | 10%
Table 5: Mean walking time percentage MW for instances with n = 24 in optimum solutions for all pairs of assembly time and box width settings.

     | L1  | L2  | L3  | L4
W1   | 0%  | 8%  | 7%  | 9%
W2   | 10% | 11% | 11% | 12%
W3   | 12% | 12% | 12% | 13%
W4   | 10% | 10% | 9%  | 12%
measurably over 0.00 seconds. The most difficult of those exhibit an exponential growth in runtime with n, as is expected. In the literature, material fetching accounts for about 10–15% of total work time (Scholl et al., 2013). Let us look at our most realistic choices for worker velocity, v = 16 and v = 8 (e.g., the case study in Klampfl et al. (2006) similarly assumes v = 13.6). Our results yield 10–18% mean walking time of total work time in optimal solutions, see Table 4. This is similar to the value in the literature. We discerned the walking time by walking strategy as well, and see that the advantage of strategy S2 becomes apparent for lower worker velocities. In Table 5, we differentiate the mean walking time percentage by assembly time and box width variant. We see that the trivial pair (L1, W1) achieves zero walking time because assembly time and box width are both of size one. Overall, box width variant W1 yields the smallest amount of walking time, and W3 the highest. While for
Table 6: Heuristics' runtime and performance.

n    | ID opt | ID MPE | HC opt | HC MPE | SA opt | SA MPE
8    | 52%    | 10%    | 98%    | 0.08%  | 99%    | 0.02%
12   | 38%    | 14%    | 96%    | 0.11%  | 99%    | 0.02%
16   | 26%    | 16%    | 92%    | 0.17%  | 97%    | 0.03%
20   | 21%    | 16%    | 88%    | 0.17%  | 94%    | 0.05%
24   | 18%    | 16%    | 86%    | 0.17%  | 92%    | 0.03%
28   | 16%    | 16%    | 80%    | 0.19%  | 88%    | 0.05%
all  | 29%    | 15%    | 90%    | 0.15%  | 95%    | 0.03%

opt: percentage of optimally solved instances; MPE: mean percentage error to minimum walking time.
the assembly time, the smallest amount is indeterminate; the highest is for L4. The pair (L4, W3) has the highest amount overall.
3.8.3 Heuristics
Results for the heuristics are shown in Table 6. It lists median runtimes, the fraction of optimally solved instances, and the mean percentage error MPE, which is the mean of the percentage walking time error
    PE = (φ − φ*) / (φ* − Σ_{j∈J} l_j) · 100%,   (7)
comparing achieved and minimum walking time, where φ* is the optimum and φ the heuristic's objective. The ID heuristic orders all jobs in box sequence. Surprisingly, this is optimal in 29% of all instances. However, its use is fairly limited: an MPE of 15% is quite high. In practice, this natural order is often used, hence the motivation for improvement is high. Applying a hill climbing search (HC) on this greatly improves the result. Then, the
number of optimally solved instances rises to 90% while the MPE drops to 0.15%. Moreover, the median computation time is still not measurable. This result improves further with a simulated annealing search (SA) that temporarily allows worse solutions. Here, the number of optimally solved instances increases to 95% and the MPE decreases to 0.03%. However, applying the SA results in an increased computation time; for the largest instance size n = 28, it rises to 0.029 seconds. Given that the runtime of the B&B algorithm is far less, and given that the B&B delivers exact solutions, the SA heuristic's use is limited. If one eschews the implementation effort, a described heuristic is nonetheless able to deliver adequate results.
3.9 Conclusion
In this chapter, we model assembly operation sequencing at a workpiece that is on a moving assembly line to minimize walking time to static destinations at the line side. The resulting time-dependent scheduling problem permits asymmetric nonmonotonic processing times. This also continues research streams in the scheduling literature and takes the next logical step in unifying the monotonic processing time models; it furthermore extends the symmetric processing time case in Sedding and Jaehn (2014). We find important polynomial cases, which are used to derive a lower bound on partial schedules. Moreover, we introduce a dominance relation that yields a dynamic programming algorithm and, together with the lower bound, an improved branch and bound search. By truncating its search tree, we obtain a heuristic version. For a comparison, we devise a greedy heuristic that is enhanced with a local search and a simulated annealing procedure, and a mixed integer program based on positional assignment. The algorithms are evaluated in a numerical test on generated instances that are artificial and realistic, as well as trivial and hard. The results show that the mixed integer program is slow, the heuristics are fast, and the branch and bound based algorithms are even faster than the heuristics on most tested instances. With this contribution in time-dependent scheduling and walking time optimization, we lay the base for derivative research in several directions.
4 Operation sequencing with a single box position
4.1 Introduction
Let us consider a special case of the assembly operation or job sequencing problem S from the previous chapter: one common box position for all jobs. This covers the practical case where all parts fit into one box or shelf. Then, a walking time optimization needs to decide which of the jobs shall start before the box position, and which after. First we analyze the complexity of this problem: on the one hand, we show that it is NP-hard by reduction from Even Odd Partition; on the other, we devise a fully polynomial time approximation scheme (FPTAS). This result implies that the general problem S of Definition 3.1 is NP-hard as well. Then, we consider a relaxed, variable global start time: this variant permits a polynomial time algorithm that includes solving a bipartite weighted matching problem. Therefore, it is usable to improve the lower bound in Section 3.5 for subsets of jobs with common box positions. Solving this optimization problem has the interesting side effect that it implicitly places the common box to its best position π − t_min in relation to the global start time t_min. Therefore, it solves the problem of placing a single box and optimizing the operation sequence at the same time, which removes both Assumption A2 and Assumption A3. Let us first formally define the two considered problems.
Section 4.3 is previously published in Sedding (2018a,b); an abstract of Section 4.5 is published in Sedding (2017a).
Figure 9: Example instance for Ŝ with a = 0.1, b = 0.2, n = 8, π = t_min + 10, and l_j = j for j = 1, . . . , 8. The depicted optimum sequence has straddler job χ = 2 with t_χ ≤ τ ≤ C_χ, jobs A = {1, 3, 4} coming before χ, and B = {5, 6, 7, 8}. Within A and B, jobs are sequenced sorted with respect to l_j. Note that a job with smallest l_j is not automatically qualified for being a straddler job.
Definition 4.1 (Problem Ŝ). The special case of S with only one common box position π_j = π for all j ∈ J of a given instance is denoted by Ŝ.
Definition 4.2 (Problem Ĉ). The variant of Ŝ with a variable global start time t_min is denoted by Ĉ.
In the remainder, we assume that given instances of Ŝ do not correspond to an already studied polynomial case of Section 3.4. Then, an optimum solution partitions the job set J into three sets: A is the set of jobs that start at or before π, set B are the jobs that start at or after π, and job set {χ} consists of a single straddler job that starts at or before π and completes at or after π. Hence, the straddler job is scheduled in between A and B. Please note that the job with minimum assembly time is not necessarily the straddler job in an optimal sequence. Within A or B, the job sequence is trivial, because Lemma 3.2 and Lemma 3.1 apply, respectively. Therefore, the main computational challenge is to decide on the partition into the sets A, {χ}, B. These circumstances are depicted with an example in Figure 9. This chapter is organized as follows. First, we review literature that is closely related to the two problems. In Section 4.3, we show that Ŝ is NP-hard. Then, we introduce a dynamic programming algorithm in Section 4.4. It provides the basis of the FPTAS in Section 4.5. Lastly, we consider the Ĉ problem in Section 4.6 and devise a polynomial time algorithm, and describe an improved lower bound for S.
4.2 Related literature
Related literature on these problem settings is extensively reviewed in Section 3.3. Closely related literature regarding the complexity of Ŝ is the NP-hardness proof in Jaehn and Sedding (2016) for a similar problem, for which the job's processing time is perfectly symmetric before and after the common box position. They as well reduce from Even Odd Partition. However, it is not possible to transfer the result to Ŝ because it is start time dependent, and furthermore allows for asymmetric slopes a, b. To incorporate the novel problem setting, we devise a different proof, which is, incidentally, much more compact than the proof in Jaehn and Sedding (2016). The closest literature regarding the approximation of Ŝ lies in the area of monotonic time-dependent processing times, hence they require either a = 0 or b = 0, see Figure 4. There, the sequencing problem is solvable in polynomial time by sorting the jobs nondecreasingly by l_j. By allowing for job-specific slopes a_j or b_j, the monotonic processing time functions are NP-hard (Cheng et al., 2003; Kononov, 1997; Kubiak and van de Velde, 1998), but permit an FPTAS (Cai et al., 1998; Ji and Cheng, 2007; Kovalyov and Kubiak, 1998). Our results draw a clear border to the monotonic problem variant: we recognize that the nonmonotonic Ŝ is already NP-hard for job-uniform slopes a, b; nonetheless, it permits an FPTAS. The Ĉ problem with a flexible global start time is previously treated in the literature with symmetric slopes a = b by Farahani and Hosseini (2013). They state a greedy polynomial time algorithm to solve the symmetric variant in polynomial time. In comparison to Farahani and Hosseini (2013), we incorporate asymmetric slopes in Ĉ. This extended problem is nonetheless solved in polynomial time by repeatedly solving a minimum weight bipartite matching problem.
4.3 Computational complexity
Let us analyze the computational complexity of Ŝ. We reduce from the Even Odd Partition problem, which is introduced in Garey et al. (1988) and shown to be NP-complete in the ordinary sense by reduction from the Partition problem.
Definition 4.3 (Even Odd Partition (Garey et al., 1988)). We are given a set of n = 2h natural numbers X = {x_1, . . . , x_n} where x_{i−1} < x_i for all i = 2, . . . , n. The question is whether there exists a partition of X into subsets X_1 and X_2 := X \ X_1 such that Σ_{x∈X_1} x = Σ_{x∈X_2} x, while for each i = 1, . . . , h, set X_1 contains exactly one element of the set {x_{2i−1}, x_{2i}}.
We reduce from Even Odd Partition to the decision version of Ŝ, which asks, for a given threshold Φ ∈ Q, if there exists a sequence S with φ(S) ≤ Φ.
Theorem 4.1. The decision version of Ŝ is NP-complete.
Proof. Given an arbitrary instance of Even Odd Partition as of Definition 4.3, let us define a corresponding instance of the decision version of Ŝ, and show in the following that a solution exists if and only if there exists a solution for the Even Odd Partition instance. In the corresponding instance, define the job set J = {1, . . . , 2n + 1}, with l_{n+j} = 0 for j = 1, . . . , n, with l_{2n+1} = 2q for q = ½ Σ_{x_i∈X} x_i, and with l_{2k−i} = x_{2k−i} (1 + b)^{k−h−1} for k = 1, . . . , h and i = 0, 1. Hence, l_{j−1} < l_j for j = 2, . . . , n, and l_n < l_{2n+1}. Choose an arbitrary common slope a_j = a for all j ∈ J with 0 < a < 1, and set b_j = b = (1 − a)^{−1} − 1. Then, b > 0 and (1 + b) = (1 − a)^{−1}. Moreover, set the common ideal start time π = 0, the global start time t_min = −q, and the threshold Φ = 4q. Solving the decision version of Ŝ results in a sequence S with objective φ(S) = C_max − t_min ≤ Φ. This sequence either already has a certain format, or it can be aligned to this format in polynomial time without increasing the objective as follows.
First, assume job 2n + 1 is the last job in S. Else, if t_{2n+1} ≥ 0, by sorting the jobs starting after 0 according to Lemma 3.1 in polynomial time, it
can take the last position without increasing C_max. Otherwise, if t_{2n+1} < 0, repeatedly swap it with its successor job j and sort all other jobs starting before 0 according to Lemma 3.2; this does not increase C_max either, because l_j < l_{2n+1} and
    C_j = C_j(C_{2n+1}(t_{2n+1})) = (t_{2n+1} (1 − a) + l_{2n+1}) (1 + b) + l_j
        > C_{2n+1}(C_j(t_{2n+1})) = (t_{2n+1} (1 − a) + l_j) (1 + b) + l_{2n+1}.
Second, the jobs completing before or at 0 can be sorted according to Lemma 3.2, while the jobs with zero basic processing time l_j = 0 are the last that complete before or at 0. Analogously, the jobs starting at or after 0 adhere to Lemma 3.1, while the jobs with l_j = 0 are the first. Now, sequence S can be narrowed down to attain either of the following two forms:
(i) Either, the sequence can be split into S = S_1, S_0, S_2, where partial sequence S_1 contains the jobs completing before or at 0, S_0 contains all the jobs that start and complete at 0, and S_2 the jobs starting at or after 0.
(ii) Otherwise, it can be split into S = S_1, S_1', ⟨χ⟩, S_2', S_2, where S_1' and S_2' together contain all the jobs with l_j = 0, while ⟨χ⟩ contains a straddling job χ with t_χ < 0 < C_χ, S_1 the jobs completing before or at 0, and S_2 the remaining jobs.
While form (i) is the desired form, let us rule out form (ii). Consider the sequences S_1' and S_2'. They contain all n jobs with zero basic processing time l_j = 0. Let ν denote the number of jobs in S_2'. Then, S_1' contains n − ν jobs. Sequence S_1' starts at some time t < 0. According to Equation 4, it completes at t_χ = t (1 − a)^{n−ν}. The straddler job χ completes at C_χ = l_χ + t_χ (1 − a). Sequence S_2' starts at C_χ, hence it completes according to Equation 1 at C = C_χ (1 + b)^ν. Together, the completion time for S_1', ⟨χ⟩, S_2' is
    C = (l_χ + t (1 − a)^{n−ν+1}) (1 + b)^ν
      = (l_χ + t (1 + b)^{ν−n−1}) (1 + b)^ν.
Its first and second derivatives are
    dC/dν = (l_χ + 2t (1 + b)^{ν−n−1}) (1 + b)^ν log(1 + b),
    d²C/dν² = (l_χ + 4t (1 + b)^{ν−n−1}) (1 + b)^ν log²(1 + b).
The completion time C has an extremum at a ν with dC/dν = 0. As t < 0, the second derivative at the same ν is d²C/dν² < 0. Therefore, this ν value maximizes C. It follows that C is minimized either for ν = 0 or for ν = n with 0 ≤ ν ≤ n. Therefore, the jobs with zero basic processing time can altogether be moved to S_1' or S_2', respectively, without increasing the objective.
Assume that the jobs n + 1, . . . , 2n with l_j = 0 all start at t_j ≥ 0. Hence, they are either in sequence S_0 for case (i), or they are in sequence S_2' while S_1' is empty for case (ii) in the following elaboration; the opposite case with t_j ≤ 0 is handled analogously. With this assumption, iff S adheres to form (i), the sequence S_1 completes at time 0, denoted by C̃. Otherwise, for form (ii), straddler job χ completes at C_χ > 0, denoted by C̃ as well. Thus, C̃ ≥ 0 in sequence S. Let t̃ specify the start time of S_2. Hence, sequence S_0, or S_2' in case (ii), starts at C̃ and completes at t̃ = C̃ (1 + b)^n. Define h_1 as the number of jobs in S_1, and define h_2 = n − h_1. Given C̃ ≥ 0 and Equation 4, there is
    t_min = C̃ (1 − a)^{−h_1} − Σ_{k=1,...,h_1} l_{S_1^{−1}(k)} (1 − a)^{−k}
          = C̃ (1 + b)^{h_1} − Σ_{k=1,...,h_1} l_{S_1^{−1}(k)} (1 + b)^k.
As S_2 starts at t̃ = C̃ (1 + b)^n, with Equation 1 there is
    C_max = C̃ (1 + b)^{n+h_2+1} + Σ_{k=1,...,h_2+1} l_{S_2^{−1}(k)} (1 + b)^{h_2+1−k}
          = C̃ (1 + b)^{n+h_2+1} + l_{2n+1} + Σ_{k=1,...,h_2} l_{S_2^{−1}(k)} (1 + b)^{h_2+1−k}
          = C̃ (1 + b)^{n+h_2+1} + l_{2n+1} + Σ_{k=1,...,h_2} l_{S_2^{−1}(h_2+1−k)} (1 + b)^k.
Define ḡ = Σ_{k=1,...,n} (g_1(k) + g_2(k)) (1 + b)^k with
    g_1(k) = l_{S_1^{−1}(k)} if 1 ≤ k ≤ h_1, and 0 else,
    g_2(k) = l_{S_2^{−1}(h_2+1−k)} if 1 ≤ k ≤ h_2, and 0 else,
and d = (1 + b)^{n+h_2+1} − (1 + b)^{h_1}. Because h_1 ≤ n and h_2 ≥ 0, there is d > 0. Then,
    Φ ≥ C_max − t_min
⇔  4q ≥ C̃ d + l_{2n+1} + Σ_{k=1,...,h_1} l_{S_1^{−1}(k)} (1 + b)^k + Σ_{k=1,...,h_2} l_{S_2^{−1}(h_2+1−k)} (1 + b)^k
⇔  2q ≥ C̃ d + Σ_{k=1,...,n} (g_1(k) + g_2(k)) (1 + b)^k = C̃ d + ḡ.
Sequence S fulfills this inequality by requirement. Let us show that the minimum of ḡ is 2q, which means that C̃ = 0 in the inequality. For any i, j ∈ {1, 2} such that i ≠ j, if g_i(k) = 0 for some k while g_j(k + 1) > 0, then ḡ is not minimum: it decreases by resequencing S such that g_i(k) > 0 and g_j(k + 1) = 0, because (1 + b)^k < (1 + b)^{k+1}. By this argument and with h_1 + h_2 = 2h, it follows that h_1 = h_2 = h. Moreover, a minimum ḡ has g_i(k − 1) ≥ g_j(k) for k = 2, . . . , h and any i, j = 1, 2, because (1 + b)^{k−1} < (1 + b)^k. This is the case for an optimum S as of Lemma 3.1 and Lemma 3.2. Therefore, S has {S(2k − 1), S(2k)} = {h + 1 − k, h + k} (in any order) for k = 1, . . . , h, and
    ḡ = Σ_{k=1,...,n} (g_1(k) + g_2(k)) (1 + b)^k
      = Σ_{k=1,...,h} (l_{2k−1} + l_{2k}) (1 + b)^{h+1−k}
      = Σ_{k=1,...,h} (x_{2k−1} (1 + b)^{k−h−1} + x_{2k} (1 + b)^{k−h−1}) (1 + b)^{h+1−k}
      = Σ_{k=1,...,h} (x_{2k−1} + x_{2k}) = 2q.
Hence, the minimum value of ḡ is 2q. Because Φ ≥ C_max − t_min holds for S, it follows that C_max − t_min = Φ and C̃ = 0. Sequence S thus adheres to form (i). Following these arguments, there is ḡ = 2q, and S_1^{−1}(h + 1 − j) ∈ {2j − 1, 2j} for j = 1, . . . , h. With t_min = −q and Equation 3, there is
    q = Σ_{k=1,...,h} l_{S_1^{−1}(k)} (1 + b)^k
      = Σ_{k=1,...,h} l_{S_1^{−1}(h+1−k)} (1 + b)^{h+1−k}
      = Σ_{k=1,...,h} x_{S_1^{−1}(h+1−k)} (1 + b)^{k−h−1} (1 + b)^{h+1−k}
      = Σ_{j=1,...,h} x_{S_1^{−1}(j)}.
On the other hand, S_2^{−1}(j) ∈ {2j − 1, 2j} \ {S_1^{−1}(h + 1 − j)} for j = 1, . . . , h, and C_max = 3q. This allows to analogously transform Equation 1 to
    q = Σ_{j=1,...,h} x_{S_2^{−1}(j)} = Σ_{j=1,...,h} x_{S_1^{−1}(j)}.
Concluding, the sets X_1 = {x_{S_1^{−1}(j)} | j = 1, . . . , h} and X_2 = X \ X_1 are a solution for the Even-Odd Partition instance. Vice versa, a solution X_1, X_2 allows constructing such a sequence S with φ(S) = Φ. Therefore, the Even Odd Partition instance solves the corresponding Ŝ decision instance and vice versa. As the reduction is polynomial and as, given a correct partition, S and φ(S) (where rational values are encoded as a division of two integers) can be obtained in polynomial time, the decision version of Ŝ is NP-complete, and Ŝ is NP-hard.
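To make the reduction concrete, the following small sketch (illustrative only, not from the thesis) maps an Even Odd Partition instance to the corresponding Ŝ decision instance defined in the proof; the slope a is an arbitrary choice in (0, 1), as the proof allows, and the dictionary keys are made-up names:

```python
def reduction_instance(x, a=0.5):
    """Map an Even Odd Partition instance x_1 < ... < x_n (n = 2h) to the
    S-hat decision instance used in the proof of Theorem 4.1."""
    n = len(x)                        # n = 2h numbers
    h = n // 2
    b = 1.0 / (1.0 - a) - 1.0         # so that (1 + b) = (1 - a)^(-1)
    q = sum(x) / 2.0
    l = {j: 0.0 for j in range(1, 2 * n + 2)}   # jobs n+1, ..., 2n keep length 0
    for k in range(1, h + 1):
        for i in (0, 1):
            l[2 * k - i] = x[2 * k - i - 1] * (1 + b) ** (k - h - 1)
    l[2 * n + 1] = 2.0 * q
    return {"l": l, "a": a, "b": b, "pi": 0.0, "t_min": -q, "Phi": 4.0 * q}
```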
4.4 Dynamic programming algorithm
In the following, we describe the dynamic programming algorithm that is later used for formulating an FPTAS. It is started n times, once for each possible straddler job χ ∈ J. For a simplified presentation, we decrease n by one, relabel the jobs such that l_j ≥ l_{j+1} for 1 ≤ j < n, and let χ = n + 1 in the following. With this, the algorithm consists of n stages. In the jth stage, job j is inserted into all partial solutions of the preceding stage. There are two ways to insert job j: either into set A, which represents the jobs started before π, or into set B. Any partial solution is represented by a three-dimensional vector [x, y, z] of nonnegative rational numbers, described as follows:
• The first component, x, denotes the maximum completion time C_k of the jobs in set A, attained by the last job k = max(A), which has the smallest assembly time under the given job numbering.
• The y component describes the proportional increase of the z component if changing the start time t of the jobs in set B, i. e., y = dz/dt. By Lemma 3.1, y = (1 + b)^{|B|}.
• The z component denotes the makespan of the jobs in set B if they are started at time π, i. e., z = max_{j∈B} C_j − π given that min_{j∈B} t_j = π.
After the last stage, the straddler job χ is inserted in between the sequences of the job sets A and B. Then, the smallest makespan C^χ_max for straddler χ is returned. We require that the instances are nontrivial in the sense that sequence ⟨1, . . . , n, χ⟩ yields C_max > π.
Algorithm 1 Dynamic programming algorithm for Ŝ with straddler job χ

Initialize the state set V_0 = {[0, 1, 0]}.   (8a)
For each job j = 1, . . . , n, generate the state set
$$V_j = \bigl\{ [C_j(x),\, y,\, z] \bigm| [x, y, z] \in V_{j-1},\ x < \pi \bigr\} \tag{8b}$$
$$\qquad \cup\ \bigl\{ [x,\ (1+b)\,y,\ z + y\,l_j] \bigm| [x, y, z] \in V_{j-1} \bigr\}. \tag{8c}$$
Return
$$C^{\chi}_{\max} = \min\bigl\{ \pi + y \cdot \max\{C_{\chi}(x) - \pi,\ 0\} + z \bigm| [x, y, z] \in V_{\chi} \bigr\}. \tag{8d}$$
to S_B. Finally, we append straddler job χ and S_B to S_A in the resulting sequence S. In the calculation in Equation 8d, the first job in S_B is started at max{C_χ(x), π}. If the state was invalid in the sense that job χ completes at C_χ(x) < π, idle time is inserted before the first job in S_B such that it starts at π; hence it is dominated by a solution for another straddler job.

Theorem 4.2. Given any Ŝ instance, Algorithm 1 returns a minimum objective value φ*.

Proof. Given an instance of Ŝ, the algorithm is run as follows for each possible straddler job χ. Consider a stage j = 1, . . . , n. In V_j, there is at least one vector for each possible subset of jobs A ⊆ {1, . . . , j} where t_k ≤ π for all k ∈ A. Each vector [x, y, z] ∈ V_j stems from a source vector [x′, y′, z′]. We distinguish two cases:
• If the vector is created in Equation 8b, job j is in set A. The value x′ describes the start time of job j, which completes at x. If x′ = 0, job j is the first job in set A, starting at time 0. As l_j ≤ l_{j−1}, the makespan x of the jobs in set A is minimum. The condition t_j ≤ π ensures that the job is early. As the set B is unchanged, y = y′ and z = z′ remain the same.
• Else, if the vector is created in Equation 8c, job j is instead in set B = {1, . . . , j} \ A. For this, j is (for now) started at t_j = π. Then, j completes at C_j(π). If z′ = 0, job j is the first job in set B; then z = C_j(π) − π. If z′ > 0, job j is prepended to the jobs B′ = B \ {j}. Then, they start later, by C_j(π) − π. By Lemma 3.1, their makespan increases nonlinearly by the factor (1 + b)^{|B′|}. Each job that is inserted in set B multiplies the previous y by (1 + b). Therefore, y′ = (1 + b)^{|B′|}. Then, z expresses the sum of processing times of all jobs in set B when started at π. Moreover, the jobs are sequenced as j, . . . , min B. Thus, z = C_{min B} − π. As l_j ≤ l_{j−1}, this makespan is minimum for the jobs in set B if started at or after π.

In the last step, the straddler job χ is appended to the early jobs in each source vector [x′, y′, z′]. For this, χ starts at time x′, and completes at x = C_χ(x′). To return a correct C^χ_max, we distinguish three cases in Equation 8d:

Case x ≥ π + l_χ: Then, job χ is tardy, hence t_χ ≥ π. Thus, it belongs to set B. The n_B remaining jobs in set B now start at x. In Equation 8d, their completion time π + z′ is correctly increased by (x − π) · y′ as in Lemma 3.1 with t_min = x − π, y′ = (1 + b)^{n_B}, and z′ = Σ_{j′∈B} l_{j′} (1 + b)^{|{k∈B | k>j′}|}. Therefore, the return value correctly calculates C^χ_max of the job set A corresponding to [x′, y′, z′].

Case π ≤ x < π + l_χ: Here, job χ is early with t_χ < π, and belongs to set A. The remaining jobs are moved such that they start at x. Because x ≥ π, the first job in set B is still tardy, i.e., t_{max B} ≥ π. Therefore, as in the previously described case, Equation 8d calculates a correct C^χ_max according to Lemma 3.1.

Case x < π: In this case, we insert idle time at x until π. Then, the first job in set B, k = max B, is scheduled at the common ideal time: t_k = π. The resulting C^χ_max in Equation 8d is dominated by C^k_max for k as the straddler job.
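To make the state generation concrete, the following minimal Python sketch enumerates the states of Algorithm 1 for one straddler job. It is only an illustration: the names are made up, and it assumes the V-shaped completion time C_j(t) = t + l_j + max{a (π − t), b (t − π)} of the core model; no trimming is applied, so the state set may grow exponentially, in line with the remark that follows.

```python
# Hypothetical sketch of Algorithm 1 for one straddler job chi; 'l' holds the assembly
# times of the n non-straddler jobs, relabeled so that l[0] >= l[1] >= ... as assumed above.
def completion(t, lj, a, b, pi):
    # assumed V-shaped processing time: l_j plus walk time to the single box position pi
    return t + lj + max(a * (pi - t), b * (t - pi))

def dp_straddler(l, l_chi, a, b, pi):
    states = {(0.0, 1.0, 0.0)}                     # V_0 = {[x, y, z]} = {[0, 1, 0]}
    for lj in l:                                   # stage j: insert job j
        nxt = set()
        for (x, y, z) in states:
            if x < pi:                             # Eq. (8b): append j to the early set A
                nxt.add((completion(x, lj, a, b, pi), y, z))
            nxt.add((x, (1 + b) * y, z + y * lj))  # Eq. (8c): prepend j to the tardy set B
        states = nxt
    best = float('inf')
    for (x, y, z) in states:                       # Eq. (8d): insert the straddler between A and B
        c_chi = completion(x, l_chi, a, b, pi)
        best = min(best, pi + y * max(c_chi - pi, 0.0) + z)
    return best
```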
Remark 4.3. Let us consider the number of states in Algorithm 1. We look at the distinct values in each dimension. For y, there are just n + 1 possible distinct values (1 + b)^0, . . . , (1 + b)^n. However, for x and z, we are not aware of a polynomial bound for the number of values in terms of the unary encoded input size. Given a state [x, y, z], one might consider to eliminate all states [x′, y, z] with x′ > x. Alternatively, all states [x, y, z′] with z′ > z. However, unless there is a polynomial number of z values in the former alternative, or of x values in the latter, the total number of states is not bounded by any polynomial even for unary encoded input size, i.e., in terms of input values.

4.5 Fully polynomial time approximation scheme

Let us briefly state a definition of a fully polynomial time approximation scheme (FPTAS): An FPTAS is an algorithm that, given an arbitrary input value ε ∈ (0, 1], returns a solution with objective value φ ≤ (1 + ε) · φ* for an instance with minimum objective φ*, and runs in time polynomial in the input length and 1/ε. Commonly, an FPTAS is derived from a pseudopolynomial time exact algorithm (Garey and Johnson, 1979, p. 140). We deviate from this common path and base an FPTAS for Ŝ on the non-polynomial exact algorithm from the previous section.

The following algorithm for Ŝ takes the same input as Algorithm 1, plus the additional parameter ε ∈ (0, 1]. It builds on the idea of trimming the state space as described in Ibarra and Kim (1975), combined with the interval partition technique described in Woeginger (2000). Let ∆ := 1 + ε/(2n). Let h(x) := ∆^{⌈log_∆ x⌉} for any real x > 0. Note that for any positive x ∈ ℝ, this function satisfies x/∆ < h(x) ≤ x · ∆.
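As a small illustration of this trimming grid, the following hedged snippet checks the stated property of h numerically; ε, n, and the sample values are arbitrary choices for the example.

```python
import math

eps, n = 0.5, 10
delta = 1 + eps / (2 * n)          # the grid base Delta = 1 + eps/(2n)

def h(x):
    # round x up to the next power of Delta, as defined above
    return delta ** math.ceil(math.log(x, delta))

for x in (0.37, 1.0, 42.0):
    assert x / delta < h(x) <= x * delta   # the property x/Delta < h(x) <= x*Delta
```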
Property 4.4. For j = 1, . . . , n and all vectors [x, y, z] ∈ V_j of Algorithm 1, there exists a vector [x^#, y^#, z^#] ∈ V_j^# in Algorithm 2 with
x^# ≤ x,   (10a)
y^# ≤ y, and   (10b)
z^# ≤ z · ∆^j.   (10c)

Proof. Let us show the given hypothesis by forward induction on j.
Algorithm 2 FPTAS for Ŝ with straddler job χ

Initialize V_0^# = {[0, 1, 0]}.   (9a)
For each job j = 1, . . . , n, generate the state set
$$\tilde{V}_j^{\#} = \bigl\{ [C_j(x),\, y,\, z] \bigm| [x, y, z] \in V_{j-1}^{\#},\ x < \pi \bigr\} \cup \dots$$
As l_j ≥ l_{j+1} for j < n by definition, the matching problem is solved by sorting the jobs 1, . . . , n nondecreasingly by f value (Hardy et al., 1923, Theorem 368, p. 261), i.e., we set sequence S such that l_{S^{-1}(k)} < l_{S^{-1}(k′)} ⟺ f(k) > f(k′). It remains to decide on the number n_Ā. It is restricted to a linear number of distinct values. Thus, we repeat the steps above for each n_Ā ∈ {0, . . . , n} and return the minimum makespan C_max − t_min.

We note that the optimum objective value on Ĉ poses a lower bound on Ŝ, the variant with a fixed global start time. This leads to the following remark.

Remark 4.10. With Theorem 4.9, we can improve the lower bound LB2 in Section 3.5 if there are subsets of jobs with the same box position. Let us consider such a subset Ĵ ⊆ J_O^{(1)} with the common box position π̂. We use the approach in Theorem 4.9 to sequence the jobs in Ĵ before and after π̂. This gives a lower bound on their total processing time Σ_{j∈Ĵ} p_j, which improves the previous lower bound Σ_{j∈Ĵ} l_j. The new lower bound can be implemented such that the minimum objective value is obtained in linear time: by precomputing a sorted list of the assignment costs to each position, then selecting the |Ĵ| cheapest positions from the resulting list, we directly obtain a position for each job in Ĵ. The resulting overall lower bound is
$$\mathrm{LB3} = \mathrm{LB2} + \sum_{\hat{\pi} \in \{\pi_j \mid j \in J_O^{(1)}\}} \min \sum_{j \in J_O^{(1)}:\, \pi_j = \hat{\pi}} p_j$$
where each π̂ is a distinct box position and $\min \sum_{j \in J_O^{(1)}:\, \pi_j = \hat{\pi}} p_j$ represents the described shortest total processing time for all jobs j ∈ J_O^{(1)} with π_j = π̂. Thus, Ĉ can be practically used in a lower bound for Ŝ instances, and to improve the lower bound in Section 3.5 on S instances for subsets of jobs with common box positions.

Remark 4.11. Ĉ furthermore solves the problem of placing a single box and optimizing the operation sequence at the same time, which removes both Assumption A2 and Assumption A3.
Part III Box placement
5 Box placement for one product variant
5.1 Introduction

In this chapter, we introduce an approach for reducing walking time at moving assembly lines by optimizing the line side placement of parts, given that there is only one product variant, or one set of operations and parts. This is an intriguing problem because each container position influences the worker walking times. Intuitively, it should suffice to order the boxes in the same sequence as the jobs in order to place each close to the point where its job is performed. Although this is a common practice, it can be better to place certain boxes further away, which yields a longer walking time locally, but reduces it globally, as for instance in Figure 1. We observe that optimizing the placement is a strongly NP-hard problem, even with the assumptions in Section 2.3 for sequential placement along a line and one product variant. To investigate the subject, we highlight polynomial cases, construct a lower bound for the walk time, and formulate dominance conditions between partial placements. This facilitates an exact and a truncated branch and bound search. In extensive tests, they consistently deliver superior performance compared to several mixed integer programming and metaheuristic approaches. Moreover, our findings suggest a mean walk time reduction of 15% compared to intuitive solutions, which underlines the need for a walk time optimized line side placement.

In Section 5.2, we begin by studying related literature, then define an optimization model in Section 5.3. We highlight polynomial cases of this model in Section 5.4 and study its computational complexity in Section 5.5.
Section 5.6 exploits these properties and introduces, for partially solved problems, a lower bound on the objective. A dominance rule allows to compare partial solutions and eliminate dominated ones in Section 5.7. These results are combined in a branch and bound (B&B) algorithm in Section 5.8, from which we deduce a heuristic version. We empirically evaluate their performance against several heuristics and an integer programming approach in a numerical experiment in Section 5.9.

Parts of this chapter are previously published in Sedding (2019); an abstract is published in Sedding (2017b).

5.2 Related literature

The literature reports a large optimization potential in part placement (Boysen et al., 2015; Scholl et al., 2013). A walk distance reduction by 52% is, e.g., achieved in the case study in Finnsgård et al. (2011) by means of a manual optimization. A first operational research approach for changing the line side part placement to minimize walk times is given in Klampfl et al. (2006) in increasing levels of detail. They begin with an unconstrained model which allows overlapping of part containers, and is easy to solve. In a second and a third model, they place part containers in one and two dimensions, respectively. As they calculate walk times in the Euclidean metric, these models are nonlinear. They cope with this by applying a generic nonlinear program solver for a heuristic solution. However, they report rather long computation times even for a very small case study of five part containers.

For devising a method that optimizes the box sequence, one may turn to existing scheduling literature. However, we are not aware of any existing model that matches the given setting. Therefore, let us at least identify related scheduling problems and their discrepancy to our setting in the following. To minimize the walking time, each box ought to be placed close to the location of its assembly operation. This resembles the classic scheduling objective of completing each job close to its due date, while any deviation increases costs linearly. The corresponding minimization problem is known as the total earliness and tardiness scheduling problem. It is a strongly NP-complete problem (Wan and Yuan, 2013), see also the literature
review in Section 3.3.2. However, there is a clear shortcoming of this model because, in our given application, it is infeasible to assume constant due dates: As each walk influences the position of succeeding operations, the due dates need to change accordingly. Hence, due dates are impacted by the job sequence. There is literature on variable due dates, also in combination with the total earliness and tardiness objective (Cheng and Gupta, 1989; Gordon et al., 2002b,a, 2004; Kaminsky and Hochbaum, 2004; Shabtay, 2016). Nonetheless, the closest relation we see is the quotation of due dates in dependence of job processing times. In our case, however, each due date should depend both on the actual start time of the job and on the preceding due date (to leave enough space for placing the box), and we are not aware of any literature taking this into account.

A second perspective on the placement problem is to regard assembly operations as jobs with processing times that depend on the start time (Alidaee and Womer, 1999; Cheng et al., 2004a; Gawiejnowicz, 2008). In particular, we refer to our study in Part II. However, this assumes fixed ideal start times. This is relieved with variable ideal start times (Cheng et al., 2004b; Gordon et al., 2012; Yin et al., 2013), but in this stream of work, the functions of processing time are independent of the ideal start times. Furthermore, it is not ensured that there is enough space after each ideal start time for placing the corresponding box. Moreover, these models allow to permute the jobs as well, although their sequence is fixed in our case. Hence, this literature is of limited use for the problem at hand.

5.3 Problem definition

Given the model assumptions from Section 2.3 except Assumption A3, we proceed with a formal definition of the box placement problem setting.

Definition 5.1 (Problem P). We are given a set of n jobs J = {1, . . . , n}. Each job j ∈ J is given the assembly time l_j ∈ ℚ_{≥0} and, for its corresponding box, a box width w_j ∈ ℚ_{>0}. The total box width is Π = Σ_{j∈J} w_j. We need to find a placement for the boxes, which is defined by a box sequence
Figure 10: Example instance of Figure 1 in terms of Definition 5.1: factors a = b = 0.1 and n = 4 jobs, assembly times l_1 = · · · = l_4 = 2 all equal, and box widths w_1 = w_2 = 4, w_3 = w_4 = 1. The optimum box sequence S = ⟨1, 3, 4, 2⟩ reaches C_max = C_4 = 6.514, the intuitive S′ = ⟨1, 2, 3, 4⟩ yields C′_4 = 6.732.
S : J → {1, . . . , n} that places the box of a job j ∈ J at position $\pi_j = \sum_{k \in J,\, S(k) < S(j)} w_k$ … jobs j > 1 begin with a nonzero walk time, hence p_j > l_j. A second example in Figure 11 has all-equal assembly times and all-equal box widths. Finding an optimum box sequence is, however, difficult because repositioning a box j impacts the processing time of all the jobs k ≥ j in the job sequence. In turn, new start times of these jobs require a reoptimization of all the corresponding boxes k > j. On the other hand, if each job's assembly time and box width are nearly the same, ordering the boxes in job sequence is a good strategy. In practice, however, even their linear correlation is usually small, which motivates more elaborate strategies.
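Since the completion times are defined recursively, a short hedged Python sketch may help to fix ideas. It evaluates the last completion time C_n of a given box sequence, using the cumulative box positions of Definition 5.1 and the processing time p_j = l_j + max{a (π_j − t_j), b (t_j − π_j)} used later in the proof of Lemma 5.1; the function name and the assumption that the first job starts at time 0 are illustrative.

```python
# Minimal sketch: evaluate the last completion time of a box sequence S.
# S[i] is the job whose box takes the i-th place along the line, e.g., S = [1, 3, 4, 2];
# l and w are the lists of assembly times and box widths, a and b the walk-time slopes.
def last_completion_time(S, l, w, a, b):
    n = len(l)
    pi, offset = {}, 0.0
    for j in S:                       # left edge of each box = sum of widths placed before it
        pi[j] = offset
        offset += w[j - 1]
    t = 0.0                           # assumed: the first job starts at time 0
    for j in range(1, n + 1):         # jobs are processed in index order
        t += l[j - 1] + max(a * (pi[j] - t), b * (t - pi[j]))
    return t
```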
Figure 11: Example instance with a = b = 0.1 and n = 4 jobs, assembly times l_1 = · · · = l_4 = 3 all equal, and box widths w_1 = · · · = w_4 = 5 all equal, has the optimum box sequence S = ⟨1, 2, 4, 3⟩.
Figure 12: Given is an instance with n = 4 jobs. The first job starts at t_1 = t. Here, the boxes may only be placed at and behind F. The depicted placement sorts the boxes nondecreasingly by w_j (1 − a)^j. Furthermore, each box j ∈ J is placed at or behind its start time (π_j ≥ t_j). Then, by Lemma 5.1, this placement results in a minimum last completion time, here C_4.
5.4 Polynomial cases

In this section, we introduce polynomial cases of P. They are later used for analyzing the complexity of P and to derive lower bounds. Given an instance, let us sort the boxes by w_j (1 − a)^j, for each j ∈ J. If this yields a placement where each box is at or behind its job's start time, this placement is optimum. Hence, the instance is polynomially solvable. An example is depicted in Figure 12.

Lemma 5.1. Given an instance of P with n jobs. Also given a start time t, and a value F ≥ t. We require that the box of job 1 is placed at π_1 = F. The box sequence is given by S. We further require that by S, each box
gets placed at or behind its job start: π_j ≥ t_j for all j ∈ J. If S is sorted nondecreasingly by w_j (1 − a)^j, the last completion time C_n is minimum.

Proof. First, we derive a closed formula for the completion time C_n of the last job n by induction as follows. For ease of description, we express the time duration t by a virtual job 0, which starts at 0 and completes at t, with processing time p_0. For this, we use w_0 = F, π_0 = 0, l_0 = p_0. Thus, the job set extends to J′ = {0} ∪ J = {0, 1, . . . , n}. Given some box sequence S with S(0) = 0 (the virtual job being the first). For the given S, the box positions then are $\pi_j = \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)}$ for each j ∈ J′. We like to calculate $C_n = \sum_{j=0}^{n} p_j$. For this, we need to know the value of p_j for all j ∈ J′. We begin with the definition p_j = l_j + max{a (π_j − t_j), b (t_j − π_j)}. Knowing that π_j ≥ t_j for all j ∈ J′, we simplify it to p_j = l_j + a (π_j − t_j). Then,
$$p_j = l_j + a(\pi_j - t_j) \iff p_j = l_j + a \Biggl( \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} - \sum_{k=0}^{j-1} p_k \Biggr).$$
For j ∈ J, we define $\Delta(j) = \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} - \sum_{k=0}^{S(j-1)-1} w_{S^{-1}(k)}$. Then,
$$p_j - p_{j-1} = l_j - l_{j-1} - a \sum_{k=0}^{j-1} p_k + a \sum_{k=0}^{j-2} p_k + a \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} - a \sum_{k=0}^{S(j-1)-1} w_{S^{-1}(k)}$$
$$\iff p_j - p_{j-1} = l_j - l_{j-1} - a\, p_{j-1} + a\, \Delta(j) \iff p_j - (1-a)\, p_{j-1} = l_j - l_{j-1} + a\, \Delta(j).$$
This is a recurrence relation for p_j, starting with p_0. We reformulate it as a closed-form expression. We multiply it with 1/(1 − a)^j. For j ∈ J, we define Φ_j = p_j / (1 − a)^j. Then,
$$\frac{p_j}{(1-a)^j} - \frac{(1-a)\, p_{j-1}}{(1-a)^j} = \frac{l_j - l_{j-1} + a\, \Delta(j)}{(1-a)^j} \iff \frac{p_j}{(1-a)^j} - \frac{p_{j-1}}{(1-a)^{j-1}} = \frac{l_j - l_{j-1} + a\, \Delta(j)}{(1-a)^j} \iff \Phi_j - \Phi_{j-1} = \frac{l_j - l_{j-1} + a\, \Delta(j)}{(1-a)^j}.$$
The base case is Φ_0 = p_0 = t, therefore
$$\Phi_j - \Phi_0 = \sum_{k=1}^{j} (\Phi_k - \Phi_{k-1}) \iff \Phi_j - \Phi_0 = \sum_{k=1}^{j} \frac{l_k - l_{k-1} + a\, \Delta(k)}{(1-a)^k} \iff \frac{p_j}{(1-a)^j} = p_0 + \sum_{k=1}^{j} \frac{l_k - l_{k-1} + a\, \Delta(k)}{(1-a)^k}$$
$$\iff p_j = t\, (1-a)^j + \sum_{k=1}^{j} \bigl(l_k - l_{k-1} + a\, \Delta(k)\bigr) (1-a)^{j-k}.$$
We use this closed-form expression for p_j to calculate C_n:
$$C_n = \sum_{j=0}^{n} p_j = \sum_{j=0}^{n} \Biggl( t\, (1-a)^j + \sum_{k=1}^{j} \bigl(l_k - l_{k-1} + a\, \Delta(k)\bigr) (1-a)^{j-k} \Biggr) = \sum_{j=0}^{n} \Biggl( t\, (1-a)^j + \sum_{k=1}^{j} (l_k - l_{k-1}) (1-a)^{j-k} \Biggr) + \sum_{j=0}^{n} \sum_{k=1}^{j} a\, \Delta(k)\, (1-a)^{j-k}.$$
Changing the box sequence S only influences the last summand, which we denote by µ. Using the equations
$$\sum_{j=k}^{n} (1-a)^{j-k} = \frac{1 - (1-a)^{n-k+1}}{a},$$
S(0) = 0, 1 − (1 − a)^1 = a, and
$$\bigl(1 - (1-a)^{n-j}\bigr) - \bigl(1 - (1-a)^{n-j-1}\bigr) = \bigl((1-a)^{-1} - 1\bigr) (1-a)^{n-j} = a\, (1-a)^{n-j-1},$$
we reformulate µ:
$$\mu = \sum_{j=0}^{n} \sum_{k=1}^{j} a\, \Delta(k)\, (1-a)^{j-k} = \sum_{k=1}^{n} \Delta(k)\, a \sum_{j=k}^{n} (1-a)^{j-k} = \sum_{k=1}^{n} \Delta(k) \bigl(1 - (1-a)^{n+1-k}\bigr)$$
$$= \sum_{j=1}^{n} \bigl(1 - (1-a)^{n+1-j}\bigr) \Biggl( \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} - \sum_{k=0}^{S(j-1)-1} w_{S^{-1}(k)} \Biggr)$$
$$= -\bigl(1 - (1-a)^{n+1-1}\bigr) \sum_{k=0}^{S(0)-1} w_{S^{-1}(k)} + \sum_{j=2}^{n} \Bigl( \bigl(1 - (1-a)^{n+1-(j-1)}\bigr) - \bigl(1 - (1-a)^{n+1-j}\bigr) \Bigr) \sum_{k=0}^{S(j-1)-1} w_{S^{-1}(k)} + \bigl(1 - (1-a)^{n+1-n}\bigr) \sum_{k=0}^{S(n)-1} w_{S^{-1}(k)}$$
$$= a \sum_{k=0}^{S(n)-1} w_{S^{-1}(k)} + \sum_{j=1}^{n-1} \Bigl( \bigl(1 - (1-a)^{n+1-j}\bigr) - \bigl(1 - (1-a)^{n-j}\bigr) \Bigr) \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)}$$
$$= a \sum_{k=0}^{S(n)-1} w_{S^{-1}(k)} + a \sum_{j=1}^{n-1} (1-a)^{n-j} \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} = a \sum_{j=1}^{n} (1-a)^{n-j} \sum_{k=0}^{S(j)-1} w_{S^{-1}(k)} = a \sum_{j=0}^{n} w_j \sum_{k=S(j)+1}^{n} (1-a)^{n-S^{-1}(k)}.$$
Term µ is minimized if S is sorted nondecreasingly by w_j (1 − a)^j. We show this by contradiction, using an adjacent job interchange argument. Given a schedule S̃, let j, j′ be adjacent jobs such that S̃(j) + 1 = S̃(j′) for which w_j (1 − a)^j > w_{j′} (1 − a)^{j′}. Suppose, for a contradiction, that the term µ̃ in S̃ is minimal. Construct a second schedule Š, identical to S̃ with the exception that Š(j) = S̃(j′) and Š(j′) = S̃(j), hence j and j′ are swapped. Then, the corresponding µ̃ and µ̌ are almost identical, except that µ̃ contains the summand x̃ = a w_j (1 − a)^{n−j′}, and µ̌ contains x̌ = a w_{j′} (1 − a)^{n−j}. That is, µ̃ − x̃ = µ̌ − x̌, and
$$w_j (1-a)^{j} > w_{j'} (1-a)^{j'} \iff w_j (1-a)^{-j'} > w_{j'} (1-a)^{-j} \iff a\, w_j (1-a)^{n-j'} > a\, w_{j'} (1-a)^{n-j} \iff \tilde{x} > \check{x} \iff \tilde{\mu} > \check{\mu}.$$
It follows that µ̃ is not minimal. This completes the contradiction: any schedule that is not sorted nondecreasingly by w_j (1 − a)^j does not possess a minimal term µ.

A symmetric polynomial case follows if sorting the boxes nonincreasingly by w_j (1 + b)^j results in start times that all are at or behind their box.

Lemma 5.2. Given an instance of P with n jobs. Also given a start time t, and a value F ≤ t. We require that the box of job 1 is placed at π_1 = F. The box sequence is given by S. We further require that by S, each box gets placed at or before its job start: π_j ≤ t_j for all j ∈ J. If S is sorted nonincreasingly by w_j (1 + b)^j, the last completion time C_n is minimum.

Proof. The proof is analogous to the proof of Lemma 5.1. Here instead, p_j can be simplified to p_j = l_j + b (t_j − π_j). The remaining steps are similar. Therefore, we omit a full proof here.
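To summarize the two orderings, a minimal hedged sketch of the corresponding sort keys; the function names are illustrative, and w is the list of box widths w_1, . . . , w_n.

```python
# Lemma 5.1 (all boxes at or behind the job starts): nondecreasing w_j * (1 - a)^j
def order_behind_F(w, a):
    return sorted(range(1, len(w) + 1), key=lambda j: w[j - 1] * (1 - a) ** j)

# Lemma 5.2 (all boxes at or before the job starts): nonincreasing w_j * (1 + b)^j
def order_before_Pi(w, b):
    return sorted(range(1, len(w) + 1), key=lambda j: -w[j - 1] * (1 + b) ** j)
```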
5.5 Computational complexity

For analyzing the computational complexity of P, we need to specify a decision version of P. As usual for minimization problems, this is done by setting a threshold τ for the objective φ. If, for a given instance, there exists a solution with objective φ ≤ τ, the instance is called a Yes-instance. Else, it is called a No-instance. Then, it is possible to polynomially reduce from, e.g., the NP-hard Partition Problem (Garey and Johnson, 1979, p. 47) to the decision version of P. However, a stronger result is a reduction from the strongly NP-hard Three Partition Problem (Garey and Johnson, 1979, p. 96), which follows.

Definition 5.2 (Three Partition (3P) (Garey and Johnson, 1979)). Given a bound B ∈ ℕ and 3z elements in the multiset X = {x_1, . . . , x_{3z}} ⊂ ℕ with B/4 < x_j < B/2, j = 1, . . . , 3z, and $\sum_{x \in X} x = zB$. The question is: does
there exist a partition of X into disjoint multisets A^{(i)}, i = 1, . . . , z with $\sum_{x \in A^{(i)}} x = B$? In such a partition, each multiset consists of three elements (Garey and Johnson, 1979).

For any 3P instance, denoted by 3P_I, we introduce a corresponding instance of P's decision version, and denote it by P_I: Given a 3P instance 3P_I. Set a = b = 1/(3z). Define $q = \bigl\lceil \log_{1+b} \bigl( 2 (z + b + zB)/b \bigr) \bigr\rceil$ and $r = b^2 / \bigl(1 - (1+b)^{-q}\bigr)$. Note that r is polynomial in the input size, as q is a rounded-up logarithm of input sizes. Then, we define n = z + q + 3z jobs that are in a sequence of three parts:
(1) z filler jobs j = 1, . . . , z with box width w_j = 1 and assembly time l_j = B + 1,
(2) q enforcer jobs j = z + 1, . . . , z + q with equal box width and assembly time w_j = l_j = r/(1 + b)^{j−z},
(3) 3z partition jobs j = z + q + 1, . . . , n with box width w_j = x_{j−z−q} and assembly time l_j = 0.
The total box width of the enforcer jobs is
$$\sum_{j=z+1,\dots,z+q} w_j = r \sum_{i=1,\dots,q} (1+b)^{-i} = r\, \frac{1 - (1+b)^{-q}}{b} = b.$$
Of all jobs, the total box width is Π = z + b + zB. The sum of assembly times is z (B + 1) + b = Π as well. At last, we set τ = ⌈e · Π⌉ as threshold (e is Euler's number). As z ≥ 1 and B ≥ 3, there is Π ≥ 4 and τ < 3 · Π. An example of a Yes-instance P_I is given and visualized in Figure 13.
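For illustration, the reduction can be generated mechanically. The following hedged sketch builds the job list (l_j, w_j) of P_I in job-sequence order from a 3P instance; it uses floating-point numbers instead of the exact rationals of the construction, and the function name is made up.

```python
import math

def build_reduction_instance(X, B, z):
    b = 1.0 / (3 * z)                                      # a = b = 1/(3z)
    Pi = z + b + z * B                                     # total box width
    q = math.ceil(math.log(2 * Pi / b, 1 + b))             # number of enforcer jobs
    r = b * b / (1 - (1 + b) ** (-q))
    filler    = [(B + 1, 1) for _ in range(z)]             # l_j = B + 1, w_j = 1
    enforcer  = [(r / (1 + b) ** i, r / (1 + b) ** i) for i in range(1, q + 1)]
    partition = [(0, x) for x in X]                        # l_j = 0, w_j = x_j
    return filler + enforcer + partition                   # jobs in sequence order
```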
Figure 13: Given is an instance P_I of P that corresponds to a 3P Yes-instance 3P_I with B = 12 and z = 4. Thus, a = b = 1/12, q = 90, n = 106, $\Pi = \sum_{j=1}^{n} w_j = \sum_{j=1}^{n} l_j = 52 + 1/12$, and τ = 142. The job sequence begins with the z filler jobs, then has the q enforcer jobs, and finishes with the 3z partition jobs. The partition jobs start behind Π. The box sequence alternates between a filler job box and three partition job boxes, then ends with all q enforcer job boxes. For the partition jobs, we depict their upper bound processing time, which emerges if assuming π_j = 0 for each partition job j. This visualizes that for any Yes-instance, there exists a solution with φ < τ in P_I, see Lemma 5.3.
Lemma 5.3. Given a 3P instance 3P_I and the corresponding P instance P_I. If 3P_I is a Yes-instance, then there exists a solution of P_I with φ < τ.

Proof. Given a Yes-instance 3P_I and a solution A^{(i)}, i = 1, . . . , z. Let us construct a placement S for the corresponding instance P_I. For each filler job j = 1, . . . , z, set π_j = (j − 1)(B + 1). Then, for i = 1, . . . , q, set
$$\pi_{z+i} = z + zB + r \sum_{k=1,\dots,i-1} (1+b)^{-k}.$$
The partition jobs are placed between the filler jobs: for each i = 1, . . . , z and x_j ∈ A^{(i)}, we set
$$\pi_{z+q+j} = (i-1)(B+1) + 1 + \sum_{x_{j'} \in A^{(i)}:\, j' < j} x_{j'}.$$
There is no gap and no overlap in the constructed placement, therefore it is feasible. All filler and enforcer jobs j = 1, . . . , z + q start at t_j = π_j. Hence, the first partition job, z + q + 1, starts at C_{z+q} = Π. By this, each partition job j = z + q + 1, . . . , n is late, as it needs to be placed before Π. Still, π_j is positive. As a result, C_j = C_{j−1} + b (C_{j−1} − π_j) < C_{j−1} + b C_{j−1}. Solving this recurrence for all 3z partition jobs yields C_n < (1 + b)^{3z} C_{z+q} where C_{z+q} = Π. Therefore, the objective φ = C_n of this placement is bounded from above by
$$\varphi < \Pi \cdot (1+b)^{3z} < \Pi \cdot e < \tau$$
with the known inequality (1 + 1/x)^x < e.
Lemma 5.4. Given a 3P instance 3P_I and the corresponding P instance P_I. If 3P_I is a No-instance, then all solutions of P_I have φ > τ.

Proof. Given a No-instance 3P_I and the corresponding instance P_I. Consider a placement S with minimum objective φ*. As 3P_I is a No-instance, there exists no partition of the elements into triple-sets A^{(i)}, i = 1, . . . , z, of equal size B. Hence, in placement S, there is at least one filler job j ≤ z with |t_j − π_j| ≥ 1, thus p_j ≥ l_j + a. Therefore, C_z ≥ Π − b + a = Π. Now, all enforcer jobs are late. By Lemma 5.2, sorting the boxes by w_{j−z} g(j − z) = r minimizes C_{z+q}. As the sort criterion has the equal value r for each of the enforcer jobs, any order yields the same C_{z+q} value. Then, the recurrence for j = z + 1, . . . , z + q is
$$C_j = C_{j-1} + l_j + b \Biggl( C_{j-1} - \Bigl( \Pi - \sum_{k=z+1,\dots,j} w_k \Bigr) \Biggr) = (1+b)\, C_{j-1} - b\Pi + \frac{r}{(1+b)^{j-z}} + b\,r \sum_{k=1,\dots,j-z} \frac{1}{(1+b)^{k}}$$
$$= (1+b)\, C_{j-1} - b\Pi + r \Biggl( \frac{1+b}{(1+b)^{j-z}} + \sum_{k=1,\dots,j-z-1} \frac{b}{(1+b)^{k}} \Biggr) = (1+b)\, C_{j-1} - b\Pi + r \Biggl( \frac{1}{(1+b)^{j-z-1}} + 1 - \frac{1}{(1+b)^{j-z-1}} \Biggr) = (1+b)\, C_{j-1} - b\Pi + r.$$
Dividing both sides by (1 + b)^j yields
$$\frac{C_j}{(1+b)^{j}} = \frac{C_{j-1}}{(1+b)^{j-1}} + \frac{r - b\Pi}{(1+b)^{j}}.$$
Then, let S_j = C_j / (1 + b)^{j−z}. Thus, S_z ≥ Π / (1 + b)^0 = Π and, for j = z + 1, . . . , z + q, there is
$$S_j = S_{j-1} + \frac{r - b\Pi}{(1+b)^{j-z}}.$$
Rewriting S_{z+q} as a sum, we have
$$S_{z+q} = S_z + \sum_{j=z+1,\dots,z+q} \frac{r - b\Pi}{(1+b)^{j-z}} = S_z + (r - b\Pi)\, \frac{1 - (1+b)^{-q}}{b}.$$
Returning to C_{z+q}, we obtain the closed form
$$C_{z+q} = S_{z+q}\, (1+b)^{q} = S_z (1+b)^{q} + (r - b\Pi) \bigl(1 - (1+b)^{-q}\bigr) (1+b)^{q} / b = S_z (1+b)^{q} + b\, (1+b)^{q} - \Pi \bigl((1+b)^{q} - 1\bigr)$$
$$\ge \Pi\, (1+b)^{q} + \Pi \bigl(1 - (1+b)^{q}\bigr) + b\, (1+b)^{q} = \Pi + b\, (1+b)^{q} \ge \Pi + b\, (1+b)^{\log_{1+b}(2\Pi/b)} = 3\,\Pi > \tau.$$
Processing times for jobs j > z + q are nonnegative, thus C_n > τ.
Theorem 5.5. The decision version of P is strongly NP-complete.

Proof. For any 3P_I, there is a corresponding P_I. By Lemmata 5.3 and 5.4, instance P_I is a Yes-instance if and only if 3P_I is a Yes-instance. Therefore, we constructed a reduction from 3P to P. Testing for φ < τ is done in polynomial time, thus P is in NP. As 3P is strongly NP-complete, and the reduction is pseudopolynomial, we conclude that the decision version of P is strongly NP-complete.

5.6 Lower bound

To construct a branch and bound algorithm for P, it is necessary to calculate a lower bound on the minimum attainable objective value of a possibly empty partial solution. We let such a partial schedule be expressed by a possibly empty box sequence S. The set of jobs placed in S is called the fixed job set J_F ⊆ J. Their boxes are placed in the order of S beginning from 0. Thus, they occupy a contiguous space that ends at $F = \sum_{j \in J_F} w_j$. The partial schedule is extended to a full schedule by appending the remaining open jobs J_O = J \ J_F to S in some order. As this order is undetermined for a partial schedule, we construct a lower bound on the objective value attained by S and any order of appending the remaining open boxes. Note that in any of these solutions, the box of an open job j ∈ J_O is placed in the interval [F, Π − w_j].

We begin with the construction of two preliminary lower bounds derived from Lemma 5.1 and Lemma 5.2, respectively. In the first, we select a job subset J′ ⊆ J_O and place their boxes in nondecreasing order of w_j (1 − a)^j at and behind F. Then, we set for each remaining job j ∈ J_O \ J′ a box position π_j = max{F, t_j}, which is as near to t_j as it may occur in a solution. Finally, we compute the start and completion times of all jobs, which results
in some objective value. If and only if π_j ≥ t_j for all j ∈ J′, by Lemma 5.1, this objective value is a lower bound on the given partial schedule. The second preliminary lower bound is constructed symmetrically, by using Lemma 5.2. Here, we place the boxes of J′ in nonincreasing order of w_j g(j) directly before Π. For the remaining jobs j ∈ J_O \ J′, we set the box positions to π_j = min{t_j, Π − w_j} ≤ t_j. Again, we compute the start and completion times of all jobs, resulting in some objective value. If and only if π_j ≤ t_j for all j ∈ J′, by Lemma 5.2, this objective is a lower bound on the given partial schedule.

Sorting the boxes is the bottleneck in the runtime of both preliminary lower bound algorithms. However, these orders only rely on constant sorting criteria. Therefore, we calculate both orders in advance for all jobs J. Then, the bounding step only needs to select the relevant boxes from the sorted lists, merely taking O(n) time.

Let us then integrate the two preliminary lower bounds into one. This is summarized in Algorithm 3. Here, the jobs J = {1, . . . , n} are iteratively visited, from first to last. After each iteration, i.e., after having visited jobs 1, . . . , j, the value t is a lower bound on their completion time. Also, t is a lower bound on the start time of the succeeding job j + 1. If a job j is in J_F, its box position π_j is known. Then, we increase t by the according processing time p_j(t). If a job j is in J_O, its box is preliminarily placed at the nearest possible place, π_j = min{Π − w_j, max{F, t}}. However, if there are jobs after j that are also in J_O, we group them into a set J′, and try to place their boxes according to the two preliminary lower bounds described above. The grouping step heuristically maximizes the J′ set of open jobs that meet the conditions. For this, it tests subsets of the smallest k elements in J_O. A linear search for maximizing k delivers a J′ set in reasonably small, quadratic time. Then, the total runtime for checking a partial schedule is O(n · |J_O|). The resulting lower bound is illustrated by an example in Figure 14.

The lower bound is further improved by an additional step in Algorithm 3, which is dedicated to open jobs that are not covered by either of the above rules. In the description above, we place such a job j at its lower bound start
Algorithm 3 Combinatorial lower bound for P

function LB(J = {1, . . . , n}, J_F ⊆ J, π_j ∀ j ∈ J_F)
    t ← 0, j ← 0, F ← Σ_{j∈J_F} w_j                          ▷ t is the current start time, j the current job
    while j < n do
        j ← j + 1                                             ▷ iterate the job sequence
        if j ∈ J_F then                                       ▷ j is a fixed job
            t ← C_j(t)
        else                                                  ▷ j is an open job
            for j_max ← max{j′ | {j, j + 1, . . . , j′} ∩ J_F = ∅} to j step −1 do   ▷ find a large J′ set
                if j_max = j then                             ▷ fallback case if only J′ = {j} is possible
                    π_j ← nearest-gcd-multiple(t)             ▷ round the box position
                    π_j ← min{Π − w_j, max{F, π_j}}           ▷ constrain it to the limits
                    t ← C_j(t)
                else
                    J′ ← {j, . . . , j_max}                   ▷ place J′ according to the polynomial cases
                    if t ≤ F then place J′ nondecreasingly by w_j (1 − a)^j at F and behind
                    else place J′ nonincreasingly by w_j (1 + b)^j before Π
                    t′ ← t                                    ▷ temporarily calculate the next start time
                    for j′ ← j to j_max do                    ▷ test if the jobs in J′ satisfy the conditions of the polynomial cases
                        if (t ≤ F ∧ t′ ≤ π_{j′}) ∨ (t ≥ F ∧ t′ ≥ π_{j′}) then t′ ← C_{j′}(t′)
                        else t′ ← ∞
                    if t′ ≠ ∞ then                            ▷ conditions satisfied; continue after j_max
                        t ← t′, j ← j_max
                        exit for
        C_j^{(LB)} ← t                                        ▷ record the lower bound on the completion time of job j
    return t
Figure 14: In the depicted partial schedule, jobs 1 and 4 are fixed: J_F = {1, 4}. The lower bound places the open boxes 2 and 3 behind the fixed boxes, while it places box 5 just before Π.
time π_j = t_j. However, box widths often follow a certain scheme in practice, e.g., divisions of ISO1- or EUR-pallets. Therefore, we can improve this placement by rounding it to a multiple of the greatest common divisor of all box widths. As the rounding step needs to ensure a lower bound on C_j, we round to the nearest multiple with the smallest possible walk distance. For example, with a greatest common divisor of 1, the nearest multiples are ⌈t⌉ and ⌊t⌋, hence the job is either early or late. If (⌈t⌉ − t) a < (t − ⌊t⌋) b, the rounded-up value leads to a smaller walk distance, hence we set π_j = ⌈t⌉. Else, we set π_j = ⌊t⌋.
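A minimal sketch of this rounding step, assuming integer box widths; the function name is illustrative.

```python
import math
from functools import reduce

def round_to_grid(t, widths, a, b):
    g = reduce(math.gcd, widths)        # greatest common divisor of all (integer) box widths
    lo = math.floor(t / g) * g          # nearest grid multiples around the tentative position t
    hi = math.ceil(t / g) * g
    # early placement costs a per unit of distance, late placement costs b per unit
    return hi if (hi - t) * a < (t - lo) * b else lo
```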
5.7 Dominance rule

5.7.1 Exact dominance rule

Dominance conditions are a common strategy for speeding up search algorithms like branch and bound by allowing for a comparison of partial solutions. We say that a partial schedule S is dominated by partial schedule S′ if the objective for S′ is smaller than the objective for S for any possible placement of the open jobs. The corresponding search branch for S can then be eliminated, thereby speeding up the search. In the following, we introduce a dominance rule that checks if a partial schedule is dominated by a partial schedule that swaps the boxes of the last two fixed jobs, or swaps
Figure 15: The depicted instance has slopes a = b = 0.1 and n = 6 jobs with assembly times l_1 = 1, l_2 = 4, l_3 = · · · = l_6 = 1, box widths w_1 = · · · = w_5 = 1, w_6 = 6. Displayed is (a) the partial schedule S = ⟨2, 4, 1⟩ and (b) the partial schedule S′ = ⟨2, 1, 4⟩ that swaps jobs 4 and 1. The open jobs 3, 5, 6 are arranged by Algorithm 3, but neither S nor S′ is pruned by bounding because the optimum value C_6^* is larger than the lower bound. Nonetheless, the dominance rule shows that S′ dominates S for all possible placements of the open jobs 3, 5, 6. Therefore, it allows us to prune S.
the last job’s box with any other fixed box of the same width. Note that any such swap operation allows to keep all other fixed boxes at the same place, which we require for proving correctness of the dominance rule. Moreover, we only consider swapping the last fixed box with some other box as, e.g., if used in a Branch and Bound search, any preceding box is already tested earlier in the search tree. An example with a fulfilled dominance condition on a swap of the last two fixed boxes is shown in Figure 15.
To compare partial schedules S and S′, we need to take all possible placements for each open job into account. Say that S and S′ have the same set of fixed jobs. Then, we only need to compare S and S′ at equal placements of the open jobs. For this, we need to relate each job's completion times in both schedules. If the job is started at time t in S, it is started at time t − δ in S′ for a given pair ⟨t, δ⟩_i, where δ denotes the difference t − t′ of start times. In any such pair, the two values are a result of an equal placement of all preceding open jobs. This gives a recurrence relation for ⟨t, δ⟩_i. The corresponding pair of the next job i + 1, if placing the boxes at π_i in S and at π′_i in S′, is ⟨t′, δ′⟩_{i+1} = ⟨C_i(t), C_i(t) − C′_i(t − δ)⟩_{i+1}. If a job i is fixed, π_i and π′_i are determined. If i is an open job, the placement value is undetermined, but situated at equal positions π_i = π′_i ∈ [F, Π − w_i] in the remaining space. Hence, we commence with ⟨t, 0⟩_1 and continue inductively until reaching the virtual job n + 1 with ⟨t, δ⟩_{n+1}. If this δ is positive in all pairs ⟨t, δ⟩_{n+1}, then S′ dominates S. However, generating this pair for all possible placements requires an exhaustive search. In the following, we constrain the search and merely generate relevant pairs.

We define T_j = [t^min_j, t^max_j] as the range of possible start times of a job j. Start and end of the interval describe a lower and an upper bound on the start time of job j for all possible placements of the open jobs J_O. A lower bound is obtained for some jobs during the execution of Algorithm 3: Let q ≤ j be the last job for which Algorithm 3 set a C_q^{(LB)} value. Then, we can use $t^{\min}_j = C_q^{(LB)} + \sum_{r=q+1,\dots,j-1} l_r$. An upper bound value is obtained by subtracting assembly times of succeeding jobs from a global upper bound UB on C_n, i.e., $t^{\max}_j = \mathrm{UB} - \sum_{k=j,\dots,n} l_k$. Note that these bounds relate $t^{\min}_j + l_j = t^{\min}_{j+1}$ and $t^{\max}_j + l_j = t^{\max}_{j+1}$ for all j = 1, . . . , n − 1.

We are given a partial schedule S and two fixed jobs j, k, j < k, for which either π_k = π_j + w_j, π_j = π_k + w_k, or w_j = w_k holds. Consider the partial schedule S′ which places the boxes equally except for swapping the box placement of j and k. Note that a swap of j and k affects no other box position. Therefore, the completion times of jobs q < j remain the same. Moreover, job j starts at the same time in S and in S′. Also, T′_j = T_j, and ⟨t, 0⟩_j for all t ∈ T_j.
Property 5.6. Given the pair ⟨t, 0⟩_j for a start time t ∈ T_j of job j and π_j + ω = π′_j. Let ⟨t′, δ′⟩_{j+1} be a corresponding pair for job j + 1. If ω ≥ 0, there is −aω ≤ δ′ ≤ bω, else bω ≤ δ′ ≤ −aω. Moreover, δ′ is extremal if it equals δ^{(a)} or δ^{(b)} in ⟨t^{min}_{j+1}, δ^{(a)}⟩_{j+1}, ⟨t^{max}_{j+1}, δ^{(b)}⟩_{j+1}.

Proof. Given start time t, there is
$$\delta' = C_j(t) - C'_j(t) = \max\{a (\pi_j - t),\ b (t - \pi_j)\} - \max\{a (\pi'_j - t),\ b (t - \pi'_j)\}$$
$$= \max\{a (\pi_j - t),\ b (t - \pi_j)\} + \min\{-a (\pi'_j - t),\ b (\pi'_j - t)\}$$
$$= \max\bigl\{a (\pi_j - t) + \min\{-a (\pi_j + \omega - t),\ b (\pi_j + \omega - t)\},\ \ b (t - \pi_j) + \min\{-a (\pi_j + \omega - t),\ b (\pi_j + \omega - t)\}\bigr\}$$
$$= \max\bigl\{\min\{-a\omega,\ (a+b)(\pi_j - t) + b\omega\},\ \ \min\{-(a+b)(\pi_j - t) - a\omega,\ b\omega\}\bigr\}.$$
If (a + b)(π_j − t) + bω ≤ −aω, there is (a + b)(π_j − t + ω) ≤ 0. However, if ω ≥ 0, there is (a + b)(π_j − t + ω) ≤ −(a + b)(π_j − t) and (a + b)(π_j − t) + bω ≤ −(a + b)(π_j − t) − aω. Thus, for ω ≥ 0, there is δ′ = max{−aω, min{(a + b)(t − π_j) − aω, bω}}. The case ω ≤ 0 is analogous; in this case there is δ′ = max{min{−aω, (a + b)(π_j − t) + bω}, bω}. Moreover, δ′ is a monotonic function of t ∈ T_j in both cases. Hence, it is extremal for extreme values of t.

Depending on π_j and π_k, the value of ω is either positive or negative. We denote the extremal value pairs by ⟨t^{(a)}, δ^{min}⟩_{j+1} and ⟨t^{(b)}, δ^{max}⟩_{j+1}, obtained from ⟨t^{min}_j, 0⟩_j and ⟨t^{max}_j, 0⟩_j. Note that t^{(a)} is the larger value if ω is negative.

The box of a job q, with j ≠ q ≠ k, is placed at the same position π_q = π′_q in both S and S′. The difference of its completion time between S and S′ is influenced both by the box position π_q = π′_q and by the start times as defined by ⟨t, δ⟩_q. Furthermore, if q ∈ J_O, its box position is undetermined. Even then, we can, however, state bounds for the start time difference if we restrict our considerations to same signs of all difference values δ of job q, i.e., require that δ^{min}_q · δ^{max}_q ≥ 0.
Property 5.7. Given a job q ∈ J \ {j, k} with π_q = π′_q and ⟨t, δ⟩_q. For this, let ⟨t′, δ′⟩_{q+1} be the corresponding pair of job q + 1. If δ ≥ 0, then (1 − a)δ ≤ δ′ ≤ (1 + b)δ. Else, (1 + b)δ ≤ δ′ ≤ (1 − a)δ. Moreover, δ′ is extremal for this placement if ⟨t, δ⟩_q ∈ {⟨t^{(a)}, δ^{min}⟩_q, ⟨t^{(b)}, δ^{max}⟩_q}.

Proof. Let t′ = t − δ. Then,
$$\delta' = C_q(t) - C_q(t') = \delta + \max\{a (\pi_q - t),\ b (t - \pi_q)\} - \max\{a (\pi_q - t'),\ b (t' - \pi_q)\}$$
$$= \delta + \max\{a (\pi_q - t),\ b (t - \pi_q)\} + \min\{-a (\pi_q - t'),\ -b (t' - \pi_q)\}$$
$$= \delta + \max\bigl\{\min\{a (\pi_q - t) - a (\pi_q - t'),\ a (\pi_q - t) - b (t' - \pi_q)\},\ \ \min\{b (t - \pi_q) - a (\pi_q - t'),\ b (t - \pi_q) - b (t' - \pi_q)\}\bigr\}$$
$$= \delta + \max\bigl\{\min\{-a\delta,\ -a\delta + (a+b)(\pi_q - t')\},\ \ \min\{b\delta + (a+b)(t' - \pi_q),\ b\delta\}\bigr\}$$
$$= \delta + \max\bigl\{-a\delta + \min\{0,\ (a+b)(\pi_q - t + \delta)\},\ \ b\delta + \min\{(a+b)(t - \delta - \pi_q),\ 0\}\bigr\}.$$
As min{0, x} ≤ 0 for any number x, we derive the stated bounds for the difference δ′. Furthermore, δ′ is a monotonic function of t and δ. Therefore, the extrema of δ′ are obtained for extremal values of both t and δ.

Finding extremal values for the difference is similar for job k.

Property 5.8. For job k, we are given the placements π_k, π′_k and ⟨t, δ⟩_k. Let ⟨t′, δ′⟩_{k+1} be the corresponding pair of job k + 1. Then, δ′ is extremal if ⟨t, δ⟩_k ∈ {⟨t^{(a)}, δ^{min}⟩_k, ⟨t^{(b)}, δ^{max}⟩_k}.
Proof. Let t′ = t − δ and γ = π_k − π′_k. Then,
$$\delta' = C_k(t) - C'_k(t') = \delta + \max\{-a (t - \pi_k),\ b (t - \pi_k)\} - \max\{-a (t' - \pi'_k),\ b (t' - \pi'_k)\}$$
$$= \delta + \max\{-a (t - \pi_k),\ b (t - \pi_k)\} - \max\{-a (t' + \gamma - \pi_k),\ b (t' + \gamma - \pi_k)\} = C_k(t) - C_k(t' + \gamma).$$
Adding γ to t′, we establish the difference δ̄ = δ − γ. As the offset to δ is constant, Property 5.7 applies for π_k and ⟨t, δ̄⟩_k.

In Property 5.6, Property 5.7, and Property 5.8, the function for δ′ is monotonic. Therefore, a larger range for t (which we have) only extends the extremal value range of δ. If q ∈ J_O, its box position π_q is undetermined. Thus, we cannot find precise extremal values for δ̂_q. Nonetheless, given a difference δ, a lower bound on the succeeding δ′ is determined by multiplying δ with 1 − a if δ is nonnegative, else with 1 + b. The sign of δ′ equals the sign of δ. Therefore, we can inductively state for all q, where j < q < k, that the sign of δ′ for job q equals the sign of the difference δ̂ after job j, and
$$\delta' \ge \hat{\delta} \cdot \begin{cases} (1-a)^{q-j}, & \hat{\delta} \ge 0, \\ (1+b)^{q-j}, & \text{else.} \end{cases} \tag{12}$$
Similarly, we can inductively state for all q > k that the sign of δ′ for job q equals the sign of δ̌, the difference after job k. Hence, we merely need to compute δ̂ for t^{min}_j and t^{max}_j, multiply it for a lower bound as in Equation 12, and calculate δ̌ for t^{min}_k and t^{max}_k. If δ̌ > 0 for both start times, we can say that C_n > C′_n for all possible open job placements. In this case, the partial schedule S′ dominates S.

The runtime of testing the described dominance criterion on S, S′ is O(n) in a naïve implementation. We can precompute a list of cumulated assembly
times, of size n. Furthermore, Algorithm 3 can store, for each job j ∈ J, a pointer to the last job q ≤ j for which it calculated a lower bound on the completion time, i.e., C_q^{(LB)} ≥ 0. The resulting steps are described in Algorithm 4. This reduces the runtime of the dominance criterion to constant time O(1) for comparing S to S′.

Algorithm 4 Dominance rule for P

function CheckDominance(J = {1, . . . , n}, UB, J_F ⊆ J, j, k ∈ J_F for j < k, sequence S with box positions π_j, π_k (either adjacent or with w_j = w_k), and sequence S′ with swapped π′_j, π′_k)
    t_min ← t′_min ← max{ C_q^{(LB)} + Σ_{r=q+1,…,j−1} l_r | q ≤ j }       ▷ initialize the earliest start time of job j
    t_max ← t′_max ← UB − Σ_{k′=j,…,n} l_{k′}                              ▷ initialize the latest start time of job j
    t_min ← C_j(t_min), t′_min ← C′_j(t′_min)                              ▷ process job j
    t_max ← C_j(t_max), t′_max ← C′_j(t′_max)
    δ̂^min_j ← t_min − t′_min                                              ▷ calculate the differences
    δ̂^max_j ← t_max − t′_max
    if δ̂^min_j · δ̂^max_j ≥ 0 then                                         ▷ constrain to differences of the same sign
        if δ̂^min_j ≥ 0 and δ̂^max_j ≥ 0 then                               ▷ calculate a lower bound on the difference
            δ̂^{(LB) min}_{k−1} ← δ̂^min_j (1 − a)^{k−1−j}
            δ̂^{(LB) max}_{k−1} ← δ̂^max_j (1 − a)^{k−1−j}
        else
            δ̂^{(LB) min}_{k−1} ← δ̂^min_j (1 + b)^{k−1−j}
            δ̂^{(LB) max}_{k−1} ← δ̂^max_j (1 + b)^{k−1−j}
        t_min ← t_min + Σ_{q=j+1,…,k−1} l_q                                ▷ calculate the earliest start of job k
        t_max ← t_max + Σ_{q=j+1,…,k−1} l_q                                ▷ calculate the latest start of job k
        t′_min ← t_min − δ̂^{(LB) min}_{k−1}                               ▷ corresponding start times in S′
        t′_max ← t_max − δ̂^{(LB) max}_{k−1}
        t_min ← C_k(t_min), t′_min ← C′_k(t′_min)                          ▷ process job k
        t_max ← C_k(t_max), t′_max ← C′_k(t′_max)
        if t_min > t′_min ∧ t_max > t′_max then                            ▷ the swapped schedule completes earlier
            return swapped-is-dominant
    return swapped-is-non-dominant
5.7.2 Heuristic dominance rule

Integrating more than an adjacent box swap results in a change of more than two box positions. Then, upper and lower bounds of the difference between two partial schedules are not only harder to obtain, but of inferior quality. Nonetheless, it is still of interest to conduct a comparison of two partial schedules S, S′ with the same set of fixed jobs but placing more than two boxes at different positions. Of course, appending the same sequence of open jobs to both S and S′ allows comparing objective values. Testing dominance with all possible sequences of the open jobs requires, however, an exponential search in the worst case. To avoid this, we pick out corner cases of placing the open jobs. Although this yields only a heuristic dominance rule, it is relatively effective if these corner cases are representative of many other open job placements. Then, if S′ dominates S for all these corner cases, we can decide that S is probably not leading to an optimum solution. In particular, we pick the following two corner cases of placing the open jobs J_O:
(a) set π_j = F for all j ∈ J_O,
(b) set π_j = Π − w_j for all j ∈ J_O.
Both corner case placements are in fact infeasible. Still, we can calculate each resulting objective C_max, and compare S to S′ for each corner case.

The rationale for picking these two corner cases is as follows. The jobs can be looked at in two segments, X and Y: the jobs that start before F, and those that start at or after F. Let us consider segment X. Here, corner case (a) starts each job in X as early as possible, while (b) starts them as late as possible. Hence, the corner cases cover both extremes in the X segment. Secondly, we look at segment Y. In Y, all fixed jobs are necessarily late. Any further delay has a proportional effect on each of the fixed jobs, only caused by the open job placement. Therefore, as long as each open job's placement is the same in both corner cases, the exact place is unimportant. Note that the division into the X and Y segments may differ between each
corner case and between the schedules. Still, the comparison is fair as the boxes are equally placed in both schedules, respectively for each corner case. Thus, the heuristic compares each partial schedule only by each corner case's objective value. Moreover, it suffices to conduct only a single comparison, which ideally is to the best known partial schedule with the same set of fixed jobs. Therefore, each time we visit a non-dominated partial schedule, we store each of the two corner cases' C_n values in memory. The set of fixed jobs can be encoded by a binary vector of length n that determines the membership of each job in set J_F. Hence, we store at most 2^n values for each corner case in a table. If the available memory is limited, one may reduce the number of stored values to a constant size with a memory cache. Concluding, calculating the corner case objectives takes O(n) time and comparing S to all previously checked partial schedules with the same set of fixed jobs takes O(1) time.
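A hedged sketch of the two corner-case objectives; it reuses the completion-time recursion from Section 5.3 and assumes that the fixed box positions are given as a mapping (all names are illustrative).

```python
def corner_case_objectives(n, l, w, a, b, fixed_pi, F, Pi):
    # fixed_pi: {job: position} for the fixed jobs; open jobs get the corner placements
    def makespan(pi):
        t = 0.0
        for j in range(1, n + 1):
            t += l[j - 1] + max(a * (pi[j] - t), b * (t - pi[j]))
        return t
    case_a = {j: fixed_pi.get(j, F) for j in range(1, n + 1)}               # (a) pi_j = F
    case_b = {j: fixed_pi.get(j, Pi - w[j - 1]) for j in range(1, n + 1)}   # (b) pi_j = Pi - w_j
    return makespan(case_a), makespan(case_b)
```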
5.8 Solution algorithms

Approaches for computationally solving instances of P are manifold. In this section, we first introduce a mixed integer program (MIP) and (meta-)heuristic methods. Then, the described lower bound (Section 5.6) and dominance rules (Section 5.7) are utilized in a branch and bound algorithm. Finally, we derive a heuristic version of the branch and bound algorithm.

5.8.1 Mixed integer program

The box placement problem is described as a sequencing problem, which lends itself to an application of established modeling strategies for single machine scheduling (Błażewicz et al., 1991; Keha et al., 2009; Pinedo, 2016). In a preliminary test between MIP approaches, we compared time indexing, linear ordering/sequencing variables, and disjunctive box overlapping constraints. The latter turned out to be the quickest variant by far. Therefore, we describe it in the following.
For each job j = 1, . . . , n, there is a completion time variable C_j and a box placement variable π_j. To determine a box sequence, we introduce binary variables x_{jk}, 1 ≤ j < k ≤ n, each of which is zero if box k is placed before box j and one otherwise. Then,

$$\begin{aligned}
&\text{minimize } C_n && \text{(13a)}\\
&\text{subject to } C_0 = 0, && \text{(13b)}\\
&C_j \ge C_{j-1} + l_j - a\,(C_{j-1} - \pi_j), \quad 1 \le j \le n, && \text{(13c)}\\
&C_j \ge C_{j-1} + l_j + b\,(C_{j-1} - \pi_j), \quad 1 \le j \le n, && \text{(13d)}\\
&\pi_j + w_j \le \pi_k + \Pi\,(1 - x_{jk}), \quad 1 \le j < k \le n, && \text{(13e)}\\
&\pi_k + w_k \le \pi_j + \Pi\, x_{jk}, \quad 1 \le j < k \le n, && \text{(13f)}\\
&0 \le \pi_j \le \Pi - w_j, \quad 1 \le j \le n, && \text{(13g)}\\
&x_{jk} \in \{0, 1\}, \quad 1 \le j < k \le n. && \text{(13h)}
\end{aligned}$$

Constraint 13b sets the start time for the first job to zero. Constraint 13c and Constraint 13d calculate the completion time iteratively from the completion time of the preceding job, which is either larger or smaller than the box placement variable, thus requiring two inequalities. Depending on x_{jk}, 1 ≤ j < k ≤ n, either Constraint 13e or Constraint 13f ensures as a disjunctive constraint that boxes j, k are not overlapping, while each is placed in its interval by Constraint 13g. The objective is to choose feasible box placements π_1, . . . , π_n that minimize C_n, the completion time of the last job.
5.8.2 Basic heuristics

An intuitive placement for the boxes is to order them identically to the job sequence. This strategy is often used as a standard guideline by production planners in practice. Let us call this the identity sequence (ID) heuristic. However, the identity sequence is mostly far from optimum, as we observe from the test results in Section 5.9.4. Therefore, it is sensible to improve this solution. We apply a steepest-descent hill-climbing search to improve this
initial box sequence. Repeatedly, the best neighbor of a sequence is chosen as the next sequence, until arriving at a local optimum. The neighborhood consists of swaps between all pairs of two boxes. We call this the identity sequence with a hill climbing search (HC) heuristic. The resulting solution can be improved even more with a second metaheuristic that tries to evade local minima. An example is the simulated annealing (SA) method (Kirkpatrick et al., 1983). For changing the solution, one can use the same neighborhood as before: swapping arbitrary box pairs. In contrast to a descending hill-climbing search, this metaheuristic allows ascending to worse solutions to a certain degree. This enables leaving local minima for finding the global minimum.
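A minimal hedged sketch of the HC heuristic under these choices; 'evaluate' is assumed to return the last completion time of a box sequence, e.g., as sketched in Section 5.3, and the function name is illustrative.

```python
def hill_climb(n, evaluate):
    seq = list(range(1, n + 1))              # ID heuristic: boxes in job order
    best = evaluate(seq)
    improved = True
    while improved:                          # steepest descent over all pairwise box swaps
        improved = False
        for i in range(n):
            for k in range(i + 1, n):
                cand = seq[:]
                cand[i], cand[k] = cand[k], cand[i]
                val = evaluate(cand)
                if val < best:
                    best, best_cand, improved = val, cand, True
        if improved:
            seq = best_cand                  # move to the best neighbor found in this pass
    return seq, best
```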
5.8.3 Branch and bound algorithm

To solve the given problem exactly, we introduce a branch and bound (B&B) algorithm. We begin by saving the best sequence of the HC heuristic presented in Section 5.8.2 and set the upper bound value to its objective value. Then, we start a depth-first search. We begin at the root node with the empty partial sequence where all jobs J are open jobs, J_O = J. For each job in J_O, in order of the job index, a child node is created by removing the job from J_O and appending the job to the current partial sequence. Then, we check if bounding and the dominance rule allow pruning the new partial sequence. Pruning is possible (a) if the lower bound of Algorithm 3 is larger than or equal to the best known upper bound, or (b) if swapping the last placed box with the box right before it, or with any other fixed box of the same width, yields a partial schedule for which the dominance rule of Section 5.7.1 applies. Else, this branch node is further explored recursively. When reaching a branch node with J_O = {}, all boxes are placed and the objective φ can be calculated. If φ is smaller than the current upper bound, we save the
sequence and set the upper bound to φ. After exploring all non-pruned branches, the last saved sequence is returned; it is an optimal solution.

5.8.4 Truncated branch and bound heuristic

In a truncated branch and bound heuristic (TrB&B), we limit the size of the B&B tree. For this, we constrain the branching factor, which is the number of nodes that emerge from each node, by the maximum branching factor BF_max = min{|J_O|, max{⌈ψ⌉, ⌊|J_O|/σ⌋}}, for positive constants ψ, σ. Parameter ψ controls the maximum number of branches at the end of the search tree, and σ at the start and in the middle of the search tree, depending on the number of free boxes |J_O|. To rank and select the most promising branches, we evaluate all |J_O| emerging nodes. For this, we use the identity sequence as in the ID heuristic in Section 5.8.2. Hence, we rank by job sequence and, therefore, select the BF_max smallest job indices from set J_O. A further reduction of the tree size is achieved by use of the heuristic dominance criterion from Section 5.7.2: First, we test heuristic dominance on swapping the last placed box with any other fixed box. Secondly, we test against the best-known solution with the same J_F set.

5.9 Numerical results

In a numerical experiment, we assess optimizing the box placement quantitatively. Moreover, we like to analyze the performance of the algorithms from Section 5.8. Hence, we test them on a variety of generated instances and statistically compare their performance. As a primary criterion, we utilize the median runtime and quartile deviation on instance groups of similar parameters, to find which of the algorithms quickly and robustly yield exact solutions. For the heuristics, we additionally compare solution quality by counting optimally solved instances and calculating the mean error. We use these criteria to evaluate which heuristic provides the best tradeoff between runtime and solution quality.
5.9.1 Instance generation

We generate test instances in several variants to evaluate performance in different settings. A problem instance is characterized by its size, the factors a and b, and each assembly time and box width. The main influence on the instance size in practice is the cycle time. In the automotive industry, high-volume cars in the compact segment have rather small cycle times, which leaves room for only a few operations; their number can be as low as five to ten. On the other hand, low-volume products like luxury vehicles or trucks have much longer cycle times. They allow for many more operations per cycle, often in the range from twenty up to almost thirty. Naturally, this is a more challenging setting, even more so as P is an NP-hard problem. Therefore, we focus our tests on these larger instance sizes, with the number of jobs n ∈ {8, 12, 16, 20, 24, 28}.

The worker velocity v is a multiple of the conveyor velocity. In practice, it is commonly in the range 8 to 14. To test the algorithms for extreme velocities as well, we let v ∈ {2, 4, 8, 16}. In Section 2.4, we distinguish between three walking strategies. As strategies (A) and (B) are effectively equivalent, we obtain two variants for setting the factors a, b for each v:
(S1) a = 2/(v + 1) and b = 2/(v − 1), as in (A) and (B),
(S2) a = (2v + 1)/(v + 1)^2 and b = (2v + 1)/v^2, as in (C).
The resulting factors for both variants are listed in Table 1. The assembly time generation follows an established scheme (Jaehn and Sedding, 2016):
(L1) all equal assembly times l_1, . . . , l_n = 1,
(L2) all distinct assembly times {l_1, . . . , l_n} = {1, . . . , n}, randomly permuting the integers 1, . . . , n,
(L3) assembly times drawn uniformly from {1, . . . , 10},
(L4) assembly times drawn from a geometric distribution with the random variable X = ⌈−λ ln U⌉ for U uniformly distributed in [0, 1] and λ = 2.
5.9 Numerical results
103
As well, we generate box widths in four variants: (W1) all equal box widths w1 , . . . , wn = 1, (W2) all distinct box widths {w1 , . . . , wn } = {1, . . . , n} in a random permutation, (W3) box widths drawn uniformly from {20 , . . . , 23 } ∪ {3 · 20 , . . . , 3 · 22 }, reflecting seven divisions of ISO1-pallets, (W4) box widths drawn from rounded up gamma variates with shape 1.25 and unit scale, representing measured box width distributions at automotive assembly lines. To fill the station (here, we let its width s = 10 · n), all boxes of an instance are normalized for a total width of s by scaling and rounding each. The processing times and the box widths are initially of different scale. This necessitates a harmonization. Ideally, the station length would equal the last completion time Cn . The identity sequence heuristic ID in Section 5.8.2 places the boxes according to the job sequence. For harmonizing assembly times and box widths, we use this placement and scale the assembly times linearly by a positive rational factor. The appropriate factor minimizes the absolute difference |Cn − Π| and is determined by the algorithm of Brent (1971) for finding a zero of a univariate function. In summary, the described scheme considers four numbers of jobs n, five times two variants for a and b, four assembly time variants, and four box width variants. For each parameter combination, it generates 10 instances. This yields a total of 6 · (4 · 2) · 4 · 4 · 10 = 7 680 test instances. 5.9.2 Test setup The tested algorithms are implemented in C++. Our data structures use plain STL containers without additional dependencies. The TrB&B heuristic is parameterized with ψ = 5, σ = 7. For the simulated annealing algorithm,
104
5 Box placement for one product variant
Table 7: MIP, B&B runtime and performance. MIP
B&B
Md
QD
solved
Md
QD
solved
8 12 16 20 24 28
0.01 0.04 0.28 2.11 21.18 156.82
0.00 0.03 0.65 10.76 138.01 296.42
100% 100% 100% 95% 81% 58%
0.00 0.00 0.00 0.04 0.51 6.62
0.00 0.00 0.00 0.13 2.24 68.79
100% 100% 100% 100% 98% 83%
all
0.45
8.81
89%
0.01
0.19
97%
n
Md: median runtime in seconds QD: quartile deviation in seconds solved: percentage of instances solved in 10 minutes
we rely on the reference implementation of Press et al. (1992, pp. 448– 451) with default parameters. All code is compiled with GCC 7.2 on Ubuntu Linux, Kernel 4.4, and each instance is executed separately on an Intel Xeon E5-2680 CPU at 2.80 GHz. The MIP model is solved with Gurobi 7.5. We use its C++ interface to ensure the best performance and disabled multiprocessing capabilities for a fair comparison. Each instance and algorithm is terminated after a time limit of 10 minutes, hence at least delivering a lower bound on the runtime. This allows calculating median and quartiles while taking terminated instances into account. 5.9.3 Exact algorithms In Table 7, measured MIP and B&B runtimes are grouped by instance size n. Apparently, they grow exponential, as it is expected to happen for a strongly NP-hard problem. However, high quartile deviations for both
105
B&B runtime in secs (logscale)
MIP runtime in secs (logscale)
5.9 Numerical results
103 102 101 100 10-1 10-2 10-3 8
12
16
20
24
103 102 101 100 10-1 10-2 10-3 8
28
12
16
20
24
28
n
n 103
MIP runtime in secs (logscale)
102
101
100
n 8 10-1
12 16 20
10-2
24 28
10-3 10-3
10-2
10-1
100
101
102
103
B&B runtime in seconds (logscale)
Figure 16: The exact MIP and B&B algorithm’s runtime in seconds is shown with a logarithmic scale in box plots and a scatter plot with a normal confidence ellipse at 95% for each n.
106
5 Box placement for one product variant
algorithms show that difficulty between instances varies by a high degree. Nonetheless, both algorithms agree on the difficulty of each instance, as there is a weak positive correlation of value 0.3 between MIP and B&B runtimes, see Figure 16. Every instance that MIP solves, is as well solved by B&B, which moreover is 169.57 times faster (median ratio) and solves nearly all instances. Let us break down the B&B performance in greater detail for one size, n = 24. Median runtimes are shown for each (v, S) pair in Table 8, and each (L, W) pair in Table 9. First, the worker velocity apparently affects problem difficulty. It peaks at v = 16 and is easiest for the slowest velocity v = 2. We assume that this can be explained by the behavior of lower bounds. A slow worker velocity corresponds to high a, b values. Hence, wrongly placed boxes increase processing times by large. In the following, many open jobs become late, the combinatorial lower bound can predict their placement better, and thus, its value is higher. The walking strategy influences problem difficulty as well. Instances with walking strategy (S1) are throughout easier to solve. The assembly time variant influences the difficulty in so far that case (L4) is most difficult and a uniform assembly time in (L1) is easiest. At the same time, the uniform box width case (W1) is easiest while case (W3) is hardest. Trivial is case (L1, W1) as the optimum box sequence is the job sequence here. Most difficult is case (L4, W3), its median runtime is about four times the median for all instances with n = 24. Still, its median is within the upper quartile. Hence, the median runtime increase is moderate for the most difficult instance class. In the literature, material fetching accounts for about 10–15% of total work time (Scholl et al., 2013). Let us look at our most realistic choices for worker velocity, v = 16 and v = 8 (e.g., the case study in Klampfl et al. (2006) similarly assumes v = 13.6). Our results conform to these values with 8–15% mean walking time of total work time in optimal solutions, see Table 8.
5.9 Numerical results
107
Table 8: B&B median runtime in seconds for instance size n = 24 is displayed for all pairs of walking velocity and walking strategy settings. Below is the mean walking time percentage MW in optimum solutions for different velocities. v=2
v=4
v=8
v = 16
S1 S2
0.01 0.07
0.35 0.49
1.92 2.18
3.17 3.28
MW
42%
27%
15%
8%
Table 9: B&B median runtime in seconds for instance size n = 24 is displayed for all pairs of assembly time and box width settings.
W1 W2 W3 W4
L1
L2
L3
L4
0.00 0.39 0.34 0.21
0.20 1.59 1.62 0.66
0.09 0.68 1.00 0.33
0.52 2.76 2.44 2.92
5.9.4 Heuristics Results for the heuristics are shown in Table 10. It lists median runtimes, fraction of optimally solved instances, and mean percentage error MPE, which is the mean of the percentage walking time error PE =
φ∗
φ − φ∗ Í · 100%, − j ∈J l j
(14)
comparing achieved and minimum walking time, where φ∗ is the optimum and φ the heuristic’s objective. The ID heuristic orders all boxes in job sequence. Surprisingly, this is optimal in 29% of all instances. However, its use is fairly limited: a MPE of 15% is quite high. In practice, this natural order is often used,
108
5 Box placement for one product variant
Table 10: Heuristics’ runtime and performance on instances that B&B solved in 10 minutes. ID
HC
TrB&B
SA
SA+TrB&B
n
opt
MPE
opt
MPE
opt
MPE
opt
MPE
opt
MPE
8 12 16 20 24 28
54% 38% 27% 21% 17% 18%
9% 13% 16% 17% 18% 16%
97% 95% 91% 86% 84% 85%
0.21% 0.27% 0.43% 0.41% 0.26% 0.24%
100% 99% 98% 98% 97% 98%
0.01% 0.02% 0.09% 0.05% 0.04% 0.04%
98% 97% 94% 92% 89% 88%
0.1% 0.14% 0.27% 0.17% 0.11% 0.15%
100% 100% 99% 99% 98% 99%
0.01% 0.01% 0.05% 0.01% 0.02% 0.02%
all
29%
15%
90%
0.31%
98%
0.04%
93%
0.16%
99%
0.02%
opt: MPE:
percentage of optimally solved instances mean percentage error to minimum walking time
hence the motivation for improvement is high. Applying a hill climbing search (HC) on this greatly improves the result. Then, the number of optimally solved instances raises to 90% while the MPE drops to 0.31%. Moreover, the median computation time is still not measurable. This result improves further with a simulated annealing search that temporarily allows worse solutions. Here, the number of solved instances increases to 93% and decreases the MPE to 0.16%. This at least increases median computation time for the largest instance size n = 28 to 0.029 seconds. A different approach is taken by the TrB&B heuristic. Here, the solution quality further improves by a fairly large amount by optimally solving 98% instances, with a further reduced MPE of 0.04%. Moreover, the median TrB&B runtime of 0.012 seconds for n = 28 is fairly lower than the runtime of the simulated annealing heuristic. A slight further improvement is achieved by using the simulated annealing objective as an initial upper bound for the TrB&B heuristic. Then, TrB&B solves 99% instances optimally, with an MPE of 0.02%, and a total (SA and TrB&B) median runtime of 0.057 seconds for n = 28. Concluding, although the TrB&B heuristic is
5.10 Conclusion
109
more elaborate to implement, both runtime and quality results suggest this effort is worthwhile. 5.10 Conclusion We introduce a core box placement problem at a moving assembly line which considers worker walk times for minimization. Although it turns out as NP-hard in the strong sense, we find two polynomial cases which enable us to construct a lower bound. A dominance rule allows to compare partial solutions if they exchange two boxes without affecting others. This constraint is alleviated in a heuristic dominance rule. By this, we introduce a branch and bound search as well as a heuristic version. Numerical results show they possess superior runtime and quality compared to solving mixed integer programming models (for exact solutions) and metaheuristic approaches (for heuristic solutions). Moreover, the tests provide convincing evidence for the practical relevance of this approach: compared to the commonly used pattern of placing boxes in the order of assembly operations, optimum solutions attain a substantial mean walk time reduction of 15%.
6 Box placement for multiple product variants
6.1 Introduction Car manufacturers cope with the demand for a variety of product variants often by manufacturing different product variants on the same assembly line (Sternatz, 2014). This extends the classic assembly of a single product to a set of several product variants that are assembled in an intermixed succession. Here, assembly operations can differ between product variants. A balanced workload for all stations is, as in the single product variant case, achieved by assigning a similar workload to each station. However, instead of restricting a station’s maximum workload, this production system restricts the mean workload assuming the worker can exceed the cycle time for some product variants. Moreover, each product variant usually has an individual production rate, which specifies the fraction of cycles allotted to it. Then, the objective is to ensure that the mean weighted workload is below the cycle time. Over all stations, this value is often closer to the cycle time. Therefore, an alternating sequence of several product variants can increase assembly line utilization, because it allows for a further balance of a station’s workload over time. Considering multiple product variants, the first literature on finding a balanced workload assignment is found in Thomopoulos (1967), and recent reviews are found in Battaïa and Dolgui (2013); Boysen et al. (2008a). Furthermore, Thomopoulos (1967) optimize the product variant succession to smoothen the workload over time, which is an optimization problem on its own; see Boysen et al. (2009a, 2012) for recent reviews. The model facilitates a productivity increase by minimizing walking time to fetch parts from the line side by means of placing the parts close to each © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020 H. A. Sedding, Time-Dependent Path Scheduling, https://doi.org/10.1007/978-3-658-28415-2_6
112
product 3 product 2 product 1
6 Box placement for multiple product variants
C3,max C2,max C1,max
boxes Figure 17: This figure visualizes an instance with three product variants M = {1, 2, 3}, each with a fixed sequence of jobs. Each job is associated with a box of a certain width. The job’s processing time depends on the position of its box. Í To minimize the sum of last completion times i ∈M ri Ci,max , which is weighted by each product’s production rate ri , i ∈ M, we optimize the box placement sequence.
assembly operation. However, parts for multiple product variants need to be intermixed at the line side. As the space at the line side is scarce, the boxes for the parts need to compete for good positions, as it is visualized in Figure 17. This extends our results in Chapter 5 where we minimize the walking time for one model, or equivalently, the completion time of the last job. With multiple product variants, we minimize the weighted average last completion time. Effectively, this leads to a higher walking time on product variants with small production rate as they contribute little to the weighted average; product variants with a high production rate instead attain box positions with the shortest distances. We approach the problem of optimizing the box placement of multiple product variants as follows. First, we introduce a formal definition of a quintessential model, relying on the assumptions in Section 2.3. Then, we analyze the problem complexity: By reduction from Three Partition, we show the problem in its decision version is NP-complete in the strong sense for an arbitrary number of product variants. We state a mixed integer program that relies on disjunctive constraints that prevent box overlaps, which provides in practice a better performance than other modeling approaches.
6.2 Problem definition
113
A second mixed integer program instead uses space-indexing for assigning box positions. We utilize its Lagrangian relaxation of walk time constraints to construct a lower bound, and show it can be directly solved by a dedicated polynomial algorithm. With this, we construct a lower bound that is used in an exact and a truncated branch and bound search. We evaluate both along with several heuristics, and compare it to the fastest mixed integer program in a numerical experiment. This shows that the devised branch and bound search clearly outperforms the fastest mixed integer program, and the devised heuristics give good results in a short amount of time. For related literature on the placement problem in addition to our results in Chapter 5, we refer to Section 5.2 on the single model case. Let us highlight that Klampfl et al. (2006) as well study the placement problem with multiple product variants, although with only subpar performance results for more than few jobs. This chapter is structured as follows. First, a formal definition of the placement problem for multiple product variants is introduced in Section 6.2. We analyze its computational complexity in Section 6.3. Two mixed integer programs for the problem are stated in Section 6.4, one of them provides a Lagrangian lower bound that is solved by a polynomial algorithm in Section 6.5. It is used in the construction of solution algorithms in Section 6.6, which are numerically tested in Section 6.7. 6.2 Problem definition The placement problem for minimizing walking times with a mix of multiple product variants, given the model assumptions from Section 2.3 except Assumption A1 and Assumption A3, is defined as Definition 6.1 (Problem Pm). We are given factors a ∈ (0, 1), b ∈ (0, ∞), and m product variants in set M = {1, . . . , m}. A product variant i ∈ M Í is given a production rate ri ≥ 0 such that i0 ∈M ri0 = 1, and consists of a set of ni jobs in set Ji = {(i, 1), . . . , (i, ni )}. Thus in total, there are Í Ð n = i ∈M ni jobs, subsumed in set J = i ∈M Ji . Each job (i, j) ∈ J is given
114
6 Box placement for multiple product variants
an assembly time li,j ∈ Q ≥0 and a corresponding box of width wi,j ∈ Q >0 . Í Then, Π = (i,j)∈J wi,j is the total box width. A box placement is given by a variable box sequence S : J → {1, . . . , n}. From S, the box of a Í job (i, j) ∈ J is placed at position πi,j = (h,k)∈J : S((h,k)) 1. Hence, we show that Pm remains strongly NP-hard in this case. Theorem 6.1. The decision version of Pm is strongly NP-complete for m > 1. Proof. For any 3P instance 3P I , we introduce a corresponding instance of Pm’s decision version as follows: The factors a, b may be of any allowed positive value. Let m = 2. Let r1 = 0, r2 = 1. For the product variant 1, we define 3z jobs in set J1 with assembly times of arbitrary allowed value, and box sizes w1,j = X j , j = 1, . . . , 3z. Furthermore, we define z + 1 jobs for product variant 2 in set J2 , with l2,j = B + 1 and w2,j = 1 for each j = 1, . . . , z + 1. The decision problem is to find a solution with objective φ ≤ Φ for threshold Φ = Í j=1,...,z+1 l2,j = (B + 1) (z + 1). For such an instance, we show that any existing solution with φ ≤ Φ corresponds to a solution to 3P I and vice versa. The objective is φ =
6.4 Mixed integer programs
115
r1C1,3z + r2C2,z+1 = C2,z+1 ≥ Φ, hence Φ is a lower bound on φ. Therefore, φ ≤ Φ ⇐⇒ φ = Φ. Assume that φ = Φ. Then, there is p2,j = l2,j for all j = 1, . . . , z + 1. This happens if and only if the boxes of the z + 1 jobs in J2 are exactly at the locations (B + 1) · k, for k = 0, . . . , z. Between the boxes of two subsequent jobs, there is a space of length B. Each such space is exactly covered by exactly three of the product variant 1 jobs’ boxes if and only if there exists a solution to the corresponding 3P instance. Note that in a 3P solution, each multiset consists of exactly three elements. Concluding, if and only if there exists a solution with φ ≤ Φ, there exists a solution to the corresponding 3P instance. This represents a pseudopolynomial reduction of the 3P to the decision version of Pm with m = 2. This reduction is generalizable to m > 2. Testing for φ ≤ Φ is done in polynomial time, thus Pm is in NP. Corollary 6.2. By Theorem 5.5 and Theorem 6.1, the decision version of Pm is strongly NP-complete for arbitrary numbers of product variants m. 6.4 Mixed integer programs The box placement problem can be formulated as a mixed integer program. As in the single product variant case, there are several ways to formulate it as a sequencing problem. First, we introduce an extended version of the model for a single product variant. Additionally, we introduce a space-indexing approach which is later used in a Lagrangian relaxation for constructing an efficient lower bound algorithm in Section 6.5. 6.4.1 Disjunctive sequencing The model in Section 5.8.1 for a single product variant uses disjunctive sequencing constraints. To adapt it for multiple product variants, we add a separate sequence of completion times for each product variant and modify
116
6 Box placement for multiple product variants
the objective. The box overlapping constraints are defined for all distinct pairs of jobs. As each pair needs to be ordered, let us define vector set J ≺ = {(i, j, h, k) | (i, j) ∈ J, (h, k) ∈ J, i < h ∨ (i = h ∧ j < k)} that contains all distinct ordered pairs of jobs as vectors. Hence, it excludes the symmetric swapped pair, facilitating the disjunctive overlapping constraints. Then, the model for Pm is to Õ minimize ri C(i,ni ) (15a) i ∈M
subject to C(i,0) = 0,
i ∈ M, (15b)
C(i,k) ≥ C(i,k−1) + l(i,k) − a C(i,k−1) − π(i,k) , (i, k) ∈ J, (15c) C(i,k) ≥ C(i,k−1) + l(i,k) + b C(i,k−1) − π(i,k) , (i, k) ∈ J, (15d) π(i,j) + w(i,j) ≤ π(h,k) + Π 1 − x(i,j,h,k) , (i, j, h, k) ∈ J ≺ , (15e)
π(h,k) + w(i,k) ≤ π(i,j) + Π x(i,j,i,k) , 0 ≤ π(i,j) ≤ Π − w(i,j) , x(i,j,h,k) ∈ {0, 1},
(i, j, h, k) ∈ J ≺ , (15f) (i, k) ∈ J, (15g) (i, j, h, k) ∈ J ≺ . (15h)
Equation 15b sets start time for the first job in each product variant to zero. Equation 15c and Equation 15d calculate the completion time iteratively from the completion time of the preceding job, which is either larger or smaller than the box placement variable, thus requiring two inequations. Depending on x(i,j,h,k) , (i, j, h, k) ∈ J ≺ , either Equation 15e or Equation 15f ensure as disjunctive constraints that boxes (i, j) and (h, k) are not overlapping while each is placed in its interval Equation 15g. A solution then chooses feasible box positions πi,j , (i, j) ∈ J that minimize the objective in Equation 15a. 6.4.2 Space-indexing The following model has a pseudopolynomial number of variables but allows to derive an effective lower bound. This large number of variables originates
6.4 Mixed integer programs
117
from dividing the time horizon into equal sized time units. Each time unit is represented by an assignment variable, allowing to set the execution time of a job. This method is called time-indexing, and it is a classic way to formulate scheduling problems (Pritsker et al., 1969). It is applicable if there is a known time horizon for the schedule, preferably without idle times, and if the time units are a common divisor of all processing times. Time-indexed models are in general well suited for LP relaxations (Koné et al., 2013). Its principle is transferable to space. This lead to the term space-indexing, coined in Allen et al. (2012). We use the principle of space-indexing in the following to formulate a model for Pm. Remark 6.3. Box widths in Pm instances are given as rational numbers. Let lcd ∈ N be the lowest common denominator of all box widths wi,j , (i, j) ∈ J. Thus, all feasible box positions πi,j = k/lcd, k = 0, . . . , Π · lcd are expressed as a fraction of two integers, each of them pseudopolynomially bounded by unary encoded input size, i. e., by input values. However, we simplify the notation without loss of generality in the current and the following section by assuming integer box widths with lcd = 1. Our model introduces (Π + 1) · n binary variables xi,j,k , for job (i, j) ∈ J and box position k = 0, . . . , Π. A value of 1 for xi,j,k decides that the box for job (i, j) is placed at position k, i. e., sets πi,j = k. Other than the model in Section 6.4.1, we constrain completion times in a closed formula, which is split into several parts to later facilitate a certain Lagrangian relaxation. To calculate the completion time of a job (i, j) ∈ J, we introduce ωi,j as its walk time, and δi,j as the deviation between a job’s start time Ci,j−1 and box position πi,j . Following that, the space-indexed model of Pm is to Õ minimize ri Ci,ni (16a) i ∈M
subject to Ci,j =
Õ
li,j 0 + ωi,j 0 ,
(i, j) ∈ J,
(16b)
ωi,j ≥ −aδi,j ,
(i, j) ∈ J,
(16c)
ωi,j ≥ bδi,j ,
(i, j) ∈ J,
(16d)
j 0 =1,...,j
118
6 Box placement for multiple product variants
δi,j = Ci,j−1 − πi,j , Õ πi,j = xi,j,k k,
(i, j) ∈ J,
(16e)
(i, j) ∈ J,
(16f)
(i, j) ∈ J,
(16g)
k = 0, . . . , Π,
(16h)
(i, j) ∈ J, k = 0, . . . , Π.
(16i)
k=0,...,Π
Õ
xi,j,k = 1,
k=0,...,Π
Õ
xi,j,k 0 ≤ 1,
(i,j)∈J, k 0 =max{1, k−wi, j +1},...,k
xi,j,k ∈ {0, 1},
The completion time of each job is set in Constraint 16b as a sum of all preceding job’s processing time, where each is a sum of base length and walk time. As each walk time is minimized, it is constrained from below in both Constraint 16c and Constraint 16d, while at least one of them is nonnegative. The deviation from a job’s start time and its box position is set in Constraint 16e. Constraint 16f to Constraint 16i contain the spaceindexed box positioning. By Constraint 16g, each box is placed exactly once. Constraint 16h ensures there is no overlap of boxes. 6.5 Lower bound Utilizing Model 16, we devise a lower bound on the minimum objective value of Pm instances in order to construct a branch and bound algorithm. To evaluate its branch nodes, it also accepts partially solved instances. As in Section 5.6, a partial solution is given by a sequence S of fixed jobs JF ⊆ J. Í This (partial) sequence places their job’s boxes from 0 to F = (i,j)∈JF wi,j ; Í hence πi,j = (i,k)∈JF : S((i,k)) κi . Furthermore, we change each of the constraints to an equality constraint, introducing a linear slack variable yi,j ∈ [0, 1] instead. In summary, we Õ Õ minimize θ i,j ωi,j (20a) i ∈M j=1,...,ni
subject to ωi,j ≥ 0,
(i, j) ∈ J,
(20b)
(i, j) ∈ J, j ≤ κi ,
(20c)
(i, j) ∈ J, j > κi ,
(20d)
(i, j) ∈ J,
(20e)
0 qi,j = qi,j − Π + 1,
(i, j) ∈ J,
(20f)
0 ≤ yi,j ≤ 1,
(i, j) ∈ J,
(20g)
i ∈ M.
(20h)
Õ © 0 ª ωi,j = −ayi,j qi,j + ωi,j ® , j 0 =1,...,j−1 « ¬ Õ © ª ωi,j 0 = byi,j qi,j + ωi,j 0 ® , j 0 =1,...,j−1 ¬ Õ« qi,j = li,j 0 , j 0 =1,...,j−1
κi ∈ {0, . . . , ni },
A solution of the model is described by the values for the decision variables κi , i ∈ M, and yi,j , (i, j) ∈ J. Property 6.7. Given an arbitrary solution with κi , i ∈ M for an instance of the model in (20). Then, there exists asolution with equal objective value Í 0 0 where κi ≤ κi for all i ∈ M, and −a qi,j + j 0=1,...,j−1 ωi,j ≥ 0 for all (i, j) ∈ J with j ≤ κi0.
126
6 Box placement for multiple product variants
Proof. In any feasible solution, there is ωi,j ≥ 0 for j) ∈ J. Therefore, all (i, Í 0 for i ∈ M, there must be yi,j = 0 for j = κi if −a qi,j + j 0=1,...,j−1 ωi,j 0 < 0 in order to fulfill Constraint 20c. As a result, there is ωi,j = 0. In the solution with κi0 = κi − 1, the value of ωi,j is instead constrained by Constraint 20d, while all other constraints can remain the same. With yi,j = 0, we still get ωi,j = 0. Therefore, the objective remains unchanged. Repeating the steps above shows the property. Corollary 6.7, there exists an optimum solution 6.8. ÍFollowing Property 0 with −a qi,j + j 0=1,...,j−1 ωi,j ≥ 0 for all j ≤ κi , i ∈ M. Lemma 6.9. Given the minimum κi , for each i ∈ M, for which there exists an optimum solution. Then, there exists an optimum solution with κi , i ∈ M, where for each (i, j) ∈ J, yi,j = with ci,k =
0,
θ i,j +
1,
else
Í
k=j+1,...,ni
−a,
k ≤ κi ,
b,
else.
yi,k ci,k θ i,k > 0,
(21a)
(21b)
Proof. We begin with ni , the last job. Here, ! Õ
ωi,ni = yi,ni · ci,ni di,ni +
ωi,k
k=1,...,ni −1
|
{z ≥0
}
0 with ci,ni = b, di,ni = qi,ni if ni > κi , and ci,ni = −a, di,ni = qi,n if ni = κi . i The choice of κi implies with Corollary 6.8 that yi,ni is multiplied with a nonnegative value. Therefore, for any feasible yi,ni , we have ωi,ni ≥ 0. The resulting ωi,ni occurs in the objective’s sum term as θ i,ni ωi,ni . This is the sole influence of yi,ni on the objective. For a negative θ i,ni , the maximum possible value yi,ni = 1 decreases the objective. If θ i,ni > 0, choosing
6.5 Lower bound
127
yi,ni = 0 avoids an increase of the objective value. Hence, there exists an optimum solution where yi,ni ∈ {0, 1} as in Equation 21. We continue the induction with j + 1 7→ j. Here, we assume Equation 21 is valid for j + 1, . . . , ni . Again, Õ © ª ωi,j = yi,j · ci,j di,j + ωi,k ® k=1,...,j−1 « ¬ | {z } ≥0
0 otherwise. With with ci,j = b, di,j = qi,ni if j > κi , and ci,j = −a, di,j = qi,j the same argument as before, yi,j is multiplied with a nonnegative value, hence any choice yields a nonnegative result. In the objective however, ωi,j occurs not only as θ i,j ωi,j : Additionally, its value influences ωi,k for k = j + 1, . . . , ni if yi,k = 1. Each such ωi,k then consists of ci,k ωi,j . Hence, ωi,j influences the objective additively by
θ i,j · ωi,j +
Õ
θ i,k yi,k ci,k · ωi,j
k=j+1,...,ni
Õ © ª = θ i,j + yi,k ci,k θ i,k ® · ωi,j . k=j+1,...,ni « ¬ | {z } ιi, j
Hence the impact of ωi,j is linear with slope ιi,j . If ιi,j is positive, we set yi,j = 0, else yi,j = 1 in order to minimize the objective. Property 6.10. There exists an optimum solution where, for each i ∈ M, there is κi ≤ κi , where κi ∈ {0, . . . , ni } is the greatest value for which 0 −aqi,κ ≥ 0. i Proof. We see that Constraint 20c is Õ © 0 ª 0 ωi,j = −ayi,j qi,j + ωi,j ® ≤ −ayi,j qi,j j 0 =1,...,j−1 « ¬
128
6 Box placement for multiple product variants
for j < κi , as ωi,j 0 ≥ 0 for all (i, j 0) ∈ J. Moreover, 0 0 −ayi,j qi,j < 0 ⇐⇒ −aqi,j < 0. 0 , (i, j) ∈ J is nondecreasing with j for the same i. Therefore, The value qi,j 0 < 0 for some (i, k) ∈ J, there is −a q 0 + Í if −aqi,k 1. As a lower bound, we use a combinatorial lower bound and the Lagrangian based lower bound to evaluate the partial solution of a branch node. The combinatorial lower bound relaxes the restriction of placing boxes non-overlappingly by allowing an overlap of the boxes of all open jobs JO (as defined in Section 6.5) within the free space [F, Π] of the partial solution. This approach avoids any box interference of of different product variants and jobs, which permits a very fast lower bound algorithm as follows. We iteratively calculate for each product variant i ∈ M the completion time of each job j = 1, . . . , ni by placing its box at πi,j ; while setting πi,j = min{max{t, F}, Π} if j is in the set of , where t equals either the completion time of the preceding job or 0 if j = 1, and F equals the first free box place. Hence, the box of an open job is placed as close to the job’s start time as
130
6 Box placement for multiple product variants
Í the free space of the partial solution allows it. Then, i ∈M ri Ci,ni yields a lower bound on the optimum objective value. Secondly, we evaluate a given partial solution with the Lagrangian based lower bound of Section 6.5. Here, we use just one set of Lagrangian multipliers during the whole depth first search and as well during backtracking similar to Fisher (2004). This avoids a repeated long search for good multipliers. As well, we let the number of maximum subgradient search iterations depend on the number of open jobs |JO | in the evaluated partial solution: n = |JkOo|, we iterate at most 10n times; else, we iterate at most n j If p max 1, 4 · |JO | times. The exact dominance rule in Section 5.7.1 is not used, because it not viable to set an upper bound on the last completion time of single product variants. As a traversing order in the employed depth first search, it proved advantageous to rank children nodes increasingly by the lower bound’s πi,j values obtained in Section 6.5.4. 6.6.3 Truncated branch and bound algorithm The heuristic version of the branch and bound in Section 6.6.2 is similar to the truncated branch and bound in Section 5.8.4. As a notable change for m > 1, we replace the ID heuristic for ranking new nodes. Instead, we rank a new node by its appended job’s box position of the Lagrangian lower bound as obtained in Section 6.5.4. The heuristic dominance rule in Section 5.7.2 is applicable without any change except for the objective value calculation, which additionally weights the last completion times by production rate. 6.7 Numerical results In a numerical experiment, we quantitatively evaluate our algorithms for optimizing the box placement similar to Section 5.9, adding specifics for multiple product variants m > 1 .
6.7 Numerical results
131
6.7.1 Instance generation We generate instances for Pm by deriving from the generation scheme in Section 5.9.1. We generate several numbers of product variants, set each production rates, assign jobs to product variants, and harmonize all processing times. We keep the same instances for m = 1, hence the tests are comparable. As number of product variants m, we generate instances for m = 2, 4, 8. Together with the existing instances for m = 1, the number of test instances quadruples to 4 · 7 680 = 30 720, compared to 7 680 instances in Section 5.9.1. For each product variant i ∈ M = {1, . . . , m}, we set a positive production Í rate ri such that i ∈M ri = 1. The production rate for each product variant is generated analogously to the approach in Boysen et al. (2008b, 2009b,a). They assume a fixed number of production cycles PC. Starting with a unit demand of each product variant i ∈ M, they repeatedly select a product variant randomly with equal probability, and increase its demand by one, until the sum of demands reaches PC. This is equivalent to drawing (PC − m) values uniformly from M, counting the number of occurrences for each variant i ∈ M, and increasing each by one. Then, each number equates the respective demand. From this, we obtain a production rate ri , i ∈ M, by dividing the product variant’s demand by PC. In our instance generation, we assume a large PC = 1000m to smoothen quantization effects. Remark 6.11. We note that the method for demand generation by Boysen et al. (2008b, 2009b,a) asymptotically corresponds to a Poisson distribution for each product variant’s demand in the limit for a large number of production cycles PC, with mean λ = PC/m. Moreover, we distribute the total number of n jobs to the m product variants. Starting with one job for each product variant i ∈ M, we repeatedly select a product variant randomly with equal probability, then append one job to it, until all n jobs have been assigned to a product variant. As in Section 5.9.1, processing times and box widths are initially of different scale, hence it necessitates a harmonization. In the multiproduct
132
6 Box placement for multiple product variants
case here, the station length ideally equals the average last completion time, depicted by objective φ. As an intuitive good solution, we see the WNID heuristic in Section 6.6.1. Similar to Section 5.9.1 for harmonizing assembly times and box widths, we use the heuristic’s placement and scale the assembly times linearly by a positive rational factor such that it minimizes |φ − Π|, using the algorithm of Brent (1971) for finding a zero of a univariate function. As the WNID heuristic may then deliver another placement, we repeat this process with the new placement with a total of most ten times. 6.7.2 Test setup The test setup is identical to the setup in Section 5.9.2. 6.7.3 Exact algorithms In Table 11, measured MIP and B&B runtimes are grouped by instance size n, and by the number of product variants m. In detail for each m, n pair, the runtimes are listed Table 12. Irrespective of m, the runtime grows exponentially with n, as it is expected to happen for a strongly NP-complete problem. We see that the MIP with instances of size 20, solving only 60%, and 38% of size 24. The B&B is 68.93 times faster (median ratio). Hence, with m > 1, nearly all instances are still solved for n = 20. For n = 24, this is reduced to three quarters. The largest (n = 28) instances are however not adequately solved with either of them. Overall there are 25280 of all 26555 solved instances (82.29%) which B&B solves faster than MIP, and 1275 (4.15%) which MIP solves faster than B&B. Moreover, we observe that difficulty grows with m in both approaches. High quartile deviations for both algorithms show that difficulty between instances varies by a high degree. Note that the quartile deviation decreases for large runtimes because we terminate solving an instance after the time limit. Both algorithms agree on the difficulty of each instance, as there is a weak positive correlation of
133
B&B runtime in secs (logscale)
MIP runtime in secs (logscale)
6.7 Numerical results
103 102 101 100 10-1 10-2 10-3 8
12
16
20
24
103 102 101 100 10-1 10-2 10-3 8
28
12
16
20
24
28
n
n 103
MIP runtime in secs (logscale)
102
101
100
n 8 10-1
12 16 20
10-2
24 28
10-3 10-3
10-2
10-1
100
101
102
103
B&B runtime in seconds (logscale)
Figure 18: The exact MIP and B&B algorithm’s runtime in seconds for m = 4 is shown with a logarithmic scale in box plots and a scatter plot with a normal confidence ellipse at 95% for each n.
6 Box placement for multiple product variants
B&B runtime in secs (logscale)
MIP runtime in secs (logscale)
134
103 102 101 100 10-1 10-2 10-3 1 2
4
103 102 101 100 10-1 10-2 10-3 1 2
8
4
8
m
m 103
MIP runtime in secs (logscale)
102
101
100
10-1
m 1 2
10-2
4 8 -3
10
10-3
10-2
10-1
100
101
102
103
B&B runtime in seconds (logscale)
Figure 19: The exact MIP and B&B algorithm’s runtime in seconds for n = 24 is shown with a logarithmic scale in box plots and in a scatter plot with a normal confidence ellipse at 95% for each m.
6.7 Numerical results
135
28
MIP runtime in secs (logscale)
24
[600, ∞) [60, 600)
20
n
[6, 60) [0.6, 6)
16
[0.06, 0.6) [0.006, 0.06) [0, 0.006)
12
8
28
B&B runtime in secs (logscale)
24
[600, ∞) [60, 600)
20
n
[6, 60) [0.6, 6)
16
[0.06, 0.6) [0.006, 0.06) [0, 0.006)
12
8
1
2
4
8
m
Figure 20: MIP and B&B algorithm’s logarithmic runtimes grouped in intervals and shown in a product plot for each n and m pair.
136
6 Box placement for multiple product variants
Table 11: MIP, B&B runtime and performance. MIP
B&B
Md
QD
solved
Md
QD
solved
8 12 16 20 24 28
0.02 0.18 3.85 130.97 ≥ 600.00 ≥ 600.00
0.01 0.51 28.95 296.92 268.75 0.00
100% 100% 93% 60% 38% 24%
0.00 0.00 0.07 2.92 102.68 ≥ 600.00
0.00 0.00 0.10 4.79 206.65 179.78
100% 100% 100% 100% 81% 31%
m
Md
QD
solved
Md
QD
solved
1 2 4 8
0.45 11.29 31.50 125.40
8.81 299.94 299.89 299.11
89% 68% 63% 58%
0.01 0.33 0.56 1.45
0.19 43.62 68.66 159.36
97% 83% 82% 78%
9.17
299.92
69%
0.15
21.02
85%
n
all
Md: median runtime in seconds QD: quartile deviation in seconds solved: percentage of instances solved in 10 minutes
value 0.29 between MIP and B&B runtimes. See also Figure 18, Figure 19, and Figure 20. Let us break down the B&B performance in greater detail for one number of product variants and jobs, m = 4 and n = 24. Its median runtimes are shown for each (v, S) pair in Table 13, and each (L, W) pair in Table 14. Interestingly, the observed problem difficulties in the tables are different in the multiple product variants and in the single product variant case. With m = 1, the B&B is fastest for v = 2 and slowest for v = 16 (see Table 8). With m = 4, runtime peaks at v = 4 and is easiest for the fastest velocity v = 16. Furthermore, the walking strategy has no clear influence
6.7 Numerical results
137
Table 12: MIP, B&B runtime and performance for each n and m pair. MIP
B&B
m
n
Md
QD
solved
Md
QD
solved
8 8 8 8
1 2 4 8
0.01 0.02 0.02 0.25
0.00 0.00 0.01 0.25
100% 100% 100% 100%
0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00
100% 100% 100% 100%
12 12 12 12
1 2 4 8
0.04 0.14 0.26 3.16
0.03 0.16 0.36 8.59
100% 100% 100% 100%
0.00 0.00 0.00 0.01
0.00 0.00 0.00 0.00
100% 100% 100% 100%
16 16 16 16
1 2 4 8
0.28 3.73 7.42 108.27
0.65 7.78 28.35 237.00
100% 99% 97% 78%
0.00 0.08 0.10 0.21
0.00 0.09 0.09 0.19
100% 100% 100% 100%
20 20 20 20
1 2 4 8
2.11 137.76 ≥ 600.00 ≥ 600.00
10.76 291.78 283.02 249.71
95% 66% 49% 32%
0.04 3.16 4.19 8.68
0.13 5.08 3.93 9.40
100% 100% 100% 99%
24 24 24 24
1 2 4 8
21.18 ≥ 600.00 ≥ 600.00 ≥ 600.00
138.01 55.40 0.00 0.00
81% 27% 23% 22%
0.51 136.06 153.36 310.22
2.24 282.77 185.80 239.50
98% 75% 83% 69%
28 28 28 28
1 2 4 8
156.82 ≥ 600.00 ≥ 600.00 ≥ 600.00
296.42 0.00 0.00 0.00
58% 15% 8% 13%
6.62 ≥ 600.00 ≥ 600.00 ≥ 600.00
68.79 26.83 0.00 0.00
83% 26% 11% 3%
Md: median runtime in seconds QD: quartile deviation in seconds solved: percentage of instances solved in 10 minutes
138
6 Box placement for multiple product variants
Table 13: B&B median runtime in seconds for instance size n = 24, m = 4 is displayed for all pairs of walking velocity and walking strategy settings. Below is the mean walking time percentage MW in optimum solutions for different velocities.
S1 S2 MW
v=2
v=4
v=8
v = 16
210.55 216.01
271.23 262.37
131.73 108.52
68.81 87.39
36%
21%
11%
6%
Table 14: B&B median runtime in seconds for instance size n = 24, m = 4 is displayed for all pairs of assembly time and box width settings.
W1 W2 W3 W4
L1
L2
L3
L4
106.52 102.37 148.55 208.82
118.78 180.76 124.23 201.55
103.84 136.40 295.41 281.78
76.35 154.03 220.50 237.31
on runtimes. We suppose this is caused by the new type of lower bound. Similarly, it is indiscernible which of assembly time variants and box width cases are more difficult or easy. In the literature, material fetching accounts for about 10–15% of total work time (Scholl et al., 2013). Let us look at our most realistic choices for worker velocity, v = 16 and v = 8 (e.g., the case study in Klampfl et al. (2006) similarly assumes v = 13.6). Our results yield 6–11% mean walking time of total work time in optimal solutions, see Table 13. This is less than in the literature. Moreover, it is clearly less than for the single product variant case m = 1 which has 9–15% mean walking time in our experiments, see Table 8.
6.7 Numerical results
139
Table 15: Heuristics’ runtime and performance on instances that B&B solved in 10 minutes. WNID
HC
TrB&B
SA
SA+TrB&B
n
opt
MPE
opt
MPE
opt
MPE
opt
MPE
opt
MPE
8 12 16 20 24 28
16% 10% 7% 5% 5% 12%
52% 79% 99% 112% 116% 57%
86% 61% 46% 37% 36% 65%
1.91% 5.07% 6.93% 8.79% 9.07% 5.17%
94% 72% 54% 43% 42% 75%
0.44% 2.22% 4.32% 6% 4.79% 3.67%
95% 84% 73% 65% 60% 76%
0.7% 2.48% 2.75% 3.19% 3.01% 3.06%
98% 89% 78% 69% 65% 85%
0.11% 0.7% 1.39% 2.01% 2% 1.52%
m
opt
MPE
opt
MPE
opt
MPE
opt
MPE
opt
MPE
1 2 4 8
29% 2% 0% 1%
15% 100% 146% 108%
90% 41% 34% 47%
0.31% 10.98% 9.93% 4.4%
93% 66% 67% 74%
0.16% 7.13% 2.3% 0.42%
98% 46% 44% 55%
0.04% 7.92% 5.17% 1.38%
99% 72% 71% 78%
0.02% 3.28% 1.45% 0.32%
all
9%
89%
55%
6.18%
63%
3.52%
76%
2.44%
81%
1.23%
opt: MPE:
percentage of optimally solved instances mean percentage error to minimum walking time
6.7.4 Heuristics Results for the heuristics are shown in Table 15 and Table 16. They list median runtimes, fraction of optimally solved instances, and mean percentage error MPE, which is the mean of the percentage walking time error PE =
φ∗
−
φ − φ∗ Í
(i,j)∈J ri l j
· 100%,
(22)
comparing achieved and minimum weighted walking time, where φ∗ is the optimum and φ the heuristic’s objective. The WNID heuristic’s performance is poor in comparison, almost never achieving an optimum solution for m > 1. Nonetheless, an MPE of 89% is
140
6 Box placement for multiple product variants
Table 16: Heuristics’ runtime and performance for each n and m pair on instances that B&B solved in 10 minutes. WNID
HC
TrB&B
SA
SA+TrB&B
m
n
opt
MPE
opt
MPE
opt
MPE
opt
MPE
opt
MPE
8 8 8 8
1 2 4 8
54% 8% 0% 4%
9% 74% 86% 39%
97% 77% 70% 100%
0.21% 4.12% 3.31% 0%
100% 87% 88% 100%
0.01% 1.23% 0.54% 0%
98% 91% 92% 100%
0.1% 2.35% 0.34% 0%
100% 96% 97% 100%
0.01% 0.36% 0.07% 0%
12 12 12 12
1 2 4 8
38% 1% 0% 0%
13% 97% 125% 82%
95% 49% 44% 54%
0.27% 10.17% 7.4% 2.46%
99% 57% 60% 73%
0.02% 5.7% 2.72% 0.44%
97% 75% 79% 85%
0.14% 8.52% 1.14% 0.11%
100% 82% 85% 91%
0.01% 2.16% 0.56% 0.06%
16 16 16 16
1 2 4 8
27% 0% 0% 0%
16% 108% 156% 117%
91% 33% 28% 35%
0.43% 11.95% 10.9% 4.42%
98% 36% 36% 45%
0.09% 9.37% 6.32% 1.5%
94% 63% 65% 69%
0.27% 7.95% 2.39% 0.38%
99% 68% 69% 74%
0.05% 3.85% 1.4% 0.26%
20 20 20 20
1 2 4 8
21% 0% 0% 0%
17% 113% 173% 147%
86% 25% 16% 20%
0.41% 13.39% 13.49% 7.85%
98% 27% 20% 27%
0.05% 10.82% 9.78% 3.3%
92% 55% 54% 60%
0.17% 8.34% 3.46% 0.78%
99% 60% 56% 62%
0.01% 4.75% 2.57% 0.7%
24 24 24 24
1 2 4 8
17% 0% 0% 0%
18% 113% 187% 174%
84% 20% 11% 15%
0.26% 14.26% 15.13% 8.7%
97% 22% 14% 19%
0.04% 11.98% 6.38% 1.81%
89% 47% 44% 51%
0.11% 7.14% 4.36% 0.99%
98% 51% 46% 54%
0.02% 4.99% 2.75% 0.68%
28 28 28 28
1 2 4 8
18% 0% 0% 0%
16% 100% 224% 207%
85% 24% 23% 25%
0.24% 18.17% 12.71% 3.14%
98% 28% 24% 30%
0.04% 13.81% 7.97% 1.8%
88% 47% 55% 57%
0.15% 12.37% 3.96% 0.53%
99% 54% 58% 65%
0.02% 6.08% 2.58% 0.3%
opt: MPE:
percentage of optimally solved instances mean percentage error to minimum walking time
not excessive. It is well suited as input for the hill climbing search HC. After applying the HC, the number of optimally solved instances raises to 55% while the MPE drops to 6.18%. Moreover, the median computation time is
6.8 Conclusion
141
still not measurable. Further improvement is obtained with the simulated annealing search SA, which temporarily allows worse solutions. Here, the number of solved instances increases to 76% and the MPE decreases to 2.44%. The SA increases median computation time for the largest instance size n = 28 to 0.032 seconds. The TrB&B heuristic’s performance is reported for the HC’s, and the SA’s objective as an initial upper bound, respectively. We see that TrB&B improves both respective solutions by a fairly large amount. In the HC’s case, it now optimally solves 63% instances, reducing the MPE to 3.52% with a median runtime of 0.004 seconds for the largest instances n = 28. In the SA’s case, it optimally solves 81% instances, with a further reduced MPE of 1.23% and a total (SA and TrB&B) runtime of 0.047 seconds. Hence, although the TrB&B heuristic is more elaborate to implement, it allows to improve all of the other solutions by a great amount with a moderate increase in runtime. These results suggest that this effort is worthwhile. 6.8 Conclusion In this chapter, we consider the intermixed placement of boxes for multiple product variants such that it minimizes worker walking time at a moving conveyor line. We show that this optimization problem is NP-hard in the strong sense. Modeling it as a space-indexed mixed integer program allows us to construct a lower bound by Lagrangian relaxation. This bound can be obtained efficiently because it consists of several polynomial time problems. With this, we devise a branch and bound search to place the boxes and minimize the walking time of all given product variants. A truncated search tree yields a heuristic. For comparison, we construct a mixed integer program that employs disjunctive constraints to avoid box overlapping. Also, we extend the greedy heuristic of the single product variant case. Numerical results show that the branch and bound based algorithms provide superior runtime and quality compared to solving mixed integer programming models (for exact solutions) and metaheuristic approaches (for heuristic solutions).
Part IV Conclusion
7 Conclusion
7.1 Conclusion In this work, we find that optimizing walking time with time-dependent paths to moving destinations is a computationally challenging problem. In most cases, it turns out as NP-hard, and we find edge cases that are polynomial or permit fully polynomial time approximation. Nonetheless, it is possible to establish fast exact search algorithms. These are enabled by the construction of effective bounds and dominance relations, emerging from a thorough analysis of innate properties of the respective problem variants. Our experiments show that the algorithms not only outperform standard approaches, but allow to solve large instances with artificial and real world structures in a small timeframe. In practice, this is a leap forward to enable a computationally assisted optimization of walking time at moving assembly lines. Firstly, our real world model and dedicated algorithms allow to predict the self-optimization potential of workers of reordering assigned assembly operations. Here, our exact algorithm outperforms heuristic approaches even though it is an NP-hard problem. Secondly, we model the optimization of the line side part placement to minimize walking time. Here, we consider a single product variant assembly, and the assembly of different product variants, which requires an intermix of part containers. This setting turns out as computationally more demanding, but our exact and heuristic algorithms are nonetheless able to consistently outperform other approaches. Concluding, the positive results on the core problems show our models and algorithms provide a means for a computationally assisted planning of moving assembly lines that minimizes time-dependent walking time. © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020 H. A. Sedding, Time-Dependent Path Scheduling, https://doi.org/10.1007/978-3-658-28415-2_7
146
7 Conclusion
7.2 Future steps The assessment of model assumptions (see Section 2.3) shows that the constructed model is already close to reality. As it is nonetheless relatively generic, it should also be suited for derivation and extension in several directions. Hence, by shifting the assumptions we suppose it is feasible to expand our models’ applicability to further real world scenarios. Such a change may even lead to a shorter walking time: For example, if it is allowed to alter the sequence of assembly operations (removing Assumption A2) in addition to optimizing the part container placement in a holistic optimization, the alignment of job start times and container positions can improve. For this, one can couple the sequence and placement optimizations to a combined, holistic optimization of both. A first result in this direction is the polynomial time algorithm in Section 4.6 that is suitable to optimize both the position of one common part container and the operation sequence at the same time, as stated in Remark 4.11. As both optimization steps are often still separated in practice, much is gained as well by extending each of the optimizations on its own. (a) In optimizing the sequence of operations, a common constraint in assembly line balancing is to respect a precedence graph for all operations. If such a graph is available, this greatly reduces the number of feasible sequences by removing Assumption A19. The described branch and bound search can be extended to respect the graph accordingly. Its lower bound and dominance checks apply without modifications except for the first, job swapping test. By additionally considering the precedence constraints, one has the option to strengthen the lower bound, and as well reinstate the job swapping dominance check. (b) The placement optimization in its current form cannot consider sharing of part containers between operations. This applies either if the same container is visited multiple times within one cycle (Assumption A17), or if the same container is shared between product variants (Assumption A18). We assume that the lower bounds for the single
7.2 Future steps
147
and the multiple product variant case can be extended to accomodate both forms of box sharing to enable quick branch and bound based algorithms. We see further relevant extensions that apply to both problem settings. For example, we currently assume that all parts for an operation are stored in a single part container, although this can be a bundle of several smaller containers (Assumption A8). However, if these containers are spread along the line for operations with a low production rate, the space may be better utilized. This also harmonizes well with the above extension of sharing part containers between several operations. The increased detail, however, usually requires to also consider how smaller containers are stacked within shelves. Although such an optimization can reduce the spatial requirements as well as the walking time, it requires to find a two dimensional rectangle packing problem. Currently, the walking time calculation assumes an insignificantly small picking time at the part container (Assumption A15). This is incorrect if the worker needs to pick many parts. Incorporating a picking time is possible, although it yields a piecewise linear walking time function of more than two pieces. In this spirit, one may as well allow assembly at multiple work points at the workpiece, e. g., at the front and at the rear corner of a larger car (Assumption A7). A further pivot is taken if one considers part containers that are placed further apart from the line side. Then, the offset can become significant enough to warrant a two dimensional Euclidean distance calculation, breaking Assumption A13, which entails a nonlinear walking time calculation. In summary, there is a variety of extensions to our assumptions. We recognize several derivations of or models, and translations of our insights to more complex scenarios.
8 Summary of major contributions
Let us summarize the major contributions for computationally minimizing walking time at moving assembly lines in two groups: optimization of (a) the sequence of assembly operations, and (b) the line side placement of parts. 8.1 Sequencing assembly operations With respect to (a), we introduce a model that abstracts reality such that it still contains the quintessential problem, but allows for an efficient computational optimization. The main contribution of this model is: • Introduction of the first walking time optimization model along continously moving conveyor lines. Chapter 2 • Establishment of walking time optimization in the field of time-dependent scheduling. Within this field, we are among the first to study nonmonotonic processing time functions. Section 3.3.3 • Our results on this novel problem in several variants define a clear border on its computational complexity, especially in relation to timedependent scheduling problems in the literature. Chapter 4 Major results with respect to this model are: • Two polynomial cases, where time-dependence of operations is converted into position-dependency. Then, assignment of operations to positions is solved by a sorting criterion. Section 3.4 © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020 H. A. Sedding, Time-Dependent Path Scheduling, https://doi.org/10.1007/978-3-658-28415-2_8
150
8 Summary of major contributions
• An effective lower bound and a dominance rule, which are utilized in a branch and bound search. Numerical experiments show that this outperforms mixed integer programming by far, completely eliminating the need for heuristic solutions. Chapter 3 And for the variant with one common part container for all operations: • A proof of NP-hardness by reduction from Even Odd Partition (cf. Sedding (2018a,b)). Section 4.3 • A fully polynomial time approximation scheme that employs a trimming-the-state-space technique (cf. Sedding (2017a)). Section 4.5 • A polynomial time algorithm if the global start time is relaxed, by transformation to a weighted bipartite matching problem. Section 4.6 8.2 Line side placement With respect to the line side optimization of part boxes (b), we as well define a quintessential model that allows for derivation in several directions. The formulated model builds on the related scheduling model in (a). Regarding this model, our main contributions (cf. Sedding (2017b, 2019)) are: • Two polynomial cases of placing boxes which are solved by a sorting criterium and used in a lower bound. Section 5.4; Section 5.6 • Proof of NP-hardness in the strong sense by reduction from Three Partition. Section 5.5 • A dominance rule to compare partial placements with swapped adjacent boxes. Moreover, a heuristic dominance rule for comparing placements with arbitrary replacement of the same set of boxes. Section 5.4 • A branch and bound search and a heuristic version, which perform far better than mixed integer programming and metaheuristic approaches regarding runtime and solution quality. Section 5.8; Section 5.9
8.2 Line side placement
151
• Provision of a heuristic sorting criterion. Our evaluation reports a good performance on a majority of instances. Section 5.8; Section 5.9 This model is extended to the widespread production system of modelmix assembly of several product variants. Here, our main contributions are: • Proof of NP-hardness in the strong sense by reduction from Three Partition for two or more product variants. Section 6.3 • A lower bound that solves a Lagrangian relaxed space-indexed mixed integer formulation exactly by means of solving a number of subproblems with polynomial algorithms. Section 6.5 • A branch and bound search which fares favorably compared to mixed integer programming. Its heuristic version considerably improves solutions of metaheuristic approaches. Section 6.6; Section 6.7
Bibliography
B.-H. Ahn, J.-Y. Shin, Vehicle-Routeing with Time Windows and TimeVarying Congestion, The Journal of the Operational Research Society 42 (5) (1991) 393–400, doi:10.2307/2583752. . . . . . . . . . . . 11 S. Akpınar, A. Baykasoğlu, Modeling and Solving Mixed-Model Assembly Line Balancing Problem with Setups. Part I: A Mixed Integer Linear Programming Model, Journal of Manufacturing Systems 33 (1) (2014a) 177–187, doi:10.1016/j.jmsy.2013.11.004. . . . . . . . . . . . . . 26 S. Akpınar, A. Baykasoğlu, Modeling and Solving Mixed-Model Assembly Line Balancing Problem with Setups. Part II: A Multiple Colony Hybrid Bees Algorithm, Journal of Manufacturing Systems 33 (4) (2014b) 445– 461, doi:10.1016/j.jmsy.2014.04.001. . . . . . . . . . . . . . . . . 26 S. Akpınar, A. Elmi, T. Bektas, Combinatorial Benders Cuts for Assembly Line Balancing Problems with Setups, European Journal of Operational Research 259 (2) (2017) 527–537, doi:10.1016/j.ejor.2016.11.001. . 26 S. Akpınar, G. Mirac Bayhan, A. Baykasoğlu, Hybridizing Ant Colony Optimization via Genetic Algorithm for Mixed-Model Assembly Line Balancing Problem with Sequence Dependent Setup Times between Tasks, Applied Soft Computing 13 (1) (2013) 574–589, doi:10.1016/j.asoc.2012. 07.024. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 A. S. Alfa, A Heuristic Algorithm for the Travelling Salesman Problem with Time-Varying Travel Costs, Engineering Optimization 12 (4) (1987) 325–338, doi:10.1080/03052158708941106. . . . . . . . . . . . . 11
B. Alidaee, N. K. Womer, Scheduling with Time Dependent Processing Times: Review and Extensions, The Journal of the Operational Research Society 50 (7) (1999) 711–720, doi:10.2307/3010325.

S. D. Allen, E. K. Burke, J. Mareček, A Space-Indexed Formulation of Packing Boxes into a Larger Box, Operations Research Letters 40 (1) (2012) 20–24, doi:10.1016/j.orl.2011.10.008.

C. Andrés, C. Miralles, R. Pastor, Balancing and Scheduling Tasks in Assembly Lines with Sequence-Dependent Setup Times, European Journal of Operational Research 187 (3) (2008) 1212–1223, doi:10.1016/j.ejor.2006.07.044.

A. L. Arcus, COMSOAL – A Computer Method of Sequencing Operations for Assembly Lines, International Journal of Production Research 4 (4) (1965) 259–277, doi:10.1080/00207546508919982.

K. R. Baker, G. D. Scudder, Sequencing with Earliness and Tardiness Penalties: A Review, Operations Research 38 (1) (1990) 22–36, doi:10.2307/171295.

O. Battaïa, A. Dolgui, A Taxonomy of Line Balancing Problems and Their Solution Approaches, International Journal of Production Economics 142 (2) (2013) 259–277, doi:10.1016/j.ijpe.2012.10.020.

J. Bautista, J. Pereira, Ant Algorithms for a Time and Space Constrained Assembly Line Balancing Problem, European Journal of Operational Research 177 (3) (2007) 2016–2032, doi:10.1016/j.ejor.2005.12.017.

İ. Baybars, A Survey of Exact Algorithms for the Simple Assembly Line Balancing Problem, Management Science 32 (8) (1986) 909–932, doi:10.1287/mnsc.32.8.909.

J. Beasley, Adapting the Savings Algorithm for Varying Inter-Customer Travel Times, Omega 9 (6) (1981) 658–659, doi:10.1016/0305-0483(81)90055-4.
C. Becker, A. Scholl, Balancing Assembly Lines with Variable Parallel Workplaces: Problem Definition and Effective Solution Procedure, European Journal of Operational Research 199 (2) (2009) 359–374, doi:10.1016/j.ejor.2008.11.051.

J. Błażewicz, M. Dror, J. Węglarz, Mathematical Programming Formulations for Machine Scheduling: A Survey, European Journal of Operational Research 51 (3) (1991) 283–300, doi:10.1016/0377-2217(91)90304-E.

N. Boysen, S. Emde, M. Hoeck, M. Kauderer, Part Logistics in the Automotive Industry: Decision Problems, Literature Review and Research Agenda, European Journal of Operational Research 242 (1) (2015) 107–120, doi:10.1016/j.ejor.2014.09.065.

N. Boysen, M. Fliedner, A. Scholl, Assembly Line Balancing: Which Model to Use When?, International Journal of Production Economics 111 (2) (2008a) 509–528, doi:10.1016/j.ijpe.2007.02.026.

N. Boysen, M. Fliedner, A. Scholl, Sequencing Mixed-Model Assembly Lines to Minimize Part Inventory Cost, OR Spectrum 30 (3) (2008b) 611–633, doi:10.1007/s00291-007-0095-2.

N. Boysen, M. Fliedner, A. Scholl, Level Scheduling for Batched JIT Supply, Flexible Services and Manufacturing Journal 21 (1–2) (2009b) 31–50, doi:10.1007/s10696-009-9058-z.

N. Boysen, M. Fliedner, A. Scholl, Level Scheduling of Mixed-Model Assembly Lines under Storage Constraints, International Journal of Production Research 47 (10) (2009a) 2669–2684, doi:10.1080/00207540701725067.

N. Boysen, A. Scholl, N. Wopperer, Resequencing of Mixed-Model Assembly Lines: Survey and Research Agenda, European Journal of Operational Research 216 (3) (2012) 594–604, doi:10.1016/j.ejor.2011.08.009.
R. P. Brent, An Algorithm with Guaranteed Convergence for Finding a Zero of a Function, The Computer Journal 14 (4) (1971) 422–425, doi:10.1093/comjnl/14.4.422.

S. Browne, U. Yechiali, Scheduling Deteriorating Jobs on a Single Processor, Operations Research 38 (3) (1990) 495–498, doi:10.1287/opre.38.3.495.

P. Brucker, Scheduling Algorithms, Springer, Berlin and Heidelberg, doi:10.1007/978-3-540-69516-5, 2007.

Y. Bukchin, R. D. Meller, A Space Allocation Algorithm for Assembly Line Components, IIE Transactions 37 (1) (2005) 51–61, doi:10.1080/07408170590516854.

K. Bülbül, P. Kaminsky, C. Yano, Preemption in Single Machine Earliness/Tardiness Scheduling, Journal of Scheduling 10 (4–5) (2007) 271–292, doi:10.1007/s10951-007-0028-6.

J.-Y. Cai, P. Cai, Y. Zhu, On a Scheduling Problem of Time Deteriorating Jobs, Journal of Complexity 14 (2) (1998) 190–209, doi:10.1006/jcom.1998.0473.

P. C. Chang, A Branch and Bound Approach for Single Machine Scheduling with Earliness and Tardiness Penalties, Computers & Mathematics with Applications 37 (10) (1999) 133–144, doi:10.1016/S0898-1221(99)00130-3.

T. C. E. Cheng, Q. Ding, M. Y. Kovalyov, A. Bachman, A. Janiak, Scheduling Jobs with Piecewise Linear Decreasing Processing Times, Naval Research Logistics 50 (6) (2003) 531–554, doi:10.1002/nav.10073.

T. C. E. Cheng, Q. Ding, B. M.-T. Lin, A Concise Survey of Scheduling with Time-Dependent Processing Times, European Journal of Operational Research 152 (1) (2004a) 1–13, doi:10.1016/S0377-2217(02)00909-8.
T. C. E. Cheng, M. C. Gupta, Survey of Scheduling Research Involving Due Date Determination Decisions, European Journal of Operational Research 38 (2) (1989) 156–166, doi:10.1016/0377-2217(89)90100-8.

T. C. E. Cheng, L. Kang, C. T. Ng, Due-Date Assignment and Single Machine Scheduling with Deteriorating Jobs, Journal of the Operational Research Society 55 (2) (2004b) 198–203, doi:10.1057/palgrave.jors.2601681.

J. Du, J. Y.-T. Leung, Minimizing Total Tardiness on One Machine Is NP-Hard, Mathematics of Operations Research 15 (3) (1990) 483–495, doi:10.1287/moor.15.3.483.

M. H. Farahani, L. Hosseini, Minimizing Cycle Time in Single Machine Scheduling with Start Time-Dependent Processing Times, The International Journal of Advanced Manufacturing Technology 64 (9) (2013) 1479–1486, doi:10.1007/s00170-012-4116-1.

C. Finnsgård, C. Wänström, L. Medbo, W. P. Neumann, Impact of Materials Exposure on Assembly Workstation Performance, International Journal of Production Research 49 (24) (2011) 7253–7274, doi:10.1080/00207543.2010.503202.

M. L. Fisher, The Lagrangian Relaxation Method for Solving Integer Programming Problems, Management Science 50 (12 Supplement) (2004) 1861–1871, doi:10.1287/mnsc.1040.0263.

H. Ford, S. Crowther, My Life and Work, Doubleday, Page & Co., Garden City, NY, 1922.

T. D. Fry, R. D. Armstrong, J. H. Blackstone, Minimizing Weighted Absolute Deviation in Single Machine Scheduling, IIE Transactions 19 (4) (1987) 445–450, doi:10.1080/07408178708975418.

T. D. Fry, R. D. Armstrong, K. Darby-Dowman, P. R. Philipoom, A Branch and Bound Procedure to Minimize Mean Absolute Lateness on a Single Processor, Computers & Operations Research 23 (2) (1996) 171–182, doi:10.1016/0305-0548(95)00008-A.

M. R. Garey, D. S. Johnson, Computers and Intractability – A Guide to the Theory of NP-Completeness, Series of Books in the Mathematical Sciences, W.H. Freeman, San Francisco, 1979.

M. R. Garey, R. E. Tarjan, G. T. Wilfong, One-Processor Scheduling with Symmetric Earliness and Tardiness Penalties, Mathematics of Operations Research 13 (2) (1988) 330–348, doi:10.2307/3689828.

S. Gawiejnowicz, Time-Dependent Scheduling, Monographs in Theoretical Computer Science, Springer, Berlin, Heidelberg, doi:10.1007/978-3-540-69446-5, 2008.

S. Gawiejnowicz, L. Pankowska, Scheduling Jobs with Varying Processing Times, Information Processing Letters 54 (3) (1995) 175–178, doi:10.1016/0020-0190(95)00009-2.

M. Gendreau, G. Ghiani, E. Guerriero, Time-Dependent Routing Problems: A Review, Computers & Operations Research 64 (2015) 189–197, doi:10.1016/j.cor.2015.06.001.

V. S. Gordon, J.-M. Proth, C. Chu, Due Date Assignment and Scheduling: SLK, TWK and Other Due Date Assignment Models, Production Planning & Control 13 (2) (2002b) 117–132, doi:10.1080/09537280110069621.

V. S. Gordon, J.-M. Proth, C. Chu, A Survey of the State-of-the-Art of Common Due Date Assignment and Scheduling Research, European Journal of Operational Research 139 (1) (2002a) 1–25, doi:10.1016/S0377-2217(01)00181-3.

V. S. Gordon, J.-M. Proth, V. A. Strusevich, Scheduling with Due Date Assignment, in: J. Y.-T. Leung (Ed.), Handbook of Scheduling: Algorithms, Models, and Performance Analysis, Chapman & Hall/CRC, Boca Raton, FL, doi:10.1201/9780203489802, 2004.
V. S. Gordon, V. A. Strusevich, A. Dolgui, Scheduling with Due Date Assignment under Special Conditions on Job Processing, Journal of Scheduling 15 (4) (2012) 447–456, doi:10.1007/s10951-011-0240-2.

I. Gorlach, O. Wessel, Optimal Level of Automation in the Automotive Industry, Engineering Letters 16 (1) (2008) 141–149.

R. L. Graham, E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan, Optimization and Approximation in Deterministic Sequencing and Scheduling: A Survey, Annals of Discrete Mathematics 5 (1979) 287–326, doi:10.1016/S0167-5060(08)70356-X.

J. N. D. Gupta, S. K. Gupta, Single Facility Scheduling with Nonlinear Processing Times, Computers & Industrial Engineering 14 (4) (1988) 387–393, doi:10.1016/0360-8352(88)90041-1.

O. Gusikhin, E. Klampfl, G. Rossi, C. Aguwa, G. Coffman, T. Martinak, E-Workcell: A Virtual Reality Web-Based Decision Support System for Assembly Line Planning, in: M. Piattini, J. Filipe, J. Braz (Eds.), Enterprise Information Systems IV, Kluwer Academic Publishers, Hingham, MA, 4–10, 2003.

N. G. Hall, W. Kubiak, S. P. Sethi, Earliness-Tardiness Scheduling Problems, II: Deviation of Completion Times About a Restrictive Common Due Date, Operations Research 39 (5) (1991) 847–856, doi:10.1287/opre.39.5.847.

G. H. Hardy, J. E. Littlewood, G. Pólya, Inequalities, Cambridge University Press, Cambridge, UK, 1923.

M. Held, R. M. Karp, A Dynamic Programming Approach to Sequencing Problems, Journal of the Society for Industrial and Applied Mathematics 10 (1) (1962) 196–210, doi:10.1137/0110015.

M. Held, P. Wolfe, H. P. Crowder, Validation of Subgradient Optimization, Mathematical Programming 6 (1) (1974) 62–88, doi:10.1007/BF01580223.
C. S. Helvig, G. Robins, A. Zelikovsky, The Moving-Target Traveling Salesman Problem, Journal of Algorithms 49 (1) (2003) 153–174, doi:10.1016/S0196-6774(03)00075-0.

K. I.-J. Ho, J. Y.-T. Leung, W.-D. Wei, Complexity of Scheduling Tasks with Time-Dependent Execution Times, Information Processing Letters 48 (6) (1993) 315–320, doi:10.1016/0020-0190(93)90175-9.

J. A. Hoogeveen, S. L. van de Velde, Scheduling around a Small Common Due Date, European Journal of Operational Research 55 (2) (1991) 237–242, doi:10.1016/0377-2217(91)90228-N.

J. A. Hoogeveen, S. L. van de Velde, A Branch-and-Bound Algorithm for Single-Machine Earliness–Tardiness Scheduling with Idle Time, INFORMS Journal on Computing 8 (4) (1996) 402–412, doi:10.1287/ijoc.8.4.402.

O. H. Ibarra, C. E. Kim, Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems, Journal of the ACM 22 (4) (1975) 463–468, doi:10.1145/321906.321909.

F. Jaehn, H. A. Sedding, Scheduling with Time-Dependent Discrepancy Times, Journal of Scheduling 19 (6) (2016) 737–757, doi:10.1007/s10951-016-0472-2.

M. Ji, T. C. E. Cheng, An FPTAS for Scheduling Jobs with Piecewise Linear Decreasing Processing Times to Minimize Makespan, Information Processing Letters 102 (2–3) (2007) 41–47, doi:10.1016/j.ipl.2006.11.014.

J. Józefowska, Just-in-Time Scheduling – Models and Algorithms for Computer and Manufacturing Systems, vol. 106 of International Series in Operations Research & Management Science, Springer, New York, doi:10.1007/978-0-387-71718-0, 2007.
H. G. Kahlbacher, Scheduling with Monotonous Earliness and Tardiness Penalties, European Journal of Operational Research 64 (2) (1993) 258–277, doi:10.1016/0377-2217(93)90181-L.

P. Kaminsky, D. S. Hochbaum, Due Date Quotation Models and Algorithms, in: J. Y.-T. Leung (Ed.), Handbook of Scheduling: Algorithms, Models, and Performance Analysis, Chapman & Hall/CRC, Boca Raton, FL, doi:10.1201/9780203489802, 2004.

J. J. Kanet, Minimizing the Average Deviation of Job Completion Times about a Common Due Date, Naval Research Logistics Quarterly 28 (4) (1981) 643–651, doi:10.1002/nav.3800280411.

J. J. Kanet, V. Sridharan, Scheduling with Inserted Idle Time: Problem Taxonomy and Literature Review, Operations Research 48 (1) (2000) 99–110, doi:10.1287/opre.48.1.99.12447.

J. J. Kanet, C. E. Wells, An Examination of Job Interchange Relationships and Induction-Based Proofs in Single Machine Scheduling, Annals of Operations Research 253 (1) (2017) 345–351, doi:10.1007/s10479-016-2289-y.

A. B. Keha, K. Khowala, J. W. Fowler, Mixed Integer Programming Formulations for Single Machine Scheduling Problems, Computers & Industrial Engineering 56 (1) (2009) 357–367, doi:10.1016/j.cie.2008.06.008.

H. Kellerer, V. A. Strusevich, Minimizing Total Weighted Earliness-Tardiness on a Single Machine around a Small Common Due Date: An FPTAS Using Quadratic Knapsack, International Journal of Foundations of Computer Science 21 (3) (2010) 357–383, doi:10.1142/S0129054110007301.

Y.-D. Kim, C. A. Yano, Minimizing Mean Tardiness and Earliness in Single-Machine Scheduling Problems with Unequal Due Dates, Naval Research Logistics (NRL) 41 (7) (1994) 913–933, doi:10.1002/1520-6750(199412)41:73.0.CO;2-A.

S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by Simulated Annealing, Science 220 (4598) (1983) 671–680, doi:10.1126/science.220.4598.671.

E. Klampfl, O. Gusikhin, G. Rossi, Optimization of Workcell Layouts in a Mixed-Model Assembly Line Environment, International Journal of Flexible Manufacturing Systems 17 (4) (2006) 277–299, doi:10.1007/s10696-006-9029-6.

H. Klindworth, C. Otto, A. Scholl, On a Learning Precedence Graph Concept for the Automotive Industry, European Journal of Operational Research 217 (2) (2012) 259–269, doi:10.1016/j.ejor.2011.09.024.

O. Koné, C. Artigues, P. Lopez, M. Mongeau, Comparison of Mixed Integer Linear Programming Models for the Resource-Constrained Project Scheduling Problem with Consumption and Production of Resources, Flexible Services and Manufacturing Journal 25 (1) (2013) 25–47, doi:10.1007/s10696-012-9152-5.

A. V. Kononov, On Schedules of a Single Machine Jobs with Processing Times Nonlinear in Time, Discrete Analysis and Operational Research 391 (1997) 109–122, doi:10.1007/978-94-011-5678-3_10.

C. Koulamas, The Single-Machine Total Tardiness Scheduling Problem: Review and Extensions, European Journal of Operational Research 202 (1) (2010) 1–7, doi:10.1016/j.ejor.2009.04.007.

M. Y. Kovalyov, W. Kubiak, A Fully Polynomial Approximation Scheme for Minimizing Makespan of Deteriorating Jobs, Journal of Heuristics 3 (4) (1998) 287–297, doi:10.1023/A:1009626427432.

M. Y. Kovalyov, W. Kubiak, A Fully Polynomial Approximation Scheme for the Weighted Earliness-Tardiness Problem, Operations Research 47 (5) (1999) 757–761, doi:10.1287/opre.47.5.757.
A. Kramer, A. Subramanian, A Unified Heuristic and an Annotated Bibliography for a Large Class of Earliness-Tardiness Scheduling Problems, Journal of Scheduling 22 (1) (2019) 21–57, doi:10.1007/s10951-017-0549-6.

W. Kubiak, S. L. van de Velde, Scheduling Deteriorating Jobs to Minimize Makespan, Naval Research Logistics 45 (5) (1998) 511–523, doi:10.1002/(SICI)1520-6750(199808)45:53.0.CO;2-6.

J. B. Lasserre, M. Queyranne, Generic Scheduling Polyhedra and a New Mixed-Integer Formulation for Single-Machine Scheduling, in: E. Balas, G. Cornuéjols, R. Kannan (Eds.), 2nd Conference on Integer Programming and Combinatorial Optimization, Carnegie Mellon University, Pittsburgh, 136–149, 1992.

E. L. Lawler, A “Pseudopolynomial” Algorithm for Sequencing Jobs to Minimize Total Tardiness, Annals of Discrete Mathematics 1 (1977) 331–342, doi:10.1016/S0167-5060(08)70742-8.

E. L. Lawler, A Fully Polynomial Approximation Scheme for the Total Tardiness Problem, Operations Research Letters 1 (6) (1982) 207–208, doi:10.1016/0167-6377(82)90022-0.

E. L. Lawler, J. M. Moore, A Functional Equation and Its Application to Resource Allocation and Sequencing Problems, Management Science 16 (1) (1969) 77–84, doi:10.1287/mnsc.16.1.77.

C. Malandraki, M. S. Daskin, Time Dependent Vehicle Routing Problems: Formulations, Properties and Heuristic Algorithms, Transportation Science 26 (3) (1992) 185–200, doi:10.1287/trsc.26.3.185.

L. Martino, R. Pastor, Heuristic Procedures for Solving the General Assembly Line Balancing Problem with Setups, International Journal of Production Research 48 (6) (2010) 1787–1804, doi:10.1080/00207540802577979.
H. B. Maynard, G. J. Stegemerten, J. L. Schwab, Methods-Time Measurement, McGraw-Hill, New York, 1948.

O. I. Melnikov, Y. M. Shafransky, Parametric Problem in Scheduling Theory, Cybernetics 15 (3) (1979) 352–357, doi:10.1007/BF01075095.

E. Nazarian, J. Ko, H. Wang, Design of Multi-Product Manufacturing Lines with the Consideration of Product Change Dependent Inter-Task Times, Reduced Changeover and Machine Flexibility, Journal of Manufacturing Systems 29 (1) (2010) 35–46, doi:10.1016/j.jmsy.2010.08.001.

P. S. Ow, T. E. Morton, The Single Machine Early/Tardy Problem, Management Science 35 (2) (1989) 177–191, doi:10.1287/mnsc.35.2.177.

R. Pastor, C. Andrés, C. Miralles, Corrigendum to “Balancing and Scheduling Tasks in Assembly Lines with Sequence-Dependent Setup” [European Journal of Operational Research 187 (3) (2008) 1212–1223], European Journal of Operational Research 201 (1) (2010) 336, doi:10.1016/j.ejor.2009.02.019.

M. L. Pinedo, Scheduling: Theory, Algorithms, and Systems, Springer, doi:10.1007/978-3-319-26580-3, 2016.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, 1992.

A. A. B. Pritsker, L. J. Watters, P. M. Wolfe, Multiproject Scheduling with Limited Resources: A Zero-One Programming Approach, Management Science 16 (1) (1969) 93–108, doi:10.1287/mnsc.16.1.93.

M. Queyranne, A. S. Schulz, Polyhedral Approaches to Machine Scheduling, Tech. Rep., Technische Universität Berlin, Fachbereich 3, 1994.

A. Salmi, P. David, E. Blanco, J. D. Summers, A Review of Cost Estimation Models for Determining Assembly Automation Level, Computers & Industrial Engineering 98 (2016) 246–259, doi:10.1016/j.cie.2016.06.007.
M. E. Salveson, The Assembly Line Balancing Problem, Journal of Industrial Engineering 6 (3) (1955) 18–25.

J. Schaller, A Comparison of Lower Bounds for the Single-Machine Early/Tardy Problem, Computers & Operations Research 34 (8) (2007) 2279–2292, doi:10.1016/j.cor.2005.09.003.

A. Scholl, C. Becker, State-of-the-Art Exact and Heuristic Solution Procedures for Simple Assembly Line Balancing, European Journal of Operational Research 168 (3) (2006) 666–693, doi:10.1016/j.ejor.2004.07.022.

A. Scholl, N. Boysen, M. Fliedner, The Sequence-Dependent Assembly Line Balancing Problem, OR Spectrum 30 (3) (2008) 579–609, doi:10.1007/s00291-006-0070-3.

A. Scholl, N. Boysen, M. Fliedner, The Assembly Line Balancing and Scheduling Problem with Sequence-Dependent Setup Times: Problem Extension, Model Formulation and Efficient Heuristics, OR Spectrum 35 (1) (2013) 291–320, doi:10.1007/s00291-011-0265-0.

U. Schöning, Algorithmik, Spektrum Akademischer Verlag, Heidelberg, 2001.

H. A. Sedding, Box Placement as Time Dependent Scheduling To Reduce Automotive Assembly Line Worker Walk Times, in: 13th Workshop on Models and Algorithms for Planning and Scheduling Problems, 92–94, 2017b.

H. A. Sedding, Scheduling of Time-Dependent Asymmetric Nonmonotonic Processing Times Permits an FPTAS, in: 15th Cologne-Twente Workshop on Graphs and Combinatorial Optimization, 135–138, 2017a.

H. A. Sedding, On the Complexity of Scheduling Start Time Dependent Asymmetric Convex Processing Times, in: Proceedings of the 16th International Conference on Project Management and Scheduling, Università di Roma “Tor Vergata”, Rome, Italy, 209–212, 2018a.

H. A. Sedding, Scheduling Non-Monotonous Convex Piecewise-Linear Time-Dependent Processing Times, in: 2nd International Workshop on Dynamic Scheduling Problems, Adam Mickiewicz University, Poznań, Poland, 79–84, 2018b.

H. A. Sedding, Line Side Part Placement for Shorter Assembly Line Worker Paths, IISE Transactions, doi:10.1080/24725854.2018.1508929.

H. A. Sedding, F. Jaehn, Single Machine Scheduling with Nonmonotonic Piecewise Linear Time Dependent Processing Times, in: T. Fliedner, R. Kolisch, A. Naber (Eds.), Proceedings of the 14th International Conference on Project Management and Scheduling, TUM School of Management, 222–225, 2014.

S. Seyed-Alagheband, S. F. Ghomi, M. Zandieh, A Simulated Annealing Algorithm for Balancing the Assembly Line Type II Problem with Sequence-Dependent Setup Times between Tasks, International Journal of Production Research 49 (3) (2011) 805–825, doi:10.1080/00207540903471486.

D. Shabtay, Optimal Restricted Due Date Assignment in Scheduling, European Journal of Operational Research 252 (1) (2016) 79–89, doi:10.1016/j.ejor.2015.12.043.

W. E. Smith, Various Optimizers for Single-Stage Production, Naval Research Logistics Quarterly 3 (1–2) (1956) 59–66, doi:10.1002/nav.3800030106.

F. Sourd, New Exact Algorithms for One-Machine Earliness-Tardiness Scheduling, INFORMS Journal on Computing 21 (1) (2009) 167–175, doi:10.1287/ijoc.1080.0287.
F. Sourd, S. Kedad-Sidhoum, A Faster Branch-and-Bound Algorithm for the Earliness-Tardiness Scheduling Problem, Journal of Scheduling 11 (1) (2008) 49–58, doi:10.1007/s10951-007-0048-2.

J. Sternatz, Enhanced Multi-Hoffmann Heuristic for Efficiently Solving Real-World Assembly Line Balancing Problems in Automotive Industry, European Journal of Operational Research 235 (3) (2014) 740–754, doi:10.1016/j.ejor.2013.11.005.

W. Szwarc, Adjacent Orderings in Single-Machine Scheduling with Earliness and Tardiness Penalties, Naval Research Logistics (NRL) 40 (2) (1993) 229–243, doi:10.1002/1520-6750(199303)40:23.0.CO;2-R.

Z.-A. A.-M. Tahboub, Sequencing to Minimize Total Earliness-Tardiness Penalties on a Single-Machine, Ph.D. thesis, The Ohio State University, 1986.

V. S. Tanaev, V. S. Gordon, Y. M. Shafransky, Scheduling Theory: Single-Stage Systems, Springer Netherlands, Dordrecht, doi:10.1007/978-94-011-1190-4, 1994.

S. Tanaka, S. Fujikuma, A Dynamic-Programming-Based Exact Algorithm for General Single-Machine Scheduling with Machine Idle Time, Journal of Scheduling 15 (3) (2012) 347–361, doi:10.1007/s10951-011-0242-0.

S. Tanaka, S. Fujikuma, M. Araki, An Exact Algorithm for Single-Machine Scheduling without Machine Idle Time, Journal of Scheduling 12 (6) (2009) 575–593, doi:10.1007/s10951-008-0093-5.

N. T. Thomopoulos, Line Balancing-Sequencing for Mixed-Model Assembly, Management Science 14 (2) (1967) B59–B75, doi:10.1287/mnsc.14.2.B59.

W. Wajs, Polynomial Algorithm for Dynamic Sequencing Problem, Archiwum Automatyki i Telemechaniki 31 (3) (1986) 209–213.
L. Wan, J. Yuan, Single-Machine Scheduling to Minimize the Total Earliness and Tardiness Is Strongly NP-Hard, Operations Research Letters 41 (4) (2013) 363–365, doi:10.1016/j.orl.2013.04.007.

T. Wee, M. J. Magazine, Assembly Line Balancing as Generalized Bin Packing, Operations Research Letters 1 (2) (1982) 56–58, doi:10.1016/0167-6377(82)90046-3.

G. J. Woeginger, When Does a Dynamic Programming Formulation Guarantee the Existence of a Fully Polynomial Time Approximation Scheme (FPTAS)?, INFORMS Journal on Computing 12 (1) (2000) 57–74, doi:10.1287/ijoc.12.1.57.11901.

Y. Yin, M. Liu, T. C. E. Cheng, C.-C. Wu, S.-R. Cheng, Four Single-Machine Scheduling Problems Involving Due Date Determination Decisions, Information Sciences 251 (2013) 164–181, doi:10.1016/j.ins.2013.06.035.
Publications
Parts of this work, in particular Chapter 2, Section 4.3, Chapter 5, and a preview of Section 4.5, appear in

• H. A. Sedding, Box Placement as Time Dependent Scheduling To Reduce Automotive Assembly Line Worker Walk Times, in: 13th Workshop on Models and Algorithms for Planning and Scheduling Problems, 92–94, 2017a.
• H. A. Sedding, Scheduling of Time-Dependent Asymmetric Nonmonotonic Processing Times Permits an FPTAS, in: 15th Cologne-Twente Workshop on Graphs & Combinatorial Optimization, 135–138, 2017b.
• H. A. Sedding, On the Complexity of Scheduling Start Time Dependent Asymmetric Convex Processing Times, in: Proceedings of the 16th International Conference on Project Management and Scheduling, 209–212, 2018a.
• H. A. Sedding, Scheduling Non-Monotonous Convex Piecewise-Linear Time-Dependent Processing Times, in: Proceedings of the 2nd International Workshop on Dynamic Scheduling Problems, 79–84, 2018b.
• H. A. Sedding, Line Side Part Placement for Shorter Assembly Line Worker Paths, IISE Transactions (in press).

Related works, in particular on Chapter 3, are

◦ H. A. Sedding, F. Jaehn, Single Machine Scheduling with Nonmonotonic Piecewise Linear Time Dependent Processing Times, in: Proceedings of the 14th International Conference on Project Management and Scheduling, 222–225, 2014.
◦ F. Jaehn, H. A. Sedding, Scheduling with Time-Dependent Discrepancy Times, Journal of Scheduling 19 (6) (2016) 737–757.