Springer Series in Advanced Manufacturing
Duc Truong Pham Natalia Hartono Editors
Intelligent Production and Manufacturing Optimisation— The Bees Algorithm Approach
Springer Series in Advanced Manufacturing Series Editor Duc Truong Pham, University of Birmingham, Birmingham, UK
The Springer Series in Advanced Manufacturing includes advanced textbooks, research monographs, edited works and conference proceedings covering all major subjects in the field of advanced manufacturing. The following is a non-exclusive list of subjects relevant to the series: 1. Manufacturing processes and operations (material processing; assembly; test and inspection; packaging and shipping). 2. Manufacturing product and process design (product design; product data management; product development; manufacturing system planning). 3. Enterprise management (product life cycle management; production planning and control; quality management). Emphasis will be placed on novel material of topical interest (for example, books on nanomanufacturing) as well as new treatments of more traditional areas. As advanced manufacturing usually involves extensive use of information and communication technology (ICT), books dealing with advanced ICT tools for advanced manufacturing are also of interest to the Series. Springer and Professor Pham welcome book ideas from authors. Potential authors who wish to submit a book proposal should contact Anthony Doyle, Executive Editor, Springer, e-mail: [email protected].
Duc Truong Pham · Natalia Hartono Editors
Intelligent Production and Manufacturing Optimisation—The Bees Algorithm Approach
Editors Duc Truong Pham Department of Mechanical Engineering University of Birmingham Birmingham, UK
Natalia Hartono Department of Mechanical Engineering University of Birmingham Birmingham, UK
ISSN 1860-5168 ISSN 2196-1735 (electronic) Springer Series in Advanced Manufacturing ISBN 978-3-031-14536-0 ISBN 978-3-031-14537-7 (eBook) https://doi.org/10.1007/978-3-031-14537-7 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
With the advent of the Fourth Industrial Revolution, production and manufacturing processes and systems have become more complex. To obtain the best performance from them requires efficient and effective optimisation techniques that do not depend on the availability of process or system models. Such models are usually either not obtainable or mathematically intractable due to the high degrees of nonlinearities and uncertainties in the processes and systems to be represented. The Bees Algorithm is a powerful swarm-based intelligent optimisation metaheuristic inspired by the foraging behaviour of honeybees. The algorithm is conceptually elegant and extremely easy to apply. All it needs to solve an optimisation problem is a means to evaluate the quality of potential solutions. This is the reason why, since the algorithm was first published by Pham et al. in 2005, it has attracted users from virtually all fields of engineering and natural, physical, medical and social sciences. This book is the first work dedicated to the Bees Algorithm. Following a gentle introduction to the main ideas underpinning the algorithm, the book divides into five parts focusing on recent results and developments relating to the algorithm and its use to solve optimisation problems in production and manufacturing. Part 1 comprising four chapters presents manufacturing process optimisation applications. The chapter “Minimising Printed Circuit Board Assembly Time Using the Bees Algorithm with TRIZ-Inspired Operators” (by Ang and Ng) discusses the optimisation of process plans for printed circuit board assembly. The chapter describes how the Bees Algorithm was effectively combined with TRIZ to minimise assembly time. The chapter “The application of the Bees Algorithm in a Digital Twin for Optimising the Wire Electrical Discharge Machining (WEDM) Process Parameters” (by Packianather, Alexopoulos and Squire) deals with optimising process parameters for wire electrical discharge machining. The authors used the Bees Algorithm to obtain the best combination of process parameters for the digital twin of the product being machined. The chapter “A Case Study with the BEE-Miner Algorithm: Defects on the Production Line” (by Ay, Baykasoglu, Ozbakir and Kulluk) examines defect classification in manufacturing. The authors showcase the strong performance of Bee-Miner, a cost-sensitive classification algorithm for data mining derived from
the Bees Algorithm. The chapter "An Application of the Bees Algorithm to Pulsating Hydroforming" (by Öztürk, Şen, Kalyoncu and Halkacı) describes the use of the Bees Algorithm to find the parameters (pulse frequency, amplitude and base) for a test to obtain the biaxial stress-strain curves required to control a pulsating hydroforming machine to yield a uniform thickness distribution. Part 2, the most voluminous section of the book, consists of seven chapters broadly covering production equipment optimisation. The chapter "Shape Recognition for Industrial Robot Manipulation with the Bees Algorithm" (by Castellani, Baronti, Zheng and Lan) is relevant to 3D vision systems. The authors used the Bees Algorithm to fit primitive shapes to point-cloud scenes for real-time 3D object recognition. The chapter "Bees Algorithm Models for the Identification and Measurement of Tool Wear" (by D'Addona) describes the use of the Bees Algorithm for tool wear identification and measurement during turning operations. The author's goal was to use the bees to define the contours of the wear area of a tool and locate the point of maximum wear. The chapter "Global Optimisation for Point Cloud Registration with the Bees Algorithm" (by Lan, Castellani, Wang and Zheng) complements the previous chapter "Shape Recognition for Industrial Robot Manipulation with the Bees Algorithm" and looks at the problem of finding a spatial transformation that aligns two point clouds. The authors employed singular value decomposition to increase the search efficiency of the Bees Algorithm, achieving higher consistency, precision and robustness than the popular Iterative Closest Point method. The chapter "Automatic PID Tuning Toolkit Using the Multi-Objective Bees Algorithm" (by Şahin and Çakıroğlu) investigates the tuning of PID control systems such as those for robotic equipment. A multi-objective Bees Algorithm was successfully applied to minimise the settling time, rise time, overshoot and system error all at once. The chapter "The Effect of Harmony Memory Integration into the Bees Algorithm" (by Acar, Sağlam and Şaka) compares the original Bees Algorithm and a hybrid version that incorporates a harmony memory on the design of spherical four-link mechanisms for robot grippers. The results obtained by the authors show the enhancement afforded by hybridisation. The chapter "Memory-Based Bees Algorithm with Lévy Flights for Multilevel Image Thresholding" (by Shatnawi, Sahran and Nasrudin) also concerns enhancing the Bees Algorithm. An earlier memory-based version proposed by the authors was improved by adding a Lévy search facility to reduce the number of parameters that users need to select and applied successfully to multilevel thresholding, a basic image processing function in computer vision. The chapter "A New Method to Generate the Initial Population of the Bees Algorithm for Robot Path Planning in a Static Environment" (by Kashkash, Darwish and Joukhadar) presents a modified Bees Algorithm with a new initial population generation method and describes its use to find the shortest collision-free path for a mobile robot. Against other algorithms tested by the authors, the modified algorithm demonstrated the best performance.
Part 3 deals with production plan optimisation and includes three chapters. The chapter "Method for the Production Planning and Scheduling of a Flexible Manufacturing Plant Based on the Bees Algorithm" (by Wang, Chen and Li) relates to production planning and scheduling for a sheet metal fabrication plant. The authors found that both the basic Bees Algorithm and the version implementing the site abandonment strategy yielded good results, with the latter providing a superior performance due to its ability to escape from local optima, as expected. The chapter "Application of the Dual-population Bees Algorithm in a Parallel Machine Scheduling Problem with a Time Window" (by Song, Xing and Chen) covers the solution of the parallel machine scheduling problem with time windows. The authors employed a Bees Algorithm with two populations: a search population of scout bees and a supplementary population of forager bees. The purpose of having this dual population was to increase the speed of convergence of the algorithm, enabling it to find better solutions than could be achieved by the standard algorithm and other optimisation procedures. The chapter "Parallel Multi-indicator-Assisted Dynamic Bees Algorithm for Cloud-Edge Collaborative Manufacturing Task Scheduling" (by Li, Peng, Laili and Zhang) discusses task scheduling for a cloud-edge collaborative manufacturing environment. The authors present a dynamic version of the Bees Algorithm in which the operators were adjusted according to a set of indicators, and a parallel sorting scheme was adopted to accelerate the scheduling and selection of cloud-edge resources and collaboration modes. Part 4 comprises two chapters related to logistics and supply chain optimisation. The chapter "Bees Traplining Metaphors for the Vehicle Routing Problem Using a Decomposition Approach" (by Ismail and Pham) describes the use of the latest and simplest incarnation of the Bees Algorithm to solve the capacitated vehicle routing problem. The algorithm, which integrates exploration and exploitation, requires the setting of only two parameters, the number of scout bees and the number of bees recruited by the scout that found the best flower patch. To speed up the solution, a decomposition approach was adopted whereby customers were first clustered before the optimal route was found for each cluster. The chapter "Supply Chain Design and Multi-objective Optimisation with the Bees Algorithm" (by Mastrocinque) studies the Bees Algorithm as a tool for optimising supply chain networks and considers the case of designing a supply chain for a bulldozer based on its Bill of Materials. The author shows that the Bees Algorithm performed better than Ant Colony Optimisation, another well-known nature-inspired algorithm. Part 5, the final section of the book, contains four chapters about remanufacturing optimisation. The chapter "Collaborative Optimisation of Robotic Disassembly Planning Problems using the Bees Algorithm" (by J Liu, Q Liu, Zhou, Pham, Xu and Fang) focuses on planning robotic disassembly for remanufacturing. The authors employed a discrete Bees Algorithm simultaneously to optimise disassembly sequences and balance the disassembly line, with the analytic network process assigning weights to the different optimisation objectives. As its title implies, the chapter "Optimisation of Robotic Disassembly Sequence Plans for Sustainability Using the Multi-objective Bees Algorithm" (by Hartono, Ramirez and Pham) also deals with multi-objective optimisation.
The aim was to devise robotic disassembly sequence plans to achieve
maximum profit while minimising energy consumption and greenhouse gas emissions. The authors compared a multi-objective Bees Algorithm (MOBA) against the Non-dominated Sorting Genetic Algorithm II and the Pareto Envelope-based Selection Algorithm II and found MOBA to produce the best disassembly plans for the two gear pumps studied. The chapter “Task Optimisation for a Modern Cloud Remanufacturing System Using the Bees Algorithm” (by Caterino, Fera, Macchiaroli and Pham) proposes the concept of cloud remanufacturing and discusses the application of the Bees Algorithm to task allocation in a cloud remanufacturing system. The chapter describes a full-factorial experiment to determine the best values of the algorithm parameters, highlighting the importance of increasing the number of scout bees and exploiting the best and elite sites. The chapter “Prediction of the Remaining Useful Life of Engines for Remanufacturing Using a Semi-supervised Deep Learning Model Trained by the Bees Algorithm” (by Zeybek) investigates using the Bees Algorithm to train a Long Short-Term Memory (LSTM) deep learning network to predict the remaining useful life (RUL) of turbofan engines before they are available for remanufacturing. The author proposes a modified version of the ternary Bees Algorithm which has a minimal population of only three scout bees, each representing an LSTM model. The results obtained show that the trained model could predict engine RUL with an accuracy as high as 98%. In assembling this small sample of applications of the Bees Algorithm, we want to demonstrate the simplicity, effectiveness and versatility of the algorithm. We hope this will encourage its further adoption and development by engineers and researchers across the world to realise smart and sustainable manufacturing and production in the age of Industry 4.0 and beyond. Birmingham, UK
Duc Truong Pham Natalia Hartono
Acknowledgements
Our warm thanks go to the authors for their contributions to the book and, more importantly, to the development, application and dissemination of the Bees Algorithm. The field of nature-inspired optimisation is richer through their work. Numerous people have added value to the book, in particular, the committee members of the 2021 International Workshop on the Bees Algorithm and its Applications, by critically reviewing and helping to improve submissions. We acknowledge the input of Dr Luca Baronti, Dr Turki Binbakir, Dr Marco Castellani, Dr Mario Caterino, Dr Naila Fares, Dr Fabio Fruggiero, Dr Asrul Harun Ismail, Ms Kaiwen Jiang, Dr Joey Lim, Dr Murat Sahin and Dr Sultan Zeybek. Professor Zude Zhou and Professor Wenjun Xu, the keynote speakers at the workshop, set the scene for the book and deserve special credit. The book was produced with the patient and expert support of Springer’s Executive Editor Mr Anthony Doyle and his colleagues Mr Subodh Kumar Mohar Sahu, Mr Prashanth Ravichandran, Ms Kavitha Sathish, Ms Vidyalakshmi Velmurugan and Mr Manju Ramanathan to whom we express our sincere appreciation.
Contents
Introduction

The Bees Algorithm—A Gentle Introduction . . . . . . . . . . 3
Marco Castellani and D. T. Pham

Manufacturing Process Optimisation

Minimising Printed Circuit Board Assembly Time Using the Bees Algorithm with TRIZ-Inspired Operators . . . . . . . . . . 25
Mei Choo Ang and Kok Weng Ng

The application of the Bees Algorithm in a Digital Twin for Optimising the Wire Electrical Discharge Machining (WEDM) Process Parameters . . . . . . . . . . 43
Michael S Packianather, Theocharis Alexopoulos, and Sebastian Squire

A Case Study with the BEE-Miner Algorithm: Defects on the Production Line . . . . . . . . . . 63
Merhad Ay, Adil Baykasoglu, Lale Ozbakir, and Sinem Kulluk

An Application of the Bees Algorithm to Pulsating Hydroforming . . . . . . . . . . 79
Osman Öztürk, Muhammed Arif Şen, Mete Kalyoncu, and Hüseyin Selçuk Halkacı

Production Equipment Optimisation

Shape Recognition for Industrial Robot Manipulation with the Bees Algorithm . . . . . . . . . . 97
Marco Castellani, Luca Baronti, Senjing Zheng, and Feiying Lan

Bees Algorithm Models for the Identification and Measurement of Tool Wear . . . . . . . . . . 111
Doriana M. D'Addona

Global Optimisation for Point Cloud Registration with the Bees Algorithm . . . . . . . . . . 129
Feiying Lan, Marco Castellani, Yongjing Wang, and Senjing Zheng

Automatic PID Tuning Toolkit Using the Multi-Objective Bees Algorithm . . . . . . . . . . 145
Murat Şahin and Semih Çakıroğlu

The Effect of Harmony Memory Integration into the Bees Algorithm . . . . . . . . . . 159
Osman Acar, Hacı Sağlam, and Ziya Şaka

Memory-Based Bees Algorithm with Lévy Flights for Multilevel Image Thresholding . . . . . . . . . . 175
Nahla Shatnawi, Shahnorbanun Sahran, and Mohamad Faidzul Nasrudin

A New Method to Generate the Initial Population of the Bees Algorithm for Robot Path Planning in a Static Environment . . . . . . . . . . 193
Mariam Kashkash, Ahmed Haj Darwish, and Abdulkader Joukhadar

Production Plan Optimisation

Method for the Production Planning and Scheduling of a Flexible Manufacturing Plant Based on the Bees Algorithm . . . . . . . . . . 211
Chao Wang, Tianxiang Chen, and Zhenghao Li

Application of the Dual-population Bees Algorithm in a Parallel Machine Scheduling Problem with a Time Window . . . . . . . . . . 229
Yanjie Song, Lining Xing, and Yingwu Chen

A Parallel Multi-indicator-Assisted Dynamic Bees Algorithm for Cloud-Edge Collaborative Manufacturing Task Scheduling . . . . . . . . . . 243
Yulin Li, Cheng Peng, Yuanjun Laili, and Lin Zhang

Logistics and Supply Chain Optimisation

Bees Traplining Metaphors for the Vehicle Routing Problem Using a Decomposition Approach . . . . . . . . . . 261
A. H. Ismail and D. T. Pham

Supply Chain Design and Multi-objective Optimisation with the Bees Algorithm . . . . . . . . . . 289
Ernesto Mastrocinque

Remanufacturing

Collaborative Optimisation of Robotic Disassembly Planning Problems using the Bees Algorithm . . . . . . . . . . 305
Jiayi Liu, Quan Liu, Zude Zhou, Duc Truong Pham, Wenjun Xu, and Yilin Fang

Optimisation of Robotic Disassembly Sequence Plans for Sustainability Using the Multi-objective Bees Algorithm . . . . . . . . . . 337
Natalia Hartono, F. Javier Ramírez, and D. T. Pham

Task Optimisation for a Modern Cloud Remanufacturing System Using the Bees Algorithm . . . . . . . . . . 365
Mario Caterino, Marcello Fera, Roberto Macchiaroli, and D. T. Pham

Prediction of the Remaining Useful Life of Engines for Remanufacturing Using a Semi-supervised Deep Learning Model Trained by the Bees Algorithm . . . . . . . . . . 383
Sultan Zeybek

Index . . . . . . . . . . 399
Introduction
The Bees Algorithm—A Gentle Introduction Marco Castellani and D. T. Pham
1 Introduction Every engineering problem entails choosing amongst a set of alternatives the element that optimises one or more quality measures. When designing racing cars, mechanical engineers typically test the aerodynamics of many different body shapes. Their aim is to choose the shape that maximises down force or minimises drag. In manufacturing, the workload on different machines of a production line needs to be balanced to maximise production or minimise late orders. The list of examples is virtually endless, and in a global market dominated by cut-throat competition the ability to minimise costs and maximise quality is key to survival. Unfortunately, the relationship between design parameters and quality measure is often too complex to be analytically understood, and the set of alternatives is typically too large to be exhaustively evaluated. For the above reasons, optimisation has been traditionally conducted via trial and error, starting from an initial prototype built based on expertise, and repeating several cycles of testing and improvement until a solution of desired quality has been obtained, or the designer feels no further improvement is attainable. Although useful in many fields, this approach had limited success when dealing with complex systems. Complex systems can be defined as large systems of many interacting elements, possibly characterised by non-linear and time-varying relationships, time-delays, and feedback loops amongst the components. Examples of complex systems are power grids, logistic networks, manufacturing plants, etc. M. Castellani (B) · D. T. Pham Department of Mechanical Engineering, College of Engineering and Physical Sciences, The University of Birmingham, England B15 2TT, UK e-mail: [email protected] D. T. Pham e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 D. T. Pham and T. Hartono (eds.), Intelligent Production and Manufacturing Optimisation – The Bees Algorithm Approach, Springer Series in Advanced Manufacturing, https://doi.org/10.1007/978-3-031-14537-7_1
Due to scientific and technological progress, complex systems are now a common occurrence in engineering: products are becoming more sophisticated, industrial processes more complex, and networks more ramified and crowded. As a result, managers and engineers must face difficult optimisation challenges with increasing frequency. A solution to complex optimisation problems comes from the observation of biological systems and in particular animal societies. Through millions of years of evolution, Nature has shaped the behaviour of living beings to create highly efficient foraging, nesting, and mating strategies in complex ever-changing environments. Nature's approach to optimisation is commonly based on the collective search effort and self-organisation of many interacting individuals [1]. The earliest attempts at modelling Nature's problem-solving strategies date back to the mid-sixties when natural evolution itself was taken as inspiration for a new class of optimisation methods: evolutionary algorithms (EAs) [2]. Differently from traditional optimisation methods, EAs used a population of candidate solutions, and stochastic procedures to iteratively improve the quality of the population. In EAs, the optimisation process is generally driven by competition for reproduction amongst individuals, and often procedures of recombination of traits (features) [2]. It was only in the late '80s and early '90s that mechanisms of animal cooperation such as flocking [3, 4] and foraging [5] began to be investigated for optimisation purposes. The success of this latter nature-inspired approach has generated many algorithms which achieved considerable popularity [6]. This chapter describes one of the best established and most popular optimisation techniques inspired by the foraging behaviour of honey bees: the Bees Algorithm [7, 8].
2 Parameter Optimisation Solving an optimisation problem P entails finding the solution s that minimises some cost function f , or maximises some quality index f . The function f is often called the objective function. A candidate solution x to problem P consists of a fixed or varying number of decision variables (parameters). Without loss of generality, this chapter focuses on problems where the solutions x = {x1 , . . . , xn } are composed of a fixed number of n decision variables. Each decision variable is defined in a discrete (e.g., the set of integer numbers N) or continuous (e.g., the set of real numbers R) set of elements, and a solution generally includes both kinds of variables. Yet, optimisation problems containing only discrete or continuous decision variables define two important subclasses of optimisation problems. The number of values that a given variable can take is often limited by one or more constraints. Henceforth, X i will indicate the set of allowed values for a given variable xi , and X = {X 1 , . . . , X n } the space of the feasible solutions to problem P. The space X of feasible solutions may consist of a finite, albeit huge, or infinite number of elements. An example of the first case is when all the variables belong to the set of integer numbers (xi ∈ N) and are constrained by an upper limit (xi ≤
ai ∀i = [1, n]). An example of the second case is when at least one variable is defined in the set of real numbers (∃xi | xi ∈ R). Given the above notation, a minimisation problem requires finding:

s = argmin_{x∈X} f(x)    (1)

and a maximisation problem:

s = argmax_{x∈X} f(x)    (2)

Conveniently, a minimisation problem can always be turned into a maximisation problem, and vice versa, by simply adding a 'minus' sign in front of the function f:

s = argmax_{x∈X} f(x) = argmin_{x∈X} [−f(x)]    (3)

This property allows optimisation to be addressed as a single problem regardless of whether the goal is to find the minimum or the maximum of a given function. Let us now see a first example of a minimisation problem.
2.1 Discrete Optimisation: The Travelling Salesman Problem The Travelling Salesman Problem (TSP) was first mathematically formulated in the XIX century by Irish mathematician W.R. Hamilton and his British colleague T. Kirkman [9]. To date, it is arguably the best known and most widely studied optimisation problem. In its simplest and best-known formulation, the problem can be stated as follows: Given a list of n cities, what is the shortest possible distance a salesman needs to travel to visit all cities exactly once?
The TSP problem has several applications in logistics, manufacturing, and scheduling [10], and has been studied in different forms and variants [11]. The first elements needed to solve an optimisation problem are the mathematical representation of the solutions, and the objective function f. In the TSP case, a candidate solution can be represented as a vector x of n variables, representing the list of cities in the order they are visited. The constraint that each city needs to be visited exactly once can be expressed with the requirement that no two elements of the vector can take the same value. Formally:

x = (x1, . . . , xn) and xi ≠ xj ∀i, j ∈ [1, n], i ≠ j    (4)
Each variable xi is bound to take a value equal to one of the n cities. If the cities are progressively numbered, the TSP takes the familiar form of a discrete
optimisation problem where each variable is defined in the interval xi ∈ [1, n]. That is, each variable can take only a finite number of values. The space X of the feasible solutions comprises all possible permutations of the n cities. This particular and important case is called a combinatorial optimisation problem, that is a problem where the solution is a combination of several elements. Many discrete optimisation problems (logistics, assembly, scheduling) are combinatorial. Indeed, it could be argued that combinatorial problems are the most important and common type of discrete optimisation problems. Once the TSP has been fully defined, a strategy for its solution can be devised. Given that the space of feasible solutions is finite, a brute force approach might be chosen. Unfortunately, the number of feasible solutions turns out to grow factorially with the number of cities. In detail, the cardinality C(X) of the set X of feasible solutions is equal to:

C(X) = (n − 1)! / 2    (5)

This number grows very quickly: for a TSP of 10 cities C(X) ≈ 181,000; if the number of cities is doubled, C(X) ≈ 10^16. To date, the largest TSP solved via brute force involved a tour of 85,900 cities and required the equivalent of 136 CPU-years [12]. Unfortunately, without the availability of supercomputers, brute force search would require a prohibitive computational effort for many combinatorial problems of practical importance. For example, printed circuit board assembly entails the optimisation of the placement sequence of the electronic components on the board, with the goal of minimising the assembly time. This problem is akin to the TSP, and the number of feasible solutions for a board housing 50 electronic components [13] would be C(X) ≈ 3 × 10^62!
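To make the combinatorial explosion concrete, the following short Python sketch (an illustration added to this text, not part of the original chapter) enumerates every tour of a small random TSP instance by brute force. The instance size and distance matrix are arbitrary assumptions; only a few more cities would make the enumeration impractical, exactly as Eq. 5 predicts.

```python
import itertools
import math
import random

def brute_force_tsp(dist):
    """Exhaustively evaluate every tour; only feasible for very small n."""
    n = len(dist)
    best_tour, best_len = None, math.inf
    # Fix city 0 as the start so that rotations of the same tour are not recounted
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Toy instance: 8 random cities; 7! = 5040 directed tours are enumerated,
# of which (8 - 1)!/2 = 2520 are distinct (each tour is seen in both directions)
random.seed(1)
coords = [(random.random(), random.random()) for _ in range(8)]
dist = [[math.dist(a, b) for b in coords] for a in coords]
print(brute_force_tsp(dist))
```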
2.2 Heuristics From the above discussion, it is clear that, without high-performance computing facilities, brute force search is not practicable for any non-trivial application, even when the number of feasible solutions is finite. In truth, given that digital computers store real numbers in a finite number of digits, typically 32 or 64 bits, it can be argued that the solution of any optimisation problem requires searching the optimal value in a finite and discrete solution space. What makes this search challenging is the huge number of combinations of values that n variables can take. When brute force search is not practical, only a subset of the feasible solutions X can be evaluated. In this case, optimisation must rely on some intelligent strategy for sampling the solution space. Given that the search is not exhaustive, the optimisation process cannot guarantee that the global optimum has been found. However, if the sampling strategy is sufficiently ‘clever’, the search will return at least a good quality solution.
Computationally, search strategies are formalised as algorithms (from the Persian mathematician al-Khwārizmī), i.e., computer procedures entailing a sequence of operations. Any search strategy, including brute force or random search, is implemented as an algorithm. Intelligent sampling strategies rely on rules of thumb derived from expertise or study of the problem domain. These rules of thumb are called heuristics, from the Greek word εὑρίσκω (heurískō) meaning 'to discover'. A simple heuristic for the TSP would be to start from a randomly picked city, and always move to the closest neighbouring city. The heuristic is to build an overall route of minimal distance from many movements of minimal distance. This heuristic is known to return good albeit not necessarily optimal solutions. The main problem with heuristics is that there is not a one-size-fits-all strategy [14], but different heuristics best suit different problems. Let us now see another example of an optimisation problem and some new heuristics.
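Before moving on, the nearest-neighbour rule of thumb just described can be sketched in a few lines of Python (again an illustrative fragment; the function name and the distance-matrix representation are our own assumptions, reused from the earlier example):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: always move to the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour  # the return leg from the last city back to 'start' closes the tour
```

The tour it returns is usually good but rarely optimal, and its quality depends on the starting city, which illustrates why such rules of thumb are problem-specific.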
2.3 Continuous Optimisation: Function Maximisation Function maximisation is a common mathematical problem, where the objective is to find the elements of the domain of a given function at which the function values are maximised. Formally, it entails the solution of the argmax problem defined in Eq. 2. Let us assume that the domain variables are all continuous and constrained within a bounded interval. In this case, candidate solutions can be represented as vectors of real-valued variables, and the objective function is the function f which is to be maximised. Let us consider the function in Fig. 1. To find the maximum of the function in Fig. 1, a good heuristic would be to start from a random point, calculate the local gradient, and move upwards by a fixed step following the gradient. This strategy can be iterated until the top of the function is reached. Unfortunately, many objective functions are not everywhere differentiable, which means that an alternative heuristic must be used.
Fig. 1 Continuous function maximisation problem—unimodal function
A very simple heuristic that does not require the differentiability of the objective would be the following:
Heuristic 1.1
1. Randomly generate current solution x
2. For n cycles:
   (a) Randomly generate new solution y similar to x
   (b) If fitness(y) > fitness(x), then x = y
   (c) Else do nothing
3. Return x

The idea is that a solution similar to a good solution is likely to be good too, and the algorithm tries incrementally to improve the initial solution. This heuristic is at the basis of many engineering processes, where an initial prototype is designed, and progressively improved. The main difference is that here the new solution is generated randomly, and not based on some engineering analysis of the behaviour of the prototype. If some knowledge can be used to improve the likelihood of generating a better solution, random generation can be substituted with some other heuristics in step 2a. Applied to the function in Fig. 1, Heuristic 1.1 would likely follow a zigzagging path to the maximum of f. A more direct path can be achieved by generating several random solutions at a time in step 2a. In detail:
Heuristic 1.2
1. Randomly generate current solution x
2. For n cycles:
   (a) Randomly generate m solutions z1, . . . , zm similar to x
   (b) Take the best: y = argmax_{i∈[1,m]} fitness(zi)
   (c) If fitness(y) > fitness(x), then x = y
   (d) Else do nothing
3. Return x
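Both heuristics can be written as a few lines of code. The sketch below is a hedged illustration rather than the chapter's own implementation; the neighbourhood radius, the number of cycles and the clamping to the variable bounds are arbitrary choices. Heuristic 1.2 is obtained from Heuristic 1.1 simply by sampling m neighbours per cycle instead of one.

```python
import random

def local_hill_climb(fitness, bounds, n_cycles=100, m=1, radius=0.1):
    """Heuristic 1.1 (m = 1) and Heuristic 1.2 (m > 1): repeatedly replace the
    current solution with the best of m randomly generated neighbours."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    for _ in range(n_cycles):
        neighbours = [
            [min(max(xi + random.uniform(-radius, radius) * (hi - lo), lo), hi)
             for xi, (lo, hi) in zip(x, bounds)]
            for _ in range(m)
        ]
        y = max(neighbours, key=fitness)
        if fitness(y) > fitness(x):
            x = y  # keep the improvement; otherwise do nothing
    return x

# Example: maximise a simple unimodal function of two variables
print(local_hill_climb(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2,
                       bounds=[(-5, 5), (-5, 5)], m=5))
```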
Fig. 2 Continuous function maximisation problem, multimodal function
Heuristic 1.2 is more likely to produce improvements at each step than Heuristic 1.1. The price to pay for this increase in efficiency is a larger computational effort, since now m solutions instead of one need to be generated and evaluated at each step. The computational complexity of generating and testing solutions will dictate how many candidate solutions can be generated at each step (the parameter m), and how many cycles of optimisation can be performed (the parameter n). If a reasonable number of optimisation cycles is performed, approaches like Heuristic 1.1 and Heuristic 1.2 would find the maximum of the objective function in Fig. 1, or at least something close to the maximum. However, if the objective function has several peaks like in Fig. 2, the heuristics will very likely fail to yield the global maximum. This problem is due to the locality of the search, that is to the fact that the new candidate solutions are similar to the current (step 2a of Heuristics 1 and 2). In other words, they are generated close to the existing solution in the search space. Consequently, the algorithm would climb the closest local maximum to the current solution and would yield the global optimum only in the lucky case the starting point of the search was set on the main peak. This is a common feature of all local search algorithms: they are efficient at finding the local optimum, but they are prone to suboptimal convergence. In other words, they are not consistent, and the result depends on which initial candidate solution is chosen. A different heuristic could prevent the algorithm from being trapped in a local optimum. For example, instead of being generated close to the current, new candidate solutions are randomly generated anywhere in the solution space. This approach amounts to random search, let’s call it Heuristic 1.3. Formally, it can be said that Heuristics 1 and 2 generate new solutions within a small radius from the current solution, whilst Heuristic 1.3 generates new solutions within a radius equal to the size of the solution space. Certainly Heuristic 1.3 would not be trapped in a local peak. However, the probability that the global optimum is found by chance is low. Heuristic 1.3 has a global outlook; it generates new candidate solutions anywhere in the search space. Given a sufficient number of cycles, random search is likely to
Fig. 3 Examples of explorative (left) and exploitative (right) search approaches. The same number of solution samples were evaluated in the two cases
find the main peak, although it is unlikely to precisely find the global optimum. If local search is characterised by efficiency but a lack of consistency, random search is typically consistent but not accurate. In detail, at each cycle of the optimisation procedure, local search creates new solutions exploiting the results of the previous cycles (new solutions are similar to the old), whilst random search focuses on the exploration of the whole search space. Heuristics 1 and 2 are typical cases of local exploitative search, whilst Heuristic 1.3 is an extreme case of global explorative search. By adjusting the radius within which new solutions are generated, a balance between exploitative and explorative search can be struck. Addressing the exploitation vs. exploration trade-off is the main issue in the design of any optimisation algorithm. Given that only a limited number of solutions can be evaluated in a reasonable amount of time, the dilemma is whether to sample them in the most promising regions, or to sample them across the whole search space. As shown in Fig. 3, exploitative search allows a fine-grained and hence accurate search to be performed in a limited region of the solution space, with the risk of missing altogether the global optimum if it is outside this region. Explorative search allows the whole solution space to be covered, but the search will be broad-brush and inaccurate. The next section describes how biological honey bees solve the exploitation-exploration trade-off, and how their strategy is mimicked in the Bees Algorithm.
3 Bee Inspired! Intelligent Optimisation with the Bees Algorithm Honey bees are eusocial flying insects living in colonies of tens of thousands of individuals. They obtain their nutritional needs from pollen and nectar, which respectively provide proteins and lipids, and water and carbohydrates [15]. Although individual
bees consist of simple cognitive units with only local knowledge, they can collectively achieve globally intelligent behaviours and optimise the food foraging process in a rapidly varying environment [16].
3.1 Honey Bees Foraging Behaviour A bee colony employs a portion of its unemployed foragers as scouts, tasked to explore the environment surrounding the hive [16, 17]. Scout bees move randomly, looking for food sources within a radius which is typically within 10 km from the hive, but they can occasionally travel longer distances [18]. Scout bees look for flower patches where food is abundant, easy to extract, and rich in nutrients. In the case of nectar, it was shown that bees can precisely evaluate the energetic efficiency of a food source, that is the energy gained minus the harvesting costs [16]. Once they return to the hive, scout bees deposit the food gathered during the exploration process. Scouts who found a highly profitable food source communicate its location to idle mates through a ritual known as the “waggle dance” [17, 19]. The waggle dance is performed in a particular area of the hive called the “dance floor” and communicates to the onlookers three basic pieces of information regarding the flower patch: the direction where it is located, its distance from the hive, and its quality rating [17, 19]. Recruited through the waggle dance, idle foragers join the dancer in harvesting the food source. The duration of the waggle dance depends on the quality rating of the food source. Long dances are seen by more idle foragers, and hence recruit more bees [16, 19]. Once a recruited forager returns to the hive, it may in turn waggle dance itself to call more foragers to harvest the food source, and the number of foragers grows in geometric fashion. Thanks to this autocatalytic mechanism, the bee colony can quickly mobilise a large portion of the foragers to harvest the most profitable food sources, thus optimising the efficiency of the food collection process (i.e., amount of food collected versus the cost of collecting it) [16].
3.2 The Bees Algorithm Without loss of generality, it will be assumed in the rest of this chapter that the optimisation problem requires the minimisation of a given measure of cost. This measure is evaluated through an objective function y = f(x), where x is a candidate solution and y is its cost. The Bees Algorithm takes inspiration from the food foraging strategy of honey bees to search for the optimal solution to a given problem. Each point in the solution space (i.e., each potential solution) is thought of as a food source. A colony of artificial agents, divided into scout and forager bees, is used to search for the optimum. Scout bees randomly explore the solution space and evaluate the quality
of the visited (sampled) solutions via the objective function. The purpose of scout bees is to sustain the explorative effort; their action is described by the previously defined Heuristic 1.3. The solutions sampled by the scout bees are ranked in increasing order of cost, and forager bees are recruited to search the solution space in the neighbourhood of the highest-ranking locations. That is, new candidate solutions similar to the best scouts' finds are generated and evaluated. In Bees Algorithm parlance, the neighbourhood of a solution is called a "flower patch". The neighbourhood of the very best (elite) solutions is searched by a large number of foragers, whilst the remaining flower patches are visited by fewer foragers. This gradual allocation of foragers is inspired by the recruiting mechanism of the waggle dance. The purpose of forager bees is to sustain the exploitative effort; their action is described by the previously defined Heuristic 1.2. The rest of this section describes the Bees Algorithm in detail. The description considers the continuous optimisation case. As discussed later, with minimal modification the algorithm can be used to solve discrete (combinatorial) optimisation problems. The flowchart of the bare bone algorithm (Basic Bees Algorithm) [8, 20] is shown in Fig. 4, and the main parameters are listed in Table 1.
Fig. 4 The basic Bees Algorithm
Table 1 Main parameters of the Bees Algorithm

Parameter   Description
maxIt       Number of iterations of the main cycle
ns          Number of scout bees
ne          Number of elite sites
nb          Number of best sites
nre         Recruited bees for elite sites
nrb         Recruited bees for remaining best sites
ngh         Initial size of neighbourhood
stlim       Limit of stagnation cycles for site abandonment

3.2.1 Representation Scheme
Given the space of feasible problem solutions U = {x ∈ R^n; min_i < x_i < max_i, i = 1, …, n}, and an objective function f(x): U → R, each candidate solution is expressed as an n-dimensional vector of decision variables x = {x1, …, xn}.
3.2.2 Initialisation
In this phase, the initial population is randomly scattered with uniform probability across the solution space. The population is fixed to ns scout bees. The algorithm then enters the main loop, which is composed of four phases. The sequence of optimisation cycles is interrupted when the stopping criterion is met.
3.2.3 Waggle Dance
The ns solutions found by the scouts are ranked in increasing order of cost. The scouts that landed on the best (lowest cost) nb ≤ ns solutions are considered to have found a promising region of the search space. Each one of these nb scout bees performs the "waggle dance", that is, it recruits foragers for local exploration of the flower patch. The number of recruited foragers is deterministically allocated: nre foragers are assigned to the very best ne highest-ranking (elite) sites, and the remaining (nb − ne) sites are allocated nrb ≤ nre foragers. According to the above procedure, more bees are assigned to search the solution space in the vicinity of the ne points where the value of the cost function f is lowest. The local search is thus more thorough in the neighbourhood of the elite sites, which represent the most promising regions of the solution space. This cost-based differential recruitment is modelled on the waggle dance of biological bees and contributes to defining the exploitative effort.
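A minimal sketch of this recruitment step is given below (the variable names follow Table 1, but the code itself is only an illustration added here): the scouts are ranked by cost and nre or nrb foragers are allocated to the elite and the remaining best sites respectively.

```python
def allocate_foragers(scout_costs, ne, nb, nre, nrb):
    """Rank scouts by increasing cost and assign foragers to the nb best sites."""
    ranked = sorted(range(len(scout_costs)), key=lambda i: scout_costs[i])
    allocation = {}
    for rank, site in enumerate(ranked[:nb]):
        allocation[site] = nre if rank < ne else nrb  # elite sites receive more bees
    return allocation  # sites outside the best nb are not allocated any foragers
```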
3.2.4 Local Search
For each of the nb selected flower patches, the recruited bees are randomly placed with uniform probability in a neighbourhood of the solution found by the scout bee. This neighbourhood is defined as an n-dimensional hyper-box of sides a1 , . . . , an centred on the solution. For each flower patch, the value taken by the objective function on the locations visited by the recruited bees is evaluated. If one of the recruited bees lands on a solution of lower cost than the solution found by the scout bee, that bee is chosen as the new scout. At the end, only the solution found by the scout is retained for each neighbourhood. The scout will become the dancer once back at the hive. In nature, the feedback mechanism is different since all the bees involved in the foraging process might perform the waggle dance.
3.2.5 Global Search
In the global search phase, ns−nb bees are randomly placed across the solution space. These scouts evaluate the solutions where they landed via the objective function. Random scouting represents the exploration effort of the Bees Algorithm.
3.2.6 Population Update
At the end of each iteration, the new scout population of the bee colony is formed out of two groups. The first group comprises the nb bees that found the best solution of each flower patch and represents the results of the local exploitative search. The second group is composed of the ns − nb scout bees associated with a randomly generated solution and represents the results of the global explorative search.
3.2.7 Stopping Criterion
The stopping criterion depends on the problem domain and can be either the location of a solution of cost inferior to a pre-defined value (a solution deemed good enough), or the completion of a pre-defined number of evolution cycles.
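Putting the phases of Fig. 4 together, the following compact Python sketch implements a Basic Bees Algorithm for continuous minimisation. It is a minimal illustration of the procedure described above, not a reference implementation: the parameter names mirror Table 1, the default values are arbitrary, and the neighbourhood size ngh is kept fixed (shrinking and site abandonment are added in the Standard version discussed next).

```python
import random

def basic_bees_algorithm(cost, bounds, ns=20, ne=2, nb=6, nre=10, nrb=5,
                         ngh=0.1, max_it=200):
    """Minimise 'cost' over a box-bounded continuous search space."""
    def random_solution():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def neighbour(x):
        # sample uniformly inside a hyper-box of half-width ngh*(max-min) around x
        return [min(max(xi + random.uniform(-1, 1) * ngh * (hi - lo), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]

    population = [random_solution() for _ in range(ns)]        # initialisation
    for _ in range(max_it):
        population.sort(key=cost)                              # waggle dance (ranking)
        new_population = []
        for rank, scout in enumerate(population[:nb]):         # local search
            foragers = nre if rank < ne else nrb
            samples = [neighbour(scout) for _ in range(foragers)]
            new_population.append(min(samples + [scout], key=cost))
        new_population += [random_solution() for _ in range(ns - nb)]  # global search
        population = new_population                            # population update
    return min(population, key=cost)

# Example: minimise the 2D sphere function
print(basic_bees_algorithm(lambda v: sum(x * x for x in v), [(-5, 5)] * 2))
```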
3.3 The Standard Bees Algorithm Since its first appearance in the literature [7], the Bees Algorithm has often included two further procedures: neighbourhood shrinking and site abandonment. Although not featured in every Bees Algorithm implementation, these two procedures are so widely used that, added to the Basic Bees Algorithm, they constitute what is commonly
Fig. 5 The standard Bees Algorithm
referred to as the Standard Bees Algorithm [8, 20]. The flowchart of the Standard Bees Algorithm is visualised in Fig. 5.
3.3.1 Neighbourhood Shrinking
The size a = {a1, . . . , an} of the flower patches is initially set to a large value. For each variable xi, it is set as follows:

ai(t) = ngh(t) · (maxi − mini)    (6)

where t denotes the t-th iteration of the Bees Algorithm main loop, ngh(t) is a parameter which is usually set to ngh(0) = 1.0 and adapted through the execution of the algorithm, and mini ≤ xi ≤ maxi are the constraints on the i-th variable. The size of a flower patch is kept unchanged if the local search procedure yields points of lower cost. If the local search fails to bring any improvement in the value of the objective function, the size of a is decreased. The updating of the neighbourhood size follows the heuristic formula:

ngh(t + 1) = k · ngh(t)    (7)

where usually k = 0.8. Thus, following this strategy, the local search is initially defined over a large neighbourhood, and has a largely explorative character. As the search progresses, a more detailed search is needed to refine the current local optimum. Hence, the search is made increasingly exploitative, and the area around the optimum is searched more
thoroughly. The neighbourhood shrinking method bears some similarities with the Simulated Annealing [21] procedure.
3.3.2 Site Abandonment
This procedure is applied when the search stops progressing within a flower patch despite repeated application of the neighbourhood shrinking procedure. After a predefined number (stlim) of consecutive stagnation cycles, local search is assumed to have reached the bottom of the local dip of the cost function, and no further progress is possible. Consequently, the exploration of the patch is terminated and a new random solution is generated. If the site being abandoned corresponds to the solution of lowest-so-far cost, the location of the optimum is recorded. If no other flower patch produces a solution of lower cost during the remainder of the search, the recorded best is taken as the final solution.
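The two procedures amount to a small piece of book-keeping attached to each flower patch. The fragment below is an illustrative sketch (the dictionary keys and default values are our own assumptions): the patch size is shrunk after every unproductive cycle, and the patch is abandoned after stlim consecutive stagnation cycles.

```python
def update_patch(patch, improved, k=0.8, stlim=10):
    """Neighbourhood shrinking (Eq. 7) and site abandonment for one flower patch.
    'patch' is a dict with the keys: ngh, stagnation, abandoned."""
    if improved:
        patch["stagnation"] = 0          # progress: keep the patch size unchanged
    else:
        patch["ngh"] *= k                # Eq. 7: shrink the neighbourhood
        patch["stagnation"] += 1
        if patch["stagnation"] >= stlim:
            patch["abandoned"] = True    # restart the site from a random solution
    return patch
```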
3.4 The Bees Algorithm for Discrete Optimisation Problems The description in this section has so far focused on continuous optimisation problems. In truth, the whole algorithm as described in Figs. 4 and 5 can be seen independently of the nature of the solutions. If it is possible to define the 'similarity' of two solutions, the algorithm can be applied to both discrete and continuous optimisation problems. The concept of similarity will determine the implementation of the local search and, if used, of the neighbourhood shrinking procedure. In the case where variables take values defined in countable sets such as the integer or natural numbers, a neighbourhood can be defined analogously to the continuous optimisation case. Some care might be taken to define neighbourhood shrinking, to make sure that the shrinking equation generates smaller intervals of feasible numbers. For example: if the solution space is defined in the set of integer numbers, and −4 ≤ ai ≤ 4, the first application of Eq. 6 would give −3.2 ≤ ai ≤ 3.2, effectively restricting the range of permitted values to the interval [−3, 3]. The second application would give −2.56 ≤ ai ≤ 2.56, and the range of permitted values for the variable would be further restricted to [−2, 2]. However, the next application would give −2.05 ≤ ai ≤ 2.05, and the range of permitted values would remain unchanged at [−2, 2]. An alternative method would be to rewrite the neighbourhood shrinking formula as follows:

ai(t + 1) = ai(t) − 1    (8)

Defining a neighbourhood might be more complex for cases like the TSP, where there is no metric to define the closeness of two solutions. Yet, some concept of closeness can still be defined. For example: two tours that differ only by the visiting order of two cities being swapped could be defined as 'similar', and the degree of closeness of two solutions could be measured by the number of pairs of
cities whose visiting order is swapped. Other definitions of closeness are possible, and the decision on what best represents the similarity between solutions becomes here an additional design variable.
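As an illustration of the swap-based notion of similarity described above, a local move and a rough closeness measure for the TSP could be written as follows (a sketch under the stated assumptions; many other move operators and distance measures are equally valid):

```python
import random

def swap_neighbour(tour):
    """Generate a 'similar' tour by swapping the visiting order of two cities."""
    i, j = random.sample(range(len(tour)), 2)
    neighbour = list(tour)
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour

def mismatch_count(tour_a, tour_b):
    """A rough closeness proxy: positions at which the two tours disagree."""
    return sum(1 for a, b in zip(tour_a, tour_b) if a != b)
```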
4 Variants of the Bees Algorithm In addition to the standard formulation, many variants of the Bees Algorithm have been proposed. A recent survey [20] highlighted the emergence in the literature of three main variants of the Bees Algorithm: the Basic Bees Algorithm (BBA), the Shrinking-based Bees Algorithm (ShBA), and the Standard Bees Algorithm (SBA). The BBA refers to the way many authors [8, 22–24] have described the bare bone version. The ShBA includes the neighbourhood shrinking procedure in the algorithm. The original formulation of the Bees Algorithm [7] can be regarded as an ShBA, as it already featured a gradually shrinking neighbourhood. Finally, the SBA is indicated by many authors [8, 25, 26] as the version including both the neighbourhood shrinking and site abandonment procedures. Many alternative recruitment, neighbourhood modification, and site abandonment procedures were proposed in the literature. The recruitment procedure was widely investigated. In his doctoral dissertation, Ghanbarzadeh [27] suggested two alternative methods for allocating the number of foragers to a flower patch: proportionally to (a) the cost or (b) the location of the sites. Other authors proposed recruitment schemes where the number of foragers was initially proportional to the value taken by the objective function, and progressively decreased by a fixed amount [23], or according to an empirically determined fuzzy logic policy [28]. Pham et al. [29] used Kalman filtering to allocate the number of bees to the sites selected for local search. This strategy helped to improve the accuracy and speed of a neural network optimisation (training) problem. Another variant was proposed by Imanguliyev [30], who used an alternative scheme where the number of recruited foragers depended on the efficiency rate of the site, rather than the cost of the local best. In other schemes, newly found flower patches did not compete with established ones until they underwent a pre-set number of optimisation cycles [26]. Other authors focused on the modification of the neighbourhood shrinking procedure. Ahmad [31] used asymmetric neighbourhoods, increasing the size of the flower patch in the direction in the solution space that yielded better solutions, and decreasing it in the direction that yielded no improvement in the quality of the solutions. Other authors proposed adaptive schemes where the neighbourhood size may undergo different phases of decrease and increase [24]. Many studies focused on defining local search in combinatorial optimisation problems. Shift and swap operations were amongst the most used [13, 32], whilst other authors [33] employed operators inspired by the theory of inventive problem solving (TRIZ) [34]. Finally, in some Bees Algorithm variants site abandonment is not only applied to neighbourhoods that stop yielding improvements in the output of the objective
function, but also to all solutions of inferior quality to the solution being abandoned [28].
5 Discussion The Bees Algorithm randomly searches the solution space for promising solutions, and intensively searches their neighbourhoods looking for the global minimum of the objective function, that is the solution that minimises the given cost measure. Intensive neighbourhood search is performed in parallel at different locations of the solution space, adaptively shifting the sampling effort at each generation to favour the very best locations. Neighbourhoods can be abandoned due to lack of progress or replaced with more promising ones found via global search. The original division of the population into scouts and foragers was meant to keep the exploration effort clearly separated from the exploitation effort. However, very soon it became clear [8] that other factors played a role in the exploration vs. exploitation trade-off, such as the number of parallel local searches and the partition of foragers between best and elite sites. The contribution of each of these factors to the overall trade-off is tuned by one or more parameters, such as ne and nre for the exploitation of the elite sites. The main challenge in applying the Bees Algorithm is to adjust these parameters to strike the optimal balance between exploration and exploitation. This balance needs to be found for each application and requires some trial and error. Several empirical studies have investigated the influence of the parameterisation of the Bees Algorithm on its performance, using popular [8, 20] and purpose-built [35, 36] benchmarks. These studies confirmed that tuning the Bees Algorithm parameters was indeed important for top performance, although several settings showed robust nearly optimal performance on many different types of problems. An important decision in the parameterisation of the local search is how to distribute the exploitation effort in time. That is, given a fixed number of sampling opportunities s = nre × stlim, whether it is best to concentrate the sampling effort in a few cycles (large nre) and quickly abandon the neighbourhood if there is no progress, or to sample the region less intensively but for a longer time, allowing several cycles of stagnation in the local search progress (large stlim). Experimental evidence suggested that the latter strategy (large stlim) generally gives the best results. This result was theoretically confirmed by Baronti et al. [37]. Baronti et al.'s theoretical study also demonstrated that, as the intensity of the local search effort increases (nre or nrb are increased), the convergence speed of the search to the local optimum quickly levels off. These results indicate that there is a limit to the benefits that can be reaped from intensifying the local search effort. Yet, experimental [35, 36] and theoretical [37] evidence showed that, before this limit is reached, it is important to ensure an adequate sampling of the neighbourhood. Perhaps the least investigated component of the Bees Algorithm is the global search conducted by the scout bees. One reason is that, as local search quickly
descends the local dips of the cost function, it becomes less likely to generate competitive solutions via random search. This was the motivation that led QT Pham et al. [26] to introduce the concept of ‘young bees’, who are left to improve their findings via a few cycles of local search before they compete for foragers with more established scouts. Other authors [35, 36, 38] do not use scout bees and rely on the number nb of parallel searches and site abandonment to ensure adequate exploration of the search space.
6 Conclusions This chapter presented the Bees Algorithm as a solution to the problem of parameter optimisation. The main variants of the standard algorithm were discussed, and the main issues related to the application of the Bees Algorithm were analysed. In the fifteen years since its creation, the Bees Algorithm has attracted significant research interest and found hundreds of applications. The very first article presenting the Bees Algorithm [7] has accumulated to date over 1500 citations according to Google Scholar. Despite its popularity and wide use, the behaviour of the algorithm has not yet been fully studied analytically. This is a problem common to many nature-inspired algorithms, partly due to the complexity of their search mechanisms, partly because of a tendency of the research community to focus on practical applications and empirical solutions. Yet, Baronti et al. [37] have clearly shown the advantages of a good analytical understanding of the Bees Algorithm. New theoretical studies are needed to shed further light on the mechanisms and properties of the Bees Algorithm and its operators. As discussed earlier, the many parameters (Table 1) currently defining the behaviour of the Bees Algorithm require some tuning effort. Work by Pham and Haj Darwish [28] and Ismail and Pham [39] has produced versions of the Bees Algorithm with fewer parameters that should ease the task of adapting the algorithm to different problems. It is not clear whether it would be possible to remove more parameters, and an adaptive, parameter-free version remains the Holy Grail of Bees Algorithm research. An important aspect of the Bees Algorithm is the generality and flexibility of its approach, which makes it easily applicable to any optimisation problem where a quality function and some metrics on the similarity of the solutions can be defined. By changing its parameters, the Bees Algorithm can be readily re-configured to fit a wide range of applications. Once again, the study of Nature has provided scientists with an elegant, versatile, and effective problem-solving tool. Ongoing and future research will widen the understanding and use of the Bees Algorithm.
References
1. Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Oxford University Press
2. Fogel DB (2006) Evolutionary computation: toward a new philosophy of machine intelligence. John Wiley & Sons
3. Reynolds CW (1987) Flocks, herds and schools: a distributed behavioral model. In: Proceedings of the 14th annual conference on computer graphics and interactive techniques, pp 25–34
4. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95—international conference on neural networks, pp 1942–1948
5. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B 26:29–41
6. Lones MA (2020) Mitigating metaphors: a comprehensible guide to recent nature-inspired algorithms. SN Comput Sci 1:1–12
7. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The Bees Algorithm—a novel tool for complex optimisation problems. In: Intelligent production machines and systems. Elsevier, pp 454–459
8. Pham DT, Castellani M (2009) The Bees Algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng Part C J Mech Eng Sci 223:2919–2938
9. Biggs N, Lloyd EK, Wilson RJ (1986) Graph theory 1736–1936. Oxford University Press
10. Davendra D (2010) Traveling salesman problem: theory and applications. BoD—Books on Demand
11. Ilavarasi K, Joseph KS (2014) Variants of travelling salesman problem: a survey. In: International conference on information communication and embedded systems (ICICES2014), pp 1–7
12. Applegate DL, Bixby RE, Chvátal V, Cook WJ (2011) The traveling salesman problem: a computational study. Princeton University Press
13. Castellani M, Otri S, Pham DT (2019) Printed circuit board assembly time minimisation using a novel Bees Algorithm. Comput Ind Eng 133:186–194
14. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67–82
15. Brodschneider R, Crailsheim K (2010) Nutrition and health in honey bees. Apidologie 41:278–294
16. Tereshko V, Loengarov A (2005) Collective decision making in honey-bee foraging dynamics. Comput Inf Syst 9:1
17. Von Frisch K (2014) Bees: their vision, chemical senses, and language. Cornell University Press
18. Beekman M, Ratnieks FLW (2000) Long-range foraging by the honey-bee, Apis mellifera L. Funct Ecol 14:490–496
19. Seeley TD (2009) The wisdom of the hive: the social physiology of honey bee colonies. Harvard University Press
20. Hussein WA, Sahran S, Sheikh Abdullah SNH (2017) The variants of the Bees Algorithm (BA): a survey. Artif Intell Rev 47:67–121
21. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680
22. Pham DT, Castellani M, Fahmy AA (2008) Learning the inverse kinematics of a robot manipulator using the Bees Algorithm. In: 2008 6th IEEE international conference on industrial informatics, pp 493–498
23. Packianather MS, Landy M, Pham DT (2009) Enhancing the speed of the Bees Algorithm using pheromone-based recruitment. In: 2009 7th IEEE international conference on industrial informatics, pp 789–794
24. Yuce B, Packianather MS, Mastrocinque E, Pham DT, Lambiase A (2013) Honey bees inspired optimization method: the Bees Algorithm. Insects 4:646–662
25. Castellani M, Pham QT, Pham DT (2012) Dynamic optimisation by a modified Bees Algorithm. Proc Inst Mech Eng Part I J Syst Control Eng 226:956–971
26. Pham QT, Pham DT, Castellani M (2012) A modified Bees Algorithm and a statistics-based method for tuning its parameters. Proc Inst Mech Eng Part I J Syst Control Eng 226:287–301
27. Ghanbarzadeh A (2007) Bees algorithm: a novel optimisation tool. Cardiff University (United Kingdom)
28. Pham DT, Darwish AH (2008) Fuzzy selection of local search sites in the Bees Algorithm. In: Proceedings of the 4th international virtual conference on intelligent production machines and systems (IPROMS 2008), pp 1–14
29. Pham DT, Darwish HA (2010) Using the Bees Algorithm with Kalman filtering to train an artificial neural network for pattern classification. Proc Inst Mech Eng Part I J Syst Control Eng 224:885–892
30. Imanguliyev A (2013) Enhancements for the Bees Algorithm
31. Ahmad SA, Pham DT, Ng KW, Ang MC (2012) TRIZ-inspired asymmetrical search neighborhood in the Bees Algorithm. In: 2012 sixth Asia modelling symposium, pp 29–33
32. Tapkan P, Özbakir L, Baykasoğlu A (2012) Bees algorithm for constrained fuzzy multiobjective two-sided assembly line balancing problem. Optim Lett 6:1039–1049
33. Ang MC, Ng KW, Pham DT, Soroka A (2013) Simulations of PCB assembly optimisation based on the Bees Algorithm with TRIZ-inspired operators. In: International visual informatics conference, pp 335–346
34. Royzen Z (1993) Application TRIZ in value management and quality improvement. In: The SAVE proceedings, pp 2–5
35. Pham DT, Castellani M (2014) Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms. Soft Comput 18:871–903
36. Pham DT, Castellani M (2015) A comparative study of the Bees Algorithm as a tool for function optimisation. Cogent Eng 2:1091540
37. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the Bees Algorithm. Swarm Evol Comput 59:100746
38. Fahmy AA, Kalyoncu M, Castellani M (2012) Automatic design of control systems for robot manipulators using the Bees Algorithm. Proc Inst Mech Eng Part I J Syst Control Eng 226:497–508
39. Ismail AH, Pham DT (2022) Bees traplining metaphors for the vehicle routing problem using a decomposition approach. In: Pham DT, Hartono N (eds) Intelligent production and manufacturing optimisation—the Bees Algorithm approach. Springer Series in Advanced Manufacturing
Manufacturing Process Optimisation
Minimising Printed Circuit Board Assembly Time Using the Bees Algorithm with TRIZ-Inspired Operators Mei Choo Ang and Kok Weng Ng
M. C. Ang (B) Institute of IR 4.0, Universiti Kebangsaan Malaysia, 43600 Bangi, Malaysia e-mail: [email protected]
K. W. Ng Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Malaysia Campus, 43500 Semenyih, Malaysia e-mail: [email protected]
1 Introduction A substantial number of complex multivariable optimisation problems, such as those confronted within the systems engineering environment, cannot be solved precisely within polynomial-bounded computation times. This has created interest in developing search algorithms that find near-optimal solutions within reasonable computational times. In the search for more efficient algorithms, researchers have looked to nature for inspiration, which has led to algorithms modelled on biological activities; the Bees Algorithm is one such biologically inspired algorithm. The Bees Algorithm is a swarm-based search algorithm that is well established for efficiently finding good solutions [1–4]. It drew its inspiration from the food foraging behaviour of honeybees and can be categorised as an "intelligent" optimisation tool [5]. The algorithm was developed by Pham et al. [5] to solve continuous problems and was further extended to solve discrete, combinatorial and multi-objective problems [2, 3, 6–11]. The concept of the Bees Algorithm is based on the foraging behaviour of bees, which explore areas in search of nectar and return to inform others of the nectar source [12] using the 'waggle' dance [13]. The entire scouting and recruitment process was adopted and simulated by the Bees Algorithm.
Even though the Bees Algorithm attempts to mimic the food foraging behaviour of honeybees, unlike the version applied to continuous problems, the Bees Algorithm applied to discrete problems has adopted local search operators such as 2-opt, 3-opt and others [2, 11, 14]. These operators are applied in a single operation using one operator or in a sequence of operations with different operators. The decision on which operator to use and in what sequence is made entirely by trial and error. Hence, there is a need to find ways to assist decision making in applying different operators and their sequences in the Bees Algorithm to solve discrete problems. This research work examines how to apply the theory of inventive problem solving (TRIZ) [15] to derive optimisation operators for the Bees Algorithm. The following sections introduce the Bees Algorithm, describe the TRIZ methodology, explain how TRIZ is applied to derive TRIZ-inspired operators for the Bees Algorithm, and present the results of a case study on using the TRIZ operator-enhanced algorithm for PCB assembly optimisation.
2 Bees Algorithm The Bees Algorithm may be considered a form of Swarm Optimisation (SO) algorithm in that it is a method inspired by the collective behaviour exhibited by animals. SO algorithms are inspired by groups of animals that gather in a particular area (often in large numbers), for instance, the flocking of birds or the schooling of fish [5]. In particle SO systems, there exists a population of candidate solutions within which individual solutions take the form of "particles" that evolve or alter their positions. The positions of the particles in the search space are self-adjusted based upon the experience of each particle and that of neighbouring particles, by recalling the best location visited by the particle and its neighbours, thereby applying local and global search methods together [1]. The Bees Algorithm is based specifically on the behaviour of the common honeybee. Its original, basic form, developed and described in detail by Pham et al. [5] to solve continuous optimisation problems, involves randomly generating scout bees within the search space of the target function, followed by an evaluation of the fitness of the sites within the search space visited by the scout bees. The method for the evaluation of fitness depends on the problem to be optimised; in general, the 'fitness' can be the output value of the function that is to be optimised. The bees with the highest fitness are identified and labelled "selected bees", and the sites they visited are selected for a neighbourhood search. Following this, the Bees Algorithm undertakes neighbourhood searches within the sites that were selected earlier. To do that, the size of the neighbourhood search also needs to be determined. To ensure that the best potential solution sites are thoroughly searched, a higher number of bees is allocated to these best sites. The bees can be chosen directly according to the fitness associated with the sites they are visiting. Alternatively, the fitness values can be used to ascertain the probability of a bee being selected. The use of selective recruitment and the application of random scouting are key operational features of the Bees Algorithm.
The best fitness values within the neighbourhoods (patches) that have been searched are then selected to form the next bee population (helping to cut down the number of points that will be searched). The remaining population of bees is distributed around the search space randomly. This facilitates the discovery of potential solutions and helps prevent the Bees Algorithm from focusing on what may potentially be a false optimum. These steps are repeated until the stopping criterion (for example, a fitness test or a test on whether a certain number of iterations have taken place) is met. After each iteration, the new population will consist of bees for the selected patches and scout bees for random global search. However, for discrete and combinatorial problems, the operation “Determine the size of the neighbourhood” cannot be performed. This is because the search space is discrete and the model representation is in a sequential and contiguous form, where the size of the neighbourhood cannot be determined. Hence, the Bees Algorithm that is applied to solve discrete and combinatorial problems needs to be adapted, and it will slightly differ from the Bees Algorithm for continuous problems. Figure 1 illustrates the basic Bees Algorithm adapted for solving discrete and combinatorial problems [16]. Without the ability to determine the size of the neighbourhood, the Bees Algorithm applied local search operators for solving discrete problems such as the single-point insertion operators, the 2-opt operator, 3-opt and others. These local search operators [2] can be used individually as standalone operators or in a sequence of multiple operators during an application to solve a combinatorial or discrete problem.
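As a concrete illustration of the adapted procedure summarised in Fig. 1, the following Python sketch shows one possible form of the Bees Algorithm loop for discrete problems. The function names, the parameter values and the single generic neighbourhood_move (standing in for 2-opt, single-point insertion or a sequence of such operators) are assumptions made for illustration only, not the implementation of [16], and the differentiated treatment of elite sites is omitted for brevity.

def discrete_bees_algorithm(random_solution, fitness, neighbourhood_move,
                            n=50, m=10, nr=20, iterations=500):
    # random_solution(): returns a random candidate sequence (scout bee)
    # fitness(s): returns the fitness of a candidate (higher is better)
    # neighbourhood_move(s): returns a modified copy of a candidate
    population = [random_solution() for _ in range(n)]
    for _ in range(iterations):
        population.sort(key=fitness, reverse=True)            # evaluate and rank the bees
        next_population = []
        for site in population[:m]:                           # m selected sites
            recruits = [neighbourhood_move(site) for _ in range(nr)]
            next_population.append(max(recruits + [site], key=fitness))  # fittest bee per site
        next_population += [random_solution() for _ in range(n - m)]     # remaining bees scout randomly
        population = next_population
    return max(population, key=fitness)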
3 Theory of Inventive Problem Solving The theory of inventive problem solving, more commonly known by its acronym TRIZ, from the Russian "Teoriya Resheniya Izobretatelskikh Zadatch", was developed by Genrich Altshuller in 1985 [15] based on his extensive research work on design patents to help solve design problems. TRIZ can be applied to find solutions to design problems based on the notion that there are always conflicting features (contradictions) in solving inventive design problems. According to TRIZ, an improving feature proposed to solve a design problem will always lead to a worsening feature, and there are inventive principles that can be applied to resolve these contradictions and solve the design problem. TRIZ has been applied widely to deal with problem-solving, not only in product design and engineering but also in domains such as business and management [17] as well as software development [18]. There are 39 improving and 39 worsening features to which design problems can be mapped, and based on these corresponding contradicting features, one or more inventive principles can be recommended to solve the design problems. The 40 inventive principles are shown in Table 1.
Fig. 1 The Bees Algorithm for discrete and combinatorial problems. Flowchart steps: initialise population; evaluate fitness of the population; select sites for a neighbourhood search using a neighbourhood operator (2-opt operator, single-point insertion operator); recruit bees for selected sites and evaluate fitness; select the fittest bee; assign remaining bees to search randomly and evaluate fitness; repeat until the stopping criteria are met.
The terminology and application of the 40 inventive principles are more inclined towards the engineering domain. Thus, the interpretation of the 40 inventive principles in other domains is subjective and highly dependent on the knowledge, experience, and creativity of TRIZ users. Only three inventive principles are applied in this work to inspire the derived local search operators to solve the optimisation problems.
4 MBTD PCB Assembly This research work has a key aim, which is to determine an optimal assembly sequence that will assemble the PCB components in the shortest time [6, 19, 20]. The machine is required to assemble each PCB in the shortest time possible to lower production costs and increase productivity. The optimisation challenge depends on
Table 1 The 40 inventive principles of TRIZ
1. Segmentation; 2. Taking out; 3. Local quality; 4. Asymmetry; 5. Merging
6. Universality; 7. Nested doll; 8. Anti-weight; 9. Preliminary anti-action; 10. Preliminary action
11. Beforehand cushioning; 12. Equipotentiality; 13. The other way round; 14. Spheroidality—curvature; 15. Dynamisation
16. Partial or excessive actions; 17. Another dimension; 18. Mechanical vibration; 19. Periodic action; 20. Continuity of useful action
21. Skipping; 22. Blessing in disguise; 23. Feedback; 24. Intermediary; 25. Self-service
26. Copying; 27. Cheap short-living objects; 28. Mechanics substitution; 29. Pneumatics and hydraulics; 30. Flexible shells and thin films
31. Porous materials; 32. Colour changes; 33. Homogeneity; 34. Discarding and recovering; 35. Parameter changes
36. Phase transitions; 37. Thermal expansion; 38. Strong oxidants; 39. Inert atmosphere; 40. Composite materials
the machine, and among the PCB assembly machines, three of the basic configurations for PCB assembly machines are analysed and are shown in Table 2 [6, 19]. For PCB assembly machines with Configuration 1, the components are delivered via a single feeder (most likely in a magazine form) to an assembly head in the X–Y plane. These components are then transferred onto the PCB at a pre-determined x–y location via the assembly head. This PCB is held to a table that will travel in the X–Y plane to enable these components to be placed. This configuration is the most basic form of a PCB assembly machine, which is usually used when there is only one type of component that needs to be assembled. Such an assembly process is the same as the travelling salesman problem from the optimisation problem perspective. The PCB assembly machines with Configuration 2 have a feeder (or feeder array) with a stationary table (and therefore the PCB is also mounted to a fixed location). The single assembly head travels to deliver components to the correct placement positions on the PCB in the X–Y plane. The machine can be used to assemble various types of components onto a PCB if an array of feeders is used. Last, the PCB assembly machines with Configuration 3 possess a turret that carries multiple pick-up/placement assembly heads. The turret has a centre of rotation fixed in the X–Y plane. The required component is then brought by a feeder array that travels along the X-axis to the fixed pickup location. The table travels to position different points of the PCB in the X–Y plane, in sequence, at the fixed component placement location. Component pick-up and placement will occur simultaneously after the correct feeder and table have reached their designated positions and the
Table 2 PCB assembly system configurations [6, 19]
Configuration 1. Model of problem: travelling salesman. Description of assembling process: the machine assembles the components while the table that holds the PCB travels along the x–y axes; the feeding of components into the assembly arm is direct. Key characteristics of the solution: the shortest path to assemble the components.
Configuration 2. Model of problem: pick and place. Description of assembling process: the machine arm travels along the x–y axes to pick and place (assemble) the components while the table that holds the PCB, as well as the feeder system, are stationary. Key characteristics of the solution: the feeder slot arrangement; the shortest path to assemble the components.
Configuration 3. Model of problem: moving board with time delay (MBTD). Description of assembling process: the multi-head turret collects the components from the feeder system (which moves along one axis to deliver the right component for assembly) with one arm whilst placing them with the other onto the PCB by revolving; the PCB is restrained to an x–y moving table that moves according to the location and sequence of the components to be assembled. Key characteristics of the solution: the feeder slot arrangement; the shortest path to assemble the components; the number of heads on the assembly turret.
indexing of the appropriate pick-up and placement heads by the turret has been completed. Such machines can also be used to place components of different types. Among the three PCB assembly machine configurations, machines with Configuration 3, the MBTD type, are the most sophisticated, as this type of machine requires the synchronisation of three systems (the multi-head turret pick-and-place system, the feeder system, and the assembly table) to carry out the assembly task. The PCB assembly machine with Configuration 3 (refer to Fig. 2) works synchronously to perform the assembly at a very high speed. The feeder travelling from one point to another to supply the right component to the multi-head turret requires sophisticated coordination. When a component is collected, the multi-head turret revolves to the predetermined position where the component is to be placed. Concurrently, the PCB travels to that location while the feeder system moves to a new location to provide the next pick-up component.
Fig. 2 An MBTD-type PCB assembly machine (with 10 feeder slots, 2 rotary heads and a movable assembly table) [6, 19, 20]
The time interval between placing components ci(xi, yi) and cj(xj, yj), which are held on the X–Y assembly table, is given by the Chebyshev (Tchebyshev) metric:

t_1(c_i, c_j) = \max\left( \frac{|x_j - x_i|}{v_x}, \frac{|y_j - y_i|}{v_y} \right)    (1)
where vx and vy are the velocities of the X–Y table in the x and y directions respectively, set at 60 mm/s for this experiment. The time taken for the feeder carrier to travel between feeders fi(xi^f, yi^f) and fj(xj^f, yj^f) is calculated by

t_2(f_i, f_j) = \frac{\sqrt{|x_j^f - x_i^f|^2 + |y_j^f - y_i^f|^2}}{v_f}    (2)
where vf is the speed of the feeder carrier (60 mm/s in the case study). In the case study presented in this research work, the feeders are aligned in a straight line along the y-axis, as illustrated in Fig. 2, with a y-spacing of 15 mm; the feeders share the same x-coordinate, whose effect is negligible. t3 is the time for turret indexing, which is 0.25 s per step. C = {c_1, …, c_i, …, c_{N−1}, c_N} denotes the component placement sequence, where c_i is the ith component to be placed. The feeder assignment sequence is denoted F = {f_1, …, f_j, …, f_{R−1}, f_R}, where f_j is the feeder for the jth component type. The time required to place component c_k is

\tau_k = \max\left( t_1(c_{k-1}, c_k),\; t_2(f_{k+g-1}, f_{k+g}),\; t_3 \right)    (3)
in which c_{k−1} = c_N if k = 1. If the index l of f_l exceeds N, f_l is substituted by f_{l−N}, which corresponds to the components of the next board in the batch; this is based on the assumption that each batch consists of a lot of identical PCBs. In this study, the "gap" g is 1, as there are only two turret heads. The total time to assemble a PCB is then

T_{Total} = \sum_{k=1}^{N} \tau_k    (4)
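For illustration, the following Python sketch evaluates Eqs. (1)–(4) for a given placement sequence. The function names and the feeder_positions argument (the position of the feeder supplying each placement, in placement order) are assumptions introduced here for clarity, not part of the original formulation.

import math

def t1(ci, cj, vx=60.0, vy=60.0):
    # Eq. (1): X-Y table travel time between two placement points (Chebyshev metric)
    return max(abs(cj[0] - ci[0]) / vx, abs(cj[1] - ci[1]) / vy)

def t2(fi, fj, vf=60.0):
    # Eq. (2): feeder carrier travel time between two feeder positions
    return math.hypot(fj[0] - fi[0], fj[1] - fi[1]) / vf

def total_assembly_time(placements, feeder_positions, t3=0.25, g=1):
    # placements: N (x, y) placement points in assembly order
    # feeder_positions: N (x, y) feeder positions, one per placement in the same order;
    # indices beyond N wrap around to the next board in the batch (f_l -> f_{l-N})
    N = len(placements)
    total = 0.0
    for k in range(1, N + 1):                       # 1-based index as in Eq. (3)
        c_prev = placements[(k - 2) % N]            # c_{k-1}, with c_0 = c_N
        c_curr = placements[k - 1]
        f_prev = feeder_positions[(k + g - 2) % N]
        f_curr = feeder_positions[(k + g - 1) % N]
        total += max(t1(c_prev, c_curr), t2(f_prev, f_curr), t3)   # Eqs. (3) and (4)
    return total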
5 Application of TRIZ to Operator Generation Three inventive principles were used in deriving the TRIZ-inspired operators in this work. Table 3 describes these inventive principles in their generic forms. Based on the description of the inventive principles and an in-depth study of how optimisation operators and algorithms work, the dynamisation, segmentation, and local quality operators are derived.
5.1 Dynamisation Operator The dynamisation operator randomly changes the group size in each iteration if no new best solution was found in the previous iteration. This change in group size affects the process undertaken by the segmentation operator. The dynamisation operator is necessary to improve the exploration of the local search; without it, the group size would remain the same throughout all the iterations in every run.
Table 3 The three inventive principles of TRIZ used in this research work
Dynamisation: allow the characteristics of an object, external environment, or process to change, or search for an optimal operating condition
Segmentation: split an object into independent parts
Local quality: construct each part of an object to achieve a useful function, which is either a new and/or complementary type of function
Fig. 3 The segmentation operator (a twelve-component representation divided into groups of size s = 3 and of size s = 5)
5.2 Segmentation Operator This segmentation operator divides the component representation and feeder representation into smaller groups (refer to Fig. 3). The size of a group may be different if the representation is not evenly divisible, as shown in Fig. 3. If the representation has twelve components and the group size is three, then we have four equally sized segments. If the group size is five, then the first and second groups will have five components each. The remaining two components will be in the third group.
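A minimal Python sketch of the segmentation operator, together with the random group-size change performed by the dynamisation operator, is given below. The function names and the range from which a new group size is drawn are assumptions made purely for illustration.

import random

def dynamise_group_size(current_size, n_items, improved):
    # Dynamisation operator: keep the current group size if the previous iteration
    # found a new best solution, otherwise pick a new size at random
    if improved:
        return current_size
    return random.randint(2, max(2, n_items // 2))

def segment(sequence, group_size):
    # Segmentation operator: split the representation into consecutive groups;
    # when the length is not evenly divisible the final group is smaller, e.g.
    # segment(list(range(1, 13)), 5) -> [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12]]
    return [sequence[i:i + group_size] for i in range(0, len(sequence), group_size)]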
5.3 Local Quality Operator The local quality principle pertains to making each part of an object function in conditions most suitable for its operation. This operator performs local functions to enhance the local quality for both component representation and feeder representation. For example, for a component sequence, the local quality operator would work in the intra-group mode (refer to Fig. 4) and the inter-group mode (refer to Fig. 5) [6, 19].
Fig. 4 A local quality operator in the intra-group mode (within group) [6, 19]
Fig. 5 Local quality operator in the inter-group mode (between groups) [6, 19]
During the intra-group mode, this operator calculates the distance of each component in the same group and reorders the component sequence in the same group. If the new arrangement improves the total path distance, the new arrangement will be accepted. During the inter-group mode, groups of components are rearranged to produce a new path. If the resulting total distance obtained from this local quality operator is shorter than the initial path, the new path will be accepted.
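The following Python sketch suggests how the two modes might be realised. The random reordering within a group (standing in for the distance-based reordering described above), the random swap of two whole groups, and the path_length helper are illustrative assumptions rather than the exact procedure of [6, 19].

import random

def flatten(groups):
    # Concatenate the groups back into a single placement sequence
    return [item for group in groups for item in group]

def intra_group(groups, path_length):
    # Intra-group mode: reorder one group and accept the change only if the total path shortens
    candidate = [list(g) for g in groups]
    random.shuffle(candidate[random.randrange(len(candidate))])
    return candidate if path_length(flatten(candidate)) < path_length(flatten(groups)) else groups

def inter_group(groups, path_length):
    # Inter-group mode: swap two whole groups and accept the change only if the total path shortens
    if len(groups) < 2:
        return groups
    candidate = [list(g) for g in groups]
    i, j = random.sample(range(len(candidate)), 2)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    return candidate if path_length(flatten(candidate)) < path_length(flatten(groups)) else groups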
6 The Bees Algorithm with TRIZ-Inspired Operators In the implementation, each bee has two linked representations that record the component placement sequence and the feeder arrangement, as shown in Fig. 6 [6, 19]. The TRIZ-inspired operators are applied separately to the two representations. Figure 7 illustrates the flowchart of the Bees Algorithm with TRIZ-inspired operators used to solve the PCB assembly problem. The algorithm commences with a predefined number of scout bees being placed randomly in the search space. The fitness of the solutions found by the scout bees is then evaluated. In this work, the evaluation is based on the total assembly time, T_Total, which represents the 'fitness'. The bees with the highest fitness are designated as "selected bees", and the sites visited by them are considered among the best and are chosen for neighbourhood search. The algorithm then conducts searches in the neighbourhood of the selected bees, assigning more bees to search near the best bees (e). The bees are chosen according to the fitness associated with the points they are visiting. After recruitment of bees for the selected sites, the TRIZ-inspired operators are applied sequentially, mainly to perform local searches. The TRIZ-inspired operators commence with the dynamisation operator, which randomly determines the group size in every iteration. The segmentation operator then divides the representation into groups of the size determined by the dynamisation operator. The local quality operator performs intra-group and inter-group optimisation after the segmentation operation. After the
Fig. 6 A PCB assembly sequence representation: a component placement sequence c1, c2, …, cN linked with a feeder arrangement f1, f2, …, fR
local quality operation, the 2-opt operation and insertion operation will be conducted on a randomly selected segmented group.
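For reference, a short Python sketch of the two classical neighbourhood moves named above, applied to one segmented group, is given below. The random choice of positions and of operator is purely illustrative.

import random

def two_opt(seq, i, j):
    # 2-opt move: reverse the sub-sequence between positions i and j (i < j)
    return seq[:i] + list(reversed(seq[i:j + 1])) + seq[j + 1:]

def single_point_insertion(seq, i, j):
    # Single-point insertion: remove the element at position i and reinsert it at position j
    s = list(seq)
    s.insert(j, s.pop(i))
    return s

def random_move(group):
    # Apply one randomly chosen move to a copy of a segmented group (needs at least 2 elements)
    i, j = sorted(random.sample(range(len(group)), 2))
    operator = random.choice([two_opt, single_point_insertion])
    return operator(list(group), i, j)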
Fig. 7 Flowchart of Bees Algorithm with TRIZ-inspired operators
7 PCB Assembly Involving 50 Components (10 Different Types of Components)—A Case Study In this case study, a complex combinatorial problem involving a PCB assembly that requires fifty components (ten different component types) to be assembled in the shortest time using an assembly machine with MBTD characteristics is investigated (Fig. 2). This assembly problem is adopted from Leu [21]. Many algorithms have been used to solve this assembly problem [9], and therefore the results obtained can be compared. The parameters of the MBTD assembly machine are listed in Table 4, while Table 5 tabulates the parameter settings for the Bees Algorithm with TRIZ-inspired operators. Full details of the test data can be found in [21, 22]. The PCB used in this case study is shown in Fig. 8, where the board has fifty locations for the placement of the ten different component types, T1 to T10, with the coordinates of the locations labelled. With the labelled locations on the PCB shown along with the placement coordinates and the type of component to be placed at each, the total assembly time can be verified once the assembly path and the arrangement of the components are determined. The time to assemble all 50 components onto the PCB using the MBTD assembly machine with the Bees Algorithm with TRIZ-inspired operators in this case study was 23.42 s (as illustrated in Fig. 9). Preliminary runs that did not include any feeder arrangement optimisation were conducted until the distance of the PCB assembly path reached 1200 mm (approaching the optimal assembly path). The optimisation process then included feeder arrangement optimisation after the preliminary runs, as it became critical. The best component pickup sequence for this case study is shown in Fig. 10. Table 6 presents the results of various algorithms used by past researchers on this case study.
Table 4 Parameters of the MBTD assembly machine
Number of turret heads: 2
Indexing time of turret: 0.25 s/index
Average PCB mounting table speed: 60 mm/s
Average feeder system speed: 60 mm/s
Distance between feeders: 15 mm
Table 5 Parameters of the Bees Algorithm with TRIZ-inspired operators
Number of scout bees (n): 500
Number of selected sites (m): 50
Number of recruited bees for best m sites (n1): 100
Number of iterations (itr): 1000
Component number (N): 50
Feeder number (R): 10
Fig. 8 The PCB with fifty fixed placement locations, each labelled with its coordinates and the type of component (T1–T10) required at that location
Fig. 9 Best assembly sequence and feeder arrangement (23.42 s)
Fig. 10 Best component pickup sequence: feeder carrier position against placement sequence (23.42 s)
The significance of these results and the strength of the newly developed TRIZ-inspired operators were assessed on the basis of a straight run of 100 trials, where each trial consisted of 1000 iterations. The results show trials with improved assembly times when compared to the assembly times reported in publications on a similar MBTD assembly problem; one of the improved assembly times was 23.42 s. The corresponding assembly path and placement sequence are shown in Figs. 9 and 10. The assembly time of 23.42 s was obtained on the 88th trial of the straight run of 100 trials of 1000 iterations. The average assembly time over these 100 trials was 24.29 s, with a median of 24.4167 s, a standard deviation of 0.44 and a variance of 0.19. The longest assembly time in the trials was 25 s.
8 Discussions The results in Table 6 show that the application of TRIZ-inspired operators with the Bees Algorithm has reduced the total assembly time for the PCB assembly below the assembly times obtained from different versions of the Bees Algorithm and four other established non-Bees Algorithm techniques. The optimal production time achieved was 8.16% less than the best non-Bees Algorithm solution. As most PCB assembly machines operate non-stop throughout the year, over 87,900 additional PCBs could be assembled with such an improvement in assembly time. As with all heuristic search algorithms, the best result of 23.42 s may not be the true optimum. It is computationally too expensive and infeasible to find the optimal results for an NP-complete combinatorial problem such as the MBTD assembly problem. Therefore, it is difficult to find a good basis on which to compare heuristic search algorithms, as the best results obtained are unlikely to be reproduced exactly and may not be optimal. Comparison bases such as the number of evaluations, average assembly time, convergence rate, number of iterations and many more are not the main critical aim of most heuristic search algorithms. The main critical aim of all heuristic search algorithms is to find the best result within a reasonable and available time.
Table 6 Comparison with results from other algorithms
Non-Bees Algorithm techniques:
GA [21]: best initial assembly time 70 s; best assembly time 51.5 s
EP [23]: best initial assembly time n/a; best assembly time 36 s
GA [22]: best initial assembly time 60 s; best assembly time 26.9 s
HGA [24]: best initial assembly time 28.83 s; best assembly time 25.5 s
Bees Algorithm techniques:
BA (no seeding) [9]: best initial assembly time 54.59 s; best assembly time 25.92 s
BA (with seeding) [9]: best initial assembly time 29 s; best assembly time 24.08 s
TRIZ-inspired operators [19, 20]: best initial assembly time 35.5 s; best assembly time 23.58 s
cBA [2]: best initial assembly time 71.08 s; best assembly time 23.46 s
Current work: best initial assembly time 28 s; best assembly time 23.42 s
Therefore, the result of 23.42 s obtained within a short time and with a reasonable finite number of trials is the key factor demonstrating the effectiveness of the Bees Algorithm with the developed TRIZ-inspired operators. Although the algorithm with the TRIZ-inspired operators uses seeding procedures, it would not be accurate to attribute the assembly time of 23.42 s to seeding alone, as none of the other algorithms and operators applied to this MBTD assembly problem over many years of research has demonstrated a better assembly time, with or without seeding procedures. In the world of industrial production, the goal is to obtain the best assembly time to enhance productivity, and on that basis, the Bees Algorithm with TRIZ-inspired operators managed to produce the best assembly time for this MBTD assembly problem.
9 Conclusions This research work has demonstrated that the use of TRIZ-inspired operators with the Bees Algorithm has produced improvements in the PCB assembly time. TRIZ-inspired operators have helped to find a shorter assembly time even though other Bees Algorithm implementations had a lower number of evaluations. The implementation has met the ultimate requirement of the problem, which is to minimise the PCB assembly time. Although it is only a very small improvement, since large production volumes of PCB assembly occur in the electronics industries, a minor reduction in cycle time can increase annual production output significantly. Acknowledgements The research was conducted as part of an investigation in the FRGS/1/2018/TK03/UKM/2/6 and GUP-2018-124 research grants. The authors would like to thank the Ministry of Higher Education, Malaysia and Universiti Kebangsaan Malaysia for supporting the work through the two research grants above.
References
1. Al-Betar MA, Alomari OA, Abu-Romman SM (2020) A TRIZ-inspired bat algorithm for gene selection in cancer classification. Genomics 112:114–126
2. Castellani M, Otri S, Pham DT (2019) Printed circuit board assembly time minimisation using a novel Bees Algorithm. Comput Ind Eng 133:186–194
3. Ang MC, Ng KW, Pham DT (2013) Combining the Bees Algorithm and shape grammar to generate branded product concepts. Proc Inst Mech Eng Part B J Eng Manuf 227:1860–1873
4. Hussein W, Sahran S, Abdullah S (2017) The variants of the Bees Algorithm (BA): a survey. Artif Intell Rev 47:67–121
5. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2006) The Bees Algorithm—a novel tool for complex optimisation problems. In: Proceedings of 2nd virtual conference on intelligent production machines and systems (I*PROMS 2006)
6. Ang MC, Pham DT, Ng KW (2009) Application of the Bees Algorithm with TRIZ-inspired operators for PCB assembly planning. In: Proceedings of 5th international virtual conference on innovative production machines and systems (I*PROMS 2009). Cardiff University
7. Pham DT, Lee JY, Darwish AH, Soroka AJ (2008) Multi-objective environmental/economic power dispatch using the Bees Algorithm with Pareto optimality. In: 4th international virtual conference on intelligent production machines and systems (I*PROMS 2008), pp 422–430
8. Pham DT, Ang MC, Ng KW, Otri S, Darwish AH (2008) Generating branded product concepts: comparing the Bees Algorithm and an evolutionary algorithm. In: Proceedings of 4th international virtual conference on innovative production machines and systems (I*PROMS 2008), pp 398–403
9. Pham DT, Otri S, Haj Darwish A (2007) Application of the Bees Algorithm to PCB assembly optimisation. In: Pham DT, Eldukhri EE, Soroka AJ (eds) Proceedings of 3rd virtual international conference on innovative production machines and systems (I*PROMS 2007), pp 511–516
10. Pham DT, Ghanbarzadeh A (2007) Multi-objective optimisation using the Bees Algorithm. In: Pham DT, Eldukhri EE, Soroka AJ (eds) Proceedings of 3rd virtual international conference on innovative production machines and systems (I*PROMS 2007), pp 529–533
11. Pham DT, Afify AA, Koç E (2007) Manufacturing cell formation using the Bees Algorithm. In: Pham DT, Eldukhri EE, Soroka AJ (eds) Proceedings of 3rd virtual international conference on innovative production machines and systems (I*PROMS 2007), pp 523–528
12. Frisch K (1968) Bees: their vision, chemical senses, and language. Cornell University Press
13. Seeley TD (1995) The wisdom of the hive: the social physiology of honey bee colonies. Harvard University Press
14. Pham DT, Otri S, Darwish AH (2007) Application of the Bees Algorithm to PCB assembly optimisation. In: 3rd international virtual conference on intelligent production machines and systems (I*PROMS 2007), pp 511–516
15. Altshuller G (1997) 40 principles: TRIZ keys to innovation. Technical Innovation Center, Worcester, MA
16. Otri S (2011) Improving the Bees Algorithm for complex optimisation problems. PhD Thesis, Cardiff University
17. Mann D (2007) Hands-on systematic innovation for business and management. IFR Press, Clevedon
18. Mann D (2008) Systematic (software) innovation. Lazarus Press, Bideford, Devon, UK
19. Ang MC, Pham DT, Soroka AJ, Ng KW (2010) PCB assembly optimisation using the Bees Algorithm enhanced with TRIZ operators. In: 36th annual conference of the IEEE industrial electronics society (IECON-2010), Phoenix, Arizona, USA, pp 2702–2707
20. Ang MC, Ng KW, Pham DT, Soroka AJ (2013) Simulations of PCB assembly optimisation based on the Bees Algorithm with TRIZ-inspired operators. In: International visual informatics conference (IVIC2013). Advances in visual informatics, pp 335–346
21. Leu MC, Wong H, Ji Z (1993) Planning of component placement/insertion sequence and feeder setup in PCB assembly using genetic algorithm. ASME J Electron Pack 115:424–432
22. Ong N-S, Tan W-C (2002) Sequence placement planning for high-speed PCB assembly machine. Integr Manuf Syst 13:35–46
23. Nelson KM, Wille LT (1995) Comparative study of heuristics for optimal printed circuit board assembly. In: Southcon/95, pp 322–327
24. Ho W, Ji P (2007) Optimal production planning for PCB assembly. Springer, London
The application of the Bees Algorithm in a Digital Twin for Optimising the Wire Electrical Discharge Machining (WEDM) Process Parameters Michael S Packianather, Theocharis Alexopoulos, and Sebastian Squire
M. S. Packianather (B) School of Engineering, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK e-mail: [email protected]
T. Alexopoulos School of Engineering, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK e-mail: [email protected]
S. Squire Rowden Technologies, Unit G3C, Bolingbroke Way, Patchway, Bristol BS34 6FE, UK e-mail: [email protected]
1 Introduction Automation reduces or eliminates human intervention in the execution of tasks via the introduction of smart machines. Automation is achieved via the use of control systems, generally with closed feedback loops, and its benefits include savings on labour costs, increased reliability and enhanced overall operational performance. "Applications of Cyber Physical Systems arguably have the potential to dwarf the 20th century IT revolution." [13]. Cyber Physical Systems are a mechanism for utilising the logical and discrete properties of computers to monitor and control the continuous and dynamic properties of physical systems. This merging of information processing into physical environments indicates the scope of the expected benefits of Cyber Physical Systems. These expected benefits include, but are not limited to, increased efficiency and sustainability of processes, mass customisation of products to the individual consumer's wants and needs, the birth of Industry 4.0, and the Internet of Things.
Fig. 1 Cyber physical system architecture (IBM) https://www.ibm.com/developerworks/library/ba-cyber-physical-systems-and-smart-cities-iot/
Cyber Physical Systems are embedded systems in a physical environment. They work by monitoring the physical process through sensors, processing the information received, and controlling the process through actuators, as shown in Fig. 1. One example of a Cyber Physical System is the anti-lock braking system in cars. When the driver holds the brake pedal, the tyre sensors collect information about the rate of rotation of each tyre, which is sent to a processing unit that determines the system's status and acts appropriately by activating and deactivating the brakes to minimise sliding. The two major Cyber Physical System design challenges are:
• Time sensitivity—Generally, sensitivity to time is not built into current computing software, so it undergoes high time variability. This is fine for current computing systems but very important for integrating Cyber Physical Systems, which will be concurrent. Most current programming languages have limited capacity for concurrent threads, making them less helpful for Cyber Physical System design. Some extensions/programs (Split-C and Cilk) support concurrent threads, and languages like Guava lend themselves more to Cyber Physical System design.
• Reliability and uncertainty—A computer program has essentially 100% reliability in that it will execute the same commands in the same way every time it is run. By contrast, the input parameters of physical systems change and rely on timing as well as function. Two ways to increase reliability are:
– Decrease uncertainty to bring reliability as close to 100% as possible, i.e. reduce variation in the input parameters and in the way the physical system operates, to increase repeatability.
– Impose algorithms to correct errors caused by the unreliability of the physical system.
In practice, both strategies are used. Industry 4.0 is the so-called "fourth industrial revolution", the combination of physical manufacturing systems and technology with computerised intelligence
and data [22]. This may culminate in a "smart factory"—where, without human input, automated machines can produce goods, manage resource logistics, and repair themselves. Industry 4.0 is built on four key design principles [8]:
• Interoperability: networking and communication between machines, devices, sensors and people via the Internet of Things and the Internet of People.
• Information transparency: the creation of a virtual copy of the real world by supplementing digital models with sensor data.
• Technical assistance: the capacity of machines to assist, first, by collecting, aggregating and visualising data in a way that supports and informs human decision making and short-notice problem solving, and second, by physically assisting humans with work that is too unpalatable, tiring or dangerous for them.
• Decentralised decisions: the capacity of Cyber Physical Systems to make decisions and perform tasks autonomously, without human input.
The Internet of Things (IoT) is the interconnection of home, mobile, and embedded applications that can now communicate via the internet, integrating greater computing capabilities and enabling the use of big data analytics to extract meaningful information [2]. Some advantages of the Internet of Things are: machine-to-machine (M2M) communication; devices becoming more connected, which increases transparency and leads to higher efficiencies; automation and control, decreasing human input into operations, which leads to faster output and heavily decreased errors; and big data analytics producing information that can increase efficiencies, save money and decrease environmental effects through measures such as cutting down on waste. Brettel et al. [4] showed that virtualisation of processes leading to mass customisation, and agile manufacturing, are ways for western manufacturing industries to compete with low-cost mass production in places where labour costs are low. Mass customisation is enabled by machines and products that communicate with each other via the Internet of Things, which leads to the Smart Factory of Industry 4.0—with multiple benefits including:
• Mass customisation, giving companies the ability to reduce their customer segment down to just one, i.e. completely customised products for all.
• More resource-conscious production through big data and advanced analytics, leading to reduced waste and the use of resources that are environmentally friendly.
• Device integration and management via the Internet of Things.
• Reduced lead times due to advanced analytics and the birth of the Smart Factory.
• Rapid adaptation to market changes through changing resource availability or changing trends.
One example of mass customisation already taking place in today's manufacturing is the Adidas Speedfactory. Adidas allows customers to design their own shoes via its app. Adidas opened multiple Speedfactories that make use of Cyber Physical Systems in the production of the shoes—this means that customers receive the product very quickly and the problem of high wages, which is why companies
prefer to produce in low-wage countries, is minimised. This means that the long lead times associated with receiving mass-produced products are cut out, which forces the rest of the industry to follow suit or attempt to catch up. A virtual machine is an emulation of a system in the computing space. The aim of this project is the virtualisation of a product—the creation in the virtual space of a copy of a physical product that is being manufactured. The goal is to provide a specific case of virtualisation that yields information for the general case of virtualisation. The steps involved in virtualisation will be demonstrated such that insight can be gleaned for virtualisation in general. The steps to be demonstrated are:
• Creation of the virtual machine copy of a real product.
• Examination and testing of the virtual copy of the real product.
• Description of the parameters that need to be changed in order to continue/complete its production.
This study also aims to provide a framework for a system of complete automation and integration with Industry 4.0 using Digital Twin (DT) techniques and the Internet of Things. The intent of the long-term series of projects is to design Cyber Physical Production Systems capable of working autonomously year round without human intervention, able to manufacture a product based on available and economically sensible resources and machining operations (not already engaged), and completely customised to the consumer's individual wants. This includes provisions for the machining process running out of resources or breaking down, which may involve contacting companies to deliver more or different resources and contacting other machines via the Internet of Things to fix them. The organisation of this chapter is as follows. The literature review is presented in Sect. 2. Wire Electrical Discharge Machining is described in Sect. 3. The proposed approach of applying the Bees Algorithm in a Digital Twin for optimising the WEDM process parameters is explained in Sect. 4. In Sect. 5 the results and discussion of results are given. Finally, the conclusion and future work are in Sect. 6.
2 Literature Review A Digital Twin is a virtual model of a physical system [20, 21, 23]. In general, data from sensors in the physical world flow into the virtual model. Hence, it behaves like a twin of the real system. The virtual model can be used to run scenario simulations to help understand potential usage, reliability and efficiency. Pivoto et al. [16] and Lee [13] have shown that the core concepts of modern-day computing need rethinking if the true potential of Cyber Physical Systems is to be reached, and that carefully thought-out coordination of the physical world with the cyber system necessitates models with a strong understanding of the characteristics of both. Hu [10] showed both the design challenges for Cyber Physical Systems: multidisciplinary knowledge requirements, misrepresentation of sensor
readings and network coding challenges, and Cyber Physical design lessons: adaptation (increasing the quantity of information in the CPS to allow it to have a more realistic copy of the physical world, leading to better repair capability and error handling), unification (giving CPSs the ability to work together), dependability and consistency. Altintas [1] explained the current trend in machining operations, saying that automation involves the inclusion of sensors in machines which measure output parameters (such as surface roughness, temperature, vibrations, etc.). Mathematical models then compare the relationship of the measured sensor outputs with the assumed state of the machining operation and perform adjustments based on the results of corrective algorithms. Ho et al. [9] speculated that future research on the Wire Electrical Discharge Machining (WEDM) process would be in three main areas:
1. Optimising the Process Variables. "The complex and random nature of the erosion process in WEDM requires the application of deterministic as well as stochastic techniques" [9].
2. Monitoring and Controlling the Process, especially with the use of fuzzy logic control, which has the ability "to consider several machining variables, weigh the significant factors affecting the process and make changes to the machining conditions without applying the detailed mathematical model" [9].
3. WEDM Developments, including WEDM applications and Hybrid Machining Processes. This includes "a major push toward an unattended WEDM operation attaining a machining performance level that can only be achieved by a skilled operator" [9], i.e. a push towards the automation of the process, which virtualisation of the process will enhance.
Both Datta and Mahapatra [6] and Rao et al. [19] performed experiments investigating WEDM parameter optimisation, using linear regression analysis to produce predicted material removal rate and surface roughness equations for D2 tool steel and Aluminium 2014 T6 respectively. Datta et al. also produced an equation for the predicted kerf width. Rao et al. [19] used a genetic algorithm with the hybrid function FMINCON to solve the simultaneous optimisation problem of Surface Roughness and Material Removal Rate in Matlab's toolbox, using the objective function

Z = W_1 \, SR - W_2 \, MRR    (1)
where SR and MRR are normalised Surface Roughness and normalised Material Removal Rate respectively and W1 and W2 are the related weights. “The larger the weighting factor, the greater the improvement in machining performance outputs.” The inputs were limited to reasonable ranges and results were produced accordingly. The paper showed that “a sacrifice in cutting efficiency is essential for the production of quality surfaces and vice versa. Hence, the selection of particular weights depends on engineering applications”. Other researchers performed multi objective optimisation research on various materials which include: Rajyalakshmi and Ramaiah et al. [18] on Inconel 825,
Boopathi and Sivakumar [3] on high speed steel-M2, Nikalje et al. [14] on MDN 300 Maraging Steel, and Ikram et al. [11] on D2 tool steel. These provide material removal rate and surface roughness equations for the optimisation performed by the Bees Algorithm in this study. For most of the above cases, Material Removal Rate (MRR) and Surface Roughness (SR) were chosen as objective parameters for the investigations. SR measures quality of finish and MRR measures production rate and hence cutting effectiveness. A better MRR results in a worse SR and vice versa, and it is very difficult to achieve both a low SR and a high MRR. Pham et al. [15] created a population-based search algorithm called the Bees Algorithm. This algorithm imitates the habits of honeybee colonies foraging for food sources when considering the best solution to an optimisation problem. In simpler terms, "the algorithm performs a kind of neighbourhood search combined with random search and can be used for both combinatorial optimisation and functional optimisation" [18]. King [12] created a module in Matlab containing the Bees Algorithm which performs multi-objective optimisation of a given objective function within the limits of its input parameters. This was used in the creation of the Matlab program of this study, for the calculation of optimal inputs based on required output parameters. Harrison [7] performed a similar project to this, analysing the laser cutting process with the same aim of virtualisation. This was in order to complement this study and display the similarities and differences in the virtualisation of different machining processes, as both studies look to provide a template for the general case of virtualising any machining process. Chavda [5] explained the similarities and differences between event-based simulation and time (or cycle) based simulation. While event-based simulation takes events one at a time through the simulation and deals with them until the desired end condition is accomplished, with each change in input parameters being counted as a new event, time-based simulation will assess each "logic element" once per cycle.
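To show how the weighted objective of Eq. (1) above could be evaluated before being passed to an optimiser such as the Bees Algorithm, a brief Python sketch follows. The normalisation ranges, weights and function names are assumptions for illustration only and do not reproduce the cited Matlab implementation.

def weighted_objective(sr, mrr, sr_range, mrr_range, w1=0.5, w2=0.5):
    # Eq. (1): Z = W1 * SR_norm - W2 * MRR_norm; minimising Z trades off a low
    # surface roughness against a high material removal rate
    sr_norm = (sr - sr_range[0]) / (sr_range[1] - sr_range[0])       # normalise SR to [0, 1]
    mrr_norm = (mrr - mrr_range[0]) / (mrr_range[1] - mrr_range[0])  # normalise MRR to [0, 1]
    return w1 * sr_norm - w2 * mrr_norm

# Example with weights favouring surface finish over cutting speed (illustrative values)
z = weighted_objective(sr=2.1, mrr=12.5, sr_range=(1.0, 5.0), mrr_range=(5.0, 30.0), w1=0.7, w2=0.3)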
3 Wire Electrical Discharge Machining (WEDM)

Wire Electrical Discharge Machining (WEDM) is an electro-thermal non-traditional machining process where a current is passed through a metal wire, creating an electrical spark. When this wire is passed through an electrically conductive workpiece, material removal occurs due to the thermal energy of the spark. The process is capable of cutting any electrically conductive material and has very high precision relative to other machining processes; it can be visualised as a super precision band saw, as shown in Fig. 2. The WEDM works by accurately positioning a taut thin wire through which a current is passed. The arc formed between the wire and the electrically conductive workpiece erodes the metal; there is no contact between the wire and the workpiece, as shown in Fig. 3. The wire is continuously fed through during the process as the operation erodes it.
Fig. 2 Wire electrical discharge machine diagram http://mechanicalinventions.blogspot.co.uk/2016/01/electrical-discharge-machining-edm.html
The workpiece movement during the process is currently CNC controlled, with movement coordinates being input for each operation by a technician before the operation. When the voltage between the wire and the workpiece is increased, the electric field intensity becomes stronger than the dielectric strength; as a result, the dielectric breaks down and current flows between the wire and the workpiece [17]. The solid particles vaporised in the wire-workpiece volume are then carried away as the dielectric fluid is pumped through the workpiece gap, and new dielectric fluid is transported into the wire-workpiece volume so that its insulating properties are maintained. After a spark and vaporisation, the potential difference between the wire and the workpiece returns to normal and the process restarts, as shown in Fig. 4. During operation, the workpiece is submerged in a dielectric fluid which transmits an electrical force. The most commonly used dielectric is deionised water, deionised because ions in regular water act as conductors of electricity. The dielectric fluid is not static and is pumped through where the workpiece is being cut so that the newly vaporised workpiece material is evacuated from the cutting area. This fluid containing the vaporised workpiece material is then cleaned before being returned to the machine's pumping system.
Fig. 3 WEDM cutting mechanism (XACT WEDM corporation) http://www.xactedm.com/edmcapabilities/how-edm-works/
Fig. 4 Voltage and current in the wire-workpiece volume during the WEDM process (Qiu [17], Lecture Notes, Cardiff University)
WEDM presents many advantages over other machining processes:
• The ability to cut details with high tolerances, up to 5 microns, as the process creates no distortions.
• The ability to create complicated internal and external shapes.
• Any electrically conductive material (ferrous and non-ferrous) can be cut, and material hardness/properties do not affect process speed or efficacy.
• The cutting process does not leave any residual stresses in the material.
• Very small, thin, fragile parts can be produced with ease.
• The process does not produce burrs.
• The process does not need to be manned: it can be set up and left overnight.
• Short lead times. No post-processing is required as the process is correct first time.
• Low set-up time due to minimal tooling requirements.
• CNC controlled, so repetition and duplicating parts is trivial.
WEDM does come with disadvantages:
• Specific energy consumption is large, up to 50 times that of conventional machining.
• The dielectric fluid is generally deionised water, but otherwise oils tend to be used, which present a fire hazard.
• Only available for electrically conductive materials.
• The process is relatively slow when compared to conventional machining processes.
• It can produce thermal stresses and a heat affected zone on the workpiece.
Due to WEDM's high precision and specific energy consumption, it lends itself to low-quantity manufacturing, including one-off jobs and prototype manufacturing;
moulds do not need to be created, and the process is CNC controlled, which decreases production times for one-off parts significantly. Its very high precision means it can be used for the production of small, highly detailed pieces that would be difficult for other machining processes to produce, for example machine tooling. Datta and Mahapatra [6] and Rao et al. [19] both used linear regression analysis of experiments designed using Taguchi Grey Relational Analysis to produce mathematical equations defining the behaviour of the output parameters, namely Surface Roughness (SR) and Material Removal Rate (MRR), in relation to several input parameters for D2 Tool Steel and Aluminium 2014 T6 respectively. The equations relating to D2 Tool Steel are as follows [6]:

SR = 3.37 + 0.0761 X_1 + 0.0356 X_2 - 0.0194 X_3 + 0.0456 X_4 - 0.0144 X_5 - 0.0944 X_6 - 0.0156 X_1^2 - 0.0606 X_2^2 + 0.0061 X_3^2 - 0.0122 X_4^2 + 0.0011 X_5^2 + 0.0061 X_6^2 - 0.0067 X_1 X_2 + 0.0242 X_1 X_6    (2)

MRR = 0.0253 + 0.0748 X_1 + 0.129 X_2 - 0.0072 X_3 + 0.0003 X_4 - 0.0108 X_5 - 0.0395 X_6 - 0.0147 X_1^2 - 0.0252 X_2^2 + 0.00233 X_3^2 - 0.00073 X_4^2 + 0.00221 X_5^2 + 0.00599 X_6^2 - 0.00905 X_1 X_2 + 0.00588 X_1 X_6    (3)
The corresponding input parameters are given in Table 1. The equations relating to Aluminium 2014 T6 are as follows [19]:

SR = -11.445 + 0.08 X_1 - 0.014 X_2 + 0.61 X_3 - 0.012 X_4 - 0.029 X_5 - 0.052 X_6 - 0.022 X_7 + 0.00003 X_8    (4)

MRR = -6070.081 + 46.808 X_1 - 4.134 X_2 + 192.87 X_3 + 6.949 X_4 - 7.076 X_5 - 1.202 X_6 - 15.652 X_7 - 0.051 X_8    (5)
The corresponding input parameters are given in Table 2.

Table 1  D2 tool steel input parameters

Symbol | Input parameter | Units
X1 | Discharge current | Ampere
X2 | Pulse duration | M sec
X3 | Pulse frequency | KHz
X4 | Wire speed | m/min
X5 | Wire tension | g
X6 | Dielectric flow rate | Bars
Table 2  Aluminium 2014 T6 input parameters

Symbol | Input parameter | Units
X1 | Pulse on time | µsec
X2 | Pulse off time | µsec
X3 | Peak current | Ampere
X4 | Dielectric fluid flushing pressure | kg/cm²
X5 | Wire feed rate | m/min
X6 | Wire tension | kgf
X7 | Spark gap voltage | Volts
X8 | Servo feed rate | mm/min
Rao et al. [19] verified the legitimacy of their predicted values based on these equations with a series of experiments showing that the proposed mathematical models were indeed suitable.
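For screening candidate parameter sets before optimisation, the regression models of Eqs. (2) and (3) can be coded directly. The following Matlab sketch (the function name is illustrative and is not taken from the original program) evaluates the two D2 tool steel equations for a vector of the six coded inputs.

function [SR, MRR] = d2ToolSteelModel(X)
% D2TOOLSTEELMODEL  Predicted surface roughness and material removal rate
% for D2 tool steel, from Eqs. (2) and (3). X is a 6-element vector of the
% inputs X1..X6 (discharge current, pulse duration, pulse frequency,
% wire speed, wire tension, dielectric flow rate).
SR  = 3.37   + 0.0761*X(1) + 0.0356*X(2) - 0.0194*X(3) + 0.0456*X(4) ...
      - 0.0144*X(5) - 0.0944*X(6) - 0.0156*X(1)^2 - 0.0606*X(2)^2 ...
      + 0.0061*X(3)^2 - 0.0122*X(4)^2 + 0.0011*X(5)^2 + 0.0061*X(6)^2 ...
      - 0.0067*X(1)*X(2) + 0.0242*X(1)*X(6);
MRR = 0.0253 + 0.0748*X(1) + 0.129*X(2)  - 0.0072*X(3) + 0.0003*X(4) ...
      - 0.0108*X(5) - 0.0395*X(6) - 0.0147*X(1)^2 - 0.0252*X(2)^2 ...
      + 0.00233*X(3)^2 - 0.00073*X(4)^2 + 0.00221*X(5)^2 + 0.00599*X(6)^2 ...
      - 0.00905*X(1)*X(2) + 0.00588*X(1)*X(6);
end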
4 Virtual Machine Creation

Surface Roughness (SR) and Material Removal Rate (MRR) were chosen as the two output parameters to be optimised for three reasons. Firstly, they are the two most important output parameters of the machine and need to be controlled and optimised, as products being produced for different engineering applications will rely heavily on one or other or both of these parameters. Secondly, they are negatively correlated: input parameters that improve the Surface Roughness will generally result in a worse Material Removal Rate and vice versa, so a balance must be struck. Lastly, plenty of papers have been produced optimising the two factors, so scope is given for expanding the library of the program in the future. Matlab was chosen for creation of the virtual copy. The Bees Algorithm module developed by King [12] was used in the creation of the program virtualising the Wire Electrical Discharge Machining (WEDM) process. The algorithm was used by the program to calculate optimal input and output parameters given the measured error. In current manufacturing, the WEDM process consists of three stages: a rough cutting operation, a finishing operation, and a surface finishing operation. The quality of these operations is judged by different criteria: in the finishing and surface finishing operations, surface finish is the essential aspect, but in rough cutting, Material Removal Rate, Surface Finish and Kerf Width are of equal importance. Hence the rough cutting operation is considered here.
4.1 Program Architecture

The program architecture is shown in the form of a flowchart in Fig. 5.
4.2 Bees Algorithm Optimisation Testing and Results

In order to thoroughly test the Bees Algorithm produced for the purpose of optimisation, combinations of measured values from the entire spectrum of allowable values were randomly selected using Matlab's rand() function. Due to the randomness of the Bees optimisation algorithm, the program will produce slightly different values each time it is run. Key Performance Indicators (KPIs) were formulated both to provide feedback on how well the machining process is running and to inform the machine of this so that it can optimise as effectively as possible. The four KPIs created were (a sketch of one possible calculation is given after this list):
• Temperature: binary value indicating whether the temperature is acceptable or not.
• Surface Roughness: linear value dependent on the deviation away from the minimum (best) Surface Roughness value attainable by the WEDM process given the workpiece material; 0 if the Surface Roughness value indicates the machine is broken.
• Material Removal Rate: linear value dependent on the deviation away from the maximum (best) Material Removal Rate value attainable by the WEDM process given the workpiece material; 0 if the Material Removal Rate value indicates the machine is broken.
• Overall KPI: value dependent on both the performance of the individual KPI values and the difference between the Surface Roughness and Material Removal Rate KPI values. Hence an optimised virtual machine will produce higher overall KPI values than the non-optimised machine.
The BA optimisation program first ascertains the minimum and maximum Surface Roughness and Material Removal Rates based on what material is being cut and the measured ranges of inputs, using equations (2) and (3). It then produces KPI values of the measured Surface Roughness and Material Removal Rate; these values display how well the machining operation is running relative to how it would run if it were perfectly optimised for each individual output. A WEDM running fully optimised for maximum Material Removal Rate would give an MRR KPI value of 1; this would, however, seriously negatively affect the SR KPI value. The program then runs the optimisation using the Bees Algorithm to find the theoretically optimum input values that will deliver the maximum overall KPI value, assuming no error in input. TheoreticallyOptimumInputsOutput: calls the function BeesAlgorithmFunction to calculate optimal inputs and outputs for multi-objective optimisation of both SR and MRR, without accounting for measured errors. This provides a benchmark for the "Optimise" function to attempt to achieve.
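The chapter describes the KPIs only qualitatively, so the Matlab sketch below is one possible interpretation under stated assumptions: the SR and MRR KPIs are taken as linear rescalings between the worst and best attainable values, and the overall KPI is assumed to reward high individual KPIs while penalising imbalance between them. None of these exact formulae come from the original program.

function kpi = computeKPIs(SRmeas, MRRmeas, SRmin, SRmax, MRRmin, MRRmax, tempOK)
% COMPUTEKPIS  Illustrative KPI calculation (assumed formulae, see text).
kpi.temperature = double(tempOK);                    % binary temperature KPI
kpi.SR  = 1 - (SRmeas  - SRmin)  / (SRmax  - SRmin); % 1 = best (lowest) SR
kpi.MRR = (MRRmeas - MRRmin) / (MRRmax - MRRmin);    % 1 = best (highest) MRR
if SRmeas > SRmax || MRRmeas < MRRmin                % out-of-range reading:
    kpi.SR = 0; kpi.MRR = 0;                         % treat machine as broken
end
% Overall KPI: assumed to combine the individual KPIs and penalise the
% difference between the SR and MRR KPIs.
kpi.overall = 0.5*(kpi.SR + kpi.MRR) - 0.25*abs(kpi.SR - kpi.MRR);
end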
Fig. 5 Program architecture flowchart of the bees optimisation
Optimise: calls the function BeesAlgorithmFunction to calculate optimal inputs and outputs for multi-objective optimisation of both SR and MRR, accounting for measured errors, using the Bees Optimisation Algorithm.
The program was first run using a DesiredSRMRR value of 2, meaning it will equally value optimisation of Surface Roughness and Material Removal Rate, i.e. balanced outputs. 10 random SR values and 10 random MRR values were chosen within the possible ranges of the output parameters, 0.7760–3.3550 µm and 17.6718–1058.5 mm³/min respectively. The results are displayed in the "Theoretically Optimal" columns of Table 3 and their variation is indicative of the randomness of the Bees Algorithm in finding the solution. The program then uses the Bees Algorithm again to attempt to optimise the given measured values by finding the solution which simultaneously maximises the Surface Roughness KPI value and the Material Removal Rate KPI value, in order to maximise the overall KPI value, as given in Table 4. The program will then return the input parameters which account for the measured error and maximise the overall KPI value. The effects of optimisation on KPI values for balanced outputs are graphically displayed in Fig. 6.

Table 3  Virtual machine optimisation test data (balanced outputs, i.e. DesiredSRMRR = 2)

Test | Pre-optimisation SR (µm) | Pre-optimisation MRR (mm³/min) | Theoretically optimal SR (µm) | Theoretically optimal MRR (mm³/min) | Post-optimisation SR (µm) | Post-optimisation MRR (mm³/min)
1 | 1.2314 | 526.0621 | 2.0039 | 605.6984 | 1.4901 | 605.0161
2 | 2.8846 | 122.4418 | 2.0075 | 601.9279 | 2.9691 | 143.0397
3 | 1.5656 | 334.2587 | 2.0433 | 614.9612 | 1.9757 | 459.6668
4 | 1.2391 | 442.0465 | 2.0065 | 601.289 | 1.6139 | 563.846
5 | 1.963 | 995.2534 | 2.0209 | 610.8127 | 1.8289 | 918.9928
6 | 2.7512 | 425.1164 | 2.0039 | 605.6462 | 2.7098 | 392.2604
7 | 1.4065 | 662.1261 | 2.006 | 602.198 | 1.4122 | 667.7655
8 | 0.9542 | 637.7422 | 2.0314 | 610.4559 | 1.1729 | 708.9877
9 | 1.0079 | 996.1829 | 2.0077 | 603.2621 | 1.0063 | 994.9965
10 | 2.8025 | 142.0702 | 2.0038 | 606.0239 | 2.9146 | 173.0747
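A test harness along the following lines could generate such random test cases, assuming King's module [12] exposes a callable BeesAlgorithmFunction; the call signature used here is an assumption for illustration only.

% Illustrative test harness for the balanced case (DesiredSRMRR = 2).
SRrange  = [0.7760 3.3550];      % attainable SR range, micrometres
MRRrange = [17.6718 1058.5];     % attainable MRR range, mm^3/min
nTests   = 10;
for t = 1:nTests
    SRmeas  = SRrange(1)  + rand()*diff(SRrange);   % random "measured" SR
    MRRmeas = MRRrange(1) + rand()*diff(MRRrange);  % random "measured" MRR
    % Hypothetical optimiser call (signature assumed): returns the corrected
    % inputs and the post-optimisation output predictions.
    [optInputs, SRopt, MRRopt] = BeesAlgorithmFunction(SRmeas, MRRmeas, 2);
    fprintf('Test %d: SR %.4f -> %.4f, MRR %.4f -> %.4f\n', ...
            t, SRmeas, SRopt, MRRmeas, MRRopt);
end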
Table 4  Virtual machine KPI optimisation test data (balanced outputs, i.e. DesiredSRMRR = 2)

Test | Pre-optimisation SR KPI | Pre-optimisation MRR KPI | Pre-optimisation overall KPI | Post-optimisation SR KPI | Post-optimisation MRR KPI | Post-optimisation overall KPI
1 | 0.8234 | 0.4885 | 0.4614 | 0.7231 | 0.5634 | 0.6404
2 | 0.1824 | 0.1007 | 0.2864 | 0.1496 | 0.1205 | 0.3373
3 | 0.6938 | 0.3042 | 0.2881 | 0.5348 | 0.4247 | 0.5802
4 | 0.8204 | 0.4077 | 0.3478 | 0.6751 | 0.5248 | 0.6212
5 | 0.5397 | 0.9393 | 0.4443 | 0.5917 | 0.866 | 0.5718
6 | 0.2341 | 0.3915 | 0.3929 | 0.2502 | 0.3599 | 0.4381
7 | 0.7555 | 0.6192 | 0.6907 | 0.7533 | 0.6246 | 0.6995
8 | 0.9309 | 0.5958 | 0.5278 | 0.8461 | 0.6642 | 0.6839
9 | 0.9101 | 0.9402 | 0.9317 | 0.9107 | 0.939 | 0.9333
10 | 0.2142 | 0.1195 | 0.3053 | 0.1708 | 0.1493 | 0.3781
Fig. 6  Effects of optimisation on KPI values (balanced outputs, i.e. DesiredSRMRR = 2): overall KPI before and after optimisation, plotted against test number

Table 5  Virtual machine optimisation test data (favour SR over MRR, i.e. DesiredSRMRR = 1)

Test | Pre-optimisation SR (µm) | Pre-optimisation MRR (mm³/min) | Theoretically optimal SR (µm) | Theoretically optimal MRR (mm³/min) | Post-optimisation SR (µm) | Post-optimisation MRR (mm³/min)
1 | 3.261 | 788.6829 | 2.0075 | 599.4072 | 2.6976 | 435.6042
2 | 1.1882 | 518.6838 | 2.004 | 605.3054 | 1.4865 | 608.0843
3 | 3.1265 | 994.3655 | 2.0043 | 604.9099 | 2.5127 | 557.4716
4 | 1.9325 | 1046 | 2.0178 | 597.1626 | 1.753 | 950.6099
5 | 1.5128 | 864.1611 | 2.0101 | 594.9962 | 1.5186 | 876.5204
Table 6  Virtual machine KPI optimisation test data (favour SR over MRR, i.e. DesiredSRMRR = 1)

Test | Pre-optimisation SR KPI | Pre-optimisation MRR KPI | Pre-optimisation overall KPI | Post-optimisation SR KPI | Post-optimisation MRR KPI | Post-optimisation overall KPI
1 | 0.0364 | 0.7408 | 0.001 | 0.2549 | 0.4015 | 0.5656
2 | 0.8402 | 0.4814 | 0.7975 | 0.7245 | 0.5673 | 0.8007
3 | 0.0886 | 0.8904 | 0.53 | 0.3266 | 0.5186 | 0.6415
4 | 0.5516 | 0.9881 | 0.8592 | 0.6212 | 0.8964 | 0.8638
5 | 0.7413 | 0.8133 | 0.873 | 0.7121 | 0.8252 | 0.8755
Fig. 7  Effects of optimisation on KPI values (favour SR over MRR, i.e. DesiredSRMRR = 1): overall KPI before and after optimisation, plotted against test number
The program was finally run using a DesiredSRMRR value of 3 meaning it will favour MRR over SR optimisation. The results are displayed in the “Theoretically Optimum” columns of Table 7. The program then uses the Bees algorithm again to attempt to optimise the given measured values by finding the solution which simultaneously maximises Surface Roughness KPI value and Material Removal Rate KPI value, in order to maximise the overall KPI value as given in Table 8. The program will then return the input parameters which will account for the measured error and maximise the overall KPI value. The effects of optimisation on KPI values where MRR is favoured more than SR outputs are graphically displayed in Fig. 8.
5 Discussion of Results

These results show that the implementation of the Bees Algorithm program is working successfully.
Table 7  Virtual machine optimisation test data (favour MRR over SR, i.e. DesiredSRMRR = 3)

Test | Pre-optimisation SR (µm) | Pre-optimisation MRR (mm³/min) | Theoretically optimal SR (µm) | Theoretically optimal MRR (mm³/min) | Post-optimisation SR (µm) | Post-optimisation MRR (mm³/min)
1 | 1.0474 | 294.3336 | 2.0356 | 614.4894 | 1.7556 | 517.4194
2 | 1.921 | 690.443 | 2.0038 | 605.8796 | 1.9241 | 684.9981
3 | 3.4138 | 412.954 | 2.0115 | 606.1863 | 2.9148 | 276.866
4 | 2.6403 | 453.4569 | 2.0068 | 600.4984 | 2.6255 | 447.0357
5 | 2.2633 | 821.2393 | 2.0144 | 606.3657 | 2.1202 | 737.0247
Table 8  Virtual machine KPI optimisation test data (favour MRR over SR, i.e. DesiredSRMRR = 3)

Test | Pre-optimisation SR KPI | Pre-optimisation MRR KPI | Pre-optimisation overall KPI | Post-optimisation SR KPI | Post-optimisation MRR KPI | Post-optimisation overall KPI
1 | 0.8948 | 0.2658 | 0.6983 | 0.6202 | 0.4802 | 0.7387
2 | 0.556 | 0.6464 | 0.7743 | 0.5548 | 0.6412 | 0.7723
3 | 0.0819 | 0.3798 | 0.4199 | 0.1707 | 0.249 | 0.4541
4 | 0.2771 | 0.4187 | 0.5836 | 0.2829 | 0.4125 | 0.5845
5 | 0.4233 | 0.7721 | 0.7561 | 0.4788 | 0.6912 | 0.7585
Fig. 8  Effects of optimisation on KPI values (favour MRR over SR, i.e. DesiredSRMRR = 3): overall KPI before and after optimisation, plotted against test number
The overall KPI values increased in all cases but one, where there was a very slight decrease. This decrease (Table 8, test number 2) is very small and is indicative of the random variation in the running of the Bees optimisation algorithm. The most important tests were those with DesiredSRMRR = 2, i.e. when the machine is endeavouring to produce balanced outputs. The average KPI increase over
the 10 tests was 33.16%. This shows that the program works as a virtual machine and hence can be used to optimise the input parameters of a real WEDM if it were receiving measured output parameters. The goals of the study were to introduce Industry 4.0 to the WEDM process and to create a cyber-physical model. This entailed: identification of important inputs and outputs; creation of software that simulates the machining process and controls all necessary inputs and outputs and their ways of being measured; and creation of a virtual copy of the product being manufactured. These were achieved in the program created. One idea to extend the comprehensiveness of this study was identification of the most common ways in which the machining process fails and introduction of methods countering these risks. This was completed to a basic level. Some simple, common ways in which the machine breaks were identified and a constraint stopping the machining process was added. These included: the temperature sensor detecting a temperature out of the permissible range, the material removal rate sensor detecting a sudden change to zero indicating a wire break, and detection of out-of-range output values indicating a dysfunctional input component.
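These three failure conditions can be expressed as simple guard checks. The Matlab sketch below is illustrative only; the threshold values held in the limits structure are placeholders rather than values from the actual program.

function broken = machineFaultCheck(tempC, MRRmeas, SRmeas, limits)
% MACHINEFAULTCHECK  Simple fault checks of the kind described in the text.
broken = false;
if tempC < limits.tempMin || tempC > limits.tempMax
    broken = true;        % temperature out of permissible range
elseif MRRmeas <= 0
    broken = true;        % sudden drop of MRR to zero indicates a wire break
elseif SRmeas < limits.SRmin || SRmeas > limits.SRmax
    broken = true;        % out-of-range output: dysfunctional input component
end
end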
6 Conclusions and Future Work

In this study the virtualisation of a product in order to optimise the machining process using the Bees Algorithm has been achieved. This study has shown how virtualisation could be used to provide information for the general case of machining processes. In particular, WEDM has been considered and the steps involved in virtualisation and optimisation of process parameters have been demonstrated. The approach proposed in this study is currently capable of:
• Simulation of the "Rough Cut" part of the Wire Electrical Discharge Machining process.
• Control of all necessary inputs to optimise Surface Roughness and Material Removal Rate with the integrated Bees Algorithm.
• Scope to include the last of the important factors, Kerf Width, into the program by copying the process already implemented for the first two output parameters.
• Identification of, and reaction to, some of the most common/simple ways in which the Wire Electrical Discharge Machining process fails.
The future work will include making continuous improvements to the application of the Bees Algorithm in a Digital Twin for the optimisation of WEDM process parameters in several ways. In particular, some of the short-term improvements that could be made are:
• Including more materials in the library of the program to emulate a WEDM process more realistically.
• Adding more input and output parameters into the optimisation of the WEDM by iterating the virtual machine which has already been created.
• Performing a greater number of tests on the virtual machine to test its reliability in the face of random variables.
• Considering more output parameters such as Kerf Width and the time for completion of the machining process, and expanding to the finishing operation of the WEDM process as well as just the rough cutting operation, to model the process more holistically and hence provide a more realistic virtual machine in the creation of a Cyber Physical System.
The main direction this work could be taken in the longer term is:
• Providing a template for other machining processes to be virtualised. If enough processes were virtualised, a holistic template for automation of a machining process could be formed, leading to the virtual machine controlling the physical machining process remotely through the Internet and achieving a true Cyber Physical System.
Acknowledgements The authors would like to thank ASTUTE 2020, funded by the ERDF, and the MANUELA project, funded by the EU H2020, for their support.
References 1. Altintas Y (2000) Manufacturing automation: metal cutting mechanics, machine tool vibrations, and CNC design. Cambridge university press 2. Bi Z, Jin Y, Maropoulos P, Zhang WJ, Wang L (2021) Internet of things (IoT) and big data analytics (BDA) for digital manufacturing (DM). Int J Prod Res 1–18 3. Boopathi S, Sivakumar K (2013) Experimental investigation and parameter optimization of near-dry wire-cut electrical discharge machining using multi-objective evolutionary algorithm. Int J Adv Manuf Technol 67(9):2639–2655 4. Brettel M, Friederichsen N, Keller M Rosenberg M (2017) How virtualization, decentralization and network building change the manufacturing landscape: an industry 4.0 perspective. FormaMente 12 5. Chavda A (2015) Basic difference between event based simulator and cycle based simulator. https://asic4u.wordpress.com/2015/09/30/basic-difference-between-event-based-simula tor-and-cycle-based-simulator/. Last accessed 26 Nov 2021 6. Datta S, Mahapatra S (2010) Modeling, simulation and parametric optimization of wire EDM process using response surface methodology coupled with grey-Taguchi technique. Int J Eng Sci Technol 2(5):162–183 7. Harrison J (2017) Virtualisation of products: a case of machining a metal component. BEng Thesis, Cardiff University, UK 8. Hermann M, Pentek T, Otto B (2016) Design principles for industrie 4.0 scenarios. In: 2016 49th Hawaii international conference on system sciences (HICSS), IEEE, pp 3928–3937 9. Ho KH, Newman ST, Rahimifard S, Allen RD (2004) State of the art in wire electrical discharge machining (WEDM). Int J Mach Tools Manuf 44(12–13):1247–1259 10. Hu F (2013) Cyber-physical systems: integrated computing and engineering design. CRC Press
11. Ikram A, Mufti NA, Saleem MQ, Khan AR (2013) Parametric optimization for surface roughness, kerf and MRR in wire electrical discharge machining (WEDM) using Taguchi design of experiment. J Mech Sci Technol 27(7):2133–2141 12. King S (2017) Development and application of a bees algorithm module. BEng Thesis, Cardiff University, UK 13. Lee EA (2008) Cyber physical systems: design challenges. In: 2008 11th IEEE international symposium on object and component-oriented real-time distributed computing (ISORC). IEEE, pp 363–369 14. Nikalje AM, Kumar A, Srinadh KV (2013) Influence of parameters and optimization of EDM performance measures on MDN 300 steel using Taguchi method. Int J Adv Manuf Technol 69(1):41–49 15. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm— a novel tool for complex optimisation problems. In: Intelligent production machines and systems. Elsevier Science Ltd., pp 454–459 16. Pivoto DG, de Almeida LF, da Rosa Righi, Rodrigues JJ, Lugli AB, Alberti AM (2021) Cyberphysical systems architectures for industrial internet of things applications in Industry 4.0: a literature review. J Manuf Syst 58:176–192 17. Qiu C (2016) Micro_Nano_manufacturing. Lecture Notes, Cardiff University, UK, p 2017 18. Rajyalakshmi G, Venkata Ramaiah P (2013) Multiple process parameter optimization of wire electrical discharge machining on Inconel 825 using Taguchi grey relational analysis. Int J Adv Manuf Technol 69(5):1249–1262 19. Rao PS, Ramji K, Satyanarayana B (2014) Experimental investigation and optimization of wire EDM parameters for surface roughness, MRR and white layer in machining of aluminium alloy. Procedia Mater Sci 5:2197–2206 20. Shao G, Helu M (2020) Framework for a digital twin in manufacturing: scope and requirements. Manuf Lett 24:105–107 21. Tao F, Zhang H, Liu A, Nee AY (2018) Digital twin in industry: state-of-the-art. IEEE Trans Ind Inf 15(4):2405–2415 22. Zheng T, Ardolino M, Bacchetti A, Perona M (2021) The applications of Industry 4.0 technologies in manufacturing context: a systematic literature review. Int J Prod Res 59(6):1922–1954 23. Zhou G, Zhang C, Li Z, Ding K, Wang C (2020) Knowledge-driven digital twin manufacturing cell towards intelligent manufacturing. Int J Prod Res 58(4):1034–1051
A Case Study with the BEE-Miner Algorithm: Defects on the Production Line Merhad Ay, Adil Baykasoglu, Lale Ozbakir, and Sinem Kulluk
1 Introduction

Due to technological developments, a high volume of data is employed to generate meaningful cause-and-effect relations hidden in the quality control data of manufacturing processes. Frumosu et al. [1] argued that the main contributor to this voluminous data in manufacturing is usually the quality control stages. For this reason, accurate analysis of quality control data and determination of useful actions are of great importance in manufacturing systems. Especially in textile manufacturing, fault detection is an important task in effective fabric quality control due to the effect of different quality product pricing [2]. On the other hand, an equally important issue is to prevent the occurrence of quality defects by determining their causes. The rapid spread of digital transformation in production systems, especially with the concept of Industry 4.0, increases the value of data analytics studies in all kinds of manufacturing systems. We can divide related studies into two main groups: automatic defect detection by image processing [2–4] and classification approaches for the detection of defect causes [5, 6]. The first approach aims to automate visual quality control in fabric production, while the second is focused on detecting and eliminating the causes of quality errors.
M. Ay (B) · L. Ozbakir · S. Kulluk
Department of Industrial Engineering, Erciyes University, 38280 Kayseri, Turkey
e-mail: [email protected]
L. Ozbakir
e-mail: [email protected]
S. Kulluk
e-mail: [email protected]
A. Baykasoglu
Department of Industrial Engineering, Dokuz Eylul University, 35210 İzmir, Turkey
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
D. T. Pham and N. Hartono (eds.), Intelligent Production and Manufacturing Optimisation – The Bees Algorithm Approach, Springer Series in Advanced Manufacturing, https://doi.org/10.1007/978-3-031-14537-7_4
One of the most important components of data mining is classification. It is a model generation approach that is used to assign instances whose classes are not previously known to the correct classes as accurately as possible. However, most classification algorithms either ignore different misclassification errors or assume that all misclassification errors have equal costs. In many real-life classification problems misclassification has different costs, and because of the problem characteristics these costs cannot be ignored during classification. Cost-sensitive classification is especially important for accurate and significant classification in data where class distributions are not balanced. Therefore, more work is being done on cost-sensitive classification approaches [1, 7, 8]. Studies on cost-sensitive multi-class classification problems are still ongoing and there are considerable gaps in the literature on cost evaluation in the classification of quality defects in fabric production. The Bees Algorithm (BA) was developed to solve complex optimisation problems using a model inspired by the food foraging behaviour of honeybees [9]. Since the BA was first proposed, the BA and its variants have been applied to many optimisation problems in different areas. Pham et al. [10] proposed an improved BA including new search operators and a selection mechanism to solve dynamic optimisation problems and compared their results with the Ant Colony Optimisation (ACO) algorithm. Ziarati et al. [11] applied different bee algorithms, such as the BA, artificial bee colony (ABC) and bee swarm optimisation (BSO), to the resource-constrained project scheduling problem and compared the results of these algorithms. Fahmy [12] applied the BA to determine the optimal operating speed parameters of wind power units and compared the results with the Particle Swarm Optimisation (PSO) algorithm. Yüce et al. [13] proposed a new version of the Bees Algorithm to solve a multi-objective supply-chain configuration problem. Akpinar and Baykasoğlu [14] proposed a Multiple Colony Bees Algorithm (MCBA) to solve functional optimisation problems. Tsai [15] proposed a novel bees algorithm (NBA) by incorporating a stochastic self-adaptive neighbourhood search into the BA and applied this algorithm to unimodal and multimodal function optimisation problems. Yüce et al. [16] enhanced the performance of the BA by integrating Genetic Algorithm (GA) operators in the global search stage and applied this hybrid algorithm to the single machine scheduling problem. Laili et al. [17] proposed a ternary bees algorithm for generating feasible disassembly replanning solutions as subassembly detection and sequence optimisation for robotic disassembly. Xu et al. [18] presented a modified discrete BA based on Pareto (MDBAPareto) to solve the disassembly sequence planning problem and compared their results with the Non-dominated Sorting Genetic Algorithm II (NSGA II), multi-objective bees algorithm (MOBA) and modified teaching–learning-based optimization (MTLBO) algorithms. Recently, Baronti et al. [19] presented a comprehensive analysis of the BA structure and the effects of algorithm parameters and design choices on the search performance of the BA, to provide useful information for BA applications. In this study, the modified bees algorithm is applied to the classification of quality defects, which will contribute to the BA literature. Two different cost-sensitive classification algorithms, namely BEE-miner and MEPAR-miner, which were suggested by Tapkan et al. [20] and Kulluk et al.
[21] are used to determine the reasons of quality defects in fabric production. BEE-miner algorithm is based on Bees Algorithm [9]
and it was designed to be applicable to both binary and multi-class classification problems. The MEPAR-miner algorithm is based on multi-expression programming (MEP), which was proposed by Oltean and Dumitrescu [22] for symbolic regression problems. Cost-insensitive and cost-sensitive versions of these algorithms are applied to three different datasets, which are composed of knot, stop mark and loop/tangle defects. The performances of the BEE-miner and MEPAR-miner algorithms are compared in terms of different measures such as testing accuracy, number of classification rules and CPU times. The remainder of this paper is organised as follows. Section 2 introduces the cost-sensitive classifiers, the BEE-miner and MEPAR-miner algorithms. Section 3 presents the case study covering the relevant fabric defect data sets. Section 4 presents the concluding remarks of the paper.
2 Cost-Sensitive Classifiers

2.1 BEE-Miner Algorithm

The BEE-miner algorithm is a cost-sensitive classifier based on the Bees Algorithm (BA) that can be applied to binary/multi-class problems. Since the algorithm is a direct method, the misclassification cost is incorporated into the operation rules of the algorithm. The stages which include the cost component are the determination of the fitness function and the neighbourhood structures.
Pre-processing: In the BEE-miner algorithm, the user can either remove the instances having missing values from the dataset or complete the missing values by using one of the four different methods in the algorithm. Continuous attributes must be discretized in the algorithm, and there are six different discretization methods in the algorithm.
Generating initial solutions: In the solution string of the BEE-miner algorithm, binary and discrete feature values take the operators "=", "≠" and "null", and continuous feature values take the operators "≤", "≥" and "null". The null value indicates that the relevant attribute has no impact on the rule generation phase. The size of the solution string is "# of features + 1", including the number of input attributes and the class variable. In this solution string approach, a rule is generated by connecting each attribute with the logical operator "AND". The initial solutions are randomly generated in accordance with this solution string structure. An example solution string that is used in the BEE-miner algorithm is given in Fig. 1. The classification rule shown in the figure is:
IF A1 ≤ 5.25 AND A2 ≥ 12.50 AND A4 = 6 AND A5 ≠ 7 AND A7 = 1 AND A8 ≠ 1 THEN Class 1
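As an illustration, a solution string of this kind could be represented and evaluated as follows in Matlab; the struct layout, field names and instance values are assumptions for illustration, not the original implementation.

% Illustrative encoding of the example rule as a struct array: each term
% holds an attribute index, a relational operator (or 'null') and a value.
rule.terms = struct('attr', {1, 2, 4, 5, 7, 8}, ...
                    'op',   {'<=', '>=', '==', '~=', '==', '~='}, ...
                    'val',  {5.25, 12.50, 6, 7, 1, 1});
rule.class = 1;

x = [5.0 13.1 0 6 7 0 1 2];      % example instance (illustrative values)
fires = true;
for k = 1:numel(rule.terms)
    t = rule.terms(k);
    switch t.op
        case '<=', ok = x(t.attr) <= t.val;
        case '>=', ok = x(t.attr) >= t.val;
        case '==', ok = x(t.attr) == t.val;
        case '~=', ok = x(t.attr) ~= t.val;
        otherwise, ok = true;    % 'null': attribute has no effect on the rule
    end
    fires = fires && ok;         % terms are connected with logical AND
end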
Fig. 1 An example solution string structure [20]
Neighbourhood structures: In the BEE-miner algorithm, a neighbour solution is obtained from a solution by modifying the operator-value arrangement of a particular feature on the solution string. This structure is based on the involvement of cost information in entropy, which forms the basis of the ID3 and C4.5 algorithms. According to the relevant structure, the entropy of the database is computed as follows:

Entropy(CS) = \sum_{i=1}^{c} -A_i \, p_i \log_2 p_i    (1)

A_i = \sum_{j=1}^{c} c_{ij}    (2)
The entropy calculation given in Eq. 1 is based on multiplying the entropy by the sum of the misclassification costs of the relevant instance. In Eq. 2, c_{ij} represents the misclassification cost of classifying an example with actual class i as j. The information gain of a feature is computed on a rule basis in the algorithm. Because the related misclassification costs rely on the actual and predicted classes, the information gain has to be computed within this context by Eq. 3, where K symbolises the class of the present solution string.

Inf.Gain(CS, A, K) = Entropy(CS) - \sum_{v \in V(A)} \frac{|S_v|}{|S|} \left( \sum_{i=1}^{c} -c_{iK} \, p_i \log_2 p_i \right)    (3)
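A minimal Matlab sketch of the cost-weighted entropy of Eqs. (1) and (2), using an illustrative class distribution and cost matrix, is given below.

% p is the vector of class probabilities in the current subset; C is the
% c-by-c misclassification cost matrix (C(i,j) = cost of predicting j for
% a true class i). Both are illustrative example values.
p = [0.7 0.2 0.1];
C = [0 1 2; 1 0 1; 4 2 0];
A = sum(C, 2);                          % Eq. (2): A_i = sum_j c_ij
terms = -A(:) .* p(:) .* log2(p(:));    % summand of Eq. (1)
terms(p(:) == 0) = 0;                   % treat 0*log2(0) as 0
costEntropy = sum(terms);               % Eq. (1)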
The BEE-miner algorithm uses three kinds of neighbourhood structures: insertion, extraction, and alteration. When a new neighbour solution is to be derived from a solution, one of the three neighbourhood structures is chosen randomly. During insertion, a probabilistic selection is made between the features with low-cost information value and those with high-cost information value. After the selection of the attribute to be inserted or altered, the related operator-value combination is once more chosen probabilistically according to the misclassification cost ratio. The mentioned misclassification cost ratio is computed as follows [23]:
MCSR_{a_k, s, i} = \frac{\sum_{\forall (x,y) \in s \,\wedge\, y \neq i} C_{y,i}}{\sum_{\forall (x,y) \in s} \left( C_{y,i} + \sum_{j=1}^{C} C_{y,j} \cdot \frac{1}{C-1} \right)}    (4)
In Eq. 4, MCSR_{a_k, s, i} represents the misclassification cost ratio obtained when classifying all the instances in the sth bin of feature a_k as class i whose actual class is not truly i, and (x, y) indicates the feature and class information of an instance in the training set.
Fitness function: The performance of a classifier using c class labels is measured with a c × c dimensional confusion matrix (CoM). Cost-sensitive classifiers, on the other hand, may be analysed by a cost-weighted total of misclassifications divided by the number of classified examples, based on element-wise multiplication of the confusion matrix and the cost matrix [24]. The following equation is used as the fitness function in the algorithm:

fit = \frac{\sum_{i=1}^{c} \sum_{j=1}^{c} CoM_{ij} \, C_{ij}}{\sum_{i=1}^{c} \sum_{j=1}^{c} CoM_{ij}}    (5)
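Equation (5) reduces to a few array operations; the following sketch uses illustrative confusion and cost matrices.

% Sketch of the cost-sensitive fitness of Eq. (5): cost-weighted
% misclassifications divided by the total number of classified examples.
CoM = [50 3 1; 4 30 2; 2 1 10];             % example confusion matrix
C   = [0 1 2; 1 0 1; 4 2 0];                % example cost matrix
fit = sum(sum(CoM .* C)) / sum(CoM(:));     % Eq. (5)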
Unfortunately, there are several complexities in determining the confusion matrix in the multi-class situation. To determine the misclassification cost of a sample whose rule part is not satisfied but whose class part is satisfied (TN) in a multi-class dataset, the expected cost value is used, whose formulation is given below, where P(j) represents the occurrence probability of class j:

ExpectedCost_i = \sum_{j=1, \, j \neq i}^{C} P(j) \, c_{ij}    (6)

Generating the rule set: The BEE-miner algorithm employs a multi-rule structure. Starting from the classes with a small number of instances in the training set, the rules obtained by running the algorithm for each class are stored in the rule pool after a pre-processing step to prevent conflicts. A genetic algorithm (GA) is used to generate the rule set from the rules in the rule pool. In the GA, the chromosome size corresponds to the number of rules in the rule pool. A gene consists of 1 and 0 values, indicating whether the related rule is contained in the rule set or not. The simplified flow chart of the BEE-miner algorithm is described in Fig. 2. For more details about the algorithm readers can refer to [20].
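The rule-set GA therefore works on binary chromosomes over the rule pool; a minimal sketch of this encoding (details assumed) is shown below.

% Illustrative encoding of a rule-set chromosome: one bit per rule in the
% rule pool, 1 meaning the rule is included in the candidate rule set.
nRulesInPool  = 12;
chromosome    = randi([0 1], 1, nRulesInPool);   % random candidate rule set
selectedRules = find(chromosome == 1);           % indices of included rules
% The selected rules would then be applied to the training data and the
% resulting rule-set performance used as the GA fitness.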
Fig. 2  The simplified flow chart of the BEE-miner algorithm [20]: pre-processing (completing missing values, discretizing continuous attributes), generation of initial solutions, fitness evaluation and neighbourhood search at the elite and remaining best sites, global search with random scout solutions, and, once the stopping criterion is met, generation of the rule set via the GA (initial population of rule combinations, genetic operators, and replacement of the worst rule sets)
2.2 MEPAR-Miner Algorithm

The cost-sensitive MEPAR-miner is a meta-learning algorithm that uses rescaling as a meta-learning approach. This approach works by rebalancing the classes according to their cost. In the process under consideration, training samples with different classes are oversampled or undersampled according to their cost; the obtained samples are then sent to the cost-insensitive MEPAR-miner algorithm to get the resulting
Table 1  The structure of the cost-sensitive meta-learning [21]

Step 1: If the problem is binary class, then go to Step 5; if not, go to Step 2
Step 2: Create the coefficient matrix for the costs
Step 3: Revise the cost matrix until a stable cost matrix is obtained (rank < c)
Step 4: Determine w from the consistent cost matrix
Step 5: Resample the training set by oversampling or undersampling
Step 6: Apply the cost-insensitive MEPAR-miner algorithm to the scaled data
Step 7: Calculate the performance metrics
predictions. The cost-sensitive meta-learning procedure used is given in Table 1. For more details about the algorithm readers can refer to [21]. The cost-insensitive MEPAR-miner algorithm [25] is based on the multi-expression programming (MEP) approach [22]. MEP is used for symbolic regression; MEPAR-miner extends MEP to handle classification problems. In the algorithm, the basic structure of the MEP chromosome representation is used, but logical expressions are obtained with changes in the function and terminal sets. The following expression shows a typical rule set:
IF antecedent1 THEN class1, ELSE IF antecedent2 THEN class2, …, ELSE classdefault
The evaluation of the rule set begins with the first rule and continues until a rule that matches the instance is found. If the instance is not matched by any rule, the class of that instance is set as the default class. The chromosome form of the MEPAR-miner algorithm consists of the function set containing the logical operators and the terminal set containing the attribute-relational operator-value triple relationship. The size of the terminal genes is the same as the number of variables in the associated classification problem. The function and terminal sets used in the MEPAR-miner algorithm are presented in Table 2. A sample chromosome structure for a classification rule is presented in Fig. 3. The 7th gene of the chromosome in Fig. 3 expresses the following classification rule:
IF ((x1 ≥ 8) AND (x2 = 1)) OR ((x0 ≤ 5) OR (x2 = 1)) THEN Classi
Table 2  Function and terminal sets [25]

xi: feature i
Relational operator: "=" for categorical features; "≤", "≥" for continuous features
Vxi: domain of feature i
Terminal set: {x0-RO-Vx0, x1-RO-Vx1, …, xn-RO-Vxn}
Function set: {AND, OR, NOT}
Fig. 3 A sample chromosome form for classification rule [21]
which can take a value between 0 and 1, is defined as in Eq. 7. In this equation, tp and tn represent the correct classifications, while fp and fn express the incorrect classifications.

Fitness = S_e \times S_p = \frac{tp}{tp + fn} \times \frac{tn}{tn + fp}    (7)
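A short sketch of Eq. (7) computed from a rule's confusion counts (illustrative numbers) follows.

% Sketch of the MEPAR-miner rule fitness of Eq. (7).
tp = 45; fn = 5; tn = 38; fp = 12;   % example confusion counts
Se = tp / (tp + fn);                 % sensitivity
Sp = tn / (tn + fp);                 % specificity
fitness = Se * Sp;                   % Eq. (7), takes values in [0, 1]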
The genetic operators applied in the MEPAR-miner algorithm are crossover and mutation. Before the genetic operators are applied, the best chromosome in the population is passed on to the next generation. In the selection process, the matching pool is formed from the individuals determined by the double tournament selection. Two chromosomes are selected at random from the population and the one with the higher fitness value is transferred to the matching pool. According to a certain crossover probability (Pc), a single point crossover is used to two parent individuals chosen from the matching pool. The mutation operator is applied to the terminal and function genes in the chromosome according to the predetermined mutation probability (Pm). Random mutation points are determined in the chromosome and applied in different ways according to the terminal or functional status of the genes at these points. If the
selected gene is terminal, the relational operator and value in that gene are changed. If the selected gene is a function, this gene is changed by selecting a new function and markers for this function. The general shape of the MEPAR-miner algorithm is presented in Fig. 4.
3 Experimental Study

In this study, the aim is to find the reasons that cause quality defects in the production of a textile manufacturing company. The company records various data in its databases throughout production. Information such as machine information, product types, error types, defect amounts, machine stops, time between stops and machine failure reasons is recorded in the company's databases. When the data were examined, the three most common quality defects, namely knot, stop mark, and loop/tangle, were taken into consideration in the study. In order to classify the quality defects, the BEE-miner and MEPAR-miner algorithms are used.
Knot defect: Yarns can break during production and the appearance of these broken yarns on the textile is known as a knot defect. Generally, variables such as ball warping, sizing, and weaving are effective in this defect.
Stop mark defect: This is the fault on the product that occurs due to the change in weft width when machines stop for some reason. In general, variables such as machine stops, mechanical settings and product type cause this defect.
Loop/tangle defect: If some of the warp threads are not included in the weaving during the knitting process in production, this defect occurs. This defect often occurs due to variables such as slack warp, burrs on the reed, and the back rest roller setting.
The attributes that may cause these defects were determined after interviews with the production experts. 13 input attributes and one output (class) variable were obtained in the data sets. The attributes were gathered from several databases of the company and brought together. These attributes and the output are shown in Table 3 with their types. In the study, 3 different data sets belonging to 3 different defect types occurring in production in a textile company were used. These datasets, with quantities, are knot defect (230 samples), stop mark defect (230 samples), and loop/tangle defect (143 samples), as shown in Table 4. Fixed-Frequency Discretization (FFD) [26] was used to discretize the attributes in the datasets. The minimum number of elements required in a bin is determined as 30 for FFD. The percentage split method was used for splitting the datasets for training and testing: 66% of each dataset was utilised for training and the remaining 34% for testing. There are 4 defect classes for the knot and stop mark datasets and 2 defect classes for the loop/tangle dataset, as shown in Table 4. The percentages of samples in each class in the knot defect dataset are 75%, 21%, 3%, and 1%, respectively. In the stop mark dataset, the percentages of samples
Fig. 4 The general structure of the MEPAR-miner algorithm [21]
Table 3  Attributes in datasets [6]

No | Attribute | Type
1 | Type code | Categorical
2 | Weft break | Continuous
3 | Warp break | Continuous
4 | Total weft break/100,000 m | Continuous
5 | Sizing machine | Categorical
6 | Indigo machine | Categorical
7 | Wrapping break | Continuous
8 | Wrapping break/1,000,000 m | Continuous
9 | Rebeaming break | Continuous
10 | Rebeaming break/1,000,000 m | Continuous
11 | Collar | Categorical
12 | Crossing | Categorical
in each class are 74%, 19%, 4% and 3%, respectively. In the loop/tangle data set, the percentages of the samples in each class are 95% and 5%, respectively. There are no missing values in any of the data sets; therefore, a missing-value filling pre-processing step was not carried out for the datasets. The parameters of the algorithms were set before running the algorithms. To determine the values of these parameters, many different parameter combinations were tried and the parameters with the best performance were chosen for the experimental study. The parameters and determined values for the BEE-miner algorithm are presented in Table 5. Population size of the genetic algorithm (Ps), crossover probability (Cp), mutation probability (Mp), percentage of chromosomes that will be protected (Pc), maximum iteration number of the genetic algorithm (maxin), number of scout bees (S), number of employed bees (P), number of best employed bees (e), number of onlooker bees assigned to the best e employed bees (nep), number of onlooker bees allocated to the remaining P-e employed bees (nsp) and maximum iteration number of the Bees Algorithm (MaxIter) are the parameters of the BEE-miner

Table 4  Total instances and percentage of classes in datasets

 | Knot defect | Stop mark defect | Loop/tangle defect
# of total instances | 230 | 230 | 143
% of Class 1 instances | 75 | 74 | 95
% of Class 2 instances | 21 | 19 | 5
% of Class 3 instances | 3 | 4 | –
% of Class 4 instances | 1 | 3 | –
Table 5  Descriptions and values of BEE-miner parameters

Parameter | Description | Value
Ps | Population size of genetic algorithm | 40
Cp | Crossover probability | 0.8
Mp | Mutation probability | 0.2
Pc | Percentage of chromosome that will be protected | 0.2
maxin | Maximum iteration number of genetic algorithm | 100
S | Number of scout bees (s = 1, …, S) | 30
P | Number of employed bees (p = 1, …, P) | 20
e | Number of best employed bees | 10
nep | Number of onlooker bees assigned to best e employed bees | 8
nsp | Number of onlooker bees assigned to remaining P-e employed bees (nsp < nep) | 4
MaxIter | Maximum iteration number of Bees Algorithm | 250
algorithm, and the obtained best parameter set is {Ps, Cp, Mp, Pc, maxin, S, P, e, nep, nsp, MaxIter}: {40, 0.8, 0.2, 0.2, 100, 30, 20, 10, 8, 4, 250}. The parameters and determined values for the MEPAR-miner algorithm are indicated in Table 6. Generation number (G), population size (Np), chromosome length (Lc), crossover rate (Pc), mutation rate (Pm), and number of mutation points (Nm) are the parameters of the MEPAR-miner algorithm, and the obtained best parameter set is {G, Np, Lc, Pc, Pm, Nm}: {1000, 100, 40, 0.9, 0.2, 4}. After determining the parameter values, the experimental study was carried out on 4 algorithms, namely cost-sensitive BEE-miner, cost-insensitive BEE-miner, cost-sensitive MEPAR-miner, and cost-insensitive MEPAR-miner. In the cost-sensitive studies, cost matrices were prepared for each data set as shown in Fig. 5. The cost matrices for the knot defect dataset, the stop mark defect dataset and the loop/tangle defect dataset are presented in Fig. 5a–c. In the cost-insensitive classification studies, the cost of misclassification (red) would be 1 and the cost of correct classification (green) would be 0 in the matrices shown in Fig. 5.

Table 6  Descriptions and values of MEPAR-miner parameters

Parameter | Description | Value
G | Generation size | 1000
Np | Population size | 100
Lc | Chromosome length | 40
Pc | Crossover rate | 0.9
Pm | Mutation rate | 0.2
Nm | Number of mutation points | 4
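For reproducibility, the settings of Tables 5 and 6 can be collected into parameter structures; the struct and field names below are illustrative, while the values are those reported in the tables.

% Parameter settings of Tables 5 and 6 as MATLAB structs.
beeMinerParams = struct('Ps',40, 'Cp',0.8, 'Mp',0.2, 'PcProtect',0.2, ...
    'maxin',100, 'S',30, 'P',20, 'e',10, 'nep',8, 'nsp',4, 'MaxIter',250);
meparMinerParams = struct('G',1000, 'Np',100, 'Lc',40, ...
    'Pc',0.9, 'Pm',0.2, 'Nm',4);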
Fig. 5  Cost matrices for the datasets: (a) knot, (b) stop mark, (c) loop/tangle
Table 7  Comparative results with BEE-miner and MEPAR-miner

Dataset | Measure | Cost-insens. BEE-miner | Cost-sens. BEE-miner | Cost-insens. MEPAR-miner | Cost-sens. MEPAR-miner
Knot | Test acc. (%) | 83.67 ± 5.85 | 90.17 ± 2.17 | 71.57 ± 9.32 | 71.70 ± 6.96
Knot | # of rules | 4.63 ± 0.96 | 4.37 ± 1.07 | 4.00 ± 0.00 | 4.00 ± 0.00
Knot | CPU time (s) | 43.93 ± 3.08 | 46.00 ± 3.56 | 241.43 ± 18.53 | 277.97 ± 20.64
Stop mark | Test acc. (%) | 58.76 ± 3.66 | 65.21 ± 5.15 | 50.27 ± 13.47 | 51.43 ± 8.86
Stop mark | # of rules | 5.60 ± 1.13 | 10.43 ± 1.77 | 4.00 ± 0.00 | 4.00 ± 0.00
Stop mark | CPU time (s) | 43.57 ± 3.11 | 50.50 ± 2.42 | 246.87 ± 25.29 | 279.63 ± 27.32
Loop/tangle | Test acc. (%) | 75.37 ± 6.29 | 91.09 ± 2.76 | 58.80 ± 1.79 | 59.73 ± 4.63
Loop/tangle | # of rules | 2.16 ± 0.37 | 2.07 ± 0.25 | 2.00 ± 0.00 | 2.00 ± 0.00
Loop/tangle | CPU time (s) | 11.04 ± 1.25 | 11.30 ± 2.15 | 61.83 ± 5.92 | 57.10 ± 6.31
After the parameters and cost matrices were determined, the algorithms were executed 30 times for each data set. All algorithms were executed on an Intel Core i7-7700 3.6 GHz computer with 8 GB RAM. The observed results are shown in Table 7. The values in the table are the means and standard deviations of the 30 executions. As can be noticed from Table 7, the cost-sensitive BEE-miner algorithm outperforms the cost-insensitive BEE-miner, cost-insensitive MEPAR-miner, and cost-sensitive MEPAR-miner algorithms in terms of predictive accuracy. In addition, both the cost-sensitive and cost-insensitive results of the BEE-miner algorithm gave better results than the MEPAR-miner algorithm. As expected, the MEPAR-miner algorithm produced as many rules as the number of classes of the associated dataset. Although the BEE-miner algorithm rarely produced many rules, it generally produced rules close to the number of classes. Considering the CPU times, the BEE-miner algorithm gave results in a much shorter time when compared to the MEPAR-miner algorithm. A paired t-test is additionally used to analyse the superiority of the cost-sensitive BEE-miner algorithm over the remaining three algorithms. It can thus be determined whether the differences between the obtained results are statistically significant.
Table 8  The p-values of all classifiers paired with the BEE-miner (cost-sensitive)

Dataset | Cost-insensitive BEE-miner | Cost-insensitive MEPAR-miner | Cost-sensitive MEPAR-miner
Knot | 1.30E−05 | 1.07E−14 | 1.33E−07
Stop mark | 1.90E−05 | 7.55E−05 | 1.90E−05
Loop/tangle | 3.65E−08 | 7.77E−21 | 1.99E−21
The alpha value is set to 0.05, and the p-values show the probability of equivalence of the accuracies of the compared algorithms. Low p-values (p < 0.05) show that the accuracies are not equal and that there is a statistically meaningful difference between the accuracies of the compared algorithms. The p-values in Table 8, all lower than 0.05, show that the results of the evaluated algorithms are statistically significantly different.
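Such a comparison can be reproduced with Matlab's ttest function from the Statistics and Machine Learning Toolbox; the accuracy vectors below are illustrative stand-ins for the 30 recorded runs of two classifiers.

% Paired t-test on per-run test accuracies, as used for Table 8.
accA = 90 + 2*randn(30,1);                   % e.g. cost-sensitive BEE-miner
accB = 84 + 6*randn(30,1);                   % e.g. cost-insensitive BEE-miner
[h, p] = ttest(accA, accB, 'Alpha', 0.05);   % paired t-test
% p < 0.05 indicates the accuracy difference is statistically significant.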
4 Conclusion

In the present research, the most common quality defects occurring in the production of a textile company were examined. It is important for manufacturing industries to classify these defects accurately, because effective classification is the most important way to avoid production defects. These defects were successfully classified with the BEE-miner algorithm. The primary feature of the algorithm is that it uses cost sensitivity as a core principle of the algorithm. The BEE-miner algorithm can successfully classify binary or n-ary classification problems at low cost by making use of the Bees Algorithm. The experimental results show that the cost-sensitive BEE-miner algorithm outperforms the cost-sensitive MEPAR-miner algorithm and the cost-insensitive versions of these algorithms in terms of predictive accuracy. At the same time, the BEE-miner algorithm is superior to the MEPAR-miner algorithm in terms of computation time. As a result, the BEE-miner algorithm can be used to classify quality defects in production industries. In future studies, the computational time of the algorithm can be further improved. In addition, the number of rules and the misclassification cost can be reduced.
References 1. Frumosu FD, Khan AR, Schioler H, Kulahci M, Zaki M (2020) Cost-sensitive learning classification strategy for predicting product failures. Expert Syst Appl 161:113653 2. Koulali I, Eskil MT (2021) Unsupervised textile defect detection using convolutional neural networks. Appl Soft Comput 113:107913 3. Uzen H, Turkoglu M, Hanbay D (2021) Texture classification with multiple 8. Pooling and filter ensemble based on deep neural network. Expert Syst Appl 175:114838 (2021)
4. Jing JF, Ma H, Zhang HH (2019) Automatic fabric defect detection using deep convolutional neural networks. Color Technol 135:213–223 5. Özbakır L, Baykaso˘glu A, Kulluk S (2011) Rule extraction from artificial neural networks to discover causes of quality defects in fabric production. Neural Comput Appl 20:1117–1128 6. Baykaso˘glu A, Özbakır L, Kulluk S (2011) Classifying defect factors in fabric production via DIFACONN-miner: a case study. Expert Syst Appl 38:11321–11328 7. Pei W, Xue B, Shang L, Zhang M (2021) Genetic programming for development of costsensitive classifiers for binary high-dimensional unbalanced classification. Appl Soft Comput 101:106989 8. Alotaibi R, Flach P (2021) Multi-label thresholding for cost-sensitive classification. Neurocomputing 436:232–247 9. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The Bees algorithm—a novel tool for complex optimisation problems. Intell Prod Mach Syst 454–459 10. Pham DT, Pham QT, Ghanbarzadeh A, Castellani M (2008) Dynamic optimisation of chemical engineering processes using the Bees algorithm. IFAC Proc Vol 41:6100–6105 11. Ziarati K, Akbari R, Zeighami V (2011) On the performance of bee algorithms for resourceconstrained project scheduling problem. Appl Soft Comput 11:3720–3733 12. Fahmy AA (2012) Using the Bees algorithm to select the optimal speed parameters for wind turbine generators. J King Saud Univ-Comput Inf Sci 24:17–26 13. Yüce B, Mastrocinque E, Lambiase A, Packanather MS, Pham DT (2014) A multi-objective supply chain optimisation using enhanced Bees Algorithm with adaptive neighbourhood search and site abandonment strategy. Swarm Evol Comput 18:71–82 14. Akpinar S, ¸ Baykaso˘glu A (2014) Multiple colony bees algorithm for continuous spaces. Appl Soft Comput 24:829–841 15. Tsai HC (2014) Novel Bees algorithm: stochastic self-adaptive neighborhood. Appl Math Comput 247:1161–1172 16. Yüce B, Fruggiero F, Packianather MS, Pham DT, Mastrocinque E, Lambiase A, Fera M (2017) Hybrid genetic bees algorithm applied to single machine scheduling with earliness and tardiness penalties. Comput Ind Eng 113:842–858 17. Laili Y, Tao F, Pham DT, Wang Y, Zhang L (2019) Robotic disassembly re-planning using a two-pointer detection strategy and a super-fast Bees algorithm. Rob Comput Integr Manuf 59:130–142 18. Xu W, Tang Q, Liu J, Liu Z, Jhou Z, Pham DT (2020) Disassembly sequence planning using discrete Bees algorithm for human robot collaboration in remanufacturing. Rob Comput Integr Manuf 62:101860 19. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the Bees algorithm. Swarm Evol Comput 59:100746 20. Tapkan P, Özbakır L, Kulluk S, Baykaso˘glu A (2016) A cost-sensitive classification algorithm: BEE-Miner. Knowl-Based Syst 95:99–113 21. Kulluk S, Özbakır L, Tapkan PZ, Baykaso˘glu A (2016) Cost-sensitive meta-learning classifiers: MEPAR-miner and DIFACONN-miner. Knowl-Based Syst 98:148–161 22. Oltean M, Dumitrescu D (2021) Multi expression programming. Technical Note, Department of Computer Science, Babes-Bolyai University, RO 23. Weiss Y, Elovici Y, Rokach L (2013) The CASH algorithm-cost-sensitive attribute selection using histograms. Inf Sci 222:247–268 24. Pietraszek T (2006) Alert classification to reduce false positives in intrusion detection. PhD thesis, Computer Science, University of Freiburg, DE 25. Baykaso˘glu A, Özbakır L (2007) MEPAR-miner: multi-expression programming for classification rule mining. Eur J Oper Res 183:767–784 26. 
Yang Y, Webb GI (2009) Discretization for Naive-Bayes learning: managing discretization bias and variance. Mach Learn 74:39–74
An Application of the Bees Algorithm to Pulsating Hydroforming
Osman Öztürk, Muhammed Arif Şen, Mete Kalyoncu, and Hüseyin Selçuk Halkacı
1 Introduction

Titanium alloys, with superior properties such as high heat and corrosion resistance, are widely used in the automotive and aerospace industries. The Ti-6Al-4V alloy accounts for more than 50% of the overall industrial production of titanium alloys [1]. The advantages of Ti-6Al-4V are its high mechanical strength, high corrosion resistance, and low density. However, forming these materials is challenging because of their poor formability. Formability can be increased by high-pressure forming or warm hydroforming, but both involve high costs [2]. More feasible, low-cost processes have therefore been developed recently. The pulsating hydroforming process, which has been claimed to increase ductility at room temperature, is a candidate process for forming titanium alloys.

Studies on pulsating hydroforming (PHF) started in two different areas: tube hydroforming and sheet hydroforming. Hama et al. [3] developed a static explicit finite element method to investigate the improvement in formability of a tubular automotive component obtained by hammering tube hydroforming, which is also called pulsating hydroforming. The main conclusion of that study is that formability improves because the frictional forces are lower in tube hydroforming with pulsating pressure. Mori et al. [4] found that the effect of PHF is not due to the change in friction
79
80
O. Öztürk et al.
only in free bulging hydroforming experiments. It was concluded that uniform expansion of the tubes and avoidance of wrinkling defects were achieved by free bulging under pulsating pressure. In a series of studies, PHF was used to manufacture a T-shaped tube, a typical workpiece produced with HF [5, 6]. Yang et al. [7] examined the effect of pressure and time increments on PHF; according to that study, different pressure-increment and time-increment levels affect the uniformity and the die-filling ability. There are numerous further studies on the effect of PHF on tube hydroforming [8–13]. From a different perspective, Yang et al. [14] studied the formability characteristics of tube hydroforming by generating forming limit diagrams (FLD). The results verified the positive effect of PHF on formability. When the previous studies are examined, it becomes clear that there are very few studies on pulsating sheet HF [15], while the studies discussed above mostly concern pulsating tube HF.

In hydroforming processes, optimisation of the internal pressure and blank holder force paths has been a challenge over the past decade. Researchers have mainly focused on optimising force and pressure loading paths using several optimisation algorithms. Mirzaali et al. [16] used simulated annealing to optimise the loading path in tube hydroforming, emphasising that theoretical calculations and trial-and-error simulations are costly and time-consuming. Other researchers also used simulated annealing to optimise loading profiles [17, 18]. Another approach for determining loading paths in hydroforming is fuzzy control: some researchers have proposed real-time fuzzy control [19], while others have suggested integrating it with finite element simulations [20–22]. Finally, a few studies on the optimisation of hydroforming with other heuristic methods, such as genetic algorithms, also successfully determined the loading path [23, 24].

This study aims to optimise the amplitude and the base pressure for maximum bulge height and a uniform thickness distribution in the pulsating hydraulic bulge test (PHBT) by using the Bees Algorithm [25]. The input parameters of the test are the amplitude and base pressure of the pressure path, while the outputs are the bulge height and the thickness distribution. The bulge height in the PHBT is modelled by the curve fitting method as a second-order polynomial expression. Three models are generated for three different frequencies, and all mathematical models are validated experimentally. The thickness change at the centre of the spherical section is modelled analytically. The objective function is then constructed for maximum bulge height and minimum thickness change. After obtaining the objective function, the Bees Algorithm is applied to the process model for the first time in the literature. As a result, the bulge height is optimised with a suitable thickness distribution, and optimum values of amplitude and base pressure are found for a specific frequency.
An Application of the Bees Algorithm to Pulsating Hydroforming
81
2 Methodology

The hydraulic bulge test (HBT) is one of the two important tests used to investigate the mechanical properties and formability of sheet materials subjected to a biaxial stress state during forming. The other is the forming limit diagram (FLD); the HBT is simpler to perform, as the FLD requires strain analysis via image processing and various specimen geometries. To model the HBT analytically, membrane theory, a branch of shell theory, is generally adopted in the literature. Under this theory, the sheet metal exposed to a given pressure in a closed volume can be treated as static and, based on the assumptions stated below, the theory provides a mathematical expression for the stress in the sheet as a function of the pressure and the sheet dimensions. Thanks to membrane theory, the yield curve of a material and its mechanical properties can therefore be obtained from the HBT in the biaxial stress state [26, 27].

The most significant assumptions of membrane theory are (i) neglecting inertial forces, (ii) neglecting stresses in the thickness direction, (iii) assuming that the biaxial stress state at the surface is uniformly distributed, and (iv) assuming that an entire hemisphere is formed as a result of bulging. The validity of these assumptions depends on the dimensional characteristics of the test; the theory is considered valid when d/t0 ≥ 100, where d is the diameter of the die cavity and t0 is the initial sheet thickness [26]. A schematic representation of the membrane theory is given in Fig. 1.

The mechanical properties are examined after the HBT; in other words, it is necessary to obtain the stress–strain curve, for which the instantaneous thickness at the centre is required. However, it is technically challenging to measure the instantaneous thickness at the centre of the sheet while high pressure acts on one surface. For this reason, there are analytical approaches in the literature that relate the instantaneous thickness to the instantaneous height [28, 29]. In trials with PHBT results, the Lăzărescu approach gave the most appropriate result among these approaches, so Lăzărescu's previously validated analytical approach is used for the instantaneous thickness [29]. In this way, the instantaneous thickness can be expressed in terms of the bulge height and the parametric values of the design. In the study's continuation, the
Fig. 1 Schematic representation of the HBT
82
O. Öztürk et al.
bulge height’s mathematical expression will be found as a quadratic equation. These two expressions will be used when defining the objective function. The mathematical expression of the sheet thickness at the dome apex (center of the bulged specimen) is shown in Eq. 1. In this equation, c and α are expressed in Eqs. 2 and 3, where α max is given in Eq. 4. ( α )−2(1+cα) sin α ]/ [ ] [ / s0 αmax αmax αmax ln − ln c = ln smin sin αmax sin αmax [ ] (d ) +R 2 α = arcsin ( )2 1 d h + 2h 2 + R 2 t = t0
αmax = arcsin
1 2h max
(d 2
+R )2 +R +
d 2
h max 2
(1) (2)
(3)
(4)
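To illustrate how these relations are used, the short sketch below evaluates Eqs. 1, 3 and 4 for a given bulge height. The die cavity diameter d, fillet radius R, initial thickness t0 and the calibration constant c are hypothetical values chosen only for illustration; in practice c would be obtained from Eq. 2.

```python
import math

def half_angle(h, d, R):
    """Half-angle alpha of the spherical cap for bulge height h (Eq. 3)."""
    a = d / 2.0 + R
    return math.asin(a / (a**2 / (2.0 * h) + h / 2.0))

def apex_thickness(h, t0, c, d, R):
    """Sheet thickness at the dome apex for bulge height h (Eq. 1)."""
    alpha = half_angle(h, d, R)
    return t0 * (alpha / math.sin(alpha)) ** (-2.0 * (1.0 + c * alpha))

if __name__ == "__main__":
    d, R, t0 = 60.0, 6.5, 0.5   # mm, hypothetical test geometry
    c = 0.05                    # calibration constant from Eq. 2 (assumed value)
    for h in (5.0, 10.0, 15.0): # bulge heights in mm
        print(f"h = {h:5.1f} mm -> alpha = {math.degrees(half_angle(h, d, R)):5.1f} deg,"
              f" t = {apex_thickness(h, t0, c, d, R):.3f} mm")
```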
2.1 Design of the Experimental Data Set

The experiments were carried out according to the full factorial design matrix in the hydroforming press shown in Fig. 2. The internal pressure and bulge height were recorded every 20 ms during the experiments. For the bulge test to be carried out in accordance with the standard, the sheet should be subjected to stretching during bulging, so the surfaces in contact with the pressure plate must not slip. For this reason, a clamping force of up to a maximum of 550 kN was applied through the pressure plate during the test, and the sheet was locked by a draw bead designed on the outer front surface of the moulds. The view of the samples before and after bulging is given in Fig. 2.

The design of the experiments is given in Table 1. According to the full factorial design matrix, all combinations of the parameters are performed. Experimental designs that reduce the number of experiments, such as the Taguchi method [11], were not required, as each test took a relatively short time to perform. The experiments were carried out in three repetitions, so that the variability of the results could be determined and the accuracy of the results increased. Therefore, a total of 81 PHBT experiments were performed.
Fig. 2 The hydroforming test setup
Table 1 The full factorial design matrix for PHBT. A, f, and P0 are the amplitude, frequency, and base pressure, respectively

No | A (MPa) | f (Hz) | P0 (MPa) || No | A (MPa) | f (Hz) | P0 (MPa) || No | A (MPa) | f (Hz) | P0 (MPa)
1  | 5  | 1 | 20 || 10 | 5  | 2 | 20 || 19 | 5  | 3 | 20
2  | 5  | 1 | 40 || 11 | 5  | 2 | 40 || 20 | 5  | 3 | 40
3  | 5  | 1 | 70 || 12 | 5  | 2 | 70 || 21 | 5  | 3 | 70
4  | 15 | 1 | 20 || 13 | 15 | 2 | 20 || 22 | 15 | 3 | 20
5  | 15 | 1 | 40 || 14 | 15 | 2 | 40 || 23 | 15 | 3 | 40
6  | 15 | 1 | 70 || 15 | 15 | 2 | 70 || 24 | 15 | 3 | 70
7  | 25 | 1 | 20 || 16 | 25 | 2 | 20 || 25 | 25 | 3 | 20
8  | 25 | 1 | 40 || 17 | 25 | 2 | 40 || 26 | 25 | 3 | 40
9  | 25 | 1 | 70 || 18 | 25 | 2 | 70 || 27 | 25 | 3 | 70
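The 27 parameter combinations of Table 1 can also be generated programmatically; the minimal sketch below reproduces the design matrix and the total of 81 tests (three repetitions per combination).

```python
# Sketch: building the full factorial design of Table 1.
amplitudes     = [5, 15, 25]    # A (MPa)
frequencies    = [1, 2, 3]      # f (Hz)
base_pressures = [20, 40, 70]   # P0 (MPa)

# Nested loops in the same order as the numbering of Table 1.
design = [(A, f, P0) for f in frequencies
                     for A in amplitudes
                     for P0 in base_pressures]

print(len(design), "combinations,", len(design) * 3, "experiments in total")
```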
2.2 Obtaining Mathematical Expression for the Bulge Height via Experimental Data When determining the objective function, a mathematical model of the thickness at the dome apex and bulge height, which are the output responses of the HBT, is required. The mathematical model of thickness at the dome apex can be determined by analytical approaches in the literature. However, there is no analytical approach to define the bulge height. Therefore, the mathematical model of bulge height had to be obtained as an empirical equation. For this reason, the experimental results, which were performed according to the full factorial experiment plan, were grouped for three different frequencies, and three polynomial equations corresponding to each frequency were found. The polynomial equations were obtained using the curve
Table 2 The coefficients of the second-degree polynomial models

Model | βi0   | βi1      | βi2      | βi3        | βi4       | βi5
h1 Hz | 13.84 | −0.08535 | 0.0395   | 0.0003718  | −0.001331 | −0.0001019
h2 Hz | 14.86 | −0.01048 | −0.03516 | 0.00004136 | 0.0001226 | 0.0004304
h3 Hz | 14.67 | −0.09805 | 0.04159  | 0.0017     | 0.0004474 | −0.0005967
fitting tool of MATLAB software. The output response of the curve fitting model was the bulge height, while the variable inputs of the model were the amplitude and base pressure. The second-degree polynomial models are given in Eqs. 5–7. The coefficients (βij) are given in Table 2.

h_{1Hz} = \beta_{10} + \beta_{11} A + \beta_{12} P_0 + \beta_{13} A^2 + \beta_{14} A P_0 + \beta_{15} P_0^2    (5)

h_{2Hz} = \beta_{20} + \beta_{21} A + \beta_{22} P_0 + \beta_{23} A^2 + \beta_{24} A P_0 + \beta_{25} P_0^2    (6)

h_{3Hz} = \beta_{30} + \beta_{31} A + \beta_{32} P_0 + \beta_{33} A^2 + \beta_{34} A P_0 + \beta_{35} P_0^2    (7)
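The sketch below shows how the quadratic models of Eqs. 5–7 can be evaluated with the coefficients of Table 2. The 1 Hz row is used as an example, and the input values of A and P0 are illustrative only.

```python
# Coefficients of the 1 Hz model, taken from Table 2.
BETA_1HZ = (13.84, -0.08535, 0.0395, 0.0003718, -0.001331, -0.0001019)

def bulge_height(A, P0, beta):
    """h = b0 + b1*A + b2*P0 + b3*A^2 + b4*A*P0 + b5*P0^2 (Eqs. 5-7)."""
    b0, b1, b2, b3, b4, b5 = beta
    return b0 + b1 * A + b2 * P0 + b3 * A**2 + b4 * A * P0 + b5 * P0**2

# Illustrative evaluation (inputs chosen for demonstration only).
print(round(bulge_height(A=5, P0=40, beta=BETA_1HZ), 2))
```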
2.3 Validation of Mathematical Modelling

The validation of the mathematical model was performed by comparing the real system response with the response obtained from the model. The model's performance was assessed by calculating the root mean square error (RMSE) and the mean percentage error (MPE). The experimental and calculated bulge height results are given in Table 3. The RMSEs of the quadratic models for 1, 2 and 3 Hz are 0.12, 0.09, and 0.08, respectively, and the MPEs of the 1, 2, and 3 Hz quadratic models are 2.18%, 1.65%, and 1.25%, respectively. According to these results, the developed quadratic models successfully predicted the bulge height and were sufficient for use in the objective function.
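As a minimal sketch, the two validation measures can be computed as follows using the 1 Hz data of Table 3 and their usual textbook definitions; the MPE obtained this way comes out close to the 2.18% quoted above, while the RMSE shown here is the standard unnormalised form.

```python
import math

# Experimental and calculated bulge heights for the 1 Hz model (Table 3).
h_exp = [14.81, 13.52, 14.58, 13.65, 14.52, 14.61, 14.20, 14.19, 14.26]
h_cal = [14.29, 14.12, 14.50, 14.22, 14.07, 14.49, 14.15, 14.03, 14.49]

rmse = math.sqrt(sum((e - c) ** 2 for e, c in zip(h_exp, h_cal)) / len(h_exp))
mpe = 100.0 * sum(abs(e - c) / e for e, c in zip(h_exp, h_cal)) / len(h_exp)
print(f"RMSE = {rmse:.2f} mm, MPE = {mpe:.2f} %")
```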
2.4 Applying the Bees Algorithm to the Hydroforming Process

The Bees Algorithm (BA), proposed by Pham et al. [25], is a swarm-based metaheuristic search algorithm that mimics the food-source search behaviour of honey bees. In the BA, bees share the distance, direction, and amount of the food source and the
Table 3 The experimental and calculated bulge height results. The RMSE and MPE are calculated and given for each quadratic model

No                           | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8     | 9
Experimental h Exp-1 Hz (mm) | 14.81 | 13.52 | 14.58 | 13.65 | 14.52 | 14.61 | 14.20 | 14.19 | 14.26
Calculated h Cal-1 Hz (mm)   | 14.29 | 14.12 | 14.50 | 14.22 | 14.07 | 14.49 | 14.15 | 14.03 | 14.49
RMSE (1 Hz) = 0.12, MPE (%) = 2.18
Experimental h Exp-2 Hz (mm) | 13.96 | 14.65 | 15.46 | 14.26 | 13.68 | 14.07 | 13.74 | 14.44 | 14.00
Calculated h Cal-2 Hz (mm)   | 14.12 | 14.66 | 15.31 | 13.75 | 14.01 | 14.26 | 14.11 | 14.12 | 13.97
RMSE (2 Hz) = 0.09, MPE (%) = 1.65
Experimental h Exp-3 Hz (mm) | 15.07 | 14.62 | 14.55 | 14.13 | 14.94 | 13.83 | 14.06 | 14.45 | 14.07
Calculated h Cal-3 Hz (mm)   | 14.86 | 15.02 | 14.37 | 14.31 | 14.56 | 14.04 | 14.10 | 14.44 | 14.05
RMSE (3 Hz) = 0.08, MPE (%) = 1.25
other important information about the best position of the source. The BA continues to attract the attention of many researchers and maintains its place in the current literature. It can be applied successfully to optimisation problems in different research areas and offers satisfactory performance in both global and local search. The remarkable local search capacity of the BA depends mainly on the size and shape of the neighbourhood and on the number of allowed stagnation cycles, which distinguishes it from other heuristic methods. The neighbourhood search is based on the waggle dance that honeybees use to communicate and cooperate [30]. The authors have performed many optimisation studies using the BA on different subjects [31–36]; therefore, all optimisation steps in this study, including the determination of the BA parameters and the design of the objective function, were performed based on the authors' recent experience with the BA.

The general flow chart of this study is given in Fig. 3. First, the PHBTs are performed according to the full factorial experimental design with the untuned, pre-designed input parameters given in Table 1. Then, three quadratic equations of the bulge height for the three frequencies are built by fitting the experimental results with the curve fitting method. The quadratic equations and the analytical thickness formula are combined in the objective function of the BA process, designed according to the primary goal. The motivation of the objective function is to
increase the bulge height in the PHBT while ensuring a uniform thickness distribution. For three different frequencies of the pressure profile, the optimum base pressure (P0) and amplitude (A) parameters are searched with the BA, seeking the maximum bulge height (h) with a uniform thickness distribution. The parameters of the BA are given in Table 4. They were obtained by systematic trial and error and by using the authors' knowledge of the BA [33–36]. All parameters were chosen at the smallest values that do not degrade the optimisation performance, so that each iteration is fast and robust and the computational cost and time remain minimal.
J = \frac{w_1 \cdot h_{nor}}{w_2 \cdot t_{nor}}    (8)
Fig. 3 General flow chart of the optimisation and modelling of PHBT
Table 4 Parameters of the BA

Description                                            | Value
n — Number of scout bees                               | 10
m — Number of best selected patches                    | 4
n1 — Number of best recruited bees around best patches | 7
n2 — Number of best recruited bees around elite patches| 8
ngh — Patch radius for neighbourhood search            | 0.01
E — Number of elite selected patches                   | 2
Fig. 4 Convergence graph of BA for 1, 2, and 3 Hz frequencies
In the objective function (J) in Eq. 8, the weighting constants w1 = 13.9 and w2 = 0.49 are determined so that the effects of hnor and tnor are as similar as possible. The normalised expressions of hnor and tnor are given in Eqs. 9–10. In the normalised expressions of the bulge height and the thickness at the dome apex (hnor and tnor), min(h), max(h), min(t), and max(t) are 12.29, 15.68, 0.45 and 0.54 mm, respectively.

h_{nor} = \frac{h - \min(h)}{\max(h) - \min(h)}    (9)

t_{nor} = \frac{t - \min(t)}{\max(t) - \min(t)}    (10)
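The following sketch implements the normalisation of Eqs. 9–10 and the weighted terms w1·hnor and w2·tnor that enter the objective J of Eq. 8, using the bounds and weights quoted in the text; the trial values of h and t are illustrative only.

```python
# Bounds and weights reported in the text.
H_MIN, H_MAX = 12.29, 15.68   # mm
T_MIN, T_MAX = 0.45, 0.54     # mm
W1, W2 = 13.9, 0.49

def normalise(x, lo, hi):
    """Min-max normalisation of Eqs. 9-10."""
    return (x - lo) / (hi - lo)

def weighted_terms(h, t):
    """Weighted normalised bulge height and apex thickness entering Eq. 8."""
    return W1 * normalise(h, H_MIN, H_MAX), W2 * normalise(t, T_MIN, T_MAX)

print(weighted_terms(h=15.18, t=0.46))   # illustrative candidate point
```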
Figure 4 shows the descent of the objective function over the iterations for the three frequency values. The graphs indicate the convergence performance of the BA optimisation process; the BA converges successfully within 300 iterations. The curve fitting modelling and the BA optimisation were performed in MATLAB on a personal computer with an Intel® i5-6200U 2.8 GHz processor and 8 GB of RAM.
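For readers who wish to experiment, the sketch below shows a minimal Bees Algorithm loop with the parameter values of Table 4. The objective function and the variable bounds are placeholders: in the chapter, the objective combines the bulge-height models and the analytical thickness expression, and the two variables are the amplitude A and the base pressure P0.

```python
import random

def objective(x):                         # placeholder objective to be minimised
    return sum(v * v for v in x)

BOUNDS = [(-5.0, 5.0), (-5.0, 5.0)]       # per-variable (lower, upper), assumed
N, M, E = 10, 4, 2                        # scouts, best patches, elite patches (Table 4)
N1, N2 = 7, 8                             # recruits per best / elite patch (Table 4)
NGH = 0.01                                # patch radius as a fraction of the range
ITERATIONS = 300

def random_solution():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def neighbour(centre):
    # Local (waggle-dance) search: sample inside the patch around `centre`.
    return [min(hi, max(lo, c + random.uniform(-1, 1) * NGH * (hi - lo)))
            for c, (lo, hi) in zip(centre, BOUNDS)]

population = [random_solution() for _ in range(N)]
for _ in range(ITERATIONS):
    population.sort(key=objective)                    # best candidates first
    new_population = []
    for rank, site in enumerate(population[:M]):      # exploit the M best patches
        recruits = N2 if rank < E else N1             # more bees on elite patches
        candidates = [neighbour(site) for _ in range(recruits)] + [site]
        new_population.append(min(candidates, key=objective))
    # Remaining scouts explore the search space at random (global search).
    new_population += [random_solution() for _ in range(N - M)]
    population = new_population

best = min(population, key=objective)
print(best, objective(best))
```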
3 Results and Discussion

The results of the experimental plan given previously in Table 1 are shown in Fig. 5. It is clear that every combination of parameters used in the PHBT improves the bulge height compared with the monotonous bulge test. In addition, the variation of the bulge height with the parameters is not linear. Therefore, the BA is applied according to the procedure discussed in the previous section. The results of the BA are given in Table 5. As shown in Table 5, the maximum thickness change, which is a critical value for the failure of the sheet, is reduced, and therefore a more uniform thickness is obtained with an acceptable drop in bulge height.

The optimum parameters (f, A, P0) obtained from the BA provide only the optimum bulge height and the optimum thickness at the dome apex. Therefore, the optimal numerical results are validated experimentally, and the thickness distribution along the
Fig. 5 The overall result of the experimental plan
Table 5 The optimisation results

                                  | Monotonous | Max bulge height (PHBT) | Optimum bulge height (PHBT)
Frequency (Hz)                    | N/A        | 2     | 2
Amplitude (MPa)                   | N/A        | 0.5   | 0.545
Base pressure (MPa)               | N/A        | 7     | 6.678
Bulge height (mm)                 | 13.39      | 15.46 | 15.18
Final thickness at dome apex (mm) | 0.5        | 0.45  | 0.46
Maximum thickness change (mm)     | 0.04       | 0.09  | 0.082
bulging region is measured with a magnetic ball thickness gauge. As seen in Fig. 6, a more uniform thickness is obtained with PHBT-1 Hz (magenta) compared with the monotonous test (black) and the pulsating test with maximum thinning (yellow). Similarly uniform thickness distributions are obtained for the PHBT-2 Hz (green) and PHBT-3 Hz (blue) frequencies in Figs. 7 and 8, respectively. Thanks to the BA approach, the sheet is thus more likely to avoid tearing without sacrificing bulge height.
4 Conclusion

In this study, the curve fitting method and the BA were proposed to model and optimise the pulsating hydraulic bulge test of Ti-6Al-4V Grade 5 sheet in terms of increasing ductility. First, PHBT experiments were performed according to the full factorial design matrix. A quadratic model was built using these data via the curve
Fig. 6 The thickness distribution of monotonous, maximum thinning, and optimised specimens for a 1 Hz frequency
fitting toolbox. Then, the quadratic model was verified against a series of experimental data. In the optimisation phase, the maximum bulge height was found using the BA while also seeking a more uniform thickness distribution. Finally, the thickness distributions were measured by repeating the experiments with the optimum parameters found. As a result of all the studies, the following conclusions can be drawn.

• The optimal results obtained with the BA indicate that the proposed method ensures less thinning at the dome apex with a bulge height similar to that of the traditional monotonous method.
• Thus, a more uniform thickness distribution (the main critical quality indicator in hydroforming) was obtained, with an acceptable loss in bulge height.
• In the optimisation, Δt is improved by approximately 9%. The bulge height increased by 15% and 13% for the experimental result with the maximum bulge height and the optimal result obtained from the BA, respectively.
Fig. 7 The thickness distribution of monotonous, maximum thinning, and optimised specimens for 2 Hz frequency
Fig. 8 The thickness distribution of monotonous, maximum thinning, and optimised specimens for 3 Hz frequency
• After the experimental and optimisation studies, as a general result, it was found that a remarkably uniform thickness and a suitable bulge height can be ensured simultaneously by applying BA optimisation.

Considering potential future research, hydroforming, as an advanced manufacturing method, has not received adequate attention from the point of view of process optimisation. The application of the BA and the modelling proposed in this study can be transferred to industrial hydroforming operations such as hydromechanical deep drawing and sheet hydroforming with dies. In this way, it may be possible to run advanced industrial applications more efficiently through optimisation. There are only a limited number of metaheuristic-based optimisation studies on hydroforming processes, such as genetic algorithms [24] and simulated annealing (SA) methods [16–18], because traditional adjustment approaches are commonly used instead. SA effectively converges to global minima and escapes from local minima [16]. To contribute to the literature, this study investigates the effect of the "waggle dance" neighbourhood search on local optimisation performance; the experimental results show that the neighbourhood search can provide a more uniform thickness distribution. In addition, hydroforming processes involve various parameters (shape, material, temperature, etc.), which makes it difficult to compare metaheuristic methods effectively. Because comparing the optimisation approach proposed in this study with the other, limited studies would not be consistent, comparative studies based on various metaheuristic methods for the hydroforming process are still in progress.

Acknowledgements The authors declare that they have no conflicts of interest.
References
1. Li FQ, Mo JH, Li JJ, Huang L, Zhou HY (2013) Formability of Ti–6Al–4V titanium alloy sheet in magnetic pulse bulging. Mater Des 52:337–344
2. Liu G, Wu Y, Wang JL, Zhang WD (2014) Progress on high pressure pneumatic forming and warm hydroforming of Titanium and Magnesium alloy tubular components. Mater Sci Forum 783:2456–2461
3. Hama T, Asakawa M, Fukiharu H, Makinouchi A (2004) Simulation of hammering hydroforming by static explicit FEM. ISIJ Int 44(1):123–128
4. Mori K, Maeno T, Maki S (2007) Mechanism of improvement of formability in pulsating hydroforming of tubes. Int J Mach Tools Manuf 47(6):978–984
5. Loh-Mousavi M, Mori K, Hayashi K, Maki S, Bakhshi M (2007) 3-D finite element simulation of pulsating T-shape hydroforming of tubes. Key Eng Mater 340:353–358
6. Loh-Mousavi M, Bakhshi-Jooybari M, Mori KI, Hyashi K (2008) Improvement of formability in T-shape hydroforming of tubes by pulsating pressure. Proc Instit Mech Eng Part B: J Eng Manuf 222(9):1139–1146
7. Yang LF, Chen FJ (2009) Investigation on the formability of a tube in pulsating hydroforming. Mater Sci Forum 628:617–622
8. Zhang S, Yuan A, Wang B, Zhang H, Wang Z (2009) Influence of loading path on formability of 304 stainless steel tubes. Sci China Ser E: Technol Sci 52(8):2263–2268
9. Xu Y, Zhang SH, Zhu QX, Cheng M, Song HW, Zhang GJ (2013) Effect of process parameters on hydroforming of stainless steel tubular components with rectangular section. Mater Sci Forum 749:67–74
10. Xu Y, Zhang S, Cheng M, Song H, Zhang X (2014) Application of pulsating hydroforming in manufacture of engine cradle of austenitic stainless steel. Procedia Eng 81:2205–2210
11. Ashrafi A, Khalili K (2016) Investigation on the effects of process parameters in pulsating hydroforming using Taguchi method. Proc Instit Mech Eng Part B: J Eng Manuf 230(7):1203–1212
12. Yang L, Wu C, He Y (2016) Dynamic frictional characteristics for the pulsating hydroforming of tubes. Int J Adv Manuf Technol 86(1):347–357
13. Ma J, Yang L, Liu J, Chen Z, He Y (2021) Evaluating the quality of assembled camshafts under pulsating hydroforming. J Manuf Process 61:69–82
14. Yang L, Tang D, He Y (2017) Describing tube formability during pulsating hydroforming using forming limit diagrams. J Strain Anal Eng Des 52(4):249–257
15. Hu G, Pan C (2021) Investigation on deformation behavior of magnesium alloy sheet AZ31B in pulsating hydroforming. Proc Instit Mech Eng Part B: J Eng Manuf 235(1–2):198–206
16. Mirzaali M, Seyedkashi SMH, Liaghat GH, Naeini HM, Moon YH (2012) Application of simulated annealing method to pressure and force loading optimisation in tube hydroforming process. Int J Mech Sci 55(1):78–84
17. Kadkhodayan M, Moghadam AE (2013) Optimisation of load paths in X- and Y-shaped hydroforming. Int J Mater Form 6(1):75–91
18. Hashemi A, Hoseinpour-Gollo M, Seyedkashi SH, Pourkamali-Anaraki A (2017) A new simulation-based metaheuristic approach in optimisation of bilayer composite sheet hydroforming. J Braz Soc Mech Sci Eng 39(10):4011–4020
19. Manabe KI, Chen X, Kobayashi D, Tada K (2014) Development of in-process fuzzy control system for T-shape tube hydroforming. Procedia Eng 81:2518–2523
20. Teng B, Li K, Yuan S (2013) Optimisation of loading path in hydroforming T-shape using fuzzy control algorithm. Int J Adv Manuf Technol 69(5–8):1079–1086
21. Yaghoobi A, Bakhshi-Jooybari M, Gorji A, Baseri H (2016) Application of adaptive neuro fuzzy inference system and genetic algorithm for pressure path optimisation in sheet hydroforming process. Int J Adv Manuf Technol 86(9):2667–2677
22. Öztürk E, Türköz M, Halkacı HS, Koç M (2017) Determination of optimal loading profiles in hydromechanical deep drawing process using integrated adaptive finite element analysis and fuzzy control approach. Int J Adv Manuf Technol 88(9–12):2443–2459
23. Feng YY, Luo ZA, Su HL, Wu QL (2018) Research on the optimisation mechanism of loading path in hydroforming process. Int J Adv Manuf Technol 94(9):4125–4137
24. Chebbah MS, Lebaal N (2020) Tube hydroforming optimisation using a surrogate modeling approach and genetic algorithm. Mech Adv Mater Struct 27(6):515–524
25. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. Intell Prod Mach Syst 454–459
26. Alharthi H, Hazra S, Alghamdi A, Banabic D, Dashwood R (2018) Determination of the yield loci of four sheet materials (AA6111-T4, AC600, DX54D+Z, and H220BD+Z) by using uniaxial tensile and hydraulic bulge tests. Int J Adv Manuf Technol 98(5):1307–1319
27. Giuliano G (2011) Superplastic forming of advanced metallic materials: methods and applications. Elsevier
28. Hill RC (1950) A theory of the plastic bulging of a metal diaphragm by lateral pressure. London, Edinburgh, Dublin Philosophical Mag J Sci 41(322):1133–1142
29. Lăzărescu L, Comşa DS, Banabic D (2011) Validation of a new methodology for determination of stress–strain curves through bulge test. Acta Technica Napocensis-Series: Appl Math Mech Eng 54(2)
30. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the bees algorithm. Swarm Evol Comput 59:100746
31. Pham DT, Kalyoncu M (2009) Optimisation of a fuzzy logic controller for a flexible single-link robot arm using the Bees Algorithm. In: 7th IEEE international conference on industrial informatics
32. Fahmy AA, Kalyoncu M, Castellani M (2012) Automatic design of control systems for robot manipulators using the bees algorithm. Proc Instit Mech Eng Part I: J Syst Control Eng 226(4):497–508
33. Şen MA, Tinkir M, Kalyoncu M (2018) Optimisation of a PID controller for a two-floor structure under earthquake excitation based on the bees algorithm. J Low Frequency Noise, Vib Active Control 37(1):107–127
34. Öztürk O, Kalyoncu M, Ünüvar A (2018) Multi objective optimisation of cutting parameters in a single pass turning operation using the bees algorithm. In: 1st international conference on advances in mechanical and mechatronics engineering
35. Bilgic HH, Şen MA, Yapici A, Yavuz H, Kalyoncu M (2021) Meta-heuristic tuning of the LQR weighting matrices using various objective functions on an experimental flexible arm under the effects of disturbance. Arab J Sci Eng 46(8):7323–7336
36. Onder A, Incebay O, Şen MA, Yapici R, Kalyoncu M (2021) Heuristic optimization of impeller sidewall gaps based on the bees algorithm for a centrifugal blood pump by CFD. Int J Artif Organs 44(10):765–772
Production Equipment Optimisation
Shape Recognition for Industrial Robot Manipulation with the Bees Algorithm Marco Castellani, Luca Baronti, Senjing Zheng, and Feiying Lan
1 Introduction Reliable grasping and manipulation procedures are fundamental prerequisites for robotic systems in manufacturing. The available methods exploit properties of the objects such as their geometry [1] or dynamics [2]. In all cases, the shape of the object must be estimated, typically from 3D models built from stereo camera images or depth sensor scans. Regardless of the acquisition method, 3D models are customarily displayed using data structures called point clouds (PCs). In many applications, the shape of products can be approximated with a set of geometrical primitives, such as spheres, boxes, and cylinders. This process of fitting primitive shapes to point clouds is called primitive fitting. The main problem in primitive fitting is the efficiency of the identification algorithm due to the large number of points contained in point clouds. In addition, primitive fitting algorithms should avoid problem-specific assumptions that limit their generality. The literature on primitive fitting is wide, although the solutions cluster around two popular algorithms: the Hough transform (HT) [3] and random sample consensus (RANSAC) [4].
The HT was originally developed to detect lines in 2D images and later generalised to other shapes, such as circles and ellipses [5]. The algorithm fits a family of primitives to each point in the cloud. Each primitive is described by its parameterisation, which defines the size, position, and orientation of the shape. The algorithm extracts the primitives that fit the largest number of points. The main shortcoming of the HT is its computational complexity, which has limited its application to mainly offline applications and the detection of elementary shapes such as lines and circles [6], planes [7], or spheres [8]. Common solutions to reduce the complexity of the HT include the quantisation of the parameter space [9], breaking down the detection process into simpler subtasks [10], and considering only a randomly picked selection of the points [11]. RANSAC parameterises candidate primitive shapes based on a minimal set of randomly selected points from the cloud. These candidates are then evaluated against all the points in the cloud, and the shape that fits the largest number of points is picked. To deal with noisy models, shape fitting is evaluated according to a predefined approximation tolerance. Various methods have been proposed in the literature to reduce the computational complexity and hence speed up the RANSAC procedure, such as R-RANSAC [12] and the fast RANSAC implementation by Schnabel et al. [13]. In general, the performance of the HT and RANSAC depends on preliminary settings such as the quantisation of the shape parameters in the HT and the approximation tolerance in RANSAC. These settings define the trade-off between the accuracy and speed of execution of the algorithm. In this study, primitive fitting will be regarded as a parameter optimisation problem. That is, the sought solution is the set of parameters that optimises the fit of a primitive shape to the points of the cloud. The goodness of fit of a candidate solution will be assessed using a shape-specific fitness function, and the Bees Algorithm (BA) [14] will be employed as an optimiser. The performance of the BA will be benchmarked against the performance of an evolutionary algorithm (EA) [15]. The advantages of the BA metaheuristics include the generality of the approach and the efficient search in the parameter space. Furthermore, the search strategy of the BA is problem-independent, as long as the goodness of fit of a primitive can be quantitatively expressed via a function. Additionally, the BA does not rely on empirical knowledge such as RANSAC or HT, which need information on the noise level to set the approximation threshold (RANSAC) or shape parameter quantisation (HT). The chapter is organised as follows. Section 2 discusses the main literature on the topic. Section 3 introduces the proposed algorithm and the EA. Section 4 presents the experimental setup. The experimental results are reported in Sect. 5, while Sect. 6 concludes the chapter and gives indications for further work.
2 Literature Review

In this study, primitive fitting is approached as a parameter optimisation problem. This approach was used in a number of studies in the literature for the recognition of shapes in 2D images and 3D PC models. In the latter case, the points were typically generated using edge detection procedures, and the primitives fitted to the contours of the objects. Roth and Levine [16] and Lutton and Martinez [17] used EAs to evolve a population of candidate 2D primitives, such as lines, circles, ellipses, and rectangles. Similar to RANSAC, the fitness evaluation function included an approximation tolerance to deal with noisy images. A similar approach was also proposed by Gotardo et al. [18] to detect planes from range images. In these studies, the individuals were represented using a minimal set of parameters. Ayala-Ramirez et al. [19] used three edge points to encode circles in 2D images and measured the fit of the candidate solutions by the number of pixel points lying on the perimeter of the shape. A standard genetic algorithm was used as the optimisation method. Three edge points were also used by Cuevas et al. [20] to encode circles, which were optimised employing learning automata optimisation [21]. An alternative study [22] utilised an artificial bee colony instead of learning automata optimisation. A multi-population EA was proposed by Yao et al. [23] to detect ellipses in 2D images, while Yuen and Ma [24] utilised least squares and a multimodal genetic algorithm to detect multiple instances of different kinds of primitive shapes in 2D images. González et al. [25] also focused on the detection of multiple instances of shapes (circles) in 2D images.

With the increased availability of depth sensors, the primitive fitting literature saw a growing number of applications to 3D PC scenes. Ugolotti et al. [26] encoded primitives as the rigid transformation matrix that described the position and orientation of the sought shape and calculated the fitness of a candidate solution from the distance of the points in the cloud from the surface of the shape. Their study evaluated the performance of particle swarm optimisation (PSO) [27] and differential evolution [28] as optimisers for the primitive fitting problem. PSO was also used by Wang et al. [29] to detect planes in PC scenes. The main limitation of evolutionary and swarm intelligence procedures is their computational complexity, which often necessitates parallel implementation in graphics processing units (GPUs).
3 Primitive Fitting Methods

Primitive fitting can be regarded as a numerical optimisation problem where primitive shapes are the candidate solutions, and the fitness of a shape is evaluated by how well it fits the points in the cloud, that is, by the distance of the points from its surface. Baronti et al. [30] used the BA to fit three different types of primitives, namely spheres, boxes and cylinders.
Table 1 Number of parameters used to describe a primitive in 3D space

Parameter | Sphere | Box | Cylinder
Position  | 3      | 3   | 3
Rotation  | –      | 4   | 4
Height    | –      | 1   | 1
Width     | –      | 1   | –
Depth     | –      | 1   | –
Radius    | 1      | –   | 1
Total     | 4      | 10  | 9
In this chapter, the BA approach will be outlined, and its accuracy compared to that of an EA. The BA and EA use the same representation scheme, fitness function, and local search operator. The key difference between the two optimisation algorithms is the metaheuristics utilised, which determines how the results of the local search (the heuristics) are used. For a more detailed description of the two algorithms, the reader is referred to the literature [30].
3.1 Representation Scheme Any geometrical primitive can be described in a 3D scene by a set of parameters that define its position, orientation, and size. Different shapes are described by a different number and type of parameters. The three primitives used in this study are listed in Table 1. To represent rotations, unit quaternions are used because of their stability properties in 3D space. A quaternion is fully determined by four scalar values.
3.2 Fitness Function

For each candidate solution, the fitness function calculates a score defining how well the shape fits the points in the cloud. The higher the score, the better the candidate primitive fits the 3D point cloud. Let PC = {p_1, ..., p_N} be a point cloud of N points. The fitness function takes into account the distance δ(p_i, I) between each point p_i ∈ PC and the closest surface of the primitive I, as well as the concordance NC(p_i, I) of the normals. The fitness of a candidate primitive I in a point cloud PC is given by the following function:

F(I, PC) = \frac{1}{N} \sum_{i=1}^{N} \frac{NC(p_i, I)}{1 + \frac{\delta(p_i, I)^2}{\delta_{max}}}    (1)
where δmax is a normalisation factor. Equation 1 holds for every primitive; however, the distance δ and the normal concordance NC are shape-specific. In the case of a sphere, the distance δ(p_i, S) between a point p_i and the sphere S is computed as:

\delta(p_i, S) = \left| d(p_i, S_c) - S_r \right|    (2)

where d is the Euclidean distance and S_c, S_r are the centre and radius of S, respectively. In the case of the box and cylinder, δ(p_i, I) is computed as the distance between point p_i ∈ PC and its projection π(p_i, I) onto the surface of the primitive:

\delta(p_i, I) = \left| d(p_i, \pi(p_i, I)) \right|    (3)

The concordance of the normal NC(p_i, I) is evaluated as the cosine similarity between the normal N(p_i) at point p_i ∈ PC and the normal N(π(p_i, I)) of its projection on the candidate primitive surface:

NC(p_i, I) = \max\left( \frac{N(p_i) \cdot N(\pi(p_i, I))}{\| N(p_i) \| \, \| N(\pi(p_i, I)) \|},\, 0 \right)    (4)
with · as the dot product. This formulation promotes primitives with surface normals in the same direction as the normals of nearby points.
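As an illustration of Eqs. 1, 2 and 4, the short sketch below evaluates the fitness of a sphere candidate against a point cloud. The arrays of points and unit normals are assumed inputs, and δmax is set to 1 for simplicity.

```python
import numpy as np

def sphere_fitness(points, normals, centre, radius, delta_max=1.0):
    """Fitness of a sphere candidate (Eq. 1) with sphere-specific distance (Eq. 2)."""
    vecs = points - centre                        # vectors from the sphere centre
    dist_to_centre = np.linalg.norm(vecs, axis=1)
    delta = np.abs(dist_to_centre - radius)       # Eq. 2: distance to the surface
    surf_normals = vecs / dist_to_centre[:, None] # outward normal at the projection
    cos_sim = np.einsum("ij,ij->i", normals, surf_normals)  # assumes unit normals
    nc = np.maximum(cos_sim, 0.0)                 # Eq. 4: clamp negative concordance
    return np.mean(nc / (1.0 + delta**2 / delta_max))       # Eq. 1

# Toy usage: points sampled exactly on a unit sphere score close to 1.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(sphere_fitness(pts, pts.copy(), centre=np.zeros(3), radius=1.0))
```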
3.3 Local Search Operator

Local searches in the BA and mutations in the EA are performed using the following operator:

g_i^I = g_i^I + 0.1 (u_i - l_i) \rho, \qquad \rho \sim U(-k_I, k_I)    (5)
where g_i^I is the value of the i-th of the n parameters defining primitive I, U is the uniform distribution, and 2k_I is the width of the interval from which the random number ρ is drawn. The variables u_i and l_i are the upper and lower bounds of the i-th parameter, respectively. For each primitive, the shape-specific parameters are divided into three groups: position, orientation, and size. The probability pg that a group of parameters is selected for mutation depends on the type of shape. The first step of the procedure is to randomly select the group of parameters to modify. The second step involves selecting which parameters within a group will be mutated. Within a group, the probability pf that a given parameter is selected for mutation depends on the type of shape. The probabilities pg and pf are detailed in Table 2. The third step is where a parameter selected for mutation is actually changed according to Eq. 5. The fourth step is to perform some 'repair' of the mutated parameters. Namely, since orientation is defined via a unit quaternion, if the orientation parameters are
Table 2 Shape modification probabilities

Group selection probability pg:
Feature  | Sphere | Box  | Cylinder
Centre   | 1      | 0.3* | 0.33*
Rotation | –      | 0.3* | 0.33*
Size     | 1      | 0.4* | 0.33*

Parameter selection probability pf:
Feature  | giI    | Sphere | Box   | Cylinder
Centre   | X      | 1      | 0.7   | 0.7
Centre   | Y      | 1      | 0.7   | 0.7
Centre   | Z      | 1      | 0.7   | 0.7
Rotation | w1 (i) | –      | 0.7   | 0.7
Rotation | w2 (j) | –      | 0.7   | 0.7
Rotation | w3 (k) | –      | 0.7   | 0.7
Rotation | w4     | –      | 0.7   | 0.7
Size     | Height | –      | 0.33* | 0.7
Size     | Width  | –      | 0.33* | –
Size     | Depth  | –      | 0.33* | –
Size     | Radius | 1      | –     | 0.7

Probabilities marked with an asterisk are mutually exclusive (e.g., either the centre, or the orientation, or the size of a cylinder is changed)
changed, the quaternion representation is then normalised. If the size of a box is modified, one extreme is kept unchanged, and the other is stretched/shrunk. The centre of the primitive is then computed again.
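A minimal sketch of this four-step mutation procedure for the cylinder primitive is given below. The group and parameter probabilities follow Table 2, while the parameter bounds and the interval half-width k are assumed values used only for illustration.

```python
import random

GROUPS = {            # pg for the cylinder: mutually exclusive group choice
    "centre":   (0.33, ["x", "y", "z"]),
    "rotation": (0.33, ["w1", "w2", "w3", "w4"]),
    "size":     (0.33, ["height", "radius"]),
}
PF = 0.7              # pf: probability of mutating each parameter in the group
K = 1.0               # half-width of the uniform interval for rho (assumed)
BOUNDS = {p: (-10.0, 10.0) for p in
          ["x", "y", "z", "w1", "w2", "w3", "w4", "height", "radius"]}

def mutate(solution):
    mutant = dict(solution)
    # Step 1: pick one parameter group according to pg.
    group = random.choices(list(GROUPS), weights=[g[0] for g in GROUPS.values()])[0]
    # Steps 2-3: mutate each parameter of the group with probability pf (Eq. 5).
    for name in GROUPS[group][1]:
        if random.random() < PF:
            lo, hi = BOUNDS[name]
            rho = random.uniform(-K, K)
            mutant[name] = mutant[name] + 0.1 * (hi - lo) * rho
    # Step 4 ('repair'): re-normalise the quaternion if the rotation changed.
    if group == "rotation":
        norm = sum(mutant[w] ** 2 for w in ("w1", "w2", "w3", "w4")) ** 0.5
        for w in ("w1", "w2", "w3", "w4"):
            mutant[w] /= norm
    return mutant

cylinder = {"x": 0.0, "y": 0.0, "z": 0.0, "w1": 1.0, "w2": 0.0, "w3": 0.0,
            "w4": 0.0, "height": 5.0, "radius": 1.0}
print(mutate(cylinder))
```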
4 Experimental Set Up The BA and the EA were tested on three datasets according to a purpose-built error function. To perform an unbiased evaluation of the results, the error function is different from the goodness of fit function used by the individual algorithms. For each dataset, 40 independent optimisation trials were run for each algorithm, and the results were analysed statistically. The three datasets are available at the following GitHub repository: https://github.com/lucabaronti/BA-Primitive_Fitting_Dataset.
4.1 Datasets

Each dataset contained 591 PC models of 10³ data points, consisting of 181 spheres, 220 boxes, and 190 cylinders. Each model had different proportions and orientations. The details of the models are shown below:

• Spheres: The radius was varied from 1 to 10 units in steps of 0.05.
• Boxes: The width, height, and depth were varied from 1 to 10 units in steps of 1, and all possible combinations of these levels were taken (full factorial design). The orientation was randomly determined.
• Cylinders: The radius was varied from 0.5 to 5 units in increments of 0.25, the height was varied from 1 to 10 units in steps of 1, and the radius and height took all possible combinations of these levels (full factorial design). The orientation was randomly determined.

For convenience, all the point clouds were centred at the origin. The point clouds were generated from the primitive shapes. Namely, first, the primitive shape was created and rotated to a random orientation. Then, the point cloud was created by uniformly sampling 10³ points from the surface of the shape. Finally, the position of each point was modified according to some simulated sensor error. The following three sets of PCs were generated with different levels of simulated error:

• Clean dataset: No error. The data points were sampled exactly from the surface of the primitive shape models.
• Error dataset: A perturbation within a 0.1 unit radius was applied to the position of each data point in the clouds.
• Double error: A perturbation within a 0.2 unit radius was applied to the position of each data point in the clouds.

In the experiments, the type of sought primitive was assumed to be known. The goal was to find the size and orientation of the shapes from the point clouds. In other words, three instances of the BA and EA algorithms were created, each instance specialised to detect one of the three types of shape. Each specialised algorithm was only shown shapes of the type it was meant to fit. For example, if the algorithm was specialised to find the size and orientation of cylinders, only the 190 cylinder-shaped point clouds were used.
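The sketch below shows one way such a synthetic model could be generated: points sampled uniformly on the surface of a sphere and then perturbed within a given radius, mimicking the clean and error datasets. The radius and noise level shown are example values taken from the ranges described above.

```python
import numpy as np

def make_sphere_cloud(radius, n_points=1000, noise_radius=0.0, seed=0):
    """Sample n_points uniformly on a sphere surface, optionally perturbed."""
    rng = np.random.default_rng(seed)
    directions = rng.normal(size=(n_points, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    points = radius * directions                      # exact surface samples
    if noise_radius > 0.0:
        # Random offset of length <= noise_radius applied to each point.
        offsets = rng.normal(size=(n_points, 3))
        offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)
        points += offsets * rng.uniform(0.0, noise_radius, size=(n_points, 1))
    return points

clean = make_sphere_cloud(radius=5.0)                      # 'clean' dataset
noisy = make_sphere_cloud(radius=5.0, noise_radius=0.1)    # 'error' dataset
print(clean.shape, noisy.shape)
```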
4.2 Error Evaluation Function To have a fair evaluation of the goodness of the solutions found by the algorithms, three issues need to be considered. First, standard metrics such as the Euclidean distance are not suitable, since the evaluation of the goodness of fit involves parameters of mixed units (e.g., degrees for orientation, linear units for size and position). Second, the centre and orientation have a larger impact on the fit of a shape than the size. Finally, the evaluation function needs to be invariant to size, since the size of the shapes varied up to one order of magnitude in the datasets. The error evaluation function developed in this study works as follows. At the end of the search, the best fitting solution F obtained by an algorithm is compared to the reference shape I used to generate the PC. The evaluation function considers three segments corresponding to the height, width, and depth of the candidate shape F and compares them to those of the reference shape I . The segments
104
M. Castellani et al.
Fig. 1 The found shape F is compared with the reference shape I by aligning their principal axes: width, height, and depth
are aligned to the principal axes of symmetry of the shape, as shown in Fig. 1. The details of the three segments for the different primitive shapes are listed below:

• Box: The length of the three segments corresponds to the three size parameters of the solution.
• Cylinder: The length of one segment corresponds to the height, while the other two correspond to the circular section.
• Sphere: All three segments are equal to the diameter.

When the principal axes of symmetry are not unique, they are aligned with the axes of the reference frame. This is the case for the two axes lying on a circular section of a cylinder and all three axes of a sphere.

The error function measures the overlap between the projections of each segment of solution F onto the three segments of reference shape I. For example, if solution F is perfectly aligned to reference shape I, the projection of each segment (e.g., the height of a box) fully overlaps with the corresponding segment of I and draws a point on the other two segments of I (zero overlap). If solution F is misaligned with I or its dimensions are incorrect, the projection of one segment of F does not perfectly cover the corresponding segment of I (Fig. 2a) and is nonzero on the other two segments.

In detail, let us first draw a Cartesian frame of reference with axes aligned to the axes of symmetry of I. Given a solution F, let F^1, F^2 and F^3 denote its three segments (width, depth, and height, respectively) in descending order of length, and let \hat{F}^i_j denote the projection of the i-th segment of F onto the j-th Cartesian axis, where j = 1, 2, 3 denotes the X, Y, and Z axes, respectively. The three axes of I are defined in the same fashion, noting that |\hat{I}^i_i| = |I^i| and |\hat{I}^i_k| = 0, ∀ k ≠ i. The length of a given segment A is henceforth denoted as |A|. The primitive fitting error Err(F, I) is calculated as follows:

Err(F, I) = 1 - \min_{i \in \{1,2,3\}} \; \min_{k \in \{1,2,3\},\, k \neq i} \left\{ M(F^i, I^i) - E(F^i, I^k) \right\}    (6)

where M(F^i, I^i) quantifies the similarity of F^i to the corresponding segment I^i and is calculated as the length of the intersection \hat{F}^i_i \cap \hat{I}^i_i (green sub-segment in
Fig. 2 Projection of the i-th segment of F (F^i) onto the j-th segment of I (I^j); that is, the intersection of the projections of F^i and I^j onto the Cartesian axis. The green part of the segment marks the match (intersection) of the two projections, and the red parts mark the mismatch. In the case of a perfect match, the green part will be equal to the length of I^j, and there will be no red parts. There are six possible cases of partial or no match between the two axes. (a) Axes comparison. (b) Different cases
Fig. 2a) minus the sum of the lengths of the non-intersecting parts of \hat{F}^i_i and \hat{I}^i_i (red sub-segments in Fig. 2a). That is:

M(F^i, I^i) = \frac{ \left| \hat{F}^i_i \cap \hat{I}^i_i \right| - \left[ \left| \hat{F}^i_i \right| - \left| \hat{F}^i_i \cap \hat{I}^i_i \right| \right] - \left[ \left| \hat{I}^i_i \right| - \left| \hat{F}^i_i \cap \hat{I}^i_i \right| \right] }{ \left| \hat{I}^i_i \right| }    (7)

If the found shape F perfectly matches I, |\hat{F}^i_i \cap \hat{I}^i_i| = |\hat{I}^i_i| (F^i is aligned to I^i) and |\hat{F}^i_k| = 0, ∀ k ≠ i. In this case, M(F^i, I^i) = 1. In the case of total mismatch, |\hat{F}^i_i \cap \hat{I}^i_i| = 0 and M(F^i, I^i) = -(|\hat{F}^i_i| + |\hat{I}^i_i|)/|\hat{I}^i_i| < -1.

E(F^i, I^k) denotes the misalignment of F^i with I^i and is measured as the length of the intersection of its projection with the non-corresponding axes I^k of I. That is,

E(F^i, I^k) = \max_{k \in \{1,2,3\},\, k \neq i} \left\{ \frac{ \left| \hat{F}^i_k \cap \hat{I}^k_k \right| }{ \left| \hat{I}^k_k \right| } \right\}    (8)

If the fitted shape F perfectly corresponds to I, |\hat{F}^i_k \cap \hat{I}^k_k| = 0 for all k ≠ i (F^i is aligned with I^i), then E(F^i, I^k) = 0. In case F^i is perpendicular to I^i (total mismatch), F^i will be aligned with one of the I^k and E(F^i, I^k) = 1. Equation 6 is equal to zero in case F corresponds to I (M(F^i, I^i) = 1 and E(F^i, I^k) = 0), is greater than zero otherwise, and is maximum in the case of total mismatch (M(F^i, I^i) < -1 and E(F^i, I^k) = 1). The maximum and minimum
Table 3 Parameterisation of the Bees Algorithm and the evolutionary algorithm

Parameter | Sphere | Box | Cylinder
Evolutionary algorithm
# Individuals | 10 | 10 | 25
# Parents | 3 | 3 | 8
Mutation rate (pf) | 1 | 1 | 1
# Iterations | 390 | 900 | 672
Sampling coverage (%) | 5 | 25 | 25
Bees Algorithm
Scout bees (ns) | 2 | 3 | 4
Elite sites (ne) | 1 | 1 | 1
Best sites (nb) | 2 | 3 | 4
Recruited elite (nre) | 9 | 10 | 10
Recruited best (nrb) | 4 | 4 | 6
Stagnation limit (stlim) | 20 | 20 | 25
Initial patch neighbourhood (ngh) | 0.15 | 0.5 | 1
# Iterations | 300 | 300 | 600
Sampling coverage (%) | 5 | 25 | 25
operations in Eqs. 6 and 8 are meant to penalise the main mismatches in length and alignment while being more forgiving of minor discrepancies.
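A simplified sketch of this error measure is given below. The projections of the segments of F and I onto the Cartesian axes are assumed to be pre-computed inputs here (in the chapter they are obtained by projecting the principal-axis segments onto the frame aligned with I); the helper functions then follow Eqs. 7 and 8, and the final line applies Eq. 6.

```python
def overlap(a, b):
    """Length of the intersection of two 1D intervals a = (lo, hi), b = (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def length(a):
    return a[1] - a[0]

def m_score(f_proj, i_proj):
    """Eq. 7: overlap minus the two non-overlapping parts, normalised by |I^i|."""
    inter = overlap(f_proj, i_proj)
    return (inter - (length(f_proj) - inter) - (length(i_proj) - inter)) / length(i_proj)

def e_score(seg_projs, i_axes, i):
    """Eq. 8: largest relative spill of segment F^i onto the non-corresponding axes of I."""
    return max(overlap(seg_projs[k], i_axes[k]) / length(i_axes[k])
               for k in range(3) if k != i)

# Toy example: reference box axes and the projections of the three segments of
# a found shape F onto the X, Y, Z axes (hypothetical numbers for illustration).
i_axes = [(-2.0, 2.0), (-1.0, 1.0), (-0.5, 0.5)]
f_projs = [[(-1.8, 1.8), (-0.1, 0.1), (0.0, 0.0)],   # segment F^1 on X, Y, Z
           [(-0.1, 0.1), (-0.9, 0.9), (0.0, 0.0)],   # segment F^2
           [(0.0, 0.0), (0.0, 0.0), (-0.6, 0.6)]]    # segment F^3

err = 1.0 - min(m_score(f_projs[i][i], i_axes[i]) - e_score(f_projs[i], i_axes, i)
                for i in range(3))
print(round(err, 3))
```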
4.3 Parameterisation of Algorithms

The parameters of the two metaheuristics have been optimised via extensive trial and error and are shown in Table 3. They are different for each shape type but the same across the three datasets (clean, error, and double error, see Sect. 4.1). To make the comparison fair, the two metaheuristics were parameterised to sample the same number of candidate solutions in one complete optimisation trial.
5 Results

The experimental results are shown for each instantiation of the two algorithms and the data set in Table 4. The average primitive fitting accuracy is estimated as the median of the error Err(F, I ) (Sect. 4.2) of the final solutions obtained in the forty independent optimisation trials. The spread of the accuracy results is calculated as their interquartile range (IQR) [31].
Table 4 Primitive fitting accuracy results obtained by the BA and EA on different datasets

Dataset      | Shape    | Algorithm              | Median       | IQR
Clean        | Sphere   | Evolutionary algorithm | 1.410 × 10^−2 | 1.245 × 10^−2
Clean        | Sphere   | Bees Algorithm         | 2.585 × 10^−4 | 2.538 × 10^−4
Clean        | Box      | Evolutionary algorithm | 2.815        | 0.997
Clean        | Box      | Bees Algorithm         | 2.353        | 1.035
Clean        | Cylinder | Evolutionary algorithm | 4.441        | 6.984
Clean        | Cylinder | Bees Algorithm         | 3.709        | 5.843
Single error | Sphere   | Evolutionary algorithm | 1.435 × 10^−2 | 1.280 × 10^−2
Single error | Sphere   | Bees Algorithm         | 1.690 × 10^−3 | 1.863 × 10^−3
Single error | Box      | Evolutionary algorithm | 2.802        | 1.021
Single error | Box      | Bees Algorithm         | 2.348        | 1.066
Single error | Cylinder | Evolutionary algorithm | 4.478        | 7.078
Single error | Cylinder | Bees Algorithm         | 3.758        | 5.898
Double error | Sphere   | Evolutionary algorithm | 1.522 × 10^−2 | 1.342 × 10^−2
Double error | Sphere   | Bees Algorithm         | 3.140 × 10^−3 | 3.772 × 10^−3
Double error | Box      | Evolutionary algorithm | 2.768        | 0.989
Double error | Box      | Bees Algorithm         | 2.351        | 1.092
Double error | Cylinder | Evolutionary algorithm | 4.514        | 7.363
Double error | Cylinder | Bees Algorithm         | 3.884        | 6.196

The median and interquartile range (IQR) of 40 independent trials are reported
In terms of accuracy, the BA consistently obtained better results than the EA. The difference was particularly marked on spheres, where the solutions generated by the BA were characterised by a median error 80% smaller than the error of the solutions generated by the EA. On boxes and cylinders, the median error of the solutions generated by the BA was approximately 15% smaller than the error of the solutions generated by the EA. In terms of consistency of the results (IQR), the BA and EA performed comparably on boxes, while the BA was far more consistent on spheres and moderately more consistent on cylinders. The results obtained by the two algorithms were similar regardless of the level of simulated sensor imprecision. This result suggests a good degree of robustness of both techniques to error in the PCs.
6 Conclusions and Further Work

Primitive fitting requires finding the optimal fit of a geometric shape to a PC scene. This study investigated the application of the popular BA metaheuristic to this problem. The main advantage of the approach is that the BA does not need any assumption to be made on the model, nor does it need a seeding procedure, unlike the state-of-the-art RANSAC algorithm.

The BA was used to fit three kinds of primitive shapes to artificially generated models. Experimental evidence showed the accuracy and consistency of the proposed technique. The performance of the BA did not degrade when the position of the points in the cloud was perturbed with different levels of random noise, simulating imprecision in laser scanning devices. Compared to an EA, the BA showed superior accuracy, particularly in fitting spheres. Although the BA code was not optimised for speed, the BA execution time for aligning one shape was compatible with real-time applications: run on an Intel i7 2.8 GHz processor, the BA took on average 0.6 s to fit one primitive. If required, code optimisation and parallelisation would further reduce the BA running time.

Overall, the results presented in this chapter provide a first indication of the suitability of the BA as a tool to effectively and efficiently solve the primitive fitting problem. Further work should investigate the performance of the BA on more complex primitives and on scenes including several shapes. The BA should also be tested on partial shapes, a common problem in real-world applications where the object cannot be scanned from all sides. It is hoped that this study will give momentum to this new and promising field of application for the BA.
References
1. Björkman M, Bekiroglu Y, Högman V, Kragic D (2013) Enhancing visual perception of shape through tactile glances. In: 2013 IEEE/RSJ international conference on intelligent robots and systems. IEEE, pp 3180–3186
2. Mavrakis N, Stolkin R, Baronti L, Kopicki M, Castellani M et al (2016) Analysis of the inertia and dynamics of grasped objects, for choosing optimal grasps to enable torque-efficient post-grasp manipulations. In: 2016 IEEE-RAS 16th international conference on humanoid robots (Humanoids). IEEE, pp 171–178
3. Levine MD (1985) Vision in man and machine. McGraw-Hill College
4. Bolles RC, Fischler MA (1981) A RANSAC-based approach to model fitting and its application to finding cylinders in range data. IJCAI 1981:637–643. Citeseer
5. Ballard DH (1981) Generalizing the hough transform to detect arbitrary shapes. Pattern Recogn 13(2):111–122
6. Dalitz C, Schramke T, Jeltsch M (2017) Iterative hough transform for line detection in 3d point clouds. Image Process On Line 7:184–196
7. Vosselman G, Gorte BG, Sithole G, Rabbani T (2004) Recognising structure in laser scanner point clouds. Int Arch Photogram Rem Sens Spat Inform Sci 46(8):33–38
8. Rabbani T, Van Den Heuvel F, Vosselmann G (2006) Segmentation of point clouds using smoothness constraint. Int Arch Photogram Rem Sens Spat Inform Sci 36(5):248–253
9. Mukhopadhyay P, Chaudhuri BB (2015) A survey of hough transform. Pattern Recogn 48(3):993–1010
10. Rabbani T, Van Den Heuvel F (2005) Efficient hough transform for automatic detection of cylinders in point clouds. In: ISPRS WG III/3, III/4 V/3, pp 60–65
11. Khoshelham K (2007) Extending generalized hough transform to detect 3d objects in laser range data. In: ISPRS workshop on laser scanning and silvilaser 2007, 12–14 September 2007, Espoo, Finland. International Society for Photogrammetry and Remote Sensing, pp 206–210
12. Chum O, Matas J (2008) Optimal randomized RANSAC. IEEE Trans Pattern Anal Mach Intell 30(8):1472–1482
13. Schnabel R, Wahl R, Klein R (2007) Efficient RANSAC for point-cloud shape detection. In: Computer graphics forum, vol 26. Wiley Online Library, pp 214–226
14. Pham DT, Castellani M (2009) The Bees Algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng C J Mech Eng Sci 223(12):2919–2938
15. Fogel DB (2006) Evolutionary computation: toward a new philosophy of machine intelligence, vol 1. Wiley
16. Roth G, Levine MD (1994) Geometric primitive extraction using a genetic algorithm. IEEE Trans Pattern Anal Mach Intell 16(9):901–905
17. Lutton E, Martinez P (1994) A genetic algorithm for the detection of 2d geometric primitives in images. In: Proceedings of 12th international conference on pattern recognition, vol 1. IEEE, pp 526–528
18. Gotardo PF, Bellon ORP, Silva L (2003) Range image segmentation by surface extraction using an improved robust estimator. In: Proceedings of 2003 IEEE computer society conference on computer vision and pattern recognition, vol 2. IEEE, pp II–33
19. Ayala-Ramirez V, Garcia-Capulin CH, Perez-Garcia A, Sanchez-Yanez RE (2006) Circle detection on images using genetic algorithms. Pattern Recogn Lett 27(6):652–657
20. Cuevas E, Wario F, Zaldivar D, Pérez-Cisneros M (2012) Circle detection on images using learning automata. IET Comput Vis 6(2):121–132
21. Najim K, Poznyak AS (2014) Learning automata: theory and applications. Elsevier
22. Cuevas E, Sención-Echauri F, Zaldivar D, Pérez-Cisneros M (2012) Multi-circle detection on images using artificial bee colony (ABC) optimization. Soft Comput 16(2):281–296
23. Yao J, Kharma N, Grogono P (2004) Fast robust GA-based ellipse detection. In: Proceedings of the 17th international conference on pattern recognition, ICPR 2004, vol 2. IEEE, pp 859–862
24. Yuen SY, Ma CH (2000) Genetic algorithm with competitive image labelling and least square. Pattern Recogn 33(12):1949–1966
110
M. Castellani et al.
25. González MR, Martínez ME, Cosío-León M, Cervantes H, Brizuela CA (2021) Multiple circle detection in images: a simple evolutionary algorithm approach and a new benchmark of images. In: Pattern analysis and applications, pp 1–21 26. Ugolotti R, Micconi G, Aleotti J, Cagnoni S (2014) GPU-based point cloud recognition using evolutionary algorithms. In: European conference on the applications of evolutionary computation. Springer, pp 489–500 27. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95international conference on neural networks, vol 4. IEEE, pp 1942–1948 28. Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359 29. Wang L, Cao J, Han C (2012) Multidimensional particle swarm optimization based unsupervised planar segmentation algorithm of unorganized point clouds. Pattern Recogn 45(11):4034– 4043 30. Baronti L, Alston M, Mavrakis N, Ghalamzan EAM, Castellani M et al (2019) Primitive shape fitting in point clouds using the Bees Algorithm. Appl Sci 9(23):5198 31. Hoaglin DC, Mosteller F, Tukey JW (2000) Understanding robust and exploratory data analysis, Sirsi. ISBN 9780471384915
Bees Algorithm Models for the Identification and Measurement of Tool Wear
Doriana M. D'Addona
1 Introduction
The Bees Algorithm (BA) is a method for optimising the search for specific values through code that simulates the foraging behaviour of bees [1, 2, 3]. It belongs to the bio-inspired artificial intelligence techniques implemented for process optimisation and, more specifically, to the family of techniques called Swarm Intelligence (SI). The BA replicates the behaviour of particle systems for optimisation and search processes. The particles, in this case, are bees, but they can also be ants, birds or fish. A search in the Scopus database shows that, in the last 5 years, 834 documents deal with the topic of the BA in manufacturing, while the same search in Google Scholar produced 1310 scientific papers. On average, 190 documents are produced every year, with a total number of citations equal to 6907: 640 are articles published in journals, 2 books, 18 book chapters, 157 conference papers, 14 reviews and 3 conference reviews (Fig. 1). This search is associated with words such as optimization/optimisation (288 times), genetic algorithm (106 times), PSO (91 times), artificial intelligence (89 times), evolutionary algorithms (82 times) and swarm intelligence (74 times). These results suggest that the technique is often used in association with other artificial intelligence methods (Table 1 and Fig. 2). In [4], the authors discussed the optimisation of artificial neural networks for the identification of wood defects via the BA. Another work that studied the use of mixed techniques is [5], in which the authors used the BA together with an ant algorithm for optimal task assignment in mobile cloud computing.
Fig. 1 Documents by type using metadata for "Bees Algorithm in manufacturing" keyword. Source SCOPUS, last visit 6 February 2022

Table 1 Most used keywords using metadata for "Bees Algorithm". Source SCOPUS, last visit 6 February 2022
Terms | Frequency
Optimization/optimisation | 288
Algorithms | 252
Genetic algorithms | 106
Particle swarm optimisation (PSO) | 91
Artificial intelligence | 89
Evolutionary algorithms | 82
Swarm intelligence | 74
Honey bee | 48

Fig. 2 Word cloud realised with Biblioshiny using metadata for the "Bees Algorithm" keyword. Source SCOPUS, last visit 6 February 2022
Fig. 3 Types of documents by subject area for “Bees Algorithm in manufacturing” keyword. Source SCOPUS, last visit 6 February 2022
Particularly interesting are the research fields in which the BA is used: 28.5% in Computer Science, 27.6% in Engineering and 13.5% in Mathematics, but there are also other fields such as Physics and Astronomy (4.2%) or Social Sciences (2.6%) (Fig. 3). Applications of the BA therefore range from a model for the pattern synthesis of linear antenna arrays [6] to a comparison between the BA and a genetic algorithm for the location-allocation of earthquake relief centres [7]. Other publications concern robotic disassembly sequence planning, where the BA has been used to reduce disassembly time [8] or to study human-robot collaborative disassembly cells [9, 10, 11], and there are even applications in the medical field, such as the selection of the most appropriate parameters for classifying types of breast cancer [12]. Another interesting result of the bibliometric analysis of research works on the BA is their geographical distribution: 66 different countries have produced at least one work on the subject. The country with the highest production is China, with approximately 363 documents, followed by India with 144 and Turkey with 80 (Fig. 4). Works on the use of the BA in manufacturing account for approximately 4.9% of the total. The first works date back to 2007 and were all produced by D. T. Pham; they used the BA to schedule jobs for a machine [13, 14] and to find the best gearbox configurations [15]. Another application of the BA is crack detection in beam-type structures [16]. The BA has also been applied in robotics, with particular reference to disassembly in remanufacturing [17]. The BA is also used in network applications in manufacturing: Xu et al. [18, 19] used the technique in the middleware layer for quality of service in manufacturing networks, where the BA was tasked with finding the best parameters to optimise system performance. In general, however, the main use of the BA is for process optimisation [20]. The algorithm performs a form of exploitative neighbourhood search combined with random explorative search.
Fig. 4 Number of documents by country or territory for “Bees Algorithm in manufacturing” keyword. Source SCOPUS, last visit 6 February 2022
Animal and insect colonies provide a rich source of inspiration for the development of optimisation and search algorithms. Such colony-based approaches have several desirable characteristics, such as adaptation, scalability, speed, autonomy, parallelism and fault tolerance. The basic hypothesis for biology in manufacturing is that "Future Manufacturing Systems Will Incorporate Components, Features, Characteristics and Capabilities that enable convergence towards Living Systems". This chapter aims to test one aspect of this basic hypothesis by developing a biointegration approach consisting of the utilisation of the Bees Algorithm, in its basic formulation, for the identification and measurement of tool wear during turning operations. The cutting tests were run under small quantity lubrication (SQL) conditions using a cutting fluid containing a microalgae species (Spirulina platensis) as the lubricant component. This is intended as a clear and explicit instance of the direct use of resources from living nature for innovation in manufacturing, according to the biointegration approach of biological transformation. The goal is to use the search capability of the bee agents to delimit the contours of tool wear and calculate its amplitude.
2 Bees Algorithm
The BA uses a population of agents, a "family of bees", to explore the whole space of possible solutions. A part of the population, the "exploratory bees" (scouts), randomly searches the space for regions with high suitability, i.e., performs a "global search". Exploratory bees that find areas with an abundant availability of pollen, nectar or sugar secretions are the most successful agents. They recruit a variable number
of inactive agents, the "foraging bees", to search around the most suitable solutions, i.e., perform a "local search". The global and local search cycles are repeated until an acceptable solution is discovered (the equivalent of a zone in full bloom) or a certain number of iterations has passed; in nature, the limit on the iterations is sunset [2]. The flowchart of the BA is shown in Fig. 5. The process starts with a random initialisation phase in which the scout bees are randomly positioned throughout the search space. This landscape is a matrix of solutions, each associated with a position. Each position is a site that is evaluated using the fitness function; this stage is known as fitness evaluation. The sites visited by the scout bees are ranked, and those with better values in relation to the research objective are the best sites. The local search phase then begins, with foraging bees going to the areas of the sites selected by the scout bees. Communication between the scout bee and the foraging bees takes place through the "waggle dance" [2, 21]. The waggle dance is the name of a particular figure-of-eight dance performed by bees. By performing this dance, whose
Fig. 5 Bees Algorithm flowchart [2]
Fig. 6 The waggle dance performed by a bee
movements are precisely coded, the worker bee can communicate to her companions valuable information about the nature, position and degree of interest of a resource she has discovered, such as nectar- and pollen-rich flowers, springs of water or new nesting sites [22]. This dance is, therefore, the mechanism by which bees recruit other bees from their hive to collect resources. The waggle dance consists of the repetition of a variable number of circuits, each of which consists of two phases: the oscillatory phase and the return phase. When a bee finds a source of food, it returns to the hive and, excited by the discovery, performs this dance among the other bees (Fig. 6). The oscillatory phase is the call to the foraging bees, indicating the presence of food in the direction of the path traced during this phase; it corresponds to a straight run made while oscillating the abdomen. The return phase consists of a turn to the right to return to the starting point [23], and the next circuit is performed in the opposite direction. The length of the oscillatory phase indicates the distance to the resource [24]. Some of the best sites are called elite sites; the scouts that found them recruit foragers to search around them. The remaining foraging bees are recruited by the bees that found the other best sites to perform a local search. The local search is most intense near the elite sites, where the largest number of foraging bees is concentrated. The fitness function regulates which sites are better and therefore worthy of being selected as elite [21]. The neighbourhood of a site is identified as a flower patch, and the suitability of each flower patch is evaluated. If a forager finds a position with better values than the scout's, that forager becomes the new scout. This takes place until the end of the cycle, at which time the best solution found is used as a representative of the entire neighbourhood [2]. The step after the local search phase is the global search phase: the scout bees that did not identify best sites are randomly placed on the fitness landscape to find new flower patches. At the end of this phase, there are two groups of new populations: one generated by the local search and another generated by the global search [2, 21].
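To make the cycle just described concrete, the following Python sketch implements a minimal version of the basic Bees Algorithm for minimising a generic fitness function. It is an illustrative sketch rather than the implementation used in this chapter; the parameter names (ns, nb, ne, nre, nrb, ngh) and the example fitness function are assumptions introduced here for clarity.

```python
import numpy as np

def bees_algorithm(fitness, bounds, ns=10, nb=5, ne=2, nre=7, nrb=3,
                   ngh=0.1, max_iter=100, rng=np.random.default_rng(0)):
    """Minimal Bees Algorithm sketch (minimisation): random scouting (global
    search) plus forager recruitment around elite and best sites (local search)."""
    lo, hi = bounds
    dim = len(lo)
    scouts = rng.uniform(lo, hi, size=(ns, dim))        # random initialisation
    best_x, best_f = None, np.inf

    for _ in range(max_iter):
        f = np.array([fitness(x) for x in scouts])      # fitness evaluation
        order = np.argsort(f)                           # rank the visited sites
        scouts, f = scouts[order], f[order]
        if f[0] < best_f:
            best_x, best_f = scouts[0].copy(), f[0]

        new_scouts = []
        for i in range(nb):                             # local (exploitative) search
            recruits = nre if i < ne else nrb           # more foragers on elite sites
            patch = scouts[i] + rng.uniform(-ngh, ngh, (recruits, dim)) * (hi - lo)
            patch = np.clip(patch, lo, hi)
            pf = np.array([fitness(x) for x in patch])
            j = int(np.argmin(pf))
            # if a forager finds a better position, it becomes the new scout
            new_scouts.append(patch[j] if pf[j] < f[i] else scouts[i])
        # the remaining scouts perform random global (explorative) search
        new_scouts += list(rng.uniform(lo, hi, size=(ns - nb, dim)))
        scouts = np.array(new_scouts)

    return best_x, best_f

# Usage example: minimise a simple sphere function in two dimensions
x_best, f_best = bees_algorithm(lambda v: float(np.sum(v ** 2)),
                                (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```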
3 Turning Trials
Cutting tests with the cutting edge at 90° to the cutting speed direction were executed on a parallel lathe on AISI 1045 steel bars with a diameter of 60 mm (Fig. 7a). Each test was performed with a new tool (carbide insert with a rake angle of 6° and a chamfer angle of 5°) until tool failure. During the testing campaign, the following cutting parameters were used: cutting speed vc = 130–230 m/min; feed rate f = 0.20 mm/rev; depth of cut d = 1.0 mm. Tests were run under dry conditions and under wet conditions using the SQL method, with microbial-based cutting fluids supplied by a peristaltic pump to the cutting zone at a rate of 30 ml/min and a pressure of 5 bar [25].
3.1 Tool Wear Measurements
During the turning trials, the cutting tool condition was evaluated in terms of crater wear land by directly measuring the tool wear at regular time steps with a shop-floor microscope and through image acquisition and processing procedures. A copper wire with a diameter of 0.21 mm was positioned near the tool (Fig. 7b); this wire serves as the reference for the conversion between pixels and mm used to verify the effectiveness of the BA method. Under stable cutting conditions, the tool wear grew smoothly with increasing cutting time, whereas under harsher cutting conditions more rapid tool wear growth was detected.
Fig. 7 a Experimental test setup; b crater wear with copper wire
4 Bees Algorithm for Tool Wear Detection
The BA is generally used for optimisation problems involving mathematically defined functions [2, 26, 27, 28, 29, 30]. In this chapter, the search capability of the agents is used instead to determine the extent of tool wear from images. The idea is to replace the fitness landscape with a landscape consisting of the image in matrix form; the bees can then find the characteristic points of the wear and return the required value. Here the idea is applied to a cutting tool, but in general the method can also work with other types of images. The fundamental aspect is the choice of an objective function, which may lead to steps different from those shown in this work. The objective of the research activities is to obtain a curve of cutting wear amplitude values calculated from the images collected between one test and another. Measurements were initially made using image processing software; at the end of this work, the results obtained with the BA method are compared with those obtained from photo analysis. Figure 8 shows the flowchart of the BA method for determining tool wear; the diagram shows all the steps taken to arrive at the final result of the wear calculation for a single image. The process was implemented in MATLAB with scripts that perform the necessary steps. The first step is image selection: the script allows the user to indicate the path and name of the image to be analysed. Once the image is selected, MATLAB imports the RGB image as a three-dimensional matrix and displays the figure to the user. If the figure is not correctly rotated, the script allows the user to rotate the image by entering the number of degrees required for a counterclockwise rotation; this operation also rotates the matrix connected to the image accordingly. The next step is to determine the number of pixels to associate with the reference. To do this, the user selects the area of the image where the copper wire is present, and the script asks the user to click on the two ends of the wire. After the selection of the two points, the script calculates the distance between them according to Eq. 1. The result obtained is saved and used later.

d(P_1, P_2) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}, \qquad P_1 = (x_1, y_1),\; P_2 = (x_2, y_2)    (1)
The process continues with a return to the starting image and the selection of the image area where the wear is present. Here the user crops the image, and the selected part is the one that the script keeps in the form of a matrix; anything outside the box is deleted. This makes the matrix smaller, which speeds up a calculation that would otherwise be slower. The remaining image is then divided into areas of colour using the MATLAB function superpixels. This function uses the information from an RGB image to create a label matrix referring to the colours in the source image.
Fig. 8 Bees Algorithm flowchart for tool wear detection
The label matrix created is a set of clusters, each characterised by the most common colour in a given region. This process is called oversegmentation [31], and Fig. 9 shows how the image changes as a result of it. MATLAB then shows the user the effect of the superpixel function on the previously cropped image. The user can click inside the wear to provide the script with information regarding its position. This step is of paramount importance because it reduces processing time and allows a more efficient search by the bees. Each area is associated with a value that matches the colour shown in the image. The script searches the label matrix for the value selected by the user after clicking on the affected area; the rectangular area where the required values are present becomes the actual scouting area for the bees. This information is then saved and used later. The label matrix is no longer needed, and the image is visible again without the visual filter applied by the superpixel function. The image is converted from RGB to grayscale. This conversion leads to the transformation of the original matrix
Fig. 9 Oversegmentation realised by the superpixel function in MATLAB
from a three-dimensional matrix to a matrix of single values. Considering the position of each value within the matrix, this matrix is a grayscale image, which in MATLAB can be visualised in three dimensions with the mesh function. The grayscale image represents the fitness landscape on which the bees can move. The last step before starting the BA is the setting of the search parameters. An important piece of information is the instruction given to the bees: in this case, the bees are not asked to search for a minimum point but for maximum points near a minimum area. The grey matrix has values ranging from 0 to 255; the minimum value corresponds to black, while white is associated with the maximum value. The wear zone is characterised by shadow and therefore by a darker colour than the unworn part of the tool, whereas the edge of the wear area has higher values than its interior. For this reason, maxima are only sought in areas where there are many minima. The risk of ending up outside the wear area is averted by the superpixel function: without this operation, the bees could roam the entire image with the risk of identifying areas with characteristics similar to those of wear and giving incorrect results. The process continues with the start-up of the BA, whose operation remains unchanged with respect to what was described in the previous section, except for the changes just discussed. At the end of the search, the positions of the required values are saved in two vectors, one for the abscissae and one for the ordinates. MATLAB's boundary function creates a closed polygon from the newly saved vectors; the closed polygon represents the boundary of the wear. The height of the crater wear is calculated with Eq. 1 between the lowest point of the polygon and the highest point. Since the image may be slightly oblique to the abscissa axis, the highest point is searched for within a range of abscissae containing the abscissa of the lowest point. Once the height h(px) is calculated, the last step of the method is the conversion of Eq. 2:

h(\mathrm{mm}) = \frac{h(\mathrm{px}) \cdot 0.21\,\mathrm{mm}}{d(P_1, P_2)}    (2)
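The oversegmentation and grayscale-conversion steps described above are performed in the author's MATLAB scripts with the superpixels and rgb2gray functions. As a rough, hedged analogue (assuming scikit-image's SLIC segmentation as a stand-in for MATLAB's superpixels), the construction of the search region and of the fitness landscape could be sketched as follows:

```python
import numpy as np
from skimage import io, color, segmentation

def wear_search_region(image_path, click_rc, n_segments=200):
    """Return the grayscale fitness landscape (0-255) and a mask restricting
    the bees' search to the superpixel cluster containing the point clicked
    by the user inside the wear zone (click_rc = (row, col))."""
    rgb = io.imread(image_path)                          # cropped RGB image of the tool
    labels = segmentation.slic(rgb, n_segments=n_segments, start_label=1)
    wear_label = labels[click_rc[0], click_rc[1]]        # cluster indicated by the user
    search_mask = labels == wear_label                   # limits the scouting area
    landscape = (color.rgb2gray(rgb) * 255).astype(np.uint8)  # black = 0, white = 255
    return landscape, search_mask
```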
Fig. 10 Steps of the BA method for tool wear detection: a the user selects the area of interest in the full image; b after the conversion from RGB to grayscale, the image becomes the fitness landscape; c the bee agents find the tool wear; d its edge is shown with red circles
The height is expressed in pixels and must be converted to mm. For conversion, it is necessary to have available the data previously acquired from the reference copper wire. Figure 10 shows the steps of the BA for tool wear detection and measurement.
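A hedged sketch of this final measurement step follows. It assumes that the pixel coordinates of the best sites returned by the bees and the two points clicked on the reference wire are available; a convex hull is used here as a stand-in for MATLAB's boundary function, and the tolerance used to cope with a slightly oblique image is an illustrative choice, not a value from the chapter.

```python
import numpy as np
from scipy.spatial import ConvexHull

WIRE_DIAMETER_MM = 0.21          # diameter of the reference copper wire

def pixel_distance(p1, p2):
    """Eq. 1: Euclidean distance between two image points, in pixels."""
    return float(np.hypot(p2[0] - p1[0], p2[1] - p1[1]))

def wear_height_mm(best_sites_xy, wire_p1, wire_p2, x_tolerance=10):
    """Estimate the crater wear height from the best-site positions found by the BA."""
    pts = np.asarray(best_sites_xy, dtype=float)          # (x, y) pixel coordinates
    polygon = pts[ConvexHull(pts).vertices]               # closed polygon around the wear
    lowest = polygon[np.argmax(polygon[:, 1])]            # image y grows downwards
    # search for the highest point within a band of abscissae around the lowest point,
    # to tolerate an image that is slightly oblique to the abscissa axis
    band = polygon[np.abs(polygon[:, 0] - lowest[0]) <= x_tolerance]
    highest = band[np.argmin(band[:, 1])]
    h_px = pixel_distance(lowest, highest)
    # Eq. 2: convert pixels to mm using the reference wire
    return h_px * WIRE_DIAMETER_MM / pixel_distance(wire_p1, wire_p2)
```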
5 Results
The BA method for determining tool wear from images was applied to pictures of a cutting tool. The images came from quasi-orthogonal cutting tests and were taken between passes at irregular intervals. The aim here is not to investigate the quality of the cutting tests but rather the ability of the BA to determine the correct wear value from the images. Two tests were carried out with different processing parameters. A certain number of photos correspond to each instant of time at which the wear value was recorded. The wear value is not always precisely the same, considering that the photos were sometimes taken from different positions; the variation between the various measurements is on the order of hundredths of a millimetre. The average values will
therefore be compared, both for the measurements taken directly on the photos and for the results provided by the BA. The objective of the following work is, in fact, to ascertain the effectiveness of the method by calculating how much its values differ from those found with the direct measurement on the images. The comparison begins with the selection of the best images to be measured. After that, measurements for each image are recorded by applying the BA method for tool wear detection. The application of the BA is not immediate, as it is necessary to choose the best parameters for the optimisation of the results. The parameters that can be selected are duration, random scouts, elite sites, best sites, foragers on elite sites and foragers on best sites. To optimise the process of determining the best parameters, the script fixes the last three parameters, and the values of duration, random scouts and elite sites are varied. The results of these tests are shown in Figs. 11, 12 and 13; the graphs show the tested values and the percentage error compared to the values determined by the measurement of the images. Once the parameters with the lowest error percentage have been chosen, the remaining parameters are selected. Table 2 shows the various parameter configurations, while Fig. 14 reports the corresponding results. The best configuration, "c", is identified by the script that returns the matrices with all the results for each image. Tables 3 and 4 report the crater wear measurements for each instant at which the photos were taken.
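The tuning procedure just described can be sketched as a simple grid search over the three free parameters, scoring each combination by its mean percentage error against the photo-based measurements. The candidate values follow Figs. 11, 12 and 13, but the measurement function passed to the loop is a placeholder for the author's MATLAB script, not part of the original chapter.

```python
import itertools
import numpy as np

DURATIONS = [25, 50]
RANDOM_SCOUTS = [5, 10, 15]
ELITE_SITES = [2, 4, 6, 8, 10]

def tune(measure_fn, images, photo_values_mm):
    """Grid search over duration, random scouts and elite sites.
    measure_fn(image, duration, random_scouts, elite_sites) must return the
    BA-measured wear in mm; it stands in for the MATLAB measurement script."""
    best_params, best_err = None, np.inf
    for d, rs, es in itertools.product(DURATIONS, RANDOM_SCOUTS, ELITE_SITES):
        errors = [abs(measure_fn(img, duration=d, random_scouts=rs, elite_sites=es) - ref)
                  / ref * 100.0
                  for img, ref in zip(images, photo_values_mm)]
        err = float(np.mean(errors))                 # mean % error for this setting
        if err < best_err:
            best_params = dict(duration=d, random_scouts=rs, elite_sites=es)
            best_err = err
    return best_params, best_err
```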
Fig. 11 Comparison between different values of the duration parameter (% error)

Fig. 12 Comparison between different values of the random scouts parameter (% error)
Fig. 13 Comparison between different values of the elite sites parameter (% error)
Table 2 Configuration of parameters for the BA method for tool wear detection
Conf. | Duration | Random scouts | Elite sites | Best sites | Foragers on elite sites | Foragers on best sites
a | 25 | 10 | 4 | 4  | 5  | 5
b | 25 | 10 | 4 | 8  | 5  | 5
c | 25 | 10 | 4 | 12 | 5  | 5
d | 25 | 10 | 4 | 16 | 5  | 5
e | 25 | 10 | 4 | 20 | 5  | 5
f | 25 | 10 | 4 | 4  | 10 | 10
g | 25 | 10 | 4 | 8  | 10 | 10
h | 25 | 10 | 4 | 12 | 10 | 10
i | 25 | 10 | 4 | 16 | 10 | 10
j | 25 | 10 | 4 | 20 | 10 | 10
Fig. 14 Comparison between the different configurations (% error: a 1.41, b 1.37, c 1.34, d 1.36, e 1.37, f 1.70, g 1.68, h 1.64, i 1.65, j 1.73)
The tables show the average value of both the measured and the BA-determined wear, where the average value is the mean of the values over all the images for each single photographed pass. For the first tool, the success rate of the BA is 98.64%, while for the second tool it is 98.69%.
Table 3 Crater wear measured by photo and by BA for the first tool
Time (s) | Wear measured by photo (mm) | Wear measured by BA (mm)
90  | 0.659 | 0.659
180 | 1.012 | 1.043
220 | 1.104 | 1.124
260 | 1.301 | 1.296
300 | 1.820 | 1.813
333 | 1.879 | 1.865
453 | 2.538 | 2.522

Table 4 Wear measured by photo and by BA for the second tool
Time (s) | Wear measured by photo (mm) | Wear measured by BA (mm)
76  | 0.554 | 0.548
210 | 0.705 | 0.710
309 | 0.963 | 0.950
407 | 1.290 | 1.272
505 | 1.453 | 1.476
572 | 1.562 | 1.576
The graphs in Figs. 15 and 16 are the curves obtained using the values in Tables 3 and 4, while the mean errors of the wear measurements are reported in Table 5.
Fig. 15 Tool wear versus time for the first tool
Fig. 16 Tool wear versus time for the second tool
Table 5 Mean error of wear measurement
First tool: Time (s) | Mean error (%)    Second tool: Time (s) | Mean error (%)
90  | 0.00                              76  | 1.09
180 | 2.97                              210 | 0.70
220 | 1.78                              309 | 1.37
260 | 0.38                              407 | 1.41
300 | 0.39                              505 | 1.56
333 | 0.75                              572 | 0.89
453 | 0.63                              –
6 Discussion and Conclusions
The Bees Algorithm, in its basic formulation, has been presented for the identification and measurement of tool wear in turning operations. The error rates are small, even for non-optimised parameters, as shown in Figs. 11, 12, 13 and 14. The highlights of this method are:
• the ability to convert the image into a fitness landscape;
• the possibility of optimally directing the bees through image oversegmentation;
• the use of the positions of the best sites for the construction of a figure.
Regarding the first of these points, the conversion of the image to grayscale, with the consequent construction of the matrix used as the fitness landscape, is fundamental. Usually, in the BA, the fitness landscape is generated by a function that returns values as a function of two variables; the variables are comparable to coordinates on a map and to indexes in a matrix. The idea is, therefore, to supply the system directly with the matrix, without the generating function. This is only possible in grayscale, as this type of image only has information relating to the variation of one colour, grey, which can only range from white (highest value) to black (lowest value). In the case of RGB images, the information is linked to 3 layers: red, green and
blue. The combination of these three layers provides the characteristic colour variation of the image, and a matrix of this type is unusable for creating a fitness landscape. The conversion, on the other hand, allows the image to be used as the fitness landscape and does not lose information compared to using only one of the three layers of the RGB encoding. Another essential factor is oversegmentation. The collected images were processed so that only the region of interest was highlighted; it was, therefore, easy for the bees to recognise the desired object. Without prior processing of the images, recognition by the bees is influenced by the other elements in them. If the image is simple and has only one characteristic element, recognition is reliable; if there are other elements in the image, not all bees will recognise the requested object, and this could be a problem for the investigation. With oversegmentation, the bees have a more focused search box, with lower processing time and a much higher recognition rate. In addition, oversegmentation retains information about colours that would be lost with the conversion from RGB to grayscale alone. The last aspect to underline is the construction of a figure. In this specific case, the objective was the calculation of the height, but this does not mean that other parameters, such as the area, cannot be obtained. The positions of the best sites are the elements used to create the perimeter; however, they may not be optimal for creating a homogeneous figure. Using the boundary function, the concavity and convexity of the polygon built from the positions of the best sites can be checked. In this way, positions that are far from the group of other positions are not considered, excluding implausible shapes for the object to be identified. An interesting aspect of this research is the computation time, i.e., the time taken by the bees to determine the boundary of the worn area. The response time is approximately 5 s, although the time can vary depending on the size of the images: when the images are small, as in the case of Fig. 10d, the computation times are lower. A photographic campaign producing images of equal characteristics could establish a direct correlation between the number of pixels, the resolution and the computational time. A lower image resolution reduces the computation time, whereas a high resolution leads to higher computation times. However, the resolution also affects the accuracy of the wear measurement: a higher-resolution image already provides better accuracy in the photographic analysis. The relationship between resolution and measurement accuracy is not directly observable in the analysis with the BA method, because the analysis is based on the input image and the measurement is made on the same image. If there is an inaccuracy in the wear measurement in the image due to a low resolution, the BA method cannot overcome the original problem and carries the same inaccuracy with it. Again, a photo campaign with images taken at the same distance but with different resolutions would be necessary. A future development could be the realisation of an ad hoc experimental test campaign to highlight the correlation between computational time and image resolution. Other aspects that should not be overlooked are the characteristics of the computer on which the script runs: the capacity of the RAM memory and the capacity
of the video card can affect the computational time. Leaving these aspects aside, the BA method developed in this work provides extremely interesting results. This is because the oversegmentation technique clearly delimits for the bees the area to be analysed, which is the reason for the very low computational times. The BA method for tool wear detection can also be used in real time to obtain the required values by simply uploading the photos taken of the tool. A strength of this method is its ability to identify any object, and the ability to choose which section of the image to analyse is an excellent tool to speed up and optimise the process. One future development of this method could be the automatic recognition of the area of the image where the object to be searched for is present. In this specific case, deep learning and image recognition techniques could be used to find the area where the object is present; to do this, an automatic division of the image into micro-areas would be necessary. At the moment, the proposed system is faster, but the proposed development could help to automate the process further.
References 1. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The Bees Algorithm—a novel tool for complex optimization problems. In: Intelligent production machines and systems, pp 454–459 2. Pham DT, Castellani M (2009) The Bees Algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng C J Mech Eng Sci 223(12):2919–2938 3. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the Bees Algorithm. Swarm Evol Comput 59:100746 4. Packianather MS, Kapoor B (2015) A wrapper-based feature selection approach using Bees Algorithm for a wood defect classification system. In: 2015 10th system of systems engineering conference (SoSE), San Antonio, TX, pp 498–503 5. Sundararaj V (2019) Optimal task assignment in mobile cloud computing by queue based ant-bee algorithm. Wirel Pers Commun 104(1):173–197 6. Guney K, Onay M (2007) Amplitude-only pattern nulling of linear antenna arrays with the use of Bees Algorithm. Prog Electromagn Res 70:21–36 7. Saeidian B, Mesgari MS, Ghodousi M (2016) Evaluation and comparison of Genetic Algorithm and Bees Algorithm for location–allocation of earthquake relief centers. Int J Disaster Risk Reduc 15:94–107 8. Liu J, Zhou Z, Pham DT, Xu W, Ji C, Liu Q (2018) Robotic disassembly sequence planning using enhanced discrete Bees Algorithm in remanufacturing. Int J Prod Res 56(9):3134–3151 9. Wang Y, Lan F, Liu J, Huang J, Su S, Ji C, Pham DT, Xu W, Liu Q, Zhou Z (2021) Interlocking problems in disassembly sequence planning. Int J Prod Res 59(15):4723–4735 10. Liu J, Zhou Z, Pham DT, Xu W, Ji C, Liu Q (2020) Collaborative optimization of robotic disassembly sequence planning and robotic disassembly line balancing problem using improved discrete Bees Algorithm in remanufacturing. Robot Comput-Integr Manuf 61:101829 11. Huang J, Pham DT, Li R, Qu M, Wang Y, Kerin M, Su S, Ji C, Mahomed O, Khalil R, Stockton D, Xu W, Liu Q, Zhou Z (2021) An experimental human-robot collaborative disassembly cell. Comput Ind Eng 155:107189 12. Addeh J, Ebrahimzadeh A (2012) Breast cancer recognition using a novel hybrid intelligent method. J Med Sig Sens 2(2):95
13. Pham DT, Koç E (2007) Using the Bees Algorithm to schedule jobs for a machine. In: Proceedings 8th international conference on laser metrology, CMM and machine tool performance (LAMDAMAP), Cardiff, pp 430–439 14. Phrueksanant J (2013) Machine scheduling using the Bees Algorithm. Doctor of philosophy thesis. Cardiff University, UK 15. Pham DT, Castellani M, Ghanbarzadeh A (2007) Preliminary design using the Bees Algorithm. In: Proceedings eigth LAMDAMAP international conference on laser metrology, CMM and machine tool performance, Cardiff, UK, pp 420–429 16. Moradi S, Razi P, Fatahi L (2011) On the application of Bees Algorithm to the problem of crack detection of beam-type structures. Comput Struct 89(23–24):2169–2175 17. Zheng Z, Xu W, Zhou Z, Pham DT, Qu Y, Zhou J (2017) Dynamic modeling of manufacturing capability for robotic disassembly in remanufacturing. Procedia Manuf 10:15–25 18. Xu W, Tian S, Liu Q, Xie Y, Zhou Z, Pham DT (2016) An improved discrete Bees Algorithm for correlation-aware service aggregation optimization in cloud manufacturing. Int J Adv Manuf Technol 84(1–4):17–28 19. Xu W, Zhou Z, Pham DT, Liu Q, Ji C, Meng W (2012) Quality of service in manufacturing networks: a service framework and its implementation. Int J Adv Manuf Technol 63(9–12):1227–1237 20. Pham DT, Castellani M (2015) A comparative study of the Bees Algorithm as a tool for function optimisation. Cogent Eng 2:1091540 21. Pham DT, Soroka A, Ghanbarzadeh A, Koc E, Otri S, Packianather M (2006) Optimizing neural networks for identification of wood defects using the Bees Algorithm. In: 2006 4th IEEE international conference on industrial informatics, Singapore, pp 1346–1351 22. Seeley TD (1994) Honey bee foragers as sensory units of their colonies. Behav Ecol Sociobiol 34(1):51–62 23. von Frisch K (1961) The dancing bees: an account of the life and senses of the honey bee, Paperback ed. A harvest/HBJ book. Harcourt Brace Jovanovich 24. Seeley TD, Mikheyev AS, Pagano GJ (2000) Dancing bees tune both duration and rate of waggle-run production in relation to nectar-source profitability. J Comp Physiol A Sens Neural Behav Physiol 186(9):813–819 25. D’Addona DM, Conte S, Teti R, Marzocchella A, Raganati F (2020) Feasibility study of using microorganisms as lubricant component in cutting fluids. Procedia CIRP 88:606–611 26. Mastrocinque E, Yuce B, Lambiase A, Packianather MS (2013) A multi-objective optimization for supply chain network using the Bees Algorithm. Int J Eng Bus Manage 5:38 27. Moradi A, Mirzakhani A, Ghanbarzadeh A (2015) Multi-objective optimization of truss structures using the bee algorithm. Scientia Iranica 22(5):1789–1800 28. Tsai H-C (2014) Integrating the artificial bee colony and Bees Algorithm to face constrained optimization problems. Inf Sci 258:80–93 29. Huang J, Pham DT, Ji C, Zhou Z (2020) Smart cutting tool integrated with optical fiber sensors for cutting force measurement in turning. IEEE Trans Instrum Meas 69(4), 8713500:1720–1727 30. Yuce B, Packianather MS, Mastrocinque E, Pham DT, Lambiase A (2013) Honey bees inspired optimization method: the Bees Algorithm. Insects 4(4):646–662 31. Thilagamani S, Shanthi N (2011) Object recognition based on image segmentation and clustering. J Comput Sci 7(11):1741–1748
Global Optimisation for Point Cloud Registration with the Bees Algorithm
Feiying Lan, Marco Castellani, Yongjing Wang, and Senjing Zheng
1 Introduction
Point cloud registration is the process of finding a rigid transformation that aligns two point sets. It has a wide range of applications in machine vision and robot manipulation. For example, cameras and LiDAR sensors are widely used to collect geometric information on the environment in the form of point clouds. 3D scanning reconstruction [1] aligns and merges the partial point clouds from range scans to build a complete model for reverse engineering [2]. In robotics, simultaneous localisation and mapping (SLAM) estimates the trajectory of the camera and builds a map of the environment by aligning multiview image data from different viewpoints [3, 4]. Additionally, 3D registration contributes to robot manipulation as a post-processing step, that is, after the reconstructed 3D point cloud of the object is obtained. In this case, the area of interest in the point cloud is aligned with a template of the target object, and the pose of the object is determined for robot picking and grasping [5, 6]. 3D registration is considered an optimisation problem, where the goal is to minimise the L2-norm error metric between the source point cloud and the target point cloud. Iterative Closest Point (ICP) [7, 8] is arguably the most popular method for 3D registration. It iteratively calculates the error metric using the closest point-to-point correspondence and attempts to minimise this metric via singular value decomposition (SVD). The
ICP convergence theorem [7] demonstrated the local optimisation abilities of the algorithm, i.e., the error metric decreases monotonically during the iterative process owing to the closest-point correspondence heuristic. However, the solution space is multimodal, and there are many local minima. Given unfavourable initial parameters, ICP can fail to find the globally optimal transformation, resulting in poor alignment of the two point clouds. Metaheuristic algorithms are general-purpose strategies that search with a global outlook of the solution space. A popular instance of metaheuristics is the Bees Algorithm (BA) [9, 10], which mimics the foraging behaviour of honey bees to solve complex optimisation problems. The BA balances random explorative and local exploitative search by reproducing the behaviour of scouts and foragers in bee colonies. In addition, the site abandonment procedure prevents the BA from remaining stuck in local dips of the error landscape. This paper proposes a global optimisation method for 3D point cloud registration based on the BA. The original BA is hybridised with a problem-specific SVD operator to enhance the efficiency of the search. Due to the global nature of the BA, the proposed optimiser can avoid sub-optimal convergence to local error minima. At the same time, the local SVD search operator accelerates the descent into the local basins of attraction and refines the precision of the final solution. The paper is structured as follows. Section 2 reviews the state of the art in point cloud registration. Section 3 mathematically formulates the registration problem, while Sect. 4 presents the proposed method. The experimental results in Sect. 5 demonstrate the performance of BA-SVD in comparison to state-of-the-art methods. Section 6 concludes the paper.
2 Related Work
The best-known method for 3D point cloud registration is ICP [7, 8], which defines a cost function via the root mean square metric using closest point correspondence. Least square minimisation of the cost function is achieved iteratively using SVD [11]. The effectiveness of ICP is documented by its convergence theorem [7]. Rusinkiewicz and Levoy [12] examined the performance of various variants of ICP featuring different sampling, matching, and reweighting strategies. Other authors focused on the solutions of various problems intrinsic to ICP. For example, the outlier problem was mitigated in various studies by trimming [13], distance measurements [14], M-estimation [15, 16], and soft rejection functions [17, 18]. Babin et al. [19] conducted a comparative study on popular outlier handling methods. The computational efficiency of ICP was improved via data pre-processing and k-d tree search modification [20, 21], linearization with projective data association [22], and Anderson acceleration [23]. Despite many improvements and variants, ICP is a local search method that is prone to get stuck in local minima of the error function. To obviate this limitation, global search methods have been investigated as alternatives to ICP. For example,
geometric Branch and Bound (BnB) strategies were used for 2D image registration [24, 25]. BnB strategies were also used for 3D registrations [26], although under the restrictive assumption that the two point clouds are misaligned in orientation only. The application of BnB strategies to general 3D problems is still an open problem due to the curse of dimensionality [27]. Yang et al. [28] adopted a nested BnB strategy in rotation and translation space to avoid direct search in the 6D space. Metaheuristics are popular optimisation techniques based on global search strategies. They are usually amenable to implementation in parallel computation schemes and can obtain acceptable solutions (although not guaranteed optimal) in a reasonable amount of time using limited computational resources. Popular metaheuristic algorithms include evolutionary algorithms and swarm intelligence. In the field of evolutionary algorithms, Brunnstrom and Stoddart [29] used a genetic algorithm (GA) for point cloud registration with a coarse-to-fine strategy. Robertson and Fisher [30] applied a parallel evolutionary approach to avoid premature convergence to local minima. Silva et al. [31] used a GA for a first coarse alignment, and stochastic hill-climbing search for quick refinement of the solutions. They reported occasional failures probably due to premature termination of the GA search. Zhu et al. [32] introduced a centre alignment technique to compress the search space and used the trimmed ICP algorithm [13] to accelerate the evolution of the GA population. Yan et al. [33] applied a GA to the registration of TLS-TLS (Terrestrial LiDAR Scanning) and TLS-MLS (Mobile LiDAR Scanning) point clouds. Sahillioğlu [34] and Edelstein et al. [35] used a GA to find the correspondence between two isometric shapes, while Zhang et al. [36] and Li and Dian [37] adopted differential evolution for the registration of partially overlapping point clouds. In the field of swarm intelligence, ant colony optimisation (ACO) was applied for image registration [38–40]. Image registration is akin to point cloud registration, with the main difference that its objective is to find the transformation aligning two input 2D images. Particle swarm optimisation (PSO) was used by Yu and Wang [41] and Ge et al. [42] for point cloud registration. Zhan et al. [43] applied the mean filter on k-nearest neighbours for noise suppression and used PSO for aligning the point clouds. Wongkhuenkaew et al. [44] used PSO to implement a hierarchical registration system and used it for tooth model reconstruction.
3 Problem Formulation
The 3D point cloud registration problem consists of finding the rigid transformation that aligns the source point cloud with the target point cloud. In detail, given a source point cloud X = \{x_i\}, i = 1, \ldots, N, and a target point cloud Y = \{y_j\}, j = 1, \ldots, M, the objective of 3D registration is to find a rotation matrix R and a translation vector t that minimise the L2-norm point-to-point mean square error:
\arg\min_{R,t} F = \arg\min_{R,t} \frac{1}{N} \sum_{i=1}^{N} \left\| y_{j^*} - (R x_i + t) \right\|_2^2    (1)

where y_{j^*} is the closest point in Y to x_i in X:

j^* = \arg\min_{j} \left\| y_j - (R x_i + t) \right\|_2^2    (2)
Equation 2 is used to drive the nearest neighbour search process and build the closest point correspondences between X and Y. This is a well-designed heuristic for local registration and has been used widely in the ICP algorithm and its variants. The non-convexity of the objective function described in Eqs. 1 and 2 has been discussed in the literature [28]. The aim of 3D registration is to solve this non-convex optimisation problem.
4 Methodology
This section describes the methodology used in the experimental work.
4.1 Encoding of the Candidate Solutions
A 3D registration transformation contains a rotation matrix R and a translation vector t. It is usually represented by the 3D special Euclidean group SE(3):

SE(3) = \left\{ T \;\middle|\; T = \begin{bmatrix} R & t \\ 0_3^{T} & 1 \end{bmatrix},\; R^{T} R = I,\; \det(R) = 1 \right\}    (3)
The rotation matrix is difficult to use directly in optimisation problems, since it has 3×3 elements but only 3 degrees of freedom once the two constraints in Eq. 3 are applied. It is therefore important to encode the rotation matrix into independent parameters. In this study, angle-axis vector encoding is used to represent the rotation matrix. The angle-axis vector is an unconstrained 3D vector r = (r_1, r_2, r_3)^T. The direction of r is the rotation axis, usually represented using a unit vector n = r/|r|, and the modulus of r is the rotation angle \theta = |r|. The transformation between r and R is implemented by Rodrigues' rotation formula [45]. Equation 4 shows the transformation from r to R:

R = \cos\theta \, I + (1 - \cos\theta)\, n n^{T} + \sin\theta \, n^{\wedge}    (4)
where I is the identity matrix and n^{\wedge} is the skew-symmetric matrix of the unit vector n, that is, n^{\wedge} = [n]_{\times}. The rotation vector r can also be computed from R:

\theta = \arccos\left( \frac{\mathrm{tr}(R) - 1}{2} \right), \qquad R n = n    (5)

where n is the eigenvector of R corresponding to the eigenvalue 1. Thus, r = \theta n. A candidate solution is encoded by concatenating the rotation vector r and the translation vector t, as shown in Eq. 6:

\xi = \left[ r^{T}, t^{T} \right]^{T} = [r_1, r_2, r_3, t_1, t_2, t_3]^{T}    (6)
The main advantage of the proposed encoding scheme is the possibility of modifying a candidate solution ξ without considering the constraints on the rotation matrix R.
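A minimal Python sketch of this encoding, consistent with Eqs. 4–6 but not taken from the authors' code, is given below; the helper names are introduced here for illustration.

```python
import numpy as np

def skew(n):
    """Skew-symmetric matrix n^ such that skew(n) @ v = n x v."""
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

def rotvec_to_matrix(r):
    """Rodrigues' formula (Eq. 4): angle-axis vector r -> rotation matrix R."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    n = r / theta
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * skew(n))

def matrix_to_rotvec(R):
    """Inverse mapping (Eq. 5): rotation matrix R -> angle-axis vector r = theta * n."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w, v = np.linalg.eig(R)                 # axis n: eigenvector for eigenvalue 1
    n = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    n /= np.linalg.norm(n)
    if not np.allclose(rotvec_to_matrix(theta * n), R, atol=1e-6):
        n = -n                              # fix the sign of the axis
    return theta * n

def decode(xi):
    """Candidate solution xi = [r1, r2, r3, t1, t2, t3] (Eq. 6) -> (R, t)."""
    return rotvec_to_matrix(np.asarray(xi[:3])), np.asarray(xi[3:6])
```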
4.2 Fitness Function for 3D Registration Evaluation
The fitness function is defined as the point-to-point mean square error (L2-norm error metric) between the source point cloud X = \{x_i\} and the target point cloud Y = \{y_j\}, as formalised in Eqs. 1 and 2. Let rot(\xi) and trans(\xi) denote the rotation matrix R and the translation vector t, respectively, associated with transformation \xi. The fitness function F is the following:

F = \frac{1}{N} \sum_{i=1}^{N} \left\| y_{j^*} - (\mathrm{rot}(\xi)\, x_i + \mathrm{trans}(\xi)) \right\|_2^2, \qquad \xi \in \mathbb{R}^6    (7)
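A hedged sketch of this fitness evaluation, using a k-d tree for the closest-point correspondence of Eq. 2 and the decode helper from the encoding sketch above (point clouds are assumed to be N x 3 and M x 3 NumPy arrays):

```python
import numpy as np
from scipy.spatial import cKDTree

def fitness(xi, source, target, target_tree=None):
    """Eq. 7: mean squared distance between the transformed source points
    and their nearest neighbours in the target point cloud."""
    R, t = decode(xi)                          # from the encoding sketch (Sect. 4.1)
    transformed = source @ R.T + t             # apply the candidate rigid transformation
    tree = target_tree or cKDTree(target)      # closest-point correspondences (Eq. 2)
    dists, _ = tree.query(transformed)
    return float(np.mean(dists ** 2))
```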
4.3 SVD Operation for 3D Point Cloud Registration
SVD is a technique used in ICP to obtain a closed-form solution to the least square fitting problem for two 3D point sets under given point correspondences [11]. Given a source point cloud X = \{x_i\} and a target point cloud Y = \{y_i\} (1 \le i \le N), the least square fitting of the two point clouds is formulated as in Eq. 8:

T^{*} = \arg\min_{T} \frac{1}{N} \sum_{i=1}^{N} \left\| y_i - T x_i \right\|_2^2, \qquad T \in SE(3)    (8)
Fig. 1 The ICP algorithm. ICP requires an initial guess transformation T. It iteratively calculates the fitness function using the closest point-to-point correspondence and minimises it via SVD
The first step is to find the centroids of the source and target point clouds:

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i    (9)
Once the centroids have been calculated, the correlation matrix between the two point clouds is calculated as in Eq. 10:

H = \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})^{T}    (10)
The SVD operation on the correlation matrix, H = U \Sigma V^{T}, gives the closed-form solution for the rotation matrix, R = V U^{T}. If \det(R) = -1, the algorithm fails. Otherwise, the algorithm returns the least square estimation of the rigid transformation for the two input point sets.
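A minimal Python sketch of this closed-form step (Eqs. 9 and 10) is shown below; the translation is recovered from the centroids, and the reflection case det(R) = -1 is reported as a failure, as in the text. This is an illustration, not the authors' implementation.

```python
import numpy as np

def svd_alignment(source, target):
    """Least-squares rigid transformation aligning source to target under a
    given one-to-one correspondence (row i of source corresponds to row i of target)."""
    x_bar = source.mean(axis=0)                     # centroids (Eq. 9)
    y_bar = target.mean(axis=0)
    H = (source - x_bar).T @ (target - y_bar)       # correlation matrix (Eq. 10)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                  # closed-form rotation R = V U^T
    if np.linalg.det(R) < 0:
        raise ValueError("Reflection case det(R) = -1: SVD alignment failed")
    t = y_bar - R @ x_bar                           # translation between the centroids
    return R, t
```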
4.4 Iterative Closest Point (ICP)
The ICP algorithm is a local optimisation method for 3D point cloud registration. It iteratively calculates the closest point correspondences under the current transformation and reduces the error using the SVD procedure, as summarised in the flowchart in Fig. 1.
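Combining the closest-point correspondence step with the SVD fit gives the loop of Fig. 1. The sketch below reuses the svd_alignment helper from the previous sketch and is a generic illustration of point-to-point ICP, with an assumed convergence tolerance, rather than the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, R0=np.eye(3), t0=np.zeros(3), max_iter=50, tol=1e-8):
    """Classic point-to-point ICP starting from the initial guess (R0, t0)."""
    tree = cKDTree(target)
    R, t = R0, t0
    prev_err, err = np.inf, np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        dists, idx = tree.query(moved)               # closest-point correspondences
        err = float(np.mean(dists ** 2))             # current residual error
        R, t = svd_alignment(source, target[idx])    # SVD least-squares fit
        if abs(prev_err - err) < tol:                # stop when the error stagnates
            break
        prev_err = err
    return R, t, err
```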
4.5 The Bees Algorithm for 3D Registration
The BA is a popular metaheuristic for complex optimisation problems [46] that simulates the food foraging behaviour of honeybee colonies. In a biological bee colony, a portion of the population scouts the environment looking for food sources (flower
patches). Once found, these food sources are harvested by forager bees. The BA employs artificial scout bees to perform a random explorative search of the solution space and artificial forager bees to perform an exploitative search in the neighbourhood of the most promising solutions. The BA was first proposed by Pham et al. [9], and its standard version is described by Pham and Castellani [10]. The behaviour of the BA was experimentally analysed by Pham and Castellani [47, 48] and mathematically analysed by Baronti et al. [49]. In this study, the BA uses the encoding method and fitness function described in Sects. 4.1 and 4.2. Each artificial bee lands on a solution represented by the 6-dimensional vector \xi described in Eq. 6. The goodness of this solution is evaluated based on the residual error in the registration of the two point clouds; this error is calculated using the fitness function described in Eq. 7. The bee colony is initialised with ns scout bees randomly scattered with uniform probability over the solution space. The solutions visited by the scout bees are evaluated and ranked by their residual errors. The top nb solutions are defined as the best sites and are selected for local search. Among them, the best ne ≤ nb sites are elite sites. Each scout that visited an elite site recruits nre foragers for local search, while the scouts that visited the remaining nb − ne best sites recruit nrb ≤ nre foragers. If a forager lands on a solution of lower residual error than the solution found by the scout, that forager replaces the scout in the next recruitment cycle. The remaining ns − nb scout bees perform global search (uniformly stochastic search) in the solution space. The bee colony size n is calculated as n = ne × nre + (nb − ne) × nrb + ns. The BA repeats cycles of site selection and local and global search until a given stopping criterion is met, and the best solution found during the search is returned as the final solution to the optimisation problem. In this study, the algorithm is terminated when either a solution with a residual error smaller than a pre-set threshold ε is found, or a given number Z of cycles has been performed. The first condition verifies that a solution of reasonable accuracy has been obtained, and the second condition terminates the search within a given computation run time. As in the standard version [10], the BA uses the neighbourhood shrinking and site abandonment procedures. The SVD algorithm is used as a problem-specific operator to speed up the convergence of the local search to the local minimum. It is also used to improve the accuracy of the solutions found via global search. One cycle of the SVD procedure is applied to all the solutions found by the scouts; that is, SVD is applied to all nb local bests and to the ns − nb solutions found via random global search. The flowchart of the SVD-enhanced BA is shown in Fig. 2.
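The problem-specific operator can be seen as one ICP-style refinement cycle applied to each scout's solution. A hedged sketch, reusing the decode, matrix_to_rotvec and svd_alignment helpers introduced in the earlier sketches, is:

```python
import numpy as np
from scipy.spatial import cKDTree

def svd_refine(xi, source, target, target_tree=None):
    """One SVD cycle applied to a candidate solution xi = [r, t]: build the
    closest-point correspondences under the current transformation, then
    replace (R, t) with the closed-form least-squares estimate."""
    tree = target_tree or cKDTree(target)
    R, t = decode(xi)
    moved = source @ R.T + t
    _, idx = tree.query(moved)                       # correspondences under xi
    try:
        R_new, t_new = svd_alignment(source, target[idx])
    except ValueError:
        return np.asarray(xi)                        # reflection case: keep xi unchanged
    return np.concatenate([matrix_to_rotvec(R_new), t_new])
```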
5 Experiments and Discussion
This section presents the experimental tests of the SVD-enhanced BA. The performance of the proposed algorithm is compared with that of the standard BA and of the state-of-the-art ICP algorithm.
Fig. 2 Bees Algorithm with SVD operation. For each generation of the bee colony, the scout bees are improved by one cycle of SVD
5.1 Dataset and Parameter Settings
The three algorithms were evaluated on the 10 shapes shown in Fig. 3. The armadillo, bunny, dragon and Lucy statue shapes were taken from the Stanford 3D Scanning Repository [50]. The horse was taken from the Large Geometric Models Archive at Georgia Tech, and the rest of the shapes were taken from the popular ModelNet repository [51]. A target point cloud of 10^4 elements was sampled from the surface of each shape. From this point cloud, 100 source point clouds of 10^4 data points each were generated via random rigid transformations. All shapes were constrained in a cube of size [−1000, 1000]^3 units, and the centres of the point clouds were moved to the centre of the coordinate axes. The aim of the registration algorithms was to find the rigid transformation aligning the source to the target point cloud. In total, the set of point clouds used to evaluate the performance of the proposed algorithm consisted of 10 × 100 elements, and each element was composed of 10^4 points.
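A hedged sketch of how such misaligned source clouds can be generated from a target cloud by a random rigid transformation (reusing rotvec_to_matrix from the Sect. 4.1 sketch; the translation range and random seed are illustrative assumptions, and the surface sampling of the repository meshes is not reproduced here):

```python
import numpy as np

def random_rigid_source(target, max_translation=500.0, rng=np.random.default_rng(0)):
    """Create a source cloud such that (R, t) is the ground-truth transformation
    aligning it back to the target, i.e. target = source @ R.T + t."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                     # random rotation axis
    theta = rng.uniform(0.0, np.pi)                  # random rotation angle
    R = rotvec_to_matrix(theta * axis)               # Rodrigues sketch from Sect. 4.1
    t = rng.uniform(-max_translation, max_translation, size=3)
    source = (target - t) @ R                        # inverse transform applied to the target
    return source, R, t
```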
Fig. 3 Shapes used in experiments to assess performance: a Armadillo, b Bunny, c Dragon, d Lucy statue (Stanford 3D Scanning Repository); e Horse (Large Geometric Models Archive at Georgia Tech); f Airplane, g Lamp, h Person, i Plant, j Table (ModelNet repository)
Table 1 Parameter settings of the Bees Algorithm for 3D point cloud registration
Maximum iterations: 100
Convergence criteria: 0.01
Stagnation cycle: 10
Shrinking neighbourhood rate: 0.8

n  | ne | nb | nre | nrb | ns
2  | –  | 1  | –   | 2   | 1
4  | –  | 2  | –   | 1   | 2
6  | –  | 2  | –   | 2   | 2
8  | –  | 2  | –   | 2   | 4
10 | –  | 2  | –   | 2   | 6
20 | –  | 4  | –   | 3   | 8
30 | –  | 5  | –   | 4   | 10
40 | –  | 6  | –   | 5   | 10
The BA was parameterised as reported in Table 1, including the incremental bee colony sizes used in Sect. 5.2. The experimental tests investigated the consistency, precision, and robustness to noise of the SVD-enhanced BA and compared its results to those obtained using ICP and the standard BA.
5.2 Consistency
Consistency is the measure of the success rate. The registration process may return an erroneous transformation with a high residual error if local convergence occurs; in this case, the erroneous transformation is marked as a failure. For the clean dataset used in this paper, a registration attempt was considered successful when the fitness score defined in Eq. 7 satisfied the empirically set condition F < 100.0. The consistency of the algorithm indicates its susceptibility to local optima and is defined quantitatively as in Eq. 11:

\mathrm{consistency} = \frac{\text{number of successful runs}}{\text{number of total runs}}    (11)
The search capability of the BA is affected by the size of the artificial bee colony: the larger the size, the more effective the search. However, a large colony size increases the computational cost of running the algorithm, ultimately increasing its execution time. The first set of experiments aimed to show the effect of varying the bee colony size on the 3D registration results. The colony size for the standard BA was increased from 10 to 40 in steps of 10.
Fig. 4 The success rate of the two BA versions versus bee colony size. The success rate of ICP is also plotted for reference
Table 2 The success rate of the BA algorithms with different bee colony sizes

Bee colony size           2        4        6        8        10       20      30      40
Bees Algorithm            –        –        –        –        76.1%    85.7%   92.9%   98.0%
Bees Algorithm with SVD   73.5%    98.3%    99.3%    99.6%    99.9%    100%    100%    100%
ICP                       68.67% (independent of colony size)
99.9% for a colony of size 10. The colony size in this case was increased from 2 to 10 in steps of 2 and then again from 10 to 40 in steps of 10. The success rates obtained by the three algorithms are plotted in Fig. 4 and detailed in Table 2. The success rate of ICP on the whole dataset is 68.67%, indicating that it is prone to sub-optimal convergence to local minima. The success rate of the standard BA is 76.1% for a bee colony size of 10 and rises up to 98.0% when the colony size is increased to 40. As already discussed, the SVD-enhanced BA reached a success rate very close to 100% using as few as 8 artificial bees and a 100% success rate using 20 artificial bees. Accordingly, the bee colony size was fixed to 20 individuals for the remaining experiments. The results show that both BA algorithms can achieve high success rates using moderately small (standard BA) or very small (SVD-enhanced BA) population sizes. The superior performance of the SVD-enhanced algorithm over the standard version confirms the usefulness of the problem-specific operator. In detail, the standard BA performs random local search, using neighbourhood shrinking to maintain the search efficiency. However, SVD helps the local search to quickly find the minimum of the basin of attraction, increasing the efficiency of the algorithm. Once found, local minima are abandoned following the application of the site abandonment procedure, and new local searches can be initialised in other regions of the solution space. The
overall result is an increase in the exploration capability of the BA, and thus an increased resilience to sub-optimal convergence.
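To make the role of the problem-specific operator concrete, the sketch below shows how one SVD cycle could refine a scout bee's candidate pose, using the closed-form least-squares rigid alignment of Arun et al. [11]. This is a minimal illustration, not the authors' implementation: the function names, the brute-force nearest-neighbour correspondences and the (n, 3) NumPy array layout are assumptions made for the example.

import numpy as np

def svd_rigid_alignment(source, target):
    # Least-squares rigid transform (R, t) mapping source points onto their
    # corresponding target points via SVD (Arun et al. [11] / Kabsch).
    mu_s = source.mean(axis=0)
    mu_t = target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

def refine_scout(source, target, R0, t0):
    # One SVD cycle applied to a scout bee's candidate pose (R0, t0).
    moved = source @ R0.T + t0
    # naive nearest-neighbour correspondences (a KD-tree would be faster)
    d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    idx = np.argmin(d2, axis=1)
    dR, dt = svd_rigid_alignment(moved, target[idx])
    return dR @ R0, dR @ t0 + dt                 # compose with the previous pose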
5.3 Precision The precision of the algorithms was evaluated from the residual error of the final solution, that is, from the fitness F of the final solution. A low residual error indicates that the two point clouds are well aligned, while a high residual error indicates a failure of the algorithm to align the two point clouds. The ICP algorithm and the BA algorithms with and without SVD were run on the dataset of 10 shapes. For each run, the residual error was recorded. The residual errors of the algorithms are plotted on a logarithmic scale in Fig. 5. The box plot presents the distribution of the results obtained using the three algorithms. The same results are tabulated in Table 3. As Fig. 5 shows, the performance of the ICP algorithm is characterised by a large spread in the distribution of the results. Despite being capable of obtaining solutions of average (median) F of very small (10^−7) order of magnitude, ICP was prone to local convergence, which determined the large interquartile range (IQR) of 1.75 × 10^3.
Fig. 5 The distributions of residual error of the algorithms. The residual error (Y-axis) was plotted in logarithmic scale
Table 3 Spread measure of the residual errors

              Min           Median        Max           IQR
ICP           1.71 × 10^−9  4.45 × 10^−7  1.93 × 10^4   1.75 × 10^3
BA            8.2 × 10^−3   65.17         4.75 × 10^3   179.59
BA with SVD   2.76 × 10^−8  3.23 × 10^−4  9.96 × 10^−3  1.8 × 10^−3
The distribution of the results obtained by the standard BA concentrated around the median value of F = 65.17. The small spread of the results (IQR of 179.59) indicates the consistency of the BA. However, in terms of absolute precision, the results obtained by the standard BA were not optimal. This lack of precision was due to the stochastic nature of the local search operator, which was not always able to locate the optimum within the given number of optimisation cycles. The BA with the SVD algorithm combined the consistency of the standard BA with the precision of the ICP algorithm. The maximum residual error F is of order of magnitude 10^−3, well below the threshold of F = 100 where the registration process is considered unsuccessful. That is, all the obtained solutions precisely aligned the point clouds. The above results mirror and add detail to those presented in Table 2.
5.4 Robustness to Noise Noise robustness concerns the ability of the algorithm to maintain high performance when dealing with data corrupted by noise. In this set of experiments, the position of the points in the clouds (Fig. 3) was randomly perturbed by a small amount δ. The maximum displacement of the points (the noise level) depended on the size of the point cloud. Five series of tests were conducted for noise levels corresponding to 3, 6, 9, 12, and 15% of the entire shape size. An example of a point cloud corrupted with various levels of noise is shown in Fig. 6. The success rates of ICP and the standard and SVD-enhanced BA are detailed in Table 4 and visualised in Fig. 7.
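The noise model described above can be sketched in a few lines. This is only an illustration of the procedure as described (maximum displacement proportional to shape size); the uniform distribution of the displacements and the definition of shape size as the bounding-box diagonal are assumptions of the sketch, since the chapter does not specify them.

import numpy as np

def perturb_cloud(points, noise_level, rng=np.random.default_rng()):
    # points: (n, 3) array; noise_level: e.g. 0.03 for the 3% noise level.
    shape_size = np.linalg.norm(points.max(axis=0) - points.min(axis=0))  # bounding-box diagonal (assumed)
    max_disp = noise_level * shape_size
    # each coordinate is displaced by at most max_disp (uniform draw, assumed)
    return points + rng.uniform(-max_disp, max_disp, size=points.shape)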
Fig. 6 Bunny shape point cloud perturbed with noise of level ranging from 3 to 15%
Table 4 Success rates of the algorithms on noisy model sets

Noise level (%)   3       6       9       12      15
ICP               0.485   0.497   0.493   0.475   0.443
BA                0.908   0.91    0.933   0.943   0.946
BA with SVD       1       1       1       1       1
Fig. 7 The success rate of the algorithms with incremental noise levels
The success rate of the ICP algorithm is poor on noisy datasets. The success rate of the standard BA remains high and consistent across the various levels of noise. Perhaps counter-intuitively, it improves as the noise level increases. This result is probably because the attraction basin of the global error minimum is widened, and the fitness function landscape is smoothed, by the introduction of noise. The results in Table 4 also show that the SVD-enhanced BA achieved a 100% success rate for all noise levels. That is, the performance of the proposed algorithm is not affected by noise of small to moderate levels in the clouds.
6 Conclusion This paper proposed an SVD-enhanced BA as a tool for the solution of the 3D registration problem. The algorithm combines the robust global search approach of the Bees Algorithm metaheuristic with the fast local search of the SVD approach. Compared to the standard BA and the ICP algorithm, the proposed approach excelled in terms of precision, consistency, and speed. Experimental evidence also demonstrated that the SVD-enhanced BA is highly resilient to noisy shapes. This latter feature makes the SVD-enhanced BA an ideal candidate for industrial applications where sensor noise is an issue. Acknowledgements This work was funded by the UK Engineering and Physical Sciences Research Council (EPSRC), Grant No. EP/N018524/1—Autonomous Remanufacturing (AutoReman) project.
References 1. Levoy M, Pulli K, Curless B, Rusinkiewicz S, Koller D, Pereira L, Ginzton M, Anderson S, Davis J, Ginsberg J et al (2000) The digital michelangelo project: 3d scanning of large statues. In: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp 131–144 2. Son S, Park H, Lee KH (2002) Automated laser scanning system for reverse engineering and inspection. Int J Mach Tools Manuf 42(8):889–897 3. Taketomi T, Uchiyama H, Ikeda S (2017) Visual slam algorithms: a survey from 2010 to 2016. IPSJ Trans Comput Vis Appl 9(1):1–11 4. Newcombe RA, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison AJ, Kohi P, Shotton J, Hodges S, Fitzgibbon A (2011) Kinectfusion: real-time dense surface mapping and tracking. In: 2011 10th IEEE international symposium on mixed and augmented reality. IEEE, pp 127–136 5. Boehnke K (2007) Object localization in range data for robotic bin picking. In: 2007 IEEE international conference on automation science and engineering. IEEE, pp 572–577 6. Kosuge A, Oshima T (2019) An object-pose estimation acceleration technique for picking robot applications by using graph-reusing k-nn search. In: 2019 first international conference on graph computing (GC). IEEE, pp 68–74 7. Besl PJ, McKay ND (1992) Method for registration of 3-d shapes. In: Sensor fusion IV: control paradigms and data structures, vol 1611. International society for optics and photonics, pp 586–607 8. Chen Y, Medioni G (1992) Object modelling by registration of multiple range images. Image Vis Comput 10(3):145–155 9. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. In: Intelligent production machines and systems. Elsevier, pp 454–459 10. Pham DT, Castellani M (2009) The bees algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng C J Mech Eng Sci 223(12):2919–2938 11. Arun KS, Huang TS, Blostein SD (1987) Least-squares fitting of two 3-d point sets. IEEE Trans Pattern Anal Mach Intell 5:698–700 12. Rusinkiewicz S, Levoy M (2001) Efficient variants of the ICP algorithm. In 3dim, vol 1, pp 145–152 13. Chetverikov D, Svirko D, Stepanov D, Krsek P (2002) The trimmed iterative closest point algorithm. In: Object recognition supported by user interaction for service robots, vol 3. IEEE, pp 545–548 14. Phillips JM, Liu R, Tomasi C (2007) Outlier robust ICP for minimizing fractional rmsd. In: Sixth international conference on 3-D digital imaging and modeling (3DIM 2007). IEEE, pp 427–434 15. Bosse M, Agamennoni G, Gilitschenski I et al (2016) Robust estimation and applications in robotics. Now Publishers 16. Bergström P, Edlund O (2017) Robust registration of surfaces using a refined iterative closest point algorithm with a trust region approach. Numer Algor 74(3):755–779 17. Bouaziz S, Tagliasacchi A, Pauly M (2013) Sparse iterative closest point. In: Computer graphics forum, vol 32. Wiley Online Library, pp 113–123 18. Agamennoni G, Fontana S, Siegwart RY, Sorrenti DG (2016) Point clouds registration with probabilistic data association. In: 2016 IEEE/RSJ Int Conf Intell Rob Syst (IROS). IEEE, pp 4092–4098 19. Babin P, Giguere P, Pomerleau F (2019) Analysis of robust functions for registration algorithms. In: 2019 International conference on robotics and automation (ICRA). IEEE, pp 1451–1457 20. Greenspan M, Yurick M (2003) Approximate KD tree search for efficient icp. 
In: Fourth international conference on 3-D digital imaging and modeling, 3DIM 2003. Proceedings. IEEE, pp 442–448
21. Nuchter A, Lingemann K, Hertzberg J (2007) Cached KD tree search for icp algorithms. In: Sixth international conference on 3-D digital imaging and modeling (3DIM 2007). IEEE, pp 419–426 22. Low KL (2004) Linear least-squares optimization for point-to-plane icp surface registration. Chapel Hill, Univ North Carolina 4(10):1–3 23. Pavlov AL, Ovchinnikov GWV, Derbyshev DY, Tsetserukou D, Oseledets IV (2018) AA-ICP: iterative closest point with Anderson acceleration. In: 2018 IEEE international conference on robotics and automation (ICRA). IEEE, pp 1–6 24. Mount DM, Netanyahu NS, Le Moigne J (1999) Efficient algorithms for robust feature matching. Pattern Recogn 32(1):17–38 25. Breuel TM (2003) Implementation techniques for geometric branch-and-bound matching methods. Comput Vis Image Understand 90(3):258–294 26. Li H, Hartley R (2007) The 3d-3d registration problem revisited. In: 2007 IEEE 11th international conference on computer vision. IEEE, pp 1–8 27. Olsson C, Kahl F, Oskarsson M (2008) Branch-and-bound methods for euclidean registration problems. IEEE Trans Pattern Anal Mach Intell 31(5):783–794 28. Yang J, Li H, Campbell D, Jia Y (2015) Go-icp: a globally optimal solution to 3d icp point-set registration. IEEE Trans Pattern Anal Mach Intell 38(11):2241–2254 29. Brunnstrom K, Stoddart AJ (1996) Genetic algorithms for free-form surface matching. In: Proceedings of 13th international conference on pattern recognition, vol 4. IEEE, pp 689–693 30. Robertson C, Fisher RB (2002) Parallel evolutionary registration of range data. Comput Vis Image Underst 87(1–3):39–50 31. Silva L, Bellon ORP, Boyer KL (2005) Precision range image registration using a robust surface interpenetration measure and enhanced genetic algorithms. IEEE Trans Pattern Anal Mach Intell 27(5):762–776 32. Zhu J, Meng D, Li Z, Du S, Yuan Z (2014) Robust registration of partially overlapping point sets via genetic algorithm with growth operator. IET Image Proc 8(10):582–590 33. Yan L, Tan J, Liu H, Xie H, Chen C (2017) Automatic registration of tls-tls and tls-mls point clouds using a genetic algorithm. Sensors 17(9):1979 34. Sahillio˘glu Y (2018) A genetic isometric shape correspondence algorithm with adaptive sampling. ACM Trans Graph (TOG) 37(5):175 35. Edelstein M, Ezuz D, Ben-Chen M (2019) Enigma: evolutionary non-isometric geometry matching. arXiv preprint arXiv:1905.10763 36. Zhang X, Yang B, Li Y, Zuo C, Wang X, Zhang W (2018) A method of partially overlapping point clouds registration based on differential evolution algorithm. PLoS ONE 13(12):e0209227 37. Li C, Dian S (2018) Dynamic differential evolution algorithm applied in point cloud registration. In: IOP conference series: materials science and engineering, vol 428. IOP Publishing, p 012032 38. Hong-yan L (2015) Study on mutual information medical image registration based on ant algorithm. Int J Hybrid Inf Technol 8(9):353–360 39. Gupta S, Grover N et al (2016) A new optimization approach using smoothed images based on aco for medical image registration. Int J Inf Eng Electron Bus 8(2) 40. Wu Y, Ma W, Miao Q, Wang S (2019) Multimodal continuous ant colony optimization for multisensor remote sensing image registration with local search. Swarm Evol Comput 47:89–95 41. Yu Q, Wang K (2014) A hybrid point cloud alignment method combining particle swarm optimization and iterative closest point method. Adv Manuf 2(1):32–38 42. 
Ge Y, Wang B, Nie J, Sun B (2016) A point cloud registration method combining enhanced particle swarm optimization and iterative closest point method. In: 2016 Chinese control and decision conference (CCDC). IEEE, pp 2810–2815 43. Zhan X, Cai Y, He P (2018) A three-dimensional point cloud registration based on entropy and particle swarm optimization. Adv Mech Eng 10(12):1687814018814330 44. Wongkhuenkaew R, Auephanwiriyakul S, Chaiworawitkul M, TheeraUmpon N (2021) Threedimensional tooth model reconstruction using statistical randomization-based particle swarm optimization. Appl Sci 11(5):2363
45. Murray RM, Li Z, Sastry SS (2017) A mathematical introduction to robotic manipulation. CRC Press 46. Pham DT, Baronti L, Zhang B, Castellani M (2018) Optimisation of engineering systems with the bees algorithm. Int J Artif Life Res (IJALR) 8(1):1–15 47. Pham DT, Castellani M (2014) Benchmarking and comparison of nature-inspired populationbased continuous optimisation algorithms. Soft Comput 18(5):871–903 48. Pham DT, Castellani M (2015) A comparative study of the bees algorithm as a tool for function optimisation. Cogent Eng 2(1):1091540 49. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the bees algorithm. Swarm Evol Comput 59:100746 50. Levoy M, Gerth J, Curless B, Pull K (2005) The Stanford 3d scanning repository, vol 5. http:// www-graphics.stanford.edu/data/3dscanrep 51. Wu Z, Song S, Khosla A, Yu F, Zhang L, Tang X, Xiao J (2015) 3d shapenets: a deep representation for volumetric shapes. In: The IEEE conference on computer vision and pattern recognition (CVPR), pp 1912–1920
Automatic PID Tuning Toolkit Using the Multi-Objective Bees Algorithm Murat Şahin and Semih Çakıroğlu
1 Introduction Metaheuristic optimisation algorithms, also called unconventional optimisation algorithms, have been developed to solve complex optimisation problems. These methods are based on heuristic ideas [1]. One of the popular metaheuristic algorithms is the Bees Algorithm (BA). The BA, proposed by Pham et al. in 2005, is a population-based search algorithm that mimics the nectar source-seeking behaviour of honey bees. Essentially, it performs a kind of neighbourhood search combined with random search and can be used for both continuous and discrete optimisation [2]. Despite its simple structure, it is a flexible and powerful algorithm. It has been applied successfully in many different areas, such as optimisation of mechanical part design [3], control systems optimisation [4], robotic part design optimisation [5], circuit design optimisation [6], vehicle routing planning [7], electric motors [8], and the travelling salesman problem [9]. There are several applications where the BA has been used for control system optimisation. In one of the first studies, the PID controller of a robot manipulator was tuned: the six gains of the PID controller were adjusted to minimise vibrations and positioning errors [10]. In another robotic application, the PID controller of one leg of a quadruped robot was tuned [11]. Once it was shown that PID parameters could be tuned successfully with the BA, the approach was tried in further studies. In a study on DC motor control, PID parameters were tuned using a multi-objective BA [4]. In another study, PID control systems used in high-order plants were tuned with a multi-objective BA [12]. Apart from PID, the BA has also started to be used in other
control systems. For example, it has been used to tune the sliding mode controller employed in the load frequency control of a power system [13]. Although different control systems have been developed over the years, PID controllers are still widely used in industry. PID algorithms have again become popular, especially with the development of adaptable and robust PIDs. PID control systems have multiple objective functions, and these functions may need to be minimised or maximised simultaneously. These types of optimisation problems are called multi-objective optimisation problems. In some cases, these functions act against one another, so that when one improves, another may worsen [14]. In this situation, one of the most preferred methods is the weighted sum method, in which each objective has a weighting coefficient. This method is also called the scalarisation method: multiple objectives are combined into a single function using weights [15]. There are three different methods depending on how the weights are chosen: equal weights, rank order centroid weights, and rank-sum weights. With equal weights, the value of each weight is the same, and their total is 1. In the other methods, specific equations are defined for the weights. Apart from these equations, the weights can also be determined according to the users' experience [14, 15]. In this study, motivated by the examples given above, the BA was chosen to tune the PID parameters. Control systems have multiple objectives that need improvement, such as rise time, settling time, overshoot, and error. For this reason, a multi-objective BA was prepared for the tuning of PID parameters. The remainder of this chapter is organised as follows. The first section presents the PID control system and tuning methods. The second section describes the BA and the multi-objective optimisation approach. The third section explains the auto-tuning tool, which has been developed so that it can be applied to different systems: if the transfer function of a system is entered, it calculates appropriate PID coefficients, and by adjusting the tool, the aggressiveness, speed and similar characteristics of the control system can be set. The last section contains the evaluations and analyses.
2 PID Control and Tuning Methods Before evaluating controller tuning options, a brief introduction to the proportional-integral-derivative (PID) control concept, probably the most popular controller structure, is in order. Considering that the controller aims to provide a control rule, computed as shown in Fig. 1, from the deviation of the system state to be controlled, a robotic system can be defined by Eq. 1 in terms of the joint variables q and the control action u:

M(q) \ddot{q} + C(q, \dot{q}) + g(q) + d(q) = u    (1)

where M(q) is the inertia matrix, C(q, \dot{q}) is the centrifugal force, g(q) is the gravitational force vector, and d(q) is the disturbance. However, the robotic system is assumed to be linearized and represented as a transfer function of each joint actuation,
Fig. 1 Closed-loop system control structure
as shown in Eq. 2:

J \ddot{q} + B \dot{q} + K q = u    (2)

where J is the linearized inertia matrix, B is the linearized damping matrix, and K is the stiffness matrix. Nevertheless, the control action is similar, not described as a vector but as a scalar per joint. The mentioned action can be proportional to the error, which is simply the difference between the desired point q* and the response q. Additionally, the control law may include terms related to the derivative and/or integral of the error. The PID control structure in Fig. 2, which includes all of these, follows Eq. 3 as a function of the error e:

u = K_p e + K_i \int_0^t e(\tau) \, d\tau + K_d \dot{e}    (3)

or, in the form of a transfer function, Eq. 4 [16]:

G_c(s) = K_p \left( 1 + \frac{1}{T_i s} + T_d s \right)    (4)
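To make Eq. 3 concrete, a minimal discrete-time version of the PID law is sketched below, with rectangular integration and a backward-difference derivative. The sample time dt, the class interface and the absence of anti-windup are assumptions of the sketch, not part of the toolkit described in this chapter.

class PID:
    # Minimal discrete PID following Eq. 3: u = Kp*e + Ki*integral(e) + Kd*de/dt
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative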
Since the linearized system can be evaluated partially, each joint structure is evaluated as a single-input single-output (SISO) system with a generalised transfer function, as in Eq. 5:

G(s) = \frac{K_s e^{-Ls}}{Ts + 1}    (5)

where K_s is the steady-state value, L is the delay time, and T is the time constant of the open-loop step response of the system. According to the first method of the Ziegler-Nichols rules for tuning PID controller parameters, the controller transfer function is described in the following form (Eq. 6):

G_c(s) = 1.2 \frac{T}{L} \left( 1 + \frac{1}{2Ls} + 0.5Ls \right)    (6)
Fig. 2 Scheme of PID controller
Therefore, the PID controller includes a pole placed at the origin and a double zero at s = −1/L. On the other hand, there is a second Ziegler-Nichols method, which is a frequency domain approach and uses cyclic behaviour parameters. The other step-response-related way to tune PID controller parameters is a MATLAB interface (Fig. 3), which is included in the Control System Toolbox [17]. It has a compact panel that is easy to use. By making several adjustments, the tuned controller parameters and closed-loop system information can be obtained (Fig. 4). The adjustments are related to the controller goals, such as response time and robustness. This tool uses an optimisation-based code in the background.
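Reading the gains off Eq. 6, the first Ziegler-Nichols rule can be sketched as follows for a plant of the form of Eq. 5. The conversion from the series (Kp, Ti, Td) form to parallel (Kp, Ki, Kd) gains and the illustrative argument values are assumptions of the sketch.

def ziegler_nichols_pid(T, L):
    # First Ziegler-Nichols rule (Eq. 6): Kp = 1.2*T/L, Ti = 2*L, Td = 0.5*L
    kp = 1.2 * T / L
    ti = 2.0 * L
    td = 0.5 * L
    # parallel-form gains: Ki = Kp / Ti, Kd = Kp * Td
    return kp, kp / ti, kp * td

# e.g. kp, ki, kd = ziegler_nichols_pid(T=60.0, L=1.0)   # illustrative values only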
3 Bees Algorithm and Multi-Objective Optimisation The BA solves the optimisation problem iteratively. There are local and global search sections. In the local search section, the elite sites receive more neighbourhood searches than the other selected sites. The parameters of the algorithm are: number of scout bees (n), number of local search sites (m), number of elite sites (e), number of foraging bees for each elite site (nep), number of foraging bees for each other local site (nsp), size of the neighbourhood (ngh), and number of iterations (I) [18]. The elite site search has two nested loops. Neighbourhood searches are made nep times for each bee (n(i)) on an elite site (e). The best of these neighbours is then selected, that is, a greedy selection is made. This best neighbour is compared with member n(i) of the population; if its fitness value is smaller, it replaces n(i) as the new member of the population. The flow chart of the elite site search is given in Fig. 5. The best site search has the same structure: it is performed on the non-elite sites within the local search area, and nsp searches are made. When designing control systems, in general, the integral of the square of the error (ISE) is used. The equation of the ISE is given in Eq. 7, in which the reference
Fig. 3 MATLAB PID tuner interface
Fig. 4 PID tuner output panel
Fig. 5 The flow chart of the elite site search
value is denoted by r(t), the output value by y(t) and the error value by e(t):

ISE = \int_0^{\infty} (r(t) - y(t))^2 \, dt = \int_0^{\infty} e(t)^2 \, dt    (7)
The purpose of multi-objective (MO) optimisation is to show the trade-offs between objectives and provide a general view of the problem to the designer. MO optimisation also tries to minimise or maximise all objective functions at the same time. The general definition in the literature is given in Eq. 8:

\min (\text{or } \max) \; \{ f_1(x) = y_1, \; f_2(x) = y_2, \; \ldots, \; f_j(x) = y_j \}    (8)
In this study, the weighted sum method is selected for multi-objective optimisation. The general equation of the weighted sum method is given in Eq. 9:

\min (\text{or } \max) \; \sum_{j=1}^{N} \lambda_j f_j(x)    (9)
The ISE, maximum percent overshoot (MPO), rise time (RT) and settling time (ST) have been chosen as the components of the multi-objective function. All components are required to be minimised. The multi-objective function used in the algorithm is given in Eq. 10:

MOF = \lambda_1 ISE + \lambda_2 RT + \lambda_3 ST + \lambda_4 MPO    (10)
Equation 10 shows that a single function is created from the sum of all the objectives. The critical issue here is to define the lambdas. The lambda values can be equal when equal minimisation is desired for each component, while the lambda of a component for which more minimisation is desired can be set higher. In this study, the lambda values are designed to be adjustable in the tool. In this way, different values can be chosen according to the system used and the priorities. In the local search section, first, random PID parameters are generated in the neighbourhood. Then, the closed-loop transfer function and the bandwidth of the system are calculated. If the bandwidth value is appropriate, the step response analysis is started (if it is not suitable, the process is finished). The MOF is calculated; if the new value is less than the best value found so far, the new PID parameters are transferred to the best PID parameter variable. If it is not less, the process is finished without any action. The local search section of the MO-BA is given in Fig. 6, and the model of the PID tuner is given in Fig. 7.
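The greedy local search step described above can be sketched as follows: a random PID candidate is drawn in the neighbourhood of a site, the closed-loop step response is simulated, the components of Eq. 10 are extracted, and the candidate is kept only if its MOF improves on the best value found so far. The bandwidth check mentioned in the text is omitted for brevity; the helper names, the 2% settling band, the 10–90% rise time and the use of SciPy are assumptions of this sketch.

import numpy as np
from scipy import signal

def step_metrics(plant_num, plant_den, kp, ki, kd, t_end=120.0):
    # Closed-loop unit-step metrics for PID C(s) = (Kd s^2 + Kp s + Ki)/s
    # around the given plant, with unity feedback.
    num_ol = np.polymul([kd, kp, ki], plant_num)
    den_ol = np.polymul([1.0, 0.0], plant_den)
    den_cl = np.polyadd(den_ol, num_ol)
    t = np.linspace(0.0, t_end, 4000)
    t, y = signal.step(signal.lti(num_ol, den_cl), T=t)
    ise = np.trapz((1.0 - y) ** 2, t)                            # Eq. 7
    mpo = max(0.0, (y.max() - 1.0) * 100.0)                      # overshoot (%)
    rt = t[np.argmax(y >= 0.9)] - t[np.argmax(y >= 0.1)]         # 10-90% rise time
    outside = np.abs(y - 1.0) > 0.02
    st = t[np.where(outside)[0][-1]] if outside.any() else 0.0   # 2% settling time
    return ise, rt, st, mpo

def mof(metrics, lambdas):
    # Weighted-sum objective of Eq. 10.
    ise, rt, st, mpo = metrics
    l1, l2, l3, l4 = lambdas
    return l1 * ise + l2 * rt + l3 * st + l4 * mpo

def local_search_step(plant_num, plant_den, best_pid, best_mof, ngh, lambdas):
    # One neighbourhood search: sample a random PID near the site and keep it
    # only if it improves the multi-objective fitness (greedy selection).
    candidate = np.clip(best_pid + np.random.uniform(-ngh, ngh, size=3), 0.0, None)
    m = mof(step_metrics(plant_num, plant_den, *candidate), lambdas)
    return (candidate, m) if m < best_mof else (best_pid, best_mof)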
4 Automatic PID Tuner The developed optimisation tool offers similar capabilities in terms of visualisation, parameter setting, and ease of access to the solution. For a more detailed description, the tool can be divided into two main parts: the system section and the optimisation-related parameter section. First, the interface provides a section where the user can enter the information of the plant in transfer function form. This form takes the sets of coefficients of the numerator and the denominator and shows the resulting transfer function. Additionally, the controller parameters to be optimised can be bounded. The results are also listed on that panel (Fig. 8). A joint of a robotic manipulator is considered as an example, as in Eq. 11.
Fig. 6 The local search of MO Bees Algorithm
Fig. 7 Model of PID parameters tuning with MO-BA
Fig. 8 Plant definition section
The second section in Fig. 9 includes optimisation algorithm-related parameters, which define the bee colony, iteration number, and objective function of the optimisation problem. The colony is fully defined with the set of population and search area parameters.

G(s) = \frac{0.93}{60 s^2 + 9 s + 1}    (11)
Fig. 9 Algorithm parameters
Fig. 10 Goal weight setting sliders
The user can set the weights of the objectives via the sliders in Fig. 10. One of them adjusts the speed importance level, which is related to the common weight of settling time and bandwidth. The second slider helps to define the common weight of steady-state error and overshoot. Finally, the optimisation progress is shown by a plot for each iteration and a progress bar, which vanishes after the last iteration. The achieved controller parameters, together with basic system response information, are listed when the search finishes (Fig. 11).
5 Discussion and Conclusion For comparison, controllers for the plant given in Eq. 11 were tuned via the Ziegler-Nichols, MATLAB PID Tuner, and MOBA PID Tuner methods, and their closed-loop step responses were compared. The resulting parameters are listed in Table 1. The MOBA tuner provides satisfying results, as shown in Fig. 12 and Table 2. The rise time is shortest with Ziegler-Nichols, but the system settles much later and exhibits a huge overshoot. On the other hand, the MATLAB PID Tuner gives a similar rise time, but the MOBA PID Tuner settles four times sooner with a smaller overshoot. From a computational point of view, the iterations take seconds rather than minutes to complete; in any case, speed is not an essential requirement, since tuning is an offline procedure. To test the repeatability of the algorithm, the optimisation sequence was repeated 10 times for the same parameter set. The results in Table 3 and Fig. 13 show that the solutions of different executions are similar.
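A short script along the following lines could be used to reproduce the comparison for the plant of Eq. 11 with the gains of Table 1. A unity-feedback loop with an ideal parallel PID C(s) = Kp + Ki/s + Kd·s is assumed, so the numbers will not necessarily match Table 2 exactly (the published results may include additional details such as derivative filtering).

import numpy as np
from scipy import signal

plant_num, plant_den = [0.93], [60.0, 9.0, 1.0]        # plant of Eq. 11
gains = {                                               # (Kp, Ki, Kd) from Table 1
    "Ziegler-Nichols":  (8.07, 1.64, 9.93),
    "MATLAB PID tuner": (2.47, 0.20, 7.71),
    "MOBA PID tuner":   (3.95, 0.37, 19.87),
}

for name, (kp, ki, kd) in gains.items():
    # open loop C(s)G(s) with C(s) = (Kd s^2 + Kp s + Ki)/s
    num_ol = np.polymul([kd, kp, ki], plant_num)
    den_ol = np.polymul([1.0, 0.0], plant_den)
    # unity-feedback closed loop: T(s) = C G / (1 + C G)
    t, y = signal.step(signal.lti(num_ol, np.polyadd(den_ol, num_ol)),
                       T=np.linspace(0.0, 120.0, 4000))
    overshoot = max(0.0, (y.max() - 1.0) * 100.0)
    print(f"{name:17s}  overshoot = {overshoot:5.1f} %")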
Fig. 11 Interface when optimisation has finished

Table 1 Parameter results of tuning methods

Method              Kp     Ki     Kd
Ziegler-Nichols     8.07   1.64   9.93
MATLAB PID tuner    2.47   0.2    7.71
MOBA PID tuner      3.95   0.37   19.87
Fig. 12 Step responses of the compared methods
Table 2 Closed-loop system results

Method              Rise time (s)   Settling time (s)   Overshoot (%)
Ziegler-Nichols     3.18            73.73               52.5
MATLAB PID tuner    8.21            40.96               4.2
MOBA PID tuner      5.62            8.16                1.5
Table 3 Search performance results of consecutive tests

Search number   Kp       Ki       Kd       Bandwidth (Hz)   Settling time (s)   Overshoot (%)
1               3.0985   0.3159   18.102   0.0489           11.16               0
2               3.8513   0.3539   20.949   0.0584           9.0959              0.0531
3               3.6053   0.3439   19.255   0.0548           9.2321              0.4205
4               3.4746   0.3374   18.849   0.0532           9.5715              0.3405
5               4.0759   0.3909   20.499   0.0597           7.8601              1.8517
6               3.4754   0.3246   17.739   0.0526           9.343               0.7144
7               4.4563   0.4225   22.637   0.0646           7.3402              1.713
8               3.0887   0.3740   20.162   0.0499           10.446              1.3961
9               3.952    0.3591   21.059   0.0593           8.752               0.0354
10              3.9933   0.3616   20.833   0.0595           8.5092              0.3544
Fig. 13 Step responses of different optimisation results for the same conditions
To conclude, with the tool developed in this study, a suitable PID controller can be tuned for any system with a known transfer function. The algorithm behind the tool uses a multi-objective function, which makes it possible to satisfy different system response goals. The compact interface provides ease of use while exposing the many parameters needed to achieve better results. In future work, the initial constraints of the algorithm might be re-evaluated so that it starts from more suitable values, which may also improve the convergence speed. In addition, the functions used in the algorithm may be profiled to make it run faster. A final planned improvement is the addition of discrete-time controller tuning. Acknowledgements The authors would like to thank Roketsan Co. for its financial support of this work.
References 1. Rao SS (2019) Engineering optimization: theory and practice. John Wiley & Sons 2. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The Bees Algorithm—a novel tool for complex optimisation problems. Intell Prod Mach Syst 454–459 3. Nafchi AM, Moradi A, Ghanbarzadeh A, Yaghoubi S, Moradi M (2012) An improved Bees Algorithm for solving optimization mechanical problems. In: 20th annual international conference on mechanical engineering-ISME, School of Mechanical Eng., Shiraz University, Shiraz, Iran 4. Coban R, Ercin O (2012) Multi-objective Bees Algorithm to optimal tuning of PID controller. Çukurova Univer J Facul Eng Arch 27(2):13–26 5. Acar O, Kalyoncu M, Hassan A (2018) The Bees’ Algorithm for design optimization of a gripper mechanism. J Selcuk-Technic (ICENTE’18) Special Issue, 69–86 6. Mollabakhshi N, Eshghi M Combinational circuit design using bees algorithm. In: IEEE conference anthology. IEEE, pp 1–4 7. Alzaqebah M, Jawarneh S, Sarim HM, Abdullah S (2018) Bees algorithm for vehicle routing problems with time windows. Int J Mach Learn Comput 8(3):236–240 8. Braiwish NY, Anayi FJ, Fahmy AA, Eldukhri EE (2014) Design optimisation of permanent magnet synchronous motor for electric vehicles traction using the Bees Algorithm. In: 2014 49th international universities power engineering conference (UPEC). IEEE, pp 1–5 9. Ismail AH, Hartono N, Zeybek S, Pham DT (2020) Using the Bees Algorithm to solve combinatorial optimisation problems for TSPLIB. IOP Conf Ser: Mater Sci Eng 847:1–9 10. Fahmy AA, Kalyoncu M, Castellani M (2012) Automatic design of control systems for robot manipulators using the bees algorithm. Proc Instit Mech Eng Part I: J Syst Control Eng 226(4):497–508 11. Bakırcıo˘glu V, Sen ¸ MA, Kalyoncu M (2016) Optimization of PID controller based on The Bees Algorithm for one leg of a quadruped robot. In: MATEC web of conferences, vol 42. EDP Sciences, p 03004 12. Amirinejad M, Eslami M, Noori A (2014) Automatic PID controller parameter tuning using Bees Algorithm. Int J Sci Eng Res 5(8) 13. Shouran M, Anayi F, Packianather M (2021) The Bees Algorithm tuned sliding mode control for load frequency control in two-area power system. Energies 14(18):5701 14. Etesami G, Felezi ME, Nariman-Zadeh N (2019) Pareto optimal multi-objective dynamical balancing of a slider-crank mechanism using differential evolution algorithm. Autom Sci Eng 9(3):3021–3032
15. Gunantara N (2018) A review of multi-objective optimization: Methods and its applications. Cogent Eng 5(1):1502242 16. Ogata K (2010) Modern control engineering, 5th Edition. Pearson, 580 17. Mathworks (2021) mathworks.com/help/control/ref/pidtuner.html 18. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the bees algorithm. Swarm Evol Comput 59:100746
The Effect of Harmony Memory Integration into the Bees Algorithm Osman Acar, Hacı Sağlam, and Ziya Şaka
1 Introduction Nature-inspired optimisation algorithms have been studied increasingly since 1992. Several search methods have been developed, inspired by the swarm behaviours observed in nature, to search for the global optimum of nonlinear functions. The algorithms differ in their search methods and in the behaviour of the search population. The genetic algorithm was designed as a dynamic population procedure to converge to the global optimum [1]. Particle swarm optimisation has an adjustable structure for speed, population, and global and local optimum searching [2]. Although the differential evolution algorithm is similar to the genetic algorithm, its self-organising scheme provides faster results [3]. The ant colony algorithm is dynamic but not self-adaptive [4]. The Marriage in Honeybees Optimisation Algorithm can be regarded as an integration of the Bees Algorithm and the Genetic Algorithm [5]; therefore, it has several selection stages that increase the processing time. The artificial fish swarm optimisation algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy [6]. This algorithm was improved by integrating parts from other algorithms. The Termite Algorithm works efficiently at low speed; its superiority lies in the effectiveness of using data to carry routing information, reducing control overhead, and maintaining routing information [7]. The Artificial Bee Colony Algorithm, which is very similar to the Bees Algorithm [8], has a few
control parameters: the maximum number of cycles and the colony size. It has only three searching stages based on the type of bees: employed bees, onlooker bees and scout bees [9]. The Bat-Inspired Algorithm updates the velocities and positions of the bats, which is similar to the procedure in standard particle swarm optimisation [10]. The Firefly Algorithm searches for the global optimum based on the brightness of and distance between two neighbouring fireflies, which must be checked frequently [11]; this checking adds to the running time of the algorithm. The hunting [12–19], route and food searching [20–24], and mating or enemy-avoidance behaviours of various animals have been used to design optimisation algorithms for solving various problems in science. Optimisation algorithms for constrained problems need to be arranged specifically, and they are needed to obtain quick and better solutions. Therefore, each algorithm has a specific feature for searching for the global optimum, and some algorithms include specific characteristics, such as mechanisms for avoiding local optima. The main aim of this study is not a comparison with other algorithms; rather, it presents a method for improving a selected algorithm, the Bees Algorithm (BA). The results of the Harmonic Bees Algorithm have been published previously [25]. This paper presents the results of the Bees Algorithm and compares them with those of the Harmonic Bees Algorithm. The geodetic curvature values obtained by both algorithms were compared. The design problem, together with the objective function and constraints, is explained in Sect. 2. The Bees Algorithm integrated with harmony memory is explained in Sect. 3. Comparative results are shown in Sect. 4. The conclusion is presented in Sect. 5.
2 The Design Problem A robotic gripper design problem was selected as a case study. The gripper was designed from a four-link spherical mechanism with joint axes intersecting at one central point. Therefore, each link length is measured as the central angle between two successive joint axes. The coupler of a four-link spherical mechanism moves on a spherical plane; in other words, its motion is three-dimensional due to pure rotation about three axes. This feature provides the advantage of an underactuated finger without using tendons. Otherwise, a finger would need to be driven by three actuators: one for the distal phalanx, one for the middle phalanx and one for the proximal phalanx. A spherical mechanism with random link lengths moves along the spherical trajectory shown in Fig. 1. Point C is the effective mobile part of the mechanism for the grasping operation. However, point C must move along a path in the form of a spherical curve with zero geodetic curvature. Points A and B are the mobile joints that connect the crank and the swinger to the coupler link, respectively. Ao and Bo are the fixed joints, which can only provide rotation about their own axes. A gripper designed from a spherical mechanism can grasp an object either by form-closure or by force-closure grasping, thanks to its motion space. However, besides the longitudinal trajectory of the fingertips, the design should have at least three fingers; otherwise, the grasp stability may not be reliable. The fingers may rotate
Fig. 1 A schematic spherical four-link mechanism [25]
the object as they move to grasp it. A three-fingered schematic gripper is shown in Fig. 2a and b. The trajectory of the mechanism coupler, which plays the role of the fingertip, must be longitudinal. In other words, the coupler point must start its motion south of the equatorial plane and must end at the north pole of the spherical plane, covering more than 90°. Thus, the object can either be balanced under the three forces applied by the coupler points, or the three mechanisms forming the three fingers imprison the object as the mechanism links move. The optimisation problem must be expressed in mathematical form. The trajectory of the coupler is of the most vital importance. Therefore, a spherical four-link mechanism that can provide motion along a longitudinal trajectory longer than 90° must be sought. Thus, the objective can be defined as the length of the longitudinal trajectory. The overall trajectory of the coupler point can include both a longitudinal part and a nonzero-curvature part. The main target is the lowest geodetic curvature value and the longest trajectory. Equation (1) [25] gives the length of the trajectory:

\max f(L) = \sum_{i=1}^{n} \sqrt{ \left( X_{C(i+1)} - X_{C(i)} \right)^2 + \left( Y_{C(i+1)} - Y_{C(i)} \right)^2 + \left( Z_{C(i+1)} - Z_{C(i)} \right)^2 }    (1)

The best spherical mechanism providing a longitudinal trajectory can be designed by using the Ball-Burmester theory [26]. Ball polynomials and Burmester polynomials
Fig. 2 a Isometric view of the schematic gripper [25], b Top view of the schematic gripper [25]
composed of instantaneous invariants of the motion must have at least four common real roots [27], which can be used to calculate the coordinates of candidate coupler points. Thereafter, the parametric Euler-Savary equations are also very useful to calculate the coordinates of the fixed joints [28]. Consequently, the link lengths, type of mechanism, transmission angle and geodetic curvature value can easily be found. However, all these parameters must be filtered by using constraint functions. The instantaneous invariants can be investigated in the following ranges [25]:

g_1(x): \; -25 < \omega_{x1} < 25, \; -25 < \omega_{x2} < 25, \; -25 < \omega_{x3} < 25, \; -25 < \omega_{y2} < 25, \; -25 < \omega_{y3} < 25    (2)
where ω_{x1} is the common instantaneous invariant of both the Ball and Burmester polynomials, ω_{x2} and ω_{y2} are the instantaneous invariants of the Ball polynomial, and ω_{x3} and ω_{y3} are the instantaneous invariants of the Burmester polynomial. The ranges were determined by tests. The investigation is performed using random values in these ranges. However, the random values must make the determinant of the Sylvester matrix (A), which is composed of the coefficients of the Ball and Burmester polynomials, equal to zero [25]:

g_2(x): \; \det(A) = |A| = 0    (3)

Furthermore, there must be at least four common real roots of the Ball and Burmester polynomials:

g_3(x): \; t_1, t_2, t_3, t_4, \ldots, t_n \in \mathbb{R}, \; n \ge 4    (4)
After that, the lengths of the mechanism links can be calculated using the parametric Euler-Savary equation. However, the link lengths must be limited to 90°; otherwise, the four-link spherical mechanisms cannot be symmetrically placed around the unit sphere, and interference between the mechanisms is inevitable if the link lengths are not limited to the ranges [25]:

g_4(x): \; \alpha_1 < 90°, \; \alpha_2 < 90°, \; \alpha_3 < 90°, \; \alpha_4 < 90°    (5)
where α_1, α_2, α_3 and α_4 symbolise the link lengths of the mechanism. The type of mechanism is also an important criterion for selecting the best design. Therefore, the Grashof laws for the spherical four-link mechanism must be implemented [29]:

g_5(x): \; \alpha_2 + \alpha_4 \le \alpha_3 + \alpha_1, \; \alpha_3 + \alpha_2 \le \alpha_4 + \alpha_1, \; \alpha_2 + \alpha_1 \le \alpha_3 + \alpha_4    (6)

g_6(x): \; \alpha_2 + \alpha_4 \le \alpha_3 + \alpha_1, \; \alpha_3 + \alpha_2 \le \alpha_4 + \alpha_1, \; \alpha_4 + \alpha_3 \le \alpha_1 + \alpha_2    (7)

g_7(x): \; \alpha_3 + \alpha_2 \le \alpha_1 + \alpha_4, \; \alpha_2 + \alpha_1 \le \alpha_4 + \alpha_3, \; \alpha_1 + \alpha_4 \le \alpha_2 + \alpha_3    (8)
Motion transmission of a mechanism is crucial. Therefore, the transmission angle calculated in Eq. (9) must be larger than 40°:

\mu_{min} = \cos^{-1} \left[ \frac{\cos(\alpha_4 - \alpha_1) - \cos\alpha_2 \cos\alpha_3}{\sin\alpha_2 \sin\alpha_3} \right]    (9)
g_8(x): \; \mu_{min} > 40°    (10)
The last constraint is the geodetic curvature. It is very difficult to find a mechanism design providing a spherical curve with zero curvature. However, a design providing a spherical curve whose curvature is close enough to zero to grasp an object without rotation is possible. Thus, the limitation on the geodetic curvature is [30]:

g_9(x): \; \kappa_g = 1/r_g = \frac{r \cdot (\dot{r} \times \ddot{r})}{(\dot{r} \cdot \dot{r})^{3/2}} \le 0.005    (11)

Therefore, the best design variables x* = (α_1*, α_2*, α_3*, α_4*) satisfying the nine inequality constraints g_k(x), k = 1, …, 9, are sought for f(x*) = max f(L), the maximum longitudinal trajectory.
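For illustration, the trajectory length of Eq. (1) and the geodetic curvature constraint g9 of Eq. (11) can be evaluated numerically on a sampled coupler trajectory, as sketched below. The finite-difference approximation of the derivatives, the (n, 3) array layout and the unit time step are assumptions of the sketch.

import numpy as np

def trajectory_length(points):
    # Polyline length of the coupler trajectory (Eq. 1); points is an (n, 3) array.
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def geodetic_curvature(points, dt=1.0):
    # Geodetic curvature of Eq. 11 along a sampled spherical trajectory,
    # with r' and r'' approximated by finite differences.
    r = points
    r_dot = np.gradient(r, dt, axis=0)
    r_ddot = np.gradient(r_dot, dt, axis=0)
    num = np.einsum('ij,ij->i', r, np.cross(r_dot, r_ddot))   # r . (r' x r'')
    den = np.einsum('ij,ij->i', r_dot, r_dot) ** 1.5          # (r' . r')^(3/2)
    return num / den

def satisfies_g9(points, threshold=0.005):
    # Constraint g9: the curvature magnitude stays below the threshold everywhere.
    return np.all(np.abs(geodetic_curvature(points)) <= threshold)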
3 The Bees Algorithm and Integration The computer used for optimisation has Windows 10 with a 64-bit processor, Intel® Core™ i7-4720HQ [email protected] GHz and 16,0 GB RAM. The two algorithms differed only due to the harmonisation step, which is Harmony Memory. The harmonisation steps were placed into the algorithm after population production or assignments, as shown in Fig. 4 Therefore, the possibility of finding global optima and searching sensitivity were increased without producing a new population. Thus, the process time was decreased from 10 to 6 s. Conversely, a step for constraints was added after each harmonisation step. Therefore, the improper design variables were eliminated, and the mechanisms were classified. The algorithm was selected for its successful searching technique and rapid convergence. The optimal algorithm control parameters were used from the literature [8]. Instead of explaining the food searching attitude of bees, Harmony Memory should be introduced. This is a searching stage of the harmonic searching algorithm [31]. It is inspired by the procedure of finding the most aesthetic sound in jazz trio composed of violin, saxophone, and keyboard, as shown in Table 1. D, E and G are the notes performed by instruments. The notes produced in each rank are shifted between the instruments. Thus, the sound converges to an excellent aesthetic. The instruments resemble the instantaneous invariants of the motion produced in the same range within Eq. The harmony memory provides a simple interchange of randomly produced values of instantaneous invariants to eliminate the time cost for production in the next iteration loop. This procedure is placed into Fig. 3 after random parameter production stages as in Fig. 4. Tables 2 and 3 show the results of the design variables from the two algorithms. The graphical illustration is also important for the final selection of the best design to prevent any collision between the symmetrical assembly of the four-link spherical mechanisms around the sphere. Each of the algorithms was run one hundred times. The best of the results among the hundred results were elected. The significant observation was the superiority of the HBA in
Table 1 Harmony memory [31]

         Violin   Saxophone   Keyboard   Evaluation
Rank 1   D        G           E          Excellent
Rank 2   E        D           G          Good
Rank 3   D        E           G          Fair
each result among the hundred runs. The regression analysis showed 97% reliability for the design variables.
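The harmony-memory interchange described above can be sketched as a simple memory-consideration step: each instantaneous invariant of a new candidate is drawn either from a previously stored set or afresh from the range of Eq. (2). The memory-consideration rate hmcr and the uniform re-draw are assumptions of this sketch; the actual HBA procedure is the one given in Fig. 4 and in [25, 33].

import numpy as np

def harmonise(memory, hmcr=0.9, low=-25.0, high=25.0):
    # memory: (m, 5) array of previously produced [wx1, wx2, wy2, wx3, wy3] sets.
    # Build a new candidate by reusing stored values (probability hmcr) or
    # by drawing fresh random values from the search range.
    m, d = memory.shape
    candidate = np.empty(d)
    for j in range(d):
        if np.random.rand() < hmcr:
            candidate[j] = memory[np.random.randint(m), j]   # reuse a stored value
        else:
            candidate[j] = np.random.uniform(low, high)      # fresh random value
    return candidate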
Fig. 3 Procedure of the bees algorithm (BA) [32]
Fig. 4 Procedure of the harmonic bees algorithm (HBA) [33]
Table 2 The design variables for HBA

Solutions     a1       a2       a3       a4       The length of GCA (L)
1. Solution   33.8°    32.44°   38.45°   38.43°   123.82°
2. Solution   9.09°    18.43°   48.94°   27.26°   119.8°
3. Solution   47.95°   30.66°   29.43°   46.04°   114.56°
Table 3 The design variables for BA

Solutions     a1       a2       a3       a4       The length of GCA (L)
1. Solution   33.1°    32.04°   38.05°   38.73°   133.82°
2. Solution   10.19°   17.33°   45.04°   29.16°   109.8°
3. Solution   45.05°   31.36°   28.23°   45.14°   104.56°
4 Results and Discussion The mechanism designs were classified into three types: crank-rocker, double-rocker and double-crank. The best design of each type was selected for both algorithms, as shown in Figs. 5 and 6. The result of the HBA moves from the equatorial plane to the north pole of the spherical plane, whereas the result of the BA moves in the reverse direction. The coupler of the design in Fig. 5 starts the longitudinal motion at marker 1 and stops at marker 2. The curvature of the longitudinal trajectory is lower than 0.005, as shown in Fig. 7 [25], and the length of the longitudinal trajectory is 123.82°. As for the BA, the result is similar to that of the HBA, but the trajectory crosses the longitude of the sphere, as shown in Fig. 6. The change in curvature is almost zero; however, the curvature itself is almost 0.005, which is critical. The workspace of each design must be confined to one of six slices of the spherical plane; however, both results have wider workspaces. Consequently, neither algorithm gave a proper design for a gripper with this mechanism type. The double-rocker results of both the HBA and the BA are similar and have proper link lengths to construct a three-fingered robotic gripper. However, the end of the trajectory obtained with the BA is not at the pole of the spherical plane, as shown in Figs. 8 and 9. Moreover, the curvature of the trajectory in the BA result is very close to 0.005, which is very
Fig. 5 Crank-rocker obtained from HBA [25]
Fig. 6 Crank-rocker obtained from BA
Fig. 7 The change of curvature value along the trajectory of crank-rocker mechanism obtained from BA [25]
critical, as shown in Fig. 10. These two negative results prevent the design of a gripper, because if the trajectory of the mechanism coupler does not end at the pole of the spherical plane, the object to be grasped would rotate. As for the HBA, the trajectory ends at the pole of the sphere, and the curvature is lower than 0.005 [25]. The useful length of the trajectory is 119.8°, which is longer than 90°. Although both algorithms gave similar designs, the HBA showed its superiority due to the harmony memory integration. The double-crank results of both algorithms are shown in Figs. 11 and 12. Neither result is suitable for a three-fingered gripper because of the link lengths of the mechanism. The design from the HBA starts the longitudinal motion at marker 2, and the longitudinal trajectory ends at marker 1, which is at the pole of the spherical plane, as shown in Fig. 11. The design from the BA starts the motion in the reverse direction and ends at the south pole of the spherical plane; moreover, the trajectory crosses the longitude of the spherical plane. Therefore, the curvature of the design obtained from the BA is not steady and lies at the threshold of 0.005, as shown in Fig. 13. The trajectory curvature of the HBA design is lower than 0.005 [25].
Fig. 8 Double-rocker obtained from HBA [25]
Fig. 9 Double-rocker obtained from BA
Fig. 10 The change of curvature value along the trajectory of double-rocker mechanism obtained from BA
Fig. 11 Double-crank obtained from HBA [25]
Fig. 12 Double-crank obtained from BA
Fig. 13 The change of curvature value along the trajectory of double-crank mechanism obtained from BA
5 Conclusion This paper presented the effect of Harmony Memory integration into the Bees Algorithm. A spherical four-link mechanism design problem for gripper construction was selected as a case study. The best link lengths, trajectory lengths and trajectory curvatures were investigated using the Bees Algorithm and the Harmonic Bees Algorithm. The mechanism designs were classified by type. The best mechanism among the six results from the two algorithms was the double-rocker mechanism produced by the HBA. Both algorithms gave the same type of mechanism design except for the link lengths and the trajectory curvature. According to the authors' observations, the difference was due to the search strategies of the algorithms. The harmony memory provided a more sensitive search ability by shifting randomly produced parameter values between the instantaneous invariants. However, this feature cannot be used for optimisation problems whose parameters vary over different ranges; in other words, the randomly produced parameters must be able to take the same values. The selected design was prototyped and tested; the test can be seen in [34]. The harmony memory will be applied to other optimisation algorithms and new case studies in future research.
References 1. Holland JH (1992) Genetic algorithms. Sci Am 267:66–73 2. Eberhart R, Kennedy J (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Citeseer, pp 1942–1948 3. Storn R, Price K (1997) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11:341–359 4. Dorigo M, Di Caro G (1999) Ant colony optimization: a new meta-heuristic. In: Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406), IEEE, pp 1470–1477 5. Abbass HA (2001) MBO: marriage in honey bees optimization-a haplometrosis polygynous swarming approach. In: Proceedings of the 2001 congress on evolutionary computation (IEEE Cat. No. 01TH8546), IEEE, pp 207–214 6. Li X (2003) A new intelligent optimization method-artificial fish school algorithm. Doctor Thesis of Zhejiang University 7. Martin R, Stephen W (2006) Termite: a swarm intelligent routing algorithm for mobilewireless ad-hoc networks. Springer, Stigmergic optimization, pp 155–184 8. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. Elsevier, Intelligent production machines and systems, pp 454–459 9. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39:459–471 10. Yang X-S (2010) A new metaheuristic bat-inspired algorithm, Nature inspired cooperative strategies for optimization (NICSO. Springer 2010:65–74 11. Yang X-S (2010) Firefly algorithm, stochastic test functions and design optimisation. Int J Bio-inspired Comput 2:78–84 12. Oftadeh R, Mahjoob M, Shariatpanahi M (2010) A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput Math Appl 60:2087–2098 13. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61 14. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98 15. Yazdani M, Jolai F (2016) Lion optimization algorithm (LOA): a nature-inspired metaheuristic algorithm. J Computat Design Eng 3:24–36 16. Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67 17. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Futur Gener Comput Syst 97:849–872 18. Połap D, Wo´zniak M (2021) Red fox optimization algorithm. Expert Syst Appl 166:114107 19. Połap D (2017) Polar bear optimization algorithm: Meta-heuristic with fast population movement and dynamic birth and death mechanism. Symmetry 9:203 20. Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27:1053–1073 21. Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp Swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163– 191 22. Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol Comput 44:148–175 23. Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249 24. Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear Sci Numer Simul 17:4831–4845 25. 
Acar O, Sa˘glam H, Saka ¸ Z (2021) Measuring curvature of trajectory traced by coupler of an optimal four-link spherical mechanism. Measurement 176:109189 26. Ting K-L, Wang S (1991) Fourth and fifth order double Burmester points and the highest attainable order of straight lines
27. Özçelik Z, Saka ¸ Z (2010) Ball and Burmester points in spherical kinematics and their special cases. Forsch Ingenieurwes 74:111–122 28. Acar O, Saka ¸ Z, Özçelik Z (2019) Parametric Euler-Savary equations for spherical instantaneous kinematics. Springer, IFToMM World Congress on Mechanism and Machine Science, pp 347–356 29. Chiang C-H (1988) In: Kinematics of spherical mechanisms. Cambridge University Press Cambridge 30. Özçelik Z (2008) Ani invaryantlar yardımıyla küresel mekanizmaların tasarımı. Selçuk Üniversitesi Fenbilimleri Enstitüsü 31. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76.2:60–68 32. Osman A, Kalyoncu M, Hassan A (2018) The bees algorithm for design optimization of a gripper mechanism. Selcuk University J Eng Sci 69–86 33. Acar O, Kalyoncu M, Hassan A (2019) Proposal of a harmonic bees algorithm for design optimization of a gripper mechanism. Springer, IFToMM World Congress on Mechanism and Machine Science, pp 2829–2839 34. Osman A, Sa˘glam H, Ziya S¸ (2021) Evaluation of grasp capability of a gripper driven by optimal spherical mechanism. Mechanism and Machine Theory 166:104486 35. Pinto PC, Runkler TA, Sousa JM (2007) Wasp swarm algorithm for dynamic MAX-SAT problems. Springer, International conference on adaptive and natural computing algorithms, pp 350–357 36. Mucherino A, Seref O (2007) Monkey search: a novel metaheuristic search for global optimization. American Institute of Physics, AIP conference proceedings, pp 162–173 37. Yang X-S, Deb S (2009) Cuckoo search via Lévy flights. In: World congress on nature and biologically inspired computing (NaBIC), IEEE, pp 210−214 38. Yang X-S (2012) Flower pollination algorithm for global optimization. Springer, International conference on unconventional computing and natural computation, pp 240–249 39. Askarzadeh A, Rezazadeh A (2013) A new heuristic optimization algorithm for modeling of proton exchange membrane fuel cell: bird mating optimizer. Int J Energy Res 37:1196–1204 40. Kaveh A, Farhoudi N (2013) A new optimization method: Dolphin echolocation. Adv Eng Softw 59:53–70 41. Sharma A, Sharma A, Panigrahi BK, Kiran D, Kumar R (2016) Ageist spider monkey optimization algorithm. Swarm Evol Comput 28:58–77 42. Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime mould algorithm: a new method for stochastic optimization. Futur Gener Comput Syst 111:300–323
Memory-Based Bees Algorithm with Lévy Flights for Multilevel Image Thresholding Nahla Shatnawi, Shahnorbanun Sahran, and Mohamad Faidzul Nasrudin
1 Introduction
Lévy flight uses a Lévy probability distribution that is more efficient than Brownian random walks in exploring unknown and large-scale search spaces. Lévy flight can represent an optimal foraging strategy in many foraging patterns in nature, including those of honey bees. In this work, enhancements to the MBA algorithm proposed in [1] are introduced by using Lévy flights. This enhancement leads to a new algorithm called LMBA, which is used to reduce the randomness of bee movements and the number of tunable parameters in the basic BA and MBA algorithms, and to facilitate the use of the algorithm for multilevel image thresholding of different types of images. The basic BA is the BA in its simplest form, where the algorithm requires a number of parameters to be set, namely: the number of scout bees (n), the number of sites selected out of the n visited sites (m), the number of best sites out of the m selected sites (e), the number of bees recruited for the best e sites (nep), the number of bees recruited for the other (m-e) selected sites (nsp), the initial size of patches (ngh), which includes a site and its neighbourhood, and the stopping criterion [2]. The algorithm starts with the n scout bees being placed randomly in the search space. In LMBA, normal random initialisation and bee movement are replaced by a random walk of Lévy flight. This chapter is organised as follows: Sect. 2 discusses the relationship between honey bees and Lévy flights in nature.
This is followed by the enhancements to the MBA algorithm: the changed movement of scout and follower bees, the pseudocode, and the parameters used. Otsu's image thresholding and the PSNR are explained in Sect. 3. The experimental setup and the results obtained for the enhancements are discussed, together with the applications and their results, in Sects. 4 and 5.
2 Lévy Flights and Honey Bees
Lévy flights, or Lévy motion, is a non-Gaussian random process based on a stable distribution called the Lévy distribution, originally studied by Paul Lévy. Lévy flights use Markovian stochastic processes with a probability density function (PDF) to determine the length distribution of individual jumps [3]. Lévy attempted to find a set of self-similar objects of the kind that Benoit Mandelbrot, much later, in 1968, named fractals [4]. A random walk is a stochastic procedure in which particles or waves travel along random trajectories. Lévy flights are one of the generalised random walk classes, where a 'heavy-tailed' probability distribution is used to describe the step lengths during the walk. Lévy flights are applicable in different fields, such as describing animal foraging patterns, the distribution of human travel and even some aspects of earthquake behaviour. Transport based on Lévy flights has been studied numerically, but experimental work has been limited [5]. Some animals and insects follow flight paths consisting of long trajectories with unexpected turns combined with short, random movements. This random walk is called a Lévy flight, and it describes foraging patterns in natural systems, such as those of ants, bees, bumblebees, and even plankton [6]. In [7], evidence was provided that the movement patterns of wandering albatrosses, bumblebees and deer are Lévy flights. By performing Lévy flights, the forager optimises the number of targets encountered against the distance travelled. The idea is that the probability of returning to a previously visited site is smaller than with other random walk mechanisms [8]. This Lévy flight motion has been found among various organisms, such as marine predators, fruit flies, and honey bees [8–10]. Foraging animals regularly face resource depletion problems due to changes in food resource availability over time and space. Foragers, therefore, benefit from employing flexible strategies for resource exploration and exploitation [11]. Honey bees exploit resources by memorising paths from the hive to rich patches and announce patch information using the waggle dance to recruit bees to visit those locations repeatedly. However, recent evidence suggests that recruited foragers may not use the dance's positional information to the degree that has traditionally been believed [12]. In [12], it was found that, in different environments, relying on private information about previously encountered food sources is more efficient than sharing information. Private information leads to a greater diversity of sites and can decrease the overharvesting of sources. When a food source is depleted, honeybees return to the hive and gather information from other bees about resource locations using waggle dancing and global information [13]. In
[11], it is shown that honey bee flight patterns have a scale-free (Lévy flight) characteristic that constitutes an optimal strategy for searching for a location. In addition, in [14], it was found that honey bees adopt optimal Lévy flights when searching after a known food resource becomes depleted and when searching for their hive after their hive-centred navigational systems have been disrupted. This search strategy would remain optimal, based on [11], even if the implementation of the Lévy flights was imprecise, for example, due to errors in the bees' path integration system or difficulties in responding to wind conditions (the environment). In [12], bee colony foraging is modelled to investigate the value of sharing food source position information in different environments. It is found that, in many environments, the dependency on private information about previous food sources is more reliable than sharing information. This is beneficial in environments with small quantities of nectar per flower but may be detrimental in nectar-rich environments. Efficiency depends on both the environment and a balance between exploiting high-quality food sources and oversubscribing them. The bee movements based on Lévy flights, the characteristics of a Lévy walk, and the Lévy-flight distribution are shown in Fig. 1. The figure shows how honey bees' movements when searching for food sources are based on Lévy flights and private memory, and how closely these movements resemble the Lévy walk and Lévy flight distributions.
2.1 Lévy Flights with MBA
An increasing number of global optimisation tasks are being completed today using algorithms that mimic the foraging behaviour of animals and insects [4]. The memory-based BA (MBA) algorithm was proposed in [1] to copy the decision-making capability of bees using private and social information (local and global memory). The results of the MBA algorithm show its superiority over the basic BA when both are applied to nine benchmark functions. However, the MBA algorithm still requires a rather large number of tunable parameters, and the movements of the bees depend on random search; these are disadvantages inherited from the basic BA algorithm. To reduce the number of tunable parameters and the randomness of the movements, another enhancement of the basic BA is needed. Before adding these enhancements, however, the randomness in movement must be studied. The movements, and the directions of the movements, of scout and follower bees in the basic BA and MBA are chosen at random, and this randomness affects the time taken to reach the optimal solution. To see how much this randomness affects the performance of the BA algorithm, many experiments on the basic BA algorithm were performed. Each experiment used two scout bees (one random-based and one elite bee), was repeated over 50 runs and required 1000 iterations to complete. The results are based on experiments performed on the benchmark functions using the basic BA algorithm (Table 1). The results show that the total scout bee journey based on the random distribution increased as the search space became larger, and this increase meant more time and effort to find the optimal solution (Table 1). As a first step to solving this problem, the analysis of metaheuristic algorithms is important, especially the randomisation process,
Fig. 1 Bee movements (Lévy flight) in (a, b): a single scouting trip from the hive; b multiple scouting trips from the hive [15] (Reproduced from Bailis et al. [12]); c characteristic Lévy walk (Reproduced from Bazant); d Lévy flights (Reproduced from Yang)

Table 1 Summation of random scout bee movements for different benchmark functions

Benchmark function | Summation of random scout movement percentage
De Jong | 2717.8
Goldstein & Price | 2349.4
Branin | 5592.79
Martin & Gaddy | 3994.87
Rosenbrock (a) | 953.3
Rosenbrock (b) | 11,176.58
Rosenbrock | 1034.8
Hypersphere | 4238.95
Grienwangk | 3,400,416.32
which plays a key role in both exploration and exploitation. Randomisation depends on random variables, and different types of distributions can be used. The Gaussian (normal) distribution is the most popular, as many physical variables, errors in measurements, and many other processes follow this distribution. Another important distribution is the Lévy distribution, a stable distribution of independent and identically distributed random variables underlying the Lévy walk and Brownian motion [15]. Foraging animals such as honey bees rely on the advantages of Lévy-distributed step lengths, which make the search more efficient than a Brownian search [3]. Lévy flights are more efficient than Brownian random walks for several reasons; one of them is that the variance of Lévy flights increases much faster than the linear growth of the variance of Brownian random walks [15]. The Lévy flight is a simple alternative for cases in which movements cannot adequately be described as Brownian motion [16]. Within the Lévy distribution, however, should the Lévy walk or the Lévy flight be used? To answer this question, [17] noted that, for honey bees, a Lévy walk is referred to as a Lévy flight because the focus is on the flights of those bees. As mentioned earlier, the Lévy distribution can be defined through its Fourier transform [15]:

F(k) = \exp[-\alpha |k|^{\beta}], where 0 < \beta \le 2   (1)

In Lévy flights, random number generation is based on two steps: direction selection and step generation. The uniform distribution is used for direction selection, while the generation of steps can be based on different methods; the most efficient is the Mantegna algorithm for a stable Lévy distribution (positive and negative steps). The step length s based on Mantegna's algorithm can be calculated as [15]:

s = u / |v|^{1/\beta}   (2)

where u and v are drawn from normal distributions, that is,

u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2)   (3)

where

\sigma_u = \{\Gamma(1+\beta)\sin(\pi\beta/2) / [\Gamma((1+\beta)/2)\, \beta\, 2^{(\beta-1)/2}]\}^{1/\beta} \quad and \quad \sigma_v = 1   (4)
In this work, Lévy flights will be used for the initialisation step and to determine scout and follower bees' movement (step size and direction).
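To make Eqs. (2)–(4) concrete, the following sketch (in Python with NumPy, an assumed implementation language since the chapter specifies none) draws Lévy-distributed step lengths with Mantegna's method. The function name levy_step and the default β = 1.5 are illustrative choices, not part of the original description.

```python
import math
import numpy as np

def levy_step(beta: float = 1.5, size: int = 1) -> np.ndarray:
    """Draw Lévy-distributed step lengths using Mantegna's method (Eqs. 2-4)."""
    # sigma_u from Eq. (4); sigma_v = 1
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2), Eq. (3)
    v = np.random.normal(0.0, 1.0, size)       # v ~ N(0, sigma_v^2) with sigma_v = 1
    return u / np.abs(v) ** (1 / beta)          # step length s, Eq. (2)

# Example: a bee proposes a new 2D position from its current one
current = np.array([0.0, 0.0])
direction = np.random.uniform(0.0, 2 * math.pi)   # uniform direction selection
step = levy_step()[0]
candidate = current + step * np.array([math.cos(direction), math.sin(direction)])
```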
2.2 Initialisation Step Based on Lévy Flights
Initialisation is the first step in the basic BA and MBA algorithms, where scout bees are sent randomly into the search space. In this step, the number of scout bees that search for solutions is determined, and the fitness of each bee is evaluated. In the basic algorithms, those scout bees are all sent in one step, randomly and independently. Here, Lévy flights together with global memory are used for the initialisation of the scout bees in the solution space, and this is done in several steps. The first step is to send one bee (b1) into the search space randomly and evaluate its fitness. After that, Lévy flights are used to determine the position of the next bee (b2) based on the position and fitness of b1. If the fitness of b1 is close to the optimal value, then b2 is sent to a position near b1 using the step size (a small step size added to or subtracted from the position of b1 in the different dimensions) instead of the patch size, and its fitness is evaluated. This step is repeated for all n scout bees. In this way, the scout bees are sent into the search space one by one, based on the fitness values. The result of this process is that the scout bees are initialised in the search space in a dependent way, with as many bees as possible near the optimal position. Figure 2 shows this initialisation process.
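A minimal sketch of this sequential, fitness-aware initialisation is given below, assuming a minimisation problem; the helper names (fitness, levy_step, e.g. the Mantegna sampler sketched earlier), the good_threshold test and the 0.1 scale used for the "small" step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_scouts(fitness, bounds, n, levy_step, good_threshold):
    """Place n scout bees one by one; each new bee is offset from the previous
    one by a Lévy step, smaller when the previous bee's fitness looks promising."""
    lo, hi = bounds                                  # lower/upper bound arrays of the search space
    dim = len(lo)
    scouts = [np.random.uniform(lo, hi)]             # b1: placed randomly
    for _ in range(n - 1):
        prev = scouts[-1]
        scale = 0.1 if fitness(prev) <= good_threshold else 1.0   # small step near good fitness
        step = scale * levy_step(size=dim)
        scouts.append(np.clip(prev + step, lo, hi))  # keep the new bee inside the search space
    return scouts
```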
2.3 Bee Movement Based on Lévy Flights
Honey bees are foraging insects with flight patterns that have scale-free (Lévy flight) characteristics [11]. Honey bees employ flexible strategies for resource exploration and exploitation, such as Lévy flights, when faced with resource depletion problems [13]. Additionally, in different environments, relying on private information about previous food sources is more efficient than sharing information using the waggle dance [12]. Another enhancement to the MBA is therefore introduced by replacing the random movement of scout and follower bees with Lévy flight movements. This enhancement uses the memory of the bees to determine the next position, with the Lévy distribution determining the step size and direction. Figure 3 illustrates the steps of global-MBA with Lévy flights for the initialisation of scout bees in the search space. This enhancement removes the need for the ngh parameter, which is replaced by the step size. The step size of follower bees is smaller than the step size of scout bees, to preserve the balance between exploration and exploitation. The difference between the random movements and the Lévy-flight-based movements of scout and follower bees is shown in Fig. 4. The figure illustrates the flights of scout and follower bees in the search space based on random movements in (a) and (b) and based on Lévy flights in (c) and (d). In (a) and (c), a single scout movement is shown, random in (a) and using the step size in (c): a small step size near a good position, otherwise a
Fig. 2 Initialisation of 5 scout bees: a send the first scout bee to the solution space randomly; b send the second bee based on the position and fitness value of the first bee, using Lévy flights (in different directions); c send the third scout based on the second scout using Lévy flights; d send all the scout bees
large step size. In (b) and (d), S is an elite scout bee, and f1…f4 are follower bees recruited to this elite site, with positions based on the scout bee's position and the Lévy flight distribution. In (d), f3 and f4 are near each other, so f4 needs to find another position based on its previous position and the Lévy flight distribution.
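The movement rule itself can be sketched as follows; the particular step-size values for scout and follower bees are tuning assumptions, the only constraint stated in the text being that scout steps are larger than follower steps (levy_step again refers to the Mantegna sampler sketched earlier).

```python
import math
import numpy as np

def levy_move(position, step_size, levy_step):
    """Move a bee from its current 2D position along a uniformly random direction,
    with a Lévy-distributed step scaled by step_size."""
    theta = np.random.uniform(0.0, 2 * math.pi)
    s = step_size * levy_step()[0]
    return position + s * np.array([math.cos(theta), math.sin(theta)])

# Scouts explore with a larger step, followers exploit around them with a smaller one
scout_pos = levy_move(np.array([1.0, 2.0]), step_size=1.0, levy_step=levy_step)
follower_pos = levy_move(scout_pos, step_size=0.2, levy_step=levy_step)
```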
Fig. 3 Flowchart of global-MBA with Lévy flights for initialisation of scout bees in the search space
Fig. 4 a Random scout bee movement; b follower bees inside the patch; c scout bee movement based on Lévy flights; d follower bees’ movement based on Lévy flights
The shaded parts of the flowchart contain the Lévy-flights-based methods used; nep*e is the number of follower bees sent to the e elite sites, while nsp*(m−e) is the number of follower bees sent to the other best sites, excluding the elite sites (Fig. 5).
3 Otsu’s Image Thresholding and PSNR Image thresholding is the process of separating the foreground pixels from the background. There are many ways of achieving optimal thresholding. One of the methods is called Otsu’s method, proposed by Nobuyuki Otsu[x]. It is a variance-based technique to find the threshold value where the weighted variance between the foreground and background pixels is the smallest. The key idea here is to iterate through all the possible threshold values and measure the margin of background and foreground pixels. Then, find the threshold where the margin is the smallest. The algorithm iteratively searches for the threshold that minimises the within-class variance, defined as a weighted sum of variances of the two classes (background and foreground). The
Fig. 5 Flowchart of Lévy flights with the memory-based BA (LMBA) algorithm
formula for finding the within-class variance at any threshold t is given by:

\sigma^2(t) = \omega_{bg}(t)\,\sigma_{bg}^2(t) + \omega_{fg}(t)\,\sigma_{fg}^2(t)   (5)
where ω_bg(t) and ω_fg(t) represent the probability (proportion) of pixels belonging to each class at threshold t, and σ² represents the variance of the pixel values of that class. For these probabilities, let P_all be the total count of pixels in the image, P_BG(t) the count of background pixels at threshold t, and P_FG(t) the count of foreground pixels at threshold t. The weights are then given by
\omega_{bg}(t) = \frac{P_{BG}(t)}{P_{all}}   (6)

\omega_{fg}(t) = \frac{P_{FG}(t)}{P_{all}}   (7)
The variance can be calculated using the following formula:

\sigma^2(t) = \frac{\sum (x_i - \bar{x})^2}{N - 1}   (8)
where x_i is the value of the pixel at position i in the group (background or foreground), \bar{x} is the mean of the pixel values in the group, and N is the number of pixels in the group. The PSNR, or peak signal-to-noise ratio, is commonly used to measure reconstruction quality for images and videos subject to lossy compression. In the case of image thresholding, the PSNR is used to assess the quality of the image before and after thresholding. The simplest definition starts from the mean squared error. Let there be two images I_1 and I_2 with two-dimensional size i × j, composed of c channels:

MSE = \frac{1}{c \times i \times j} \sum (I_1 - I_2)^2   (9)
Then, the PSNR is expressed as:

PSNR = 10 \times \log_{10}\left(\frac{MAX_I^2}{MSE}\right)   (10)
where MAX_I is the maximum valid value for a pixel.
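The following sketch transcribes Eqs. (5)–(10) for a greyscale image. It assumes 8-bit images (MAX_I = 255) and the convention that pixels with values ≤ t belong to the background; it is a direct reading of the formulas, not the authors' implementation.

```python
import numpy as np

def within_class_variance(gray: np.ndarray, t: int) -> float:
    """Weighted sum of background/foreground variances at threshold t (Eqs. 5-8)."""
    bg, fg = gray[gray <= t], gray[gray > t]       # background assumed to be pixels <= t
    total = gray.size
    var = 0.0
    for cls in (bg, fg):
        if cls.size > 1:
            w = cls.size / total                   # class weight, Eqs. (6)-(7)
            var += w * cls.var(ddof=1)             # sample variance, Eq. (8)
    return var

def psnr(original: np.ndarray, thresholded: np.ndarray, max_i: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images (Eqs. 9-10)."""
    mse = np.mean((original.astype(float) - thresholded.astype(float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)

# Example: exhaustive Otsu threshold for a random 8-bit image
img = np.random.randint(0, 256, (64, 64))
best_t = min(range(256), key=lambda t: within_class_variance(img, t))
```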
4 Experimental Results
To evaluate the effectiveness of the enhancements, which include using Lévy flights with the local-MBA, global-MBA, and MBA algorithms, nine benchmark functions were used. The experiments were performed over 50 runs, and each run required a number of iterations that depends on the test function. The values of the BA algorithm parameters applied to the benchmark functions were kept as described above (Table 2). The movements, and the directions of the movements, of scout and follower bees in LMBA are chosen based on the Lévy flight distribution. To see how much the Lévy flight distribution affects the performance of the LMBA algorithm, many experiments on the LMBA algorithm were conducted. Each experiment was performed using two scout bees (one random-based and one elite bee), and 50 runs
Table 2 Summation of scout bee movements based on Lévy flights for different benchmark functions

Benchmark function | Summation of scout movement percentage based on Lévy flights
De Jong | 1054.29
Goldstein & Price | 998.75
Branin | 2902.54
Martin & Gaddy | 1852.16
Rosenbrock (a) | 475.8
Rosenbrock (b) | 4335.46
Rosenbrock | 862.3
Hypersphere | 2451.9
Grienwangk | 5051.77
were performed, each requiring 1000 iterations to complete. The results are based on experiments performed on the benchmark functions using the LMBA algorithm (Table 2). The results show that Lévy flights solve the randomisation problem discussed above (Table 1) and reduce the distances that scout bees must travel to find the solution (Table 2). A comparison between the basic BA and MBA based on the summation of movements is shown in Fig. 6. This comparison does not include the Grienwangk function because of the large distance it requires (Table 2). The results lead to the tuning of the basic BA parameters (Fig. 6). The values of the basic BA parameters are not changed in the LMBA algorithm, and even if they change, the results will not be affected. The ngh parameter is not used, because the LMBA algorithm depends on the step_size in the Lévy flights along with the information in
Fig. 6 Comparison between basic BA and MBA based on summation of movements
Table 3 Parameter tuning in the LMBA algorithm

Parameter | Tuning
n | X
m | X
e | X
nsp | X
nep | X
ngh | X
Step_size (new) | √
the memory to determine any bee's next position. Table 3 shows that there is no need to tune all the parameters, except for the step_size, which needs to be tuned when it is used for the different types of bees (scout and follower): the step_size for scout bees is larger than the step_size of follower bees, and this difference must be tuned for different problems (Table 3). The values of the step_size of scout bees and follower bees for the different benchmark functions are chosen based on the previous equation (Eq. 2).
4.1 Benchmark Test Functions
The local-MBA, global-MBA, and MBA algorithms are applied to different problems with different numbers of dimensions, where the problems become more difficult as the number of dimensions increases. Table 4 shows the results of using the Lévy flight distribution with global-MBA for the initialisation step (Lévy flights with global-MBA (I)), Lévy flights with local-MBA for movement (Lévy flights with local-MBA (M)), Lévy flights with local-MBA for movement and initialisation (Lévy flights with local-MBA (M & I)), and Lévy flights with MBA, along with the improvement percentages. The initialisation procedure is important because it determines how the scout bees are scattered in the search space to start their journey towards the optimal solution. Using Lévy flights with global memory to make scout bees dependent on each other leads to more bees being sent around the expected solution. The results of Lévy flights with the local-MBA algorithm, which finds the next position of scout and follower bees depending on private information without using Lévy flights for the initialisation step, are also shown in Table 4. The results show that the mean number of evaluations is improved when the Lévy flight distribution with local-MBA is used to determine each bee's next position. Whereas the scout bee step size is greater than the follower bee step size and depends on the fitness of the previous position, the follower bees' positions depend on the best scout bees' positions. The results show that the improvement percentage compared with the basic BA algorithm reaches 75.35%. The results of combining Lévy flights with local-MBA for initialising scout bees in the search space and Lévy flights for scout and follower bees' movement are also shown in Table 4. The results of using Lévy
Table 4 Lévy flights with different memories for the initialisation of scout bees and movements to find the next position, and the percentage of improvement

Benchmark function | Lévy flights with global MBA (I): Mean | Improve (%) | Lévy flights with local MBA (M): Mean | Improve (%) | Lévy flights with local MBA (M&I): Mean | Improve (%) | Lévy flights with MBA (M&I): Mean | Improve (%)
De Jong | 767.3 | 59 | 742.2 | 60.11 | 712.4 | 61.71 | 602.6 | 67.61
Goldstein & Price | 859.1 | 93 | 851.3 | 92.59 | 821.3 | 92.85 | 811.13 | 92.94
Branin | 420.9 | 97.12 | 421.4 | 97.35 | 321.9 | 97.98 | 310.5 | 98.15
Martin and Gaddy | 680.4 | 6.5 | 304.7 | 43.29 | 207.7 | 53.76 | 101.3 | 77.45
Rosenbrock (a) | 1467.1 | 43 | 467.3 | 60.90 | 435.2 | 63.58 | 415.7 | 65.21
Rosenbrock (b) | 615.8 | 80.33 | 1288.9 | 81.67 | 1085.7 | 84.56 | 1082.2 | 84.61
Rosenbrock | 732.6 | 92.5 | 1408.2 | 92.79 | 1408.2 | 92.79 | 1398.6 |
flights with MBA (global and local) for the bees' movement together with the initialisation step show that using Lévy flights with MBA improves on the basic BA algorithm by 81.13%. The problem of choosing the basic BA parameters is removed: there is no need for the ngh parameter and no need to tune the parameters together. The only parameter that must be changed is the step size for scout and follower bees.
4.2 MBA and LMBA with PSNR for Multilevel Image Thresholding
Thresholding is a method for region-based image segmentation that separates objects from the background. Multilevel thresholding, used for complex images consisting of more classes, partitions the image into several distinct regions corresponding to different groups of grey levels [18]. Currently, many population-based optimisation methods are used for multilevel thresholding to overcome the problems of the exhaustive search method. The Bees Algorithm, which proved to be a powerful optimisation method for sampling a large solution space because of its fair random sampling, was quickly adopted in image processing [19]. PSNR is used as the fitness function in the basic BA algorithm to find the threshold values (it chooses the bees' positions that maximise the PSNR value for each class). PSNR measures the quality of the binarised image in comparison with the original image. In this section, the MBA and LMBA algorithms are used with PSNR, instead of the basic BA algorithm, to find the threshold values for the standard images, an object recognition dataset, a texture images dataset, and an OCR dataset, and the results of these algorithms are compared. The difference between the BA algorithm with Otsu's method and the BA algorithm with PSNR was examined for two standard images: Lena and Pepper.
5 Results Analysis for Standard Images
The standard images named Lena, Pepper, and Cameraman are used to assess the suitability of the proposed method for image thresholding. Table 6 gives the mean, standard deviation and variance for the basic BA, MBA, and LMBA with PSNR for different numbers of thresholds on the three standard images, where the values are computed from the PSNR results of each algorithm. LMBA shows its superiority compared with the basic BA and MBA, as LMBA has a higher mean PSNR than the basic BA and MBA, which means less noise. The MBA with PSNR and LMBA with PSNR give better results and better quality than the basic BA for different thresholds and different images (Fig. 7). The threshold values obtained are listed in Table 5.

Table 5 Threshold values for basic BA, MBA, and LMBA with PSNR
Image | k | Basic BA with PSNR | MBA with PSNR | LMBA with PSNR
Lena | 2 | 122, 160 | 124, 155 | 124, 155
Lena | 3 | 113, 153, 213 | 110, 155, 189 | 112, 149, 211
Lena | 4 | 107, 150, 193, 248 | 106, 144, 179, 205 | 107, 146, 188, 249
Lena | 5 | 101, 141, 167, 242, 250 | 102, 139, 179, 182, 232 | 102, 139, 179, 182, 232
Cameraman | 2 | 101, 181 | 100, 183 | 97, 197
Cameraman | 3 | 91, 171, 191 | 83, 172, 186 | 89, 182, 204
Cameraman | 4 | 91, 150, 190, 230 | 80, 171, 183, 204 | 80, 170, 194, 234
Cameraman | 5 | 87, 129, 186, 227, 253 | 77, 164, 183, 187, 221 | 86, 162, 201, 221, 255
Pepper | 2 | 103, 137 | 104, 128 | 106, 218
Pepper | 3 | 79, 135, 151 | 80, 128, 137 | 80, 128, 137
Pepper | 4 | 63, 120, 149, 203 | 76, 118, 137, 149 | 64, 120, 141, 203
Pepper | 5 | 62, 100, 138, 190, 251 | 63, 117, 131, 148, 159 | 64, 91, 139, 182, 214
Fig. 7 PSNR values of BA with Otsu, and BA, MBA, and LMBA with PSNR for the Lena image
Table 6 Mean, standard deviation and variance for PSNR values and ANOVA

No. | Statistic | BA | MBA | LMBA
2 | Mean | 21.039567 | 21.046067 | 21.047333
2 | Std. Deviation | 0.6029106 | 0.6006090 | 0.6001625
2 | Variance | 0.364 | 0.361 | 0.360
3 | Mean | 22.783600 | 22.753733 | 22.814800
3 | Std. Deviation | 0.4402349 | 0.3387318 | 0.4423003
3 | Variance | 0.194 | 0.115 | 0.196
4 | Mean | 23.506467 | 23.594267 | 23.623600
4 | Std. Deviation | 0.3037500 | 0.2297297 | 0.2066076
4 | Variance | 0.092 | 0.053 | 0.043
5 | Mean | 23.904600 | 24.238967 | 24.462100
5 | Std. Deviation | 1.2298934 | 0.3435689 | 0.5219936
5 | Variance | 1.513 | 0.118 | 0.272
Total | Mean | 22.808558 | 22.908258 | 22.986958
Total | Std. Deviation | 1.3066067 | 1.2965568 | 1.3771312
Total | Variance | 1.707 | 1.681 | 1.896

Note: No. means the number of thresholds to be calculated
The results of the comparison of image quality (PSNR) on the three standard images are given in Table 6. The significance value (Sig.) is equal to 0.004, which is less than 0.05. Therefore, there is a statistically significant difference in the mean PSNR results between LMBA, MBA, and the basic BA with PSNR.

ANOVA table

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 14.592 | 3 | 4.64 | 9.980 | 0.004
Within groups | 3.899 | 8 | 0.487 | |
Total | 18.492 | 11 | | |
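A one-way ANOVA of this kind can be reproduced with standard tools; the sketch below uses SciPy on placeholder PSNR samples (the arrays are illustrative values, not the data of this study).

```python
import numpy as np
from scipy import stats

# Placeholder PSNR samples for the three algorithms (not the values from the study)
psnr_ba = np.array([21.0, 22.8, 23.5, 23.9])
psnr_mba = np.array([21.0, 22.7, 23.6, 24.2])
psnr_lmba = np.array([21.0, 22.8, 23.6, 24.5])

f_stat, p_value = stats.f_oneway(psnr_ba, psnr_mba, psnr_lmba)
print(f"F = {f_stat:.3f}, Sig. = {p_value:.3f}")   # Sig. < 0.05 indicates a significant difference
```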
6 Conclusion
The use of Lévy flights helps to reduce the randomness of the movements and the number of tunable parameters in the basic BA and MBA. The problem of choosing the basic BA parameters is reduced: there is no need for the ngh parameter and no need to tune the parameters together to find the optimal solution. The only parameter that needs to be tuned is the step size for scout and follower bees; based on the experiments conducted on LMBA, it only needs to be retuned for different problems.
Acknowledgements This research is funded under the Ministry of Higher Education of Malaysia with the code FRGS/1/2016/ICT02/UKM/02/10.
References 1. Shatnawi N, Sahran S, Faidzul M (2013) Bees algorithm using lévy-flights for start configuration. In: International conference on computer science and computational mathematics (ICCSCM 2013), pp 12–16 2. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Zaidi M (2006) The bees algorithm- a novel tool for complex optimization problems. IPROMS 02:454–459 3. Chechkin AV, Metzler R, Klafter J, Gonchar VY (2008) Introduction to the theory of lévy flights. In: Klages R, Igor GR, Sokolov M (ed) Anomalous transport: foundations and applications. illustrated ed., Wiley 4. Gutowski M (2001) Lévy Flights as an underlying mechanism for global optimization algorithms. Mathem Phys 1–8 5. Barthelemy P, Bertolotti J, Wiersma DS (2008) A Lévy flight for light. Nature 453:495–498 6. Tuba M, Subotic M, Stanarevic N (2012) Performance of a modified cuckoo search algorithm for unconstrained optimization problems. WSEAS Trans on Syst 112:62–74 7. Viswanathan GM, Buldyrev SV, Havlin S, da Luz MGE, Raposo EP, Stanley HE (1999) Optimizing the success of random searches. Nature 401:911–914 8. Raposo EP, Da Luz MGE (2008) L´evy flights and superdiffusion in the context of biological encounters and random searches. Phys Life Rev 5(3):133–150 9. Nurzaman SG, Matsumoto Y (2009) Biologically inspired adaptive mobile robot search with and without gradient sensing. In: IEEE/RSJ international conference on intelligent robots and systems. St. Louis, MO IEEE, pp 142–147 10. Sutantyo DK, Kernbach S, Levi P, Nepomnyashchikh VA (2010) Multirobot searching algorithm using Lévy flight and artificial potential field. Safety Security and Rescue Robotics SSRR. Bremen, IEEE, pp 1–6 11. Reynolds AM, Smith AD, Reynolds DR, Carreck NL, Osborne JL (2007) Honeybees perform optimal scale-free searching flights when attempting to locate a food source. The J Experim Biol 210:3763–3770 12. Bailis P, Nagpal R, Werfel J (2010) Positional communication and private information in honeybee foraging models 13. Biesmeijer JC, Seeley TD (2005) The use of waggle dance information by honey bees throughout their foraging careers. Behav Ecol Sociobiol 59:133–142 14. Reynolds AM, Swain JL, Smith AD, Martin AP, Osborne JL (2009) Honeybees use a Lévy flight search strategy and odour-mediated anemotaxis to relocate food sources. Behav Ecol Sociobiol 64:115–123 15. Yang XS (2010b) In: Nature-inspired metaheuristic algorithms. Luniver Press 16. Yang X-S, Deb S (2010a) Engineering optimization by cuckoo search. Int J Mathem Model Numer Opt 4:330–343 17. Reynolds AM (2006) Cooperative random Lévy flight searches and the flight patterns of honeybees. Phys Lett A 354:384–388 18. Ma J, Wen D, Yang S,Wang L, Zhan J (2010) Hierarchical segmentation based on a multilevel thresholding. In: 3rd international congress on image and signal processing CISP, IEEE. vol 3. pp 1396–1400 19. Azarbad M, Ebrahimzade A, Izadian V (2011) Segmentation of infrared images and objectives detection using maximum entropy method based on the bee algorithm. Int J Comput Inform Syst Indust Managem Appl IJCISIM 3:026–033
A New Method to Generate the Initial Population of the Bees Algorithm for Robot Path Planning in a Static Environment Mariam Kashkash, Ahmed Haj Darwish, and Abdulkader Joukhadar
1 Introduction
Robot systems are widely used in many fields to perform a variety of tasks. These fields include industrial, medical, and entertainment applications. Therefore, it is imperative to make the robot act as an autonomous system that works safely and reliably [1]. An autonomous robot system consists of three units: localisation, planning, and mapping [2]. Path planning can be defined as the process of finding an obstacle-free path from a starting point to a goal point in an environment, which is represented as a map. The locations of the start and goal points should be defined precisely in the map. The environment may be indoor or outdoor, with static or dynamic obstacles [3]. Path planning problems are usually solved subject to constraints such as path length or energy consumption, so the problem can be treated as an optimisation problem [3]. The optimisation methods used are classified as either traditional or metaheuristic methods [4]. The rapidly exploring random tree (RRT) was designed as a randomised data structure to solve the path planning problem [5]. The artificial potential field was used for real-time obstacle avoidance [6]. The probabilistic road map (PRM) was suggested as a motion planning method in a static configuration space for a robot [7]. The genetic algorithm (GA) was proposed to plan a path in a static environment with a
review of all previous optimisation tools [8]. Bezier curves with GA were used to select the shortest path [9]. Particle Swarm Optimisation (PSO) was used to compute the optimal path for a mobile robot in a static environment [10]. An enhanced PSO with sine and cosine algorithms was suggested to find a collision-free path for each robot in the environment [11]. Aging-Based Ant Colony Optimisation (ABACO) was implemented to solve the path planning problem in association with a grid-based model [12]. A cuckoo optimisation algorithm was used to find the path in a dynamic environment for a mobile robot [13]. Adaptive particle swarm optimisation (APSO) was used to find a solution to the path planning problem for a mobile robot [14]. Bacterial foraging optimisation was proposed to solve the online problem in a 2D workspace [15]. The Bees Algorithm (BA) was used for mobile robot path planning in a dynamic environment [4]. The Bees Algorithm was also used with the Q-learning algorithm to find the optimal path, with QL implemented as the local search function of the BA [16]. However, the computational cost of the BA in path planning is high (time consuming) because of the work needed to find the initial population. Therefore, in this chapter, a new form of the BA with an alternative method to generate the initial population is presented. This method guarantees finding initial valid paths in complex environments. The proposed method is implemented offline to find the optimal path in a 2D static environment. The obtained path can be uploaded to a mobile robot to follow. In brief, the aims of this chapter are:
• to use the Bees Algorithm as a planner for a mobile robot in a known static environment;
• to handle the environment as a continuous environment without discretisation;
• to present a new method to initialise the population that guarantees solving this problem regardless of the complexity of the environment.
The rest of the chapter is organised as follows: Sect. 2 introduces the proposed method by describing the configuration space, a new approach to initialise the population of the BA, the local and global search, and the neighbourhood shrinking method. Section 3 shows the results, and Sect. 4 gives the comparison. Finally, the conclusion is given in Sect. 5.
2 The Proposed Method
2.1 Configuration Space
The environment is represented using the configuration space method. In this method, the environment is represented by a map-based structure. The environment is continuous in this structure, using Cartesian coordinates. To represent the robot as a point (x, y, θ), the area of the obstacles should be inflated by adding an additional area. This
Fig. 1 The configuration space
area is proportional to the robot dimensions. After the optimal path is obtained, this area is eliminated. Figure 1 shows the original environment and the environment after inflating it by the additional area. The configuration space consists of C_free and C_obs. C_free is the area where the robot can move without collision with any obstacle in the environment, whereas C_obs is the area occupied by obstacles in the environment. The configuration space is represented by an occupancy map-based structure. A discriminative model is used to indicate whether each point belongs to an obstacle or is a free point, as in Eq. (1) [17]:

f(x, y) = \begin{cases} 1, & (x, y) \in C_{obs} \\ 0, & (x, y) \in C_{free} \end{cases}   (1)
2.2 Initialise the Population of the Bees Algorithm
The Bees Algorithm (BA) is a population-based algorithm. The BA was proposed by Pham et al. as an optimisation method to deal with continuous problems [18]. Moreover, the BA can be adapted easily to solve complex combinatorial problems, such as escaping from a maze, with low computational cost [19]. Initialising the population is the first, and a critical, step in the Bees Algorithm. The initial population plays an important role in the optimisation algorithm: the closer the values of the initial population are to the optimal solutions, the smaller the number of evaluation function calls the BA needs to reach the optimal solutions. This also has a large impact on the stability of the algorithm and its robustness [20]. The population consists of (n) bees. The value of n is generated randomly using a
uniform distribution function. Each bee in the population represents a complete path from the start to the goal point. This path should be feasible, meaning: "there are no intersections between the generated path and the obstacles in the studied environment."
In this chapter, a new method to generate the initial population is suggested. The new method guarantees finding the population regardless of the complexity of the given environment. In the new method, a bee is handled as a robot. When the robot moves, it should generate a rotation angle θ randomly. After that, it moves a distance in that direction based on the range of its sensors. This movement should not intersect with obstacles in a given map. This assumption is the cornerstone of the proposed method to initialise the population; it simulates a real robot that moves randomly based on the readings of its sensors. First, the start point is added to the path. After that, θ is generated randomly in the range [0, 2π] using a uniform distribution function as the following equation: θ = uniform(0, 2π )
(2)
Then, find the distance d that the robot can move in the direction θ without collision. If d is larger than 0.5, then the coordinate of a new point is calculated using “Eqs. (3) and (4)”: xnew = xcurr + d ∗ cos θ
(3)
ynew = ycurr + d ∗ sin θ
(4)
where
• (xcurr, ycurr) is the coordinate of the current location of the bee;
• (xnew, ynew) is the coordinate of the new location of the bee;
• d is the distance that the bee can move in the direction θ without collision.
After the distance that the robot can move is found, the coordinate of the new point is computed. This coordinate should stay inside the map, and the new point should not already have been added to the path. If these conditions are not satisfied, then the bee cannot move to that new point. There is an assumption that, at each point, the bee has a resolution when generating θ: θ is generated in the range [0, 2π] with a step of π/18. This prevents the bee from getting stuck at a point from which it cannot move. In the proposed method, each value of θ is generated only once at each point. If the bee at the current point has generated all available angles θ in the range [0, 2π] and cannot move to any new point, then this point is removed from the path. Finally, the obtained initial path (bee) is a group of consecutive points, as follows:
Fig. 2 The path shortening process
bee = \{(x_s, y_s), (x_2, y_2), \ldots, (x_{k-2}, y_{k-2}), (x_d, y_d)\}   (5)

where k is the number of points in the path, which differs from one bee to another. After the full initial path is obtained, a path shortening step is applied to remove redundant points from the initially obtained path. This step removes an intermediate point between three consecutive points in the path if the direct connection line between the first and third points is collision-free [21]. Figure 2 illustrates this process. Figure 3 shows the flow chart of the process of initialising a bee (a path) in the population.
Fig. 3 Flow chart of the process of initialising the bee (path) in the population
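A simplified sketch of this initialisation procedure is given below. Here is_free (the point-wise collision test), seg_free (the straight-segment collision test), the sensing range max_range, the 0.1 ray-marching step, and the stopping rule "stop once the goal is directly visible" are assumptions standing in for the occupancy-map machinery and flowchart details described above; the backtracking is also simplified.

```python
import math
import random

def free_distance(p, theta, max_range, is_free, step=0.1):
    """Farthest distance <= max_range from p along direction theta that stays collision-free."""
    d = 0.0
    while d + step <= max_range:
        q = (p[0] + (d + step) * math.cos(theta),
             p[1] + (d + step) * math.sin(theta))
        if not is_free(q):
            break
        d += step
    return d

def shortcut(path, seg_free):
    """Path shortening: drop a middle point when its two neighbours see each other directly."""
    i = 0
    while i + 2 < len(path):
        if seg_free(path[i], path[i + 2]):
            del path[i + 1]
        else:
            i += 1
    return path

def initial_path(start, goal, is_free, seg_free, max_range=5.0, d_min=0.5):
    """Build one feasible path (one bee) by a sensor-like random walk (Eqs. 2-4),
    backtracking when no direction leads anywhere new."""
    path = [start]
    while not seg_free(path[-1], goal):                  # stop once the goal is directly reachable
        x, y = path[-1]
        angles = [k * math.pi / 18 for k in range(36)]   # angular resolution of pi/18
        random.shuffle(angles)
        moved = False
        for theta in angles:                             # each angle tried at most once per visit
            d = free_distance((x, y), theta, max_range, is_free)
            nxt = (x + d * math.cos(theta), y + d * math.sin(theta))
            if d > d_min and nxt not in path:
                path.append(nxt)
                moved = True
                break
        if not moved:                                    # dead end: remove the point and backtrack
            path.pop()
            if not path:
                return None                              # no feasible path found from this start
    path.append(goal)
    return shortcut(path, seg_free)
```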
2.3 The Fitness Function
The fitness function of a path (bee) in the population is its Euclidean length, and the main goal of the BA is to minimise the fitness function value. Eq. (6) shows the fitness function:

f = \sum_{i=1}^{k-1} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}   (6)
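Eq. (6) translates directly into code; a minimal version (the function name is illustrative):

```python
import math

def path_length(path):
    """Fitness of a bee: the total Euclidean length of its path (Eq. 6)."""
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
```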
2.4 Local Search
The goal of the local search function is to improve the solutions found for the studied problem. This improvement is implemented by recruiting additional bees to search in the neighbourhood of the population's good (m) bees. In the proposed method, n, m, e, nep and nsp are generated randomly using the uniform distribution function. This is done to study the effect of the parameters on the performance of the algorithm. In each iteration of the BA, all the bees in the population are placed in ascending order of their evaluation values, which are calculated using Eq. (6). The good (m) bees are selected based on the evaluation value, and the elite (e) bees are distinguished among them. The local search is implemented by generating nep or nsp points in the neighbourhood of each point of the basic path, except the start and destination points. (nep) and (nsp) are the numbers of recruited bees for the elite (e) and the remaining (m-e) good bees, respectively; (nep) is always greater than (nsp). The coordinates of the new points are generated randomly using the uniform distribution function in the ranges [x − ngh, x + ngh] and [y − ngh, y + ngh] for x and y, respectively. The following equations are used to generate the new coordinates: xnew = uniform(xcurr − ngh, xcurr + ngh)
(7)
ynew = uniform(ycurr − ngh, ycurr + ngh)
(8)
The newly generated point should not be inside an obstacle, and the new resulting path should be feasible. If the new point does not satisfy these conditions, then it is extracted out of the obstacle to maintain a fully feasible path. After that, the resulting path is evaluated using the evaluation function in Eq. (6). The new path is accepted as a member of the population if its length is smaller than the length of the former path for which the local search was called. Figure 4 clarifies the local search process.
Fig. 4 Flow chart of the local search function
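The local search of Eqs. (7)–(8) can be sketched as follows; the helper names and the choice to simply resample a point that falls inside an obstacle (the chapter instead extracts it out of the obstacle) are simplifying assumptions.

```python
import random

def local_search(path, ngh, is_free, seg_free, path_length, tries=20):
    """Perturb each interior waypoint inside a [+/- ngh] square (Eqs. 7-8); keep the
    perturbed path only if it stays feasible and becomes shorter."""
    best = list(path)
    for i in range(1, len(best) - 1):                 # start and goal points stay fixed
        x, y = best[i]
        for _ in range(tries):
            cand = (random.uniform(x - ngh, x + ngh),
                    random.uniform(y - ngh, y + ngh))
            if not is_free(cand):                     # resample instead of repairing the point
                continue
            trial = best[:i] + [cand] + best[i + 1:]
            if (seg_free(trial[i - 1], cand) and seg_free(cand, trial[i + 1])
                    and path_length(trial) < path_length(best)):
                best = trial
                break
    return best
```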
2.5 Global Search The final step in the BA is the global search function. This step is implemented to maintain the diversity among the bees in the population. To guarantee that, the remaining bees (n-m) are sent to search for new paths that have not been discovered before. This is implemented by calling the modified initialise population function.
2.6 Neighbourhood Shrinking
The size of the neighbourhood has a significant impact on the improvement of the obtained path. If the neighbourhood size has a large value at the beginning of the
Fig. 5 The neighbourhood shrinking process
proposed method, then the improvement of the obtained path is accelerated. This large value is a percentage of the map size. However, after some iterations, the proposed method becomes stuck in a local optimum, where the obtained solution does not change over several iterations. Therefore, the neighbourhood shrinking method is suggested to maintain the improvement of the obtained path. This method decreases the size of the neighbourhood according to the following equation:

ngh = α ∗ ngh   (9)

where α is a shrinking factor smaller than 1, so that the neighbourhood size decreases over the iterations. This process maintains the improvement of the obtained path by focusing the local search around the path points. Figure 5 illustrates the neighbourhood shrinking process.
3 Results
MATLAB is used to evaluate the proposed method. The experiment is run for a mobile robot in a known static environment. A group of maps is used to evaluate the proposed method. These maps are shown in Fig. 6. The proposed method is implemented offline. After that, the obtained path can be given to the robot to follow. The inputs of the suggested algorithm are:
• Map ("maps"): a map-based structure is used to represent the environment;
• Start and goal points: Cartesian coordinates are used to represent them, and they can exist anywhere in the map;
• ngh: the size of the neighbourhood.
Fig. 6 Benchmark maps with their obtained paths, maps (1, 2 and 3) are available at [22] and maps (4 and 5) are available at [12]
The main parameters of the BA (n, m, e, nep and nsp) are random integers in the proposed method. They are generated using a uniform distribution function according to the following equations:

n = uniform(6, 50)   (10)
m = uniform(3, n/2)   (11)
e = uniform(2, m − 1)   (12)
nep = uniform(1, 10)   (13)
nsp = uniform(1, 5)   (14)
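The random parameter choices of Eqs. (10)–(14) amount to a few lines; integer sampling with randint is an assumption consistent with the statement that the parameters are random integers.

```python
import random

def sample_parameters():
    """Draw the BA parameters as random integers in the ranges of Eqs. (10)-(14)."""
    n = random.randint(6, 50)
    m = random.randint(3, max(3, n // 2))
    e = random.randint(2, max(2, m - 1))
    nep = random.randint(1, 10)
    nsp = random.randint(1, 5)
    return n, m, e, nep, nsp
```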
The ranges of the uniform distribution function in Eqs. (10)–(14) were selected based on trial and error. The set of maps is shown in Fig. 6 together with the optimal obtained paths. Table 1 shows the average length of the obtained paths for each map. These lengths are the result of executing the proposed method ten times. Each execution has different parameter values, which are generated randomly. The random values are shown in Tables 2, 3 and 4 for maps 1, 2 and 3, respectively. From Fig. 7a, when n is larger, the number of iterations needed to obtain the optimal solution is smaller than when n is decreased. When n equals 100 bees,

Table 1 The average length of the obtained path for each map

Map# | 1 | 2 | 3 | 4 | 5
Average length | 39.7713 m | 12.5676 m | 21.5843 m | 14.1640 m | 21.7956 m
Table 2 Random parameters for map 1

# | n | m | e | nep | nsp | Path length
1 | 22 | 9 | 5 | 10 | 1 | 39.9788
2 | 7 | 3 | 2 | 6 | 1 | 39.7826
3 | 28 | 5 | 4 | 5 | 4 | 40.3119
4 | 16 | 6 | 3 | 10 | 1 | 40.0171
5 | 22 | 4 | 3 | 9 | 1 | 39.9968
6 | 31 | 5 | 2 | 4 | 1 | 39.9592
7 | 42 | 8 | 6 | 2 | 2 | 39.8774
8 | 46 | 18 | 14 | 8 | 2 | 38.8372
9 | 27 | 3 | 2 | 10 | 5 | 39.4583
10 | 19 | 5 | 2 | 8 | 1 | 39.4937
Table 3 Random parameters for map 2

# | n | m | e | nep | nsp | Path length
1 | 42 | 20 | 4 | 10 | 4 | 12.4676
2 | 25 | 10 | 3 | 4 | 2 | 12.5169
3 | 45 | 15 | 5 | 2 | 2 | 12.5219
4 | 25 | 5 | 4 | 1 | 2 | 12.5330
5 | 18 | 7 | 5 | 9 | 5 | 12.4977
6 | 41 | 10 | 7 | 9 | 1 | 12.4834
7 | 13 | 5 | 4 | 6 | 5 | 12.4929
8 | 10 | 4 | 3 | 7 | 2 | 12.5114
9 | 12 | 4 | 2 | 2 | 3 | 13.1137
10 | 34 | 18 | 7 | 5 | 2 | 12.5378
Table 4 Random parameters for map 3

# | n | m | e | nep | nsp | Path length
1 | 29 | 3 | 2 | 10 | 4 | 21.6242
2 | 20 | 5 | 2 | 4 | 1 | 21.5914
3 | 37 | 9 | 7 | 7 | 5 | 21.5705
4 | 14 | 7 | 4 | 2 | 2 | 21.5886
5 | 45 | 7 | 2 | 8 | 1 | 21.6182
6 | 46 | 21 | 2 | 1 | 4 | 21.5659
7 | 21 | 5 | 2 | 1 | 5 | 21.5757
8 | 41 | 8 | 2 | 6 | 3 | 21.5689
9 | 37 | 4 | 3 | 10 | 5 | 21.5726
10 | 22 | 8 | 7 | 3 | 3 | 21.5670
then the path length at the start iteration is nearly 45.8 m. In approximately ten iterations, the solution dropped to 44.4 m. The obtained length then remained unchanged until around iteration 120, after which the solution began to improve again through the iterations. It declined sharply to 39 m in approximately 200 iterations. This significant change is due to the impact of the ngh shrinking process. The solution continued to improve until it reached the optimal length at nearly 300 iterations; after that, the obtained length did not improve further. On the other hand, when the parameters are small, as in Fig. 7b, the solution decreased slowly through the iterations. After approximately 800 iterations, it reached the optimal obtained length. Hence, the improvement process was smooth and quick when the values of the parameters were large, and it consumed many iterations when those values were small. Therefore, a large value of (n) gives the algorithm a chance to explore new solutions and speeds up the process of arriving at the optimal solution. If (n) is large, then (m)
Fig. 7 Comparison between large and small values of the bee parameter for map (1)
and (e) are also large. This gives the algorithm the opportunity to search thoroughly around the best solutions in the population. Therefore, the merit of the large population size results from the ability to explore the map and find new solutions in a small number of iterations. Figure 8 shows the impact of changing (m) and (e) on the proposed method when (n), (nep), and (nsp) are fixed. The number of iterations is 300 in both experiments. Figure 8b shows that when the values of (m) and (e) are large, the improvement process is faster and more efficient. In contrast, with small values the improvement process is slower, as shown in Fig. 8a. Figure 9 shows the improvement process when (nep) is large; (nep) is set to 30. The figure shows that the process is quick, and the optimal solution is obtained in a small number of iterations.
Fig. 8 Comparison between different values of e and m for map (1)
Fig. 9 The impact of big nep on the evaluation process for map (1)
Table 5 The result of comparison with other algorithms

Map# | The modified Bees Algorithm | GA (m) | PS (m) | PSO (m) | ABC (m) | ABACO (m)
4 | 14.1640 | 18.4410 | 17.7430 | 17.7001 | – | 14.4853
5 | 21.7956 | – | – | – | 25.9706 | 23.1421
4 Comparison
The comparison of the proposed method with the other algorithms is implemented by considering the same assumptions, namely the exact dimensions of the map, the obstacle locations, and the coordinates of the start and goal points. These assumptions are considered to guarantee that the comparison is fair. In addition, the comparison is based on the length of the path obtained by each algorithm. The maps used in the comparison are maps 4 and 5. The proposed method is compared with the Genetic Algorithm (GA), Pattern Search (PS), Particle Swarm Optimisation (PSO), and Aging-Based Ant Colony Optimisation (ABACO) for map 4, and with the artificial bee colony (ABC) and ABACO for map 5 [12]. Table 5 shows the result of the comparison. The results show that the proposed method gives the shortest path for the provided maps. This demonstrates the efficiency and superiority of the suggested BA in finding the shortest path.
5 Conclusion This chapter has presented a modified version of the Bees Algorithm to solve a robot path planning problem. This method was implemented offline in a known static environment. The obtained path was given to a wheeled mobile robot to follow
online. The modified BA uses a new method to generate the initial population of bees. The modified method of initialising the population treats a bee as a real robot. This new method has succeeded in finding appropriate initial populations for a variety of complex maps. The local search and global search succeeded in improving the found solutions. The simulation was run with different values of the modified BA parameters. These parameters were generated randomly using a uniform distribution function. The results of the simulation showed the direct impact of the parameters on the performance of the proposed method. The large population size accelerated the process of finding the optimal path. The results showed the efficiency and robustness of the proposed method in finding the shortest path in five different maps. The comparison results showed the superiority of the proposed method in finding the optimal path compared with the results of other algorithms.
References 1. Dhillon BS (2015) Robot system reliability and safety a modern approach. 1st edn. CRC Press 2. Joukhadar A, Kass Hanna D, Abo Al-Izam E (2020) UKF-based image filtering and 3D reconstruction. Machine vision and navigation. Springer, Cham, pp 267–289 3. Siegwart R, Nourbakhsh I (2004) Introduction to autonomous mobile robots, 1st edn. MIT Press, London 4. Haj Darwish A, Joukhadar A, Kashkash M (2018) Using the bees algorithm for wheeled mobile robot path planning in an indoor dynamic environment. Cognet Eng 5(1):1–23 5. LaValle SM (1999) Rapidly-exploring random trees: a new tool for path planning 6. Khatib O (1986) Real time obstacle avoidance for manipulators and mobile robots. Int J Robot Res 5(1):90–98 7. Kavraki LE, Svestka P, Latombe J-C, Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans Robot Autom 12(4):566–580 8. Choueiry S, Owayjan M, Diab H, Achkar R (2019) Mobile robot path planning using genetic algorithm in a static environment. In: IEEE international conference on robotics and automation (ICRA) 9. Ma J, Liu Y, Zang S, Wang L (2020) Robot path planning based on genetic algorithm fused with continuous Bezier optimization. Comput Intell Neurosci 2020:1–10 10. Alam MS, Rafique MU, Khan MU (2015) Mobile robot path planning in static environments using particle swarm optimization. Int J Comput Sci Electron Eng 3(3):253–257 11. Paikray HK, Das PK, Panda S (2021) Optimal multi-robot path planning using particle swarm optimization algorithm improved by sine and cosine algorithms. Arab J Sci Eng 46:3357–3381 12. Ajeil FH, Ibraheem IK, Azar AT, Humaidi AJ (2020) Grid-based mobile robot path planning using aging-based ant colony optimization algorithm in static and dynamic environments. Sens—MDPI 20(1880):1–26 13. Hosseininejad S, Dadkhah C (2019) Mobile robot path planning in dynamic environment based on cuckoo optimization algorithm. Int J Adv Robot Syst 16(2):1–13 14. Dewang HS, Mohanty PK, Kundu S (2018) A robust path planning for mobile robot using smart particle swarm optimization. Procedia Comput Sci 133:290–297 15. Abbas NH, Ali FM (2016) Path planning of an autonomous mobile robot using enhanced bacterial foraging optimization algorithm. Al-Khwarizmi Eng J 12(3):26–35 16. Bonny T, Kashkash M (2021) Highly optimized Q-learning-based bees approach for mobile robot path planning in static and dynamic environments. J F Robot 1–18 17. Francis G, Ott L, Ramos F (2017) Stochastic functional gradient for motion planning in continuous occupancy maps. In: IEEE international conference on robotics and automation (ICRA)
18. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm— a novel tool for complex optimisation problems. In: 3rd International virtual conference on intelligent production machines and systems (IPROMS 2006), pp 454–459 19. Kashkash M, Haj Darwish A, Joukhadar A (2017) Solving maze problem using newly modified Tremaux’s algorithm with the bees algorithm. In: The third international conference on electrical and electronic engineering, telecommunication engineering and mechatronics (EEETEM2017), pp 182–186 20. Ashraf A et al (2021) Studying the impact of initialization for population-based algorithms with low-discrepancy sequences. Appl Sci 11(17):1–41 21. Miao Y-Q, Khamis AM, Karray F, Kamel MS (2011) A novel approach to path planning for autonomous mobile robots. Control Intell Syst 39(4) 22. C. T. University. http://imr.ciirc.cvut.cz/planning/
Production Plan Optimisation
Method for the Production Planning and Scheduling of a Flexible Manufacturing Plant Based on the Bees Algorithm Chao Wang, Tianxiang Chen, and Zhenghao Li
1 Introduction
1.1 Background
This study was motivated by an actual production problem of the Trumpf enterprise (China), a German enterprise famous for its machine tools and laser systems. The flexible manufacturing plant that is the subject of this study is Trumpf's sheet metal factory in Suzhou, China. The manufacturing of sheet metal parts involves laser cutting, punching, sorting and pick-up, bending, welding and inspection. However, the small-lot, high-variety products make the bending procedure a bottleneck stage of the production and processing operation. The flexible job shop scheduling problem (FJSP) plays a vital role in manufacturing systems and industrial engineering. In the FJSP, workpieces can be processed by one of several feasible machines, which gives more flexibility and is consistent with realistic workshops. Hence, the most suitable machine needs to be selected for each workpiece, and the operation sequence on each machine must also be determined to obtain the optimal solution.
1.2 Literature Review

Research into production scheduling problems originated in the 1950s, prompted by the key concern of efficiency in industrial production, and has attracted the attention of many international researchers. In 1954, Johnson studied the permutation flow shop scheduling problem (PFSP) with two machines; the flow shop scheduling problem was later shown to be NP-hard when the number of machines (or processes) exceeds three [1]. One of the most popular optimisation criteria for the PFSP is minimising completion time [2], and many studies have accordingly set the criterion as minimising total flow time [3] or minimising total delay [4]. The flexible job shop scheduling problem (FJSP) is another kind of JSP that is closer to the actual production environment and more in line with the characteristics of smart manufacturing than the PFSP.
In the past few decades, with the rapid development of computer technology, intelligent algorithms such as the Genetic Algorithm (GA), Grey Wolf Optimiser (GWO), Simulated Annealing (SA), Particle Swarm Optimisation (PSO), Tabu Search (TS) and the Artificial Bee Colony (ABC) algorithm have been widely used to solve FJSP problems with good results. For example, Zhou et al. proposed four hyper-heuristic algorithms based on multi-objective genetic programming to solve a multi-objective dynamic FJSP with average weighted delay, maximum delay and average flow time as the objectives [5]. With the advancement of technology, many new swarm intelligence algorithms have emerged; for example, Mirjalili proposed a grey wolf optimisation algorithm that simulates the leadership hierarchy and hunting mechanism of grey wolves, which has been applied to the PFSP with good optimisation results [6]. Currently, metaheuristic algorithms are usually the first choice for solving the FJSP [7].
Studies in the last few years have combined hybrid methods of fuzzy simulation and artificial bee colonies for disassembly sequence planning [8], flexible process planning with GA for product recovery optimisation [9], disassembly sequence planning for CNC machine tools with a multi-objective ant colony algorithm [10], disassembly planning with a discrete artificial bee colony [11], disassembly sequence planning for hydroelectric power plants with GA [7], robotic disassembly sequence planning using a discrete Bees Algorithm (BA) [12], an improved multi-objective discrete BA for robotic disassembly [13], and collaborative optimisation of robotic disassembly sequence planning using an improved discrete BA [14].
The range of applicability of the Bees Algorithm in engineering encompasses a large number of optimisation tasks [15]. Moreover, the BA lends itself well to hybridisation with other algorithms: in a study on the single machine scheduling problem, strong performance was achieved by the genetic BA, a hybrid of the BA and GA [16]. Hence, the BA is selected in this study and adapted to the optimisation problem at hand.
2 Mathematical Modelling

At present, the bending plant that is the subject of this study has four different bending machines (Fig. 1), each with different processing capabilities. This means that the processing time for the same workpiece differs between machines, and the most suitable machine needs to be selected for each workpiece. Each day, n workpieces are assigned to be processed by the four machines, and the machine selection leads to a large range of time variations. Moreover, the dies used by each kind of workpiece are not the same, and the die changeover time makes a substantial contribution to the whole working time. In summary, the bending plant is a typical flexible manufacturing line, in which two subproblems must be considered: (1) selecting a feasible machine for each workpiece to reduce the processing time and (2) determining an appropriate operation sequence for all workpieces assigned to the same bending machine to reduce the die changeover time.
Currently, the production schedule is prepared manually by the company's engineers and then entered into the MES, using the boundary condition of less than 7.5 h of processing time per machine per day; the schedule takes into account product delivery dates, the number of parts to be processed subsequently and the raw material inventory. However, only the processing time of the workpieces is considered at this stage, and as the multi-moment analysis in Fig. 2 shows, processing occupies only 44% of the observed time in a typical situation. In practice, the existing scheduling method cannot achieve a reasonable resource allocation, and sometimes a machine can only complete approximately 55% of its tasks.
Among the parameters to be optimised, the selection of the most suitable bending machine requires engineers to make a pre-scheduling evaluation based on the material, size and bending force of each part and the capability of the bending machine.
Fig. 1 The four bending machines in the flexible manufacturing plant
Fig. 2 Multi-moments analysis of BEND5130
Therefore, in the actual scheduling process, with the target of completing the production tasks by the deadline, assigning the production tasks to the corresponding machines one by one to meet the 7.5-h boundary condition exposes several problems:
• As the four bending machines have different processing capabilities, unreasonable machine selection may lead to low machine utilisation.
• The dies are divided into upper and lower dies. Resource allocation may not be reasonable, which increases the processing time and the die changeover times.
• Gathering workpieces that use the same die can greatly reduce the die changeover time; however, this is not considered by the MES and relies only on the experience of the engineers.
• When different machines require the same die, one of the machines must wait for the die to be returned before processing.

Considering these problems and the real working conditions in the plant, our assumptions are as follows:
• Only one workpiece is processed on a given machine at a time.
• A single workpiece is processed by one machine at a given time point.
• While a workpiece is being processed on the selected machine, that process cannot be interrupted.
• When processing equipment is initially free, any of the workpieces can be processed on it.
• All workpieces are given the same priority.
• Each workpiece can only be assigned to a machine from the set of optional equipment specified for that workpiece.
• The daily working time of each bending machine should be no longer than 450 min.

If the current workpiece O_i uses the same type of die as the previous workpiece O_{i-1} and requires a smaller die length, there is no need to change the die. Otherwise, the consumed die changeover time is proportional to the die length and is calculated as

Ld_{ik} \times 1 \text{ min/mm} \quad (1)
Table 1 Symbols used in this study

No | Symbol | Definition
1 | n | Number of workpieces
2 | m | Number of bending machines
3 | i | Workpiece number, i ∈ n
4 | k | Machine number, k ∈ m
5 | T_ik | Processing time of workpiece i on machine k
6 | S_ik | Die changeover time of workpiece i on machine k
7 | D_ik | Die of workpiece i on machine k
8 | Ld_ik | Die length of workpiece i on machine k (mm)
To facilitate the description of the mathematical model, the symbols of the relevant parameters in this study are defined as shown in Table 1.
2.1 Objective Function

In this study, the minimum completion time of the machine with the longest completion time among the four machines is set as the optimisation target, according to the actual demand of the flexible manufacturing plant. The completion time consists of the bending processing time and the die changeover time. The objective function is:

f = \min\left( \sum_{k=1}^{m} \sum_{i=1}^{n} T_{ik} + \sum_{k=1}^{m} \sum_{i=1}^{n} S_{ik} \right) \quad (2)
where \sum_{k=1}^{m} \sum_{i=1}^{n} T_{ik} represents the total bending processing time of machine k, and \sum_{k=1}^{m} \sum_{i=1}^{n} S_{ik} represents the total die changeover time on machine k. As the die changeover time is governed by the machine's current and next processed workpiece, S_{ik} is calculated through the following equation:

S_{ik} = \begin{cases} Ld_{ik} + Ld_{(i-1)k}, & \text{if } D_{ik} \neq D_{(i-1)k} \\ |Ld_{ik} - Ld_{(i-1)k}|, & \text{if } D_{ik} = D_{(i-1)k} \text{ with different lengths} \\ 0, & \text{if } D_{ik} = D_{(i-1)k} \text{ with the same length} \end{cases} \quad (3)
The die changeover time includes both die unloading and loading, and the time consumed is proportional to the die length, as mentioned above. Moreover, when the current workpiece and the next workpiece use the same type of die, the die is not changed if the demanded die length can be covered by the previously mounted, longer die. Hence, the calculation of the die changeover time can be divided into three situations: the type and length are both the same, the type and length are both different, and the type is the same but the length is different.
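To make this three-way rule concrete, the following Python sketch computes the changeover time between two consecutive workpieces on one machine. It is an illustrative implementation under the stated assumptions (the data structure and the handling of the first workpiece of the day are hypothetical; the 1 min/mm rate follows Eq. (1) and the cases follow Eq. (3)).

```python
def die_changeover_time(prev, curr):
    """Changeover time (minutes) between consecutive workpieces on one machine.

    Each workpiece is a dict with keys 'die' (die type) and 'die_len' (die length, mm).
    The 1 min per mm rate follows Eq. (1); the three cases follow Eq. (3).
    """
    if prev is None:                        # assumption: first workpiece of the day loads its die
        return curr["die_len"] * 1.0
    if prev["die"] != curr["die"]:          # different die type: unload the old die, load the new one
        return (prev["die_len"] + curr["die_len"]) * 1.0
    if prev["die_len"] >= curr["die_len"]:  # same type, shorter or equal length: keep the mounted die
        return 0.0
    # same type but a longer die is needed: add only the missing length
    return abs(curr["die_len"] - prev["die_len"]) * 1.0


# Example: same die type, 200 mm followed by 500 mm gives 300 min of extra die length
print(die_changeover_time({"die": "up_die_1", "die_len": 200},
                          {"die": "up_die_1", "die_len": 500}))
```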
2.2 Boundary Conditions

Machine processing constraints

h_{ij} - c_{pj} > 0 \quad (4)
where h_{ij} represents the processing start time of workpiece O_i on machine m_j, and c_{pj} represents the processing end time of workpiece O_p on machine m_j, which means that only one workpiece can be processed on one machine at a given time point; and

c_{ij} - h_{pj} = t_{ij}, \quad t_{ij} = 1 \quad (5)
where c_{ij} represents the processing end time of workpiece O_i on machine m_j, and h_{pj} represents the processing start time of workpiece O_p on machine m_j, which means that once the processing of any workpiece has started, it cannot be interrupted.

Feasible machine constraints
The selection of a feasible machine set is determined by the process requirements of the workpiece and the key machine performance parameters. These parameters consist of the tonnage and working length of the bending machine. The detailed constraints of the four existing machines can be found in Table 2.

Calculation of bending force
As each workpiece contains one or more bending edges and the bending force is not identical for each bending edge, the bending force calculation in this study only considers the maximum bending force for each workpiece. Typically, the bending force is calculated as follows:

P = 650 \times S^2 \times \frac{L}{V} \quad (6)
where P represents the bending force, S represents the thickness of the plate, L represents the length of the plate and V is the groove width of the die. However, the calculated results lead to an error of approximately 25% based on work experience. Hence, the bending force used in this study was taken from the database created by Trumpf engineers. Due to confidentiality agreements, this bending-force database is not shown in this chapter.
Table 2 Boundary conditions of the four bending machines

Boundary condition | M1 | M2 | M3 | M4
Condition 1 (maximum tonnage) | 46 t | 66 t | 130 t | 320 t
Condition 2 (maximum working length) | 1000 mm | 2000 mm | 3000 mm | 4000 mm
Working length of the workpiece
When the length of the bending edge is greater than the width of the workpiece, the working length of the workpiece is equal to the length of the workpiece. Meanwhile, when the length of the bending edge is less than the width of the workpiece, the working length of the workpiece is equal to the width of the workpiece. In the process of feasible machine selection, when the working length is greater than the maximum working length of the machine, the workpiece cannot be processed on that machine.
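As an illustration of these feasibility rules, the Python sketch below combines the tonnage check, the working-length rule just stated and a bending-force value assumed to come from the company's database. The function and parameter names are hypothetical; only the machine limits are taken from Table 2.

```python
# Machine limits from Table 2: (maximum bending force in tonnes, maximum working length in mm)
MACHINES = {"M1": (46, 1000), "M2": (66, 2000), "M3": (130, 3000), "M4": (320, 4000)}

def working_length(bend_edge_len, piece_len, piece_width):
    # Rule from the text: take the workpiece length when the bending edge is longer
    # than the workpiece width, otherwise take the workpiece width
    return piece_len if bend_edge_len > piece_width else piece_width

def feasible_machines(required_force_t, bend_edge_len, piece_len, piece_width):
    """Return the machines able to process a workpiece.

    required_force_t is the maximum bending force over all bending edges (tonnes),
    assumed here to be looked up in the company's bending-force database.
    """
    length = working_length(bend_edge_len, piece_len, piece_width)
    return [m for m, (max_force, max_len) in MACHINES.items()
            if required_force_t <= max_force and length <= max_len]

# Example: a part needing 60 t with a 1500 mm working length can go on M2, M3 or M4
print(feasible_machines(60, 1200, 1500, 900))
```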
2.3 Encoding and Decoding

In this study, individual bees are coded using segmental coding, and each initial solution contains both machine selection and operation sequence coding. Based on the mathematical model of the proposed FJSP, machine selection (MS) is used to determine the feasible machine, and an operation sequence (OS) is used for process sequencing, where the length of MS is equal to the total number of machining tasks, and the length of OS is equal to the total number of machining tasks on a given machine. As shown in Fig. 3, a null matrix of N × 4 is created as the initial matrix, and the matrix is updated after conducting the feasible machine selection: if machine k is feasible for workpiece i, the kth column of row i of the matrix is set to k. In this way, a set of optional machines is obtained for each workpiece; taking O_1 as an example, the set of optional machines is [0, 2, 3, 4]. According to the total processing time and the limited processing time of the machines, the random and roulette methods are used to select a single feasible machine (a non-zero entry) from the matrix. After the machine selection has been performed, the number of each workpiece is stored in the corresponding matrix according to the machine selection result. At the same time, the operation sequence on each machine is determined using sequencing rules. Finally, the new matrix that represents the selected machine and the operation sequence of the workpieces can be generated.
Fig. 3 Coding process for machine selection and operation sequence
Fig. 4 Correspondence between encoding and decoding
Once the machine selection and operation sequence have been determined, they are decoded to calculate the total processing time and the total die changeover time for each bending machine. In other words, decoding recovers the information contained in each honey source individual, determining the start time, end time and die changeover time for each workpiece, and turns the honey source individual into an executable scheduling solution. First, the processing machine M_k, the processing time T_ik and the die information are obtained for the scheduled workpieces. Then, the tasks assigned to the same bending machine are laid out assuming that each task can start immediately, and the total processing time for each machine is taken as the sum of the processing times of all workpieces on that machine. In this chapter, the die changeover time is calculated from the determined operation sequence (OS). The changeover time for the current workpiece O_i is governed by the previous workpiece O_{i-1}: when the upper die of O_i is of the same type as that of O_{i-1}, the changeover time is calculated from the difference in die length, and the changeover time of the lower die is obtained in the same way. The changeover time for each machine is then the sum of the changeover times of all its workpieces. In Fig. 4, each solution is expressed as Site A, where (2, 4) means that the first workpiece is processed on Machine 4 and its operation sequence number is 2. By conducting the decoding procedure, both the processing time and the die changeover time can be calculated.
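A minimal Python sketch of this decoding step is given below. It is not the authors' implementation: the data structures are illustrative, it reuses the die_changeover_time helper sketched earlier, and it returns the per-machine working times together with the value used to rank solutions (the longest machine working time).

```python
def decode(schedule, proc_time, upper_dies, lower_dies):
    """Decode a machine-selection/operation-sequence solution into machine working times.

    schedule: {machine: [workpiece ids in operation-sequence order]}
    proc_time[(i, k)]: processing time of workpiece i on machine k (minutes)
    upper_dies[i], lower_dies[i]: (die type, die length in mm) for workpiece i
    die_changeover_time() is the helper sketched earlier; all names are illustrative.
    """
    totals = {}
    for machine, sequence in schedule.items():
        total, prev_up, prev_low = 0.0, None, None
        for i in sequence:
            total += proc_time[(i, machine)]
            up = {"die": upper_dies[i][0], "die_len": upper_dies[i][1]}
            low = {"die": lower_dies[i][0], "die_len": lower_dies[i][1]}
            total += die_changeover_time(prev_up, up) + die_changeover_time(prev_low, low)
            prev_up, prev_low = up, low
        totals[machine] = total
    return totals

def plan_quality(schedule, proc_time, upper_dies, lower_dies):
    # The machine with the longest working time determines the quality of the plan
    return max(decode(schedule, proc_time, upper_dies, lower_dies).values())
```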
3 The Bees Algorithm

The BA is an intelligent optimisation algorithm for finding optimal solutions; it is a nature-inspired method proposed in 2005 by Pham and his co-workers at Cardiff University [17].
The algorithm was inspired by bees' search for nectar sources. The location of a nectar source sought by the bees represents a solution to the optimisation problem, and the neighbourhood search area of radius R centred on the nectar source location is called the flower patch. First, the initial population is generated by scout bees, and the neighbourhood of each nectar source is explored during the iterative process, which identifies elite and selected sites based on the abundance of the nectar sources (the magnitude of the fitness value). Different numbers of recruited bees are assigned to reinforce the elite and the selected sites. A greedy criterion is used when seeking new honey sources during the neighbourhood search: sites that yield a better solution than the current one during the iterative process are retained. Finally, the bee population is re-ranked according to fitness, and the optimal solution is output.
In this study, two different BAs were applied: the original BA and an improved BA obtained by adding the site abandonment (SA) technique to the former [18]. In the improved BA, a site is abandoned when it does not meet the requirements, based on two main criteria: (1) the initial site is judged, and when its improvement is less than the desired value, the initial site is deemed to have no search potential and is discarded; (2) if a site is searched several times but no improved solution is found, the nectar field is deemed to be exhausted and abandoned. With the SA technique, the improved BA gives potential sites more scope for optimisation during the iterations.
3.1 BA with Site Abandonment Technique

The flowchart of the improved BA used in this study is shown in Fig. 5. First, the population is initialised, and its initial sites are searched globally, where each site represents a feasible solution in the solution space. According to the SA rule for the initial solution, each site's optimisation increment is recorded to judge the merit of the initial solution, and initial solutions without potential are abandoned. Then, the initial sites are ranked according to their fitness, and different numbers of recruited bees are assigned to the elite and the selected sites to search their neighbourhoods. In this stage, the fitness values of the elite sites and the selected sites are recorded to judge the abundance of the sites. If a newly generated solution is inferior to the existing solution, it is discarded and the stagnation count is increased by one; otherwise, the site is updated and the stagnation count is reset to zero. When the stagnation count of a site reaches a certain value, the site is discarded. Finally, a second global search is performed to avoid falling into a locally optimal solution.
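The loop just described can be sketched as follows. This is a simplified, hypothetical Python implementation rather than the authors' code: random_solution, local_search and fitness are placeholders for the encoding, mutation and decoding procedures of Sects. 2.3 and 3.2, and the default parameters follow the improved BA column of Table 6.

```python
def improved_ba(random_solution, local_search, fitness, n_scouts=100, n_best=10,
                n_elite=3, bees_elite=25, bees_best=15, stagnation_limit=10,
                iterations=100):
    """Bees Algorithm with site abandonment (minimisation); defaults follow Table 6."""
    sites = [{"sol": random_solution(), "stall": 0} for _ in range(n_scouts)]
    for s in sites:
        s["fit"] = fitness(s["sol"])
    best = min(sites, key=lambda s: s["fit"]).copy()

    for _ in range(iterations):
        sites.sort(key=lambda s: s["fit"])
        for rank, site in enumerate(sites[:n_best]):
            n_bees = bees_elite if rank < n_elite else bees_best
            cand = min((local_search(site["sol"]) for _ in range(n_bees)), key=fitness)
            if fitness(cand) < site["fit"]:            # greedy acceptance of a better neighbour
                site.update(sol=cand, fit=fitness(cand), stall=0)
            else:
                site["stall"] += 1
            if site["stall"] >= stagnation_limit:      # site abandonment: the patch is exhausted
                new = random_solution()
                site.update(sol=new, fit=fitness(new), stall=0)
        for site in sites[n_best:]:                    # remaining bees perform a global search
            new = random_solution()
            site.update(sol=new, fit=fitness(new), stall=0)
        current = min(sites, key=lambda s: s["fit"])
        if current["fit"] < best["fit"]:
            best = current.copy()
    return best["sol"], best["fit"]
```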
Fig. 5 Flowchart of the improved BA used in this study

3.2 Local Search

Once the initial solutions have been generated, the initial set of solutions is ranked according to fitness, and different numbers of recruited bees are dispatched to perform a local search around the elite sites (large fitness values) and the selected sites (smaller fitness values). As the objective function is to minimise the maximum completion time of the bending machine with the longest working time, the local search is performed using two operators suited to this objective. The first is machine mutation, which changes the task assignment across the four bending machines and has the greatest impact on fitness. The second is operation sequence mutation, which is carried out on a single machine and produces a considerable change in the die changeover time.

Machine mutation
Since the processing time of a workpiece is limited by the process requirements and the type of bending machine, machine mutation is carried out by reselecting the machine for part of the workpieces. When the reselected machine is the same as the previous one, the selection operation is repeated. A greedy criterion is introduced so that when the performance of the reselected machine is lower than that of the original machine, the original machine is retained, which enhances the algorithm's local search capability. At the same time, the machine mutation probability affects the local search performance: when fewer processing tasks reselect their machine, the optimisation easily stagnates because few individuals are transformed; when more processing tasks are selected for mutation, the local search becomes too strong and sites with higher fitness values are easily disrupted. Repeated experiments found that the local search performance is best when the machine mutation probability is 0.1.
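A minimal sketch of this machine-mutation operator is given below (Python; illustrative names). The option sets could come from a feasibility check such as the feasible_machines sketch earlier; the 0.1 probability and the greedy fallback follow the description above, and the data structures are assumptions rather than the authors' implementation.

```python
import random

def machine_mutation(assignment, proc_time, options, p_mutate=0.1):
    """Reselect the machine for a random subset of workpieces.

    assignment: {workpiece id: machine id}; options[i]: feasible machines for workpiece i;
    proc_time[(i, k)]: processing time of workpiece i on machine k.
    Greedy criterion from the text: a reselected machine is kept only if it does not
    increase the processing time of that workpiece, otherwise the original is retained.
    """
    mutated = dict(assignment)
    for i, current in assignment.items():
        alternatives = [m for m in options[i] if m != current]
        if not alternatives or random.random() >= p_mutate:
            continue
        candidate = random.choice(alternatives)
        if proc_time[(i, candidate)] <= proc_time[(i, current)]:
            mutated[i] = candidate
    return mutated
```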
Table 3 Sorting examples

Workpiece | Upper die | Lower die | Length (mm)
1 | up_die_1 | lower_die_1 | 200
2 | up_die_1 | lower_die_1 | 500
3 | up_die_1 | lower_die_2 | 1000
4 | up_die_2 | lower_die_2 | 800
5 | up_die_2 | lower_die_1 | 1000
6 | up_die_3 | lower_die_1 | 1200
7 | up_die_3 | lower_die_1 | 500
8 | up_die_4 | lower_die_1 | 500
In the improved Bees Algorithm, the initial mutation probability was set to 0.15 and the minimum mutation probability to 0.05.

Operation sequence mutation
Die replacement requires the bending equipment to stop running; therefore, in the production process, tasks that use the same die should be gathered together to reduce the number of die changes and shorten the die changeover time. The die changeover time for the current workpiece is determined by the previous workpiece: the die change can be omitted when the current workpiece uses the same type of die as the previous workpiece and requires a die length less than or equal to that of the die last used. Otherwise, the die length must be increased or the die must be replaced with a new one; thus, the die changeover time is related to the processing sequence of the tasks. In this study, a sequencing algorithm was used to sort the process sequence on each machine, grouping the processing tasks that share the same upper or lower die and determining the processing order by die length when the dies of multiple tasks are the same. As shown in Table 3, the sequence was sorted according to different priorities: the upper die has first priority, the lower die second priority and the length third priority. The mutation was then carried out by a limited exchange of the processing sequence based on the upper dies.
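The three-level priority can be expressed directly as a sort key, as in the Python sketch below. The field names are illustrative, and sorting the lengths in descending order is an assumption made so that a longer die already mounted can cover the shorter tasks that follow; the chapter itself does not state the direction of the length ordering.

```python
def sort_operations(tasks):
    """Sort one machine's tasks by upper die, then lower die, then die length.

    tasks: list of dicts with keys 'id', 'upper', 'lower', 'length' (mm).
    Priorities follow the chapter (upper die first, lower die second, length third);
    descending length is an assumption so that a longer die covers later shorter ones.
    """
    return sorted(tasks, key=lambda t: (t["upper"], t["lower"], -t["length"]))


tasks = [
    {"id": 1, "upper": "up_die_1", "lower": "lower_die_1", "length": 200},
    {"id": 2, "upper": "up_die_1", "lower": "lower_die_1", "length": 500},
    {"id": 5, "upper": "up_die_2", "lower": "lower_die_1", "length": 1000},
]
print([t["id"] for t in sort_operations(tasks)])   # -> [2, 1, 5]
```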
4 Simulation Results

In this study, the actual production tasks of one day at Trumpf (China) are used for the case study. The current production situation is that each machine has a daily production task of approximately 20 workpieces and approximately 450 min of production time (the die changeover time is not included in the actual scheduling), of which approximately 200 min are spent on die changes and 250 min on processing; only 55% of the tasks are finished on the day. The processing times of some of the workpieces in the case study are shown in Table 4, where the numbers
indicate the processing time of workpiece O_i on the corresponding machine, and "–" indicates that workpiece O_i cannot be processed on that machine. These data are used as a case study to validate the mathematical model and to compare the two Bees Algorithms. To make the optimisation results more reliable, the example problem was run 10 times consecutively to avoid error interference. The initial structure of the bee population was adjusted several times during the operation of the algorithm to obtain the best population size. For the parameters used in the BA, the colony size ranged from 170 to 530 and was varied by changing the number of scout bees and the numbers of recruited bees in the elite and non-elite best patches. Three typical sets of parameters are shown in Table 5. Finally, the parameters of the BA and the improved BA used in this study are shown in Table 6.

Table 4 Processing time of partial workpieces (mins)

Workpiece | M1 | M2 | M3 | M4
O1 | – | 16 | 12 | 12
O2 | – | – | – | 3
O3 | 18 | 24 | 18 | 18
O4 | – | – | 60 | 60
O5 | 20 | 34 | 15 | 15
O6 | – | – | 25 | 25
O7 | – | – | 28 | 28
O8 | 12 | 23 | 17 | 17
O9 | – | 30 | 20 | 20
O10 | – | – | – | 8
Table 5 Performance with different parameters of BA

Item | Set 1 | Set 2 | Set 3
Number of scout bees in the selected patches | 50 | 100 | 200
Number of best patches in the selected patches | 5 | 10 | 10
Number of elite patches in the selected best patches | 2 | 3 | 3
Number of recruited bees in the elite patches | 30 | 25 | 40
Number of recruited bees in the non-elite best patches | 20 | 15 | 30
Size of the neighbourhood for each patch | 0.1 | 0.15 | 0.15
Colony size | 170 | 280 | 530
Number of iterations | 100 | 100 | 100
Best solution | 487.9 | 484.531 | 481.7
Table 6 Parameters of BA and improved BA in this study

Parameter | BA | Improved BA
Number of scout bees in the selected patches | 50 | 100
Number of best patches in the selected patches | 5 | 10
Number of elite patches in the selected best patches | 2 | 3
Number of recruited bees in the elite patches | 30 | 25
Number of recruited bees in the non-elite best patches | 20 | 15
Stagnation | / | 10
Size of the neighbourhood for each patch | 0.1 | 0.05–0.15
Colony size | 170 | 280
Number of iterations | 100 | 100
4.1 Results of BA

For the original BA, the results over the iterations are shown in Fig. 6 and the corresponding Gantt chart in Fig. 7a. It can be observed that when the number of iterations is 35, a relatively good solution is retained for some time, which means that this algorithm reaches a locally optimal solution more easily. The best solution after optimisation was 487.06 min, which occurred on Machine 2, the machine with the longest working time among the four. On Machine 2, the working time consists of the processing time, the upper die changeover time and the lower die changeover time.

Fig. 6 The results of BA and improved BA during iterations
Fig. 7 a The Gantt chart of the four machines by BA; b the Gantt chart of the four machines by improved BA
4.2 Results of the Improved Bees Algorithm

When applying the improved BA, the results over the iterations are shown in Fig. 6 and the corresponding Gantt chart in Fig. 7b. It can be observed that comparatively stable solutions were obtained when the number of iterations was 38, and the trend during the iterations was more stable than that of the BA results above. The best solution after optimisation was 473.46 min, which occurred on Machine 4, the machine with the longest working time among the four.
Table 7 Result of machine working time with BA (mins)

 | M1 | M2 | M3 | M4
Processing time | 464 | 455 | 430 | 422
Upper die changeover time | 11.1 | 12.2 | 23 | 25.1
Lower die changeover time | 11.6 | 19.9 | 30.4 | 37.1
Total working time | 486.7 | 487.1 | 483.4 | 484.2
Table 8 Result of machine working time with improved BA (mins)

 | M1 | M2 | M3 | M4
Processing time | 447 | 445 | 424 | 405
Upper die changeover time | 11 | 11.3 | 20.7 | 29.4
Lower die changeover time | 13.6 | 14.9 | 27.9 | 39
Total working time | 471.6 | 471.2 | 472.6 | 473.4
4.3 Comparison of Results

Comparing the results of the two kinds of BA, not only could the best solution be obtained by the improved BA, but the optimisation curve was also more stable with the improved BA, as shown in Fig. 6. It can also be observed that the optimisation curve of the BA is prone to stagnation and its solutions are unstable. With the site abandonment technique, the optimisation curve shows that the solution process is much more stable, which can be attributed to the fact that sites with better potential were given more opportunities for further search. The final total processing time and total die changeover time can be found in Tables 7 and 8. The die changeover time occupied from 5 to 15% of the total working time, which is significantly reduced compared to the actual situation. This can be attributed to the fact that the operation sequence was sorted according to the rules in the established mathematical model.
5 Conclusions

The FJSP in actual production processes is a subject worthy of in-depth research. The present study considered many of the boundary conditions that exist in actual production, based on assumptions, but there are still issues that cannot be fully considered. For example, it is recognised that different industries have their own difficulties in solving the FJSP, mainly due to the complexity of the actual production process, production conditions and personnel management. In this study, mathematical modelling and optimisation calculations were carried out for the scheduling of a sheet metal factory with flexible manufacturing shop characteristics. A mathematical model that largely covers the actual production conditions was established, and the
optimisation results of the two Bees Algorithms were compared. The best solution obtained in this study was accepted by the company. The algorithm for the selection of feasible machines, the algorithm for the calculation of bending forces based on the company's database, and the method for sequencing the processing of workpieces on the same machine can all be applied independently to improve production management at the company.
The established machine mutation and operation sequence mutation operators can obtain better solutions in the local search but still have room for improvement. In the future, the crossover method of the genetic algorithm could be considered to compensate for the shortcomings of the existing mutation methods. When applying the site abandonment technique, the selection of the stagnation count involves a trade-off: if this number is larger, it is not easy to introduce new sites with greater potential; if it is smaller, it is not possible to further explore the existing good sites and exploit the advantages of the SA technique. In this study, the best solution was 472.6 min, which ostensibly exceeded the 450-min working time constraint; however, it should be noted that only 55% of the tasks were finished in the actual situation on the day of the data considered. When comparing the original BA and the improved BA, the latter is not only able to obtain a better solution, but its optimisation curve is also smoother and less likely to fall into a local optimum.

Acknowledgements The authors would like to acknowledge the valuable comments and suggestions of their associates in TRUMPF (China). Special thanks are expressed to Prof. Duc Truong Pham for his interesting and rewarding comments.
References

1. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1(2):117–129
2. Fernandez-Viagas V, Framinan JM (2014) On insertion tie-breaking rules in heuristics for the permutation flowshop scheduling problem. Comput Oper Res 45:60–67
3. Fernandez-Viagas V, Framinan JM (2017) A beam-search-based constructive heuristic for the PFSP to minimize total flowtime. Comput Oper Res 81:167–177
4. Khare A, Agrawal S (2021) Effective heuristics and metaheuristics to minimize total tardiness for the distributed permutation flowshop scheduling problem. Int J Prod Res 59(23):7266–7282
5. Zhou Y, Yang J, Huang Z (2020) Automatic design of scheduling policies for dynamic flexible job shop scheduling via surrogate-assisted cooperative coevolution genetic programming. Int J Prod Res 58(9):2561–2580
6. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
7. Li B, Li C, Cui X, Lai X, Ren J, He Q (2020) A disassembly sequence planning method with team-based genetic algorithm for equipment maintenance in hydropower station. IEEE Access 8:47538–47555
8. Tian G, Zhou M, Li P (2017) Disassembly sequence planning considering fuzzy component quality and varying operational cost. IEEE Trans Autom Sci Eng 15:748–760
9. Feng Y, Gao Y, Tian G, Li Z, Hu H, Zheng H (2018) Flexible process planning and end-of-life decision-making for product recovery optimization based on hybrid disassembly. IEEE Trans Autom Sci Eng 16:311–326
10. Feng Y, Zhou M, Tian G, Li Z, Zhang Z, Zhang Q, Tan J (2020) Target disassembly sequencing and scheme evaluation for CNC machine tools using improved multi-objective ant colony algorithm and fuzzy integral. IEEE Trans Syst Man Cybern: Syst 49:2438–2451
11. Tian G, Ren Y, Feng Y, Zhou M, Zhang H, Tan J (2020) Modeling and planning for dual-objective selective disassembly using AND/OR graph and discrete artificial bee colony. IEEE Trans Ind Inform 15:2456–2468
12. Liu J, Zhou Z, Pham DT, Xu W (2018) Robotic disassembly sequence planning using enhanced discrete bees algorithm in remanufacturing. Int J Prod Res 56:3134–3151
13. Liu J, Zhou Z, Pham DT, Xu W, Yan J (2018) An improved multi-objective discrete bees algorithm for robotic disassembly line balancing problem in remanufacturing. Int J Adv Manuf Technol 97:3937–3962
14. Liu J, Zhou Z, Pham DT, Xu W, Ji C, Liu Q (2020) Collaborative optimization of robotic disassembly sequence planning and robotic disassembly line balancing problem using improved discrete Bees algorithm in remanufacturing. Robot Comput-Integr Manuf 61:101829
15. Pham DT, Castellani M (2015) A comparative study of the Bees algorithm as a tool for function optimization. Cogent Eng 2(1):1091540
16. Packianather MS, Yuce B, Mastrocinque E et al (2014) Novel genetic Bees algorithm applied to single machine scheduling problem. In: 2014 World Automation Congress (WAC). IEEE, pp 906–911
17. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2005) The Bees algorithm. Technical note, Manufacturing Engineering Centre, Cardiff University, UK
18. Pham DT, Castellani M (2009) The Bees algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng Part C: J Mech Eng Sci 223(12):2919–2938
Application of the Dual-population Bees Algorithm in a Parallel Machine Scheduling Problem with a Time Window

Yanjie Song, Lining Xing, and Yingwu Chen
1 Introduction

In recent years, the manufacturing industry has undergone unprecedented changes. The number of workers involved in traditional product manufacturing processes is gradually decreasing, and production is becoming automated and intelligent. While this development trend improves production efficiency, it also poses unprecedented challenges for the management of the production process. Intelligent manufacturing requires companies to accurately control the entire production process and effectively control every detail of it. Manufacturing enterprises need to ensure production efficiency by formulating a production plan that fits the actual situation of the enterprise. A reasonable arrangement of the processing tasks on each machine is a prerequisite for achieving such refined management.
Within the family of parallel machine scheduling problems, we study an unrelated parallel machine scheduling problem in which jobs cannot be preempted. In the production workshop, a series of machines is used to process a series of parts. These parts and components involve only one processing procedure, and processing on one machine means the completion of the production process. The machine needs a certain waiting time before the processing of a product starts. In this problem, a time window attribute is assigned to each product process, which turns it into a
parallel machine scheduling problem with a time window (PMSP-TW). There is no mutual interaction between the machines in the production workshop, and tasks are not allowed to be interrupted or inserted. In the PMSP-TW problem, a product is only valid if it is processed within its required time window. Once a product fails to be executed within the required time window or cannot be added to the plan, the production process of this product is not completed.
The parallel machine scheduling problem, as a classic scheduling optimisation problem, has been extensively studied. Fleszar and Hindi proposed a method combining mixed-integer and constraint programming models to solve the resource-constrained unrelated parallel machine scheduling problem [1]. Yang et al. designed a multistage parallel genetic algorithm for the machine scheduling problem [2]; the genetic algorithm adopts a hybrid-coded method and is adapted to handle job splitting. Oliveira and Pessoa proposed a new family of cuts to strengthen the arc-time-indexed formulation, along with an efficient separation algorithm [3]; they studied the identical parallel machine scheduling problem and presented an improved branch-cut-and-price algorithm. Li et al. considered the impact of job rejection on the parallel machine scheduling problem and presented a 2-approximation algorithm [4], which effectively shortens the running time for solving this problem. Qamhan and Alharkan considered an adjustment to a mixed-integer linear programming model that fully accounts for resource constraints [5]. The parallel machine scheduling problem is one of the classic optimisation scheduling problems and is also a type of problem often encountered in actual manufacturing. Through our research, we aim to provide a new solution to the parallel machine scheduling problem with a time window and thereby improve the efficiency of actual manufacturing.
Compared with the traditional parallel machine scheduling problem, the time window constraint of the PMSP-TW imposes strict restrictions on each process: the determination of the start time and end time of one process affects the processing of the other products that need to be produced. The problem thus adds time dependence to the sequence dependence of the traditional parallel machine scheduling problem. Parallel machine scheduling with time windows involves a large number of tasks, a limited number of available resources and many constraints, which make the problem difficult to solve.
The level of automation and the manufacturing capacity of a production workshop are largely related to the quality of the production plan. The generation of production plans relies on planning and scheduling algorithms. The scheduling algorithm optimises a feasible production plan further to improve production efficiency. It can be said that the scheduling algorithm is an important factor directly related to the production, operation and profitability of an enterprise. Therefore, the scheduling algorithm needs to fully consider its impact on production. In the parallel machine scheduling problem with time windows, the scheduling algorithm needs to ensure the number of products that are successfully processed on the one hand, and the completion time of the processing on the other. This requires that the
algorithm have not only a strong global search capability but also a strong exploitation (local search) capability. The problem has been proven to be NP-hard, so it is more appropriate to use heuristic or evolutionary algorithms to solve large-scale instances. Although an exact algorithm can find the optimal solution, when the problem scale increases, its search time grows rapidly, and the search easily falls into a local optimum from which it cannot escape. Current production workshop scheduling has been developing towards larger scales and greater complexity, and heuristics or evolutionary algorithms can deal with such complex problems more effectively. A major advantage of evolutionary algorithms is that the population-based search effectively avoids getting trapped in local optima of a complex solution space, so that a high-quality solution can be found from a global perspective.
The Bees Algorithm is a kind of evolutionary algorithm. It simulates how a bee population searches for honey in nature and finds high-quality food sources through a multi-stage search. When bees find a good source of nectar while searching, they record it and continue searching within a new search range, and some bees in the population search near a good source of nectar. This behaviour can be helpful when the search is in difficulty, but searching for new nectar sources contributes little at the beginning of the search, which easily leads to slow convergence when the Bees Algorithm solves complex planning or optimisation problems. We use an improved Bees Algorithm to solve the PMSP-TW problem. Compared with the traditional Bees Algorithm, the bee search is based on a brand-new population setting. Such a population setting allows the Bees Algorithm not only to have good global search performance but also to search for better solutions in a local area [6]. During the search process, the population dynamically adjusts its composition according to the search performance and adjusts the subsequent search strategy in time.
The main innovation of this research is a dual-population Bees Algorithm proposed to solve the PMSP-TW problem. The dual-population Bees Algorithm uses different populations for different stages of the bee search and dynamically adjusts the composition of the population according to the search effect to ensure search efficiency while considering diversity.
The remainder of this chapter is structured as follows. In the second part, we describe the problem and its mathematical model. In the third part, we introduce the optimisation process of the dual-population Bees Algorithm and the dynamic adjustment mechanism of the population in the algorithm. In the fourth part, we design multiple sets of instances to test the scheduling performance of the algorithm. In the last part, we summarise the conclusions of this chapter and directions for future work.
2 Model

2.1 Symbols and Variables

In the PMSP-TW problem, the symbols and variables involved are as follows.

J: Products to be processed; the quantity is |J|
M: Machines that can be used to process products; the quantity is |M|
TW_j: Available time windows of the jth machine; the number is |TW_j|
r_i: Release time of product i
d_i: Due time of product i
p_i: Process time of product i
w_ij: Waiting time of product i before processing on machine j
est_ijk: Earliest start time of the kth time window of product i on machine j
let_ijk: Latest end time of the kth time window of product i on machine j
Decision variables

x_ijk: Whether product i is processed in the kth time window of machine j; if processed, x_ijk = 1, otherwise x_ijk = 0
s_ij: Start time of the processing of product i on machine j
c_ij: Completion time of product i on machine j
2.2 Problem Description

In the PMSP-TW problem, there are a series of production lines, and the machines on each production line are identical and can process the same type of products. During operation, a machine can only process one product at a time, and each product corresponds to one job. To turn raw material into a product that can be sold and used, each product must be machined. Products have strict limits on production time: each product has a release time and a due time. The processing of a product cannot begin earlier than its release time, and processing is only valid after this time. Product processing must last for a certain period, called the process time, and processing must finish no later than the product's due time. For a machine, a certain amount of waiting is required before the processing of each product starts; this period is called the waiting time. While one product occupies the machine, other products cannot be processed at the same time; another product can only be processed once the production line is idle again and no other product is being processed. This also means that the time the machines can provide for product processing is extremely limited, and the available time is far from enough for all products to be processed. Compared with the general parallel machine scheduling problem, the complexity of the problem lies in the fact that it needs to meet the processing time requirements of the production process
and needs to consider the validity of each operation's processing time. Due to the limitation of machine capacity, the number of products that can be processed at the same time is limited, so not all products can be successfully produced.
2.3 Mathematical Model

This section gives the assumptions, objective function and constraints of the PMSP-TW mathematical model. The following assumptions are made for the problem addressed:
1. One machine can only process one product at a time;
2. Once the processing of a product starts, it cannot be interrupted;
3. The machines are independent of each other, and there is no dependency relationship between them;
4. No products need to be added or cancelled temporarily; all products are determined before production;
5. The equipment will not fail during the scheduling period;
6. The time window for product processing is fixed before production and processing and will not change.

Our goal in solving the PMSP-TW problem is to allow the production workshop to complete the production of the products to be processed as soon as possible. Therefore, the optimisation objective is to minimise the makespan, i.e. the processing completion time, which refers to the time when the last product in the batch of products selected for production is completely processed. The objective function is expressed as follows.

Objective function

\min \max_{i} c_i \quad (1)
The PMSP-TW problem mainly contains two types of constraints: the time requirement of product processing and the restriction of resource capacity. The time requirement must not only meet the time range requirements of product processing but also meet the requirements of the processing machine time window. The resource capacity limitation comes from the fact that a machine can only process one product at a certain time.

Constraints

r_i \cdot x_{ijk} \le s_{ij} \quad (2)

(s_{ij} + w_{ij} + p_i) \cdot x_{ijk} \le d_i \quad (3)

(s_{ij} + w_{ij} + p_i - c_{ij}) \cdot x_{ijk} = 0 \quad (4)

c_{ij} \le s_{i'j}, \quad i' \in \{J_{i+1}, \ldots, J_{|J|}\} \quad (5)

est_{ijk} \cdot x_{ijk} \le s_{ij} \quad (6)

s_{ij} \cdot x_{ijk} \le let_{ijk} \quad (7)

\sum_{j \in M} \sum_{k \in TW_j} x_{ijk} \le 1 \quad (8)

x_{ijk} \in \{0, 1\} \quad (9)
Equation (2) indicates that the product start time must be later than the earliest allowable start time of the product. Equation (3) indicates that the product completion time must be earlier than the latest allowable end time of the product. Equation (4) indicates the relationship between the start time and end time of product processing. Equation (5) indicates that the processing of one product must begin after the processing of another product is completed. Equation (6) indicates that the start of product processing must be within the time window. Equation (7) indicates that the product needs to be processed within the time window. Equation (8) indicates that each product can be processed at most once. Equation (9) indicates the value range of the decision variable.
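As an illustration of how Eqs. (2)-(8) act on a single machine, the Python sketch below greedily places a given job sequence into the machine's time windows and returns the machine's completion time; the makespan of Eq. (1) would then be the maximum of this value over all machines. The data structures and the greedy placement rule are assumptions for illustration, not the authors' implementation.

```python
def schedule_machine(sequence, release, due, proc, wait, windows):
    """Greedily place a job sequence on one machine; return (completion time, accepted jobs).

    sequence: job ids in processing order; release/due/proc/wait: dicts keyed by job id;
    windows: list of (est, let) tuples for this machine, in chronological order.
    A job is accepted only if some window allows a start satisfying Eqs. (2), (6), (7)
    and a finish satisfying Eq. (3); a rejected job is simply skipped (x_ijk = 0, Eq. (8)).
    """
    t, accepted = 0.0, []
    for j in sequence:
        for est, let in windows:
            start = max(t, release[j], est)          # Eqs. (2) and (6)
            finish = start + wait[j] + proc[j]       # Eq. (4): completion time c_ij
            if start <= let and finish <= due[j]:    # Eqs. (7) and (3)
                t = finish                           # Eq. (5): the next job starts afterwards
                accepted.append(j)
                break
        # if no window fits, the job is not produced in this plan
    return t, accepted


# Tiny example with one machine, two time windows and three candidate jobs
release = {1: 0, 2: 0, 3: 5}
due     = {1: 10, 2: 12, 3: 30}
proc    = {1: 4, 2: 6, 3: 8}
wait    = {1: 1, 2: 1, 3: 2}
print(schedule_machine([1, 2, 3], release, due, proc, wait, windows=[(0, 10), (12, 40)]))
```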
3 Dual-population Bees Algorithm

Given the PMSP-TW problem model of the previous section, a search algorithm is needed to find a high-quality solution. The dual-population Bees Algorithm is proposed to obtain a good production plan. This section introduces the overall process of the dual-population Bees Algorithm, the dual-population strategy and the other components of the algorithm.
3.1 Overall Flow of the Algorithm

The Bees Algorithm is a new type of evolutionary algorithm that imitates the process of a bee population searching for nectar [7]. The bee colony is divided into three types of bees: scout bees, forager bees, and elite bees [8]. Scout bees are responsible for finding new food sources, forager bees are responsible for further searching near
the already discovered food sources, and elite bees search for new, good food sources [9]. These three types of bees complete their search work, and the final solution, determined by the quality of the food sources, is taken as the production plan for the products. A problem with the traditional Bees Algorithm is that although it searches well, it does not converge quickly and its search efficiency is low [10]. How to ensure search quality while maintaining search efficiency is therefore an important consideration in our algorithm design.
A dual-population update strategy makes the population search more diverse. Some elite bees focus more on the search for new food sources, which helps to find high-quality food sources. The improvement in search efficiency gives the algorithm more opportunities to find a good solution and makes it easier for the algorithm to complete the search in a large search space. After each iteration of the population search, the search performance is used to decide which population each food source belongs to. High-quality food sources are kept in the search population, while lower-quality food sources are removed from the search population and added to the supplementary population. Such food sources do not participate in the scout bee stage but only in the forager bee stage. Food sources in the supplementary population can also re-enter the search population: this happens if their quality becomes good enough; otherwise, they remain in the supplementary population. The adjustment of population membership takes place only when the number of food sources in the search population is within a certain range; otherwise, the search population may become too small and the search efficiency greatly affected. The flow of the dual-population Bees Algorithm is shown in Fig. 1.
3.2 Coding

We use real numbers to represent products in individuals. The coded value represents the actual serial number of the product. For example, when the code "1" appears in an individual, it represents the first product; similarly, the code "3" means the product with serial number 3. The real-number coding method effectively guarantees the legitimacy of the product sequence: each code has a unique correspondence with an actual product, and there is no duplication or omission. Such an encoding method eliminates the need for repair operations triggered by illegal encodings.
Fig. 1 Flow chart of the dual-population Bees Algorithm
3.3 Scout Bee Stage

Scout bees are responsible for searching for high-quality food sources. The scout bee search is based on the search population; that is, further searches start from a series of good-quality food sources and try to find new and better sources of nectar. The scout bee search requires a sufficiently large search range, so we use a move that recombines two code fragments within an individual. The two fragments used for recombination must be of equal length to ensure that the new food source obtained after recombination is still a legal product sequence. Figure 2 demonstrates the search process of a scout bee: the two fragments in the figure, "4, 5" and "3, 6", are selected to complete the scout bee search process.
Fig. 2 Schematic diagram of scout bee search
After switching the positions of the two segments, priority is given to arranging production machines for products 3 and 6, while products 4 and 5 are scheduled later. The purpose of this scout bee search is to create obvious differences in the population in each iteration so that the population maintains a large search range and can find a good production plan.
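A small Python sketch of this scout bee move is given below: two equal-length, non-overlapping segments of the product sequence are chosen at random and exchanged. It is illustrative only; the segment length and the random choice of positions are assumptions.

```python
import random

def scout_bee_move(sequence, seg_len=2):
    """Swap two equal-length, non-overlapping segments of a product sequence."""
    n = len(sequence)
    if n < 2 * seg_len:
        return list(sequence)
    a = random.randint(0, n - 2 * seg_len)            # start of the first segment
    b = random.randint(a + seg_len, n - seg_len)      # start of the second segment
    s = list(sequence)
    s[a:a + seg_len], s[b:b + seg_len] = s[b:b + seg_len], s[a:a + seg_len]
    return s

# With the sequence of Fig. 2, one possible outcome swaps "4, 5" with "3, 6",
# giving [1, 2, 3, 6, 4, 5] so that products 3 and 6 are scheduled earlier.
print(scout_bee_move([1, 2, 4, 5, 3, 6], seg_len=2))
```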
3.4 Forager Bee Stage

Forager bees search near the nectar sources that have already been discovered. The forager bee search should concentrate on high-quality food sources while ensuring the diversity of food sources. Therefore, the food sources searched by the forager bees are obtained by screening the search population and the supplementary population under specific conditions. The change made in the vicinity of an existing food source cannot be too large, so only an exchange of two operations is used to find a position relatively close to the existing food source. We refer to the population selected from the search population and the supplementary population for the forager bee search as the newborn population. The generation of the newborn population guarantees a certain degree of randomness while taking the quality of each food source into account. First, the performance of each food source relative to all food sources is calculated, and its selection probability is obtained as its percentage of the total. When performing the selection, a random number between 0 and 1 is generated and compared with the probability value; if the probability value is exceeded, the food source is added to the newborn population and used for the forager bee search. After all individuals in the population have been judged for entry into the newborn population, the forager bee search begins. The principle of the forager bee search is to determine whether there is a production plan that better meets the scheduling goal, based on the food sources that have already been found.
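A possible sketch of this stage is shown below. The chapter's wording of the acceptance rule is ambiguous, so this Python sketch assumes that better food sources (smaller makespan) are more likely to enter the newborn population, with each source's share computed from reciprocal fitness; the two-operation exchange is the small neighbourhood move mentioned above. All names are illustrative.

```python
import random

def build_newborn_population(search_pop, supp_pop, fitness):
    """Screen the search and supplementary populations into the newborn population.

    fitness returns the makespan (smaller is better); each source's selection share
    is its fraction of the summed reciprocal fitness values.  Assumption: a source
    enters the newborn population when a uniform draw falls below its share.
    """
    candidates = search_pop + supp_pop
    weights = [1.0 / fitness(s) for s in candidates]
    total = sum(weights)
    newborn = [s for s, w in zip(candidates, weights) if random.random() <= w / total]
    return newborn or [min(candidates, key=fitness)]   # never return an empty population

def forager_bee_move(sequence):
    """Small neighbourhood move: exchange two positions of the product sequence."""
    s = list(sequence)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s
```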
3.5 Elite Bee Stage

Elite bees try to find good sources of nectar through random search. This kind of search is useful when the existing food sources have been searched repeatedly without yielding a better one: a new, randomly generated food source allows the algorithm to start a fresh search. If the new food source is better, it is kept.
The detection of a good new food source means that the number of food sources has increased beyond the population size limit. To keep the number of food sources used for searching constant, the worst food source in the population is abandoned. By replacing the worst food source in the population with the new food source, the number of food sources in the population is kept stable.
3.6 Population Dynamic Adjustment

When the three types of bees have completed the search process, the population to which each food source belongs needs to be updated. The search population only retains good-quality food sources; poorer food sources are removed from the search population and added to the supplementary population. The adjustment of the population to which a food source belongs is carried out within certain limits: when the number of food sources in the search population is too small, no adjustment occurs, so that the number of food sources in the search population remains stable. The evaluation of a food source's population membership is performed by comparing its fitness value with the average fitness value of all food sources. When the fitness of the current food source is worse than the average over all food sources, the food source no longer meets the condition for remaining in the search population and is moved to the supplementary population.
The important criteria for the dynamic population adjustment are thus the number of food sources in the search population and the average performance of all food sources. Only when both conditions are met is the adjustment performed according to the performance of the food sources; if either condition is not satisfied, no adjustment is made and the original population membership is maintained. When the two conditions are met at the same time, poorly performing food sources in the search population and well-performing food sources in the supplementary population exchange places. Such a change does not alter the population size settings of the DPBA algorithm.
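A minimal Python sketch of this adjustment is given below, under the assumptions stated in the comments (the minimum search-population size is a hypothetical threshold, and equal numbers of sources are swapped so the population sizes stay fixed, as the text requires).

```python
def adjust_populations(search_pop, supp_pop, fitness, min_search_size=5):
    """Swap poorly performing search-population sources with good supplementary ones.

    Sources are compared with the average fitness (makespan, smaller is better) over
    all sources; equal numbers are exchanged so that the population sizes stay fixed,
    and nothing is changed when the search population is already too small.
    min_search_size is an assumed threshold, not a value given in the chapter.
    """
    if len(search_pop) <= min_search_size:
        return search_pop, supp_pop
    all_sources = search_pop + supp_pop
    avg = sum(fitness(s) for s in all_sources) / len(all_sources)
    weak = sorted((s for s in search_pop if fitness(s) > avg), key=fitness, reverse=True)
    strong = sorted((s for s in supp_pop if fitness(s) <= avg), key=fitness)
    for w, s in zip(weak, strong):                # pair the worst with the best and swap membership
        search_pop[search_pop.index(w)] = s
        supp_pop[supp_pop.index(s)] = w
    return search_pop, supp_pop
```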
3.7 Fitness Function

The fitness function is a key factor in the evolution of the population and the basis for evaluating the food sources in it. The objective function of the PMSP-TW problem was given as Eq. (1) in the second part of this paper; this optimization goal is used directly as the fitness evaluation function.
4 Experiment

To verify the effectiveness of the proposed algorithm, we designed a series of instances with different numbers of products to be processed and different numbers of machines. To show the number of tasks and machines at a glance, each instance is denoted J/M, where J is the number of products to be processed and M is the number of machines used to process them. The higher the ratio of products to machines, the denser the task schedule and the more difficult the scheduling problem [11]. Three algorithms were chosen as benchmarks for comparison. The first is a neighborhood search algorithm (NSA), which finds high-quality solutions by improving the neighborhood structure. The second is a heuristic algorithm (HA1) that plans the products in sequence, sorted by the earliest allowable processing time of each product. The third is another heuristic algorithm (HA2) that plans the products in sequence, sorted by the processing time each product requires. Each algorithm was run 10 times on every instance, and the minimum (Min), average (Avg) and maximum (Max) of the planning results were used as indicators of algorithm performance. Together, these three indicators reflect the overall performance of an algorithm over multiple optimization runs [12, 13]: the minimum reflects the best result found, the average reflects the typical performance, and the maximum reflects the least satisfactory result.

The experimental results are shown in Table 1. The proposed DPBA performs well on all three indicators. The dual-population search makes the whole search process more targeted: some individuals are responsible for locating regions where high-quality solutions may exist as quickly as possible, while the others strive to find high-quality solutions within the regions already discovered. The two parts of the population cooperate, and the composition of each population is adjusted dynamically according to performance, so the algorithm can find satisfactory results while continuously exploring new regions of the solution space. Another reason the DPBA achieves good planning results is that the searches of the employed bees and forager bees have been adapted to the dual-population setting, so that each type of bee plays its intended role. Figure 3 shows the average of the search results. Among the comparison algorithms, the neighborhood search algorithm has good planning performance, but because it makes little use of the information obtained during the optimization process, its results vary noticeably between runs. The heuristic algorithms generate a production plan based on the attributes of the tasks.
Table 1 Planning results of algorithms

Instance    DPBA Min   DPBA Avg   DPBA Max   NSA Min   NSA Avg    NSA Max   HA1       HA2
100/4-1     9152       9159.5     9167       9152      9166.1     9177      9191      9186
100/4-2     9682       9682.2     9683       9682      9683       9684      9686      9714
100/4-3     9762       9809.5     9849       9784      9847.6     9908      10,572    10,530
150/6-1     12,027     12,027     12,027     12,027    12,028.5   12,032    12,070    12,132
150/6-2     12,046     12,063.3   12,089     12,049    12,113.6   12,227    12,493    12,486
150/6-3     12,064     12,070.8   12,082     12,075    12,078.8   12,084    12,103    12,103
200/8-1     12,760     12,760.7   12,762     12,763    12,764.9   12,771    12,781    12,806
200/8-2     14,190     14,198.9   14,203     14,202    14,205     14,209    14,217    14,212
200/8-3     13,619     13,623.7   13,627     13,622    13,628.9   13,632    13,646    13,646
300/10-1    18,507     18,529.7   18,540     18,521    18,539.7   18,547    18,555    18,550
300/10-2    17,888     17,888.2   17,889     17,889    17,890.6   17,894    17,897    17,935
300/10-3    18,091     18,095.1   18,098     18,092    18,100.2   18,104    18,130    18,109
Although they have a simple structure, the heuristic algorithms can quickly find a satisfactory production plan; in terms of planning speed, the DPBA and the NSA are far from matching them. The above experiments show that the dual-population Bees Algorithm proposed in this paper performs well on the PMSP-TW problem. Compared with the benchmark algorithms, the DPBA can find a higher-quality production plan. The good search results stem from the dual-population search method: the functions of the two populations complement each other, allowing the search to perform well both globally and locally.

Fig. 3 Average makespan of algorithms
5 Conclusion

This article has addressed a common scheduling problem in intelligent manufacturing: the scheduling of a production workshop with time windows. To solve it, we proposed a dual-population Bees Algorithm. The dual population replaces the original single population, allowing the two populations to undertake different search tasks and thus find good solutions more quickly. In the proposed algorithm, the convergent (search) population and the supplementary population replace the single population of the original Bees Algorithm, making it easier to balance exploration and exploitation. The quality evaluation mechanism for food sources guarantees the quality of the food sources in the search population, improves the effectiveness of the search and accelerates the convergence of the algorithm. When the quality of a food source no longer meets the requirements of the search population, its population membership is adjusted and it is moved to the supplementary population. The proposed algorithm and several comparison algorithms were tested on the designed instances; among the planning results, the DPBA obtained the best results, which demonstrates the effect of the improved Bees Algorithm.

In future research, using multiple clusters and parallel computing within the algorithm may further improve search efficiency. Parallel computing can exploit the multi-core capability of modern computers, and information exchange between parallel populations can provide useful experience to guide the search. The information update mechanism between populations is also worth further study; for example, during the bee search, the information obtained could be passed promptly to the bees responsible for other searches, which may improve search efficiency. Combining the Bees Algorithm with other algorithms to form a memetic algorithm is another option: the advantages of different algorithms can be combined through suitable mechanisms to improve solution quality while maintaining efficiency.

Acknowledgements Special thanks to Prof. D. T. Pham for giving us the opportunity to introduce our work. This work was supported by the National Natural Science Foundation of China (61773120), the Special Projects in Key Fields of Universities in Guangdong (2021ZDZX1019) and the Hunan Provincial Innovation Foundation for Postgraduate (CX20200585).
References

1. Fleszar K, Hindi KS (2018) Algorithms for the unrelated parallel machine scheduling problem with a resource constraint. Eur J Oper Res 271(3):839–848
2. Cheng CY, Chen TL, Wang LC, Chen YY (2013) A genetic algorithm for the multi-stage and parallel-machine scheduling problem with job splitting–a case study for the solar cell industry. Int J Prod Res 51(16):4755–4777
3. Oliveira D, Pessoa A (2020) An improved branch-cut-and-price algorithm for parallel machine scheduling problems. INFORMS J Comput 32(1):90–100
4. Li W, Li J, Zhang X, Chen Z (2014) Parallel-machine scheduling problem under the job rejection constraint. In: International workshop on frontiers in algorithmics, June 2014, pp 158–169
5. Qamhan AA, Alharkan IM (2019) A two-stage adaptive fruit fly optimization algorithm for unrelated parallel machine scheduling problem with additional resource constraints. Expert Syst Appl 128:81–83
6. Song Y, Song B, Huang Y, Xing L, Chen Y (2021) Solving large-scale relay satellite scheduling problem with a dynamic population firework algorithm: a case study. In: 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Dec 2021, pp 1–7
7. Pham DT, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimization problems. In: Intelligent production machines and systems, pp 454–459
8. Pham DT, Castellani M (2009) The bees algorithm: modelling foraging behaviour to solve continuous optimization problems. Proc Inst Mech Eng C J Mech Eng Sci 223(12):2919–2938
9. Pham DT, Afify A, Koc E (2007) Manufacturing cell formation using the Bees Algorithm. In: Innovative production machines and systems virtual conference, July 2007, Cardiff, UK
10. Yuce B, Mastrocinque E, Lambiase A, Packianather MS, Pham DT (2014) A multi-objective supply chain optimization using enhanced Bees Algorithm with adaptive neighbourhood search and site abandonment strategy. Swarm Evol Comput 18:71–82
11. Wang Q, Luo H, Xiong J, Song Y, Zhang Z (2019) Evolutionary algorithm for aerospace shell product digital production line scheduling problem. Symmetry 11(7):849
12. Rabadi G, Moraga RJ, Al-Salem A (2006) Heuristics for the unrelated parallel machine scheduling problem with setup times. J Intell Manuf 17(1):85–97
13. Kim J, Kim HJ (2021) Parallel machine scheduling with multiple processing alternatives and sequence-dependent setup times. Int J Prod Res 59(18):5438–5453
A Parallel Multi-indicator-Assisted Dynamic Bees Algorithm for Cloud-Edge Collaborative Manufacturing Task Scheduling Yulin Li, Cheng Peng, Yuanjun Laili, and Lin Zhang
1 Introduction

Cloud computing and edge computing have been widely applied in various fields. In the industrial Internet, the limited computing resources and energy of terminal devices are currently compensated for by cloud computing: cloud servers store large amounts of knowledge and can perform accurate analyses with large-scale centralized computational resources. Conversely, to meet industrial real-time requirements and reduce network and resource consumption, distributed edge servers are deployed [1], providing intelligent services at the edge of the network close to the data source. Cloud-edge collaboration therefore plays an essential role in improving productivity and efficiency. To use resources rationally and efficiently, task scheduling in cloud computing is crucial. The tasks scheduled in cloud-edge collaboration usually come from multiple terminal devices, need real-time or continuous transmission, and are strongly associated with other tasks. Because of these characteristics and the complex cloud-edge collaborative structure, large-scale task scheduling is difficult. In addition, task scheduling has been proved to be an NP-hard problem [2]. Many scheduling algorithms have been proposed for such problems. Population-based meta-heuristic algorithms are widely studied in task scheduling because they are
easy to implement and have a strong ability to solve such problems. Among them, the Bees Algorithm (BA) [3] is widely used, with applications in disassembly sequence planning (DSP) [4], combinatorial optimization problems (COP) [5] and scheduling in cloud computing [6–8], among others. However, the BA faces two problems in large-scale scheduling. First, large-scale tasks lead to a large search space and a heavy search burden, which affects both the search speed and the quality of the solutions. Second, the adaptability of the BA to the problem needs to be enhanced when extensive testing and human judgement are not available. With the increasing scale of tasks and the increasing number of users, designing task scheduling methods that ensure efficient and stable operation of cloud-edge collaboration has become a core issue. This paper focuses on solving the two problems mentioned above to obtain a reasonable and efficient scheduling scheme.

When meta-heuristic algorithms are used to solve large-scale problems, they usually divide the solution space. Saleh et al. [9] proposed an improved PSO for large-scale scheduling in cloud computing; to reduce the burden on the particles, the algorithm divides the tasks into several batches for serial scheduling, but scheduling remains time-consuming. A distributed PSO framework is proposed in [10], which uses a master-slave multi-group model: the master randomly divides the entire population into several equal-sized groups of slaves in every generation, and after co-evolution the updated groups are sent back to the master. This enhances population diversity and addresses the challenges of large-scale optimization, but the information transfer between master and slave nodes takes time in each iteration. Therefore, this paper uses independent processes for parallel computing, and the way tasks and resources are partitioned between processes must be studied so as to improve the efficiency of the algorithm while preserving solution quality. This paper proposes four sorting strategies to help partition the search space and reduce energy consumption.

Some researchers have improved the BA for scheduling discrete problems. For example, the search method of the BA is improved by variable neighborhood search (VNS) in [11], and [12] proposes a dynamic scheduling method using the BA with a shortest-job-first approach. Experiments and analysis in [13] show that the local search capability of the BA is affected by the size and shape of the neighborhood and by the allowed stagnation cycles. To enhance the adaptability of the BA search process to a problem, researchers usually use problem-related indicators to guide these parameters [14]. When there are multiple optimization objectives, how to design the indicators becomes a key question, and how to speed up the algorithm on different problems also deserves attention.

In summary, this paper proposes the multi-indicator-assisted dynamic Bees Algorithm (MIDBA) for large-scale task scheduling in cloud-edge collaboration, optimizing both the completion time and the energy consumption of tasks. The main contributions are as follows:

(1) For large-scale task scheduling in cloud-edge collaboration, a sorting method and a merging and fine adjustment method are proposed, based on a parallel framework, to improve scheduling efficiency.
Table 1 Abbreviations

Abbreviation   Description
MIDBA          Multi-indicator-assisted dynamic Bees Algorithm
BA             Bees Algorithm
DSP            Disassembly sequence planning
COP            Combinatorial optimization problems
VNS            Variable neighborhood search
GADPSO         Discrete particle swarm optimization algorithm with genetic algorithm operators
BAT-SAA        Bat-based service allocation algorithm
(2) The MIDBA modifies the operators of the Bees Algorithm according to multiple indicators to improve the performance of the algorithm.

The abbreviations and notations used in this paper are summarized in Tables 1 and 2.

Table 2 Notations

Notation                        Description
I_i                             Task i
S_j                             Server j
N, M                            Number of tasks, number of servers
R_cpu,i, R_mem,i                CPU and memory requirements of I_i
T^up_edge,i, T^up_cloud,i       Data upload time of I_i
p_i                             Predecessor task of I_i
d_i^up, d_i^down                Uploaded and downloaded data of I_i
S_cpu,j, S_mem,j                Available CPU and memory of S_j
C_cpu,j, C_mem,j                CPU and memory capacity of S_j
P_j^static                      Static energy power of S_j
P_j^dynamic                     Dynamic energy power of S_j
perform_j                       Performance of S_j
Õ_j = {õ_j1, õ_j2, ...}         List of devices from which S_j receives data
t_i                             Computing time of I_i under maximum computing performance in the current environment
T^down_edge,i                   Download time of I_i
T_wait,i                        Waiting time of I_i
T_comp,i                        Actual computing time of I_i
T_end,i                         Completion time of I_i
E_i                             Energy consumption generated by I_i
r_j(õ_jk)                       Data rate between the device and S_j
2 Modeling and Problem Formulation

In this paper, we establish a scheduling problem model to minimize the completion time and energy consumption in a cloud-edge collaboration environment. Suppose that N tasks are scheduled to M_C cloud servers and M_E edge servers, with simple dependency relationships between the tasks. A task I_i can be described by the tuple I_i = {R_cpu,i, R_mem,i, p_i, t_i, d_i^up, d_i^down}. In this model, resources are allocated according to the CPU and memory requirements R_cpu,i and R_mem,i. p_i represents the priority constraint between tasks: a task can only be executed after its predecessor task. d_i^up and d_i^down are used to calculate the communication time for uploading and downloading the task's data. t_i is the basis for calculating the execution time of the task on a server and fluctuates with the performance of the server. A server is represented by the tuple S_j = {S_cpu,j, S_mem,j, C_cpu,j, C_mem,j, P_j^static, P_j^dynamic, perform_j}. S_cpu,j and S_mem,j are used to judge whether the server meets the CPU and memory requirements of a task. P_j^static and P_j^dynamic are used to calculate the static and dynamic energy consumption, so that a scheduling scheme with low energy consumption can be sought. The computing performance perform_j captures the fact that different servers spend different amounts of time executing the same task, which is exploited to obtain a shorter schedule.

An edge server must receive data from multiple devices Õ_j = {õ_j1, õ_j2, ...}. The data rate between a device and the server is calculated by Eq. 1 according to [15], where w is the basic bandwidth, σ_0 is the background noise, q_{õ_jk} and q_l denote the transmission power of a device or server, and g_{õ_jk,j} and g_{l,j} represent the channel gain between the devices and the server.

r_j(\tilde{o}_{jk}) = w \log_2 \left( 1 + \frac{q_{\tilde{o}_{jk}} g_{\tilde{o}_{jk},j}}{\sigma_0 + \sum_{l \in \tilde{O}_j \setminus \tilde{o}_{jk}} q_l g_{l,j}} \right)   (1)

First, the completion time model is established; the completion time of a task is the sum of its waiting time and its execution time. Given the amount of data and the transmission bandwidth, the data upload time of a task to the edge servers or the cloud servers is calculated by Eqs. 2 and 3.
T^{up}_{edge,i} = \max_{l \in [1, m_i]} \frac{d^{up}_{il}}{r_j(\tilde{o}_{jl})}   (2)

T^{up}_{cloud,i} = \max_{l \in [1, m_i]} \frac{d^{up}_{il}}{r_k(\tilde{o}_{kl})} + \sum_{l \in [1, m_i]} \frac{d^{up}_{il}}{r_{cloud}}   (3)
If task I_i needs a trained model or a priori knowledge from the cloud, the download time of I_i is computed by Eq. 4.

T^{down}_{edge,i} = \frac{D^{down}_i}{r_{cloud}}   (4)
Hence, the waiting time of task I_i is the maximum of the transmission time and the completion time of its predecessor task, as shown in Eq. 5, where a = 1 means that task I_i is assigned to the edge and a = 0 means that it is assigned to the cloud.

T_{wait,i} = \max\left\{ a \cdot \max\left\{T^{up}_{edge,i}, T^{down}_{edge,i}\right\} + (1 - a) T^{up}_{cloud,i},\; T_{end,p_i} \right\}   (5)

Because servers differ in computing performance, the actual computing time of a task depends on the server it is placed on, as shown in Eq. 6.

T_{comp,i} = t_i / perform_j \quad (1 \le i \le N,\; 1 \le j \le M)   (6)
Thus, the completion time of task I_i is given by Eq. 7, and the energy consumption generated by I_i is calculated by Eq. 8, where α_1 and α_2 are two weight factors satisfying α_1, α_2 ∈ [0, 1] and α_1 + α_2 = 1.

T_{end,i} = T_{wait,i} + T_{comp,i}   (7)

E_i = P_{max,j}\left(\alpha_1 \frac{R_{cpu,i}}{C_{cpu,j}} + \alpha_2 \frac{R_{mem,i}}{C_{mem,j}}\right) T_{comp,i} + \sum_{l \in [1, m_i]} \frac{q_{\tilde{o}_{jl}} \, d^{up}_{il}}{r_j(\tilde{o}_{jl})}   (8)
The objective is to minimize f, calculated by Eq. 9, where the coefficients β_1 and β_2 satisfy β_1, β_2 ∈ [0, 1] and β_1 + β_2 = 1.

f = \beta_1 \max_{i \in [1,N]} T_{end,i} + \beta_2 \frac{\sum_{i \in [1,N]} E_i}{N}   (9)
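The following sketch evaluates this objective from per-task results. The 0.5/0.5 defaults for β_1 and β_2 are illustrative; the chapter does not state the weights it uses.

```python
def scheduling_objective(completion_times, energies, beta1=0.5, beta2=0.5):
    """Objective f of Eq. 9: weighted sum of the makespan and the mean
    energy consumption over all N tasks (beta1 + beta2 is assumed to be 1)."""
    n = len(completion_times)
    makespan = max(completion_times)    # max_i T_end,i
    mean_energy = sum(energies) / n     # (1/N) * sum_i E_i
    return beta1 * makespan + beta2 * mean_energy
```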
3 Algorithm

To deal with large-scale scheduling problems efficiently, the parallel MIDBA is proposed. This section introduces the details of the parallel MIDBA.
3.1 Framework

First, the parallel MIDBA provides four sorting strategies that are applied before the tasks and resources are divided, to improve the efficiency of the algorithm. Second, to enhance the adaptability of the algorithm to the problem, multiple indicators are used to guide the search of the BA. The algorithm framework is shown in Algorithm 1.
Algorithm 1: Framework of parallel MIDBA
Input: tasks I = {I_1, I_2, ..., I_N}, cloud servers Cloud = {C_1, C_2, ..., C_MC}, edge servers Edge = {E_1, E_2, ..., E_ME}, number of processes G
Procedure:
Step 1. I', Cloud', Edge' = Sorting(I, Cloud, Edge)
Step 2. Divide I', Cloud', Edge' and the population among the processes
Step 3. Parallel computing for all processes: multi-indicator-assisted dynamic Bees Algorithm (MIDBA)
Step 4. Merging and adjustment strategy
Output: Scheduling scheme
The mainframe of the parallel MIDBA corresponding to the above pseudocode is shown in Fig. 1; MPI is used for the parallelization. First, the tasks and resources are sorted according to the selected sorting strategy and allocated to multiple sub-processes for parallel solution. The sub-processes are independent of each other. In each sub-process, MIDBA is used, improving the self-adaptive ability of the search through multiple indicators. After the sub-processes have been solved in parallel to obtain the sub-schemes, the final overall scheduling scheme is obtained by the merging and adjustment strategy. Details are given in Sects. 3.2–3.6.
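The chapter parallelizes with MPI; the sketch below uses Python's standard multiprocessing pool instead, purely to illustrate the divide, solve-independently, merge structure of Algorithm 1. The function names and the round-robin placeholder solver are illustrative assumptions, not the authors' implementation.

```python
from multiprocessing import Pool

def chunk(seq, k):
    """Split a sequence into k nearly equal contiguous groups."""
    size = -(-len(seq) // k)  # ceiling division
    return [seq[i * size:(i + 1) * size] for i in range(k)]

def solve_subproblem(args):
    """Placeholder for running MIDBA on one group of tasks and servers:
    here each local task index is simply assigned round-robin to a local server index."""
    task_group, server_group = args
    return {t: t % max(len(server_group), 1) for t in range(len(task_group))}

def parallel_framework(tasks, servers, n_processes=4):
    """Split the (already sorted) inputs, solve each group in its own process,
    then hand the sub-schemes to the merging step of Sect. 3.6."""
    task_groups = chunk(tasks, n_processes)        # Steps 1-2 (sorting omitted here)
    server_groups = chunk(servers, n_processes)
    with Pool(n_processes) as pool:                # Step 3: independent processes
        sub_schemes = pool.map(solve_subproblem, list(zip(task_groups, server_groups)))
    return sub_schemes

if __name__ == "__main__":
    print(parallel_framework(list(range(10)), list(range(6)), n_processes=2))
```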
3.2 Individual Encoding

Before introducing the proposed algorithm, we explain the individual encoding method. This paper adopts the indirect encoding shown in Fig. 2. When there are N tasks to be scheduled, the individual length is N and the coding bits take values in [0, N − 1]; they indicate the order in which the tasks are allocated to the servers during the subsequent mapping to a scheduling scheme. Figure 2 shows an example in which the task sequence is task 1 followed by task 9, task 6, task 4, and so on. The coding bits cover all integer values in [0, N − 1].
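A sketch of how such a permutation individual could be decoded into a schedule is given below. The first-fit placement rule and the `fits` feasibility check are assumptions for illustration; the chapter does not fix the placement rule in this section.

```python
def decode_individual(individual, tasks, servers, fits):
    """Map a permutation individual to a schedule (illustrative decoding).

    `individual` is a permutation of 0..N-1 giving the order in which tasks
    are considered; each task is placed on the first server for which the
    user-supplied check fits(task, server) holds.
    """
    schedule = {}
    for task_index in individual:
        for server_index, server in enumerate(servers):
            if fits(tasks[task_index], server):
                schedule[task_index] = server_index
                break
    return schedule
```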
3.3 Pre-allocation Mechanism

The pre-allocation mechanism for resources is designed by considering the proportion of data uploaded and downloaded by the tasks and the association relationships between tasks. It is expected to improve the load balance between the cloud and the edge and to reduce the communication time between devices. The process is described in Algorithm 2. First, the proportion of uploaded to downloaded data is calculated for each task, and the tasks are sorted according to this proportion. Then, the tasks at the head of the task list are assigned to the cloud and those at the tail of the list to the edge.
Fig. 1 The mainframe of parallel MIDBA
Fig. 2 Discrete indirect encoding
Algorithm 2: Pre-allocation mechanism
Input: tasks I = {I_1, I_2, ..., I_N}, cloud servers Cloud = {C_1, C_2, ..., C_MC}, edge servers Edge = {E_1, E_2, ..., E_ME}, number of processes G
Procedure:
Step 1. Sort I according to the ratio p_i of uploaded data to downloaded data
Step 2. For i = 1 to (N + 1)/2:
    Check whether there is a cloud server that meets the requirements of I_i
    Check whether there is an edge server that meets the requirements of I_{N−i}
End For
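A minimal sketch of this pre-allocation is given below. Feasibility checks are omitted, and the exact definition of the sorting ratio is an assumption based on Step 1 of Algorithm 2.

```python
def pre_allocate(tasks, uploaded, downloaded):
    """Sketch of Algorithm 2: sort tasks by the ratio of uploaded to
    downloaded data, then earmark the head of the sorted list for the cloud
    and the tail for the edge."""
    ratio = {t: uploaded[t] / max(downloaded[t], 1e-9) for t in tasks}
    ordered = sorted(tasks, key=lambda t: ratio[t], reverse=True)

    cloud_side, edge_side = [], []
    n = len(ordered)
    for i in range((n + 1) // 2):
        cloud_side.append(ordered[i])              # head of the list -> cloud
        if n - 1 - i > i:
            edge_side.append(ordered[n - 1 - i])   # tail of the list -> edge
    return cloud_side, edge_side
```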
3.4 Sorting Strategy

To improve the scheduling effect, this paper proposes four sorting strategies for the computing tasks and resources in cloud-edge collaboration. Because of the dependency relationships between tasks considered in this paper, the tasks are divided into multiple sets according to those dependencies. The tasks in the same set can be regarded as a task flow, and tasks belonging to different sets are independent of each other. With the task flow as the basic unit, tasks are sorted and divided among the processes, ensuring that tasks belonging to the same flow are assigned to the same process, which facilitates the calculation of the task completion time in the objective function. Figure 3 shows a task sequence containing 10 tasks and a server sequence containing 9 servers being sorted and then divided into three groups; the 10 tasks are first divided into four task sets, and the task sets are then sorted and grouped.

(1) Random sorting strategy: The task flow sequence and the server sequence are shuffled and reordered directly. After the reordering, the tasks and servers corresponding to each task flow are evenly distributed to the processes.

(2) Sorting strategy based on complementary: This strategy exploits the resource requirements of the tasks and the characteristics of the remaining resources in the servers. If a task is regarded as a two-dimensional object (taking the memory and CPU requirements as the two dimensions), then the smaller the angle between the vector representing the task and the vector representing the remaining resources, that is, the more complementary the requirements, the higher the resource utilization that can be achieved [16].

(3) Sorting strategy based on similarity: This strategy considers the relationship between the resources already occupied in a server and the resource requirements of a task. In contrast to the complementary-based strategy, it tends to allocate tasks to machines whose occupied computing resources are similar to the tasks' resource requirements.
Fig. 3 Sorting and grouping of tasks and resources
(4) Sorting strategy based on energy overhead of resource: This strategy tends to schedule tasks to servers with a lower energy consumption cost; on such a server, a task can use the same resources for its computation with lower energy consumption. The energy consumption cost of a server's resources as defined in this paper is given by Eq. 10. The cloud servers are arranged in ascending order of the energy overhead of their resources, and the tasks are arranged in descending order of their resource demand, so that the industrial computing tasks can be completed with the least energy consumption; they are then grouped in order.

overhead_{E/C,j} = \frac{P_j^{static} + P_j^{dynamic}}{\alpha_1 C_{cpu,j} + \alpha_2 C_{mem,j}}   (10)
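A sketch of this strategy is shown below. The dictionary keys and the choice of "CPU plus memory requirement" as the task demand measure are illustrative assumptions, as are the 0.5/0.5 weight defaults.

```python
def energy_overhead(server, alpha1=0.5, alpha2=0.5):
    """Energy overhead of a server's resources, Eq. 10."""
    return (server["p_static"] + server["p_dynamic"]) / (
        alpha1 * server["c_cpu"] + alpha2 * server["c_mem"])

def sort_by_energy_overhead(servers, tasks, alpha1=0.5, alpha2=0.5):
    """Servers in ascending order of energy overhead, tasks in descending
    order of resource demand, as described for strategy (4)."""
    servers_sorted = sorted(servers, key=lambda s: energy_overhead(s, alpha1, alpha2))
    tasks_sorted = sorted(tasks, key=lambda t: t["r_cpu"] + t["r_mem"], reverse=True)
    return servers_sorted, tasks_sorted
```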
3.5 Improved Bees Algorithm

To increase the adaptability of the algorithm to different problems, the MIDBA is designed with three improvements over the BA.

(1) Search operators: Because a discrete encoding is adopted, the Swap Sequence [17] is the key element in the design of the search operators. A Swap Operator SO(i_1, i_2) indicates that bits i_1 and i_2 of the solution are exchanged. A Swap Sequence SS comprises one or more Swap Operators SO(i_1, i_2) and can express the operations that transform one solution into another. The order of the Swap Operators in SS represents the order in which the solution is manipulated, so it matters.

(2) Multi-indicator-assisted search method: A multi-indicator-assisted search method is designed for the onlooker bee process. Lists of the search directions that produce the greatest changes in completion time and in energy consumption are established. In the onlooker phase, the two lists are combined with the global optimal solution to generate new solutions, and their participation changes over the iterations. This reduces the possibility of falling into local optima.

x^{t+1}_{i,time} = \left(p_1 * V_{deltatime} + p_2 * V\left(x_{gbest} - x^t_i\right)_{p_i}\right) \otimes x^t_i   (11)

x^{t+1}_{i,energy} = \left(p_1 * V_{deltaenergy} + p_2 * V\left(x_{gbest} - x^t_i\right)_{p_i}\right) \otimes x^t_i   (12)
V_{deltatime} and V_{deltaenergy} are randomly selected from the lists of search directions with the greatest changes in task completion time or energy consumption. V(x_{gbest} − x^t_i)_{p_i} represents the direction from the current solution to the global optimal solution, used with probability p_i; p_i corresponds to the current individual and decreases with the number of times the individual is not updated in
the scout bee procedure. p_1 = 0.4 * ((MAXGEN − gen)/MAXGEN)^2 and p_2 = 1 − p_1 represent the participation degree of the direction sequences V when generating the new solution. The growth of p_2 from small to large helps avoid converging to local optima in the early iterations.

(3) Archive-assisted fast scheduling method: When the number of times the solution has been regenerated reaches H, the archive and the global optimal individual are used to generate a pool of candidate solutions, and the best solution is selected directly as the final sub-scheme.
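Since Eqs. 11-12 operate on permutations rather than real vectors, their building blocks are swap operators. The sketch below, assuming permutations with unique elements, shows how a Swap Sequence between two solutions can be built and applied, and how the participation weights evolve with the generation counter; it is not the authors' implementation.

```python
def swap_sequence(current, target):
    """Swap Sequence that transforms `current` into `target`.

    Each Swap Operator (i, j) exchanges positions i and j; applying the
    operators in order turns one permutation into the other, which is how a
    direction such as V(x_gbest - x_i^t) can be represented."""
    work = list(current)
    sequence = []
    for i in range(len(work)):
        if work[i] != target[i]:
            j = work.index(target[i])
            sequence.append((i, j))
            work[i], work[j] = work[j], work[i]
    return sequence

def apply_swaps(solution, sequence):
    """Apply a Swap Sequence to a solution (the role of the '⊗' operation)."""
    result = list(solution)
    for i, j in sequence:
        result[i], result[j] = result[j], result[i]
    return result

def participation(gen, max_gen):
    """p1 and p2 schedules used when composing the direction sequences."""
    p1 = 0.4 * ((max_gen - gen) / max_gen) ** 2
    return p1, 1.0 - p1
```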
3.6 Merging and Adjustment Strategy

After the sub scheduling problems have been solved, merging and fine adjustment strategies are applied in the parallel MIDBA. The merging mechanism is shown in Fig. 4. First, a sub-scheme is generated in each process, giving the index of the server to which each task is assigned. According to the number of processes, the task and server indices are then adjusted to obtain the overall scheduling scheme: the index of each server and task in a group is increased by the starting index assigned to that group during sorting and grouping. For example, tasks 3–5 and servers 3–5 are assigned to the second group in Fig. 4, and within this group the task and server indices start from 0; therefore, after the tasks in each group have been allocated to servers, the corresponding task and server indices in the second group are increased by 3, and the final scheduling scheme is obtained after merging with the other groups. Then, the maximum completion time and the average energy consumption of all tasks are calculated.

The fine adjustment strategies are applied when servers are overloaded. Using the ideas of first fit and best fit, the tasks on an overloaded server are moved to the first server that meets their requirements or to the server that gives the best fitness value.
Fig. 4 Merging mechanism
In addition, this paper also designs an energy-based heuristic that reassigns the tasks on an overloaded server to a server with low energy consumption.
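The index-offset merging and a first-fit style adjustment can be sketched as follows. The data structures (sub-schemes as dictionaries, a user-supplied `fits` check) are illustrative assumptions.

```python
def merge_sub_schemes(sub_schemes, task_offsets, server_offsets):
    """Merge per-group schedules into one global scheme (cf. Fig. 4).

    Each sub-scheme maps a local task index to a local server index; the
    starting index of the group is added back to both."""
    merged = {}
    for scheme, t_off, s_off in zip(sub_schemes, task_offsets, server_offsets):
        for local_task, local_server in scheme.items():
            merged[local_task + t_off] = local_server + s_off
    return merged

def first_fit_adjust(schedule, overloaded_server, candidate_servers, fits):
    """Move tasks off an overloaded server to the first feasible candidate."""
    for task, server in list(schedule.items()):
        if server != overloaded_server:
            continue
        for candidate in candidate_servers:
            if fits(task, candidate):
                schedule[task] = candidate
                break
    return schedule
```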
4 Experiments

The task sequences come from the Google trace 2019 data set. The data for the cloud and edge servers are taken from SPECpower_ssj2008. The experiments are divided into three parts: Sects. 4.1–4.2 determine, by comparison, the most suitable combination of sorting method and adjustment strategy for different scales; in Sect. 4.3, the resulting combination is compared with state-of-the-art algorithms and the performance of the proposed parallel MIDBA is analyzed. In the following experiments, the number of iterations is 1000 and each algorithm is run 20 times to obtain the average. Three cases (number of tasks, number of cloud servers, number of edge servers) were set as follows: Case 1 (6000, 3000, 300), Case 2 (10,000, 5000, 500) and Case 3 (20,000, 10,000, 1000). Too many parallel processes degrade solution quality, so the number of processes is set to 8.
4.1 Comparison of Sorting Strategies

To evaluate the applicability of the four sorting strategies described in Sect. 3.4, this section compares them at the three scales. Figure 5 and Table 3 show the performance of the four sorting strategies in the different cases. Across the three scales, the sorting strategy based on energy overhead of resource shows its advantages: it contributes to finding solutions with lower energy consumption, although it is not prominent in the optimization of completion time, as displayed in Fig. 5. From the variances in Table 3, its performance is stable in the two large-scale cases, but its task completion time is not good enough. The random sorting strategy has the fastest solution speed, performs better in Case 2 and yields a small variance in Case 1. In the two large-scale cases, the sorting strategy based on similarity gives poor objective values.
4.2 Comparison of Adjustment Strategies

The merging and adjustment strategies are applied after the parallel solution to obtain a better scheduling scheme. This section compares first fit, best fit and the energy-based heuristic. As can be seen from Table 4 and Fig. 6, the first fit strategy has an advantage in solving speed but adjusts the solution poorly.
Fig. 5 a Average objective value in case 1, b Average objective value in case 2, c Average objective value in case 3
The energy-based heuristic reduces the energy consumption in the three cases by 37.2%, 13.4% and 15%, respectively, and reduces the total objective function value by 31.8%, 11.2% and 15.8%.
Table 3 Variance and running time of algorithms with different sorting strategies

Cases    Sorting strategy              Running time (s)   Variance of objective value   Variance of makespan   Variance of average energy consumption
Case 1   Energy overhead of resource   34.5014            564.9128                      2272.7177              0.9571
         Complementary                 59.7427            380.7260                      1467.1398              21.3651
         Similarity                    29.2241            580.3180                      2150.3476              18.7483
         Random                        25.2697            183.9454                      693.7497               29.2074
Case 2   Energy overhead of resource   107.5329           81.4764                       325.8529               1.1035
         Complementary                 162.2553           610.3603                      2326.2394              6.1745
         Similarity                    80.0899            172.6055                      655.9602               20.6439
         Random                        64.2957            91.3051                       459.4934               24.2383
Case 3   Energy overhead of resource   466.3481           82.6044                       311.1899               3.0019
         Complementary                 636.2903           521.8814                      2117.8878              6.7636
         Similarity                    322.8276           1397.1498                     5618.8785              4.7561
         Random                        263.6091           93.3058                       350.2884               30.2796
Table 4 Average running time of fine adjustment strategies (ms)
Case 1
First fit
Best fit
1.529
11.551
Energy-based 75.275
Case 2
2.929
52.12
262.207
Case 3
4.812
84.79
624.213
Fig. 6 Average energy consumption value with different merging and adjustment strategies
4.3 Algorithm Performance Comparison

Based on the comparison experiments above, this section uses the combination of the sorting strategy based on energy overhead of resource and the energy-based heuristic adjustment strategy. The results are compared with those of two state-of-the-art algorithms, GADPSO [18] and BAT-SAA [19]. GADPSO is a population-based heuristic method proposed in 2019; it mainly addresses how to reasonably arrange the data
locations of scientific workflows and how to optimize the data transmission time between different data centers in the cloud-edge collaborative environment. BAT-SAA is a meta-heuristic service allocation algorithm proposed by Mishra in 2018 for the service request allocation problem in fog computing environments. The acoustic loudness attenuation coefficient is set to 0.9 following Yang's suggestion [20], and the pulse frequency enhancement coefficient is also set to 0.9. To make the running times of the algorithms comparable, the two algorithms are parallelized with random sorting. The average, best and worst objective values of the three algorithms are shown in Table 5. In terms of solution quality, MIDBA is better than GADPSO and BAT-SAA: it finds solutions with lower objective function values under the same conditions in all three cases. The parallel BAT-SAA algorithm has the shortest running time in the three cases, but it cannot guarantee the quality of the solution. MIDBA obtains scheduling schemes with lower energy consumption, but its ability to balance task completion time and energy consumption still needs to be improved.
5 Conclusion

This paper constructs an industrial computing task scheduling model for the cloud-edge collaboration of the IIoT. A parallel MIDBA algorithm is proposed, which improves the BA within a parallel framework. Sorting, merging and fine adjustment strategies are designed to support the parallel structure and improve scheduling efficiency. Starting from the optimization of task allocation to the cloud or the edge, a heuristic optimization method reduces the energy consumption. Compared with recent algorithms, the parallel MIDBA obtains schemes with lower task completion time and energy consumption while reducing the running time. In future research, we will further study the relationship between computing and manufacturing tasks under cloud-edge collaboration and build a deeper understanding of real industrial intelligence scenarios. We will also optimize the structure of MIDBA, including designing a local search strategy to further speed up convergence and improve the performance of the algorithm.
Table 5 Average objective value and running time of algorithms

Cases    Algorithm          Objective value   Makespan (s)   Average energy consumption (kJ)   Running time (s)
Case 1   MIDBA    best      1112.650          1933.810       275.322                           25.502
                  avg       1138.214          1990.578       285.849                           26.240
                  worst     1176.260          2059.270       293.261                           26.781
         GADPSO   best      1167.070          2042.720       290.477                           25.658
                  avg       1179.533          2064.440       294.623                           26.477
                  worst     1196.680          2096.890       300.099                           26.854
         BAT-SAA  best      1288.430          2233.890       342.975                           23.626
                  avg       1296.695          2248.493       344.899                           24.905
                  worst     1302.420          2258.110       346.726                           25.975
Case 2   MIDBA    best      1062.910          1895.690       226.972                           63.006
                  avg       1086.750          1940.517       232.980                           67.765
                  worst     1122.150          2005.950       239.580                           71.854
         GADPSO   best      1123.160          1998.170       238.392                           64.348
                  avg       1130.610          2017.072       244.154                           67.987
                  worst     1138.540          2038.690       250.888                           71.566
         BAT-SAA  best      1208.450          2137.530       277.342                           59.533
                  avg       1238.215          2194.813       281.616                           61.414
                  worst     1249.900          2216.720       286.688                           62.406
Case 3   MIDBA    best      1304.960          1968.740       633.791                           261.594
                  avg       1336.441          2034.364       638.520                           271.974
                  worst     1377.860          2116.840       644.259                           284.652
         GADPSO   best      1323.330          1994.820       632.555                           263.617
                  avg       1332.692          2020.228       645.156                           272.232
                  worst     1344.740          2047.560       654.306                           276.991
         BAT-SAA  best      1394.160          2115.400       659.997                           239.527
                  avg       1426.755          2181.317       672.193                           245.515
                  worst     1451.230          2225.470       678.871                           251.991
Acknowledgements This work is supported by the National Key Research and Development Program of China (Grant No. 2018YFB1700603).
References

1. Chen Y, Lin Y, Zheng Z et al (2021) Preference-aware edge server placement in the internet of things. IEEE Internet Things J
2. Xu J, Tang J, Kwiat K, Zhang W et al (2013) Enhancing survivability in virtualized data centers: a service-aware approach. IEEE J Selected Areas Commun 2610–2619
3. Pham DT, Ghanbarzadeh A, Koc E et al (2006) The Bees algorithm—a novel tool for complex optimisation problems. In: Intelligent production machines and systems
4. Xu W, Tang Q, Liu J et al (2020) Disassembly sequence planning using discrete Bees algorithm for human-robot collaboration in remanufacturing. In: Robotics and computer-integrated manufacturing
5. Ismail AH, Hartono N, Zeybek S et al (2020) Using the Bees algorithm to solve combinatorial optimisation problems for TSPLIB. In: IOP conference series: materials science and engineering. IOP Publishing
6. Yuan H, Zhou M, Liu Q, Abusorrah A (2020) Fine-grained resource provisioning and task scheduling for heterogeneous applications in distributed green clouds. IEEE/CAA J Automatica Sinica 1380–1393
7. Jeyalaksshmi S, Smiles JA, Akila D et al (2021) Energy-efficient load balancing technique to optimize average response time and data center processing time in cloud computing environment. J Phys: Conf Ser (IOP Publishing)
8. Keshavarznejad M, Rezvani MH, Adabi S (2021) Delay-aware optimization of energy consumption for task offloading in fog environments using metaheuristic algorithms. In: Cluster computing, pp 1825–1853
9. Saleh H, Nashaat H, Saber W, Harb HM (2019) IPSO task scheduling algorithm for large scale data in cloud computing environment. In: IEEE Access, pp 5412–5420
10. Wang ZJ, Zhan ZH, Yu WJ et al (2020) Dynamic group learning distributed particle swarm optimization for large-scale optimization and its application in cloud workflow scheduling. IEEE Trans Cybernetics 2715–2729
11. Liu J, Zhou Z, Pham DT et al (2020) Collaborative optimization of robotic disassembly sequence planning and robotic disassembly line balancing problem using improved discrete Bees algorithm in remanufacturing. In: Robotics and computer-integrated manufacturing
12. Singh H, Marwaha C (2021) Optimization of job scheduling with dynamic bees approach. Sustainable communication networks and application. Springer, Singapore, pp 141–158
13. Baronti L, Castellani M, Pham DT (2020) An analysis of the search mechanisms of the bees algorithm. In: Swarm and evolutionary computation
14. Mansoor Hussain D, Surendran D (2020) Content based image retrieval using bees algorithm and simulated annealing approach in medical big data applications. In: Multimedia tools and applications, pp 3683–3698
15. Abdel-Basset M, Mohamed M, Chang V (2018) NMCDA: a framework for evaluating cloud computing services. In: Future generation computer systems, pp 12–29
16. Zhou Z, Wang H, Lou P (2010) Group technology. In: Manufacturing intelligence for industrial engineering: methods for system self-organization, learning, and adaptation, pp 189–213
17. Wang K-P, Huang L, Zhou C-G et al (2003) Particle swarm optimization for traveling salesman problem. In: Proceedings of the 2003 international conference on machine learning and cybernetics, pp 1583–1585
18. Lin B, Zhu F, Zhang J et al (2019) A time-driven data placement strategy for a scientific workflow combining edge computing and cloud computing. In: IEEE transactions on industrial informatics, pp 4254–4265
19. Mishra SK, Puthal D, Rodrigues JJPC et al (2018) Sustainable service allocation using a metaheuristic technique in a fog server for industrial applications. In: IEEE transactions on industrial informatics, pp 4497–4506
20. Yang XS (2010) A new metaheuristic bat-inspired algorithm. Nature inspired cooperative strategies for optimization. Springer, Heidelberg, pp 65–74
Logistics and Supply Chain Optimisation
Bees Traplining Metaphors for the Vehicle Routing Problem Using a Decomposition Approach A. H. Ismail and D. T. Pham
1 Introduction

Metaheuristics are effective methods for solving difficult combinatorial optimisation problems [1]. They are considered more practical than other methods because of their ability to produce near-optimal solutions, perform fast computations, be easily modified, and deal with large-dimension problems. They typically produce higher-quality solutions than classical heuristics [2]. However, those advantages are in direct conflict with the effort required to utilise a metaheuristic. Preparing a metaheuristic is a time-consuming process: all metaheuristic algorithms, including the Bees Algorithm (BA), have parameters that must be tuned to obtain the optimal solution, and the preparation becomes more complicated as the number of parameters to set increases [3]. Two attempts have been made to address this issue with the number of parameters in the case of the BA. Pham and Darwish proposed a fuzzy scheme for automatically generating initial parameters using two inputs, the number of scout bees (n) and the maximum number of worker bees (nw) [4]. The fuzzy greedy selection algorithm can dynamically recruit worker bees to forage in the chosen patch: the stronger the potential of a foraging site, the more bees will be recruited for it. According to the fitness assessment and patch ranking, the recruitment mechanism assigns the appropriate number of worker bees to each site. Ismail described a different approach
to BA simplification involving the combination of two distinct search processes, exploitation and exploration, which was inspired by the bees traplining behaviour [5]. The merging of search processes was generated using a random triangular distribution of sizes for each patch, where exploitation and exploration occur simultaneously. However, the work presented in [5] was limited to continuous optimisation problems. Due to the different properties of discrete space, this merging of exploitation and exploration could not be directly applied to the combinatorial version of the BA. Some adjustment was required to apply the merging technique to the combinatorial case. In this research, we applied the concurrent search processes of traplining to discrete space and examined the effect of parameter reduction on the vehicle routing problem (VRP) as a case study. The decomposition approach [6, 7] whereby customers were first grouped into clusters was used because it had advantages over other methods, including requiring less modification to the algorithm, enabling parallelisation and suiting both the continuous and combinatorial BA.
2 Bees Algorithm

2.1 Basic Version

The foraging behaviour of honey bees motivated the creation of this algorithm in 2005 [8, 9]. The Bees Algorithm is sometimes mistaken for, but is different from, the Artificial Bee Colony (ABC) algorithm proposed by Karaboga in the same year [10]. The nature-inspired Bees Algorithm mimics bee behaviour in scouting for food, distributing knowledge about food sources to the colony's workers, thoroughly exploiting regions with an abundance of food, and re-exploring the search space for other promising food sources (Fig. 1). To apply this simple algorithm, users need to specify six parameters in addition to a stopping criterion. The parameters are the numbers of scout bees (n), selected flower patches (m), elite (top) patches (e), worker bees on the elite patches (nep), and worker bees on other selected patches (nsp), and the size of the search neighbourhood (ngh). There are two additional parameters in the standard version of the BA that define the strategy for abandoning arid non-productive sites: neighbourhood shrinking rate and stagnation limit. The stopping criterion could be the maximum number of iterations or function evaluations, or the error threshold. The algorithm's colony size is determined by the total number of scout bees and worker bees in all patches throughout the foraging session. The colony size can be calculated using Eq. 1.

colony size = (e · nep) + ((m − e) · nsp) + (n − m)
(1)
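A minimal sketch of this calculation is given below; the parameter values in the call are those used later for the basic BA in Table 1 of this chapter.

```python
def colony_size(n, m, e, nep, nsp):
    """Total number of bees in one foraging cycle of the basic BA (Eq. 1)."""
    return (e * nep) + ((m - e) * nsp) + (n - m)

# n = 5, m = 3, e = 1, nep = 9, nsp = 5 gives a colony of 21 bees.
print(colony_size(n=5, m=3, e=1, nep=9, nsp=5))  # -> 21
```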
Fig. 1 Flowchart of the Basic and Standard (*) Bees Algorithm with a uniformly random distribution search mechanism
The Bees Algorithm begins by uniformly randomly distributing the n scout bees at points throughout the search space (Eq. 2), with x_min and x_max as the lower and upper bounds. Then, it evaluates the fitness of the scout bees' visited spots. The waggle dance performed by a scout bee to recruit workers reflects the quality of the spot (also called the patch point) it visited.

patch point = U[x_min, x_max]   (2)
In Eq. 2, U[xmin , xmax ] denotes a uniform random variable in the interval [xmin , xmax ]. By attracting more bees (nep) to the e regions than to the m-e regions (nsp), the foraging process more intensively exploits the neighbouring area of the more promising patch points (see Eqs. 3–6). Finally, the bee with the highest fitness will be chosen to shape the next patch position (visitation), and the unselected bees will be randomly distributed across the search space, re-exploring or scouting for new
patches using a method similar to that adopted to generate the initial solutions or patch points.

foraging point_{k,w_k} = (patch point_k − ngh) + 2 · ngh · U[x_min, x_max] = patch point_k ± U[0, ngh] · (x_max − x_min)   (3)
∀k = {1, . . . , e} : wk = nep
(4)
∀k = e + 1, . . . , m : wk = nsp
(5)
∀k > m : wk = 1; patch point = U[xmin , xmax ]
(6)
2.2 Traplining Metaphor I: Parameter Reduction (Two-Parameter BA)

This version uses only two parameters, n and nep, to run the algorithm [5]. The idea was inspired by recent observations of bees' traplining foraging behaviour. Researchers [11–13] have discovered that bees conduct exploration when exploiting a site's resources. Parameter reduction was accomplished by combining exploration and exploitation. The main difference between this and the basic BA lies in the recruitment and search mechanism. As previously seen, in the recruitment process of the basic BA, there are two separate groups of foragers: elite patch workers (nep) and other selected patch workers (nsp). In the two-parameter version, called BA2, the recruited workers are generated once by processing the information on the quality of a patch defined by its rank relative to other patches. The number of workers in each patch is determined by using a linear function of workers among the n patches according to their rank k (Eq. 7). In Eq. 7, w_max is equal to nep and w_min to 1.

w_k = w_max + (k − 1) · (w_min − w_max) / (n − 1)   (7)
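A small sketch of this linear allocation follows. Rounding the result to whole bees is an assumption, since Eq. 7 itself is real-valued.

```python
def workers_per_patch(n, nep):
    """Worker bees recruited for each patch rank k = 1..n (Eq. 7),
    decreasing linearly from w_max = nep (best patch) to w_min = 1 (worst)."""
    w_max, w_min = nep, 1
    return [round(w_max + (k - 1) * (w_min - w_max) / (n - 1)) for k in range(1, n + 1)]

print(workers_per_patch(5, 9))  # -> [9, 7, 5, 3, 1], matching the BA2 row of Table 1
```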
As already mentioned, the basic BA divides the search mechanism into two separate search processes: exploitation and exploration. ngh represents the maximum radius of the neighbourhood search around the patch point (usually 0.1 ≤ ngh ≤ 0.5). The exploitative search neighbourhood in the basic BA is a set of points extending in any direction from the patch point but remaining within a radius ngh while the exploratory search covers the maximum range of the solution space to look for a potential patch.
On the other hand, BA2 ’s foraging mechanism (see Fig. 2) does not separate exploitation and exploration by employing a function that automatically controls the distribution of worker bees to each site. Bees will swarm densely around a promising site and will likely travel further if the site is poor. Thus, every site has its own ngh (nghk ). This value no longer represents the maximum neighbourhood radius, but rather the most densely populated region in the neighbourhood (see Eq. 8).
foraging point_{k,w_k} =
    patch point_k ± T[0, ngh_k, 1] · (x_max − x_min)
    x_min,  if foraging point_{k,w_k} < x_min
    x_max,  if foraging point_{k,w_k} > x_max
(8)
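One foraging step of Eq. 8 can be sketched as below, where T[0, ngh_k, 1] is read as a triangular distribution on [0, 1] with mode ngh_k, the ± as a random direction, and the bounds as a clamp; the function name is illustrative.

```python
import random

def forage(patch_point, ngh_k, x_min, x_max):
    """One BA2 foraging step (Eq. 8), in one dimension."""
    step = random.triangular(0.0, 1.0, ngh_k) * (x_max - x_min)  # T[0, ngh_k, 1] scaled
    direction = random.choice((-1.0, 1.0))                       # the '±' in Eq. 8
    return min(max(patch_point + direction * step, x_min), x_max)  # clamp to bounds
```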
The proposed foraging method uses a probability distribution (T [0, nghk , 1]) to regulate the exploitation and exploration ratio according to the value of nghk . The most desirable location will have the smallest nghk , while the least desirable
Fig. 2 Flowchart of the two-parameter/standard (*) versions of the Bees Algorithm with n = 5 and nep = 9
Table 1 Comparison of recruitment of worker bees and the search mechanisms of basic BA and BA2

                 Patch-3      Patch-2        Patch-4       Patch-5        Patch-1
Rank (k)         1            2              3             4              5

Parameters of Basic BA: n = 5, e = 1, m = 3, nep = 9, nsp = 5, ngh = 0.5
w_k              9            5              5             1              1
ngh              0.5          0.5            0.5           –              –
Assignment       U[0, 0.5]    U[0, 0.5]      U[0, 0.5]     U[0, 1]        U[0, 1]

Parameters of BA2 (Linear): n = 5, nep = 9
w_k              9            7              5             3              1
ngh_k            0            0.25           0.5           0.75           1
Assignment       T[0, 0, 1]   T[0, 0.25, 1]  T[0, 0.5, 1]  T[0, 0.75, 1]  T[0, 1, 1]
will have the largest nghk . If nghk for a patch approaches the minimal radius, the worker bees will tend to swarm near the patch point. On the other hand, if nghk approaches the maximum radius, the worker bees will tend to swarm away from the patch point. Table 1 illustrates the basic and traplining variants of the BA’s recruitment and search mechanisms. The table shows a traplining variant using a triangular distribution function with linear function increment. The best patch (k = 1) in BA2 would have a maximum of 9 workers and a minimum swarming radius (0% of the solution space). Less desirable patches will have fewer workers and larger swarming radii. The number of workers decreases, and the swarming radius increases until the minimum number of workers (one) and the maximum swarming radius (100% of the solution space) are reached. Figure 3 compares the search methods used by the basic and two-parameter Bees Algorithms by depicting the exploitation and exploration within them. As can be seen, in the basic BA, all worker bees assigned to a patch forage within the neighbourhood (ngh = 0.5) of the selected patch point (rank 1, 2 or 3). On the other hand, in BA2 , the assigned bees search the whole solution space but focus on the swarming radius associated with each patch. To obtain an estimation of the probability ratio between exploration and exploitation of BA2 , we simulated the triangular random operator for 1000 runs using the parameter setting in Table 1. It is assumed in Fig. 3 that foraging within 50% of the solution space from the patch point constitutes exploitation and that searching beyond that is exploration. The best patch would have the highest probability of being exploited by bees (76%). The probabilities of exploitation decline until the weakest patch, which has only around 26% probability of being exploited. For 74% of the time, the one assigned bee chooses to explore the wider solution space for more abundant food locations. However, the above method is not suitable for the two-parameter version of the combinatorial BA. This is due to a critical difference between continuous and combinatorial optimisation, which is the structure of the solution space. The two-parameter version of the continuous BA conducts search in real space in the neighbourhood (nghk ) of a patch point (see Fig. 4). In combinatorial optimisation, a different concept
Fig. 3 Probabilities of search mechanisms inside the solution range of a basic BA, b BA2
Fig. 4 Size regulation in a continuous problem (nghk = darker blue)
of neighbourhood is needed. Radius size regulation of nghk in the two-parameter continuous BA becomes sequence length regulation in the two-parameter combinatorial BA (see Fig. 5). In the combinatorial version, a short solution sequence has the same meaning as a small neighbourhood radius in the continuous version.
2.3 Traplining Metaphor II: Intensifier (Bees Routing Optimiser)

In addition to the bees doing exploration during the exploitation process, they also gradually improve their tour over time (see Fig. 6). The critical behaviour underlying their improvement learning is the instinct to avoid danger. This behaviour could enrich the experience of bees and was the inspiration for intensifying the solution of BA [5].
Fig. 5 Sequence length regulator in a combinatorial problem
Fig. 6 Traplining behaviour to avoid a threat
The intensifier starts once a bee receives a danger signal about a stagnant solution, that is, a value that does not improve from the previous evaluation. This stagnant solution could be the initial solution generated by the local search operator (Fig. 6a). The bee then avoids visiting a randomly chosen spot that it sees as a dangerous flower and creates a subtour (Fig. 6b). The dangerous flower is selected randomly because, as in nature, threats occur randomly. Travelling on the new subtour enables the bee to acquire new experiences (or new distance information). The bee then improves the subtour by eliminating sharp turns inside it. After improvement, the subtour becomes a 'habitual' tour, and when conditions are safe, the bee observes the forgotten flower again (Fig. 6c). To build the complete tour (Fig. 6d), the bee memorises the closest path to the forgotten flower when travelling on the habitual tour and inserts the flower in the centre of the closest segment of the tour. Figure 7 shows the simple procedure of the intensifier.

Fig. 7 Bees traplining intensifier "Bees Routing Optimiser"

The processes of forgetting and reintroducing nodes in the Bees Routing Optimiser (BRO) intensifier resemble the destruction and reconstruction operations in the Large Neighbourhood Search (LNS) method proposed by Ropke in 2006 [14]. The difference is that the BRO also performs a subtour improvement before reintroducing the forgotten node. Removing a node in the initial tour could create sharp turning paths inside the subtour that bees dislike. The subtour is improved using the 2-opt operator before the forgotten flower is reintroduced to complete the tour. When the forgotten flower is reintroduced, the distances between it and the different segments
of the habitual tour are calculated using Eqs. 9–10. The closest path or segment is the one with the minimum Euclidean distance between the forgotten flower (ff) and the centre of the segment (ce_{i,j}) (Eq. 11).

ce_{i,j} = \left(x_{ce_{i,j}}, y_{ce_{i,j}}\right) = \left(\frac{x_i + x_j}{2}, \frac{y_i + y_j}{2}\right)  (9)

distance_{ff,ce_{i,j}} = \sqrt{\left(x_{ff} - x_{ce_{i,j}}\right)^2 + \left(y_{ff} - y_{ce_{i,j}}\right)^2}  (10)

join ff to (i, j) if distance_{ff,ce_{i,j}} = \min\left\{distance_{ff,ce_{1,2}}, \ldots, distance_{ff,ce_{I,J}}\right\}, \quad \forall i, j  (11)
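As an illustration of Eqs. 9–11, the following Python sketch (a hypothetical helper with invented coordinates and node labels, not the chapter's MATLAB code) computes the centre of every segment of a habitual tour and inserts the forgotten flower next to the closest one.

```python
import math

def segment_centres(tour, coords):
    """Centre point of every edge (i, j) of the closed tour (Eq. 9)."""
    centres = []
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (xi, yi), (xj, yj) = coords[a], coords[b]
        centres.append(((xi + xj) / 2.0, (yi + yj) / 2.0))
    return centres

def reinsert_forgotten(tour, coords, ff):
    """Insert the forgotten flower ff into the edge whose centre is closest (Eqs. 10-11)."""
    xf, yf = coords[ff]
    dists = [math.hypot(xf - xc, yf - yc) for xc, yc in segment_centres(tour, coords)]
    k = dists.index(min(dists))            # edge (tour[k], tour[k+1]) is the closest
    return tour[:k + 1] + [ff] + tour[k + 1:]

if __name__ == "__main__":
    coords = {0: (0, 0), 1: (2, 0), 2: (2, 2), 3: (0, 2), 4: (1, 1.8)}
    habitual = [0, 1, 2, 3]                # subtour after node 4 was 'forgotten'
    print(reinsert_forgotten(habitual, coords, ff=4))
```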
3 Vehicle Routing Problem

In the capacitated vehicle routing problem (CVRP), commonly called VRP, a fleet of identical vehicles located at a central depot must be optimally routed to supply a set of customers with known demands. The solution to the problem consists of finding tours for all vehicles to serve the demands of a set of customers at different locations without violating the capacity of the vehicles, with all tours starting and ending at the depot. The total distance travelled by the vehicles must be minimised and, as with the TSP, each customer can be visited only once by a vehicle. The following equations form the most common mathematical model for constrained optimisation of the VRP. A binary decision variable (X_{ijv}) is used to select the route cost C_{ij} of a vehicle (v). In the CVRP, the total customer demand (m_i) on a route must not exceed the capacity of the vehicle (q_v).

\min Z = \sum_{i=0}^{N} \sum_{j=0,\, j \neq i}^{N} \sum_{v=1}^{V} C_{ij} X_{ijv}  (12)

\sum_{v=1}^{V} \sum_{j=0}^{N} X_{ijv} \leq V, \quad \text{for } i = 0  (13)

\sum_{j=0}^{N} X_{ijv} = 1, \quad \text{for } i = 0 \text{ and } v \in \{1, \ldots, V\}  (14)

\sum_{v=1}^{V} \sum_{j=0,\, j \neq i}^{N} X_{ijv} = 1, \quad \text{for } i = \{1, \ldots, N\}  (15)

\sum_{v=1}^{V} \sum_{i=0,\, i \neq j}^{N} X_{ijv} = 1, \quad \text{for } j = \{1, \ldots, N\}  (16)

\sum_{i=1}^{N} m_i \sum_{j=0,\, j \neq i}^{N} X_{ijv} \leq q_v, \quad \text{for } v = \{1, \ldots, V\}  (17)

X_{ijv} \in \{0, 1\}  (18)
4 Methodology

The search mechanism of BAC2 was integrated to make the algorithm as simple to use as BA2. The algorithms were implemented using the high-level programming language MATLAB. Experiments were conducted using datasets available in TSPLIB [15]. The two largest CVRP datasets with 100 customers, EilA101 and EilB101, were chosen. The algorithms evaluated were: basic BAC (method 1), BAC with traplining metaphor I (method 2), BAC with traplining metaphor I + customer clustering (method 3), and BAC with traplining metaphor I + metaphor II (BRO) + customer clustering (method 4). The experiments were designed to reveal the contribution of each traplining metaphor (see Fig. 8).

For all algorithms except method 1, the parameters n and nep were set to 10 and 40 respectively, yielding approximately 200 bees in the colony. The parameter setting for method 1 was based on a previous study of BAC applied to the vehicle routing problem [16]. According to previous work [17], the optimal number of worker bees for TSP instances with 52 to 100 cities is 40. Because it had been found that the complexity of the VRP was twice that of the TSP [18, 19], the colony size was conservatively set to 200 bees in this work.

As previously stated, Eqs. 12–18 represent the most frequently used mathematical model of the VRP and its best-known binary integer programming formulation. A binary solution representation was not used in this study; a permutation encoding was adopted instead [20, 21]. This encoding simplifies the formulation by implicitly satisfying several well-known constraints, such as the Hamilton cycle (Eqs. 14–16). Figure 9 depicts an example of a sequence of the vehicles' tours. The figure indicates that vehicle 1 visits cities 1, 6, and 4 and then returns to the depot (0), while vehicle 2 visits cities 2, 3, and 5 and then returns to the depot.

Without decomposition (method 1 and method 2), the objective function for this problem could be defined as the minimum total distance travelled by all vehicle tours plus a penalty cost for capacity violations (Eq. 19) [16]. If there is no capacity violation (CV = 0), the objective function Z(v) will equal the total length of travel of all vehicles. This is a straightforward multiobjective problem that does not require nondominated sorting. The pseudocode of the BAC for VRP is given in [16]. Algorithm 1 is the pseudocode of the BAC2 for VRP. However, Eq. 19 does not apply to
Fig. 8 Benchmark methods
Fig. 9 Solution representation: 0 1 6 4 0 2 3 5 0
methods 3 and 4, which use the sequential decomposition approach where the penalty and the total distance travelled are separate objectives.

Z(v) = \sum_{v=1}^{V} Length(v) + penalty  (19)

penalty = M \cdot \sum_{v=1}^{V} CV(v)  (20)
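To make the permutation encoding and Eqs. 19–20 concrete, the sketch below is an illustrative Python fragment with made-up coordinates and demands (not the authors' MATLAB implementation). It evaluates a depot-delimited sequence such as the one in Fig. 9 by summing route lengths and adding a large penalty M for any capacity violation.

```python
import math

def route_length(route, coords):
    """Total Euclidean length of one vehicle tour (depot .. customers .. depot)."""
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

def evaluate(solution, coords, demand, capacity, M=1e6):
    """Fitness of a depot-delimited sequence such as [0,1,6,4,0,2,3,5,0]:
    total travelled distance plus M per unit of capacity violation (Eqs. 19-20)."""
    total, violation = 0.0, 0.0
    route = [0]
    for node in solution[1:]:
        route.append(node)
        if node == 0:                                   # a vehicle returned to the depot
            total += route_length(route, coords)
            load = sum(demand[c] for c in route if c != 0)
            violation += max(0.0, load - capacity)
            route = [0]
    return total + M * violation

if __name__ == "__main__":
    coords = {0: (0, 0), 1: (1, 1), 2: (-1, 2), 3: (-2, 1), 4: (2, 0), 5: (-1, -1), 6: (2, 1)}
    demand = {1: 3, 2: 4, 3: 2, 4: 5, 5: 1, 6: 2}
    print(evaluate([0, 1, 6, 4, 0, 2, 3, 5, 0], coords, demand, capacity=10))
```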
Algorithm 1: Two-parameter Combinatorial Bees Algorithm for VRP (Traplining Metaphor I – Combinatorial: BAC2)
Input: n = number of scout bees; nep = number of worker bees on the most promising patch
Output: T = tour
1  Start;
2  Search n potential patches;
3  Evaluate and rank the fitness of the initial potential patches;
4  while termination criterion not satisfied do
5    Recruit worker bees (Eq. 7: wmax = nep and wmin = 1) for all sites, and the assignments (Eq. 7: nghmin = 1 and nghmax = abs(dims/v));
6    Exploit and explore all patches (Fig. 5);
7    Evaluate (Eq. 19) and select the fittest Bee from each patch;
8  End
9  Report the best Bee (T);
10 End;
As previously mentioned, this work also examined two methods that decouple the VRP into clustering and routing subproblems, method 3 and method 4. The intensifier was used with method 4, which employs traplining metaphor II, whereas method 3 was BAC2 without the intensifier. Clustering was solved first using BA2 , and the clustered customers were then used as inputs for routing optimisation by BAC2 with or without the intensifier.
4.1 Clustering Procedure

Method 3 (BA2 + BAC2) and method 4 (BA2 + BAC2 + BRO) apply clustering to find the initial groups of customers served by the vehicles for the routing optimisation phase. The objective function for this subproblem (Eq. 21) is the sum of all customers' Euclidean distances to the nearest cluster prototype (Eq. 22) plus the penalty for the vehicles' capacity violation. In this study, the penalty M was set to a large number to discourage violation. Equations 23–24 represent the distances between customers belonging to a cluster and their cluster centroid. Matrix D (Eq. 24) is a CS × V matrix (cs = customers = 1, …, CS and v = vehicles = 1, …, V).

\min Z_{clustering} = \sum_{cs=1}^{CS} d(cs) + penalty  (21)

d(cs) = \min\{d(cs, 1), d(cs, 2), \ldots, d(cs, V)\}  (22)

d(cs, v) = \sqrt{(x_v - x_{cs})^2 + (y_v - y_{cs})^2}  (23)

D = \begin{pmatrix} d_{1,1} & \cdots & d_{1,V} \\ \vdots & \ddots & \vdots \\ d_{CS,1} & \cdots & d_{CS,V} \end{pmatrix}  (24)

The clustering methods started by randomly generating V cluster centroids. The initial cluster centroids (see Eqs. 25–27) were defined by random angles (θ) and radii from the depot location (x_0, y_0). The search procedure of BA2 would move the centroids, which would result in a shorter total distance to the centroids and fewer capacity violations. The clustering optimisation was set to 50,000 evaluations. The pseudocode of BA2 for the clustering problem is given in Algorithm 2.

[x_v, y_v] = [radius \cdot \cos\theta(v) + x_0,\; radius \cdot \sin\theta(v) + y_0]  (25)

\theta(v) = rand() \cdot \frac{2\pi}{V} + (v - 1) \cdot \frac{2\pi}{V}  (26)

radius = rand() \cdot \max\{d(1, [x_0, y_0]), d(2, [x_0, y_0]), \ldots, d(CS, [x_0, y_0])\}  (27)
In Eqs. 26–27, rand() is the uniform distribution of random numbers, U[0,1].
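The centroid initialisation of Eqs. 25–27 can be sketched as follows. This is an illustrative Python fragment with invented coordinates; the radius is drawn independently for each centroid here, which produces the circular band discussed in Appendix B, although the chapter's MATLAB code may draw it differently.

```python
import math
import random

def initial_centroids(depot, customers, V):
    """Place V initial cluster centroids around the depot (Eqs. 25-27): each centroid
    gets a random angle inside its own 2*pi/V sector and a random radius bounded by
    the farthest customer's distance from the depot."""
    x0, y0 = depot
    max_r = max(math.dist(depot, c) for c in customers)
    centroids = []
    for v in range(1, V + 1):
        theta = random.random() * 2 * math.pi / V + (v - 1) * 2 * math.pi / V
        radius = random.random() * max_r
        centroids.append((radius * math.cos(theta) + x0, radius * math.sin(theta) + y0))
    return centroids

if __name__ == "__main__":
    depot = (35.0, 35.0)
    customers = [(41, 49), (35, 17), (55, 45), (10, 43), (65, 20)]
    print(initial_centroids(depot, customers, V=3))
```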
Algorithm 2: Two-parameter Bees Algorithm for Clustering Problem (Traplining Metaphor I – Continuous: BA2)
Input: n = number of scout bees; nep = number of worker bees on the most promising patch
Output: Clusters = the groups of customers
1  Start;
2  Initialise nV centroids (Eqs. 25–27);
3  Evaluate and rank the fitness of the initial solutions;
4  while termination criterion not satisfied do
5    Recruit worker bees (Eq. 7: wmax = nep and wmin = 1) for all sites, and the assignments (Eq. 7: nghmin = 0% and nghmax = 100%);
6    Exploit and explore all patches (Fig. 5);
7    Evaluate (Eq. 21) and select the fittest Bee from each patch;
8  End
9  Report the best Bee (Clusters) (i.e., the best set of V centroids);
10 End;
4.2 Routing Procedure

In the routing procedure, except for method 1, the other three methods all employed the two-parameter combinatorial BA search algorithm. All patches were subjected to a combination of swapping, insertion, and inversion [21], with the length of each patch being regulated by nghk based on its rank. The basic BA applied local search operators on the selected (m) patches with ngh equal to 50% of the problem dimensions, while random permutation was used by the scout bees to replace the remaining (n − m) patches. For methods 3 and 4, the objective function for the routing subproblem was simply the total distance travelled by all vehicles (Eq. 28), because the clustering had satisfied the zero-capacity-violation condition. The swap, insert, and invert (reverse) operators were performed within a cluster, and the route between customers could be modified only if they shared a vehicle or cluster. Swapping customers between different clusters would increase the likelihood of exceeding the vehicles' capacity.

\min Z_{routing} = \sum_{v=1}^{V} Length(v)  (28)
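The following Python fragment is a minimal, hypothetical sketch of the within-cluster neighbourhood move described above: one of the swap, insertion, or inversion operators is applied to a window whose length is regulated by ngh_k. The exact operator implementation of [21] may differ.

```python
import random

def neighbourhood_move(route, ngh_k):
    """Apply one of swap / insertion / inversion to a copy of a single vehicle's route
    (depot excluded), restricted to a randomly placed window of length ngh_k."""
    r = route[:]
    length = max(2, min(int(ngh_k), len(r)))          # regulated sequence length
    start = random.randrange(0, len(r) - length + 1)
    i, j = start, start + length - 1
    op = random.choice(("swap", "insert", "invert"))
    if op == "swap":
        r[i], r[j] = r[j], r[i]
    elif op == "insert":
        node = r.pop(j)
        r.insert(i, node)
    else:                                             # invert (reverse) the window
        r[i:j + 1] = reversed(r[i:j + 1])
    return r

if __name__ == "__main__":
    cluster_route = [5, 12, 7, 3, 9, 14]              # customers of one vehicle/cluster
    random.seed(1)
    print(neighbourhood_move(cluster_route, ngh_k=3))
```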
For method 4, the solution obtained from BAC2 was passed to the BRO for improvement. Additionally, the intensifier worked locally to achieve a feasible result. The BRO would remove several ‘dangerous’ flowers, smooth the subtour into a habitual tour, and reintroduce the removed flowers to the habitual tour to complete the tour.
Although improving subtours could be accomplished using any local search method, a 2-opt local search was used in this study to convert subtours into habitual tours. The heuristic examines all possible pairs of customers on the subtour and reverses the visit sequence to avoid sharp turning angles (see Algorithm 3, line 13). The pseudocode of the BRO for intensifying the solutions obtained from the neighbourhood search mechanism is given in Algorithm 3.

Algorithm 3: Bees Routing Optimiser (Traplining Metaphor II – Combinatorial Intensifier)
Input: Tinit = initial tour or subtour from the previous process
Output: T = new tour or subtour
1   Start;
2   for all bouts do
3     HT ← Forgotten(Tinit, nff)    // nff: the number of forgotten flowers
4     T ← Re-Introduction(HT, ff)   // ff: the forgotten flowers
5   End
6   T(it) ← min{T1, …, Tbouts}
7   End;
8   Def Forgotten(Tinit, nff):
9     ff ← random list of nff forgotten flowers;
10    T′ ← the tour Tinit with the ff nodes removed;
11    HT ← T′;
12    for all possible edge-pairs in HT do
13      HT* ← tour by reversing end points in edge-pair HT;
14      if Cost(HT*) ≤ Cost(HT)
15        HT ← HT*;
16      Else
17      End
18    End
19    return HT;
20  Def Re-Introduction(HT, ff):
21    he ← list of all centroids of the edges in HT (the habitual tour)
22    for each item i in ff do
23      DFH_{ff(i),he} ← list of distances from the ff(i) node to he;
24      T ← tour obtained by inserting ff(i) into the closest edge (Fig. 6 and Eq. 11);
25    End
26    return T;
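For illustration, a plain 2-opt smoothing step corresponding to lines 12–18 of Algorithm 3 might look like the Python sketch below. This is a standard improvement-accepting 2-opt on invented coordinates, not the authors' code.

```python
import math

def tour_cost(tour, coords):
    """Length of a closed tour."""
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:] + tour[:1]))

def two_opt(tour, coords):
    """Repeatedly reverse the segment between every edge pair while it shortens the
    tour - the smoothing step that turns a subtour into a 'habitual' tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_cost(candidate, coords) < tour_cost(best, coords):
                    best, improved = candidate, True
    return best

if __name__ == "__main__":
    coords = {0: (0, 0), 1: (0, 2), 2: (2, 2), 3: (2, 0), 4: (1, 3)}
    print(two_opt([0, 2, 1, 4, 3], coords))
```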
5 Experiments, Results and Discussion

All benchmarking methods were run ten times with a maximum of 1,000,000 evaluations using datasets EilA101 and EilB101 from TSPLIB. The error and the number of function evaluations were used to analyse performance. A simple statistical analysis was conducted to determine the effect of the traplining metaphor. All the programmes in this study were created in MATLAB and are available on GitHub at https://github.com/asrulharunismail.

This section discusses the proposed method in detail. The routing problem was solved by iteratively improving the initial solutions (obtained by permutating either the complete set of customers or the customer clusters formed in the first phase) using the neighbourhood search mechanism (and the intensifier). Algorithm 1 is the procedure to solve the problem using BAC2 with BRO. The solutions obtained for all vehicles' routing plans using the proposed methods were then validated for correctness (see Tables 3, 4, 5 and 6). As shown in those tables, the total demand of customers did not exceed the vehicles' capacity (in EilA101 the capacity is 200, while in EilB101 it is 112), the routes followed a Hamilton cycle, and all customers were served. It can be concluded that the code executed the proposed methods correctly and generated valid and feasible solutions. Figures 10 and 11 show the best solutions found for both datasets.

Table 2 compares the four methods discussed above. The table gives the parameters used, the best solution obtained and its error (i.e., the difference between its fitness value and that of the best known solution), the mean and standard deviation of the ten obtained solutions and their errors, as well as the mean and standard deviation of the number of evaluations for the ten obtained solutions. The table shows that method 4 was the best of the four methods for solving EilA101. After approximately 500,000 evaluations, it could achieve an average error of less than 2%. By incorporating traplining metaphor I into the BAC, the error was reduced by an average of 2% (cf. methods 1 and 2). The same level of improvement was attained through decomposition (BAC2 + BA2, i.e., method 3). The method with the intensifier (method 4) demonstrated the greatest improvement, with an error reduction
Fig. 10 Best solution of proposed BAC for EilA101
Fig. 11 Best solution of proposed BAC for EilB101
of nearly 6%. Additionally, method 4 was also the most reliable method, with a maximum error deviation of 0.5%. The same degree of improvement was not seen with dataset EilB101, probably because of the higher complexity of the problem: EilB101 has a larger vehicle fleet (14 vehicles as opposed to 8 for EilA101). After one million evaluations, method 4 could achieve an average error of 5.5%. Although the final error was greater than that for the EilA101 dataset, the standard deviation was stable at 0.8%. On average, method 4 could reduce the error by approximately 4%. As seen with both datasets, the performances of BAC2 with and without clustering (methods 2 and 3) were quite similar. Figure 12 shows that methods 1, 2, and 3 were statistically indistinguishable from one another, whereas method 4 was statistically distinct from them. Although all error bars decreased, only the standard deviation tail of method 4 did not overlap with those of the other methods.

Additionally, a convergence speed analysis was performed on the four methods. Figure 13 illustrates the convergence performance step by step, from zero to one million evaluations in 25,000-evaluation increments. Method 4 converged extremely quickly. Within its first 50,000 evaluations on EilA101, method 4 achieved an error of less than 5%. At the same number of evaluations, method 3 produced a 42% error, method 2 a 109% error, and method 1 a 128% error when compared to the best known solution of 817. On average, method 3 yielded less than 10% error at 275,000 evaluations, method 2 reached the same error at 475,000 evaluations, and method 1 achieved it at 575,000 evaluations. For the more complex dataset EilB101, method 4 achieved an error of less than 25% in its first 50,000 evaluations. At the same number of evaluations, method 3 gave a 55% average error, method 2 a 95% average error, and method 1 a 116% average error when compared to the best known solution of 1077. On average, method 4 could achieve less than 10% error at 200,000 evaluations, method 3 at 725,000 evaluations, method 2 at 700,000 evaluations, and method 1 at 900,000 evaluations.
Table 2 Results for methods 1 to 4

Dataset | Method | Parameters | Best Sol | Best Err | Avg Sol | Avg Err | Std Sol | Std Err | Avg Eval | Std Eval
EilA101 | M1 | n = 40; e = 10; m = 20; nep = 40; nsp = 20; ngh = 0.5 | 862 | 5.51 | 880.8 | 7.81 | 8.77 | 1.07 | 880,747.5 | 68,606.7
EilA101 | M2 | n = 10; nep = 40 | 848 | 3.79 | 862.2 | 5.53 | 9.39 | 1.15 | 853,273.5 | 163,117.6
EilA101 | M3 | n = 10; nep = 40 | 841 | 2.94 | 858.2 | 5.04 | 9.86 | 1.21 | 763,097.6 | 194,472.8
EilA101 | M4 | n = 10; nep = 40 | 826 | 1.10 | 832.3 | 1.87 | 4.64 | 0.57 | 563,609.6 | 311,677.2
EilB101 | M1 | n = 40; e = 10; m = 20; nep = 40; nsp = 20; ngh = 0.5 | 1161 | 7.80 | 1181.1 | 9.67 | 14.69 | 1.36 | 899,274.6 | 69,927.3
EilB101 | M2 | n = 10; nep = 40 | 1123 | 4.27 | 1173.4 | 8.95 | 21.89 | 2.03 | 830,103.0 | 98,460.5
EilB101 | M3 | n = 10; nep = 40 | 1139 | 5.76 | 1169.6 | 8.60 | 20.40 | 1.89 | 878,508.3 | 141,307.7
EilB101 | M4 | n = 10; nep = 40 | 1120 | 3.99 | 1137.1 | 5.58 | 9.23 | 0.86 | 763,109.3 | 233,873.7
Fig. 12 Average error comparison
Fig. 13 Convergence speed
As a result, it can be concluded that, by combining traplining metaphors I and II with a decomposition approach, the proposed algorithm (method 4) was four to ten times faster than the basic BAC. Additionally, comparing methods 4 and 3 shows that the intensifier can enhance the performance of the algorithm by a factor of three to six.
6 Conclusion

This chapter has proposed new BAC variants for the VRP. The combinatorial version of the BA was simplified using bee traplining metaphor I, and its solutions were intensified through the use of bee traplining metaphor II. Comparative studies of the basic BAC (method 1) and three methods combining those metaphors (methods 2, 3 and 4) were conducted on the two 100-customer datasets of TSPLIB (EilA101 and EilB101) to evaluate their relative accuracies and speeds.
It was found that, accuracy-wise, the three methods (1, 2, 3) performed statistically identically. However, method 3 (BA2 + BAC2) was significantly faster than the basic BAC on both EilA101 and EilB101. Despite being much simpler to use, as it required the setting of only two parameters, BAC2 showed no deterioration in accuracy or speed when compared to BAC. The version using traplining metaphors I and II produced significantly better results than the other versions: it increased accuracy by at least 4% and speed by at least a factor of four. The "Bees Routing Optimiser" intensifier contributed the most to the performance improvement, reducing the error by at least 3% and speeding up convergence at least threefold. The BRO is well suited to practical problems that require rapid computation.

Numerous further developments inspired by natural phenomena are possible. A new local search operator could be created by identifying dangerous routes based on the turning angle inside the BRO intensifier. Alternatively, coevolutionary computations, either cooperative (bees–flowers) or competitive (bees–wasps), could be performed in which the bees interact with other species.
Appendix A: The Complete Routing Plan

The results of the decomposition methods presented in this chapter can be found in Tables 3, 4, 5 and 6.

Table 3 Routing plan result for A101 (BA2 + BAC2)

v | Sequence | Distance | Capacity
1 | 0, 76, 77, 3, 79, 78, 34, 35, 65, 71, 9, 81, 33, 51, 50 | 122 | 175
2 | 0, 18, 82, 48, 47, 36, 49, 64, 11, 19, 7, 52 | 115 | 188
3 | 0, 28, 12, 80, 68, 29, 24, 25, 55, 54, 26 | 97 | 182
4 | 0, 99, 61, 16, 86, 38, 14, 44, 91, 100, 37, 98, 92 | 122 | 200
5 | 0, 6, 96, 59, 93, 85, 5, 84, 17, 45, 46, 8, 83, 60, 89 | 99 | 198
6 | 0, 53, 58, 73, 41, 15, 43, 42, 57, 2, 87, 97, 95, 94, 13 | 88 | 129
7 | 0, 4, 39, 67, 23, 56, 75, 22, 74, 72, 21, 40 | 125 | 198
8 | 0, 31, 88, 62, 10, 63, 90, 32, 66, 20, 30, 70, 1, 69, 27 | 58 | 188
Total | | 826 | 1458
Table 4 Routing plan result for A101 (BA2 + BAC2 + BRO)

v | Sequence | Distance | Capacity
1 | 0, 31, 10, 32, 90, 63, 64, 49, 19, 11, 62, 88 | 122 | 175
2 | 0, 73, 74, 75, 56, 23, 67, 39, 4, 25, 55, 54, 26 | 115 | 188
3 | 0, 28, 12, 80, 68, 24, 29, 34, 78, 79, 3, 77, 76, 50 | 97 | 182
4 | 0, 89, 18, 83, 60, 5, 84, 17, 45, 8, 46, 36, 47, 48, 82, 7, 52 | 122 | 200
5 | 0, 6, 96, 99, 61, 16, 86, 44, 38, 14, 42, 87, 13 | 99 | 198
6 | 0, 40, 21, 72, 22, 41, 15, 43, 57, 2, 58, 53 | 88 | 129
7 | 0, 1, 33, 81, 51, 9, 35, 71, 65, 66, 20, 30, 70, 69, 27 | 125 | 198
8 | 0, 59, 93, 85, 91, 100, 98, 37, 92, 97, 95, 94 | 58 | 188
Total | | 841 | 1458
Table 5 Routing plan result for B101 (BA2 + BAC2)

v | Sequence | Distance | Capacity
1 | 0, 78, 34, 35, 65, 71, 81, 33, 50 | 110 | 110
2 | 0, 1, 51, 9, 66, 20, 30, 69 | 88 | 97
3 | 0, 53, 58, 2, 57, 87, 97, 13 | 52 | 107
4 | 0, 93, 61, 85, 91, 100, 98, 37 | 61 | 112
5 | 0, 83, 45, 8, 46, 48, 82, 18, 52 | 89 | 110
6 | 0, 88, 62, 63, 90, 32, 10, 70, 31 | 86 | 112
7 | 0, 7, 47, 36, 49, 64, 11, 19 | 115 | 105
8 | 0, 27, 89, 94 | 35 | 58
9 | 0, 6, 96, 95, 92, 59, 99, 5, 84, 60 | 62 | 109
10 | 0, 42, 43, 15, 41, 22, 75, 74, 72, 73 | 91 | 103
11 | 0, 39, 67, 23, 56, 21, 40 | 92 | 111
12 | 0, 28, 12, 80, 54, 55, 25, 4, 26 | 76 | 103
13 | 0, 17, 86, 16, 44, 38, 14 | 108 | 110
14 | 0, 68, 24, 29, 79, 3, 77, 76 | 74 | 111
Total | | 1139 | 1458
Appendix B: Details of the Clustering Method

The pseudocode for solving the clustering problem with zero capacity violation is shown in Algorithm 2. It has been found that placing the initial cluster centroids in a circular band around the depot (Fig. 15) produced a more robust segmentation of customers than achieved through positioning the initial centroids completely randomly (Fig. 14).
Table 6 Routing plan result for B101 (BA2 + BAC2 + BRO)

v | Sequence | Distance | Capacity
1 | 0, 26, 4, 25, 55, 54, 12, 28 | 71 | 97
2 | 0, 8, 46, 36, 49, 64, 11, 63, 90, 32 | 132 | 102
3 | 0, 27, 20, 66, 71, 65, 35, 9 | 113 | 109
4 | 0, 75, 23, 67, 39, 56 | 94 | 109
5 | 0, 50, 76, 77, 29, 24, 80, 68 | 76 | 94
6 | 0, 13, 87, 42, 14, 43, 15, 57, 53 | 83 | 110
7 | 0, 94, 59, 93, 99, 96 | 41 | 97
8 | 0, 61, 16, 86, 38, 44, 92 | 91 | 103
9 | 0, 89, 60, 5, 84, 17, 45, 83, 18 | 71 | 92
10 | 0, 95, 97, 37, 98, 100, 91, 85, 6 | 56 | 112
11 | 0, 52, 7, 19, 47, 48, 82 | 74 | 110
12 | 0, 58, 2, 41, 22, 74, 72, 73, 21, 40 | 64 | 110
13 | 0, 69, 70, 30, 10, 62, 88, 31 | 70 | 103
14 | 0, 1, 51, 33, 81, 34, 78, 79, 3 | 84 | 110
Total | | 1120 | 1458
Fig. 14 Clustering with random points initialisation
The existence of clusters means that customers can reasonably be put into groups for which the total demand is less than or equal to the delivery vehicle’s capacity. The initial solutions obtained with customer clustering are better than those produced by random permutation of the whole set of customers.
Fig. 15 Clustering with circular initial centroids (small “o” = initial; large “O” = final centroids)
Figure 16 shows examples of initial solutions with and without clustering. The initial path generated by random permutation is completely random and appears ‘chaotic’, whereas the clustering process limits the chaos within the individual clusters. The random permutation generator created initial solutions with approximately four times the errors achieved with the clustering technique. Due to the existence of good initial solutions, the algorithm was able to reach a near-optimal feasible solution to the routing plan more quickly than it would have done without them. Because the BA2 for clustering was utilised inside the main BAC2 initialisation, there will be n initial sets of clustered customers (Fig. 16).
Fig. 16 Initial routes a without and b with clustering on EilA101
Appendix C: Acronyms and Symbols

Tables 7 and 8 define the acronyms and symbols used in this chapter.

Table 7 Acronyms

BA | Bees Algorithm
BA2 | Two-parameter Bees Algorithm
BAC2 | Two-parameter Combinatorial Bees Algorithm
BRO | Bees Routing Optimiser
BAC | Combinatorial Bees Algorithm
M-1, …, M-4 | Method 1, …, Method 4
T | Triangular distribution of random number generator
T[a,b,c] | Triangular distribution with minimum limit a, likely value b and maximum limit c
U | Uniform distribution of random numbers, U[0,1] or rand()
Table 8 Symbols

C_{i,j} | Cost of path i to j
ce_{i,j} | Centre of segment or path i,j inside the habitual tour
CV_v | Capacity violation of vehicle v
D | Euclidean distance
Dims | Dimensions of the problem
e | Number of elite patches
ff | Forgotten flower(s)
k | Ranking of the patches
m − e | Number of non-elite selected patches
m | Number of selected patches
n | Number of scout bees
nep | Number of worker bees on the elite patches
ngh | Neighbourhood size
ngh_k | Swarming area of worker bees
nsp | Number of worker bees on the non-elite selected patches
nw | Number of worker bees in a patch
V | Number of vehicles in VRP
w_k | Number of worker bees of the rank k patch
w_max | Maximum number of worker bees of a patch
w_min | Minimum number of worker bees of a patch
x_0/y_0 | x- or y-axis coordinate of the depot
x_ff | x-axis coordinate of the forgotten flower(s)
X_{i,j,v} | Binary value representing the assignment of a vehicle (v) to path (i,j)
x_v/y_v | x- or y-axis coordinate of the initial cluster centroid
y_ff | y-axis coordinate of the forgotten flower(s)
References 1. Gendreau M, Laporte G, Potvin JY (2002) Metaheuristics for the capacitated VRP. In: The vehicle routing problem. Society for Industrial and Applied Mathematics, pp 129–154 2. Anbuudayasankar SP, Ganesh K, Mohapatra S (2014) Survey of methodologies for tsp and vrp. In: Models for practical routing problems in logistics. Springer, Cham, pp 11–42 3. Riff MC, Montero E (2013) A new algorithm for reducing metaheuristic design effort. In: 2013 IEEE congress on evolutionary computation. IEEE, pp 3283–3290 4. Pham DT, Darwish AH (2008) Fuzzy selection of local search sites in the Bees Algorithm. In: Proceedings of the 4th international virtual conference on intelligent production machines and systems, pp 1–14 5. Ismail AH (2021) Enhancing the Bees Algorithm using Traplining metaphor. In: Thesis report, University of Birmingham, United Kingdom 6. Taillard É (1993) Parallel iterative search methods for vehicle routing problems. Networks 23(8):661–673
7. Noon CE, Mittenthal J, Pillai R (1994) A TSSP+ 1 decomposition strategy for the vehicle routing problem. Eur J Oper Res 79(3):524–536 8. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2005) The bees algorithm. In: Technical note, Manufacturing Engineering Centre. Cardiff University, UK 9. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. In: Intelligent production machines and systems. Elsevier Science Ltd, pp 454–459 10. Karaboga D (2005) An idea based on honey bee swarm for numerical optimisation. In: Technical report-tr06, vol 200, Erciyes University, Engineering Faculty, Computer Engineering Department, pp 1–10 11. Woodgate JL, Makinson JC, Lim KS, Reynolds AM, Chittka L (2017) Continuous radar tracking illustrates the development of multidestination routes of bumblebees. Sci Rep 7(1):1–15 12. Ohashi K, Thomson JD (2013) Trapline foraging by bumble bees: VI. Behavioral alterations under speed–accuracy trade-offs. Behav Ecol 24(1):182–189 13. Lihoreau M, Raine NE, Reynolds AM, Stelzer RJ, Lim KS, Smith AD, Chittka L (2012) Radar tracking and motion-sensitive cameras on flowers reveal the development of pollinator multidestination routes over large spatial scales. Plos Biol 10(9):e1001392 (Public Library of Science) 14. Ropke S, Pisinger D (2006) An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows. Transp Sci 40(4):455–472 (Informs) 15. Reinelt G (1994) TSPLIB—a traveling salesman problem library. ORSA J Comput 3(4):376–384 16. Ismail AH, Hartono N, Zeybek S, Caterino M, Jiang K (2021) Combinatorial Bees algorithm for vehicle routing problem. In: Macromolecular symposia, vol 396, no 1. Wiley, p 2000284 17. Hartono N, Ismail AH, Zeybek S, Caterino M, Jiang K, Sahin M (2021) Parameter tuning for combinatorial bees algorithm in travelling salesman problems. In: 13th ISIEM, Bandung, 28th July 2021, paper no 22, p 40 18. Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans Evolutionary Comput 1(1):53–66 (IEEE) 19. Dorigo M, Blum C (2005) Ant colony optimisation theory: a survey. Theor Comput Sci 344(2–3):243–278 20. Talbi E-G (2009) Metaheuristics: from design to implementation, vol 74. Wiley 21. Ismail AH, Hartono N, Zeybek S, Pham DT (2020) Using the Bees Algorithm to solve combinatorial optimisation problems for TSPLIB. In: IOP conference series: materials science and engineering, vol 847, no 1. IOP Publishing, p 012027
Supply Chain Design and Multi-objective Optimisation with the Bees Algorithm

Ernesto Mastrocinque
1 Introduction

Supply chain (SC) management has recently received growing attention from academia, industry and the media. Due to the global Covid-19 pandemic, several industrial sectors have experienced disruptions at various stages of the supply chain, resulting in shortages of items such as semiconductor chips for car manufacturers or shortages of commodities such as food at retail stores [1]. Companies are under pressure to deliver quality products and services on time, at low cost and with a high service level. Therefore, they must build a robust and efficient supply chain able to support the whole life cycle of the product. A typical supply chain is composed of different stages, such as multiple tiers of suppliers, manufacturing stages, distribution and wholesaler stages, and retailers. Moreover, it includes the logistics operations responsible for moving and storing the items from one stage to another. Designing a supply chain network may become very challenging because of the different factors that may affect the decision. The configuration of the supply chain will ultimately affect the cost of the finished products, the speed in reaching the final customers, as well as the service level and quality. Moreover, these factors might show opposite trends: usually the higher-cost solution is also the faster one, while the slower one can be performed at a lower cost. Finding the optimal configuration of a supply chain may therefore result in a very complex optimisation problem involving multiple objectives to satisfy, hence the need to find trade-off solutions representing a good compromise between conflicting objectives [2]. Multi-objective optimisation problems have multiple solutions forming the so-called Pareto front. These solutions are non-dominated, meaning there is no other
solution improving the value of one objective without worsening at least one of the other objectives. Usually, these problems are not solvable with analytical methods but instead require a high computational effort by means of stochastic approaches. Meta-heuristic algorithms have proven to be a valid tool for solving NP-hard problems, and the multi-objective supply chain optimisation problem is no exception. Among the different meta-heuristic algorithms, the Bees Algorithm (BA) belongs to the category of nature-inspired algorithms: it mimics the food foraging behaviour of honeybees. The BA has proven to be an effective tool for solving numerous optimisation problems, both discrete and continuous, with single or multiple objectives, in the fields of design, manufacturing, control, clustering and scheduling, as well as supply chain management. The supply chain network design problem can be formulated as a multi-objective optimisation problem, and the Bees Algorithm has proven to be an effective tool for finding optimal configurations of the supply chain. The question addressed here is therefore how to design and optimise a supply chain network with multiple objectives using the Bees Algorithm. The aim of this chapter is to present an effective approach based on the Bees Algorithm to design a supply chain network by solving a multi-objective optimisation model consisting of minimising the total supply chain cost and lead time simultaneously. The remainder of this chapter is structured as follows: Sect. 2 defines the supply chain network design problem statement, including the mathematical formulation of the multi-objective optimisation model. The Bees Algorithm is introduced in Sect. 3. Section 4 defines the Bees Algorithm approach for solving the multi-objective supply chain optimisation problem and discusses a numerical example. Finally, conclusions and future research opportunities are outlined in Sect. 5.
2 Supply Chain Network Design Problem

A supply chain design problem may involve different types of decisions such as supplier selection, facility location, technology selection, amount of materials, inventory levels and logistics modes, among others [3]. In the literature, various supply chain design and optimisation problems have been solved with a range of mathematical models and solution approaches, for example: a supply chain network design problem for minimising the total cost and demand fulfilment lead time and maximising the volume flexibility, using a multi-objective Particle Swarm Algorithm [4]; a mathematical model for designing global supply chains under lead time constraints [5]; a robust optimisation model for the design of a supply chain considering uncertainty in demand, supply capacity and cost data [6]; a multi-objective biogeography-based optimisation algorithm to solve a supply chain network design problem with uncertain transportation costs and demands [7]; a strategic model for designing supply chain networks with multiple distribution channels, solved with a modified multi-objective artificial bee colony algorithm [8]; a mixed integer linear
programming model to optimise a closed-loop supply chain for a cardboard recycling network, with the goal of profit maximisation [9]; an integrated mathematical programming model for closed-loop green supply chain optimisation in which suppliers offer quantity discounts, with the objectives of minimising economic cost and environmental emissions and maximising customer satisfaction [10]; a mixed-integer nonlinear programming model for total cost minimisation for the facility location problem of a closed-loop supply chain, using a novel whale optimisation algorithm [11]; a two-stage stochastic program for supply chain network design under disruption events covering location, allocation, inventory and order-size decisions [12]; and a hybrid methodology for designing a sustainable supply chain that is resilient to random disruptions, using a Benders decomposition algorithm [13]. The examples above show only some of the numerous models developed in the last decade to address the variety of decisions involved in the design and optimisation of supply chain networks. They also show different objectives and constraints as well as solution approaches. However, the majority of the proposed models address only a limited number of the decisions involved in the SC design process and they are not easy to generalise. Moreover, they are usually solved with commercial packages and on small instances because of the high computational effort. Looking at the problem from a generalised perspective, the aim of supply chain design and optimisation is to select the best option for each task at each SC stage, in order to minimise or maximise certain performance indicators such as cost, profit, lead time, service level and quality, among others. A good and well-known SC representation is based on the Bill of Materials (BoM) of the finished product, which defines the different materials, parts and components that need to be supplied, assembled and delivered to provide the finished goods to the customers. This approach allows the tasks/activities in the supply chain, as well as their precedence relationships, to be identified according to the BoM. This SC representation is proposed in [14], where a notebook supply chain design problem is presented with the objective of minimising the total cost, including the cost of the finished product and inventory, and solved by a dynamic programming algorithm. The same problem is solved in [15] by means of a Genetic Algorithm (GA) to reduce the computational effort, whereas the uncertainty in the cost options is considered in [16]. Other examples of the BoM representation of the SC are: a mixed integer linear programming model for selecting a product family and designing the SC in the automotive context [17]; a mixed integer programming model that integrates both platform product design and material purchase decisions in a two-echelon SC [18]; a GA for supply chain configuration considering new product development with the objective of profit maximisation [19]; an approach based on hybrid augmented Lagrangian coordination for assembly supply chain optimisation [20]; a combination of the GA and the Taguchi method for configuring a supply chain considering new product design [21]; and a SC design model including production-sales policy optimisation for new products over multiple planning horizons to maximise the total profit [22].
Most of the proposed approaches aim to optimise a single objective which is usually minimising the total cost or maximising the total profit.
In Nepal et al. [23], a second objective is added to the model proposed in [14]: a compatibility index is introduced to consider the compatibility of the selected SC options in terms of strategic goals, financial aspects and other factors. The multi-objective model is solved with the weighted sum approach and a GA. Moreover, a bi-objective SC design problem is proposed in [24], where a bulldozer supply chain is designed with the goal of minimising both the total SC cost and lead time using an approach based on the Ant Colony Optimisation (ACO) algorithm. The same problem is solved using the Bees Algorithm (which is presented in this chapter) and its enhanced variants in [25, 26], achieving better performance than the ACO approach, and subsequently a number of different instances are solved using the Intelligent Water Drop (IWD) algorithm [27]. Therefore, meta-heuristic and nature-inspired algorithms have proven to be a valid tool for solving the BoM SC design and optimisation problem when multiple objectives are considered. However, most of the approaches developed are based on Genetic Algorithms, and the majority of the works available in the literature deal with single-objective optimisation.

A generalised representation of this supply chain is shown in Fig. 1. The triangles represent supply tasks, the squares sub-assembly tasks, the circles final product tasks and, finally, the dotted circles delivery tasks. The dotted arrows indicate the precedence/input relationships between tasks. A task in the supply stages may be an input for a sub-assembly task as well as for a final assembly task. Moreover, there might be multiple sub-assembly stages, as is usually the case for complex products (e.g., in the aerospace sector). Each task may be performed by different options: there are different suppliers for the same part/component, different sub-assembly or final assembly approaches, as well as different delivery options (truck, air freight, water, rail, pipeline) to the final customers across multiple markets. The more options there are for each task in the SC, the more complex the problem of finding the optimal SC configuration becomes. Each option for a specific task exhibits different performance in terms of cost, speed, lead time, quality, flexibility, etc. Among these indicators, cost and lead time are certainly the most important to optimise, at least at the initial design phase. Moreover, the total supply chain cost will have a major impact on the cost of the finished products and therefore on the price for the customers. Selecting the lowest-cost options across all the stages of the supply chain will result in the lowest total supply chain and finished product costs. However, this will raise the total SC lead time, and therefore the finished products will take longer to reach the final customer. There is thus a need to find trade-off solutions to this problem with the goal of balancing the total supply chain cost and lead time. The mathematical formulation of this supply chain multi-objective optimisation problem is outlined in the next sub-section.
2.1 Mathematical Model

The multi-objective supply chain optimisation problem is formulated as follows, according to [14, 24]. Equation (1) represents the first objective, the Total Supply Chain
Fig. 1 Generic BoM supply chain configuration
Cost (TSCC), with N the total number of tasks, i the index of the task, j the index of the option, ψ the period under consideration, δ_i the average demand per unit time for task i, C_{ij} the cost of option j at task i, and y_{ij} the decision variable, which is equal to 1 if option j is selected for task i and 0 otherwise. Equation (2) represents the second objective, the Total Supply Chain Lead Time (TSCLT), which is the maximum cumulative lead time among the delivery activities/nodes, with d the index of the delivery tasks and LT_d the cumulative lead time at the delivery tasks. The cumulative lead time at a specific task i is computed according to the constraint in Eq. (3), as the sum of the processing lead time of task i and the maximum cumulative lead time over all input tasks k at that node. Finally, Eq. (4) is the constraint on the decision variable y_{ij}, meaning that only one option j can be selected for task i.

TSCC = \psi \sum_{i=1}^{N} \delta_i \sum_{j=1}^{N_i} C_{ij} y_{ij}  (1)

TSCLT = \max_{d} LT_d  (2)

\sum_{j=1}^{N_i} T_{ij} y_{ij} + \max_{k} LT_k - LT_i = 0  (3)

\sum_{j=1}^{N_i} y_{ij} = 1  (4)
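To illustrate Eqs. (1)–(4), the Python sketch below (a hypothetical, simplified example with invented data, not the implementation used in [25]) evaluates TSCC and TSCLT for a given assignment of options, computing the cumulative lead time recursively over the BoM precedence relationships.

```python
def tscc(selection, demand, cost, psi=12):
    """Total supply chain cost (Eq. 1): demand-weighted cost of the chosen option of
    every task over the planning period psi."""
    return psi * sum(demand[i] * cost[i][selection[i]] for i in selection)

def tsclt(selection, time, inputs, delivery_tasks):
    """Total supply chain lead time (Eq. 2): the largest cumulative lead time among the
    delivery tasks, with Eq. 3 applied recursively over the BoM."""
    memo = {}
    def lt(i):
        if i not in memo:
            upstream = max((lt(k) for k in inputs.get(i, [])), default=0.0)
            memo[i] = time[i][selection[i]] + upstream
        return memo[i]
    return max(lt(d) for d in delivery_tasks)

if __name__ == "__main__":
    # Hypothetical three-task chain: task 1 supplies task 2, which supplies task 3 (delivery).
    selection = {1: 0, 2: 1, 3: 0}                 # chosen option index per task
    demand = {1: 10, 2: 10, 3: 10}
    cost = {1: [5.0, 7.0], 2: [3.0, 2.5], 3: [4.0]}
    time = {1: [6.0, 2.0], 2: [4.0, 8.0], 3: [3.0]}
    inputs = {2: [1], 3: [2]}
    print(tscc(selection, demand, cost), tsclt(selection, time, inputs, delivery_tasks=[3]))
```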
Fig. 2 Pareto solutions
A well-established approach for solving multi-objective optimisation problems and generating the Pareto solutions (Fig. 2) is the Weighted Sum method [28], which consists of combining the objectives into one single fitness function, as shown in Eq. (5):

Fitness\ function = w_1 \cdot normTSCC + w_2 \cdot normTSCLT  (5)

with w_1 and w_2 the weights of the objective functions, summing to 1, and normTSCC and normTSCLT the objective functions normalised between 0 and 1. The Bees Algorithm approach is presented in Sect. 3.
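As a small illustration of Eq. (5), and assuming min–max normalisation bounds for both objectives are known or estimated (an assumption not spelled out here), the weighted-sum fitness could be computed as follows.

```python
def weighted_fitness(tscc, tsclt, bounds, w1=0.5, w2=0.5):
    """Weighted-sum fitness (Eq. 5) after min-max normalisation of both objectives.
    bounds = (tscc_min, tscc_max, tsclt_min, tsclt_max), assumed known or estimated."""
    c_min, c_max, t_min, t_max = bounds
    norm_tscc = (tscc - c_min) / (c_max - c_min)
    norm_tsclt = (tsclt - t_min) / (t_max - t_min)
    return w1 * norm_tscc + w2 * norm_tsclt

if __name__ == "__main__":
    print(weighted_fitness(1.27e8, 32, bounds=(1.26e8, 1.35e8, 30, 60)))
```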
3 The Bees Algorithm Approach

3.1 Notes on the Foraging Behaviour of Honeybees

A colony of bees is composed of a queen and thousands of worker bees [29]. The worker bees are allocated to different tasks in the colony, such as building the comb, cleaning, and foraging. The scout bees responsible for food foraging are sent to explore the external environment and, once they find a potentially good source, return to the hive and communicate the location of the source to the rest of the colony by means of movements forming the so-called waggle dance [30]. This dance contains useful information regarding the food source, such as the distance, direction and quality of the flower patch. Then, other bees are recruited to explore the found patches and, usually, the better the quality of the patch, the more bees are allocated. These bees will then explore the neighbourhood of the patches and will communicate the outcome of their search to their peers in the hive. This process is repeated a number of times, with scout bees looking for new flower patches randomly and recruited bees exploring around those patches.
3.2 Notes on the Bees Algorithm

Developed by Pham et al. [31, 32], the Bees Algorithm is a swarm-based, nature-inspired optimisation algorithm emulating the food foraging behaviour of honeybees. The BA uses a number of parameters, presented in Table 1, where n represents the initial population of scout bees looking for food patches, m the number of sites selected by the scout bees, e the number of the more promising or elite sites, neb the number of recruited bees exploring the neighbourhood of the elite sites, nsb the number of recruited bees exploring the neighbourhood of the remaining selected sites and, finally, ngh the size of the neighbourhood of each flower patch. The main steps of the BA are as follows (a simple illustrative sketch of these steps is given after Table 1):

1. Randomly initialise the n scout bees in the search space.
2. Evaluate the fitness of the visited patches.
3. Order the visited patches from the highest fitness found to the lowest. Select the first m patches with the higher fitness.
4. Assign neb recruited bees to the e elite patches with the higher fitness, and nsb recruited bees to the remaining m − e patches.
5. Perform local search in the neighbourhood of the patches according to the patch size ngh.
6. Assign the remaining bees from the initial population to a random search.
7. Evaluate the fitness of the new random patches.
8. Repeat the previous steps until a stopping criterion is met.

One of the main strengths of the BA lies in the combination of a global search (exploration) and a local neighbourhood search (exploitation), which allows the solution space to be searched effectively and the global optima to be found for several complex optimisation problems [33–38]. Moreover, the BA has been used to solve supply chain optimisation problems such as multi-objective supply chain design [25, 26, 39] and supplier selection [40]. The BA approach for solving the multi-objective supply chain optimisation problem is presented in Sect. 4.

Table 1 Parameters of the Bees Algorithm
Parameter | Detail
n | Number of scout bees
m | Number of selected sites
e | Number of elite sites
neb | Number of recruited bees for elite sites
nsb | Number of recruited bees for selected sites
ngh | Size of the neighbourhood for the patch
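The sketch below is a minimal continuous Bees Algorithm in Python following the eight steps listed above and using the parameter names of Table 1. It is an illustrative, simplified version (with no site abandonment or neighbourhood shrinking as used in enhanced variants) and not the code used for the results reported later in this chapter.

```python
import random

def bees_algorithm(f, dims, bounds, n=50, m=15, e=5, neb=10, nsb=6, ngh=1.0, iters=200):
    """Minimal continuous Bees Algorithm sketch; f is the fitness to minimise and
    bounds = (lo, hi) applies to every dimension."""
    lo, hi = bounds
    rand_sol = lambda: [random.uniform(lo, hi) for _ in range(dims)]
    pop = [rand_sol() for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=f)                                # steps 2-3: evaluate and rank patches
        new_pop = []
        for rank, site in enumerate(pop[:m]):
            bees = neb if rank < e else nsb            # step 4: more bees on elite sites
            local = [
                [min(hi, max(lo, x + random.uniform(-ngh, ngh))) for x in site]
                for _ in range(bees)
            ]                                          # step 5: neighbourhood search
            new_pop.append(min(local + [site], key=f))
        new_pop += [rand_sol() for _ in range(n - m)]  # steps 6-7: global random search
        pop = new_pop
    return min(pop, key=f)

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    print(bees_algorithm(sphere, dims=2, bounds=(-5.0, 5.0)))
```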
4 Bees Algorithm Approach for Solving the Multi-objective Supply Chain Optimisation Problem

This section presents the multi-objective supply chain optimisation approach using the BA, proposed by the author in [25]. The methodology flowchart for solving the problem outlined in Sect. 2 is shown in Fig. 3. Initially, the TSCC and TSCLT are normalised between 0 and 1. This step is necessary for combining the two objectives using the weighted sum approach and obtaining the fitness function. Then, the BA runs from step 1 to 7 as per Sect. 3 to minimise Eq. (5). Subsequently, the normalised TSCC and TSCLT corresponding to the solutions found by the BA are evaluated and added to the Pareto solutions, according to the rule that the TSCLT value must be higher than that of the previous point and the TSCC value lower (Fig. 2). These are the non-dominated solutions, while the others are dominated and therefore discarded from the Pareto set. The algorithm continues until the stopping criteria are met, in this case after a certain number of iterations, and finally the Pareto front is generated, including the best solutions found.

Fig. 3 SC optimisation BA approach flowchart
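The Pareto-filtering step just described can be sketched as a generic non-domination check. The Python fragment below is a simplified illustration; the exact archiving rule used in [25] compares consecutive points as stated above and may differ in detail.

```python
def update_pareto(archive, point):
    """Keep only non-dominated (normTSCLT, normTSCC) points, minimising both objectives:
    'point' is added unless an archived point is at least as good in both objectives."""
    dominates = lambda a, b: a[0] <= b[0] and a[1] <= b[1] and a != b
    if any(dominates(a, point) for a in archive):
        return archive
    return [a for a in archive if not dominates(point, a)] + [point]

if __name__ == "__main__":
    archive = []
    for p in [(0.9, 0.2), (0.5, 0.5), (0.6, 0.4), (0.5, 0.6), (0.2, 0.9)]:
        archive = update_pareto(archive, p)
    print(sorted(archive))   # only the non-dominated points remain
```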
4.1 Numerical Example

A real-world case of a BoM SC design problem is presented in [24] and solved there with another well-known meta-heuristic algorithm, the ACO. The same problem is solved using the BA approach in [25] and is presented here as a numerical example. The problem consists of designing a supply chain network for a bulldozer, based on its BoM. The SC model follows the structure shown in Fig. 1, including a supply stage of parts/components, four sub-assembly stages, a final assembly stage of three products and a delivery stage to four markets/regions. The supply stage is composed of eighteen tasks, the first sub-assembly stage of four tasks, the second and third sub-assembly stages of one task each, the fourth sub-assembly stage of two tasks, the final assembly stage of three tasks and, finally, the delivery stage to the markets of nine tasks. Overall, the SC includes thirty-eight tasks. Each task can potentially be performed by different options. The number of options available for each task at the different stages and the precedence/input relationships are summarised in Table 2. Overall, the SC problem involves one hundred and five options, with the following number of possible solutions/SC configurations:

\prod_{i=1}^{38\ tasks} (\text{number of options})_i = 1.284 \times 10^{16}  (6)

which makes this an NP-hard problem requiring a high computational effort. Each of the options for the tasks has a cost and a processing lead time, a period of twelve months is considered, and different demands for the three finished products are assigned to each delivery node (δ30 to δ38 equal 20, 12, 23, 10, 32, 21, 9, 17, 6), which are then used to calculate the demand at the previous tasks according to the BoM.

The BA approach is applied to find the Pareto solutions corresponding to different SC configurations, to minimise the TSCC and TSCLT simultaneously. Initially, a tuning phase of the BA parameters is performed to select appropriate values, while keeping w1 = w2 = 0.5 and ngh = 1, with one thousand iterations as the stopping criterion and running the algorithm one hundred times to allow a better search of the solution space. The values n = 50, m = 15, e = 5, neb = 10 and nsb = 6 proved to provide a good compromise between computational effort and quality of the solutions found. Then, with these parameter values, the BA is applied to solve the problem for nine different combinations of weights in the fitness function, from w1 = 0.1 and w2 = 0.9 to w1 = 0.9 and w2 = 0.1. A greater value of one of the weights tends to prioritise the respective objective, either TSCC or TSCLT. Therefore, the values of the weights depend on the decision-maker and on whether it is more important to prioritise cost or lead time. This might also be related to the type of products and markets, as in the case of fast-moving versus slow-moving consumer goods.
Table 2 BoM bulldozer task options and precedence relationships (based on [24])

SC stage and task number | Number of options | Precedences/inputs tasks
Supply stage
Task 1 | 4 | –
Task 2 | 3 | –
Task 3 | 4 | –
Task 4 | 4 | –
Task 5 | 3 | –
Task 6 | 2 | –
Task 7 | 3 | –
Task 8 | 4 | –
Task 9 | 2 | –
Task 10 | 5 | –
Task 11 | 3 | –
Task 12 | 3 | –
Task 13 | 2 | –
Task 14 | 3 | –
Task 15 | 2 | –
Task 16 | 3 | –
Task 17 | 3 | –
Task 18 | 3 | –
Sub-assembly stage 1
Task 19 | 2 | 1 2 3
Task 20 | 4 | 4 5
Task 21 | 4 | 7 8 9
Task 22 | 2 | 10 11
Sub-assembly stage 2
Task 23 | 2 | 6 20 21
Sub-assembly stage 3
Task 24 | 2 | 19 22 23
Sub-assembly stage 4
Task 25 | 3 | 12 13 24
Task 26 | 3 | 14 15 24
Final assembly stage
Task 27 | 3 | 16 25
Task 28 | 3 | 17 26
Task 29 | 3 | 18 26
Delivery stage
Task 30 | 3 | 27
Task 31 | 3 | 27
Task 32 | 3 | 27
Task 33 | 3 | 28
Task 34 | 3 | 28
Task 35 | 3 | 29
Task 36 | 3 | 29
Task 37 | 3 | 29
Task 38 | 3 | 29
Table 3 shows the results in terms of the quality of the Pareto solutions in comparison with those found by the ACO. The metrics considered are: the number of Pareto points; the minimum value of TSCC and TSCLT found by the two algorithms; the Spacing, which is a measure of the relative distance between consecutive solutions in the Pareto front (the lower its value, the more equally distributed the points are); and the Coverage of two sets, which considers how many solutions found by one algorithm are dominated by the solutions found by the other algorithm. The BA found more Pareto solutions in all cases, the same minimum TSCLT as the ACO (and a lower one in one case), and a lower minimum TSCC in all cases. Regarding the distribution of the Pareto solutions found, the two approaches exhibit similar behaviour, while the Coverage of two sets is better for the BA in the majority of the cases.

Table 3 BA and ACO comparison (based on [25])
Weights | Algorithm | Pareto points | Min TSCLT | Min TSCC | Spacing | Coverage of two sets
w1 = 0.1, w2 = 0.9 | BA | 14 | 30 days | $126,719,100 | 0.034 | C (BA, ACO) = 0.830
w1 = 0.1, w2 = 0.9 | ACO | 6 | 30 days | $1.2796e+08 | 0.093 | C (ACO, BA) = 0.070
w1 = 0.5, w2 = 0.5 | BA | 13 | 30 days | $126,595,752 | 0.073 | C (BA, ACO) = 0.640
w1 = 0.5, w2 = 0.5 | ACO | 11 | 30 days | $127,193,700 | 0.037 | C (ACO, BA) = 0.310
w1 = 0.9, w2 = 0.1 | BA | 15 | 30 days | $126,572,640 | 0.040 | C (BA, ACO) = 0.270
w1 = 0.9, w2 = 0.1 | ACO | 11 | 35 days | $1.2683e+08 | 0.040 | C (ACO, BA) = 0.530
5 Conclusions

In this chapter, the BA has been presented as a powerful tool for solving a complex multi-objective optimisation problem, namely supply chain network design. The case considered was a BoM-based supply chain consisting of multiple stages and tasks, each of which can be performed by alternative options; the problem therefore deals with the selection of a specific option for each task so as to minimise the total supply chain cost and lead time. From the theoretical perspective of multi-objective optimisation, the BA approach proved to find better solutions in terms of number of Pareto points, spacing and coverage of two sets when compared with another well-known optimisation algorithm. From a practical perspective, the BA approach allows complex supply chain networks to be designed and optimised effectively and a good trade-off between cost and time factors to be found. Moreover, because of its flexibility, future work may consider the application of the BA to the design of reverse and closed-loop supply chains. Other future research opportunities include the application of the BA approach to SC optimisation problems with more than two objectives or with uncertainty in the model parameters.
References 1. Bloomberg: the world economy’s supply chain problem keeps getting worse. https://www.blo omberg.com/news/articles/2021-08-25/the-world-economy-s-supply-chain-problem-keepsgetting-worse. Accessed 3 Oct 2021 2. Deb K (2011) Multi-objective optimisation using evolutionary algorithms: an introduction. In: Wang L, Ng AH, Deb K (eds) Multi-objective evolutionary optimisation for product design and manufacturing. Springer, London, pp 3–34 3. Lambiase A, Mastrocinque E, Miranda S, Lambiase A (2013) Strategic planning and design of supply chains: a literature review. Int J Eng Bus Manage 5:5–49 4. Prasanna Venkatesan S, Kumanan S (2012) A multi-objective discrete particle swarm optimisation algorithm for supply chain network design. Int J Logistic Syst Manage 11:375–406 5. Hammami R, Frein Y (2013) An optimisation model for the design of global multi-echelon supply chains under lead time constraints. Int J Prod Res 51:2760–2775 6. Zokaee S, Jabbarzadeh A, Fahimnia B, Sadjadi S (2014) Robust supply chain network design: an optimization model with real world application. Ann Oper Res 257:15–44 7. Yang G, Liu Y, Yang K (2015) Multi-objective biogeography-based optimization for supply chain network design under uncertainty. Comput Ind Eng 85:145–156 8. Zhang S, Lee C, Wu K, Choy K (2016) Multi-objective optimization for sustainable supply chain network design considering multiple distribution channels. Expert Syst Appl 65:87–99 9. Safaei A, Roozbeh A, Paydar M (2017) A robust optimization model for the design of a cardboard closed-loop supply chain. J Clean Prod 166:1154–1168 10. Sadeghi Rad R, Nahavandi N (2018) A novel multi-objective optimization model for integrated problem of green closed loop supply chain network design and quantity discount. J Cleaner Prod 196:1549–1565 11. Ghahremani-Nahr J, Kian R, Sabet E (2019) A robust fuzzy mathematical programming model for the closed-loop supply chain network design and a whale optimization solution algorithm. Expert Syst Appl 116:454–471 12. Fattahi M, Govindan K, Maihami R (2020) Stochastic optimization of disruption-driven supply chain network design with a new resilience metric. Int J Prod Econ 230:107755
Supply Chain Design and Multi-objective Optimisation …
301
Remanufacturing
Collaborative Optimisation of Robotic Disassembly Planning Problems using the Bees Algorithm

Jiayi Liu, Quan Liu, Zude Zhou, Duc Truong Pham, Wenjun Xu, and Yilin Fang
1 Introduction

Traditional manufacturing mainly focuses on profit, ignoring pollution emissions and material conservation. To protect the environment, sustainable manufacturing [1, 2] and remanufacturing [3–5] have recently received much attention. By recycling end-of-life (EoL) products, remanufacturing provides an effective way to reuse materials and protect the environment. Disassembly plays an important role in remanufacturing and is usually carried out by workers because of the complexity of EoL products [6]. However, high labor cost and low efficiency are the main disadvantages of manual disassembly. Robotic disassembly has therefore received growing attention because of its higher disassembly efficiency. Vongbunyong et al. proposed the concept of a cognitive robot, which was utilized to realize automatic disassembly [7]. Combined with the strategies of controlling basic and advanced
behaviour [8–11], cognitive robots can cope with uncertainties and variations in the disassembly process. Two solutions have been proposed for disassembling EoL products: the robotic disassembly cell and the robotic disassembly line.

Robotic disassembly planning problems have received much attention, including robotic disassembly sequence planning (RDSP) and the robotic disassembly line balancing problem (RDLBP). Robotic disassembly planning helps to reduce the disassembly cost and expedite the disassembly process [12, 13]. When a robotic disassembly cell is considered, RDSP provides the optimal disassembly sequence while respecting the precedence relationships between disassembly operations [14]. In a robotic disassembly line, disassembly tasks need to be allocated to several robotic workstations with ordered sequences in a balanced way [15]. Much of the research on DSP (disassembly sequence planning) and DLBP (the disassembly line balancing problem) is suitable only for manual disassembly and not for robotic disassembly, because traditional methods ignore the moving time needed to avoid obstacles between different disassembly points; RDSP and RDLBP must take this moving time into account.

RDSP and RDLBP are combinatorial optimization problems: RDSP generates the best disassembly sequence for a robotic disassembly cell, and RDLBP is used in robotic disassembly lines to obtain the best disassembly plans. Generally, multiple robotic workstations (robotic disassembly cells) in sequenced order form a robotic disassembly line. The performance of the robotic disassembly line is affected by each robotic disassembly cell, and the line in turn affects each cell. For example, the smoothness index of the robotic disassembly line is affected by the optimized disassembly sequence of every robotic workstation (robotic disassembly cell), so a reduction in the disassembly time of a workstation also changes the smoothness index. Conversely, a reduction in the cycle time of the robotic disassembly line directly affects the disassembly time of each robotic workstation in the line. Although DSP and DLBP have mostly been studied separately, few studies have attempted to optimize them cooperatively. The COP, the collaborative optimization of both RDSP and RDLBP, is therefore proposed to further improve disassembly efficiency.

The rest of this chapter is organized as follows. Section 2 summarizes the related work. Section 3 introduces the COP, feasible disassembly solution generation, and the optimization objectives and their weights. Section 4 presents the IDBA (improved discrete Bees Algorithm) used to deal with the COP. Section 5 verifies the proposed method with case studies (disassembly lines for gear pumps and cameras) and simulations based on RoboDK, and Sect. 6 provides conclusions.
2 Literature Review

Much research has been conducted on disassembly sequence planning (DSP) and the disassembly line balancing problem (DLBP). For DSP, an improved max–min ant system (IMMAS) algorithm was utilized to generate the optimal disassembly sequence in a virtual maintenance environment. To optimize the disassembly time for parallel selective disassembly, Smith et al. found the optimal disassembly sequence of a hard disk using recursive rules and the theory of modular design. The forward method is helpful in disassembling EoL products for repair, reuse and remanufacturing [16]. The Q-learning method was utilized to find disassembly sequences for waste electrical and electronic products. For DLBP, a multi-objective hybrid artificial bee colony algorithm was used to optimize the number of workstations and specific indices (smoothness index, demand index and hazardous index), considering fuzzy processing times for the components [17]. Avikal et al. determined the relative importance of different disassembly attributes of components through the analytic hierarchy process (AHP) [18] and prioritized the disassembly tasks for workstations with the help of PROMETHEE. A beam search algorithm can also be applied to find optimal disassembly solutions that minimize the number of workstations.

The aforementioned methods cannot achieve robotic disassembly. During robotic disassembly, the time taken by the robotic end-effector to move between different disassembly points should be considered, which the aforementioned research usually ignores. Recently, some studies have taken the robotic end-effector's moving time into consideration. ElSayed et al. calculated the moving time between different disassembly points using the Euclidean distance and used an online genetic algorithm to solve DSP [19]. After that, Alshibli et al. found an optimal disassembly sequence for robotic disassembly using a tabu search [20]. However, in robotic disassembly, the robotic end-effector's moving time between different disassembly points should be calculated from an obstacle-avoidance moving path rather than from the Euclidean distance, which fails to consider the obstacles caused by EoL product contours. Therefore, more attention should be given to RDSP and RDLBP.

For the disassembly cell, DSP searches for the optimal disassembly sequence [21], and DLBP assigns the most suitable disassembly tasks to disassembly workstations in sequenced order in a balanced way [22]. Generally, several disassembly workstations (disassembly cells) form a disassembly line. An increase or decrease in a disassembly workstation's working time directly affects the disassembly line's smoothness index, and changes in the cycle time influence each disassembly workstation's working time in the line. Therefore, the robotic disassembly line and each robotic disassembly cell affect each other in terms of disassembly performance.

DSP and DLBP have been extensively studied separately. For DSP, under the influence of environmental and economic factors, a multi-objective genetic algorithm was proposed to optimally disassemble coffee makers under
the partial disassembly mode [23]. Based on the disassembly precedence graph, the disassembly level was optimized by an improved co-evolutionary optimization algorithm, and the recovery options and disassembly sequence were optimized in the same way [24]. For DLBP, researchers developed a decision tool to find the optimal disassembly line alternatives, considering the uncertainty of the processing time of each disassembly task [25]. DLBP can also be solved by ant colony optimization and particle swarm optimization, but these two approaches fall behind hybrid algorithms. However, the aforementioned publications studied only DSP or DLBP, and little attention has been given to optimizing DSP and DLBP collaboratively.

For both RDSP and RDLBP, when the COP with multiple optimization objectives is considered, suitable weights should be assigned to the indicators. In [18], the weights of the follower number and the disassembly time were decided using AHP; the revenue generated, part demand and part hazardousness can also be weighted through AHP. In cloud manufacturing, AHP was utilized to assign suitable weights to different factors [26], and the ant colony algorithm was then used to solve the service selection problem. In addition, Gupta et al. evaluated sustainable manufacturing practices using AHP in terms of lean practices, process design, eco-design and so on [27]. However, the multiple optimization objectives of the COP cannot be fully addressed by the AHP method. Different from AHP, the analytic network process (ANP) considers the interactive relationships between different factors, and it has been applied in many fields. Lee et al. utilized ANP to build a smartness assessment framework for smart factories, and twenty manufacturing enterprises were used to verify the proposed method [28]. ANP was also applied to the supplier selection process in combination with quality function deployment, and the best supplier was selected according to customer needs [29]. From the above analysis, it is clear that ANP has been applied in manufacturing and supply chain management. When collaborative robotic disassembly planning is considered, the relationships between the different optimization objectives should also be included. For instance, in robotic disassembly lines the cycle time is affected by the number of robotic disassembly workstations: a shorter cycle time means that more robotic disassembly workstations are needed and vice versa. When AHP is utilized for the COP, imprecise weights are obtained because it ignores the interactions between the different optimization objectives.
3 The Collaborative Optimization Problem

3.1 Assumption, Definition and Workflow

In this chapter, the following assumptions are made: a specific type of EoL product is disassembled, and the structure and components of the EoL product are analysed. All the components can be completely disassembled. Each component is linked to a robotic disassembly workstation.
Fig. 1 Workflow of the proposed method [30]: feasible disassembly solutions (generated by SIMM) and the corresponding robotic disassembly line solutions are evaluated by the improved discrete Bees Algorithm for the RDSP and RDLBP objectives of the COP, using an ANP-weighted fitness, over different cycle times of the robotic disassembly line (such as 15 s to 30 s with an interval of 1 s); the cycle time with the minimum fitness value is then reduced to the maximum working time of the robotic workstations to give the optimal solution of the COP
The total working time of each robotic workstation should not be longer than the disassembly line's cycle time.

RDSP, a typical optimization problem, aims to find the optimal disassembly sequence for a robotic disassembly cell. RDLBP, an optimization problem like RDSP, aims to assign the optimal disassembly tasks to robotic workstations in sequenced order in a balanced way. Generally, several robotic workstations (robotic disassembly cells) form a robotic disassembly line. For the robotic disassembly line, the smoothness index changes with any decline in the total disassembly time (the indicator of RDSP) of a robotic workstation (robotic disassembly cell). Therefore, the performances of the robotic disassembly line and of each robotic disassembly cell directly affect each other. Based on this, the COP considers the interactive relationships between the indicators of RDSP and RDLBP, optimizes the two problems collaboratively and thereby improves the disassembly efficiency.

Figure 1 shows the workflow of the proposed method for solving the COP. First, as described in Sect. 3.2, the simplified interference matrix method (SIMM) generates feasible disassembly sequences, and the robotic workstation assignment method is used to obtain solutions for the robotic disassembly line. The optimization objectives of the COP are given in Sect. 3.3, and ANP assigns appropriate weights to the different indicators. In Sect. 4, the IDBA is used to generate optimal solutions. In Sect. 5, the IDBA is run for a range of cycle times, and CT_best denotes the cycle time whose solution has the minimum fitness value. CT_best is the optimal solution of the COP once it has been reduced to the maximum working time of the robotic workstations.
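As a rough illustration of this workflow (a minimal sketch only; the helper names run_idba and max_workstation_time are hypothetical stand-ins for the methods of Sects. 3 and 4, not functions defined in this chapter), the loop below scans candidate cycle times, keeps the one with the minimum fitness value and then tightens it to the maximum workstation working time:

```python
def optimise_cop(product, candidate_cycle_times=range(15, 31)):
    """Sketch of the outer loop in Fig. 1 (assumed helpers, illustrative only)."""
    best = None                                        # (fitness, cycle_time, solution)
    for ct in candidate_cycle_times:                   # e.g. 15 s .. 30 s, interval 1 s
        solution = run_idba(product, cycle_time=ct)    # hypothetical IDBA call (Sect. 4)
        if best is None or solution.fitness < best[0]:
            best = (solution.fitness, ct, solution)
    _, ct_best, solution = best
    # CT_best is finally reduced to the maximum working time among the workstations.
    ct_best = max_workstation_time(solution)           # hypothetical helper
    return ct_best, solution
```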
3.2 Feasible Disassembly Sequence Generation

The feasible disassembly sequence is generated based on the disassembly model of the EoL product, which can be built with Petri nets [31], graph-based methods [32] or matrix-based methods [33]. During the disassembly process, the feasible disassembly direction of a part is decided by the disassembly status of the other parts rather than being fixed. In
this chapter, feasible disassembly solutions, including the disassembly sequences and the corresponding disassembly directions, are generated by SIMM. More details about SIMM can be found in [34].
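The exact SIMM procedure is given in [34]; purely to illustrate the general interference-matrix idea behind it, the sketch below assumes one 0/1 matrix per disassembly direction, in which entry (i, j) = 1 means part j blocks part i along that direction, and checks whether a candidate sequence is feasible:

```python
def feasible_directions(interference, part, remaining):
    """Directions in which `part` can be removed, given the parts still assembled.

    `interference` maps a direction label (e.g. 'x+') to an n-by-n 0/1 matrix in
    which entry (i, j) = 1 means part j blocks part i along that direction.
    This is a generic interference-matrix check, not the exact SIMM of [34].
    """
    return [d for d, mat in interference.items()
            if all(mat[part][j] == 0 for j in remaining if j != part)]

def is_feasible(sequence, interference):
    """Check a disassembly sequence and record one feasible direction per part."""
    remaining, plan = set(sequence), []
    for part in sequence:
        dirs = feasible_directions(interference, part, remaining)
        if not dirs:
            return False, []           # some remaining part blocks every direction
        plan.append((part, dirs[0]))   # disassembly sequence plus direction
        remaining.remove(part)
    return True, plan
```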
3.3 Optimization Objectives and the Weights

The COP aims to optimize the indicators of RDSP and RDLBP. RDSP is a decision problem for robotic disassembly cells; it generates the optimal sequence while satisfying the disassembly precedence constraints. Several robotic workstations form a robotic disassembly line, in which the disassembly tasks are executed continuously and the products are transferred along the line sequentially. During each cycle time, every robotic workstation's disassembly tasks are completed. RDLBP is the decision problem of allocating the optimal tasks to the robotic workstations in a balanced way. Thus, the COP improves the disassembly efficiency by collaboratively optimizing RDSP and RDLBP.
3.3.1 Indicators of RDSP
In this chapter, for every robotic workstation, the optimization objective of RDSP is the total disassembly time [19]. The total disassembly time includes the basic working time, the time used for changing the disassembly tool, the time used to change the disassembly direction and the time used to move between disassembly points. The basic disassembly time, i.e. the time used to disassemble a component, is assumed to be constant in this chapter. During the disassembly process, posture adjustments mean that the robot needs additional time whenever the disassembly direction changes. The penalty time caused by disassembly direction changes is described by Eq. 1, where dir_i represents the disassembly direction of the ith operation.
$$
dt_{i,j} =
\begin{cases}
0 & |dir_i - dir_j| = 0 \\
1 & |dir_i - dir_j| = 90^{\circ} \\
2 & |dir_i - dir_j| = 180^{\circ}
\end{cases}
\tag{1}
$$

$$
TT =
\begin{pmatrix}
TT_1 & tt_{1,2} & tt_{1,3} & tt_{1,4} & tt_{1,5} & tt_{1,6} \\
tt_{2,1} & TT_2 & tt_{2,3} & tt_{2,4} & tt_{2,5} & tt_{2,6} \\
tt_{3,1} & tt_{3,2} & TT_3 & tt_{3,4} & tt_{3,5} & tt_{3,6} \\
tt_{4,1} & tt_{4,2} & tt_{4,3} & TT_4 & tt_{4,5} & tt_{4,6} \\
tt_{5,1} & tt_{5,2} & tt_{5,3} & tt_{5,4} & TT_5 & tt_{5,6} \\
tt_{6,1} & tt_{6,2} & tt_{6,3} & tt_{6,4} & tt_{6,5} & TT_6
\end{pmatrix}
\tag{2}
$$

where the rows and columns of TT correspond, in order, to Sp, Sc, Gr, Pl, Ha and EC.
Fig. 2 Different disassembly operations and tools [30]: disassembly operations (such as removing, unscrewing and cutting) map to tools (such as screwdrivers/spanners, grippers/pliers and cutting discs) and to tool types (such as bolt sizes M1/M2/M3/M4, slotted/Phillips screws, gripper sizes and disc thicknesses)
$$
TT_1 =
\begin{pmatrix}
0 & tta_{1,2} & tta_{1,3} & \cdots & tta_{1,n} \\
tta_{2,1} & 0 & tta_{2,3} & \cdots & tta_{2,n} \\
tta_{3,1} & tta_{3,2} & 0 & \cdots & tta_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
tta_{n,1} & tta_{n,2} & tta_{n,3} & \cdots & 0
\end{pmatrix}
\tag{3}
$$

where the rows and columns correspond to the tool types M1, M2, …, Mn.
The robot also needs additional time to change tools between different adjacent disassembly operations. Equation 2 describes this additional disassembly time, where 'Sp' means spanners, 'Sc' means screwdrivers, 'Gr' means grippers, 'Pl' means pliers, 'Ha' means hammers and 'EC' means electrical cutting. Once the disassembly operations are determined, different kinds (sizes) of disassembly tools are still needed to disassemble different types of components. For instance, as shown in Fig. 2, different kinds of spanners are used to disassemble different kinds of bolts. Changing between these tool types also needs additional time, which is described by Eq. 3.
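To make Eqs. 1–3 concrete, the sketch below accumulates the basic working time, the direction-change penalty of Eq. 1, the tool-change times of Eqs. 2–3 and the moving time between disassembly points; the lookup tables and the representation of an operation are assumptions made for illustration, not data from the case studies:

```python
def direction_penalty(dir_i, dir_j):
    """Penalty time of Eq. 1 for axis-aligned directions given as angles in degrees."""
    diff = abs(dir_i - dir_j) % 360
    diff = min(diff, 360 - diff)                 # axis-aligned directions give 0, 90 or 180
    return {0: 0, 90: 1, 180: 2}[diff]

def total_disassembly_time(ops, basic_time, tool_change, move_time):
    """Total time of a sequence `ops`, each op = (direction_deg, tool, point).

    `tool_change[(t1, t2)]` plays the role of the TT / TT1 matrices (Eqs. 2-3) and
    `move_time[(p1, p2)]` is the obstacle-avoidance moving time between points;
    both are assumed lookup tables supplied by the user.
    """
    total = len(ops) * basic_time                # constant basic working time per component
    for (d1, t1, p1), (d2, t2, p2) in zip(ops, ops[1:]):
        total += direction_penalty(d1, d2)       # Eq. 1
        total += tool_change.get((t1, t2), 0)    # Eqs. 2-3
        total += move_time.get((p1, p2), 0)      # moving time between disassembly points
    return total
```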
3.3.2 Indicators of RDLBP
The following indicators are used to evaluate the solution quality of RDLBP for the robotic disassembly line: f1 is the cycle time, f2 is the number of robotic workstations and f3 is the smoothness index. These indicators are described by Eqs. 4–6, in which CT represents the cycle time, nr is the number of robotic workstations and T_rob,i is the ith robotic workstation's total working time.
• Optimize the cycle time: a decline in the cycle time of the robotic disassembly line means an improvement in efficiency.
• Optimize the number of workstations: this reduces the cost of the robotic disassembly line.
• Optimize the smoothness index: this balances the workload of the robotic workstations. Minimizing the smoothness index gives a lower f3 value, with all the robotic workstations having similar idle times.

$$
f_1 = CT
\tag{4}
$$

$$
f_2 = n_r
\tag{5}
$$

$$
f_3 = \sqrt{\sum_{i=1}^{n_r} (CT - T_{rob,i})^2 / n_r}
\tag{6}
$$
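A minimal sketch of Eqs. 4–6 follows, assuming the working time of each robotic workstation is already known:

```python
import math

def rdlbp_indicators(cycle_time, workstation_times):
    """Return (f1, f2, f3) of Eqs. 4-6 for one robotic disassembly line solution."""
    f1 = cycle_time                                # Eq. 4: cycle time
    f2 = len(workstation_times)                    # Eq. 5: number of robotic workstations
    f3 = math.sqrt(                                # Eq. 6: smoothness index
        sum((cycle_time - t) ** 2 for t in workstation_times) / f2
    )
    return f1, f2, f3

# Example: CT = 20 s and three workstations with working times of 18 s, 19 s and 16 s.
print(rdlbp_indicators(20, [18, 19, 16]))          # (20, 3, about 2.65)
```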
With the feasible disassembly sequence obtained by SIMM in Sect. 3.2, the robotic workstation assignment method is used to assign suitable tasks to each robotic workstation of the robotic disassembly line; more details of this assignment method are given in [15]. Equation 7 is used to evaluate the performance of a COP solution, and Eq. 8 calculates f_i,norm, the normalized value of indicator f_i. Here f4 is the maximum working time of all the robotic workstations (see Fig. 3).
$$
fit = w_1 f_{1,norm} + w_2 f_{2,norm} + w_3 f_{3,norm} + w_4 f_{4,norm}
\tag{7}
$$

$$
f_{i,norm} = (f_i - f_{i,min})/(f_{i,max} - f_{i,min}), \quad i = 1, 2, 3, 4
\tag{8}
$$
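A sketch of this weighted-sum evaluation follows; the weights w1–w4 come from the ANP procedure of Sect. 3.3.3, and the minimum and maximum of each indicator are assumed to be taken over the current set of candidate solutions:

```python
def normalise(value, vmin, vmax):
    """Eq. 8; if all solutions share the same indicator value, return 0."""
    return 0.0 if vmax == vmin else (value - vmin) / (vmax - vmin)

def cop_fitness(indicators, bounds, weights):
    """Eq. 7: weighted sum of the normalised indicators f1..f4.

    `indicators` = (f1, f2, f3, f4), `bounds` = [(f_i_min, f_i_max), ...] and
    `weights` = (w1, w2, w3, w4), e.g. read from the ANP limit supermatrix.
    """
    return sum(w * normalise(f, lo, hi)
               for f, (lo, hi), w in zip(indicators, bounds, weights))
```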
3.3.3 The Weights Obtained by Analytic Network Process
To obtain quantitative assessments of solutions, suitable values of w1, w2, w3 and w4 need to be assigned to the different indicators. For the COP, if the robotic disassembly line has a larger cycle time, fewer robotic workstations are needed; on the contrary, a smaller cycle time leads to more robotic workstations. In this chapter, ANP [35] is utilized to obtain suitable weights for the different indicators while considering their interactive relationships, since the assessment results are clearly affected by the interactions between indicators [36]. The network structure of the COP's optimization objectives is shown in Fig. 3.

First, experts build pairwise comparison matrices through their judgements. For the COP, five matrices should be built: the first contains the pairwise comparisons of the indicators with respect to the solution evaluation, the second with respect to the cycle time, the third with respect to the number of robotic workstations, the fourth with respect to the smoothness index of the robotic disassembly line and the fifth with respect to the maximum working time of all the robotic workstations.
Fig. 3 The network structure of COP [30]: solution evaluation (f), the cycle time (f1), the number of robotic workstations (f2), the smoothness index of the robotic disassembly line (f3) and the maximum working time of all the robotic workstations (f4)
For instance, Eq. 9 describes the pairwise comparison matrix of the indicators with respect to the solution evaluation. In this equation, the relative weight between indicators i and j is represented as a_ij on a 1-to-9 scale; the scale proposed by Saaty [37] is shown in Table 1. Obviously, the matrix is a reciprocal matrix. After the experts finish their judgements, Eq. 10 is used to check the consistency ratio (CR). The random average index (RI) is given in Table 2 [38], and the consistency index (CI) is calculated by Eq. 11, in which λmax is the maximal eigenvalue and n is the size of matrix A. After a temporary pairwise comparison matrix is obtained, its maximal eigenvalue λmax is calculated, CI is obtained from Eq. 11 and CR is obtained from Eq. 10 and Table 2. If CR is greater than the acceptance threshold (0.05 for n = 3, 0.08 for n = 4 and 0.1 for n ≥ 5 [38]), the temporary pairwise comparison matrix must be re-evaluated and corrected until it is acceptable. Once the matrix is accepted, the principal right eigenvector (corresponding to λmax) is normalized to give a local priority vector P1,f = (p1, p2, …, pn). The remaining four matrices and their local priority vectors are generated in the same way, as described in Fig. 4. After that, the weighted supermatrix is obtained in step 3 of Fig. 4. By raising the weighted supermatrix to powers until it converges, the limit supermatrix is generated with unique values [39]. Finally, the weights of the indicators are read from the limit supermatrix.
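As a rough illustration of the last two steps only (weighted supermatrix to limit supermatrix), the sketch below raises a column-stochastic weighted supermatrix to successive powers until it converges; the weights of the indicators can then be read from its columns. This is a generic ANP computation, not the specific matrices of [30]:

```python
import numpy as np

def limit_supermatrix(weighted_supermatrix, tol=1e-9, max_iter=1000):
    """Repeatedly square a column-stochastic weighted supermatrix until it converges."""
    m = np.asarray(weighted_supermatrix, dtype=float)
    for _ in range(max_iter):
        m_next = m @ m
        if np.max(np.abs(m_next - m)) < tol:
            return m_next
        m = m_next
    return m
```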
$$
A_1 =
\begin{array}{c|cccc}
 & f_1 & f_2 & f_3 & f_4 \\
\hline
f_1 & 1 & a_{12} & a_{13} & a_{14} \\
f_2 & 1/a_{12} & 1 & a_{23} & a_{24} \\
f_3 & 1/a_{13} & 1/a_{23} & 1 & a_{34} \\
f_4 & 1/a_{14} & 1/a_{24} & 1/a_{34} & 1
\end{array}
\tag{9}
$$
Table 1 1-to-9 scale used in ANP [30]

Verbal judgment of precedence | Scale | Explanation
Equally preferred | 1 | Two indicators have equal importance
Moderately | 3 | Judgments slightly favor one indicator over another
Strongly | 5 | Judgments strongly favor one indicator over another
Very strongly | 7 | One indicator is strongly favored over another and its dominance is demonstrated
Extremely | 9 | The evidence favoring one indicator over another is of the highest degree possible for affirmation
Intermediate values | 2, 4, 6, 8 | A compromise is used between two adjacent judgments
Table 2 The random average index [30]

n  | 1 | 2 | 3    | 4    | 5
RI | 0 | 0 | 0.52 | 0.89 | 1.11
Fig. 4 The weights obtained by ANP [30]
$$
CR = CI / RI
\tag{10}
$$

$$
CI = (\lambda_{max} - n)/(n - 1)
\tag{11}
$$
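For illustration, the consistency check of Eqs. 10 and 11 can be sketched as follows, with numpy used to obtain the maximal eigenvalue and the normalised principal eigenvector (the local priority vector); the RI values follow Table 2:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.52, 4: 0.89, 5: 1.11}         # Table 2

def check_consistency(a):
    """Return (local priority vector, CR) for a pairwise comparison matrix `a`."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    lam_max = eigvals.real[k]                             # maximal eigenvalue
    priority = np.abs(eigvecs[:, k].real)
    priority /= priority.sum()                            # normalised principal eigenvector
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0        # Eq. 11
    cr = ci / RI[n] if RI[n] > 0 else 0.0                 # Eq. 10
    return priority, cr                                   # accept if CR <= 0.05 / 0.08 / 0.1
```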
4 The Improved Discrete Bees Algorithm

The BA (Bees Algorithm) is inspired by the foraging behavior of bees [34]. The COP is solved by the IDBA, as shown in Fig. 5. First, sn (the number of scout bees), m (the number of selected sites), mb (the number of follower bees around the selected sites), n (the number of elite sites), nb (the number of follower bees around the elite sites) and iter (the number of iterations) are initialized. After that, SIMM generates sn feasible disassembly sequences, and robotic disassembly line solutions are generated from these sequences using the robotic workstation assignment method. These sn scout bees explore the nectar sources (the sites), which are ranked according to their fitness values. The elite sites are the best n nectar sources, and the selected sites are the best m nectar sources. The neighbourhood of each elite site is explored by variable neighbourhood search (VNS) with the help of nb follower bees. The remaining sites are abandoned, and sn − m new bees are generated with the help of SIMM and the robotic workstation assignment method. The procedure stops when the maximum iteration number iter is reached (Fig. 6).
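The main loop just described can be sketched as follows (illustrative only: the helpers generate_solution, which stands for SIMM plus the workstation assignment method, evaluate and vns are assumed, and details of Fig. 5 such as the i1/i2/i3 counters and feasibility repair are omitted):

```python
def idba(sn, m, n, nb, mb, iters, generate_solution, evaluate, vns):
    """Simplified main loop of the improved discrete Bees Algorithm (sketch)."""
    sites = [generate_solution() for _ in range(sn)]     # sn scout bees
    for _ in range(iters):
        sites.sort(key=evaluate)                         # rank nectar sources by fitness
        new_sites = []
        for i, site in enumerate(sites[:m]):             # best m selected sites
            bees = nb if i < n else mb                   # best n (elite) sites get nb bees
            neighbours = [vns(site) for _ in range(bees)]
            new_sites.append(min(neighbours + [site], key=evaluate))
        # Abandon the remaining sites; scouts generate sn - m new solutions.
        new_sites += [generate_solution() for _ in range(sn - m)]
        sites = new_sites
    return min(sites, key=evaluate)
```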
4.1 Representation of Bees

For the bee shown in Fig. 10, the disassembly sequence and disassembly directions are obtained using SIMM, and the assignment of the feasible disassembly solution to robotic workstations is accomplished by the method mentioned in Sect. 3.2.
4.2 Variable Neighborhood Search

As Fig. 7 shows, the following neighborhood structures are used. In neighborhood structure N1, a two-bit exchange of the disassembly sequence and the corresponding exchange of the disassembly directions happen at the same time. In neighborhood structure N2, a randomly chosen disassembly direction of a bee is changed by 180 degrees. In the robotic disassembly line, neighborhood structure N3 means a two-bit exchange of the disassembly sequence '2-5-6-7' and, at the same time, the exchange of the corresponding disassembly directions 'z+/x+/x+/x−', applied to workstation 3, which has the maximum working time. Neighborhood structure N4 randomly selects one bit, in both the disassembly sequence and the disassembly direction, and inserts it at a random position. Neighborhood structure N5 reverses the order of a randomly selected segment. The neighborhood search may generate an infeasible solution even when it starts from a feasible one; thus, the neighborhood search is repeated until a feasible solution is obtained.
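For illustration, two of the five moves can be sketched as below for a bee represented as parallel lists of task indices and direction labels (an assumed representation); an infeasible neighbour would simply be rejected and the move retried:

```python
import random

def n1_swap(sequence, directions):
    """N1: exchange two bits of the sequence together with their directions."""
    seq, dirs = sequence[:], directions[:]
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    dirs[i], dirs[j] = dirs[j], dirs[i]
    return seq, dirs

OPPOSITE = {"x+": "x-", "x-": "x+", "y+": "y-", "y-": "y+", "z+": "z-", "z-": "z+"}

def n2_flip(sequence, directions):
    """N2: change one randomly chosen disassembly direction by 180 degrees."""
    dirs = directions[:]
    k = random.randrange(len(dirs))
    dirs[k] = OPPOSITE[dirs[k]]
    return sequence[:], dirs
```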