Springer Tracts in Nature-Inspired Computing
Nilanjan Dey Editor
Applications of Cuckoo Search Algorithm and its Variants
Springer Tracts in Nature-Inspired Computing Series Editors Xin-She Yang, School of Science and Technology, Middlesex University, London, UK Nilanjan Dey, Department of Information Technology, Techno India College of Technology, Kolkata, India Simon Fong, Faculty of Science and Technology, University of Macau, Macau, Macao
The book series is aimed at providing an exchange platform for researchers to summarize the latest research and developments related to nature-inspired computing in the most general sense. It includes analysis of nature-inspired algorithms and techniques, inspiration from natural and biological systems, computational mechanisms and models that imitate them in various fields, and the applications to solve real-world problems in different disciplines. The book series addresses the most recent innovations and developments in nature-inspired computation, algorithms, models and methods, implementation, tools, architectures, frameworks, structures, applications associated with bio-inspired methodologies and other relevant areas. The book series covers the topics and fields of Nature-Inspired Computing, Bio-inspired Methods, Swarm Intelligence, Computational Intelligence, Evolutionary Computation, Nature-Inspired Algorithms, Neural Computing, Data Mining, Artificial Intelligence, Machine Learning, Theoretical Foundations and Analysis, and Multi-Agent Systems. In addition, case studies, implementation of methods and algorithms as well as applications in a diverse range of areas such as Bioinformatics, Big Data, Computer Science, Signal and Image Processing, Computer Vision, Biomedical and Health Science, Business Planning, Vehicle Routing and others are also an important part of this book series. The series publishes monographs, edited volumes and selected proceedings.
More information about this series at http://www.springer.com/series/16134
Editor Nilanjan Dey Department of Information Technology Techno International New Town Kolkata, West Bengal, India
ISSN 2524-552X ISSN 2524-5538 (electronic) Springer Tracts in Nature-Inspired Computing ISBN 978-981-15-5162-8 ISBN 978-981-15-5163-5 (eBook) https://doi.org/10.1007/978-981-15-5163-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
Evolutionary-based meta-heuristic approaches are effectively applied to solve complicated optimization problems in several real-world applications. One of the successful optimization algorithms is Cuckoo Search (CS), which has become an active research area for solving N-dimensional and linear/nonlinear optimization problems using simple mathematical processes. Since its introduction in 2009, CS has attracted the attention of many researchers, giving rise to numerous variants of the basic CS with enhanced performance. This book highlights the basic concepts of the CS algorithm and its foremost variants, and their use in solving diverse optimization problems in medical and engineering applications. This volume comprises thirteen chapters presenting different CS applications for solving optimization problems. In Chap. 1, Mondal et al. conducted a cutting-edge survey focused on the uses of CS and its variants in different digital image processing stages, including image enhancement, thresholding, segmentation, feature selection, classification, and compression. Then, in Chap. 2, Campuzano et al. applied CS to parametric data fitting of characteristic curves of the Van der Waals equation of state. This study also included a comparison with polynomial curve fitting and a multilayer perceptron neural network, as well as two popular nature-inspired meta-heuristic methods, namely the firefly and bat algorithms. In Chap. 3, Ozsoydan et al. introduced the cuckoo search algorithm with various walks. Novel movement procedures were proposed, including quantum, Brownian, and random walks for CS, alongside the standard CS mechanism, i.e., Lévy flights. In Chap. 4, Dao implemented a statistical-based CS algorithm for engineering applications, such as a camera-positioning device task, in which CS was used to optimize the design variables, objective functions, and constraints. In Chap. 5, Kotwal et al.
employed the CS algorithm to train a feedforward neural network, addressing its inherent problem of becoming trapped in local minima. This study delineated three different applications of the CS-based artificial neural network. In Chap. 6, Carbas and Aydogdu optimized the design of real-sized high-level steel frames using CS. Afterward, in Chap. 7, Singh and Shukla proposed a cuckoo search algorithm user interface for parameter optimization of the ultrasonic machining process, optimizing the required parameters to obtain the desired user interface profile on the machined surface with less residual damage. In Chap. 8, García-Gutiérrez et al. applied CS to tune fuzzy logic controller parameters. In Chap. 9, Das et al. reported different speech processing applications using CS. In Chap. 10, Öcal and Pekcan designed a CS-based back-calculation algorithm for estimating layer properties of full-depth flexible pavements. The performance of the proposed algorithm was evaluated using synthetically calculated deflections from finite element-based software and deflection data obtained from the field. Moreover, a comparative study including Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and the Gravitational Search Algorithm (GSA) demonstrated the efficiency of CS relative to these other optimization methods. In Chap. 11, Tutuş et al. applied CS to an objective-based design approach for retaining walls, followed by Chap. 12, wherein Altun et al. proposed hybrid CS and differential evolution algorithms for optimizing the cost of mechanically stabilized earth walls. Finally, in Chap. 13, Maroosi proposed a cuckoo search algorithm inspired by membrane systems, which can concurrently evaluate more than one cost function on parallel devices. The editor expresses his gratitude to the contributors for their valuable contributions and to the respected referees. Extended gratitude is directed to the members of the Springer team for their support. I, the editor, hope this book offers advanced, valuable research on cuckoo search and its variants for solving diverse optimization problems in real-world engineering and clinical applications. Kolkata, India
Nilanjan Dey, Ph.D.
Contents
1 Cuckoo Search and Its Variants in Digital Image Processing: A Comprehensive Review . . . 1
Atreyee Mondal, Nilanjan Dey, and Amira S. Ashour
2 Cuckoo Search Algorithm for Parametric Data Fitting of Characteristic Curves of the Van der Waals Equation of State . . . 21
Almudena Campuzano, Andrés Iglesias, and Akemi Gálvez
3 Cuckoo Search Algorithm with Various Walks . . . 47
F. B. Ozsoydan and İ. Gölcük
4 Cuckoo Search Algorithm: Statistical-Based Optimization Approach and Engineering Applications . . . 79
Thanh-Phong Dao
5 Training a Feed-Forward Neural Network Using Cuckoo Search . . . 101
Adit Kotwal, Jai Kotia, Rishika Bharti, and Ramchandra Mangrulkar
6 Cuckoo Search for Optimum Design of Real-Sized High-Level Steel Frames . . . 123
Serdar Carbas and Ibrahim Aydogdu
7 Application of Cuckoo Search Algorithm User Interface for Parameter Optimization of Ultrasonic Machining Process . . . 147
D. Singh and R. S. Shukla
8 The Cuckoo Search Algorithm Applied to Fuzzy Logic Control Parameter Optimization . . . 175
G. García-Gutiérrez, D. Arcos-Aviles, E. V. Carrera, F. Guinjoan, A. Ibarra, and P. Ayala
9 Impact of Cuckoo Algorithm in Speech Processing . . . 207
Akalpita Das, Himanish Shekhar Das, and Himadri Shekhar Das
10 Cuckoo Search Based Backcalculation Algorithm for Estimating Layer Properties of Full-Depth Flexible Pavements . . . 229
A. Öcal and O. Pekcan
11 An Objective-Based Design Approach of Retaining Walls Using Cuckoo Search Algorithm . . . 253
E. B. Tutuş, T. Ghalandari, and O. Pekcan
12 A Hybrid Cuckoo Search Algorithm for Cost Optimization of Mechanically Stabilized Earth Walls . . . 277
M. Altun, Y. Yalcin, and O. Pekcan
13 A Cuckoo Search Algorithm Inspired from Membrane Systems . . . 307
A. Maroosi
About the Editor
Nilanjan Dey is an Assistant Professor at the Department of Information Technology, Techno International New Town (formerly known as Techno India College of Technology), Kolkata, India. He is a Visiting Fellow of the University of Reading, UK; a Visiting Professor at Duy Tan University, Vietnam; and was an honorary Visiting Scientist at Global Biomedical Technologies Inc., CA, USA (2012–2015). He was awarded his Ph.D. by Jadavpur University in 2015. He is the Editor-in-Chief of the International Journal of Ambient Computing and Intelligence, IGI Global. He is the Series Co-Editor of Springer Tracts in Nature-Inspired Computing, Springer Nature; Series Co-Editor of Advances in Ubiquitous Sensing Applications for Healthcare, Elsevier; and Series Editor of Computational Intelligence in Engineering Problem Solving and Intelligent Signal Processing and Data Analysis, CRC. He has authored/edited more than 50 books with Springer, Elsevier, Wiley, and CRC Press, and published more than 300 peer-reviewed research papers. His main research interests include medical imaging, machine learning, computer-aided diagnosis, and data mining. He is the Indian Ambassador of the International Federation for Information Processing (IFIP)—Young ICT Group.
Chapter 1
Cuckoo Search and Its Variants in Digital Image Processing: A Comprehensive Review Atreyee Mondal, Nilanjan Dey, and Amira S. Ashour
1 Introduction

In computer vision, image processing refers to the analysis and manipulation of digital images to improve their quality [1]. Upgrading the visual appearance of an image and extracting significant information from it are considered the main tasks of image processing [2]. Image processing is a subset of digital signal processing, where the input can be an image or a sequence of images in a video, while the output may be some features related to the image or an image frame [3]. Image processing comprises a wide range of techniques, such as image enhancement, image denoising, image clustering, image segmentation, feature extraction, feature selection, image classification, and image compression [4]. In recent years, image processing has become more challenging due to the huge amount of complicated, noisy images from different sources in a variety of applications, which requires procedures with minimal computational cost/time and higher accuracy [5]. Therefore, since the different parameters in each image processing stage require fine-tuning and accurate selection of their values, optimization algorithms have become essential to resolve such challenges in a time-efficient manner. Optimization algorithms perform competently to reduce the computational cost, increase the performance, and minimize energy consumption [6, 7].
A. Mondal (B) · N. Dey, Department of Information Technology, Techno International New Town, Kolkata 700156, West Bengal, India, e-mail: [email protected]; N. Dey, e-mail: [email protected]; A. S. Ashour, Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta, Egypt, e-mail: [email protected]
In image processing, the optimization problem arises in checking the reliability of an algorithm by considering different components, such as the formulation of dependencies between parameters [8]. The main objective of optimization algorithms is to find an ideal, feasible, and optimal solution for the problem statement. To obtain such solutions, an algorithm initially incorporates a random solution vector, then traverses the search space and moves toward a better solution at each step, which ultimately yields the best solution in less time [9–12]. However, in the real world, producing a single feasible solution is a challenging task. Typically, metaheuristic optimization algorithms are powerful tools in image processing techniques, such as image enhancement and image denoising [13, 14], which pre-process digital images. After pre-processing, the segmentation process is carried out [15]. Over the past few decades, various optimization algorithms have been developed from inspiring features of nature, including the genetic algorithm [16, 17], developed from Darwin's survival-of-the-fittest theory, and the particle swarm optimization (PSO) algorithm [18], inspired by swarm movements. Furthermore, ant colony optimization [19] is influenced by ant behaviour. Apart from these, the bat algorithm [20], flower pollination algorithm [21], firefly algorithm [22], shuffled frog leaping algorithm [23], bacterial foraging optimization algorithm [24], artificial bee colony algorithm [25], and artificial fish swarm algorithm [26] are influenced by natural phenomena and utilize different degrees of exploration and exploitation for searching. In this study, a comprehensive review of one such nature-inspired metaheuristic optimization algorithm, namely cuckoo search (CS), is presented. The CS algorithm is inspired by the obligate brood parasitic strategy of the cuckoo bird in combination with its Lévy flight behaviour.
Relevant research shows that the CS algorithm is efficient in different digital image processing stages, such as pre-processing, segmentation, feature selection, classification, and compression. A detailed discussion of the performance of CS and its variants in several applications is introduced in this work. The rest of this study is organized as follows. The classical CS algorithm and its variants as well as hybrids are discussed in Sect. 2. In Sects. 3 through 7, different image processing methods using CS are concisely reviewed. In Sect. 8, various applications are collectively highlighted. Finally, Sect. 9 presents the conclusion.
2 Metaheuristic Algorithms in Digital Image Processing

2.1 Cuckoo Search Algorithm

The cuckoo search algorithm (CS) is a nature-inspired metaheuristic optimization algorithm introduced by Xin-She Yang and Suash Deb in 2009 [27]. The algorithm was inspired by the obligate brood parasitic strategy of most
of the cuckoo species, in combination with the Lévy flight characteristics of some species. The reproduction of cuckoo species follows typical obligate brood parasitism, where they lay their eggs in a host bird's nest to exploit it. Often the host bird is of a species other than the cuckoo. A very few cuckoo species, such as the guira, the smooth-billed ani, and the greater ani, nest in their own communities and are thus considered non-parasitic. These rare species practice cooperative breeding, where more than one female cuckoo lays eggs in a communal nest [28]. The adults remove the other cuckoo chicks to strengthen the possibility of incubating their own chicks when other members are away from the nest feeding [29]. Apart from these few species, most cuckoo species exhibit obligate brood parasitic behaviour by breeding in host bird nests. Sometimes the host bird simply leaves its nest or throws the eggs out if it discovers an egg as alien. To increase the reproduction probability, some female cuckoos mimic the host's eggs, laying eggs alike in colour and pattern to those of the host bird. Moreover, specialization in breeding timing is also notable, because in most cases the female cuckoo selects nests in which the host bird has recently laid its own eggs [30]. Several relevant studies show that the cuckoo chick hatches a little earlier than the eggs of the host bird and can also imitate the sound of the host chicks to increase its probability of obtaining food from the host bird [31]. Some species search for food in an arbitrary or relatively random way, where the probability of the next movement direction can be formulated mathematically [32]. The flight behaviour of different species exhibits the classical Lévy flight characteristics [33–35].
In the CS algorithm, a cuckoo searches for a host nest using Lévy flights, named after the French mathematician Paul Lévy, representing a strategy of random walk characterized by step lengths that follow a power-law distribution [36]. Relevant research has shown that Lévy flights have also been observed in the hunting strategies of albatrosses and fruit flies; they maximize the efficiency of resource search in uncertain environments [37]. The traditional CS algorithm follows three idealizations, namely, i.
Each female cuckoo lays one egg at a time and selects an arbitrary host nest in which to dump it; ii. The prime nests, which contain the best-quality eggs, are carried over to the next generations; and iii. The number of available host nests is fixed, and the cuckoo egg is recognized by the host bird with a probability pa ∈ (0, 1). If the alien egg is discovered, the host bird can either eject it or leave its nest and build an entirely fresh one at another locality. For simplicity, this last assumption can be approximated by replacing a fraction pa of the n nests with fresh nests. In a maximization problem, the fitness of a solution varies in direct proportion with the objective function value. A minimization problem is converted to a maximization problem by [38]: min( f (x)) ⇔ max(− f (x))
(1)
In this case, the following assumption can be considered: each egg in a host nest is a solution, and a cuckoo egg represents a new solution. Also, for ease, it is presumed that there is only a single solution in each host nest. The objective is to substitute comparatively poor solutions with preferable new solutions (cuckoo eggs). The algorithm becomes more complex when each host nest holds a set of solutions, i.e., multiple eggs. Based on the above idealized rules, the pseudo-code of the original CS algorithm can be summarized as follows:
Classical CS Algorithm:
Input: Population of nests
Output: Best solution and its corresponding value
Begin
  Consider a population of n host nests x_j, where j = 1, 2, 3, …, n
  m = 0
  while (m < MaxGeneration)
    Get a cuckoo i randomly by Lévy flights and evaluate its fitness F_i
    Choose a nest k among the n nests randomly
    if (F_i > F_k)
      Replace k with the new solution (cuckoo egg)
    end if
    Abandon a fraction p_a of the substandard nests and construct new nests at new localities using Lévy flights
    Evaluate the fitness of the new nests/solutions
    Rank all solutions to find the best solution
    Pass the best solution to the next generation
    m = m + 1
  end while
  Visualization
End

In CS, the Lévy flight can be used for local and global exploration. The former improves the best solution directly via random walks, whereas the latter uses the Lévy flight strategy to maintain the diversity of the population. The generation of a new solution x_i^(m+1) for a randomly selected cuckoo i is performed via a Lévy flight:

x_i^(m+1) = x_i^(m) + α ⊕ Lévy(λ)
(2)
where α > 0 represents the step size, with α = O(1) in most cases, and Lévy(λ) denotes the characteristic scale. The Lévy distribution, with infinite mean and variance, from which the random step lengths of the Lévy-flight random walk are drawn, is given by the following expression:
Lévy ∼ u = m^(−λ)
(3)
where λ ∈ [1,3]. In this context, the sequential jumps of a cuckoo bird create a random walk behaviour which follows a power-law distribution with a heavy tail [39].
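As a concrete illustration, the pseudo-code above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the use of Mantegna's algorithm to draw Lévy-stable steps, the sphere test function, the search bounds, and all parameter values (n_nests, pa, alpha, max_gen) are assumptions chosen for demonstration.

```python
import math
import numpy as np

def levy_step(dim, rng, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths (tail exponent lambda = 1 + beta)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, lower=-5.0, upper=5.0, n_nests=15,
                  pa=0.25, alpha=0.1, max_gen=500, seed=0):
    """Minimise f over [lower, upper]^dim following the classical CS scheme."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lower, upper, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(max_gen):
        for i in range(n_nests):
            # Eq. (2): propose a new solution by a Levy flight from nest i
            new = np.clip(nests[i] + alpha * levy_step(dim, rng), lower, upper)
            fn = f(new)
            k = rng.integers(n_nests)      # compare against a randomly chosen nest k
            if fn < fitness[k]:            # minimisation: smaller is better
                nests[k], fitness[k] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them at random
        n_drop = int(pa * n_nests)
        if n_drop:
            worst = np.argsort(fitness)[-n_drop:]
            nests[worst] = rng.uniform(lower, upper, (n_drop, dim))
            fitness[worst] = [f(x) for x in nests[worst]]
    best = int(np.argmin(fitness))
    return nests[best], fitness[best]

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
```

Note that the best nest is never among the abandoned fraction, so the best fitness found is monotonically non-increasing across generations, matching the "pass the best solution to the next generation" step of the pseudo-code.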
2.2 CS Variants and Hybrids

Numerous variants of CS have appeared in models proposed by many researchers since the invention of the original CS algorithm, such as discrete CS [40], binary CS, modified CS [41, 42], neural-based CS [43], Gaussian distribution-based CS [44], modified adaptive CS [45], quantum-inspired CS [46], and parallelized CS [47]. In this study, the relevant models are concisely highlighted.

Binary Cuckoo Search (BCS)
The binary cuckoo search algorithm is an extended version of the classical cuckoo search algorithm that is broadly used for feature selection. Feature selection is the mechanism by which the best subset of features can be found for solving classification or clustering of digital images in computer vision [48, 49]. Hence, the feature selection problem is treated as a discrete binary problem and designed as an n-dimensional Boolean lattice problem [50], in which the solutions are updated across the corners of a hypercube, in contrast to the original CS, where solutions are updated in a continuous search space. Whether a feature is selected or rejected is decided using a binary vector, with entries represented by 1 or 0, respectively [51]. To measure the quality of a solution, a binary vector is generated by converting each dimension of the solution to a binary value as follows:
y_ij(t) = 0, if x_ij(t) < σ
y_ij(t) = 1, if x_ij(t) ≥ σ
(4)
where x_ij(t) denotes the value of the ith egg at the jth nest at time t, y_ij(t) stands for the binary vector at the jth nest at time t, and t = 0 represents the initial solution. In the above equation, σ ∈ (0, 1) is the binarization threshold.

Improved Cuckoo Search (ICS)
The traditional CS algorithm has been improved through its parameters, and different improved versions have evolved. These serve different purposes, such as training feedforward neural networks and solving unconstrained global optimization problems. To find the locally improved solution, the parameters pa and λ are used, and for the global solution, the value of α is used [52]. Unlike in classical CS, the values of the parameters pa and α change dynamically with the number of generations [53]. The following Eqs. (5)–(7) express this adaptation:
pa(c) = pa,max − (c/n)(pa,max − pa,min)    (5)

α(c) = αmax × exp(b · c)    (6)

b = (1/n) ln(αmin/αmax)    (7)
where n represents the total number of iterations and c the current iteration. An improved cuckoo search (ICS) method with increased efficiency, used to train feedforward neural networks and thereby solve classification problems, was proposed in [54]. A constructive heuristic called NEH [55] has been incorporated into traditional CS to find optimal solutions for task scheduling, as reported in [56]. For some optimization problems, CS cannot reach the desired solution. To overcome this deficiency, CS has been hybridized with other optimization algorithms or machine learning models, with hybridization applied to nearly every element of CS [57]. For example, hybrid CS/GA [58, 59] and hybrid CS [46] are models proposed in this context.
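The adaptive schedules of Eqs. (5)–(7) can be sketched as a small helper. The boundary values pa_max, pa_min, alpha_max, and alpha_min below are illustrative assumptions, not values prescribed by the cited works:

```python
import math

def ics_parameters(c, n, pa_max=0.5, pa_min=0.05, alpha_max=0.5, alpha_min=0.01):
    """Adaptive ICS parameters for generation c out of n total (Eqs. 5-7)."""
    pa = pa_max - (c / n) * (pa_max - pa_min)   # Eq. (5): linear decay of pa
    b = math.log(alpha_min / alpha_max) / n     # Eq. (7)
    alpha = alpha_max * math.exp(b * c)         # Eq. (6): exponential decay of alpha
    return pa, alpha
```

At c = 0 this yields (pa_max, alpha_max), and at c = n it yields (pa_min, alpha_min), so early generations explore aggressively while later ones refine locally.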
3 Image Pre-processing

Image pre-processing includes image enhancement and denoising. Image enhancement is the process of sharpening digital image features, such as edges and contrast, for meaningful representation in further display and image analysis [60]. Image enhancement improves the intrinsic information as well as the dynamic range of features in the image for easier detection [61]. To improve the perception of a digital image, two different classes of methods are used, namely spatial domain techniques and frequency domain techniques [62]. The former work directly on pixels, whereas the latter deal with the Fourier transform of a digital image [63]. Transform function optimization is non-linear in nature and is used to stretch dark/blurred grayscale images. Gray-level transformations include point transformation, linear transformation, logarithmic transformation, and power-law transformation [64, 65]. Generally, for efficient image enhancement, specific criteria are considered. To obtain the objective function for optimization, the selection of an image enhancement standard associated with a quality function is important [66]. The quality function describes the image features in detail. The performance measurement parameters reported in [67] are the entropy, the sum of the edge intensities, and the number of edge pixels. The objective function O_t can be formulated as follows [68]:

O_t = ln(E(I_e)) × (n_edgels(I_e) / (i × j)) × H_t
(8)
where O_t represents the fitness/quality value of the enhanced image, E(I_e) denotes the sum of the edge intensities of the image I_e, and n_edgels(I_e) is the number of edge pixels (edges with intensity above a threshold) in I_e. H_t represents the entropy of the enhanced image, and i and j denote the numbers of rows and columns of the image, respectively. Image enhancement using CS has wide application domains, such as medical images and satellite images. In [69], optimal parameter estimation for the log transformation was addressed; a computed tomography visual enhancement model was proposed via the CS optimization algorithm. A contrast-based image enhancement approach using CS is proposed in [70] for quality advancement of low-contrast satellite images. In addition, image denoising refers to the process by which an image is reconstructed by removing unwanted noise [71]. This process is very useful in medical imaging applications to separate the original image from the noisy one [72, 73]. A hybrid filter designed via CS is proposed in [74], where CS is reported to be the most appropriate and effective optimization algorithm.
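A rough numerical sketch of the objective in Eq. (8) follows. The central-difference gradient used to detect edges and the edge threshold value are assumptions standing in for whichever edge detector the cited works actually use:

```python
import numpy as np

def enhancement_fitness(img, edge_thresh=50.0):
    """Objective of Eq. (8): ln(edge intensity) x edge density x entropy."""
    img = img.astype(float)
    # Central-difference gradients on interior pixels (stand-in for a Sobel detector)
    gx = img[1:-1, 2:] - img[1:-1, :-2]
    gy = img[2:, 1:-1] - img[:-2, 1:-1]
    grad = np.hypot(gx, gy)
    edge_intensity = grad.sum()                       # E(I_e)
    n_edgels = np.count_nonzero(grad > edge_thresh)   # edge pixels above the threshold
    rows, cols = img.shape                            # i x j
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / img.size
    entropy = -np.sum(p * np.log2(p))                 # H_t
    return np.log(edge_intensity) * (n_edgels / (rows * cols)) * entropy
```

In a CS-based enhancement scheme, this value would serve as the fitness that the cuckoos maximize over the transformation parameters; images with stronger edges and richer gray-level distributions score higher.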
4 Image Segmentation

In computer vision, image segmentation refers to the process of partitioning a digital image into multiple segments on the basis of features like colour, texture, patterns, and shapes, to analyse the image in an easier, more interpretable way [75]. Image segmentation techniques can be broadly categorized into threshold-based, region-based, and edge-based [76, 77].

Threshold-based Techniques
Thresholding methodologies are most frequently used to segment images [78]. A threshold on the intensity values of a grayscale image is used to classify pixels distinctly into different clusters [79]. Thresholding can be further subdivided into bi-level and multilevel thresholding [80]. In the bi-level thresholding technique, the input image is subdivided into background and foreground pixels and can be represented using the formulation below [81]:

A_0 = {f(x, y) ∈ I | 0 ≤ f(x, y) ≤ t − 1}
A_1 = {f(x, y) ∈ I | t ≤ f(x, y) ≤ L − 1}
(9)
where f(x, y) is the intensity value of the corresponding pixel, and I represents the sample image to be processed. Multilevel thresholding techniques differentiate the objects of interest within the image and are also used for colour images, where the R, G, and B values are processed separately with different gray levels [82]. The following formulation represents multilevel thresholding:
A_0 = {f(x, y) ∈ I | 0 ≤ f(x, y) ≤ t_1 − 1}
A_1 = {f(x, y) ∈ I | t_1 ≤ f(x, y) ≤ t_2 − 1}
A_i = {f(x, y) ∈ I | t_i ≤ f(x, y) ≤ t_{i+1} − 1}
A_n = {f(x, y) ∈ I | t_n ≤ f(x, y) ≤ L − 1}
(10)
where f(x, y) is the intensity value of the corresponding pixel, I represents the sample image to be processed, and t_i, i = 1, 2, …, n, are the thresholds (n = total number of threshold values) [83]. To maximize the objective function, different techniques can be used, such as Kapur's entropy and Otsu's/Tsallis's entropy [84]. Generally, the time complexity of the classical methods for searching the optimal thresholds is high [85]. In [86], a multilevel threshold-based CS technique was proposed using minimum cross entropy. A region-based CS algorithm is proposed in [87] for colour image segmentation. An adaptive CS algorithm for image segmentation is reported in [88]. In [89, 90], multilevel thresholding techniques are comparatively studied using optimization algorithms.

Otsu's between-class variance method
Otsu's is a nonparametric segmentation method which maximizes the between-class variance and minimizes the within-class variance of pixels [91–93]. A weighted sum of the two class variances is given in Eq. (11):

σ_b²(T) = B_1(T) σ_1²(T) + B_2(T) σ_2²(T)
(11)
where σ_1²(T) and σ_2²(T) are the two class variances, and B_1(T) and B_2(T) are the class probabilities determined by the threshold T, calculated as follows:
B_1(T) = Σ_{i=0}^{T−1} p(i),   B_2(T) = Σ_{i=T}^{H−1} p(i)    (12)
The main disadvantage of Otsu's method is its long search time, but it can be made time-efficient using metaheuristic algorithms. A modified CS with thresholding techniques is proposed in [94] and turned out to be a time-efficient method. To find the threshold maximizing the between-class variance, the following formula is used:

ρ_max²(T) = B_1(T)(μ_1 − μ_t)² + B_2(T)(μ_2 − μ_t)² = B_1(T) B_2(T) [μ_1(T) − μ_2(T)]²
(13)
Here μ_1(T), μ_2(T), and μ_t represent the class means and the overall mean, and can be formulated as

μ_1(T) = Σ_{i=0}^{T−1} i · p(i) / B_1(T)    (14)

μ_2(T) = Σ_{i=T}^{H−1} i · p(i) / B_2(T)    (15)

μ_t = Σ_{i=0}^{H−1} i · p(i)    (16)
The optimal threshold can be calculated by computing the maximum value of ρ_max²(T).
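Eqs. (11)–(16) can be checked with a small exhaustive-search Otsu implementation; metaheuristics such as CS replace the exhaustive loop over T once the search space becomes multilevel and therefore large. A minimal sketch, assuming 8-bit gray levels (H = 256):

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive-search Otsu: maximise the between-class variance (Eqs. 11-16)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # p(i): gray-level probabilities
    i = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        b1 = p[:t].sum()                        # B1(T), Eq. (12)
        b2 = 1.0 - b1                           # B2(T)
        if b1 == 0.0 or b2 == 0.0:
            continue                            # skip empty classes
        mu1 = (i[:t] * p[:t]).sum() / b1        # Eq. (14)
        mu2 = (i[t:] * p[t:]).sum() / b2        # Eq. (15)
        var_b = b1 * b2 * (mu1 - mu2) ** 2      # Eq. (13): between-class variance
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t
```

For a clearly bimodal intensity distribution the maximizing threshold falls between the two modes, which is the behaviour a CS-driven multilevel variant generalizes to several thresholds at once.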
5 Image Compression

Image compression is a data compression technique used to minimize the storage and transmission cost of digital images [95]. Broadly, there are two types of image compression, namely lossless and lossy compression. Vector quantization (VQ) has turned out to be an efficient tool for lossy image compression. The objective of VQ is to search for the closest codebook by training on test images [96]. The Linde–Buzo–Gray (LBG) algorithm [97] is the most frequently used VQ method; it designs a locally optimal codebook for image compression. However, it cannot find the global best solution [98]. VQ is one of the block coding methodologies for image compression [99]. The prominent part of VQ design is codebook implementation [100]. Assume the original image of size s × s is quantized and divided into S_a = (s/K) × (s/K) blocks, each of size K × K pixels. Each block is represented as A_j, where j ranges from 1 to S_a. The total number of codewords in the codebook is S_c. The minimum Euclidean distance between each vector and the codewords is calculated, based on which every subgroup of the image is approximated. The encoded results are named the index table. The distortion β between the training vectors and the codebook can be represented as

β = (1/S_c) Σ_{i=1}^{S_c} Σ_{j=1}^{S_a} v_ji × ‖A_j − C_i‖²
(17)
subject to the following constraint:

$\sum_{i=1}^{S_c} v_{ji} = 1$ (18)

where $v_{ji} = 1$ if $A_j$ is in the $i$th cluster, and 0 otherwise.
A. Mondal et al.
In [101], the CS algorithm has been used to compress medical images based on wavelets. A hybrid CS algorithm is proposed in [102] for image compression using vector quantization. In [96], a modified CS optimization was proposed for image compression based on VQ.
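The block encoding and distortion computation described above can be sketched as follows. The blocks and codebook are toy data, and the squared-distance reading of Eq. (17) is an assumption from context; a CS-based VQ method would use this distortion as the fitness when searching for a codebook.

```python
def encode_vq(blocks, codebook):
    """Map each training vector A_j to its nearest codeword C_i and return
    (index_table, distortion). The distortion follows Eq. (17), averaged
    over the codebook size S_c, with v_ji = 1 only for the winning i."""
    Sc = len(codebook)
    index_table, total = [], 0.0
    for A in blocks:
        # squared Euclidean distance from block A_j to every codeword C_i
        dists = [sum((a - c) ** 2 for a, c in zip(A, C)) for C in codebook]
        i = min(range(Sc), key=dists.__getitem__)   # nearest codeword index
        index_table.append(i)
        total += dists[i]            # v_ji selects only the winning term
    return index_table, total / Sc

blocks = [(0.0, 0.1), (0.9, 1.0), (0.1, 0.0)]   # flattened K x K blocks A_j
codebook = [(0.0, 0.0), (1.0, 1.0)]             # codewords C_i
idx, beta = encode_vq(blocks, codebook)
print(idx, beta)
```

The index table `idx` is what gets transmitted; decoding simply replaces each index with its codeword, so a codebook with lower distortion β yields a better reconstruction.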
6 Feature Selection Feature selection is the process of extracting the most discriminative subset of features from an image for easier classification [103, 104]. Several feature selection mechanisms exist, and most of them eliminate variables in a step-by-step manner [105]. For example, sequential floating forward selection (SFFS) [106] has a forward step for insertion and a backward step for deletion to partially avoid local minima [107]. A direct sequential feature selection mechanism selects the best feature from all the available features [108]. In the sequential backward selection (SBS) mechanism, the algorithm begins with the complete set of features and removes one variable at a time such that the predictor performance is least affected [109, 110]. However, the sequential feature selection (SFS) and SFFS methods suffer from the nesting effect. To overcome this limitation, an adaptive version of SFFS was proposed in [111, 112], which develops a better subset than SFFS, although it depends on the objective function and the data distribution. The major disadvantage of sequential feature selection techniques is that, during the starting phase of the search, they can get locked in a local minimum. To avoid this problem, such techniques can be replaced by various optimization algorithms such as the particle swarm optimization (PSO) algorithm [113, 114], genetic algorithms [115, 116], the ant colony optimization (ACO) algorithm [117, 118], the firefly algorithm [119, 120], and the cuckoo search algorithm. These population-based optimization algorithms can be used for function optimization in high-dimensional spaces. A hybrid CS is proposed in [121] for feature selection, where an objective function is formulated to compute fitness depending upon the classification quality and the total number of selected features. The objective function $Fit_R$ is calculated as follows:
$Fit_R = n_1 \times \theta_Z(V) + n_2 \times \left(1 - \frac{R}{|Z|}\right)$ (19)
where $|Z|$ is the total number of features and $R$ is the number of features selected from that set. Here, $n_1$ and $n_2$ represent the relative importance of the classification performance and of the size of the selected subset, respectively. A modified BCS method is proposed in [121] for feature selection.
7 Image Classification Image classification refers to labelling a digital image into different categories based on the features of the image [122]. In [123], an extreme learning machine (ELM) was trained using an improved version of the CS algorithm and further utilized in the field of medical image classification. Some parameters of the ELM, such as the regularization coefficient, the Gaussian kernel, and the number of hidden neurons, were optimized via CS, with the classification accuracy treated as the objective function. A framework has been presented in [124] for band selection in hyperspectral image classification using binary CS. Here, band selection is treated as a combinatorial problem and the objective function is used to reduce the error probability during classification.
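The band selection formulation of [124] can be illustrated with a toy sketch: a candidate solution is a 0/1 vector over the bands, and its fitness is an estimated classification error on the retained bands. Everything below — the error surrogate, the set of "useful" bands, and the random-flip search standing in for the binary CS update — is an illustrative assumption, not the method of [124].

```python
import random

random.seed(1)
N_BANDS = 8
USEFUL = {1, 4, 6}          # bands assumed informative in this toy setup

def error_probability(mask):
    """Surrogate error: worse when useful bands are dropped or noisy kept."""
    kept = {i for i, b in enumerate(mask) if b}
    missed = len(USEFUL - kept) / len(USEFUL)
    noise = len(kept - USEFUL) / N_BANDS
    return 0.8 * missed + 0.2 * noise

# Crude random-flip improvement loop standing in for the binary CS update.
best = [random.randint(0, 1) for _ in range(N_BANDS)]
for _ in range(200):
    cand = [b ^ (random.random() < 0.2) for b in best]   # flip ~20% of bits
    if error_probability(cand) < error_probability(best):
        best = cand

print(best, error_probability(best))
```

In a real pipeline, `error_probability` would be replaced by the validation error of a classifier trained on the retained bands, which is what makes each fitness evaluation expensive and the search quality of the optimizer important.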
8 Application Areas The original cuckoo search algorithm and its variants have a wide range of application domains. This section mainly focuses on applications of cuckoo search in the fields of data fusion, data clustering, flood forecasting, multilevel image thresholding, groundwater exploration, face recognition, the travelling salesman problem, task scheduling, business optimization, the n-queens puzzle, and computer games. Relevant research in this context has shown that cuckoo search is also one of the simplest metaheuristic algorithms, because it has only a single parameter, pa. The global search capacity of this optimization algorithm is notable as well. Some prominent applications of CS are briefly discussed as follows.

A nurse scheduling algorithm [125] developed using CS is widely used in healthcare around the world to maintain nurse management systems. A modified CS is used to solve non-linear problems effectively, such as mesh generation, as reported in [42]. In some cases, CS outperforms most of the well-known structural optimization algorithms, for example in engineering design problems such as spring design or welded beam design [39, 126]. In [127], CS provided the optimal solution for designing embedded systems. CS has also been used for designing steel frames, as reported in [128].

In [46], a new quantum-inspired CS based on quantum computing concepts such as quantification, quantum bit representation, disruption, and quantum mutation was presented. The proposed algorithm increases efficacy by minimizing the population size and the number of iterations needed to reach the optimal solution. Quantum-inspired CS has also been used effectively to solve combinatorial optimization problems such as the one-dimensional bin packing problem (BPP) [129]. NP-hard problems, such as the symmetric travelling salesman problem and the knapsack problem, can be solved by adapting an improved cuckoo search algorithm, as reported in [130, 131].
In [132], a three-step polynomial metamodel is proposed to optimize an op-amp using CS. An enhanced scatter search algorithm is proposed in [133] by adapting a modified CS; it is generally used to solve various continuous as well as combinatorial optimization problems.
The most significant applications of CS include training neural networks and handling reliability optimization problems, as addressed in [53, 54]. A multiobjective CS (MOCS) was developed in [134] for engineering applications [135]. A feature selection algorithm based on CS was proposed in [136] for efficient face recognition; a comparison between PSO and CS was also carried out in [136], where CS turned out to be the better-performing algorithm for face identification. Wireless sensor networks were introduced as an emerging application of CS, as reported in [137]. Apart from these, CS has also performed competently in solving the six-bar double dwell linkage problem [138], the DG allocation problem [139], business optimization [140], query optimisation [141], the sheet nesting problem [142], machining parameter selection [143], the automated software testing problem [144], UCAV path planning [145], manufacturing optimization [146], the web service composition problem [147], groundwater exploration [148], ontology matching [149], planar graph colouring [150], the job scheduling problem [151], and flood forecasting [152]. Additionally, it is recommended to compare the performance of CS with other optimization methods, such as GA, PSO, ABC, and FFA, which have been engaged in different image processing applications [153–160].
9 Conclusion Image processing and pattern recognition play a significant role in vision systems, medical applications, remote sensing, and many other applications. A typical image processing system includes successive operations at different levels, namely low-level filtering procedures, middle-level edge detection and segmentation methods, and high-level feature extraction and classification methods. Accordingly, parameter tuning is the most common difficulty obstructing the design and performance of these systems. Parameter optimization is a complicated, nontrivial, and iterative process of determining the best outputs by fine-tuning the values of such parameters. Nature-inspired optimization techniques, including swarm intelligence, are extensively employed in image enhancement, segmentation, clustering, feature selection, and classification to determine the optimal parameter values by maximizing or minimizing appropriate objective functions. Accordingly, several metaheuristic algorithms, such as ACO, GA, PSO, and CS, have established their efficiency in different optimization problems, although the iterative nature of these algorithms remains a bottleneck in the image processing domain. In this chapter, the CS optimization algorithm was introduced in detail, along with its variants, across the image processing stages to automatically optimize the inherent parameters that affect the performance of these stages. The general framework and concept of CS and its variants were introduced in this chapter as a milestone for any further use of CS in different applications. Different studies and applications based on the CS algorithm were reported. Each experiment entails
variables and entries, such as filter size, threshold level, number of clusters, and number of classes. Such variables may be controllable or uncontrollable. The CS algorithm is one of the efficient metaheuristic methods, inspired by the brood parasitism of some cuckoo species together with the Lévy flight behaviour observed in some birds and fruit flies. It is based on three simple main operations/rules. The reported studies established that CS is a powerful optimization algorithm for image processing applications due to its simplicity and time efficiency.
References 1. Sonka M, Hlavac V, Boyle R (2014) Image processing, analysis, and machine vision. Cengage Learning 2. Russ JC (2016) The image processing handbook. CRC press 3. Bovik AC (2010) Handbook of image and video processing. Academic press 4. Ekstrom MP (2012) Digital image processing techniques(Vol 2). Academic Press 5. Daly S (1994, November) A visual model for optimizing the design of image processing algorithms. In: Proceedings of 1st international conference on image processing (Vol 2, pp 16–20). IEEE 6. Grangetto M, Magli E, Martina M, Olmo G (2002) Optimization and implementation of the integer wavelet transform for image coding. IEEE Trans Image Process 11(6):596–604 7. Ruiz JE, Paciornik S, Pinto LD, Ptak F, Pires MP, Souza PL (2018) Optimization of digital image processing to determine quantum dots’ height and density from atomic force microscopy. Ultramicroscopy 184:234–241 8. Wang D, Li G, Jia W, Luo X (2011) Saliency-driven scaling optimization for image retargeting. Vis Comput 27(9):853–860 9. George EB, Karnan M (2012) MR brain image segmentation using bacteria foraging optimization algorithm. Int J Eng Technol (IJET) 4(5):295–301 10. Precht H, Gerke O, Rosendahl K, Tingberg A, Waaler D (2012) Digital radiography: optimization of image quality and dose using multi-frequency software. Pediatr Radiol 42(9):1112–1118 11. Loukhaoukha K, Chouinard JY, Taieb MH (2011) Optimal image watermarking algorithm based on LWT-SVD via multi-objective ant colony optimization. J Inf Hiding Multimed Signal Proces 2(4):303–319 12. Vahedi E, Zoroofi RA, Shiva M (2012) Toward a new wavelet-based watermarking approach for color images using bio-inspired optimization principles. Digit Signal Proc 22(1):153–162 13. Krishnaveni M, Subashini P, Dhivyaprabha TT (2016, October) A new optimization approachSFO for denoising digital images. In: 2016 IEEE international conference on computation system and information technology for sustainable solutions (CSITSS), pp 34–39 14. 
Kockanat S, Karaboga N (2017) Medical image denoising using metaheuristics. In: Metaheuristics for medicine and biology (pp. 155–169). Springer, Berlin, Heidelberg 15. Emara ME, Abdel-Kader RF, Yasein MS (2017) Image compression using advanced optimization algorithms. J Commun 12(5) 16. Gholami A, Bonakdari H, Ebtehaj I, Mohammadian M, Gharabaghi B, Khodashenas SR (2018) Uncertainty analysis of intelligent model of hybrid genetic algorithm and particle swarm optimization with ANFIS to predict threshold bank profile shape based on digital laser approach sensing. Measurement 121:294–303 17. Hamid MS, Harvey NR, Marshall S (2003) Genetic algorithm optimization of multidimensional grayscale soft morphological filters with applications in film archive restoration. IEEE Trans Circuits Syst Video Technol 13(5):406–416
18. Shao P, Wu Z, Zhou X, Tran DC (2017) FIR digital filter design using improved particle swarm optimization based on refraction principle. Soft Comput 21:2631–2642 19. Dorigo M, Birattari M (2010) Ant colony optimization. Springer, US, pp 36–39 20. Yang XS (2010) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010) (pp. 65–74). Springer, Berlin, Heidelberg 21. Yang XS (2012, September) Flower pollination algorithm for global optimization. In: International conference on unconventional computing and natural computation (pp. 240–249). Springer, Berlin, Heidelberg 22. Yang XS (2009, October) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms (pp. 169–178). Springer, Berlin, Heidelberg 23. Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic metaheuristic for discrete optimization. Eng Optim 38(2):129–154 24. Das S, Biswas A, Dasgupta S, Abraham A (2009) Bacterial foraging optimization algorithm: theoretical foundations, analysis, and applications. In: Foundations of computational intelligence Vol 3 (pp 23–55). Springer, Berlin, Heidelberg 25. Karaboga D, Akay B (2009) A comparative study of artificial bee colony algorithm. Appl Math Comput 214(1):108–132 26. Neshat M, Sepidnam G, Sargolzaei M, Toosi AN (2014) Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif Intell Rev 42(4):965–997 27. Yang XS, Deb S (2009, December) Cuckoo search via Lévy flights. In: 2009 world congress on nature & biologically inspired computing (NaBIC) (pp. 210–214). IEEE 28. Payne RB, Sorensen MD (2005). The cuckoos (Vol 15). Oxford University Press 29. del Hoyo J, Elliott A, Sargatal J, Cabot J (Eds) (1997). Sandgrouse to cuckoos (Vol 4). Lynx Edicions 30. Langmore NE, Kilner RM (2007) Breeding site and host selection by Horsfield’s bronzecuckoos. 
Chalcites basalis. Anim Behav 74(4):995–1004 31. Brooke MDL, Davies NB, Noble DG (1998) Rapid decline of host defences in response to reduced cuckoo parasitism: behavioural flexibility of reed warblers in a changing world. Proc R Soc Lond Ser B Biol Sci 265(1403):1277–1282 32. Yang XS, Deb S (2010) Engineering optimisation by cuckoo search. arXiv preprint arXiv:1005.2908 33. Brown CT, Liebovitch LS, Glendon R (2007) Lévy flights in Dobe Ju/'hoansi foraging patterns. Human Ecol 35(1):129–138 34. Pavlyukevich I (2007) Lévy flights, non-local search and simulated annealing. J Comput Phys 226(2):1830–1844 35. Pavlyukevich I (2007) Cooling down Lévy flights. J Phys A: Math Theor 40(41):12299 36. Shlesinger MF, Zaslavsky GM, Frisch U (1995) Lévy flights and related topics in physics (Nice, 27–30 June 1994). Springer 37. Yang XS (2008) Nature-inspired metaheuristic algorithms. Luniver Press, Beckington, UK, pp 242–246 38. Bemporad A, Borrelli F, Morari M (2003) Min-max control of constrained uncertain discrete-time linear systems. IEEE Trans Autom Control 48(9):1600–1606 39. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35 40. Jati GK, Manurung HM (2012, December) Discrete cuckoo search for traveling salesman problem. In: 2012 7th international conference on computing and convergence technology (ICCCT) (pp 993–997). IEEE 41. Tuba M, Subotic M, Stanarevic N (2011, April) Modified cuckoo search algorithm for unconstrained optimization problems. In: Proceedings of the 5th European conference on European computing conference (pp 263–268). World Scientific and Engineering Academy and Society (WSEAS) 42. Walton S, Hassan O, Morgan K, Brown MR (2011) Modified cuckoo search: a new gradient free optimisation algorithm. Chaos Solitons Fractals 44(9):710–718
43. Khan K, Sahai A (2013) Neural-based cuckoo search of employee health and safety (hs). Int J Intell Syst Appl 5(2):76 44. Zheng H, Zhou Y (2012) A novel cuckoo search optimization algorithm based on Gauss distribution. J Comput Inf Syst 8(10):4193–4200 45. Zhang Y, Wang L, Wu Q (2012) Modified Adaptive Cuckoo Search (MACS) algorithm and formal description for global optimisation. Int J Comput Appl Technol 44(2):73 46. Layeb A (2011) A novel quantum inspired cuckoo search for knapsack problems. Int J Bioinspired Comput 3(5):297–305 47. Subotic M, Tuba M, Bacanin N, Simian D (2012, May) Parallelized cuckoo search algorithm for unconstrained optimization. In: Proceedings of the 5th WSEAS congress on applied computing conference, and proceedings of the 1st international conference on biologically inspired computation (pp 151–156). World Scientific and Engineering Academy and Society (WSEAS) 48. Rodrigues D, Pereira LA, Almeida TNS, Papa JP, Souza AN, Ramos CC, Yang XS (2013, May) BCS: A binary cuckoo search algorithm for feature selection. In: 2013 IEEE international symposium on circuits and systems (ISCAS2013) (pp 465–468). IEEE 49. Feng D, Ruan Q, Du L (2013) Binary cuckoo search algorithm. Jisuanji Yingyong/ J Comput Appl 33(6):1566–1570 50. Salesi S, Cosma G (2017, October) A novel extended binary cuckoo search algorithm for feature selection. In: 2017 2nd international conference on knowledge engineering and applications (ICKEA) (pp 6–12). IEEE 51. Pereira LAM, Rodrigues D, Almeida TNS, Ramos CCO, Souza A N, Yang XS, Papa JP (2014) A binary cuckoo search and its application for feature selection. In: Cuckoo search and firefly algorithm (pp 141–154). Springer, Cham 52. Valian E, Mohanna S, Tavakoli S (2011) Improved cuckoo search algorithm for global optimization. Int J Commun Inf Technol 1(1):31–44 53. Valian E, Tavakoli S, Mohanna S, Haghi A (2013) Improved cuckoo search for reliability optimization problems. Comput Ind Eng 64(1):459–468 54. 
Valian E, Mohanna S, Tavakoli S (2011) Improved cuckoo search algorithm for feedforward neural network training. Int J Artif Intell Appl 2(3):36–43 55. Pan QK, Wang L (2012) Effective heuristics for the blocking flowshop scheduling problem with makespan minimization. Omega 40(2):218–229 56. Marichelvam MK, Prabaharan T, Yang XS (2014) Improved cuckoo search algorithm for hybrid flow shop scheduling problems to minimize makespan. Appl Soft Comput 19:93–101 57. Fister I, Yang XS, Fister D (2014) Cuckoo search: a brief literature review. In: Cuckoo search and firefly algorithm (pp 49–62). Springer, Cham 58. Ghodrati A, Lotfi S (2012) A hybrid cs/ga algorithm for global optimization. In: Proceedings of the international conference on soft computing for problem solving (SocProS 2011) December 20–22, 2011 (pp 397-404). Springer, India 59. Ghodrati A, Lotfi S (2012, March) A hybrid CS/PSO algorithm for global optimization. In: Asian conference on intelligent information and database systems (pp 89–98). Springer, Berlin, Heidelberg 60. Mustafi A, Mahanti PK (2009) An optimal algorithm for contrast enhancement of dark images using genetic algorithms. In: Computer and information science 2009 (pp 1–8). Springer, Berlin, Heidelberg 61. Bharal S, Amritsar GNDU (2015) A survey on various underwater image enhancement techniques. Int J Comput Appl 5(4):160–164 62. Sawant HK, Deore M (2010) A comprehensive review of image enhancement techniques. Int J Comput Technol Electron Eng (IJCTEE) 1(2):39–44 63. Maini R, Aggarwal H (2010) A comprehensive review of image enhancement techniques. arXiv preprint arXiv:1003.4053 64. Bedi SS, Khandelwal R (2013) Various image enhancement techniques-a critical review. Int J Adv Res Comput Commun Eng 2(3)
65. Ortiz SHC, Chiu T, Fox MD (2012) Ultrasound image enhancement: a review. Biomed Signal Process Control 7(5):419–428 66. Dhal KG, Quraishi MI, Das S (2015) Performance analysis of chaotic Lévy bat algorithm and chaotic cuckoo search algorithm for gray level image enhancement. In: Information systems design and intelligent applications (pp 233–244). Springer, New Delhi 67. Gorai A, Ghosh A (2009, December) Gray-level image enhancement by particle swarm optimization. In: 2009 world congress on nature & biologically inspired computing (NaBIC) (pp 72–77). IEEE 68. Dhal KG, Quraishi IM, Das S (2015) A chaotic Lévy flight approach in bat and firefly algorithm for gray level image enhancement. IJ Image Graph Signal Process 7(7):69–76 69. Ashour AS, Samanta S, Dey N, Kausar N, Abdessalemkaraa WB, Hassanien AE (2015) Computed tomography image enhancement using cuckoo search: a log transform based approach. J Signal Inf Process 6(03):244 70. Bhandari AK, Soni V, Kumar A, Singh GK (2014) Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT–SVD. ISA Trans 53(4):1286–1296 71. Motwani MC, Gadiya MC, Motwani RC, Harris FC (2004, September) Survey of image denoising techniques. In: Proceedings of GSPX (pp 27–30) 72. Ragesh NK, Anil AR, Rajesh R (2011, April) Digital image denoising in medical ultrasound images: a survey. In: Icgst Aiml-11 conference, Dubai, UAE (Vol. 12, p 14) 73. Mohan J, Krishnaveni V, Guo Y (2014) A survey on the magnetic resonance image denoising methods. Biomed Signal Process Control 9:56–69 74. Malik M, Ahsan F, Mohsin S (2016) Adaptive image denoising using cuckoo algorithm. Soft Comput 20(3), 925–938 75. Fu KS, Mui JK (1981) A survey on image segmentation. Pattern Recogn 13(1):3–16 76. Pal NR, Pal SK (1993) A review on image segmentation techniques. Pattern Recogn 26(9):1277–1294 77. Dass R, Devi S (2012) Image segmentation techniques 1 78. 
Agrawal S, Panda R, Bhuyan S, Panigrahi BK (2013) Tsallis entropy based optimal multilevel thresholding using cuckoo search algorithm. Swarm Evolut Comput 11:16–30 79. Sezgin M, Sankur B (2004) Survey over image thresholding techniques and quantitative performance evaluation. J Electron Imaging 13(1):146–166 80. Pare S, Kumar A, Bajaj V, Singh GK (2016) A multilevel color image segmentation technique based on cuckoo search algorithm and energy curve. Appl Soft Comput 47:76–102 81. Bhandari AK, Singh VK, Kumar A, Singh GK (2014) Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur’s entropy. Expert Syst Appl 41(7):3538–3560 82. Arora S, Acharya J, Verma A, Panigrahi PK (2008) Multilevel thresholding for image segmentation through a fast statistical recursive algorithm. Pattern Recogn Lett 29(2):119–125 83. Liao PS, Chen TS, Chung PC (2001) A fast algorithm for multilevel thresholding. J Inf Sci Eng 17(5):713–727 84. Suresh S, Lal S (2016) An efficient cuckoo search algorithm based multilevel thresholding for segmentation of satellite images using different objective functions. Expert Syst Appl 58:184–209 85. Song JH, Cong W, Li J (2017) A fuzzy c-means clustering algorithm for image segmentation using nonlinear weighted local information. J Inf Hiding Multimedia Signal Process 8(9):1–11 86. Pare S, Kumar A, Bajaj V, Singh GK (2017) An efficient method for multilevel color image thresholding using cuckoo search algorithm based on minimum cross entropy. Appl Soft Comput 61:570–592 87. Preetha MMSJ, Suresh LP, Bosco MJ (2016) Region based image segmentation using cuckoo search algorithm. J Chem Pharmaceutical Sci 9(2):884–888 88. Ong P (2014) Adaptive cuckoo search algorithm for unconstrained optimization. Scient World J 89. Akay B (2013) A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl Soft Comput 13(6):3066–3091
90. Brajevic I, Tuba M (2014) Cuckoo search and firefly algorithm applied to multilevel image thresholding. In: Cuckoo search and firefly algorithm (pp 115–139). Springer, Cham 91. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9(1):62–66 92. Shah-Hosseini H (2011, October) Otsu’s criterion-based multilevel thresholding by a natureinspired metaheuristic called galaxy-based search algorithm. In: 2011 third world congress on nature and biologically inspired computing (pp 383–388). IEEE 93. Zhang J, Hu J (2008, December) Image segmentation based on 2D Otsu method with histogram analysis. In: 2008 international conference on computer science and software engineering (Vol 6, pp. 105–108). IEEE 94. Chakraborty S, Chatterjee S, Dey N, Ashour AS, Ashour AS, Shi F, Mali K (2017) Modified cuckoo search algorithm in microscopic image segmentation of hippocampus. Microsc Res Tech 80(10):1051–1072 95. Fisher Y (2012)Fractal image compression: theory and application. Springer Science & Business Media 96. Chiranjeevi K, Jena UR (2016) Image compression based on vector quantization using cuckoo search optimization technique. Ain Shams Eng J 97. Linde Y, Buzo A, Gray R (1980) An algorithm for vector quantizer design. IEEE Trans Commun 28(1):84–95 98. Patané G, Russo M (2001) The enhanced LBG algorithm. Neural Netw 14(9):1219–1237 99. Horng MH, Jiang TW (2010) The codebook design of image vector quantization based on the firefly algorithm. International Conference on Computational Collective Intelligence. Springer, Berlin, Heidelberg, pp 438–447 100. Chiranjeevi K, Jena U, Prasad PMK.(2017) Hybrid cuckoo search based evolutionary vector quantization for image compression. In: Artificial intelligence and computer vision (pp 89– 114). Springer, Cham 101. Bruylants T, Munteanu A, Schelkens P (2015) Wavelet based volumetric medical image compression. Sig Process Image Commun 31:112–133 102. 
Karri C, Umaranjan J, Prasad PMK (2014) Hybrid Cuckoo search based evolutionary vector quantization for image compression. Artif Intell Comput Vis Stud Comput Intell, 89–113 103. Alpaydin E (2010) Introduction to machine learning: London 104. Dash M, Liu H (1997) Feature selection for classification. Intell Data Anal 1(1–4):131–156 105. Siedlecki W, Sklansky J (1993) On automatic feature selection. In: Handbook of pattern recognition and computer vision (pp 63–87) 106. Pudil P, Novoviˇcová J, Kittler J (1994) Floating search methods in feature selection. Pattern Recogn Lett 15(11):1119–1125 107. Ververidis D, Kotropoulos C (2008) Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition. Sig Process 88(12):2956–2970 108. Maragoudakis M, Serpanos D (2010, October) Towards stock market data mining using enriched random forests from textual resources and technical indicators. In: IFIP international conference on artificial intelligence applications and innovations (pp 278–286). Springer, Berlin, Heidelberg 109. Reunanen J (2003) Overfitting in making comparisons between variable selection methods. J Mach Learn Res 3:1371–1382 110. Chandrashekar G, Sahin F (2014) A survey on feature selection methods. Comput Electr Eng 40(1):16–28 111. Somol P, Pudil P, Novoviˇcová J, Paclık P (1999) Adaptive floating search methods in feature selection. Pattern Recogn Lett 20(11–13):1157–1163 112. Sun Y, Babbs CF, Delp EJ (2006, January) A comparison of feature selection methods for the detection of breast cancers in mammograms: adaptive sequential floating search vs. genetic algorithm. In: 2005 IEEE engineering in medicine and biology 27th annual conference (pp 6532–6535). IEEE
113. Xue B, Zhang M, Browne WN (2012) Particle swarm optimization for feature selection in classification: a multi-objective approach. IEEE Trans Cybern 43(6):1656–1671 114. Xue B, Zhang M, Browne WN (2014) Particle swarm optimisation for feature selection in classification: Novel initialisation and updating mechanisms. Appl Soft Comput 18:261–276 115. Chtioui Y, Bertrand D, Barba D (1998) Feature selection by a genetic algorithm. Application to seed discrimination by artificial vision. J Sci Food Agricul 76(1):77–86 116. Tsai CF, Eberle W, Chu CY (2013) Genetic algorithms in feature and instance selection. Knowl-Based Syst 39:240–247 117. Kanan HR, Faez K, Taheri SM (2007, July) Feature selection using ant colony optimization (ACO): a new method and comparative study in the application of face recognition system. In: Industrial conference on data mining (pp 63–76). Springer, Berlin, Heidelberg 118. Neagoe VE, Neghina EC (2016, June) Feature selection with ant colony optimization and its applications for pattern recognition in space imagery. In: 2016 international conference on communications (COMM) (pp 101–104). IEEE 119. Zhang L, Mistry K, Lim CP, Neoh SC (2018) Feature selection using firefly optimization for classification and regression models. Decis Support Syst 106:64–85 120. Mistry K, Zhang L, Sexton G, Zeng Y, He M (2017, June) Facial expression recongition using firefly-based feature optimization. In: 2017 IEEE congress on evolutionary computation (CEC) (pp 1652–1658). IEEE 121. El Aziz MA, Hassanien AE (2018) Modified cuckoo search algorithm with rough sets for feature selection. Neural Comput Appl 29(4):925–934 122. Baxes GA (1994) Digital image processing: principles and applications (pp. I-XVIII). New York: Wiley 123. Mohapatra P, Chakravarty S, Dash PK (2015) An improved cuckoo search based extreme learning machine for medical data classification. Swarm Evol Comput 24:25–49 124. 
Medjahed SA, Saadi TA, Benyettou A, Ouali M (2015) Binary cuckoo search algorithm for band selection in hyperspectral image classification. IAENG Int J Comput Sci 42(3):183–191 125. Tein LH, Ramli R (2010, November) Recent advancements of nurse scheduling models and a potential path. In: Proceedings 6th IMT-GT conference on mathematics, statistics and its applications (ICMSA 2010) (pp 395–409) 126. Gandomi AH, Yang XS, Talatahari S, Deb S (2012) Coupled eagle strategy and differential evolution for unconstrained and constrained global optimization. Comput Math Appl 63(1):191–200 127. Kumar A, Chakarverty S (2011, April) Design optimization for reliable embedded system using Cuckoo Search. In: 2011 3rd international conference on electronics computer technology (Vol 1, pp 264–268). IEEE 128. Kaveh A, Bakhshpoori T (2013) Optimum design of steel frames using Cuckoo Search algorithm with Lévy flights. Struct Design Tall Spec Build 22(13):1023–1036 129. Layeb A, Boussalia SR (2012) A novel quantum inspired cuckoo search algorithm for bin packing problem. Int J Inf Technol Comput Sci 4(5):58–67 130. Ouaarab A, Ahiod B, Yang XS (2014) Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput Appl 24(7–8):1659–1669 131. Feng Y, Jia K, He Y (2014) An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems. Comput Intell Neurosci 2014:1 132. Zheng G, Mohanty SP, Kougianos E (2012, August) Metamodel-assisted fast and accurate optimization of an op-amp for biomedical applications. In: 2012 IEEE computer society annual symposium on VLSI (pp 273–278). IEEE 133. Al-Obaidi ATS (2013) Improved scatter search using cuckoo search. Int J Adv Res Artif Intell 2(2):61–67 134. Yang XS, Deb S (2013) Multiobjective cuckoo search for design optimization. Comput Oper Res 40(6):1616–1624 135. Chandrasekaran K, Simon SP (2012) Multi-objective scheduling problem: hybrid approach using fuzzy assisted cuckoo search algorithm. 
Swarm Evol Comput 5:1–16 136. Tiwari V (2012) Face recognition based on cuckoo search algorithm. Image 7(8):9
137. Dhivya M, Sundarambal M, Anand LN (2011) Energy efficient computation of data fusion in wireless sensor networks using cuckoo based particle approach (CBPA). Int J Commun Netw Syst Sci 4(04):249 138. Bulatović RR, Ðorđević SR, Ðorđević VS (2013) Cuckoo search algorithm: a metaheuristic approach to solving the problem of optimum synthesis of a six-bar double dwell linkage. Mech Mach Theory 61:1–13 139. Moravej Z, Akhlaghi A (2013) A novel approach based on cuckoo search for DG allocation in distribution network. Int J Electr Power Energy Syst 44(1):672–679 140. Yang XS, Deb S, Karamanoglu M, He X (2012, November) Cuckoo search for business optimization applications. In: 2012 national conference on computing and communication systems (pp 1–5). IEEE 141. Joshi M, Srivastava PR (2013) Query optimization: an intelligent hybrid approach using cuckoo and tabu search. Int J Intell Inf Technol (IJIIT) 9(1):40–55 142. Elkeran A (2013) A new approach for sheet nesting problem using guided cuckoo search and pairwise clustering. Eur J Oper Res 231(3):757–769 143. Yildiz AR (2013) Cuckoo search algorithm for the selection of optimal machining parameters in milling operations. Int J Adv Manuf Technol 64(1–4):55–61 144. Srivastava PR, Reddy DPK, Reddy MS, Ramaraju CV, Nath ICM (2012) Test case prioritization using cuckoo search. In: Advanced automated software testing: frameworks for refined practice (pp 113–128). IGI Global 145. Wang G, Guo L, Duan H, Liu L, Wang H, Wang J (2012) A hybrid meta-heuristic DE/CS algorithm for UCAV path planning. J Inf Comput Sci 9(16):4811–4818 146. Syberfeldt A, Lidberg S (2012, December) Real-world simulation-based manufacturing optimization using cuckoo search. In: Proceedings of the 2012 winter simulation conference (WSC) (pp 1–12). IEEE 147. Chifu VR, Pop CB, Salomie I, Suia DS, Niculici AN (2011) Optimizing the semantic web service composition process using cuckoo search. In: Intelligent distributed computing V (pp 93–102).
Springer, Berlin, Heidelberg 148. Gupta D, Das B, Panchal VK (2013) Applying case based reasoning in cuckoo search for the expedition of groundwater exploration. In: Proceedings of seventh international conference on bio-inspired computing: theories and applications (BIC-TA 2012) (pp 341–353). Springer, India 149. Ritze D, Paulheim H (2011, October) Towards an automatic parameterization of ontology matching tools based on example mappings. In: Proceedings 6th ISWC ontology matching workshop (OM), Bonn (DE) (pp 37–48) 150. Zhou Y, Zheng H, Luo Q, Wu J (2013) An improved cuckoo search algorithm for solving planar graph coloring problem. Appl Math Inf Sci 7(2):785 151. Prakash M, Saranya R, Jothi KR, Vigneshwaran A (2012) An optimal job scheduling in grid using cuckoo algorithm. Int J Comput Sci Telecommun 3(2):65–69 152. Chaowanawatee K, Heednacram A (2012, July) Implementation of cuckoo search in RBF neural network for flood forecasting. In: 2012 fourth international conference on computational intelligence, communication systems and networks (pp 22–26). IEEE 153. Hore S, Chatterjee S, Santhi V, Dey N, Ashour AS, Balas VE, Shi F (2017) Indian sign language recognition using optimized neural networks. In: Information technology and intelligent transportation systems (pp 553–563). Springer, Cham 154. Dey N, Rajinikanth V, Ashour A, Tavares JM (2018) Social group optimization supported segmentation and evaluation of skin melanoma images. Symmetry 10(2):51 155. Dey N, Ashour A, Beagum S, Pistola D, Gospodinov M, Gospodinova E, Tavares J (2015) Parameter optimization for local polynomial approximation based intersection confidence interval filter using genetic algorithm: an application for brain MRI image de-noising. J Imaging 1(1):60–84 156. Naik A, Satapathy SC, Ashour AS, Dey N (2018) Social group optimization for global optimization of multimodal functions and data clustering problems. Neural Comput Appl 30(1):271–287
20
A. Mondal et al.
157. Ashour AS, Beagum S, Dey N, Ashour AS, Pistolla DS, Nguyen GN, … Shi F (2018). Light microscopy image de-noising using optimized LPA-ICI filter. Neural Comput Appl 29(12):1517–1533 158. Wang D, Li Z, Cao L, Balas VE, Dey N, Ashour AS … Shi F (2016) Image fusion incorporating parameter estimation optimized Gaussian mixture model and fuzzy weighted evaluation system: a case study in time-series plantar pressure data set. IEEE Sensors J 17(5):1407–1420 159. Parsian A, Ramezani M, Ghadimi N (2017) A hybrid neural network-gray wolf optimization algorithm for melanoma detection 160. Razmjooy N, Sheykhahmad FR, Ghadimi N (2018) A hybrid neural network–world cup optimization algorithm for melanoma detection. Open Med 13(1):9–16
Chapter 2
Cuckoo Search Algorithm for Parametric Data Fitting of Characteristic Curves of the Van der Waals Equation of State

Almudena Campuzano, Andrés Iglesias, and Akemi Gálvez
1 Introduction

1.1 Motivation

Equations of state (EoS) are very important in physics, thermodynamics, chemical engineering, astrophysics, and many other fields. Roughly speaking, EoS are mathematical equations accounting for the observed thermodynamic relationships among variables such as pressure P, volume V, and temperature T, for a pure component or a multicomponent system. As such, they are essential tools for describing and predicting the properties and behavior of chemical systems. It is worth mentioning that there is no single EoS that can accurately describe the properties and behavior of all substances under all possible conditions. Many equations of state have been discussed in the existing literature (Redlich–Kwong, Peng–Robinson, Soave–Redlich–Kwong, Elliott–Suresh–Donohue, etc.). The ideal gas law is the most popular and widely known. It can be written as
A. Campuzano
School of Chemical, Biological and Environmental Engineering (CBEE), College of Engineering, Oregon State University, Corvallis, OR, USA
e-mail: [email protected]
School of Industrial Engineering, University of Cantabria, Santander, Spain

A. Iglesias (B) · A. Gálvez
Department of Information Science, Faculty of Sciences, Toho University, Funabashi, Japan
e-mail: [email protected]
A. Gálvez
e-mail: [email protected]
Department of Applied Mathematics and Computational Sciences, University of Cantabria, Santander, Spain

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_2
$$P\,V = n\,R\,T = \frac{m}{M_m}\,R\,T \qquad (1)$$
where m accounts for the mass of the gas; $M_m$, for the molar mass; n represents the number of moles; and R is the universal gas constant, with a value of 0.082 L·atm·K⁻¹·mol⁻¹. This equation works properly for low pressures and high temperatures. However, for cases in which the pressure is high and/or the temperature is low, the ideal gas law predictions differ from the observed experimental data. For these scenarios, there are several EoS that incorporate different corrections to approximate the behavior of real gases more accurately. One of the most famous EoS is the Van der Waals (VdW) equation, named after the Dutch physicist Johannes Van der Waals. It generalizes the ideal gas equation for real gases by taking into account the molecular size and the interaction forces between the molecules. For this purpose, it introduces two new parameters, a and b, whose values depend on each particular substance. The resulting EoS becomes

$$\left(P + a\left(\frac{n}{V}\right)^2\right)\left(\frac{V}{n} - b\right) = R\,T \qquad (2)$$
An isotherm curve can be represented graphically by fixing a value of the temperature T in (2) and then plotting P versus V in the P–V plane (see further details in Sect. 2). Analyzing the behavior of the substance for different isotherms, we can obtain a phase diagram that models graphically the coexistence and immiscibility regions of solid, liquid, and gas phases. The different areas corresponding to single-phase regions are separated by phase boundaries (i.e., curves of non-analytical behavior where phase transitions occur). In particular, the transition between gas and liquid phases for the VdW EoS can be further explained through the construction of two characteristic curves: the binodal and the spinodal curves. The former describes the conditions for two distinct phases to coexist, while the latter encloses the region of the system's instability. Between both curves, a metastable region is defined. It is convenient to note that neither the binodal nor the spinodal curve can be calculated analytically; instead, they are computed numerically by fitting a set of 2D points obtained for different isotherms. In particular, the sequence of points for the binodal curve is given by the leftmost roots, the critical point (the end point of the phase equilibrium curve, at which phase boundaries vanish and the liquid and gas phases can coexist), and the rightmost roots of the isotherms. In turn, the spinodal curve is the curve connecting the sequence of points given by the local minimum, the critical point, and the local maximum of the isotherms. Both sequences of points are determined by numerical procedures; then, polynomial data fitting is applied to obtain the characteristic curves (see [6, 8, 24] for details). This data fitting approach has several advantages, as polynomial functions are easy to plot and understand and can be computed efficiently. However, it also involves some difficulties: for instance, it requires providing an initial estimate of the degree
of the approximating polynomial. This choice of the degree is critically important for the proper performance of this method: high-degree polynomials provide good numerical fitting but tend to oscillate drastically and are prone to overfitting, while low-degree polynomials, in spite of being easier to work with, offer little flexibility and may lead to poor fitting. As a result, a balanced trade-off between both aspects is needed. Unfortunately, this is a problem-dependent issue and generally difficult to determine. Finally, polynomials in canonical form can be difficult to apply to dynamic data fitting, as their coefficients lack a geometric meaning.
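This trade-off can be illustrated with a short experiment. The following sketch (not taken from this chapter; the sampled curve, noise level, and degrees are made-up illustrative assumptions) fits polynomials of increasing degree to noisy samples with NumPy and reports the RMSE of each fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth curve (made-up data, for illustration only).
x = np.linspace(0.0, 1.0, 25)
y = np.exp(-3.0 * x) + rng.normal(scale=0.01, size=x.size)

def fit_rmse(degree):
    """Least-squares polynomial fit of the samples and its RMSE."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(residuals ** 2)))

# The RMSE shrinks as the degree grows (lower-degree polynomial spaces are
# nested in higher-degree ones), but high degrees start chasing the noise
# rather than the underlying curve.
for n in (2, 5, 9):
    print(n, fit_rmse(n))
```

The numerical error always decreases with the degree; the point of the argument above is that this gain comes at the price of oscillation and overfitting, which the RMSE alone does not reveal.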
1.2 Aims and Structure of This Chapter

It is possible to overcome all the limitations previously discussed by considering free-form parametric curves [15]. Among these, B-spline and Bézier curves are the most widely used in computer-aided design, manufacturing, and engineering (CAD/CAM/CAE), along with other industrial settings [2, 24]. They are very flexible and can adequately represent a wide range of continuous and smooth curves and surfaces [8]. Additionally, their coefficients have a clear geometric meaning. Owing to their simplicity, here we focus our attention on the specific case of Bézier curves. They are applied to perform least-squares approximation of the collection of 2D points associated with the binodal and spinodal curves. Since these curves are parametric, this task requires a proper data parameterization in addition to computing the poles of the curves. Unfortunately, this results in a continuous nonlinear optimization problem that is not properly addressed by standard mathematical optimization techniques [10, 11, 16, 17]. The problem is solved by applying a powerful swarm intelligence approach for continuous optimization: the cuckoo search algorithm. This new scheme is then applied to real data of a gas. Our experimental results are very promising, as the method can reconstruct the characteristic curves with high accuracy. A comparative study with other alternative methods has also been carried out, showing that our method is very competitive, as it outperforms the compared methods.

The chapter is structured as follows: Sect. 2 briefly describes the fundamentals of the problem to be addressed (a nonlinear optimization problem). Next, Sect. 3 summarizes the cuckoo search algorithm, the computational approach used here. Section 4 describes in detail the proposed methodology. Experimental results are reported in Sect. 5.
Some other interesting issues, such as the implementation details and computing times, as well as a comparison with four existing alternative methods, are also discussed in the same section. Last but not least, the main conclusions are drawn and some ideas for future work are proposed.
2 Problem to Be Solved

2.1 Background

In what follows, we consider the Van der Waals equation of state given by (2). For any value of T, it reduces to an equation relating P and V. Thus, given a set of fixed temperatures $T_1, T_2, \dots, T_M$, its graphical representation in the P–V plane is a set of curves called isotherms, each corresponding to a certain temperature $T_i$. As an example, Fig. 1 shows a set of 11 isotherms for argon corresponding to temperatures from 10 K to 510 K with step-size 50 K. Without loss of generality, we can assume n = 1 in Eq. (2) for simplicity. Multiplying it by $V^2/P$ and rearranging terms, we get a cubic polynomial in V:

$$V^3 - \left(b + \frac{R\,T}{P}\right)V^2 + \frac{a}{P}\,V - \frac{a\,b}{P} = 0 \qquad (3)$$
which has either one or three real roots. The first case occurs for temperatures T higher than the critical temperature, Tc , a value that is characteristic of each substance. A typical isotherm curve for this case is shown in green in Fig. 2. The second case happens for temperatures lower than Tc , when the isotherms oscillate up and down, as the one in blue in Fig. 2. Both cases are separated by the isotherm for T = Tc (in red in Fig. 2), when the three real roots merge into a single point (triple root),
Fig. 1 Plot of 11 isotherms of the argon for temperatures from T = 10 K (bottom curve) to T = 510 K (top curve) with step-size 50 K
Fig. 2 Sketch of the isotherms for the VdW EoS and temperatures above (green), equal to (red), and below (blue) the critical temperature Tc
called the critical point. In the following, we will focus on the case $T < T_c$, where the isotherms have three real roots, labeled in increasing order as $R_1$, $R_2$, and $R_3$. For a liquid-gas system, the end roots, $R_1$ and $R_3$, correspond to the liquid and the vapor phases, respectively. The second root, $R_2$, has no physical meaning; it is associated with an unstable molar volume and does not represent any real behavior. Suppose now that we start with a temperature $T < T_c$ and we increase it until reaching the critical value $T_c$. In that case, the molar volume of the saturated liquid (the liquid at the limit between the single-liquid and two-phase liquid/vapor behavior) gets larger and the molar volume of the saturated vapor (the vapor at the limit between the single-vapor and two-phase liquid/vapor behavior) gets smaller. This means that increasing the temperature makes the smallest and largest roots $R_1$ and $R_3$ move toward each other, until they merge into a single point for $T = T_c$, reaching the critical point, where the three roots become identical. At this critical point, vapor and liquid molar volumes are identical, meaning that there is no distinction between liquid and vapor. It can be proved [19, 26] that the critical values for the VdW EoS of a gas depend only on the parameters a and b as follows:

$$V_c = 3b, \qquad P_c = \frac{a}{27\,b^2}, \qquad T_c = \frac{8a}{27\,b\,R} \qquad (4)$$
It is convenient to work with dimensionless variables by considering the reduced temperature, pressure, and volume:

$$(T_r, P_r, V_r) = \left(\frac{T}{T_c}, \frac{P}{P_c}, \frac{V}{V_c}\right) \qquad (5)$$
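As a quick illustration of Eqs. (4)–(5), the following sketch computes the critical point from given VdW parameters and maps a state to reduced variables. The argon values a = 1.337 L²·atm·mol⁻² and b = 0.03201 L·mol⁻¹ used in the check are those reported later in Sect. 5.1; the code itself is an illustration, not the authors' implementation:

```python
def critical_point(a, b, R=0.082):
    """Critical volume, pressure, and temperature of the VdW EoS, Eq. (4)."""
    Vc = 3.0 * b
    Pc = a / (27.0 * b ** 2)
    Tc = 8.0 * a / (27.0 * b * R)
    return Vc, Pc, Tc

def reduced(T, P, V, a, b, R=0.082):
    """Reduced (dimensionless) variables of Eq. (5)."""
    Vc, Pc, Tc = critical_point(a, b, R)
    return T / Tc, P / Pc, V / Vc

# Argon parameters reported in Sect. 5.1 of this chapter:
a, b = 1.337, 0.03201            # L^2.atm.mol^-2 and L.mol^-1
Vc, Pc, Tc = critical_point(a, b)
print(Vc, Pc, Tc)                # Tc comes out close to the experimental 150.86 K
```

By construction, evaluating `reduced` at the critical state returns (1, 1, 1), which is why the critical point appears as the point (1, 1) in the reduced P–V plane used below.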
so Eq. (3) becomes

$$V_r^3 - \frac{1}{3}\left(1 + \frac{8\,T_r}{P_r}\right)V_r^2 + \frac{3}{P_r}\,V_r - \frac{1}{P_r} = 0 \qquad (6)$$
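Equation (6) is a cubic in $V_r$, so its roots are readily obtained numerically. A minimal sketch (the function name and the sample values of $T_r$ and $P_r$ are illustrative assumptions) using NumPy's companion-matrix root finder:

```python
import numpy as np

def isotherm_roots(Tr, Pr):
    """Real roots of the reduced VdW cubic, Eq. (6), in increasing order."""
    coeffs = [1.0, -(1.0 + 8.0 * Tr / Pr) / 3.0, 3.0 / Pr, -1.0 / Pr]
    r = np.roots(coeffs)
    # a loose imaginary-part tolerance keeps nearly-merged roots near Tc
    real = np.sort(r[np.abs(r.imag) < 1e-4].real)
    return real

# Below Tc (Tr < 1) a suitable pressure yields three real roots;
# at the critical point (Tr = Pr = 1) the three roots merge at Vr = 1.
print(isotherm_roots(0.9, 0.65))
print(isotherm_roots(1.0, 1.0))
```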
The isotherms for $T < T_c$ in Fig. 2 show something surprising: decreasing the volume from the right part makes the pressure rise, fall, and then rise again. This suggests that, for some molar volumes, compressing the fluid can cause its pressure to decrease, making the phase unstable to density fluctuations. Thus, the stable states of such isotherms are divided into two parts: the first one, on the left, is characterized by relatively small molar volumes and corresponds to the liquid states; the second part, on the right, is given by relatively large molar volumes and corresponds to the gas states. These two parts are separated by a set of unstable states. This striking behavior violates the conditions of stability for thermodynamic equilibrium. A solution to this situation was proposed by James Clerk Maxwell in [23] and is now known as Maxwell's equal-area rule. Basically, it states that the problem is fixed if the oscillating part of the isotherm between the diluted liquid and the diluted gas is replaced by a horizontal line. According to Maxwell's rule, the height of this horizontal line should be chosen such that the two regions enclosed by the isotherm curve and the line have the same area.
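Maxwell's equal-area rule can be sketched numerically for the reduced VdW isotherm $P_r(V_r) = 8T_r/(3V_r - 1) - 3/V_r^2$. Since the antiderivative of $P_r$ is analytic, the equal-area condition reduces to a one-dimensional root-finding problem in the pressure, solved here by bisection. This is an illustrative sketch under assumed values ($T_r = 0.9$ and the pressure bracket), not the iterative procedure actually used by the authors:

```python
import numpy as np

def outer_roots(Tr, P):
    """Smallest and largest real roots of the isotherm at pressure P."""
    coeffs = [1.0, -(1.0 + 8.0 * Tr / P) / 3.0, 3.0 / P, -1.0 / P]
    r = np.roots(coeffs)
    real = np.sort(r[np.abs(r.imag) < 1e-9].real)
    return real[0], real[-1]

def area_difference(Tr, P):
    """Equal-area residual: integral of (Pr(V) - P) between the outer roots.

    Uses the analytic antiderivative of the reduced VdW isotherm:
    integral of Pr dV = (8*Tr/3) * ln(3V - 1) + 3/V.
    """
    V1, V3 = outer_roots(Tr, P)
    F = lambda V: (8.0 * Tr / 3.0) * np.log(3.0 * V - 1.0) + 3.0 / V
    return F(V3) - F(V1) - P * (V3 - V1)

def maxwell_pressure(Tr, lo, hi, iters=60):
    """Bisection on P until the two lobes have equal area."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # area_difference is strictly decreasing in P, so bracket accordingly
        if area_difference(Tr, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Saturation pressure (reduced units) at Tr = 0.9; the bracket is chosen so
# that the isotherm has three real roots throughout (illustrative values).
Psat = maxwell_pressure(0.9, 0.55, 0.7)
print(Psat)
```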
2.2 Obtaining Data Points for the Binodal and Spinodal Curves

Now, we can generate the list of data points for the binodal curve as follows: we start by considering a set of increasing temperatures $T_1 < T_2 < \cdots < T_M < T_c$. For each temperature $T_j$ ($j = 1, \dots, M$), there exists a value of the pressure, say $P_j^*$, for which the areas of the isotherm curve below and above the horizontal line $P = P_j^*$ are equal. Such a value is computed by applying an iterative optimization procedure that minimizes the difference between both areas, starting from an initial guess $\tilde{P}_j$. After this step, the corresponding roots of the isotherm, denoted as $R_k^j$ for $k = 1, 2, 3$, can be computed as the intersection of the isotherm curve for $T_j$ and the horizontal line $P = P_j^*$. Then, the list of data points for the binodal curve, $\mathcal{B}$, is given by

$$\mathcal{B} = \left\{(R_1^j, P_j^*)\right\}_{j=1,\dots,M},\; (1, 1),\; \left\{(R_3^{M+1-j}, P_{M+1-j}^*)\right\}_{j=1,\dots,M} \qquad (7)$$
The local maxima and minima of the isotherm curves are very useful to analyze the dynamics and properties of phase transitions. The values for which $dP/dV > 0$ correspond to unstable states of a substance. Considering the part of the isotherms located inside the area enclosed by the binodal curve, this situation occurs for the values of the volume between the local minimum, $\mathbf{l}_j$, and the local maximum, $\mathbf{L}_j$, of the isotherm of temperature $T_j$. Note that vectors are denoted in bold. These local
optima are computed by solving $dP/dV = 0$ and checking the sign of the second derivative, $d^2P/dV^2$, at the obtained solutions. Then, the list of data points for the spinodal curve, $\mathcal{S}$, is given by
$$\mathcal{S} = \left\{\mathbf{l}_j\right\}_{j=1,\dots,M},\; (1, 1),\; \left\{\mathbf{L}_j\right\}_{j=1,\dots,M} \qquad (8)$$
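For the reduced VdW isotherm, the condition $dP/dV = 0$ clears to a cubic, so the spinodal points can be sketched directly. This is an illustration with an assumed value of $T_r$; the chapter's own procedure also checks the sign of the second derivative:

```python
import numpy as np

def Pr(Vr, Tr):
    """Reduced VdW isotherm Pr(Vr) at fixed Tr."""
    return 8.0 * Tr / (3.0 * Vr - 1.0) - 3.0 / Vr ** 2

def spinodal_volumes(Tr):
    """Volumes where dPr/dVr = 0 for the reduced VdW isotherm.

    Differentiating Pr and clearing denominators gives the cubic
    4*Tr*Vr^3 - 9*Vr^2 + 6*Vr - 1 = 0; only roots with Vr > 1/3
    (beyond the covolume) are physically meaningful.
    """
    r = np.roots([4.0 * Tr, -9.0, 6.0, -1.0])
    real = np.sort(r[np.abs(r.imag) < 1e-9].real)
    return real[real > 1.0 / 3.0 + 1e-9]

Vmin, Vmax = spinodal_volumes(0.9)   # local-minimum and local-maximum volumes
print((Vmin, Pr(Vmin, 0.9)), (Vmax, Pr(Vmax, 0.9)))
```

Together with the point (1, 1) these local optima are exactly the ingredients of the list $\mathcal{S}$ in Eq. (8).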
2.3 Data Fitting of the Characteristic Curves

Once the data points for the binodal and spinodal curves are obtained, standard numerical routines for data fitting are applied to reconstruct such curves. Two different approaches can be applied to this problem: interpolation and approximation. Interpolation requires the fitting function to pass through all the data points. This condition is less strict for approximation, where the function is only required to pass near the data points, according to some given metric. Since our data is affected by noise, irregular sampling, and other elements of disturbance, approximation is preferred, often in the form of least-squares optimization. In that case, we want to minimize the error functional F given by the sum of squared residuals, where the residual for the i-th data point is defined as the difference between the observed value $d_i$ and the fitted value $\hat{d}_i$:

$$F = \sum_{i=1}^{m} \left(d_i - \hat{d}_i\right)^2 \qquad (9)$$
where m accounts for the number of data points, and the fitted data are obtained from a certain fitting model function $\mathcal{G}$. Note that the minimization process is performed on the free variables of $\mathcal{G}$. For the problem in this chapter, $\mathcal{G}$ is generally assumed to be a polynomial of a certain degree. As discussed in Sect. 1.1, this choice has several advantages but also serious shortcomings, which can be avoided through free-form parametric curves. Here we consider the specific case of Bézier curves, as described in the next section.
2.4 Data Fitting Through Bézier Curves

In this section, we assume that the reader is familiar with the main concepts of free-form parametric curves [8, 15]. A free-form Bézier curve C(t) of degree n is defined as

$$C(t) = \sum_{j=0}^{n} P_j\, C_j^n(t) \qquad (10)$$
$$C_j^n(t) = \binom{n}{j}\, t^j (1 - t)^{n-j} \qquad (11)$$
with $\binom{n}{j} = \frac{n!}{j!\,(n-j)!}$, t being the curve parameter, defined on the unit interval [0, 1]. We take 0! = 1 by convention. Suppose that we have a list of 2D points $\{D_d\}_{d=1,\dots,m}$ in $\mathbb{R}^2$, such as those obtained for the binodal and spinodal curves, as described in Sect. 2.2. Our aim is to compute the curve C(t) achieving a good approximation of the data points, according to the least-squares method. To achieve this goal, the least-squares error $E$ has to be minimized, where $E$ is expressed as

$$E = \sum_{d=1}^{m} \left( D_d - \sum_{j=0}^{n} P_j\, C_j^n(t_d) \right)^2 \qquad (12)$$
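Equations (10)–(12) translate directly into code. The following is a minimal, self-contained sketch (function names and the sample poles are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from math import comb

def bernstein(n, j, t):
    """Bernstein polynomial C_j^n(t) of Eq. (11), vectorized over t."""
    t = np.asarray(t, dtype=float)
    return comb(n, j) * t ** j * (1.0 - t) ** (n - j)

def bezier(poles, t):
    """Evaluate the Bezier curve of Eq. (10) at the parameter values t."""
    n = len(poles) - 1
    basis = np.stack([bernstein(n, j, t) for j in range(n + 1)], axis=1)
    return basis @ np.asarray(poles, dtype=float)

def lsq_error(data, poles, t):
    """Least-squares functional E of Eq. (12)."""
    residual = np.asarray(data, dtype=float) - bezier(poles, t)
    return float(np.sum(residual ** 2))

# Sanity check: a quadratic arc sampled at its own parameters gives E = 0.
t = np.linspace(0.0, 1.0, 7)
poles = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
data = bezier(poles, t)
print(lsq_error(data, poles, t))
```

Note that the Bernstein basis forms a partition of unity (it sums to one for every t), which is what gives the poles their geometric meaning as a control polygon.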
where there is a parameter value $t_d$ associated with each data point $D_d$, $d = 1, \dots, m$. Given the vectors $C_j = (C_j^n(t_1), \dots, C_j^n(t_m))^T$, $j = 0, \dots, n$, where $(\cdot)^T$ denotes transposition, and $\bar{D} = (D_1, \dots, D_m)$, Eq. (12) leads to the following system of equations, called the normal equation:

$$\begin{pmatrix} C_0^T \cdot C_0 & \dots & C_n^T \cdot C_0 \\ \vdots & \ddots & \vdots \\ C_0^T \cdot C_n & \dots & C_n^T \cdot C_n \end{pmatrix} \begin{pmatrix} P_0 \\ \vdots \\ P_n \end{pmatrix} = \begin{pmatrix} \bar{D} \cdot C_0 \\ \vdots \\ \bar{D} \cdot C_n \end{pmatrix} \qquad (13)$$
that can be condensed as

$$M\,P = R \qquad (14)$$

with $M = \left[\sum_{j=1}^{m} C_l^n(t_j)\, C_i^n(t_j)\right]$, $P = (P_0, \dots, P_n)^T$, and $R = \left[\sum_{j=1}^{m} D_j\, C_l^n(t_j)\right]$
for $i, l = 0, 1, \dots, n$. If we assign values to the $t_i$, Eq. (14) becomes a classical linear least-squares minimization, which can easily be solved using standard numerical procedures. However, if the $t_i$ are unknowns, the difficulty of the problem increases significantly. Given the nonlinearity in t of the blending functions $C_j^n(t)$, it is a nonlinear continuous problem. It is also multimodal, as there is possibly more than one data parameterization vector leading to the optimal solution [24]. In conclusion, taking all this into account, solving the discussed parameterization results in a difficult multivariate, multimodal, nonlinear, continuous optimization problem, which is not solvable by classic mathematical techniques [6]. This challenge is addressed in this chapter through the cuckoo search algorithm, a powerful evolutionary method already applied in the past to data fitting, as shown in several previous works [12, 18]. The algorithm is described in some detail in the following section.
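For fixed parameters $t_i$, solving Eq. (14) is indeed routine. The following sketch builds the Bernstein collocation matrix and solves the linear least-squares subproblem with an SVD-based routine; the example data and the use of NumPy's `lstsq` (rather than the specific LU/SVD variants the authors cite) are illustrative assumptions:

```python
import numpy as np
from math import comb

def bernstein_matrix(t, n):
    """Collocation matrix whose columns are the C_j^n evaluated at t."""
    t = np.asarray(t, dtype=float)
    return np.stack([comb(n, j) * t ** j * (1.0 - t) ** (n - j)
                     for j in range(n + 1)], axis=1)

def bezier_poles(data, t, n):
    """Solve the linear subproblem of Eq. (14) for fixed parameters t."""
    A = bernstein_matrix(t, n)
    poles, *_ = np.linalg.lstsq(A, np.asarray(data, dtype=float), rcond=None)
    return poles

# With the correct parameterization, the poles of a sampled cubic Bezier
# curve are recovered exactly (up to round-off).
t = np.linspace(0.0, 1.0, 10)
true_poles = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.5]])
data = bernstein_matrix(t, 3) @ true_poles
poles = bezier_poles(data, t, 3)
print(poles)
```

The hard part, as the text explains, is choosing the $t_i$ themselves: perturbing them changes the matrix nonlinearly, and this is precisely the subproblem handed to the cuckoo search algorithm.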
3 The Cuckoo Search Algorithm

Metaheuristic algorithms inspired by behaviors observed in nature are increasingly popular as a means to address optimization problems [3, 4]. One of them is cuckoo search (CS). Originally conceived by Yang and Deb in 2009 [33], it is based on the obligate brood parasitism that some cuckoo species use as their reproductive strategy. This interesting behavior consists in delegating the parental role to other birds, generally from other species. To achieve this, the cuckoos randomly choose host nests to lay their eggs in, and they can distribute their eggs among several chosen host nests. Some cuckoos can even mimic the color, pattern, or even incubation period of the host's eggs, in order to guarantee that the offspring will be raised by the host bird as if it were its own. These sophisticated tactics decrease the probability that the host bird identifies the parasitic eggs and, therefore, increase the cuckoo's own reproductive success. Of course, there is still a chance that the host discovers the alien eggs. Should this happen, it will either push them away or abandon the nest, building a new one at some random location. The CS approach applied to optimization problems is a reflection of this remarkable breeding and behavioral pattern. In the algorithm, the eggs initially existing in the host nests play the role of a pool of candidate solutions, and every incoming cuckoo egg represents a brand-new, potentially better solution. This process of replacement, applied iteratively, eventually converges to an optimal solution. Additionally, the CS algorithm is built on the idealized assumptions of three rules [33, 34]:

1. Every parasitic bird lays one egg at a given time. These eggs are dropped in randomly chosen nests.
2. Those nests with high-quality eggs (high-quality solutions) are carried over to the next iteration (generation).
3.
The number of available host nests is fixed. Furthermore, a parasitic egg can be discovered by its host with a probability $p_a \in [0, 1]$.

For simplicity, the last assumption can be approximated by a fraction $p_a$ of the total n nests being replaced by new nests containing new (random) solutions. For instance, for a maximization problem, the quality or suitability of a solution might simply be proportional to the value of the objective function. Nevertheless, the quality function can also be defined by other, more complex expressions measuring how close the solution is to the desired goals. Taking these rules into account, the steps of the CS algorithm can be summarized as shown in the pseudocode of Table 1. Given an objective function, the iterative process begins with an initial population of n selected host nests. At the beginning, the initial value of the jth component of the ith nest is determined by the expression $x_i^j(0) = rnd \cdot (U_i^j - L_i^j) + L_i^j$, where $U_i^j$ and $L_i^j$ are the upper and lower bounds of that component, respectively, and rnd is a uniform random number in the open interval (0, 1). This choice is key to ensure that the variables' initial
Table 1 Summary of cuckoo search via Lévy flights [33, 34]

Algorithm: Cuckoo Search via Lévy Flights
begin
    Objective function f(x), x = (x1, . . . , xD)^T
    Create an initial population of n host nests xi (i = 1, 2, . . . , n)
    while (t < MaxGeneration) or (stopping criterion)
        Get a random individual (say, i) by Lévy flights
        Compute its fitness Fi
        Select a nest among the n (say, j) randomly
        if (Fi > Fj)
            Replace j by the new solution
        end
        A percentage (pa) of the worse nests are abandoned and new ones are created via Lévy flights
        Preserve the best solutions (or nests with quality solutions)
        Rank all solutions, then find the current best
    end while
    Post-processing of results
end
values remain inside the desired domain. Such boundary conditions are enforced at every step of the iteration. At every iteration k, one random cuckoo i is chosen and brand-new solutions $x_i(k+1)$ are created through a Lévy flight, i.e., a random walk in which the step lengths follow a heavy-tailed probability distribution, while the direction of the steps is random and isotropic. The use of Lévy flights is considered better than alternative options, since it results in a better overall performance of cuckoo search. The expression associated with the Lévy flight reads as follows:

$$x_i(k+1) = x_i(k) + \alpha \oplus \text{levy}(\lambda) \qquad (15)$$

where k represents the current iteration, and α > 0 accounts for the step length, related to the problem size. The notation ⊕ indicates entry-wise multiplication. It is worth remarking that Eq. (15) is basically a Markov chain: the position at iteration k + 1 depends only on the position at iteration k and a transition probability, given respectively by the first and second terms of Eq. (15). Such probability is controlled by the Lévy distribution as

$$\text{levy}(\lambda) \sim g^{-\lambda}, \qquad (1 < \lambda \le 3) \qquad (16)$$
which exhibits infinite mean and variance. In this case, the steps form a random walk process whose step lengths follow a fat-tailed distribution. From the computational
standpoint, the process of generating random numbers using Lévy flights can be summarized in two steps: first, a random direction is selected according to the uniform distribution; then, steps are generated according to the chosen distribution. The use of Mantegna's algorithm for symmetric distributions [33] is suggested, where the term "symmetric" means that both negative and positive steps are taken into consideration (see [34] for details). This approach leads to the factor:

$$\hat{\phi} = \left( \frac{\Gamma(1 + \hat{\beta}) \cdot \sin\left(\frac{\pi \hat{\beta}}{2}\right)}{\Gamma\left(\frac{1 + \hat{\beta}}{2}\right) \cdot \hat{\beta} \cdot 2^{\frac{\hat{\beta} - 1}{2}}} \right)^{1/\hat{\beta}} \qquad (17)$$
with Γ denoting the Gamma function and $\hat{\beta} = \frac{3}{2}$ in Yang and Deb's original implementation [34]. This factor is used to compute the step length, s, in Mantegna's algorithm, as described in a simple scheme presented in detail by Yang [34, 35]. Next, the step size η is calculated as

$$\eta = 0.01\, \varsigma\, (x - x_{best}) \qquad (18)$$
where ς is computed as described in [34, 35]. Finally, x is updated as $x \leftarrow x + \eta \cdot \Upsilon$, with Υ being a randomly chosen vector of the same size as the solution x, following the normal distribution N(0, 1). Then, the fitness of the new solution is evaluated and compared with the current one. If the new fitness is better than the previous one, the former solution is replaced. Additionally, a fraction of the worst nests (in terms of their fitness) are abandoned and replaced by new nests with new solutions, enlarging the explored space in search of more promising results. The replacement rate is determined by the probability $p_a$, a parameter that needs tuning for better performance. Furthermore, at each iteration, the existing solutions are ranked according to their fitness, with the best solution being stored as a vector $x_{best}$ (used in Eq. (18)). The CS algorithm is applied iteratively until a stopping condition is met. Usual stopping criteria are finding a solution below a given threshold value, reaching a prescribed number of iterations, or a number of successive iterations failing to produce any better results.
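The procedure of this section can be condensed into a short program. The following is a minimal, illustrative sketch of cuckoo search with Mantegna's Lévy steps, minimizing the sphere function as a stand-in objective; the parameter values and implementation details (e.g., how nests are abandoned) are simplifying assumptions, not the authors' code:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-stable step lengths, using Eq. (17)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lower, upper, n_nests=25, pa=0.25, iters=500, seed=0):
    """Minimal cuckoo search minimizer (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    nests = rng.uniform(lower, upper, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        # global random walk via Levy flights around the best nest, Eq. (18)
        i = rng.integers(n_nests)
        step = 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
        new = np.clip(nests[i] + step * rng.normal(size=dim), lower, upper)
        j = rng.integers(n_nests)
        if f(new) < fitness[j]:
            nests[j], fitness[j] = new, f(new)
        # abandon a fraction pa of the worst nests (the best is never abandoned)
        n_bad = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_bad:]
        nests[worst] = rng.uniform(lower, upper, (n_bad, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())

# Minimize the sphere function as a quick check.
best, val = cuckoo_search(lambda x: float(np.sum(x ** 2)),
                          lower=[-5, -5], upper=[5, 5])
print(best, val)
```

Because the best nest is never among those abandoned, the population minimum is monotonically non-increasing, matching the "preserve the best solutions" step of Table 1.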
4 The Method

4.1 Workflow of the Method

As explained in Sect. 1.1, the VdW EoS in (2) depends on two parameters, a and b, whose values are specific to each chemical substance. The input of our problem consists of such parameters a and b and a list of temperatures $T_1 < T_2 < \cdots < T_M$
assumed to be below the corresponding critical temperature, $T_c$. Our method consists of the following steps:

(i) Compute the critical values $V_c$, $P_c$, $T_c$ using (4).
(ii) Compute the reduced variables $V_r$, $P_r$, $T_r$ through (5).
(iii) Compute the isotherms from (2) at the input temperatures $T_j$, $j = 1, \dots, M$.
(iv) For each isotherm of temperature $T_j$:
    (iv-a) Consider an initial guess $\tilde{P}_j$ and perform optimization to compute the value of $P_j^*$, according to Maxwell's rule.
    (iv-b) Use $P_j^*$ to compute the roots of (6), as described in Sect. 2.2.
    (iv-c) Obtain the local optima of (6), as described in Sect. 2.2.

The output of steps (iv-b) and (iv-c) is the lists $\mathcal{B}$ and $\mathcal{S}$ of data points for the binodal and the spinodal curves, given by (7) and (8), respectively.

(v) Perform data fitting with Bézier curves on $\mathcal{B}$ and $\mathcal{S}$ as follows:
    (v-a) Apply the cuckoo search algorithm to perform data parameterization for the lists $\mathcal{B}$ and $\mathcal{S}$ (see Sect. 4.2 for details).
    (v-b) Apply least-squares optimization to obtain the poles of the curve. The resulting system of equations can be solved by well-known numerical procedures, such as the standard LU decomposition, singular value decomposition (SVD), or a modification of the LU decomposition for non-squared sparse problems (see [25] for details).

Step (v-a) of this workflow is the most critical part of the method and also the main contribution of this chapter. It will be explained in detail in the next section.
4.2 Cuckoo Search Algorithm for Data Fitting

Now, the method explained in Sect. 3 will be used to perform data parameterization with Bézier curves. To this purpose, the following issues must be addressed:

1. Encoding of individuals. First, an adequate representation of the free variables of our optimization problem is required. The individuals in our method, denoted as $N^k$, are real-valued vectors of length M with components in the interval [0, 1], related to the parameterization of the data:

$$N^k = (N_1^k, N_2^k, \dots, N_M^k) \subset [0, 1]^M \qquad (19)$$
All individuals $\{N^k\}_k$ are initialized with random numbers uniformly distributed on the parametric domain. The components $\{N_i^k\}_i$ are then arranged in increasing order to replicate the layout of the parameterization.

2. Fitness function. The target function is given by the least-squares error in (12). This function does not account for the size of the data, so we also evaluate the RMSE (root-mean-square error), given as
$$\text{RMSE} = \sqrt{\frac{E}{M}} \qquad (20)$$
3. Curve parameters. The only curve parameter here is the curve degree, n, which also determines the number of poles. In this chapter, its optimal value is determined empirically. To do so, we compute the RMSE for values of n ranging from n = 2 to n = 9, and the value minimizing the RMSE is finally selected.

4. Cuckoo search algorithm parameters. As is well known, metaheuristic methods depend on some parameters, which need to be tuned for good performance [7]. This task is challenging, as the suitable parameter values depend on the problem under analysis. In this chapter, their choice is based on a large collection of computational trials. Fortunately, cuckoo search is especially advantageous in this respect. Contrary to other metaheuristics depending on many parameters, our algorithm depends on only three:

• Population size, $N_P$: this value is set to $N_P = 100$ in this chapter. We also tested larger populations, up to 300 individuals, but the results do not change noticeably. Since larger populations mean longer CPU times with no noticeable improvement, we find this value appropriate.
• Number of iterations, $T_{max}$: it is important to execute the algorithm until convergence. In all our simulations, we run the cuckoo search algorithm for $T_{max} = 2{,}500$ iterations, which is enough to reach convergence in all our executions.
• Probability $p_a$: similar to other previous works [12], we checked values from 0.1 to 0.9 with steps of size 0.05, and found that $p_a = 0.25$ leads to the best performance, so this is the value taken in this work.

After selecting those parameters, the cuckoo search algorithm is run for the prescribed number of iterations. Then, the individual with the best fitness for (20) is chosen as the best global solution to the problem.
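The fitness evaluation of this section can be sketched as follows: for one candidate parameterization (one individual), the poles are obtained from the linear least-squares subproblem of Eq. (14), and the RMSE of Eq. (20) is returned. The sample data (points on a parabolic arc) and the function names are illustrative assumptions:

```python
import numpy as np
from math import comb

def fitness(t, data, n):
    """RMSE fitness, Eq. (20), of one candidate parameterization t.

    For a fixed individual t the poles follow from a linear
    least-squares solve, so each fitness evaluation is cheap.
    """
    t = np.sort(np.asarray(t, dtype=float))      # keep components increasing
    A = np.stack([comb(n, j) * t ** j * (1.0 - t) ** (n - j)
                  for j in range(n + 1)], axis=1)
    data = np.asarray(data, dtype=float)
    poles, *_ = np.linalg.lstsq(A, data, rcond=None)
    E = float(np.sum((data - A @ poles) ** 2))
    return np.sqrt(E / len(data))

rng = np.random.default_rng(1)
s = np.linspace(0.0, 1.0, 10)
data = np.stack([s, s ** 2], axis=1)         # points on a parabolic arc
good = s                                     # the parameters that generated the data
bad = np.sort(rng.uniform(0.0, 1.0, 10))     # one random individual
print(fitness(good, data, 3), fitness(bad, data, 3))
```

With the true parameter values the fit is exact, while a random individual yields a strictly larger RMSE; this difference is precisely the signal that the cuckoo search exploits when evolving the parameterizations.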
5 Experiments and Results

5.1 Application to a Real Example: Argon

Our method has been applied to the VdW EoS of a chemical component: argon (Ar), with atomic number 18. Argon is the third-most abundant gas in the Earth's atmosphere and the most abundant noble gas in the Earth's crust. It is widely used in welding and other high-temperature industrial processes. Its parameter values for the VdW EoS are [27]: a = 1.337 L².atm.mol⁻², and b = 0.03201 L.mol⁻¹. The critical temperature is Tc = 150.86 K, with an uncertainty of 0.1 K according to [1, 13]. We applied Steps (1)–(3) of our workflow to the sets of temperatures {130, 133, 135, 137, 140, 142, 145, 147, 148, 149, Tc} and {128, 130, 133, 135, 137, 140, 142, 145, 147, 148, 149, 150.2, Tc} for the binodal and the spinodal curves, respectively. Then Step (4) is applied to obtain the lists B and S of data points for the binodal and the spinodal curves. The optimization process in Step (iv-a) is carried out through standard polynomial linear fitting using the Vandermonde matrix. Then, the procedure described in Sect. 4.2 is applied to perform data parameterization (Step (v-a) of our workflow). Finally, pole computation is achieved by solving the resulting linear system through SVD-based numerical routines.
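The polynomial linear fitting through the Vandermonde matrix mentioned in Step (iv-a) can be sketched as follows (a plain-Python illustration via the normal equations; the function name is illustrative, and for ill-conditioned systems an SVD-based solve, as used for the pole computation, is preferable):

```python
def polyfit_vandermonde(xs, ys, degree):
    # Least-squares polynomial fit via the Vandermonde matrix V (V[k][j] = xs[k]**j):
    # solve the normal equations (V^T V) c = V^T y by Gaussian elimination.
    m, n = len(xs), degree + 1
    V = [[x ** j for j in range(n)] for x in xs]
    A = [[sum(V[k][i] * V[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    b = [sum(V[k][i] * ys[k] for k in range(m)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs  # coeffs[j] multiplies x**j

# Data generated from y = 1 + 2x + 3x^2 is recovered (up to rounding)
c = polyfit_vandermonde([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 6.0, 17.0, 34.0, 57.0], 2)
```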
5.2 Computational Results

The previously presented method is used for data fitting of the binodal and spinodal curves for different degrees of the approximating function, given by the parameter n. From Eqs. (10)–(11), it is clear that high-degree values should be avoided in order to prevent numerical errors. Therefore, we restrict the values of n to the range 2–9.

To remove any stochastic effect and avoid premature convergence, 30 independent executions are run for every n within that range. Then, the 10 worst simulations are rejected to prevent the spurious effects of unstable behaviors. Tables 2 and 3 report our computational results for curve degrees from n = 2 to n = 9 (in rows). For these tables, the following data are listed (in columns): the curve degree, the best RMSE (out of the 30 simulations), and the mean RMSE (over the 20 best simulations). We have also checked for overfitting (column 4): the symbol ◦ indicates that overfitting occurs, and the symbol Ś is used otherwise.

Finally, we also analyzed the stability of the solutions (column 5), an indicator of whether or not several executions for the same values yield qualitatively different solutions. Stability is ranked from 1 (very low) to 5 (very high), using the diamond symbol ˛ repeated as many times as the value. This ranking is determined by computing the stability rate factor ρ, obtained as the ratio between the number of similar solutions and the number of simulations performed, where two solutions are considered similar if their respective control polygons (the piecewise polylines joining the poles) have the same number and sequence of concavities and the same number of self-intersections. Such a rate, taking values from 0 to 1, is then classified into five quintiles: values 0.8 < ρ ≤ 1 lie in the upper quintile, values 0.6 < ρ ≤ 0.8 lie in the next quintile, and so on.
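The quintile-based stability ranking just described can be sketched as follows (Python illustration; the function name is hypothetical):

```python
def stability_rating(n_similar, n_runs):
    # Stability rate rho = similar solutions / runs, mapped to quintiles:
    # rho <= 0.2 -> 1 (very low), ..., 0.8 < rho <= 1 -> 5 (very high)
    rho = n_similar / n_runs
    for rating, upper in enumerate((0.2, 0.4, 0.6, 0.8, 1.0), start=1):
        if rho <= upper:
            return rating

# 18 similar solutions out of 20 runs: rho = 0.9 falls in the upper quintile
print('˛' * stability_rating(18, 20))
```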
Several useful conclusions can be extracted from the results in Tables 2 and 3. As the numerical error values show, the method performs extremely well, except for the lowest degree, n = 2, which yields the worst results in our simulations. This can be explained by the fact that neither the binodal nor the spinodal curve is really a parabola, so they cannot be fitted well with a quadratic curve. The best and mean RMSE for degree n ≥ 3 are, respectively, of order 10−7 for the binodal curve and 10−6 for the spinodal curve in the best cases, and of order 10−5 for the binodal curve and 10−4 for the spinodal curve in the worst cases. In particular, the best fitting results for the binodal curve are obtained for n = 3, with an RMSE of order 10−7. Good results (of order 10−6) can also be obtained for n = 4
Table 2 Computational results for binodal curve with cuckoo search algorithm

Degree   RMSE (best)   RMSE (mean)   Overfitting   Stability
n = 2    9.7554E−5     5.6653E−5     Ś             ˛˛˛
n = 3    8.8238E−7     9.2121E−7     Ś             ˛˛˛˛˛
n = 4    9.8274E−6     1.0095E−5     Ś             ˛˛˛˛˛
n = 5    3.3710E−6     4.7625E−6     Ś             ˛˛˛˛˛
n = 6    4.3470E−6     6.2708E−6     ◦             ˛˛˛˛
n = 7    5.7215E−6     8.9314E−6     ◦             ˛˛˛˛
n = 8    8.0233E−6     1.1364E−5     ◦             ˛˛˛˛
n = 9    7.3972E−6     3.5884E−5     ◦             ˛˛˛˛
Table 3 Computational results for spinodal curve with cuckoo search algorithm

Degree   RMSE (best)   RMSE (mean)   Overfitting   Stability
n = 2    6.5943E−4     1.1292E−3     Ś             ˛˛
n = 3    3.4997E−6     5.9436E−6     Ś             ˛˛˛˛˛
n = 4    1.0655E−6     1.8997E−6     Ś             ˛˛˛˛˛
n = 5    5.5023E−6     9.3258E−6     ◦             ˛˛˛˛
n = 6    8.2757E−6     3.3957E−5     ◦             ˛˛˛˛
n = 7    1.7334E−5     5.6443E−5     ◦             ˛˛˛˛
n = 8    3.0119E−5     7.9673E−5     ◦             ˛˛˛˛
n = 9    3.6942E−4     7.4102E−4     ◦             ˛˛˛˛
and n = 5, although they are slightly worse than those for n = 3. This suggests that four poles are enough for optimal fitting. Higher degrees also yield good numerical results, still of order 10−6 for the best RMSE and of order 10−5–10−6 for the mean RMSE. In fact, we have observed that the best RMSE values from n = 4 to n = 9 are generally of the same order, which is consistent with the fact that high-degree functions provide extra degrees of freedom (DOFs) and hence tend to fit the
data points better. Obviously, this comes at the expense of higher model complexity, so some trade-off between the accuracy of the model and its complexity might be required. Another problem is that too many DOFs might lead to overfitting. We have checked for this overfitting problem through several computational trials. From our simulations, we found that overfitting happens for models of degree n ≥ 6 for the binodal curve, a clear indication that we are approximating the data points with more parameters than are actually needed. This means that only low-degree curves should be considered in order to apply the model as a predictor for unsampled values of the temperature. Regarding the stability of the solutions, values of n between 3 and 5 provide the most stable solutions, while the value n = 2 shows only medium stability. Increasing the value of n beyond 5 leads to highly (but not optimally) stable solutions, as the poles sometimes tend to oscillate and the control polygon occasionally exhibits self-intersections.

The numerical results for the spinodal curve are quite similar, but the fitting errors are generally slightly worse. The best fitting results in this case are obtained for n = 4, with an RMSE of order 10−6. Good results (also of order 10−6) can be obtained for n = 3, 5, and 6 as well, although they are slightly worse than those for n = 4. This indicates that five poles are the optimal value for a proper fitting, and higher values of n increase the fitting error, by a small margin for values of n near n = 4 and to a larger extent for high degrees. This also means that the spinodal curve has a more complex shape than the binodal curve, although not dramatically so. Note also that overfitting occurs for degrees n ≥ 5, so such values should be avoided. Finally, regarding the stability of the solutions, the most stable solutions are obtained for n = 3, 4, while the value n = 2 exhibits low stability.
Increasing the value of n beyond 5 leads to a stability pattern quite similar to that of the binodal curve.

Our good numerical results are also confirmed graphically. Figure 3 (top) shows the best fitting curves obtained for the binodal (left) and spinodal (right) curves. Both the original and the computed data points are shown, as yellow filled squares and blue hexagons, respectively. The figure also shows the best Bézier fitting curve, represented as a red solid line. Notice the excellent matching between reconstructed and original data, as well as the excellent fit provided by the Bézier curve.
5.3 Implementation Details and Computing Times

The computational work in the present chapter was performed on a 3.4 GHz Intel Core i7 processor with 8 GB of RAM. The code was written by the authors in Matlab (version 2018b) using its native programming language. Regarding CPU times, we found that our method is very competitive: a single run takes only a few seconds to complete. It is difficult to estimate the computation time of a single execution in advance, as it depends on the degree of the fitting curve, the number of data points, the configuration of the computer used for the simulations, and
Fig. 3 Best Bézier fitting curve (top) and cuckoo search algorithm convergence diagram (bottom; fitness value versus number of iterations) for the binodal (left) and spinodal (right) curves
other factors. For illustration, a typical single execution for our example, on the computer configuration used in this work, takes about 15–20 s for the binodal curve and about 20–25 s for the spinodal curve. This is in contrast with other swarm intelligence methods, which can require several minutes for a single execution to finish. For comparison, a typical execution of the firefly algorithm takes about 18–22 min, while the bat algorithm takes about 30–90 s.

Figure 4 shows the CPU time (in seconds) required for data fitting of the binodal and the spinodal curves (represented by triangle and square symbols, respectively) for the best value of the parameter n through the three compared methods: firefly, bat, and cuckoo search. A logarithmic scale is used on the vertical axis to display the CPU times for 20 independent executions in our experiments (represented on the horizontal axis). The CPU times for the firefly, bat, and cuckoo search algorithms appear in the upper, middle, and lower parts of the figure, respectively. The figure clearly shows that the CPU times of the cuckoo search algorithm are very good and significantly outperform those of the firefly and bat algorithms in all executions. These CPU times improve the applicability of our method and are one of the most interesting features of this approach for practical settings.
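Per-run CPU times such as those reported here can be collected with a simple harness (Python sketch; the chapter's timings were taken in Matlab, and the workload below is a dummy stand-in for one fitting run):

```python
import time

def time_runs(task, n_runs=20):
    # Wall-clock time of repeated independent executions, one entry per run
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        task()
        times.append(time.perf_counter() - t0)
    return times

# Toy usage: a dummy workload standing in for one fitting run
run_times = time_runs(lambda: sum(i * i for i in range(10000)), n_runs=5)
```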
Fig. 4 CPU time (in seconds and following a logarithmic scale) for data fitting of the binodal (triangle symbols) and spinodal (square symbols) curves for 20 independent executions using the three compared methods: firefly (upper part), bat (middle part), and cuckoo search (lower part)
5.4 Comparative Analysis

It is always convenient to compare this new method with reported alternative methods. We have carried out a comparative study of four alternative methods, which can be classified into two groups. First, we consider two recently reported methods: polynomial curve fitting and artificial neural networks. The former is applied through the polyfit Matlab command [22]. For the latter, we consider an artificial neural network called the multilayer perceptron (MLP), which is well known to be a universal function approximator [9, 14]. The MLP in this comparison includes 5 neurons in a single hidden layer and uses the Levenberg–Marquardt backpropagation algorithm for training [20, 21]. For the second group, we consider two popular nature-inspired metaheuristic techniques for optimization: the firefly [28, 29] and bat [30–32] algorithms.

For a fair comparison, the methods of the first group are computed for the curve degree leading to the best RMSE for the binodal and the spinodal curves. For the methods in the second group, we keep the configuration and common parameters of the compared methods as similar as possible. For instance, all three algorithms (firefly, bat, and cuckoo) were applied with the same population size. We remark, however, that the number of iterations is
Table 4 Comparative results of this new method versus some other methods (best results highlighted)

Method                     Binodal curve: best RMSE   Spinodal curve: best RMSE
Polynomial fitting         1.480733E−2                1.447021E−1
Multilayer perceptron      9.098664E−3                6.477843E−3
Firefly algorithm          4.080013E−4                5.580561E−4
Bat algorithm              7.739411E−5                9.864933E−5
Cuckoo search algorithm    8.823841E−7                1.065596E−6
not generally the same; instead of using a fixed number of iterations, the stopping criterion is that no further improvement is observed after 20 consecutive iterations.

Table 4 reports the results of this comparison of the five methods for the best RMSE of the binodal and spinodal curves, with the best results highlighted in bold. As the table makes clear, our method, based on the cuckoo search algorithm, significantly outperforms the four alternative methods: our results improve on theirs by two or more orders of magnitude. The table also shows that the nature-inspired methods outperform the methods of the first group for both curves. This is a clear indication of the potential of such methods for this particular problem and others of a similar nature.

It is worthwhile to compare the nature-inspired methods of the second group for different values of the curve degree. Tables 5 and 6 report the best RMSE and the mean RMSE for these methods when varying n from n = 2 to n = 9 for the binodal curve and the spinodal curve, respectively. A visual comparison of both tables yields some remarkable conclusions. The first is that the cuckoo search algorithm outperforms the other two methods for both curves in all cases. In turn, the bat algorithm also outperforms the firefly algorithm in all cases. Additionally, it is remarkable that, although the fitting errors for the bat algorithm are numerically better than those for the firefly algorithm, they are still of the same order. This does not happen with the cuckoo search algorithm, for which the improvement is of two orders of magnitude in almost all instances.

We also applied statistical techniques to validate these numerical results more rigorously. According to [5], nonparametric tests are advisable for the comparison of evolutionary and swarm intelligence techniques. In this chapter, we consider two pairwise comparisons (firefly vs.
cuckoo search, and bat vs. cuckoo search) through the Wilcoxon rank-sum test, a nonparametric method well suited for independent samples. For each pairwise comparison, we compute the h index, where h = 1 means that the null hypothesis of equality is rejected at a significance level α (taken as α = 0.05 in this work), and h = 0 otherwise. We obtained the value h = 1 in both comparisons, indicating that cuckoo search outperforms both the firefly and bat algorithms at α = 0.05.

It is also interesting to compare these three methods in terms of overfitting and stability. Tables 7 and 8 report the overfitting test of these methods for different values
Table 5 Computational results of the best and mean RMSE for different degrees (in rows) of the binodal curve using three algorithms (in columns): firefly (FFA), bat (BAT), and cuckoo search (CSA)

Degree   FFA RMSE (best)   FFA RMSE (mean)   BAT RMSE (best)   BAT RMSE (mean)   CSA RMSE (best)   CSA RMSE (mean)
n = 2    4.7291E−1         7.9008E−1         3.5349E−2         5.6509E−2         9.7554E−5         5.6653E−5
n = 3    1.1120E−3         2.6749E−3         7.7394E−5         9.8801E−5         8.8238E−7         9.2121E−7
n = 4    4.0800E−4         5.6491E−4         8.5572E−5         1.0833E−4         9.8274E−6         1.0095E−5
n = 5    8.5447E−4         1.6375E−3         1.0365E−4         1.1776E−4         3.3710E−6         4.7625E−6
n = 6    1.0526E−3         1.4679E−3         1.1927E−4         1.3328E−4         4.3470E−6         6.2708E−6
n = 7    5.6543E−4         8.9655E−4         1.1624E−4         1.2684E−4         5.7215E−6         8.9314E−6
n = 8    7.2730E−4         9.9331E−4         1.3014E−4         1.4915E−4         8.0233E−6         1.1364E−5
n = 9    9.0152E−4         1.1634E−3         1.2293E−4         1.3579E−4         7.3972E−6         3.5884E−5
Table 6 Computational results of the best and mean RMSE for different degrees (in rows) of the spinodal curve using three nature-inspired algorithms (in columns): firefly (FFA), bat (BAT), and cuckoo search (CSA)

Degree   FFA RMSE (best)   FFA RMSE (mean)   BAT RMSE (best)   BAT RMSE (mean)   CSA RMSE (best)   CSA RMSE (mean)
n = 2    6.7750E−2         9.8217E−2         4.9074E−2         7.1226E−2         6.5943E−4         1.1292E−3
n = 3    2.6379E−3         3.5749E−3         9.8649E−5         1.0356E−4         3.4997E−6         5.9436E−6
n = 4    5.5805E−4         7.0836E−4         1.1267E−4         1.2842E−4         1.0655E−6         1.8997E−6
n = 5    9.3740E−4         1.0639E−3         1.2060E−4         1.2953E−4         5.5023E−6         9.3258E−6
n = 6    7.5304E−4         8.2051E−4         1.4971E−4         1.8611E−4         8.2757E−6         3.3957E−5
n = 7    9.9780E−4         1.1403E−3         1.4750E−4         1.6975E−4         1.7334E−5         5.6443E−5
n = 8    8.0223E−4         1.0793E−3         1.6122E−4         2.0137E−4         3.0119E−5         7.9673E−5
n = 9    7.8953E−4         9.9698E−4         1.6678E−4         1.9152E−4         3.6942E−4         7.4102E−4
Table 7 Computational results of the overfitting test for different degrees (in rows) of the binodal curve using three algorithms (in columns): firefly (FFA), bat (BAT), and cuckoo search (CSA)

Degree   FFA   BAT   CSA
n = 2    Ś     Ś     Ś
n = 3    Ś     Ś     Ś
n = 4    Ś     Ś     Ś
n = 5    Ś     Ś     Ś
n = 6    ◦     Ś     ◦
n = 7    ◦     Ś     ◦
n = 8    ◦     ◦     ◦
n = 9    ◦     ◦     ◦
Table 8 Computational results of the overfitting test for different degrees (in rows) of the spinodal curve using three algorithms (in columns): firefly (FFA), bat (BAT), and cuckoo search (CSA)

Degree   FFA   BAT   CSA
n = 2    Ś     Ś     Ś
n = 3    Ś     Ś     Ś
n = 4    Ś     Ś     Ś
n = 5    ◦     Ś     ◦
n = 6    ◦     ◦     ◦
n = 7    ◦     ◦     ◦
n = 8    ◦     ◦     ◦
n = 9    ◦     ◦     ◦
of the curve degree for the binodal curve and the spinodal curve, respectively. The tables show that the three methods perform similarly in terms of overfitting, although the bat algorithm is more robust than the other two for n = 6, 7 (binodal curve) and n = 5 (spinodal curve).
Table 9 Computational results of the stability test for different degrees (in rows) of the binodal curve using three algorithms (in columns): firefly (FFA), bat (BAT), and cuckoo search (CSA)

Degree   FFA     BAT      CSA
n = 2    ˛       ˛˛˛      ˛˛˛
n = 3    ˛˛˛     ˛˛˛˛˛    ˛˛˛˛˛
n = 4    ˛˛˛˛    ˛˛˛˛˛    ˛˛˛˛˛
n = 5    ˛˛˛˛    ˛˛˛˛     ˛˛˛˛˛
n = 6    ˛˛˛     ˛˛˛˛     ˛˛˛˛
n = 7    ˛˛      ˛˛˛˛     ˛˛˛˛
n = 8    ˛˛˛     ˛˛˛˛     ˛˛˛˛
n = 9    ˛˛      ˛˛˛˛     ˛˛˛˛
Finally, we also compare the stability of these three methods for different values of the curve degree. The results for the binodal curve and the spinodal curve are reported in Tables 9 and 10, respectively. The main conclusion is that the cuckoo
Table 10 Computational results of the stability test for different degrees (in rows) of the spinodal curve using three algorithms (in columns): firefly algorithm (FFA), bat algorithm (BAT), and cuckoo search algorithm (CSA)

Degree   FFA     BAT      CSA
n = 2    ˛       ˛˛       ˛˛
n = 3    ˛˛˛     ˛˛˛˛˛    ˛˛˛˛˛
n = 4    ˛˛˛˛    ˛˛˛˛˛    ˛˛˛˛˛
n = 5    ˛˛˛˛    ˛˛˛˛     ˛˛˛˛
n = 6    ˛˛      ˛˛˛˛     ˛˛˛˛
n = 7    ˛˛˛     ˛˛˛˛     ˛˛˛˛
n = 8    ˛˛      ˛˛˛      ˛˛˛˛
n = 9    ˛˛˛     ˛˛˛˛     ˛˛˛˛
search method provides the highest stability of the three compared methods. In fact, the method is very stable, ranging from high to very high stability in all instances. Of the three methods, the firefly algorithm provides the worst stability. Note also that all methods are very stable for the curve degrees with the best RMSE errors. From this comparative analysis, we can infer that the introduced cuckoo search-based method outperforms all other methods in this comparison, not only in terms of the numerical fitting errors but also in terms of overfitting and stability behavior.
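The pairwise Wilcoxon rank-sum comparison used in Sect. 5.4 can be sketched with standard-library code only (normal approximation with average ranks for ties; Matlab's ranksum and SciPy's ranksums provide production implementations, and the sample values below are hypothetical):

```python
import math

def ranksum_test(x, y, alpha=0.05):
    # Two-sided Wilcoxon rank-sum test (normal approximation, average ranks
    # for ties). Returns (h, p): h = 1 if the null hypothesis of equal
    # distributions is rejected at level alpha, h = 0 otherwise.
    n1, n2 = len(x), len(y)
    combined = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    w, i = 0.0, 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of the 1-based ranks i+1 .. j
        w += avg_rank * sum(1 for k in range(i, j) if combined[k][1] == 0)
        i = j
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))
    return (1 if p < alpha else 0), p

# Hypothetical RMSE samples: clearly separated groups reject the null (h = 1)
h, p = ranksum_test([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
                    [101.0, 102.0, 103.0, 104.0, 105.0, 106.0, 107.0, 108.0])
```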
6 Conclusions and Future Work

This chapter has presented an innovative technique to compute the characteristic curves of the VdW EoS by data fitting through free-form Bézier curves. Given the parameters a and b of a chemical system, our method computes sets of data points for the binodal and spinodal curves; then, it applies the cuckoo search algorithm to perform data parameterization; finally, the poles of the curves are obtained by least-squares optimization with SVD. The method is applied to argon, a noble gas for which the characteristic curves are reconstructed with very good accuracy.

Additionally, we have compared this method with four existing alternative methods, including two recently introduced ones in the field (polynomial curve fitting and the multilayer perceptron neural network) and two popular nature-inspired metaheuristic methods (the firefly and bat algorithms). This comparative analysis shows that our approach outperforms these four methods by at least two orders of magnitude for the given example. In addition, the cuckoo search method also shows better behavior than the other two studied metaheuristics, the firefly and bat algorithms, in terms of overfitting and stability of the solutions. Furthermore, the method is quite fast, with CPU times in the range of tens of seconds for a single execution, which are much better than those of the firefly and bat algorithms. In conclusion, we can state that this approach is very promising and can be successfully applied to real cases of chemical components and mixtures.

This research work can be extended in multiple directions. On the one hand, it would be desirable to reduce the CPU times while simultaneously gaining even more accuracy. Furthermore, we plan to apply this methodology to other interesting chemical components and mixtures, including substances with more challenging characteristic curves.
Finally, we look forward to widening the scope of this approach, including the analysis of other popular equations of state described in the literature. Other interesting problems in the realm of chemical engineering and related fields will also be considered in our future work.

Acknowledgements The authors acknowledge the financial support from the project PDE-GIR of the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 778035, and from the Spanish Ministry of Science, Innovation and Universities (Computer Science National Program) under grant #TIN2017-89275-R of the Agencia Estatal de Investigación and European Funds EFRD (AEI/FEDER, UE).
References

1. Angus S, Armstrong B, Gosman AL, McCarty RD, Hust JG, Vasserman AA, Rabinovich VA (1972) International thermodynamic tables of the fluid state—1 Argon. Butterworths, London
2. Barnhill RE (1992) Geometric processing for design and manufacturing. SIAM, Philadelphia
3. Dey N (ed) (2017) Advancements in applied metaheuristic computing. IGI Global, PA, USA
4. Dey N, Ashour AS, Bhattacharyya S (2020) Applied nature-inspired computing: algorithms and case studies. Springer Tracts in Nature-Inspired Computing. Springer, Singapore
5. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evolut Comput 1:3–18
6. Dierckx P (1993) Curve and surface fitting with splines. Oxford University Press, Oxford
7. Engelbrecht AP (2005) Fundamentals of computational swarm intelligence. Wiley, Chichester, England
8. Farin G (2002) Curves and surfaces for CAGD, 5th edn. Morgan Kaufmann, San Francisco
9. Funahashi KI (1989) On the approximate realization of continuous mappings by neural networks. Neural Netw 2(3):183–192
10. Gálvez A, Iglesias A (2013) Firefly algorithm for polynomial Bézier surface parameterization. J Appl Math, Article ID 237984, 9 p
11. Gálvez A, Iglesias A (2013) An electromagnetism-based global optimization approach for polynomial Bézier curve parameterization of noisy data points. In: Proceedings international conference on cyberworlds, CW-2013. IEEE Computer Society Press, pp 259–266
12. Gálvez A, Iglesias A (2014) Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting. Sci World J, Article ID 138760, 11 p
13. Gosman AL, McCarty RD, Hust JG (1969) Thermodynamic properties of Argon from the triple point to 300 K at pressures to 1000 atmospheres. Nat Stand Ref Data Ser Nat Bur Stand, NSRDS-NBS 27
14. Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366
15. Hoschek J, Lasser D (1993) Fundamentals of computer aided geometric design. A.K. Peters, Wellesley, MA
16. Iglesias A, Gálvez A, Collantes M (2015) Bat algorithm for curve parameterization in data fitting with polynomial Bézier curves. In: Proceedings international conference on cyberworlds, CW-2015. IEEE Computer Society Press, pp 107–114
17. Iglesias A, Gálvez A (2016) Cuckoo search with Lévy flights for reconstruction of outline curves of computer fonts with rational Bézier curves. In: Proceedings IEEE congress on evolutionary computation, CEC-2016. IEEE Computer Society Press, pp 2247–2254
18. Iglesias A, Gálvez A, Suárez P, Shinya M, Yoshida N, Otero C, Manchado M, Gomez-Jauregui V (2018) Cuckoo search algorithm with Lévy flights for global-support parametric surface approximation in reverse engineering. Symmetry 10(3):58
19. Johnson DC (2014) Advances in thermodynamics of the van der Waals fluid. Morgan & Claypool Publishers, Ames
20. Levenberg K (1944) A method for the solution of certain non-linear problems in least squares. Q Appl Math 2(2):164–168
21. Marquardt D (1963) An algorithm for least-squares estimation of nonlinear parameters. SIAM J Appl Math 11(2):431–441
22. The MathWorks polyfit web page: https://www.mathworks.com/help/matlab/ref/polyfit.html (last accessed Aug 20th, 2019)
23. Maxwell JC (1875) On the dynamical evidence of the molecular constitution of bodies. Nature 11:357–359
24. Piegl L, Tiller W (1997) The NURBS book. Springer, Berlin, Heidelberg
25. Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes, 2nd edn. Cambridge University Press, Cambridge
26. Smith JM, Van Ness HC, Abbott MM (2005) Introduction to chemical engineering thermodynamics. McGraw-Hill, Boston
27. Weast RC (1972) Handbook of chemistry and physics, 53rd edn. Chemical Rubber Co.
28. Yang XS (2009) Firefly algorithms for multimodal optimization. Lect Notes Comput Sci 5792:169–178
29. Yang XS (2010) Firefly algorithm, stochastic test functions and design optimisation. Int J Bio-Inspired Comput 2(2):78–84
30. Yang XS (2010) A new metaheuristic bat-inspired algorithm. Stud Comput Intell 284:65–74. Springer, Berlin
31. Yang XS, Gandomi AH (2012) Bat algorithm: a novel approach for global engineering optimization. Eng Comput 29(5):464–483
32. Yang XS (2013) Bat algorithm: literature review and applications. Int J Bio-Inspired Comput 5(3):141–149
33. Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In: Proceedings world congress on nature & biologically inspired computing (NaBIC). IEEE, pp 210–214
34. Yang XS, Deb S (2010) Engineering optimization by cuckoo search. Int J Math Model Numer Optim 1(4):330–343
35. Yang XS (2010) Engineering optimisation: an introduction with metaheuristic applications. Wiley, Hoboken, NJ
Chapter 3
Cuckoo Search Algorithm with Various Walks

F. B. Ozsoydan and İ. Gölcük
1 Introduction

Bio-inspired modern metaheuristics open up new approaches in optimization. These algorithms are inspired by the survival mechanisms of living species in nature. Such computational techniques have been shown to be very promising and efficient in numerous publications, and they have therefore attracted notable attention in the past decade.

The emergence of these algorithms dates back to the mid-70s. Holland [1] introduced the first ideas about the Genetic Algorithm (GA), a brand-new algorithm with which the researchers of the era were unfamiliar. Holland [1] was inspired by the Darwinian evolutionary theory. It was subsequently shown that such natural mechanisms can be used efficiently in simulated environments to find solutions to challenging optimization problems. Although it is not mentioned in most new-generation bio-inspired algorithms, which mostly do not directly employ the evolutionary mechanisms of GAs, they indeed draw inspiration from this pioneering algorithm.

Many modern bio-inspired metaheuristics have been reported in the related literature. Passino [2] mimics the behaviours of bacteria. Karaboga and Basturk [3] make use of artificial bees in the Artificial Bee Colony Algorithm. Yang [4] introduces the Firefly Algorithm (FA), which is inspired by the communication and flashing behaviours of fireflies. Yang and Deb [5] develop the Cuckoo Search (CS) Algorithm, which draws inspiration from some particular bird species.

F. B. Ozsoydan (B), Department of Industrial Engineering, Dokuz Eylül University, 35397 İzmir, Turkey. e-mail: [email protected]
İ. Gölcük, Department of Industrial Engineering, İzmir Bakırçay University, 35665 İzmir, Turkey. e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_3

In the same year,
Krishnanand and Ghose [6] introduce the Glowworm Swarm Optimization for multimodal function optimization. The next year, Yang [7] reports another algorithm, referred to as the Bat-inspired Algorithm, which simulates the echolocation-based navigation of bats. The Wolf Search Algorithm, based on wolf preying behaviour, is proposed by Tang et al. [8]. In the same year, Yang [9] introduces the Flower Pollination Algorithm, which is inspired by the biotic and abiotic pollination mechanisms of flowers. Another algorithm, namely the Grey Wolf Optimizer [10], is shown to be successful in both constrained and unconstrained optimization problems. In a recent algorithm, Zhang et al. [11] introduce the Biology Migration Algorithm, which is shown to be quite easy to implement and effective.

The main motivation of the present study is to analyse the effects of various walk mechanisms in the CS Algorithm. As reported by Yang and Deb [5], the standard CS is based on the behaviour of some cuckoo species in nature. It generates new solutions by using a Lévy function that is capable of unpredictable sharp movements at random. Fortunately, several other movement mechanisms [12–16] have been reported in efficient problem-solving applications. In this regard, the present work introduces and analyses the effects of various movement mechanisms, including a quantum-based walk, a Brownian walk, and a random walk, the details of which are presented in the following. The proposed modifications are tested on both global optimization problems and design optimization problems comprised of some well-known mechanical design optimization benchmarks.

The remainder of the present study is organized as follows: a survey is presented in Sect. 2. The proposed CS modifications, along with all related details, are given in Sect. 3. A comprehensive experimental study and concluding remarks are reported in Sect. 4 and Sect. 5, respectively.
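The walk mechanisms compared in this study differ mainly in how a random step is drawn. A Python sketch of three of them (Lévy via Mantegna's algorithm, Brownian, and a plain uniform random walk; the quantum-based walk is omitted here, and the scale parameters are illustrative):

```python
import math
import random

def levy_step(beta=1.5):
    # Levy walk (Mantegna's algorithm): heavy-tailed steps, occasional long jumps
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def brownian_step(scale=1.0):
    # Brownian walk: Gaussian increments, light tails
    return random.gauss(0, scale)

def uniform_step(scale=1.0):
    # Plain random walk: bounded uniform increments
    return random.uniform(-scale, scale)

random.seed(0)
levy = [levy_step() for _ in range(100)]
brownian = [brownian_step() for _ in range(100)]
uniform = [uniform_step() for _ in range(100)]
```

The heavy tail of the Lévy distribution is what allows the occasional sharp, unpredictable moves mentioned above, while Brownian and uniform walks keep every step bounded in scale.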
2 Literature Review

Developed by Yang and Deb [5], CS is one of the outstanding bio-inspired metaheuristic algorithms and has been shown to be very promising in a great variety of scientific fields. Since the first introduction of this modern optimizer, the popularity of CS has grown, and it has attracted the attention of researchers from varying disciplines. Some closely related studies are presented chronologically in the following.

One year after the introduction of CS, Yang and Deb [17] solve some engineering design problems. The authors report that the obtained results are far better than the best solutions found by another swarm-based algorithm. Chandrasekaran and Simon [18] propose a hybridized CS to solve the multi-objective unit commitment problem. Tuba et al. [19] analyse a modified CS for global optimization; the proposed modification focuses on the step-sizing function of CS. Civicioglu and Besdok [20] compare the algorithmic concepts of CS to those of some other well-known optimizers. Kanagaraj et al. [21] hybridize CS with GA to solve redundancy allocation problems, reporting promising results. In the same year, a binary CS is proposed by Rodrigues et al. [22] to solve the feature selection problem. A sigmoid transfer function to
3 Cuckoo Search Algorithm with Various Walks
49
convert real numbers to binary is employed by the authors. Gandomi et al. [23] employ CS to solve a group of structural optimization problems. CS is employed by Yildiz [24] to solve a milling optimization problem, which a well-known manufacturing problem. According to the obtained results, the author addresses CS as a robust and effective algorithm in comparison to some other swarm-based algorithms. Another related work is reported by Yang and Deb [25], where multi-objective engineering design problems under complex nonlinear constraints are solved by an improved CS, which is a further modification of the standard CS to handle multiple objectives. Kaveh and Bakhshpoori [26] report optimum design of two-dimensional steel frames by using CS algorithm. In Agrawal et al. [27], optimal thresholds for multilevel thresholding are found using CS. The authors report that obtained results are promising in terms of both quality and required CPU time. Next year, Dash et al. [28] compare several modifications of CS algorithm. Bhandari et al. [29] use CS for multilevel treshodling in image segmentation. In the same year, a comprehensive survey on CS, particularly including advances and applications, is presented by Yang and Deb [30]. Another discrete CS is reported by Ouaarab et al. [31, 32] to solve the well-known travelling salesman problem. Marichelvam et al. [33] propose a modification of CS to solve a challenging scheduling problem with makespan minimization criterion. Promising results for such a complex optimization problem are reported by the authors. Another brief literature review on CS is presented by Fister et al. [34]. Civicioglu and Besdok [35] test the performance of CS on a large set of benchmarking problems comprised of unconstrained global optimization problems. Another binary modification of CS is proposed by Pereira et al. [36]. Syberfeldt [37] presents a real-life case study of CS for a multiobjective optimization of a manufacturing process. Salomie et al. 
[38] hybridize CS by FA. Li and Yin [39] report a modification of CS to solve a set of real-valued optimization problems. Similar benchmarking problems are solved by Wang et al. [40] by using a hybridized CS by another swarm-based algorithm. In some recent studies, Kang et al. [41] employ CS in order to find a solution to a closed-loop-based facility layout design. Another hybridized CS is proposed by Majumder et al. [42] to solve a scheduling problem with unequal job ready times. The authors propose some lower bounds for this problem and according to the reported results, CS is found as a promising algorithm again. Boushaki et al. [43] present a new quantum chaotic CS algorithm for data clustering. The merit in this study is extending the capabilities of CS by using nonhomogeneous update inspired by the quantum theory. Laha and Gupta [44] report a modified CS algorithm to schedule jobs on identical parallel machines with makespan minimization criterion. El Aziz and Hassanien [45] propose an improved CS algorithm that makes use of rough sets theory for feature selection. Yang et al. [46] propose a multi-population-based CS algorithm that mimics the co-evolution among multiple cuckoo species. The authors use real-valued optimization problems for validation. Reported results point out the efficiency of the proposed approach by the authors. Another hybrid CS is proposed by Chi et al. [47] to solve a set of global optimization problems. Jalal and Goharzay [48] employ CS to solve structural and design optimization problems. Finally, Bhandari and Maurya [49] propose a CS algorithm for image enhancement.
F. B. Ozsoydan and İ. Gölcük
3 Proposed Solution Approaches

Before introducing the developed CS modifications, the standard CS is presented first.
3.1 The Standard CS

As mentioned before, CS is one of the outstanding bio-inspired new-generation metaheuristic algorithms. CS is inspired by the aggressive breeding behaviour of cuckoos [5]. There exist three basic types of brood parasitism: intraspecific brood parasitism, cooperative breeding and nest takeover. Moreover, it is known that some particular cuckoo species are highly specialized in mimicking the colour and pattern of the host's eggs. This ability makes their eggs harder to detect by the host bird, which contributes to their survival capabilities. In nature, if a host bird discovers an alien egg that is not its own, it either throws the egg away or simply abandons the nest and builds a new one elsewhere. Yang and Deb [5] define three main rules to run the CS algorithm in a simulated environment. According to the first rule, each cuckoo lays one egg at a time and dumps it in a randomly chosen nest. The second rule applies an elitist behaviour so that the best nests are carried over to the next generations. Finally, according to the third rule, the egg laid by a cuckoo is discovered with a probability $p_a \in [0, 1]$. This procedure can be considered analogous to partial random immigrants or mutation in evolutionary search algorithms. In order to generate new solutions (cuckoos), a Lévy flight, which is inspired by the sharp movements of some fly species such as fruit flies, is employed in the standard CS algorithm. This move is formulated by Eqs. 1–2, where $x_i^{(t)}$ is the $i$th solution vector at the $t$th iteration, $\eta > 0$ is a step size that can be defined in accordance with the range of the variables and $\oplus$ denotes entrywise multiplication.

$x_i^{(t+1)} = x_i^{(t)} + \eta \oplus \mathrm{Levy}(\lambda)$  (1)

$\mathrm{Levy} \sim u = t^{-\lambda}, \quad 1 < \lambda \leq 3$  (2)
Although one can use an improved initial population generation technique, a population in CS is generally initialized at random. Next, new solutions are obtained by using the move procedure of CS (Eqs. 1–2). A cuckoo replaces the current egg in a nest if it improves the quality of that egg. Next, a fraction $p_a$ of the worse nests is abandoned and new nests are built. The best solution is carried over to the next iteration. This process is repeated until a termination criterion is met (Algorithm 1).
Algorithm 1. A pseudocode for the standard CS algorithm [5]

begin
    generate an initial population of popSize host nests
    while termination criterion is not met
        obtain a random cuckoo i by Lévy flight (Eq. 1)
        evaluate the fitness of this cuckoo
        choose a random nest j among n nests
        if the fitness of cuckoo i is better than that of nest j then
            cuckoo i replaces the host nest j
        end
        abandon a fraction pa of the worse nests and build new ones
        carry the best solution over to the next iteration
    end
    print the best solution
end
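The Lévy-flight move of Eq. 1 can be sketched as follows. The chapter only specifies the power-law form of Eq. 2, so the sampling scheme below (Mantegna's algorithm, widely used in CS implementations) and the parameter defaults are illustrative assumptions rather than the authors' exact code:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, lam=1.5, rng=None):
    """Draw a Levy-distributed step vector via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def levy_move(x, eta=0.01, lam=1.5, rng=None):
    """Eq. 1: x^(t+1) = x^(t) + eta (entrywise) Levy(lambda)."""
    return x + eta * levy_step(len(x), lam, rng)
```

The heavy-tailed ratio `u / |v|^(1/lam)` produces mostly small steps with occasional sharp jumps, which is exactly the behaviour the text attributes to Lévy flights.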
3.2 Proposed Modifications

As can be noted from the steps of CS, one of its most crucial points is the movement mechanism that adopts Lévy flights. Since new solutions are generated by this procedure, the quality of upcoming populations is closely related to this step. Therefore, it is worth observing the effects of different new-solution generation procedures in place of Lévy flights. Accordingly, three different moves, a quantum walk, a Brownian walk and a uniform random walk, are separately adopted in the proposed modifications in order to see their individual effects on the performance of the CS algorithm. Blackwell and Bentley [50] employed charged particles, and this model was later extended by introducing a quantum model in [51]. In a classical atomic model, electrons orbit the nucleus at some distance, whereas in a quantum model, electrons are randomly distributed within a sphere [13]. This latter model is adopted here as the quantum-based walk procedure. Let the current host nest (say, i) be chosen as the current solution vector. Next, a quantum solution vector is generated around this host nest by following the steps of Algorithm 2. As can be seen from this algorithm, the distance of the quantum vector (cuckoo egg, new solution) from the current vector (host egg, current solution) is bounded by a user-supplied parameter denoted by r_cloud. Moreover, one can note that this parameter is analogous to the step size parameter η in the standard CS. Therefore, r_cloud can be regarded as the step size of the quantum-based walk. Thus, the search can be dampened or sped up by calibrating this parameter.
Algorithm 2. A pseudocode for the quantum walk [13]

begin
    randomly generate a Gaussian vector g from N(0, 1)
    calculate the distance dist of the vector g from the origin
    generate a uniform random number u from [0, 1]
    for (j = 1, ..., D) do
        x_ij = nest_ij + r_cloud · u^(1/D) · g_j / dist
    end
end
/* D is the dimension of the Gaussian vector */
/* x_ij is the jth dimension of the cuckoo */
/* nest_ij is the jth dimension of any host nest i */
Second, the Brownian walk [12] is employed as another walk procedure in the present study. This technique is quite similar to the quantum walk but it is much simpler. Instead of generating solutions within a quantum cloud, a Gaussian hyperparallelepiped is used. One can simply generate a new Brownian solution by adding to each coordinate a Gaussian random variable with μ = 0 and σ = 1, as formulated by Eqs. 3–4, where $x_{best}^{(t)}$ is the best-found solution until the $t$th iteration. Although the range of the new vector in the Brownian walk can be calibrated to some extent by using different values of σ, using μ = 0 and σ = 1 with a step size parameter is more practicable and easier to manipulate.

$x_i^{(t+1)} = x_i^{(t)} + \left(x_{best}^{(t)} - x_i^{(t)}\right) \oplus \eta \oplus b$  (3)

$b \sim N(0, \sigma)$  (4)
One can note from Eqs. 3–4 that the difference between the best vector and the current vector is used in this move. This ensures that if any vector comes closer to $x_{best}^{(t)}$, its step size automatically decreases in proportion to the distance between the current and the best vector. It also provides attractiveness around promising regions while encouraging an intensified search around the best vector. The final move adopted in the present study is similar to the Brownian walk. The only difference is the distribution used: in this move, the Gaussian variable of the Brownian walk is replaced by a uniform random number. The mentioned move is formulated by Eqs. 5–6, where u ∈ (0, 1) is a random number that mimics a random walk.
$x_i^{(t+1)} = x_i^{(t)} + \left(x_{best}^{(t)} - x_i^{(t)}\right) \oplus \eta \oplus u$  (5)

$u \in (0, 1)$  (6)
Finally, a pseudocode bringing all mentioned procedures together is presented as Algorithm 3. The CS extensions that adopt the quantum-based walk, the Brownian walk and the uniform random walk are denoted by quCS, brCS and unCS, respectively, throughout the rest of this paper. It should be emphasized that the only difference between these algorithms is the walk mechanism used to generate new solutions. Thus, the only factor that affects the relative performance of these CS extensions is the strategy used while generating new solutions. In this context, the effects of such strategies on the efficiency of CS can be observed.
4 Experimental Results

4.1 Calibration of Algorithm Parameters

Apart from the population size (popSize) and the total number of iterations (maxIter), which are common to all swarm-based stochastic search algorithms that use a fixed swarm size and a maximum number of fitness evaluations (FEs), there are only two parameters to be calibrated, denoted by η and pa. Fortunately, Yang and Deb [5] report efficient values for them, which are 0.01 and 0.25, respectively. Preliminary work shows that the values η = 0.01 and pa = 0.25 also work well both for the unconstrained function optimization and the mechanical design problems used in the present study. Therefore, they are fixed to these values. Second, some combinations of popSize and maxIter were tested. It was observed that popSize = 20 and maxIter = 10000 for the function minimization problems, and popSize = 20 and maxIter = 1000 for the mechanical design problems, give excellent results. Since fitness evaluation is performed twice in a single iteration of CS, these parameter values consume 2 × 20 × 10000 = 400000 and 2 × 20 × 1000 = 40000 FEs for unconstrained function minimization and mechanical design problems, respectively.
Algorithm 3. A pseudocode for the proposed CS algorithms

begin
    generate an initial population of popSize host nests
    while termination criterion is not met
        if quCS is run then obtain a random cuckoo by Algorithm 2
        elseif brCS is run then obtain a random cuckoo by Eqs. 3-4
        elseif unCS is run then obtain a random cuckoo by Eqs. 5-6
        end
        evaluate the fitness of this cuckoo
        choose a random nest j among n nests
        if the fitness of this cuckoo is better than that of nest j then
            the cuckoo replaces the host nest j
        end
        abandon a fraction pa of the worse nests and build new ones
        carry the best solution over to the next iteration
    end
    print the best solution
end
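The structure of Algorithm 3 can be sketched as a single loop parameterized by the walk function, so that quCS, brCS and unCS differ only in the callable passed in. This is a simplified sketch under our own assumptions (one cuckoo per iteration, hypothetical helper names), not the authors' implementation:

```python
import numpy as np

def cuckoo_search(objective, bounds, walk, pop_size=20, max_iter=1000,
                  pa=0.25, rng=None):
    """Skeleton of Algorithm 3: `walk(x, x_best, rng)` generates the new
    cuckoo, so the three CS extensions differ only in this argument."""
    if rng is None:
        rng = np.random.default_rng()
    lo = np.asarray(bounds[0], float)
    hi = np.asarray(bounds[1], float)
    nests = rng.uniform(lo, hi, (pop_size, len(lo)))
    fit = np.array([objective(x) for x in nests])
    for _ in range(max_iter):
        i = rng.integers(pop_size)
        # new cuckoo via the selected walk; clamp to the feasible box
        cuckoo = np.clip(walk(nests[i], nests[fit.argmin()], rng), lo, hi)
        f_new = objective(cuckoo)
        j = rng.integers(pop_size)          # nest challenged at random
        if f_new < fit[j]:
            nests[j], fit[j] = cuckoo, f_new
        # abandon a fraction pa of the worst nests and rebuild them
        n_bad = int(pa * pop_size)
        if n_bad:
            worst = np.argsort(fit)[-n_bad:]
            nests[worst] = rng.uniform(lo, hi, (n_bad, len(lo)))
            fit[worst] = [objective(x) for x in nests[worst]]
    best = int(fit.argmin())
    return nests[best], fit[best]
```

Passing, for instance, a move of the Eq. 5 form as `walk` yields a unCS-style run; the clamping step implements the boundary handling described in Sect. 4.2.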
4.2 Constraint Handling Technique

The design problems used in this study are nonlinear and constrained optimization problems. Therefore, an efficient constraint handling technique is crucial. Such problems can be formulated by using an objective function f(x) with design variables x subject to equality or inequality constraints denoted by g(x) = 0 and h(x) ≤ 0, respectively. In this context, a constrained design optimization problem can be formulated by Eqs. 7–9.

$\min f(x), \quad x \in D \subseteq \mathbb{R}^{n_p}$  (7)

s.t.

$g_l(x) = 0, \quad g_l: \mathbb{R}^{n_p} \to \mathbb{R}, \quad l = 1, 2, 3, \ldots, m_e$  (8)

$h_l(x) \leq 0, \quad h_l: \mathbb{R}^{n_p} \to \mathbb{R}, \quad l = 1, 2, 3, \ldots, m_i$  (9)
Each with its own advantages and disadvantages, there are several approaches to deal with constraints, such as penalizing, rejecting or repairing. For example, in a penalizing strategy, one should carefully and precisely choose the level of penalty in order not to deteriorate the search capability of the used algorithm. Similarly, a rejecting strategy may lead to missing high-quality solutions. In a recent work, an efficient approach is introduced by Kim et al. [52]. This approach uses the atan trigonometric function to transform the objective function along with the constraints, so that the feasibility and the quality of a solution are defined simultaneously. This is a remarkable and easy-to-use constraint handling technique because it does not require any additional parameters to calibrate. This approach, which is formulated by Eq. 10, where $h_{max}(x) = \max\{h_1(x), h_2(x), h_3(x), \ldots, h_{m_i}(x)\}$ and atan denotes the inverse tangent, is adopted in the present work.

$\min_{x \in D} L(x) = \begin{cases} h_{max}(x), & \text{if } h_{max}(x) > 0 \\ \mathrm{atan}(f(x)) - \frac{\pi}{2}, & \text{otherwise} \end{cases}$  (10)
Finally, in real-valued optimization problems, if a variable exceeds the boundaries of the problem, the corresponding variable is simply set to the boundary it exceeds.
4.3 Computational Results for Mechanical Design Problems

4.3.1 Design of a Tension/Compression Coil Spring

This problem is formulated by Eqs. 11–12 [53]. There are three continuous variables in this problem. A related figure is given in Fig. 1. In these equations, N, D and d represent the number of spring coils, the winding diameter and the wire diameter, respectively. The aim here is to minimize the weight of the spring while satisfying the formulated constraints. The best designs obtained and the comparisons with previously published results are presented in Table 1 and Table 2, respectively.

Fig. 1 An illustration for the tension/compression coil spring [53]
Table 1 Best solutions obtained by CS algorithms for the spring design problem

Alg.   x1 (best)      x2 (best)     x3 (best)     f_best        f_mean        f_worst       f_std.dev.  FEs
leCS   11.241939348   0.357522673   0.051722500   0.012665262   0.012679875   0.012766728   2.54e-05    40000
brCS   11.203941621   0.358177329   0.051749665   0.012665338   0.012697384   0.012800506   3.83e-05    40000
quCS   11.299694730   0.356534948   0.051681468   0.012665241   0.012667721   0.012678846   3.03e-06    40000
unCS   11.257468057   0.357257273   0.051711528   0.012665327   0.012669500   0.012706669   7.84e-06    40000
Table 2 Comparison with the literature for the spring design problem

Study                            f_best          f_mean          f_worst         f_std.dev.      FEs
Kim et al. [52]                  0.0126652328    0.01266523      0.01266523      1.05055E-14     100000
Baykasoglu [54]                  0.0126652296    0.0140793687    0.0128750789    0.0002966889    20000
Coello [55]                      0.0127047834    Na              Na              Na              900000
Mezura-Montes et al. [56]        0.012688        0.017037        0.013014        0.000801        36000
Parsopoulos and Vrahatis [57]    0.013120        0.0503651       0.0229478       0.00720571      5000
Mezura and Coello [58]           0.012689        Na              0.013165        0.00039         30000
Aguirre et al. [59]              0.012665        Na              0.012665        0               350000
He and Wang [60]                 0.0126652       0.0127191       0.0127072       0.000015824     81000
Cagnina et al. [61]              0.012665        Na              0.0131          0.00041         24000
Maruta et al. [62]               0.0126652329    0.01461170      0.01275760      0.000269863     40000
Tomassetti [63]                  0.012665        Na              Na              Na              200000
Akay and Karaboga [64]           0.012665        Na              0.012709        0.012813        30000
Gandomi et al. [65]              0.01266522      0.0168954       0.01350052      0.001420272     20000
Brajevic and Tuba [66]           0.012665        Na              0.012683        0.00000331      15000
Gandomi [67]                     0.012665        0.012799        0.013165        0.0159          8000
Baykasoğlu and Ozsoydan [68]     0.0126653049    0.0126653049    0.0126770446    0.0127116883    50000
leCS                             0.012665262     0.012679875     0.012766728     2.5456e-05      40000
brCS                             0.012665338     0.012697384     0.012800506     3.8387e-05      40000
quCS                             0.012665241     0.012667721     0.012678846     3.0345e-06      40000
unCS                             0.012665327     0.012669500     0.012706669     7.8447e-06      40000
$x = (x_1, x_2, x_3)^T := (N, D, d)^T$

$\min f(x) = (x_1 + 2)\, x_2 x_3^2$  (11)

s.t.
$h_1(x) := 1 - \dfrac{x_2^3 x_1}{71785 x_3^4} \leq 0$
$h_2(x) := \dfrac{4x_2^2 - x_2 x_3}{12566\left(x_2 x_3^3 - x_3^4\right)} + \dfrac{1}{5108 x_3^2} - 1 \leq 0$
$h_3(x) := 1 - \dfrac{140.45 x_3}{x_2^2 x_1} \leq 0$
$h_4(x) := \dfrac{x_2 + x_3}{1.5} - 1 \leq 0$
$D := \{x \in \mathbb{R}^3 : (2.0, 0.25, 0.05)^T \leq x \leq (15.0, 1.3, 2.0)^T\}$  (12)

As one can see from the results presented in Table 2, quCS achieves the most promising performance in terms of finding the best solution, the best mean solution and the minimum standard deviation among all CS extensions. It is also clear from Table 2 that this is one of the most promising results among all compared algorithms in the related literature. It points out the superiority of the quantum-based walk in the spring design problem. It is also clear that, although it is still competitive with the published results, brCS shows the worst performance among the CS extensions.
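Under the formulation of Eqs. 11–12, the spring design can be evaluated numerically as below. The design vector is a rounded version of the quCS solution of Table 1 (our own rounding), so the tiny positive residuals on the two active constraints are rounding artefacts:

```python
import numpy as np

def spring_weight(x):
    """Objective of Eq. 11, x = (N, D, d)."""
    n, D, d = x
    return (n + 2) * D * d ** 2

def spring_constraints(x):
    """Constraints h1-h4 of Eq. 12; all values must be <= 0 when feasible."""
    n, D, d = x
    return np.array([
        1 - D ** 3 * n / (71785 * d ** 4),
        (4 * D ** 2 - D * d) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * n),
        (D + d) / 1.5 - 1,
    ])

best = np.array([11.2994, 0.35653, 0.051681])   # quCS design of Table 1, rounded
```

Evaluating `spring_weight(best)` reproduces the tabulated objective value of about 0.012665, which is a quick consistency check on the reconstructed constraints.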
One can note that the standard leCS achieves very promising results in this problem with its current configuration, which demonstrates the efficiency of the standard CS.
4.3.2 Design of a Pressure Vessel

The second design problem is illustrated in Fig. 2. The aim here is to minimize the total cost [52]. There are four design variables in this problem. The first two variables are integer multiples of 0.0625, whereas the other two variables are continuous. The pressure vessel design problem can be formulated as given in Eqs. 13–14. The best designs obtained and the comparisons with previously published results are presented in Table 3 and Table 4, respectively.

$x = (x_1, x_2, x_3, x_4)^T := (T_s, T_h, R, L)^T$

$\min f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$  (13)
s.t.
$h_1(x) := 0.0193 x_3 - x_1 \leq 0$
$h_2(x) := 0.00954 x_3 - x_2 \leq 0$
$h_3(x) := 1296000 - \pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 \leq 0$
$h_4(x) := x_4 - 240 \leq 0$
$D := \{x \in \mathbb{R}^4 : (0, 0, 10, 10)^T \leq x \leq (99, 99, 200, 200)^T\}$  (14)
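Eqs. 13–14 can be evaluated the same way; the design vector below is the best-known design of Table 3, rounded by us, so small positive residuals on the two binding constraints are again rounding artefacts:

```python
import numpy as np

def vessel_cost(x):
    """Objective of Eq. 13, x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """Constraints h1-h4 of Eq. 14 (all must be <= 0 when feasible)."""
    x1, x2, x3, x4 = x
    return np.array([
        0.0193 * x3 - x1,
        0.00954 * x3 - x2,
        1296000 - np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3,
        x4 - 240,
    ])

best = np.array([0.8125, 0.4375, 42.098446, 176.636596])  # rounded best design
```

Note that h1 and h3 are essentially active (close to zero) at this design, which is typical of the reported optima for this benchmark.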
As presented in Table 3, quCS again achieves the most promising performance in terms of finding the best solution; however, the best mean solution is obtained by unCS among all CS extensions in this problem. The standard deviation values of the algorithms are similar, except for leCS, which shows a worse performance. Although unCS performs slightly better than brCS, it is outperformed by leCS in terms of finding the best solution. Finally, it is apparent from Table 4 that results competitive with the previously published ones are achieved by all CS algorithms adopted in the present study.
Fig. 2 An illustration for the pressure vessel problem [52]
Table 3 Best solutions obtained by CS algorithms for the pressure vessel design problem

Alg.   x1 (best)   x2 (best)   x3 (best)     x4 (best)     f_best          f_mean          f_worst         f_std.dev.  FEs
leCS   0.8125      0.4375      42.09844513   176.6366075   6.0597145e+03   6.109079e+03    6.41008e+03     94.50       40000
brCS   0.8125      0.4375      42.09844218   176.6366498   6.0597150e+03   6.076303e+03    6.37078e+03     56.99       40000
quCS   0.8125      0.4375      42.09844598   176.6365934   6.0597143e+03   6.081153e+03    6.37077e+03     56.56       40000
unCS   0.8125      0.4375      42.09844495   176.6366112   6.0597145e+03   6.075730e+03    6.37078e+03     56.91       40000
Table 4 Comparison with the literature for the pressure vessel design problem

Study                            f_best           f_mean           f_worst          f_std.dev.     FEs
Kim et al. [52]                  6059.714355      6060.074434      6059.727721      0.065870503    100000
Baykasoglu [54]                  6059.83905683    6823.60245024    6149.72760669    210.77         20000
Coello [55]                      6288.7445        Na               Na               Na             900000
Mezura-Montes et al. [56]        6059.714355      6846.628418      6355.343115      256.04         36000
Parsopoulos and Vrahatis [57]    6154.7           9387.77          8016.37          745.869        5000
Mezura and Coello [58]           6059.7143        Na               6379.938037      210            30000
Aguirre et al. [59]              6059.714335      Na               6071.013366      15.101157      350000
He and Wang [60]                 6059.7143        6288.6770        6099.9323        86.2022        81000
Cagnina et al. [61]              6059.714335      Na               6092.0498        12.1725        24000
Maruta et al. [62]               6059.714355      7332.841508      6358.156992      372.71         40000
Tomassetti [63]                  6059.714337      Na               Na               Na             200000
Akay and Karaboga [64]           6059.714736      Na               6245.308144      205.00         30000
Gandomi et al. [65]              6059.7143348     6318.95          6179.13          137.223        20000
Brajevic and Tuba [66]           6059.714335      Na               6192.116211      204            15000
Gandomi [67]                     6059.714         7332.846         6410.087         384.6          5000
Baykasoğlu and Ozsoydan [68]     6059.71427196    6090.52614259    6064.33605261    11.28785324    50000
leCS                             6059.71453133    6109.0797        6410.0867        94.5014        40000
brCS                             6059.71502470    6076.3031        6370.7819        56.9956        40000
quCS                             6059.71434351    6081.1534        6370.7796        56.5692        40000
unCS                             6059.71458839    6075.7305        6370.7807        56.9130        40000
4.3.3 Design of a Welded Beam

The aim of the welded beam problem is to find the minimum-cost design of the structural welded beam illustrated in Fig. 3. As formulated by Eqs. 15–16, there are four continuous variables in this problem. The best designs obtained and the comparisons with previously published results are presented in Table 5 and Table 6, respectively.

$x = (x_1, x_2, x_3, x_4)^T := (h, l, t, b)^T$

$\min f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)$  (15)
Fig. 3 An illustration for the welded beam problem [69]
s.t.
$h_1(x) := \tau(x) - \tau_{max} \leq 0$
$h_2(x) := \sigma(x) - \sigma_{max} \leq 0$
$h_3(x) := x_1 - x_4 \leq 0$
$h_4(x) := 0.125 - x_1 \leq 0$
$h_5(x) := \delta(x) - \delta_{max} \leq 0$
$h_6(x) := P - P_c(x) \leq 0$
$h_7(x) := 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \leq 0$

where

$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\dfrac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \dfrac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \dfrac{MR}{J}$

$M = P\left(L + \dfrac{x_2}{2}\right), \quad R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}, \quad \sigma(x) = \dfrac{6PL}{x_3^2 x_4}$

$J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\dfrac{x_2^2}{12} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}, \quad \delta(x) = \dfrac{4PL^3}{E x_3^3 x_4}$

$P_c(x) = \dfrac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$

$D := \{x \in \mathbb{R}^4 : (0.1, 0.1, 0.1, 0.1)^T \leq x \leq (2, 10, 10, 2)^T\}$

$P = 6000\,\mathrm{lb}, \quad L = 14\,\mathrm{in}, \quad E = 30 \times 10^6\,\mathrm{psi}, \quad G = 12 \times 10^6\,\mathrm{psi}$
$\tau_{max} = 13600\,\mathrm{psi}, \quad \sigma_{max} = 30000\,\mathrm{psi}, \quad \delta_{max} = 0.25\,\mathrm{in}$  (16)
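The welded beam formulas of Eq. 16 are the most involved of the four benchmarks. The sketch below evaluates the cost and the shear stress τ(x) at the (rounded) best-known design, assuming the standard formulation reconstructed above; both the cost (≈ 1.724852) and the binding shear constraint (τ ≈ τ_max) can be checked numerically:

```python
import numpy as np

# problem constants of Eq. 16
P, L_LEN, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    """Objective of Eq. 15, x = (h, l, t, b)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14 + x2)

def beam_stress(x):
    """Shear stress tau(x) of Eq. 16."""
    x1, x2, x3, x4 = x
    tau1 = P / (np.sqrt(2) * x1 * x2)                       # tau'
    M = P * (L_LEN + x2 / 2)
    R = np.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * np.sqrt(2) * x1 * x2 * (x2 ** 2 / 12 + ((x1 + x3) / 2) ** 2)
    tau2 = M * R / J                                        # tau''
    return np.sqrt(tau1 ** 2 + 2 * tau1 * tau2 * x2 / (2 * R) + tau2 ** 2)

best = np.array([0.205730, 3.470489, 9.036624, 0.205730])  # best-known design
```

At this design the shear-stress constraint h1 is active, which is why the tabulated best objective values of Table 5 agree to six decimal places across the CS extensions.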
As one can see from the results presented in Table 5, leCS achieves the best performance in terms of finding the best solution, which is slightly better than that of quCS, which is in turn slightly better than the rest of the CS algorithms. It is also clear that brCS achieves the best performance in terms of finding the best mean result in this problem. Finally, it is apparent from Table 6 that results competitive with the previously published ones are also achieved by all CS algorithms adopted in the present study.
Table 5 Best solutions obtained by CS algorithms for the welded beam design problem

Alg.   x1 (best)   x2 (best)   x3 (best)   x4 (best)   f_best      f_mean      f_worst     f_std.dev.  FEs
leCS   0.2057295   3.4704886   9.0366243   0.2057296   1.7248523   1.7248577   1.7248885   6.77e-06    40000
brCS   0.2057296   3.4704902   9.0366238   0.2057296   1.7248525   1.7248555   1.7248646   3.31e-06    40000
quCS   0.2057296   3.4704887   9.0366239   0.2057296   1.7248524   1.7248584   1.7248872   7.54e-06    40000
unCS   0.2057296   3.4704907   9.0366239   0.2057296   1.7248526   1.7248601   1.7248943   1.00e-05    40000
Table 6 Comparison with the literature for the welded beam design problem

Study                            f_best       f_mean       f_worst      f_std.dev.   FEs
Kim et al. [52]                  1.724852     1.724852     1.724852     0.000000     50000
Baykasoglu [54]                  1.724852     1.724852     1.724852     0.000000     20000
Coello [55]                      1.74830941   Na           Na           Na           900000
Parsopoulos and Vrahatis [57]    2.4426       Na           Na           Na           5000
Mezura and Coello [58]           1.724852     Na           1.7776       0.088        30000
Aguirre et al. [59]              1.724852     Na           1.724881     0.000012     350000
He and Wang [60]                 1.724852     1.814295     1.749040     0.040049     81000
Cagnina et al. [61]              1.724852     Na           2.0574       0.2154       32000
Maruta et al. [62]               1.724852     1.813471     1.728471     0.0136371    40000
Tomassetti [63]                  1.724852     Na           Na           Na           200000
Akay and Karaboga [64]           1.724852     Na           1.741913     0.031        30000
Gandomi et al. [65]              1.7312       2.3455793    1.8786560    0.2677989    20000
Brajevic and Tuba [66]           1.724852     Na           1.724853     0.0000017    15000
Baykasoğlu and Ozsoydan [68]     1.724852     1.724852     1.724852     0.000000     50000
leCS                             1.7248523    1.7248577    1.7248885    6.7785e-06   40000
brCS                             1.7248525    1.7248555    1.7248646    3.3133e-06   40000
quCS                             1.7248524    1.7248584    1.7248872    7.5411e-06   40000
unCS                             1.7248526    1.7248601    1.7248943    1.0012e-05   40000
4.3.4 Design of a Speed Reducer

The aim of this problem is to minimize the weight of a speed reducer, subject to constraints on surface stress, transverse deflections of the shafts, bending stress of the gear teeth and stresses in the shafts. A representative picture of a speed reducer is given in Fig. 4. As one can see from this figure, there are seven design variables in the speed reducer problem. While the third variable is an integer, the rest of the variables are continuous. This problem is formulated by Eqs. 17–18. The best designs obtained and the comparisons with previously published results are presented in Table 7 and Table 8, respectively.
Fig. 4 An illustration for a speed reducer [70]
Table 7 Best solutions obtained by CS algorithms for the speed reducer design problem

Alg.   x1     x2     x3      x4     x5     x6       x7       f_best        f_mean       f_worst    f_std.dev.  FEs
leCS   3.50   0.70   17.00   7.30   7.80   3.3502   5.2866   2996.348325   2996.34871   2996.349   0.000       40000
brCS   3.50   0.70   17.00   7.30   7.80   3.3502   5.2866   2996.348450   2996.34945   2996.350   0.000       40000
quCS   3.50   0.70   17.00   7.30   7.80   3.3502   5.2866   2996.349033   2996.35070   2996.354   0.001       40000
unCS   3.50   0.70   17.00   7.30   7.80   3.3502   5.2866   2996.349528   2996.35233   2996.356   0.001       40000
Table 8 Comparison with the literature for the speed reducer design problem

Study                            f_best          f_mean           f_worst          f_std.dev.   FEs
Kim et al. [52]                  2996.348072     3094.556809      3016.492651      24.48        20000
Mezura-Montes et al. [56]        2998.011963     3162.881104      3056.206999      49.40        36000
Mezura and Coello [58]           2996.348094     Na               2996.348094      0            30000
Aguirre et al. [59]              2996.348165     Na               2996.408525      0.028671     350000
Cagnina et al. [61]              2996.348165     Na               2996.3482        0.0000       24000
Tomassetti [63]                  2996.348165     Na               Na               Na           200000
Akay and Karaboga [64]           2997.058412     Na               2997.058412      Na           30000
Brajevic and Tuba [66]           2994.471066     Na               2994.471072      0.00000598   15000
Baykasoğlu and Ozsoydan [68]     2996.372698     2996.669016      2996.514874      0.09         50000
Akhtar et al. [69]               3008.08         3028.28          3012.12          Na           20000
leCS                             2996.348325     2996.348715      2996.349810      0.0003       40000
brCS                             2996.348450     2996.349453      2996.350380      0.0005       40000
quCS                             2996.349033     2996.350708      2996.354714      0.0013       40000
unCS                             2996.349528     2996.352337      2996.356863      0.0019       40000
$x = (x_1, x_2, x_3, x_4, x_5, x_6, x_7)^T := (b, m, z, l_1, l_2, d_1, d_2)^T$

$\min f(x) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right)$  (17)

s.t.
$h_1(x) := \dfrac{27}{x_1 x_2^2 x_3} - 1 \leq 0$
$h_2(x) := \dfrac{397.5}{x_1 x_2^2 x_3^2} - 1 \leq 0$
$h_3(x) := \dfrac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \leq 0$
$h_4(x) := \dfrac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \leq 0$
$h_5(x) := \dfrac{\left[\left(745 x_4 / (x_2 x_3)\right)^2 + 16.9 \times 10^6\right]^{1/2}}{110 x_6^3} - 1 \leq 0$
$h_6(x) := \dfrac{\left[\left(745 x_5 / (x_2 x_3)\right)^2 + 157.5 \times 10^6\right]^{1/2}}{85 x_7^3} - 1 \leq 0$
$h_7(x) := \dfrac{x_2 x_3}{40} - 1 \leq 0$
$h_8(x) := \dfrac{5 x_2}{x_1} - 1 \leq 0$
$h_9(x) := \dfrac{x_1}{12 x_2} - 1 \leq 0$
$h_{10}(x) := \dfrac{1.5 x_6 + 1.9}{x_4} - 1 \leq 0$
$h_{11}(x) := \dfrac{1.1 x_7 + 1.9}{x_5} - 1 \leq 0$
$D := \{x \in \mathbb{R}^7 : (2.6, 0.7, 17, 7.3, 7.8, 2.9, 5.0)^T \leq x \leq (3.6, 0.8, 28, 8.3, 8.3, 3.9, 5.5)^T\}$  (18)
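Eqs. 17–18 can be checked numerically at the leCS design of Table 7. The small positive residuals on the two active stress constraints (h5, h6) are rounding artefacts of the tabulated design values:

```python
import numpy as np

def reducer_weight(x):
    """Objective of Eq. 17, x = (b, m, z, l1, l2, d1, d2)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2
            * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def reducer_constraints(x):
    """Constraints h1-h11 of Eq. 18 (all must be <= 0 when feasible)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        27.0 / (x1 * x2 ** 2 * x3) - 1,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1,
        1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1,
        1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1,
        np.sqrt((745 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110 * x6 ** 3) - 1,
        np.sqrt((745 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85 * x7 ** 3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ])

best = np.array([3.50, 0.70, 17.0, 7.30, 7.80, 3.3502, 5.2866])  # leCS design
```

At this design h8 is exactly at its boundary (5·0.7/3.5 = 1), reflecting the b = 5m relation typical of the reported optima for this benchmark.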
As one can see from the results presented in Table 7, leCS again achieves the best performance in terms of finding the best solution, which is slightly better than that of brCS, which is in turn superior to the rest of the CS algorithms in the speed reducer design problem. It is also clear that leCS shows the best performance in terms of finding the best mean result in this problem. Additionally, the obtained standard deviation values are apparently similar, which points out the robustness of these algorithms in this problem. It is apparent from Table 8 that results competitive with the previously published ones are also achieved by all CS algorithms adopted in the present study. Finally, mean convergence plots for all algorithms are illustrated in Fig. 5. As one can see from this figure, quCS exhibits a better convergence pattern in comparison to the other CS extensions. However, it is also clear that they achieve approximately similar results towards the end of the run, which is already demonstrated by the results presented in Tables 1, 2, 3, 4, 5, 6, 7 and 8. Although intuition suggests superiority of some CS extensions in the design problems, a further statistical analysis, which is conducted in the following sections, is required.
Fig. 5 atan transformed convergence plots for mechanical design problems
4.4 Computational Results for Real-Valued Unconstrained Function Optimization Problems

All benchmarking problems used to validate the performance of the CS algorithms on unconstrained optimization problems and all related results are presented in Table 9 and Table 10, respectively. According to the results of Table 10, where average ranks according to the best results are presented in the last row, brCS seems to perform better than the rest of the algorithms. It is followed by leCS and quCS, which share the same rank. Finally, it can be put forward that unCS shows a weaker performance in comparison to the other CS extensions in terms of finding the best result. Additionally, it is clear that the mean and standard deviation values of brCS are also more promising than those of the others. Overall, in most of the instances brCS achieves superior results with smaller deviation, which points to the efficiency and robustness of brCS in real-valued optimization problems. Convergence graphs are illustrated in Fig. 6. As one can see from this figure, the Brownian walk enhanced brCS shows more successful convergence in most of the instances. However, it is also clear that the CS extensions approximate to similar values after a while, as in the mechanical design problems. This also points out the efficiency of the CS algorithm. To sum up, intuition suggests superiority of brCS in comparison to the other CS extensions in real-valued optimization problems. However, statistical tests are required for validation. In this context, the next subsection is devoted to the statistical analysis of all comparisons performed in the present study.
4.5 Statistical Verification

Since the conditions for the safe use of parametric tests are not met here [71], appropriate non-parametric tests are conducted. The first analysis is devoted to the mechanical design problems. There are only four mechanical benchmarks used in the present study, which is too few to perform a single overall test. In this regard, this analysis is exceptionally performed for each mechanical benchmark individually, based on the best values obtained over all runs. The Friedman test is applied first to detect whether at least one pair of algorithms performs significantly differently. The obtained p-values along with the mean ranks are given in Table 11. According to these results, the null hypothesis, which is based on the equality of the medians of the compared algorithms, is rejected for the spring design, pressure vessel and speed reducer problems at significance levels α = 0.10, α = 0.05 and α = 0.05, respectively. On the other hand, there is not enough statistical evidence to prove significant differences for the welded beam problem. Therefore, post hoc tests are not required for this benchmark.
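The Friedman statistic used here can be computed directly from a problems-by-algorithms score matrix. The sketch below is illustrative; in practice a statistics package (e.g. scipy.stats.friedmanchisquare) would also supply the p-value:

```python
import numpy as np

def friedman_statistic(results):
    """Friedman chi-square statistic for an (n problems x k algorithms)
    score matrix, lower score = better. Returns (statistic, mean ranks)."""
    results = np.asarray(results, float)
    n, k = results.shape
    ranks = np.empty_like(results)
    for p in range(n):                       # rank algorithms per problem
        row = results[p]
        order = np.argsort(row, kind="stable")
        i = 0
        while i < k:                         # tied scores share the mean rank
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            ranks[p, order[i:j + 1]] = (i + j) / 2 + 1
            i = j + 1
    mean_ranks = ranks.mean(axis=0)
    stat = (12 * n / (k * (k + 1))
            * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4))
    return stat, mean_ranks
```

A large statistic (relative to a chi-square distribution with k − 1 degrees of freedom) rejects the null hypothesis of equal medians, which is then followed by post hoc pairwise tests.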
Table 9 Used real-valued test functions

ID | Function | Formulation | Dim. | Range | Opt.
f1 | Sphere | f(x) = Σ_{i=1}^{D} x_i^2 | 30 | [−100, 100] | x_i = 0
f2 | Ackley | f(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + 20 + e | 30 | [−32, 32] | x_i = 0
f3 | Griewank | f(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1 | 30 | [−600, 600] | x_i = 0
f4 | Rastrigin | f(x) = Σ_{i=1}^{D} (x_i^2 − 10 cos(2π x_i) + 10) | 30 | [−5.12, 5.12] | x_i = 0
f5 | Rosenbrock | f(x) = Σ_{i=1}^{D−1} (100 (x_{i+1} − x_i^2)^2 + (x_i − 1)^2) | 30 | [−30, 30] | x_i = 1
f6 | Easom | f(x) = −cos(x_1) cos(x_2) exp(−(x_1 − π)^2 − (x_2 − π)^2) | 2 | [−100, 100] | x_i = π
f7 | Schwefel | f(x) = 418.9828872724339 D − Σ_{i=1}^{D} x_i sin(√|x_i|) | 30 | [−500, 500] | x_i = 420.968746
f8 | Beale | f(x) = (1.5 − x_1 + x_1 x_2)^2 + (2.25 − x_1 + x_1 x_2^2)^2 + (2.625 − x_1 + x_1 x_2^3)^2 | 2 | [−4.5, 4.5] | x_1 = 3, x_2 = 0.5
f9 | Styblinsky-Tang | f(x) = 0.5 Σ_{i=1}^{D} (x_i^4 − 16 x_i^2 + 5 x_i) | 30 | [−5, 5] | x_i = −2.903534
68 F. B. Ozsoydan and İ. Gölcük
Table 9 (continued)

ID | Function | Formulation | Dim. | Range | Opt.
f10 | Rotated hyper-ellipsoid | f(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)^2 | 30 | [−65.536, 65.536] | x_i = 0
f11 | Noncontinuous Rastrigin | f(x) = Σ_{i=1}^{D} (y_i^2 − 10 cos(2π y_i) + 10), where y_i = x_i if |x_i| < 0.5, y_i = round(2 x_i)/2 otherwise | 30 | [−5.12, 5.12] | x_i = 0
f12 | Levy | f(x) = sin^2(π y_1) + Σ_{i=1}^{D−1} (y_i − 1)^2 (1 + 10 sin^2(π y_i + 1)) + (y_D − 1)^2 (1 + sin^2(2π y_D)), where y_i = 1 + (x_i − 1)/4, i = 1, …, D | 30 | [−10, 10] | x_i = 1
f13 | Zakharov | f(x) = Σ_{i=1}^{D} x_i^2 + (Σ_{i=1}^{D} 0.5 i x_i)^2 + (Σ_{i=1}^{D} 0.5 i x_i)^4 | 30 | [−5, 10] | x_i = 0
f14 | Trid | f(x) = Σ_{i=1}^{D} (x_i − 1)^2 − Σ_{i=2}^{D} x_i x_{i−1} | 10 | [−D^2, D^2] | x_i = i(D + 1 − i)
f15 | Dixon and Price | f(x) = (x_1 − 1)^2 + Σ_{i=2}^{D} i (2 x_i^2 − x_{i−1})^2 | 30 | [−10, 10] | x_i = 2^{−(2^i − 2)/2^i}
f16 | Michalewicz | f(x) = −Σ_{i=1}^{D} sin(x_i) sin^{2m}(i x_i^2/π), m = 10 | 2 | [0, π] | x_1 = 2.20, x_2 = 1.57
3 Cuckoo Search Algorithm with Various Walks 69
Table 10 Experimental results for function minimization problems

Functions | perf. | leCS | brCS | quCS | unCS
Sphere (f1) | best | 8.8078e-65 | 1.1964e-68 | 9.7282e-56 | 1.8235e-66
 | mean | 4.1334e-62 | 5.5335e-65 | 1.7489e-52 | 2.5712e-63
 | worst | 4.6553e-61 | 3.6380e-64 | 1.4043e-51 | 1.7921e-62
 | std.dev. | 1.0604e-61 | 9.5267e-65 | 3.5436e-52 | 4.9262e-63
Ackley (f2) | best | 4.4408e-15 | 4.4408e-15 | 4.4408e-15 | 0.9313e+00
 | mean | 0.9427e+00 | 1.1822e+00 | 0.8713e+00 | 2.5199e+00
 | worst | 2.0118e+00 | 2.4922e+00 | 2.1189e+00 | 4.7374e+00
 | std.dev. | 0.7642e+00 | 0.6861e+00 | 0.6809e+00 | 0.7976e+00
Griewank (f3) | best | 0.0000e+00 | 0.0000e+00 | 0.0000e+00 | 0.0000e+00
 | mean | 0.0022e+00 | 0.0048e+00 | 0.0043e+00 | 0.0117e+00
 | worst | 0.0270e+00 | 0.0585e+00 | 0.0270e+00 | 0.0465e+00
 | std.dev. | 0.0058e+00 | 0.0121e+00 | 0.0069e+00 | 0.0126e+00
Rastrigin (f4) | best | 2.9848e+00 | 0.2102e+00 | 0.0038e+00 | 4.9747e+00
 | mean | 9.9657e+00 | 4.9027e+00 | 4.8955e+00 | 1.0962e+01
 | worst | 1.7909e+01 | 1.2019e+01 | 1.2934e+01 | 2.8853e+01
 | std.dev. | 3.9199e+00 | 2.7549e+00 | 2.7389e+00 | 4.7702e+00
Rosenbrock (f5) | best | 1.1608e-17 | 2.5780e-18 | 2.2006e-12 | 1.0762e-08
 | mean | 0.6821e+00 | 0.6181e+00 | 0.4001e+00 | 2.4372e+00
 | worst | 3.9866e+00 | 4.1201e+00 | 3.9876e+00 | 9.1931e+00
 | std.dev. | 1.5045e+00 | 1.2846e+00 | 1.2160e+00 | 2.6541e+00
Easom (f6) | best | 1.000e+00 | 1.000e+00 | 1.000e+00 | 1.000e+00
 | mean | 1.000e+00 | 1.000e+00 | 1.000e+00 | 1.000e+00
 | worst | 1.000e+00 | 1.000e+00 | 1.000e+00 | 1.000e+00
 | std.dev. | 0.000e+00 | 0.000e+00 | 0.000e+00 | 0.000e+00
Schwefel (f7) | best | 7.4111e+01 | 2.5273e-06 | 0.0896e+00 | 1.1844e+02
 | mean | 7.1856e+02 | 4.3023e+02 | 7.6325e+02 | 8.4747e+02
 | worst | 1.4820e+03 | 9.8943e+02 | 2.2654e+03 | 1.8054e+03
 | std.dev. | 3.2121e+02 | 1.9618e+02 | 4.9522e+02 | 4.1195e+02
Beale (f8) | best | 0.000e+00 | 0.000e+00 | 0.000e+00 | 0.000e+00
 | mean | 0.000e+00 | 0.000e+00 | 0.000e+00 | 0.000e+00
 | worst | 0.000e+00 | 0.000e+00 | 0.000e+00 | 0.000e+00
 | std.dev. | 0.000e+00 | 0.000e+00 | 0.000e+00 | 0.000e+00
Styblinsky-Tang (f9) | best | -1.174e+03 | -1.174e+03 | -1.174e+03 | -1.174e+03
 | mean | -1.170e+03 | -1.167e+03 | -1.150e+03 | -1.122e+03
 | worst | -1.146e+03 | -1.146e+03 | -1.061e+03 | -1.076e+03
 | std.dev. | 7.728e+00 | 1.0323e+01 | 2.5761e+01 | 3.1267e+03
Rotated hyperellipsoid (f10) | best | 1.1481e-11 | 8.5673e-14 | 3.2317e-10 | 6.1856e-12
 | mean | 1.4248e-09 | 3.3124e-11 | 4.3902e-09 | 0.0624e+00
 | worst | 9.1853e-09 | 7.3231e-10 | 2.4159e-08 | 1.5831e+00
 | std.dev. | 1.7253e-09 | 1.3255e-10 | 5.5766e-09 | 0.2899e+00
Table 10 (continued)

Functions | perf. | leCS | brCS | quCS | unCS
Noncontinuous Rastrigin (f11) | best | 7.0000e+00 | 1.0146e+00 | 5.0000e+00 | 9.2261e+00
 | mean | 1.5771e+01 | 7.7616e+00 | 1.1682e+01 | 1.6884e+01
 | worst | 2.5000e+01 | 1.3238e+01 | 2.8004e+01 | 2.8486e+01
 | std.dev. | 4.5513e+00 | 2.7412e+00 | 4.9631e+00 | 4.6955e+00
Levy (f12) | best | 1.4997e-32 | 1.4997e-32 | 1.4997e-32 | 1.4997e-32
 | mean | 0.1682e+00 | 0.2104e+00 | 0.2889e+00 | 0.6056e+00
 | worst | 0.6333e+00 | 0.7229e+00 | 1.8172e+00 | 2.6052e+00
 | std.dev. | 0.2013e+00 | 0.2416e+00 | 0.4411e+00 | 0.6148e+00
Zakharov (f13) | best | 1.5844e-12 | 3.7306e-14 | 1.7281e-11 | 3.1751e-15
 | mean | 3.0860e-11 | 7.1937e-05 | 1.9814e-10 | 1.5033e-12
 | worst | 3.4156e-10 | 0.0021e+00 | 1.8257e-09 | 1.2782e-11
 | std.dev. | 6.3642e-11 | 0.0003e+00 | 3.6212e-10 | 3.1125e-12
Trid (f14) | best | -0.210e+03 | -0.210e+03 | -0.210e+03 | -0.210e+03
 | mean | -0.210e+03 | -0.210e+03 | -0.210e+03 | -0.210e+03
 | worst | -0.210e+03 | -0.210e+03 | -0.210e+03 | -0.210e+03
 | std.dev. | 0.0000e+00 | 0.0000e+00 | 0.0000e+00 | 0.0000e+00
Dixon and Price (f15) | best | 0.6666e+00 | 9.7931e-22 | 0.6666e+00 | 0.6666e+00
 | mean | 0.6666e+00 | 0.6444e+00 | 0.6666e+00 | 0.6666e+00
 | worst | 0.6666e+00 | 0.6666e+00 | 0.6666e+00 | 0.6666e+00
 | std.dev. | 3.9034e-13 | 0.1217e+00 | 4.6603e-16 | 8.6112e-15
Michalewicz (f16) | best | -1.801e+00 | -1.801e+00 | -1.801e+00 | -1.801e+00
 | mean | -1.801e+00 | -1.801e+00 | -1.801e+00 | -1.801e+00
 | worst | -1.801e+00 | -1.801e+00 | -1.801e+00 | -1.801e+00
 | std.dev. | 9.0336e-16 | 9.0336e-16 | 9.0336e-16 | 9.0336e-16
Average ranks | | 2.656 | 1.844 | 2.656 | 2.844
The mean ranks presented in Table 11 point out the superiority of quCS over the other CS extensions in the spring design and pressure vessel problems. brCS is found to be slightly better than quCS in the welded beam problem. Finally, leCS obtains the best results in the speed reducer problem. Further analysis is performed to compare the performances of the proposed CS algorithms. To keep the family-wise error rate under control, the obtained unadjusted p-values are adjusted by using the Nemenyi test [71]. According to the results presented in Table 12, quCS is found to be significantly better than brCS in the spring design problem at significance level α = 0.10. In the pressure vessel problem, it can be concluded that quCS is significantly better than leCS and brCS at significance level α = 0.05. Finally, in the speed reducer problem, leCS significantly outranks all other CS extensions, and unCS is significantly outperformed by all algorithms. To sum up, it can be put forward that quCS and leCS bring about significant improvements among the compared CS extensions.
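The pairwise comparisons of Table 12 can be derived from the Friedman mean ranks alone. Following the tutorial of Derrac et al. [71], the z statistic for a pair of algorithms is their rank difference divided by sqrt(k(k+1)/(6N)), the unadjusted p-value comes from the standard normal distribution, and the Nemenyi adjustment multiplies it by the number of pairs. The sketch below assumes N = 30 runs per problem (an assumption, since Table 11 does not state N explicitly) and uses the speed reducer mean ranks from Table 11.

```python
import math
from itertools import combinations

def nemenyi_pairwise(mean_ranks, n_blocks):
    """Pairwise z-tests on Friedman mean ranks with a Nemenyi
    (Bonferroni-over-all-pairs) correction, after Derrac et al. [71].
    Returns {(i, j): (unadjusted_p, adjusted_p)}."""
    k = len(mean_ranks)
    se = math.sqrt(k * (k + 1) / (6.0 * n_blocks))
    m = k * (k - 1) // 2                      # number of pairwise comparisons
    out = {}
    for i, j in combinations(range(k), 2):
        z = abs(mean_ranks[i] - mean_ranks[j]) / se
        p = math.erfc(z / math.sqrt(2))       # two-sided normal p-value
        out[(i, j)] = (p, min(1.0, m * p))
    return out

# Friedman mean ranks of leCS, brCS, quCS, unCS for the speed reducer
# problem (Table 11)
ranks = [1.0000, 2.2667, 2.8333, 3.9000]
for pair, (p, p_adj) in sorted(nemenyi_pairwise(ranks, 30).items()):
    print(pair, f"unadjusted={p:.3f}  adjusted={p_adj:.3f}")
```

Under the N = 30 assumption, the brCS/quCS pair yields an unadjusted p of about 0.089 and an adjusted p of about 0.535, in line with the corresponding entries of Table 12.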
Fig. 6 Convergence plots for real-valued optimization problems
Table 11 Friedman ranks of algorithms for mechanical design problems

Design problems | leCS | brCS | quCS | unCS | p-values | Decision
Spring design | 2.6667 | 2.9000 | 2.0667 | 2.3667 | 0.0694 | Reject H0
Pressure vessel | 3.1000 | 2.7000 | 1.8000 | 2.4000 | 0.0010 | Reject H0
Welded beam | 2.8667 | 2.2333 | 2.2667 | 2.6333 | 0.1718 | Cannot reject H0
Speed reducer | 1.0000 | 2.2667 | 2.8333 | 3.9000 | 0.0000 | Reject H0
Similar tests are conducted for the real-valued optimization problems. The Friedman test, whose results are presented in Table 13, is applied first. According to Table 13, brCS is found to be the most efficient CS extension. Additionally, the p-values of the Friedman test point to rejection for both significance levels α = 0.05 and α = 0.10. Further comparative analysis based on the Nemenyi test is presented in Table 14. According to these results, although the unadjusted p-values point to the superiority of brCS over unCS (α = 0.05), leCS (α = 0.10) and quCS (α = 0.10), the Nemenyi test cannot find significant differences between these algorithms in terms of finding the best solution. Similarly, although the unadjusted p-values point to the superiority of brCS
Table 12 Nemenyi test results for mechanical design problems

Spring design
Comparison | Unadjusted p-values | Adjusted p-values
brCS vs. quCS | 0.012 (+) | 0.074 (+)
leCS vs. quCS | 0.071 (+) | 0.431 (~)
brCS vs. unCS | 0.109 (~) | 0.657 (~)
leCS vs. unCS | 0.368 (~) | 1.000 (~)
unCS vs. quCS | 0.368 (~) | 1.000 (~)
brCS vs. leCS | 0.483 (~) | 1.000 (~)

Pressure vessel
Comparison | Unadjusted p-values | Adjusted p-values
leCS vs. quCS | 0.000 (+) | 0.000 (+)
brCS vs. quCS | 0.006 (+) | 0.041 (+)
leCS vs. unCS | 0.035 (+) | 0.214 (~)
unCS vs. quCS | 0.071 (~) | 0.431 (~)
leCS vs. brCS | 0.230 (~) | 1.000 (~)
brCS vs. unCS | 0.368 (~) | 1.000 (~)

Speed reducer
Comparison | Unadjusted p-values | Adjusted p-values
unCS vs. leCS | 0.000 (+) | 0.000 (+)
quCS vs. leCS | 0.000 (+) | 0.000 (+)
unCS vs. brCS | 0.000 (+) | 0.000 (+)
brCS vs. leCS | 0.000 (+) | 0.001 (+)
unCS vs. quCS | 0.001 (+) | 0.008 (+)
quCS vs. brCS | 0.089 (~) | 0.535 (~)
Table 13 Friedman test results for global optimization problems

Algorithms | Ranks for best results | Ranks for mean results
leCS | 2.656 | 2.313
brCS | 1.844 | 2.125
quCS | 2.656 | 2.500
unCS | 2.844 | 3.063
p-values | 0.010716 | 0.088985
Decision | Reject H0 | Reject H0
to unCS (α = 0.05) and of leCS to unCS (α = 0.10) in terms of mean results, the Nemenyi test cannot find significant differences. To sum up, the statistical tests demonstrate some significant improvements among the developed CS modifications for both the unconstrained and constrained optimization problems adopted in the present study.
5 Conclusions

This study introduces some new movement procedures for CS, including quantum, Brownian and random walks, whereas the standard form adopts Lévy flights. The proposed modifications are tested on a set of well-known real-valued unconstrained global optimization problems and some nonlinear constrained
Table 14 Nemenyi test results for real-valued optimization problems

Best results
Comparison | Unadjusted p-values | Adjusted p-values
brCS vs. unCS | 0.028 (+) | 0.170 (~)
brCS vs. leCS | 0.075 (+) | 0.450 (~)
brCS vs. quCS | 0.075 (+) | 0.450 (~)
quCS vs. unCS | 0.681 (~) | 1.000 (~)
leCS vs. unCS | 0.681 (~) | 1.000 (~)
quCS vs. leCS | 1.000 (~) | 1.000 (~)

Mean results
Comparison | Unadjusted p-values | Adjusted p-values
brCS vs. unCS | 0.039 (+) | 0.239 (~)
leCS vs. unCS | 0.100 (+) | 0.602 (~)
quCS vs. unCS | 0.217 (~) | 1.000 (~)
brCS vs. quCS | 0.411 (~) | 1.000 (~)
leCS vs. quCS | 0.681 (~) | 1.000 (~)
leCS vs. brCS | 0.681 (~) | 1.000 (~)
mechanical design problems. The comprehensive experimental study and statistically verified results demonstrate that some of the proposed movements induce significant improvements over the standard CS and some other developed modifications. In particular, the Brownian and quantum-based walks, whose details are reported in this paper, are found to be promising in solving unconstrained and constrained optimization problems, respectively. Moreover, it is also shown that CS in its standard form is already an efficient optimizer, which is not outperformed by the enhanced modifications in some of the instances. It should be emphasized that adopting particular mechanisms in particular algorithms might not always be practicable. One should be aware of possible conflicts between the flow of the main algorithm and the related mechanisms in order to avoid possible deteriorations in a proactive manner. It might therefore sometimes be challenging to find the best working mechanism for an algorithm. Analysing the effects of several other walk mechanisms can be considered as future work. It is also clear that a learning procedure such as a hyper-heuristic could learn efficient walk types based on a feedback mechanism throughout the iterations. Moreover, several other walks, collected in a walk pool for the mentioned hyper-heuristic, could also be tested. In this regard, adopting a learning mechanism for such walk mechanisms is scheduled as future work.
References

1. Holland JH (1975) Adaptation in natural and artificial systems. The University of Michigan Press, Ann Arbor, MI
2. Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst Mag 22(3):52–67
3. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39:459–471 4. Yang XS (2009) Firefly algorithms for multimodal optimization. In International symposium on stochastic algorithms, pp 169–178, Springer, Berlin, Heidelberg 5. Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In 2009 world congress on nature & biologically inspired computing (NaBIC) pp 210–214. IEEE 6. Krishnanand KN, Ghose D (2009) Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell 3:87–124 7. Yang XS (2010) A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010) pp. 65–74. Springer, Berlin 8. Tang R, Fong S, Yang XS, Deb S (2012) Wolf search algorithm with ephemeral memory. In: IEEE International conference on digital information management (ICDIM) pp 165–72 9. Yang XS (2012) Flower pollination algorithm for global optimiza-tion. In International conference on unconventional computing and natural computation, pp 240–249. Springer, Berlin, Heidelberg 10. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61 11. Zhang Q, Wang R, Yang J, Lewis A, Chiclana F, Yang S (2018) Biology migration algorithm: a new nature-inspired heuristic methodology for global optimization. Soft Comput 1:1–26 12. Mendes R, Mohais AS (2005) DynDE: a differential evolution for dynamic optimization problems. In 2005 IEEE congress on evolutionary computation, pp 2808–2815 13. Blackwell T, Branke J, Li X (2008) Particle swarms for dynamic optimization problems. In: Blum C, Merkle D (eds) Swarm intelligence. Springer, Berlin, pp 193–217 14. Ozsoydan FB (2018) A quantum based local search enhanced particle swarm optimization for binary spaces. Pamukkale Univ J Eng Sci 24:675–681 15. 
Ozsoydan FB, Baykaso˘glu A (2019) Quantum firefly swarms for multimodal dynamic optimization problems. Expert Syst Appl 115:189–199 16. Ozsoydan FB (2019) Effects of dominant wolves in Grey Wolf Optimization algorithm. Appl Soft Comput 105658 17. Yang XS, Deb S (2010) Engineering optimisation by cuckoo search. arXiv:1005.2908 18. Chandrasekaran K, Simon SP (2012) Multi-objective scheduling problem: hybrid approach using fuzzy assisted cuckoo search algorithm. Swarm Evol Comput 5:1–16 19. Tuba M, Subotic M, Stanarevic N (2012) Performance of a modified cuckoo search algorithm for unconstrained optimization problems. WSEAS Trans Syst 11:62–74 20. Civicioglu P, Besdok E (2013) A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev 39:315– 346 21. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and genetic algorithm for reliability-redundancy allocation problems. Comput Ind Eng 66:1115–1124 22. Rodrigues D, Pereira LA, Almeida TNS, Papa JP, Souza AN, Ramos CC, Yang XS (2013) BCS: A binary cuckoo search algorithm for feature selection. In 2013 IEEE international symposium on circuits and systems (ISCAS2013), pp 465–468 23. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29:17–35 24. Yildiz AR (2013) Cuckoo search algorithm for the selection of optimal machining parameters in milling operations. Int J Adv Manuf Technol 64:55–61 25. Yang XS, Deb S (2013) Multiobjective cuckoo search for design optimization. Comput Oper Res 40:1616–1624 26. Kaveh A, Bakhshpoori T (2013) Optimum design of steel frames using Cuckoo search algorithm with Lévy flights. Struct Des Tall Spec Build 22:1023–1036 27. Agrawal S, Panda R, Bhuyan S, Panigrahi BK (2013) Tsallis entropy based optimal multilevel thresholding using cuckoo search algorithm. Swarm Evol Comput 11:16–30 28. 
Dash P, Saikia LC, Sinha N (2014) Comparison of performances of several Cuckoo search algorithm based 2DOF controllers in AGC of multi-area thermal system. Int J Electr Power Energy Syst 55:429–436
29. Bhandari AK, Singh VK, Kumar A, Singh GK (2014) Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur’s entropy. Expert Syst Appl 41:3538–3560 30. Yang XS, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24:169–174 31. Ouaarab A, Ahiod B, Yang XS (2014) Improved and discrete cuckoo search for solving the travelling salesman problem. In Cuckoo search and firefly algorithm, pp. 63–84. Springer, Cham 32. Ouaarab A, Ahiod B, Yang XS (2014) Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput Appl 24:1659–1669 33. Marichelvam MK, Prabaharan T, Yang XS (2014) Improved cuckoo search algorithm for hybrid flow shop scheduling problems to minimize makespan. Appl Soft Comput 19:93–101 34. Fister I, Yang XS, Fister D (2014) Cuckoo search: a brief literature review. In Cuckoo search and firefly algorithm, pp 49–62. Springer, Cham 35. Civicioglu P, Besdok E (2014) Comparative analysis of the cuckoo search algorithm. In Cuckoo search and firefly algorithm pp 85–113. Springer, Cham 36. Pereira LAM, Rodrigues D, Almeida TNS, Ramos CCO, Souza AN, Yang XS, Papa JP (2014) A binary cuckoo search and its application for feature selection. In Cuckoo search and firefly algorithm pp 141–154. Springer, Cham 37. Syberfeldt A (2014) Multi-objective optimization of a real-world manufacturing process using cuckoo search. In Cuckoo search and firefly algorithm, pp 179–193. Springer, Cham 38. Salomie I, Chifu VR, Pop CB (2014) Hybridization of cuckoo search and firefly algorithms for selecting the optimal solution in semantic web service composition. In Cuckoo search and firefly algorithm pp 217–243. Springer, Cham 39. Li X, Yin M (2015) Modified cuckoo search algorithm with self adaptive parameter method. Inf Sci 298:80–97 40. 
Wang GG, Gandomi AH, Zhao X, Chu HCE (2016) Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput 20:273–285 41. Kang S, Kim M, Chae J (2018) A closed loop based facility layout design using a cuckoo search algorithm. Expert Syst Appl 93(322–335):3 42. Majumder A, Laha D, Suganthan PN (2018) A hybrid cuckoo search algorithm in parallel batch processing machines with unequal job ready times. Comput Ind Eng 124:65–76 43. Boushaki SI, Kamel N, Bendjeghaba O (2018) A new quantum chaotic cuckoo search algorithm for data clustering. Expert Syst Appl 96:358–372 44. Laha D, Gupta JN (2018) An improved cuckoo search algorithm for scheduling jobs on identical parallel machines. Comput Ind Eng 126:348–360 45. El Aziz MA, Hassanien AE (2018) Modified cuckoo search algorithm with rough sets for feature selection. Neural Comput Appl 29:925–934 46. Yang XS, Deb S, Mishra SK (2018) Multi-species cuckoo search algorithm for global optimization. Cogn Comput 10:1085–1095 47. Chi R, Su YX, Zhang DH, Chi XX, Zhang HJ (2019) A hybridization of cuckoo search and particle swarm optimization for solving optimization problems. Neural Comput Appl 31:653– 670 48. Jalal M, Goharzay M (2019) Cuckoo search algorithm for applied structural and design optimization: float system for experimental setups. J Comput Des Eng 6:159–172 49. Bhandari AK, Maurya S (in press) Cuckoo search algorithm-based brightness preserving histogram scheme for low-contrast image enhancement. Soft Comput 1–27 50. Blackwell TM, Bentley PJ (2002) Dynamic search with charged swarms. In proceedings of the genetic and evolutionary computation conference, vol 2, pp 19–26 51. Blackwell TM, Branke J (2004) Multi-swarm optimization in dynamic environments. In: Raidl G, Cagnoni S, Branke J, Corne D, Drechsler R, Jin Y, Johnson C, Machado P, Marchiori E, Rothlauf F, Smith G, Squillero G (eds) Applications of evolutionary computing. Springer, Berlin, pp 489–500
52. Kim TH, Maruta I, Sugie T (2010) A simple and efficient constrained particle swarm optimization and its application to engineering design problems. Proceedings of the Institution of Mechanical Engineers, Part C: J Mech Eng Sci 224:389–400 53. Arora JS (1989) Introduction to optimum design. McGraw-Hill, New York 54. Baykasoglu A (2012) Design optimization with chaos embedded great deluge algorithm. Appl Soft Comput 12:1055–1067 55. Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127 56. Mezura-Montes E, Coello CAC, Landa-Becerra R (2003) Engineering optimization using a simple evolutionary algorithm. In: Proceedings of the 15th IEEE international conference on tools with artificial intelligence 57. Parsopoulos KE, Vrahatis MN (2005) Unified particle swarm optimization for solving constrained engineering optimization problems. In: Wang L, Chen K, Ong YS (eds) Advances in natural computation. Springer, Berlin, pp 582–591 58. Mezura E, Coello C (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. In: Gelbukh A, Albornoz AD, Terashima-Marín H (eds) Lecture notes in computer science. Springer, Berlin, pp 652–662 59. Aguirre H, Zavala AM, Diharce EV, Rionda SB (2007) COPSO: constrained optimization via PSO algorithm. Technical report No I-07-04/22-02-2007, Center for Research in Mathematics (CIMAT) 60. He Q, Wang L (2007) A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl Math Comput 18:1407–1422 61. Cagnina L, Esquivel S, Coello CC (2008) Solving engineering optimization problems with the simple constrained particle swarm optimizer. Informatica 32:319–326 62. Maruta I, Kim TH, Sugie T (2009) Fixed-structure H∞ controller synthesis: a meta-heuristic approach using simple constrained particle swarm optimization. Automatica 45:553–559 63. 
Tomassetti G (2010) A cost-effective algorithm for the solution of engineering problems with particle swarm optimization. Eng Optimiz 42:471–495 64. Akay B, Karaboga D (2012) Artificial bee colony algorithm for large-scale problems and engineering design optimization. J Intell Manuf 23:1001–1014 65. Gandomi AH, Yang XS, Alavi AH, Talatahari S (2013) Bat algorithm for constrained optimization tasks. Neural Comput Appl 22:1239–1255 66. Brajevic I, Tuba M (2013) An upgraded artificial bee colony (ABC) algorithm for constrained optimization problems. J Intell Manuf 24:729–740 67. Gandomi AH (2014) Interior search algorithm (ISA): a novel approach for global optimization. ISA Trans 53:1168–1183 68. Baykaso˘glu A, Ozsoydan FB (2015) Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl Soft Comput 36:152–164 69. Akhtar S, Tai K, Ray T (2002) A socio-behavioural simulation model for engineering design optimization. Eng Optim 34:341–354 70. Rao SS (1996) Engineering optimization, 3rd edn. Wiley, New York 71. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18
Chapter 4
Cuckoo Search Algorithm: Statistical-Based Optimization Approach and Engineering Applications Thanh-Phong Dao
1 Introduction

The flexure hinge is a basic element of compliant mechanisms (CMs), which are an alternative to conventional kinematic joint-based mechanisms [1]. It needs no lubricant and is lightweight. Today, compliant joints are adopted wherever ultrahigh-precision instruments are desired [2, 3]. Regarding focusing and positioning devices, motors and actuators have been widely employed in order to adjust a microscope [4, 5]. Kinetic bearings were used to transfer motion from the motor and actuator, which could result in errors and undesired vibrations. Hence, an alternative is needed when designing the overall system. Until now, cameras have been widely used in microscopy devices [6–9], camera phones [10], image identification [11], laser probe systems [12], etc. In order to take full advantage of CMs, mobile imaging devices have adopted flexure hinges for focus adjustment [13, 14]. In addition, a compliant optical zoom mechanism for mobile-phone cameras has been studied [15]. There have been several other devices, such as an auto-focusing imaging system comprising two electrode plates [16] and magnetic actuators [17]. Up to now, little attention has been paid to the integration of CMs into camera positioning devices. In the present work, a camera positioning device is adopted so as to guide focusing in imaging devices. The limitations of flexure hinge-based CMs are that the working travel is small and the speed is low [1–3]. When a large working travel and a high speed are required, such devices are limited. Therefore, there is a need to enhance both of these characteristics of camera devices. Several studies have focused on improving the stroke and speed [11, 14, 18–21]. To simultaneously T.-P.
Dao (B) Division of Computational Mechatronics, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam e-mail: [email protected] Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_4
achieve a large working travel and a fast speed, an optimization process is a preferred approach. It is known that the working travel always conflicts with the speed, as proved in many studies [19–21]. Besides, these performances of a camera positioning device are strongly influenced by the geometrical dimensions and shapes of the compliant joints. Therefore, suitable dimensions must be selected so as to make a trade-off between the stroke and the speed. In this chapter, an optimization approach is proposed to address this issue. Usually, an engineering optimization problem can be solved by approaches ranging from physical experiments to computer programs. Regarding experiments, these include the trial-and-error method and the Taguchi method (TM). Both methods need a lot of experiments and are costly [22, 23]. In contrast to experiments, population-based metaheuristic algorithms are easy to program on a computer [24]; examples include sequential linear programming [25], dynamic programming [26], the direct algorithm [27], GA [28], DE [29], and PSO [30]. Later, the cuckoo search algorithm (CSA) was developed so as to decrease computational time [31–33]. These algorithms have been widely used in a lot of various fields [34–39]. However, the CSA still shows some poor behaviors or large time consumption when seeking an accurate solution. In order to overcome this limitation of the CSA, a statistical-based cuckoo search algorithm (SCSA) is proposed in this study to increase convergence speed and find more accurate solutions. The operation principle of the proposed SCSA is based on physical experimentation and programmed code. This chapter is dedicated to developing a statistical-based optimization approach, abbreviated as SCSA. This approach includes the TM, analysis of variance (ANOVA), and the CSA. A camera positioning device is an example case for the proposed hybrid approach. Subsequently, the proposed SCSA approach is compared with existing population-based metaheuristic algorithms.
Finite element analysis and experimentations are carried out to verify the predictions.
2 Statistical-Based Cuckoo Search Algorithm The SCSA is developed by embedding the TM and ANOVA into the CSA. Each of the methods is presented, and then the proposed SCSA approach is stated.
2.1 Approach of Robust Parameter Selection The TM’s robust design approach is commonly used to improve the quality of a product [23, 40–42]. The robust process is based on a performance metric, called as signal-to-noise (S/N) ratio. It consists of three different types being replied on designer and customer [40–42]. In this chapter, a large working travel and a fast speed are two objective functions of the CPD. Therefore, the larger-the-better is chosen and described by
S/N = −10 log( (1/n) Σ_{i=1}^{n} 1/y_i^2 ),    (1)
where y_i represents the ith response and n is the number of replications of the ith experiment. In order to recognize an improvement of the product, an analysis of the mean S/N ratios is conducted so as to search for the optimum setting of parameters. In addition, it also helps to determine the sensitivity of the design parameters based on statistical analysis. However, the TM only optimizes a single objective function. Hence, the TM and ANOVA are embedded into the CSA so that this combination can solve the several aforementioned objective functions.
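Eq. (1) is straightforward to evaluate. The sketch below assumes a base-10 logarithm (the usual convention for S/N ratios expressed in dB) and uses illustrative response values, not measurements from the chapter.

```python
import math

def sn_larger_the_better(y):
    """Larger-the-better S/N ratio of Eq. (1):
    S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )."""
    n = len(y)
    return -10 * math.log10(sum(1.0 / yi ** 2 for yi in y) / n)

# illustrative replications of a measured working travel (micrometres)
print(sn_larger_the_better([170.0, 172.0, 168.0]))   # about 44.6 dB
```

Larger responses yield a larger S/N ratio, so maximizing the ratio maximizes the response while penalizing variability across replications.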
2.2 Analysis of Variance

ANOVA is a common method in statistics and has been widely utilized to evaluate the significant contribution of controllable parameters [43–45]. In this study, ANOVA is used to determine the influencing degree and range of each design variable on the objective function. Based on these analyzed results, the ranges and constraints of the design variables are searched and redetermined. These ranges are preferred as an initial search space for the cuckoo search algorithm. Influences of design parameters on responses can be identified through this analysis. In ANOVA, the statistical F-test and the contribution percentage of factors are used as measurement indices.
2.3 Cuckoo Search Algorithm

The CSA was developed for solving global optimization problems and has since been utilized in many engineering fields [46–59]. Details of the CSA can be found in previous studies [46–59]. The operation of the CSA goes through the key steps below. A random walk is computed as

L_k = Σ_{i=1}^{k} X_i = X_1 + X_2 + · · · + X_k = Σ_{i=1}^{k−1} X_i + X_k = L_{k−1} + X_k,    (2)
where L_k is the random walk, k is the number of random steps, and X_i is the ith random step. The step size is determined by

L = u / |v|^{1/β},    (3)
where β is about 1.5, while u and v are drawn as follows:

u ∼ N(0, σ_u^2), v ∼ N(0, σ_v^2),    (4)
Table 1 Pseudo code of the SCSA

begin
  Select effective design variables and determine key objective functions;
  Generate a constraint for the variables;
  Generate an initial population for the CSA;
  Redetermine a new investigating space for the variables using statistical methods;
  Replace the old population by the new population for the CSA;
  Identify the best fitness value;
  Check the stop criterion;
  Postprocess and estimate the results;
end
where N denotes the normal distribution, and σ_u and σ_v are as follows:

σ_u = [ Γ(1 + β) sin(πβ/2) / ( Γ((1 + β)/2) β 2^{(β−1)/2} ) ]^{1/β}, σ_v = 1,    (5)
where Γ is the Gamma function:

Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt,    (6)
For z = p, a positive integer, the Gamma function reduces to Γ(p) = (p − 1)!. A new solution x_i^{t+1} for the ith cuckoo is generated via a Lévy flight as

x_i^{t+1} = x_i^t + α L,    (7)
where x_i^{t+1} is the new solution, x_i^t is the current solution, and L is the Lévy flight vector. Another parameter of the CSA is p_a, a discovery probability. The discovery indicator p_ij is determined by

p_ij = 1 if rand(0, 1) < p_a, and 0 otherwise.    (8)
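Eqs. (3)–(7) together are Mantegna's algorithm for generating Lévy-distributed steps. A minimal sketch is given below; the step scale α and the per-coordinate application of the step are illustrative assumptions, not specifics from the chapter.

```python
import math
import random

BETA = 1.5

def levy_sigma_u(beta=BETA):
    # Eq. (5): scale of the numerator distribution u ~ N(0, sigma_u^2)
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta=BETA):
    # Eq. (3): L = u / |v|^(1/beta), with u and v drawn as in Eq. (4)
    u = random.gauss(0.0, levy_sigma_u(beta))
    v = random.gauss(0.0, 1.0)                 # sigma_v = 1
    return u / abs(v) ** (1 / beta)

def new_solution(x, alpha=0.01):
    # Eq. (7): x_i^(t+1) = x_i^t + alpha * L, one Levy step per coordinate
    # (alpha and the per-coordinate step are illustrative choices)
    return [xi + alpha * levy_step() for xi in x]

random.seed(1)
print(new_solution([0.0] * 5))
```

The heavy-tailed steps produced this way occasionally make very long jumps, which is what gives the Lévy flight its global exploration capability compared with a Gaussian random walk.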
Table 1 presents the pseudo code of the SCSA's operation.
2.4 Statistical-Based Optimization Approach

As mentioned above, the TM is a statistical tool and is not able to satisfy several specifications simultaneously. Advantages of the TM include a simple experiment
plan, reduced production cost, and sensitivity analysis. Meanwhile, ANOVA is also a statistical method, commonly used to compare the differences between factors as well as to determine their significant contributions. Based on statistical theory, the TM and ANOVA are embedded into the CSA. The original CSA is easy to program with few control parameters, but it has a slow convergence speed and a long computational time. An efficient approach is to construct a proper population for the CSA so as to obtain a set of optimal solutions in a short time. In this chapter, a decrease in human work and an increase in computational speed are the key purposes of the present study. Statistical methods can resolve and analyze sets of simple or complex data. Depending on the researcher's demands, many different types of statistical results can be integrated, e.g., a discrete optimization, a comparison of pairs of factors, the contribution of a factor, a regression surrogate model, a graph, and so on. An effective combination of such statistical methods and a population-based metaheuristic algorithm can be considered for engineering optimization problems. In this chapter, the SCSA is briefly covered by the following steps:

– The experimentation matrix is constructed by using the TM;
– Data are collected so as to calculate the S/N ratio values;
– The sensitivity of the controllable factors is recognized;
– Using the S/N ratios, ANOVA is employed to determine and confirm a robust range of the controllable parameters;
– Based on the results of ANOVA, a proper range of each controllable parameter is recommended so as to generate a new constraint for those parameters. Such new constraints form a new population integrated into the CSA;
– The CSA is programmed on a PC in order to search for the best combination of design variables.

To sum up, Fig. 1 describes the SCSA process. It initializes a constraint of variables and then makes a new population for the CSA based on the statistical methods.
3 Case Study

3.1 Optimization for Camera Positioning Device

A camera positioning device (CPD) is used as an example to demonstrate the proposed SCSA approach. Figure 2 depicts the primary design of the CPD. The camera system comprises a base plate, a voice coil motor (VCM), a rod, the CPD, screw holes, and a lens holder. To decrease the positioning error, increase accuracy, and reduce manufacturing cost, the CPD is designed with four flexure hinges based on the concept of compliant mechanisms (CMs). The cross-sectional parameters of the hinges are taken as design variables. To obtain a lightweight design, the material Al T73-7075 (ρ of 2770 kg/m³, σ_y of 435 MPa, E of 72,000 MPa) is preferred [50]. The total size of the CPD is about 450 mm × 36 mm × 15 mm.

T.-P. Dao

Fig. 1 Flowchart of the SCSA optimization approach

The complete design process is detailed in a previous investigation [50]. To fulfill the practical demand, the CPD is expected to have a large working travel and a high frequency, over 170 µm and 185 Hz, respectively.
3.2 Statement of Optimization Problem

A wide enough working travel and a fast enough speed are two essential conditions for designing the CPD. From a mechanical point of view, the working travel can be modified by varying the structure or topology of the CPD, but this takes a long time.
Fig. 2 Camera positioning device
Meanwhile, the speed of the CPD can be improved by increasing the natural frequency of its structure. A change in structure or topology may be costly and does not guarantee that both characteristics are fulfilled. Therefore, this study formulates mathematical models for both objective functions before conducting the SCSA approach. An example application of the pseudo-rigid-body model for a hinge is illustrated in Fig. 3a, b. The readers can find the analysis process in [50]. The mass of the lens holder is determined by

M_h = \rho \times v_{hs} = \rho \times \left[ (50 \times 36 \times 15) - \pi \times 4^2 \times 15 - 4 \times \pi \times 2.5^2 \times 15 \right],   (9)

where ρ is the density of the material and v_{hs} is the volume. The total mass of the flexure hinges is calculated by

M_{lf} = \rho \times v_{lf} = \rho \times N_{lf} \times (l \times t \times w).   (10)

The total mass of the CPD is determined as

M = M_h + M_{lf} = \rho (27000 + 8 \times l \times t \times w),   (11)

in which N_{lf} = 8 hinges.
Fig. 3 a Front view, b analytical model of hinge
A dynamic model of the CPD is established based on Lagrange's principle. The kinetic energy of the CPD is

T = \frac{1}{2} M \dot{y}^2.   (12)

The potential energy of a torsional spring (see Fig. 3b) is determined by

V_{ds} = \frac{1}{2} K_{ds} \theta_z^2,   (13)

where K_{ds} is the dynamic stiffness of the spring, depicted by K_{ds} = 2\gamma k_{\Theta} E I / l, with γ = 0.85, k_Θ = 2.669, and I = w t^3 / 12 the moment of inertia; Θ is called the pseudo-rigid angle. The total potential energy of the hinges is

V = 16 \times \frac{1}{2} K_{ds} \theta_z^2 = 16 \times \frac{1}{2} K_{ds} \left( \frac{y}{2l} \right)^2,   (14)

where θ_z = y/(2l) is found according to previous work [50]. The dynamic model of the CPD follows from

\frac{d}{dt}\left( \frac{\partial T}{\partial \dot{y}} \right) - \frac{\partial T}{\partial y} + \frac{\partial V}{\partial y} = 0.   (15)
Equation (15) becomes

M \ddot{y} + \frac{4 K_{ds}}{l^2} y = 0.   (16)

Equation (16) is simplified to

M \ddot{y} + K y = 0.   (17)

The stiffness of the CPD is

K = \frac{4 K_{ds}}{l^2}.   (18)

The natural frequency of the CPD is determined by

f = \frac{1}{2\pi} \sqrt{ \frac{4 K_{ds}}{M l^2} }.   (19)

Equation (19) can be written as

f = \frac{1}{2\pi} \sqrt{ \frac{2 \gamma k_{\Theta} E w t^3}{3 \rho l^3 (27000 + 8 \times l \times t \times w)} }.   (20)

The working travel of the CPD is determined by

y = \frac{4 F l^3}{E w t^3} = \frac{2 \sigma_{max} l^2}{3 E t}.   (21)

The stress concentration of the CPD is computed by

\sigma_{max} = \frac{6 F_{max} l}{w t^2}.   (22)

The optimization problem for the CPD is then briefly stated as follows.

Maximize the first natural frequency:

f = \frac{1}{2\pi} \sqrt{ \frac{2 \gamma k_{\Theta} E w t^3}{3 \rho l^3 (27000 + 8 \times l \times t \times w)} },   (23)

Maximize the working travel:

y = \frac{4 F l^3}{E w t^3} = \frac{2 \sigma_{max} l^2}{3 E t}.   (24)
Subject to the constraints below.

Stress constraint:

\sigma_{max} \leq \sigma_y,   (25)

Constraints of the design variables:

30 mm ≤ l ≤ 150 mm,
6 mm ≤ w ≤ 36 mm,
0.4 mm ≤ t ≤ 3.4 mm,   (26)

where σ_max represents the maximum stress and σ_y the yield strength of the material, while l, w, and t are the cross-sectional parameters of the hinges.
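The objectives and the stress constraint can be evaluated directly as functions of the design variables. The sketch below is a minimal Python rendering of Eqs. (20)–(22) in SI units; the conversion of the constant 27000 mm³ and the use of a hypothetical load F are assumptions, and the two forms of Eq. (21) can be checked against each other rather than against any tabulated value.

```python
from math import pi, sqrt

# Material constants from the text (Al T73-7075), SI units
RHO, SIGMA_Y, E = 2770.0, 435e6, 72e9
GAMMA_PRB, K_THETA = 0.85, 2.669   # pseudo-rigid-body constants

def frequency(l, w, t):
    # Eq. (20): first natural frequency; l, w, t in metres
    num = 2 * GAMMA_PRB * K_THETA * E * w * t ** 3
    den = 3 * RHO * l ** 3 * (27e-6 + 8 * l * t * w)   # 27000 mm^3 -> 2.7e-5 m^3
    return sqrt(num / den) / (2 * pi)

def max_stress(F, l, w, t):
    # Eq. (22): bending stress at the hinge under load F (hypothetical value)
    return 6 * F * l / (w * t ** 2)

def travel(F, l, w, t):
    # Eq. (21), first form: y = 4 F l^3 / (E w t^3)
    return 4 * F * l ** 3 / (E * w * t ** 3)

def travel_from_stress(sigma_max, l, t):
    # Eq. (21), second form: y = 2 sigma_max l^2 / (3 E t)
    return 2 * sigma_max * l ** 2 / (3 * E * t)
```

Substituting Eq. (22) into the first form of Eq. (21) recovers the second form, which gives a quick internal consistency check of the model.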
4 Results and Discussion

4.1 Data Measurement

Physical models of the CPD are fabricated by the WEDM technique, and experiments are then performed to collect the data, i.e., the working travel and the frequency of the device. Figure 4 shows the physical setup for the displacement measurement. The entire measuring process is located on an optical table to suppress undesired shocks. The instruments include a force gauge, a displacement laser sensor, and a digital indicator. The physical setup for measuring the resonant frequency is given in Fig. 5; the instruments include a modal hammer, an accelerometer, and a modal analyzer. The software CUTPRO on a PC is employed to analyze the data. Each experiment could be repeated many times and the average compared with the predicted value; however, repeated trials consumed considerable time while the results remained almost identical. Therefore, in this study, each experiment is performed five times.
4.2 New Range of Constraints

As mentioned above, the purpose of this study is to find a proper range of constraints for the design variables, from which a new population is embedded into the CSA. First, the three factors are divided into four levels based on designer experience and the working capacity of the device, as given in Table 2. The orthogonal array L16 is utilized to build the set of experiments, as shown in Table 3. The experimentation
Fig. 4 Physical model for displacement’s measurement
is conducted in the same manner as in the previous section (see Figs. 4 and 5). Using the data, the S/N ratio values are calculated via Eq. (1), and the results are given in Tables 4 and 5. From these tables, the sensitivity of each design variable can be determined. In addition, ANOVA of the S/N ratios is calculated, and the contribution and effectiveness of each controllable factor are identified. The ANOVA results for both responses are given in Tables 6 and 7. Table 6 shows that the F-values of the length, thickness, and width are 567.56, 53.09, and 1.74, respectively. According to statistical analysis [50], a higher F-value corresponds to a larger contribution of a variable. On this basis, a new range of constraints is set in Eq. (27); this range is considered the shortest population range for the CSA.

Case study 1: range of new constraints

110 mm ≤ l ≤ 150 mm,
6 mm ≤ w ≤ 16 mm,
0.4 mm ≤ t ≤ 1.4 mm.   (27)
Fig. 5 Physical model for frequency’s measurement
Table 2 Key design variables and their levels (Unit: mm)

Variables      Level 1   Level 2   Level 3   Level 4
Length, l      30        70        110       150
Width, w       6         16        26        36
Thickness, t   0.4       1.4       2.4       3.4
By a similar analysis, Table 7 indicates that the F-values of the length, thickness, and width are 148.24, 5.16, and 1.05, respectively. Accordingly, a new range of constraints, regarded as the new population for the CSA, is formulated in Eq. (28).

Case study 2: range of new constraints

110 mm ≤ l ≤ 150 mm,
16 mm ≤ w ≤ 36 mm,
2.4 mm ≤ t ≤ 3.4 mm.   (28)
Table 3 Experimental matrix (Unit: mm)

No.   l     w    t
1     30    6    0.4
2     30    16   1.4
3     30    26   2.4
4     30    36   3.4
5     70    6    1.4
6     70    16   0.4
7     70    26   3.4
8     70    36   2.4
9     110   6    2.4
10    110   16   3.4
11    110   26   0.4
12    110   36   1.4
13    150   6    3.4
14    150   16   2.4
15    150   26   1.4
16    150   36   0.4
Table 4 S/N ratio data for working travel; W_i (i = 1, 2, …, 5) is the ith repetition of each experiment

No.   W1 (µm)   W2 (µm)   W3 (µm)   W4 (µm)   W5 (µm)   S/N (dB)
1     98.03     99.99     99.11     99.79     99.21     39.93
2     82.56     84.52     83.64     84.32     83.74     38.46
3     75.52     77.48     76.60     77.28     76.70     37.70
4     66.33     68.29     67.41     68.09     67.51     36.59
5     127.46    129.42    128.54    129.22    128.64    42.19
6     131.06    260.48    132.14    132.82    132.24    43.11
7     100.23    102.19    101.31    101.99    101.41    40.12
8     107.05    209.24    108.13    108.81    108.23    41.36
9     146.78    148.74    147.86    148.54    147.96    43.40
10    139.27    141.23    140.35    141.03    140.45    42.95
11    161.02    162.98    162.10    162.78    162.20    44.20
12    150.31    152.27    151.39    152.07    151.49    43.61
13    182.45    184.41    183.53    184.21    183.63    45.28
14    197.34    199.30    198.42    199.10    198.52    45.96
15    228.56    230.52    229.64    230.32    229.74    47.23
16    240.28    242.24    241.36    242.04    241.46    47.66
Table 5 S/N ratio data for frequency; F_i (i = 1, 2, …, 5) is the ith repetition of each experiment

No.   F1 (Hz)   F2 (Hz)   F3 (Hz)   F4 (Hz)   F5 (Hz)   S/N (dB)
1     122.53    124.98    123.88    124.74    124.00    41.87
2     136.06    138.51    137.41    138.27    137.53    42.77
3     141.27    143.72    142.62    143.48    142.74    43.09
4     155.39    157.84    156.74    157.60    156.86    43.91
5     188.43    190.88    189.78    190.64    189.90    45.57
6     180.21    371.09    181.56    182.42    181.68    45.89
7     202.05    204.50    203.40    204.26    203.52    46.17
8     194.76    399.26    196.11    196.97    196.23    46.56
9     231.36    233.81    232.71    233.57    232.83    47.34
10    248.22    250.67    249.57    250.43    249.69    47.95
11    224.07    226.52    225.42    226.28    225.54    47.07
12    230.35    232.80    231.70    232.56    231.82    47.30
13    275.63    278.08    276.98    277.84    277.10    48.85
14    270.15    272.60    271.50    272.36    271.62    48.68
15    261.04    263.49    262.39    263.25    262.51    48.38
16    252.31    254.76    253.66    254.52    253.78    48.09
Table 6 ANOVA for working travel

Variables   Level 1   Level 2   Level 3   Level 4   F-test
l           38.97     41.70     43.54     46.53     567.56
w           42.70     42.62     42.31     42.31     1.74
t           43.73     42.87     42.11     41.24     53.09
Table 7 ANOVA for frequency

Variables   Level 1   Level 2   Level 3   Level 4   F-test
l           42.91     46.05     47.42     48.50     148.24
w           45.91     46.32     46.18     46.47     1.05
t           45.73     46.01     46.42     46.72     5.16
To sum up, each set of new constraints is treated as a case study for the subsequent optimization process using the proposed SCSA.
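Since working travel and frequency are both to be maximized, the larger-the-better S/N ratio is appropriate. The sketch below assumes the standard Taguchi formula S/N = −10·log10((1/n)·Σ 1/yᵢ²) for the Eq. (1) referred to in the text; applied to the five repetitions of experiment 1 in Table 4, it reproduces the listed 39.93 dB. The level_means helper is one illustrative way to build per-level response values of the kind shown in the ANOVA tables.

```python
from math import log10

def sn_larger_the_better(values):
    # Taguchi larger-the-better signal-to-noise ratio in dB
    n = len(values)
    return -10 * log10(sum(1 / y ** 2 for y in values) / n)

def level_means(sn, levels):
    # Average S/N per factor level (the entries of response tables)
    groups = {}
    for s, lv in zip(sn, levels):
        groups.setdefault(lv, []).append(s)
    return {lv: sum(v) / len(v) for lv, v in groups.items()}

# Repetitions of experiment no. 1 for working travel (Table 4)
w1 = [98.03, 99.99, 99.11, 99.79, 99.21]
print(round(sn_larger_the_better(w1), 2))  # prints 39.93
```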
Table 8 Optimal results for both case studies

Quality characteristics   Case study 1   Case study 2
Working travel (µm)       155.05         188.36
Frequency (Hz)            228.12         284.06
Stress (MPa)              416.07         430.14
4.3 Results and Discussion

Through many simulations in MATLAB, a population size n of 25, a tolerance of 10⁻⁶, and a p_a of 0.25 were found suitable for the SCSA in this study. The optimization is carried out in MATLAB by programming the proposed SCSA, with case study 1 and case study 2 implemented individually. The results are given in Table 8. The working travel of case study 2 is higher than that of case study 1 by about 21.4%, and its frequency is larger by about 24.5%. Moreover, the stresses of both case studies remain below the yield strength of Al T73-7075, and both the working travel and the frequency exceed the design demands of 170 µm and 185 Hz, respectively. Comparing the two, case study 2 is chosen as the final optimal solution for the CPD because it better satisfies the requirements. The best combination is l = 111.4 mm, t = 3.3 mm, and w = 31.5 mm.

The proposed SCSA is compared with other population-based metaheuristic algorithms, e.g., DE [60], GA [61], PSO [62], AEDE [63], and PSOGSA [64]. Table 9 shows that the proposed SCSA needs only 50 function evaluations, fewer than all of the aforementioned algorithms. It can be concluded that the convergence speed of the proposed SCSA is better than that of the others.

Table 9 Comparison of nature-based metaheuristic algorithms

Quality characteristics          SCSA     DE       GA       PSO      AEDE     PSOGSA
Working travel (µm)              188.36   188.03   188.15   188.24   188.03   188.24
Frequency (Hz)                   284.06   284.11   284.08   284.12   284.11   284.12
Stress (MPa)                     430.14   430.21   430.18   430.25   430.21   430.25
Number of function evaluations   50       3825     1325     1525     1000     1140
5 Validations

In the previous section, the parameters l = 111.4 mm, t = 3.3 mm, and w = 31.5 mm were used to fabricate a final prototype. Experiments are performed to verify the predicted results. Table 10 indicates that the predicted solution from the SCSA is close to the experimental results. In order to measure the stress, the strain is first calibrated, as shown in Fig. 6. The relationship between strain and stress is given by Eq. (29) [50]:

Table 10 Comparisons

Responses             SCSA     Physical model   Simulation   Error (SCSA vs. physical model) (%)   Error (SCSA vs. simulation) (%)
Working travel (µm)   188.36   180.63           197.28       4.27                                  4.52
Frequency (Hz)        284.06   277.54           295.67       2.34                                  3.92
Stress (MPa)          430.14   419.12           420.05       2.62                                  2.40
Fig. 6 Physical model of strain’s measurement
Table 11 Comparison of the present device with previous devices

Devices            Displacement (µm)   Frequency (Hz)
Initial design     170                 185
Optimal design     188                 284
Hsu et al. [11]    50                  NA
Song et al. [14]   100                 NA
Liu and Xu [18]    10000               50

\sigma = \varepsilon \times E,   (29)
where σ, ε, and E are the stress, the strain, and Young's modulus, respectively. Table 10 shows that the stress value is below the yield strength of Al T73-7075. Subsequently, a 3D model of the CPD is simulated by finite element analysis (FEA) using ANSYS 16 [65–68], with material and boundary conditions as in the previous sections. As given in Table 10, there is a good correlation between the predicted result and the FEA result. As given in Table 11, both responses of the CPD outperform the original design, and the quality characteristics of the CPD are better than those of previous studies in the literature. The SCSA is thus an efficient approach for solving the optimization of the CPD.
6 Conclusions

In this chapter, a statistical-based cuckoo search algorithm has been developed to reduce human effort and material consumption and to increase convergence speed. A camera positioning device is considered as an application example of the SCSA. The working travel and the frequency of the CPD are the two objective functions, and the cross-sectional parameters of the hinges are the main design variables. Based on the TM, the data matrix is established and experimental data are collected using physical models. The S/N ratio values for both responses are then calculated, and the effect of each design variable on the responses is determined through ANOVA. Based on the ANOVA results, new constraint ranges of the design variables are identified; these ranges serve as a new population for the subsequent SCSA. Two new sets of constraints are obtained, each considered as a case study. Case study 2 is chosen as the final optimal solution because it satisfies the design demands. The results show that the SCSA outperforms other metaheuristic methods (GA, DE, AEDE, PSO, and PSOGSA) and that the performance characteristics
of the camera device are improved through the proposed SCSA approach. A statistical-based optimization approach is thus expected to become an efficient tool for engineering optimization problems with complex objective functions and constraints.

Acknowledgements This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 107.01-2019.14.
References

1. Howell LL (2001) Compliant mechanisms. Wiley, New York
2. Huang SC, Dao TP (2016) Design and computational optimization of a flexure-based XY positioning device using FEA-based response surface methodology. Int J Precis Eng Manuf 17(8):1035–1048
3. Dao TP, Huang SC (2016) Design and analysis of a compliant micro-positioning platform with embedded strain gauges and viscoelastic damper. Microsyst Technol 23(2):441–456
4. Yu HC, Liu TS (2007) Design of a slim optical image stabilization actuator for mobile phone cameras. Physica Status Solidi 4:4647–4650
5. Kim C, Song MG, Park NC, Park KS, Park YP, Song DY (2011) Design of a hybrid optical image stabilization actuator to compensate for hand trembling. Microsyst Technol 17:971–981
6. O'Brien W (2005) Long-range motion with nanometer precision. Photonics Spectra. Laurin Publishing Co, pp 80–81
7. Gauthier M, Piat E (2006) Control of a particular micro-macro positioning system applied to cell micromanipulation. IEEE Trans Automat Sci Eng 3:264–271
8. Dai G, Pohlenz F, Danzebrink HU, Xu M, Hasche K, Wilkening G (2004) Metrological large range scanning probe microscope. Rev Sci Instrum 75:962–969
9. Hausotte T, Jaeger G, Manske E, Hofmann N, Dorozhovets N (2005) Application of a positioning and measuring machine for metrological long-range scanning force microscopy. Proc SPIE 5878:87802
10. Chung MJ, Yee YH, Cha DH (2007) Development of auto focus actuator for camera phone by applying piezoelectric single crystal. In: International symposium on optomechatronic technologies. International society for optics and photonics 67:1507–671507
11. Hsu WY, Lee CS, Chen PJ, Chen NT, Chen FZ, Yu ZR, Hwang CH (2009) Development of the fast astigmatic auto-focus microscope system. Meas Sci Technol 20:045902
12. Fan KC, Chu CL, Mou JI (2001) Development of a low-cost autofocusing probe for profile measurement. Meas Sci Technol 12:2137
13. Kim C, Song MG, Kim Y, Park NC, Park KS, Park YP, Lee GS (2013) Design of an auto-focusing actuator with a flexure-based compliant mechanism for mobile imaging devices. Microsyst Technol 19:1633–1644
14. Song MG, Baek HW, Park NC, Park KS, Yoon T, Park YP, Lim SC (2010) Development of small sized actuator with compliant mechanism for optical image stabilization. IEEE Trans Magnet 46:2369–2372
15. Mutlu R, Alici G, Xiang X, Li W (2014) An active-compliant micro-stage based on EAP artificial muscles. In: IEEE/ASME international conference in advanced intelligent mechatronics, pp 611–616
16. Wei HC, Chien YH, Hsu WY, Cheng YC, Su GDJ (2012) Controlling a MEMS deformable mirror in a miniature auto-focusing imaging system. IEEE Trans Control Syst Technol 20:1592–1596
17. Pournazari P, Nagamune R, Chiao M (2014) A concept of a magnetically-actuated optical image stabilizer for mobile applications. IEEE Trans Consum Electron 60:10–17
18. Liu YL, Xu QS (2015) Design of a flexure-based auto-focusing device for a microscope. Int J Precis Eng Manuf 16:2271–2279
19. Polit S, Dong J (2011) Development of a high-bandwidth XY nanopositioning stage for high-rate micro-/nanomanufacturing. IEEE/ASME Trans Mechatron 16:724–733
20. Xu QS (2012) Design and development of a flexure-based dual-stage nanopositioning system with minimum interference behavior. IEEE Trans Autom Sci Eng 9:554–563
21. Yong YK, Sumeet SA, Moheimani SOR (2009) Design, identification, and control of a flexure-based XY stage for fast nanoscale positioning. IEEE Trans Nanotechnol 8:46–54
22. Halab LK, Ricard A (1999) Use of the trial and error method for the optimization of the graft copolymerization of a cationic monomer onto cellulose. Eur Polym J 35:1065–1071
23. Roy RK (1990) A primer on the Taguchi method. Van Nostrand Reinhold, New York
24. Dey N (ed) (2017) Advancements in applied metaheuristic computing. IGI Global
25. Zienkiewicz OC, Campbell JS (1973) Shape optimization and sequential linear programming. Optimum structural design, pp 109–126
26. Wong PJ, Robert EL (1968) Optimization of natural-gas pipeline systems via dynamic programming. IEEE Trans Autom Control 13:475–481
27. Björkman M, Holmström K (1999) Global optimization using direct algorithm in matlab
28. Davis L (1991) Handbook of genetic algorithms. Van Nostrand Reinhold, New York, USA
29. Price K, Storn RM, Lampinen JA (2006) Differential evolution: a practical approach to global optimization. Springer
30. Kennedy J (2011) Particle swarm optimization. Encyclopedia of machine learning. Springer, pp 760–766
31. Rajabioun R (2011) Cuckoo optimization algorithm. Appl Soft Comput 11(8):5508–5518
32. Huang J, Gao L, Li X (2015) An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes. Appl Soft Comput 36:349–356
33. Mellal MA, Edward JW (2016) Total production time minimization of a multi-pass milling process via cuckoo optimization algorithm. Int J Adv Manuf Technol 1–8
34. Nam JS, Kim DH, Chung H, Lee SW (2015) Optimization of environmentally benign micro-drilling process with nanofluid minimum quantity lubrication using response surface methodology and genetic algorithm. J Clean Prod 102:428–436
35. Zhu X, He R, Lu X, Ling X, Zhu L, Liu B (2015) An optimization technique for the composite strut using genetic algorithms. Mater Des 65:482–488
36. Atif M, Sulaiman FAA (2015) Optimization of heliostat field layout in solar central receiver systems on annual basis using differential evolution algorithm. Energy Convers Manag 95:1–9
37. Gholami M, Alashti RA, Fathi A (2015) Optimal design of a honeycomb core composite sandwich panel using evolutionary optimization algorithms. Compos Struct 139:254–262
38. Delgarm N, Sajadi B, Kowsary F, Delgarm S (2016) Multi-objective optimization of the building energy performance: a simulation-based approach by means of particle swarm optimization (PSO). Appl Energy 170:293–303
39. Chen SY, Hung YH, Wu CH, Huang ST (2015) Optimal energy management of a hybrid electric powertrain system using improved particle swarm optimization. Appl Energy 160:132–145
40. Dao TP, Huang SC (2017) Optimization of a two degrees of freedom compliant mechanism using Taguchi method-based grey relational analysis. Microsyst Technol 23(10):4815–4830
41. Dao TP (2016) Multiresponse optimization of a compliant guiding mechanism using hybrid Taguchi-grey based fuzzy logic approach. Math Probl Eng 2016:1–17
42. Huang SC, Dao TP (2016) Multi-objective optimal design of a 2-DOF flexure-based mechanism using hybrid approach of grey-Taguchi coupled response surface methodology and entropy measurement. Arab J Sci Eng 41(12):5215–5231
43. Scheffe H (1999) The analysis of variance, vol 72. Wiley
44. Nguyen DN, Dao TP, Chau NL, Dang VA (2019) Hybrid approach of finite element method, kigring metamodel, and multiobjective genetic algorithm for computational optimization of a flexure elbow joint for upper-limb assistive device. Complexity 2019:1–13
45. Ho NL, Dao TP, Chau NL, Huang SC (2019) Multi-objective optimization design of a compliant microgripper based on hybrid teaching learning-based optimization algorithm. Microsyst Technol 25(5):2067–2083
46. Chau NL, Dang VA, Le HG, Dao TP (2017) Robust parameter design and analysis of a leaf compliant joint for micropositioning systems. Arab J Sci Eng 42(11):4811–4823
47. Yang XS, Deb S (2009) Cuckoo search via Lévy flight. In: Proceeding of world congress on nature & biologically inspired computing (NaBIC 2009). IEEE publications, USA, pp 210–214
48. Payne RB, Sorenson MD, Klitz K (2005) The Cuckoos. Oxford University Press, New York
49. Brown C, Liebovitch LS, Glendon R (2007) Lévy flights in Dobe Ju/hoansi foraging patterns. Hum Ecol 35:129–138
50. Pavlyukevich I (2007) Lévy flights, non-local search and simulated annealing. J Comput Phys 226:1830–1844
51. Dao TP, Huang SC, Pham TT (2017) Hybrid Taguchi-cuckoo search algorithm for optimization of a compliant focus positioning platform. Appl Soft Comput 57:526–538
52. Binh HTT, Hanh NT, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks. Neural Comput Appl 30(7):2305–2317
53. Dey N, Sourav S, Yang XS, Achintya D, Sheli SC (2013) Optimisation of scaling factors in electrocardiogram signal watermarking using cuckoo search. Int J Bio-inspir Comput 5(5):315–326
54. Sourav S, Dey N, Das P, Acharjee S, Chaudhuri SS (2013) Multilevel threshold based gray scale image segmentation using cuckoo search. arXiv preprint arXiv:1307.0277
55. Ashour AS, Sourav S, Dey N, Kausar N, Abdessalemkaraa WB, Hassanien AE (2015) Computed tomography image enhancement using cuckoo search: a log transform based approach. J Signal Inf Process 6(03):244
56. Shouvik C, Chatterjee S, Dey N, Amira SA, Ahmed SA, Shi F, Mali K (2017) Modified Cuckoo search algorithm in microscopic image segmentation of hippocampus. Microsc Res Tech 80(10):1051–1072
57. Dey N, Ashour AS, Bhattacharyya S (2019) Applied nature-inspired computing: algorithms and case studies
58. Li Z, Dey N, Ashour AS, Tang Q (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. Neural Comput Appl 30(9):2685–2696
59. Chakraborty S, Dey N, Samanta S, Ashour AS, Barna C, Balas MM (2017) Optimization of non-rigid demons registration using a Cuckoo search algorithm. Cognit Comput 9(6):817–826
60. Storn R, Price K (1995) Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces. International Computer Science Institute, Berkeley, TR-95-012
61. Goldberg DE, Holland JH (1988) Genetic algorithms and machine learning. Mach Learn 3(2):95–99
62. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization. Swarm Intell 1(1):33–57
63. Demertzis K, Iliadis L (2016) Adaptive Elitist differential evolution extreme learning machines on big data: intelligent recognition of invasive species. In: INNS conference on Big Data, pp 333–345
64. Das PK, Behera HS, Panigrahi BK (2016) A hybridization of an improved particle swarm optimization and gravitational search algorithm for multi-robot path planning. Swarm Evolut Comput 28:14–28
65. Dang MP, Le HG, Chau NL, Dao TP (2019) A multi-objective optimization design for a new linear compliant mechanism. Optim Eng 2019:1–33
66. Chau NL, Dao TP, Nguyen VTT (2018) Optimal design of a dragonfly-inspired compliant joint for camera positioning system of nanoindentation tester based on a hybrid integration of Jaya-ANFIS. Math Prob Eng 2018
67. Ho NL, Dao TP, Le HG, Chau NL (2019) Optimal design of a compliant microgripper for assemble system of cell phone vibration motor using a hybrid approach of ANFIS and Jaya. Arab J Sci Eng 44(2):1205–1220
68. ANSYS Workbench (2016) ANSYS, Canonsburg
Chapter 5
Training a Feed-Forward Neural Network Using Cuckoo Search

Adit Kotwal, Jai Kotia, Rishika Bharti, and Ramchandra Mangrulkar
1 Introduction

Artificial Neural Networks (ANNs) are computational models inspired by the actual arrangement and function of the human brain [1]. ANNs learn to perform tasks, much as a brain would, from previous experience. A biological brain contains neurons (nerve cells) that transmit messages to other neurons; the junction between two neurons, known as the synapse, assists nerve impulses in their passage. Likewise, ANNs contain several artificial neurons, each capable of performing certain mathematical calculations. ANNs contain two important layers known as the input layer and the output layer, along with one or more hidden layers in between, in which the bulk of the processing is done. The input layer is where the preliminary information is fed to the network, and the output layer is where it responds to the data it was given. The neurons in a layer are connected to the neurons in the next layer so as to allow the transmission of information. As the data is passed on from the initial input layer down to the deeper hidden layers, the network starts to learn about the information being fed to it. Each level of neurons provides insights, and the information then gets passed on to the deeper levels.

A. Kotwal (B) · J. Kotia · R. Bharti · R. Mangrulkar
Dwarkadas J. Sanghvi College of Engineering, Vile Parle, Mumbai, India
e-mail: [email protected]
J. Kotia e-mail: [email protected]
R. Bharti e-mail: [email protected]
R. Mangrulkar e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_5
Implementation of an ANN consists of two phases: the training phase and the testing phase [2]. For an ANN to learn, a large amount of data needs to be provided; the data used to train the model is known as the training set. For instance, if an ANN is being taught to differentiate between an image of a dog and a cat, a great number of images labelled as dogs, as well as cats, is required. These labels are descriptions that help the network initially classify the images. The hidden layers then analyze the images and try to extract distinctive features that distinguish a dog from a cat. In the training period, the output is compared with the label of what should be observed. If the output matches, the network is validated. If the output differs, a method known as backpropagation [3] is employed to adjust the learning: it goes back through the layers to adjust the mathematical equations of the neurons in the hidden layers, which are responsible for extracting the characteristic attributes. This process is repeated over several iterations until the accuracy exceeds a certain threshold value.

Once training ends, the testing phase commences. Previously unseen data is provided to the trained neural network for classification in order to check its efficiency. Care must be taken to avoid over-fitting in the training phase, in which the network becomes too familiar with the training data and is then unable to perform well on unseen data. ANNs are a powerful prediction tool and have been widely used for data set classification in data mining. Unfortunately, the backpropagation method is not very efficient when it comes to handling large amounts of data.
A network that utilizes backpropagation to solve classification tasks by arranging patterns in the data makes use of the gradient descent technique to tweak the neurons in the layers accordingly. This does not always guarantee the selection of accurate characteristics for a particular task. A limitation of this method is that a differentiable neuron transfer function is requisite, which can take a long time to converge. Several research studies have attempted to overcome this optimization problem by using hybrid metaheuristic algorithms. One such algorithm is the Cuckoo Search algorithm, which has been used to find the optimal number of neurons in the hidden layers, an optimal learning rate, and an optimal momentum rate that together help minimize the error function. In this chapter, an analysis of these works is presented. Important applications of ANNs include image and pattern classification, text sentiment analysis, forecasting on time-series data, etc. The dependence of real-world industries on ANNs justifies further efforts to build more efficient and accurate networks.
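To make the training and testing phases concrete, the following is a minimal, self-contained sketch of a one-hidden-layer feed-forward network trained by backpropagation with gradient descent. The XOR data, layer sizes, learning rate, and epoch count are illustrative choices, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # training set
y = np.array([[0], [1], [1], [0]], float)               # labels (XOR)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
W1, W2 = rng.normal(0, 1, (2, 8)), rng.normal(0, 1, (8, 1))
b1, b2 = np.zeros(8), np.zeros(1)
lr = 0.5

for _ in range(20000):                # training phase
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # output layer
    err = out - y                     # prediction error
    # backpropagation: push the error back through the layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

# testing phase: threshold the trained network's outputs
pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The gradient-descent weight updates are exactly the step that metaheuristics such as Cuckoo Search aim to complement, by tuning hyperparameters like the hidden-layer size and learning rate used above.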
2 Related Work

Nature-inspired algorithms have played a crucial role in computing in recent times. These algorithms are developed by mimicking practices that are prevalent in nature. They have successfully provided innovative solutions to
5 Training a Feed-Forward Neural Network Using Cuckoo Search
problems with a high level of complexity where ordinary algorithms are not able to provide optimal solutions. Some algorithms handle a high-dimensional problem by exploring only the region of the search space where they initially find an optimum, without exploring the global space efficiently. Nature-inspired algorithms instead provide a metaheuristic approach that yields a solution closer to the global optimum, which can be crucial in certain applications. In this section, a few nature-inspired algorithms are discussed which have been used to implement problems with a scope for application in the real world.

1. Genetic Algorithm (GA): This algorithm is based largely on the Darwinian theory of survival of the fittest. It includes five stages: initially, a population characterized by parameters (genes) is established. The population is evaluated using a fitness function, and the fittest individuals pass on their genes to the next generation, i.e. the best parameters are chosen for use in the next iteration. The remaining stages, crossover and mutation, allow for random searching which may or may not yield a better solution, and the process repeats until termination criteria are met.
2. Ant Colony Optimization (ACO): This algorithm is based on the ability of ants to find the shortest path between their nest and a food source. Ants leave a trail of pheromones behind as they move from point to point, so in their quest for food several pheromone trails form. The trail with the highest amount of pheromone corresponds to the best path. This corresponds to exploring the search space and discovering the most optimal path to the solution.
3. Particle Swarm Optimization (PSO): The idea behind this algorithm is that birds in a flock communicate with each other to discover the most optimal path towards food. They constantly learn from previous experience to discover the shortest paths. This behaviour is used to determine the global optimum solution and avoids the risk of being trapped in local minima. The process of exploration continues until termination criteria are satisfied.
4. Artificial Bee Colony (ABC): This algorithm mimics the behaviour of bees scouting for food. The bees randomly search their surroundings, which translates to exploring the search space, and each food source is a potential solution. The positions of food sources are retained by the bees, and better food positions in their neighbourhood are explored further. The most optimal solution corresponds to the food source with the highest amount of nectar.
5. Firefly Algorithm (FA): This algorithm is inspired by the flashing behaviour of fireflies and works on the assumption that the attraction between them is based on the intensity of the light they emit. The search space is explored and a fitness value (brightness) is assigned to each solution. A convergence stage (attraction between fireflies) then yields the most optimal solution.
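The five GA stages just described can be sketched in a few lines. The sketch below evolves bit strings towards the all-ones string (the classic OneMax toy problem); the fitness function, bit-string encoding, and all parameter values are illustrative assumptions, not drawn from any application discussed in this chapter.

```python
import random

random.seed(1)

def one_max(bits):
    """Fitness: number of 1-bits; the optimum is the all-ones string."""
    return sum(bits)

def genetic_algorithm(n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02):
    # Stage 1: initial population of random bit strings (the "genes")
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Stages 2-3: evaluate fitness; tournament selection keeps the fitter
        def select():
            a, b = random.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Stage 4: single-point crossover
            if random.random() < crossover_rate:
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # Stage 5: mutation flips bits with a small probability
            for c in (c1, c2):
                for i in range(n_bits):
                    if random.random() < mutation_rate:
                        c[i] = 1 - c[i]
                children.append(c)
        pop = children[:pop_size]
    return max(pop, key=one_max)

best = genetic_algorithm()
print(one_max(best))   # close to the optimum of 20
```

Tournament selection is used here for simplicity; roulette-wheel selection is an equally common choice for the selection stage.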
A. Kotwal et al.
Table 1 References and their role in contributing to the topic of interest

References              Contribution role
Yang [4]                Working of the Cuckoo Search algorithm for optimization
Nur et al. [7]          ANN weight optimization
Sulaiman et al. [11]    CS-ANN for modelling of operating photovoltaic module temperature
Chatterjee et al. [12]  CS-ANN for forest type classification
Chatterjee et al. [13]  CS-ANN for chronic kidney disease detection
Similarly, Cuckoo Search (CS) is a nature-inspired metaheuristic algorithm that has been successfully used in several applications. This chapter specifically discusses optimizing an Artificial Neural Network with the help of Cuckoo Search. ANNs are widely used in the modern industrial world, mainly due to their ability to approximate unknown values, which further bolsters the need to explore different methods for their optimization. In [4], Xin-She Yang introduces the concept and working of the Cuckoo Search algorithm, deriving its basis from cuckoo breeding behaviour and Lévy flight behaviour. In [5], an adaptive Cuckoo Search algorithm is proposed for optimization. Nawi et al. used the Cuckoo Search algorithm in a hybrid approach along with the Accelerated Particle Swarm Optimization (APSO) algorithm in order to train an ANN [6] (Table 1). In [7], the authors discuss a relatively new metaheuristic algorithm based on stochastic global optimization known as the Harmony Search (HS) algorithm. It is based on the process of musical improvisation and is used during the training phase of an ANN. Similarly, in [8] the authors discuss three different methods for improving the performance of an ANN: Simulated Annealing (SA), Direct Search (DS) and Genetic Algorithm (GA). Each method is explained in terms of how it prevents a neural network from getting trapped in local minima. SA involves generating a new point (new solution) in the search space, with its distance from the current point (previous best solution) subject to a probability distribution, and determining the fitness of each new solution. This is inspired by the physical process of heating a material and allowing it to cool in order to decrease defects.
The method of DS involves the creation of a mesh of points (solutions) around the current best solution to discover an objective function value better than the previous one. In [9, 10], the Cuckoo Search algorithm has been used for setting an optimal weight vector in an Artificial Neural Network, which is then used to perform prediction.
3 Cuckoo Behaviour and Lévy Flights

3.1 Cuckoo Breeding Behaviour

Cuckoos are a fascinating family of birds characterized by an aggressive reproduction strategy. These birds lay their eggs in the nests of other birds (often of different species) to ensure survival. Some cuckoo species also lay their eggs in the nests of other cuckoos, and may remove existing eggs to increase the hatching probability of their own. A few host birds engage in direct conflict with the intruding cuckoos. If a host bird discovers that an egg in the nest is not its own, it will either throw the alien eggs away or simply abandon the nest and build a new one. Some cuckoo species have evolved so that female parasitic cuckoos lay eggs quite similar in colour and pattern to those of the host species. This evolution reduces the probability of the cuckoo eggs being abandoned and tricks the host into believing the new eggs are its own. Certain parasitic cuckoos also time the laying of their eggs impeccably, often choosing a nest where the host bird has just laid its own eggs. Generally, cuckoo eggs hatch slightly earlier than those of the host, and the hatchling almost instinctively expels the host eggs by blindly propelling them out of the nest. This increases the cuckoo chick's share of the food provided by the host, due to reduced competition. Research also records cuckoo chicks mimicking the call of host chicks to gain access to a greater supply of food. This behaviour plays a part in the formulation of the overall Cuckoo Search algorithm, which draws its basis from cuckoo brood parasitism and Lévy flight behaviour.
3.2 Lévy Flights

Consider an animal that scouts a particular area for resources such as food and water to ensure its survival. Once it realizes that it has used up all the sources in this spot, it heads off in a random direction and walks many paces until it reaches a different spot, where it starts searching once more. Several animals and insects explore their immediate surroundings in this way, using a series of straight flight paths punctuated by sudden 90° turns. This kind of behaviour is known as the Lévy-flight style and is commonly used in optimization to produce optimal search results [5].
Lévy flights are inherently Markov processes, which refers to the memory-less property of a random process [4]: the next step depends only on the current state, not on the path taken to reach it. Their study is useful in stochastic measurement and in simulations of random natural phenomena.
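In practice, Lévy-distributed step lengths are often drawn using Mantegna's algorithm, which combines two Gaussian draws. The sketch below assumes the exponent β = 1.5 commonly used with Cuckoo Search and simply demonstrates the heavy tail: most steps are small, but rare steps are very long.

```python
import math
import random

def levy_step(beta=1.5):
    """One Mantegna-style Lévy step: heavy-tailed, so occasional long jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)   # numerator draw with tuned variance
    v = random.gauss(0, 1)         # denominator draw
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(10000)]
# The heavy tail shows up as a maximum step far larger than the average step:
mean_abs = sum(abs(s) for s in steps) / len(steps)
print(max(abs(s) for s in steps) / mean_abs)
```

This contrast between mostly local moves and occasional far jumps is exactly what makes Lévy flights effective for balancing local and global search.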
4 Cuckoo Search

In order to implement Cuckoo Search, the following assumptions need to be made:

(i) Each cuckoo lays only one egg at a time, in a randomly chosen nest.
(ii) The number of available host nests is fixed.
(iii) The nests with the highest quality of eggs are the best nests.
(iv) The probability that an egg laid by a cuckoo is found by the host bird is Pa ∈ [0, 1].
(v) If the host bird discovers the presence of an alien egg, it can either discard the egg or abandon the nest.

Based on these assumptions, each egg of the host bird in a nest represents a solution, and a cuckoo's egg represents a new and possibly better solution that may supersede a less efficient one.
4.1 Pseudocode

The pseudocode for the Cuckoo Search algorithm is described below [6]:

Define an objective function f(a), such that a = (a1, a2, …, an)^T
Generate an initial population of n host nests (corresponding to initial solutions)
while (t < MaxGeneration) do
    Use Lévy flight to randomly generate a new solution (i.e. a cuckoo) i
    Evaluate its quality, i.e. the fitness value Fi
    Randomly choose a nest j among the n nests
    if (Fi > Fj) then
        Replace j with the new solution
    end if
    Abandon a fraction of the worst nests (solutions) and build new ones using Lévy flights
    Keep the best solutions, i.e. the nests that contain the best-quality solutions
    Rank the current best solutions and choose the best one accordingly
end while
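The pseudocode above can be turned into a short runnable sketch. The sphere objective, the bounds, and all parameter values below are illustrative assumptions; the Lévy step uses a Mantegna-style approximation, and the problem is cast as minimization (so "better" means a lower objective value).

```python
import math
import random

random.seed(42)

def sphere(x):
    """Illustrative objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def levy(beta=1.5):
    """Mantegna-style approximation of a Lévy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, alpha=0.05, n_gen=200):
    lo, hi = -5.0, 5.0
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(n_gen):
        for i in range(n_nests):
            # A cuckoo lays one egg: a Lévy-flight move from nest i
            new = [min(hi, max(lo, x + alpha * levy())) for x in nests[i]]
            fn = f(new)
            j = random.randrange(n_nests)    # compare with a random nest j
            if fn < fit[j]:                  # minimization: lower is better
                nests[j], fit[j] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them randomly
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for k in worst[: int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

best_x, best_f = cuckoo_search(sphere)
print(round(best_f, 4))
```

Note that the best nests are never among the abandoned fraction, so the best fitness found can only improve from generation to generation.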
4.2 Explanation

Lévy flight is implemented to generate a new solution a_i^(t+1) for a cuckoo i with the equation:

a_i^(t+1) = a_i^(t) + α ⊕ Lévy(λ)

This equation generates a random walk, i.e. it searches the search space for a range of solutions. The next location (solution) depends only on the current location (the term a_i^(t) in the above equation) and the transition probability (the term α ⊕ Lévy(λ) in the above equation). Here, α is defined as the step size, which depends on the nature of the task; α is always greater than 0. The random step length is drawn from a Lévy distribution, which has infinite variance and infinite mean. Due to this, a random walk via Lévy flight is more efficient at exploring the search space. Some new solutions are generated around the best solution obtained so far, which speeds up the local search. An advantage of this method is that far-field randomization prevents confinement in a local optimum, as new solutions are also explored far enough away from the current best solution. The quality or fitness value of each new solution is processed and analyzed using a test function; depending on this value, the algorithm decides whether to update the current best solution or to keep exploring further.
5 Detailed Overview of Artificial Neural Networks

As discussed earlier in this chapter, neural networks are based on the arrangement and working of neurons in a real brain. These neurons exchange and process information in order to learn. Similarly, an Artificial Neural Network consists of various layers, each of which consists of artificial neurons capable of performing mathematical tasks. Neurons of one layer are connected to neurons of the next layer and are capable of transmitting computed information. Each of these connections has a specific real-valued number, known as a weight, attached to it. This weighted connection decides the importance of the neuron (Fig. 1). A neuron takes the value of each neuron from the previous layer connected to it and multiplies it by the weight of the connection linking the two together. The weighted sum over all connected neurons, offset by the neuron's bias value, is then provided to an activation function, which transforms and limits the value to a particular range so as to achieve normalization. This process is propagated through the whole network, and based on these computed values an output is generated. The challenge lies in obtaining efficient values for the connection weights that will give an accurate answer in the output layer [7]. Initially, random weights are assigned to the connections. In the training phase, these weights are constantly
Fig. 1 The Cuckoo search algorithm flowchart
Fig. 2 Structure of an artificial neural network
adjusted as the network looks for certain characteristic distinguishing features of the problem's input. In order for the network to arrive at the correct answer, it has to be equipped with some sort of feedback mechanism, known as the backpropagation method. This enables the network to re-adjust the connections according to the desired output: the ANN is able to go back, 'double-check', and even tweak the structure of the network to ensure all weights and biases are correct (Fig. 2).
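The neuron computation described above, a weighted sum of the previous layer's outputs plus a bias, passed through an activation function, can be sketched as follows; all weights, biases, and input values are arbitrary illustrative choices.

```python
import math

def sigmoid(z):
    """Squashes any real value into (0, 1) — a common normalizing activation."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of the previous layer's outputs, plus this neuron's bias
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

def layer(inputs, weight_matrix, biases):
    # One fully connected layer: every neuron sees all inputs
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

x = [0.5, -1.2, 3.0]                                      # illustrative input
hidden = layer(x, [[0.1, 0.4, -0.2], [0.7, -0.3, 0.05]],  # 2 hidden neurons
               [0.0, 0.1])
output = neuron(hidden, [1.2, -0.8], 0.05)                # single output neuron
print(output)   # a value in (0, 1)
```

Stacking such layers and feeding each layer's outputs forward is exactly the forward pass of a feed-forward network; training then amounts to adjusting the weights and biases.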
5.1 Backpropagation

Below is a discussion of the hypothesis function, which is the weighted sum computed by a particular neuron. It is obtained by summing the products of the weight assigned to each connection from the previous layer and the corresponding neuron's value. The hypothesis function for a neuron is given as:

h(x) = Σ_{i=1}^{k} W_i X_i
Here, k is the total number of neurons in a particular layer of the network, and W_i is the weight assigned to the connection between X_i and the neuron in the next layer. At the end of one iteration of the neural network, a loss function is calculated, which indicates how accurately the network has performed the task given to it. Ideally, the loss should be zero; practically, it should be as low as possible. The network performs several such iterations until the loss is minimized. During the first few iterations the loss will be high, but based on it the network learns to adjust the connection weights so that the loss is reduced in subsequent iterations. The error function is given as:

Error = (1/N) Σ_{i=1}^{N} (y' − y)²
Here, N is the number of input values, y' is the value predicted by the model, and y is the expected value as provided in the data set. The summation is divided by N since the aim is to minimize the error over all the data points. The adjustment of weights is not done in a random manner: the Batch Gradient Descent optimization function is used to determine in which direction each weight should be adjusted, i.e. whether its value should be increased or decreased.
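As a minimal illustration of how gradient descent adjusts a weight to reduce the error function above, the toy example below fits a single weight w so that h(x) = w·x matches a small invented data set; the data and learning rate are assumptions for illustration only.

```python
# Toy batch gradient descent: fit y = w * x to data generated with w = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0      # initial weight
lr = 0.02    # learning rate
n = len(xs)

for _ in range(200):
    # dError/dw for Error = (1/N) * sum((w*x - y)^2), over the whole batch
    grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad    # step against the gradient

error = (1.0 / n) * sum((w * x - y) ** 2 for x, y in zip(xs, ys))
print(w, error)   # w approaches 2.0 and the error approaches 0
```

Each update moves w opposite to the slope of the error surface, which is exactly what backpropagation does simultaneously for every weight in a multi-layer network.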
5.2 Limitations of Artificial Neural Networks

Due to the large number of neurons and connections between every pair of consecutive layers, it is often hard to gain insight into how a neural network arrives at a solution. This also makes it difficult to identify hyperparameters prior to the commencement of training: these are the parameters whose values must be defined before even the first iteration, since initial values for the weights between neurons are needed for the computation to begin. This process is heavily influenced by the working of the human brain. There is also the danger of over-fitting, where the model becomes over-familiar with the input data and does not perform well on unseen data. Such a situation occurs when the number of independent observations is significantly smaller than the number of parameters. Model performance is also affected if the network relies on default parameters; hand tuning instead of performing hyperparameter optimization through proven methods likewise degrades performance. These networks also remain prone to the butterfly effect: minor modifications in the initial input data can result in significantly different results, rendering them unstable.
Neural networks are essentially black boxes, and understanding every aspect of their calculation can be an arduous task. This renders them unsuitable for applications in which verification of the process is necessary. Simpler machine learning algorithms can compute an optimal model directly from the data set; in the case of ANNs, however, the optimal set of weights cannot be directly computed, nor is there any global convergence guarantee of finding an optimal set of weights. These parameters are found by solving a non-convex optimization problem with many good solutions but also many misleadingly good solutions.
6 Parameter Tuning of ANNs Using Cuckoo Search

In the traditional working process of Artificial Neural Networks, the number of neurons in the hidden layer, the learning rate, and the activation function are generally decided with the aid of heuristic methods. Such a process does not always guarantee an optimal or perfect solution but is sufficient for accomplishing an immediate objective. Heuristic methods are practical and somewhat satisfactory solutions to problems where finding an optimal solution is impossible. Even though this approach results in an acceptable solution, it is very time consuming, as many solutions need to be explored. With Cuckoo Search, these network parameters are instead discovered using an optimized strategy. Furthermore, various forms of activation functions (learning algorithms) are considered to find the most appropriate scheme. Each solution consists of a different combination of the number of neurons present in the hidden layers, the learning rate, and the learning algorithm. These solutions correspond to eggs in host nests (the search space); a cuckoo bird's egg then corresponds to a new and potentially better solution in the nest. As noted, an error function derived at the end of each iteration of the training phase of an ANN represents the difference between expected and calculated values, and the primary aim is to make it as low as possible. The objective of the Cuckoo Search algorithm is to minimize this error function. Cuckoo Search can be applied to this problem in the following manner:

(i) Randomly initialize the population of host nests x_i,k, which represent ANN solutions with separate combinations of the parameters. Here, i ranges from 1 up to m, k indexes the decision variables, and m is the size of the population.
(ii) Determine the maximum number of iterations and the probability Pa of discovering an alien egg. This probability determines whether the new egg (new solution) will survive in the host nest, and translates to the probability of the new solution being better than the current best one.
(iii) The new nest (new set of solutions) for the cuckoo is determined with the help of Lévy flights:

x_i,k(t + 1) = x_i,k(t) + α ⊕ Lévy(λ)

Here, x_i,k(t) is the starting position of the cuckoo, α is a positive step-size parameter that modulates the scale at which the random search is performed, and λ parameterizes the random walk drawn from the Lévy distribution.
(iv) Select a nest j randomly from within the m possible nests and evaluate its fitness value, corresponding to the quality of that solution.
(v) Compute the fitness value F_i of the cuckoo i at the new location, corresponding to the quality of the new solution.
(vi) Compare F_i and F_j. The greater value is established as the current best solution, with the optimal combination of the various parameters for the ANN.
(vii) Abandon a fraction of the worst nests with probability Pa, keeping the remaining nests as possible alternative solutions. New nests are then built at new locations via Lévy flights.
(viii) Repeat from step (iii) until the maximum number of iterations has been executed. Finally, rank all the nests by their fitness values and select the best nest found across all iterations.
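The nest/egg scheme for parameter tuning can be sketched in miniature. Since actually training an ANN for every candidate parameter set would be lengthy, the `validation_error` function below is a stand-in assumption (a smooth function with an invented optimum at 12 hidden neurons and a learning rate of 0.05), not a real trained network; all parameter ranges are illustrative.

```python
import math
import random

random.seed(7)

def validation_error(n_hidden, lr):
    """Stand-in for 'train an ANN with these parameters, return its error'.
    Assumed shape (not from the chapter): a smooth bowl with optimum at
    n_hidden = 12 and lr = 0.05."""
    return (n_hidden - 12) ** 2 / 50.0 + (math.log10(lr) + 1.3) ** 2

def levy(beta=1.5):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def random_nest():
    # A nest encodes [number of hidden neurons, log10(learning rate)]
    return [random.uniform(2, 40), random.uniform(-4, 0)]

def tune(n_nests=10, pa=0.25, n_gen=100):
    def fit(v):
        return validation_error(round(v[0]), 10 ** v[1])
    nests = [random_nest() for _ in range(n_nests)]
    errs = [fit(v) for v in nests]
    for _ in range(n_gen):
        for i in range(n_nests):
            # Lévy-flight move in parameter space, clipped to the ranges
            new = [min(40, max(2, nests[i][0] + levy())),
                   min(0, max(-4, nests[i][1] + 0.1 * levy()))]
            e = fit(new)
            j = random.randrange(n_nests)
            if e < errs[j]:                  # replace a random worse nest
                nests[j], errs[j] = new, e
        worst = sorted(range(n_nests), key=errs.__getitem__, reverse=True)
        for k in worst[: int(pa * n_nests)]:
            nests[k] = random_nest()         # abandon the worst nests
            errs[k] = fit(nests[k])
    b = min(range(n_nests), key=errs.__getitem__)
    return round(nests[b][0]), 10 ** nests[b][1], errs[b]

n_hidden, lr, err = tune()
print(n_hidden, lr, err)
```

Searching the learning rate on a log scale, as done here, is a common design choice because plausible learning rates span several orders of magnitude.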
7 Applications of ANN Optimized with Cuckoo Search

Artificial Neural Networks have seen an explosion of interest over the last few years and are being successfully applied across an extraordinary range of problem domains. The encouraging results from using ANNs indicate that they will quickly become an extensive part of several industrial applications, which justifies the need to optimize them to achieve the maximum possible benefit. As discussed above, Cuckoo Search is used to obtain the optimal parameters. A brief discussion of applications of ANNs optimized with CS is provided below.
7.1 Modelling Operating Photovoltaic Module Temperature

A solar/photovoltaic (PV) cell is an electric device that converts the energy of incoming light directly into usable electricity. Its working is based on the photovoltaic effect, a physical and chemical phenomenon in which light with sufficient energy incident on the material of a solar panel excites an electron to a higher energy state in its atom, leading to a flow of electricity. A PV module is implemented by connecting several solar cells in series. An increase in solar irradiance (radiant energy per unit area) during the day corresponds to an increase in the ambient temperature, which also raises the PV module temperature. Consequently, the voltage at the output of the PV module decreases, and so does its output power. The modelling
Fig. 3 An artificial neural network optimized with Cuckoo search
of this operating PV module temperature is done with the use of an ANN, whose input parameters are solar irradiance and ambient temperature (Fig. 3). The use of an ANN is beneficial because no details about the physical and thermal characteristics of the module, or about the coefficient of heat transfer due to accompanying wind, are necessary. Various other mathematical models require these parameters and are applicable only to a particular set of conditions, which limits their applicability and practicality compared with ANNs. In [11], for the development of this application, solar irradiance (SI) in watts per square metre (W/m²) and ambient temperature in °C served as input parameters for the ANN. These two parameters were chosen because they most strongly affect solar cell performance. The operating photovoltaic module temperature in °C is obtained as the output of the model. A multi-layer feed-forward neural network architecture with a single hidden layer was chosen for the implementation of this task. The standard procedure of a training phase for learning and a testing phase for validation of the ANN was followed, but with the additional exploration, via Cuckoo Search, of solutions consisting of various possible combinations of the number of neurons in the hidden layer, the learning rate, and the learning algorithm. The search space was explored using a random walk provided by a standard Lévy distribution. The objective of this procedure was to obtain the lowest Mean Absolute Percentage Error (MAPE), which is an extension of the error function explained previously. It is represented as:

MAPE = (1/n) Σ_{p=1}^{n} |(A_p − P_p)/A_p| × 100%

Here, p indexes the data patterns and n is the total number of data patterns. A_p is the true value and P_p is the predicted value.
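The MAPE defined above is straightforward to compute; the sample module temperatures below are invented purely for illustration.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error over paired actual/predicted values."""
    n = len(actual)
    return (100.0 / n) * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

# Invented module temperatures (°C): true values vs model predictions
actual = [40.0, 45.0, 50.0, 55.0]
predicted = [41.0, 44.0, 51.5, 54.0]
print(mape(actual, predicted))   # ≈ 2.39 %
```

Because each term is normalized by the true value A_p, MAPE is scale-free, which makes it convenient for comparing models across data sets with different units.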
7.1.1 Results and Discussion

In [11], an initial analysis was conducted to determine the optimal population size of cuckoos. The population was varied between 10 and 100 with a step size of 10. The best population size was identified to be 100, as it yielded the lowest MAPE of 2.5659%. The performance of the CS-ANN was then tested with various learning algorithms to determine the best algorithm for the learning process: the Levenberg-Marquardt algorithm (used in minimization problems involving least-squares curve fitting), the Scaled-Conjugate gradient method (a fully automated supervised learning algorithm), Quasi-Newton backpropagation (which updates weights using an approximation of the Hessian matrix at each iteration) and Resilient backpropagation (more efficient for full-batch training).
Table 2 Performance of CS-ANN in terms of MAPE using different learning algorithms

Algorithm                   MAPE in %
Levenberg-Marquardt         2.5659
Scaled-Conjugate gradient   7.7967
Quasi-Newton                4.8113
Resilient backpropagation   11.6235
Table 3 MAPE in % during training and testing phase of CS-ANN and ABC-ANN

Phase      CS-ANN   ABC-ANN
Training   2.5659   2.9445
Testing    2.9867   3.7699
The most efficient combination was found to be a cuckoo population of size 100 together with the Levenberg-Marquardt algorithm and six neurons in the hidden layer of the ANN, which resulted in the lowest MAPE value of 2.5659% (Table 2). To benchmark the CS-ANN, it was compared against an ANN optimized with another metaheuristic algorithm, the Artificial Bee Colony (ABC), which is based on the intelligent foraging behaviour of honey bee swarms. Testing showed that the CS-ANN performed better than the ABC-ANN, yielding a lower Mean Absolute Percentage Error. Cuckoo Search also turned out to be the faster learning algorithm in terms of computational speed, in comparison to the Artificial Bee Colony (Table 3).
7.2 Forest Type Classification

Remote sensing detects and monitors the physical characteristics of an area, usually from ground-level platforms (cranes and towers) or from aerial/spaceborne platforms (aircraft, space shuttles, geostationary satellites), by measuring the reflected and emitted radiation at a distance from the targeted area. Remote sensing is applied to obtain up-to-date land-usage patterns of large areas and to monitor changes that occur over time. It is also used to study deforestation and the degradation of fertile land, along with the damage caused by earthquakes, volcanoes, landslides and floods. Pixel classification plays an important role in remote sensing. In the case of forest type classification, the problem can be devised so as to classify individual or groups of pixels into distinct groups corresponding to particular land cover types. Forest classification has great ecological significance. The very purpose of this application is to assist officials and simplify the classification task with the help of satellite imagery of areas that cannot be easily traversed by humans.
When images of forests are obtained from satellites, the classification task becomes strenuous due to the increased number of pixels, as one pixel may belong to multiple classes. This introduces a significant amount of uncertainty into the classification task. This uncertainty is a greater challenge for forest classification than for other remote sensing applications, mainly because forest images contain more variation of pixels within a smaller geographic area. Previously, a technique known as split-and-merge was used for the classification task, in which an image is successively split into quadrants based on a homogeneity criterion and similar regions are then merged to create the segmented result. Fuzzy c-means, which works on the principle of clustering similar data points, was also used. Due to the challenges of forest classification, these methods did not provide the desired accuracy. Developments in research have indicated that ANNs could be a potential solution to this problem: the ANN weight vectors can be optimized by minimizing the error function (root mean square error). Care must also be taken to avoid convergence to local optima in the search space. To deal with this limitation, metaheuristic optimization procedures, including Genetic Algorithms, are employed for training the ANN. They simulate the process of natural selection, i.e. the species which can adapt to changes in their environment survive into the next generation. The classification can also be enhanced by using recent metaheuristic algorithms such as Cuckoo Search. In [12], it has been shown that traditional CS can be improved by using McCulloch's random numbers in the Lévy flight to enhance the convergence rate of the procedure, so that it works efficiently in situations with time constraints.
This method is used to achieve a less expensive procedure that generates stable random numbers, so that a larger number of simulations can be performed with fewer inaccuracies. The procedure for the Modified Cuckoo Search with the McCulloch algorithm is shown below:

Set the population X_i,j
Calculate the fitness value for the defined objective function f(x)
while Iteration < Max(Iteration) do
    Create a new solution space with a specific combination of initial weights of the neural network
    Calculate the fitness value of each solution (nest)
    Store the highest-quality nest

If λc ≤ 1.5, F_cr = (0.658^(λc²))F_y, where λc = (Kl/rπ)(F_y/E)^(1/2). Here, the modulus of elasticity is E, and the laterally unbraced length and the effective length factor of a structural member are l and K, respectively. For beam members, the needful strength is given in Eq. (6) according to the F2 section of the provision. Here, the needful and nominal moments through the major axis in a beam member b are M_uxt and M_nxt, respectively. The flexural factor of resistance, whose numerical value is 0.90, is Ø_b. M_nxt is assumed to be equivalent to the plastic moment strength (M_p) of a beam member b. M_p is calculated via F_y Z, in which the modulus of plasticity is denoted by Z and the identified minimal yield stress of a compact section that is laterally supported is presented by F_y. Moreover, Appendix F of the provision contains the calculation method for partially compact and non-compact sections. The beam-to-column connection is controlled by Eq. (7). This equation guarantees that at any story s the flange width of a beam is less than or equal to that of the column. Also, Eq. (7) requires, by comparison with d_c − 2t_f, that the net gap between the column's flanges is greater than the flange width of beam B2. The terms in Eq. (7) are depicted in Fig. 1. Here, the flange widths of beam B1, beam B2, and the column are b_fb, b'_fb, and b_fc, respectively. The column depth and flange thickness are d_c and t_f.

Fig. 1 Notations of a beam-column connection point
S. Carbas and I. Aydogdu
4 Cuckoo Search

4.1 Cuckoo Birds in Nature

Cuckoo birds belong to the Cuculidae, with a slightly curved beak, long pointed wings, long tails, and good flight. This bird species lives in forests or other places they find suitable for themselves. There are 130 known species, ranging from sparrow size to crow size. They got the name cuckoo because of their pleasant singing [27–29]. The most impressive feature that distinguishes these birds from other bird species is their aggressive breeding strategy. This type of bird is a brood parasite and lays its eggs in the nests of foreign birds [30]. The female cuckoo takes one of the eggs of the foreign bird that owns the nest where it wants to lay its egg and leaves its own egg in the nest, which takes only 10 s. Female cuckoos concentrate on a bird species they identify whose eggs match their own in shape and colour; thus, they aim to prevent their nestlings from being noticed by the nesting bird [31, 32]. Most birds that own a nest are capable of recognizing the eggs of foreign birds left in it. When the nesting bird notices a foreign egg, it either throws the foreign egg out of the nest or abandons the nest to build a new one. Eggs that are not noticed by the nest-owning bird develop together with the other eggs. The parasitic feature that highlights cuckoos in this process is that their eggs hatch earlier than the original nesting bird's eggs. The cuckoo hatchling starts to throw the eggs of the nesting bird out of the nest with acrobatic movements, even though its eyes are not open during the first four days. If there are chicks in the nest that hatched before the cuckoo chick, they cannot survive, since the cuckoo chick, which has a larger body, consumes most of the food brought by the stepmother.
Even if the foster mother realizes that the cuckoo chick does not resemble her own, she continues to feed it out of maternal instinct and cares for it until it leaves the nest [33, 34]. The migration of cuckoo birds proceeds as follows: when the cuckoo chick has grown and the egg-laying season approaches, the cuckoo migrates from its present environment to a new and better one, with host species whose eggs more closely resemble its own and with more food resources for its offspring. After the cuckoos form groups in different areas, the community with the best breeding results is selected as the target point for the migration of the other cuckoos. Cuckoo groups continue to migrate until they find the best environment [35].
4.2 Standard Cuckoo Search Optimization Algorithm

The standard cuckoo search (CS) optimization algorithm is one of the state-of-the-art metaheuristic algorithms inspired by nature. This contemporary algorithm was introduced by Xin-She Yang and Suash Deb in 2009 [15]. The cuckoo search algorithm is an approach based on the brood parasitism
6 Cuckoo Search for Optimum Design of Real-Sized High-Level …
characteristics of some cuckoo bird species described in detail in the previous section. This approach was developed by observing the migration processes of cuckoo birds and, depending on user preference, uses either a simple isotropic random walk or a Lévy flight [5, 36]. As with any optimization problem, certain assumptions are made in defining the CS algorithm:

(1) Each cuckoo leaves a single egg at a time in an arbitrarily selected nest.
(2) The number of available nests is fixed.
(3) An egg left by a cuckoo is discovered by the host bird with probability Pa ∈ (0, 1).
(4) Nests holding good-quality eggs are carried over to the next generation.

In the standard cuckoo search (CS) algorithm, a population of n eggs, each of which stands for a feasible design of the steel space frame problem, is selected initially. In the design problem, this means that n solution vectors {x} = {x1, x2, …, xm}T with m decision variables in total must be generated. The objective function f(x) is calculated for each feasible solution vector. The algorithm then generates a new solution (xi)^(v+1) = (xi)^v + βλ for a cuckoo bird i, where (xi)^v and (xi)^(v+1) are the previous and new solution vectors. The problem- and user-dependent step size β is chosen greater than 1.0, and the step length λ is determined in conformity with the Lévy-flight random walk. A random walk occurs when particles move along an arbitrary path with consecutive unplanned steps; the arbitrary path of an animal seeking prey, for instance, can be regarded as a random walk. A Lévy flight is a random walk whose step lengths follow a heavy-tailed (Lévy) probability distribution and whose step directions are isotropic and random.
Thus, Lévy flights require choosing a random direction and generating steps under the chosen Lévy distribution. The algorithm proposed by Mantegna [37] offers a fast and accurate way of producing a stochastic variable whose probability density is close to a Lévy stable distribution characterized by the control parameter α (0.3 ≤ α ≤ 1.99). Using Mantegna's algorithm, the step size λ is computed as

λ = x / |y|^(1/α)    (8)
Here, x and y are two zero-mean Gaussian stochastic variables with standard deviations σx and σy, defined as in Eq. (9).
σx(α) = [Γ(1 + α) sin(πα/2) / (Γ((1 + α)/2) α 2^((α−1)/2))]^(1/α)  and  σy(α) = 1.0 for α = 1.5    (9)
Here, the upper-case Greek character denotes the Gamma function, which can be defined as Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt and is the extension of the factorial function
with its argument shifted down by 1.0, to real and complex numbers. In the case where z = k is a positive integer, the Gamma function becomes Γ(k) = (k − 1)!. The CS algorithm can be summarized in the following steps [5, 15]:

(1) Select appropriate values for the cuckoo search parameter set: the discovering probability (Pa), the step size (β), the number of eggs (nests) (n), and the maximum number of structural analyses used to terminate the iterative process.
(2) Generate an initial solution set of n host nests {xi} (i = 1, 2, …, n) arbitrarily, each of which represents a feasible design for the design optimization problem with objective function f(x) and decision variables {x} = {x1, x2, …, xm}T.
(3) Select a cuckoo bird by means of a Lévy flight using (xi)^(v+1) = (xi)^v + βλ and evaluate its fitness Fi. Here, the Lévy-flight random-walk step λ is computed via Eqs. (8) and (9).
(4) Select a random nest j among all n nests and evaluate its fitness Fj. Provided that Fi is better than Fj, replace nest j with the new solution.
(5) Abandon a fraction of the worst nests and build new ones contingent on the probability Pa. First, according to Eq. (10), determine whether or not each nest holds its present position:
Ri = 1 if r < Pa;  Ri = 0 if r ≥ Pa    (10)
Here, the matrix R stores only the values 0 and 1, one for each component of nest i, where 1 denotes an updated position and 0 denotes holding the present position. New nests are generated with the aid of Eq. (11):

xi^(t+1) = xi^t + r · Ri · (Perm1_i − Perm2_i)    (11)
Here, r is a random number generated between 0 and 1, Perm1 and Perm2 are two row permutations of the corresponding nests, and Ri is the probability matrix described above.

(6) Order the designs and identify the current least-weight frame (the best one).
(7) Repeat steps 3 to 6 until the maximum iteration number is reached.
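The steps above can be sketched as a short program. This is a minimal illustration assuming a toy continuous objective rather than a frame-analysis objective; the function and parameter names are hypothetical, not from the chapter, and the bound handling is a simplification.

```python
import math
import random

def levy_step(alpha=1.5):
    """One Lévy-flight step length via Mantegna's algorithm (Eqs. 8-9)."""
    sigma_x = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
               / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    x = random.gauss(0.0, sigma_x)      # Normal(0, sigma_x)
    y = random.gauss(0.0, 1.0)          # sigma_y = 1.0 for alpha = 1.5
    return x / abs(y) ** (1 / alpha)    # Eq. (8)

def cuckoo_search(f, dim, n=25, pa=0.25, beta=1.0, iters=1500, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim following CS steps (1)-(7)."""
    clip = lambda v: min(hi, max(lo, v))
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # Step 3: a cuckoo generates a new solution by a Levy flight
        i = random.randrange(n)
        new = [clip(v + beta * levy_step()) for v in nests[i]]
        fnew = f(new)
        # Step 4: compare against a randomly chosen nest j
        j = random.randrange(n)
        if fnew < fit[j]:
            nests[j], fit[j] = new, fnew
        # Step 5: abandon a fraction pa of nests (Eqs. 10-11)
        p1 = random.sample(range(n), n)     # Perm1: row permutation of nests
        p2 = random.sample(range(n), n)     # Perm2: second permutation
        for k in range(n):
            if random.random() < pa:        # R_k = 1 per Eq. (10)
                r = random.random()
                cand = [clip(nests[k][d] + r * (nests[p1[k]][d] - nests[p2[k]][d]))
                        for d in range(dim)]
                fcand = f(cand)
                if fcand < fit[k]:
                    nests[k], fit[k] = cand, fcand
    # Steps 6-7: order designs and return the best one found
    best = min(range(n), key=lambda t: fit[t])
    return nests[best], fit[best]
```

For example, minimizing the sphere function `lambda v: sum(t*t for t in v)` in two dimensions drives the best fitness toward zero within a few hundred iterations.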
5 Design Examples

The functioning of the proposed standard cuckoo search (CS) algorithm is examined on the optimal design of two real-sized high-level steel space frames, which are considered as the design examples of this chapter. The first design example is selected as
an eight-story, 1024-member steel space frame with 40 member groups. The latter design example is a twenty-story, 1860-member steel space frame consisting of 86 structural member groups. In both design examples the unit weight, the modulus of elasticity, and the yield stress of the steel material are taken as 7.85 ton/m3, 200 GPa, and 250 MPa, respectively. The available steel profile list comprising 272 W-sections reported in LRFD-AISC [19] is utilized to set up a design variable pool from which suitable steel W-sections are picked out by the proposed standard CS algorithm and assigned to the member groups of the steel space frames. The standard CS algorithm engages a parameter set that needs to be predefined at the beginning. The performance of the parameter set in the standard CS algorithm differs for each design problem and depends directly on the problem size. The parameter sets of the standard CS algorithm for the following design examples are selected according to the ranges and values recommended in previous studies, as well as exhaustive examinations executed in this study. Namely, each of the design examples is solved separately with different parameter sets, and the best designs reached are reported here.
5.1 Eight-Story, 1024-Member Steel Space Frame

The first design example is a 3-D, eight-story, 1024-member steel space frame as depicted in Fig. 2. This frame was formerly designed using an adaptive firefly algorithm (AFFA) [38], an artificial bee colony (ABC) algorithm [39], a dynamic harmony search (DHS) algorithm [38, 39], an ant colony optimization (ACO) algorithm [38, 39], and a biogeography-based optimization algorithm [40], and it was also used to show the influence of Lévy flight on the discrete design optimization of steel spatial frame structures utilizing metaheuristics [41]. The frame structure has 1024 structural members and 384 joints. The frame members are collected into 40 independent design variable groups. The member grouping of the frame is demonstrated in Table 1. The lateral and gravity loadings computed according to ASCE 7-05 [42] are imposed on the frame. Here, the design dead load is taken as 2.88 kN/m2 and the design live load as 2.39 kN/m2. Additionally, the basic wind speed is taken as 85 mph (38 m/s). Using the code-based loading notations (dead load D, live load L, snow load S, wind load along the global X-axis WX, and wind load along the global Z-axis WZ), the following load combinations are taken into account as stated in the structural provision [14]: (i) 1.2D + 1.6L + 0.5S, (ii) 1.2D + 0.5L + 1.6S, (iii) 1.2D + 1.6WX + L + 0.5S. The top- and inter-story drift bounds are given as 7.0 cm and 0.875 cm, respectively. All beam members are limited to a 2.0 cm deflection as well. In the cuckoo search (CS) algorithm, the following parameters are used to reach the minimum-weight optimally designed frame for this example: number of eggs (nests) n = 100, step size β = 1.5, discovering probability Pa = 0.20, and maxiter = 75000, which is used as the maximum number of structural analyses. Figure 3 presents the convergence history of the CS algorithm. The CS algorithm converges to the
Fig. 2 Eight-story, 1024-member steel space frame, a 3-D shot, b Front shot, c Plan and column orientations shots
Table 1 Member grouping of eight-story, 1024-member steel space frame

| Story | Side Beams | Inner Beams | Corner Columns | Side Columns | Inner Columns |
|---|---|---|---|---|---|
| 1 | 1 | 2 | 17 | 18 | 19 |
| 2 | 3 | 4 | 20 | 21 | 22 |
| 3 | 5 | 6 | 23 | 24 | 25 |
| 4 | 7 | 8 | 26 | 27 | 28 |
| 5 | 9 | 10 | 29 | 30 | 31 |
| 6 | 11 | 12 | 32 | 33 | 34 |
| 7 | 13 | 14 | 35 | 36 | 37 |
| 8 | 15 | 16 | 38 | 39 | 40 |
Fig. 3 Convergence history of eight-story, 1024-member steel space frame
optimum design weight quite smoothly. This means that the standard cuckoo search strategy, implemented on a real-sized high-level steel space frame, demonstrates its efficiency once again. The sections designated to each structural member group by the standard cuckoo search algorithm are shown in Table 2. From this table, it is seen that the iteration number at which the optimum design is determined by CS is about 55000. It
Table 2 Optimal design of eight-story, 1024-member steel space frame

| # | Member type | Section designated by CS | Area (mm2) | Section designated by CS | Area (mm2) |
|---|---|---|---|---|---|
| 1–21 | Beam/Column | W410X38.8 | 4990 | W760X220 | 27800 |
| 2–22 | Beam/Column | W410X38.8 | 4990 | W530X74 | 9520 |
| 3–23 | Beam/Column | W530X85 | 10800 | W360X179 | 22800 |
| 4–24 | Beam/Column | W530X66 | 8370 | W760X284 | 36200 |
| 5–25 | Beam/Column | W310X129 | 16500 | W530X85 | 10800 |
| 6–26 | Beam/Column | W310X32.7 | 4180 | W920X201 | 25600 |
| 7–27 | Beam/Column | W460X82 | 10400 | W1100X390 | 49900 |
| 8–28 | Beam/Column | W200X100 | 12700 | W610X113 | 14400 |
| 9–29 | Beam/Column | W690X192 | 24400 | W920X201 | 25600 |
| 10–30 | Beam/Column | W250X67 | 8550 | W1100X390 | 49900 |
| 11–31 | Beam/Column | W760X147 | 18700 | W690X192 | 24400 |
| 12–32 | Beam/Column | W200X100 | 12700 | W1000X350 | 44500 |
| 13–33 | Beam/Column | W760X147 | 18700 | W1100X433 | 55300 |
| 14–34 | Beam/Column | W460X74 | 9450 | W690X192 | 24400 |
| 15–35 | Beam/Column | W920X201 | 25600 | W1100X499 | 63500 |
| 16–36 | Beam/Column | W410X38.8 | 4990 | W1100X433 | 55300 |
| 17–37 | Column/Column | W250X131 | 16700 | W840X193 | 24700 |
| 18–38 | Column/Column | W530X182 | 23100 | W1100X499 | 63500 |
| 19–39 | Column/Column | W460X60 | 7590 | W1100X433 | 55300 |
| 20–40 | Column/Column | W250X149 | 19000 | W920X238 | 30400 |

Maximum inter-story drift (cm): 0.590
Maximum strength ratio: 0.961
Maximum top-story drift (cm): 3.642
Minimum weight (kN): 6638.44
Maximum iteration: 75000
means that CS attains the optimal steel space frame design before reaching the maximum number of iterations. The minimum design weight yielded by CS is 6638.44 kN. In the frame optimally designed by CS, the inter-story drift for the third floor is 0.590 cm, which is relatively near its upper bound of 0.875 cm, while the top-story drift is 3.642 cm against an upper limit of 7.0 cm. The maximum strength ratio among the combined strength constraints is 0.961, whose upper bound is 1.0. So far, the best structural frame weight of this problem produced using standard versions of metaheuristic algorithms is reported as 6462.79 kN, obtained via standard Biogeography-Based Optimization (BBO) [40]. That frame design is slightly lighter than the minimum structural frame weight achieved by the CS algorithm. Comparative results on the minimum frame weight and the maximum values of the constraints yielded by the BBO and CS algorithms are presented in Table 3.
Table 3 Optimal frame designs of 1024-member steel space frame attained by standard versions of CS and BBO algorithms

| | CS (present study) | BBO [40] |
|---|---|---|
| Maximum inter-story drift (cm) | 0.590 | 0.875 |
| Maximum strength ratio | 0.961 | 1.0 |
| Maximum top-story drift (cm) | 3.642 | 6.508 |
| Minimum weight (kN) | 6638.44 | 6462.79 |
| Maximum iteration | 75000 | 75000 |
From this table, it can be concluded that in the optimal frame design reached by the BBO algorithm both the strength ratio and the inter-story drift constraints are active, and the optimization process is governed by these constraints. If the optimally designed frame attained by the CS algorithm is examined in the same table, it can easily be seen that the only dominant constraint is the strength ratio. Thus, the strength constraint is identified as the design constraint governing the optimization procedure for the CS algorithm.
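The constraint checks discussed above can be expressed compactly. The following sketch uses illustrative names, not the authors' code; it normalizes each response by its bound so that a design is feasible when no value exceeds zero, with the default limits taken from the eight-story example.

```python
def constraint_violations(interstory, top, strength,
                          interstory_lim=0.875, top_lim=7.0, strength_lim=1.0):
    """Normalized constraint values; all <= 0 means the design is feasible.

    Default limits are the drift (cm) and strength bounds of the
    eight-story, 1024-member example.
    """
    return [interstory / interstory_lim - 1.0,   # inter-story drift
            top / top_lim - 1.0,                 # top-story drift
            strength / strength_lim - 1.0]       # combined strength ratio
```

For the CS design in Table 3 (0.590 cm, 3.642 cm, 0.961), all three values come out negative, with the strength ratio closest to its bound, consistent with strength being the governing constraint.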
5.2 Twenty-Story, 1860-Member Steel Space Frame

The second and most challenging design example of this chapter is a twenty-story, 1860-member steel space frame [43]. The geometric scheme of the steel space frame, along with the column orientations and member groupings, is presented in Figs. 4 and 5. The numbers of joints and member groups are 820 and 86, respectively. The frame is subjected to gravity and lateral loads, both calculated according to ASCE 7-05 [42]. The design dead load is taken as 2.88 kN/m2, the design live load as 2.39 kN/m2, and the basic wind speed is assumed as 85 mph (38 m/s). To design the steel space frame, the following load combinations are applied individually: (i) 1.2D + 1.3WZ + 0.5L + 0.5S and (ii) 1.2D + 1.3WX + 0.5L + 0.5S. The load notations are identical to those in the first design example: dead load D, live load L, snow load S, and wind loads along the global X- and Z-axes WX and WZ, respectively. The inter- and top-story drift limits for this real-sized high-level design example are 0.75 cm and 15 cm, respectively. The deflections of all beam members are limited to 1.67 cm. The design optimization of this real-sized high-level steel space frame is carried out using the standard cuckoo search algorithm. In this algorithm, the search parameters are selected as number of eggs (nests) = 75, step size β = 1.5, discovering probability Pa = 0.25, and maxiter = 80000, used as the maximum number of iterations. As in the first design example, the strength constraint turns out to control the optimization process. The optimal structural design having the least frame weight is obtained via the standard CS algorithm as 5526.69 kN, which is 0.785%
Fig. 4 3-D shot of twenty-story, 1860-member steel space frame
lighter than that achieved by the standard ACO algorithm, reported as 5570.10 kN in [43]. The optimum designs attained by both the CS and ACO algorithms are presented in Table 4. It is obvious from this table that the strength constraints dominate in both frame designs accomplished by the mentioned algorithms. Another interesting result from this table is that the top- and inter-story drifts are not as dominant as the strength ratios in either design. Furthermore, even though the optimum weight of the steel space frame is obtained after approximately 70,000 structural analyses by ACO, it is reached after only about 20,000 structural analyses
Fig. 5 Plan shots and member groupings of twenty-story, 1860-member steel space frame: (a) plan shot of 1st–4th stories, (b) plan shot of 5th–8th stories, (c) plan shot of 9th–12th stories, (d) plan shot of 13th–16th stories, (e) plan shot of 17th–20th stories
Table 4 Optimal frame designs of 1860-member steel space frame attained by standard versions of CS and ACO algorithms

| | CS (present study) | ACO [43] |
|---|---|---|
| Maximum inter-story drift (cm) | 0.688 | 0.461 |
| Maximum strength ratio | 0.840 | 0.937 |
| Maximum top-story drift (cm) | 6.481 | 8.560 |
| Minimum weight (kN) | 5526.69 | 5570.1 |
| Maximum iteration | 80000 | 80000 |
| No. of structural analyses (approximate) | 20000 | 70000 |
by CS. This denotes that the cuckoo search algorithm exhibits a finer and more rapid convergence behavior. The minimum frame weight, the maximum constraint values, and the I-shaped W-sections designated by the CS algorithm are shown in detail in Table 5. Some lighter designs have been announced in the literature for this design example, yet those optimal designs were reached using adaptive and dynamic versions of the Harmony Search Algorithm [23]. In this chapter, the standard version of the CS algorithm is implemented on the design examples to prove the effectiveness of its raw version on real-sized high-level steel space frames. So, in order to be fair, comparisons are executed over the optimal designs yielded by standard versions of metaheuristics. The design convergence history of the solution acquired by CS is shown in Fig. 6. The CS algorithm converges to the optimal design weight quickly, and according to this figure the convergence toward the optimum design is very smooth.
6 Conclusions

A design algorithm based on the standard cuckoo search (CS) optimization procedure is presented in order to obtain minimum-weight real-sized high-level steel space frames. The algorithm is inspired by the brood parasitism of cuckoos in nature. It is very easy to implement since it does not require any gradient-based information. The standard version of the CS algorithm only needs three main parameters to be described initially: the number of eggs (nests), the step size, and the discovering probability. Two demanding real-sized high-level steel space frames taken from the literature are designed via the CS algorithm to examine its capacity and performance in determining optimal designs. These optimally designed frames illustrate that the cuckoo search algorithm is robust, effective, and reliable for the design optimization of challenging structural engineering problems. Compared to the optimal designs reported so far with standard versions of metaheuristics, the cuckoo search algorithm is more productive than
Table 5 Optimal design of twenty-story, 1860-member steel space frame

| # | Member type | Section designated by CS | Area (mm2) | Section designated by CS | Area (mm2) |
|---|---|---|---|---|---|
| 1–44 | Beam/Column | W410X67 | 8600 | W610X217 | 27800 |
| 2–45 | Beam/Column | W360X72 | 9110 | W530X92 | 11800 |
| 3–46 | Column/Column | W410X67 | 8600 | W530X138 | 17600 |
| 4–47 | Column/Column | W310X67 | 8510 | W690X152 | 19400 |
| 5–48 | Column/Column | W310X67 | 8510 | W250X44.8 | 5720 |
| 6–49 | Column/Column | W460X113 | 14400 | W250X44.8 | 5720 |
| 7–50 | Column/Column | W410X100 | 12700 | W310X74 | 9490 |
| 8–51 | Column/Column | W410X100 | 12700 | W610X82 | 10400 |
| 9–52 | Column/Column | W250X58 | 7420 | W760X185 | 23500 |
| 10–53 | Column/Column | W250X58 | 7420 | W1100X433 | 55300 |
| 11–54 | Column/Column | W530X165 | 21100 | W610X113 | 14400 |
| 12–55 | Column/Column | W530X182 | 23100 | W1000X258 | 33000 |
| 13–56 | Column/Column | W610X174 | 22200 | W760X147 | 18700 |
| 14–57 | Column/Column | W530X165 | 21100 | W530X138 | 17600 |
| 15–58 | Column/Column | W250X73 | 9280 | W690X152 | 19400 |
| 16–59 | Column/Column | W310X74 | 9490 | W310X67 | 8510 |
| 17–60 | Column/Column | W610X174 | 22200 | W360X57.8 | 7220 |
| 18–61 | Column/Column | W760X196 | 25100 | W460X97 | 12300 |
| 19–62 | Column/Column | W610X174 | 22200 | W610X82 | 10400 |
| 20–63 | Column/Column | W530X165 | 21100 | W760X185 | 23500 |
| 21–64 | Column/Column | W610X82 | 10400 | W1100X433 | 55300 |
| 22–65 | Column/Column | W610X174 | 22200 | W690X140 | 17800 |
| 23–66 | Column/Column | W760X257 | 32600 | W1000X258 | 33000 |
| 24–67 | Column/Column | W200X71 | 9110 | W760X147 | 18700 |
| 25–68 | Column/Column | W250X167 | 21300 | W840X176 | 22400 |
| 26–69 | Column/Column | W250X58 | 7420 | W690X152 | 19400 |
| 27–70 | Column/Column | W200X31.3 | 4000 | W460X52 | 6630 |
| 28–71 | Column/Column | W410X53 | 6810 | W250X58 | 7420 |
| 29–72 | Column/Column | W610X82 | 10400 | W250X58 | 7420 |
| 30–73 | Column/Column | W760X185 | 23500 | W310X79 | 10100 |
| 31–74 | Column/Column | W1000X371 | 47300 | W610X174 | 22200 |
| 32–75 | Column/Column | W250X89 | 11400 | W610X174 | 22200 |
| 33–76 | Column/Column | W610X195 | 24900 | W610X82 | 10400 |
| 34–77 | Column/Column | W310X79 | 10100 | W760X185 | 23500 |
| 35–78 | Column/Column | W310X44.5 | 5690 | W1100X433 | 55300 |
| 36–79 | Column/Column | W460X89 | 11400 | W690X140 | 17800 |
| 37–80 | Column/Column | W200X31.3 | 4000 | W1000X314 | 40000 |
| 38–81 | Column/Column | W200X31.3 | 4000 | W760X147 | 18700 |
| 39–82 | Column/Column | W200X59 | 7560 | W840X176 | 22400 |
| 40–83 | Column/Column | W610X82 | 10400 | W690X152 | 19400 |
| 41–84 | Column/Column | W760X185 | 23500 | W460X68 | 8730 |
| 42–85 | Column/Column | W1000X371 | 47300 | W250X73 | 9280 |
| 43–86 | Column/Column | W530X101 | 12900 | W690X125 | 16000 |

Maximum inter-story drift (cm): 0.688
Maximum strength ratio: 0.840
Maximum top-story drift (cm): 6.481
Minimum weight (kN): 5526.69
Maximum iteration: 80000
Fig. 6 Convergence history of twenty-story, 1860-member steel space frame
other standard metaheuristics. In particular, in the second design example, which has a large number of design variables and a bigger design domain, the CS algorithm succeeded in finding a slightly lighter optimum weight than that determined by standard Ant Colony Optimization. In the first design example, which also has many design variables and a large design space, CS displayed very good performance, with an optimal design only slightly heavier than the optimum design acquired by the Biogeography-Based Optimization technique. These results make the cuckoo search algorithm all the more compelling and appropriate for the optimal design of real-sized high-level steel space frame structures.
References

1. Kirsch U (1981) Optimum structural design, concepts, methods, and applications. McGraw-Hill Book Company, New York
2. Haftka RT, Gurdal Z (1992) Elements of structural optimization. Kluwer Academic Publishers
3. Arora JS (1989) Introduction to optimum design. McGraw-Hill Book Company
4. Rao SS (2009) Engineering optimization: theory and practice, 4th edn. Wiley, Hoboken, New Jersey
5. Yang XS (2010) Engineering optimization: an introduction with metaheuristic applications. Wiley, Hoboken, New Jersey
6. Dey N (ed) (2018) Advancements in applied metaheuristic computing. IGI Global
7. Dey N, Ashour AS, Bhattacharyya S (eds) (2020) Applied nature-inspired computing: algorithms and case studies. Springer, Singapore
8. Horst R, Pardalos PM (eds) (1995) Handbook of global optimization. Kluwer Academic Publishers
9. Horst R, Tuy H (1995) Global optimization: deterministic approaches. Springer
10. Paton R (1994) Computing with biological metaphors. Chapman & Hall, USA
11. Adami C (1998) An introduction to artificial life. Springer/Telos
12. Kochenberger GA, Glover F (2003) Handbook of metaheuristics. Kluwer Academic Publishers
13. De Castro LN, Von Zuben FJ (2005) Recent developments in biologically inspired computing. Idea Group Publishing, USA
14. Dreo J, Petrowski A, Siarry P, Taillard E (2006) Metaheuristics for hard optimization. Springer, Berlin, Heidelberg
15. Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In: Proceedings of the world congress on nature and biologically inspired computing, Coimbatore, India
16. Chakraborty S, Dey N, Samanta S, Ashour AS, Barna C, Balas MM (2017) Optimization of non-rigid demons registration using a cuckoo search algorithm. Cogn Comput 9(6):817–826
17. Li Z, Dey N, Ashour AS, Tang Q (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. Neural Comput Applic 30(9):2685–2696
18. Binh HTT, Hanh NT, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks. Neural Comput Applic 30(7):2305–2317
19. AISC-LRFD (2001) Load and resistance factor design (LRFD), vol 1, structural members specifications codes, 3rd edn. American Institute of Steel Construction
20. Kameshki ES, Saka MP (2001) Genetic algorithm based optimum bracing design of nonswaying tall plane frames. J Construct Steel Res 57:1081–1097
21. Hasancebi O, Bahcecioglu T, Kurc O, Saka MP (2011) Optimum design of high-rise steel buildings using an evolution strategy integrated parallel algorithm. Comput Struct 89:2037–2051
22. Kazemzadeh Azad S, Hasancebi O, Kazemzadeh Azad S (2014) Computationally efficient optimum design of large scale steel frames. Int J Optim Civil Eng 4(2):233–259
23. Saka MP, Aydogdu I, Hasancebi O, Geem ZW (2011) Harmony search algorithms in structural engineering. In: Yang XS, Koziel S (eds) Computational optimization and applications in engineering and industry. Studies in computational intelligence, vol 359. Springer, Berlin, Heidelberg
24. Park HS, Adeli H (1997) Distributed neural dynamics algorithms for optimization of large steel structures. J Struct Eng ASCE 123(7):880–888
25. Kaveh A, Bolandgerami A (2017) Optimal design of large-scale space steel frames using cascade enhanced colliding body optimization. Struct Multidisc Optim 55:237–256
26. Kaveh A, Ilchi Ghazaan M (2018) Optimal seismic design of 3D steel frames. In: Kaveh A, Ilchi Ghazaan M (eds) Meta-heuristic algorithms for optimal design of real-size structures. Springer International Publishing AG, Cham, Switzerland, pp 139–155
27. Ericson PGP et al (2006) Diversification of Neoaves: integration of molecular sequence data and fossils. Biol Lett 2(4):543–547
28. Hackett SJ et al (2008) A phylogenomic study of birds reveals their evolutionary history. Science 320(5884):1763–1768
29. Jarvis ED et al (2014) Whole-genome analyses resolve early branches in the tree of life of modern birds. Science 346(6215):1320–1331
30. Payne RB, Sorenson MD (2005) The cuckoos. Oxford University Press, Oxford, UK
31. Rajabioun R (2011) Cuckoo optimization algorithm. Appl Soft Comput 11(8):5508–5518
32. Zhou Y, Ouyang X, Xie J (2014) A discrete cuckoo search algorithm for travelling salesman problem. Int J Collab Intell 1(1):68–84
33. The life of birds, parenthood. http://www.pbs.org/lifeofbirds/home/index.html (retrieved 05.10.2019)
34. Ellis C, Kepler C, Kepler A, Teebaki K (1990) Occurrence of the long-tailed cuckoo Eudynamis taitensis on Caroline Atoll, Kiribati. Short Commun EMU 90:202
35. Hockey P (2000) Patterns and correlates of bird migrations in Sub-Saharan Africa. EMU 100(5):401–417
36. Yang XS (2011) Optimization algorithms. In: Koziel S, Yang XS (eds) Computational optimization, methods and algorithms. Springer, Berlin, Heidelberg, pp 13–31
37. Mantegna RN (1994) Fast, accurate algorithm for numerical simulation of Lévy stable stochastic processes. Physical Review E 49(5):4677–4683
38. Saka MP, Aydogdu I, Akin A (2012) Discrete design optimization of space steel frames using the adaptive firefly algorithm. In: Topping BHV (ed) Proceedings of the eleventh international conference on computational structures technology. Civil-Comp Press, Stirlingshire
39. Aydogdu I, Akin A, Saka MP (2012) Optimum design of steel space frames by artificial bee colony algorithm. In: Proceedings of the 10th international congress on advances in civil engineering, Middle East Technical University, Ankara, Turkey
40. Carbas S (2017) Optimum structural design of spatial steel frames via biogeography based optimization. Neural Comput Applic 28:1525–1539
41. Aydogdu I, Carbas S, Akin A (2017) Effect of Lévy flight on the discrete optimum design of steel skeletal structures using metaheuristics. Steel Comp Struct 24(1):93–112
42. ASCE 7-05 (2005) Minimum design loads for buildings and other structures. American Society of Civil Engineers, Virginia, USA
43. Aydogdu I (2010) Optimum design of 3-D irregular steel frames using ant colony optimization method. PhD thesis, Middle East Technical University, Ankara, Turkey
Chapter 7
Application of Cuckoo Search Algorithm User Interface for Parameter Optimization of Ultrasonic Machining Process

D. Singh and R. S. Shukla
1 Introduction

The ultrasonic machining (USM) process is widely used for engraving, cavity sinking, drilling holes, slicing, etc., in materials that are difficult to machine because they are hard and brittle. In the USM process, a high-frequency, low-amplitude oscillation of the tool is transmitted to fine abrasive particles present between the tool and the workpiece. Abrasive powders such as aluminium oxide, silicon carbide, or boron carbide are used in a water slurry, which carries away the debris at each stroke. USM parameters such as tool design, power, amplitude, frequency, and abrasive size influence performance in terms of dimensional accuracy, MRR, and surface finish. The number of tuning parameters in USM is large, and these parameters cannot easily be tuned to their proper values by the operator. This work is inspired by the challenging task of providing proper values of the process parameters to enhance the performance of the process. The task can be accomplished by providing an intermediate interface that supplies an optimum set of process parameters and works as an interface between the operator and the machine. This will benefit operators and industries in terms of saved time, resources, and power consumption as well as an enhanced finish of the final product. In this chapter, a graphical user interface (GUI) is proposed that implements a metaheuristic algorithm, the cuckoo search algorithm (CSA). An attempt is made to obtain the optimum parameter setting for the USM process.
D. Singh (B) · R. S. Shukla Mechanical Engineering Department, Sardar Vallabhbhai National Institute of Technology, Ichchhanath, Surat 395007, India e-mail: [email protected] R. S. Shukla e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_7
2 Related Works

The application of the USM process has increased in the last decade, and researchers have attempted several works related to it. Sharman et al. [1] outlined the application of ultrasonic vibration-aided turning for machining titanium material; they improved Ra and tool life by 12% compared to the conventional turning process. Singh and Khambha [2] reviewed selected applications of the considered process in the machining of titanium-based alloys for operating the process effectively in industry. Gauri et al. [3] attempted parameter optimization for multiple correlated responses using the "PCA-based TOPSIS method" and the "weighted principal component (WPC)" method on two sets of past experimental data of the USM process; the results show that both methods are effective in obtaining the optimum parameter setting compared to the PCA-based grey relational analysis method. Singh and Gianender [4] summarized the recent progress of USM in machining mainly glass, titanium, and ceramics; the effects of process parameters on material removal rate, tool wear rate, and surface finish were reported in the manufacturing domain. Popli et al. [5] presented a review of machining process parameters for different materials machined using the USM process, concluding that the machining of materials like superalloys still needs research because of their wide industrial applications. Goswami and Chakraborty [6] attempted parameter optimization for the USM process using the "gravitational search algorithm (GSA)" and the "fireworks algorithm (FWA)"; the optimal performance of these two algorithms was compared with other popular population-based algorithms. Efforts made by past researchers to investigate the effect of USM process parameters on process performance through experimental investigation and parameter optimization, together with the experimental conditions, are reported in chronological order in Table 1.
The present survey of the USM process shows that the most studied performance parameter is Ra, followed by MRR and TWR (Fig. 1a), and that the most considered process parameter is abrasive grit size, followed by slurry concentration and power rating (Fig. 1b). Researchers have mostly considered titanium-based alloys as workpiece material, followed by the brittle material glass (Fig. 1c). As with other processes, most researchers have carried out experimental investigations using design of experiments based on Taguchi or RSM to study the effects of the USM process parameters on performance. The details of the CSA with a user interface are described in the next section. In Sect. 4, the examples considered for the ultrasonic machining process are described and solved for parameter optimization using the developed CSA-GUI.
3 Problem Solution

This section reports the developed CSA-GUI for the considered metaheuristic technique. The developed GUI is user-friendly and can be used effectively for continuous-domain problems.
Table 1 Literature summary of USM process

Dvivedi and Kumar [7]
Workpiece material: Pure titanium, titanium alloy (Ti-6Al-4V)
Process parameters: Workpiece, grit size, slurry concentration, power rating, tools
Performance parameters: Ra
Method/Approach/Instrument: Taguchi method, Optical Profiling System
Observations: An experimental investigation was carried out to obtain the effect of the considered process parameters on Ra during machining of titanium-based alloy using the USM process

Kumar and Khamba [8]
Workpiece material: Titanium
Process parameters: Tool materials (HCS, HSS, titanium, Ti alloy, cemented carbide), abrasive materials (alumina, SiC, boron carbide), grit size, power rating
Performance parameters: TWR, Ra
Method/Approach/Instrument: Taguchi, ANOVA
Observations: An experimental study on USM of pure titanium was conducted to determine the effect of the considered process parameters on the machining characteristics

Kumar et al. [9]
Workpiece material: Pure titanium (ASTM Grade-I), pure titanium (ASTM Grade-V)
Process parameters: Abrasive material (alumina, SiC, boron carbide), slurry concentration, tool material (HCS, HSS, titanium, Ti alloy, cemented carbide), abrasive grit size, power rating
Performance parameters: TWR
Method/Approach/Instrument: Taguchi, dimensional analysis
Observations: They conducted an experimental investigation on the USM process for machining titanium to determine the influence of the considered process parameters on tool wear rate; furthermore, they developed a semi-empirical model to predict TWR from the given process parameters using Buckingham's pi theorem

Jadoun et al. [10]
Workpiece material: Alumina-based ceramics
Process parameters: Workpiece material, tool material (HCS, HSS, TC), grit size of the abrasive, power rating, slurry concentration
Performance parameters: Hole oversize, out-of-roundness, conicity
Method/Approach/Instrument: Taguchi, SEM
Observations: They reported the effect of the considered process parameters on production accuracy obtained through ultrasonic drilling of holes in alumina-based ceramics with silicon carbide abrasive

Kumar [11]
Workpiece material: Glass
Process parameters: Abrasive slurry, slurry concentration, grit size, power rating
Performance parameters: MRR
Method/Approach/Instrument: ANOVA
Observations: The statistical analysis of ultrasonic machining of glass material using design of experiments and a regression approach was attempted

Chakravorty et al. [12]
Workpiece material: –
Process parameters: Different parameters based on the case studies considered
Performance parameters: Based on case studies
Method/Approach/Instrument: WSN, multi-response signal-to-noise ratio method, UT approach, GRA
Observations: Four methods with simple computational requirements were analyzed on two sets of previous experimental data of the considered process; the relative performances of the four methods were compared, and the results show that the weighted signal-to-noise (WSN) ratio and utility theory (UT) methods performed better than the other methods

Kumar [13]
Workpiece material: Pure titanium (ASTM Grade-I)
Process parameters: Abrasive material (alumina, SiC, boron carbide), slurry concentration, tool material (HSS, Ti alloy, HCS, cemented carbide), abrasive grit size, power rating
Performance parameters: Ra, microhardness
Method/Approach/Instrument: Taguchi, SEM
Observations: An experimental investigation was conducted to obtain the influence of the selected process parameters on surface quality during machining of pure titanium

Agarwal [14]
Workpiece material: Glass
Process parameters: Tool (mild steel), abrasive (boron carbide), abrasive grain size
Performance parameters: MRR
Method/Approach/Instrument: Analytical
Observations: The mechanism of material removal was investigated and found to depend on the machining conditions and the properties of the workpiece

Kuruc et al. [15]
Workpiece material: Poly-crystalline cubic boron nitride
Process parameters: –
Performance parameters: Ra
Method/Approach/Instrument: Confocal microscope
Observations: The work focused on surface integrity, because roughness has a significant influence on the welding process, especially on sticking ability; the considered process was found to be a suitable method to manufacture a tool, owing to the low roughness achieved

Teimouri et al. [16]
Workpiece material: Titanium alloy Grade-I
Process parameters: Tool material (carbon steel and titanium alloy), grit size (aluminum oxide), power rating
Performance parameters: MRR, TWR, Ra
Method/Approach/Instrument: Full factorial design, adaptive neuro-fuzzy inference system, ICA
Observations: They performed experiments on the USM process and attempted multi-response optimization using an "imperialist competitive algorithm (ICA)" to obtain a single optimum parameter setting with enhanced performance parameters

Kuriakose et al. [17]
Workpiece material: Zr-Cu-Ti metallic glass
Process parameters: Feed, abrasive grit size, concentration of abrasive slurry; abrasive material: boron carbide
Performance parameters: MRR, TWR, overcut, edge deviation and taper angle
Method/Approach/Instrument: Multi-objective optimization on the basis of ratio analysis (MOORA), SEM
Observations: They performed experimental work on micro-USM to obtain the influence of the considered parameters; it was revealed that the amorphous structure of "Zr60Cu30Ti10 metallic glass" was not affected by micro-USM machining

Geng et al. [18]
Workpiece material: Carbon fiber-reinforced plastics
Process parameters: Spindle speed, helical feed, axial feed per revolution
Performance parameters: Hole edge quality and surface integrity
Method/Approach/Instrument: Experimental, SEM
Observations: They performed experimental work on rotary ultrasonic helical machining to improve the surface integrity during machining of holes
Fig. 1 Identified investigation areas for the USM process: a performance parameters, b process parameters, c workpiece materials
3.1 Cuckoo Search Algorithm—Graphical User Interface (CSA-GUI)

CSA was developed by Yang and Deb [19] in 2010. Cuckoo behaviour is based on brood parasitism: cuckoos rely on birds of other species to raise their young, placing their eggs in the nests of other birds and flying away [19]. In the CSA, the positions of the cuckoos are based on the current position and a transition probability. The foraging behaviour of cuckoos differs from that of other species in that they are somewhat self-centred, not even rearing their own offspring. The foraging of cuckoos is idealized by three rules [19, 20]:

(1) Each cuckoo lays only one egg at a time, and this egg is dumped in the nest of some other species.
(2) The nests containing the best eggs are carried over to the subsequent generation.
(3) The number of available host nests is fixed during the foraging of the cuckoos.

The new position x_i(t+1) of the ith cuckoo at iteration t+1 is obtained using a random walk with a small step size, as given in Eq. (1):

x_i(t+1) = x_i(t) + α × S    (1)
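In practice, CS implementations usually realize the Lévy-distributed steps of Eq. (2) with Mantegna's algorithm, which the chapter does not spell out; the sketch below (function and parameter names are ours, not the chapter's) draws heavy-tailed step lengths for λ = 1.5:

```python
import math
import random

def levy_steps(n, lam=1.5, rng=random):
    """Draw n Levy-distributed random steps via Mantegna's algorithm.

    lam plays the role of the exponent λ in Eq. (2), with 1 < λ ≤ 3.
    """
    # Mantegna's scale factor for the Gaussian numerator variate
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    steps = []
    for _ in range(n):
        u = rng.gauss(0, sigma)   # numerator ~ N(0, sigma^2)
        v = rng.gauss(0, 1)       # denominator ~ N(0, 1)
        steps.append(u / abs(v) ** (1 / lam))
    return steps
```

Most draws are small (local search), but occasional very large steps occur, which is what gives the cuckoo search its global exploration ability.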
where α is the step size, which depends on the scale of the optimization problem. The random walk steps are drawn from a Lévy distribution using Eq. (2) [20]:

Lévy random steps (S) = t^(−λ)    (2)

where λ varies from 1 to 3 and t is the iteration number. The parameter values, namely the probability of discovering an alien egg (pa) and the step size (α), which control the global and local search, are varied with respect to the iterations; in the traditional version of the considered algorithm these values were constant [21]. The convergence rate of the algorithm and its ability to obtain the optimum solution of the optimization problem are affected by these parameters. It is observed in the traditional CSA that if the user fixes pa high and α low, the convergence rate of the algorithm deteriorates, whereas if pa is fixed low and α high, the convergence rate improves but sub-optimum solutions result. Thus, to enhance the performance of the traditional CSA, pa and α are varied with respect to the iterations; their values should be reduced towards the final generations to obtain fine-tuned solutions. The values of pa and α are controlled with the iteration number using Eqs. (3)–(5), respectively:

pa(gn) = pa_max − ((pa_max − pa_min)/NI) × gn    (3)

α(gn) = α_max exp(c · gn)    (4)

c = (1/NI) Ln(α_min/α_max)    (5)
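Eqs. (3)–(5) translate directly into code; the following small sketch (names and default bounds are illustrative assumptions, not values from the chapter) anneals both parameters over NI iterations:

```python
import math

def pa_schedule(gn, ni, pa_max=0.5, pa_min=0.05):
    """Eq. (3): linearly decrease the alien-egg discovery probability pa."""
    return pa_max - (pa_max - pa_min) / ni * gn

def alpha_schedule(gn, ni, a_max=0.5, a_min=0.01):
    """Eqs. (4)-(5): exponentially decrease the step size alpha."""
    c = math.log(a_min / a_max) / ni   # Eq. (5); c < 0 since a_min < a_max
    return a_max * math.exp(c * gn)    # Eq. (4)
```

Both schedules start at their maxima and end exactly at their minima at gn = NI, matching the advice that pa and α be reduced towards the final generations.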
where NI is the total number of iterations and gn is the current iteration counter. A flowchart of CSA is shown in Fig. 2. The pseudocode of the CSA, following Valian et al. [21], is given below:

Initialize the host nest positions xi (i = 1, 2, 3, …, n) randomly.
Evaluate the fitness value of each host nest position
Fig. 2 Flowchart for CSA: randomly generate the population of host cuckoo nest positions → evaluate the fitness function and obtain the cuckoo egg with the highest fitness value → obtain new cuckoo nest positions using Lévy flights → replace the host nests containing alien eggs with new positions → repeat until the stopping criterion is satisfied → output the best solution
f(x), where x = (x1, x2, x3, …, xd)^T. Sort to obtain the best egg from the host nests (the one with the highest fitness value).
While t < maximum generation (G)
    Generate new cuckoo positions by a random walk, using the step size and the probability of finding an alien egg.
    Evaluate the fitness value Fi of each new host position.
    Choose a nest j among the n nests randomly.
    If Fi > Fj then
        Replace j by the new solution.
    End if
    Abandon each host nest in which an alien egg is detected (a fraction pa of the nests) and replace it with a new host nest position generated randomly.
    Keep the best solutions.
    Rank the solutions and obtain the current best.
End While
Store the obtained results of CSA and exit the loop.

The objective of the present CSA-based GUI is to solve continuous-domain engineering problems. The GUI is the front end for the user and can be operated easily owing to its user-friendly environment. The interface is developed using the Matlab® GUIDE environment. Matlab® code placed inside a callback function establishes the interrelationship between the inputs provided by the user and the output; the callback generates the output for the end user in the output section of the developed GUI. Controllers are provided on the CSA interface for easy handling. Two files are generated while launching the CSA interface, named the "FIG-file" and the "M-file" [22, 23]. An evolutionary optimization technique such as CSA is capable of obtaining the optimum parameters that enhance the machining characteristics. Researchers have proposed topological, control-based and artificial neural network-based models for controlling factors such as the modelling of tool–chip interface temperature [24], finite impulse response filter design [25], path finding [26], auto-correction in CNC machine tools [27], structure designing [28], wireless sensor networks [29], the robotic assembly line balancing problem [30], video processing (demon registration) [31], fuel cells [32], etc.
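The pseudocode above maps onto a compact implementation. The sketch below is our own minimal reading of it, not the chapter's Matlab® code; it is written as minimization (so the fitness comparison is inverted relative to the pseudocode), uses Mantegna's algorithm for the Lévy flights, and abandons a pa-fraction of the worst nests each generation. It is demonstrated on a simple sphere function:

```python
import math
import random

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, alpha=0.01, n_iter=400, seed=1):
    """Minimize f over box bounds [(lo, hi), ...] with a basic cuckoo search."""
    rng = random.Random(seed)
    lam = 1.5
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)

    def levy():  # one Levy-distributed step (Mantegna's algorithm)
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / lam)

    def clip(x):  # keep candidates inside the box constraints
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    def rand_nest():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    nests = [rand_nest() for _ in range(n_nests)]
    fit = [f(x) for x in nests]

    for _ in range(n_iter):
        # get a cuckoo by a Levy flight and try it against a random nest
        i = rng.randrange(n_nests)
        new = clip([v + alpha * levy() for v in nests[i]])
        fn = f(new)
        j = rng.randrange(n_nests)
        if fn < fit[j]:            # greedy replacement
            nests[j], fit[j] = new, fn
        # abandon a fraction pa of the worst nests and rebuild them randomly
        order = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = rand_nest()
            fit[k] = f(nests[k])

    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]
```

For example, `cuckoo_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)` drives the 2-D sphere function close to its minimum at the origin. Because the worst nests are the ones abandoned, the best solution found so far is always retained (elitism), as the pseudocode's "keep the best solution" step requires.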
In the present study, an attempt is made to connect the CSA-based metaheuristic technique to a graphical interface, as depicted in Fig. 3. This concept provides the user with flexibility in obtaining solutions of continuous-domain problems without bothering about the programming computation. Two parameter-optimization problems of the USM process are solved using the CSA-GUI-based interface to demonstrate the effectiveness of the approach.
Fig. 3 CSA-based graphical user interface
4 Application of CSA-GUI in USM Process

This section demonstrates the simulation results of the CSA-GUI in attaining the optimum values of the operating parameters for the USM process. For the considered processes, a single-objective-at-a-time approach is used to solve the problems with CSA-GUI-based optimization.
4.1 Example 1: Ultrasonic Machining (USM)

Lalchhuanvela et al. [33] conducted an experimental investigation on an "AP1000" Sonic-Mill, a 1000 W USM with a vibration frequency of 20 kHz. They used a mild-steel holding plate with a cavity of size 30 × 30 × 8 mm at its centre for making through holes in a workpiece of alumina ceramic material with size 40 × 40 × 5 mm. In machining the workpiece material, boron carbide powder of selected grain sizes was used as the abrasive slurry. The tool was made from tubular stainless steel of hexagonal shape, 17 mm long with an 8.7 mm hole diameter. Five process parameters, i.e., slurry concentration (%), grit size (μm), power rating (%), feed rate (mm/min) and slurry flow rate (lit/min), and two performance parameters, MRR (gm/min) and Ra (μm), were considered in the experimental work. The process parameters were set at five different levels. The actual values and coded values of the process parameters are shown in Table 2. The coded parameter levels (Xi) are converted to the actual variable (X) using Eq. (6).
Table 2 Process parameters and their bounds for USM [33]

Parameter (levels −2, −1, 0, 1, 2):
Grit size (μm): 14, 24, 34, 44, 63
Slurry concentration (%): 30, 35, 40, 45, 50
Power rating (%): 40, 45, 50, 55, 60
Feed rate (mm/min): 0.84, 0.96, 1.08, 1.20, 1.32
Slurry flow rate (lit/min): 6, 7, 8, 9, 10
coded value (Xi) = (2X − Xmax − Xmin) / ((Xmax − Xmin)/2)    (6)

where the coded values of Xi are −2, −1, 0, 1 and 2, and Xmax and Xmin are the maximum and minimum values of the actual variable. Lalchhuanvela et al. [33] used a "central composite second-order half-fraction rotatable design (CCRD)" experimentation plan with 32 experimental runs. The mathematical predictive regression models are remodelled from the experimental results of Lalchhuanvela et al. [33] with the help of the software MINITAB, using the coded values of the process parameters. These RSM-based models are given in Eqs. (7)–(8):

MRR = 0.034205 + 0.00264583 x1 − 0.00347917 x2 − 0.00139583 x3 − 0.0001875 x4 + 0.0008125 x5 + 0.00151989 x1^2 − 0.000542614 x2^2 + 0.000363636 x3^2 − 0.000605114 x4^2 − 0.000292614 x5^2 − 0.00282813 x1x2 − 0.000546875 x1x3 + 0.000328125 x1x4 + 0.000328125 x1x5 + 0.000609375 x2x3 − 0.000203125 x2x4 − 0.000328125 x2x5 − 0.000484375 x3x4    (7)

Ra = 0.607727 + 0.026667 x1 − 0.00958333 x2 − 0.0070833 x3 − 0.0037500 x4 + 0.0133333 x5 + 0.000994318 x1^2 − 0.00443182 x2^2 + 0.00224432 x3^2 − 0.00838068 x4^2 − 0.0080682 x5^2 − 0.00437500 x1x2 − 0.00906250 x1x3 + 0.00343750 x1x4 + 0.002500 x1x5 + 0.00406250 x2x3 − 0.00343750 x2x4 − 0.0012500 x2x5 − 0.00437500 x3x4 + 0.00281250 x3x5 − 0.000937500 x4x5    (8)
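For readers who want to reuse the remodelled response surfaces, Eqs. (6)–(8) translate directly into code. The sketch below is our transcription of the printed coefficients; note that the printed models may have lost signs or terms in reproduction (the MRR model as printed carries no x3x5 or x4x5 terms), so these functions reproduce the printed equations, not necessarily the tabulated optima:

```python
def code(x, xmax, xmin):
    """Eq. (6): actual value -> coded value in [-2, 2]."""
    return (2 * x - xmax - xmin) / ((xmax - xmin) / 2)

def decode(c, xmax, xmin):
    """Inverse of Eq. (6): coded value -> actual value."""
    return (c * (xmax - xmin) / 2 + xmax + xmin) / 2

def mrr(x1, x2, x3, x4, x5):
    """Eq. (7): MRR (gm/min) at coded parameter settings, as printed."""
    return (0.034205 + 0.00264583*x1 - 0.00347917*x2 - 0.00139583*x3
            - 0.0001875*x4 + 0.0008125*x5
            + 0.00151989*x1*x1 - 0.000542614*x2*x2 + 0.000363636*x3*x3
            - 0.000605114*x4*x4 - 0.000292614*x5*x5
            - 0.00282813*x1*x2 - 0.000546875*x1*x3 + 0.000328125*x1*x4
            + 0.000328125*x1*x5 + 0.000609375*x2*x3 - 0.000203125*x2*x4
            - 0.000328125*x2*x5 - 0.000484375*x3*x4)

def ra(x1, x2, x3, x4, x5):
    """Eq. (8): Ra (um) at coded parameter settings, as printed."""
    return (0.607727 + 0.026667*x1 - 0.00958333*x2 - 0.0070833*x3
            - 0.0037500*x4 + 0.0133333*x5
            + 0.000994318*x1*x1 - 0.00443182*x2*x2 + 0.00224432*x3*x3
            - 0.00838068*x4*x4 - 0.0080682*x5*x5
            - 0.00437500*x1*x2 - 0.00906250*x1*x3 + 0.00343750*x1*x4
            + 0.002500*x1*x5 + 0.00406250*x2*x3 - 0.00343750*x2*x4
            - 0.0012500*x2*x5 - 0.00437500*x3*x4 + 0.00281250*x3*x5
            - 0.000937500*x4*x5)
```

At the centre point (all coded values 0) the models return the intercepts, 0.034205 gm/min and 0.607727 μm. Note also that the linear coding of Eq. (6) is exact only for uniformly spaced levels, so the grit-size column of Table 2 (whose top level jumps from 44 to 63 μm) is coded approximately.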
where x1 is the grit size, x2 the slurry concentration, x3 the power rating, x4 the feed rate and x5 the slurry flow rate. This USM problem [33] is considered for process parameter optimization using the proposed algorithms. The results obtained for MRR and Ra at different iterations are shown in Figs. 4 and 5, respectively. The effectiveness of the proposed algorithm is measured for the considered process by employing Eqs. (7) and (8); here, the performance characteristics MRR and Ra are to be maximized and minimized, respectively. Lalchhuanvela et al. [33] had not performed single-objective optimization for the considered performance parameters, and therefore the results of the proposed algorithms cannot be compared with theirs. However, the results are compared with those of other metaheuristic techniques, namely PSO and BHA. The optimum process parameters for MRR and Ra from the single-objective optimization are shown in Table 3. As seen from Figs. 4 and 5, PSO and CSA converge faster than the BHA algorithm in obtaining the optimal solutions for the performance parameters MRR and Ra. The values obtained in coded form are decoded using Eq. (6) to obtain the actual parameters, as depicted in Table 3; these values are then rounded off so that they can be easily set on the considered USM machine, as given in Table 4. The computational time, mean and standard deviation obtained at the end of 50 trials using the CSA, PSO and BHA algorithms for the performance parameters MRR and Ra are shown in Table 5. Comparing the mean values of the proposed algorithms, the means obtained using CSA for MRR and Ra are 0.0731 and 0.4463, respectively. Here, the mean values define the success rate of an algorithm at the end of 50 trials; in these terms, CSA is found better than the other considered algorithms, i.e., PSO and BHA.
Fig. 4 MRR convergence of PSO, CSA and BH algorithm
Fig. 5 Ra convergence of PSO, CSA and BH algorithm
While comparing the standard deviations of the proposed algorithms, the value is found better for the CSA algorithm than for the other algorithms for the considered performance parameters. The influence of the USM process parameters on MRR and Ra is depicted in Fig. 6; the optimum values obtained using CSA are used to obtain the trends of MRR and Ra with respect to the USM process parameters. As shown in Fig. 6a–e, MRR in the considered process increases with an increase of grit size (Fig. 6a), feed rate (Fig. 6d) and slurry flow rate (Fig. 6e), and it reduces with an increase of the USM process parameters slurry concentration (Fig. 6b) and power rating (Fig. 6c). These trends of the USM process parameters with respect to MRR are reasonable owing to a few facts. As the grit size increases, the area of impingement on the workpiece increases, which results in growth of MRR. The problem of draining out the burrs produced between the tool and the workpiece reduces with an increase of slurry flow rate. With an increase of slurry concentration, the effective grit size decreases, which results in a reduction of MRR. With an increase of power rating, the abrasive particles strike the workpiece material with a higher force, and the growth of MRR occurs with the increase of this parameter and of the number of particles striking the workpiece. The values of the USM process parameters obtained using CSA are in agreement with these trends. The performance characteristic Ra reduces with an increase of grit size (Fig. 6a), slurry concentration (Fig. 6b), feed rate up to a certain limit (Fig. 6d) and slurry flow rate (Fig. 6e), while it increases with an increase of the USM process parameter power rating (Fig. 6c). The USM process parameter values obtained for Ra using the CSA algorithm are in good agreement with these trends, which are reasonable owing to the following facts.
Table 3 Results of single objective optimization for the USM process in coded and actual form
(entries give the coded value with the corresponding actual value in parentheses)

PSO, MRR (gm/min), optimum value 0.0765: grit size 1.9952 (62.9412 μm), slurry concentration −1.9758 (30.1210%), power rating −1.8102 (40.9490%), feed rate 0.0634 (1.0876 mm/min), slurry flow rate 1.8925 (9.8925 lit/min)
PSO, Ra (μm), optimum value 0.4391: grit size 1.9917 (62.8983), slurry concentration 1.9378 (49.6890), power rating −1.8468 (40.7660), feed rate 0.2552 (1.1106), slurry flow rate 1.5143 (9.5143)
CSA, MRR (gm/min), optimum value 0.0766: grit size 1.9687 (62.6166), slurry concentration −1.9642 (30.1790), power rating −1.9311 (40.3445), feed rate −0.0110 (1.0786), slurry flow rate 2.0000 (10.000)
CSA, Ra (μm), optimum value 0.4346: grit size 2.0000 (63.0000), slurry concentration 2.0000 (50.0000), power rating −2.0000 (40.0000), feed rate −0.3850 (1.0338), slurry flow rate 2.0000 (10.000)
BH, MRR (gm/min), optimum value 0.0738: grit size 1.9333 (62.1829), slurry concentration −1.7514 (31.2430), power rating −1.7260 (41.3700), feed rate 1.4294 (1.2515), slurry flow rate 1.5066 (9.5066)
BH, Ra (μm), optimum value 0.4443: grit size 1.8422 (61.0670), slurry concentration 1.9586 (49.7930), power rating −1.7267 (41.3665), feed rate 0.7346 (1.1681), slurry flow rate 1.9903 (9.9903)
Table 4 Round-off and feasible values for single objective optimization of the USM process
(entries give the round-off actual value with the corresponding feasible coded value in parentheses)

PSO, MRR (gm/min), optimum value 0.0768: grit size 63 (2.0000), slurry concentration 30 (−2.0000), power rating 41 (−1.8000), feed rate 1.08 (0), slurry flow rate 10 (2)
PSO, Ra (μm), optimum value 0.4329: grit size 63 (2.0000), slurry concentration 50 (2.0000), power rating 41 (−1.8000), feed rate 1.20 (1), slurry flow rate 10 (2)
CSA, MRR (gm/min), optimum value 0.0778: grit size 63 (2.0000), slurry concentration 30 (−2.0000), power rating 40 (−2.0000), feed rate 1.08 (0), slurry flow rate 10 (2)
CSA, Ra (μm), optimum value 0.4293: grit size 63 (2.0000), slurry concentration 50 (2.0000), power rating 40 (−2.0000), feed rate 1.08 (0), slurry flow rate 10 (2)
BH, MRR (gm/min), optimum value 0.0747: grit size 62 (1.9184), slurry concentration 31 (−1.8000), power rating 41 (−1.8000), feed rate 1.32 (2), slurry flow rate 10 (2)
BH, Ra (μm), optimum value 0.4417: grit size 61 (1.8367), slurry concentration 50 (2.0000), power rating 41 (−1.8000), feed rate 1.20 (1), slurry flow rate 10 (2)
Table 5 Performance comparison of the proposed algorithms

Method  Output        Optimum value  Mean    Standard deviation  Computation time (s)
PSO     MRR (gm/min)  0.0765         0.0636  0.0052              0.6358
PSO     Ra (μm)       0.4391         0.4931  0.0223              0.6827
CSA     MRR (gm/min)  0.0766         0.0731  0.0016              0.5239
CSA     Ra (μm)       0.4346         0.4463  0.0052              0.9457
BH      MRR (gm/min)  0.0738         0.0696  0.0019              0.9468
BH      Ra (μm)       0.4443         0.4624  0.0066              1.0579
As the process parameter grit size increases, the area of impingement on the target surface increases, which enhances machining using the USM process. As the slurry concentration increases, the flow of grit over the target surface increases, which reduces Ra. As the power rating increases, the abrasive particles strike the target surface with a higher force, resulting in an increase of Ra. The increase of feed rate and slurry flow rate reduces the problem of draining out the burrs produced between the tool and the workpiece, which improves the value of Ra. These trends of the performance characteristics MRR and Ra confirm the optimality of the solution obtained using the CSA algorithm.
4.2 Example 2: Single Objective Optimization of USM Process

This case study is taken from Li et al. [34] to obtain the optimum parameter setting and to assess the effectiveness of the proposed algorithm. They conducted experiments on a "WXD170 reciprocating diamond abrasive wire saw machine tool" with the following specifications: x-axis range 0–120 mm, y-axis range 0–120 mm, part speed range 0–36 r/min, part feed rate range 0.025–18 mm/min and wire saw velocity range 1.3–2.2 m/s. The process parameters were set at three different levels, as given in Table 6. Li et al. [34] used a central composite rotatable design experiment plan with 30 experimental runs. The mathematical predictive regression model developed by Li et al. [34] is given in Eq. (9).
Fig. 6 a–e Variations of process parameters of USM (Example 1)
Table 6 Process parameters and their bounds of USM process [34]

Parameter (units): Level 1, Level 2, Level 3
Wire saw velocity (m/s): 1.3, 1.6, 1.9
Part feed rate (mm/min): 2.5 × 10^−2, 5 × 10^−2, 8 × 10^−2
Part rotation speed (r/min): 8, 12, 16
Ultrasonic vibration amplitude (mm): 0, 1 × 10^−3, 2 × 10^−3
Ra = 2.25 − 1.73 x1 + 5.25 x2 − 0.03 x3 − 538 x4 + 0.52 x1^2 + 1.94 x2^2 + 0.00103 x3^2 + 1.68 × 10^5 x4^2 − 0.64 x1x2 + 2.08 × 10^−3 x1x3 − 9.17 x1x4 + 0.05 x2x3 + 1330 x2x4 − 2.91 x3x4    (9)
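Because the printed coefficients of Eq. (9) are rounded, re-optimizing the transcribed model will not exactly reproduce the chapter's 0.2508 μm. Still, even a plain random search over the Table 6 bounds (a stand-in for CSA here; function names and sample count are our assumptions) probes the model easily:

```python
import random

def ra_model(x1, x2, x3, x4):
    """Eq. (9) as printed: Ra (um) from wire saw velocity x1 (m/s),
    part feed rate x2 (mm/min), part rotation speed x3 (r/min) and
    ultrasonic vibration amplitude x4 (mm)."""
    return (2.25 - 1.73*x1 + 5.25*x2 - 0.03*x3 - 538*x4
            + 0.52*x1*x1 + 1.94*x2*x2 + 0.00103*x3*x3 + 1.68e5*x4*x4
            - 0.64*x1*x2 + 2.08e-3*x1*x3 - 9.17*x1*x4
            + 0.05*x2*x3 + 1330*x2*x4 - 2.91*x3*x4)

def random_search(n=5000, seed=7):
    """Minimize ra_model by uniform sampling within the Table 6 bounds."""
    rng = random.Random(seed)
    bounds = [(1.3, 1.9), (0.025, 0.08), (8, 16), (0.0, 0.002)]
    centre = tuple((lo + hi) / 2 for lo, hi in bounds)
    best_x, best_f = centre, ra_model(*centre)  # start from the centre point
    for _ in range(n):
        x = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        fx = ra_model(*x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

With this transcription the minimum comes out near 0.3 μm, at high wire saw velocity, the lowest feed rate and mid-to-high rotation speed; the gap to the 0.25–0.26 μm values of Table 7 reflects the rounded printed coefficients rather than the search itself.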
This USM problem [34] is considered for process parameter optimization using the considered algorithms. The Ra results obtained for Example 2 are shown in Fig. 7. The effectiveness of the proposed algorithms is measured for the considered USM process by employing Eq. (9); here, the performance parameter Ra is to be minimized. The comparison of the performance parameter result obtained using the CSA algorithm with that of the previous researchers is shown in Table 7, together with the results of the other algorithms, PSO and BHA. The optimum process parameters for the performance characteristic Ra are shown in Table 8. As seen from Fig. 7, PSO converges faster than the CSA and BHA algorithms, but this leads PSO to a sub-optimal solution for the considered performance characteristic. The variations of the considered USM process parameters, i.e., wire saw velocity (m/s), part feed rate (mm/min), part rotation speed (r/min) and ultrasonic vibration amplitude (mm), with respect to Ra are shown in Fig. 8. As depicted in Fig. 8a–d, the Ra value initially decreases and then increases with an increase of wire saw velocity (Fig. 8a), part rotation speed (Fig. 8c) and ultrasonic vibration amplitude (Fig. 8d), while it increases with an increase of part feed rate (Fig. 8b). The observed effects of the process parameters on Ra can be justified by a few facts. As the wire saw velocity increases, the cutting rate on the target surface increases, causing smooth machining initially and then a gradual degradation of the roughness.
Fig. 7 Ra convergence of PSO, CSA, BH algorithms for Example 2
Table 7 Comparison of CSA algorithm results for Example 2

Algorithm                              Ra (μm)
Desirability functional approach [34]  0.2600
Particle swarm optimization (PSO)      0.2628
Cuckoo search algorithm (CSA)          0.2508
Black hole (BH)                        0.2545
Table 8 Results of single objective optimization for USM process (Example 2)

PSO: wire saw velocity 1.6678 m/s, part feed rate 0.0266 mm/min, part rotation speed 14.9946 r/min, ultrasonic vibration amplitude 0.0018 mm, Ra 0.2628 μm, computational time 5.72 s
CSA: wire saw velocity 1.7110 m/s, part feed rate 0.0251 mm/min, part rotation speed 15.1114 r/min, ultrasonic vibration amplitude 0.0016 mm, Ra 0.2508 μm, computational time 1.60 s
BH: wire saw velocity 1.6864 m/s, part feed rate 0.0256 mm/min, part rotation speed 15.6649 r/min, ultrasonic vibration amplitude 0.0018 mm, Ra 0.2545 μm, computational time 0.57 s
An increase of the process parameter part feed rate reduces the time available for cutting the workpiece, which increases Ra up to a certain extent. As the part rotation speed and ultrasonic vibration amplitude increase, the abrasive particles erode more material from the workpiece gradually and with less force, which improves Ra. These trends of the performance parameter Ra confirm the optimality of the solution obtained using the proposed algorithms for the USM process.
4.3 Result and Discussion

Some complexity is observed in the development of the CSA interface in terms of problem size, constraints and computation time. If the algorithm parameters are not properly tuned, the search may get stuck in local optima; this can be overcome by performing a few trial runs for the considered problem. In the present work, the mean value is used to determine the success rate of the algorithm. From Table 5, the mean values obtained using CSA for the performance parameters MRR and Ra are improved by 5.02% and 3.48%, respectively, compared to the BHA algorithm, while the value of Ra obtained using CSA is improved by 3.53% compared to the desirability functional approach, as computed from the values in Table 7. In terms of convergence, as depicted in Figs. 4, 5 and 7, PSO converges fastest in each case but yields sub-optimum solutions, whereas CSA obtains approximately optimum solutions. In this chapter, the authors have used algorithms based on their applicability and suitability to the considered problem. No particular algorithm is universally best, which poses a threat to any solution obtained from a single algorithm: each algorithm has its own characteristics and performs accordingly, and there is no general criterion for selecting an algorithm for a given optimization problem. Hence, if only one algorithm is attempted, it is not sufficient to claim that the obtained solution is near-optimal. To validate that a solution is approximately optimal, the objectives should be tested with other algorithms and the solutions compared; if the solutions obtained from the attempted algorithms lie in close proximity, it is reasonable to conclude that the obtained solution is the best found so far.
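The improvement figures quoted above follow directly from Tables 5 and 7; a quick arithmetic check (our helper names, computing relative improvement against the competitor's value):

```python
def gain_max(csa, other):
    """% improvement for a quantity to be maximized (e.g. mean MRR)."""
    return (csa - other) / other * 100

def gain_min(csa, other):
    """% improvement for a quantity to be minimized (e.g. mean Ra)."""
    return (other - csa) / other * 100

# Table 5 mean values, CSA vs BHA
mrr_gain = gain_max(0.0731, 0.0696)   # about 5.0%
ra_gain = gain_min(0.4463, 0.4624)    # about 3.5%
# Table 7 optimum Ra, CSA vs desirability functional approach [34]
ra2_gain = gain_min(0.2508, 0.2600)   # about 3.5%
```

These reproduce the 5.02%, 3.48% and 3.53% figures to within rounding.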
Fig. 8 a–d Variations of process parameters of USM (Example 2)
5 Conclusions

The CSA-GUI is successfully applied to parameter optimization of the USM process. The results obtained are compared with those of other popular population-based metaheuristic techniques, i.e., PSO and BHA, to assess the effectiveness of the developed CSA-GUI. In Example 1, the mean values obtained using CSA for MRR and Ra are improved by 5.02% and 3.48%, respectively, compared to the BHA algorithm. In Example 2, the response value of Ra obtained using CSA is improved by 3.53% compared to the previous researchers' result for the desirability functional approach. The results thus show that the developed CSA-GUI is acceptable for solving machining optimization problems. End users can employ the developed CSA-based interface to optimize the process parameters of other advanced processes. This study concludes that the CSA-GUI can be extended to other engineering problems as a significant tool that would enhance the capability and performance of the system.
References

1. Sharman AR, Aspinwall DK, Kasuga V (2001) Ultrasonic assisted turning of gamma titanium aluminide. In: Proceedings of 13th international symposium for electromachining, Spain, (Part-I), pp 939–951
2. Singh R, Khamba JS (2006) Ultrasonic machining of titanium and its alloys: a review. J Mater Process Technol 173(2):125–135
3. Gauri SK, Chakravorty R, Chakraborty S (2010) Optimization of correlated multiple responses of ultrasonic machining (USM) process. Int J Adv Manuf Technol 53(9–12):1115–1127
4. Singh N, Gianender (2012) USM for hard or brittle material and effect of process parameters on MRR or surface roughness: a review. Int J Appl Eng Res 7(11):1–6
5. Popli D, Singh RP (2013) Machining process parameters of USM—a review. Int J Emerg Res Manag Technol 9359(10):46–50
6. Goswami D, Chakraborty S (2015) Parametric optimization of ultrasonic machining process using gravitational search and fireworks algorithms. Ain Shams Eng J 6(1):315–331
7. Dvivedi A, Kumar P (2007) Surface quality evaluation in ultrasonic drilling through the Taguchi technique. Int J Adv Manuf Technol 34:131–140
8. Kumar J, Khamba JS (2008) An experimental study on ultrasonic machining of pure titanium using designed experiments. J Braz Soc Mech Sci Eng 3:231–238
9. Kumar J, Khamba JS, Mohapatra SK (2009) Investigating and modeling tool-wear rate in the ultrasonic machining of titanium. Int J Adv Manuf Technol 41(11–12):1107–1117
10. Jadoun RS, Kumar P, Mishra BK (2009) Taguchi's optimization of process parameters for production accuracy in ultrasonic drilling of engineering ceramics. Prod Eng 3(3):243–253
11. Kumar V (2013) Optimization and modeling of process parameters involved in ultrasonic machining of glass using design of experiments and regression approach. Am J Mater Eng Technol 1(1):13–18
12. Chakravorty R, Gauri SK, Chakraborty S (2013) Optimization of multiple responses of ultrasonic machining (USM) process: a comparative study. Int J Ind Eng Comput 4:285–296
13. Kumar J (2014) Investigations into the surface quality and micro-hardness in the ultrasonic machining of titanium (ASTM Grade-1). J Braz Soc Mech Sci Eng 36(4):807–823
14. Agarwal S (2015) On the mechanism and mechanics of material removal in ultrasonic machining. Int J Mach Tools Manuf 96(1):1–14
15. Kuruc M, Vopát T, Peterka J (2015) Surface roughness of poly-crystalline cubic boron nitride after rotary ultrasonic machining. Procedia Eng 100:877–884
16. Teimouri R, Baseri H, Moharami R (2015) Multi-responses optimization of ultrasonic machining process. J Intell Manuf 26(4):745–753
17.
Kuriakose S, Kumar P, Bhatt J (2017) Machinability study of Zr-Cu-Ti metallic glass by micro hole drilling using micro-USM. J Mater Process Technol 240:42–51 18. Geng D, Teng Y, Liu Y, Shao Z, Jiang X, Zhang D (2019) Experimental study on drilling load and hole quality during rotary ultrasonic helical machining of small-diameter CFRP holes. J Mat Process Technol 270:195–205 19. Yang X, Deb S (2010) Engineering optimization by Cuckoo search. Int J Math Model Numer Opt 1:330–343 20. Rajabioun R (2011) Cuckoo optimization algorithm. App Soft Comput 11:5508–5518 21. Valian E, Tavakoli S, Mohanna S, Haghi A (2013) Improved cuckoo search for reliability optimization problem. Comput Ind Eng 64:459–468 22. Chapman SJ (2008) Matab® programming for engineers. Thomson Asia Ltd, Singapore 23. Hahn BH, Valentine DT (2010) Essential matlab for engineers and scientist. Elsevier Academic Press 24. Korkut I, Acır A, Boy M (2011) Application of regression and artificial neural network analysis in modelling of tool–chip interface temperature in machining. Expert Sys Appl 38(9):11651– 11656 25. Kumar M, Rawat TK (2015) Optimal design of FIR fractional order differentiator using cuckoo search algorithm. Expert Sys Appl 42:3433–3449 26. Rajabi-Bahaabadi M, Shariat-Mohaymany A, Babaei M, Ahn CW (2015) Multi-objective path finding in stochastic time-dependent road networks using non-dominated sorting genetic algorithm. Expert Sys Appl 42(12):5056–5064 27. De Oliveira LW, Carlos Campos Rubio J, Gilberto Duduch J, De Almeida PEM (2015) Correcting geometric deviations of CNC machine-tools: an approach with artificial neural networks. App Soft Comput J 36:114–124 28. Huang J, Gao L, Li X (2015) An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes. App Soft Comput J 36:349–356 29. 
Binh HT, Hanh NT, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks. Neural Comput Appl 30(7):2305–2317
7 Application of Cuckoo Search Algorithm …
30. Li Z, Dey N, Ashour AS, Tang Q (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. Neural Comput Appl 30(9):2685–2696
31. Chakraborty S, Dey N, Samanta S, Ashour AS, Barna C, Balas MM (2017) Optimization of non-rigid demons registration using a cuckoo search algorithm. Cognit Comput 9(6):817–826
32. Zhu X, Wang N (2019) Cuckoo search algorithm with onlooker bee search for modeling PEMFCs using T2FNN. Eng Appl Artif Intell 85:740–753
33. Lalchhuanvela H, Doloi B, Bhattacharyya B (2012) Enabling and understanding ultrasonic machining of engineering ceramics using parametric analysis. Mater Manuf Process 27(4):443–448
34. Li S, Wan B, Landers RG (2014) Surface roughness optimization in processing SiC monocrystal wafers by wire saw machining with ultrasonic vibration. Proc IMechE Part B: J Eng Manuf 228(5):725–739
Chapter 8
The Cuckoo Search Algorithm Applied to Fuzzy Logic Control Parameter Optimization G. García-Gutiérrez, D. Arcos-Aviles, E. V. Carrera, F. Guinjoan, A. Ibarra, and P. Ayala
G. García-Gutiérrez · D. Arcos-Aviles (B) · E. V. Carrera · A. Ibarra · P. Ayala
Departamento de Eléctrica, Electrónica y Telecomunicaciones, Universidad de las Fuerzas Armadas ESPE, Av. Gral. Rumiñahui s/n, 171-5-231B, Sangolquí, Ecuador
e-mail: [email protected]

F. Guinjoan
Department of Electronics Engineering, Escuela Técnica Superior de Ingenieros de Telecomunicación de Barcelona, Universitat Politècnica de Catalunya, C. Jordi Girona 31, 08034 Barcelona, Spain
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_8

1 Introduction

Most everyday engineering applications require the optimization of objective functions in order to enhance the operation, cost, and/or performance of real-life systems. Basically, optimization can be seen as a process in which the values of certain parameters are explored under specified conditions or constraints to discover the particular values that minimize or maximize a set of objective functions [1, 2]. However, the optimization of complex systems is a difficult task, especially when objective functions with multiple goals are involved [3]. In order to solve complex optimization problems, several techniques can be used, even if they do not guarantee that an optimal solution will be obtained. Examples of
G. García-Gutiérrez et al.
such techniques are trial-and-error methods that can help to reach at least acceptable solutions [4, 5]. However, complex systems may require considerable time and computational power for their optimization tasks. Thus, many research groups have focused their efforts on developing algorithms capable of solving optimization problems more effectively, which is why several nature-inspired metaheuristic algorithms have attracted attention over the last decades. These nature-inspired algorithms have shown great capability to solve optimization problems in a broad spectrum of applications such as chemical processes [6–8], job scheduling [9–11], multi-objective optimization [12–14], operation and control of electric power systems [15–18], image processing [20], vehicle routing [19, 20], autonomous vehicle control [21, 22], mobile networking [23], etc. Some of these nature-inspired algorithms are Genetic Algorithms, Differential Evolution, Ant Colony Optimization, Particle Swarm Optimization, Cuckoo Search, the Firefly Algorithm, the Bat Algorithm, etc. In particular, recent studies have shown that the Cuckoo Search (CS) algorithm is one of the most promising metaheuristic algorithms developed for global-scale optimization problems. It has become very popular for solving complex optimization problems in an extensive range of engineering applications. In fact, CS has proved more efficient than other algorithms in terms of robustness and precision of the obtained results [24], convergence speed [14], quality of the obtained solutions [15], and its well-balanced intensification/diversification search strategies [25–27]. Owing to these well-balanced search strategies, the CS algorithm is a valuable tool for controller parameter tuning, offering a high probability of obtaining an acceptable solution to the optimization problem in a reasonable period of time.
In this context, the desired performance parameters of the control system play an important role in the optimization process as conditions for the algorithm's decision-making. Thus, this chapter presents the application of the CS algorithm to tuning controller parameters in two case studies. The first case study addresses the fuzzy logic controller (FLC) parameter tuning of a nonlinear magnetic levitation system [28]. The second addresses the FLC optimization of the energy management system of a residential microgrid [29]. In both cases, the CS algorithm has been used to optimize a fuzzy logic controller, including its membership functions (MF) and rule base (RB). The presented cases show that these procedures can be applied to many problems involving systems where such control techniques are required. Simulation results are provided for each case study to highlight the features of the optimized controllers when compared against other traditional techniques. Results show a clear advantage of the CS algorithm over more standard techniques. The rest of this chapter is organized as follows: Sect. 2 presents a short literature review of previous studies addressing the optimization of controller parameters through the CS algorithm. Section 3 presents the research methodology of CS applied to FLC parameter tuning. Section 4 analyzes the FLC parameter optimization for a magnetic levitation system. Section 5 shows the adjustment of parameters for the
controller of a microgrid energy management system. Finally, Sect. 6 analyzes and discusses the most relevant results found in each case study and concludes the chapter.
2 Literature Work

In recent decades, there has been increasing development of solutions to optimization problems based on nature-inspired algorithms [30]. These solutions cover most scientific and technological areas; in the case of PID controller optimization, previous works range from bacterial foraging optimization [31] to bat algorithm optimization [32]. In applications related to the optimization of power systems, there is also a good set of previous works discussing more conventional techniques such as extremum seeking optimization [33] and particle swarm optimization [34]. In fact, [35] presents a good summary of distributed algorithms with applications to the optimization and control of power systems. Among nature-inspired algorithms, the Cuckoo Search algorithm has emerged as one of the most efficient metaheuristic optimization techniques. Thus, CS has been applied in many areas, such as game theory and multi-objective optimization problems [36], image registration in video processing [37], optimization of robotic assembly lines [38], and even maximizing area coverage in wireless sensor networks [39]. In the case of control systems, the CS algorithm has also improved several optimization tasks, such as the optimal tuning of a PI controller [40], optimal tuning of PID controllers [41–43], optimization of parameters in cascade controllers [44], optimal power system stabilizer design in multi-machine power systems [45], loop control of frequency and voltage in distributed generating systems [46], optimal overcurrent relay coordination in microgrids [47], and reduction of single-input single-output and multiple-input multiple-output LTI systems [48], among other applications.
Finally, in the case of fuzzy logic controllers, the CS algorithm has also been used for the optimization of power controllers in isolated networks [49], for designing an optimized FLC that enhances the performance of wind turbines by maximizing the captured energy [50], for adapting the transient and steady-state characteristics of an FLC used in a distillation column [51], and even for intelligently managing a hydraulic semi-active damper to minimize the resultant vibration magnitude [52]. Thus, this work extends previous works on the optimization of fuzzy logic controllers by including the membership functions and the rule base among the parameters to optimize.
3 Research Methodology

3.1 Basics of the Cuckoo Search Algorithm

In general terms, the CS algorithm is based on three idealized rules, defined in [25, 26], which describe how the algorithm is applied to an optimization problem:

1. In a predefined population of host nests, and iteratively, a cuckoo egg representing a new candidate solution is dumped into a randomly selected host nest (the new cuckoo egg intends to replace a previous egg by improving the quality of the solutions).
2. A "selection of best" process promotes the convergence of solutions to the global optimum by carrying over the best nests with high-quality eggs to the next generations. For this purpose, a decision-making condition comprising the objective(s) of optimization (i.e., the cost/objective/fitness function) is crucial to obtaining a good solution that meets the problem specifications.
3. Each cuckoo egg laid in a host nest has an assigned discovery rate value, pa, and when a predefined discovery condition is met, the host bird can either build a new nest or simply get rid of the cuckoo egg. This means that a completely new solution, far enough from the current population's solutions, is generated, enabling exploration on a global scale.

Moreover, the CS algorithm's potential for solving complex optimization problems is mainly due to its well-balanced local and global search strategies (i.e., Lévy flights and biased/selective random walks) [14, 25–27, 53]. From this perspective, FLC parameter optimization can be cast as an optimization problem solvable with the CS algorithm. The objectives of optimization are the desired system performance parameters (i.e., the MLS performance parameters of both transient and steady-state regimes, and the MG energy management quality criteria), and the parameters to optimize are the controller parameters (i.e., the FLC MF mapping and RB).
In general, the CS algorithm for FLC parameter optimization comprises the phases of initialization, intensification, diversification, update, and selection of best, detailed next and illustrated in Fig. 1.
Fig. 1 General diagram of Cuckoo Search (CS) algorithm phases
3.2 Initialization

During initialization, algorithm parameters such as the cuckoo dimension, d, and the search space boundaries, Ub and Lb, are defined according to the number of MF and RB parameters and the limiting values of these parameters, respectively. The other initial parameters, namely the egg discovery rate, pa = 0.25, the Lévy distribution coefficient, β = 1.5, and the population length, n = 25, are fixed according to previous studies [25, 26]. Finally, the initial population is created by drawing n random solutions of dimension d from the search space.
3.3 Intensification

The intensification, or exploitation, process focuses the search on a local region around previous candidate solutions and is carried out using a Lévy-flight random walk whose step sizes are drawn from a Lévy distribution [25, 26, 54–57]. The aim of this local search process is to discover new solutions in the neighborhood of the solutions in the population, refining these feasible local optima during the iterative process.
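The Lévy-distributed step sizes mentioned above are commonly generated with Mantegna's algorithm, which this chapter also adopts later with β = 1.5. A minimal NumPy sketch (function and variable names are illustrative, not the authors' code):

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(size, beta=1.5, rng=None):
    """Draw Levy-distributed step sizes via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    # Standard deviation of the numerator Gaussian (Mantegna's formula)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # numerator sample
    v = rng.normal(0.0, 1.0, size)       # denominator sample
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed steps
```

Most steps are small (local search around the current solution), but the heavy tail occasionally produces long jumps, which is exactly the exploitation/exploration mix the Lévy flight provides.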
3.4 Diversification

The diversification, or exploration, process directs the search far enough from the current population's solutions that the search space is explored on a global scale. As a result, this search strategy is especially useful when solving multi-modal optimization problems, which are very common in engineering applications. The process is carried out using a biased/selective random walk together with the mutation and crossover operators [58], controlled by the egg discovery rate parameter, pa [25–27, 58]. It is worth remarking on the significance of the egg discovery rate parameter, pa, during the optimization process, since it controls the intensity of the global search by modifying the fraction of the total search time spent on global search [15, 29].
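The biased/selective random walk can be sketched as follows (a schematic NumPy sketch under our own naming; the exact operator details in [58] may differ):

```python
import numpy as np

def diversify(population, pa=0.25, rng=None):
    """Replace a random fraction (controlled by pa) of solution entries
    with far-field candidates built from two randomly paired peers
    (mutation + crossover style biased/selective random walk)."""
    rng = rng or np.random.default_rng()
    n, d = population.shape
    mask = rng.random((n, d)) < pa          # entries discovered and rebuilt
    p = population[rng.permutation(n)]      # first random pairing
    q = population[rng.permutation(n)]      # second random pairing
    step = rng.random((n, d)) * (p - q)     # biased step between two peers
    return np.where(mask, population + step, population)
```

Because the step is built from the difference of two population members, its scale adapts to the current spread of the population: wide early on (strong exploration), shrinking as the population converges.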
3.5 Update and Selection of Best

Through the iterative process, the intensification and diversification search strategies are applied to the cuckoo population until a predefined stop criterion, which can be a tolerance value or a fixed number of iterations, is met. During this iterative optimization process, the aim of the update process is to discriminate between solutions
generated at different generations, G and G + 1, in order to improve the quality of the population. The "selection of best" process, on the other hand, finds the best local optimum in the cuckoo population at each generation, promoting the convergence of the cuckoo population to a feasible global optimum. In general, the CS algorithm for FLC parameter optimization follows the same general approach as in [25, 26], but takes into account the specific meaning of the CS algorithm parameters with respect to the FLC to be optimized and its respective implications. A detailed description of these implications is presented in each case study section.
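The update and "selection of best" steps amount to a greedy, element-wise replacement followed by an argmin over the population; a minimal sketch (names are ours; minimization of the fitness is assumed):

```python
import numpy as np

def update_and_select(population, candidates, fitness):
    """Keep each candidate only if it improves its counterpart (update),
    then pick the generation's best solution (selection of best)."""
    f_pop = np.apply_along_axis(fitness, 1, population)
    f_new = np.apply_along_axis(fitness, 1, candidates)
    improved = f_new < f_pop                              # minimization
    population = np.where(improved[:, None], candidates, population)
    f_pop = np.where(improved, f_new, f_pop)
    best_idx = int(np.argmin(f_pop))
    return population, population[best_idx], f_pop[best_idx]
```

The greedy rule guarantees the population's quality never degrades from one generation to the next, while the argmin keeps a running best solution to guide subsequent intensification moves.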
4 Case Study 1: Fuzzy Logic Controller Parameter Optimization for a Magnetic Levitation System

4.1 Magnetic Levitation System Model

In general, a magnetic levitation system (MLS) can be described as a complex system comprising one or more electromagnets that apply a strong magnetic field to a ferromagnetic object in order to suspend it at a specific height [28]. Mathematically, it is a common example of a nonlinear system, modeled by a set of differential equations of which at least one is nonlinear, as given in Eqs. (1)–(5) [59]:

dr/dt = v    (1)

dv/dt = −Fem/m + g    (2)

di/dt = (1/fi(r)) · (ki·u + ci − i)    (3)

Fem = i² · (FemP1/FemP2) · e^(−r/FemP2)    (4)

fi(r) = (fiP1/fiP2) · e^(−r/fiP2)    (5)
In addition, the parameter values and descriptions of this model are listed in Table 1 [28, 59]. When designing a control system, the nonlinear nature of the MLS may lead to complex sensor/actuator parameter identification procedures and to complex mathematical models when a linearization design approach is used [59]. In this regard, modern nonlinear control techniques such as fuzzy logic avoid the need for complex models by introducing the designer's heuristic knowledge about
Table 1 Magnetic levitation system (MLS) model parameters [59]

Description | Parameter | Value | Units
Vertical position of the ferromagnetic object | r | [0, 0.016] | m
Velocity of the ferromagnetic object | v | [−inf, inf] | m/s
Current injected to the electromagnets | i | [−2.38, 2.38] | A
Mass of the ferromagnetic object | m | 0.0448 | kg
Gravity constant | g | 9.81 | m/s²
Electromagnetic force | Fem | 0.9501 | N
Constants to compute the electromagnetic force | FemP1 | 0.02110 | H
 | FemP2 | 0.00656 | m
Constants to compute the inductance value | fiP1 | 0.00277 | mS
 | fiP2 | 0.09787 | m
Constants to compute the static characteristic of current versus control signal | ci | −0.0326 | A
 | ki | 2.5081 | A
Control signal | u | [−1, 1] | V
Electromagnetic inductance function | fi | – | –
Fig. 2 Control system structure for MLS using FLC
the system and the desired performance into the controller. This fuzzy logic control approach is the one adopted in this work. For simulation purposes, a Simulink model representing Eqs. (1)–(5) and the parameters listed in Table 1 has been implemented; the resulting Simulink fuzzy logic-based control system structure and the Simulink MLS model, represented in terms of differential equations, are illustrated in Figs. 2 and 3, respectively [28].
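For readers without Simulink, the right-hand side of Eqs. (1)–(3), closed with Eqs. (4)–(5) and the Table 1 values, can be sketched in a few lines of plain Python (variable and function names are ours, not the authors'):

```python
import math

# Table 1 parameters [59]
m, g = 0.0448, 9.81                    # mass (kg), gravity (m/s^2)
Fem_P1, Fem_P2 = 0.02110, 0.00656      # electromagnetic force constants
fi_P1, fi_P2 = 0.00277, 0.09787        # inductance constants
ci, ki = -0.0326, 2.5081               # static current characteristic

def mls_derivatives(r, v, i, u):
    """Right-hand side of Eqs. (1)-(3) for state (r, v, i) and control u."""
    Fem = i ** 2 * (Fem_P1 / Fem_P2) * math.exp(-r / Fem_P2)  # Eq. (4)
    fi = (fi_P1 / fi_P2) * math.exp(-r / fi_P2)               # Eq. (5)
    dr = v                                                    # Eq. (1)
    dv = -Fem / m + g                                         # Eq. (2)
    di = (ki * u + ci - i) / fi                               # Eq. (3)
    return dr, dv, di
```

Feeding this function to any ODE integrator (e.g., a fixed-step Euler or Runge-Kutta loop) reproduces the open-loop dynamics that the Simulink model of Fig. 3 encapsulates.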
4.2 Fuzzy Logic Controller Design

Taking into account the MLS nonlinear dynamics modeled in the previous section, a fuzzy logic controller (FLC) will be designed with the aim of controlling the
Fig. 3 MLS model representation in Simulink for simulation purposes
vertical position r of the ferromagnetic object. This approach provides simplicity and robustness by embedding human heuristic knowledge about the desired controlled system performance into the controller [60–62]. The FLC comprises three main modules defined during the design process: fuzzification, the inference engine, and defuzzification [15, 60–62]. A Mamdani-type inference module will be used, since it is the most common choice in FLC design and is mainly oriented to local dynamics scenarios, because building the RB amounts to aggregating the control policies of local dynamics. Center-of-gravity defuzzification will be used, since it produces smooth outputs when the output of the fuzzy processing is a fuzzy set and is a highly plausible algorithm. For the design, a simple proportional-derivative (P-D) FLC type is chosen, with its inputs and output defined according to the control system structure illustrated in Fig. 2. The first input of the FLC is the vertical position error of the ferromagnetic object, the second input is the derivative of the error, and the output is the control signal [28]. The controller uses the information at its inputs to act on the controlled system by modifying the control signal, according to a predefined control objective related to a set of desired performance parameters. Furthermore, a regular distribution of triangular MFs has been considered for both input and output variables, from the perspective of ease of implementation on a low-cost digital processor, together with 25 rules to be evaluated. The MF and RB distributions are shown in Figs. 4, 5, and 6 and Table 2.
Fig. 4 MF used for position error input data

Fig. 5 MF used for derivative of error input data
Five MFs uniformly distributed along the respective universes of discourse are assigned to the FLC inputs and output, whereas the initial RB is defined according to the designer's heuristic knowledge of the desired MLS performance [28]. In the MFs depicted in Figs. 4, 5 and 6, B stands for "Big", S for "Small", N for "Negative", P for "Positive", and ZE for "Zero". The parameters to optimize are the MF positions for both FLC inputs and the output, and the RB values. The CS algorithm used to optimize these parameters is discussed later.
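The five uniformly distributed triangular MFs of Fig. 4 can be sketched as follows (illustrative Python; treating the boundary NB/PB functions as saturating shoulders is our assumption):

```python
def tri_mf(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Five uniformly spaced MFs over the position-error universe [-0.016, 0.016]
LABELS = ["NB", "NS", "ZE", "PS", "PB"]
CENTERS = [-0.016, -0.008, 0.0, 0.008, 0.016]

def fuzzify(x, step=0.008):
    """Membership degree of x in each of the five labels."""
    out = {}
    for lab, c in zip(LABELS, CENTERS):
        if lab == "NB" and x <= c:       # left shoulder saturates
            out[lab] = 1.0
        elif lab == "PB" and x >= c:     # right shoulder saturates
            out[lab] = 1.0
        else:
            out[lab] = tri_mf(x, c - step, c, c + step)
    return out
```

With this regular spacing, every crisp input activates at most two adjacent labels, and their degrees sum to one, which is what makes the 25-rule Mamdani evaluation cheap enough for a low-cost digital processor.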
Fig. 6 MF used for control signal output data
Table 2 Fuzzy logic initial rules for MLS control (control signal)

Derivative error | Position error
                 | NB | NS | ZE | PS | PB
NB               | PB | PB | PS | PS | ZE
NS               | PB | PS | PS | ZE | NS
ZE               | PS | PS | ZE | NS | NS
PS               | PS | ZE | NS | NS | NB
PB               | ZE | NS | NS | NB | NB
4.3 Definition of the Cost/Objective/Fitness Function

In general, the objective function can be regarded as the algorithm's decision-making condition. It enables the algorithm to discriminate between the obtained solutions and drives the convergence of solutions toward a feasible global optimum, subject to the problem restrictions, throughout the iterative process. For the present case study, the objective function involves the controlled MLS performance parameters. These parameters have been selected considering the desired MLS performance in terms of a typical second-order linear system [28]. In addition, the weighted sum method has been used because of its simplicity and because it expresses the preference of each objective of optimization according to the problem specifications. In this regard, the objective function is formulated in Eq. (6), as follows:

O = α1 · (%Mp,D − %Mp)/100 + α2 · (ts,D − ts)/10 + α3 · (rss,D − rss) × 100    (6)
where %Mp,D = 5%, ts,D = 0.5 s, and rss,D = 0.008 m are the desired MLS performance parameters (i.e., percentage overshoot, settling time, and final vertical position, respectively); %Mp, ts, and rss are the measured performance parameters, which at the end of the optimization process should match the desired parameters as closely as possible; and α1, α2, α3 are the weights of each component. It is worth pointing out that in the present case every component of the objective function has been normalized so that the assigned weights agree with the specific optimization preferences [63]. The weights in the objective function of Eq. (6) have been set to α1 = α2 = α3 = 1 so that all optimization objectives are prioritized equally [28]. In addition, a stop criterion for the CS algorithm has been defined in Eq. (7), considering a tolerance value set to 1% of the weighted sum of the desired performance parameters [28]:

tolerance = 1% · (%Mp,D + ts,D + rss,D)    (7)
The aim of the 1% tolerance value is to find a solution whose performance parameter values lie within 1% of the respective desired values, i.e., %Mp ∈ [4.95%, 5.05%], ts ∈ [0.495 s, 0.505 s], rss ∈ [0.00792 m, 0.00808 m].
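Equations (6) and (7) can be sketched numerically as follows (a minimal Python sketch; taking the absolute value of each deviation is our reading of the "match as closely as possible" goal, and all names are illustrative):

```python
# Desired MLS performance parameters (Sect. 4.3)
MP_D, TS_D, RSS_D = 5.0, 0.5, 0.008  # overshoot (%), settling time (s), position (m)

def objective(mp, ts, rss, a1=1.0, a2=1.0, a3=1.0):
    """Weighted-sum objective of Eq. (6): normalized deviations of the
    measured overshoot, settling time, and final vertical position."""
    return (a1 * abs(MP_D - mp) / 100.0
            + a2 * abs(TS_D - ts) / 10.0
            + a3 * abs(RSS_D - rss) * 100.0)

def tolerance():
    """Stop-criterion tolerance of Eq. (7): 1% of the weighted sum of
    the desired performance parameters."""
    return 0.01 * (MP_D + TS_D + RSS_D)
```

A perfect match gives objective(5.0, 0.5, 0.008) = 0, and under these values the search stops once the objective falls below tolerance() = 0.05508.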
4.4 Fuzzy Logic Controller Parameter Optimization

4.4.1 Initialization

As a first step, the number of parameters to optimize (i.e., the cuckoo dimension) is defined according to the FLC designed in Sect. 4.2. Since a triangular MF is defined by three parameters and every FLC input/output variable is assigned five MFs, the total number of MF parameters is the sum over all MFs, excluding those located at the maximum/minimum range values, which have one variable parameter and two fixed ones. The RB is defined by twenty-five parameters [28]. Under these considerations, the cuckoo dimension is the total number of FLC parameters to optimize, d = 58, whereas a population length of n = 25 is adopted according to previous studies [25, 26]. In addition, for the random generation of the initial population, a special consideration has been taken into account: the solutions obtained during the CS iterative process, and in consequence the best solution minimizing the objective function defined in Sect. 4.3, must preserve order and symmetry with respect to the initial parameters defined in the FLC design. Accordingly, the initial random population is generated by adding random displacements to every parameter of the uniformly distributed MF mapping illustrated
in Figs. 4, 5, and 6 and to the initial RB distribution shown in Table 2 (referred to from now on as the base MF and RB parameter distribution), according to Eqs. (8)–(11) presented in [28]:

ΩGi = [ΩMFi, ΩRBi]    (8)

ΩMFi = ΩMF(I) + R    (9)

R = U(−5% · RInp/Out, 5% · RInp/Out)    (10)

ΩRBi = ΩRB(I) + U(−1, 1)    (11)
where ΩMFi represents an individual random initial solution for the MF parameters, ΩMF(I) represents the base MF parameters illustrated in Figs. 4, 5, and 6, R is a set of random displacements drawn from a uniform distribution between ±5% of each input/output FLC variable range, RInp and/or ROut, ΩRBi represents an individual random initial solution for the RB, ΩRB(I) represents the base RB parameters defined in Table 2, U(−1, 1) is a set of random integer numbers drawn from a uniform distribution, defined with the aim of displacing the RB values to create a random initial solution, and ΩGi is a complete random initial solution comprising ΩMFi and ΩRBi at an initial generation Gi [28]. An example of this random MF generation procedure is illustrated in Fig. 7: starting from a base MF with parameters X, Y, Z, i.e., ΩMF(I) in Eq. (9), and adding random displacements δ, i.e., R in Eq. (10), a completely new random solution, i.e., ΩMFi in Eq. (9), is obtained. On the other hand, an example of a random initial RB solution generated by adding integer displacements to the base RB distribution (Table 2) is shown in Table 3. The described procedure for generating the initial random solutions has been implemented under two considerations.
Fig. 7 Example of random MF solution for the initial population
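Equations (8)–(11) can be sketched as follows (a hedged sketch: encoding the RB labels as integers −2 (NB) … +2 (PB) and clipping to that range are our assumptions; d = 58 = 33 MF + 25 RB parameters as in Sect. 4.4.1):

```python
import numpy as np

def initial_solution(mf_base, rb_base, ranges=None, frac=0.05, rng=None):
    """One random initial solution Omega_Gi, Eq. (8): MF parameters get
    uniform displacements within +/-5% of each variable's range
    (Eqs. 9-10); RB entries get integer displacements from {-1, 0, 1}
    (Eq. 11), clipped to the label range [-2, 2]."""
    rng = rng or np.random.default_rng()
    ranges = np.ones_like(mf_base) if ranges is None else ranges
    mf = mf_base + rng.uniform(-frac * ranges, frac * ranges)   # Eqs. (9)-(10)
    rb = np.clip(rb_base + rng.integers(-1, 2, size=rb_base.shape), -2, 2)  # Eq. (11)
    return np.concatenate([mf, rb])                             # Eq. (8)
```

Calling this n = 25 times yields the initial cuckoo population, each member a perturbed copy of the base MF/RB distribution of Figs. 4–6 and Table 2.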
Table 3 Example of an initial random RB solution after random integer displacements are added to the initial RB distribution in Table 2 (control signal)

Derivative error | Position error
                 | NB | NS | ZE | PS | PB
NB               | PB | PB | PB(+1) | PB(+1) | ZE
NS               | PS(−1) | PB(+1) | PB(+1) | PB(+2) | ZE(+1)
ZE               | PS | PS | ZE | ZE(+1) | NS
PS               | ZE(−1) | ZE | NS | NS | NS(+1)
PB               | NB(−2) | ZE(+1) | NB(−1) | NB | NB
The first consideration is that, although the CS algorithm's final results should be independent of the starting points, the algorithm might converge slowly to optimality due to the problem's high dimensionality and because the objective function landscape is unknown [25]. In this regard, defining an initial pattern for newly generated solutions (i.e., the MF and RB base parameters) raises the probability of generating a solution in the neighborhood of the optimum, thus increasing the convergence speed. The second consideration is the existence of a valid set of FLC parameters (i.e., MF and RB) that has a specific order, symmetry, and coherence with respect to fuzzy logic theory. This has been ensured by generating random solutions starting from the predefined base parameter distribution.
4.4.2 Intensification
After the initialization, the CS local search strategy generates new solutions in the neighborhood of the population's solutions. For the MF solutions, this process is carried out using a Lévy-flight random walk, where every new solution is generated by means of Eqs. (12) and (13), as follows [25, 26, 28, 58]:

ΩMF(G+1) = ΩMF(G) + α · Lévy(β)    (12)

α = α0 · (ΩMF(G) − ΩMF(best))    (13)
where, for the MF parameters, ΩMF(G+1) is a new solution generated in the neighborhood of a solution ΩMF(G) selected randomly from the population; Lévy(β) are random step sizes drawn from a Lévy distribution obtained using Mantegna's algorithm with a distribution coefficient β = 1.5 [25, 26]; ΩMF(best) is the best local optimum obtained during the "selection of best" process at every iteration [28]; and α0 is a scaling factor related to the length of cuckoo walks or flights in searching for new nests [64], generally set to a fixed value of 0.001 or 0.1 for most optimization problems [25, 26]. However, a recent study has suggested that a varying scaling factor makes the CS algorithm converge faster [58]. From this perspective, to avoid
new solutions jumping out of the search space and to incorporate the potential of a varying scaling factor into the CS algorithm for the present case study, a varying scaling factor has been formulated in Eq. (14), as follows:

α0 ~ U(0.01, 0.1)    (14)
On the other hand, the RB solutions' local search process is defined according to Eq. (15):

ΩRB(G+1) = ΩRB(G) + S · (ΩRB(G) − ΩRB(best))    (15)
where, for the RB parameters, ΩRB(G+1) is a new solution generated in the neighborhood of a solution ΩRB(G) selected randomly from the population, S ~ N(0, 1) is a set of integer numbers that represent RB displacements, and ΩRB(best) is the best local optimum obtained during the "selection of best" process at every iteration [28].
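One intensification move over a single solution, per Eqs. (12)–(15), can be sketched as follows (the Lévy sampler is passed in as a function; rounding the normal draws to integers for S is our reading of the text, and all names are illustrative):

```python
import numpy as np

def intensify(mf, mf_best, rb, rb_best, levy_step, rng=None):
    """One local-search move: Eqs. (12)-(14) for the MF parameters and
    Eq. (15) for the RB entries, both guided by the current best solution."""
    rng = rng or np.random.default_rng()
    alpha0 = rng.uniform(0.01, 0.1)             # Eq. (14), varying scale factor
    alpha = alpha0 * (mf - mf_best)             # Eq. (13)
    new_mf = mf + alpha * levy_step(mf.shape)   # Eq. (12), Levy-flight walk
    s = np.rint(rng.standard_normal(rb.shape))  # integer RB displacements
    new_rb = rb + s * (rb - rb_best)            # Eq. (15)
    return new_mf, new_rb
```

Note the self-limiting behavior this structure gives the search: the closer a solution is to the current best, the smaller both step terms become, so exploitation naturally tightens as the population converges.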
4.4.3 Diversification
This procedure ensures the generation of solutions on a global scale, far enough from the local optima of the population. For this purpose, a far-field randomization technique using the mutation and crossover operators is applied; following [58], it is formulated in Eq. (16) as follows:

Ω(G+1) = { ΩG + r · (Ωp(G) − Ωq(G)),  pa ≤ 0.25
           ΩG,                        pa > 0.25    (16)
where, for the present case study, Ω(G+1) represents a new candidate solution and ΩG is a solution selected from the cuckoo population [28], both comprising MF and RB parameters; r ~ U(0, 1) is a random number drawn from a uniform distribution; and Ωp(G), Ωq(G) are solutions selected randomly from the population.
4.4.4 Update and Selection of Best
The intensification and diversification search strategies are applied to every solution of the population, and low-quality solutions are replaced by new solutions only if the condition defined in Eq. (17) is met during the "update" process [15, 28]:

Ω(G+1) = { Ω(G+1),  O(Ω(G+1)) < O(ΩG)
           ΩG,      otherwise    (17)
In addition, a “selection of best” procedure is applied in order to update the best solution obtained during the CS iterative search, and to take this solution as the reference for the next iterations. This procedure compares each solution of the population with the best one obtained so far, according to Eq. (18):

Ω_best ← Ω_G,  if O(Ω_G) < O(Ω_best)   (18)
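The update and selection-of-best rules of Eqs. (17) and (18) amount to greedy comparisons on the objective O; a minimal sketch (names are illustrative):

```python
def greedy_update(candidate, current, objective):
    """Greedy replacement (Eq. 17): keep the candidate only if it improves
    the minimized objective O."""
    return candidate if objective(candidate) < objective(current) else current

def select_best(best, solution, objective):
    """Selection of best (Eq. 18): the stored best is replaced whenever a
    population member evaluates lower."""
    return solution if objective(solution) < objective(best) else best
```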
All the phases exposed in this section are summarized in the flowchart of Fig. 8 and in the pseudocode presented in Table 4.

Fig. 8 Flowchart of the FLC optimization process (Start → load MLS model → CS initial parameter definition → generation of the initial population, each solution representing one FLC → selection of best → intensification using the Lévy flights random walk → selection of best and update → diversification using the biased/selective random walk → selection of best and update → tolerance check, looping until the optimal FLC yields the desired MLS transient and steady-state responses → End)
G. García-Gutiérrez et al.

Table 4 Pseudocode for the CS algorithm applied to the FLC parameter optimization for a MLS

Load MLS model
Fitness function: O(x)
Define Tolerance value
Define initial parameters: n, d, β, pa
Generate initial population of n host nests of dimension d
Find best nest of the population
while (auxTolerance > Tolerance)
    cuckoo population intensification
    update
    selection of best
    cuckoo population diversification
    update
    selection of best
    Update auxTolerance value
end while
Postprocess results and visualization
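The pseudocode of Table 4 can be condensed into a runnable skeleton; here the Lévy-flight and biased/selective walks are replaced by trivial Gaussian defaults so the sketch is self-contained (all names and defaults are illustrative, not the authors' implementation):

```python
import numpy as np

def cuckoo_search(objective, dim, n=25, tol=1e-3, max_iter=100,
                  intensify=None, diversify=None, rng=None):
    """Skeleton of the CS loop of Table 4.

    `intensify` and `diversify` stand in for the Lévy-flight and the
    biased/selective random walks of Sects. 4.4.1-4.4.3.
    """
    rng = rng or np.random.default_rng(0)
    intensify = intensify or (lambda x: x + 0.1 * rng.standard_normal(dim))
    diversify = diversify or (lambda x: x + rng.standard_normal(dim))
    pop = [rng.uniform(-1.0, 1.0, dim) for _ in range(n)]   # initial nests
    best = min(pop, key=objective)                          # selection of best
    for _ in range(max_iter):
        if objective(best) <= tol:                          # tolerance check
            break
        for i in range(n):
            for walk in (intensify, diversify):             # both search phases
                cand = walk(pop[i])
                if objective(cand) < objective(pop[i]):     # update, Eq. (17)
                    pop[i] = cand
                if objective(pop[i]) < objective(best):     # best, Eq. (18)
                    best = pop[i]
    return best
```

In the chapter's setting, `objective` would evaluate the MLS step response of the FLC encoded by each nest; here any real-valued function of a d-dimensional vector can be plugged in.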
4.5 Simulation Results of the MLS System

The complete optimization process of the FLC parameters took an average of 8 h of computation time with 40 iterations solved, after several runs of the algorithm [28]. The processing machine was an Intel(R) Core(TM) i5-5250U CPU (2.7 GHz) computer with 4 GB of RAM, running the Matlab® program. The objective function convergence is illustrated in Fig. 9, where the iterative process finishes once the predefined tolerance value is reached by the algorithm. The obtained optimum FLC parameters, comprising MF and RB parameters, are shown in Figs. 10, 11 and 12, and Table 5. Notice that the optimized MFs keep the order and coherency required in accordance with the base MF parameters; likewise, the optimized RB parameters remain coherent in their logic
Fig. 9 Convergence of the objective function (objective function value vs. number of iterations)
(Figure: triangular MFs labeled NB, NS, ZE, PS, PB over the position error range −0.016 to 0.016 m)
Fig. 10 MF optimized for the FLC position error input variable
(Figure: triangular MFs labeled NB, NS, ZE, PS, PB over the derivative-of-error range −0.016 to 0.016)
Fig. 11 MF optimized for the FLC derivative of the error input variable
for controlling the MLS, in terms of the heuristic knowledge of the designer defined by the base MF and RB parameters (Sect. 4.2). With these results, the FLC tuned using CS has enabled the MLS to accomplish, as closely as possible, the specified settling time ts,D = 0.5 s, overshoot %Mp,D = 5%, and steady-state vertical position rss,D = 0.008 m [28]. For purposes of comparison, the step-response performance parameters of the MLS with the optimum FLC obtained with the CS algorithm, the MLS with the non-optimized FLC (i.e., the initial FLC designed in Sect. 4.2), and the MLS with a full state feedback controller (SS controller) designed using a model-based linearization approach as presented in [28, 59], are depicted in Figs. 13 and 14. A summary of all performance parameters for the three compared approaches is listed in Table 6.
(Figure: triangular MFs labeled NB, NS, ZE, PS, PB over the control-signal range −1 to 1)

Fig. 12 MF optimized for the FLC control signal output variable

Table 5 Fuzzy logic RB optimized for MLS control (control signal as a function of the position error and the derivative of the error)

Derivative error | Position error NB | NS | ZE | PS | PB
NB | NB | PB | PB | PB | PS
NS | NS | PB | ZE | PS | NS
ZE | ZE | ZE | NS | NS | NS
PS | PS | NB | NS | NB | NB
PB | PB | NB | NS | NB | NB
(Figure: position (m) vs. time (s), 0–1.5 s, comparing the optimized FLC, SS controller, non-optimized FLC, and setpoint)
Fig. 13 Steady-state step response comparison of the MLS with the optimized FLC, non-optimized FLC, and with the SS controller
(Figure: position, ×10⁻³ m, vs. time (s); optimized FLC: %Mp = 5.05%, ts = 0.5 s; SS controller: %Mp = 4.9%, ts = 0.53 s; rss = 0.008 m)
Fig. 14 Dynamic step response of the MLS with the optimized FLC, and with SS controller
Table 6 Performance parameters comparison for the MLS with the non-optimized FLC, the CS optimized FLC, and the SS controller

Parameter | Design value | Non-optimized FLC | CS optimized FLC | SS controller
rss (m)   | 0.008        | Inf               | 0.008            | 0.008
Mp (%)    | 5            | 0                 | 5.05             | 4.9
ts (s)    | 0.5          | Inf               | 0.5              | 0.53
As a result, the main advantage of using the CS algorithm for FLC parameter optimization in the presented approach is that no expertise from the designer is required for tuning the controller parameters [28]. The obtained results are quite similar to the ones obtained with the analytical approach, taking into account, however, that the latter requires complex prior procedures of modeling and parameter identification, which may result in a longer implementation time [59].
5 Case Study 2: Adjusting Parameters of the Controller of a Microgrid Energy Management System 5.1 Electro-Thermal Microgrid Topology In general, microgrids (MG) are defined as a relatively small distribution network composed of distributed generation units, loads, and energy storage systems (ESS) that are connected to the main distribution network at a single point of common
coupling (PCC). MGs can operate in either grid-connected or islanded (stand-alone) mode. Therefore, MGs include an energy management system (EMS) to ensure their safe and efficient operation [65–68]. This case study involves a domestic electro-thermal MG which has been presented in [29, 69, 70]. On the electrical side, the MG architecture includes a wind turbine (WT) of 6 kW, a photovoltaic (PV) generator of 6 kWp, an ESS of 36 kWh of useful battery capacity, and a domestic load of 7 kW. On the thermal side, the MG architecture under analysis includes a domestic hot water (DHW) system which comprises an electric water heater (EWH) of 2 kW, a flat plate collector (FPC) of 2 kW, thermal storage consisting of a deposit of 800 L of water capacity, and a domestic thermal demand (DTD) equivalent to 2 kW. The microgrid architecture is depicted in Fig. 15. The MG power balance, PLG, is defined in Eq. (19) as the difference between consumption power (i.e., the sum of the load power, PLOAD, and the EWH power, PEWH) and generation power (i.e., the sum of the PV power, PPV, and the WT power, PWT: PGEN = PPV + PWT), whereas the battery power, PBAT, is defined in Eq. (20) as the difference between PLG and the grid power, PGRID, as follows:

PLG = PLOAD + PEWH − PGEN   (19)

PBAT = PLG − PGRID   (20)
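The power balance of Eqs. (19)–(20) reduces to two signed sums; a minimal sketch (names are illustrative):

```python
def power_balance(p_load, p_ewh, p_pv, p_wt, p_grid):
    """Microgrid power balance (Eqs. 19-20), all powers in watts.

    Returns (P_LG, P_BAT): the net power and the battery share of it
    once the grid contribution is subtracted.
    """
    p_gen = p_pv + p_wt                # total generation: P_GEN = P_PV + P_WT
    p_lg = p_load + p_ewh - p_gen      # Eq. (19): consumption minus generation
    p_bat = p_lg - p_grid              # Eq. (20): battery covers the remainder
    return p_lg, p_bat
```

With the sign convention of Fig. 15, a negative P_LG means surplus generation, and a negative P_BAT means the battery is charging.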
Fig. 15 Grid-connected electro-thermal microgrid architecture. The arrow’s direction implies a positive power flow
5.2 Energy Management

As presented in [29, 69, 70], the main purpose of the EMS in this case study is smoothing the power exchanged with the grid while keeping the battery state-of-charge (SOC) and the temperature of the water in the tank between established limits. To achieve this goal, the EMS makes use of the energy of the ESS to cover, if possible, the energy requirements of the EWH. The block diagram of the EMS is described in [29, 69, 70]. In short, the power exchanged with the grid, at each sampling time TS, is defined in Eq. (21) as the sum of three variables, as follows:

PGRID = P*CTR + PSOC + PFLC   (21)
where the first variable, P*CTR, provides an average power profile to the MG and is obtained through a 24-hour central moving average (CMA) filter which uses the data from the previous 12 hours of the MG net power, P*LG, and the 12-h forecast data of the MG net power, P*LG,FC [69, 70]; the second variable, PSOC, is used to keep the battery SOC close to 75% of the battery charge; and the third variable, PFLC, the output of an FLC controller, is used to smooth the grid power profile according to both the prediction error of the previous 3 h (i.e., P*E = P*LG − P*LG,FC) and the battery SOC [68–71]. To quantify the quality of the grid power profile, a set of quality criteria (i.e., the maximum power delivered by the grid in one year, PG,MAX; the maximum power fed into the grid, PG,MIN; the maximum power derivative, MPD; the average power derivative, APD; the power variation range, PVR; and the power profile variability, PPV) is defined in [29, 67–69, 71, 72], so that the lower the criteria values are, the better the EMS performance is. These criteria are recalled in Eqs. (22)–(28) for the complete understanding of the chapter.

PG,MAX = max(PGRID)   (22)

PG,MIN = min(PGRID)   (23)

MPD = max |ṖGRID|   (24)

APD = (1/N) Σ_{k=1}^{N} |ṖGRID(k)|   (25)

ṖGRID(k) = [PGRID(k) − PGRID(k − 1)] / TS   (26)

PVR = (PG,MAX − PG,MIN) / (PLG,MAX − PLG,MIN)   (27)
PPV = ( Σ_{f=fi}^{ff} P²GRID,f )^(1/2) / PDC   (28)
where k and (k − 1) represent the current and previous samples, respectively; ṖGRID is the grid power profile ramp-rate (i.e., the slope of two consecutive samples of the grid power profile); N is the number of samples in one year; PGRID,f is the grid power harmonic at frequency f; fi and ff are the initial and final frequencies, respectively; and PDC is the yearly power average value. Note that fi = 1.65 × 10⁻⁶ Hz and ff = 5.55 × 10⁻⁴ Hz, so as to evaluate variation periods of one week or less [72].
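The time-domain criteria of Eqs. (22)–(27) can be computed directly from a sampled grid power profile; a sketch (names are illustrative; the absolute values follow the usual definition of these criteria; PPV of Eq. (28) would additionally require an FFT of the profile and is omitted):

```python
import numpy as np

def grid_quality(p_grid, p_lg, ts):
    """Quality criteria of Eqs. (22)-(27) for a grid power profile
    sampled every `ts` seconds; `p_lg` is the net-power profile used to
    normalize the power variation range."""
    p_grid = np.asarray(p_grid, dtype=float)
    ramp = np.diff(p_grid) / ts                      # Eq. (26): ramp-rate
    return {
        "PG_MAX": p_grid.max(),                      # Eq. (22)
        "PG_MIN": p_grid.min(),                      # Eq. (23)
        "MPD": np.abs(ramp).max(),                   # Eq. (24)
        "APD": np.abs(ramp).mean(),                  # Eq. (25)
        "PVR": (p_grid.max() - p_grid.min())
               / (np.max(p_lg) - np.min(p_lg)),      # Eq. (27)
    }
```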
5.3 Fitness Function Definition

Similarly to Sect. 4.3 of the previous case study, the optimization cost function (i.e., fitness function) is defined by Eqs. (29), (30) and (31), based on the defined quality criteria, as follows [29]:

O = w · F1 + (1 − w) · F2   (29)

F1 = PG,MAX / PG,MAX_REF + PG,MIN / PG,MIN_REF + MPD / MPD_REF   (30)

F2 = APD / APD_REF + PVR / PVR_REF + PPV / PPV_REF   (31)
where w is a real number used to prioritize either F1 or F2. Note that, to limit the variation range of F1 and F2, each quality criterion is normalized with respect to a reference value, which is intended to be minimized in the optimization process [63].
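The weighted cost of Eqs. (29)–(31) then reduces to two normalized sums; a sketch with illustrative dictionary keys:

```python
def ems_fitness(criteria, refs, w=0.5):
    """Weighted cost of Eqs. (29)-(31).

    `criteria` holds the six quality values and `refs` their
    normalization references, under the keys used below (the key names
    are an assumption of this sketch)."""
    f1 = sum(criteria[k] / refs[k] for k in ("PG_MAX", "PG_MIN", "MPD"))  # Eq. (30)
    f2 = sum(criteria[k] / refs[k] for k in ("APD", "PVR", "PPV"))        # Eq. (31)
    return w * f1 + (1 - w) * f2                                          # Eq. (29)
```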
5.4 Fuzzy Logic Parameters Adjusting

As mentioned in Sect. 3, the CS algorithm involves two phases. First, the algorithm creates the initial random population; then, the CS algorithm applies an iterative process with two random walks (i.e., Lévy flights and a biased/selective random walk) to search for new solutions within the boundaries of the search space, according to the problem restrictions [29]. Finally, the CS algorithm selects the solution with the best cost function evaluation from the set of solutions generated by the iterative process. For this case study, the CS algorithm creates a set of solutions with a dimension, d, defined according to the total number of parameters to optimize, which correspond
to the MF mapping and the RB. In this context, the MFs for the FLC inputs (P*E and SOC) and output (PFLC) of the EMS described in [70] and Sect. 5.2 are presented in [68] and [70], respectively, whereas the fuzzy RB is presented in [70]. In short, both inputs comprise five fuzzy subsets and thirteen parameters (i.e., three parameters for each of NS, ZE, and PS; and two parameters for each of NB and PB), whereas the FLC output comprises nine fuzzy subsets and twenty-seven parameters (i.e., three parameters for each MF). In addition, the fuzzy RB comprises twenty-five rules [70] and, thus, twenty-five parameters. Therefore, each solution is represented with d = 78 parameters (d = 13 + 13 + 27 + 25), with the population length fixed to n = 25 according to [25]. The initial random solutions are separately generated for the MF and RB parameters by means of Eqs. (32), (33), and (34), as follows [29]:

ΩG = [ΩMF(G), ΩRB(G)]   (32)

ΩMF(G) = [MFA + δA, MFB + δB, MFC + δC]   (33)

ΩRB(G) = [RB1 + ρ1, RB2 + ρ2, …, RB25 + ρ25]   (34)
where ΩG is an initial random solution comprising solutions for the MF and RB parameters; ΩMF(G) is a random initial solution for the MF parameters in which, at the first generation G, MFA, MFB, MFC are typical triangular MF parameters; δA,B,C ~ U(−5%·R, 5%·R) are random numbers drawn from a uniform distribution, with R the variable range of every input/output variable; ΩRB(G) is a random initial solution for the RB parameters in which, at the first generation G, RB1, RB2, …, RB25 are a set of initial RB parameters defined according to the heuristic knowledge of the microgrid; and ρ ~ N(0, 1) is a random number, rounded to an integer, that displaces the rules [29]. A 5% design value has been defined for the variation of R since the EMS is sensitive to small MF parameter variations. After the initialization, the algorithm starts the intensification process, so that better solutions can be obtained. As mentioned in Sect. 4.4.1, the CS intensification for MF parameters is carried out using a Lévy flights random walk with a step size drawn from a Lévy distribution, using Eqs. (12), (13), and (14). Similarly, the CS intensification for RB parameters is computed by means of Eq. (15). Finally, a global search is applied to find new solutions far enough from the current best solutions, by using the far-field randomization defined in Eq. (16). Both local and global searches are applied to every solution of the population, and a solution is replaced by another one or kept for the next generations according to Eq. (17), whereas the update process of the best solution in every generation G is defined by Eq. (18) [29]. A complete flowchart of the optimization process carried out in this case study is shown in Fig. 16 and its corresponding pseudocode is presented in Table 7.
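The initialization of Eqs. (32)–(34) might be sketched as follows (the ±1 integer jitter for the rules and the 0–4 label range are assumptions of this sketch, since the text describes ρ only loosely):

```python
import numpy as np

def initial_solution(mf_base, rb_base, ranges, rng=None):
    """One random initial solution (Eqs. 32-34).

    mf_base : list of triangular MF parameter arrays, one per variable
    rb_base : heuristic rule-base indices
    ranges  : range R of each input/output variable (for the +/-5% jitter)
    """
    rng = rng or np.random.default_rng()
    mf = [np.asarray(p) + rng.uniform(-0.05 * r, 0.05 * r, size=len(p))
          for p, r in zip(mf_base, ranges)]          # Eq. (33): delta ~ U(-5%R, 5%R)
    rb = np.asarray(rb_base) + rng.integers(-1, 2, size=len(rb_base))  # rho jitter
    return mf, np.clip(rb, 0, 4)                     # Eq. (32): [Omega_MF, Omega_RB]
```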
Fig. 16 Flowchart of the optimization process of the FLC parameters of an EMS of an electro-thermal microgrid
(Start → load demand and generation MG variables → CS initial parameter definition → generation of initial random solutions, each representing one FLC → EMS evaluation, fitness measure, and selection of best → Lévy flights random walk → EMS evaluation, fitness measure, and selection of best → replace low-quality solutions → biased/selective random walk → EMS evaluation, fitness measure, and selection of best → replace low-quality solutions → iterate)
where α > 0 is the step size, which can be determined according to the scale of the problem, and Levy(λ) calculates the search path by employing the Lévy distribution. Generation of the steps in the Lévy flight can be effectively performed using the Mantegna algorithm. The step length in this algorithm is calculated with the following equation:
Levy(λ) = u / |v|^(1/λ)   (4)
where λ is a parameter within the range 1 < λ ≤ 3, and u and v are drawn from normal distributions as given below:

u ~ N(0, σu²),  v ~ N(0, σv²)   (5)
The standard deviations, σu² and σv², are computed with the equation given below:

σu = [ Γ(1 + λ) · sin(πλ/2) / ( Γ((1 + λ)/2) · λ · 2^((λ−1)/2) ) ]^(1/λ),  σv = 1   (6)
where Γ refers to the gamma function. After calculating the nests' new positions with Lévy flights, their fitness values are evaluated and compared with the previous solutions in order to replace the worse solutions with the better ones. In the following step of the algorithm, the discovery of cuckoo eggs by the host bird is implemented with the probability parameter pa. For each nest, a random number within the range [0, 1] is generated, and if the value is less than pa, the egg is discovered and replaced with a new solution. For the nests that satisfy the
10 Cuckoo Search Based Backcalculation Algorithm …
above condition, a simple random walk is applied to create new positions using the formulation below:

Xi^(t+1) = Xi^(t) + randn · (Xj^(t) − Xk^(t))   (7)

where randn is a random number in [0, 1], and Xj^(t) and Xk^(t) are different nests selected with a random permutation function. After that, the new solutions are evaluated again through the fitness function, and the better solutions are transferred to the next generation. All the steps from Eqs. (3) to (7) are repeated until the termination criterion is reached. Finally, the best nest, Xbest, and its fitness value are reported as the output of the algorithm.
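Eqs. (4)–(7) together can be sketched in Python: Mantegna's algorithm for the Lévy step lengths, and the discovery/abandonment walk (function and variable names are illustrative):

```python
import math
import numpy as np

def levy_step(lam=1.5, size=1, rng=None):
    """Mantegna's algorithm for Levy-distributed step lengths (Eqs. 4-6)."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))
               ) ** (1 / lam)                       # Eq. (6); sigma_v = 1
    u = rng.normal(0.0, sigma_u, size)              # u ~ N(0, sigma_u^2), Eq. (5)
    v = rng.normal(0.0, 1.0, size)                  # v ~ N(0, sigma_v^2)
    return u / np.abs(v) ** (1 / lam)               # Eq. (4)

def abandon_nests(nests, pa=0.25, rng=None):
    """Host-bird discovery (Eq. 7): each nest is, with probability pa,
    moved by a simple random walk between two randomly permuted nests."""
    rng = rng or np.random.default_rng()
    nests = np.asarray(nests, dtype=float)
    n = len(nests)
    j, k = rng.permutation(n), rng.permutation(n)   # X_j, X_k
    discovered = rng.random(n) < pa                 # eggs found by the host
    step = rng.random((n, 1)) * (nests[j] - nests[k])
    return nests + discovered[:, None] * step
```

The heavy tail of `levy_step` is what lets the search occasionally take the long jumps that keep CS from stagnating in local optima.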
2.3 Backcalculation Algorithm: CS-ANN

The backcalculation algorithm is developed by combining the ANN forward response model and the CS optimization algorithm. The basic aim of the CS-ANN algorithm is to estimate the layer moduli of FDP, EAC and ERi, through the inversion process of FWD deflections. The backcalculation algorithm starts with the random initialization of n nests in the two-dimensional search space. The locations of the nests (EAC and ERi values) in the domain correspond to possible solutions of the problem. Together with the known layer thicknesses, the CS algorithm provides the layer moduli values to the ANN response model to calculate the FWD deflections numerically. The fitness of each nest is calculated by using the following function:

fitness(Xi) = 1 / ( 1 + Σ_{j=1}^{4} wj · (δj^FWD − δj^ANN)² )   (8)
where Xi is the position of the ith nest, and δj^FWD and δj^ANN are the measured and calculated deflections at sensor j, respectively. wj = [1, 1, 2, 4] is a set of scaling factors obtained from the preliminary analysis. The objective of the algorithm is to minimize the error between the FWD and ANN deflections, so that the maximum fitness of the nests is obtained with minimum deflection errors. After the fitness evaluations, the CS algorithm performs its own analysis procedures and updates the locations of the nests for the following generations. In this study, the termination criterion is the maximum number of iterations; therefore, the CS algorithm iteratively provides the input to the ANN model and analyzes the nests' fitness values for the given number of iterations. Finally, the nest which gives the best fitness to the field deflections is reported as the solution to the problem. The flowchart for the CS-ANN algorithm is depicted in Fig. 3.
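Eq. (8) is straightforward to implement; a sketch using the sensor weights from the text:

```python
import numpy as np

def nest_fitness(d_fwd, d_ann, weights=(1, 1, 2, 4)):
    """Fitness of a nest (Eq. 8): measured FWD vs. ANN-computed deflections
    at the four sensors; maximal (= 1) when the deflections match."""
    d_fwd = np.asarray(d_fwd, dtype=float)
    d_ann = np.asarray(d_ann, dtype=float)
    err = np.sum(np.asarray(weights) * (d_fwd - d_ann) ** 2)
    return 1.0 / (1.0 + err)
```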
A. Öcal and O. Pekcan
Fig. 3 Flowchart of CS-ANN backcalculation algorithm
To visualize the CS algorithm's searching approach, a randomly selected synthetic pavement section generated through the FE software is analyzed using the CS-ANN backcalculation algorithm. The algorithm utilizes a population of 10 nests throughout 100 iterations. In this problem, the search domain is a two-dimensional space constituted by EAC and ERi, with the axes normalized between 0 and 1 accordingly. The topology of the search space for several iterations is given in Fig. 4, and the global optimum solution, which corresponds to the maximum fitness value of 1, is marked at Xopt = (0.781, 0.385). The positions of the nests are randomly initialized at the first iteration and scattered in the space. As can be seen from the searching process, the nests explore the search space globally during around the first 15 iterations. Thanks to the elitism that CS provides, the best locations obtained during the iterations are transferred to the following generations. Therefore, the nests tend to move to the better locations found so far. After the 25th iteration, the search intensifies and the nests try to improve their fitness values around their local vicinities. At the final iteration, the best nest in the population is found at Xglobal_best = (0.762, 0.397), with a fitness value of 0.9994. The corresponding error rates between the estimated and real layer moduli values for EAC and ERi are 2.43%
Fig. 4 Convergence of nests along with the iterations of Cuckoo Search Algorithm
and 3.12%, respectively. Even though these error levels are acceptable, increasing the number of nests in the population would reduce the errors further.
3 Experimental Results

In order to investigate the performance of the proposed algorithm, two different databases are utilized. The first database consists of the layer properties and corresponding deflections calculated from the FE analyses of various FDP sections; however, these numerical analyses may not consider all the factors affecting pavement responses in the modeling stage. Therefore, it is essential to evaluate the proposed method through the analysis of actual field data gathered by an FWD device. For this reason, a second database covering the FWD test results of a road section is utilized in the experimental stage. To investigate the efficiency of the CS search algorithm, the well-known metaheuristic algorithms SGA, PSO, and GSA are also embedded to work with the ANN forward routine. Consequently, using both synthetic and field data, the performance of the CS-ANN method against the other metaheuristic-based backcalculation approaches is tested on various road sections. Moreover, a commercial backcalculation software, Evercalc, is also utilized in the analysis of field data for comparison purposes. To calculate the prediction accuracy of the layer moduli estimations, the Mean Absolute Percentage Error (MAPE) performance metric is utilized, as given in Eq. (9):

MAPE = (100% / n) · Σ_{j=1}^{n} | (yj − ŷj) / yj |   (9)
where yj and ŷj are the actual and predicted values, respectively, and n is the number of samples.
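A direct implementation of Eq. (9):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (Eq. 9), in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```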
3.1 Parameter Settings

In order to maintain consistency between the algorithms, every run of each algorithm is performed for 5000 function evaluations. Therefore, the population size is taken as 50, and the algorithms are run for 100 iterations. SGA is an evolutionary optimization approach inspired by the natural selection process and genetics. By simulating the survival-of-the-fittest approach, it explores the search space to reach the optimum solution of a problem [59]. In the SGA, there are two control parameters, the probabilities of crossover and mutation, which are selected as 0.74 and 0.1, respectively, based on a study conducted by the researchers [8].
PSO, which is also a population-based search algorithm, mimics the social behavior of a bird flock or a fish school [60]. The aim of the algorithm is to converge all the members of the population around the optimum point. The inertia weight w, cognitive factor c1, and social factor c2 are the parameters of the algorithm, and they are selected based on the preliminary analysis as 0.5, 2, and 2, respectively. Another metaheuristic algorithm used for comparison purposes is the GSA, which was developed taking inspiration from Newton's law of gravitation. GSA simulates the interaction of masses with each other caused by gravitational forces and implements their movement in the universe as a search approach [61]. α and G0 are the control parameters of GSA, and they are selected as 0.5 and 10⁸, respectively, in accordance with the performed initial sensitivity analyses and the values proposed by the researchers [42, 61]. In order to determine the best set of control parameters of CS, a sensitivity analysis is conducted for a randomly selected pavement section, with a population of 50 nests analyzed along 100 iterations. The ranges evaluated in the analyses are given in Eq. (10), and each parameter couple is run 5 times separately:

α ∈ {0.01, 0.03, …, 1},  pa ∈ {0.05, 0.1, …, 0.5}   (10)
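The batch sensitivity analysis over Eq. (10) amounts to a grid search with repeated runs; a sketch, where `run_backcalc` stands in for one CS-ANN run returning its best fitness (an assumed interface):

```python
import itertools
import numpy as np

def sensitivity_grid(run_backcalc, alphas, pas, repeats=5):
    """Grid sensitivity analysis over the CS parameters of Eq. (10).

    `run_backcalc(alpha, pa)` is assumed to return a best-fitness value;
    each (alpha, pa) couple is run `repeats` times and averaged, and the
    couple with the best average fitness is returned.
    """
    results = {}
    for a, p in itertools.product(alphas, pas):
        results[(a, p)] = np.mean([run_backcalc(a, p) for _ in range(repeats)])
    return max(results, key=results.get)
```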
It is reported in the literature that changes in the value of pa do not significantly affect the convergence rate [44]. Likewise, in the present sensitivity analysis, it is seen that the average fitness values remain insensitive to any increase in pa, except for small parameter values. However, it is known that the selection of the step size α has an effect on the search characteristics of the algorithm, and it should be adjusted based upon the problem. The range of the parameter α given in Eq. (10) is evaluated initially, and it is observed that higher values produce larger modifications, which move the nests away from the better solutions. For this reason, the search for α is focused on a narrower region, up to a value of 0.3. In Fig. 5a, the results
Fig. 5 Sensitivity analysis of CS parameters: a obtained best fitness values, b corresponding average error rates for estimated layer moduli
of the sensitivity analysis are illustrated with a heat map, where the contours refer to the average best fitness values obtained through the batch analysis. It is observed that most of the pa values greater than 0.15 produce similar fitness values. To decide on the best set of parameters, the associated average MAPE values for the EAC and ERi estimations are calculated and presented in Fig. 5b. It is noticed that α values between 0.02 and 0.06 yield smaller errors compared to higher ones. Besides, they ensure smaller position changes of the nests, and therefore an effective searching process is obtained for the backcalculation problem. Consequently, the best values of the control parameters are selected as 0.25 and 0.03, for pa and α, respectively.
3.2 Analysis of Synthetic Data

In this section, the performance of the CS-ANN backcalculation algorithm is investigated with a database generated from the FE solutions. This synthetic database consists of 40 randomly selected pavement sections, and the CS-ANN algorithm performs 30 separate runs for each of them. At the end of the analysis of each test, the estimated EAC and ERi values are compared using the MAPE function given in Eq. (9). As depicted in Fig. 6, the average of the 30 runs for each test section is marked with a point, together with an error bar that represents the limits of the upper and lower predictions. MAPE values for each algorithm are determined based on the average estimation of the layer moduli. The results presented in Fig. 6 show that the error rate of CS-ANN is 1.48%, while GSA-ANN, PSO-ANN, and SGA-ANN estimate EAC with MAPE values of 1.55%, 1.65%, and 1.63%, respectively. It is seen that the estimations of CS-ANN for the 30 individual runs remain within a very narrow range compared to the other approaches, especially GSA-ANN and SGA-ANN. Although PSO-ANN also produces solutions with low deviations, its average EAC estimations become slightly larger than the actual ones. On the other hand, GSA-ANN's average solutions give relatively lower errors, despite its predictions lying in a larger range compared to the others. It is also observed that all the algorithms find better solutions for the sections having a thinner asphalt layer and smaller layer moduli of elasticity. However, when higher stiffness values are investigated, larger errors may be observed. When the results on ERi predictions given in Fig. 7 are examined, it is observed that relatively better solutions, with the lowest MAPE of 1.52%, are reached by CS-ANN. Regarding the other approaches, GSA-ANN and SGA-ANN, whose deviations are larger as in the case of EAC, produce similar results with 1.60% and 1.64% errors, respectively.
In the PSO-ANN analysis, a general trend of underestimating the layer moduli values compared to the others is observed, with a 1.76% MAPE. For both EAC and ERi estimations, it is seen that CS-ANN produces more consistent solutions with a low margin of error compared to GSA-ANN and SGA-ANN. PSO-ANN can also provide steady solutions, but shows relatively larger error values.
Fig. 6 E AC estimations on synthetic data a CS-ANN, b GSA-ANN, c PSO-ANN, and d SGA-ANN
Therefore, the CS-ANN algorithm performs competitively in comparison with the other approaches.
3.3 Analysis of Field Data

The use of field data for verification of the proposed algorithm is an essential part of the study. Therefore, a set of FWD test results conducted on an FDP section to observe the structural capacity of the pavement is utilized in this part of the chapter. The FWD dataset utilized in the study is extracted from the Long-Term Pavement Performance (LTPP) Program's online database, which is accessible from all around the world [62]. It contains extensive information about the observed pavement structures in the USA and Canada, such as traffic and environmental data, different field and laboratory test results, etc. The selected FDP section is located in Indiana, USA, with the ID number 18-A350. This section was kept under observation within the scope of the LTPP Specific Pavement Studies-3 (SPS-3) between 1987 and 1995. The section was first constructed with a 390 mm AC course by placing it directly on fine-grained, lean
Fig. 7 E Ri estimations on synthetic data a CS-ANN, b GSA-ANN, c PSO-ANN, and d SGA-ANN
inorganic clay in 1975, and an aggregate seal coating with a thickness of 10 mm was applied in 1990. During the time interval when the 18-A350 section was under the LTPP SPS-3 investigation program, four FWD tests were performed, in 1990, 1993, 1994, and 1995, respectively. In typical FWD tests, several different loading schemes are applied, and the associated deflections are calculated along successive stations on that part of the road. In this study, as only the loading scheme with a pressure value around 552 kPa is considered, the forward response model was developed only for this impact. Therefore, the deflection data collected from two regions of the same road section, mid-lane (F1) and outer wheel path (F3), are extracted from the database. For comparison purposes, Evercalc, a traditional backcalculation software, is utilized in the analyses of the LTPP data. The program employs a layered elastic analysis program, WESLEA, as the forward response analysis tool and utilizes a modified Augmented Gauss–Newton algorithm as the optimization approach. In order to analyze the LTPP data, the related deflection basins of the tests and the associated pavement thicknesses are given to the backcalculation algorithm. Each year's deflection data is analyzed 10 times, and the results are then reported accordingly. In Fig. 8, the averages of the layer moduli estimations, with error bars referring to the estimated maximum and minimum values, are presented for Evercalc, CS-ANN, GSA-ANN, PSO-ANN, and SGA-ANN, respectively. In Fig. 8a, b, it is seen that
Fig. 8 LTPP field data analysis of a E AC F1 Region, b E AC F3 Region, c E Ri F1 Region, and d E Ri F3 Region
the solutions of all the backcalculation-based approaches show good agreement with the Evercalc results for the asphalt layer moduli. The CS- and PSO-based methods give more consistent predictions, since the extents of their solutions deviate from the mean less than the ones based on GSA and SGA. Despite the similar performance of the metaheuristic-based approaches, the ERi results given in Fig. 8c, d slightly differ from the Evercalc solutions. This variation mostly comes from the material behavior taken into account in the forward analysis stage of the backcalculation methods. The ANN engine employed in the study is based on nonlinear elastic modeling of the subgrade soil, while WESLEA considers linear elasticity of the layer. It is also noticed that larger variations are observed compared to the EAC estimations; however, CS and PSO still give steadier solutions compared to the others.
3.4 Evaluation of Convergence Performance

In this section, the performances of CS, GSA, SGA, and PSO in reaching the optimum solution are demonstrated with four randomly selected synthetic pavement sections. For each pavement section, the algorithms' best fitness values obtained so far are plotted in Fig. 9, along 5000 function evaluations. The figure shows that CS approaches the close vicinity of the best fitness value at around 2000 function evaluations and, for the remaining function evaluations, continues to improve the solutions. Generally,
Fig. 9 Convergence performance of algorithms for randomly selected 4 sections
it is noticed that CS reaches the best fitness values after 3000 function evaluations. It is also clearly seen that PSO approximates the best solutions at a very early stage of the analysis, and after 1000 function evaluations no significant improvement in the best fitness value is observed. On the other hand, the GSA and SGA techniques generally show slower convergence performance. Consequently, it is observed that CS shows better performance in reaching the best fitness than the other algorithms. The algorithms are developed in the MATLAB programming environment and implemented on a personal computer with an Intel Core i7-4700 CPU (2.40 GHz) and 16 GB of RAM. The runtime of the CS-ANN, GSA-ANN, PSO-ANN, and SGA-ANN algorithms for 5000 function evaluations of a backcalculation problem is calculated as the average of 10 separate runs. It is seen that there is no significant difference between the approaches' runtimes, as they all complete their analyses within 19 ± 0.5 s. It should also be noted that the runtime might vary according to the implementation of the optimization algorithms.
4 Conclusions

In this study, an inversion algorithm, CS-ANN, is developed to backcalculate the mechanical layer properties of full-depth asphalt pavements. The objective of the
10 Cuckoo Search Based Backcalculation Algorithm …
249
study is to investigate the efficiency of the CS-ANN algorithm in pavement backcalculation problems and to evaluate the search performance of CS in comparison to the metaheuristic techniques popular in the pavement backcalculation area: GSA, PSO, and SGA. The proposed CS-ANN algorithm consists of two main components: (i) an ANN forward response model used to compute the deflections numerically, and (ii) the CS optimization technique utilized to minimize the error between ANN estimations and FWD measurements. Here, ANN replaces the nonlinear FE model of FDP pavements and computes the deflections in a fast, robust, and realistic manner. CS adapts the aggressive breeding strategy of cuckoos and the characteristics of Lévy flights into a search algorithm that explores the domain to approximate the optimum solution of the backcalculation problem. Within this context, the performances of the CS-ANN backcalculation algorithm and the CS search algorithm are validated using synthetically generated and actual field data. As a result of the analyses carried out on the synthetic data, CS-ANN exhibits better performance for both E_AC and E_Ri estimations, with comparatively lower error rates than the other approaches. The estimated layer moduli clearly reveal that CS-ANN produces more consistent solutions than GSA-ANN, PSO-ANN, and SGA-ANN. Although GSA-ANN presents the closest performance to CS-ANN, the results of the CS-ANN algorithm can be considered more reliable and consistent because of the wide range of solutions GSA-ANN reaches. In the analysis of field data, the E_AC predictions of all the backcalculation algorithms are in good agreement with the Evercalc solutions, since each method assumes that the asphalt layer is a linear elastic material. However, different estimations are observed for the E_Ri values.
This discrepancy originates from the assumed material behaviors: WESLEA, the forward response engine of Evercalc, considers the subgrade linear elastic, while the ANN forward model of CS-ANN takes into account the more realistic stress-sensitive nonlinear elastic nature of the subgrade. When the performance of the algorithms is examined on the field data, CS and PSO produce more consistent results with smaller standard deviations than GSA and SGA. Moreover, the convergence performance of the algorithms supports the finding that CS reaches better fitness values in the analysis of the hypothetical data. Consequently, CS presents competitively better performance than PSO, SGA, and GSA. In conclusion, considering the performance of CS-ANN on both hypothetical and field data, the proposed approach is an efficient tool to backcalculate the layer moduli of FDPs. With a properly trained ANN model, the algorithm can estimate layer properties very quickly with a low margin of error. It is also revealed that CS can be successfully adapted to backcalculation problems as a capable search technique. The superior performance of CS against the well-known metaheuristic methods is partly due to the smaller number of control parameters to be fine-tuned. Studies show that changes in the algorithm parameter p_a do not significantly affect the convergence performance of CS; therefore, a problem-specific sensitivity analysis might not be needed for p_a. On the other hand, the value
α determines how aggressively the nests explore the search space; it is directly related to the dimensionality of the problem and should be selected accordingly. In this study, the best performance is reached with an α value of 0.03 as a result of the analyses performed. Another strength of CS originates from the concepts the algorithm is built on: elitism, exploitation by random walks, and exploration by Lévy flights. By striking a good balance between these concepts, the CS algorithm becomes a powerful technique for seeking the mechanical layer properties of pavements. In short, owing to its ability to produce consistent and accurate solutions, CS-ANN stands out in comparison to the other metaheuristics in the pavement backcalculation literature. This performance reveals the potential of the CS algorithm to be implemented in the back analysis of different types of pavement structures in future studies.
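To illustrate how p_a and α interact, a minimal one-dimensional CS iteration can be sketched as follows. The Lévy steps are drawn with Mantegna's algorithm; all function names, parameter values, and the 1-D setting are illustrative simplifications, not the chapter's actual implementation.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_iteration(nests, fitness, alpha=0.03, pa=0.25, bounds=(0.0, 1.0)):
    """One CS generation: Levy-flight moves plus abandoning a fraction pa."""
    lo, hi = bounds
    best = min(nests, key=fitness)
    new_nests = []
    for x in nests:
        # alpha scales how aggressively a nest explores around the best solution
        trial = min(max(x + alpha * levy_step() * (x - best), lo), hi)
        new_nests.append(trial if fitness(trial) < fitness(x) else x)  # greedy, elitist
    # a fraction pa of the worst nests is abandoned and rebuilt at random
    new_nests.sort(key=fitness)
    n_drop = int(pa * len(new_nests))
    for i in range(len(new_nests) - n_drop, len(new_nests)):
        new_nests[i] = random.uniform(lo, hi)
    return new_nests
```

Because the replacement step is greedy and abandonment only touches the worst nests, the best fitness found so far can never deteriorate between generations, which is the elitism the conclusion refers to.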
Chapter 11

An Objective-Based Design Approach of Retaining Walls Using Cuckoo Search Algorithm

E. B. Tutuş, T. Ghalandari, and O. Pekcan

E. B. Tutuş · O. Pekcan (B)
Civil Engineering Department, Middle East Technical University, Ankara, Turkey
e-mail: [email protected]
E. B. Tutuş
e-mail: [email protected]
T. Ghalandari
Construction Department, University of Antwerp, Antwerp, Belgium
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_11

1 Introduction

As an inevitable part of sustainable infrastructure development, retaining walls are widely used geo-structures that restrain natural or man-made soil masses from moving laterally. They are preferred in many projects related to road construction, excavations, bridge abutments, etc., especially when soil slopes have destructive potential in terms of both human and work safety. Their engineering design philosophy is generally based on providing enough lateral support to unstable ground. During the design process of retaining structures, lateral forces play an essential role in the selection of the wall type. Among many options such as cantilever reinforced, gravity, semi-gravity, broken-back, and anchored retaining walls, the most suitable one for the area of interest is usually chosen according to the lateral force acting on the retaining system. The conventional design process of retaining walls generally starts with an educated guess for the initial solution, which is either based on engineering judgment or taken from the literature. The initial proposal is then evaluated against the design requirements. If these requirements are not fulfilled, the design process starts from the beginning with new design variables until the decision criteria are met. Two major drawbacks exist in this procedure: first, an optimum design solution is not guaranteed; second, if any design parameter changes, the design
operation goes back to the first step, which is highly time-consuming. Therefore, considering this iterative process, reaching an acceptable solution within a reasonable time generally depends on the experience of the design engineer. Consequently, an inefficient design may waste both workforce and other resources such as time and cost. In the end, although the design requirements are theoretically ensured, the solution development is limited by the experience of the engineer and the investment of time. During the design process, engineers try to achieve two main objectives: (i) an acceptable design, and (ii) the optimum target value, which can be set as the minimum cost, the minimum overall weight of the retaining structure, or both, depending on the project type. In this sense, the conventional design approach does not present a cost-effective solution and/or sometimes produces larger cross-sections, which are not only heavier but also environmentally unfriendly due to the concrete casting. Ideally, the proposed design solutions should address both cost- and weight-related issues in retaining walls. In recent years, to overcome the abovementioned problems of conventional design procedures and to obtain better design solutions, optimization algorithms in which the design task of retaining walls is expressed in mathematical form have been increasingly proposed. In addition to the commonly used gradient-based optimization methods, metaheuristic algorithms are also considered viable, as they have the potential to produce more efficient designs that satisfy the objective functions quickly and robustly [1]. Metaheuristic algorithms, a focal point of interest for nonlinear optimization problems, generally adapt natural phenomena and the instinctive behavior of creatures in their search processes.
Compared to conventional optimization methods, they take a different approach to finding optimal solutions: (i) they do not require gradient information, which generally complicates the solution when numerous design variables are handled, and (ii) they are generally not dependent on the initial solution in the design space. Metaheuristics, however, do not guarantee a unique solution over multiple runs. Still, studies show that they outperform conventional optimization methods and can be successfully applied to nonlinear optimization problems [2, 3]. The Firefly Algorithm, Cuckoo Search (CS), Particle Swarm Optimization (PSO), Bat-Inspired Search, Ant Colony Optimization (ACO), Harmony Search (HS), the Imperialist Competitive Algorithm (ICA), Charged System Search (CSS), Simulated Annealing (SA), and the Genetic Algorithm (GA) can be listed under the umbrella of well-known metaheuristic algorithms [4]. In the literature, many studies propose metaheuristics for the optimum design of retaining walls. For example, Saribas and Erbatur [1] and Ceranic et al. [5] suggested using metaheuristics for the optimization of retaining walls instead of the conventional engineering approach. Various others offered different techniques for the same problem, such as SA by Villalba et al. [6] and Yepes et al. [7], ACO by Ghazavi and Bazzazian Bonab [8, 9], ICA by Pourbaba et al. [10], the Gravitational Search Algorithm by Khajehzadeh et al. [11], CSS by Talatahari and Sheikholeslami [12] and Talatahari et al. [13], and Big Bang–Big
Crunch by Camp and Akın [14]. Besides, Ahmadi and Varaee [15] used a swarm intelligence method based on the PSO algorithm to optimize reinforced concrete retaining walls, Sadoglu [16] discussed symmetric cross-sectional gravity retaining walls of different heights, Kaveh and Abadi [17] implemented the HS algorithm, and Gandomi et al. [18], Gandomi and Kashani [19], and Kashani et al. [20] studied the performance of different swarm intelligence and evolutionary algorithms in finding optimal design solutions. Yalcin et al. [21] proposed an automated approach to design mechanically stabilized earth walls by applying different metaheuristic algorithms. In addition, other researchers conducted parametric and probabilistic studies on cantilever retaining walls [7, 22–24]. In light of its simplicity, its few tuning parameters, and its competitive power compared to the other metaheuristics reported in the literature, this study uses the Cuckoo Search (CS) algorithm, a swarm-based metaheuristic method that imitates the reproductive behavior of cuckoo birds, for the optimization of reinforced concrete cantilever retaining walls.
2 Design Steps of Reinforced Retaining Walls

Figure 1 shows a schematic of a reinforced concrete cantilever retaining wall together with the design parameters, i.e., the geometry of the wall and the amount of steel used to reinforce the concrete members. This type of retaining wall consists of three main parts: stem, toe, and heel. In some cases, a shear key is also included in the design to ensure the resistance of the retaining structure against sliding.

Fig. 1 Geometrical parameters of a reinforced cantilever retaining wall

The
base shear key imposes an extra passive force that contributes to the resisting forces against the driving forces. Although sliding failure of a wall can also be prevented by increasing its total weight, constructing a shear key is more cost-effective than building a more massive wall. The design stages of reinforced retaining walls are composed of two main parts: (1) geotechnical and (2) structural. In the geotechnical design phase, the whole structure is theoretically assumed to respond as a rigid body and is checked for external stability. The three possible failure modes considered in the evaluation of external stability are sliding along the base, overturning about the toe, and bearing capacity failure of the foundation. Sliding failure occurs along the foundation base, overturning happens about the toe, and bearing capacity failure is due to the loss of soil support beneath the foundation. In the structural design phase, all structural sections of a cantilever retaining wall, i.e., the stem, heel, toe, and shear key (if present), are checked for internal stability against both flexural moments and shear forces.
2.1 Geometrical Design Parameters

The schematic retaining wall given in Fig. 1 involves 12 design parameters. The parameters X1 to X8 are related to the geometry of the wall, and the remaining variables R1 to R4 are the reinforcement indexes of the different structural sections. Note that the shear key is an optional part of a cantilever retaining wall; hence X6, X7, X8, and R4 are used only for a wall with a shear key. According to the definition used in this chapter, X1, X2, and X3 correspond to the total width of the base, the width of the toe, and the stem thickness at the bottom, respectively. The parameters X4 and X5 refer to the stem width at the top of the wall and the base slab thickness. The parameters related to the shear key, X6, X7, and X8, are the distance of the shear key from the toe, and the width and depth of the shear key, respectively. Among the reinforcement indexes, R1 and R2 refer to the vertical and horizontal steel reinforcement of the stem and toe, respectively, while R3 and R4 correspond to the reinforcement index of the heel and the vertical reinforcement of the shear key. One should be aware that R1–R4 are not dimensions but indexes selected from a pool of design variables, given in Table 1. The table presents the possible combinations of bar numbers and bar sizes together with selection criteria such as the minimum bar spacing, minimum concrete cover, and maximum bar spacing, which are related to the specific bar size.
Table 1 Reinforcement selection pool

Index number | Number of bars | Bar size (mm) | Area (cm2)
1            | 3              | 10            | 2.356
2            | 4              | 10            | 3.141
3            | 3              | 12            | 3.392
...          | ...            | ...           | ...
221          | 16             | 30            | 113.097
222          | 17             | 30            | 120.165
223          | 18             | 30            | 127.234
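Each area entry in Table 1 is simply the number of bars times the cross-sectional area of one bar, n·π·d²/4, converted to cm². A quick check (the function name is ours, not the chapter's):

```python
import math

def bar_group_area_cm2(n_bars, bar_dia_mm):
    """Total steel area of a reinforcement group in cm^2 (Table 1 entries)."""
    d_cm = bar_dia_mm / 10.0  # mm -> cm
    return n_bars * math.pi * d_cm ** 2 / 4.0
```

For instance, index 1 (3 bars of 10 mm) gives 2.356 cm² and index 223 (18 bars of 30 mm) gives 127.234 cm², matching the first and last rows of the table.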
2.2 Geotechnical Stability of Reinforced Cantilever Retaining Walls

Figure 2 illustrates the typical forces that act on a cantilever retaining wall. Among these forces, W_c refers to the weight of the concrete wall, while W_H and W_T are the weights of the soil retained at the back of the wall and on the toe, respectively. The inclination of the backfill slope is denoted by β, and the resultant forces of the passive and active lateral earth pressures are given as P_P and P_A. The resultant force from the bearing stress of the base soil and the uniform load acting on the backfill are denoted by P_B and Q, respectively. P_A and P_P are calculated using Rankine's earth pressure theory, while the bearing capacity of the base soil is calculated using Terzaghi's bearing capacity theory. The Rankine active earth pressure coefficient for an inclined backfill is given in Eq. 1, and the Rankine passive earth pressure coefficient in Eq. 2.

K_a = \cos\beta \, \frac{\cos\beta - \sqrt{\cos^2\beta - \cos^2\theta}}{\cos\beta + \sqrt{\cos^2\beta - \cos^2\theta}}   (1)

K_p = \tan^2\left(45 + \frac{\theta}{2}\right)   (2)
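Eqs. 1 and 2 can be evaluated directly; the sketch below (function names are ours) reduces to the familiar K_a = (1 − sin θ)/(1 + sin θ) when the backfill is horizontal (β = 0).

```python
import math

def rankine_ka(beta_deg, theta_deg):
    """Rankine active earth pressure coefficient for inclined backfill (Eq. 1)."""
    b, t = math.radians(beta_deg), math.radians(theta_deg)
    root = math.sqrt(math.cos(b) ** 2 - math.cos(t) ** 2)
    return math.cos(b) * (math.cos(b) - root) / (math.cos(b) + root)

def rankine_kp(theta_deg):
    """Rankine passive earth pressure coefficient (Eq. 2)."""
    return math.tan(math.radians(45.0 + theta_deg / 2.0)) ** 2
```

For θ = 30° and a horizontal backfill, this yields K_a = 1/3 and K_p = 3; steepening the backfill slope increases K_a, i.e., the active thrust grows.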
where β is the slope of the retained soil backfill and θ is the friction angle of the backfill material. One of the external failure modes of retaining walls is overturning about the toe. For overturning failure, the factor of safety is defined as the ratio of the resisting moments to the overturning moments about the toe. The resisting forces include the weight of the concrete wall W_c, the backfill soil weight W_H, and the vertical component of the active lateral force P_{A,vertical}. Hence, the safety factor against overturning failure is calculated using Eq. 3.
Fig. 2 Active forces in a symbolic cantilever retaining wall
FS_{Overturning} = \frac{M_R}{M_O}   (3)
where M_R and M_O are the total resisting and overturning moments, respectively. Another failure mode of retaining walls is the translation of the wall relative to the base soil. The factor of safety against sliding failure is defined in Eq. 4.

FS_{Sliding} = \frac{F_R}{F_D}   (4)
where F_R and F_D are the total resisting and driving forces, respectively. These two forces are calculated using Eqs. 5 and 6:

F_R = W_{Wall} \tan\left(\frac{2\varphi_{base}}{3}\right) + \frac{2 B c_{base}}{3} + P_p   (5)
F_D = P_A \cos\beta   (6)
where W_{Wall} is the weight of the concrete cast in the wall, B refers to the width of the foundation slab, and φ_{base} and c_{base} are the friction angle and cohesion of the base soil, respectively. P_p is the passive earth pressure, calculated using Eq. 7.

P_P = \frac{1}{2}\gamma_{base} D_l^2 K_p + 2 c_{base} D_l \sqrt{K_p}   (7)
where γ_{base} corresponds to the dry unit weight of the base soil, D_l refers to the height of the soil deposited at the wall front, and K_p is obtained from Eq. 2. Another failure mode of retaining walls is caused by the loss of bearing capacity of the supporting soil beneath the foundation. The factor of safety computed for bearing capacity is given in Eq. 8.

FS_{Bearing\,Capacity} = \frac{q_{ult}}{q_{max}}   (8)
where q_{ult} and q_{max} correspond to the ultimate bearing capacity of the foundation soil and the maximum bearing pressure, respectively; q_{max} and q_{min} are determined using Eq. 9.

q_{max}, q_{min} = \frac{V}{B}\left(1 \pm \frac{6e}{B}\right)   (9)
where V is the sum of the vertical forces and e refers to the eccentricity of the resultant force, calculated as in Eq. 10.

e = \frac{B}{2} - \frac{M_R - M_O}{V}   (10)
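The external stability checks of Eqs. 3–10 can be combined into a short routine. The helper names and the numbers below are illustrative only, not taken from the chapter.

```python
import math

def sliding_resistance(w_wall, base_width, phi_base_deg, c_base, p_passive):
    """F_R from Eq. 5: friction on 2/3 of phi, adhesion on 2/3 of c, plus P_p."""
    return (w_wall * math.tan(math.radians(2.0 * phi_base_deg / 3.0))
            + 2.0 * base_width * c_base / 3.0 + p_passive)

def safety_factors(m_resisting, m_overturning, f_resisting, f_driving, q_ult, q_max):
    """Factors of safety from Eqs. 3, 4 and 8 (overturning, sliding, bearing)."""
    return (m_resisting / m_overturning,
            f_resisting / f_driving,
            q_ult / q_max)

def base_pressures(v_total, base_width, m_resisting, m_overturning):
    """Eccentricity (Eq. 10) and the pair q_min, q_max (Eq. 9)."""
    e = base_width / 2.0 - (m_resisting - m_overturning) / v_total
    q_min = v_total / base_width * (1.0 - 6.0 * e / base_width)
    q_max = v_total / base_width * (1.0 + 6.0 * e / base_width)
    return e, q_min, q_max
```

With a concentric resultant (e = 0) the base pressure is uniform, q_min = q_max = V/B; a negative q_min would flag the "soil in tension" condition listed later in Table 2.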
2.3 Structural Strength Requirements of Reinforced Cantilever Retaining Walls

Apart from the external stability of retaining walls, the internal stability of the concrete members should be checked for the different sections. For this purpose, the ACI 318M-14 specification is used to evaluate the structural strength requirements of all sections. The moment and shear capacities of all parts of the wall (toe, heel, stem, and shear key) must be greater than the applied moment and shear (Eqs. 11 and 12). The flexural moments and shear forces of each section are calculated at the most critical part of the section.
\frac{M_n}{M_d} \geq 1   (11)

\frac{V_n}{V_d} \geq 1   (12)
The nominal flexural and shear strengths are calculated using Eqs. 13 and 14.

M_n = \varphi_1 A_s f_y \left(d - \frac{a}{2}\right)   (13)

V_n = \varphi_2 \, 0.17 \sqrt{f_c'} \, b d   (14)
where φ_1 is the strength reduction factor for flexure, taken as 0.9 (ACI 318M-14); A_s stands for the steel reinforcement area; f_y is the yield strength of the reinforcing steel; d is the effective depth measured from the compression surface; a is the depth of the equivalent stress block; φ_2 is the strength reduction factor for shear, taken as 0.75 (ACI 318M-14); b is the breadth of the concrete section; and f_c' is the compressive strength of the concrete.
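Eqs. 13 and 14 can be sketched as follows. The chapter does not state how the stress-block depth a is obtained; the sketch assumes the standard ACI rectangular stress block, a = A_s f_y / (0.85 f_c' b), and the function names and units (N, mm, MPa) are our choices.

```python
import math

def nominal_moment(as_mm2, fy_mpa, d_mm, fc_mpa, b_mm, phi1=0.9):
    """Design flexural strength per Eq. 13, in N*mm.

    Stress-block depth a = As*fy / (0.85*fc'*b) follows the standard ACI
    rectangular stress block (an assumption; the chapter does not state it).
    """
    a = as_mm2 * fy_mpa / (0.85 * fc_mpa * b_mm)
    return phi1 * as_mm2 * fy_mpa * (d_mm - a / 2.0)

def nominal_shear(fc_mpa, b_mm, d_mm, phi2=0.75):
    """Design shear strength per Eq. 14 (SI form of ACI 318M-14), in N."""
    return phi2 * 0.17 * math.sqrt(fc_mpa) * b_mm * d_mm
```

A section passes the checks of Eqs. 11 and 12 when these values divided by the applied moment M_d and shear V_d are at least 1.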
2.4 Design Constraints for Reinforced Concrete Cantilever Retaining Walls

As discussed in the previous sections, an acceptable design should meet both geotechnical and structural requirements. These requirements guarantee that a design solution functions safely throughout its lifetime. In the optimization-based engineering design approach, the requirements are imposed on a solution as a group of limitations, called design constraints. These constraints are formulated mathematically and applied to the problem to guarantee optimal and feasible solutions. In this study, geometrical constraints are also defined during the design of the cantilever retaining walls, together with the geotechnical and structural limitations. The geometrical constraints depend on the definition of the geometrical design parameters and are used to maintain the feasibility of a design solution. Furthermore, to develop the bond strength between concrete and steel in the different structural sections of a retaining wall (e.g., the stem), the selected reinforcements should be extended by the development length. According to the ACI 318M-14 specification, the development length is calculated using Eq. 15.

l_d = \begin{cases} \dfrac{f_y \psi_t \psi_e}{2.1 \lambda \sqrt{f_c'}} \, d_b \geq 300\ \text{mm} & \text{for } d_b < 19\ \text{mm} \\[6pt] \dfrac{f_y \psi_t \psi_e}{1.7 \lambda \sqrt{f_c'}} \, d_b \geq 300\ \text{mm} & \text{for } d_b \geq 19\ \text{mm} \end{cases}   (15)
where ψ_t is the casting position factor, λ is the lightweight concrete factor, and ψ_e is the epoxy coating factor. In this study, these modification factors are taken as 1.0. When there is not enough space to extend the reinforcements by the development length, standard hooks are used instead. The development length of a standard hook is selected as the maximum of 8d_b, 150 mm, and the value calculated from Eq. 16.

l_{dh} = \frac{0.24 f_y \psi_c \psi_e \psi_r}{\lambda \sqrt{f_c'}} \, d_b   (16)
where ψ_c is the cover factor, λ is the lightweight concrete factor, and ψ_r is the confining reinforcement factor. These modification factors are taken as 1.0, except ψ_c, which is taken as 0.7.
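Eqs. 15 and 16, together with their lower bounds, can be sketched as follows; the function names, argument order, and default factor values mirror the text's assumptions and are otherwise our own.

```python
import math

def development_length(db_mm, fy_mpa, fc_mpa, psi_t=1.0, psi_e=1.0, lam=1.0):
    """Straight-bar development length per Eq. 15, floored at 300 mm."""
    denom = (2.1 if db_mm < 19.0 else 1.7) * lam * math.sqrt(fc_mpa)
    return max(fy_mpa * psi_t * psi_e / denom * db_mm, 300.0)

def hook_development_length(db_mm, fy_mpa, fc_mpa,
                            psi_c=0.7, psi_e=1.0, psi_r=1.0, lam=1.0):
    """Standard-hook development length: max of Eq. 16, 8*db and 150 mm."""
    ldh = 0.24 * fy_mpa * psi_c * psi_e * psi_r / (lam * math.sqrt(fc_mpa)) * db_mm
    return max(ldh, 8.0 * db_mm, 150.0)
```

For example, a 16 mm bar with f_y = 420 MPa and f_c' = 25 MPa needs a straight development length of 640 mm but only about 226 mm when hooked, which is why hooks are substituted where space is limited.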
3 Optimum Design of Reinforced Concrete Cantilever Retaining Walls

3.1 Objective Function

In the optimization of reinforced concrete cantilever retaining walls, the design stages are translated into a mathematical form comprising the defined objective functions and the design requirements. An acceptable design solution must satisfy the design constraints described in Table 2, and the final engineering solution should be the one with the minimum objective function value among those satisfying the design criteria. In this sense, the objective function is defined depending on the needs of a project; for example, the target of a project can be the design with the minimum cost. In recent studies, researchers have focused on producing solutions satisfying two independent objective functions, i.e., the cost and the weight of the wall [1, 14, 18, 25–27]. In these studies, the defined cost and weight objective functions seek a solution with minimum reinforcement and/or volume of concrete. In this study, the cost and weight objective functions are defined as given in Eqs. 17 and 18:

f_{cost} = C_s W_{st} + C_c V_c \quad (17)
where V_c is the volume of concrete used in the wall (m³), W_st is the weight of the steel reinforcement placed in the wall (kg), and C_s and C_c are the unit cost of steel ($/kg) and the unit cost of concrete ($/m³), respectively.

f_{weight} = W_{st} + 100 V_c \gamma_c \quad (18)
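Eqs. 17 and 18 translate directly into code. This is a minimal sketch with illustrative function names; the default unit costs and unit weight follow Table 3 (Cs = 0.40 $/kg, Cc = 40 $/m³, γc = 23.5 kN/m³), and the factor of 100 in Eq. 18 converts the concrete unit weight from kN/m³ to an approximate mass in kg/m³.

```python
def cost_objective(W_st, V_c, C_s=0.40, C_c=40.0):
    """Eq. 17: material cost of the wall per metre run ($/m)."""
    return C_s * W_st + C_c * V_c

def weight_objective(W_st, V_c, gamma_c=23.5):
    """Eq. 18: steel weight plus approximate concrete weight (kg/m)."""
    return W_st + 100.0 * V_c * gamma_c
```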
E. B. Tutuş et al.

Table 2 List of constraints in the design procedure of the retaining wall examples

Geometrical
  g1(x):  X1/(X2 + X3) − 1 ≥ 0
  g2(x):  X1/(X6 + X7) − 1 ≥ 0
  g3(x):  X3/X4 − 1 ≥ 0

Geotechnical
  g4(x):  Overturning failure:      FS_O/FS_O,design − 1 ≥ 0
  g5(x):  Sliding failure:          FS_S/FS_S,design − 1 ≥ 0
  g6(x):  Bearing capacity failure: FS_B/FS_B,design − 1 ≥ 0
  g7(x):  Soil in tension:          qmin ≥ 0

Structural
  g8–11(x):   Moment capacity:                  Mn/Md − 1 ≥ 0
  g12–15(x):  Shear capacity:                   Vn/Vd − 1 ≥ 0
  g16–19(x):  Minimum reinforcement area:       As/As,min − 1 ≥ 0
  g20–23(x):  Maximum reinforcement area:       As,max/As − 1 ≥ 0
  g24(x):     Minimum spacing of reinforcement: sbar/smin − 1 ≥ 0
  g25(x):     Maximum spacing of reinforcement: smax/sbar − 1 ≥ 0
  g26(x):     Minimum clear cover:              Cc/Cc,min − 1 ≥ 0
  g27(x):     Development length for stem:      (X5 − dcover)/ldh,stem − 1 ≥ 0 or (X5 − dcover)/ld,stem − 1 ≥ 0
  g28(x):     Development length for toe:       (X1 − X2 − dcover)/ld,toe − 1 ≥ 0
  g29(x):     Development length for heel:      (X2 + X3 − dcover)/ld,heel − 1 ≥ 0
  g30(x):     Development length for key:       (X5 − dcover)/ldh,stem − 1 ≥ 0 or (X5 − dcover)/ld,stem − 1 ≥ 0
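The normalized form g(x) ≥ 0 used in Table 2 makes feasibility checking uniform: each constraint is non-negative exactly when the corresponding requirement is met. The following is a minimal sketch of the geotechnical checks g4–g7; the function names are illustrative, and the design safety factors default to the values in Table 3.

```python
def geotechnical_constraints(FS_O, FS_S, FS_B, q_min,
                             FS_O_design=1.5, FS_S_design=1.5, FS_B_design=3.0):
    """Constraints g4-g7 of Table 2 in the normalized form g(x) >= 0."""
    return [FS_O / FS_O_design - 1.0,   # g4: overturning
            FS_S / FS_S_design - 1.0,   # g5: sliding
            FS_B / FS_B_design - 1.0,   # g6: bearing capacity
            q_min]                      # g7: no soil in tension

def is_feasible(gs, tol=0.0):
    """A design is feasible when every normalized constraint is non-negative."""
    return all(g >= -tol for g in gs)
```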
where γ_c is the unit weight of concrete. In previous studies, the functions defined for cost and weight were treated as single objectives and optimized independently. Therefore, a solution optimized for cost is not necessarily optimal with respect to weight, as the two fitness functions are not linearly correlated. Unlike the previous studies, a new approach is proposed here to optimize both the cost and weight objective functions simultaneously. One of the objective functions is selected as the fitness,
while the other one is converted to a constraint. Through this approach, the final design solution not only provides the minimum value of the target function (e.g., cost) but also limits the other objective function by imposing a threshold value (e.g., weight).
3.2 Cuckoo Search Optimization Algorithm

Metaheuristic algorithms are commonly used for solving complex problems and provide near-global optimum solutions; however, they do not guarantee finding the global best solution. Among them, the Cuckoo Search algorithm (CS), developed by Yang and Deb [28], is a swarm-based metaheuristic that imitates the reproductive behavior of cuckoo birds. There have been several studies on the application of the CS algorithm to civil engineering problems [18, 29–31] and to other fields [32–34]. The CS starts with a randomly generated initial population of cuckoos. Each cuckoo in the population lays eggs in the nests of host birds, and in the CS algorithm an egg represents a potential solution to the optimization problem. As a population-based algorithm, the CS seeks new and improved candidate solutions by eliminating the worst ones. In this study, the simplest form of the CS is adopted, in which each nest has only one egg. The CS is described by three idealized rules:

1. A cuckoo lays one egg and drops it in a randomly chosen nest;
2. Based on the fitness function values, the best nests containing high-quality eggs are carried over to the following generation;
3. The number of host nests is constant, and an egg can be discovered by the owner of the nest with a probability pa ∈ [0, 1]. If the egg is discovered, the host bird either rejects the egg by throwing it away or abandons the nest to build a new one.

In the implementation phase, after selecting the parameters of the algorithm such as the number of nests n, pa, the step size (α), and the stopping criteria, the three idealized rules are structured in the following steps:

Step 1––Initialization: Generate a random population of solutions with the defined number of nests (n) within the boundary limits of the design space.
Every nest in the population specifies a candidate solution, which stores the information related to the design problem and is defined as X i in Eq. 19.
X_i = \{x_i^1, \ldots, x_i^d, \ldots, x_i^N\} \quad \text{for } i = 1, 2, \ldots, n
(19)
where x_i^d is the variable of the i-th nest in the d-th dimension and N is the number of design variables.

Step 2––Evaluation: Evaluate the objective function for each nest in the population. Sort the fitness values and carry the best nest over to the next generation.
Step 3––New Solution Generation: Generate new solutions for the nests using Eq. 20. If the new solution of a nest exceeds a variable's boundary limits, the solution is replaced with the boundary value.

X_i^{t+1} = X_i^t + \alpha \oplus \mathrm{Levy}(\lambda), \quad i = 1, 2, \ldots, n
(20)
where X_i^t is the solution of the i-th individual at iteration t, X_i^{t+1} is the newly generated solution at iteration t + 1, α is the step size, which is related to the scale of the problem's solution set, and Levy(λ) is the step length in Mantegna's algorithm, calculated using Eq. 21.

\mathrm{Levy}(\lambda) = \dfrac{u}{|v|^{1/\lambda}} \quad (21)
where λ is a control parameter for the step length within (1, 3], and u and v are drawn from the normal distributions in Eq. 22.

u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2) \quad (22)
where σ_u and σ_v are the standard deviations, calculated using Eq. 23.

\sigma_u = \left[ \dfrac{\Gamma(1+\lambda)\,\sin(\pi\lambda/2)}{\Gamma\!\left(\frac{1+\lambda}{2}\right)\lambda\, 2^{(\lambda-1)/2}} \right]^{1/\lambda}, \quad \sigma_v = 1 \quad (23)
where Γ(z) refers to the gamma function, calculated as in Eq. 24:

\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\, dt \quad (24)
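Eqs. 21–23 can be combined into a single sampling routine. The sketch below implements Mantegna's algorithm as described above; the function name is illustrative, and the standard library's `math.gamma` supplies Γ(z).

```python
import math
import random

def levy_step(lam=1.5):
    """One Lévy-distributed step length via Mantegna's algorithm (Eqs. 21-23)."""
    # Eq. 23: sigma_u from the gamma function; sigma_v = 1
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    # Eq. 22: u ~ N(0, sigma_u^2), v ~ N(0, 1)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    # Eq. 21: heavy-tailed step
    return u / abs(v) ** (1 / lam)
```

Because the distribution is heavy-tailed, most steps are small while occasional steps are very large, which is what gives the CS its global exploration ability.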
Step 4––Discovery: Discover the cuckoo eggs with a probability pa. If a cuckoo egg is discovered and then dropped or abandoned, generate a new nest using the random update presented in Eq. 25.

X_i^{t+1} = X_i^t + \mathrm{randn}\,(X_j^t - X_k^t)
(25)
where randn is a random number within [0, 1], and X_j^t and X_k^t are randomly selected nests from the population.

Step 5––Termination: Stop the iterations if the termination criteria are satisfied. Otherwise, return to Step 3 of the algorithm.

To tune the parameters of the CS algorithm used in this paper, the two main control parameters, the probability of discovery (pa) and the step size (α), are evaluated for the defined cost and weight objective functions. It is worth noting that parameter sensitivity is problem-dependent, and
should be tuned separately for each example and each objective function. For the initial selection of the parameters, previous studies from the literature are taken as references. For the sensitivity analysis of the algorithm parameters, the tested ranges are summarized in Eq. 26.

pa ∈ {0.1, 0.2, 0.5, 0.8}, α ∈ {0.01, 0.02, 0.04, 0.08, 0.1} \quad (26)
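Steps 1–5 can be assembled into a compact implementation of the basic CS. The sketch below is illustrative, not the authors' MATLAB code: it applies greedy replacement after both the Lévy-flight update (Eq. 20) and the discovery step (Eq. 25), which is one common reading of rule 2, and all names and the demo objective are assumptions.

```python
import math
import random

def cuckoo_search(objective, lb, ub, n=20, pa=0.2, alpha=0.08,
                  iters=300, lam=1.5, seed=1):
    """Sketch of basic CS (one egg per nest); minimizes `objective`."""
    rng = random.Random(seed)
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)

    def levy():  # Mantegna step (Eqs. 21-23)
        return rng.gauss(0, sigma_u) / abs(rng.gauss(0, 1)) ** (1 / lam)

    def clip(x):  # force out-of-range variables back to the boundary
        return [min(max(v, lo), hi) for v, lo, hi in zip(x, lb, ub)]

    # Step 1 -- Initialization within the boundary limits
    nests = [[rng.uniform(lo, hi) for lo, hi in zip(lb, ub)] for _ in range(n)]
    fit = [objective(x) for x in nests]

    for _ in range(iters):
        # Step 3 -- Lévy-flight update (Eq. 20), keep the better nest
        for i in range(n):
            cand = clip([v + alpha * levy() for v in nests[i]])
            fc = objective(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # Step 4 -- Discovery: a fraction pa of nests is abandoned (Eq. 25)
        for i in range(n):
            if rng.random() < pa:
                j, k = rng.randrange(n), rng.randrange(n)
                cand = clip([v + rng.random() * (a - b)
                             for v, a, b in zip(nests[i], nests[j], nests[k])])
                fc = objective(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
    best = min(range(n), key=fit.__getitem__)
    return nests[best], fit[best]
```

With the tuned values n = 20 and pa = 0.2, the same loop can minimize a simple test function such as the sphere function, which is a convenient sanity check before applying it to the wall design problem.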
Because of its larger design space and more sensitive response to parameter changes, Example-2 is selected as the main problem for parameter tuning. Both the cost and weight objective functions have been simulated for 20 runs with combinations of the different parameter sets. The analysis results in Fig. 3 show the mean and best design values of the cost objective function with respect to changes in the algorithm's parameters.

Fig. 3 Sensitivity of the CS parameters for the cost function; mean values (a), best solutions (b)

It can be clearly seen that, for the cost function, the objective function values become less sensitive to parameter changes as the probability of discovery increases and the step size decreases toward pa = 0.2 and α = 0.08. A similar procedure is applied to the weight objective function, and the results are summarized in Fig. 4. Although the pattern of change in the objective function value is the same for the weight function, the control parameters become least sensitive in the neighborhood of pa = 0.5 and α = 0.08 for the average values and pa = 0.5 and α = 0.04 for the best values. The aim of parameter tuning for metaheuristic algorithms is to decrease the sensitivity of the obtained solutions to changes in the algorithm parameters, as the performance of metaheuristics is highly dependent on parameter selection. As a result, in order to achieve optimum design solutions in the examples studied in this paper,
Fig. 4 Sensitivity of the CS parameters for the weight function; mean values (a), best solutions (b)
the control parameters of the CS algorithm, the probability of discovery and the step size, are selected as pa = 0.2 and α = 0.08, respectively.
3.3 Design Examples of Reinforced Cantilever Retaining Walls

In this study, two reference design examples from the literature [1, 18, 26] are examined using the CS algorithm. The first example is a 3.0 m high concrete cantilever wall, while the second is a 4.5 m high concrete cantilever wall with some differences in the input parameters. A complete list of the input parameters for the design examples is given in Table 3. The optimum design process of the reinforced concrete cantilever retaining walls is programmed in MATLAB. The two design examples are optimized based on the cost and weight objective functions, and all presented solutions satisfy the geotechnical and structural design requirements. For each case, the number of nests and the number of iterations are limited to 20 and 500, respectively, and the probability of discovery pa is selected as 0.2. For the presented results, all simulations have been run 50 times, and the best, worst, mean, and standard deviation of the outputs for both examples have been obtained from these runs. Depending on the example, a design has 8 or 12 design variables (Fig. 1), each with specific boundary limits. Table 4 provides the lower boundaries (LB) and upper boundaries (UB) of the design variables for the cantilever retaining wall examples with and without the shear key. Example-1 is a 3.0 m high cantilever retaining wall without a shear key. Both cost and weight optimization have been performed using the two independent objective functions. The worst, best, mean, and standard deviation values obtained with the CS algorithm for both the cost and weight functions are presented in Table 5. The best design variables for the cost and weight functions are shown in Table 6, and the convergence histories of the average and best simulations for the cost and weight functions are shown in Figs. 5 and 6.
In Example-2, the studied cantilever retaining wall includes a shear key to increase the sliding resistance. In addition to the shear key, the main difference of Example-2 compared to the previous one is that the stem is higher and is therefore exposed to more significant lateral forces from the retained soil. Similar to the first example, the cost and weight functions are optimized separately as single objectives. A summary of the analysis results, including the worst, best, mean, and standard deviation of the multiple simulations for both the cost and weight objectives, is presented in Table 5. The final design variables for the two objective functions are given in Table 6. Finally, the convergence histories of the mean and best runs for the cost and weight objective functions of Example-2 are shown in Figs. 7 and 8.
Table 3 Input parameters for the two design examples

Input parameter | Unit | Symbol | Example 1 | Example 2
Stem height | m | H | 3.0 | 4.5
Backfill slope | ° | β | 10 | 0
Surcharge load | kPa | q | 20 | 30
Reinforcing steel yield strength | MPa | fy | 400 | 400
Steel cost | $/kg | Cs | 0.40 | 0.40
Concrete compressive strength | MPa | fc | 21 | 21
Concrete cover | cm | dc | 7 | 7
Unit weight of concrete | kN/m3 | γc | 23.5 | 23.5
Concrete cost | $/m3 | Cc | 40 | 40
Shrinkage and temperature reinforcement percent | – | ρst | 0.002 | 0.002
Unit weight of retained soil | kN/m3 | γs | 17.5 | 17.5
Internal friction angle of the retained soil | ° | ϕ | 36 | 36
Unit weight of base soil | kN/m3 | γbase | 18.5 | 18.5
Internal friction angle of base soil | ° | ϕbase | 0 | 34
Cohesion of base soil | kPa | cbase | 125 | 0
Depth of soil in front of wall | m | Dl | 0.5 | 0.75
Design load factor | – | LF | 1.7 | 1.7
Factor of safety for bearing capacity | – | FSBearing Capacity, design | 3.0 | 3.0
Factor of safety against sliding | – | FSSliding, design | 1.5 | 1.5
Factor of safety for overturning stability | – | FSOverturning, design | 1.5 | 1.5
Table 4 Boundary values of design variables for the design examples

Variable | Example-1 LB (m) | Example-1 UB (m) | Example-2 LB (m) | Example-2 UB (m)
X1 | (4.8/11)H | (11/9)H | (4.8/11)H | (11/9)H
X2 | (4.8/33)H | (7/27)H | (4.8/33)H | (7/27)H
X3 | 0.20 | H/9 | 0.25 | H/9
X4 | 0.20 | H/11 | 0.25 | H/11
X5 | (12/135)H | H/9 | (12/135)H | H/9
Table 5 Output results for the cost and weight functions for Examples 1 and 2

 | Cost ($/m): Worst | Best | Mean | SD | Weight (kg/m): Worst | Best | Mean | SD
Example-1 | 77.86 | 73.88 | 74.79 | 0.83 | 2732.05 | 2621.40 | 2632.70 | 23.00
Example-2 | 175.30 | 166.18 | 168.26 | 2.00 | 5931.35 | 5760.64 | 5767.04 | 23.89
Table 6 Final design solutions for the cost and weight functions for Examples 1 and 2

Variable | Example-1 Cost | Example-1 Weight | Example-2 Cost | Example-2 Weight
X1 (m) | 1.77 | 1.77 | 2.71 | 2.71
X2 (m) | 0.65 | 0.70 | 0.73 | 0.86
X3 (m) | 0.28 | 0.20 | 0.43 | 0.30
X4 (m) | 0.20 | 0.20 | 0.25 | 0.25
X5 (m) | 0.27 | 0.27 | 0.40 | 0.40
X6 (m) | – | – | 1.96 | 1.96
X7 (m) | – | – | 0.20 | 0.20
X8 | – | – | 0.20 | 0.20
R1 | 33 (15x10 mm) | 77 (27x10 mm) | 71 (25x10 mm) | 129 (23x14 mm)
R2 | 14 (9x10 mm) | 14 (9x10 mm) | 40 (17x10 mm) | 33 (15x10 mm)
R3 | 14 (9x10 mm) | 14 (9x10 mm) | 33 (15x10 mm) | 33 (15x10 mm)
R4 | – | – | 7 (6x10 mm) | 7 (6x10 mm)
Fig. 5 Convergence history of average and least solutions for the cost function in Example-1
Fig. 6 Convergence history of average and least solutions for the weight function in Example-1
Fig. 7 Convergence history of average and least solutions for the cost function in Example-2
Fig. 8 Convergence history of average and least solutions for the weight function in Example-2
3.4 An Objective-Based Design Approach for Reinforced Cantilever Retaining Walls

As discussed in the previous sections, the fitness functions are independent and were optimized as single objectives. In this manner, a final engineering solution optimized for one objective function (e.g., cost) is not necessarily optimal for the other (e.g., weight). In practice, a design should not only satisfy the safety requirements but also be cost- and weight-effective; hence, a design with the least cost but an indeterminate weight is not plausible. In order to evaluate the behavior of the objective functions in the optimization of a reinforced concrete cantilever retaining wall, the objective-based engineering design approach is applied to Example-2, and the analysis results are given below. In this section, the cost of the wall in Example-2 is assigned as the objective function of the optimum design. Figure 9 shows the final solutions of 50 runs, in which the cost of Example-2 is optimized and the weight of each solution is calculated from the optimized final design variables. As can be seen from Fig. 9, the design with the least cost, 166.18 ($/m), has a weight of about 6386.93 (kg/m), far greater than the least weight found, 5760 (kg/m). Figure 10 shows the ranges of the objective functions (o.f.) with respect to each other; every point in the figure represents one simulation.
To handle the multi-objective optimum design of the reinforced cantilever retaining walls, the following steps are implemented in the MATLAB code developed for design purposes: (i) the optimum design of the subject example is simulated for n runs for both the cost and weight objective functions (n = 50); (ii) the ranges of variation of the objective function values are detected; (iii) based on its importance level for the project, one of the objective functions is kept as the single objective, while the other is converted to a constraint; (iv) a sensitivity analysis of the optimum design is performed over the range of variation identified in step (ii).

Fig. 9 Comparison of objective function values for 50 simulations
Fig. 10 Range of changes for objective functions in Example-2
These steps are shown in Fig. 11, which demonstrates the evolution of design solutions in the objective-based approach. The sensitivity analysis is conducted in five steps by constraining the weight of the wall (f_weight) to be less than 5800, 5900, 6000, 6100, and 6200 (kg/m). This approach gives a designer the opportunity to consider the interaction of multiple objective functions and provides an overview of the potential designs. For example, in Fig. 11e, the best design in terms of both objective functions is X = (f_cost = 180.15, f_weight = 5763.77).
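Step (iii) of the procedure, converting the weight objective into a constraint, can be realized with a simple penalty on the cost fitness. This is a minimal sketch under stated assumptions: the function name and penalty weight are illustrative, not the authors' implementation.

```python
def constrained_cost(cost, weight, weight_cap, penalty=1e6):
    """Keep cost as the fitness; treat weight <= weight_cap as a constraint
    by penalizing the relative violation of the weight threshold."""
    violation = max(0.0, weight - weight_cap) / weight_cap
    return cost + penalty * violation
```

A feasible design (weight below the cap) keeps its raw cost, while an infeasible one is strongly penalized, so the optimizer is steered toward designs that minimize cost within the chosen weight threshold.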
4 Conclusions

In this paper, the Cuckoo Search (CS) algorithm, a metaheuristic of proven efficiency, is implemented in the design process of reinforced cantilever retaining walls in place of the conventional trial-and-error approach. The paper followed three main objectives. First, to transform the conventional approach into an automated design task by defining the optimization problem, the geotechnical and structural design requirements, and the objective functions; in this sense, the developed design framework resulted in less computation time and more efficient solutions. Second, to give an overview of the CS algorithm with an application to finding low-weight and low-cost design solutions. Third, to introduce a novel objective-based approach that improves the final design solution from a multi-objective perspective. This approach enables a designer to obtain a quick and accurate outline of the final design solutions and to select the most appropriate one based on the objective priorities of a project.
Fig. 11 Evolution steps of the objective-based design approach in Example-2; weight constraint equal to a 6200, b 6100, c 6000, d 5900, and e 5800 (kg/m)
The results presented in this paper provide solid support for the idea that the CS algorithm can be applied effectively to various engineering optimization problems. Still, further studies are needed to validate its efficiency and performance for other types of gravity retaining walls. In addition, the effects of other loading types could be taken into account, especially for earthquake-prone zones. Lastly, further studies could assess the interaction of the objectives in more depth using multi-objective CS algorithms.
References

1. Saribas A, Erbatur F (1996) Optimization and sensitivity of retaining structures. J Geotech Eng 122(8):649–656
2. Hasançebi O, Azad SK (2015) Adaptive dimensional search: a new metaheuristic algorithm for discrete truss sizing optimization. Comput Struct 154:1–16
3. Dey N (2017) Advancements in applied metaheuristic computing. IGI Global
4. Dey N, Ashour AS, Bhattacharyya S (2019) Applied nature-inspired computing: algorithms and case studies. Springer
5. Ceranic B, Fryer C, Baines R (2001) An application of simulated annealing to the optimum design of reinforced concrete retaining structures. Comput Struct 79(17):1569–1581
6. Villalba P et al (2010) CO2 optimization of reinforced concrete cantilever retaining walls. In: 2nd International conference on engineering optimization, September
7. Yepes V et al (2008) A parametric study of optimum earth-retaining walls by simulated annealing. Eng Struct 30(3):821–830
8. Ghazavi M, Bonab SB (2011) Optimization of reinforced concrete retaining walls using ant colony method. In: 3rd International symposium on geotechnical safety and risk (ISGSR)
9. Ghazavi M, Bonab SB (2011) Learning from ant society in optimizing concrete retaining walls. J Technol Educ 5(3):205–212
10. Pourbaba M, Talatahari S, Sheikholeslami R (2013) A chaotic imperialist competitive algorithm for optimum cost design of cantilever retaining walls. KSCE J Civ Eng 17(5):972–979
11. Khajehzadeh M, Taha MR, Eslami M (2014) Multi-objective optimisation of retaining walls using hybrid adaptive gravitational search algorithm. Civ Eng Environ Syst 31(3):229–242
12. Talatahari S et al (2012) Optimum design of gravity retaining walls using charged system search algorithm. Math Probl Eng 2012
13. Talatahari S, Sheikholeslami R (2014) Optimum design of gravity and reinforced retaining walls using enhanced charged system search algorithm. KSCE J Civ Eng 18(5):1464–1469
14. Camp CV, Akin A (2011) Design of retaining walls using big bang–big crunch optimization. J Struct Eng 138(3):438–448
15. Ahmadi-Nedushan B, Varaee H (2009) Optimal design of reinforced concrete retaining walls using a swarm intelligence technique. In: The first international conference on soft computing technology in civil, structural and environmental engineering, UK
16. Sadoglu E (2014) Design optimization for symmetrical gravity retaining walls. Acta Geotechnica Slovenica 11(2):70–79
17. Kaveh A, Abadi A (2011) Harmony search based algorithms for the optimum cost design of reinforced concrete cantilever retaining walls. Int J Civ Eng 9(1):1–8
18. Gandomi AH et al (2015) Optimization of retaining wall design using recent swarm intelligence techniques. 103:72–84
19. Gandomi AH, Kashani AR (2018) Automating pseudo-static analysis of concrete cantilever retaining wall using evolutionary algorithms. 115:104–124
20. Kashani AR, Saneirad A, Gandomi AH. Optimum design of reinforced earth walls using evolutionary optimization algorithms. p 1–24
21. Yalcin Y, Orhon M, Pekcan O (2019) An automated approach for the design of Mechanically Stabilized Earth Walls incorporating metaheuristic optimization algorithms. 74:547–566
22. Basha BM, Babu GS (2007) Reliability based design optimization of gravity retaining walls. Probab Appl Geotech Eng ASCE Geotech Spec Publ 170:1–10
23. Sivakumar Babu G, Basha BM (2008) Optimum design of cantilever retaining walls using target reliability approach. Int J Geomech 8(4):240–252
24. Zevgolis IE, Bourdeau PL (2010) Probabilistic analysis of retaining walls. Comput Geotech 37(3):359–373
25. Gandomi AH, Kashani AR, Zeighami F (2017) Retaining wall optimization using interior search algorithm with different bound constraint handling. Int J Numer Anal Meth Geomech 41(11):1304–1331
26. Gandomi AH et al (2016) Optimization of retaining wall design using evolutionary algorithms. Struct Multidiscip Optim 55(3):809–825
27. Temur R, Bekdas G (2016) Teaching learning-based optimization for design of cantilever retaining walls. Struct Eng Mech 57(4):763–783
28. Yang X-S, Deb S (2009) Cuckoo search via Lévy flights. In: 2009 World congress on nature & biologically inspired computing (NaBIC). IEEE
29. Gandomi AH et al (2013) Design optimization of truss structures using cuckoo search algorithm. 22(17):1330–1349
30. Gandomi AH et al (2015) Slope stability analyzing using recent swarm intelligence techniques. 39(3):295–309
31. Gandomi AH, Yang X-S, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. 29(1):17–35
32. Binh HTT, Hanh NT, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks. 30(7):2305–2317
33. Li Z et al (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. 30(9):2685–2696
34. Chakraborty S et al (2017) Optimization of non-rigid demons registration using cuckoo search algorithm. 9(6):817–826
Chapter 12
A Hybrid Cuckoo Search Algorithm for Cost Optimization of Mechanically Stabilized Earth Walls M. Altun, Y. Yalcin, and O. Pekcan
1 Introduction Optimization studies deal with finding the best possible solution for a given problem under the given constraints to efficiently utilize limited resources. In order to solve complex real-world problems from various engineering branches, several optimization algorithms were introduced. Moreover, new methods have been developed to improve the solutions in the cases where the performance of the existing ones is inadequate. Most of the conventional optimization algorithms start from a prescribed initial point and depend on gradient information to explore the design space [1]. Provided that the optimization problem is simple, these gradient-based methods are efficient. However, they demand significant computational effort to solve complex, large-scale problems and tend to get trapped in local optima in multimodal search space [2]. In order to overcome these issues, problem-specific heuristic approaches were proposed, yet they have limited applications as these methods are not directly applicable to different problem types [3]. More recently, metaheuristic algorithms were introduced as an alternative to gradient-based methods and heuristic approaches. In general, metaheuristics are preferred for being (i) based on derivative-free solution update principles, (ii) capable of exploring the search space and locating global optima, (iii) computationally efficient, and (iv) applicable to a broad range of problems [4]. Often inspired by existing
M. Altun · Y. Yalcin · O. Pekcan (B) Civil Engineering Department, Middle East Technical University, Ankara, Turkey e-mail: [email protected] M. Altun e-mail: [email protected] Y. Yalcin e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_12
phenomena, these algorithms are formulated to mimic natural concepts with mathematical operations that search the design space for optimal solutions. For instance, Genetic Algorithm (GA) [5] and Differential Evolution (DE) [6] are based on Darwin's principles of natural selection and evolution. Particle Swarm Optimization (PSO) imitates the behavior of bird flocks [7], Simulated Annealing adopts the annealing process of metals [8], Gravitational Search Algorithm applies the law of gravity [9], while Teaching Learning Based Optimization simulates teacher–learner relationships [10]. Although there are numerous powerful techniques to choose from, none of the metaheuristic algorithms can be universally better than the others according to the "No Free Lunch" theorem [11]. Therefore, the optimization literature is still advancing through the development of novel metaheuristic algorithms and studies on engineering applications. Some of the recently developed algorithms in that regard are Artificial Bee Colony (ABC) [12], Cuckoo Search (CS) [13], Bat-Inspired Algorithm [14], Firefly Algorithm [15], Teaching Learning Based Optimization [10], Water Cycle Algorithm [16], Grey Wolf Optimizer [17], Interior Search Algorithm [18], Symbiotic Organisms Search [19], Adaptive Dimensional Search [20], Crow Search Algorithm [21], Elitist Stepped Distribution Algorithm [4], Nuclear Fission–Nuclear Fusion [22], Squirrel Search Algorithm [23], and Optimization Booster Algorithm [24]. As an outcome of the advancements in the optimization literature, metaheuristics have become imperative in modern engineering applications, facilitating the use of automated analysis and design procedures in fields such as civil, electrical, mechanical, chemical, and industrial engineering, as well as planning, management, etc. [25]. One recent application was proposed in the field of geotechnical engineering by Yalcin et al.
[26] to minimize the material cost of Mechanically Stabilized Earth Walls (MSEWs), which are composite earth retaining structures based on the concept of soil reinforcement. The study includes a complete design framework, in which the objective is formulated as a constrained minimization problem with variables that represent the reinforcement layout and material. The framework was successfully applied with metaheuristics such as GA, DE, PSO, and ABC. However, the subject is still open for improvement in terms of statistical reliability and algorithmic efficiency. In this sense, the present study explores the potential of the CS concept. Inspired by the breeding strategy of cuckoos, Yang and Deb introduced CS in 2009 for continuous optimization problems [13]. The algorithm, which adopts the Lévy flight random walk, has been effectively applied to both unconstrained and constrained benchmarks as well as engineering optimization problems over the past decade [27–30]. Although CS was introduced for continuous optimization problems, binary and discrete CS variants were also developed for combinatorial and discrete optimization problems [31–33]. The algorithm embodies a robust framework with two consecutive solution update steps: one explores the design space to locate globally optimal regions, while the other intensifies the search on promising areas for enhanced precision. Most significantly, the method utilizes a parameter named the discovery probability, which may be manipulated to adjust the exploration and exploitation characteristics of the algorithm. As a result, it is possible to induce an elitist search strategy for increased algorithmic efficiency and achieve statistically sound
solutions at the same time. Nevertheless, adopting alternative solution update operations in the framework can further improve the efficiency of CS, enhance its capability, and extend its application range. Accordingly, several modifications have been proposed in the available studies, mostly focusing on two parameters, namely the step size of the Lévy flights and the discovery probability of the host cuckoo, which are prescribed constants in the basic CS algorithm. The modifications include adaptive step size implementations based on linear and nonlinear functions of the generation number [34–36], chaotically randomized step sizes [37], and grouping/information sharing schemes [35] to improve the exploration capability of CS. In addition, the step size and discovery probability were dynamically correlated to balance the exploration and exploitation characteristics of the method [38]. Although these adaptations yield improvements in solution quality, more drastic alterations may be required to successfully solve the challenging optimization problems encountered in engineering practice. Hybridization is another option for achieving improved performance. By integrating the strong features of two or more algorithms, hybrid methods may prove to be more robust and efficient than their predecessors.
The available studies employed CS for hybrid algorithms in four distinct manners: (i) the Lévy flight solution update operation of CS was incorporated into (a) Ant Colony Optimization to improve local search ability [39] and (b) the Artificial Bee Colony algorithm to update solutions [40]; (ii) solution update operations of other algorithms were embedded into the CS formulation by (a) updating the solutions with genetic operators [41], (b) increasing the convergence rate and precision by exploiting the local search mechanism of the Frog Leaping algorithm [42], and (c) replacing the biased random walks of CS with a modified DE operation [43]; (iii) the solution update operation of CS was used conjointly with another method (i.e., some ratio of the population is updated with CS, while the rest is updated with the other algorithm) to eliminate the deficiencies of both [44]; and (iv) the solutions were updated with successive iterations of CS and another algorithm (i.e., CS updates the solutions for one generation and the other algorithm takes over in the next, repeating these steps until termination) [45]. In the present study, a hybrid CS variant incorporating the solution update principles of DE, namely the Hybrid Cuckoo Search—Differential Evolution (HCSDE) algorithm, is developed for MSEW optimization. The performance of the algorithm is analyzed with a variety of wall design benchmarks and comparatively evaluated with respect to commonly used algorithms such as CS, DE, GA, and PSO. The remainder of this chapter is organized as follows: In Sect. 2, the MSEW design problem and the analysis concepts are discussed, and the design objective is formulated as a constrained minimization problem. In Sect. 3, the concepts and steps of the CS and DE algorithms are summarized, the HCSDE algorithm is introduced, and its implementation is explained in detail. Covering the numerical experiments, Sect. 4 includes the benchmark MSEW design problem set, model constants, and the parameter settings used in the analyses; the results of the comparison algorithms are then presented and discussed to assess the performance of HCSDE. Lastly, in Sect. 5, conclusions and final remarks are given.
M. Altun et al.
2 MSEW Design Problem

MSEW design is a mixed discrete-continuous optimization problem where the solution variables represent the reinforcement properties (i.e., reinforcement length, spacing, and material type), which may differ in successive layers. Since layer spacing is variable and the wall height is constant, the number of reinforcement layers is initially unknown. As a result, the dimensionality of the problem is indeterminate unless a structured formulation is adopted. In addition, the problem becomes multimodal considering the availability of alternative solutions with different material types and reinforcement layout configurations. Moreover, the design criteria based on structural stability and construction feasibility are dependent on multiple design variables, making deterministic design strategies and rule-based heuristic approaches difficult to apply. Addressing this issue, Yalcin et al. [26] introduced a design framework and incorporated metaheuristic optimization algorithms to reach globally optimal solutions.
2.1 Problem Variables and Trial Design Generation

Addressing the indeterminacy issue, Yalcin et al. [26] proposed a trial wall generation algorithm that considers the reinforcement layers in groups, which is an approach suitably in line with common practice. To be more specific, the number of reinforcement groups is prescribed and each group is designed to have a uniform reinforcement layout and material within. The algorithm involves geometric constraints based on Federal Highway Administration (FHWA) design guidelines [46, 47] and can be adopted for uniform, variable, and trapezoidal reinforcement configurations denoted as Types I, II, and III in Fig. 1, respectively. The dimensionality of the design problem depends on wall type selection.

Fig. 1 MSEW configurations: a Type I, b Type II, c Type III

The parameters regarding
the reinforcement material and vertical layout are selected from discrete sets, while the lengths of the reinforcement layers are assigned with continuous variables. The procedure is summarized in the following pseudocode and the detailed formulation is available in [26].

Step 1—Initialization: Select the wall type and number of reinforcement groups, n_g. Calculate the problem dimension, D, and generate a design vector using an optimization algorithm as given in Eq. (1).

\[
x = (x_1, \ldots, x_d, \ldots, x_D), \quad x_d \in [0, 1], \quad
D = \begin{cases} 4 & \text{for Type I} \\ 3n_g + 1 & \text{for Type II} \\ 4n_g & \text{for Type III} \end{cases}
\tag{1}
\]

Step 2—Vertical layout: Using Eqs. (2) and (3), calculate the distance between the lowermost reinforcement layer and the wall base, H_b, and the vertical spacing of each group, (s_v)_i. These values are selected from discrete sets within the range of 0.15–0.40 m for H_b and 0.15–0.80 m for (s_v)_i.

\[
H_b = 0.15 + 0.05 \times \lfloor 5x_1 + 0.5 \rfloor
\tag{2}
\]

\[
(s_v)_i = 0.15 + 0.05 \times \lfloor 13x_{i+1} + 0.5 \rfloor \quad \text{for } i = 1, 2, \ldots, n_g
\tag{3}
\]

where (s_v)_i: vertical spacing of the ith group (m).

Step 3—Reinforcement materials: Select a material type for each reinforcement group from a discrete set of available geosynthetic products. The product names are denoted as UG in Eq. (4).

\[
(G)_i \in \{ UG_1, UG_2, \ldots, UG_{n_p} \} \quad \text{for } i = 1, 2, \ldots, n_g
\tag{4}
\]

where (G)_i: geosynthetic product selected for the ith group, n_p: number of available geosynthetic products.

Step 4—Reinforcement layers: Using Eqs. (5)–(7), calculate the number of layers in each reinforcement group (i.e., for Type I walls, skip Eq. (6)).

\[
(H)_0 = H - H_b
\tag{5}
\]

\[
(n_r)_i = \left\lfloor x_{i+2n_g+1} \times \frac{(H)_{i-1}}{(s_v)_i} \right\rfloor, \quad
(H)_i = (H)_{i-1} - (n_r)_i \times (s_v)_i \quad \text{for } i = 1, 2, \ldots, n_g - 1
\tag{6}
\]

\[
(n_r)_{n_g} = \left\lfloor \frac{(H)_{n_g-1}}{(s_v)_{n_g}} \right\rfloor
\tag{7}
\]

where H: wall height (m), (n_r)_i: number of layers in the ith reinforcement group.
282
M. Altun et al.
Step 5—Horizontal layout: For each group calculate the reinforcement length. For Type I and II walls, FHWA [47] suggests that the average breadth of the reinforced soil zone must be greater than 0.7H and the reinforcements should extend beyond the active failure wedge by at least 1 m. Additionally, for Type III walls, the minimum reinforcement length is recommended as the higher of 2.5 m and 0.4H; and the increment between the lengths of consecutive groups should be kept below 0.15H. Based on these criteria, Eqs. (8) and (9) are used for Type I and II walls, while Eqs. (10)–(12) are applied for Type III.
\[
L_{min} = \max\{ L_a + 1, \; 0.7H \}
\tag{8}
\]

\[
L = L_{min} + x_{3n_g+1} \times (2H - L_{min})
\tag{9}
\]

\[
(L_{min})_1 = \max\{ (L_a)_1 + 1, \; 0.4H, \; 2.5\ \text{m} \}
\tag{10}
\]

\[
(L)_1 = (L_{min})_1 + x_{3n_g+1} \times \left( 2H - (L_{min})_1 \right)
\tag{11}
\]

\[
\left.
\begin{aligned}
(L_{min})_i &= \max\{ (L_a)_i + 1, \; (L)_{i-1} \} \\
(L_{max})_i &= \max\{ (L)_{i-1} + 0.15H, \; (L_{min})_i \} \\
(L)_i &= (L_{min})_i + x_{i+3n_g} \times \left( (L_{max})_i - (L_{min})_i \right)
\end{aligned}
\right\} \quad \text{for } i = 2, 3, \ldots, n_g
\tag{12}
\]
where (L)i : reinforcement length of ith group (m), (L a )i : the distance between the wall face and the failure plane at the uppermost reinforcement level of ith group (m). Step 6—Data processing: Determine the physical parameters required for stability analysis based on Fig. 1.
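As a simplified illustration of Steps 4 and 5, the layer count of Eq. (7) and the length rule of Eqs. (8) and (9) for a single-group (Type I) wall can be sketched as below; the multi-group bookkeeping of Eqs. (6) and (10)-(12) is omitted, and the function and argument names are hypothetical:

```python
import math

def type1_layout(H, Hb, sv, x_L, La):
    """Sketch of a Type I layout: layer count (Eq. (7)) and uniform length (Eqs. (8)-(9)).

    H   : wall height (m);  Hb : base offset (m);  sv : vertical spacing (m)
    x_L : normalized length variable in [0, 1];  La : active-wedge width (m)
    """
    n_r = math.floor((H - Hb) / sv)        # layers fitting above the base offset
    L_min = max(La + 1.0, 0.7 * H)         # Eq. (8): FHWA minimum reinforcement length
    L = L_min + x_L * (2.0 * H - L_min)    # Eq. (9): interpolate between L_min and 2H
    return n_r, L
```

The normalized variable x_L simply interpolates between the FHWA lower bound and an upper bound of 2H, which keeps every trial design geometrically admissible by construction.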
2.2 Design Constraints and Objective

In a broad sense, an MSEW can be regarded as a gravity-type retaining wall in which the stabilizing forces originate from the weight of the reinforced soil zone. Accordingly, conventional stability checks considering sliding, overturning, and bearing failure modes are valid for these structures and they are often referred to as external stability criteria, illustrated in Fig. 2a–c. For these calculations, the reinforced soil zone is assumed to be a rigid block that is subject to permanent and transient loads (i.e., self-weight of soil, earth surcharge, horizontal earth thrust, live load surcharge, vehicular load, earthquake load). In addition, the integrity of the reinforced soil zone is ensured by the internal stability criteria, which are based on the stability of the reinforcement layers against pullout and rupture failure, shown in Fig. 2d and e.
Fig. 2 Failure modes: a sliding, b overturning, c bearing, d pullout, e rupture
The force diagram of a generic MSEW having a leveled backslope and cohesionless backfill is given in Fig. 3. In the figure, the self-weight of the reinforced fill is represented with W, F1 denotes the horizontal thrust due to static earth pressure, and F2 is the horizontal thrust due to the surcharge loading q. The seismic effects are reflected with PIR and ΔPAE, which are the horizontal inertial force and the thrust due to seismic earth pressure, respectively.

Fig. 3 MSEW free-body diagram

Lastly, R and e denote the reaction on the base of the reinforced fill and the related load eccentricity, respectively. Considering these forces, the problem constraints are developed according to the load and resistance factor design concept proposed in [47], while the objective function (i.e., the combined cost of reinforcement material and reinforced fill per meter of wall length) is derived from the unit prices available in [48]. There are a total of 14 constraint functions, 11 of which are based on stability criteria. The other constraints are imposed to satisfy geometrical feasibility. The formulation of the optimization problem is summarized below. The stability analysis procedure and the derivation of the constraint functions are not given explicitly for the sake of brevity. However, the resulting definitions and their relevance in the optimization framework are conceptually summarized. The complete derivation of the equations, as well as comprehensive discussions, are available in [26].

Minimize

\[
f = \frac{3\gamma_r}{9.81} \, H (B + 0.3) + \sum_{i=1}^{n_g} (n_r)_i (L)_i \left( 0.03 \times \frac{T_{ult}}{RF_{ID} \times RF_{CR} \times RF_D} + 2 \right)_i
\tag{13}
\]
subject to

\[
g_1(x) = \frac{1}{CDR_1} - 1 \le 0
\tag{14}
\]

\[
g_2(x) = \frac{e_{o,1}}{e_{mo,1}} - 1 \le 0
\tag{15}
\]

\[
g_3(x) = \frac{\sigma_{B,1}}{q_{f,1}} - 1 \le 0
\tag{16}
\]

\[
g_4(x) = \frac{1}{CDR_2} - 1 \le 0
\tag{17}
\]

\[
g_5(x) = \frac{e_{o,2}}{e_{mo,2}} - 1 \le 0
\tag{18}
\]

\[
g_6(x) = \frac{e_{o,2}}{e_{mo,3}} - 1 \le 0
\tag{19}
\]

\[
g_7(x) = \frac{\sigma_{B,2}}{q_{f,2}} - 1 \le 0
\tag{20}
\]

\[
g_8(x) = \max\left\{ \frac{T_1}{T_f} \right\} - 1 \le 0
\tag{21}
\]

\[
g_9(x) = \max\left\{ \frac{L_{e,1}}{L_e} \right\} - 1 \le 0
\tag{22}
\]

\[
g_{10}(x) = \max\left\{ \frac{S_f}{T_{ult}} \right\} - 1 \le 0
\tag{23}
\]

\[
g_{11}(x) = \max\left\{ \frac{L_{e,2}}{L_e} \right\} - 1 \le 0
\tag{24}
\]

\[
g_{12}(x) = \frac{0.7H}{B} - 1 \le 0
\tag{25}
\]

\[
g_{13}(x) = \max\left\{ \frac{(L)_i - (L)_{i-1}}{0.15H} \right\} - 1 \le 0 \quad \text{for } i = 2, 3, \ldots, n_g
\tag{26}
\]

\[
g_{14}(x) = \frac{0.15}{z_{min}} - 1 \le 0
\tag{27}
\]
Objective function, f

• The objective function is dependent on the length and material strength grade of the reinforcement layers and the total weight of the reinforced fill. In the equation, γr denotes the reinforced fill unit weight, Tult is the ultimate tensile capacity of the reinforcement material, and RFID, RFCR, RFD are the installation damage, creep, and durability reduction factors, respectively.

Constraint functions

• g1 and g4: These constraints are applied to check the stability against sliding failure through a parameter named the capacity demand ratio, denoted as CDR1 and CDR2 for static and seismic loading conditions, respectively. CDR is essentially the ratio of the horizontal resistance at the base of the reinforced zone to the total driving force, which is the factored summation of the horizontal loads given in Fig. 3. The resistance is dependent on W, the interface friction angle, and the shear strengths of the reinforced fill and foundation soil.

• g2, g5, and g6: Overturning stability is imposed with these constraints using the limiting load eccentricity values, emo,1, emo,2, and emo,3. Static and seismic vertical eccentricity values, denoted as eo,1 and eo,2, are computed using the horizontal and permanent vertical forces, while the effect of transient vertical loads such as vehicular and live load surcharge is ignored.

• g3 and g7: These constraints evaluate the stability against bearing capacity failure. The base pressures in static and seismic conditions, denoted as σB,1 and σB,2, are calculated assuming that the stress distribution due to the resultant eccentric base reaction, R, acts uniformly over a reduced area at the base. These values are then compared with the factored bearing capacity values, qf,1 and qf,2, which are dependent on the foundation soil shear strength parameters.

• g8 and g10: Applied to assess the stability against rupture failure, these constraints are dictated by the reinforcement strength grade and vertical layout. Therefore, they are the governing factors for reinforcement material selection. For static conditions, the factored tensile force on each reinforcement layer, T1, is compared with the factored allowable capacity, Tf. For seismic conditions, the ultimate capacity is evaluated with respect to the required reinforcement resistance, Sf.

• g9 and g11: In pullout failure evaluation, a critical wedge analysis is performed considering the static and seismic conditions separately for each reinforcement layer, and the required effective reinforcement lengths, Le,1 and Le,2, are compared with the design value, denoted as Le in Fig. 3. Accordingly, these constraints influence both the horizontal and vertical layout in all reinforcement configurations.

• g12, g13, and g14: These constraints are based on the geometric feasibility requirements given in [47]. First, the 12th constraint is applied to keep the average breadth of the reinforced zone above 70% of the wall height. Relevant for Type III walls only, the 13th constraint limits the length difference between successive reinforcements to below 15% of the wall height. Lastly, the 14th constraint keeps the embedment depth of the uppermost reinforcement layer, zmin, above 15 cm.
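Because Eqs. (14)-(27) are all normalized to the form g_k(x) ≤ 0, a penalized fitness is straightforward to assemble. The chapter does not detail the constraint-handling scheme, so the static penalty below is one common choice, shown only as a sketch:

```python
def penalized_cost(f, g_values, penalty=1e6):
    """Combine the raw cost f (Eq. (13)) with the violations of g_k(x) <= 0.

    Feasible designs (all g_k <= 0) are returned unchanged; infeasible ones
    are penalized in proportion to the total constraint violation.
    """
    violation = sum(max(0.0, g) for g in g_values)
    return f + penalty * violation
```

The normalized form of the constraints is convenient here: every g_k is dimensionless and of comparable magnitude, so a single penalty coefficient (an assumed value) suffices.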
3 Optimization Algorithms

3.1 Cuckoo Search

Inspired by the reproduction strategy of cuckoo birds, the Cuckoo Search (CS) algorithm is a population-based nature-inspired metaheuristic developed by Yang and Deb using the Lévy flight concept [13]. In nature, cuckoos lay their eggs in both their own nests and the nests of others to increase the hatching probability. When a foreign egg hatches in the host nest before the indigenous eggs, the hatchling throws the other eggs out of the nest. On the other hand, if the host bird discovers a foreign egg before hatching, it either throws the egg out or abandons the nest to build a new one elsewhere. Applying these concepts, the CS algorithm searches the design space as follows, based on the source code published by Yang and Deb.

Step 0—Algorithm parameters: Prescribe the CS parameters: maximum number of generations (T), nest size (K), and discovery probability (pa).

Step 1—Initialization: Initialize random solutions in the D-dimensional search space for K nests using the lower and upper bounds of each dimension.

Step 2—Evaluation: Evaluate the fitness value of the solution set of each host nest.

Step 3—Selection: Update the best solution for each nest if the new value is better than the previously recorded best. If t is the initial generation, directly assign the
solution as best. Increase the generation number, t = t + 1. If t is even, continue with Step 4; else, go to Step 5.

Step 4—Solution update, Lévy flight: For each nest, generate a new solution candidate by updating the dimensions of the solution vector randomly using the Lévy distribution. In other words, apply the concept, "Cuckoos leave their eggs in the host nest." Then, check the boundary limits and return to Step 2.

Step 5—Solution update, discovery and biased relative random walk: It is assumed that the host cuckoos detect foreign eggs (i.e., solution candidates) with a probability of pa. Provided that a foreign egg is not discovered, a new solution is generated using a biased relative random walk, which depends on the positions of all nests. The discovered eggs are not allowed to improve; hence the previous solutions are kept. After that, the boundary limits are controlled.

Step 6—Termination: End the analysis if the termination conditions are satisfied. Otherwise, return to Step 2.
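For reference, the Lévy-flight perturbation of Step 4 is commonly implemented with Mantegna's algorithm, as in the source code of [13]; the sketch below follows that recipe, with an assumed stability index beta = 1.5 and step scale alpha:

```python
import math
import random

def levy_step(beta=1.5):
    """One heavy-tailed step from a Levy-stable distribution (Mantegna's algorithm)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_update(nest, best, alpha=0.01):
    """Step 4 sketch: perturb each dimension of a nest relative to the best nest."""
    return [x + alpha * levy_step() * (x - b) for x, b in zip(nest, best)]
```

The heavy tail of the Lévy distribution occasionally produces very long jumps, which is what gives Step 4 its global exploration character.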
3.2 Differential Evolution

Differential Evolution (DE) is a population-based evolutionary optimization method that imitates Darwin's principles of natural selection and evolution [6]. Accordingly, the DE algorithm updates the solutions through operations named (i) mutation, (ii) crossover, and (iii) selection. The implementation steps of the algorithm are summarized in the pseudocode given below.

Step 0—Algorithm parameters: Prescribe the DE parameters: maximum number of generations (T), population size, crossover rate (CR), and mutation factor (F).

Step 1—Initialization: Generate initial solutions using a uniform random distribution within the boundary limits of the D-dimensional space.

Step 2—Evaluation: Evaluate the fitness values of the solution candidates.

Step 3—Mutation: Increase the generation number, t = t + 1. From a biological aspect, mutation represents sudden changes in the genome due to environmental effects. In the algorithm, this concept is implemented as an effect of other individuals. For each individual, a variant solution vector is generated using the positions of all individuals in the population.

Step 4—Crossover: Perform the crossover operation to update the population. The crossover parameter controls gene exchange within the population using the combined information of the individuals and their variant vectors. First assign the position of the individual to its trial vector, then update the trial vector using the crossover operation. For each dimension of the trial vector, a random number is generated in [0, 1]; if the random number is less than the crossover rate (CR), the position on that dimension is replaced with the variant value. Then, the boundary conditions are checked.
Step 5—Evaluation and selection: Evaluate the fitness values of the trial solutions. For each individual, compare the fitness with the historical best and accept the new position if the fitness is improved. Step 6—Termination: End the analysis if termination conditions are satisfied. Otherwise, return to Step 3.
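The mutation/crossover/selection cycle above corresponds to the classic DE/rand/1/bin scheme; a compact sketch follows (the forced j_rand gene is a standard DE implementation detail not spelled out in the pseudocode above):

```python
import random

def de_trial(pop, i, F=1.0, CR=0.9):
    """Trial vector for individual i via DE/rand/1 mutation and binomial crossover."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    dims = len(pop[i])
    j_rand = random.randrange(dims)        # guarantee at least one mutated component
    trial = list(pop[i])
    for d in range(dims):
        if random.random() < CR or d == j_rand:
            trial[d] = pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
    return trial

def de_select(x, trial, f):
    """Step 5: keep the trial vector only if it improves the fitness f."""
    return trial if f(trial) < f(x) else x
```

The greedy selection in `de_select` is what makes DE monotonically non-worsening per individual, which matches the "accept the new position if the fitness is improved" rule of Step 5.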
3.3 Hybrid Cuckoo Search—Differential Evolution

CS updates the solutions using two different methods: Lévy flights (Step 4) and biased relative random walks (Step 5). In the initial generations, both methods explore the design space; however, Lévy flights mainly manage the convergence rate of the algorithm. After a considerable number of generations, when the search space of the algorithm has narrowed, both methods try to exploit the solutions around the constrained space. While Lévy flights exploit the solutions surrounding the temporary best nest, the biased relative random walks scan the search space. During the optimization process, the CS parameter discovery probability (pa) manages the roles of both methods to balance the local and global search capabilities of the algorithm. Although utilizing two different update methods provides more precise results, it also causes some shortcomings: in some problems, CS converges prematurely and gets trapped in local optima. In other words, its exploration capability needs to be improved. On the other hand, DE explores the search space efficiently; however, its exploitation capability is limited. Therefore, hybridizing these strong features of the CS and DE algorithms yields a competitive metaheuristic. HCSDE is constructed on the main framework of the CS algorithm, while the solutions are updated using DE operators. The modifications to the CS algorithm for HCSDE are listed below:

• The Lévy flights of CS in Step 4 are replaced with DE operators (mutation and crossover) to improve the exploration capability of the hybrid algorithm, avoiding premature convergence.

• With the replacement of the Lévy flights, the convergence rate of the algorithm decreases. As a result of this delay, the algorithm focuses on local search only after a considerable number of generations. Therefore, to provide compatibility between the two solution update methods and to incentivize local search, balancing the exploration and exploitation capabilities of the hybrid algorithm, the biased relative random walks in Step 5 are also replaced with DE operators using elite samples.

Thus, while the first method explores the problem domain using all nests, the second method constrains the search space and focuses on local search using the elite nests, which are selected from the fittest candidates. The percentage of elite samples is predetermined. For this study, the percentage of elite samples is prescribed using the discovery probability pa. Hence, this parameter also balances the exploration and exploitation behavior of the algorithm as it does in CS. In the optimization process, for instance, considering 100 nests with 0.25 discovery probability, the
worst 25 nests are eliminated and the solutions are updated using the remaining 75 elite nests. The application of HCSDE is explained in the following pseudocode and illustrated with the flowchart given in Fig. 4.
Fig. 4 The flowchart of HCSDE
Step 0—Algorithm parameters: Prescribe the HCSDE parameters: maximum number of generations (T), population size, discovery probability (pa), crossover rate (CR), and mutation factor (F).

Step 1—Initialization: Generate random solutions in the D-dimensional search space for K nests according to the boundaries using Eq. (28).

\[
x_i^d(t) = X_l^d + r \left( X_u^d - X_l^d \right), \quad d = 1, 2, \ldots, D \;\text{ and }\; i = 1, \ldots, K
\tag{28}
\]

where x_i^d(t): position of the ith nest in the dth dimension in generation t; X_u^d and X_l^d: upper and lower boundary limits of the search space in dimension d; r: a uniformly distributed random number on the interval [0, 1].

Step 2—Evaluation: Evaluate the solution fitness of each host nest using the objective function.

Step 3—Selection: Update the best solution for each nest if the new value is better than the previously recorded best, using Eq. (29). If t is the initial generation, directly assign the solution as best. Increase the generation number, t = t + 1. If t is even, continue with Step 4; else, go to Step 5.

\[
x_i^d(t+1) =
\begin{cases}
u_i^d(t+1) & \text{if } f\left( u_i(t+1) \right) < f\left( x_i(t) \right) \\
x_i^d(t) & \text{o/w}
\end{cases}
\tag{29}
\]

Step 4—Solution generation with DE position update: Update the position vector of each nest for the new generation, t + 1, using the mutation and crossover operations of the DE algorithm as given in Eq. (30). Then, proceed with Step 6.

\[
u_i^d(t+1) =
\begin{cases}
x_{r_1}^d(t) + F r \left( x_{r_2}^d(t) - x_{r_3}^d(t) \right) & \text{if } r < CR \\
x_i^d(t) & \text{o/w}
\end{cases}
\tag{30}
\]

where u_i(t + 1): trial solution vector of host nest i; F: mutation factor; CR: crossover rate.

Step 5—Solution update with DE position update using elite nests: Sort the nests according to their fitness values. Select the best 100 × (1 − pa) percent of the nests. Update the solutions according to the DE position update method using the selected nests (x_e) and the discovery probability, as given in Eqs. (31) and (32). Return to Step 2.

\[
u_i^d(t+1) =
\begin{cases}
x_{e_1}^d(t) + F r \left( x_{e_2}^d(t) - x_{e_3}^d(t) \right) & \text{if } r < CR \\
x_i^d(t) & \text{o/w}
\end{cases}
\tag{31}
\]

\[
u_i^d(t+1) =
\begin{cases}
u_i^d(t+1) & \text{if } p_a < r \\
x_i^d(t) & \text{o/w}
\end{cases}
\tag{32}
\]
Step 6—Termination: Determine whether the termination conditions are satisfied. If yes, sort the results and output the best solution. Otherwise, return to Step 2.
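Under one reading of Eqs. (31) and (32), with r redrawn per use and pa both sizing the elite donor pool and gating acceptance, the elite update of Step 5 can be sketched as follows (names and the per-component randomization are assumptions):

```python
import random

def elite_update(pop, fitness, F=1.0, CR=0.9, pa=0.25):
    """Step 5 sketch: DE-style update whose donors come only from the elite nests.

    The worst 100*pa percent of nests are excluded from the donor pool (Eq. (31));
    a candidate is then discarded with probability pa, keeping the old nest (Eq. (32)).
    """
    K, D = len(pop), len(pop[0])
    ranked = sorted(range(K), key=lambda i: fitness[i])          # best first
    elite = ranked[: max(3, round(K * (1.0 - pa)))]              # elite donor pool
    new_pop = []
    for i in range(K):
        e1, e2, e3 = random.sample(elite, 3)
        trial = list(pop[i])
        for d in range(D):
            if random.random() < CR:                             # crossover gate, Eq. (31)
                trial[d] = pop[e1][d] + F * random.random() * (pop[e2][d] - pop[e3][d])
        new_pop.append(list(pop[i]) if random.random() < pa else trial)  # Eq. (32)
    return new_pop
```

With K = 100 and pa = 0.25, the donor pool holds the 75 fittest nests, matching the worked example in the text above.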
4 Numerical Experiments

4.1 Model Constants and Parameter Settings

For the numerical experiments, the MSEW design benchmark problems employed in [26] are referenced for the soil parameter settings and loading conditions summarized in Table 1. The wall face is assumed vertical and the backslope is horizontal. Reinforcement materials are selected from a discrete set of uniaxial geogrid products having the ultimate tensile strengths and reduction factors given in Table 2. The wall height is varied from 8 to 15 m and all reinforcement configurations (i.e., Types I, II, and III) are evaluated separately to develop an extensive test set.

To assess the performance of Hybrid Cuckoo Search—Differential Evolution on MSEW design problems, well-known metaheuristics such as the Genetic Algorithm and Particle Swarm Optimization are utilized for comparison, along with Cuckoo Search and Differential Evolution. The parameter settings of the algorithms are tabulated in Table 3. Referencing the previous optimization study on MSEWs, the population size (number of nests) and the maximum number of generations were set to 50 and 1000 [26]. For this study, the same parameter settings for GA, PSO, and DE are utilized as in [26]. Considering that CS was not previously applied to the present problem, the algorithm was put through parametric sensitivity analyses as part of the preliminary studies, and the results were evaluated together with the parameter settings proposed in the literature. In line with the initial work of Yang and Deb [13], the CS parameter pa has minor effects on the outcomes, and an intermediate value such as 0.25 is suitable for the problem. When smaller values are selected, the algorithm performs slightly worse in all statistical measures. Although selecting relatively higher values may provide slightly better statistical performance, the minimum fitness values are generally influenced negatively in this case. Therefore, pa is set to 0.25 for the design experiments. Lastly, HCSDE solves the problems with the same parameter values adopted for CS and DE. Considering the stochastic nature of the algorithms, each problem is solved in 30 independent analyses to obtain the statistical performance measures.

Table 1 Soil parameter setting and loading conditions

Reinforced fill soil:  friction angle φr = 34°; cohesion cr = 0; unit weight γr = 20 kN/m³
Backfill soil:         friction angle φb = 28°; cohesion cb = 0; unit weight γb = 18 kN/m³
Foundation soil:       friction angle φf = 10°; cohesion cf = 60 kPa; unit weight γf = 18 kN/m³;
                       bearing capacity factors Nc = 8.4, Nγ = 1.2
Loading:               horizontal seismic coefficient kh = 0.2; vertical seismic coefficient kv = 0;
                       live load surcharge qL = 15 kPa

Table 2 Reinforcement product details

Product name   UG1     UG2     UG3     UG4     UG5     UG6
Tult (kN/m)    58      70      114     144     175     210
RFID           1.05    1.05    1.05    1.05    1.05    1.05
RFCR           2.60    2.60    2.60    2.60    2.60    2.70
RFD            1.00    1.00    1.00    1.00    1.00    1.00

Table 3 Parameter settings for comparison algorithms

All algorithms:  population size K = 50; maximum generation number T = 1000
GA [4, 49]:      selection: roulette wheel; crossover probability p1 = 0.80 (single point);
                 mutation probability p2 = 0.01 (uniform)
PSO [4, 9]:      cognitive constant c1 = 2; social constant c2 = 2;
                 inertia weight (linear) w = 0.9 − 0.7t/T
DE [6]:          mutation factor F = 1.0; crossover rate CR = 0.9
CS [13]:         discovery probability pa = 0.25
HCSDE:           mutation factor F = 1.0; crossover rate CR = 0.9; discovery probability pa = 0.25
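The 30-run protocol and the statistics reported in Tables 4-6 can be reproduced with a small harness; the `optimizer` argument is a stand-in for any of the compared algorithms, and the use of the sample standard deviation is an assumption:

```python
import statistics

def run_statistics(optimizer, n_runs=30):
    """Summarize independent runs with the measures used in Tables 4-6."""
    costs = sorted(optimizer(seed=s) for s in range(n_runs))
    return {
        "min": costs[0],
        "median": statistics.median(costs),
        "mean": statistics.mean(costs),
        "st_dev": statistics.stdev(costs),   # sample standard deviation (assumption)
        "max": costs[-1],
    }
```

Seeding each run separately keeps the 30 analyses independent while making the whole experiment repeatable.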
4.2 Statistical Results

The statistical results of the experiments with the Type I, II, and III MSEW configurations are summarized in Tables 4, 5, and 6, respectively. For each optimization algorithm, the minimum, median, maximum, mean, and standard deviation of the design cost from 30 independent analyses are reported with respect to the varying wall height (i.e., H = {8, 9, …, 15} m). Among these statistical parameters, the best case for each problem is specified with bold notation. It should be noted that the results of GA, PSO, and DE for the 8, 11, and 14 m wall heights already exist in [26]. Therefore, the related statistics are adapted from [26] instead of repeating the analyses. Regarding Type I walls, the results indicate that PSO, DE, CS, and HCSDE are effective as opposed to GA, in that all algorithms except GA are able to reach the global optima for all wall height options. While these four algorithms generate similar median values, the mean costs from PSO for the 8, 10, 12, and 14 m walls are slightly higher than those of the other three algorithms. Moreover, the mean performance of CS on the 8 m high MSEW Type I problem is slightly improved compared to DE and HCSDE. Overall, HCSDE yields quite competitive results with respect to DE and CS. However, to be validated as an efficient and statistically reliable alternative for MSEW design, the performance of HCSDE in more complex problems (i.e., the Type II and III benchmarks) is a more decisive factor. The results of the Type II wall experiments demonstrate that HCSDE is the only algorithm able to locate the optima of all benchmark problems, whereas the GA, PSO, DE, and CS algorithms find the optimum in four, none, seven, and two of the eight experiments, respectively. Moreover, for the 9–15 m high wall optimization problems, HCSDE generates the best median values, among which five out of six are identical to the optima. On the other hand, the success rate of DE for these problems is 66%.
Regarding the performance of CS in the Type II benchmarks (i.e., which are more challenging than the Type I problems), the design solutions produced by the algorithm fall behind the ones obtained with DE and HCSDE, except for the 8 m wall height option. Similarly, PSO and GA also yield inferior statistical parameters relative to HCSDE. In comparison to DE, the mean values of HCSDE are better in seven out of eight instances, the exception being that DE provides a lower mean cost for the 13 m high walls. In general, the outcomes indicate that the hybridization of CS and DE results in improved design solutions, especially in terms of the median, mean, and maximum cost outputs. As the problem becomes more complex when the Type III configuration is considered, the capability of the optimization method becomes more significant for the statistical performance of the design framework. The results in terms of minimum, median, and mean cost values underline HCSDE and DE as the more suitable and competent algorithms for MSEW optimization problems compared to GA, PSO, and CS. Moreover, HCSDE improves all statistical measures produced by DE in five of the benchmark experiments (i.e., the 8, 9, 10, 11, and 12 m high walls), while yielding either a better median, a better mean, or both in the rest. More specifically, the median, mean, standard
Table 4 Comparison of statistical results for Type I walls

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 8 | GA | 616.9176 | 620.0376 | 628.9066 | 1.69E+01 | 664.6726 |
| 8 | PSO | 616.8833 | 616.8833 | 620.0273 | 1.01E+01 | 662.7947 |
| 8 | DE | 616.8833 | 616.8833 | 616.8833 | 3.72E−05 | 616.8835 |
| 8 | CS | 616.8833 | 616.8833 | 616.8833 | 6.93E−07 | 616.8833 |
| 8 | HCSDE | 616.8833 | 616.8833 | 616.8833 | 4.05E−09 | 616.8833 |
| 9 | GA | 792.5081 | 823.2675 | 817.4560 | 1.42E+01 | 840.5504 |
| 9 | PSO | 791.9680 | 791.9680 | 791.9680 | 3.47E−13 | 791.9680 |
| 9 | DE | 791.9680 | 791.9680 | 791.9680 | 3.13E−05 | 791.9682 |
| 9 | CS | 791.9680 | 791.9680 | 791.9680 | 4.15E−07 | 791.9680 |
| 9 | HCSDE | 791.9680 | 791.9680 | 791.9680 | 3.95E−10 | 791.9680 |
| 10 | GA | 992.9199 | 1001.9518 | 1017.1541 | 3.34E+01 | 1099.8964 |
| 10 | PSO | 989.9253 | 992.8213 | 997.9769 | 2.20E+01 | 1078.8428 |
| 10 | DE | 989.9253 | 989.9253 | 990.5418 | 1.16E+00 | 992.8234 |
| 10 | CS | 989.9253 | 989.9253 | 989.9253 | 6.30E−05 | 989.9255 |
| 10 | HCSDE | 989.9253 | 989.9253 | 990.6976 | 1.30E+00 | 992.8213 |
| 11 | GA | 1301.1807 | 1348.9980 | 1344.0087 | 3.44E+01 | 1433.4806 |
| 11 | PSO | 1301.1610 | 1301.1610 | 1301.1610 | 7.63E−09 | 1301.1610 |
| 11 | DE | 1301.1610 | 1301.1610 | 1301.1610 | 1.18E−05 | 1301.1611 |
| 11 | CS | 1301.1610 | 1301.1610 | 1301.1610 | 2.10E−06 | 1301.1610 |
| 11 | HCSDE | 1301.1610 | 1301.1610 | 1301.1610 | 4.04E−06 | 1301.1611 |
| 12 | GA | 1815.5145 | 1903.6130 | 1893.6993 | 5.26E+01 | 2009.9235 |
| 12 | PSO | 1814.2223 | 1814.2223 | 1816.9728 | 1.51E+01 | 1896.7357 |
| 12 | DE | 1814.2223 | 1814.2223 | 1814.2224 | 5.16E−04 | 1814.2251 |
| 12 | CS | 1814.2223 | 1814.2223 | 1814.2223 | 1.96E−06 | 1814.2223 |
| 12 | HCSDE | 1814.2223 | 1814.2223 | 1814.2223 | 1.76E−07 | 1814.2223 |
| 13 | GA | 2465.9193 | 2500.0466 | 2554.2346 | 9.87E+01 | 2807.6197 |
| 13 | PSO | 2465.9169 | 2465.9169 | 2465.9169 | 1.85E−12 | 2465.9169 |
| 13 | DE | 2465.9169 | 2465.9169 | 2465.9169 | 3.16E−05 | 2465.9170 |
| 13 | CS | 2465.9169 | 2465.9169 | 2465.9169 | 2.16E−05 | 2465.9170 |
| 13 | HCSDE | 2465.9169 | 2465.9169 | 2465.9169 | 4.24E−07 | 2465.9169 |
| 14 | GA | 3269.0360 | 3498.5974 | 3500.6801 | 2.34E+02 | 4473.5113 |
| 14 | PSO | 3268.3926 | 3268.3926 | 3283.4583 | 5.73E+01 | 3494.3773 |
| 14 | DE | 3268.3926 | 3268.3926 | 3268.3954 | 1.02E−02 | 3268.4476 |
| 14 | CS | 3268.3926 | 3268.3927 | 3268.3929 | 8.97E−04 | 3268.3975 |
| 14 | HCSDE | 3268.3926 | 3268.3926 | 3268.3927 | 4.04E−04 | 3268.3949 |
12 A Hybrid Cuckoo Search Algorithm for Cost …
Table 4 (continued)

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 15 | GA | 4315.5977 | 4396.5702 | 4437.5119 | 2.26E+02 | 5273.9061 |
| 15 | PSO | 4315.4528 | 4315.4528 | 4315.4528 | 0.00E+00 | 4315.4528 |
| 15 | DE | 4315.4528 | 4315.4528 | 4315.4534 | 2.39E−03 | 4315.4655 |
| 15 | CS | 4315.4528 | 4315.4528 | 4315.4530 | 5.87E−04 | 4315.4560 |
| 15 | HCSDE | 4315.4528 | 4315.4528 | 4315.4530 | 7.61E−04 | 4315.4570 |

The values are in terms of unit MSEW cost per meter length (USD/m)
deviation, and worst values of HCSDE are better than those of DE except for single occurrences, which highlights the statistical reliability gained from the proposed modifications. Considering these accurate solutions and the relatively small standard deviation values, HCSDE can be regarded as a robust algorithm for Type III MSEW design optimization problems. Despite its promising performance, HCSDE has some minor issues that require attention. More specifically, in the Type II and III experiments, HCSDE occasionally converges to local optima. These occurrences are clearly seen in the 10 and 12 m high Type II wall experiments, where the median fitness values obtained with DE and HCSDE coincide exactly and are slightly worse than the minimums obtained. Similar outcomes are observed in the Type III design experiments as well.
4.3 Convergence Rate of Algorithms

To assess the computational efficiency of the optimization algorithms, the convergence graphs of the 9 and 12 m high wall design experiments are selected. For all reinforcement configurations and optimization algorithms, the progress of the minimum and mean fitness values is given in Fig. 5. For the Type I design problems, all algorithms except GA converge to the global optimum within 1000 generations. Despite reaching the optima, CS and PSO are the most computationally demanding of the algorithms, as indicated by the mean fitness values. While the minimum fitness values of DE and the proposed hybrid algorithm HCSDE progress at similar rates, HCSDE has a slightly better convergence rate in terms of mean values. In the Type II and III experiments, the issues with GA, PSO, and CS are amplified by the increase in problem complexity, as the analyses with these algorithms do not fully converge in any of the experiments. Meanwhile, DE and HCSDE successfully solve the problems within the specified computational time. The minimum cost progress is again similar for these two algorithms; however, the mean values obtained with HCSDE are either similar to or slightly better than those of DE at the same computational cost. The results highlight the algorithmic efficiency of HCSDE over the other methods and validate the proposed modifications.
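The bookkeeping behind convergence graphs such as those in Fig. 5 is simply recording the best-so-far and mean fitness of the population at every generation. The sketch below illustrates this with a toy greedy random search, not the chapter's algorithms; the function and parameter names are hypothetical.

```python
import random

def track_convergence(fitness, population, step, generations):
    """Record the best-so-far and mean fitness per generation, the data
    needed to draw convergence graphs such as those in Fig. 5."""
    best_hist, mean_hist = [], []
    best_so_far = float("inf")
    for _ in range(generations):
        population = step(population)
        scores = [fitness(x) for x in population]
        best_so_far = min(best_so_far, min(scores))
        best_hist.append(best_so_far)
        mean_hist.append(sum(scores) / len(scores))
    return best_hist, mean_hist

# Toy demonstration: greedy random perturbation on a 1-D sphere function.
random.seed(1)
sphere = lambda x: x * x
def step(pop):
    # each individual keeps the better of itself and a small perturbation
    return [min(x, x + random.uniform(-0.5, 0.5), key=sphere) for x in pop]

best_hist, mean_hist = track_convergence(
    sphere, [random.uniform(-5, 5) for _ in range(20)], step, 50)
```

Because a best-so-far value is recorded, the minimum-fitness curve is non-increasing by construction, which is why the minimum curves in such plots never move upward.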
Table 5 Comparison of statistical results for Type II walls

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 8 | GA | 588.2807 | 610.0224 | 610.3723 | 1.09E+01 | 640.9741 |
| 8 | PSO | 586.7442 | 600.4343 | 599.9797 | 1.14E+01 | 635.6226 |
| 8 | DE | 586.7442 | 590.9605 | 589.3677 | 2.24E+00 | 593.7714 |
| 8 | CS | 586.7442 | 587.3746 | 587.8261 | 1.48E+00 | 592.1988 |
| 8 | HCSDE | 586.7442 | 588.2277 | 588.7274 | 2.05E+00 | 590.9605 |
| 9 | GA | 765.4465 | 796.9182 | 799.0466 | 2.20E+01 | 861.4429 |
| 9 | PSO | 739.6415 | 760.2076 | 763.8715 | 1.39E+01 | 792.8454 |
| 9 | DE | 739.6415 | 739.6415 | 743.9196 | 6.72E+00 | 757.1704 |
| 9 | CS | 739.7843 | 742.6526 | 743.6165 | 4.04E+00 | 754.3600 |
| 9 | HCSDE | 739.6415 | 739.6415 | 743.1068 | 6.42E+00 | 757.1704 |
| 10 | GA | 947.2627 | 978.7944 | 979.2989 | 2.19E+01 | 1033.5647 |
| 10 | PSO | 926.7856 | 959.3101 | 962.2168 | 2.08E+01 | 1015.7350 |
| 10 | DE | 930.1908 | 930.6681 | 931.9061 | 2.13E+00 | 935.2509 |
| 10 | CS | 927.0806 | 933.6887 | 932.8931 | 2.37E+00 | 935.7342 |
| 10 | HCSDE | 926.7856 | 930.6681 | 931.1204 | 2.39E+00 | 935.2509 |
| 11 | GA | 1218.5979 | 1262.1885 | 1267.6276 | 2.82E+01 | 1333.8837 |
| 11 | PSO | 1203.8436 | 1241.0121 | 1249.1471 | 3.61E+01 | 1319.3029 |
| 11 | DE | 1203.3486 | 1204.1654 | 1206.6293 | 4.30E+00 | 1219.1763 |
| 11 | CS | 1203.4178 | 1207.4154 | 1207.5131 | 3.03E+00 | 1212.9802 |
| 11 | HCSDE | 1203.3486 | 1203.3486 | 1205.0750 | 3.53E+00 | 1219.1763 |
| 12 | GA | 1676.2860 | 1742.2567 | 1755.5322 | 5.75E+01 | 1927.8388 |
| 12 | PSO | 1642.9549 | 1684.6639 | 1692.1113 | 4.20E+01 | 1846.7815 |
| 12 | DE | 1638.1615 | 1642.9549 | 1643.6644 | 3.52E+00 | 1654.6220 |
| 12 | CS | 1638.8442 | 1644.2304 | 1645.0499 | 3.86E+00 | 1655.7899 |
| 12 | HCSDE | 1638.1615 | 1642.9549 | 1643.3936 | 1.42E+00 | 1646.0300 |
| 13 | GA | 2256.1831 | 2349.5426 | 2373.2406 | 9.62E+01 | 2602.0532 |
| 13 | PSO | 2170.7871 | 2239.0591 | 2259.1933 | 7.39E+01 | 2501.3805 |
| 13 | DE | 2168.3320 | 2168.3320 | 2174.8566 | 1.12E+01 | 2200.0309 |
| 13 | CS | 2168.8943 | 2179.2416 | 2183.8825 | 1.15E+01 | 2205.5200 |
| 13 | HCSDE | 2168.3320 | 2168.3320 | 2176.4007 | 1.29E+01 | 2200.0309 |
| 14 | GA | 2932.7284 | 3088.4741 | 3116.8334 | 1.37E+02 | 3525.7377 |
| 14 | PSO | 2848.2736 | 2926.4582 | 2937.6780 | 8.59E+01 | 3268.3926 |
| 14 | DE | 2848.2736 | 2848.2736 | 2850.5301 | 7.49E+00 | 2883.7205 |
| 14 | CS | 2848.2736 | 2854.2091 | 2858.9891 | 1.11E+01 | 2888.8929 |
| 14 | HCSDE | 2848.2736 | 2848.2736 | 2848.3754 | 5.58E−01 | 2851.3286 |
Table 5 (continued)

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 15 | GA | 3738.3502 | 4006.6641 | 3988.0470 | 1.28E+02 | 4292.8619 |
| 15 | PSO | 3646.0412 | 3790.4281 | 3810.9842 | 1.52E+02 | 4315.4528 |
| 15 | DE | 3646.0412 | 3646.0412 | 3653.3551 | 1.78E+01 | 3710.9567 |
| 15 | CS | 3646.8455 | 3676.7598 | 3677.8675 | 1.99E+01 | 3721.9803 |
| 15 | HCSDE | 3646.0412 | 3646.0412 | 3651.8143 | 1.57E+01 | 3696.6300 |

The values are in terms of unit MSEW cost per meter length (USD/m)
4.4 Optimum Design Solutions and Governing Constraints

The best design solutions obtained through the experiments are given in Tables 7, 8, and 9 for Type I, II, and III walls, respectively. For Type I walls, the reinforcement design is governed by the external stability criteria. For wall heights below 11 m, seismic sliding is the most critical stability condition, as indicated by the exploitation of the 4th constraint. For the other cases, static bearing governs the design, especially in terms of reinforcement length. Regarding internal stability, the static rupture failure criterion is critical and is the determining factor for the reinforcement strength grade selection in all cases. The outcomes related to external stability are similar for Type II and III walls. However, the internal stability constraints differ because the reinforcement layers are further optimized by allowing variable layouts and material properties. As a result of reinforcement strength reduction and layout optimization, the unit wall costs are further reduced when these MSEW configurations are adopted. In addition, geometric constraints may also be critical in some cases. For instance, the 13th constraint, which limits the length difference between successive reinforcement layers, is exploited in all Type III wall design examples.
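In this framework a design is feasible when every normalized constraint satisfies g_i ≤ 0, and a constraint is "exploited" (governing) when the optimizer drives its value to essentially zero. A small illustrative check, using an assumed tolerance of 1e-3 and the g values of the Type I, 8 m design reported in Table 7:

```python
def active_constraints(g, tol=1e-3):
    """Split a list of normalized constraint values g1..gn into violated
    (> 0) and active/exploited (within `tol` of zero) indices, 1-based to
    match the g1..g14 numbering of Tables 7-9. `tol` is an assumed value."""
    violated = [i + 1 for i, gi in enumerate(g) if gi > 0]
    active = [i + 1 for i, gi in enumerate(g) if -tol <= gi <= 0]
    return violated, active

# Type I, H = 8 m design of Table 7: g4 (seismic sliding) is exploited.
g_8m = [-0.2415, -0.4379, -0.1922, -2e-16, -0.4468, -0.4854, -0.4436,
        -0.0095, -0.6033, -0.2158, -0.3432, -0.2119, -1.0000, -0.7273]
violated, active = active_constraints(g_8m)
```

For this design the check reports no violations and flags only the 4th constraint as active, matching the observation above that seismic sliding governs walls below 11 m.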
5 Conclusion

In this study, a hybrid metaheuristic optimization method named the Hybrid Cuckoo Search–Differential Evolution (HCSDE) algorithm is proposed to ameliorate the drawbacks of the CS algorithm. The algorithm is constructed on the main framework of CS, while the solutions are updated using the mutation and crossover operators of Differential Evolution. In the resulting algorithm, new solution candidates are generated with two successive update methods. In the first, the positions of all nests are evaluated to thoroughly explore the search space. In the successive generation, the nests are randomly selected from the elite samples to both accelerate the convergence rate and improve the exploitation capability. The combined application of these concepts aims to balance the exploration and exploitation characteristics of the algorithm. The proposed algorithm, HCSDE, was integrated into the
Table 6 Comparison of statistical results for Type III walls

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 8 | GA | 587.4088 | 606.7250 | 610.9423 | 2.06E+01 | 686.8758 |
| 8 | PSO | 581.9476 | 596.9659 | 602.3195 | 1.74E+01 | 639.9951 |
| 8 | DE | 581.7653 | 583.8108 | 585.2143 | 3.05E+00 | 590.5744 |
| 8 | CS | 583.9945 | 589.8750 | 590.1451 | 2.76E+00 | 595.2457 |
| 8 | HCSDE | 581.4868 | 583.2290 | 583.2228 | 1.60E+00 | 590.5744 |
| 9 | GA | 740.5114 | 783.2293 | 785.4130 | 2.43E+01 | 838.8991 |
| 9 | PSO | 734.2551 | 768.4763 | 772.3324 | 1.86E+01 | 812.7945 |
| 9 | DE | 732.3428 | 741.6702 | 742.7817 | 7.50E+00 | 755.9807 |
| 9 | CS | 738.7743 | 752.2957 | 751.4322 | 5.71E+00 | 761.9847 |
| 9 | HCSDE | 732.3287 | 735.5598 | 737.6038 | 5.62E+00 | 749.7365 |
| 10 | GA | 929.8568 | 969.4945 | 971.4811 | 2.61E+01 | 1038.9104 |
| 10 | PSO | 925.8588 | 952.4272 | 957.2879 | 2.44E+01 | 1021.2342 |
| 10 | DE | 918.8761 | 922.7813 | 923.5761 | 4.71E+00 | 942.0181 |
| 10 | CS | 924.2747 | 934.1178 | 934.2489 | 3.43E+00 | 941.1207 |
| 10 | HCSDE | 915.6281 | 922.7324 | 922.2017 | 2.13E+00 | 926.9116 |
| 11 | GA | 1210.8779 | 1255.2346 | 1264.8795 | 4.48E+01 | 1419.8085 |
| 11 | PSO | 1193.6491 | 1228.4928 | 1240.7280 | 3.38E+01 | 1301.1610 |
| 11 | DE | 1189.5802 | 1196.7234 | 1197.4356 | 7.76E+00 | 1225.6045 |
| 11 | CS | 1204.2766 | 1211.3258 | 1211.4701 | 4.59E+00 | 1219.8487 |
| 11 | HCSDE | 1189.5801 | 1193.6503 | 1194.7447 | 4.40E+00 | 1204.6620 |
| 12 | GA | 1645.0598 | 1739.1089 | 1756.8713 | 7.79E+01 | 1902.4231 |
| 12 | PSO | 1629.0425 | 1673.2134 | 1684.4439 | 4.41E+01 | 1814.2223 |
| 12 | DE | 1623.8077 | 1624.2832 | 1626.3011 | 3.36E+00 | 1634.3706 |
| 12 | CS | 1634.7350 | 1648.1700 | 1649.0593 | 8.17E+00 | 1673.4233 |
| 12 | HCSDE | 1623.8077 | 1624.0878 | 1626.2408 | 3.24E+00 | 1634.5670 |
| 13 | GA | 2202.6204 | 2336.7718 | 2359.2342 | 1.24E+02 | 2631.8166 |
| 13 | PSO | 2148.7916 | 2223.7642 | 2241.2472 | 5.85E+01 | 2433.2806 |
| 13 | DE | 2146.3534 | 2168.1717 | 2163.4233 | 1.15E+01 | 2191.8368 |
| 13 | CS | 2167.2899 | 2189.7511 | 2192.4425 | 1.15E+01 | 2214.8500 |
| 13 | HCSDE | 2146.3536 | 2169.5437 | 2162.6510 | 1.06E+01 | 2173.9823 |
| 14 | GA | 2894.0623 | 3094.5843 | 3102.6067 | 1.22E+02 | 3399.6743 |
| 14 | PSO | 2823.1308 | 2924.7265 | 2943.7428 | 1.04E+02 | 3314.8534 |
| 14 | DE | 2816.3977 | 2832.7236 | 2834.0890 | 1.66E+01 | 2870.7774 |
| 14 | CS | 2833.1677 | 2867.3001 | 2865.0930 | 2.09E+01 | 2909.0957 |
| 14 | HCSDE | 2816.3993 | 2820.9014 | 2826.7224 | 1.39E+01 | 2869.4354 |
Table 6 (continued)

| H (m) | Algorithm | Min. | Median | Mean | St. Dev. | Max. |
|-------|-----------|------|--------|------|----------|------|
| 15 | GA | 3798.1206 | 3973.5283 | 3992.1065 | 1.50E+02 | 4382.6802 |
| 15 | PSO | 3632.6349 | 3783.2841 | 3804.2320 | 1.23E+02 | 4135.9288 |
| 15 | DE | 3605.3053 | 3608.1428 | 3618.9597 | 1.86E+01 | 3667.3956 |
| 15 | CS | 3643.6315 | 3687.1272 | 3684.3080 | 2.31E+01 | 3722.0405 |
| 15 | HCSDE | 3605.3054 | 3605.8228 | 3622.9373 | 2.36E+01 | 3665.2258 |
The values are in terms of unit MSEW cost per meter length (USD/m)

Fig. 5 Convergence graphs of selected benchmarks (panels: Type I, H = 9 m; Type I, H = 12 m; Type II, H = 9 m; Type II, H = 12 m; Type III, H = 9 m; Type III, H = 12 m)
Table 7 Design details of Type I walls

| Parameter | H = 8 m | 9 m | 10 m | 11 m | 12 m | 13 m | 14 m | 15 m |
|---|---|---|---|---|---|---|---|---|
| nr | 10 | 11 | 12 | 14 | 17 | 20 | 23 | 27 |
| L (m) | 7.1053 | 7.8967 | 8.6881 | 10.1352 | 12.3454 | 14.8947 | 17.7451 | 20.8412 |
| sv (m) | 0.80 | 0.80 | 0.80 | 0.80 | 0.70 | 0.65 | 0.60 | 0.55 |
| Tult (kN/m) | 144 | 175 | 210 | 210 | 210 | 210 | 210 | 210 |
| Hb (m) | 0.25 | 0.35 | 0.40 | 0.25 | 0.30 | 0.20 | 0.30 | 0.20 |
| g1 | −0.2415 | −0.2489 | −0.2549 | −0.3069 | −0.3543 | −0.3994 | −0.4394 | −0.4735 |
| g2 | −0.4379 | −0.4411 | −0.4440 | −0.5158 | −0.6184 | −0.6969 | −0.7556 | −0.7990 |
| g3 | −0.1922 | −0.1116 | −0.0330 | 0 | 0 | 0 | −1.E−16 | 0 |
| g4 | −2.E−16 | −2.E−16 | −2.E−16 | −0.0490 | −0.0831 | −0.1164 | −0.1460 | −0.1705 |
| g5 | −0.4468 | −0.4353 | −0.4257 | −0.4801 | −0.5678 | −0.6382 | −0.6930 | −0.7350 |
| g6 | −0.4854 | −0.4805 | −0.4766 | −0.5305 | −0.6132 | −0.6786 | −0.7291 | −0.7675 |
| g7 | −0.4436 | −0.3834 | −0.3249 | −0.3082 | −0.3189 | −0.3265 | −0.3319 | −0.3357 |
| g8 | −0.0095 | −0.0697 | −0.0519 | −0.0199 | −0.0582 | −0.0399 | −0.0073 | −0.0554 |
| g9 | −0.6033 | −0.6488 | −0.6856 | −0.6823 | −0.8110 | −0.8596 | −0.8974 | −0.9209 |
| g10 | −0.2158 | −0.2631 | −0.2520 | −0.2266 | −0.2561 | −0.2425 | −0.2177 | −0.2546 |
| g11 | −0.3432 | −0.4185 | −0.4940 | −0.2771 | −0.6135 | −0.6925 | −0.7796 | −0.8281 |
| g12 | −0.2119 | −0.2022 | −0.1943 | −0.2403 | −0.3196 | −0.3890 | −0.4477 | −0.4962 |
| g13 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 |
| g14 | −0.7273 | −0.7692 | −0.8125 | −0.5714 | −0.7000 | −0.6667 | −0.7000 | −0.7000 |
| Cost (USD/m) | 616.8833 | 791.9680 | 989.9253 | 1301.1610 | 1814.2223 | 2465.9169 | 3268.3926 | 4315.4528 |
Table 8 Design details of Type II walls

| Parameter | H = 8 m | 9 m | 10 m | 11 m | 12 m | 13 m | 14 m | 15 m |
|---|---|---|---|---|---|---|---|---|
| [(nr)1 (nr)2 (nr)3] | [3 3 4] | [4 3 4] | [6 3 3] | [4 6 4] | [4 6 5] | [6 5 6] | [5 7 7] | [8 6 7] |
| L (m) | 7.1053 | 7.8967 | 8.6881 | 10.1352 | 12.3454 | 14.8947 | 17.7451 | 20.8412 |
| [(sv)1 (sv)2 (sv)3] (m) | [0.80 0.80 0.75] | [0.80 0.80 0.80] | [0.80 0.80 0.80] | [0.80 0.75 0.75] | [0.70 0.80 0.80] | [0.65 0.80 0.80] | [0.60 0.75 0.80] | [0.55 0.80 0.80] |
| [(Tult)1 (Tult)2 (Tult)3] (kN/m) | [144 114 70] | [175 114 70] | [210 114 70] | [210 144 70] | [210 210 114] | [210 175 114] | [210 210 114] | [210 210 114] |
| Hb (m) | 0.30 | 0.40 | 0.40 | 0.35 | 0.35 | 0.30 | 0.20 | 0.25 |
| g1 | −0.2415 | −0.2489 | −0.2549 | −0.3069 | −0.3543 | −0.3994 | −0.4394 | −0.4735 |
| g2 | −0.4379 | −0.4411 | −0.4440 | −0.5158 | −0.6184 | −0.6969 | −0.7556 | −0.7990 |
| g3 | −0.1922 | −0.1116 | −0.0330 | 0 | 0 | 0 | −1.E−16 | 0 |
| g4 | −2.E−16 | −2.E−16 | −2.E−16 | −0.0490 | −0.0831 | −0.1164 | −0.1460 | −0.1705 |
| g5 | −0.4468 | −0.4353 | −0.4257 | −0.4801 | −0.5678 | −0.6382 | −0.6930 | −0.7350 |
| g6 | −0.4854 | −0.4805 | −0.4766 | −0.5305 | −0.6132 | −0.6786 | −0.7291 | −0.7675 |
| g7 | −0.4436 | −0.3834 | −0.3249 | −0.3082 | −0.3189 | −0.3265 | −0.3319 | −0.3357 |
| g8 | −0.0159 | −0.0007 | −0.0519 | −0.0051 | −0.0582 | −0.0050 | −0.0088 | −0.0061 |
| g9 | −0.6385 | −0.6420 | −0.6856 | −0.7523 | −0.8132 | −0.8542 | −0.8823 | −0.9030 |
| g10 | −0.2105 | −0.1579 | −0.2520 | −0.1874 | −0.2453 | −0.2009 | −0.1724 | −0.1753 |
| g11 | −0.4428 | −0.3890 | −0.4940 | −0.5785 | −0.6803 | −0.7220 | −0.7533 | −0.7835 |
| g12 | −0.2119 | −0.2022 | −0.1943 | −0.2403 | −0.3196 | −0.3890 | −0.4477 | −0.4962 |
| g13 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 | −1.0000 |
| g14 | −0.7857 | −0.7500 | −0.8125 | −0.8000 | −0.8125 | −0.7692 | −0.7273 | −0.7000 |
| Cost (USD/m) | 586.7442 | 739.6415 | 926.7856 | 1203.3486 | 1638.1615 | 2168.3320 | 2848.2736 | 3646.0412 |
Table 9 Design details of Type III walls

| Parameter | H = 8 m | 9 m | 10 m | 11 m | 12 m | 13 m | 14 m | 15 m |
|---|---|---|---|---|---|---|---|---|
| [(nr)1 (nr)2 (nr)3] | [3 3 4] | [4 3 4] | [4 5 3] | [6 5 3] | [6 6 4] | [6 5 6] | [7 5 7] | [8 6 7] |
| (L)1 (m) | 5.7336 | 6.5197 | 7.6052 | 8.7777 | 10.6579 | 12.7798 | 15.4051 | 18.3962 |
| (L)2 (m) | 6.9265 | 7.8658 | 8.5838 | 10.4277 | 12.4579 | 14.7297 | 17.5051 | 20.6462 |
| (L)3 (m) | 8.1192 | 9.2146 | 10.0748 | 12.0777 | 14.2579 | 16.6796 | 19.6051 | 22.8962 |
| [(sv)1 (sv)2 (sv)3] (m) | [0.80 0.80 0.80] | [0.80 0.80 0.80] | [0.80 0.80 0.80] | [0.80 0.70 0.80] | [0.70 0.75 0.75] | [0.65 0.80 0.80] | [0.60 0.80 0.80] | [0.55 0.80 0.80] |
| [(Tult)1 (Tult)2 (Tult)3] (kN/m) | [144 114 70] | [175 114 70] | [210 144 70] | [210 144 58] | [210 144 70] | [210 175 114] | [210 210 114] | [210 210 114] |
| Hb (m) | 0.20 | 0.40 | 0.40 | 0.35 | 0.25 | 0.30 | 0.20 | 0.20 |
| g1 | −0.2415 | −0.2489 | −0.2549 | −0.3069 | −0.3543 | −0.3994 | −0.4394 | −0.4735 |
| g2 | −0.4379 | −0.4411 | −0.4440 | −0.5158 | −0.6184 | −0.6969 | −0.7556 | −0.7990 |
| g3 | −0.1922 | −0.1116 | −0.0330 | −3.E−10 | 4.E−12 | −3.E−09 | −3.E−10 | −7.E−10 |
| g4 | −7.E−08 | −1.E−07 | −5.E−06 | −0.0490 | −0.0831 | −0.1164 | −0.1460 | −0.1705 |
| g5 | −0.4468 | −0.4353 | −0.4258 | −0.4801 | −0.5678 | −0.6382 | −0.6930 | −0.7350 |
| g6 | −0.4854 | −0.4805 | −0.4766 | −0.5305 | −0.6132 | −0.6786 | −0.7291 | −0.7675 |
| g7 | −0.4436 | −0.3834 | −0.3249 | −0.3082 | −0.3189 | −0.3265 | −0.3319 | −0.3357 |
| g8 | −0.0007 | −0.0007 | −0.0519 | −0.0100 | −0.0051 | −0.0050 | −0.0007 | −0.0016 |
| g9 | −0.7065 | −0.7414 | −0.7697 | −0.8193 | −0.8598 | −0.8800 | −0.9016 | −0.9180 |
| g10 | −0.1707 | −0.1579 | −0.2520 | −0.1188 | −0.1826 | −0.2009 | −0.1663 | −0.1692 |
| Cost (USD/m) | 581.4868 | 732.3287 | 915.6281 | 1189.5801 | 1623.8077 | 2146.3534 | 2816.3977 | 3605.3053 |
Table 9 (continued)

| Parameter | H = 8 m | 9 m | 10 m | 11 m | 12 m | 13 m | 14 m | 15 m |
|---|---|---|---|---|---|---|---|---|
| g11 | −0.5287 | −0.5586 | −0.6294 | −0.6968 | −0.7569 | −0.7710 | −0.8017 | −0.8246 |
| g12 | −0.2119 | −0.2022 | −0.1943 | −0.2403 | −0.3196 | −0.3890 | −0.4477 | −0.4962 |
| g13 | −0.0059 | −0.0009 | −0.0060 | −2.E−07 | −1.E−07 | −3.E−05 | −3.E−07 | −2.E−07 |
| g14 | −0.7500 | −0.7500 | −0.8125 | −0.8000 | −0.8000 | −0.7692 | −0.7500 | −0.7273 |
mechanically stabilized earth wall design framework to minimize the material cost and was tested on 24 design benchmarks covering three different MSEW reinforcement configurations. The performance of the algorithm was compared with basic CS, DE, and other well-known metaheuristics such as GA and PSO. The results indicate that HCSDE significantly improves the statistical outcomes and convergence rate of the CS algorithm on most of the benchmark problems. Moreover, it slightly enhances the robustness of DE in terms of the mean, median, and standard deviation measures. Furthermore, for complex cases, HCSDE improves the results considerably compared with GA and PSO. In light of the experimental results, it can be concluded that HCSDE shows clear potential. It should be noted that there is still some room for improvement in terms of robustness. Although HCSDE performs statistically better than the compared algorithms, the analyses may converge to local optima on rare occasions, as observed in some of the design experiments. To ameliorate these minor defects, adaptive parameter selection strategies might be applied to the discovery probability, further enhancing the capability of the algorithm. Another subject for future study is the application of HCSDE to other challenging optimization problems in civil engineering. High-dimensional design problems encountered in structural and geotechnical engineering, inverse problems arising from site monitoring and safety analysis, project scheduling, and energy optimization problems might be suitable for future applications.
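The two-stage update summarized in this conclusion (a full-population exploration pass followed by an elite-guided pass using DE's mutation and crossover) can be sketched roughly as follows. This is a simplified illustration, not the authors' exact implementation: a Gaussian random walk stands in for the Lévy-flight step, the DE operator is a generic DE/rand-style mutation with binomial crossover, and F, CR, and the elite fraction are assumed values.

```python
import random

def hcsde_step(pop, fitness, F=0.5, CR=0.9, elite_frac=0.25):
    """One generation of a simplified HCSDE-style update.

    Stage 1 perturbs every nest (exploration); stage 2 builds DE trial
    vectors around randomly chosen elite nests (exploitation). Greedy
    replacement keeps the better of the old and new nest, so the best
    fitness in the population never worsens.
    """
    n, dim = len(pop), len(pop[0])

    def de_trial(base):
        a, b = random.sample(pop, 2)          # difference vector, DE-style
        donor = [base[j] + F * (a[j] - b[j]) for j in range(dim)]
        jr = random.randrange(dim)            # binomial crossover
        return [donor[j] if (random.random() < CR or j == jr) else base[j]
                for j in range(dim)]

    # Stage 1: explore around every nest (Gaussian walk stands in for
    # the Levy flights of cuckoo search).
    for i in range(n):
        trial = [x + random.gauss(0, 0.1) for x in pop[i]]
        if fitness(trial) < fitness(pop[i]):
            pop[i] = trial

    # Stage 2: exploit around elite nests with DE mutation/crossover.
    elites = sorted(pop, key=fitness)[:max(1, int(elite_frac * n))]
    for i in range(n):
        trial = de_trial(random.choice(elites))
        if fitness(trial) < fitness(pop[i]):
            pop[i] = trial
    return pop

# Toy usage on a 3-D sphere function.
random.seed(3)
sphere = lambda v: sum(x * x for x in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(12)]
before = min(map(sphere, pop))
for _ in range(30):
    pop = hcsde_step(pop, sphere)
after = min(map(sphere, pop))
```

The greedy replacement in both stages is what gives the non-worsening minimum-cost curves reported for DE and HCSDE in Fig. 5.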
References

1. Dhadwal M, Jung S, Kim C (2014) Advanced particle swarm assisted genetic algorithm for constrained optimization problems. Comput Optim Appl 58:781–806
2. Gogna A, Tayal A (2013) Metaheuristics: review and application. J Exp Theor Artif Intell 25:503–526
3. Boussaïd I, Lepagnot J, Siarry P (2013) A survey on optimization metaheuristics. Inf Sci (Ny) 237:82–117
4. Altun M, Pekcan O (2017) A modified approach to cross entropy method: elitist stepped distribution algorithm. Appl Soft Comput J 58:756–769
5. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley Longman Publishing Co., Inc., Boston
6. Storn R, Price K (1997) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
7. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference on neural networks. Proceedings, pp 1942–1948
8. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680
9. Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci (Ny) 179:2232–2248
10. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Des 43:303–315
11. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67–82
12. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39:459–471
13. Yang X-S, Deb S (2009) Cuckoo search via Lévy flights. In: 2009 world congress on nature & biologically inspired computing (NaBIC), Coimbatore, India, pp 210–214
14. Hasançebi O, Teke T, Pekcan O (2013) A bat-inspired algorithm for structural optimization. Comput Struct 128:77–90
15. Lei H, Chuanxin Z, Changzhi W et al (2017) Discrete firefly algorithm for scaffolding construction scheduling. J Comput Civ Eng 31:4016064
16. Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm–a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110–111:151–166
17. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
18. Gandomi AH (2014) Interior search algorithm (ISA): a novel approach for global optimization. ISA Trans 53:1168–1183
19. Cheng M-Y, Prayogo D (2014) Symbiotic organisms search: a new metaheuristic optimization algorithm. Comput Struct 139:98–112
20. Hasançebi O, Azad SK (2015) Adaptive dimensional search: a new metaheuristic algorithm for discrete truss sizing optimization. Comput Struct 154:1–16
21. Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12
22. Yalcin Y, Pekcan O (2018) Nuclear fission–nuclear fusion algorithm for global optimization: a modified big bang–big crunch algorithm. Neural Comput Appl
23. Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol Comput 44:148–175
24. Pakzad-Moghaddam SH, Mina H, Mostafazadeh P (2019) A novel optimization booster algorithm. Comput Ind Eng 136:591–613
25.
Greiner D, Periaux J, Quagliarella D et al (2018) Evolutionary algorithms and metaheuristics: applications in engineering design and optimization. Math Probl Eng 2018:2793762
26. Yalcin Y, Orhon M, Pekcan O (2019) An automated approach for the design of mechanically stabilized earth walls incorporating metaheuristic optimization algorithms. Appl Soft Comput J 74:547–566
27. Shehab M, Khader AT, Al-Betar MA (2017) A survey on applications and variants of the cuckoo search algorithm. Appl Soft Comput J 61:1041–1059
28. Fister I, Fister D, Fistar I (2013) A comprehensive review of cuckoo search: variants and hybrids. Int J Math Model Numer Optim 4:387–409
29. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29:17–35
30. Abdel-Basset M, Hessin AN, Abdel-Fatah L (2018) A comprehensive study of cuckoo-inspired algorithms. Neural Comput Appl 29:345–361
31. Ouaarab A, Ahiod B, Yang XS (2014) Discrete cuckoo search algorithm for the travelling salesman problem. Neural Comput Appl 24:1659–1669
32. Rodrigues D, Pereira LAM, Almeida TNS et al (2013) BCS: a binary cuckoo search algorithm for feature selection. In: Proceedings of IEEE international symposium on circuits and systems, Beijing, China, pp 465–468
33. Li Z, Dey N, Ashour AS, Tang Q (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. Neural Comput Appl 30:2685–2696
34. Walton S, Hassan O, Morgan K, Brown MR (2011) Modified cuckoo search: a new gradient free optimisation algorithm. Chaos Solitons Fractals 44:710–718
35. Zhang Y, Wang L, Wu Q (2012) Modified adaptive cuckoo search (MACS) algorithm and formal description for global optimisation. Int J Comput Appl Technol 44:73–79
36. Binh HTT, Hanh NT, Van Quan L, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks.
Neural Comput Appl 30:2305–2317
37. Wang GG, Deb S, Gandomi AH et al (2014) A novel cuckoo search with chaos theory and elitism scheme. In: Proceedings of 2014 international conference on soft computing and machine intelligence, ISCMI 2014, New Delhi, India, pp 64–69
38. Valian E, Mohanna S, Tavakoli S (2011) Improved cuckoo search algorithm for global optimization. Int J Commun Inf Technol 1:31–44
39. Raju R, Babukarthik RG, Dhavachelvan P (2013) Hybrid ant colony optimization and cuckoo search algorithm for job scheduling. In: Meghanathan N, Nagamalai D, Chaki N (eds) Advances in computing and information technology. Springer, Berlin, Heidelberg, pp 491–501
40. Singla S, Jarial P, Mittal G (2015) Hybridization of cuckoo search & artificial bee colony optimization for satellite image classification. Int J Adv Res Comput Commun Eng 4:326–331
41. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and genetic algorithm for reliability-redundancy allocation problems. Comput Ind Eng 66:1115–1124
42. Liu X, Fu M (2015) Cuckoo search algorithm based on frog leaping local search and chaos theory. Appl Math Comput 266:1083–1092
43. Zhang Y, Yu C, Fu X et al (2015) Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm. Opt Commun 357:15–20
44. Huang J, Gao L, Li X (2015) An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes. Appl Soft Comput J 36:349–356
45. Sheikholeslami R, Zecchin AC, Zheng F, Talatahari S (2015) A hybrid cuckoo–harmony search algorithm for optimal design of water distribution systems. J Hydroinformatics 18:544–563
46. Elias V, Christopher BR, Berg RR (2001) Mechanically stabilized earth walls and reinforced soil slopes design & construction guidelines. Federal Highway Administration (FHWA), Washington, DC
47.
Berg RR, Christopher BR, Samtani NC (2009) Design and construction of mechanically stabilized earth walls and reinforced soil slopes. Federal Highway Administration (FHWA), Washington, DC
48. Basudhar PK, Vashistha A, Deb K, Dey A (2008) Cost optimization of reinforced earth walls. Geotech Geol Eng 26:1–12
49. Javidy B, Hatamlou A, Mirjalili S (2015) Ions motion algorithm for solving optimization problems. Appl Soft Comput 32:72–79
Chapter 13
A Cuckoo Search Algorithm Inspired from Membrane Systems

A. Maroosi
A. Maroosi, Department of Computer Engineering, University of Torbat Heydarieh, Torbat Heydarieh 9516168595, Iran. e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. N. Dey (ed.), Applications of Cuckoo Search Algorithm and its Variants, Springer Tracts in Nature-Inspired Computing, https://doi.org/10.1007/978-981-15-5163-5_13

1 Introduction

Metaheuristic algorithms are among the most popular optimization algorithms. Diversification and intensification of the solutions are their key features [1, 2]. During intensification, the algorithm searches around the current best solution to improve it, while during diversification the algorithm explores the search space broadly, usually by randomizing the solutions. Metaheuristic algorithms are capable of solving optimization problems in different fields [3, 4], such as the traveling salesman problem [5], power system problems [6], numerical functions [7, 8], etc. The cuckoo search algorithm is a metaheuristic algorithm recently proposed in [9]. The cuckoo algorithm has been used for optimization in various fields. For example, the study [10] used this algorithm to tune a PID-structured TCSC controller for a power system subjected to different load conditions. Reference [11] applied the cuckoo search algorithm with chaotic flower pollination to the area coverage problem in wireless sensor networks. The two-sided robotic assembly line balancing problem was solved by a discrete cuckoo search algorithm in [12]. The Demons algorithm was optimized by the cuckoo search algorithm for image registration in [13]. The study in [14] used the cuckoo algorithm to estimate the parameters for modeling the average coefficients of ammonium ionic liquids using the e-NRTL model. The study in [15] used the cuckoo algorithm to determine the optimal biodiesel ratio. Membrane systems are parallel and distributed systems whose computation is based on biology. A membrane system consists of a membrane-like structure in which each membrane may be labeled and polarized as positive, negative, or neutral (zero). Each membrane system has objects inside its membranes. These objects are
evolvable and transferable through rules, such as evolution rules, communication (transfer) rules, and so on. Membrane systems have a high level of parallelism. The studies in [16–22] show how the parallelism of membrane systems can be exploited. In [23], a parallel implementation of a membrane system on graphics processing units (GPUs) is compared with the conventional implementation on a central processing unit (CPU). Since transferring data between the computer and the GPU is time-consuming, it is suggested that the whole computation, even its sequential parts, be performed on the GPU and only the final results transferred to the CPU. The work in [18] improves the parallelism of an active membrane system for solving the N-queen problem. Different rules, including division, evolution, and communication rules, are used to handle this problem; the membrane system with active membranes acts as a local search algorithm. In [21], a clustering approach was proposed in which objects with high mutual communication are placed in the same cluster and assigned to the same thread. In that study, membranes with high interconnection are allocated to the same GPU thread blocks; with this approach, both inter-thread and inter-block communications are reduced and the processing speed is increased. The study [24] tried to improve the parallelism of a membrane system by increasing the evolution rules inside the membranes and decreasing the inter-membrane communication. This approach was used to solve the N-queen problem, with the GPU employed as a many-core platform for parallel processing. In [22], the parameters of harmony search, an algorithm inspired by musical improvisation, were first adapted; then a parallel framework based on the membrane system structure was proposed. To evaluate its performance, the introduced algorithm was run on multicore processors.
This research uses the parallel and distributed properties of membrane systems to improve the cuckoo search algorithm for solving optimization problems. The membrane rules that correspond to the cuckoo search algorithm are implemented, and transfer (communication) rules are used to move objects between membranes and increase the robustness of the algorithm. The results show that the proposed method improves the performance of the algorithm.
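The scheme the chapter builds toward, i.e., evolving nest populations inside separate membranes and exchanging solutions through communication rules, behaves much like a parallel island model. The following is a hypothetical sketch, not the chapter's exact algorithm: a Gaussian random walk stands in for the full cuckoo search update, and the ring migration topology, population sizes, and migration interval are assumptions.

```python
import random

def membrane_cs(fitness, dim, n_membranes=4, nests_per_membrane=8,
                generations=60, migrate_every=10):
    """Sketch of a membrane-distributed cuckoo-style search: each membrane
    evolves its own nests independently (a parallelizable step), and every
    `migrate_every` generations a communication-rule-like step copies each
    membrane's best nest into the next membrane on a ring, replacing the
    receiver's worst nest."""
    random.seed(42)
    membranes = [[[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(nests_per_membrane)]
                 for _ in range(n_membranes)]

    for gen in range(1, generations + 1):
        for pop in membranes:  # independent local evolution per membrane
            for i, nest in enumerate(pop):
                trial = [x + random.gauss(0, 0.3) for x in nest]
                if fitness(trial) < fitness(nest):
                    pop[i] = trial
        if gen % migrate_every == 0:  # object transfer between membranes
            bests = [min(pop, key=fitness) for pop in membranes]
            for k, pop in enumerate(membranes):
                worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
                pop[worst] = list(bests[k - 1])  # from neighbouring membrane
    return min((nest for pop in membranes for nest in pop), key=fitness)

best = membrane_cs(lambda v: sum(x * x for x in v), dim=2)
```

Because the per-membrane loops touch no shared state between migrations, they are the part that a membrane system's inherent parallelism (e.g., one GPU thread block per membrane, as in [21]) can exploit directly.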
2 Membrane Computing

Membrane computing is an area of natural computing whose computation is inspired by living cells [25, 26]. The basic components of membrane systems are objects and rewrite rules. Objects denote chemical components, and rules represent the actors of chemical reactions. Two of the main variants of membrane systems are cell-like and tissue-like membrane systems.
2.1 Cell-Like Membrane Systems

A cell-like membrane system is a kind of membrane system in which several membranes are enclosed by a membrane called the skin membrane. Example Euler–Venn and tree diagram representations are shown in Figs. 1 and 2. The cell is divided into different parts by membranes, which enables the system to control chemical interactions inside each membrane or between membranes. Membranes include multisets of objects and rules. Both the objects in membranes and the membranes themselves can be evolved by rules. The skin membrane is the outer membrane that covers the other membranes. The space around the skin membrane is called the environment.

Fig. 1 Representation of a membrane system as an Euler–Venn diagram (skin membrane 1 containing membranes 2, 3, and 4; membrane 4 containing membrane 5, which contains elementary membranes 6 and 7)

Fig. 2 Representation of the same membrane system as a tree
The elementary membranes are membranes without other membranes inside them. Membranes are identified by labels. Each membrane can have a charge, which can change during the computation. If the structure of the membrane system is represented by a tree, the skin membrane is the root node, the tree leaves indicate the elementary membranes, and the labeled membranes are shown as labeled nodes (see Fig. 2). In the parenthesis notation, each membrane is represented by a matched pair of parentheses with a label. The membrane system in Fig. 1 can be specified as follows:

[ [ ]2 [ ]3 [ [ [ ]6 [ ]7 ]5 ]4 ]1    (1)
The basic rules of a membrane system can be categorized as follows [27]:

• Transformation rules: these rules consume some objects inside a membrane and produce other objects inside that membrane. In other words, a transformation rule specifies which objects are consumed and which objects are produced inside a membrane.
• Communication rules: with these rules, objects inside a membrane can be evolved and simultaneously transferred to other membranes.
• Structure rules: these rules evolve the membranes themselves, similarly to how transformation rules evolve objects.

A cell-like membrane system can be formally defined as π = (O, H, μ, ω1, …, ωm, (R1, ρ1), …, (Rm, ρm), i_out), where [28]:

(1) m ≥ 1 is the number of membranes in the initial structure of the membrane system, called the initial degree of the membrane system;
(2) O is the set of objects;
(3) H is the set of membrane labels, i.e., H = {1, …, m};
(4) μ is the membrane structure, comprising the m initial labeled membranes;
(5) ω1, …, ωm are the multisets of objects or strings over O located in the m membranes of μ;
(6) R = {(R1, ρ1), …, (Rm, ρm)}, where Ri, 1 ≤ i ≤ m, are the sets of evolution rules and ρi, 1 ≤ i ≤ m, are priority relations over those rules. Evolution rules are written as x → y, where x is a multiset of objects from O and y = (y1, here)(y2, out)(y3, in_j) or y = (y1, here)(y2, out)(y3, in_j)δ, with y1, y2, y3 strings over O. A rule may dissolve the membrane in which it is applied; the symbol indicating dissolution is δ, where δ ∉ O;
(7) i_out is the output membrane, in whose region the final result will be available; i_out takes a value from 1 to m.

By applying a rule x → y in membrane i, the multiset x is consumed in membrane i.
On the right-hand side of the rule, if y contains, for instance, (e, here), then object e is produced in membrane i. If y contains (f, out), object f is sent to the outer membrane that encloses membrane i. If y contains (g, in_j), object g
13 A Cuckoo Search Algorithm Inspired from Membrane Systems
311
is sent to the inner membrane j. If y contains δ, membrane i is dissolved, and its contents, both objects and membranes, become members of the enclosing membrane. The degree of a rule x → y is the number of objects in the multiset x, i.e., on the left-hand side of the rule. When only one object appears on the left-hand side, the rule is called non-cooperative, because no objects need to cooperate with each other to produce other objects.
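The rule semantics described above can be sketched in a few lines. This is an illustrative toy interpreter, not the chapter's formalism: the function name, the dict-of-Counters representation, and the example rule are all our assumptions.

```python
from collections import Counter

def apply_rule(regions, i, lhs, rhs, parent, dissolve=False):
    """Apply x -> y in membrane i. regions: label -> Counter of objects.
    lhs: multiset consumed; rhs: list of (object, target) with target
    'here', 'out', or ('in', j); dissolve models the delta symbol."""
    if any(regions[i][o] < n for o, n in Counter(lhs).items()):
        return False                       # rule not applicable
    regions[i].subtract(Counter(lhs))      # consume x in membrane i
    for obj, target in rhs:
        if target == "here":
            regions[i][obj] += 1           # produce in the same membrane
        elif target == "out":
            regions[parent[i]][obj] += 1   # send to the enclosing membrane
        else:                              # ("in", j): send into inner membrane j
            regions[target[1]][obj] += 1
    if dissolve:                           # delta: contents move to the parent
        regions[parent[i]].update(regions.pop(i))
    return True

regions = {1: Counter(), 2: Counter("aab")}
parent = {2: 1}
# non-cooperative rule of degree 1: a -> (b, here)(c, out)
apply_rule(regions, 2, "a", [("b", "here"), ("c", "out")], parent)
print(regions[2]["b"], regions[1]["c"])   # 2 1
```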
2.2 Tissue-Like Membrane Computing Model

A membrane system with multiple cells within the same environment is called a tissue-like membrane computing model [29]. This system emulates the interaction between cells in tissues: the cells are linked to each other and can communicate directly. A tissue-like membrane system is a construct π = (O, E, ω1, ..., ωm, R1, ..., Rm, i_out), where:
1. m is the degree of the tissue-like membrane system, equal to the number of cells in the model;
2. O is the set of objects;
3. E ⊆ O is the set of objects available in the environment;
4. ω1, ..., ωm are the multisets of objects over O initially placed in the m cells;
5. R1, ..., Rm are the sets of transformation and communication rules of regions 1, ..., m:
• transformation rules u → v: a multiset of objects u in cell i is consumed and a multiset of objects v is produced in cell i;
• communication rules (i, x/y, j), for i, j ∈ {0, 1, 2, ..., m}, i ≠ j, and x, y ∈ O;
6. i_out ∈ {0, 1, 2, ..., m} is the cell (or the environment) in which the final output will be available.

In a tissue-like membrane system with m elementary cells, the cells are labeled 1 to m; the label 0 denotes the environment, and i_out designates the output cell or the environment. ωi refers to the strings in cell i, and E ⊆ O is the multiset of objects in the environment; many copies of an object can be available in each cell. By a rule (i, x/y, j), cells i and j exchange information: the multiset of objects x in cell i is exchanged with the multiset y in cell j. Notably, i or j can be zero, which means a cell exchanges objects with the environment. When more than one rule can use an object, one rule is chosen non-deterministically.
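The communication rule (i, x/y, j) described above can be sketched as a swap of multisets between two cells, with label 0 standing for the environment. The function name and the Counter representation are our illustrative assumptions.

```python
from collections import Counter

def communicate(cells, i, x, j, y):
    """Apply (i, x/y, j): move multiset x from cell i to cell j and y back.
    Label 0 denotes the environment."""
    x, y = Counter(x), Counter(y)
    if any(cells[i][o] < n for o, n in x.items()):
        return False                       # x not available in cell i
    if any(cells[j][o] < n for o, n in y.items()):
        return False                       # y not available in cell j
    cells[i].subtract(x); cells[j].update(x)
    cells[j].subtract(y); cells[i].update(y)
    return True

cells = {0: Counter("eee"), 1: Counter("ab"), 2: Counter("cd")}
communicate(cells, 1, "a", 2, "c")     # cell 1 trades a for c with cell 2
communicate(cells, 2, "a", 0, "e")     # i or j may be 0: exchange with env
print(sorted(cells[1].elements()), sorted(cells[2].elements()))
```

After the two rule applications, cell 1 holds {b, c} and cell 2 holds {d, e}, while the object a has moved to the environment.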
3 Parallel Processing

Parallel processing can be used to accelerate the solution of computational problems. Different approaches can be used to provide the computational resources: a computer with multiple processors, or processors distributed over a network and belonging to different computers [30]. The advantages of parallel processing are as follows:
312
A. Maroosi
1. It saves time, because the speed of processing is increased. It is also possible to build a powerful processing unit from a cluster of cheap processing devices, which saves costs as well.
2. It makes it possible to solve problems that need so much memory or time that they cannot be solved with conventional computational resources.
3. Parallel processing platforms give the user the chance to simulate a system in a parallel way.
4. Parallel processing makes it possible to use distributed resources at different locations in a network.

One approach for designing a parallel program is the Foster methodology [31], which proceeds as follows:
1. Partitioning: the data and computation are partitioned into tasks as small as possible.
2. Communication: proper communication structures and precise algorithms are developed based on the necessary communications.
3. Agglomeration: tasks are merged into larger tasks when the two previous stages could not find a structure or task granularity giving the required performance.
4. Mapping: appropriate approaches are chosen for assigning tasks to processors, so as to decrease communication costs while maximizing processor usage.

In this work, we tried to minimize communication between membranes by appropriately scheduling the communication times. In other words, the rate of inter-membrane communication is very low with respect to the internal evolution of the membranes. Parallel processing platforms have different structures based on the number of simultaneous instruction streams and data streams; they are listed below [32, 33]:
• Single Instruction, Single Data stream (SISD): there is no parallelism in this architecture, in either instructions or data.
• Single Instruction, Multiple Data streams (SIMD): in this structure, the same instructions are applied to different data by the processing units, as in GPUs.
• Multiple Instruction, Single Data stream (MISD): multiple instructions operate on a single data stream.
• Multiple Instruction, Multiple Data streams (MIMD): in this architecture, both data and instructions differ for each processing unit, as in distributed systems or multicore processors. In this study, the MIMD approach is used to improve the membrane-inspired cuckoo search algorithm.
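The MIMD idea used later in the chapter can be sketched with standard process-level parallelism: each worker process runs its own instruction stream on its own data. The objective function, worker count, and all names below are illustrative placeholders, not the chapter's implementation.

```python
import random
from multiprocessing import Pool

def local_search(seed, iters=200):
    """Independent instruction stream + independent data per process:
    a trivial greedy search minimizing f(x) = x^2."""
    rng = random.Random(seed)
    best = float("inf")
    x = rng.uniform(-5, 5)
    for _ in range(iters):
        cand = x + rng.gauss(0, 0.5)
        if cand * cand < best:          # keep the move only if it improves f
            best, x = cand * cand, cand
    return best

if __name__ == "__main__":
    # four workers, each with different data, like four membranes later on
    with Pool(4) as pool:
        results = pool.map(local_search, range(4))
    print(min(results))
```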
4 Membrane-Inspired Evolutionary Algorithms

Optimization algorithms inspired by biological behaviors are called bio-inspired algorithms. Membrane-inspired evolutionary algorithms are a kind of bio-inspired algorithm that simulates the processes of living cells.
Previous studies showed that membrane systems are non-deterministic and Turing complete, and that their workspace can be expanded exponentially in linear time [34]. This means these systems can solve NP-hard problems in linear time. The first approach to using membrane systems as evolutionary algorithms was introduced by Nishida [35]; in that study, a cell-like membrane system with a tabu search was used to solve the traveling salesman problem. A tissue-like membrane system was used in [36] to solve the vertex cover problem, and a cell-like membrane system was used in [37] to find a solution for a storage problem. The advantages of membrane systems for solving optimization problems, compared against other parallel optimization algorithms, are analyzed in [38]. A quantum-inspired evolutionary algorithm based on the membrane system structure was used in [39] to solve the knapsack problem; communication rules play an important role in the improvement of this algorithm. Pheromone rules, besides transformation and communication rules, are applied in a membrane-inspired ant colony optimization for the traveling salesman problem [40]. A cell-like membrane-inspired algorithm is used for estimating the parameters of a proton exchange membrane fuel-cell model [41]. In [42], a hybrid membrane-inspired algorithm was analyzed with respect to the balance between exploration and exploitation; the results show that the membrane computing approach can achieve a proper balance between the exploration and exploitation stages. Manufacturing parameter optimization problems were solved by a class of tissue-like membrane systems in [42]. A combination of membrane computing and the Honey Bee Mating algorithm was used in [43] to solve the channel assignment problem; its success rate on channel assignment problems is better than that of the original Hybrid Honey Bee Mating algorithm. A hybrid of a membrane computing model and a bee algorithm was used in [44] for intrusion detection in computer networks.
Results on a well-known data set show the superiority of the hybrid membrane-inspired bee algorithm in comparison with the original bee algorithm. Usual bio-inspired algorithms, such as the bat algorithm, were unsuccessful at large-scale feature selection for face recognition; a bio-inspired feature selection method based on membrane computing was used in [45] to maintain diversity and keep the balance between exploration and exploitation for face recognition. An approach based on membrane systems was deployed for the virtual network embedding problem to improve the solution quality while keeping diversity [34]; in this approach, a P system with non-elementary active membranes is used to achieve nested parallelism. A membrane evolutionary artificial potential field was proposed in [46] for the path planning problem in robots, where multisets of parameters with biochemically inspired rules are applied to minimize the path length. A membrane evolutionary algorithm with four rules, selection, division, fusion, and cytolysis, was employed in [47] to solve the maximum clique problem.
5 Cuckoo Search Optimization Algorithm

The cuckoo search optimization algorithm is explained in detail in [9, 48, 49]. The main steps of the cuckoo search algorithm are as follows:
1. Only the best nests with high-quality eggs are carried over to the next generation. The number of host nests is constant.
2. An egg laid by a cuckoo can be identified as a foreign egg by the host bird with probability pd ∈ [0, 1]. In this case, the host bird has two options: either discard the egg or abandon the nest. A fraction pd of the n nests is replaced by new nests with new solutions.

The egg quality corresponds to the solution quality in optimization problems. In other words, the eggs in a nest represent solutions, and the goal is to replace poor-quality eggs by new solutions (new eggs) that are better. Clearly, this can be extended to more complex cases in which each nest has more than one egg, representing a set of solutions. When a new solution is produced, a Lévy flight is performed and only the high-quality nests are carried to the next step. The new solution x_i^{t+1} for cuckoo i is generated by the following Lévy flight:

x_i^{t+1} = x_i^t + α ⊕ Lévy(λ)    (2)
where α is a positive value that indicates the step length. This step length should depend on the scale of the problem; in most cases it can be taken as one. Formula (2) is stochastic in nature: the next position depends on the current position (the first term of (2)) and on the transition probability (the second term of (2)). The magnitude of the step is drawn from the Lévy distribution, which has infinite variance and infinite mean:

Lévy ∼ u = t^(−λ), λ ∈ (1, 3]    (3)

The pseudocode of the cuckoo search algorithm is as follows:

Start of the algorithm
  Generate n host nests x_i, i = 1, ..., n
  Calculate the fitness of the solutions f(x), x = (x_1, x_2, ..., x_u)
  Repeat while t < MaxGeneration
    Get a cuckoo randomly by a Lévy flight (generate solution i)
    Calculate the cost function of the i-th solution
    Select a nest from the n existing nests at random (say, the j-th nest)
    If the cost of the i-th solution is better than the cost of the j-th then
      Replace solution j by solution i
    A fraction pd of the worst nests is discarded and new ones are generated
    Sort the solutions and keep the current best
  End of loop
End of the algorithm
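The pseudocode above can be turned into a compact working sketch. The Lévy steps below use Mantegna's algorithm with exponent β = 1.5; the sphere objective, parameter values, and all names are our illustrative assumptions, not the chapter's exact implementation.

```python
import math, random

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, pd=0.25, alpha=0.01, max_gen=300, seed=0):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in nests]
    for _ in range(max_gen):
        i = rng.randrange(n)                       # cuckoo i flies via Levy
        new = [xi + alpha * levy_step(rng=rng) for xi in nests[i]]
        j = rng.randrange(n)                       # pick a random nest j
        if f(new) < cost[j]:                       # replace j only if improved
            nests[j], cost[j] = new, f(new)
        # abandon the worst pd fraction of nests and build fresh ones;
        # the best nests survive, since only the worst are touched
        order = sorted(range(n), key=cost.__getitem__, reverse=True)
        for k in order[: int(pd * n)]:
            nests[k] = [rng.uniform(-5, 5) for _ in range(dim)]
            cost[k] = f(nests[k])
    best = min(range(n), key=cost.__getitem__)
    return nests[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = cuckoo_search(sphere, dim=4)
print(fx)   # best objective value found
```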
6 Parallelizing Cuckoo Search with Inspiration from Membrane Systems

The proposed system includes m membranes. The n individuals are distributed among the membranes as objects, and each membrane simulates a cuckoo search algorithm. Operations on the objects inside different membranes can be performed simultaneously on different processors. The proposed membrane-inspired cuckoo algorithm is as follows:
Input parameters: maximum number of iterations (MaxIter), maximum number of inter-membrane information exchanges (MaxExch)
Repeat the external loop while the iteration count is not greater than MaxIter
  For membranes 1 to m, simultaneously on different processing cores:
    Set the inner iteration counter to zero
    Repeat the inner loop while the inner iteration count is less than MaxExch
      Create a new solution by a cuckoo flight following the Lévy distribution
      Calculate the cost function of the new solution
      Choose a nest at random
      Replace the chosen solution if the new solution has a better cost
      A fraction pd of the worst nests is discarded and new ones are generated
      Sort the solutions and choose the best solution
      Increase the inner iteration counter
    End of inner loop
  Exchange information between membranes
  Increase the external iteration counter
End of external loop

In the proposed algorithm, n is the number of solutions (nests) and m is the number of membranes. The n solutions (objects) are divided among the m membranes, so there are n/m objects in each membrane, and the objects of each membrane are processed individually on a separate processing core. The maximum number of iterations is divided by the number of times data is exchanged between membranes. The membranes then send their objects to the main processing core, and the main processing core returns a combination of objects from different membranes to each membrane; in this way, information is exchanged between the membranes. This operation is repeated for the maximum number of data exchanges between membranes, which is specified by the user. The exchanges follow the communication rules (i, u/v, j) of the membrane system, where i, j ∈ {1, 2, ..., m}, i ≠ j, and membranes i and j exchange the objects u = I_t^i and v = I_t^j.
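The scheme above can be sketched as a sequential island-style loop; in a real deployment each membrane's inner loop would run on its own core (e.g. via multiprocessing). The Gaussian step in place of a Lévy flight, the rotation-based exchange, the sphere objective, and all names are our simplifying assumptions.

```python
import random

def run_membrane(nests, costs, f, iters, rng, pd=0.25, alpha=0.05):
    """One membrane: plain cuckoo-search steps on its own sub-population.
    Gaussian steps stand in for Levy flights to keep the sketch short."""
    n = len(nests)
    for _ in range(iters):
        i = rng.randrange(n)
        new = [x + alpha * rng.gauss(0, 1) for x in nests[i]]
        j = rng.randrange(n)
        if f(new) < costs[j]:
            nests[j], costs[j] = new, f(new)
        worst = max(range(n), key=costs.__getitem__)   # abandon a worst nest
        if rng.random() < pd:
            nests[worst] = [rng.uniform(-5, 5) for _ in range(len(new))]
            costs[worst] = f(nests[worst])

def membrane_cuckoo(f, dim, n=40, m=4, max_iter=400, max_exch=4, seed=1):
    rng = random.Random(seed)
    pops = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n // m)]
            for _ in range(m)]
    costs = [[f(x) for x in pop] for pop in pops]
    for _ in range(max_exch):
        # the m membranes are independent here and could run on separate cores
        for k in range(m):
            run_membrane(pops[k], costs[k], f, max_iter // max_exch, rng)
        # exchange: the main core hands membrane k the t-th individual of
        # membrane (k + t) mod m, so every membrane sees the others' objects
        for t in range(n // m):
            ring = [(pops[(k + t) % m][t], costs[(k + t) % m][t])
                    for k in range(m)]
            for k in range(m):
                pops[k][t], costs[k][t] = ring[k]
    return min(min(c) for c in costs)

sphere = lambda x: sum(v * v for v in x)
print(membrane_cuckoo(sphere, dim=4))
```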
Here the object u = I_t^i is the t-th solution in the i-th membrane, and similarly for v. In this study, four membranes are considered, and a cuckoo search algorithm runs in each membrane, as shown in Fig. 3. During the information exchange, the first membrane keeps its first individual and exchanges its second individual with the second individual of the second membrane; its third and fourth individuals are exchanged with the corresponding individuals of the third and fourth membranes (see Fig. 4).
Fig. 3 The structure of the proposed membrane-inspired cuckoo search algorithm (membranes 1-4, each running a cuckoo search)
Fig. 4 Individuals of the membranes after the exchange of individuals between membranes
In other words, for the first membrane (i = 1), the individuals I_t^1 with t = 4k + 2 are exchanged with the t-th solutions of the second membrane (I_t^2); for t = 4k + 3 and t = 4k + 4, the individuals I_t^1 are exchanged with I_t^3 and I_t^4, respectively. This pattern of exchange lets every membrane use information from all the other membranes. However, the rules of membrane systems enable other approaches to exchanging information as well.
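The pairwise swap pattern of Fig. 4 for m = 4 membranes can be written out directly. The function name and list-of-lists layout are ours; the swap pairs follow the description above: for t = 4k+2 membranes (1,2) and (3,4) swap individual t, for t = 4k+3 pairs (1,3) and (2,4), for t = 4k+4 pairs (1,4) and (2,3), and individuals with t = 4k+1 stay put.

```python
def exchange(pops):
    """pops[i][t] is individual t of membrane i+1 (0-indexed), m = 4."""
    pairs = {1: [(0, 1), (2, 3)],     # t = 4k+2 -> swap 1<->2 and 3<->4
             2: [(0, 2), (1, 3)],     # t = 4k+3 -> swap 1<->3 and 2<->4
             3: [(0, 3), (1, 2)]}     # t = 4k+4 -> swap 1<->4 and 2<->3
    for t in range(len(pops[0])):
        for a, b in pairs.get(t % 4, []):
            pops[a][t], pops[b][t] = pops[b][t], pops[a][t]
    return pops

# label individuals "I<t><membrane>" as in Fig. 4 and apply one exchange
pops = [[f"I{t + 1}{i + 1}" for t in range(4)] for i in range(4)]
exchange(pops)
print(pops[0])   # ['I11', 'I22', 'I33', 'I44']
```

After the exchange, membrane 1 holds one individual from each membrane, matching the first column of Fig. 4.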
7 Simulation and Results

In this work, the introduced membrane-inspired cuckoo search algorithm is evaluated on well-known optimization problems. First, the Colville problem is solved using the proposed algorithm. The Colville problem is defined as follows:

f(x̄) = 100(x1² − x2)² + (1 − x1)² + 90(x3² − x4)² + (1 − x3)²
       + 10.1((1 − x2)² + (1 − x4)²) + 19.8(x2 − 1)(x4 − 1)    (4)

In Eq. (4), −18 ≤ xi ≤ 10, i = 1, 2, 3, 4; the optimal solution is (1, 1, 1, 1), where the value of the cost function is zero. The problem is simulated on a computer with an Intel Core i5 at 2.5 GHz and 4 GB of RAM. Simulations have been carried out for both the conventional cuckoo algorithm and the proposed membrane-inspired algorithm. The proposed algorithm has four membranes, the number of nests in each population (membrane) is 30, and the number of information exchanges between membranes during the run is 4. The discovery rate of alien eggs/solutions is 0.25 and the Lévy exponent (β) is 1.5. The results show that the proposed algorithm with four membranes performs better than the conventional algorithm. Note that the membranes run simultaneously on different processing cores, so the algorithm can be run on parallel equipment. As shown in Fig. 5, the standard deviation of the conventional
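The Colville objective of Eq. (4) is straightforward to implement and check against its known minimum f(1, 1, 1, 1) = 0:

```python
def colville(x):
    """Colville function, Eq. (4); global minimum 0 at (1, 1, 1, 1)."""
    x1, x2, x3, x4 = x
    return (100 * (x1**2 - x2)**2 + (1 - x1)**2
            + 90 * (x3**2 - x4)**2 + (1 - x3)**2
            + 10.1 * ((1 - x2)**2 + (1 - x4)**2)
            + 19.8 * (x2 - 1) * (x4 - 1))

print(colville([1, 1, 1, 1]))   # 0.0
```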
Fig. 5 Comparison of the proposed membrane-inspired cuckoo search algorithm and the usual cuckoo search algorithm for Colville problem
cuckoo search algorithm is higher than that of the proposed method. This shows that the proposed membrane-inspired cuckoo algorithm performs better than the conventional algorithm.

Besides the Colville problem, the proposed approach was evaluated on ten optimization functions of the CEC05 benchmark [50], including five unimodal benchmark functions (F1-F5) and five multimodal benchmark functions (F6-F10) with different local optima, as follows:
Unimodal functions: Shifted Sphere (F1), Shifted Schwefel 1.2 (F2), Shifted Rotated Elliptic (F3), Shifted Schwefel 1.2 with Noise (F4), and Schwefel 2.6 (F5).
Multimodal functions: Shifted Rosenbrock (F6), Shifted Rotated Griewank (F7), Shifted Rotated Ackley (F8), Shifted Rastrigin (F9), and Shifted Rotated Rastrigin (F10).

Simulations have been done for the proposed algorithm and the conventional algorithm on benchmark functions F1-F10. The simulation results in Table 1 show that in most cases the proposed algorithm finds better solutions, with smaller means and standard deviations. The results for F1-F5 show the exploitation ability of the proposed algorithm, because these functions are unimodal; the results for F6-F10 indicate its exploration ability. The simulation results thus show that the proposed algorithm performs better on both unimodal and multimodal benchmark functions. Each simulation was run 10 times. Figure 6 illustrates the solutions for F1 with dimension equal to 10, and Figs. 7 and 8 show the results for F5 and F10. These figures visualize the results of the proposed algorithm against the usual algorithm and show the superiority of the proposed algorithm.

Table 1 The best, mean, and standard deviation (Std) values of the proposed algorithm and the conventional cuckoo search algorithm for F1-F10 with dimension equal to 10

|     | Original cuckoo search algorithm |          |          | Membrane-inspired cuckoo search algorithm |          |          |
|     | Best     | Mean     | Std      | Best     | Mean     | Std      |
| F1  | 12.72    | 56.3     | 28.2     | 9.75     | 29.6     | 11.6     |
| F2  | 181      | 545      | 289      | 160      | 451      | 114      |
| F3  | 1.03E+06 | 3.34E+06 | 1.86E+06 | 5.54E+05 | 1.72E+06 | 7.88E+05 |
| F4  | 914      | 1700     | 503.7942 | 454.2238 | 1010.616 | 432.0741 |
| F5  | 1548     | 2084     | 465      | 1079     | 1558     | 345      |
| F6  | 5.49E+05 | 2.99E+07 | 5.11E+07 | 6.62E+05 | 2.67E+06 | 1.71E+06 |
| F7  | 2.17     | 3.02     | 0.63     | 1.84     | 2.27     | 0.35     |
| F8  | 20.32    | 20.5     | 0.11     | 20.1     | 20.4     | 0.11     |
| F9  | 26.11    | 41.6     | 8.52     | 28.7     | 36.6     | 4.30     |
| F10 | 42.61    | 55.5     | 8.13     | 37.4     | 48.1     | 8.44     |
Fig. 6 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F1 with dimension equal to 10
Fig. 7 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F5 with dimension equal to 10
To verify the performance of the proposed membrane-inspired evolutionary algorithm, it was also applied to a higher dimension of the benchmark optimization functions. Table 2 presents the results for the 10 benchmark functions with dimension equal to 30. As can be seen in Table 2 and Figs. 9, 10, and 11, the improvement of the proposed algorithm over the conventional algorithm is more noticeable for higher-dimensional problems than for lower-dimensional ones. For example, for F1 with dimension 10 the Std values are 11 and 28 for the proposed and conventional algorithms, while for F1 with dimension 30 the Std values are 25 and 93, respectively.
Fig. 8 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F10 with dimension equal to 10
Table 2 The best, mean, and standard deviation (Std) values of the proposed algorithm and the conventional cuckoo search algorithm for F1-F10 with dimension equal to 30

|     | Original cuckoo search algorithm |          |          | Membrane-inspired cuckoo search algorithm |          |          |
|     | Best     | Mean     | Std      | Best     | Mean     | Std      |
| F1  | 130      | 256      | 93.5     | 75.0     | 110      | 25.3     |
| F2  | 1.37E+04 | 2.43E+04 | 5.29E+03 | 1.21E+04 | 1.58E+04 | 1.96E+03 |
| F3  | 6.66E+07 | 1.13E+08 | 3.57E+07 | 4.95E+07 | 7.98E+07 | 2.04E+07 |
| F4  | 3.70E+04 | 4.75E+04 | 7.64E+03 | 2.39E+04 | 3.35E+04 | 6.78E+03 |
| F5  | 9.23E+03 | 1.09E+04 | 1.24E+03 | 8.40E+03 | 8.93E+03 | 4.06E+02 |
| F6  | 1.00E+10 | 1.00E+10 | 0.00E+00 | 1.00E+10 | 1.00E+10 | 0.00E+00 |
| F7  | 2.98E+03 | 2.98E+03 | 4.79E−13 | 8.96E+00 | 2.69E+03 | 9.41E+02 |
| F8  | 21.0     | 21.0     | 0.03     | 20.9     | 21.0     | 0.05     |
| F9  | 192      | 221      | 13.4     | 180      | 203      | 15.4     |
| F10 | 238      | 272      | 26.8     | 204      | 228      | 13.1     |
8 Conclusions

Membrane evolutionary algorithms are successful algorithms for solving optimization problems. The cuckoo search algorithm, whose mechanism is inspired by nature, has been applied to many optimization problems. Membrane systems introduce parallel and distributed structures that can increase the diversity of such algorithms. The cuckoo search algorithm has a simple mechanism with few parameters; by an appropriate combination of membrane systems and the cuckoo search algorithm,
Fig. 9 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F1 with dimension equal to 30
Fig. 10 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F5 with dimension equal to 30
we can obtain the advantages of both membrane systems and cuckoo search as optimization algorithms. The hybrid membrane-inspired cuckoo search algorithm yielded better results than the conventional cuckoo search algorithm.
Fig. 11 Comparison between the usual cuckoo search algorithm and membrane-inspired cuckoo search algorithm for F10 with dimension equal to 30
References 1. Blum C, Roli A (2003) Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput Surv (CSUR) 35(3):268–308 2. Yang X-S (2009) Harmony search as a metaheuristic algorithm. In: Music-inspired harmony search algorithm. Springer, pp 1–14 3. Dey N (2017) Advancements in applied metaheuristic computing. IGI Global 4. Dey N, Ashour AS, Bhattacharyya S (2019) Applied nature-inspired computing: algorithms and case studies. Springer 5. Singh G, Gupta N, Khosravy M (2015) New crossover operators for real coded genetic algorithm (RCGA). In: 2015 International conference on intelligent informatics and biomedical sciences (ICIIBMS). IEEE, pp 135–140 6. Gupta N, Khosravy M, Patel N, Senjyu T (2018) A bi-level evolutionary optimization for coordinated transmission expansion planning. IEEE Access 6:48455–48477 7. Gupta N, Khosravy M, Patel N, Sethi I (2018) Evolutionary optimization based on biological evolution in plants. Procedia Comput Sci 126:146–155 8. Gupta N, Patel N, Tiwari BN, Khosravy M (2018) Genetic algorithm based on enhanced selection and log-scaled mutation technique. In: Proceedings of the future technologies conference. Springer, pp 730–748 9. Yang X-S, Deb S (2009) Cuckoo search via Lévy flights. In: 2009 world congress on nature & biologically inspired computing (NaBIC). IEEE, pp 210–214 10. Sethi R, Panda S, Sahoo BP (2015) Cuckoo search algorithm based optimal tuning of PID structured TCSC controller. In: Computational intelligence in data mining, vol 1. Springer, pp 251–263 11. Binh HTT, Hanh NT, Dey N (2018) Improved cuckoo search and chaotic flower pollination optimization algorithm for maximizing area coverage in wireless sensor networks. Neural Comput Appl 30(7):2305–2317 12. Li Z, Dey N, Ashour AS, Tang Q (2018) Discrete cuckoo search algorithms for two-sided robotic assembly line balancing problem. Neural Comput Appl 30(9):2685–2696 13. 
Chakraborty S, Dey N, Samanta S, Ashour AS, Barna C, Balas M (2017) Optimization of non-rigid demons registration using cuckoo search algorithm. Cogn Comput 9(6):817–826
14. Jaime-Leal JE, Bonilla-Petriciolet A, Bhargava V, Fateen S-EK (2015) Nonlinear parameter estimation of e-NRTL model for quaternary ammonium ionic liquids using cuckoo search. Chem Eng Res Des 93:464–472 15. Wong PK, Wong KI, Vong CM, Cheung CS (2015) Modeling and optimization of biodiesel engine performance using kernel-based extreme learning machine and cuckoo search. Renewable Energy 74:640–647 16. Maroosi A, Muniyandi RC (2013) Membrane computing inspired genetic algorithm on multicore processors. J Comput Sci 9(2):264 17. Maroosi A, Muniyandi RC (2013) Accelerated simulation of membrane computing to solve the n-queens problem on multi-core. In: International conference on swarm, evolutionary, and memetic computing. Springer, pp 257–267 18. Maroosi A, Muniyandi RC (2014) Accelerated execution of P systems with active membranes to solve the N-queens problem. Theoret Comput Sci 551:39–54 19. Maroosi A, Muniyandi RC (2014) Enhancement of membrane computing model implementation on GPU by introducing matrix representation for balancing occupancy and reducing inter-block communications. J Comput Sci 5(6):861–871 20. Maroosi A, Muniyandi RC (2013) Membrane computing inspired genetic algorithm on multicore processors. JCS 9(2):264–270 21. Maroosi A, Muniyandi RC, Sundararajan E, Zin AM (2014) Parallel and distributed computing models on a graphics processing unit to accelerate simulation of membrane systems. Simul Model Pract Theory 47:60–78 22. Maroosi A, Muniyandi RC, Sundararajan E, Zin AM (2016) A parallel membrane inspired harmony search for optimization problems: a case study based on a flexible job shop scheduling problem. Appl Soft Comput 49:120–136 23. Maroosi A, Muniyandi RC, Sundararajan EA, Zin AM (2013) Improved implementation of simulation for membrane computing on the graphic processing unit. Procedia Technol 11:184– 190 24. Ravie C, Ali M (2015) Enhancing the simulation of membrane system on the GPU for the n-queens problem. 
Chin J Electron 24(4):740–743 25. García-Quismondo M, Levin M, Lobo D (2017) Modeling regenerative processes with membrane computing. Inf Sci 381:229–249 26. Paun G (2010) Membrane computing. Scholarpedia 5(1):9259 27. Bianco L (2007) Membrane models of biological systems. PhD thesis, University of Verona 28. Păun G (2000) Computing with membranes. J Comput Syst Sci 61(1):108–143 29. Martín-Vide C, Păun G, Pazos J, Rodríguez-Patón A (2003) Tissue P systems. Theor Comput Sci 296(2):295–326 30. Barney B (2010) Introduction to parallel computing. Lawrence Livermore Nat Lab 6(13):10 31. Foster I (1995) Designing and building parallel programs. Addison Wesley Publishing Company 32. Duncan R (1990) A survey of parallel computer architectures. Computer 23(2):5–16 33. Flynn M (1972) Some computer organizations and their effectiveness. IEEE Trans Comput 100(9):948–960 34. Yu C, Lian Q, Zhang D, Wu C (2018) PAME: evolutionary membrane computing for virtual network embedding. J Parallel Distrib Comput 111:136–151 35. Nishida TY (2004) An application of P system: a new algorithm for NP-complete optimization problems. In: Proceedings of the 8th world multi-conference on systems, cybernetics and informatics, pp 109–112 36. Niu Y, Wang Z, Xiao J (2015) A uniform solution for vertex cover problem by using time-free tissue P systems. In: Bio-inspired computing-theories and applications. Springer, pp 306–314 37. Leporati A, Pagani D (2006) A membrane algorithm for the min storage problem. In: Membrane computing. Springer, pp 443–462 38. Zaharie D, Ciobanu G (2006) Distributed evolutionary algorithms inspired by membranes in solving continuous optimization problems. In: Membrane computing. Springer, pp 536–553
39. Zhang G-X, Gheorghe M, Wu C-Z (2008) A quantum-inspired evolutionary algorithm based on P systems for knapsack problem. Fundamenta Informaticae 87(1):93–116 40. Zhang G, Cheng J, Gheorghe M (2011) A membrane-inspired approximate algorithm for traveling salesman problems. Roman J Inform Sci Technol 14(1):3–19 41. Yang S, Wang N (2012) A novel P systems based optimization algorithm for parameter estimation of proton exchange membrane fuel cell model. Int J Hydrogen Energy 37(10):8465–8476 42. Zhang G, Cheng J, Gheorghe M, Meng Q (2013) A hybrid approach based on differential evolution and tissue membrane systems for solving constrained manufacturing parameter optimization problems. Appl Soft Comput 13(3):1528–1542 43. Ali M, Muniyandi RC (2013) A hybrid membrane computing and honey bee mating algorithm as an intelligent algorithm for channel assignment problem. In: Proceedings of the eighth international conference on bio-inspired computing: theories and applications (BIC-TA). Springer, pp 1021–1028 44. Idowu RK, Maroosi A, Muniyandi RC, Othman ZA (2013) An application of membrane computing to anomaly-based intrusion detection system. Procedia Technol 11:585–592 45. Alsalibi B, Venkat I, Al-Betar MA (2017) A membrane-inspired bat algorithm to recognize faces in unconstrained scenarios. Eng Appl Artif Intell 64:242–260 46. Orozco-Rosas U, Montiel O, Sepúlveda R (2019) Mobile robot path planning using membrane evolutionary artificial potential field. Appl Soft Comput 77:236–251 47. Guo P, Wang X, Zeng Y, Chen H (2019) MEAMCP: a membrane evolutionary algorithm for solving maximum clique problem. IEEE Access 7:108360–108370 48. Tuba M, Subotic M, Stanarevic N (2011) Modified cuckoo search algorithm for unconstrained optimization problems. In: Proceedings of the 5th European conference on European computing conference. World Scientific and Engineering Academy and Society (WSEAS), pp 263–268 49. Yang X-S, Deb S (2010) Engineering optimisation by cuckoo search. 
Int J Math Modelling Numer Optim 1(4):330–343 50. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, Tiwari S (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL report