Recent Advances in Soft Computing and Cybernetics (Studies in Fuzziness and Soft Computing, 403) 3030616584, 9783030616588

This monograph is intended for researchers and professionals in the fields of computer science and cybernetics.


English Pages 327 [314] Year 2021


Table of contents :
Contents
Soft Computing
Recurrence Plot and Convolutional Neural Networks for Terrain Classification Using Energy Consumption of Multi-legged Robots
1 Introduction
2 Convolutional Neural Network
3 Methodology
3.1 Time-Series Classification Problem
3.2 Proposed Methodology
4 Experimental Results
4.1 Data Description
4.2 Experimental Setting
4.3 Results
5 Conclusions and Future Work
References
On Performance Evaluation of Distributed System Size Estimation Executed by Average Consensus Weights
1 Introduction
2 Theoretical Background
2.1 Model of Average Consensus Algorithm for Distributed System Size Estimation
2.2 Analyzed Weights of Average Consensus Algorithm
3 Research Methodology
4 Experiments and Discussion
5 Conclusion
References
Determination of Air Jet Shape with Complex Methods Using Neural Networks
1 Introduction
2 Initial Status of the Research
3 Development of New Methods
3.1 Conditional Thresholding
3.2 Conditional Thresholding of Continuous Detection
3.3 Method of Two Experts
3.4 Conditional Thresholding of the Merged Results of Two Experts
3.5 Smoothness of the Border
4 Conclusion
Reference
Evaluation of Permutation-Based Mutation Operators on the Problem of Automatic Connection Matching in Closed-Loop Control System
1 Introduction
2 Problem Description
3 Mutation Operators
4 Experiment Results
5 Conclusion
References
Optimization of Deposition Parameters of a DLC Layer Using (RF) PECVD Technology
1 Introduction
1.1 Design of Experiment (DoE)
2 Experimental Setup
3 Results
3.1 Hardness of Deposited DLC Coatings
3.2 Design of Experiment and a Regression Model
3.3 Scratch Test
4 Conclusions and Discussion
References
Modelling of Magnetron TiN Deposition Using the Design of Experiment
1 Introduction
1.1 Design of Experiment (DoE)
2 Experimental Setup
3 Results
3.1 Thickness Measurement by XRR
3.2 Hardness Measurement of TiN Coating
3.3 Statistical Analysis
4 Conclusions and Discussion
References
Solvent Optimization of Transferred Graphene with Rosin Layer Based on DOE
1 Introduction
1.1 State of the Arts
1.2 Statistics
2 Experimental Setup and Material
3 Measurement of Residual Contamination of Graphene and DOE
3.1 Description of the DOE Evaluation Procedure
4 Statistical Evaluation
5 Conclusion and Discussion
References
Statistical Analysis of the Width of Kerf Affecting the Manufacture of Minimal Inner Radius
1 Introduction
1.1 Technology of Wire Electrical Discharge Machining
1.2 Statistics
2 Experimental Setup and Material
3 Methodology of Measuring the Width of Kerf
4 Statistical Evaluation
5 Conclusion and Discussion
References
Many-objective Optimisation for an Integrated Supply Chain Management Problem
1 Introduction
2 Preliminaries
2.1 Non-dominated Sorting Genetic Algorithm-III
2.2 Performance Metrics
3 Problem Description
3.1 Problem Formulation
4 Preliminary Experiments
4.1 Experimental Setup
4.2 Problem Instances
4.3 Parameter Tuning of NSGA-III
5 Computational Results
6 Conclusions
References
A Classification-Based Heuristic Approach for Dynamic Environments
1 Introduction
2 The Proposed Method
2.1 Change Characterization
2.2 Mutation Rate Selection
2.3 Using Sentinels as Solutions
3 Experiments
3.1 Experimental Design
3.2 The Experiments for Component Analysis of the Proposed Method
3.3 The Experiments for Comparison with Similar Methods in Literature
4 Conclusion and Future Work
References
Does Lifelong Learning Affect Mobile Robot Evolution?
1 Introduction
2 Related Work
3 The Algorithm
3.1 Reward Functions
4 Experiments and Results
5 Discussion
6 Conclusion and Future Work
References
Cover Time on a Square Lattice by Two Colored Random Walkers
1 Introduction
2 The Two-Colored-Random-Walker Model (2CRW)
2.1 Random Walking and Coloring
3 Fitness of intelligence sequences combinations on a Square lattice
4 Features of a Fit Intelligence Sequence Combination
4.1 A Subsection Sample
4.2 The Neutrality of R and B
5 Discussion
5.1 Two General Rules for a Fit Sequence Combination
5.2 Extension to Complex Networks
5.3 Optimization with Genetic Algorithm
6 Conclusion
References
Transition Graph Analysis of Sliding Tile Puzzle Heuristics
1 Introduction
1.1 A Simple Example—5-puzzle Transition Graph
2 Detailed Analysis of the State Space
3 Conclusions
References
Stochastic Optimization of a Reinforced Concrete Element with Regard to Ultimate and Serviceability Limit States
1 Introduction
2 Deterministic Approach
3 Stochastic Approach
4 Example
5 Solution Algorithm
5.1 Probability Assessment
5.2 Calculation Initialization
5.3 Regression Analysis
5.4 Iteration
5.5 Heuristic Algorithm
6 Calculation Results
7 Conclusions
References
Popular Optimisation Algorithms with Diversity-Based Adaptive Mechanism for Population Size
1 Introduction
2 Diversity-Based Adaptation of Population Size
3 Algorithms in Experiments
4 Experiments and Results
5 Conclusion
References
Training Sets for Algorithm Tuning: Exploring the Impact of Structural Distribution
1 Introduction
2 Related Work
3 Experiment Preliminaries
3.1 A Simple Algorithm and Its Tuner
3.2 Training and Test Sets
4 Experiments and Results
4.1 Evaluation Approach
4.2 Uniform/Gaussian Experiments
5 Discussion and Future Work
References
Choosing an Optimal Size Range of the Product ``Pipe Clamp''
1 Introduction
2 Size Range Optimization
2.1 Stage 1. Choice of Main Parameters
2.2 Stage 2. Determining Demand
2.3 Stage 3. Choice of Optimality Criterion
2.4 Stage 4. Determining the Functional Relationship Between the Optimality Criterion and the Influencing Factors
2.5 Stage 5. Establishing a Mathematical Model
2.6 Stage 6. Choice of a Mathematical Method
2.7 Stage 7. Algorithmic and Software Development
2.8 Stage 8. Solving the Problem—Choosing an Optimal Size Range
2.9 Stage 9. Sensitivity Analysis of the Optimal Solution
3 Conclusion
References
Cybernetics
Data Aggregation in Mobile Wireless Sensor Networks Represented as Stationary Edge-Markovian Evolving Graphs
1 Introduction
2 Related Work
3 Mathematical Model of Average Consensus Algorithm Over Stationary Edge-Markovian Evolving Graphs
4 Applied Research Methodology
5 Experimental Section and Discussion
6 Conclusion
References
The Simple Method of Automatic Measurement of Air Infiltration in the Smart Houses
1 Introduction
2 Contribution
3 Verification
4 Resume
Reference
Evaluation of Surface Hardness Using Digital Microscope Image Analysis
1 Introduction
2 Materials and Methods
2.1 International Standards for the Evaluation of Rockwell Hardness Using a Microscope
2.2 Measuring Instruments
2.3 Microscope Image Analysis Software
2.4 Measurement and Evaluation Procedure
3 Application of Microscope Image Analysis Software and Results
4 Discussion
5 Conclusion
References
Eigenfrequency Identification of the Hydraulic Cylinder
1 Introduction
1.1 Brief Review of the Relay Feedback Identification Methods
2 Analysis of the Identified System
3 Identification Using the Self-excited Oscillations
4 Application of the Method
5 Conclusions
References
Robotic System for Mapping of the Environment for Industry 4.0 Applications
1 Introduction
2 HW and SW Design of the Robotic System
2.1 Hardware Design
2.2 Software Design of the Mobile Robotic System
3 Map of the Environment
3.1 Mapping of the Environment Using Sensor Head
3.2 Detection of Full Cycle in the Space Based on the Angle of Rotation
4 Achieved Results
5 Conclusion
References
Modification of the Teichmann Model of Population Evacuation in Conditions of Shuttle Transport
1 Introduction
2 Model
3 Modified Model for Shuttle Transport
4 Conclusion
References
Interpersonal Internet Messaging Prospects in Industry 4.0 Era
1 Introduction
1.1 Electronic Mail Still Prevails in Internet Messaging
1.2 Differences in E-Mail Service in the Past and Present
1.3 Current Electronic Mail Challenges and Issues
2 Alternative Messaging Services
2.1 Commercial Messengers
2.2 Inter–Process Messaging
3 Research of Messaging Habits and Expectations
3.1 Poll Methodology
3.2 Poll Results
4 Conclusions
References
Classification of Deformed Objects Using Advanced LR Parsers
1 Introduction
1.1 Object Recognition
2 Deformed Object Recognition
2.1 Extending Grammar by Deformation Rules
2.2 LR Parser, Tomita Parser
3 Error Handling Based on a LR Table
3.1 Error Handling Based on a LR Table
3.2 Tomita Parser with Error Correction
4 Algorithm Description
4.1 Additional Automaton Adjustments
4.2 Evaluation of Algorithm Results
5 Conclusion
References
Analog Two Degree of Freedom PID Controllers and Their Tuning by Multiple Dominant Pole Method for Integrating Plants
1 Introduction
2 2DOF PID Controllers
3 Special Cases of 2DOF PID Controllers
4 Multiple Dominant Pole Method for Integrating Plants
5 Conclusions
References

Studies in Fuzziness and Soft Computing

Radek Matoušek, Jakub Kůdela (Editors)

Recent Advances in Soft Computing and Cybernetics

Studies in Fuzziness and Soft Computing Volume 403

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Fuzziness and Soft Computing” contains publications on various topics in the area of soft computing, which include fuzzy sets, rough sets, neural networks, evolutionary computation, probabilistic and evidential reasoning, multi-valued logic, and related fields. The publications within “Studies in Fuzziness and Soft Computing” are primarily monographs and edited volumes. They cover significant recent developments in the field, both of a foundational and applicable character. An important feature of the series is its short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/2941

Radek Matoušek and Jakub Kůdela (Editors)

Recent Advances in Soft Computing and Cybernetics

Editors Radek Matoušek Department of Applied Computer Science Faculty of Mechanical Engineering Institute of Automation and Computer Science Brno University of Technology Brno, Czech Republic

Jakub Kůdela Department of Applied Computer Science Faculty of Mechanical Engineering Institute of Automation and Computer Science Brno University of Technology Brno, Czech Republic

ISSN 1434-9922 ISSN 1860-0808 (electronic) Studies in Fuzziness and Soft Computing ISBN 978-3-030-61658-8 ISBN 978-3-030-61659-5 (eBook) https://doi.org/10.1007/978-3-030-61659-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Contents

Soft Computing

Recurrence Plot and Convolutional Neural Networks for Terrain Classification Using Energy Consumption of Multi-legged Robots
Rickard Hole Falck, Petr Čížek, and Sebastián Basterrech . . . 3

On Performance Evaluation of Distributed System Size Estimation Executed by Average Consensus Weights
Martin Kenyeres, Jozef Kenyeres, and Ivana Budinská . . . 15

Determination of Air Jet Shape with Complex Methods Using Neural Networks
Jiri Stastny, Jan Richter, and Lubos Juranek . . . 25

Evaluation of Permutation-Based Mutation Operators on the Problem of Automatic Connection Matching in Closed-Loop Control System
Vladimir Mironovich, Maxim Buzdalov, and Valeriy Vyatkin . . . 41

Optimization of Deposition Parameters of a DLC Layer Using (RF) PECVD Technology
Tomáš Prokeš, Kateřina Mouralová, Radim Zahradníček, Josef Bednář, and Milan Kalivoda . . . 53

Modelling of Magnetron TiN Deposition Using the Design of Experiment
Radim Zahradníček, Kateřina Mouralová, Tomáš Prokeš, Pavel Hrabec, and Josef Bednář . . . 63

Solvent Optimization of Transferred Graphene with Rosin Layer Based on DOE
Radim Zahradníček, Pavel Hrabec, Josef Bednář, and Tomáš Prokeš . . . 71

Statistical Analysis of the Width of Kerf Affecting the Manufacture of Minimal Inner Radius
Pavel Hrabec, Josef Bednář, Radim Zahradníček, Tomáš Prokeš, and Anna Machova . . . 85

Many-objective Optimisation for an Integrated Supply Chain Management Problem
Seda Türk, Ender Özcan, and Robert John . . . 97

A Classification-Based Heuristic Approach for Dynamic Environments
Şeyda Yıldırım-Bilgiç and A. Şima Etaner-Uyar . . . 113

Does Lifelong Learning Affect Mobile Robot Evolution?
Shanker G. R. Prabhu, Peter J. Kyberd, Wim J. C. Melis, and Jodie C. Wetherall . . . 125

Cover Time on a Square Lattice by Two Colored Random Walkers
Chun Yin Yip and Kwok Yip Szeto . . . 139

Transition Graph Analysis of Sliding Tile Puzzle Heuristics
Iveta Dirgová Luptáková and Jiří Pospíchal . . . 149

Stochastic Optimization of a Reinforced Concrete Element with Regard to Ultimate and Serviceability Limit States
Jakub Venclovský, Petr Štěpánek, and Ivana Laníková . . . 157

Popular Optimisation Algorithms with Diversity-Based Adaptive Mechanism for Population Size
Radka Poláková and Petr Bujok . . . 171

Training Sets for Algorithm Tuning: Exploring the Impact of Structural Distribution
Mashal Alkhalifa and David Corne . . . 183

Choosing an Optimal Size Range of the Product "Pipe Clamp"
Ivo Malakov and Velizar Zaharinov . . . 197

Cybernetics

Data Aggregation in Mobile Wireless Sensor Networks Represented as Stationary Edge-Markovian Evolving Graphs
Martin Kenyeres and Jozef Kenyeres . . . 217

The Simple Method of Automatic Measurement of Air Infiltration in the Smart Houses
Milos Hernych . . . 229

Evaluation of Surface Hardness Using Digital Microscope Image Analysis
Patrik Kutilek, Jan Hejda, Vaclav Krivanek, Petr Volf, and Eva Kutilkova . . . 237

Eigenfrequency Identification of the Hydraulic Cylinder
Petr Noskievič . . . 247

Robotic System for Mapping of the Environment for Industry 4.0 Applications
Emília Bubeníková, Rastislav Pirník, Marián Hruboš, and Dušan Nemec . . . 261

Modification of the Teichmann Model of Population Evacuation in Conditions of Shuttle Transport
Miloš Šeda and Kamil Peterek . . . 279

Interpersonal Internet Messaging Prospects in Industry 4.0 Era
Tomas Sochor and Nadezda Chalupova . . . 285

Classification of Deformed Objects Using Advanced LR Parsers
Lukas Junek and Jiri Stastny . . . 297

Analog Two Degree of Freedom PID Controllers and Their Tuning by Multiple Dominant Pole Method for Integrating Plants
Miluše Vítečková, Antonín Víteček, and Dagmar Janáčová . . . 309

Soft Computing

Recurrence Plot and Convolutional Neural Networks for Terrain Classification Using Energy Consumption of Multi-legged Robots

Rickard Hole Falck, Petr Čížek, and Sebastián Basterrech

Abstract In this article, we analyze a Machine Learning model for classifying the energy consumption of a multi-legged robot over different terrains. We introduce a system based on three popular techniques: Recurrence Plot (RP), Convolutional Neural Network (CNN), and Dimensionality Reduction. We use RP to transform the energy consumption of the robot into grayscale 2D images. Due to computational restrictions, we apply a linear dimensionality reduction technique to project the images into a smaller feature space. The CNN is applied to classify the images and predict the terrain. We present results using several CNN architectures on real data obtained on six types of terrain. Keywords Terrain classification · Convolutional neural networks · Recurrence plot · Multi-legged robot · Locomotion control

1 Introduction

Neural Networks (NNs) are a biologically inspired programming paradigm. In recent years, one family of NNs, the Convolutional Neural Network (CNN), has become very popular in the community due to its very good results in classification problems, mainly its performance for classifying images in the computer vision area [1]. The CNN model has a special architecture and a specific training design. In the learning process, the structure has

R. H. Falck, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway. e-mail: [email protected]
P. Čížek and S. Basterrech, Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic. e-mail: [email protected], [email protected]


some shared parameters, so the training process is faster than for other large NNs [2]. One of the reasons that CNNs do well in computer vision is that the model does not flatten the input images but processes the original image as it is. This ensures that it views the data points in the image with regard to their surroundings. In this article, we apply a CNN to a terrain classification problem. Locomotion control and automatic prediction of the terrain by robots is an old and complex problem in robotics [3, 4]. We have information about the terrain provided by a multi-legged walking robot. The goal is to identify the terrain with as little information from the environment as possible. Therefore, we propose to recognize the terrain using only information concerning the energy consumption of the robot, since limited sensing is a common restriction in robotics and embedded systems [5]. The energy consumption is a streaming time-series, so our problem reduces to developing a machine learning method for time-series classification. We present an approach based on the power of CNNs for classifying images. First, we use the Recurrence Plot (RP) [6] as a signal filter to transform the unidimensional signals into grayscale 2D images. We create chunks of the energy consumption signal and transform them into 2D images. One of the problems in computer vision is that image processing can take a lot of space and require long processing times. Therefore, once we have generated the 2D images, we project them into a smaller space using a dimensionality reduction technique [7]. After that, we apply the CNN to classify the terrains. We evaluate the developed system over several types of terrain using real data. The developed model presents good results when the classification is made between two very different classes of terrain. The rest of the article is organized as follows. The next section describes the CNN model. Section 3 presents the proposed system. Experimental results are introduced in Sect. 4. We end in Sect. 5 with a discussion about future research lines.

2 Convolutional Neural Network

A Neural Network (NN) can be represented as a parallel distributed system (a directed graph) [8]. We refer to a layer as a set of nodes that are all at the same depth in the graph. There are three categories of layers: input, hidden, and output layers. The input layer is where the input pattern is fed into the network, and the output layer produces the model results. These two layers are the only ones that interact with the environment. The hidden layers perform non-linear transformations of the input patterns. The network can have an arbitrary number of hidden layers; usually this number is set empirically. During the last decades, the CNN model has become very popular in the pattern recognition and knowledge representation areas [1, 9]. The model is a feedforward NN (a directed graph without circuits) with a specific architecture and training design that makes convolutional networks unique. It consists of three different types of layers: convolutional layers, pooling layers, and fully-connected node layers.


The convolutional layer consists of several convolutional maps. Each map takes one (or more) image(s) as input, and this input is projected onto a feature map. The projection is done using a weighted linear combination of the input nodes and an activation function. The most popular activation functions are the RELU, sigmoid, and hyperbolic tangent functions [10]. The map takes a subset of the input, of the same size as the weights, applies the propagation function to it, and assigns the result to one output node. The subsets of input nodes overlap each other; such a subset is called a kernel.

After the convolutional layer, the model has a pooling layer, which also consists of several maps. A pooling map takes a single convolutional map as input and reduces its dimensions. The reduction is done by taking a set of input nodes and applying a function to them that results in a single value. Most often the pooling function is the max-value or mean-value. In the pooling layer, the sets of inputs to each of the nodes do not overlap; because of this, the input is reduced according to the size of each set of input values. If we use a mask of 2 × 2 nodes, our output will be 1/4th of the input size.

The last type of layer in the CNN architecture is the fully connected layer. This comes after the convolutional and pooling layer(s). A fully connected layer performs an activation function over a linear combination; it is the common operation in traditional feedforward NNs. The last fully connected layer, often called the output layer, gives the final output of the network in our wanted format (a vector). We can also apply a softmax function or any other activation function [10].

A characteristic of the model is that there are shared weights. The model contains fewer variables than a traditional feedforward NN of comparable size. This reduced number of variables contributes to a faster training process. Figure 1 illustrates a standard example of a CNN, where the first layer contains several convolution operations, then pooling layers are applied, and finally a feedforward NN produces the output. We train the CNN using the Backpropagation method [11]. The Backpropagation algorithm works as follows: we first make a prediction; after the prediction is made, we calculate the error of the prediction with respect to the correct answer. The error is then propagated backwards through the network, and during this backward pass the changes for all the variables are calculated, one layer at a time. The change is calculated based on each variable's involvement in the

Fig. 1 General architecture of a CNN


prediction. After the changes of the variables in all the layers have been calculated, the variables are updated with the corrections. More details about the CNN, its variations, and how to train it can be found in [1, 2, 9, 10].
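To make the architecture described above concrete, the following minimal sketch builds a CNN with two convolution and pooling pairs, one fully-connected layer, and a softmax output, matching architecture A from Table 2 (5 × 5 kernels, 6 and 12 maps, 1024 fully-connected nodes, 2 × 2 pooling). The framework used by the authors is not stated, so TensorFlow/Keras and the hyperparameter choices here are assumptions for illustration only.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn(num_classes, dropout_rate=0.6, learning_rate=0.1):
        # Two convolution + pooling pairs, as in the description above.
        model = models.Sequential([
            layers.Input(shape=(28, 28, 1)),                      # reduced grayscale RP image
            layers.Conv2D(6, (5, 5), padding="same", activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(12, (5, 5), padding="same", activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(1024, activation="relu"),                # fully-connected layer
            layers.Dropout(dropout_rate),
            layers.Dense(num_classes, activation="softmax"),      # one node per terrain class
        ])
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model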

3 Methodology

3.1 Time-Series Classification Problem

The time-series classification problem consists of creating a learning model from a collection of labelled training time-series [12]. Given a set of time-series s_1, s_2, ..., s_T, each s_i has ordered real-valued observations and an assigned class C_i, i = 1, ..., Q. The goal is to find a parametric mapping between an element from the space of signals and the space of possible classes.

3.2 Proposed Methodology

In this article, we develop a new Machine Learning approach for classifying streaming time-series data. In this preliminary work, the goal is to develop a method for terrain recognition using as little information as possible. Hence, we consider only the most relevant variable involved in the energy cost of the robot during its gait, that is, the electric current data. Our working hypothesis is the following: given the current data of a hexapod robot while it is walking over a certain unknown terrain type, it is possible to develop a learning system able to recognize the terrain. Solving this problem can be very useful for improving the current algorithms for autonomous paths in robotics. However, the problem of analyzing streaming time-series with noise, as is the case for energy consumption time-series, can be very hard.

The proposed approach for classifying time-series using a CNN is as follows. Given a time-series, we split the data into chunks of an arbitrary size (sliding time windows). Then, for each chunk, we transform the sequential data into a 2D image using the Recurrence Plot (RP) [6]. Therefore, given a signal \{s(i)\}_{i=1}^{M}, the corresponding RP is based on the following matrix:

R_{i,j} = \begin{cases} 1, & \text{if } s(i) \approx s(j) \\ 0, & \text{otherwise} \end{cases} \qquad i, j = 1, \ldots, M,

where M is the signal size or the number of considered states, and the model has a parameter ε that defines equality among reals. We say that s(i) ≈ s(j) if their distance is lower than the arbitrary value ε. There are several variations of RP; the differences among them are in the type of distance function and the epsilon value, for


Fig. 2 Work-flow diagram of our proposed approach. The input signal is the energy consumption signal in a time-window, for example in the last M time steps

more details see [6]. In our case, we consider values in the range [0, 255] instead of binary values in the RP matrix, so each RP matrix represents a grayscale 2D image. We compute the distances among pairs of points and then normalize them to [0, 255]. In our specific problem, we have Q classes of signals. The signal type is given by the category of the terrain. Note that, in this preliminary work, the terrain category is labeled by an external expert. Therefore, the problem is a time-series classification using a supervised learning approach. In future works, we expect to conduct experiments without labels given by external experts, and, furthermore, to insert incremental learning concepts into the learning model. We split the electric current data into chunks of M time-step points and then create the RP matrices. Therefore, for each signal of size L, we obtain trunc(L/M) images, where trunc(·) is the truncation function. Each image has an associated class. Note that representing a unidimensional signal using 2D images increases the learning data size. As a consequence, once we generate the RP matrices, we apply dimensionality reduction over them, which considerably reduces the number of features and the data size. A dimensionality reduction technique represents a set of high-dimensional points in a lower-dimensional space. Each high-dimensional point is represented by a latent low-dimensional point, in such a way that the layout of the latent points is a "good" representation of the layout of the original space. The mapped space should preserve the main topological characteristics of the original space [7, 13]. We apply a linear projection using PCA to the RP matrices. The space generated by the PCA contains the images that we use as input patterns of the CNN model. Figure 2 illustrates the work-flow of the proposed procedure.
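As a rough illustration of this pipeline, the sketch below cuts a current signal into windows, builds a grayscale RP image per window from pairwise distances, and compresses the flattened images with PCA. The window length, the placeholder signal, and the way the reduced features are reshaped into 28 × 28 CNN inputs are assumptions, since those details are not fully specified above.

    import numpy as np
    from sklearn.decomposition import PCA

    def recurrence_plot_grayscale(window):
        # Pairwise distance matrix of a 1D window, rescaled to [0, 255].
        s = np.asarray(window, dtype=float)
        d = np.abs(s[:, None] - s[None, :])                  # |s(i) - s(j)| for all i, j
        return 255.0 * (d - d.min()) / (d.max() - d.min() + 1e-12)

    M = 100                                                  # assumed window size (time steps)
    current = np.random.rand(100_000)                        # placeholder for the measured current signal
    chunks = [current[k * M:(k + 1) * M] for k in range(len(current) // M)]
    images = np.stack([recurrence_plot_grayscale(c) for c in chunks])

    # Linear dimensionality reduction of the flattened RP images; the reduced
    # vectors are reshaped to 28 x 28 "images" used as CNN input patterns.
    reduced = PCA(n_components=28 * 28).fit_transform(images.reshape(len(images), -1))
    cnn_inputs = reduced.reshape(len(images), 28, 28)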

4 Experimental Results

4.1 Data Description

The data has been collected in a computational robotics laboratory.¹ The experiment consisted of a multi-legged walking robot (hexapod) traversing a three-meter-long path over several types of surfaces. An adaptive motion gait has been utilized for the locomotion [14]. Six types of terrain were evaluated, and three experiments were conducted on each of them. In each experiment, we recorded the relative position of the hexapod walking robot and its power consumption, which was based on current
¹ Center

for Robotics and Autonomous Systems, FEL, CVUT: https://robotics.fel.cvut.cz/cras/.


and voltage. The types of terrain were categorized by an external expert. There are two main categories: flat and rough terrains. The rough area was composed of irregular 10 cm × 10 cm wooden cubes with different heights and slopes. We conducted several experiments over this wooden surface, covering it with three types of surfaces: a PVC flooring, a turf-like carpet, and a semi-transparent soft black fabric. The experiments over a flat surface were also of several types: we conducted the experiments over the flat ground covered by a PVC flooring, a turf-like carpet, and a semi-transparent soft black fabric. We categorize the surfaces with the following labels and codes: black flat (0), black rough (1), wooden cubes (2), flat (3), grass flat (4), and grass rough (5). In the practical experiments, we did not only try to categorize the type of surface, but we also assembled the surfaces into groups and tried to categorize these groups. We denote by CP1 the problem of classifying the 6 types of terrain. Another classification problem was to categorize Flat versus Rough: we present two groups, namely {0, 3, 4} and {1, 2, 5}, and we denote this problem as CP4. We also tried a different variation of this, where we removed items with either label "wooden cubes" or "flat" (we denote this problem CP5). Another problem that we tried was to group "grass" and "black" against each other; here we had the groups {0, 1} and {4, 5} (we denote this problem as CP3). We also tried an expanded version of this where we introduced a new category consisting of {2, 3}, so we had in total 3 categories to distinguish (CP2). Table 1 summarizes the analyzed classification problems. Figure 3 shows examples of the RP matrix visualization; this type of image is the input pattern of the CNN model.

Table 1 Specification of the categorization problems

  Categorization problem | Type          | Labels  | Categories
  CP 1                   | Multivariable | 0       | Black flat
                         |               | 1       | Black rough
                         |               | 2       | Cubes
                         |               | 3       | Flat
                         |               | 4       | Grass flat
                         |               | 5       | Grass rough
  CP 2                   | Multivariable | 0, 1    | Black
                         |               | 4, 5    | Grass
                         |               | 2, 3    | Uncovered
  CP 3                   | Binary        | 0, 1    | Black
                         |               | 4, 5    | Grass
  CP 4                   | Binary        | 0, 3, 4 | Flat
                         |               | 1, 2, 5 | Rough
  CP 5                   | Binary        | 0, 4    | Flat
                         |               | 1, 5    | Rough


Fig. 3 Example of the RP matrix visualization. From the left to right side, the images represent the energy when the terrains are: black flat, wooden cubes, grass flat, grass rough

4.2 Experimental Setting

In our experiments, we used a reduced image with a size of 28 × 28 as input for our CNN. We divide the data set randomly into training and test sets; we have chosen to use 90% of the data as training data and the remaining 10% as test data to evaluate the model. We experimented with five different architectures, namely architectures A, B, C, D, and E. To keep the comparison between the different architectures fair, we used the exact same training and test sets, as well as the same number of steps, dropout rate, and batch size; these had the values 28 128, 0.6, and 1, respectively. We experimented with several learning rates, which were the following: 1e−01, 1e−03, 1e−05, and 1e−07. The CNN has 2 pairs of convolutional and pooling layers, a layer of fully connected nodes, and an output layer. The number of maps in the 2 pairs of convolution and pooling layers, the number of nodes in the fully-connected layer, and the sizes of both the convolution and pooling kernels are specified for each of our architectures, as can be seen in Table 2. The number of output nodes depends on the number of classes.

4.3 Results

In Fig. 4 we compare the results obtained using learning rates of 0.1 and 1e−07. We compare the different architectures and the different data sets/types of classification. This gives an impression of how the different architectures and classification problems perform relative to each other. In Table 3 we present the results of our tests; the different solutions of the same classification problem are grouped together. We present the average (Mean) of each of the different solutions together with the variance (Var.) of this average value. The results show that for some of the binary CPs we can get decent results that are close to 90% accuracy. However, for the multivariable CPs our results are far worse. CP 1 is the CP that gives us the worst results; the accuracy is around 38%, which is not great, but still far better than pure guessing, which would statistically give an accuracy around 17%, due to


Table 2 Characteristics for the different architectures

  Architecture | Kernel size | Maps in 1st layer | Maps in 2nd layer | Fully-connected nodes | Pooling size
  A            | 5 × 5       | 6                 | 12                | 1024                  | 2 × 2
  B            | 5 × 5       | 6                 | 12                | 3072                  | 2 × 2
  C            | 5 × 5       | 6                 | 12                | 1024                  | 2 × 2
  D            | 5 × 5       | 64                | 64                | 1024                  | 4 × 4
  E            | 10 × 10     | 24                | 48                | 2048                  | 4 × 4

Fig. 4 Sensitivity analysis using different architectures in the five types of categorization problems


Table 3 Summary of the experimental results

  Learning rate          0.1             0.001           1e−05           1e−07
  Architecture           Mean    Var     Mean    Var     Mean    Var     Mean    Var
  CP 1  A                0.37    0.0332  0.387   0.0281  0.3849  0.0277  0.3849  0.0277
        B                0.3619  0.0345  0.3747  0.0269  0.3755  0.0269  0.3755  0.0269
        C                0.3794  0.0367  0.402   0.0286  0.4015  0.0281  0.4015  0.0281
        D                0.3619  0.0030  0.3841  0.0166  0.3841  0.0166  0.3841  0.0166
        E                0.3794  0.0332  0.4062  0.0354  0.4062  0.0337  0.4062  0.0337
  CP 2  A                0.6405  0.0546  0.6752  0.0365  0.6752  0.0365  0.6752  0.0365
        B                0.6502  0.0361  0.6678  0.0367  0.6678  0.0367  0.6678  0.0367
        C                0.6789  0.0431  0.6817  0.0402  0.6817  0.0402  0.6817  0.0402
        D                0.6519  0.0500  0.6596  0.0416  0.6601  0.0406  0.6601  0.0406
        E                0.6496  0.0406  0.6550  0.0423  0.6544  0.0407  0.6544  0.0407
  CP 3  A                0.8457  0.0907  0.8531  0.0776  0.8526  0.0785  0.7430  0.1293
        B                0.8302  0.0874  0.8491  0.0771  0.8485  0.0755  0.7057  0.1782
        C                0.8480  0.0897  0.8508  0.0857  0.8514  0.0849  0.6959  0.1872
        D                0.8279  0.1156  0.8353  0.1075  0.8353  0.1075  0.6592  0.2143
        E                0.8313  0.1096  0.8434  0.0963  0.8434  0.0963  0.6701  0.2090
  CP 4  A                0.5513  0.0437  0.5232  0.0643  0.5820  0.0340  0.5823  0.0343
        B                0.5496  0.0317  0.4950  0.0454  0.5573  0.0519  0.5556  0.0497
        C                0.5402  0.0342  0.5183  0.0501  0.5709  0.0421  0.5686  0.0396
        D                0.5627  0.0402  0.5442  0.0262  0.5700  0.0218  0.5743  0.0265
        E                0.5649  0.0188  0.5689  0.0249  0.5936  0.0151  0.5928  0.0142
  CP 5  A                0.8584  0.0131  0.8735  0.0045  0.8748  0.0049  0.8748  0.0049
        B                0.8455  0.0235  0.8705  0.0064  0.8731  0.0060  0.8731  0.0060
        C                0.8597  0.0164  0.8744  0.0053  0.8748  0.0049  0.8748  0.0049
        D                0.7823  0.0096  0.8412  0.0121  0.8442  0.0141  0.8451  0.0148
        E                0.7831  0.0261  0.8434  0.0044  0.8455  0.0074  0.8460  0.0076

it being 6 categories. In these experiments we do not see any large advantage from the different learning rates and architectures. However, we used the same number of steps in the training of all the different data sets, learning rates, and architectures; it might be that some of these variations demanded a higher or lower number of steps and were therefore not allowed to reach their potential. How we changed the data set for the different experiments can also have had an influence; when we


removed 2 of the categories, we more or less halved our training and test data size. As we used the same number of steps, this means that each of the objects in the smaller data sets was used in training more often, and we had less variance, all of which could have contributed to the better results.

5 Conclusions and Future Work

We developed a system for terrain recognition that only uses the energy consumption of a multi-legged robot during its gait. The system obtained good accuracy when the objective was a binary type of classification, for example rough versus flat terrain. The system is based on the good properties of Convolutional Neural Networks for classification problems. In addition, we used the Recurrence Plot as a filter for creating the input patterns of the CNN. Even though the system only considers the energy consumption signal, the reached accuracy was relatively high even with small CNNs. On the other hand, when the problem was to classify the terrain in more detail than the binary classification, the reached accuracy was unsatisfactory. Therefore, for classifying many types of terrain it is necessary to provide the learning system with more input variables. In spite of that, our proposal has some advantages: it can deal with streaming data and it uses only energy consumption. We believe that classifying the terrain using only this information can be very helpful for robotic locomotion problems. Future work might include more experimenting with the CNN architecture design and applying other types of dimensionality reduction techniques.

Acknowledgements This work has been supported by the Czech Science Foundation (GACR) under research project No. 18-18858S.

References

1. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)
2. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)
3. Mcghee, R.B., Iswandhi, G.I.: Adaptive locomotion of a multilegged robot over rough terrain. IEEE Trans. Syst. Man Cybern. 9(4), 176–182 (1979)
4. Plagemann, C., Mischke, S., Prentice, S., Kersting, K., Roy, N., Burgard, W.: Learning predictive terrain models for legged robot locomotion. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3545–3552, Sept 2008
5. Čížek, P., Faigl, J.: On localization and mapping with RGB-D sensor and hexapod walking robot in rough terrains. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2273–2278 (2016)
6. Marwan, N., Romano, M.C., Thiel, M., Kurths, J.: Recurrence plots for the analysis of complex systems. Phys. Rep. 438, 237–329 (2007)
7. Borg, I., Groenen, P.J.F.: Modern Multidimensional Scaling: Theory and Applications, 2nd edn. Springer (2005)


8. Rumelhart, D.E., Hinton, G.E., McClelland, J.L.: A general framework for parallel distributed processing. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1 of Computational Models of Cognition and Perception, Chapter 2, pp. 45–76. MIT Press, Cambridge, MA (1986)
9. Nielsen, M.A.: Neural Networks and Deep Learning. Determination Press (2015)
10. Bengio, Y.: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009)
11. Werbos, P.J.: Generalization of backpropagation with application to a recurrent gas market model. Neural Netw. 1(4), 339–356 (1988)
12. Bagnall, A., Lines, J., Hills, J., Bostrom, A.: Time-series classification with COTE: the collective of transformation-based ensembles. IEEE Trans. Knowl. Data Eng. 27, 2522–2535 (2015)
13. Lee, J.A., Peluffo-Ordóñez, D.H., Verleysen, M.: Multi-scale similarities in stochastic neighbour embedding: reducing dimensionality while preserving both local and global structure. Neurocomputing 169, 246–261 (2015)
14. Mur-Artal, R., Tardós, J.D.: ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 33(5), 1255–1262 (2017)

On Performance Evaluation of Distributed System Size Estimation Executed by Average Consensus Weights Martin Kenyeres, Jozef Kenyeres, and Ivana Budinská

Abstract The size of a system is fundamental information for the effective and proper operation of many distributed algorithms. This paper addresses the average consensus algorithm for distributed system size estimation, or more specifically, a comparative study of five of its frequently applied weights. We use four different methodologies in order to find the best-performing weights in terms of the asymptotic convergence factor, the per-step convergence factor, their associated convergence times, the mean square error, and the convergence rate over 30 random geometric graphs with varying leaders and either with or without bounded execution. Keywords Distributed computing · Average consensus · System size estimation

1 Introduction

The information about the system size n is crucial for many distributed algorithms [1]. In the literature, a lot of mechanisms requiring this information (or at least its approximated value, i.e., an estimate) can be found, e.g., distributed hash tables, gossip-based algorithms, time division multiple access schemes, etc. [1]. In many applications, the system size n is required to be known by each entity beforehand [2]. It is not a trivial task to both effectively estimate the system size and ensure a high robustness of the estimation process [1]. As a consequence, various approaches to obtain this information (or its estimate) have appeared. One of them is extrema propagation, which is a fault-tolerant technique estimating the system size in a distributed manner at a fast rate [1]. Moreover, this approach can be used to estimate the

M. Kenyeres and I. Budinská, Institute of Informatics, Slovak Academy of Sciences, Dúbravská cesta 9, Bratislava, Slovakia. e-mail: [email protected], URL: http://www.ui.sav.sk/w/en/dep/mcdp/
J. Kenyeres, Sipwise GmbH, Europaring F15, 2345 Brunn am Gebirge, Austria. e-mail: [email protected]


network diameter. Another approach is tagging, where the entities in a system are allocated a unique identity number and locally store tables containing these numbers [2]. A similar approach is ordered numbering, based on exchanging and storing the maximal value [2]. Probability approaches presented in [3] find a wide application, especially in large-scale networks. They can be based on various principles such as random walks, randomly generated numbers, a capture-recapture concept, probabilistic counting, etc. [3]. The modern solutions are also based on distributed orthogonalization without any a-priori knowledge of system entity preferences [2]. As found in the literature [4–6], a modification of the well-known average consensus algorithm for distributed averaging is also frequently applied. This algorithm finds a wide range of applications in various areas such as wireless sensor networks, the Internet of Things, blockchains, quantum networks, etc. [6–12]. In this paper, we focus our attention on the average consensus algorithm for distributed system size estimation. We choose five weights frequently applied for this algorithm (namely, the Maximum Degree, the Metropolis-Hastings, the Local Degree, the Best Constant, and the Optimized Convex weights) for evaluation. Our goal is to examine the performance of the selected weights over random geometric graphs (RGGs) using four methodologies. In the literature, a paper focused on a complex comparative study of the average consensus algorithm for distributed system size estimation is missing, which motivates us to carry out this research. The next section addresses theoretical insights into the examined area, i.e., we present mathematical tools to describe distributed systems, the average consensus algorithm, and the analyzed weights. The third section is focused on the research methodology, i.e., we introduce the applied methodologies and define the used metrics. The last section is concerned with the experimentally obtained results, an overall comparison of the analyzed weights, and a discussion about the observed phenomena.

2 Theoretical Background

2.1 Model of Average Consensus Algorithm for Distributed System Size Estimation

A system executing the average consensus algorithm can be modelled as an undirected finite graph G formed by the vertex set V and the edge set E (i.e., G = (V, E)) [5, 13, 14]. Here, the set V consists of all the vertices in a graph, which represent the particular entities in the inspected system. The set E contains all the edges, whose existence between two corresponding vertices v_i and v_j indicates their direct mutual connectivity¹ (labelled as e_ij). Subsequently, we can define the set N_i gathering all the adjacent vertices of the corresponding vertex v_i as N_i = {v_j : e_ij ∈ E}. We assume that each entity stores and updates (at each iteration) only one scalar value x_i(k).

distance between them is one hop.

On Performance Evaluation of Distributed System …

17

Furthermore, we assume that the beginning of the algorithm is labeled as k = 0. In this phase, each entity takes one of these two values: “1” or “0” [5]. Only one of the entities takes “1”, i.e., it is the one appointed as the leader. In this paper, we investigate two different scenarios: either the entity with the index determined by ID = arg max{| i Ni |} or the entity whose index is δID = arg min{| Ni |} is appointed as the leader. i The inner states of the other entities take “0” (mathematically, this can be expressed as formulae (1) and (2)). xID (0) = 1, xi (0) = 0

for

∀vi ∈ V\{vID }

(1)

xδID (0) = 1, xi (0) = 0

for

∀vi ∈ V\{vδID }

(2)

The average consensus algorithm is based on an iterative neighbor-to-neighbor communication and so can be described using a difference equation as [5]: x(k + 1) = W × x(k)

(3)

Here, W is the weight matrix, whose elements2 determine the convergence rate, the robustness, the convergence/the divergence of the algorithm, the difficulty of the initial configuration etc. [5]. In the average consensus algorithm, each inner state asymptotically converges to the value of the estimated aggregated function (the system size in our case) [5].

2.2 Analyzed Weights of Average Consensus Algorithm This section is concerned with the weights chosen for an evaluation in this paper. The first weights of our interest are the Maximum Degree weights (abbreviated as MD). They require the information about the maximum degree of the graph  for their initial configuration. Their weight matrix WMD can be expressed as the Perron matrix defined as follows [13]:

[W ]MD ij

⎧ 1 ⎪ , if eij ∈ E ⎨  = 1 − di . 1 , if i = j ⎪ ⎩ 0, otherwise

(4)

Note that the equality holds if the inspected graph is not bipartite and regular [14]. Other weights are the Metropolis-Hastings (MH), which require only locally available information for their proper initial configuration. Each entity has to be aware of its degree and the degrees of its neighbors. Therefore, their weight matrix WMH can be defined as [14]: 2 Given

by the chosen weights.

18

M. Kenyeres et al.

[W ]MH ij

=

⎧ ⎪ ⎨ ⎪ ⎩

1

1 , max{di ,dj }+1  MH − i=j [W ]ij ,

if eij ∈ E if i = j otherwise

0,

(5)

The Local Degree weights (LD) are derived from MH by removing one from the denominator [15]. This is assumed to result in a higher convergence rate, but the global information about whether or not the inspected graph is bipartite and regular needs to be available (otherwise, this configuration can lead to the divergence) [14]. Their weight matrix WLD can be composed as [15]:

[W ]LD ij

=

⎧ ⎪ ⎨ ⎪ ⎩

1 , max{di ,dj }

1−



LD i=j [W ]ij ,

0,

if eij ∈ E if i = j otherwise

(6)

The next analyzed weights are the Best Constant weights (BC), which require the exact values of the second smallest λN −1 (L) and the largest λ1 (L) eigenvalue of the corresponding Laplacian matrix L. These weights are considered to be the fastest among the weights describable by a Perron matrix [15]. Their weight matrix W is defined as follows [5]:

[W ]BC ij

=

⎧ ⎪ ⎨ ⎪ ⎩

1−

2 , λ1 (L)+λN −1 (L) di · λ1 (L)+λ2 N −1 (L) ,

0,

if eij ∈ E if i = j

(7)

otherwise

The last weights of our interest are the Optimized Convex (OW). They are proposed such that the fast distributed linear averaging problem is reformulated as the spectral radius (ρ(·)) minimization problem as follows [15]: 1 · 1 × 1T ) n subject to L ∈ S, 1T × W = 1T , W × 1 = 1 minimize

ρ(W −

(8)

W is an optimization variable, and S are the sparsity pattern limits of L [15].

3 Research Methodology We address the chosen methodologies of our research in this section. As mentioned above, we choose four different methodologies, namely:

On Performance Evaluation of Distributed System …

19

Fig. 1 Representative of analyzed random geometric graphs

– The asymptotic convergence factor r asym , the per-step convergence factor r step , and the associated convergence times, τ asym , τ step are averaged over 30 random geometric graphs (RGGs) with 200 entities (see Fig. 1). – The average consensus algorithm is executed without any stopping criterion—the mean square error (MSE) over the iterations averaged over all 30 RGGs with 200 entities is analyzed at the 1th–the 300th iteration. – The execution of the algorithm is bounded by a stopping criterion—the number of the iterations necessary for the entities in a system to achieve the consensus3 averaged over all 30 RGGs with 200 entities is analyzed. The precision of the implemented stopping criterion is high. – The same scenario as the previous one with the difference that the precision of the implemented stopping criterion is low. The first used metric is the asymptotic convergence factor r asym , a measure for a performance evaluation. It can be used, provided that an asymptotic convergence of the algorithm is ensured [15]. This and its associated convergence time τ asym are defined as follows [15]:  rasym (W) = sup lim

x(0)=x k→∞

|| x(k) − x ||2 || x(0) − x ||2

 k1

, τasym =

1 1 log( rasym )

(9)

Another frequently applied metric for analyses of the average consensus algorithm is the per-step convergence factor r step and its associated convergence time τ step defined as [15]: || x(k + 1) − x ||2 1 , τstep = (10) rstep (W) = sup 1 || x(k) − x || log( ) 2 x(k)=x rstep 3A

lower value means a higher convergence rate.

20

M. Kenyeres et al.

Furthermore, we use the mean square error over the iterations (MSE(k)) as a metric for a precision evaluation. It is defined as follows [16]: MSE(k) =

 n  x(0) 2 1 xi (k) − 1T × · n i=1 n

(11)

As mentioned above, two last experiments are focused on bounded execution of the algorithm, i.e., we examine the convergence rate expressed as the number of the iterations necessary for the consensus when the algorithm is stopped at the first iteration when the following condition is met [5]: |max{x(k)} − min{x(k)}| < P

(12)

Here, the parameter P determines the precision of the final estimates. Its higher value ensures a higher precision but at a cost of an algorithm deceleration. We set its value to either 10−4 (high precision) or 10−1 (low precision).

4 Experiments and Discussion In this section, we present the results from numerical experiments performed in Matlab2016a. We compare five frequently quoted weights for the average consensus algorithm using four different methodologies discussed above over 30 RGGs with 200 vertices—in each analysis, only the average value over all these graphs is shown. In Fig. 2, the asymptotic convergence factor r asym , the per-step convergence factor r step , and their associated convergence times τ asym , τ step are shown (a lower value indicates a higher performance). Since all the examined weight matrices are symmetric, r asym = r step ⇐⇒ τ asym = τ step [15]. From the results, it can be seen that OW outperform all the other examined weights. The second best are BC, the third one are LD, the fourth one are MH, and the worst performance is achieved by MD. The next experiment is focused on an analysis of MSE over the first 300 iterations. In Fig. 3 (top figure), the results for all the examined weights are shown when the

Fig. 2 Asymptotic/per-step convergence factor r asym , r step (both pictured left), associated convergences times τ asym , τ step (both pictured right) averaged over 30 RGGs

On Performance Evaluation of Distributed System …

21

Fig. 3 MSE of inspected weights in decibels over first 300 iterations—(top figure): best-connected entity is leader, (bottom figure): worst-connected entity is leader

best-connected entity is the leader. From the results, we can see that MD, MH, and LD significantly outperform BC and OW at earlier iterations (the difference between MD, MH, LD and between BC,OW is just negligible). The best performance at these iterations is achieved by LD, which outperform MD and MH (slightly) over the whole examined interval. Similarly, MH achieve a better performance than MD over the first 300 iterations. Furthermore, OW achieve the best performance at later iterations, i.e., they outperform MD, MH, and LD from approximately the 25th iteration, and BC over the whole examined interval. BC have second smallest MSE at the later iterations, i.e., these weights outperform MD from approximately the 100th iteration, MH from approximiatelly the 200th iteration, and BC from approximiatelly the 230th iteration. At the 300th iteration, OW significantly outperform the other analyzed weights, i.e., their MSE [dB] is equaled to −296.68 dB, meanwhile, second lowest MSE (i.e., MSE of BC) takes −149.21 dB. When the worst-connected entity is the leader (Fig. 3 (bottom figure)), the best performance is achieved by OW over the whole interval (although the difference between all five weights is negligible at


earlier iterations). The second best performing are BC over all the 300 iterations except for approximately the first 10 iterations, when they are outperformed by MH and LD. LD are the third best, and MH the fourth (except on the mentioned short interval in the beginning). The worst performance over the whole interval is achieved by MD. At the 300th iteration, the MSE of OW equals −302.69 dB, while that of BC is −171.65 dB—here, a very high performance of OW is seen again. Two other experiments deal with an analysis of the convergence rate expressed as the iteration number for the consensus when the algorithm is bounded by the stopping criterion defined in (12). In Fig. 4 (left figure), we show the convergence rates when the best-connected entity is appointed as the leader and P = 10−4. We can see that the lowest value (and so, the highest convergence rate) is observed for OW, LD are the second fastest, MH the third, BC the fourth, and MD achieve the slowest convergence rate. In Fig. 4 (right figure), the results obtained with the worst-connected entity as the leader are shown. OW are the fastest again; however, the second fastest are BC, outperforming LD (the third) and MH (the fourth), and MD are the slowest again. In Fig. 5, the results from the experiments with the precision P = 10−1 are shown. From the results shown in Fig. 5 (left figure)—i.e., the best-connected entity is the leader, we can see that LD are the fastest among the examined weights, MH are the second, MD the third (the difference in the convergence rate between these three weights is negligible), OW are the fourth, and BC achieve a significantly lower convergence rate than the concurrent weights. In the case that the worst-connected

Fig. 4 Convergence rate—high precision of stopping criterion (12)—(left figure): best-connected entity is leader, (right figure): worst-connected entity is leader

Fig. 5 Convergence rate—low precision of stopping criterion (12)—(left figure): best-connected entity is leader, (right figure): worst-connected entity is leader


entity is selected as the leader (Fig. 5 (right figure)), OW achieve the highest convergence rate, LD the second, MH the third, BC the fourth, and MD are the slowest weights in this scenario.

5 Conclusion In this paper, we present a comparative study of average consensus weights for distributed system size estimation. We compare five frequently cited weights, namely MD, MH, LD, BC, and OW, in order to find the best performing option in terms of r asym, r step, τ asym, τ step, MSE(k), and the number of iterations for the consensus when the algorithm is bounded by a stopping criterion with a varying precision. We demonstrate our approach on 30 RGGs and appoint either the best- or the worst-connected entity as the leader. It can be seen from the results that OW achieve the highest performance in terms of r asym, r step, τ asym, τ step, and in each scenario when the worst-connected entity is the leader. In the case that the best-connected entity is appointed as the leader, LD outperform the concurrent weights at the earlier iterations or when the precision of the implemented stopping criterion is low. For a higher number of iterations and when the stopping criterion is set to a high precision, the best performance is observable for OW again. Moreover, the performance of MD, MH, LD is decreased (when the algorithm is bounded) when the worst-connected entity is chosen as the leader instead of the best-connected one, while the character of OW and BC performance is the exact opposite. Also, BC are the most significantly affected by the leader selection among the examined weights.

Acknowledgements This work was supported by the VEGA agency under the contract No. 2/0155/19, by CHIST ERA III—Social Network of Machines (SOON), and by CA15140—Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO).

References
1. Cardoso, J.C.S., Baquero, C., Almeida, P.S.: Probabilistic estimation of network size and diameter. In: Proceeding of the 2009 Fourth Latin-American Symposium on Dependable Computing, pp. 33–40. IEEE Press, New York (2009)
2. Sluciak, O., Rupp, M.: Network size estimation using distributed orthogonalization. IEEE Signal Process. Lett. 20(4), 347–350 (2013). https://doi.org/10.1109/LSP.2013.2247756
3. Terelius, O., Varagnolo, M., Johansson, K.H.: Distributed size estimation of dynamic anonymous networks. In: Proceeding of the 51st IEEE Conference on Decision and Control, CDC 2012, pp. 5221–5227. IEEE Press, New York (2012). https://doi.org/10.1109/CDC.2012.6425912
4. Shames, I., Charalambous, T., Hadjicostis, C.N., Johansson, M.: Distributed network size estimation and average degree estimation and control in networks isomorphic to directed graphs. In: Proceeding of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1885–1892. IEEE Press, New York (2012). https://doi.org/10.1109/ICASSP.2014.6854643
5. Kenyeres, M., Kenyeres, J., Skorpil, V., Burget, R.: Distributed aggregate function estimation by biphasically configured metropolis-hasting weight model. Radioengineering 26(2), 479–495 (2017). https://doi.org/10.13164/re.2017.0479
6. Orostica, B., Nunez, F.: Robust gossiping for distributed average consensus in IoT environments. IEEE Access 26(2), 994–1005 (2019). https://doi.org/10.1109/ACCESS.2018.2886130
7. Wu, S., Wei, Y., Gao, Y., Zhang, W.: Robust gossiping for distributed average consensus in IoT environments. Wirel. Netw. 1–7 (2019). https://doi.org/10.1007/s11276-019-01969-w
8. Kenyeres, M., Kenyeres, J., Skorpil, V.: Split distributed computing in wireless sensor networks. Radioengineering 24(3), 749–756 (2015). https://doi.org/10.13164/re.2015.0749
9. Mazzarella, L., Sarlette, A., Ticozzi, F.: Consensus for quantum networks: symmetry from gossip interactions. IEEE Trans. Autom. Control 60(1), 158–172 (2015). https://doi.org/10.1109/TAC.2014.2336351
10. Wan, Y., Yan, J., Lin, Z., Sheth, V., Das, S.K.: On the structural perspective of computational effectiveness for quantized consensus in layered UAV networks. IEEE Trans. Control Netw. Syst. 6(1), 276–288 (2019). https://doi.org/10.1109/TCNS.2018.2813926
11. Mingxiao, D., Xiaofeng, M., Zhe, Z., Xiangwei, W., Qijun, C.: A review on consensus algorithm of blockchain. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2567–2572. IEEE Press, New York (2017). https://doi.org/10.1109/SMC.2017.8123011
12. Yeow, K., Gani, A., Ahmad, R.W., Rodrigues, J.J.P.C., Ko, K.: Decentralized consensus for edge-centric internet of things: a review, taxonomy, and research issues. IEEE Access 6, 1513–1524 (2018). https://doi.org/10.1109/ACCESS.2017.2779263
13. Macua, S.V., et al.: How to implement doubly-stochastic matrices for consensus-based distributed algorithms. In: 8th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 333–336. IEEE Press, New York (2014). https://doi.org/10.13164/re.2017.0479
14. Schwarz, V., Hannak, G., Matz, G.: On the convergence of average consensus with generalized metropolis-hasting weights. In: Proceeding of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 333–336. IEEE Press, New York (2014). https://doi.org/10.13164/re.2017.0479
15. Xiao, L., Boyd, S.: Fast linear iterations for distributed averaging. Syst. Control. Lett. 53(1), 65–78 (2004). https://doi.org/10.1016/j.sysconle.2004.02.022
16. Skorpil, V., Stastny, J.: Back-propagation and k-means algorithms comparison. In: Proceedings of the 2006 8th International Conference on Signal Processing, pp. 1871–1874. IEEE Press, New York (2006). https://doi.org/10.1109/ICOSP.2006.345838

Determination of Air Jet Shape with Complex Methods Using Neural Networks Jiri Stastny, Jan Richter, and Lubos Juranek

Abstract This article deals with the computer evaluation of airflow images. The airflow is visualized by continuous gas fibers, such as smoke, fog or another visible additive. One of the most important properties of airflow is the shape of the stream. The principle of determining the shape of the stream is the detection of the additive. For 2D images with a heterogeneous background, it may be very difficult to distinguish the additive from the environment. This paper deals with the possibility of detecting an additive in airflow images with a heterogeneous background. Artificial neural networks will be used for this purpose. Keywords Visualization · Artificial neural networks · Additive detection · Image processing · Air jet shape · Airflow

1 Introduction The description of flowing air properties is important in the design of ventilation systems, aerodynamics, air conditioning systems, etc. [1]. Current properties are usually measured by measuring devices [2]. A different approach involves optical methods that use optical sensing devices to acquire specific images of airflow, which are then subjected to computer-aided evaluation via image processing per the principles of [3] and [4]. A very useful approach is to use visualization methods. Their principle is the introduction of visible substances into a colorless air stream. This will make the stream visible and, therefore, it is possible to take pictures or videos of such flows.


Additives may take the form of particles such as sparks, carbon blacks or helium bubbles [5]. Another group of additives is one that forms continuous fibers, e.g. smoke [6], fog or colored gas [7]. If the images are arranged in a laboratory environment, their backgrounds can be selected as homogeneous, easily distinguishable from the color of the additive. Evaluation of such images is the aim of [8]. However, if the images are taken under ordinary conditions, the background is generally heterogeneous. Thus, it often reduces the distinctiveness of the additive in the stream from the surrounding area. Humans can be the inspiration for the computer processing of such images. They can detect the shape of the stream, even if the additive is poorly distinguishable from the background. A similar skill will also be required from an artificial neural network.

2 Initial Status of the Research This research uses the knowledge of [9], which describes the detection of the additive in the airflow image by using a multilayer perceptron neural network. According to the near area of a specific point, the network is able to determine whether or not the additive is at this point. R, G and B values of the area pixels are passed to the network. Suitable ambient dimensions are 5 × 5 px or 7 × 7 px. Therefore, the total number of inputs of the neural network is 75 or 147 values. According to these inputs, the network determines whether or not the additive is in the middle pixel of the studied area. The network presents the decision in a single output neuron, where 1 means presence of the additive in the middle point of the area and 0 means absence of the additive. As per [9], it is enough to process the brightness of the surrounding pixels instead of the individual color components for simple images with a homogeneous background and an evident additive. A suitable network for such a purpose has one hidden layer with approximately 20% more neurons than the input layer. A suitable topology is e.g. 25-30-1. However, this article focuses on complex images with a heterogeneous background, less distinguishable from the additive. The use of individual R, G, B color components represents a significant improvement in detection quality. This leads to the need to triple the number of input neurons. In addition to that, source [9] recommends adding one hidden layer for shade processing. For example, a suitable network for 5 × 5 px may have a 75-90-30-1 topology. Literature [10] or [11] speaks in general about neural network settings. The multilayer perceptron neural network uses the approach of learning with a teacher, so it demands learning patterns. In this case, the patterns present examples of areas of additive (requested output value 1) and areas of background (requested output value 0). All the patterns are created by a person, who is in the role of a teacher of the neural network. The same number of 0 and 1 patterns for learning is passed to the network, which will then adapt to them. After the learning process, the neural network is able to provide a similar decision process, as was initially made by the person. So, as the following step, the neural


network decides for each pixel (according to its neighborhood), whether the pixel contains an additive or not. Areas that are assigned as additive are then processed to hold the entire air jet shape. However, the additive may also be detected outside the main stream. This area is not important for assessing the shape of the stream. Conversely, there may be places marked as background within the area of airflow. Either it was identified by an incorrect decision of the network or due to the reduced density of the additive. Therefore, [9] also mentions the smoothing algorithm. This is a simple border construction of the area of the additive. Whatever lies outside the main area is considered to be background. Whatever lies inside is considered an additive. The result of the entire process for the initial image in Fig. 1 is evident from Fig. 2. However, detection by a neural network is not ideal in specific cases. Problem areas are those very similar in color to the additive, which adjoin the area of the air jet. The entire area of airflow is then extended to such areas. Figure 4 demonstrates this phenomenon (the evaluated image is captured in Fig. 3), where a streamed area (visualized by smoke as the additive) has spread to a glossy floor that has a very similar color to the fog. Another improperly marked area is the shiny supply tube or hall illumination. This article defines how to use simple neural network evaluation to develop complex additive detection methods.
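As an illustration of the recommended 75-90-30-1 topology for 5 × 5 px RGB neighborhoods, the following minimal sketch builds such a classifier; the framework (PyTorch), the sigmoid activations and the input scaling are our assumptions, since [9] does not prescribe an implementation:

import torch
import torch.nn as nn

# 75 inputs = 5 x 5 pixels x (R, G, B); single output in [0, 1]: 1 = additive, 0 = background
model = nn.Sequential(
    nn.Linear(75, 90), nn.Sigmoid(),   # first hidden layer (about 20% more neurons than inputs)
    nn.Linear(90, 30), nn.Sigmoid(),   # second hidden layer added for shade processing
    nn.Linear(30, 1), nn.Sigmoid(),    # single output neuron
)

def classify_pixel(neighborhood_rgb):
    # neighborhood_rgb: tensor of 75 values scaled to [0, 1]
    with torch.no_grad():
        return model(neighborhood_rgb).item()   # real-valued response, thresholded later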

Fig. 1 [9] Source for additive detection


Fig. 2 Result of additive detection via neural network in Fig. 1 [9]

Fig. 3 Image with glossy floor, shiny tube and illumination [9]

3 Development of New Methods The following sections describe how to use detectors (neural networks) to determine the most accurate shape of the stream.


Fig. 4 Incorrect extension of airflow area [9]

3.1 Conditional Thresholding It is an additional method, working on the principle of thresholding. This method will be useful in complex procedures for the detection of additives. While simple thresholding sorts pixels into two groups according to a single threshold, conditional thresholding sorts pixels into three groups according to two thresholds, P1 and P2, where P1 < P2. The pixels are grouped according to their value I of the searched phenomenon:
• P2 < I → group (a)—pixels with the certainty of the searched phenomenon,
• P1 < I ≤ P2 → group (b)—pixels with the possibility of the searched phenomenon,
• I ≤ P1 → group (c)—pixels with the exclusion of the searched phenomenon.
The area of the searched phenomenon includes the whole group (a) and the areas of group (b) which directly adjoin with (a). Conditional thresholding can be used for the results of a neural network. The searched phenomenon will then be determined by the value which was set for a particular pixel by the neural network. However, the algorithm can also be used for the direct evaluation of an arranged image when the searched phenomenon is determined directly from pixel brightness. Figure 5 captures the flow visualized by smoke. In addition to the main stream, smoke remains outside the current. Figure 6 captures the result after thresholding per two thresholds. White pixels exceed the upper threshold P2. Values of grey pixels are between the thresholds. Black pixels have a value below the lower threshold P1. Figure 7 shows the result after the subsequent allocation.


Fig. 5 Airflow image with smoke also outside the stream. Source Department of Thermomechanics and Environmental Engineering, BUT, FME [s.a.]

Fig. 6 Thresholding by double threshold


Fig. 7 Result of subsequent allocation

Sample of pseudocode of conditional thresholding:

// split into A, B, C + paint (temporarily for B)
foreach pixel[X,Y] do
  load brightness of the pixel
  if brightness > P2 then
    mark this point as group A
    set white color to the pixel
  elseif brightness > P1 then
    mark this point as group B
    set black color to the pixel (may be overwritten)
  else
    mark this point as group C
    set black color to the pixel
  endif

// processing of group B
foreach pixel in group B
  if there is a group A point next to processed B pixel
    move this B pixel into group A
    set white color to this pixel
    do the same recursively for all the neighbors which are in group B
    // as these points are moved to group A within this,
    // the upper "foreach" will skip them afterwards
  endif
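Assuming the response image is a 2D array with thresholds P1 and P2 on the same scale, the pseudocode above corresponds to hysteresis-style thresholding; a minimal NumPy/SciPy sketch follows (an illustration, not the authors' implementation):

import numpy as np
from scipy import ndimage

def conditional_threshold(img, p1, p2):
    # group (a): pixels above p2; group (b): pixels above p1 but not above p2
    candidate = img > p1                      # groups (a) and (b) together
    certain = img > p2                        # group (a) only
    labels, _ = ndimage.label(candidate)      # connected regions of candidate pixels
    keep = np.unique(labels[certain])         # region labels that contain a group (a) pixel
    keep = keep[keep != 0]
    return np.isin(labels, keep)              # True = detected area, False = background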


Fig. 8 Continuous evaluation by neural network

3.2 Conditional Thresholding of Continuous Detection If conditional thresholding is applied to the result of basic detection by a neural network and then a smoothing algorithm is applied, a less dense additive can be detected. Such an area is accepted as an additive because it adjoins a region with a dense additive. The neural network in this case does not return a value of 0 or 1 (i.e. simple thresholding of a continuous result); it returns a real value that is then subjected to conditional thresholding. Continuous evaluation by a neural network is captured by Fig. 8. The pixel brightness is linearly derived from the real output of the neural network, with 0 corresponding to black and 1 to white. If the image is appropriately thresholded, we will get the result presented in Fig. 9 (binary image) and Fig. 10 (real image with highlighted detected area). They demonstrate the advantage of this method—the possibility of detecting an even less dense additive; therefore, the air jet shape is more credible. At the same time, it is obvious that the failure of simple neural network evaluation remains, i.e. the occurrence of areas incorrectly assigned as an additive due to a very similar color.

3.3 Method of Two Experts The above described symptoms have not been reliably eliminated by increasing the number of neurons in the neural network. The reason is a considerable similarity of additive and shiny places. It is very difficult to train the network to distinguish such


Fig. 9 Result of conditional thresholding of continuous detection (white—additive, black—background)

Fig. 10 Result of conditional thresholding of continuous detection (border of detected area in cyan)

minor differences between the additive and places similar in color. However, there is a possibility of using a second neural network that will specialize in these differences. The network then decides which of the pixels of the territory, which was designated by the first network as an additive, is indeed an additive. The network specializes in this specific task and it cannot decide on the pixels outside the designated area (it


would be a random decision). Therefore, this network is called a specific expert. The first network that created the sub-territory is trained for the entire image, so it will be called a general expert. So, the general expert selects pixels that could be an additive. Upon selection, a specific expert enters to further categorize them into additive and additive-like environments. The learning process of the general expert is the same as the process of simple evaluation of the neural network. However, it is not necessary to require the network to distinguish the additive and additive-like environments. We require, above all, the ability to distinguish the additive from the background. A specific expert will only be trained after a general expert performs its evaluation. Figure 11 shows the result of a general expert on which the specific expert is trained. Figure 12 is the result of the specific expert. The same result would be achieved by a procedure where both experts would evaluate the whole picture separately. Both results would then be merged. Only the pixels assigned by both experts as an additive will actually be declared an additive. Thus, in order for a pixel to be considered an additive pixel, the two experts must agree on it. Note: There can be more than two experts used for evaluation. The final decision may be the decision by the majority, or any expert may have a veto. A combination of a larger number of experts may be the subject of further research.

Fig. 11 Result of the general expert


Fig. 12 Result of the specific expert

3.4 Conditional Thresholding of the Merged Results of Two Experts When comparing Figs. 10 and 12, there is an evident disadvantage of the method of two experts—the impossibility of expanding the detected area even in areas with a less dense additive. There are more ways of implementing conditional thresholding into the method of two experts. The most effective method found is the method of conditional thresholding of the merged results of two experts. The same neural networks that were created within the method of two experts can be used for this method. However, their usage differs from the mentioned method. Both experts process the image and determine a real value for each pixel. The results of continuous detection by the general expert are captured in Fig. 13. A similar result of the specific expert is shown in Fig. 14. Notice that the decision of the specific expert is random outside the area detected by the general expert. In the next step, both images are merged with the same 50% weight. The resulting brightness of pixel [X, Y] will be:

I_A[X, Y] = 0.5 · I_G[X, Y] + 0.5 · I_S[X, Y]    (1)

The result of merging is shown in Fig. 15. Such an image is prepared for conditional thresholding with appropriate thresholds. The thresholds are usually chosen strictly. Especially thanks to the lower threshold P1 > 0.5, the phenomenon of incorrect assignment as an additive is minimized. As a result, only those pixels may be considered pixels of an additive for which one network is completely certain and the other would at least admit it. Figure 16 captures


Fig. 13 Continuous detection by a general expert

Fig. 14 Continuous detection by a specific expert

the resulting detected area after the smoothing algorithm when the threshold brightnesses I1 = 200 (threshold P1 = 78%) and I2 = 250 (threshold P2 = 98%) are set. Figures 14 and 15 show that a specific expert is not able to exclude the entire glossy pipe but only some of its stripes. The reason is the color heterogeneity of the pipe. Due to the exclusion of some of its colors, however, the detected area did not spread


Fig. 15 Image combined from the results of continuous detection by both experts

Fig. 16 Result of conditional thresholding of the merged image

over the entire area of the pipe. Otherwise, it is necessary to use a differently trained specific expert, or to use one more expert trained to recognize the tube.
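A brief sketch of this merging step, assuming both expert outputs are arrays on a 0–255 brightness scale and reusing the conditional_threshold() sketch from Sect. 3.1:

import numpy as np

def merge_experts(i_general, i_specific):
    # Eq. (1): equal-weight combination of the two continuous expert outputs
    return 0.5 * np.asarray(i_general, dtype=float) + 0.5 * np.asarray(i_specific, dtype=float)

# With the thresholds quoted above (I1 = 200, I2 = 250):
# detected = conditional_threshold(merge_experts(general_out, specific_out), 200, 250)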


Fig. 17 Conditional thresholding of the merged image—border before smoothness

3.5 Smoothness of the Border If further evaluation requires it, the airflow border can be adjusted by a sophisticated method to make it smoother. Besides geometric algorithms, there is a method using linear filtration. The image must be modified according to the detected area as in Figs. 1 or 2—everything inside the area is white and everything outside is black. After that, the image can be filtered using a low-pass filter (the size of the mask depends on the resolution). Then thresholding is applied with a threshold of 50%. The procedure can be repeated. Figure 17 is the result of the method of conditional thresholding of the merged results of a pair of experts before adjusting the border. Figure 18 shows this border after this procedure.
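The filtering-and-rethresholding step can be sketched as follows; the mask size and the number of passes are illustrative only, as the text merely notes that the mask size depends on the resolution:

import numpy as np
from scipy import ndimage

def smooth_border(mask, size=15, passes=1):
    # mask: boolean detected-area image (True inside the area, False outside)
    smoothed = mask.astype(float)
    for _ in range(passes):
        smoothed = ndimage.uniform_filter(smoothed, size=size)  # simple low-pass (box) filter
        smoothed = (smoothed > 0.5).astype(float)                # threshold at 50%
    return smoothed.astype(bool)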

4 Conclusion The subject of the documented research is airflow images, visualized by the additive that forms continuous fibers. Such an additive may be, for example, fog, steam, smoke or other visible additives. This article discusses extended possibilities for the application of artificial neural networks as detectors of the additive. The process of human evaluation can be the inspiration for the development of such methods. A base additive detection method is presented in [9]. This detection principle has been


Fig. 18 Conditional thresholding of the merged image—border after smoothness

used for other complex procedures. The purpose is to eliminate basic processing malfunctions such as a poorly distinguishable additive and additive-like environment or the detection of a less dense additive at the stream border. A simple, human-inspired process is the conditional threshold algorithm. When an additive is detected, a person subconsciously moves from where he/she is certain, extending the area of the additive to places where the additive is not as noticeable. The conditional thresholding algorithm tries to simulate this approach. A better resolution of minor differences between the additive and the background with a similar color is provided by the second detector. The combination of two detectors (neural networks) is described in the method of two experts. The first detector is called a general expert and is trained to distinguish the background and locations that could be considered as areas of the additive. The second detector is called a specific expert. It does not concentrate on the background, but only on the area that the general expert assigned as an additive. In this area, it looks for minor differences and determines what is an additive and what is just background with a color similar to the additive. The disadvantage of the method is a more complicated way of training a specific expert and impossibility to spread the detected area into places with less dense additives. The most sophisticated method found is conditional thresholding of the merged results of two experts. This method combines both previous approaches. Thanks to a pair of experts, it can distinguish the additive from a very similar environment. It also does a good job of copying the border of the stream according to a less dense additive. As in the case of two experts, a more complex preparation of experts may be considered as a disadvantage of the method.


Acknowledgements This work was supported by the Brno University of Technology. Project: FSI-S-17-4785 INŽENÝRSKÉ APLIKACE POKROČILÝCH METOD UMĚLÉ INTELIGENCE.

References
1. Cermak, J.E.: Wind-tunnel development and trends in applications to civil engineering. J. Wind Eng. Indus. Aerodyn. 91(3), 355–370 (2003). ISSN 0167-6105
2. Pokorný, J., Poláček, F., Fojtlín, M., Fišer, J., Jícha, M.: Measurement of airflow and pressure characteristics of a fan built in a car ventilation system. In: EPJ Web of Conferences, vol. 114, pp. 644–647. EDP Sciences—Experimental Fluid Mechanics 2015, Praha, 17.11.2015–20.11.2015. ISSN 2100-014X
3. Galer, M., Horvat, L.: Digital Imaging: Essential Skills. Focal Press (2003). ISBN 978-0-240-51913-5
4. Gonzales, R.C., Woods, R.E.: Digital Image Processing. Prentice Hall (2008)
5. Šťastný, J., Richter, J.: Adaptation of genetic algorithm to determine air jet shape of flow visualized by helium bubbles. J. Aerosp. Eng. 29(6), 12 pages (2016). ISSN 0893-1321
6. Oda, N., Hoshino, T.: Three-Dimensional Airflow Visualization by Smoke Tunnel. SAE Technical Paper 741029 (1974)
7. Pavelek, M., Janotková, E., Štětina, J.: Visualization and Optical Measuring Methods. FSI VUT, Brno (2001)
8. Caletka, P., Pech, O., Jedelský, J., Jícha, M., Richter, J.: Comparison of methods for determination of borders of stream visualized by smoke method. In: 34th Conference of Departments of Fluids Mechanics and Thermomechanics, Proceedings of Extended Abstracts, pp. 11–12. University J. E. Purkyně in Ústí nad Labem, Ústí nad Labem (2015). ISBN 9788074149122
9. Richter, J., Šťastný, J., Jedelský, J.: Estimations of shape and direction of an air jet using neural networks. In: 19th International Conference on Soft Computing Mendel 2013, pp. 221–226. Brno University of Technology (2013). ISBN 978-80-214-4755-4
10. Hagan, M.T.: Neural Network Design. PWS, USA, 734 s (1996). ISBN 7-111-10841-8
11. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall (1998). ISBN 0-13-273350-1

Evaluation of Permutation-Based Mutation Operators on the Problem of Automatic Connection Matching in Closed-Loop Control System Vladimir Mironovich, Maxim Buzdalov, and Valeriy Vyatkin

Abstract Successful application of evolutionary algorithms for large scale practical problems requires careful consideration of all the different elements of the applied evolutionary approach. Recently, a method was introduced for automatic matching of connections in a closed-loop control system that uses evolutionary algorithm in conjunction with model checking. In order to improve the method we consider using the permutation-based individual encoding, as it more realistically reflects real life scenarios, where the connections are matched on a one-to-one basis. Evaluation of the fitness function based on the model checking is computationally expensive, thus it is necessary to choose the best mutation operator in terms of the number of fitness function evaluations required to find the optimum solution. In this paper we evaluate the applicability of the permutation-based encoding for the problem of connection matching in the closed-loop control system and evaluate the performance of several permutation-based operators commonly used for ordering problems. Keywords Search-based software engineering · Permutation encoding · Mutation operators · Industrial control systems

1 Introduction Search-based software engineering [12] is a research area that aims to apply complex heuristic techniques to various software engineering problems presented as search problems [9]. Multiple software engineering tasks, e.g. software testing, software


maintenance, software design, were reported to be successfully solved by such automated optimization techniques [6, 7, 11]. Automation of software development [13] has been a concern for a long time, as it is a great way to increase the quality of software solutions while also decreasing the maintenance and development costs. In search-based software engineering automatic software development could be considered one of the hardest problems, as it is one of the problems where no suboptimal results are acceptable and searching for the true optimum can quickly become unfeasible with increasing complexity of the target software solution. Nevertheless, there has been a lot of efforts in solving this problem with various degrees of success [14, 15, 24]. In this paper we are going to consider a variation of the problem of automatic connection matching in a closed-loop control system introduced in [20]. One of the biggest questions for the methodology presented in [20] is its applicability towards realistic large scale problems. In order to successfully extend the proposed approach to real life applications, it would have to use a permutation-based encoding for individuals [2], as realistic closed-loop control systems usually match inputs and outputs on a one-to-one basis. Permutation encoding is commonly used in ordering problems, such as the traveling salesman problem [16], the scheduling problem [5] and the vehicle-routing problem [23]. Implementation of a permutation-based evolutionary algorithm requires using different evolutionary operators from the ones used for binary or k-ary strings. Moreover, for each problem the choice of the best operators is usually not obvious, as they commonly have some kind of a real world implication in mind. For instance, the well-known 2-OPT heuristic [17], known from the algorithms solving the traveling salesman problem, reverses a contiguous part of a permutation, which might not make sense in any other problems. One of the biggest problems for practical applications of evolutionary algorithms is their feasibility for large scale problems. In order to successfully apply evolutionary techniques in practical domains, one has to consider the efficiency of the algorithm in terms of the number of fitness evaluations required to find the optimal solution, as fitness function evaluation may be the most computationally expensive task. Thus, it is important to make a justified choice for all the elements considered within the evolutionary approach. In this paper, we limit ourselves to the mutation operators. In order to better understand which mutation operators perform the best on the considered problem of automatic connection matching in the closed-loop control system we evaluate various common permutation-based mutation operators in conjunction with a simple evolutionary algorithm. The rest of the paper is structured as follows. In Sect. 2 we describe the considered optimization problem. Section 3 outlines the mutation operators evaluated in this study. Section 4 describes the experimental setup and results. Section 5 concludes the paper.


2 Problem Description We consider automatic reconstruction of the connections between the plant and the controller in the closed-loop control system. In the closed-loop model checking [22] a certain controller is implemented and evaluated together with the plant model, which provides constant feedback on the controller’s actions. There may be cases when there exists only the white-box simulation model of the system, or even a black-box model created using methods proposed in [8]. We want to automatically reconstruct missing connections in such control system using only the specification provided in terms of temporal logic formulas. In this paper we evaluate the performance of various permutation-based mutation operators commonly used for the ordering problems [1, 10] on the considered problem of reconstructing connections in the elevator model used in [20]. The closed-loop system under study represents an elevator which moves between three floors. The interface for the elevator is presented in Fig. 1. The system consists of a controller which controls the elevator itself, its doors and responds to button presses both inside and outside the elevator. The plant handles the commands from the controller and provides realistic feedback to its actions. The optimization problem is formulated as follows: given a pair of a plant and a controller with missing measurement connections between the plant and the controller, we need to reconstruct the missing connections using the specification requirements provided in terms of temporal logic formulas such that the resulting system satisfies all of the specification requirements. The permutation-based representation is constructed as follows: each individual is a closed-loop model with connections represented as a permutation of variables [a1 , . . . , an ]. All ai values are different and take values from the set {1, 2, . . . , n}. Each variable ai represents the index of the plant measurement to be connected to

Fig. 1 Elevator model user interface


the controller input i. Thus, the order of elements corresponds to the connections in the resulting closed-loop system. The fitness function uses the formal verification and model checking approach [3, 4] and is based on the number of satisfied specification requirements [19]. Model checking makes it possible to express the behavior of the system in terms of specification formulas using computation tree logic. In computation tree logic the states of the system can be described, and the requirements for the states are represented as Boolean formulas with additional temporal properties, e.g. “always happens on each execution path” or “happens on at least one execution path”. The system is then modeled in the model checker and can be formally verified in terms of adhering to the provided specification. The elevator model used in this work is implemented in the NuSMV model checker, with a specification consisting of 38 computation tree logic formulas used in [20]. Each formula represents some kind of functionality that is inherent to the elevator system. Several considered requirements are quite difficult to accomplish, such that possibly only the correct resulting system can satisfy them, while others are easier to accomplish, e.g. “the elevator can actually move” or “the elevator can open doors”, and can help better guide the algorithm towards the optimum. For the correct solution all 38 specification requirements must be satisfied.
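For illustration, the encoding and the fitness evaluation can be sketched as follows; the NuSMV closed-loop model-checking run is abstracted behind a hypothetical callback, since that part is specific to the toolchain described above:

import random

def random_individual(n):
    # a permutation [a_1, ..., a_n]: a_i is the plant measurement wired to controller input i
    perm = list(range(1, n + 1))
    random.shuffle(perm)
    return perm

def fitness(individual, check_specifications):
    # number of satisfied specification formulas (0..38 for the elevator model);
    # check_specifications stands in for the expensive model-checker run and
    # is assumed to return one boolean per formula
    return sum(check_specifications(individual))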

3 Mutation Operators We evaluate the performance of five different mutation operators that are commonly used for the traveling salesman problem [1] and can be repurposed for the permutation-based encoding of the generated connections. These operators are:
• SingleSwap, which swaps two random variables;
• CenterInverse, which divides the individual into two sectors and reverses the order of variables in each sector;
• ReverseSequence, which chooses two positions in the individual and reverses the order of variables between them;
• InvertThree, which chooses three consecutive variables in the individual and reverses their order;
• RotateThree, which chooses three random variables at positions i, j, k, and rotates them such that the variable at the position j goes to the position i, the variable at the position k goes to the position j and the variable at the position i goes to the position k.
Additionally we introduce and evaluate the MultiSwap mutation operator. It is a SingleSwap mutation operator which after the first swap continues to swap two random variables with probability 1/n until no swap is made (where n is the number of variables in the individual). We consider this operator as a way to increase the probability of introducing bigger changes for individuals generated using such a simple mutation operator as SingleSwap, possibly to help it better explore the search space. Each mutation operator is presented in Fig. 2.


Fig. 2 The mutation operators considered in this paper:
(a) SingleSwap and MultiSwap: 0 1 2 3 4 5 6 → 0 1 5 3 4 2 6
(b) CenterInverse: 0 1 2 3 4 5 6 → 3 2 1 0 6 5 4
(c) ReverseSequence: 0 1 2 3 4 5 6 → 0 1 5 4 3 2 6
(d) InvertThree: 0 1 2 3 4 5 6 → 0 1 4 3 2 5 6
(e) RotateThree: 0 1 2 3 4 5 6 → 0 4 2 1 3 5 6
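For concreteness, the operators can be sketched as in-place mutations of a Python list p; how the cut point of CenterInverse is chosen is not fixed by the description above and is an assumption here:

import random

def single_swap(p):
    a, b = random.sample(range(len(p)), 2)
    p[a], p[b] = p[b], p[a]

def multi_swap(p):
    single_swap(p)
    while random.random() < 1.0 / len(p):    # keep swapping with probability 1/n
        single_swap(p)

def center_inverse(p):
    m = random.randrange(1, len(p))          # cut point between the two sectors (assumed random)
    p[:m] = reversed(p[:m])
    p[m:] = reversed(p[m:])

def reverse_sequence(p):
    i, j = sorted(random.sample(range(len(p)), 2))
    p[i:j + 1] = reversed(p[i:j + 1])

def invert_three(p):
    i = random.randrange(len(p) - 2)         # three consecutive positions i, i+1, i+2
    p[i:i + 3] = reversed(p[i:i + 3])

def rotate_three(p):
    i, j, k = random.sample(range(len(p)), 3)
    p[i], p[j], p[k] = p[j], p[k], p[i]      # j -> i, k -> j, i -> k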


Algorithm 1 (1+1) evolutionary algorithm with the permutation encoding
  Parent p ← random permutation of the set [a1, ..., an]
  while optimal solution has not been found do
    c ← Mutate(p)
    if f(c) ≥ f(p) then
      p ← c
    end if
  end while
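A minimal Python rendering of Algorithm 1, with the evaluation budget and the target number of satisfied formulas taken from the experimental setup described below; the fitness and mutation functions are passed in (e.g. the sketches above):

import random

def one_plus_one_ea(n, fitness, mutate, budget=3000, target=38):
    parent = list(range(1, n + 1))
    random.shuffle(parent)                   # random initial permutation
    best = fitness(parent)
    for _ in range(budget):
        child = parent[:]                    # copy, then mutate in place
        mutate(child)
        score = fitness(child)
        if score >= best:                    # accept the child if it is not worse
            parent, best = child, score
        if best >= target:                   # all specification formulas satisfied
            break
    return parent, best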

4 Experiment Results For the experimental evaluation of the chosen mutation operators we consider a simple instance of the connection reconstruction problem, where the algorithm has to reconstruct only 9 connections out of 15 (with 6 connections correctly preset in the generated solutions). Such a setting was chosen because it is a sufficiently hard task to be able to see the differences in the performance of the evaluated operators, but it does not require long computational experiments compared to harder instances of the problem. Each mutation operator was tested as a part of the (1 + 1) evolutionary algorithm presented in Algorithm 1. For each operator we conducted 100 experimental runs with a limited fitness evaluation budget of 3000, as computing the fitness function by running the model checker takes significant processing time. The value of 3000 was chosen based on the initial evaluation of the results of the considered operators. Overall performance results (the minimum, median and maximum number of evaluations together with the number of successful runs and total runs) are presented in Table 1. The table represents the minimum and median number of fitness evaluations required to reach the optimum for each algorithm with > 50% success rate. Unfinished runs where the optimum solution was not found over the 3000 fitness evaluations were not considered for the computation of the maximum values in the provided table. The CenterInverse mutation operator failed to reach the optimal solution within 3000 fitness evaluations in all 100 runs, while the InvertThree operator managed to find the optimum only once (within 725 fitness evaluations). The ReverseSequence and RotateThree mutation operators also showed lackluster performance, failing to find the optimum in some cases or spending significantly more iterations on average than the SingleSwap and MultiSwap operators. The SingleSwap and MultiSwap operators show the best results, finding the optimal solution within a reasonable time in most cases. In order to better understand the observed behavior of different mutation operators we examined the progress plots for four of the considered mutation operators: SingleSwap, MultiSwap, ReverseSequence and RotateThree. Figure 3 presents the mean fitness value achieved by all the runs with the colored zone representing the standard deviation of the mean fitness value. Figure 4 presents the median fitness value achieved by each run with the zone representing the difference between the maximum and minimum fitness value achieved. Both plots show the fitness function


Table 1 Performance and the proportion of successful runs for the (1 + 1) evolutionary algorithm with different mutation operators

Operator           Minimum   Median   Maximum   Successful runs
SingleSwap         21        393      1 998     98%
MultiSwap          45        475      2 982     100%
CenterInverse      –         –        –         0%
ReverseSequence    102       1130     2 808     68%
InvertThree        –         –        –         1%
RotateThree        44        1516.5   1 859     62%

Fig. 3 Mean fitness value on each iteration with standard deviation

Fig. 4 Median fitness value on each iteration with minimum and maximum (log x scale)

value along the y axis and the number of fitness evaluations along the x axis. For a better representation of the difference in the mutation operators’ performance, the x axis is log-scaled in Fig. 4.


Table 2 Average number of iterations spent in the states with different fitness values

Operator           f=22    f=23     f=24     f=25    f=26    f=27    f=28   f=29   f=34
SingleSwap         3.19    27.18    21.47    3.36    73.72   433.33  8.27   12.16  9.47
MultiSwap          3.49    27.19    20.06    5.66    78.62   458.41  8.33   9.33   10.49
CenterInverse      91.71   1140.14  1286.84  59.97   328.76  –       –      –      –
ReverseSequence    2.54    42.9     26.82    12.3    152.62  666.63  7.63   33.16  591.57
InvertThree        124.08  534.54   807.92   366.49  758.36  –       –      –      –
RotateThree        2.2     17.37    25.36    3.78    59.46   735.37  22.55  44.82  800.59

Additionally, for each mutation operator we computed the average number of iterations spent in the states with each possible fitness function value. The most meaningful data points are represented in Table 2. For the CenterInverse and InvertThree mutation operators it can be seen that over the course of the optimization run they cannot progress past the fitness value of 27 and get stuck in states with (22 ≤ f ≤ 25) for several hundred iterations, whereas other mutation operators manage to leave these states within less than 50 iterations. The low value in f = 27 is the number of iterations left until the termination of the algorithm, as usually by the time these operators manage to reach the state with fitness value of 27 they are almost out of the computational budget. From these results it can be concluded that CenterInverse and InvertThree mutation operators are unsuitable for the considered optimization problem. The RotateThree mutation operator and the ReverseSequence show better results but they are still considerably worse than the Swap mutation operators. The biggest problem in case of these two operators occurs when they reach the state with the fitness value of 34. For individuals with the fitness value of 34 usually only a small change is required to get to the optimal solution, namely, swapping of two incorrect variables. Unfortunately, both these operators are incapable of doing such a small operation. Additionally, both of these operators spend significantly more time trying to improve from the fitness value of 27 compared to the SingleSwap and MultiSwap mutation operators. Overall, the SingleSwap operator shows the best performance, although it failed to find the solution within the computational budget in 2 runs out of 100. Comparing the SingleSwap and the MultiSwap mutation operators we can see that introducing bigger changes into individual reduces the performance of the algorithm. The performance of the MultiSwap operator depends on the probability of additional swaps, so it may be that tuning the parameter can help improve the results of the algorithm. From Table 2 it can be seen that the biggest hurdle for these two operators is the state with the fitness value of 27. Significant share of the optimization run is spent trying to improve the solution with this fitness value. Both incomplete runs for the SingleSwap mutation operator were terminated in this state, and it is also the main reason for the worse performance of the MultiSwap mutation operator. The results


of the Wilcoxon rank-sum tests [18] on the performance data for these two operators show that their difference is not statistically significant at α = 0.05.
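The test itself is the Mann–Whitney U statistic and can be reproduced with SciPy (a sketch only; the per-run evaluation counts are not reproduced here):

from scipy.stats import mannwhitneyu

def compare_operators(runs_a, runs_b, alpha=0.05):
    # runs_a, runs_b: numbers of fitness evaluations needed by the two operators
    stat, p_value = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
    return p_value, p_value < alpha          # p-value and whether the difference is significant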

5 Conclusion We evaluated the performance of various permutation-based mutation operators with the (1 + 1) evolutionary algorithm on the problem of connection reconstruction for the closed-loop control system. The results show that permutation-based encoding can be successfully applied for the considered problem and the best mutation operator is SingleSwap. Its success can be attributed to being able to make the small changes required to escape the critical states and find the optimum solution. This contrasts with the performance of the mutation operators on common ordering problems, such as the traveling salesman problem, where the ReverseSequence mutation operator usually produces the best results [1], and the vehicle routing problem, where the swap operator produces the worst results on most instances [10]. A possible way to explain this is to analyze how elements of the individual representation interact with each other. In the case of the traveling salesman problem, a subset of the solution represents a path, thus reversing the sequence has an understandable meaning in the real world solution interpretation. Unfortunately, the same ideas cannot be extended to the problem of connection reconstruction. It is hard to understand how variables interact with each other and what is the possible effect of different changes on the value of the fitness function. Still, trying to recognize the dependencies between genes in individuals may be a great way to develop better performing operators for this problem in the future. In the case of the vehicle routing problem, combinations of different mutation operators also show great results. A possible way to improve the performance of the algorithm is using combinations of the SingleSwap operator with the other non-swapping operators. Finally, a population-based algorithm may provide additional performance improvement. First of all, using crossover operators can result in a drastic improvement of the algorithm's overall performance [25]. Secondly, applying diversity enforcement similarly to [21] can help escape the fitness plateau, which seems to be a common problem in control system generation tasks. Acknowledgements This work was financially supported by the Government of the Russian Federation (Grant 08-08).

References 1. Abdoun, O., Abouchabaka, J., Tajani, C.: Analyzing the Performance of Mutation Operators to Solve the Travelling Salesman Problem. arXiv preprint arXiv:1203.3099 (2012)


2. Bäck, T., Fogel, D.B., Michalewicz, Z.: Evolutionary computation 1: Basic algorithms and operators. CRC Press (2018) 3. Baier, C., Katoen, J.P., Larsen, K.G.: Principles of Model Checking. MIT Press (2008) 4. Bérard, B., Bidoit, M., Finkel, A., Laroussinie, F., Petit, A., Petrucci, L., Schnoebelen, P.: Systems and Software Verification: Model-Checking Techniques and Tools. Springer Science & Business Media (2013) 5. Bierwirth, C., Mattfeld, D.C., Kopfer, H.: On permutation representations for scheduling problems. In: International Conference on Parallel Problem Solving from Nature, pp. 310–318. Springer (1996) 6. Burnim, J., Sen, K.: Heuristics for scalable dynamic test generation. In: Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering, pp. 443–446 (2008) 7. Buzhinsky, I., Ulyantsev, V., Veijalainen, J., Vyatkin, V.: Evolutionary approach to coverage testing of IEC 61499 function block applications. In: 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), pp. 1213–1218 (2015). https://doi.org/10.1109/INDIN. 2015.7281908 8. Buzhinsky, I., Vyatkin, V.: Automatic inference of finite-state plant models from traces and temporal properties. IEEE Trans. Indust. Inf. 13(4), 1521–1530 (2017) 9. Clarke, J., Dolado, J.J., Harman, M., Hierons, R., Jones, B., Lumkin, M., Mitchell, B., Mancoridis, S., Rees, K., Roper, M., et al.: Reformulating software engineering as a search problem. IEE Proc.-Softw. 150(3), 161–175 (2003) 10. Haj-Rachid, M., Ramdane-Cherif, W., Chatonnay, P., Bloch, C.: A Study of Performance on Crossover and Mutation Operators for Vehicle Routing Problem. ILS Casablanca (Morocco), pp. 14–16 (2010) 11. Hänsel, J., Rose, D., Herber, P., Glesner, S.: An evolutionary algorithm for the generation of timed test traces for embedded real-time systems. In: 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation (ICST), pp. 170–179. IEEE (2011) 12. Harman, M., Mansouri, S.A., Zhang, Y.: Search-based software engineering: trends, techniques and applications. ACM Comput. Surv. (CSUR) 45(1), 11 (2012) 13. Hoffnagle, G.F., Beregi, W.E.: Automating the software development process. IBM Syst. J. 24(2), 102–120 (1985). https://doi.org/10.1147/sj.242.0102 14. Johnson, C.G.: Genetic programming with fitness based on model checking. In: European Conference on Genetic Programming, pp. 114–124. Springer (2007) 15. Katz, G., Peled, D.: Genetic programming and model checking: synthesizing new mutual exclusion algorithms. In: International Symposium on Automated Technology for Verification and Analysis, pp. 33–47. Springer (2008) 16. Larrañaga, P., Kuijpers, C.M.H., Murga, R.H., Inza, I., Dizdarevic, S.: Genetic algorithms for the travelling salesman problem: a review of representations and operators. Artif. Intell. Rev. 13(2), 129–170 (1999) 17. Lin, S.: Computer solutions of the traveling salesman problem. Bell Syst. Tech. J. 44(10), 2245–2269 (1965) 18. Mann, H.B., Whitney, D.R.: On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18(1), 50–60 (1947) 19. Mironovich, V., Buzdalov, M., Vyatkin, V.: Automatic generation of function block applications using evolutionary algorithms: Initial explorations. In: 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), pp. 700–705. IEEE (2017) 20. Mironovich, V., Buzdalov, M., Vyatkin, V.: Automatic plant-controller input/output matching using evolutionary algorithms. 
In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1043–1046. IEEE (2018) 21. Mironovich, V., Buzdalov, M., Vyatkin, V.: From fitness landscape analysis to designing evolutionary algorithms: the case study in automatic generation of function block applications. In: Proceedings of Genetic and Evolutionary Computation Conference Companion, pp. 1902– 1905. ACM (2018)


22. Preuße, S., Lapp, H.C., Hanisch, H.M.: Closed-loop system modeling, validation, and verification. In: Proceedings of 2012 IEEE 17th International Conference on Emerging Technologies & Factory Automation (ETFA 2012), pp. 1–8. IEEE (2012) 23. Toth, P., Vigo, D.: The vehicle routing problem. SIAM (2002) 24. Tsarev, F., Egorov, K.: Finite-state machine induction using genetic algorithm based on testing and model checking. In: Proceedings of Genetic and Evolutionary Computation Conference Companion, pp. 759–762 (2011) 25. Üçoluk, G.: Genetic algorithm solution of the TSP avoiding special crossover and mutation. Intell. Autom. Soft Comput. 8(3), 265–272 (2002)

Optimization of Deposition Parameters of a DLC Layer Using (RF) PECVD Technology Tomáš Prokeš, Kateřina Mouralová, Radim Zahradníček, Josef Bednář, and Milan Kalivoda

Abstract The aim of this study was to carry out a planned experiment (DoE) in order to find a suitable combination of deposition parameters to increase the hardness of the coating. In a 20-round experiment, the parameters of chamber pressure, gas flow and the power of an RF source were gradually changed. Subsequently, a microhardness analysis was carried out using the Berkovich method. Using the DoE method, a combination of parameters was detected within which a hardness of the coating was achieved comparable to that obtained by doping the coating with hydrogen atoms to increase the number of sp3 carbon bonds. Within the indentation test, a scratch test was carried out to determine the normal force required to break the coating. Keywords DLC · RF PECVD · Coating hardness · Scratch test · DoE · Berkovich

1 Introduction Diamond-like carbon (DLC) coatings are mainly used in applications requiring a low coefficient of friction. Plasma-assisted chemical vapor deposition technology is


a typical choice for the application of diamond-like carbon coatings, using one of the types of gas ionization sources such as a radio frequency source (RF-PECVD), bipolar PECVD, glow-discharge PECVD, and pulsed direct current (DC) PECVD. With RF-PECVD, DLC layers can be deposited from CH4 gas, with an increase in layer hardness obtained by adding hydrogen or argon to the process gas. Using DLC coatings, the growth rate on the tool edge can be reduced, which is enabled by a low friction coefficient between the coating and the machined material, along with the low adhesion of the DLC layer to the machined material. By decreasing the growth rate, tool edge durability can be increased. The major problem with the use of carbon-based coatings is delamination influenced by the high internal stress, which can be reduced by optimizing the deposition process and by using multilayer coatings. According to Wu's research [1], using a DLC coating on a cutting tool, cutting forces can be reduced by up to 16%, which is caused by, among other things, a low friction coefficient. Based on the publication of Ucun [2], lower surface roughness values were achieved using DLC coatings for the machining of the refractory alloy Inconel 718. Especially due to the low adhesion of the coating with the substrate and a slower increase in tool edge growth, this has a positive effect on the maintenance of the radius of the cutting edge. Łępicka [3] dealt with the wear resistance of the DLC coating in her research, achieving a threefold reduction in wear (from 1.56 × 10–10 mm3 N−1 m−1 to 1.87 × 10–7 mm3 N−1 m−1); in the scratch test, the DLC coating was broken at 10.65 N. Based on Dalibón's research [4], it is desirable to deposit a DLC layer on the nitrided surface using PECVD with hexamethyldisiloxane (HMDSO) and acetylene as precursors (8% of HMDSO and 92% of CH4), which resulted in an increase in corrosion resistance compared to the DLC coating itself. Pătru [5] published in his article an achieved hardness of 17.07 GPa with the deposition of a:C-H/Cr/AlN/Ti by a combined PVD-PECVD deposition system, which, however, showed a high delamination of the coating around the scratch in the scratch test. Using the planned experiment, the deposition process of the DLC coating with the PECVD method was mathematically described, and the process was optimized.

1.1 Design of Experiment (DoE) With the planned experiment (DoE), the process parameters were systematically changed in order to analyse the change of the response (the hardness of the coating). The response of the process to the change of the input factors was modelled by linear regression analysis, where the predictors are the input process parameters, their squares, and their interactions. Using the planned experiment, the impact of the process parameters on the change of the observed response was evaluated, the aim being to optimize the process to achieve better results [6].
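The following sketch illustrates the kind of response-surface regression described above, fitted with statsmodels rather than Minitab; the data frame and its values are purely hypothetical, and only the model structure (linear terms, squares and the two-way interaction) follows the text.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical deposition runs: chamber pressure (mTorr), RF power (W), hardness (GPa).
runs = pd.DataFrame({
    "pressure": [30, 30, 100, 100, 65, 65, 65, 30, 100, 65],
    "power":    [100, 300, 100, 300, 200, 200, 100, 200, 200, 300],
    "hardness": [12.1, 18.5, 13.0, 20.9, 16.2, 16.0, 13.5, 15.1, 17.0, 19.2],
})

# Full quadratic response-surface model: main effects, squares and the interaction.
model = smf.ols(
    "hardness ~ pressure + power + I(pressure**2) + I(power**2) + pressure:power",
    data=runs,
).fit()

print(model.summary())                      # coefficients and p-values of each term
print(f"R-squared: {model.rsquared:.3f}")   # share of response variability explained
```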


Fig. 1 Micrograph of a cross-section through Sample 19 with a 50 kx magnification using an SE detector with an accelerating voltage of 5 kV

2 Experimental Setup The experimental samples were made from a two-inch diameter silicon wafer of the kind normally used in the semiconductor industry. The wafer was divided by a diode-pumped solid-state Nd:YAG laser into 20 samples with a size of 10 × 10 mm used for the experiment. Prior to the deposition of the DLC layer, the native oxide layer was removed using a dilute (2%) solution of hydrofluoric acid. Using a Tescan LYRA 3 scanning electron microscope, Sample 19 (Fig. 1), which had the highest hardness of 22 GPa, was analysed using the SE detector with an accelerating voltage of 5 kV.

3 Results 3.1 Hardness of Deposited DLC Coatings A nanoindentation test provides information on the mechanical properties of a thin layer, namely its hardness and Young's modulus of elasticity, in a single measurement. To measure the hardness, the Berkovich method can be used, which uses a three-sided pyramidal indenter and is suitable for measuring thin layers. Using a Hysitron TI950 TriboIndenter device developed for measuring nanomechanical properties, the hardness H (GPa) was evaluated by the Berkovich method for all 20 experimental samples.
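As a rough illustration of how a single hardness value is obtained from such an indentation record, the sketch below applies the usual Oliver-Pharr relation H = P_max / A_c, with the projected contact area of an ideal Berkovich tip approximated as A_c ≈ 24.5·h_c²; both the load and the contact depth used here are hypothetical, and tip-rounding corrections are ignored.

```python
# Hypothetical indentation record.
p_max_un = 2500.0   # maximum load (microN)
h_c_nm = 95.0       # contact depth at maximum load (nm)

area_nm2 = 24.5 * h_c_nm ** 2                                  # projected contact area (nm^2)
hardness_gpa = (p_max_un * 1e-6) / (area_nm2 * 1e-18) / 1e9    # Pa -> GPa
print(f"H = {hardness_gpa:.1f} GPa")                           # ~11 GPa for these example numbers
```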


Fig. 2 A load graph

Fig. 3 The evaluation of the hardness of DLC coatings

The maximum load force was chosen according to the standard for the instrumented indentation test for metallic and non-metallic coatings (ČSN EN ISO 14577 [7]) as 2500 μN, with the load force waveform shown in Fig. 2. Using the multiple-load cycle (Fig. 2), 10 measured values were obtained for one indent. For the statistical processing, the indent was repeated twice for each sample, in the area of the sample centre and with a mutual distance of 30 μm between the indents. Subsequently, outlying values were removed from the 20 hardness values obtained, and the resulting hardness was taken as the mean of the remaining measurements. The differences between the individual samples are shown in Fig. 3.

3.2 Design of Experiment and a Regression Model The purpose of the experiment, the results of which were evaluated in the Minitab statistical programme, was to obtain the relationship between the hardness of the coating and the machine parameter settings, namely the chamber pressure, the process gas flow and the power of the RF source. The planned experiment was based on 3 factors, whose limit values are shown in Table 1. The limit values for the setting of the individual parameters were determined on the basis of the recommendations of the supplier, Oxford Instruments [8].

Table 1 The parameters of the deposition process

Parameter   Chamber pressure (mTorr)   Gas flow (sccm)   RF power (W)
Maximum     100                        50                300
Minimum     30                         30                100

Table 2 Process parameters of deposition

Sample   Chamber pressure (mTorr)   Gas flow (sccm)   RF power (W)
1        65                         40                200
2        65                         40                200
3        100                        50                100
4        30                         50                300
5        30                         50                100
6        30                         30                100
7        100                        50                300
8        65                         40                200
9        100                        30                100
10       65                         40                200
11       30                         30                300
12       100                        30                300
13       100                        40                200
14       65                         40                200
15       65                         50                200
16       65                         30                200
17       65                         40                100
18       65                         40                200
19       65                         40                300
20       30                         40                200

The planned experiment was chosen as a "central composite full design" type, which for the 3 input parameters contains 20 rounds (Table 2) divided into two blocks. Six central points were used to analyse the experimental noise, and the entire planned experiment was evaluated in blocks to reduce the impact of external influences. This data collection plan is described in detail, for example, in Montgomery [6]. The experiment was evaluated at a significance level of α = 0.05. The analysis of statistically significant parameters was performed using the ANOVA method and the regression models shown in Fig. 6, along with a test of the normal distribution of the residuals using the Anderson–Darling test (Fig. 4), where the P-value was 0.440, so the null hypothesis of normality cannot be rejected.
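The 20-run design itself can be reproduced as a face-centred central composite design (8 corner runs, 6 face-centred axial runs and 6 centre runs); the sketch below builds it in coded units and converts them to the physical ranges of Table 1. The run order and the split into two blocks used in the actual experiment are not reproduced here.

```python
import itertools
import numpy as np

levels = {                   # factor: (minimum, maximum) from Table 1
    "pressure": (30, 100),   # chamber pressure (mTorr)
    "gas_flow": (30, 50),    # gas flow (sccm)
    "rf_power": (100, 300),  # RF power (W)
}

cube = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 corner runs
axial = np.vstack([np.eye(3), -np.eye(3)])                    # 6 face-centred axial runs
center = np.zeros((6, 3))                                     # 6 centre runs
coded = np.vstack([cube, axial, center])                      # 20 runs in total

lo = np.array([v[0] for v in levels.values()], dtype=float)
hi = np.array([v[1] for v in levels.values()], dtype=float)
design = (lo + hi) / 2 + coded * (hi - lo) / 2                # coded -> physical units

for run in design:
    print(dict(zip(levels, run)))
```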

Fig. 4 Normal residual distribution test (normal probability plot of the residuals; horizontal axis: Residual, vertical axis: Percent)

Statistically insignificant parameters (P-value > 0.05) were removed from the model in order to simplify as much as possible the mathematical form of the relationship between the input variables and the resulting hardness of the DLC coating. Using the analysis of variance (ANOVA), the statistically significant parameters (P-value < 0.05) were detected; a backward-elimination procedure was used to identify them. The hardness of the coating was influenced most strongly by the "RF power" parameter, which has a significant effect both individually and in combination with the pressure in the process chamber (Fig. 5). Based on the graphical analysis in Fig. 6, to achieve the maximum coating hardness, a chamber pressure of 100 mTorr and an RF power of 300 W can be selected.

Fig. 5 ANOVA for the coating hardness

Source                          P-Value
Model                           0.000
  Blocks                        0.000
  Linear                        0.000
    Chamber pressure            0.000
    RF power                    0.000
  Square                        0.000
    RF power · RF power         0.000
  2-Way Interaction             0.009
    Chamber pressure · RF power 0.000
Lack-of-Fit                     0.234
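A minimal sketch of the backward-elimination step mentioned above is given below: starting from the full quadratic model, the least significant term is dropped repeatedly until every remaining term has a p-value below 0.05. The data are hypothetical, squared terms are pre-computed as ordinary columns, and the hierarchy rules that a package such as Minitab applies (e.g. keeping main effects of significant interactions) are ignored for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
runs = pd.DataFrame({"pressure": rng.uniform(30, 100, 20),
                     "power": rng.uniform(100, 300, 20)})
runs["hardness"] = 15 + 0.02 * runs["power"] + rng.normal(0, 1, 20)   # hypothetical response
runs["pressure_sq"] = runs["pressure"] ** 2
runs["power_sq"] = runs["power"] ** 2

def backward_eliminate(data, response, terms, alpha=0.05):
    """Repeatedly drop the least significant term until all p-values are below alpha."""
    terms = list(terms)
    while True:
        fit = smf.ols(response + " ~ " + " + ".join(terms), data=data).fit()
        pvals = fit.pvalues.drop("Intercept")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha or len(terms) == 1:
            return fit
        terms.remove(worst)

final = backward_eliminate(
    runs, "hardness",
    ["pressure", "power", "pressure_sq", "power_sq", "pressure:power"],
)
print(final.summary())
```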


Fig. 6 Contour plot of the interaction between "Chamber pressure" and "RF power"

Using the ANOVA results, the dependence between the statistically significant parameters and the coating hardness was described by Eq. (1), which explains 91.9% of the experimentally determined data:

H = 29.22 − 0.0918 PC − 0.0916 PRF + 0.000206 PRF · PRF + 0.000326 PC · PRF (1)

where H (GPa) is the hardness of the coating, PC (mTorr) is the chamber pressure and PRF (W) is the power of the radio frequency source.

Using a Dektak XT contact profilometer from Bruker with a precision of 0.6 nm, the thicknesses of the individual coatings were analysed, and the values were evaluated with the Vision 64 software. During the deposition, the wafer edge was covered with Kapton tape; after the deposition and removal of the tape, a coating–substrate transition step was left at the edge, at which the thickness of the deposited DLC layer was evaluated by means of the profilometer. The measured values were entered into the Minitab programme, where the dependencies were analysed using the ANOVA method (Fig. 7) and described by Eq. (2), which explains approximately 57.8% of the experimentally measured data:

T = −0.1308 + 0.0059 PC + 0.000602 PRF − 0.000031 PC · PC (2)

where T (μm) is the thickness of the coating. A regression analysis between the hardness and the thickness of the coating did not reveal any mutual dependence of the monitored parameters (Fig. 8).
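The sketch below simply evaluates the two published regression models over the investigated parameter window. Note that, with the rounded coefficients printed in the text, the predicted hardness at the reported optimum (100 mTorr, 300 W) comes out somewhat below the roughly 23 GPa quoted in the conclusions; the code is only an illustration of how the models are used, not a re-derivation of them.

```python
import numpy as np

def hardness_gpa(p_c, p_rf):
    """Eq. (1): coating hardness as a function of chamber pressure and RF power."""
    return 29.22 - 0.0918 * p_c - 0.0916 * p_rf + 0.000206 * p_rf**2 + 0.000326 * p_c * p_rf

def thickness_um(p_c, p_rf):
    """Eq. (2): coating thickness as a function of chamber pressure and RF power."""
    return -0.1308 + 0.0059 * p_c + 0.000602 * p_rf - 0.000031 * p_c**2

# Grid over the experimental window of Table 1: 30-100 mTorr, 100-300 W.
p_c, p_rf = np.meshgrid(np.linspace(30, 100, 71), np.linspace(100, 300, 201))
h = hardness_gpa(p_c, p_rf)

i = np.unravel_index(np.argmax(h), h.shape)
print(f"max predicted hardness {h[i]:.1f} GPa at {p_c[i]:.0f} mTorr, {p_rf[i]:.0f} W")
print(f"predicted thickness at that setting: {thickness_um(100, 300):.2f} um")
```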

Fig. 7 ANOVA for the coating thickness

Source                  P-Value
Model                   0.004
  Linear                0.004
    Chamber pressure    0.011
    RF power            0.006
  Square                0.159
    RF power · RF power 0.159
Lack-of-Fit             0.725

Fig. 8 Scatterplot of the thickness and the hardness of the coating

3.3 Scratch Test The scratch test is one of the semi-quantitative techniques used for the analysis of hard, thin coating layers; it is an extension of indentation hardness testing. In the scratch test, the indenter is brought into contact with the specimen surface and the normal force is gradually increased according to the load programme settings while the indenter is moved. By analysing the dependence of the lateral force on the normal force (Fig. 9), a critical load at which the coating is damaged can be identified. On the basis of the measurements, a critical load with a normal force of 4847 μN was detected for Sample 19, which shows the highest hardness of 22.5 GPa. Using the scratch test, the friction coefficient can also be analysed as a function of the normal load on the indenter. For Sample 19, the initial friction coefficient was 0.1 (Fig. 10), which is an undisputed advantage of DLC coatings.


Fig. 9 Dependence of the friction coefficient on a normal load for Sample 19

Fig. 10 Dependence of the lateral force on normal forces for Sample 19

4 Conclusions and Discussion According to the planned experiment, 20 samples of DLC coatings were made in order to evaluate the hardness and thickness of the coating. The highest achieved hardness of 22.5 GPa was measured on Sample 19. Based on the optimized parameters, with a chamber pressure setting of 100 mTorr and an RF source power of 300 W, a hardness of up to 23 GPa can be achieved. Ebrahimi [9] also dealt with the optimization of PECVD processes and achieved a hardness of up to 22.05 GPa when doping the coating with hydrogen atoms, thereby achieving a better ratio of sp2/sp3 carbon bonds. No dependence between the measured coating parameters (thickness and hardness) was found with a regression analysis; the hypothesis of a dependence of the hardness on the thickness of the coating could not be confirmed. Based on the analysis of the data from the scratch test, the critical normal force for breaking the coating of Sample 19 was detected to be 4847 μN. At the same time, the friction coefficient was evaluated at a value of 0.1, which corresponds to Dalibón's research [4], which reached the value of 0.09.


Acknowledgements The article was supported by project no. FEKT-S-17-3934, Utilization of novel findings in micro and nanotechnologies for complex electronic circuits and sensor applications. Part of the work was carried out with the support of CEITEC Nano Research Infrastructure (ID LM2015041, MEYS CR, 2016–2019), CEITEC Brno University of Technology. This research has been financially supported by the Specific University Research grant of Brno University of Technology, FEKT/STI-J-18-5354. This research work was supported by the BUT, Faculty of Mechanical Engineering, Brno, Specific research 2016, with the grant "Research of modern production technologies for specific applications", FSI-S-16-3717, and technical support of Intemac Solutions, Ltd., Kurim. This work was supported by the Brno University of Technology Specific Research Program, project no. FSI-S-17-4464.

References
1. Wu, T., Cheng, K.: Micro milling performance assessment of diamond-like carbon coatings on a micro-end mill. Proc. Inst. Mech. Eng. Part J J. Eng. Tribol. 227(9), 1038–1046 (2013)
2. Ucun, I., Aslantas, K., Bedir, F.: The performance of DLC-coated and uncoated ultra-fine carbide tools in micromilling of Inconel 718. Precis. Eng. 41, 135–144 (2015)
3. Łępicka, M., Grądzka-Dahlke, M., Pieniak, D., Pasierbiewicz, K., Niewczas, A.: Effect of mechanical properties of substrate and coating on wear performance of TiN- or DLC-coated 316LVM stainless steel. Wear 382, 62–70 (2017)
4. Dalibón, E.L., Escalada, L., Simison, S., Forsich, C., Heim, D., Brühl, S.P.: Mechanical and corrosion behavior of thick and soft DLC coatings. Surf. Coat. Technol. 312, 101–109 (2017)
5. Pătru, M., Gabor, C., Cristea, D., Oncioiu, G., Munteanu, D.: Mechanical and wear characteristics of a-C:H/Cr/AlN/Ti multilayer films deposited by PVD/PACVD. Surf. Coat. Technol. 320, 284–292 (2017)
6. Montgomery, D.C.: Design and Analysis of Experiments, 9th edn. Wiley (2017). ISBN 9781119113478
7. Kovové materiály—Instrumentovaná vnikací zkouška stanovení tvrdosti a materiálových parametrů: Část 4: Zkušební metoda pro kovové a nekovové povlaky (2. vyd.). Nové město: Úřad pro technickou normalizaci, metrologii a státní zkušebnictví (2017)
8. Standard Operation Manual: Oxford Plasmalab 80 Plus Plasma Etcher. Oxford Instruments
9. Ebrahimi, M., Mahboubi, F., Naimi-Jamal, M.R.: Optimization of pulsed DC PACVD parameters: toward reducing wear rate of the DLC films. Appl. Surf. Sci. 389, 521–531 (2016)

Modelling of Magnetron TiN Deposition Using the Design of Experiment Radim Zahradníček, Kateřina Mouralová, Tomáš Prokeš, Pavel Hrabec, and Josef Bednář

Abstract The aim of this study was to find the optimal settings of magnetron sputtering deposition of thin titanium nitride layers with respect to the thickness and hardness of the deposited layer. Direct sputtering of a ceramic target using a radio-frequency source was used for the magnetron deposition. The advantage of this method is that the optimal deposition parameters are easier to find. X-ray reflectometry was used to characterize the coating in terms of thickness. The hardness was determined with a Berkovich tip. To optimize the thickness and hardness of the TiN coating, a rotatable central composite response surface design for pressure and power was used. Using only the statistically significant factors, mathematical models of the thickness and hardness of the TiN coating were constructed. Based on these mathematical models, the optimal settings of the statistically significant factors of TiN magnetron deposition were found. Keywords TiN · Magnetron deposition · XRR · Hardness · Design of experiment

R. Zahradníček
Department of Microelectronics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technická 10, 616 00 Brno, Czech Republic
e-mail: [email protected]
K. Mouralová (B) · T. Prokeš · P. Hrabec · J. Bednář
Faculty of Mechanical Engineering, Institute of Mathematics, Brno University of Technology, Technická 2, 616 69 Brno, Czech Republic
e-mail: [email protected]
T. Prokeš
e-mail: [email protected]
P. Hrabec
e-mail: [email protected]
J. Bednář
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_6


1 Introduction Titanium nitride coatings [1], diamond-like carbon [2] or aluminium oxide [3] are mainly used to increase the resistance of mechanically loaded components such as cutting tools [4]. Extending the service life of the cutting edge by means of an additional coating increases its durability and economic profitability. Several techniques are currently used to prepare TiN layers, such as chemical vapour deposition [5] or magnetron sputtering deposition [6]. The chemical vapour deposition method is mainly used for deposition on cemented carbides [5]. The disadvantages of this method are the need for a high process temperature in the range of 450–600 °C and the use of hazardous chemicals such as TiCl4 [7]. Sputtering deposition is a safer method in terms of process complexity and overall safety. Sputtering deposition techniques utilize either reactive sputtering [8] or direct deposition from a TiN target using an RF source. Reactive sputtering deposition is based on the principle of titanium sputtering in a partial nitrogen atmosphere [9]. The disadvantage of this process is the need to tune all deposition parameters, such as the nitrogen flow into the chamber, the DC source setting or the deposition pressure, in order to achieve a stoichiometric TiN coating [10]. In the case of direct deposition from a TiN target, this is not necessary. However, due to the non-conductivity of the TiN target, an RF source has to be used instead of the DC source employed in reactive sputtering. This change is necessary due to the position of the sputtered target on the cathode surface. For a metal target that is conductively connected to the cathode, a voltage of 3 kV applied between the anode and cathode by means of a DC source is sufficient to ignite the plasma discharge. For a ceramic material such as TiN, a voltage of over 20 kV would be required to ignite the plasma. A DC source of such power is more costly and structurally demanding than an RF source, which is capable of igniting a plasma discharge, even in the presence of non-conductive materials, through rapid field changes at the radio frequency of 13.56 MHz. Due to the structural differences between magnetrons, it is always necessary to find the optimal deposition parameters for the deposition system used. The optimum deposition parameters of the TiN layer in terms of deposition rate and hardness were found using the design of experiment described in this paper.

1.1 Design of Experiment (DoE) The design of experiment is a test or sequence of tests in which the settings of the input variables (factors) are changed purposefully in order to find their effect on the output variable (response). The DoE methodology has been used for scientific experiments for decades and is an integral part of production optimization in almost every industry. The basic principles of DoE data collection are randomization, replication, and blocking. The use of DoE allows a mathematical response–factor relationship model to be built from a small number of measured samples using regression analysis tools. On the basis of this model, the whole process can be optimized with respect to the monitored factors [11–13].

Table 1 Settings of selected deposition factors with respect to DoE

Factor            Pressure (Pa)   Power (W)
Axial point min   0.034           16.686
Minimum           0.04            20
Central point     0.055           40
Maximum           0.07            60
Axial point max   0.076           65.523

2 Experimental Setup The deposition of the TiN layer took place in a magnetron device from Bestec equipped with an RF source, on 10 × 10 mm silicon samples. The individual samples were prepared from a 4 in. silicon wafer, which was cut to the required size using a diode laser. Prior to the deposition, the individual samples were cleaned in acetone, isopropanol and water. The water residues were removed by a stream of compressed nitrogen. The thickness analysis of the deposited layer was performed by X-ray reflectometry (XRR). The mechanical properties of the deposited specimens were determined by microhardness measurements. For data collection, a rotatable central composite response surface design was used for the two selected factors, i.e. pressure and power. The experiment consists of a total of 13 measurements, including 5 central point replications and 4 axial points for the detection of quadratic effects. However, the factor settings at the axial points had to be adjusted to keep the plasma discharge running. To maintain the plasma discharge, it is always necessary to keep the gas flow and the chamber pressure in a certain equilibrium; at a low flow or deposition pressure, the likelihood of gas ionization (plasma ignition) is very low. The adjusted values were set as close to the original settings as possible. An overview of the factor settings used is summarized in Table 1.
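For illustration, the sketch below generates a rotatable central composite design for the two factors in coded units (for two factors the rotatable axial distance is α = √2) and converts it to physical units using the central point and half-ranges of Table 1. As noted above, the axial points actually used in the experiment were adjusted so that the plasma discharge could be sustained, so the axial values printed here differ slightly from those in Table 1.

```python
import numpy as np

center = np.array([0.055, 40.0])       # central point: pressure (Pa), power (W)
half_range = np.array([0.015, 20.0])   # distance from the centre to the min/max level

alpha = np.sqrt(2.0)                   # rotatable axial distance for 2 factors
cube = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
center_pts = np.zeros((5, 2))          # 5 centre-point replications

coded = np.vstack([cube, axial, center_pts])   # 13 runs in total
runs = center + coded * half_range

for pressure, power in runs:
    print(f"pressure = {pressure:.3f} Pa, power = {power:.1f} W")
```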

3 Results 3.1 Thickness Measurement by XRR The measurement of the total coating thickness and the determination of the deposition rate were performed on the Rigaku 3 kV XRD device using the X-ray reflectometry method. The essence of this method is to measure the intensity of the X-ray beam reflected from the substrate with the studied layer at a very low angle (0 up to 10°) from the plane of the sample. During the measurement, the angle of the detector from the sample plane was kept the same as that of the X-ray source. Several things can be determined from the fluctuation of the intensity with the changing angle. The greater the number of fluctuations, the thicker the coating layer. Furthermore, the density of the deposited material can be determined; with increasing density, the magnitude of the intensity fluctuation increases. The change in intensity may also be influenced by the roughness of the coated substrate surface. Due to the use of a silicon wafer, which has a maximum roughness of about 1 nm, the XRR measurement was not affected by this negative factor. The measured thicknesses and deposition rates are recorded in Table 2.

Table 2 Total thickness of TiN coating with a corresponding deposition rate

Sample number   Thickness (nm)   Deposition rate (nm/min)
1               6.8              0.15
2               6.8              0.15
3               6.7              0.15
4               12               0.27
5               11.8             0.26
6               12.7             0.28
7               13               0.29
8               12.7             0.28
9               11.2             0.25
10              13               0.29
11              16.5             0.37
12              18               0.40
13              22.1             0.49
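The qualitative statement that more fringes mean a thicker layer can be made quantitative with the usual Kiessig-fringe estimate t ≈ λ / (2·Δθ), where Δθ is the angular period of the oscillations in radians and λ the X-ray wavelength (Cu Kα assumed here); refraction corrections are ignored, and the fringe spacing below is purely illustrative rather than taken from the measured curves.

```python
import numpy as np

wavelength_nm = 0.154        # Cu K-alpha X-ray wavelength
delta_theta_deg = 0.22       # hypothetical fringe period read off an XRR curve

delta_theta_rad = np.deg2rad(delta_theta_deg)
thickness_nm = wavelength_nm / (2.0 * delta_theta_rad)
print(f"estimated layer thickness: {thickness_nm:.1f} nm")   # ~20 nm for this spacing
```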

3.2 Hardness Measurement of TiN Coating The measurement of the mechanical properties of the deposited TiN coating was carried out on a Hysitron TI950 equipped with a nanoindentation head with a Berkovich tip. When a tip of this type is used, a three-sided pyramid is pressed into the studied layer, and the mechanical properties can be determined from the resistance of the coating during this pressing. The force-measuring head is essentially a large parallel-plate capacitor: on one of its plates there is the measuring tip, which, due to the resistance of the coating during the measurement, causes a change in the capacitance. This change is then recalculated to the desired mechanical properties. From such a nanoindentation measurement, both the hardness of the coating and Young's modulus of elasticity can be obtained. All 13 samples were measured with a load force of 7000 µN. The load force distribution during the measurement can be seen in Fig. 1. The measured hardness values for the individual samples are summarized in Fig. 2.

Fig. 1 The course of the load curve during the hardness measurement of the TiN coating

Fig. 2 Hardness values for individual samples

3.3 Statistical Analysis Regression analysis tools identified the statistically significant factors affecting the thickness and the hardness. The significance level was, in line with the majority of DoE evaluations, chosen as 5%. Due to the significantly different orders of magnitude of the input factors, the values were standardized so that the minimum corresponded to −1, the central point to 0 and the maximum to 1. Thanks to this transformation, the order of magnitude of the recorded values has no effect on the creation of the mathematical model.

The mathematical model of thickness contains a single statistically significant factor, namely the power. Even so, this single regressor explains 85.54% (the value of the coefficient of determination) of the variability observed in this response. The thickness model (transformed back to the original power units) is described by Eq. (1):

Thickness = −0.31 + 0.3299 · Power (1)

It is clear from the equation that the maximum thickness can be ensured by setting the power to the maximum possible value. Therefore, to maximize this response, it seems best to use the directly measured setting at the maximum axial point for the power, i.e. power = 65.523 W; the model value of the thickness at this point is 21.303 nm. The optimal setting is visualized in Fig. 3.

In addition, the mathematical model of hardness contains a statistically significant square of the power. The use of these two regressors makes it possible to explain 94.45% of the variability of the measured hardness. The hardness model (again transformed into original units) is described by Eq. (2):

Hardness = 1.962 + 0.3321 · Power − 0.003262 · Power² (2)

The quadratic term in this model ensures that the maximum hardness is shifted inside the monitored area. The maximum estimated model hardness occurs for power = 50.91 W and amounts to 10.416 GPa (see Fig. 4).

Fig. 3 Model of TiN coating thickness with the indicated optimum
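As a quick numerical check of the optima implied by the published models, the sketch below evaluates Eq. (1) at the maximum axial power and finds the vertex of the quadratic hardness model in Eq. (2), Power = b1 / (2·b2); with the printed coefficients this reproduces, up to rounding, the roughly 50.9 W and 10.4 GPa quoted in the text.

```python
def thickness_nm(power_w):
    return -0.31 + 0.3299 * power_w                             # Eq. (1)

def hardness_gpa(power_w):
    return 1.962 + 0.3321 * power_w - 0.003262 * power_w ** 2   # Eq. (2)

power_opt = 0.3321 / (2 * 0.003262)   # vertex of the downward-opening parabola
print(f"thickness at 65.523 W: {thickness_nm(65.523):.3f} nm")            # ~21.3 nm
print(f"optimal power: {power_opt:.2f} W")                                # ~50.91 W
print(f"maximum predicted hardness: {hardness_gpa(power_opt):.3f} GPa")   # ~10.4 GPa
```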


Fig. 4 Model of the TiN coating hardness with the indicated optimum

4 Conclusions and Discussion Based on the design of experiment, 13 experimental samples were produced by deposition, and the effect of the individual deposition parameters on the final thickness and hardness was analysed. The largest coating thickness of 21.1 nm (Sample 13) was obtained for the sample whose deposition parameters were all set to their maximum; the deposition power of this sample was 65.523 W at a pressure of 0.076 Pa. The nitride deposition rate (0.49 nm/min) was about 6 times slower than that of the CVD method at 950 °C [5], and up to 28 times slower compared to the reactive sputtering deposition method at 350 °C [8]. The advantages of magnetron deposition from a ceramic target compared to the above-mentioned methods are mainly the lower energy demands (no external heat source) and the lower technical demands (only 2 parameters) of the whole process of TiN layer preparation. From the point of view of the evaluation of the design of experiment with respect to the coating thickness, a dependence is observed exclusively on the deposition power; the influence of the pressure is negligible in this case. The highest measured hardness of the TiN layer, 10.15 GPa, was found on Sample 5 (deposition setting: 40 W at a pressure of 0.055 Pa). This value is 2.75 GPa greater than with the cathode ion plating method [14]. Acknowledgements The article was supported by project no. FEKT-S-17-3934, Utilization of novel findings in micro and nanotechnologies for complex electronic circuits and sensor applications. Part of the work was carried out with the support of CEITEC Nano Research Infrastructure (ID LM2015041, MEYS CR, 2016–2019), CEITEC Brno University of Technology. This research has been financially supported by the Specific University Research grant of Brno University of Technology, FEKT/STI-J-18-5354. This work was supported by the Brno University of Technology Specific Research Program, project no. FSI-S-17-4464.


References
1. Srinivas, B., et al.: Performance evaluation of titanium nitride coated tool in turning of mild steel. In: IOP Conference Series: Materials Science and Engineering, vol. 330(1). IOP Publishing (2018)
2. Ucun, I., Aslantas, K., Bedir, F.: The performance of DLC-coated and uncoated ultra-fine carbide tools in micromilling of Inconel 718. Precis. Eng. 41, 135–144 (2015)
3. Chen, Z., et al.: Mechanical properties and microstructure of Al2O3/Ti (C, N)/CaF2@Al2O3 self-lubricating ceramic tool. Int. J. Refract. Metals Hard Mater. 80, 144–150 (2019)
4. Badaluddin, N.A., et al.: Coatings of cutting tools and their contribution to improve mechanical properties: a brief review. Int. J. Appl. Eng. Res. 13(14), 11653–11664 (2018)
5. von Fieandt, L., et al.: Chemical vapor deposition of TiN on transition metal substrates. Surf. Coat. Technol. 334, 373–383 (2018)
6. Ghasemi, S., et al.: Structural and morphological properties of TiN deposited by magnetron sputtering. Surf. Topogr. Metrol. Prop. 6(4), 045003 (2018)
7. Ramanuja, N., et al.: Synthesis and characterization of low pressure chemically vapor deposited titanium nitride films using TiCl4 and NH3. Mater. Lett. 57(2), 261–269 (2002)
8. Liang, H., et al.: Thickness dependent microstructural and electrical properties of TiN thin films prepared by DC reactive magnetron sputtering. Ceram. Int. 42(2), 2642–2647 (2016)
9. Kavitha, A., et al.: Effect of nitrogen content on physical and chemical properties of TiN thin films prepared by DC magnetron sputtering with supported discharge. J. Electron. Mater. 46(10), 5773–5780 (2017)
10. Zalnezhad, E., Sarhan, A.A.D., Hamdi, M.: Optimizing the PVD TiN thin film coating's parameters on aerospace AL7075-T6 alloy for higher coating hardness and adhesion with better tribological properties of the coating surface. Int. J. Adv. Manuf. Technol. 64(1–4), 281–290 (2013)
11. Myers, R.H., Montgomery, D.C., Anderson-Cook, C.M.: Response Surface Methodology: Process and Product Optimization Using Designed Experiments. Wiley, New Jersey (2016)
12. Montgomery, D.C.: Design and Analysis of Experiments. Wiley, New York (2001)
13. Box, G.E.P., Hunter, J.S., Hunter, W.G.: Statistics for Experimenters: Design, Innovation, and Discovery. Wiley-Interscience, Hoboken (2005)
14. Kong, D.J., Fu, G.Z.: Nanoindentation analysis of TiN, TiAlN, and TiAlSiN coatings prepared by cathode ion plating. Sci. China Technol. Sci. 58(8), 1360–1368 (2015)

Solvent Optimization of Transferred Graphene with Rosin Layer Based on DOE Radim Zahradníček, Pavel Hrabec, Josef Bednář, and Tomáš Prokeš

Abstract The method of transferring graphene to semiconductor substrates using rosin is gradually becoming common in practice. Unlike the previous method (the use of poly(methyl methacrylate)), this procedure does not leave significant contamination at the end of the transfer. The disadvantage, however, is the need to use two solvents, acetone and banana oil, instead of one. The purpose of this article is to investigate, by means of a planned experiment, the influence of conventionally used rosin solvents on the resulting contamination of the graphene layer. The solvents used were isopropyl alcohol (IPA), ethanol and xylene. The data measured by atomic force microscopy (AFM) showed that the lowest contamination can be achieved using isopropyl alcohol. Keywords Graphene transfer · Solvent · Rosin · AFM · DoE

R. Zahradníček (B)
Department of Microelectronics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 10, 616 00 Brno, Czech Republic
e-mail: [email protected]
P. Hrabec · J. Bednář
Faculty of Mechanical Engineering, Institute of Mathematics, Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic
e-mail: [email protected]
J. Bednář
e-mail: [email protected]
T. Prokeš
Faculty of Mechanical Engineering, Institute of Manufacturing Technology, Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_7

1 Introduction Since the discovery of graphene in 2004 [1], the number of its potential applications in industry has been increasing. This increase is due to the physical properties of graphene, including high mechanical resistance, electrical and thermal conductivity, and transparency. The resistance of the material to mechanical damage can be exploited in the research of new composite layers [2] and coatings [3]. The light transmission together with the low resistance of graphene can be used to construct a new generation of solar cells [4] and organic diodes [5]. Together with its high bending resistance, graphene can be used for flexible touch screens [6]. The effectiveness of the above-mentioned applications depends directly on the quality of the graphene, which is influenced by the method of its transfer to the target substrate. The most commonly used transfer method described in the literature is the so-called wet transfer using a thin layer of poly(methyl methacrylate) (PMMA) for mechanical support, see Fig. 1. At the end of the process, the support layer is dissolved with acetone. However, as there is a strong chemical bond between the PMMA and the graphene molecules, it cannot be completely removed from the surface. PMMA residues on the surface of graphene create small islands that bind water from the environment. This bound water increases the resulting resistance value through p-doping [7, 8]. One of the methods for removing PMMA residues from the surface of graphene is thermal decomposition at temperatures above 500 °C [9]. Unfortunately, this process is energy-intensive and, above all, unsuitable for heat-sensitive materials such as organic semiconductors. Another possibility to reduce the graphene resistance and the amount of residual contamination is to use other types of polymers such as naphthalene, parylene or rosin [10–12]. In the case of graphene transfer using rosin, it was necessary to use two solvents to remove most of the polymer, namely acetone and banana oil. Since this combination did not eliminate all contamination, it is necessary to find another. In this study, the influence of solvent type (isopropyl alcohol (IPA), ethanol and xylene), time and temperature on the residual graphene contamination before applying banana oil is investigated. To evaluate the impact of the individual solvents, a design of experiment (DoE) method was used, which is very often employed in the evaluation of machining quality in wire electrical discharge machining (WEDM) [13–15].

Fig. 1 Schematic representation of wet graphene transfer with the mechanical support of the poly(methyl methacrylate) layer

1.1 State of the Art Chen et al. [16] discuss in their article recent advances in the transfer of as-grown CVD graphene to target substrates. They studied three categories of methods for transferring graphene to a target substrate, among which transfer with a polymer support layer is the most widely used in current graphene transfer methods. As a polymer base, PMMA, polydimethylsiloxane (PDMS), thermal release tape and natural polymers, such as cellulose acetate derived from cellulose and agarose extracted from seaweed, have been employed for graphene transfer. The experiments proved that natural polymers, compared to PMMA- and PDMS-assisted techniques, form a soft flexible thin film on top of the graphene, protecting it from unfavourable forces and contamination during the transfer process. Despite this advantage, however, the complete removal of the adsorbed polymer requires a tedious cleaning process, which is an inherent problem of all polymer-supported graphene transfer methods. Chen et al. [7] assess in their critical review the recent development in transferring large-area graphene grown on metal and non-metal substrates by various synthesis methods, with a special emphasis on polymer-assisted transfer from Cu and SiC substrates. PMMA is the most frequently utilized polymer for graphene transfer; it has many prominent features, such as a relatively low viscosity, excellent wetting capability, flexibility and good solubility in several organic solvents, and it is removed by acetone, after which the sample is washed with DI water and dried. Besides PMMA, poly(bisphenol A carbonate) (PC) and polyisobutylene (PIB) polymers were used as well, which, compared to PMMA, can be more easily removed by organic solvents. The graphene films were later characterized using different methods, including atomic force microscopy (AFM), which gave direct information on whether there were contaminants or wrinkles on them. However, Chen et al. claim that it is possible to distinguish only between one and two layers by AFM if the graphene contains folds or wrinkles. Jung et al. [17] provide a comprehensive review of the issues connected with graphene transfer methods and graphene film growth. They also state that PMMA-based transfer is the most common method, which involves wet etching of the metal substrate and water-mediated delivery of the graphene to the target substrate. The drawbacks of this method are also pointed out, such as possible oxidation of the graphene due to the strong oxidation power of the metal etchants and contamination of the graphene by etching residues such as ionic impurities from the etchant and metallic residues from incomplete etching. The polymeric residues are another source of contamination of graphene; they directly affect the electrical properties of graphene, resulting in significant degradation of the performance of graphene devices.


Kim et al. [18] used pentacene as a supporting and sacrificial layer for clean and doping-free graphene transfer; the layer was then physically removed from the graphene surface using an intercalating organic solvent. They also point out the disadvantages of the conventional PMMA coating layers, which leave a residue of polar PMMA on the graphene surface, and the substrate-induced graphene doping resulting from the removal process. The results of their experiments with pentacene showed that unintentional substrate-induced doping occurred during the thermal removal of pentacene; chemical desorption of the pentacene, on the other hand, produced undoped graphene without a pentacene residue, which was confirmed by AFM, Raman spectra and Kelvin probe force microscopy images. Chen et al. [10] in their study emphasize that the traditional polymer-based (PMMA) methods of graphene transfer are unsuitable for application to plastic thin films such as PET substrates. Therefore, they suggest a new method, a naphthalene-assisted graphene transfer technique, which provides a reliable route to residue-free transfer of graphene to both hard and flexible substrates. In their experiments, the small polycyclic hydrocarbon naphthalene replaces large polymers, such as PMMA, which enables graphene transfer to be performed without leaving behind contaminants that affect the properties of graphene. The cleanliness of the graphene was evaluated by AFM and SEM, which revealed a very clean graphene surface associated with the clean removal of the supporting naphthalene layer by sublimation. Belyaeva et al. [19] focus their attention on graphene transfer methods. They underline that polymers protect graphene from cracking and folding (compared to polymer-free transfers), but at the cost of extensive contamination. In their study they demonstrate that cyclohexane operates in a similar way to a polymer support, but without major contamination. Cyclohexane protects graphene from mechanical deformation and minimizes vibrations of the water surface. The surface of the cyclohexane-transferred graphene also contains wrinkles, but of smaller size, and the particles that can be seen in the AFM images can be attributed to dust particles, airborne contaminants, and possibly copper etchant crystals or residues. Chen et al. [20] in their study focus on a novel graphene transfer technique to various substrates, which provides a route to high-throughput, reliable and economical transfer of graphene without introducing large cracks and residue contamination from polymers, such as PMMA, or magnetic impurities. The transferred graphene was further characterized with Raman spectroscopy, AFM, and X-ray photoelectron spectroscopy. The above-mentioned method utilizes cellulose acetate as the coating layer, which protects graphene from unfavourable forces and contamination during the transfer process. It also largely reduces the cost, time and contamination of the obtained graphene layer. The AFM images showed a very clean and relatively uniform graphene surface and revealed the presence of wrinkles compared to the conventional PMMA-based method. Her et al. [21] present a new transfer procedure for graphene using acetic acid, which removes the residue common in standard acetone treatments; the samples were later characterized using Raman spectroscopy and AFM. They emphasize that when PMMA is dissolved by acetone, the acetone fails to fully remove the PMMA, leaving a residue on the graphene surface and the substrate.

In their procedure, after transfer to the target substrate, the graphene–PMMA stack was placed in glacial acetic acid for 24 h and the sample was then cleaned in an aqueous methanol solution. Afterwards, both samples (cleaned with acetone and with acetic acid) were compared using Raman spectroscopy and AFM, which showed a better quality of the graphene surface cleaned by the new procedure. Wood et al. [22] examine the transfer of graphene grown by CVD with polymer scaffolds of PMMA, poly(lactic acid) (PLA), poly(phthalaldehyde) (PPA), and poly(bisphenol A carbonate) (PC). The experiments proved that reactive PC scaffolds provide the cleanest graphene transfer without any annealing compared to films transferred with PLA, PPA and PMMA, since they can be fully removed from the graphene by room-temperature dissolution in chloroform. In their comparison of samples with different polymers, the process produced large-area graphene transfers, clearly highlighted the amount of polymer contamination, and examined the fundamental chemistries involved in the dissolution of the transfer polymer. Yang et al. [23] point out in their study the disadvantage of the traditional method of graphene transfer using PMMA, which involves wet etching of the metal growth substrate to isolate the graphene film from the growth substrate during the transfer process. This metal etching process usually contaminates the graphene surface with ionic impurities from the metal etchant or metallic residues from incomplete etching. To avoid the additional cleaning steps, they suggest combining a simple pretreatment of the CVD-graphene growth substrate with a transfer-printing technique, which allows the direct delamination of single-layer graphene from the growth substrate without any special instruments and its effective transfer to target substrates with a high degree of freedom. The topological features of the transferred graphene were further investigated using AFM, and the transferred graphene showed a flat and clean surface without any visible polymer residues.

1.2 Statistics During the statistical evaluation, the design of experiment was used. The design of experiment is a test or a series of tests in which the process input parameters are systematically changed in order to determine the corresponding changes in the response variable (the response). The response is modelled by regression analysis and analysis of variance (ANOVA). Using DoE, the significance of the predictors and the settings that optimize the process can be determined [24]. In our case, there is one categorical three-level factor (the solvent) and two numerical factors: temperature and time. The responses are the root mean square roughness (RMS) (nm) and the top height (nm). If a full central composite design were performed separately for each of the three solvents, at least 10 measurements (2 centre points for capturing the measurement variability, 4 cube points and 4 axial points) would be required for each solvent, i.e. a total of 30 measurements, see Fig. 2. However, this option was economically demanding, so an alternative approach of our own was designed to capture the central point variability and the response area of the best solvent.

Fig. 2 Central composite design (cube points, axial points and composition)

2 Experimental Setup and Material Graphene was prepared by chemical vapour deposition (CVD), with a 40 × 40 mm copper foil, 25 μm thick and of 99.99% purity, serving as the catalyst for graphene growth. Methane was used as the carbon source during the chemical deposition of graphene. The CVD process of graphene preparation was carried out in a Nanofab device from Oxford Instruments. The production process had a total of 4 steps. The first step was to heat the chamber with the inserted copper foil to a working temperature of 1050 °C. The next step was to anneal the copper foil at the working temperature for 30 min in order to reduce the surface roughness of the copper and increase its grain size. The working atmosphere in which the annealing took place consisted of 1000 sccm of argon and 200 sccm of hydrogen. After the annealing process, the graphene growth itself began with the addition of 100 cc of methane to the argon and hydrogen. Within 30 min, a complete graphene layer was created on the catalyst substrate. In the last step, after cooling the chamber to room temperature, the substrate was pulled out using the load-lock mechanism. To verify the success of the graphene preparation, the copper foil was exposed to air at 150 °C; 6 min later, the bottom of the copper foil was oxidized, indicating that the top side had been protected from oxidation by the graphene layer. The rosin support layer was then applied to the graphene prepared in this way by spin coating. The rotation settings were as follows: the speed was first set to 500 revolutions per minute for 5 s, followed by 1200 revolutions per minute for one minute. Hardening of the layer was carried out at room temperature. Because the graphene was formed on both sides of the copper foil during the CVD process, it was necessary to remove the imperfect graphene layer from the bottom side of the foil, since, if not removed, it complicates the copper etching process. To remove this layer, a 50-W oxygen plasma was used in a Nano resist stripper device from Diener. The CVD graphene was cut to a size of about 3 × 3 mm with scissors for the subsequent wet transfer. Ammonium persulfate was chosen as the copper solvent. The samples onto which the graphene and rosin were applied had a size of 10 × 10 mm and were obtained by cutting a silicon substrate with 270 nm of silicon oxide by the WEDM method [25–27]. Figure 3 shows a comparison between the graphene-transferred sample with the rosin layer removed and without removal.

Fig. 3 Optical comparison of graphene on SiO2: a after the removal of rosin by IPA and b before it

3 Measurement of Residual Contamination of Graphene and DOE The evaluation of residual contamination was carried out using a Bruker atomic force microscope in semiconductor mode with a ScanAsyst-Air tip. The results obtained were further processed and analysed using the Gwyddion programme. The obtained RMS values and contamination levels served as input data for the DoE evaluation.

3.1 Description of the DOE Evaluation Procedure Since the experiments were time-consuming and costly, a classical DoE procedure, requiring a minimum of 30 measurements for the 3 solvents, could not be applied. Therefore, a sequential approach was chosen. In the first phase, the best solvent was selected using analysis of variance and nonparametric tests, and in the second phase the optimal temperature and time setting for that solvent was sought. This procedure cannot be used to model the mutual impact of solvent and temperature or time; however, the number of measurements was reduced by 50%. A second disadvantage, compared with the original central composite design, is the absence of repeated measurements at the cube and axial points, which results in a low power of the tests.

4 Statistical Evaluation The analysis of variance (ANOVA) was used to select the best solvent. Three observations of RMS (nm) and top height (nm) were taken for each solvent at the time and temperature considered optimal for it, see Table 1. The optimal time and temperature were estimated from the chemical properties of the individual solvents. The graphene area from which the AFM data for the statistical evaluation were obtained was 2 × 2 μm. The scanning rate was 0.8 Hz with a lateral resolution of 4 nm. Figure 4 shows representative AFM measurements for the individual solvents that reached the lowest RMS and top height values. Although the difference is technically significant for us, it is not statistically significant, and the mean values can be considered the same for both responses (p-value RMS = 0.173; p-value top height = 0.502) at a significance level of 0.05, see Fig. 5. Due to the small number of observations, there is a risk of an error of the second type β, i.e. the mean values may in fact differ while the test falsely indicates that they are the same. For this reason, a power analysis was performed and the sample size (number of measurements per solvent) was calculated that would guarantee that a maximum difference of the mean values of 2 would be statistically significant (α = 0.05; β = 0.1) while maintaining the same variability as our RMS response (Fig. 6). The result is 14 measurements per solvent, which is economically and temporally unacceptable for us. Similar results were obtained from the Kruskal–Wallis test of the equality of medians (p-value RMS = 0.177; p-value top height = 0.561). In the second phase, a planned experiment with 2 factors, temperature and time (Table 2), was carried out, with the same responses observed. The data listed in Table 2 were evaluated from the AFM images shown in Fig. 7. The DoE results (Fig. 8) show that none of the parameters or their interaction is statistically significant at the significance level of 0.05, which is positive, because the change in time and dissolution temperature within our range does not affect the dissolution result. Because of the small number of samples, there is again a risk of an error of the second type.
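A sketch of the phase-1 comparison described above is given below: a one-way ANOVA and a Kruskal–Wallis test across the three solvents, plus a brute-force power calculation for the required number of measurements per solvent. The three RMS values per solvent are hypothetical (the chapter reports only the resulting p-values), and the within-group standard deviation used in the power calculation is an assumption, not the measured variability.

```python
import numpy as np
from scipy import stats

rms = {  # hypothetical RMS roughness observations (nm), three per solvent
    "ethanol": [2.1, 2.6, 3.0],
    "IPA":     [1.4, 1.9, 2.3],
    "xylene":  [2.2, 2.9, 3.4],
}

f_stat, p_anova = stats.f_oneway(*rms.values())
h_stat, p_kw = stats.kruskal(*rms.values())
print(f"one-way ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")

def anova_power(n, delta, sigma, k=3, alpha=0.05):
    """Power of a one-way ANOVA with k groups and n observations per group,
    for group means spread so that the largest difference equals delta."""
    means = np.array([-delta / 2, 0.0, delta / 2])              # least-detectable spread
    lam = n * np.sum((means - means.mean()) ** 2) / sigma ** 2  # noncentrality parameter
    df1, df2 = k - 1, k * (n - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, lam)

sigma = 1.4   # assumed within-solvent standard deviation of RMS (nm)
n = 2
while anova_power(n, delta=2.0, sigma=sigma) < 0.9:
    n += 1
print(f"required measurements per solvent: {n}")
```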

Table 1 DoE phase 1

Solvent   Time (min)   Temperature (°C)
Ethanol   30           21
IPA       20           30
Xylene    10           50


Fig. 4 AFM measurements of graphene transferred by rosin after removal in a ethanol, b IPA, c xylene

Fig. 5 Individual value plot


Fig. 6 Power curve for one-way ANOVA

Table 2 DoE phase 2

Temperature A (°C)   Time B (min)   RMS (nm)   Top height (nm)
20                   10             2.13       40
40                   10             1.13       22
20                   30             1.08       24
40                   30             1.67       51
30                   20             1.49       29
30                   20             1.51       27
30                   20             0.78       13
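For illustration, the sketch below fits the two-factor model with interaction to the data of Table 2 and inspects the p-values; in line with the Pareto chart in Fig. 8, none of the terms comes out significant at the 5% level. This is done with statsmodels rather than Minitab, so details such as the centre-point curvature term or standardized effects used in the original evaluation are not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

phase2 = pd.DataFrame({
    "temperature": [20, 40, 20, 40, 30, 30, 30],               # deg C
    "time":        [10, 10, 30, 30, 20, 20, 20],               # min
    "rms":         [2.13, 1.13, 1.08, 1.67, 1.49, 1.51, 0.78], # nm
    "top_height":  [40, 22, 24, 51, 29, 27, 13],               # nm
})

for response in ("rms", "top_height"):
    fit = smf.ols(f"{response} ~ temperature * time", data=phase2).fit()
    print(response, fit.pvalues.round(3).to_dict())
```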


5 Conclusion and Discussion The AFM measurements for the individual experimental samples show that the use of isopropyl alcohol appears to be the best choice. Compared to the available literature, where the use of acetone and banana oil gave a maximum contamination level of 15 nm [12], one sample with contamination of less than 2 nm was achieved using IPA; on average, however, this value is 8 nm higher. The measured RMS values are twice as good as those for graphene transferred with PMMA [22]. From the data obtained, it can be concluded that even better values could be achieved by combining IPA and banana oil. The evaluation of the planned experiment, on the other hand, shows that, owing to the statistically insignificant difference between the data measured for the individual solvents, the choice of solvent does not matter.


Fig. 7 AFM measurements of the second phase of DOE for temperature and time: a 20 °C/10 min, b 40 °C/10 min, c 20 °C/30 min, d 40 °C/30 min, e 30 °C/20 min, f 30 °C/20 min, g 30 °C/20 min


Fig. 8 Pareto charts of terms

In the second phase of the DoE, the effect of temperature and time on the total contamination and RMS was investigated using isopropyl alcohol. Due to the small differences between the data, it turned out that both experimental factors are statistically insignificant. This insignificance of the individual parameters can be explained by the very good solubility of the rosin in the selected solvents. Acknowledgements This research has been financially supported by projects no. FEKT-S-17-3934 and FEKT/STI-J-18-5354. Part of the work was carried out with the support of CEITEC Nano Research Infrastructure (MEYS CR, 2016–2019).

References 1. Geim, A.K., Novoselov, K.S.: The rise of graphene. Nat. Mater. 6(3), 183–191 (2007) 2. Mohan, V.B., Lau, K., Hui, D., Bhattacharyya, D.: Graphene-based materials and their composites: a review on production, applications and product limitations. Compos. B 142, 200–220 (2018) 3. Nine, M.J., Cole, M.A., Tran, D.N., Losic, D.: Graphene: a multipurpose material for protective coatings. J. Mater. Chem. 3(24), 12580–12602 (2015) 4. Bhopal, M.F., Lee, D.W., Rehman, A., et al.: Past and future of graphene/silicon heterojunction solar cells: a review. J. Mater. Chem. C 5(41), 10701–10714 (2017) 5. Wu, T.L., Yeh, C.H., Hsiao, W.T., et al.: High-performance organic light-emitting diode with substitutionally boron-doped graphene anode. ACS Appl. Mater. Interfaces 9(17), 14998– 15004 (2017)


6. Jang, H., Park, Y.J., Chen, X., et al.: Graphene-Based flexible and stretchable electronics. Adv. Mater. 28(22), 4184–4202 (2016) 7. Chen, Y., Gong, X.L., Gai, J.G.: Progress and challenges in transfer of large-area graphene films. Adv. Sci. 3(8), 1500343–1500363 (2016) 8. Goniszewski, S., Adabi, M., Shaforost, O., et al.: Correlation of p-doping in CVD graphene with substrate surface charges. Sci. Rep. 6, 22858–22867 (2016) 9. Ahn, Y., Kim, J., Ganorkar, S., et al.: Thermal annealing of graphene to remove polymer residues. Mater. Exp. 6(1), 69–76 (2016) 10. Chen, M., Stekovic, D., Li, W., Arkook, B., et al.: Sublimation-assisted graphene transfer technique based on small polyaromatic hydrocarbons. Nanotechnology 28(25), 255701–255707 (2017) 11. Kim, M., Shah, A., Li, C., et al.: Direct transfer of wafer-scale graphene films. 2D Mater. 4(3), 035004–035013 (2017) 12. Zhang, Z., Du, J., Zhang, D., et al.: Rosin-enabled ultraclean and damage-free transfer of graphene for large-area flexible organic light-emitting diodes. Nat. Commun. 8, 14560–14569 (2017) 13. Mouralova, K., Kovar, J., Klakurkova, L., et al.: Analysis of surface morphology and topography of pure aluminium machined using WEDM. Measurement 114, 169–176 (2018a) 14. Mouralova, K., Kovar, J., Klakurkova, L., Prokes, T.: Effect of width of kerf on machining accuracy and subsurface layer after WEDM. J. Mater. Eng. Perform. 27, 1908–1916 (2018) 15. Mouralova, K., Klakurkova, L., Matousek, R., et al.: Influence of the cut direction through the semi-finished product on the occurrence of cracks for X210Cr12 steel using WEDM. Arch. Civ. Mech. Eng. 18(4), 1318–1331 (2018) 16. Chen, M., Haddon, R.C., Yan, R., et al.: Advances in transferring chemical vapour deposition graphene: a review. Mater. Horizons 4(6), 1054–1063 (2017) 17. Jung, D.Y., Yang, S.Y., Park, H., et al.: Interface engineering for high performance graphene electronic devices. Nano Converg. 2(1), 11 (2015) 18. Kim, H.H., Kang, B., Suk, J.W., et al.: Clean transfer of wafer-scale graphene via liquid phase removal of polycyclic aromatic hydrocarbons. ACS Nano 9(5), 4726–4733 (2015) 19. Belyaeva, L.A., Fu, W., Arjmandi-Tash, H., et al.: Molecular caging of graphene with cyclohexane: transfer and electrical transport. ACS Central Sci. 2(12), 904–909 (2016) 20. Chen, M., Li, G., Li, W., et al.: Large-scale cellulose-assisted transfer of graphene toward industrial applications. Carbon 110, 286–291 (2016) 21. Her, M., Beams, R., Novotny, L.: Graphene transfer with reduced residue. Phys. Lett. A 377(21– 22), 1455–1458 (2013) 22. Wood, J.D., Doidge, G.P., Carrion, E.A., et al.: Annealing free, clean graphene transfer using alternative polymer scaffolds. Nanotechnology 26(5), 055302 (2015) 23. Yang, S.Y., Oh, J.G., Jung, D.Y., et al.: Metal-etching-free direct delamination and transfer of single-layer graphene with a high degree of freedom. Small 11(2), 175–181 (2015) 24. Montgomery, D.C.: Design and analysis of experiments. Wiley, Hoboken (2017) 25. Mouralova, K., Kovar, J., Klakurkova, L., et al.: Analysis of surface and subsurface layers after WEDM for Ti-6Al-4V with heat treatment. Measurement 116, 556–564 (2018b) 26. Mouralova, K., Kovar, J., Klakurkova, L., et al.: Comparison of morphology and topography of surfaces of WEDM machined structural materials. Measurement 104, 12–20 (2017) 27. Mouralova, K., Matousek, R., Kovar, J., et al.: Analyzing the surface layer after WEDM depending on the parameters of a machine for the 16MnCr5 steel. Measurement 94, 771–779 (2016)

Statistical Analysis of the Width of Kerf Affecting the Manufacture of Minimal Inner Radius Pavel Hrabec, Josef Bednář, Radim Zahradníček, Tomáš Prokeš, and Anna Machova

Abstract The unconventional technology of wire electrical discharge machining (WEDM) is essential in manufacturing precise parts for the aviation, automotive, military or medical industries. The precision of the dimensions and shapes of machined parts is preserved thanks to the ability to machine the material after the final heat treatment. However, efficient and precise machining also depends on the ability to manufacture radii as small as possible, which can be achieved by minimizing the width of kerf. To minimize the width of kerf, a "half central composite response surface design" experiment considering gap voltage, pulse on time, pulse off time, wire feed and discharge current was carried out. A measurement system analysis (MSA) was computed for the kerf-width measuring system, and based on its results a mathematical model of the average width of kerf was found. The optimal setting of the statistically significant machine parameters to minimize the average width of kerf was deduced from this model.

P. Hrabec (B) · J. Bednář Faculty of Mechanical Engineering, Institute of Mathematics, Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic e-mail: [email protected] J. Bednář e-mail: [email protected] R. Zahradníček Department of Microelectronics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 10, 616 00 Brno, Czech Republic e-mail: [email protected] T. Prokeš Faculty of Mechanical Engineering, Institute of Manufacturing Technology, Brno University of Technology, Technicka 2, 616 69 Brno, Czech Republic e-mail: [email protected] A. Machova Faculty of Civil Engineering, Institute of Social Sciences, Brno University of Technology, Zizkova 17, Veveri, 602 00 Brno, Czech Republic e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_8


Keywords WEDM · Electrical discharge machining · Design of experiment · Measurement system analysis · Aluminum alloy · Width of kerf

1 Introduction

1.1 Technology of Wire Electrical Discharge Machining

Wire electrical discharge machining (WEDM) is an unconventional machining technology which uses thermoelectric principles for machining of the material. Machining runs on two electrodes at the same time, while they are submerged in a dielectric medium, a liquid with high electric resistance. The machined material is melted and evaporated during eroding thanks to the effect of periodic electrical discharges. These are carried by the electrode to the machined part as impulses of a given frequency and voltage. There are no classical cutting forces during WEDM, which allows cutting of any electrically conductive material regardless of its hardness, toughness or mechanical properties. Therefore, it is possible to machine the parts to the final dimensions after the heat treatment. This technology allows efficient machining of a wide variety of materials such as titanium and aluminum alloys, which are used in the automotive and aviation industry. WEDM allows machining of soft material without any deformation whatsoever, because there is no mechanical strain affecting the machined part. WEDM is a vital technological operation in many manufacturing industries such as aviation, automotive, military or medical tools manufacturing. Demands to improve the performance of WEDM are omnipresent due to the wide use of this method. These demands consist mainly of dimensional precision, material removal rate and chemical and topological quality of the machined surface [1–4].

Vaporizing of the eroded material occurs during the process, creating a kerf as shown in Fig. 1. The size of this kerf is directly dependent on a set of physical and mechanical properties of the machined material and mainly on the machine parameter setting. The width of kerf is an important parameter which must be observed carefully, because it affects the final dimension of the machined part and it has a limiting impact on the manufacturing of the minimal inner radius [5]. WEDM is a stochastic phenomenon, during which it is most important to achieve the highest possible surface quality without defects concurrently with maximization of the cutting speed, while preserving high precision of shapes. Extended research was conducted [6–10] including different metallic and nonmetallic materials and different heat treatments for this purpose. The goal of this study was to find the optimal machine parameter setting for minimizing the width of kerf and to analyze our measurement system for the width of kerf. Efficiency and precision of machining can be preserved and improved only by a complex understanding of the input parameters of the process and their effect.

Fig. 1 Diagram of wire electrical discharge machining

1.2 Statistics

Design of experiment (DoE) is a trial or sequence of trials during which the setting of input parameters is systematically changed in order to determine their effect on the output (response). The aim of DoE is to compile a mathematical model of the response using only statistically significant parameters. To achieve this goal and to determine the quality of the model, well-known tools of linear regression analysis are used. This mathematical model can then be used to find the optimal setting of parameters to achieve the best response (minimal width of kerf) [11–13]. The purpose of a measurement system analysis (MSA) is to determine whether measurement results are repeatable and reproducible. Repeatability in this case means the ability to obtain similar results in repeated measurements. Reproducibility denotes the ability of another operator to acquire similar results. The measurement system is adequate if the variability caused by repeated measurements, together with the variability caused by measurement by a different operator, is negligible compared to the total variability present in the data. The measurement system is at least conditionally adequate if the standard deviation caused by repeatability and reproducibility is lower than 30% of the total standard deviation [14].
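As a small illustration of the 30% rule above (a sketch added here, not code from the study; the "adequate" threshold below 10% and the example numbers are assumptions, since the text only quantifies the 30% bound), the adequacy check can be written as:

```python
import numpy as np

def msa_adequacy(sd_repeatability, sd_reproducibility, sd_total):
    """Classify a measurement system by the share of the total variation
    taken up by repeatability and reproducibility (gage R&R)."""
    sd_rr = np.sqrt(sd_repeatability**2 + sd_reproducibility**2)
    share = 100.0 * sd_rr / sd_total        # % of total standard deviation
    if share < 10.0:                         # assumed "negligible" threshold
        verdict = "adequate"
    elif share < 30.0:                       # 30% rule from the text [14]
        verdict = "conditionally adequate"
    else:
        verdict = "not adequate"
    return share, verdict

# Hypothetical values in micrometres
print(msa_adequacy(sd_repeatability=5.0, sd_reproducibility=0.1, sd_total=12.0))
```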

2 Experimental Setup and Material

Samples for this experiment were created using aluminum alloy 7475-T7351 with the standard chemical composition in weight percentage: 5.2–6.2% Zn, 1.2–1.9% Cu, 1.9–2.6% Mg, 0.18–0.25% Cr, 0.1% Si, 0.12% Fe, 0.06% Mn, 0.06% Ti, Al balance. To optimize strength and fracture toughness, a special heat treatment developed by the Alcoa company was used. This hard aluminum alloy is distributed as sheets or plates for uses requiring a combination of high tensile strength, excellent resistance to crack propagation and resistance to fatigue rupture. It is well suited for welding, it has good electrical and thermal conductivity, a tensile strength of 490 MPa and a yield strength of 400 MPa. However, this alloy is less ductile, it has lower resistance against repeated static strain and it is more sensitive to dents and stress concentrators. It is most often used in aviation as sheets covering the fuselage and wings of the airplane. For this experiment, a prism with a thickness of 20 mm was used.

The WEDM machine used in this study was the high-precision five-axis CNC machine MAKINO EU64. A brass wire (60% Cu and 40% Zn) PENTA CUT E with a diameter of 0.25 mm was used as the electrode. Samples were immersed in deionized water, which served as the dielectric medium and also removed debris from the gap between the wire electrode and the workpiece during the process. The design of experiment was based on observing the effect of five independent technological parameters of the cutting process, gap voltage (U), pulse on time (Ton), pulse off time (Toff), discharge current (I) and wire feed (v), and their boundary values (Table 1). Boundary values of the parameter settings were determined in extensive previous testing [15]. A "Half central composite response surface design" containing 33 runs split into two blocks was chosen for this experiment (Table 2). Runs were randomized in order to lower any systematic error, and the experiment contains seven repeated central points. This plan is described in full detail, for example, in Montgomery [11] or in [13].

Table 1 Boundary values of machine parameters

Parameter               Minimum   Middle   Maximum
Gap voltage (V)         50        60       70
Pulse on time (μs)      6         8        10
Pulse off time (μs)     30        40       50
Wire feed (m min−1)     10        12       14
Discharge current (A)   25        30       35

3 Methodology of Measuring the Width of Kerf

Metallographic cuts of cross-sections were created for the analysis of the width of kerf of the material machined by WEDM. These samples were prepared by standard techniques, wet grinding and polishing with diamond pastes, using the automatic preparation system TEGRAMIN 30 from the Struers company. Final mechanical-chemical polishing was done using the OP-Chem suspension from the same company. The samples were then etched using Keller's reagent. The width of kerf was analyzed by light microscopy on the inverted light microscope (LM) Axio Observer Z1m by the ZEISS company. Each kerf was systematically measured in five places using measurement software also produced by the ZEISS company, as shown in Fig. 2. The measurement results are summarized in Table 3.


Table 2 Machining parameters used in the experiment

Number of sample  Gap voltage (V)  Pulse on time (μs)  Pulse off time (μs)  Wire feed (m min−1)  Discharge current (A)
1    70   8   40   12   30
2    60   8   30   12   30
3    60   8   40   12   25
4    60  10   40   12   30
5    50   8   40   12   30
6    60   8   50   12   30
7    60   6   40   12   30
8    60   8   40   12   35
9    60   8   40   10   30
10   60   8   40   14   30
11   60   8   40   12   30
12   50   6   30   10   35
13   70  10   50   10   25
14   70  10   30   10   35
15   60   8   40   12   30
16   70   6   50   10   35
17   70  10   50   14   35
18   60   8   40   12   30
19   60   8   40   12   30
20   70   6   50   14   25
21   50   6   30   14   25
22   60   8   40   12   30
23   70  10   30   14   25
24   50   6   50   10   25
25   60   8   40   12   30
26   50  10   50   14   25
27   50  10   30   10   25
28   50   6   50   14   35
29   50  10   50   10   35
30   70   6   30   14   35
31   50  10   30   14   35
32   60   8   40   12   30
33   70   6   30   10   25


Fig. 2 Example of width of kerf measurement for sample 24, where the lowest width of kerf was measured

Table 3 Averages and standard deviations of the measurement of the width of kerf

Number of sample  Average width of kerf (μm)  St. dev. width of kerf (μm)
1    393.9     5.068
2    377.434   7.802
3    401.204   10.637
4    392.334   4.659
5    387.4     7.498
6    379.94    7.632
7    399.438   3.998
8    391.66    6.078
9    394.182   12.548
10   393.856   3.486
11   394.84    3.424
12   385.528   6.359
13   393.298   5.1
14   417.726   11.304
15   390.122   5.04
16   384.212   6.813
17   411.264   5.15
18   395.164   7.981
19   395.386   3.874
20   383.662   7.68
21   381.154   7.814
22   391.332   6.587
23   393.414   14.708
24   370.63    4.336
25   391       4.582
26   388.154   3.474
27   397.022   5.639
28   383.228   7.531
29   406.664   5.968
30   401.52    10.36
31   421.448   13.868
32   395.604   7.381
33   379.064   6.483


4 Statistical Evaluation

Because the width of kerf was measured repeatedly, it was possible to test the essential assumption for linear regression (variance homogeneity). The hypothesis about this assumption was not rejected according to Bartlett's test at the significance level α = 0.05 (Fig. 3). However, the repeated measurement of all samples induces a strict upper bound on the amount of variability in the data that any mathematical model will be able to explain. The amount of "explainable" variability can be determined using analysis of variance (ANOVA) for the samples. The statistical significance of the ANOVA model (P-value = 0) proves without a doubt that there are statistically significant differences in the width of kerf for different samples. These differences can be clearly seen in the interval plot in Fig. 4. The coefficient of determination of this ANOVA model was determined to be 71.48%. This number is also the upper bound for any model created by the method of linear regression, because the rest of the variability is "hidden" in the repeated measurements.
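The upper bound discussed above can be reproduced directly from the repeated measurements. The following sketch is added for illustration only; it assumes the five raw measurements of each sample are available as NumPy arrays, which is not how the original analysis is documented:

```python
import numpy as np
from scipy import stats

def explainable_variability(groups):
    """groups: list of arrays, one per sample, holding the repeated
    width-of-kerf measurements. Returns the Bartlett and one-way ANOVA
    p-values and the between-sample share of variability, i.e. the upper
    bound for any regression model built on the machine parameters."""
    _, p_bartlett = stats.bartlett(*groups)      # variance homogeneity
    _, p_anova = stats.f_oneway(*groups)         # sample effect (ANOVA)
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    return p_bartlett, p_anova, 100.0 * ss_between / ss_total
```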

Fig. 3 Bartlett’s test for equal variances


Fig. 4 Interval plot of the width of kerf for different samples

The mathematical model of the width of kerf was created using the "stepwise" method with a 0.05 level to enter and the same level to remove a factor. The starting point was a "full quadratic" model containing all second-order interactions and all squares of the parameters. This regression model "explains" 45.23% of the variability in the data, which is 63.28% of the upper bound. This mathematical model of the width of kerf (W), containing only statistically significant parameters (see Table 4), is written in Eq. 1:

W = 387.7 + 0.205 · U − 4.23 · Ton − 0.296 · Toff − 0.977 · I + 0.283 · Ton · I,   (1)

where W (μm) is the width of kerf, U (V) is gap voltage, Ton (μs) is pulse on time, Toff (μs) is pulse off time and I (A) is discharge current. The parameter wire feed was not statistically significant. Despite the clear statistical significance of this model, its quality is not the best. Working against any predictive capability of the model is also the fact that 28% of the total variability is hidden in the repeated measurements. To confirm this suspicion, a short MSA study was performed. Five samples were randomly selected for this analysis, performed by two operators with two measurements of each sample using the methodology described above. The results of the MSA (Table 5) show that even though reproducibility is negligible (less than 0.01% of the total standard deviation), repeatability causes this measurement system to be not even conditionally adequate (46.77% of the total standard deviation).
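For orientation (a substitution added here, not taken from the study), evaluating Eq. 1 at the central machine setting U = 60 V, Ton = 8 μs, Toff = 40 μs and I = 30 A gives

W = 387.7 + 0.205 · 60 − 4.23 · 8 − 0.296 · 40 − 0.977 · 30 + 0.283 · 8 · 30 ≈ 392.9 μm,

which is close to the averages measured for the repeated central points in Table 3 (roughly 390–396 μm).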

Table 4 P-values of statistically significant parameters for the regression model of the width of kerf

Source                                          P-value
Model                                           0.000
  Linear                                        0.000
    Gap voltage (V)                             0.044
    Pulse on time (μs)                          0.000
    Pulse off time (μs)                         0.004
    Discharge current (A)                       0.000
  2-Way interaction                             0.009
    Pulse on time (μs) * Discharge current (A)  0.009

Table 5 MSA results for the width of kerf measurement system

Source             Study var. (%)
Total gage R&R     46.77
  Repeatability    46.77
  Reproducibility  0.00
    Operators      0.00
Part-to-part       88.39
Total variation    100.00

Because of this unfavorable result, it was decided to use the collected data to perform an MSA for the average width of kerf and then, if successful, to try to build a regression model. The results of the MSA for the average width of kerf are summarized in Fig. 5. Similarly to the previous MSA, reproducibility is negligible, but this time the standard deviation of repeatability takes only 23.02% of the total standard deviation, making this measurement system conditionally adequate for measuring the average width of kerf. Because of the favourable MSA result, a regression model for the average width of kerf was computed, again using the stepwise method described above. In this regression model only two statistically significant predictors were found, pulse on time (μs) and discharge current (A) (both with P-value < 0.01). The regression model of the average width of kerf is denoted by Eq. (2):

Wavg = 320.3 + 4.247 · Ton + 1.285 · I,   (2)

where Wavg (μm) is the average width of kerf, Ton (μs) is pulse on time and I (A) is discharge current. From the corresponding response surface (Fig. 6) it is clear that

Fig. 5 Summary of MSA for average width of kerf. a Average width of kerf by operators, b average width of kerf by Parts, c components of variation


Fig. 6 Response surface for the average width of kerf

for minimizing average width of kerf the optimal setting would be discharge current = 25 A and pulse on time = 6 μs.
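Substituting this setting into Eq. 2 (a worked check added here for illustration) gives Wavg = 320.3 + 4.247 · 6 + 1.285 · 25 ≈ 377.9 μm, which agrees well with the smallest measured average width of kerf, 370.63 μm for sample 24, machined with pulse on time 6 μs and discharge current 25 A.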

5 Conclusion and Discussion

Based on the described design of experiment, 33 samples from aluminum alloy 7475-T7351 were created. Metallographic cuts of cross-sections of these samples were made and the width of kerf was then measured using the methodology described above. Analysis of variance clearly proved the effect of different machine parameter settings on the width of kerf. A mathematical model of the width of kerf was created, but during MSA some clear drawbacks were discovered, and a mathematical model only for the average width of kerf (with a satisfactory MSA) was found. A further upgrade of the measurement system can probably be achieved by using more advanced image processing tools, as in [10] or [16], which will allow many more repeated measurements of one sample while still taking a bearable amount of time. The optimal machine parameter setting for minimizing the average width of kerf was found, but due to the not particularly high r-squared there are other significant parameters affecting the width of kerf that were not included in the DoE. Somashekhar, Ramachandran and Mathew in [17] achieved a similar r-squared in their model for the width of kerf using gap voltage (V), capacitance (μF) and feed rate (μm s−1). Patil and Brahmankar in [18] additionally found flushing pressure (MPa) to be statistically significant, but they used a higher significance level than 0.05. Unfortunately, in these papers, and similarly in [19–22], the methodology for measuring the width of kerf is unclear, making it difficult to compare their conclusions to the results presented in this study. From this it can be concluded that, even though some work has already been done, there is plenty of room to improve the measurement system for the width of kerf or to find other parameters with a significant effect on the width of kerf.


Acknowledgements This work is an output of research and scientific activities of NETME Centre, supported through project NETME CENTRE PLUS (LO1202) by financial means from the Ministry of Education, Youth and Sports under the “National Sustainability Programme I”. This research has been financially supported from projects no. FEKT-S-17-3934 and FEKT/STIJ-18-5354. This paper was supported by BUT, Faculty of Mechanical Engineering, Brno, Specific research 2016, with the grant “Research of modern production technologies for specific applications”, FSIS-16-3717 and technical support of Intemac Solutions, Ltd., Kurim. Part of the work was carried out with the support of CEITEC Nano Research Infrastructure (MEYS CR, 2016–2019).

References 1. Jameson, E.C.: Electrical Discharge Machining. Society of Manufacturing Engineers, Southfield (2001) 2. Ho, K.H., Newman, S.T., Rahimifard, S., Allen, R.D.: State of the art in wire electrical discharge machining (WEDM). Int. J. Mach. Tools Manuf. 44, 1247–1259 (2004) 3. Ho, K.H., Newman, S.T., Rahimifard, S., Allen, R.D.: State of the art electrical discharge machining (EDM). Int. J. Mach. Tools Manuf. 43, 1287–1300 (2003) 4. Boothroyd, G., Knight, W.A.: Fundamentals of Machining and Machine Tools. Taylor and Francis, Boca Raton (2005) 5. Dodun, O., Gonçalves-Coelho, A.M., Sl˘atineanu, L., Nagî¸t, G.: Using wire electrical discharge machining for improved corner cutting accuracy of thin parts. Int. J. Adv. Manuf. Technol.. 41, 858–864 (2009) 6. Mouralova, K., Matousek, R., Kovar, J., Mach, J., Klakurkova, L., Bednar, J.: Analyzing the surface layer after WEDM depending on the parameters of a machine for the 16MnCr5 steel. Measurement 94, 771–779 (2016) 7. Mouralova, K., Kovar, J., Klakurkova, L., Bednar, J., Benes, L., Zahradnicek, R.: Analysis of surface morphology and topography of pure aluminium machined using WEDM. Measurement 114, 169–176 (2018) 8. Mouralova, K., Kovar, J., Klakurkova, L., Prokes, T., Horynova, M.: Comparison of morphology and topography of surfaces of WEDM machined structural materials. Measurement 104, 12–20 (2017) 9. Mouralova, K., Kovar, J., Klakurkova, L., Blazik, P., Kalivoda, M., Kousal, P.: Analysis of surface and subsurface layers after WEDM for Ti–6Al–4V with heat treatment. Measurement 116, 556–564 (2018) 10. Mouralova, K., Kovar, J., Klakurkova, L., Prokes, T.: Effect of Width of kerf on machining accuracy and subsurface layer after WEDM. J. Mater. Eng. Perform. 27, 1908–1916 (2018) 11. Montgomery, D.C.: Design and Analysis of Experiments. Wiley, Hoboken, NJ (2017) 12. Mathews, P.G.: Design of Experiments with MINITAB (2010). ISBN 9788122431117 13. Goos P., Jones, B.: Optimal Design of Experiments a Case Study Approach (2011) 14. Juran, J.M., Gryna, F.M.: Juran’s Quality Control Handbook. McGraw-Hill, New York (1988) 15. Mouralova, K.: Moderní technologie drátového elektroerozivního ˇrezání kovových slitin. CERM thesis. Brno (2015). ISBN 80-214-2131-2 16. Mouralova, K., Kovar, L., Bednar, J., Matousek, R., Klakurkova, L.: Statistical evaluation width of kerf after WEDM by analysis of variance. Mendel J. Series. 2016, 301–304 (2016) 17. Somashekhar, K.P., Ramachandran, N., Mathew, J.: Material removal characteristics of microslot (kerf) geometry in μ-WEDM on aluminum. Int. J. Adv. Manuf. Technol.. 51, 611–626 (2010)


18. Patil, N.G., Brahmankar, P.K.: Some studies into wire electro-discharge machining of alumina particulate-reinforced aluminum matrix composites. Int. J. Adv. Manuf. Technol.. 48, 537–555 (2010) 19. Mahapatra, S.S., Patnaik, A.: Optimization of wire electrical discharge machining (WEDM) process parameters using Taguchi method. Int. J. Adv. Manuf. Technol. 34, 911–925 (2007) 20. Tosun, N., Cogun, C., Tosun, G.: A study on kerf and material removal rate in wire electrical discharge machining based on Taguchi method. J. Mater. Process. Technol. 152, 316–322 (2004) 21. Kerfs width analysis for wire cut electro discharge machining of SS 304 L using design of experiments. Ind. J. Sci. Technol. 3, 369–373 (2010) 22. Gupta, P., Gupta, R.D., Khanna, R., Sharma, N.: Effect of process parameters on kerf width in WEDM for HSLA using response surface methodology. J. Eng. Technol.. 2, 1–6 (2012)

Many-objective Optimisation for an Integrated Supply Chain Management Problem Seda Türk, Ender Özcan, and Robert John

Abstract Due to the complexity of the supply chain, with multiple conflicting objectives requiring a search for a set of trade-off solutions, there has been a range of studies applying multi-objective methods. In recent years, there has been a growing interest in the area of many-objective (four or more objectives) optimisation, which handles difficulties that multi-objective methods are not able to overcome. In this study, we explore the formulation of a Supply Chain Management (SCM) problem in terms of the possibility of having conflicting objectives. Non-dominated Sorting Genetic Algorithm-III (NSGA-III) is used as a many-objective algorithm. First, to make the search effective and to reach solutions of better quality, the parameters of the algorithm are tuned. After parameter tuning, we use NSGA-III at its best performance and test it on twenty-four synthetic and real-world problem instances considering three performance metrics: hypervolume, generational distance and inverted generational distance.

S. Türk (B) Faculty of Engineering, School of Industrial Engineering, Igdir University, Igdir, Turkey e-mail: [email protected] E. Özcan · R. John School of Computer Science, University of Nottingham, Nottingham, UK e-mail: [email protected] R. John e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_9

1 Introduction

Supply chain management is critical to achieving a sustainable competitive advantage for a company. One major aspect of supply chain management is to select suppliers which can support the success of a company in meeting its expectations. Another one is to plan and control inventory through the whole network from suppliers to customers, balancing material flows among the entire processes of a supply chain


effectively. Thus, supplier selection combined with effective inventory planning has been studied by a number of researchers [5, 7, 9]. In Turk et al. [12], a generic local search meta-heuristic is used to solve the integrated SCM problem, which aims to deal with both supplier selection and inventory planning by aggregating two objectives subject to several constraints. A simulated annealing approach is applied to the problem, balancing the trade-off between supply chain operational cost and supplier risk using two scalarization approaches, weighted sum and Tchebycheff. That study illustrated the multi-objective nature of the problem by testing the proposed approaches on a simple single problem instance. Turk et al. [12] provided an approach which is capable of capturing the trade-off between risk and cost via scalarisation of both objectives. This gives flexibility to the decision makers to choose from a set of trade-off solutions for supply chain management. In Turk et al. [13], three population-based meta-heuristic algorithms are used to tackle the same problem with an attempt to detect the best performing approach. The problem is formulated as a two-objective problem and the performance of three Multi-objective Evolutionary Algorithms (MOEAs), Non-dominated Sorting Genetic Algorithm-II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2) and Indicator Based Evolutionary Algorithm (IBEA), is investigated. Although MOEAs performed reasonably well in that study for the two-objective problem, there could be even more conflicting objectives which can be considered in the solution model and then simultaneously optimised. In this problem, the total cost is the sum of 5 different costs related to production processes: production cost, holding cost, batch cost, transportation cost and stock-out cost. In this study, we treat each cost component as a separate objective and solve the integrated supply chain management problem as a many-objective problem using NSGA-III. To the authors' knowledge, this is one of the first studies on the many-objective supply chain management problem. The problem takes into account six objectives, i.e., total risk, production cost, holding cost, batch cost, transportation cost and stock-out cost. The main purpose of this paper is to handle multiple optima and other complexities for the integrated problem of supplier selection and inventory planning formulated as a many-objective problem. The rest of the paper is organised as follows. In Sect. 2, background information on NSGA-III is provided. In Sect. 3, the definition of the problem is reviewed briefly. In Sect. 4, numerical experiments are presented. In Sect. 5, computational results are discussed and in Sect. 6, the conclusion and possible directions for future work are given.

2 Preliminaries

This section provides an introduction to the technique used in this work and an overview of related work in the literature.


2.1 Non-dominated Sorting Genetic Algorithm-III

Recently, there has been a growing interest in many-objective (four or more objectives) optimisation problems. Most MOEAs have faced difficulties in solving many-objective optimisation. Difficulties in handling many objectives can be listed as: (i) a large fraction of the population becomes non-dominated as the number of objectives grows, (ii) in a high-dimensional objective space, diversity measurement becomes difficult and computationally expensive, and (iii) recombination operators may be insufficient to improve offspring solutions [2]. Deb and Jain [2] developed an NSGA-II procedure with significant changes in the selection operator to overcome these difficulties and called it NSGA-III. NSGA-III remains similar to NSGA-II apart from replacing the crowding distance in the selection operator with a systematic evaluation of individuals in the population with respect to a set of reference points [14]. Initially, a population Pt of size N is randomly generated and those N individuals are sorted into different non-domination levels. Then, an offspring population Qt of size N is created by applying crossover and mutation operators with associated probabilities (rates). Pt and Qt are merged to form Rt of size 2N, which includes elite members of both the parent and offspring populations. All individuals in Rt are sorted into a number of non-domination levels F1, F2 and so on. The rest of the NSGA-III algorithm works quite differently from NSGA-II. After sorting Rt, a new population St of size N is obtained, and the individuals in St are examined with respect to a set of reference points, either predefined or supplied. All objective values and reference points are first normalised to keep them in an identical range. Then, in order to associate each individual in the population St with a reference point, a reference line is determined joining the reference point with the origin of the normalised space. Next, the perpendicular distances between each individual in St and the reference lines are calculated and each individual is associated with the closest reference line. After that, in order to selectively choose which points will be in the next population Pt+1, a niche-preserving operator explained in Deb and Jain [2] is used.
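The association step described above can be sketched as follows (an illustration added here, not the authors' implementation; it assumes the objectives in St have already been normalised and that the reference points are supplied as an array, and the function names are chosen only for this example):

```python
import numpy as np

def associate(St, ref_points):
    """Associate each (normalised) objective vector in St with its closest
    reference line. St: (n, m) array of objectives; ref_points: (r, m)."""
    # Reference lines join the origin of the normalised space with each point
    dirs = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)
    proj = St @ dirs.T                         # (n, r) scalar projections
    perp = np.empty_like(proj)
    for j, d in enumerate(dirs):
        # Perpendicular distance of s to the line with unit direction d
        perp[:, j] = np.linalg.norm(St - proj[:, [j]] * d, axis=1)
    nearest = perp.argmin(axis=1)              # associated reference point
    niche_count = np.bincount(nearest, minlength=len(ref_points))
    return nearest, niche_count
```

The niche counts returned here are what the niche-preserving operator of Deb and Jain [2] uses to decide which members fill the next population.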

2.2 Performance Metrics

The performance of multi-objective algorithms is assessed using various metrics, including the number of non-dominated solutions found, the distance of the final Pareto set to the global Pareto-optimal front (accuracy), the distribution of the final Pareto set with respect to the Pareto-optimal front, and the spread of the Pareto set (diversity) [8, 15]. When dealing with multi-objective optimisation problems, the purpose is to achieve a desirable non-dominated set. However, for a number of reasons, the assessment of results becomes difficult: (1) several solutions are generated rather than one as in a single-objective optimisation problem, (2) a number of runs need to be performed to assess the performance of EAs due to their stochastic nature, and (3)


different entities, such as coverage and diversity of a set of solutions, could be measured and used as guidance during the search process [10]. In order to handle these difficulties, several performance metrics have been proposed in the literature. In this study, three performance metrics, hypervolume (HV), generational distance (GD) and inverted generational distance (IGD), are used (further explanations are given in the work of Turk et al. [13]). In addition, this approach provides a set of trade-off solutions. A common way of (automatically) reducing all trade-off solutions to a "preferable" single solution is detecting the solution at the knee point on the Pareto front. We have used the method presented in [1] to obtain a single solution, called a knee solution, based on the knee point for all problem instances.
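For reference, the two distance-based metrics can be computed with a few lines of NumPy (a sketch added here using the simple arithmetic-mean variants of GD and IGD; other power-mean variants exist, and hypervolume is usually computed with a dedicated library, so it is omitted):

```python
import numpy as np

def gd(front, reference_front):
    """Generational distance: average distance from each obtained solution
    to its nearest point on the reference (approximated optimal) front."""
    d = np.linalg.norm(front[:, None, :] - reference_front[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(front, reference_front):
    """Inverted generational distance: average distance from each reference
    point to its nearest obtained solution."""
    return gd(reference_front, front)
```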

3 Problem Description

In Turk et al. [13], a two-stage integrated approach to supplier selection and inventory planning is presented. In the first stage, in order to obtain a risk value for each supplier, suppliers are evaluated based on various criteria derived from cost, quality, service and delivery using Interval Type-2 Fuzzy Sets (IT2FSs). In the second stage, the supplier rank information is fed into an inventory model built to cover the effect of suppliers on the total cost of a supply chain. The integrated SCM problem is formulated as a multi-objective problem which aims to handle two objectives: total cost and total risk. The total cost is the sum of 5 different costs related to production processes: production cost, holding cost, batch cost, transportation cost and stock-out cost. Due to the existence of conflicting objectives, we treat each cost component as a separate objective and solve the integrated supply chain management problem as a six-objective problem using NSGA-III. The formulation of the problem can be found in the work of Turk et al. [13]. In this section, only the formulation of each objective is given.

3.1 Problem Formulation

The formulation of the six-objective supply chain problem is presented below with the relevant notation shown in Tables 1, 2 and 3.

Table 1 Notations

Notation  Meaning                  Notation  Meaning
i         Supplier                 p         Product
j         Manufacturing plant      c         Components
k         Customers                t         Discrete time period

Table 2 Notation for decision variables [12]

Variable          Meaning
P_A(p, j, k, t)   Amount of product p from plant j to customer k in period t
C_A(c, i, j, t)   Amount of component c from supplier i to plant j in period t

Table 3 Notation used for the formulation of the problem [12]

Notation        Meaning
P_I(p, j, t)    Inventory of product p at plant j in period t
C_I(c, j, t)    Inventory of component c at plant j in period t
Y_P(p, k)       Product p's selling price for customer k
T_C(c, i, j)    Carrying cost for component c between supplier i and plant j
T_P(p, j, k)    Carrying cost for product p between plant j and customer k
I_C(c, j)       Component c's inventory cost at plant j
I_P(p, j)       Product p's inventory cost at plant j
S_C(c, j)       Shortage cost at plant j for component c
S_P(p, j)       Shortage cost at plant j for product p
O_C(c, i)       Ordering cost of supplier i for component c
M_P(p, j)       Manufacturing cost for product p at plant j
S(p, j)         Setup cost in plant j for product p
D_S(i, j)       Distance between supplier i and plant j
D_P(j, k)       Distance between plant j and customer k
Rank(i)         Rank of vendor i
Risk(i)         Risk of vendor i
P_M(p, k, t)    Non-fulfilment amount of product p for customer k in period t

In this research, the integrated SCM problem is considered as a six-objective problem solved by a many-objective optimisation method. The aim of this study is to minimise: (1) the potential risk endured, TR (Eq. 6), as a result of the supplier selection and (2) each cost component of the supply chain, THC (Eq. 1), TTC (Eq. 2), TBC (Eq. 3), TPC (Eq. 4) and TSC (Eq. 5). In Eq. 1, the total inventory holding cost is shown for the components and products successively. Equation 2 accumulates the transportation cost, considering the products in the first term and the components in the second term, respectively. In Eq. 3, the component ordering and setup costs are added as batch costs. The manufacturing and shortage costs for each product and each component are included in Eq. 4 as the total production cost. The stock-out cost is depicted in Eq. 5 as a penalty cost incurred when the quantity of production does not satisfy the customer demands.

minimise $THC = \sum_{t}\sum_{p}\sum_{j} I_P(p,j)\, P_I(p,j,t) + \sum_{t}\sum_{c}\sum_{j} I_C(c,j)\, C_I(c,j,t)$   (1)

minimise $TTC = \sum_{t}\sum_{p}\sum_{j}\sum_{k} P_A(p,j,k,t)\, D_P(j,k)\, T_P(p,j,k) + \sum_{t}\sum_{c}\sum_{i}\sum_{j} C_A(c,i,j,t)\, D_S(i,j)\, T_C(c,i,j)$   (2)

minimise $TBC = \sum_{t}\sum_{c}\sum_{i}\sum_{j} O_C(c,i)\, C_A(c,i,j,t) + \sum_{t}\sum_{p}\sum_{j}\sum_{k} S(p,j)\, P_A(p,j,k,t)$   (3)

minimise $TPC = \sum_{t}\sum_{p}\sum_{j}\sum_{k} M_P(p,j)\, P_A(p,j,k,t) + \sum_{t}\sum_{p}\sum_{j} S_P(p,j)\, P_I(p,j,t) + \sum_{t}\sum_{c}\sum_{j} S_C(c,j)\, C_I(c,j,t)$   (4)

minimise $TSC = \sum_{t}\sum_{p}\sum_{k} P_M(p,k,t)\, Y_P(p,k)$   (5)

minimise $TR = \sum_{t}\sum_{c}\sum_{i}\sum_{j} C_A(c,i,j,t)\, \mathrm{Risk}(i)$   (6)

Equation 6 demonstrates the total risk of suppliers with respect to Eq. 7, which shows the calculation of a coefficient for the risk of each supplier by normalising the supplier rank (detailed in Turk et al. [13]):

$\mathrm{Risk}(i) = \dfrac{\mathrm{Rank}(i)}{\sum_{i} \mathrm{Rank}(i)}$   (7)
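A compact way to evaluate Eqs. (1)–(7) for a candidate plan is sketched below. This is added for illustration only and assumes all quantities are stored as dense NumPy arrays indexed in the order given in Tables 1, 2 and 3; it is not the authors' implementation:

```python
import numpy as np

def objectives(P_A, C_A, P_I, C_I, P_M, I_P, I_C, T_P, T_C, D_P, D_S,
               O_C, S, M_P, S_P, S_C, Y_P, Rank):
    """P_A[p,j,k,t] and C_A[c,i,j,t] are the decision variables; the other
    arrays hold the cost, distance and rank data of Tables 2 and 3."""
    Risk = Rank / Rank.sum()                                          # Eq. (7)
    THC = np.einsum('pj,pjt->', I_P, P_I) + np.einsum('cj,cjt->', I_C, C_I)
    TTC = (np.einsum('pjkt,jk,pjk->', P_A, D_P, T_P)
           + np.einsum('cijt,ij,cij->', C_A, D_S, T_C))
    TBC = np.einsum('ci,cijt->', O_C, C_A) + np.einsum('pj,pjkt->', S, P_A)
    TPC = (np.einsum('pj,pjkt->', M_P, P_A)
           + np.einsum('pj,pjt->', S_P, P_I)
           + np.einsum('cj,cjt->', S_C, C_I))
    TSC = np.einsum('pkt,pk->', P_M, Y_P)
    TR  = np.einsum('cijt,i->', C_A, Risk)
    return TR, TPC, THC, TBC, TTC, TSC   # the six minimised objectives
```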


4 Preliminary Experiments

4.1 Experimental Setup

The results found in stage one are carried over to the second stage to solve the integrated problem using NSGA-III. The jMetal suite [3, 4] is used to run all experiments with NSGA-III as the many-objective evolutionary algorithm. Each trial is repeated 30 times, where each run yields a set of trade-off solutions. A run terminates whenever 5000 iterations/generations are exceeded. The same chromosome representation explained in Turk et al. [13] is used to depict a potential inventory plan. The initial population is generated randomly. Binary tournament selection is employed to create an offspring population. Simulated Binary Crossover (SBX) and Polynomial Mutation operators are used. The parameters of NSGA-III include the population size (P), crossover probability (Pc), distribution index for crossover (Dc), distribution index for mutation (Dm) and number of divisions (Nd). NSGA-III shares the other algorithmic control parameters with NSGA-II; the number of divisions (Nd) is specific to NSGA-III and determines how many reference points are used to maintain diversity in the obtained solutions. All the algorithmic control parameters are tuned.
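For readers unfamiliar with the two variation operators, a minimal sketch of their textbook form is given below. This is added for illustration only; jMetal's implementations additionally handle variable bounds, per-variable probabilities and repair, so this is not the exact code used in the experiments:

```python
import numpy as np

def sbx_pair(x1, x2, eta_c, rng=None):
    """Simulated Binary Crossover for one real-valued gene (bounds omitted).
    eta_c is the distribution index for crossover (Dc in the text)."""
    rng = rng or np.random.default_rng()
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2)
    c2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2)
    return c1, c2

def polynomial_mutation(x, low, high, eta_m, rng=None):
    """Polynomial mutation for one gene; eta_m is the distribution index
    for mutation (Dm in the text)."""
    rng = rng or np.random.default_rng()
    u = rng.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return np.clip(x + delta * (high - low), low, high)
```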

4.2 Problem Instances Twenty four problem instances provided by Turk et al. [13] are used in this study. Problem instances have different characteristics and sizes, where four of them are real world problem instances and 20 of them are randomly generated based on those instances.

4.3 Parameter Tuning of NSGA-III Parameters of NSGA-III are tuned using Taguchi orthogonal arrays [11] as a design of experiments method. We investigated five control parameters for NSGA-III with the following potential settings: P ∈ {25, 50, 100, 200}, Pc ∈ {0.6, 0.7, 0.8, 0.9}, Dc , Dm ∈ {5, 10, 15, 20} and Nd ∈ {3, 4, 5, 6}. The L 16 Taguchi orthogonal arrays design is used to achieve the best parameter configuration. We followed the same way explained in Turk et al. [13] and obtained results as shown in Table 4. The mean effect of each parameter is calculated in the same manner as explained in the work of Turk et al. [13]. Figure 1 provides the main effects plot indicating the performance of each parameter value setting. The best configuration for NSGA-III is attained as 200 for P, 0.7 for Pc , 10 for Dc , 5 for Dm and 6 for Nd . In addition,

Table 4 Average rank of NSGA-III, with a particular parameter configuration based on the L16 Taguchi orthogonal array

Experiment number (EN)   P    Pc   Dc  Dm  Nd  Average rank NSGA-III
1                        25   0.6   5   5   3  13.4
2                        25   0.7  10  10   4   8.9
3                        25   0.8  15  15   5   6.4
4                        25   0.9  20  20   6   5.2
5                        50   0.6  10  15   6   4.2
6                        50   0.7   5  20   5   6.8
7                        50   0.8  20   5   4   9.4
8                        50   0.9  15  10   3  13.5
9                       100   0.6  15  20   3  11.4
10                      100   0.7  20  15   4  14.0
11                      100   0.8   5  10   6   4.0
12                      100   0.9  10   5   5   5.9
13                      200   0.6  20  10   5   5.6
14                      200   0.7  15   5   6   3.0
15                      200   0.8  10  20   3  14.2
16                      200   0.9   5  15   4  10.2
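The mean rank per parameter level plotted in Fig. 1 can be obtained from Table 4 with a few lines of code (a sketch added here for illustration; the Nd column and the average ranks below are taken from the table above):

```python
import numpy as np

def main_effects(levels, ranks):
    """Mean rank of the runs at each level of one parameter; the level with
    the lowest mean rank is the preferred setting (cf. Fig. 1)."""
    levels = np.asarray(levels)
    ranks = np.asarray(ranks, dtype=float)
    return {lv: ranks[levels == lv].mean() for lv in np.unique(levels)}

nd   = [3, 4, 5, 6, 6, 5, 4, 3, 3, 4, 6, 5, 5, 6, 3, 4]
rank = [13.4, 8.9, 6.4, 5.2, 4.2, 6.8, 9.4, 13.5,
        11.4, 14.0, 4.0, 5.9, 5.6, 3.0, 14.2, 10.2]
print(main_effects(nd, rank))   # lowest mean rank at Nd = 6
```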


Fig. 1 Main effects plot with mean rank values in NSGA-III

Table 5 ANOVA test results for dismissing the contribution of each parameter for NSGA-III in terms of percent contribution

MOEAs      P     Pc    Dc    Dm    Nd     Error  Total
NSGA-III   0.28  0.28  0.11  2.51  96.81  –      100%

In addition, the contribution of each parameter setting to the performance of NSGA-III is analysed using ANOVA. Table 5 summarises the results. The number of divisions has a significant contribution, within a confidence level of 95%, to the performance of NSGA-III. In the same manner, the hypervolume of three selected instances is used to assess the performance of each parameter setting configured based on the Taguchi method (Table 6).
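A rough sketch of how such percent contributions can be derived from the Taguchi table is given below (added for illustration; with five four-level factors the L16 design is saturated, so an error term such as the one in Table 5 would normally be obtained by pooling the smallest effects, which is not done here):

```python
import numpy as np

def percent_contribution(design, ranks):
    """Taguchi-style ANOVA: share of the total sum of squares explained by
    each parameter. design maps a parameter name to its list of levels,
    one entry per experiment; ranks are the average ranks of the runs."""
    ranks = np.asarray(ranks, dtype=float)
    ss_total = ((ranks - ranks.mean()) ** 2).sum()
    contrib = {}
    for name, levels in design.items():
        levels = np.asarray(levels)
        ss = sum((levels == lv).sum()
                 * (ranks[levels == lv].mean() - ranks.mean()) ** 2
                 for lv in np.unique(levels))
        contrib[name] = 100.0 * ss / ss_total
    return contrib
```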

5 Computational Results

Given the six objective functions, the experimentation is conducted in exactly the same manner as when generating a Pareto set of solutions with two objectives, as explained in Turk et al. [13]. NSGA-III is applied to the 24 problem instances. The high-dimensional trade-off front produced by the many-objective evolutionary algorithm is analysed. Three different performance metrics and the cost results are considered to investigate its performance. In order to tackle the difficulties of the high-dimensional trade-off front for the six-objective problem, we aggregate all cost values together and evaluate them in the same way as in the two-objective problem (to find the Pareto-front sets and their performance). For the 24 instances, NSGA-III provides better mean values of hypervolume, generational distance and inverted generational distance than NSGA-II, SPEA2 and IBEA in Turk et al. [13].


Table 6 Performance of NSGA-III in terms of hypervolume (HV), generational distance (GD) and inverted generational distance (IGD) Inst Inst1

HV

GD

IGD

Inst

HV

GD

IGD

Mean 0.8617

14,191.2

465,424.7 Inst13 Mean 0.9206

stnd.

0.0178

12,478.6

178,732.2

0.0088

41,477.9 279,123.7

Inst2

Mean 0.9026

20,818.4

165,035.9 Inst14 Mean 0.8746

79,809.1 212,990.4

Inst3

Mean 0.8263

46,785.9 2,189,103.3 Inst15 Mean 0.9263

94,034.5 204,205.7

stnd.

19,769.8 1,026,613.6

46,892.7

stnd.

Inst4

0.0070 0.0084

9,786.4

55,906.3

stnd. stnd. stnd.

0.0100 0.0074

76,152.2 713,037.7

34,490.0

57,665.8 57,232.2

Mean 0.8383

30,252.1

457,079.3 Inst16 Mean 0.8980

90,109.0 182,753.4

stnd.

0.0102

23,904.7

271,723.8

44,550.3

Inst5

Mean 0.8971

46,179.5

298,860.6 Inst17 Mean 0.8862 115,584.6 534,597.3

0.0091

29,988.4

127,470.0

Inst6

Mean 0.8693

58,723.5

396,973.3 Inst18 Mean 0.9083 131,657.6 209,729.7

stnd.

0.0045

28,026.6

149,382.0

Mean 0.8770

51,260.7

168,246.7 Inst19 Mean 0.9837

stnd.

0.0078

24,129.5

34,780.2

Inst8

Mean 0.8717

19,298.9

85,388.1 Inst20 Mean 0.9792 117,175.2 301,284.0

stnd.

11,332.0

Inst9

Mean 0.8899 118,589.7

199,444.9 Inst21 Mean 0.9847

57,679.7 307,158.7

stnd.

136,917.6

17,709.2

stnd.

Inst7

0.0060

0.0153 122,349.9

Inst10 Mean 0.8731 stnd.

31,074.0

0.0078

11,447.5

Inst11 Mean 0.8630

61,447.7

stnd.

0.0077

65,245.3

Inst12 Mean 0.8802

82,517.8

stnd.

0.0082

44,704.8

19,731.9

stnd. stnd. stnd. stnd. stnd. stnd.

0.0070 0.0120 0.0110 0.0016 0.0026 0.0017

141,270.7 Inst22 Mean 0.9862 41,997.4

stnd.

0.0013

162,999.2 Inst23 Mean 0.9886 47,410.7

stnd.

0.0007

233,954.7 Inst24 Mean 0.9852 73,336.6

stnd.

0.0014

53,838.6

64,992.4 190,702.7 70,231.8

45,540.7

43,111.0 247,124.0 19,239.8 51,612.7

72,672.2 65,545.4 92,594.1

67,881.1 229,310.3 29,465.3

58,393.4

47,219.0 179,730.0 13,601.8

49,710.5

75,444.5 261,063.7 30,343.3

58,524.4

The six objectives, total risk, total production cost, total holding cost, total batch cost, total transportation cost and total stock-out cost, computed for the knee solution of each instance, are summarised in Table 7. We have observed that NSGA-III achieved knee solutions for the majority of the instances with a low customer satisfaction rate. To find reasons for the poor levels of customer service and to visualise the relationships among objectives, Fig. 2 depicts the Pareto-optimal set of Inst19 as an example in "parallel coordinates" [6], generated using NSGA-III. Each green line represents a Pareto-optimal solution and indicates the change across the objectives from one to another. The black line shows the knee solution for Inst19. Figure 3 displays the same data set, but this time there are two lines and each line represents a single solution in the Pareto set of Inst19. In Fig. 3, solution 1, with a high risk, has low production and stock-out costs, while solution 2 represents a low-risk scenario with high production and stock-out costs. Also, the high-risk solution consists of relatively low


Table 7 Objective wise results of NSGA-III Inst

TR

Inst1

10,875.0

TC 3,120.0

83.97%

Service

TBC 330.0

TPC

TTC

2,100.0

190.0

Inst2

7,554.3

3,812.3

85.68%

Inst3

7,394.3

4,189.4

78.02%

300.0

879.0

270.0

1,426.6

Inst4

10,398.8

4,667.7

69.45%

270.0

Inst5

1,814.0

Inst6

9,052.5

5,768.6

46.81%

5,133.9

83.91%

Inst7

15,108.8

6,332.5

Inst8

21,172.1

Inst9

15,617.4

Inst10

TSC

THC

0.0

500.0

176.2

1,911.1

546.0

166.8

1,405.0

921.0

1,643.5

192.2

1,136.1

1,425.9

200.0

703.0

122.9

1,674.5

3,068.2

230.0

2,111.6

182.7

1,783.7

826.0

61.78%

490.0

2,220.0

269.0

933.5

2,420.0

5,480.9

80.29%

700.0

2,524.2

307.9

868.6

1,080.3

4,741.2

80.25%

680.0

1,963.0

302.5

859.5

936.3

16,217.0

5,872.8

57.83%

600.0

1,695.1

267.1

834.1

2,476.4

Inst11

19,834.3

6,005.7

76.31%

710.0

3,089.6

322.9

460.4

1,422.8

Inst12

11,343.0

8,720.8

67.07%

480.0

4,023.6

243.7

1,101.3

2,872.1

Inst13

11,051.8

7,043.5

55.21%

390.0

1,460.0

281.0

1,757.5

3,155.0

Inst14

17,517.9

7,601.5

66.56%

540.0

2,679.9

285.5

1,554.2

2,541.9

Inst15

8,401.4

6,595.7

59.07%

480.0

1,404.1

238.1

1,774.0

2,699.5

Inst16

11,969.5

6,430.6

64.05%

460.0

1,748.5

300.7

1,609.3

2,312.1

Inst17

12,135.5

9,952.9

60.74%

480.0

3,554.5

290.0

1,721.3

3,907.1

Inst18

6,130.3

7,093.1

51.47%

410.0

1,250.1

247.0

1,743.8

3,442.2

Inst19

16,688.4

10,312.5

83.42%

680.0

690.0

348.0

6,884.5

1,710.0

Inst20

24,902.3

11,994.0

79.19%

970.0

1,657.1

442.6

6,427.7

2,496.5

Inst21

16,885.4

11,549.7

83.39%

830.0

1,380.8

365.9

7,054.8

1,918.2

Inst22

22,838.1

10,990.0

93.93%

1,090.0

1,715.7

365.4

7,152.3

666.6

Inst23

15,405.6

9,925.7

99.53%

890.0

1,121.2

358.4

7,509.5

46.5

Inst24

23,212.4

9,713.2

91.88%

1,020.0

829.9

417.1

6,657.8

788.4

TR Total Risk, TC Total Cost, TBC Total Batch Cost, TPC Total Production Cost, TTC Total Transportation Cost, TSC Total Stock-out Cost, THC Total Holding Cost

holding cost. Moreover, from the visualisation we can observe that there is an obvious inverse correlation between the transportation cost and the holding cost, as seen in Fig. 2. In this sense, decreasing the holding cost will increase the transportation cost. However, the correlation between the other objectives is not as obvious. Therefore, the relationship between risk and each cost is investigated in Fig. 4. There is no specific relationship between risk and production cost, transportation cost or batch cost. It is clearly seen that there is a negative relationship between risk and holding cost, and between risk and stock-out cost. In summary, we have explored the performance of NSGA-III on the six-objective integrated SCM problem. Based on the performance metrics, NSGA-III performed reasonably well in this study. However, the empirical results indicate that NSGA-III did not achieve high-quality trade-off solutions satisfying at least 90% of the


Fig. 2 Results of NSGA-III in parallel coordinate for Inst19; green lines show trade-off solutions among objectives and black line indicates the knee solution

Fig. 3 Two solutions of NSGA-III highlighted in parallel coordinate; Trade-off among objectives for Inst19


Fig. 4 Risk versus cost objectives’ results of NSGA-III in parallel coordinate for Inst19

customer demand while each cost is considered as an objective. In addition, to investigate the conflicting objectives, the parallel coordinates figures are used to visualise the relationships among objectives. It is observed that there is an obvious relationship between some objectives, such as risk and stock-out cost. Objective reduction can be an alternative way of removing the redundant objectives from the original objective set. To improve performance and to achieve acceptable cost results, some objectives might be excluded and the problem solved again. Also, the model can be improved to reduce the stock-out cost.


6 Conclusions

This paper provides an investigation of a meta-heuristic algorithm, NSGA-III, on the integrated SCM problem as one of the first studies in the literature. It also analyses the performance of NSGA-III using three well-known performance metrics. First, the optimal parameter setting is found for the algorithm. After tuning, the algorithm is tested on twenty-four problem instances. The results show the overall success of NSGA-III compared to NSGA-II as reported in Turk et al. [13]. Moreover, we examine the trade-off between all costs contributing to the total cost and the risk, separately. The many-objective optimisation algorithm NSGA-III is applied to the six-objective formulation of the same problem [13]. NSGA-III performs well over all instances. However, it is found that NSGA-III cannot satisfy customer expectations while producing a high stock-out cost. Based on these findings, the number of objectives could be reduced to four for many-objective optimisation, based on the relationships among objectives found in the parallel coordinates figures. Another future study could apply the approach to new unseen instances, possibly even larger than the ones used in this study, and/or change the decision makers' supplier-related preferences to create more instances.

References 1. Bechikh, S., Said, L.B., Ghédira, K.: Searching for knee regions in multi-objective optimization using mobile reference points. In Proceedings of SAC’10, pp. 1118–1125. ACM, New York, NY, USA (2010) 2. Deb, K., Jain, H.: An evolutionary many-objective optimization algorithm using referencepoint-based nondominated sorting approach, Part I: solving problems with box constraints. IEEE Trans. Evol. Comput. 18(4), 577–601 (2014) 3. Durillo, J.J., Nebro, A.J., Alba, E.: The jMetal framework for multi-objective optimization: Design and architecture. In: CEC 2010, pp. 4138–4325. Spain, July 2010 4. Durillo, J.J., Nebro, A.J.: jMetal: a java framework for multi-objective optimization. Adv. Eng. Softw. 42, 760–771 (2011) 5. Ghodsypour, S.H., O’brien, C.: The total cost of logistics in supplier selection, under conditions of multiple sourcing, multiple criteria and capacity constraint. Int. J. Prod. Econ. 73(1), 15–27 (2001) 6. Inselberg, A.: Parallel Coordinates: Visual Multidimensional Geometry and Its Applications. Springer, New York, Secaucus, NJ, USA (2009) 7. Mohammaditabar, D., Ghodsypour, S.H.: A supplier-selection model with classification and joint replenishment of inventory items. Int. J. Syst. Sci. 0(0), 1–10 (2014) 8. Narukawa, K., Rodemann, T.: Examining the performance of evolutionary many-objective optimization algorithms on a real-world application. In Proceedings of the 2012 Sixth International Conference on Genetic and Evolutionary Computing, ICGEC’12, pp. 316–319. IEEE Computer Society, Washington, DC, USA (2012) 9. Parhizkari, M., Amiri, M., Mousakhani, M.: A multiple criteria decision making technique for supplier selection and inventory management strategy: a case of multi-product and multisupplier problem. Decis. Sci. Lett. 2(3), 185–190 (2013)


10. Sarker, R., Carlos, A.: Coello, C.A.C.: Assessment methodologies for multiobjective evolutionary algorithms. In: Evolutionary Optimization volume 48 of International Series in Operations Research & Management Science, pp. 177–195. Springer, US (2002) 11. Taguchi, G., Yokoyama, Y.: Taguchi methods: design of experiments. In: Taguchi Methods Series. ASI Press (1993) 12. Turk, S., Miller, S., Özcan, E., John, R.: A simulated annealing approach to supplier selection aware inventory planning. In: IEEE Congress on Evolutionary Computation, CEC 2015, Sendai, Japan, 25–28 May 2015, pp. 1799–1806 (2015) 13. Turk, S., Ozcan, E., John, R.: Multi-objective optimisation in inventory planning with supplier selection. Expert Syst. Appl. 78, 51–63 (2017) 14. Yuan, Y., Xu, H., Wang, B.: An improved NSGA-III procedure for evolutionary many-objective optimization. In: Proceedings of GECCO’14, pp. 661–668. ACM, New York, NY, USA (2014) 15. Zitzler, E.: Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications (1999)

A Classification-Based Heuristic Approach for Dynamic Environments Şeyda Yıldırım-Bilgiç and A. Şima Etaner-Uyar

Abstract Some of the earlier studies on dynamic environments focus on understanding the nature of the changes. However, very few of them use the information obtained to characterize the change for designing better solver algorithms. In this paper, a classification-based single point search algorithm, which makes use of the characterization information to react differently under different change characteristics, is introduced. The mechanisms it employs to react to the changes resemble hyper-heuristic approaches previously proposed for dynamic environments. Experiments are performed to understand the underlying components of the proposed method as well as to compare its performance with similar single point search-based hyper-heuristic approaches proposed for dynamic environments. The experimental results are promising and show the strength of the proposed heuristic approach. Keywords Classification · Dynamic environments · Fitness landscape · Heuristics · Meta-heuristics · Hyper-heuristics · Sentinel

Ş. Yıldırım-Bilgiç (B) · A. Ş. Etaner-Uyar Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey e-mail: [email protected] A. Ş. Etaner-Uyar e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_10

1 Introduction

Real-world optimization problems are mostly dynamic in nature. The aim of a dynamic optimization method is not just to find a stationary optimum solution but also to track the changing optimum [8]. There are several mainstream methodologies that are applied in dynamic optimization problems. These methodologies can be categorized into four groups according to Jin et al. [10]. These are: maintenance of diversity, reacting to changes in the environment, making use of memory and, lastly, using multiple populations.


Some studies focus on representing different environments for adapting to changes. In associative memory schemes, good solutions are stored together with associated environmental data. In this manner, when a new environment occurs that is similar to a collected environment instance, the linked good solutions can be employed for generating new solutions [17]. The Memory Indexing Algorithm (MIA) indexes environments with problem-specific knowledge such as the quality of the environment [11]. MIA applies Estimation of Distribution Algorithms (EDAs) and a hypermutation mechanism together to react to environmental changes. If the environment is similar to previously seen ones, MIA uses a distribution array to initialize the population. Peng et al. [16] presented an environment identification-based memory management scheme (EI-MMS). The EI-MMS uses probability models to characterize and store the landscape of dynamic environments. It then applies this stored data to adjust EDAs in compliance with environmental changes. The study proposed an environment identification method for finding the best-fitting memory elements for new environments. There are also prior studies on the characterization of dynamic environments [1, 2, 9]. Branke et al. [3] proposed a number of measures to characterize the nature of a change. The approach proposed in our study expands this previous work by using the information about the change characteristic to build a more effective solver. We focus on creating a classification-based single point search approach for dynamic environments. The method proposed in this paper uses the characteristics of the change for adapting to the changing environment. The novelty of this work lies in making use of the representation of the change to track the optimum smartly. Several measures are applied to extract features from the dynamic landscape. These features provide information for the classification of dynamic environments. The mechanisms that we propose in our study to respond to changes are similar to previously proposed hyper-heuristic approaches for dynamic environments: the method chooses proper mutation rates for a specific change characteristic in the way a selection hyper-heuristic chooses low-level heuristics. The proposed algorithm is tested using the Moving Peaks Benchmark (MPB). The MPB is a test benchmark for dynamic optimization problems created by Branke [1]. The benchmark generates dynamic landscapes with a number of peaks; every change in the environment creates differences in the heights, widths and locations of the peaks. The MPB is suitable for this study since it presents characteristics similar to those of real-world problems [3]. Experiments are performed to understand the underlying components of the proposed method as well as to compare its performance with similar single-point search-based hyper-heuristic approaches for dynamic environments in the literature.

2 The Proposed Method

We introduce here a classification-based single point search algorithm for solving dynamic optimization problems. The proposed algorithm consists of two components: change characterization and mutation rate selection. Briefly, at the change


characterization step, a learning mechanism is employed to classify the landscape change using the data from the measure calculations. Then, at the mutation phase, the algorithm draws a standard deviation value from the predetermined interval associated with that class, in the way a hyper-heuristic approach chooses a low-level heuristic. The algorithm of the proposed method is given in Algorithm 1.

Algorithm 1 Classification-based single-point search algorithm
1: sentinelPlacement()
2: individual ← createRandomIndividual()
3: while numberOfIterations do
4:   if environmentChanged then
5:     mutationStepCount ← 0
6:     measures ← calculateMeasures(sentinels)
7:     bestSentinel ← findBestSentinel()
8:     if bestSentinel.fitness is better than individual.fitness then
9:       copySentinel(individual, bestSentinel)
10:    end if
11:    classResult ← findClass(model, measures)
12:  end if
13:  if classResult.isDetermined() then
14:    if mutationStepCount
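The mutation rate selection component can be sketched as follows (an illustration added here; the class names, the intervals and the function signature are assumptions chosen for the example, not values taken from the paper):

```python
import numpy as np

# Illustrative intervals only: one standard-deviation range per change class
SIGMA_INTERVALS = {"low": (0.5, 2.0), "medium": (2.0, 7.0), "high": (7.0, 30.0)}

def mutate(individual, change_class, rng=None):
    """Draw a mutation step size from the interval associated with the
    detected change class and apply Gaussian mutation to the candidate."""
    rng = rng or np.random.default_rng()
    low, high = SIGMA_INTERVALS[change_class]
    sigma = rng.uniform(low, high)
    return individual + rng.normal(0.0, sigma, size=individual.shape)
```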

(table continued from the preceding page; columns are LF, MF, HF, each with severities LS, MS, HS)

Snt versus Cls               S+  S+  S+     S+  S+  S+     S+  S+  S+
RandHM versus RandHM+Snt     S−  S−  S−     S−  S−  S−     S−  S−  S−


Firstly, the Cls method is compared with the Snt method. The Snt method clearly performs better than the Cls method, as seen from the results. However, when the Cls method is combined with Snt, it outperforms both the Cls and the Snt methods alone. On the other hand, the results show that RandHM+Snt performs better than the Snt method; yet Cls+Snt still gives better results. Overall, the experimental observations indicate that the Cls+Snt combination is the best choice for the solver approach.

3.3 The Experiments for Comparison with Similar Methods in Literature The proposed method is compared with other single-point search-based hyper-heuristic algorithms proposed for dynamic environments, since the Cls part of the method behaves like a hyper-heuristic. In one of the previous studies on hyper-heuristics in dynamic environments, Kiraz et al. [12] show the appropriateness of selection hyper-heuristics as solvers for dynamic environments. The study examined hyper-heuristics within a single-point search framework using the MPB. The experimental results of that study demonstrate that selection hyper-heuristics are capable of reacting to and tracking different types of changes well [12]. In Kiraz's study, the Choice Function (CF) heuristic selection method [6] combined with the Improving and Equal (IE) move acceptance method [6, 7] (CF-IE) has the best performance compared to the other approaches.1 In this section, we compare our classification-based method with the CF-IE approach, along with Hypermutation [5] with Improving and Equal acceptance (HM-IE) and a basic single-point search method (NoHM). The experimental settings of CF-IE and HM-IE are taken directly from Kiraz's study [12]. CF-IE uses seven mutation operators with seven different standard deviations {0.5, 2, 7, 15, 20, 25, 30} as low-level heuristics. The HM-IE method applies Gaussian mutation with a default standard deviation of 2 and raises it from 2 to 7 for 70 consecutive steps when a change occurs. The NoHM method uses a fixed standard deviation of 10 in its Gaussian mutation and does not react to changes. Table 3 summarizes the results of the experiments. The experiments show that Cls+Snt is significantly better than the other compared methods for all frequency–severity pairs. CF-IE gives better-than-average results and is more efficient at lower frequency–severity settings. NoHM has the poorest scores in the group.

1 In this study, the settings for the different severity classes are spread more widely than in Kiraz's study in order to capture the behavior of the changing environment. Since the severity settings differ, the experimental results also differ.
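For reference, the mutation step-size policies of the compared methods quoted above can be summarised in a short sketch; it only encodes the settings listed in this paragraph and is not the implementation used in [12].

```python
class HypermutationSchedule:
    """HM-IE style step-size schedule: sigma jumps from 2 to 7 for 70 steps after a change."""
    def __init__(self, base=2.0, hyper=7.0, duration=70):
        self.base, self.hyper, self.duration = base, hyper, duration
        self.steps_left = 0

    def notify_change(self):
        self.steps_left = self.duration

    def sigma(self):
        if self.steps_left > 0:
            self.steps_left -= 1
            return self.hyper
        return self.base


CF_IE_SIGMAS = [0.5, 2, 7, 15, 20, 25, 30]   # CF-IE low-level heuristics (mutation step sizes)
NOHM_SIGMA = 10.0                            # NoHM: fixed step size, never reacts to changes
```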


Table 3  The comparison of ANOVA test results of the proposed method with other approaches in the literature for different frequency and severity combinations

Algorithm                 LF              MF              HF
                          LS  MS  HS      LS  MS  HS      LS  MS  HS
Cls+Snt versus CF-IE      S+  S+  S+      S+  S+  S+      S+  S+  S+
Cls+Snt versus HM-IE      S+  S+  S+      S+  S+  S+      S+  S+  S+
Cls+Snt versus NoHM       S+  S+  S+      S+  S+  S+      S+  S+  S+
CF-IE versus HM-IE        S+  >   S+      S+  >   S−      S−  S−  S−
CF-IE versus NoHM         S+  S+  S+      S+  S+  S+      S+  S+  S−
HM-IE versus NoHM         S+  S+  S+      S+  S+  S+      S+  S+  S+

Table 4  Offline errors of the random-run experiments averaged over 100 runs for different change frequency settings

Algorithm    LF              MF              HF
Cls+Snt      3.69 ± 0.77     4.25 ± 0.97     8.12 ± 1.67
CF-IE        10.70 ± 5.20    14.20 ± 5.53    40.01 ± 8.53
HM-IE        11.40 ± 5.32    14.73 ± 5.60    29.25 ± 6.29
NoHM         18.50 ± 4.88    24.92 ± 5.70    39.90 ± 6.20

The Random-Run Experiments For further analysis, the change adaptability of the compared approaches is tested in an additional experiment. To keep the dynamism as close to real life as possible, a test setting with random changes in the dynamic environment is established: at each change step, a severity setting is randomly chosen from the three predefined classes while the frequency is kept fixed. The results are listed in Table 4. The offline error of all approaches rises as the frequency increases. Looking at the random-run experiment results, it can be deduced that Cls+Snt has the best performance. Since the approach reacts to changes in a dynamic environment immediately, it is capable of following the random sequential changes.
The Experiments for Different Severity Settings Cls+Snt, CF-IE, HM-IE, and NoHM are also run with the MPB severity settings of Kiraz's study for the random-run experiment. Figure 1 shows the corresponding results as box-plots of offline errors for this experiment. Cls+Snt gives results similar to the previous random-run experiment; the change in the severity settings does not have much effect on the Cls+Snt method. The offline errors for LF, MF and HF are, respectively:

– Cls+Snt: 4.00 ± 2.07, 4.78 ± 1.91 and 7.33 ± 2.41
– CF-IE: 10.36 ± 5.49, 11.07 ± 5.96 and 19.32 ± 6.12
– HM-IE: 12.87 ± 6.47, 15.43 ± 6.57 and 23.10 ± 7.36
– NoHM: 19.36 ± 6.75, 23.31 ± 6.72 and 31.44 ± 8.91



Fig. 1 Box-plots of offline errors for the random-run experiment using different severity settings (panels from top to bottom: LF, MF, HF; methods: Cls+Snt, CF-IE, HM-IE, NoHM)

4 Conclusion and Future Work In this paper, a classification-based single-point search algorithm, which makes use of change characterization information to react differently under different change characteristics in dynamic environments, is introduced. The proposed algorithm reacts to changes like the hyper-heuristic approaches previously proposed for dynamic environments. The method uses the representation of change to track the optimum intelligently. First, we examined the components of the proposed method and compared it with similar hyper-heuristic approaches from the literature. The observed results show that the classification-based method performs well in dynamic environments. All approaches are compared under the assumption that the occurrence of a landscape change is known, and thus the detection step is ignored. If change detection is considered as an issue, the proposed method will have a clear advantage thanks to the sentinels, even when the occurrence of changes cannot be easily detected. The classification model plays an essential role in our approach. Its capability is explored for different environment settings in these experiments. A more comprehensive analysis could be conducted to build a more generic model; such a model can be a separate study topic. As future work, the parameter dependency of the proposed algorithm can be investigated more extensively. We are aware that


the process of generating input data for the classifier costs several evaluations, directly proportional to the sentinel count. However, the method turns that into an advantage by using the best-sentinel information. The trade-off between the sentinel count and the accuracy of the classifier can be examined further and optimized. Improvements in this aspect will decrease computational complexity and save time for the approach. Finally, the main idea explored in this study can be applied to population-based search algorithms and thus compared with other state-of-the-art population-based heuristic methods for dynamic environments.

References 1. Branke, J.: Evolutionary Algorithms for Dynamic Optimization Problems: A Survey. AIFB (1999) 2. Branke, J.: Evolutionary Optimization in Dynamic Environments (2001) 3. Branke, J., Saliho˘glu, E., Uyar, S.: ¸ Towards an analysis of dynamic environments. In: Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pp. 1433–1440. ACM (2005) 4. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001) 5. Cobb, H.G.: An Investigation into the Use of Hypermutation as an Adaptive Operator in Genetic Algorithms Having Continuous, Time-Dependent Nonstationary Environments. Technical Report. Naval Research Lab Washington, DC (1990) 6. Cowling, P., Kendall, G., Soubeiga, E.: A hyperheuristic approach to scheduling a sales summit. In: International Conference on the Practice and Theory of Automated Timetabling, pp. 176– 190. Springer (2000) 7. Cowling, P., Kendall, G., Soubeiga, E.: Hyperheuristics: a tool for rapid prototyping in scheduling and optimisation. In: Workshops on Applications of Evolutionary Computation, pp. 1–10. Springer (2002) 8. Cruz, C., González, J.R., Pelta, D.A.: Optimization in dynamic environments: a survey on problems, methods and measures. Soft Comput. 15(7), 1427–1448 (2011) 9. De Jong, K.: Evolving in a changing world. In: International Symposium on Methodologies for Intelligent Systems, pp. 512–519. Springer (1999) 10. Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments—a survey. IEEE Trans. Evol. Comput. 9(3), 303–317 (2005) 11. Karaman, A., Uyar, S., ¸ Eryi˘git, G.: The memory indexing evolutionary algorithm for dynamic environments. In: Workshops on Applications of Evolutionary Computation, pp. 563–573. Springer (2005) 12. Kiraz, B., Etaner-Uyar, A.S., ¸ Özcan, E.: Selection hyper-heuristics in dynamic environments. J. Oper. Res. Soc. 64(12), 1753–1769 (2013) 13. Morrison, R.W.: A new EA for dynamic problems. In: Designing Evolutionary Algorithms for Dynamic Environments, pp. 53–68. Springer (2004) 14. Nguyen, T.T., Yang, S., Branke, J.: Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol. Comput. 6, 1–24 (2012) 15. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al.: Scikit-learn: machine learning in python. J. Mach. Learn. Res. 12(Oct), 2825–2830 (2011) 16. Peng, X., Gao, X., Yang, S.: Environment identification-based memory scheme for estimation of distribution algorithms in dynamic environments. Soft Comput. 15(2), 311–326 (2011) 17. Yang, S.: Explicit memory schemes for evolutionary algorithms in dynamic environments. In: Evolutionary Computation in Dynamic and Uncertain Environments, pp. 3–28. Springer (2007)

Does Lifelong Learning Affect Mobile Robot Evolution? Shanker G. R. Prabhu, Peter J. Kyberd, Wim J. C. Melis, and Jodie C. Wetherall

Abstract Evolutionary Algorithms have been applied in robotics over the last quarter of a century to simultaneously evolve robot body morphology and controller. However, the literature shows that in this area one is still unable to generate robots that perform better than conventional manual designs, even for simple tasks. It is noted that the main hindrance to satisfactory evolution is the poor controller generated as a result of the simultaneous variation of the morphology and controller. As the controller is a result of pure evolution, it is not given a chance to improve before fitness is calculated, which is the equivalent of reproducing immediately after birth in biological evolution. Therefore, to improve co-evolution and to bring artificial robot evolution a step closer to biological evolution, this paper introduces the Reinforced Co-evolution Algorithm (ReCoAl), a hybrid of an Evolutionary and a Reinforcement Learning algorithm. It combines the evolutionary and learning processes found in nature to co-evolve robot morphology and controller. ReCoAl works by allowing a direct policy-gradient-based RL algorithm to improve the controller of an evolved robot to better utilise the available morphological resources before fitness evaluation. ReCoAl is tested by evolving mobile robots to perform navigation and obstacle avoidance. The findings indicate that the controller learning process has both positive and negative effects on the progress of evolution, similar to observations in evolutionary biology. It is also shown how, depending on the effectiveness of the learning algorithm, the evolver generates robots with similar fitness in different generations. Keywords Evolutionary robotics · Co-evolution · Reinforcement learning

S. G. R. Prabhu (B) · W. J. C. Melis · J. C. Wetherall School of Engineering, University of Greenwich, Chatham, Kent M4 4TB, UK e-mail: [email protected] P. J. Kyberd School of Energy and Electronic Engineering, University of Portsmouth, Portsmouth PO1 3DJ, UK © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. K˚udela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_11


1 Introduction In Evolutionary Robotics (ER), the co-evolution process simultaneously evolves a mobile robot's body and controller with an Evolutionary Algorithm (EA). This results in a randomly modified or added body part being fitted with a random set of neurons. Additionally, the Neural Network (NN) weights are modified and/or new neurons are added during the mutation and crossover operations [1]. This modification induces a problem in the controller, as it is not given enough time to improve itself before the next stage of the evolution. The reason is that when the controller improves during variation operations (mutation and recombination), a change in morphology will immediately impede this progress. A second problem is that the newly evolved robots are expected to accomplish the task they are evolved for immediately after they are created. This is in contrast with biological evolution, where the suitability of each candidate/organism is not tested immediately but usually after it matures through a learning process, mainly under the influence of environmental factors. Furthermore, it has been observed that almost 50% of the psychological abilities of human beings are attributable to environmental influences, and the rest are inherited through genetics [2]. If this idea is extended to ER, a co-evolved robot that is evaluated as soon as it is created would only be using its inherited skills and the skills gained as a result of the mutation and crossover operations. This leaves an untapped potential in each of the robots to learn, grow and exhibit a higher performance than demonstrated during the fitness evaluation. To tackle these problems, we hypothesise that the introduction of a learning phase for every evolved robot will not only allow it to better utilise its morphology but also provide sufficient environmental feedback to better handle the task. The learning phase will therefore help fix the problems introduced through the random modifications and also provide time for each robot to better utilise its functional parts through the controller. This is tested through a learning algorithm that trains each robot before evaluation by the fitness function. The new process is shown in Fig. 1.

Fig. 1 ReCoAl and NN architecture overview. The conventional evolution process is interrupted by a learner that intercepts robots before they would have undergone fitness evaluation. After the learning process, the new robot is used for fitness evaluation. The updated robot and fitness value are fed back to the evolver for population update and parent selection, respectively


There are a number of works reported in the past year (e.g. [3]) that attempt to solve the problem of building controllers for co-evolved robots when the morphology is unknown at the beginning of evolution, and this paper can be considered in parallel with them. Unlike typical learning applications in robotics, the design of an algorithm for co-evolved robots is challenging for a number of reasons. In the case of Supervised Learning (SL), the input–output mapping is already known through the training data, and the SL algorithms simply have to find a model that fits this mapping. However, for co-evolved robots, no such training information is available, as the list of inputs and outputs changes randomly, and therefore building a training data set is impractical. Furthermore, Unsupervised Learning (UL) methods are also ruled out by this lack of training data. Here, the only available information is the goal specification, or coordinates, indicating where the autonomous robot should reach. Hence, Reinforcement Learning (RL) is chosen, as it can work with limited background information compared to SL and UL methods by learning through the rewards gained [4]. RL methods are either model-based or model-free. The model-based approach is unsuitable, as the robot design and environmental dynamics are not known in advance. Among model-free approaches, policy search-based methods are preferred over value function-based approaches as they offer the option of integrating expert knowledge into the policy at initialisation, with fewer parameters. A key reason why RL methods excel over other methods is the availability of an exploration phase in the learning process that allows the robot to explore new experiences while performing the task. Even though these principles can also be applied to evolved robots, for the demonstration of a proof-of-concept, policy gradient methods from Direct Policy Search (DS) are chosen, as they can directly learn the policy without going through the lengthy process of policy exploration, evaluation and update. The chosen version is an online direct gradient-based reinforcement learning algorithm called Online Partially Observable Markov Decision Process (OLPOMDP) [5], a lightweight algorithm that offers the option of direct policy updates at every time-step and is guaranteed to converge. In this particular case, because the evolved robot always has a controller (except in the initial generations), the DS method has the advantage of passing on the learnt policy from one generation to the next.

2 Related Work Even though combining EA and RL methods is not new, with several EA-RL approaches already published, the use of RL for enhancing the effectiveness of EAs has mainly been limited to constrained applications such as parameter control. As per [6], the algorithms can be classed into three types: RL in Evolutionary Computation (EC), EC in RL, and general-purpose optimisers. The RL-in-EC and EC-in-RL categories respectively cover algorithms that use RL to enhance the EC process and vice versa. On the other side, general-purpose optimisers are neither


purely EC nor RL but combine some concepts from both, e.g. a predator–prey model that uses RL for survival and EA for evolution [7]. EC in RL algorithms: In multi-agent systems, RL agents use an EA to exchange and acquire the best skills learnt from each other [8]. Further examples are [9], where a Genetic Algorithm (GA) is used to perform a policy search, Genetic Programming (GP) used for RL state-feature discovery [10], evolutionary function approximation applied in RL [11], and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) used for policy search [12]. There are several papers that report the use of RL principles to improve the evolution process. Buzdalova et al. proposed a combination of an EA with Multi-Objective RL (EA-MORL) [13]. Here, the fitness function for the evolver is controlled by the RL algorithm, effectively resulting in multi-objective selection throughout the evolution process. Proper parameter tuning is a well-recognised hurdle when using EAs. In an attempt to fix this issue, Karafotias et al. use RL to modify EA parameters during evolution [14]. The GA-RL algorithm in [15] evolves states, with actions represented as variation operations. The selected action depends on the state–action probability, and the rewards used to update this probability depend on the relative parent–offspring fitness. A similar method is presented in [16], where a population of RL agents participates in a GA-driven evolution. The most recent hybrid algorithm is Evolutionary Reinforcement Learning (ERL), developed by Khadka et al. [17]. A population of actors is involved in the fitness evaluation in the environment, and the fitness allocation depends on the rewards received at every time-step. While the typical EA progresses, an RL actor simultaneously observes the experiences at the fitness-evaluation simulator and updates itself through an actor-critic RL algorithm. Then, at random times, the RL actor is injected into the population to pass on the newly learnt experiences to subsequent generations.

3 The Algorithm An overview of the Reinforced Co-evolution Algorithm is shown in Fig. 1. Here each robot is sent to a virtual 3D simulator for learning before its fitness is evaluated. The goal of the learning process is to modify the NN controller while keeping the morphology fixed. The pseudo code in Algorithm 1 (Table 1) shows the learning and evaluation process for each of the robots. The OLPOMDP algorithm is integrated into Algorithm 1. Here, z_t is referred to as the eligibility trace, T is the maximum time allocated for learning, t is the time-step, β (beta) is the discount factor that increases or decreases the weight of historic information, and α (alpha) is the learning rate. At the beginning of each learning episode, the simulator is reset, and the robot is initialised with the most recent controller available: the controller sent by the evolver at the first instance, and the NN modified at the end of the last episode in subsequent instances. Learning starts with the observation of the state y_t (as in Algorithm 1.9), i.e. the sensor readings from the simulator, which is fed into the NN policy that generates a control


Table 1 Algorithm 1: Learn and evaluate

action u_t for motor control. The learner then allocates a scenario-dependent reward based on the state. The eligibility trace is then computed from the policy gradient, β, and the old eligibility trace (Algorithm 1.12). The policy is finally updated and the NN weights are adjusted through θ_{t+1} = θ_t + α r(i_{t+1}) z_{t+1}. To compute the error in Algorithm 1.12, a standard backpropagation mechanism is applied. Depending on the kind of neuron (simple, sigmoid or oscillator), the error signal is calculated and backpropagated into the network to obtain a gradient. The servo motor control output signal is compared with the expected control signal during the error calculation. With reference to a possible FNN as seen in Fig. 1, the error at the output layer can be represented as

E = \frac{1}{2} \sum_{n} (t_n - y_n)^2    (1)

where the sum runs over the n motor neurons and t_n is the target signal. With k the index of the output neurons, j the index of the hidden neurons, i the index of the input


neurons, x_k the activation signal input at the k-th neuron, y_k the output signal of the k-th neuron, w_{jk} the weight connecting the j-th hidden neuron to the k-th output neuron, and g_k or g_j the gain of the corresponding output or hidden neuron, respectively (in Fig. 1), the error gradients at the output layer and the hidden layer can be written respectively as

\frac{\partial E}{\partial w_{jk}} = -g_k \, y_k \, y_j \, (t_k - y_k)(1 - y_k)    (2)

\frac{\partial E}{\partial w_{ij}} = g_j \, y_i \, y_j \, (y_j - 1) \sum_{k} w_{jk} \, g_k \, y_k \, (t_k - y_k)(1 - y_k)    (3)

These gradients and the rewards are responsible for the step-by-step update of the NN weights. The (t_k − y_k) term is the error calculated at each output neuron. Due to the oscillatory gait of the evolved robots, the servo motor is expected to rotate to 90° or −90°. So, the target signal is set as

t_k = \begin{cases} 0, & y_k > 0.5 \\ 1, & y_k \le 0.5 \end{cases}    (4)

This is to induce an oscillatory motion at the output, as there are no wheels available for evolution. The gradient calculation for weights towards oscillatory neurons is neglected, as the oscillatory neuron weights are set to 0. So, instead of passing on the input signal, the oscillatory neuron adds a sinusoidal signal from the hidden layer to the output layer. The steps in Algorithm 1.8 to 1.14 are repeated until 10 s (the episode length) have elapsed, after which a new episode is initiated with the updated neural network. Each episode time is set to a low value compared to the actual fitness evaluation (100 s), as the expectation here is just to let the NN learn to use the available resources, not to solve the task. The process is repeated for a predefined number of episodes, after which the robot is tested with the fitness function. The results and the robot are then fed back to the EA for updating the population and parent selection. Bootstrapping in the early generations of evolution is a well-known problem in ER; here it means that robots are evolved with zero weights in the NN. When this happens, the learning process cannot continue, as the gradient would always be zero. In such cases, the learning process is skipped to save computational resources and time.
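A compact sketch of one learning step, assuming a single hidden layer and a NumPy representation of the weights, is given below; it simply transcribes Eqs. (2)–(4) and the update θ_{t+1} = θ_t + α r z_{t+1}. The array shapes and the interface are illustrative assumptions, not RoboGen's actual data structures.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def olpomdp_step(w_in, w_out, g_hid, g_out, z_in, z_out, state, reward,
                 alpha=1e-4, beta=0.95):
    """One learning step: forward pass, Eq. (4) targets, Eq. (2)/(3) gradients,
    eligibility-trace update, and the weight update theta <- theta + alpha * r * z."""
    y_hid = sigmoid(g_hid * (state @ w_in))             # hidden-layer outputs
    y_out = sigmoid(g_out * (y_hid @ w_out))            # motor control signals
    targets = np.where(y_out > 0.5, 0.0, 1.0)           # Eq. (4): force an oscillatory target
    # Eq. (2): dE/dw_jk at the output layer
    delta_out = g_out * y_out * (targets - y_out) * (1.0 - y_out)
    grad_out = -np.outer(y_hid, delta_out)
    # Eq. (3): dE/dw_ij at the hidden layer
    grad_in = np.outer(state, g_hid * y_hid * (y_hid - 1.0) * (w_out @ delta_out))
    # eligibility traces and policy update
    z_in, z_out = beta * z_in + grad_in, beta * z_out + grad_out
    w_in, w_out = w_in + alpha * reward * z_in, w_out + alpha * reward * z_out
    return w_in, w_out, z_in, z_out, y_out

# Example shapes: state (n_sensors,), w_in (n_sensors, n_hidden), w_out (n_hidden, n_motors);
# z_in and z_out start as zero arrays with the same shapes as w_in and w_out.
```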

3.1 Reward Functions The experiments are run with scenario-specific rewards. The scenarios designed are primarily for investigating navigation with obstacle avoidance. Two different types of reward allocation methods are proposed for each of the scenarios. Both of them


have exhibited success in RL applications in robotics. In the first, the reward r, allocated at every time-step, is given as

r = -\sqrt{(x_g - x_t)^2 + (y_g - y_t)^2}    (5)

where (x_g, y_g) are the goal coordinates and (x_t, y_t) are the real-time coordinates of the robot. Therefore, the reward approaches zero as the robot gets closer to the goal. Sometimes the robots are generated with distance sensors, and [18] shows how these sensors have not been employed properly in evolved robots. So, in navigation tasks, the direction of movement is forced to be in the direction of a selected obstacle sensor or sensors. When the robot moves in the direction of a sensor, the sensor reading becomes meaningful data as it is combined with backpropagation, with the aim of helping the controller make better decisions. This is accomplished by using a reward function that allocates rewards according to η, the steering angle in degrees needed to turn the robot in the direction its distance sensor is facing. The new reward is

r = \begin{cases} 0, & |\eta| < 10^{\circ} \\ -1, & 10^{\circ} < |\eta| \le 45^{\circ} \\ -10, & |\eta| > 45^{\circ} \end{cases}    (6)

When there is more than one obstacle sensor, the preference is given to the sensor closest to the direction of movement. Sensors in the frontal plane of the robot are also preferred as the movement of the robot will always be in the plane perpendicular to the frontal plane of the robot.
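Both reward schemes follow directly from Eqs. (5) and (6); in the sketch below the goal and robot coordinates and the steering angle η are assumed to be supplied by the simulator.

```python
import math

def distance_reward(goal, position):
    """Eq. (5): negative Euclidean distance to the goal; approaches 0 near the goal."""
    gx, gy = goal
    x, y = position
    return -math.sqrt((gx - x) ** 2 + (gy - y) ** 2)

def steering_reward(eta_degrees):
    """Eq. (6): penalise deviation between the heading and the chosen distance sensor."""
    a = abs(eta_degrees)
    if a < 10.0:
        return 0.0
    if a <= 45.0:
        return -1.0
    return -10.0
```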

4 Experiments and Results The algorithm is built into RoboGen [19] by making the changes necessary to add an additional learning phase into the simulator engine. A new simulator first initialises the environment with the same parameters as the fitness-evaluation simulator engine in [18]. RoboGen then evolves mobile robots from a fixed set of parts (the evolution parameters remain the same as in [18]). After the learning process is completed, the new controller is passed on to the fitness evaluator, where the evolved robot is updated with the new controller and the evaluation takes place. Later, instead of just sending back the fitness value to the evolver for parent selection, the updated NN weights are also passed back through the communication channel. These new weights are then added to the corresponding robot in the evolving population. At the start of all of the experiments, the robot is initialised at the centre of the arena made up of obstacles, as shown in Fig. 3, and the goal is set as A at (1.4, −1.4). The learning parameters are set to β = 0.95 and α ∈ {1, 0.1, 0.01, 0.001, 0.0001}.


All experiments are performed on a Linux-based high-performance computing system with 50 nodes and 20 cores/node. In the first set of experiments, the robots need to reach A with 10 s of learning time in each of the 10,000 episodes. For each learning rate, five experiments are performed with the seed of the evolver's random number generator varied from 1 to 5. Figure 2a shows the learner's performance in the evolution process when the seed is set to 1. It shows that the learning rate α = 0.0001 obtains the maximum positive rewards (red curve) and that the same experiment is the fastest to reach the goal (red curves in Fig. 2b). It must also be noted that setting the learning rate to 1 made the system unresponsive to rewards and thus unable to make any changes to the NN. Consequently, future rewards are also affected, indicated by the magenta straight line in Fig. 2a. The magenta line therefore demonstrates that a higher learning rate means a slower response of the learner, which eventually results in a slower speed of evolution. An interesting observation is that the rewards obtained tend to gradually create unstable behaviour as the robots progress through different episodes (green curves, Fig. 2a). This behaviour also results in a very slow progress of evolution, shown by the green curves (α = 0.001) in Fig. 2b.

Fig. 2 a Averaged rewards versus episodes (logarithmic scale). Multiple learning rates α are used as shown in the graph, β = 0.95, a Euclidean distance-based reward function is applied, each episode is 10 s, robots are trying to reach A, and the seed is set to 1. The rewards shown on the y-axis are the average of the rewards obtained in every episode by the best-fit robot in every generation. In this case, 200 rewards are averaged. Rewards need to move towards 0 to demonstrate positive learning. Except when α = 1 (straight line in magenta), experiments with all other α values show some level of positive learning. α = 0.0001 appears to converge faster in all of the generations. The oscillations indicate that either most of the robots' rewards in each episode oscillated, or there is a large deviation in the rewards for multiple robots from each generation in each episode. b Progress of evolution for the experiment in Fig. 2a averaged over 5 different runs. The performance shown is comparable to the corresponding curves in Fig. 2a


Fig. 3 Robot trajectories before and after learning. The corresponding best robots over multiple generations from the experiment in Fig. 2a (α = 0.0001) are tested with NN weights before learning (red lines), and the trajectory of the same robot after learning is shown in blue. Robots need to move from the centre to A at (1.4, −1.4) (cross). Red dots indicate that the robots are stationary. The progression shows how robots first learnt to move and then slowly turned towards the goal

The trajectories of the best robots from multiple generations, before and after the learning process, are shown in Fig. 3 together with the corresponding robots. The red lines show how each robot moves before learning, and the blue lines show the route taken after learning. The favourable effect of the learner in making the robot move is apparent in all of the trajectories. They also demonstrate how the robot is able to make directional changes to move towards the goal. Figure 4 shows how the learner is able to teach the robot to use the motor by generating the alternating 0/1 signals necessary for an oscillatory movement. The shown motor control signal corresponds to the robot in Generation 10 from Fig. 3. Figure 4 shows the progression of the control signal, which before learning (red line) fluctuates from 0.6 to 1 to 0.1 and finally settles at 1 after 4 time-steps, remaining there throughout the simulation. Fig. 4 The effect of ReCoAl on an individual robot. The motor input signal is plotted over time for the best robot in generation 10 shown in Fig. 3. The red lines indicate that the motor signal does not change after the initial 5 time-steps


After 4 time-steps, the learnt controller is able to precisely switch between 0 and 1 at the output. The robot morphologies from this experiment indicate that the evolver only modifies the robot body 4 times in the entire evolution run. The small size of the robots further emphasises the effect of the learner in generating the trajectories, as the algorithm is able to modify the controller to utilise the morphologies of robots with just three parts. This is contrary to the results generated in [18], where better-performing robots are generally longer. The magenta lines in Fig. 2 seem to imply that higher learning rates reduce the speed of the evolver. Therefore, to find out whether this is indeed due to the learner, the best robots in every generation are analysed for the experiment in Fig. 2. It is found that the best robot only changes once over the entire epoch, despite the evolver generating ten new robot children in every generation. This demonstrates that none of the evolved children in all of the subsequent generations are able to match the performance of their ancestor from generation 4, which effectively questions the learner's ability to adapt. In the conducted experiments, it is observed that increasing the episode length from 10 to 100 s makes the evolver that uses α = 0.0001 drastically reduce its speed of evolution. To explore this further, multiple experiments are conducted. Episode lengths are set at 10 s, 25 s, 50 s and 100 s with learning rates of 1, 0.01 and 0.0001, while each robot goes through 3000 episodes. The progress of evolution for each of the combinations is shown in Fig. 5. The results indicate that, for faster performance of the evolver, different learning rates need to be chosen depending on the episode length. For optimal performance with a 10 s episode length, the learning rate should be 0.0001, and for any episode length above 25 s, a learning rate of 1 is preferred.

Fig. 5 Effect of episode lengths on the best robot fitness and the average fitness of the population. All parameters remain the same as for Fig. 3. The number of episodes is set to 3000, and robots are evolved to reach A. An increased learning rate shows better performance at longer episode lengths. An episode length of 50 s shows the best performance for α = 1. However, for shorter episode lengths, lower learning rates yield faster performance of the evolver. An episode length of 25 s shows that the average fitness of the population behaves best with α = 1


5 Discussion The new algorithm is designed specifically to improve the co-evolution process of mobile robots, and the observations from its testing reported in the previous section reveal several findings. There is strong evidence to suggest that the evolution process is definitely affected by the learning process, with both positive and negative effects. On the positive side, the trajectories of the evolved robots using ReCoAl exhibit sufficient changes in the correct direction due to the feedback from the reward function. This is evident from the robot trajectories in Fig. 3. For instance, the best robot in generation 50 from that experiment progresses from an immobile state before learning to being able to move efficiently. It first moves towards the right of the arena, then makes an almost 45° turn to face the goal and moves along a straight line to the goal. The robot thereby gets the maximum cumulative reward, as a distance-based reward function is used. The effect of the learner in forcing the proper use of actuators is also clear, as the triangular waveform in Fig. 4 exactly replicates the target signal set in Eq. 4 at every time-step from the beginning, with a control signal that remains constant after about 5 time-steps. The combination therefore demonstrates that both the error signal calculated and the reward signal generated are sufficient to first create an oscillatory waveform to move. The learner is then able to make the necessary changes to the NN weights to trigger the direction change for moving towards the goal. Among the negative effects of the learner, the improper selection of evolution parameters can drastically reduce the speed of evolution. There are several examples demonstrating this; when the learning rate is set to 1, the robot fitness hardly changes. The literature shows how high learning rates negatively affect the convergence of learning algorithms. However, this slowing effect on the speed of evolution is new, as it is expected that the variation operations during evolution can always make random changes that would lead to robots with better fitness. An important factor that affects the outcome of the learning process is the evolved morphology and its corresponding NN parameters. There are instances when the robots do not have enough morphological capabilities to reach the goal. For instance, due to the size and number of actuators, a robot may not be able to reach the goal even at the maximum possible speed. There are also cases when robots are evolved with motors facing upwards or in a plane that cannot generate a motion that interacts with the ground, thereby making it impossible for the robot to start moving. For the evolved controller, the learner only manipulates the weights of the NN. The gain and bias of each neuron also play a role in obtaining a favourable outcome, but these are modified by the evolver and not the learner. Even when the learner tries to counter their effects, large values of gain and bias make compensation extremely difficult or even impossible within the given timeframe and learning parameters. Experiments with positive results demonstrate how ReCoAl favours co-evolution. The gradual increase of the undulations (green curves in Fig. 2a) suggests an unstable


behaviour. Though the exact cause of this is unknown, it is possible that the combination of the error function (which has an oscillatory nature) with an ever-increasing eligibility trace (z_t) is responsible. When each learning episode is set to 10 s, there are robots in the population that perform well in the first 10 s by moving towards the goal. However, during the actual fitness evaluation they move away from the goal, which results in a poor calculated fitness. To avoid this problem, longer episode lengths are applied. Nevertheless, they do not yield satisfactory results due to improper learning parameter selection. For shorter episode lengths, lower learning rates are sufficient to generate effective robots. The experiments suggest that for optimal evolution rates, the episode lengths and learning rates are related. For optimal performance when learning for 3000 episodes, a 10 s episode length needs a learning rate of 0.01, and any episode length above 25 s performs well with a learning rate of 1. The effect of rewards on the learning process is also visible throughout the experiments. In cases where reward functions make a positive impact on the results, their effect is more evident in the initial stages of each episode, as indicated by the initial directional changes displayed by successful robots. However, major directional changes do not occur later, suggesting that rewards are only favoured early in an episode. Though the (0, −1, −10) rewards have shown success in several RL applications in robotics, in these experiments they do not make noticeable changes to the weights. As a result, the experiments designed for sensor use do not produce meaningful results, so those results are not reported here. There are also issues with the use of this function, as the reward is allocated for direction and not for movement itself. This means robots can reap sufficient rewards by just facing the right direction and not moving. This kind of problem is referred to as reward hacking. The learnt knowledge in a controller is passed down the generations. This is apparent when, in several experiments, newly created robots begin performing move-and-turn manoeuvres before the learning process begins. This highlights the appropriateness of choosing a DS-based RL algorithm. However, the problems of reward design and parameter selection remain as challenges. While it is observed that an error generator that induces an oscillatory movement does create robots that follow precise instructions, this is certainly not enough when more than one actuator is present in the robot. Depending on the position of the motors, the oscillatory motion of robots creates cancelling effects or even makes direction changes difficult. Nevertheless, this effect is anticipated, as the robot morphology is not known beforehand and a proper error function design is not straightforward.

6 Conclusion and Future Work The paper is built on the hypothesis that learning could aid the evolution process. To demonstrate this, ReCoAl is designed to improve the robot co-evolution process, with an emphasis on the controller generated during the process. In multiple experiments, the system is able to evolve robots even with very few initial


parts, thereby validating the objective of evolving better controllers that can better utilise the morphology for locomotion. The results not only indicate that learning can aid the evolution of robots, but also that it can have detrimental effects, depending on the effectiveness of the learning algorithm. This finding is consistent with Plomin's observation in humans that our developed abilities depend as much on environmental factors during one's lifetime as on genetic factors [2]. Similarly, the environmental factors and the learning algorithm, along with the inherited skills, are crucial in generating a robot capable of reaching the goal. Ultimately, the results show positive outcomes, but the full use of the algorithm is not possible due to the limitations outlined in the previous section. ReCoAl has shown an ability to create a robot from a small number of parts that can reach a goal in a set time, which demonstrates its potential for improvement. Furthermore, ReCoAl is built as a module that runs alongside RoboGen, which makes it easier to test new learning algorithms for co-evolution. There are several areas in this paper that can be focussed on for improving the current state of co-evolution of mobile robots. Even though attempts were made to develop the evolver to use an obstacle sensor when available, that functionality is still virtually non-existent. This stresses one of the main challenges still faced by the research community. However, in this particular case, the problem can also be due to poor reward design, which is another major issue in Reinforcement Learning. Bearing in mind how the Euclidean reward worked well in the initial phase of each episode, a non-linear reward system could be designed that increases the reward over each episode. ReCoAl can be sped up by adding checks to ignore morphologies unsuitable for learning and by performing selective learning based on analysing the ancestor data of the offspring. The error function design for the learner is another area that can be improved by examining the morphology.

References 1. Prabhu, S.G. R., Seals, R.C., Kyberd, P.J., Wetherall, J.C.: A survey on evolutionary-aided design in robotics. Robotica 36, 1804–1821 (2018) 2. Plomin, R.: Blueprint: how DNA makes us who we are. The MIT Press 3. Lan, G., Jelisavcic, M., Roijers, D.M., Haasdijk, E., Eiben, A.E.: Directed Locomotion for Modular Robots with Evolvable Morphologies. In: Parallel Problem Solving from Nature— PPSN XV. pp. 476–487. Springer, Cham 4. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. The MIT Press 5. Baxter, J., Bartlett, P.L., Weaver, L.: Experiments with infinite-horizon, policy-gradient estimation. J. Artif. Intell. Res. 15, 351–381 (2001) 6. Drugan, M.M.: Reinforcement learning versus evolutionary computation: a survey on hybrid algorithms. Swarm Evolut. Comput. 44, 228–246 (2019) 7. Hillis, W.D.: Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D 42, 228–234 (1990) 8. Beigi, A., Mozayani, N.: A simple interaction model for learner agents: an evolutionary approach. IFS 30, 2713–2726 (2016) 9. Moriarty, D.E., Schultz, A.C., Grefenstette, J.J.: Evolutionary algorithms for reinforcement learning. J. Artif. Intell. Res. 11, 241–276 (1999)


10. Girgin, S., Preux, P.: Feature Discovery in Reinforcement Learning Using Genetic Programming. In: Genetic Programming, pp. 218–229. Springer, Berlin (2008). 11. Whiteson, S., Stone, P.: Evolutionary function approximation for reinforcement learning. J. Mach. Learn. Res. 7, 877–917 (2006) 12. Stulp, F., Sigaud, O.: Path integral policy improvement with covariance matrix adaptation. In: International conference on machine learning, pp. 1547–1554 (2012). 13. Buzdalova, A., Matveeva, A., Korneev, G.: Selection of auxiliary objectives with multiobjective reinforcement learning. In: Companion Publication of the Annual Conference on Genetic and Evolutionary Computation, pp. 1177–1180. ACM Press (2015) 14. Karafotias, G., Eiben, A.E., Hoogendoorn, M.: Generic parameter control with reinforcement learning. In: Conference on Genetic and Evolutionary Computation, pp. 1319–1326. ACM Press (2014) 15. Pettinger, J.E., Everson, R.: Controlling genetic algorithms with reinforcement learning. In: Conference on Genetic and Evolutionary Computation, pp. 692–692. ACM Press (2002) 16. Miagkikh, V.V., Punch, W.F., III: An approach to solving combinatorial optimization problems using a population of reinforcement learning agents. In: Conference on Genetic and Evolutionary Computation, pp. 1358–1365. ACM Press (1999) 17. Khadka, S., Tumer, K.: Evolution-guided policy gradient in reinforcement learning. In: Conference on Neural Information Processing Systems, pp. 1196–1208 (2018) 18. Prabhu, S.G.R., Kyberd, P., Wetherall, J.: Investigating an A-star Algorithm-based Fitness Function for Mobile Robot Evolution. In: International Conference on System Theory, Control and Computing, pp. 771–776. IEEE (2018) 19. Auerbach, J., et al.: RoboGen: robot generation through artificial evolution. In: International Conference on the Synthesis and Simulation of Living Systems, pp. 136–137. The MIT Press (2014)

Cover Time on a Square Lattice by Two Colored Random Walkers Chun Yin Yip and Kwok Yip Szeto

Abstract We treat the minimization of cover time on a network by M random walkers as a problem of a multi-agent system, where the M agents have local interactions. We introduce a model of local repulsion between the walkers and visited sites in order to minimize the waste of steps in revisiting sites, and thereby also the cover time. We particularly perform numerical simulations for the case of two colored random walkers (M = 2), namely Ray and Ben, on a square lattice. The unvisited nodes are colored white, while the nodes first visited by Ray/Ben are colored red/blue. The cover time is the time when there are no more white nodes. The interaction between a walker and a colored site is a repulsive nearest-neighbor interaction, so we can model peer-avoidance (red-blue or blue-red) and self-avoidance (red-red or blue-blue) for the random walk. We investigate the proper combination of the two walkers' strategies that minimizes the time to cover a square lattice. A strategy is represented with a binary sequence, in which 0 stands for peer-avoidance and 1 for self-avoidance. We find that if the absolute difference D between the number of 1's in Ray's and Ben's strategies is zero, or if Ray or Ben is neutral, i.e. half peer-avoiding and half self-avoiding, the sequence combination is likely to be good. In general, a sequence combination's cover time is positively correlated with D. Keywords Random walk · Cover time · Square lattice · Boltzmann statistics

C. Y. Yip (B) · K. Y. Szeto
Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
e-mail: [email protected]
K. Y. Szeto
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. K˚udela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_12

1 Introduction The advancements of network science have made huge contributions to studies in various fields such as social media [1, 2], ecology [3], and epidemic spreading [4]. In general, it is convenient to describe a network with a graph, in which objects


in the network are represented by nodes, while relationships between the objects are represented by edges [5]. A popular research area in network science concerns the time to cover a network, where "cover" means having visited every node in the network [6, 7]. For example, in the classical problem of the travelling salesman, a salesman wishes to visit all sites of a city, i.e. to cover the city, within the shortest time. Occasionally, instead of minimization, one may wish to maximize the time to cover a network so that unwanted information flow is restricted. A sensible example would be a group of doctors trying to stop an epidemic spreading in their city. From time to time, multiple agents may be hired, at the expense of salary, to enhance the speed of covering a network [8]. However, the use of multiple agents raises a question: how can the agents be led to cover distinct parts of a network so that no resources are wasted? It would be meaningless to hire multiple agents if an agent discovers no new nodes but keeps visiting ones that other agents have already visited. While a possible answer to the question is to allow the agents to have real-time communication, which is considerably costly, another solution is to allow agents to make certain marks on the nodes that they visit so that agents nearby are informed to avoid the marked nodes [9]. This paper follows Guo et al.'s idea [10] and introduces a model with colored random walkers. Initially, we color a network's unvisited nodes white. If the first walker who visits a white node is, for example, red in color, then the white node's color is changed to red permanently, even if a walker of another color visits the node later. With this configuration, when a random walker at a node decides his next destination, he preferentially visits white neighboring nodes and avoids colored nodes so as to minimize his effort to cover the network. If all neighbors of a walker are already colored, he may choose to go to a neighbor with his color (peer-avoiding) or one with a different color (self-avoiding) according to strategies predefined in his "intelligence sequence". This paper particularly studies a model with two colored random walkers and investigates their cover time on a square lattice, whose simplicity and homogeneity allow us to temporarily ignore each node's individual topological properties [11–13] and to focus on the model itself. Generalization to multi-agent systems with more than two colors can be made in a similar manner, though we reserve this for another paper. In Sect. 2, we will describe the configuration of the two-colored-random-walker model (2CRW) in more detail using Boltzmann statistics. In Sect. 3, we will discuss a measure of the fitness of intelligence sequence combinations on a square lattice. Then we will compare different combinations for the two walkers in terms of cover time in Sect. 4 and discuss some generalizations of this model for practical use in Sect. 5, which is followed by a conclusion in Sect. 6.


2 The Two-Colored-Random-Walker Model (2CRW) Given a network with N nodes, 2CRW first assigns all nodes to be white. A red walker, say Ray, and a blue walker, say Ben, are then created with some k-bit predefined intelligence sequences R and B and put at the same initial node, which can be arbitrarily chosen for a homogeneous network like a square lattice. While the initial node could reasonably be colored either red or blue, this paper's version of 2CRW intentionally leaves the initial node white so that Ray's and Ben's intelligence will be exploited more.

2.1 Random Walking and Coloring Within each time step, Ray and Ben take turns to move and always visit a white neighboring node first, if one exists. (In other words, the time step increases by 1 after both Ray and Ben have moved. Note that their movements are not simultaneous, so that Ben will not go to a white node that Ray has just decided to visit.) The white nodes are consequently permanently colored red or blue depending on whether their first visitor is Ray or Ben. If there are no white nodes around them, Ray and Ben will use their intelligence: if it is the m-th time for Ray/Ben to use his intelligence, he will use the strategy stored at the [(m − 1) mod k + 1]-th bit of R/B. Each bit of R/B stores one of two strategies: 0 (peer-avoiding) or 1 (self-avoiding). Here we parameterize a peer-avoiding walker's probabilities to visit a neighboring node with repulsion coefficients r_1 and r_2, depending on whether the node's color is the same as or different from the walker's:

P_0(\text{same color}) = \frac{1}{Z} e^{-r_1}, \quad P_0(\text{different color}) = \frac{1}{Z} e^{-r_2},    (1)

where r_1 < r_2. On the other hand, for a self-avoiding walker,

P_1(\text{same color}) = \frac{1}{Z} e^{-r_2}, \quad P_1(\text{different color}) = \frac{1}{Z} e^{-r_1}.    (2)

The probabilities are properly normalized by a partition function Z so that the walker's probabilities to move to any of the neighboring nodes add up to 1:

Z = e^{-r_1} + e^{-r_2}.    (3)


Fig. 1 A flowchart of 2CRW

Ray and Ben will each choose their probability of avoiding a colored site, P_i for i = 0 or 1, as defined in Eqs. (1) and (2). Thus, for example, an intelligence sequence coded as (100110) means that a walker will use the sequence of avoidance probabilities (P_1, P_0, P_0, P_1, P_1, P_0) for the next six colored sites that he is going to encounter. The two walkers will keep walking until they have covered the network; in other words, the process stops when there are no more white nodes in the network. We define the "cover time" of a particular combination of strategies (R, B) as the number of time steps that Ray and Ben have spent to cover the network. We summarize our algorithm for simulating the two colored random walkers in the flowchart shown in Fig. 1. Since we have two walkers, we have two strategy sequences, R for Ray and B for Ben. They may use different intelligence sequences, i.e. R ≠ B, or the same one, i.e. R = B. We will investigate which intelligence sequence combination gives the minimum cover time on a square lattice.
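A compact simulation sketch of the model is given below; the lattice representation and helper names are illustrative, while the move probabilities follow Eqs. (1)–(3) with the peer-avoiding and self-avoiding strategies described above.

```python
import math
import random

R1, R2 = 1.0, 2.0   # repulsion coefficients used in the experiments

def choose_colored_neighbour(color, strategy_bit, neighbours, node_color):
    """Apply Eqs. (1)-(3) when every neighbouring node is already colored."""
    weights = []
    for n in neighbours:
        same = (node_color[n] == color)
        if strategy_bit == 0:                      # peer-avoiding: own color is less repulsive
            r = R1 if same else R2
        else:                                      # self-avoiding: own color is more repulsive
            r = R2 if same else R1
        weights.append(math.exp(-r))               # normalised over the neighbours (role of Z)
    return random.choices(neighbours, weights=weights)[0]

def cover_time(L, R, B, start=(0, 0)):
    """Simulate Ray (red, sequence R) and Ben (blue, sequence B) on an L x L lattice."""
    node_color = {(i, j): None for i in range(L) for j in range(L)}   # None means white
    pos = {"red": start, "blue": start}
    seq, used = {"red": R, "blue": B}, {"red": 0, "blue": 0}
    t = 0
    while any(c is None for c in node_color.values()):
        for color in ("red", "blue"):                                 # walkers move in turn
            x, y = pos[color]
            nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < L and 0 <= y + dy < L]
            white = [n for n in nbrs if node_color[n] is None]
            if white:
                nxt = random.choice(white)                            # always prefer white nodes
            else:
                bit = int(seq[color][used[color] % len(seq[color])])  # next strategy bit
                used[color] += 1
                nxt = choose_colored_neighbour(color, bit, nbrs, node_color)
            if node_color[nxt] is None:
                node_color[nxt] = color                               # permanent coloring
            pos[color] = nxt
        t += 1
    return t
```

For example, cover_time(10, "00110", "01010") simulates one run of the k = 5 combination ranked first for n = 1000 in Table 1.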

3 Fitness of Intelligence Sequence Combinations on a Square Lattice We measure the two colored walkers' average cover time T on an L × L square lattice, where L = 10 and hence the number of nodes N = L² = 100. In the experiments, r_1 = 1 and r_2 = 2, while R and B are both k = 5 bits long, so there are in total 2^k × 2^k = 2^{2k} combinations of intelligence sequences. In other words, in our numerical simulation of 2CRW, the solution space of strategy combinations for k = 5 is 32 × 32 = 1024.


Table 1  The five fittest intelligence sequence combinations and their T obtained for various n

       n = 10                          n = 100                         n = 1000
Rank   (R, B)           T ± 1SD        (R, B)           T ± 1SD        (R, B)           T ± 1SD
1      (11100, 00101)   86.0 ± 21.3    (01001, 00100)   106.3 ± 30.8   (00110, 01010)   115.8 ± 36.1
2      (00110, 00000)   87.8 ± 15.4    (10001, 00100)   109.1 ± 29.9   (11100, 01110)   116.2 ± 36.5
3      (00011, 01011)   89.8 ± 19.7    (11100, 11000)   109.1 ± 32.3   (00110, 10001)   116.3 ± 39.5
4      (00111, 10100)   90.7 ± 23.6    (10101, 01100)   109.8 ± 30.1   (11100, 11100)   116.5 ± 38.0
5      (11111, 11110)   90.9 ± 26.8    (01101, 01101)   109.8 ± 31.1   (01110, 01100)   116.5 ± 37.8

For each combination, we recorded its average cover time T after averaging over n measurements. Table 1 summarizes the five fittest intelligence sequence combinations, which yielded the shortest T's, obtained for n = 10, 100, and 1000. Note that the figures were rounded to one decimal place. We observe that for a larger n, T (and its SD) was in general longer. This is because the probability of encountering an extreme case with a long T increases when more measurements are performed. In these extreme cases, we find that both walkers spend much time looking for the last few white nodes. As a comparison, the average cover times T of a model with two colorless random walkers (2RW) and of a model with one colored random walker (1CRW) were also measured for n = 1000 (see Table 2). In 2RW, both random walkers are colorless, so they do not cooperate at all but randomly move to any neighboring node at each time step. In 1CRW, the random walker definitely moves to a white neighboring node unless there are no white nodes nearby. As shown, 2CRW clearly outperformed 2RW, in which the walkers had no cooperation. To compare 2CRW and 1CRW, one may divide 1CRW's T by 2 to compensate for its smaller number of walkers and obtain an effective cover time T_eff = 123.0 ± 43.9. This effective time would be ranked 689th if it were compared with those of the 1024 sequence combinations of 2CRW. In other words, 1CRW performed worse than more than half of the combinations. Therefore, 1CRW may be regarded as a version of 2CRW that uses a worse-than-average sequence combination.

Table 2  Comparison between 2CRW, 2RW, and 1CRW for n = 1000

           2CRW                                       2RW             1CRW
T ± 1SD    From (115.8 ± 36.1) to (144.5 ± 67.1)      423.1 ± 119.5   246.0 ± 87.8


4 Features of a Fit Intelligence Sequence Combination

Table 1 showed that different fittest intelligence combinations were obtained for different n. This inconsistency and the large SD obtained for 2CRW probably indicate that a combination's T can hardly converge to a single value. However, a fit intelligence sequence combination was observed to have two features.

4.1 The Absolute Difference Between R and B

To quantify the similarity between R and B, their absolute difference D(R, B) is measured. It is defined as the absolute difference between the number of 1's (or 0's) in R and B:

D(R, B) = |#1's in R − #1's in B| = |#0's in R − #0's in B|.   (4)

This quantity is special because a positive correlation can be observed after plotting the average cover time of the 1024 combinations against their D (see Fig. 2). Furthermore, Fig. 2 showed that the average cover time for intelligence sequence combinations with a particular D was likely converging to a particular value as n grew. Therefore, if D(R, B) is small for a particular combination of R and B, (R, B) is expected to be a fit combination that minimizes the cover time T.
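As a concrete illustration of Eq. (4), the following short Python sketch (the function name is ours, not the paper's) computes D(R, B) for bit-string intelligence sequences, using the two rank-1 combinations from Table 1 as examples.

```python
def absolute_difference(R: str, B: str) -> int:
    """D(R, B) from Eq. (4): |#1's in R - #1's in B|,
    which equals |#0's in R - #0's in B| for sequences of the same length."""
    return abs(R.count("1") - B.count("1"))

# rank-1 combinations from Table 1 for n = 10 and n = 1000
print(absolute_difference("11100", "00101"))  # -> 1
print(absolute_difference("00110", "01010"))  # -> 0
```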

Fig. 2 A positive correlation was observed between an intelligence sequence combination’s average cover time T and its absolute difference D for n = 10 (left; circles), 100 (middle; squares), and 1000 (right; rhombuses). The correlation coefficient rose from r ≈ 0.188 to r ≈ 0.509 and finally r = 0.779 as n increased.


4.2 The Neutrality of R and B

For each of Ray's/Ben's intelligence sequences Ri/Bi, its average cover time T_Ri/T_Bi over all partnering sequences Bj/Rj was calculated:

T_Ri = Σ_j T(Ri, Bj) / 2^k,   (5)

T_Bi = Σ_j T(Rj, Bi) / 2^k.   (6)

The calculated T_Ri/T_Bi was further averaged to τ_R/τ_B over all Ri/Bi having the same x, which is the number of 1's in Ri/Bi. Figure 3 plotted τ against x. It could be observed that τ had a minimum at x = 2 or 3 for both walkers, meaning that an intelligence sequence combination (R, B) is, on average, fit if x_R or x_B = 2 or 3, which is about a half of k. Therefore, it is conjectured that (R, B) is likely a fit combination of intelligence sequences if R or B is neutral, i.e. 50% peer-avoiding and 50% self-avoiding. To verify the conjecture, another experiment was performed with k = 6, which led to 2^(2k) = 4096 intelligence sequence combinations. After the measurements, the obtained τ was plotted against x in Fig. 4. As observed, there were significant troughs in τ at x = 3 for both walkers. This matched the conjecture: it is more likely for (R, B) to be a fit combination if R or B is neutral, i.e. x_R or x_B is half of k.

Fig. 3 τ against x for Ray (left; circles) and Ben (right; squares) for k = 5. The troughs at x = 2 or 3 indicated that an intelligence sequence combination (R, B) is likely fit if x R or x B = 2 or 3


Fig. 4 τ against x for Ray (left; circles) and Ben (right; squares) for k = 6. The troughs at x = 3 strengthen the conjecture that an intelligence sequence combination (R, B) is more likely to be fit if R or B is neutral

5 Discussion

5.1 Two General Rules for a Fit Sequence Combination

According to the observations in Sect. 4, we may come up with two general rules for finding the fittest or a close-to-the-fittest sequence combination (R*, B*), i.e. the one that yields the shortest average cover time:

1. The absolute distance between R* and B* is likely 0, i.e. D(R*, B*) = 0.
2. There are likely x* = k/2 1's in both R* and B*. If k is odd, x* = (k−1)/2 or (k−1)/2 + 1.

Indeed, as shown in Table 1, the two rules correctly describe the four fittest sequence combinations for n = 1000. However, they fail for a smaller n. This probably indicates that the two rules are accurate when the long-term average performance of a sequence combination is considered.

5.2 Extension to Complex Networks

So far, the paper has only focused on the cover time of 2CRW on a square lattice, which is a regular network, while many networks in our daily lives are better described by complex networks such as random networks [14], small-world networks [15], scale-free networks [16], etc. Therefore, the model's performance on these complex networks should be analyzed before applying it to more realistic daily-life problems. In particular, one may follow this paper's recipe and see whether the two general rules stated in Sect. 5.1 are valid for complex networks. In addition, one should keep in mind that, unlike square lattices, complex networks are heterogeneous, so the walkers' initial positions may have a great influence on their cover time.


An intermediate stage before applying 2CRW to complex networks is to apply it to randomly rewired square lattices, which have long-range connections. As mentioned in Sect. 3, the two walkers on a regular square lattice occasionally got stuck in a cluster of colored nodes and spent many steps escaping and searching for the last few white nodes. If there are long-range connections, the chance for a walker to get stuck is expected to decrease. This may consequently shorten the average cover time of any sequence combination.

5.3 Optimization with Genetic Algorithm

In general, for a fixed k, there will be S sequence combinations that match both rules stated in Sect. 5.1, where

S = C(k, k/2)²   (even k),
S = C(k, (k−1)/2)² + C(k, (k−1)/2 + 1)²   (odd k),   (7)

where C(k, x) denotes the binomial coefficient.

For example, S = 200 for k = 5 and S = 400 for k = 6. Then, to look for the fittest or a close-to-the-fittest sequence combination, one may consider doing an exhaustive search in the solution space formed by the S combinations. However, this only works for a small k, as S grows exponentially with k. If an exhaustive search is not feasible, one may use a genetic algorithm to search for the fittest sequence combination. For example, one may evaluate the fitness of a sequence combination by its average cover time over some trials and accordingly mutate it so that both R and B have x* 1's. This is merely a preliminary idea; its actual implementation is reserved for another paper.
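A quick Python check of Eq. (7) for the two values of k used in this paper (the function name is our own, not from the paper):

```python
from math import comb

def num_rule_matching_combinations(k: int) -> int:
    """Number S of (R, B) combinations satisfying both rules of Sect. 5.1, Eq. (7)."""
    if k % 2 == 0:
        return comb(k, k // 2) ** 2
    half = (k - 1) // 2
    return comb(k, half) ** 2 + comb(k, half + 1) ** 2

print(num_rule_matching_combinations(5))  # 200, as stated in the text
print(num_rule_matching_combinations(6))  # 400
```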

6 Conclusion

This paper has investigated the cover time on a square lattice with the two-colored-random-walker model (2CRW). In the model's configuration, a network's unvisited nodes are initially white and are permanently colored red or blue depending on the color of their first visitor. Then, two random walkers, Ray (red) and Ben (blue), preferentially walk to white nodes and become peer- or self-avoiding according to their predefined intelligence sequences when there are no white nodes near them. This paper observed that if the absolute difference D between the walkers' sequences is 0, or if either walker is neutral, i.e. half peer-avoiding and half self-avoiding, the sequence combination is likely to be fit and yield a short average cover time in the long run.


Acknowledgements C.Y. Yip acknowledges the support of the Hong Kong University of Science and Technology Undergraduate Research Opportunity Program (UROP) for this project.

References

1. Ver Steeg, G., Galstyan, A.: Information transfer in social media. In: Proceedings of the 21st International Conference on World Wide Web, pp. 509–518. ACM (2012)
2. Watts, D.J.: The "new" science of networks. Ann. Rev. Sociol. 30(1), 243–270 (2004)
3. May, R.M.: Networks and webs in ecosystems and financial systems. Philos. Trans. Roy. Soc. Lond. A: Math. Phys. Eng. Sci. 371, 20120376 (2013)
4. Cai, W., Chen, L., Ghanbarnejad, F., Grassberger, P.: Avalanche outbreaks emerging in cooperative contagions. Nat. Phys. 11(11), 936–940 (2015)
5. Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002)
6. Redner, S.: A Guide to First-Passage Processes. Cambridge University Press (2001)
7. Maier, B.F., Brockmann, D.: Cover time for random walks on arbitrary complex networks. Phys. Rev. E 96, 042307 (2017)
8. Chen, W., Wang, Y., Yang, S.: Efficient influence maximization in social networks. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '09, pp. 199–208. ACM, New York (2009)
9. Burgard, W., Moors, M., Schneider, F.: Collaborative exploration of unknown environments with teams of mobile robots. In: Beetz, M., Hertzberg, J., Ghallab, M., Pollack, M.E. (eds.) Advances in Plan-Based Control of Robotic Agents, pp. 52–70. Springer, Berlin (2002)
10. Guo, W.S., Wang, J.T., Szeto, K.Y.: Spin model of two random walkers in complex networks. In: Proceedings of the 6th International Conference on Complex Networks and Their Applications (2017)
11. Hughes, B.: Random Walks and Random Environments: Random Walks, vol. 1. Oxford Science Publications, Clarendon Press (1995)
12. Noh, J.D., Rieger, H.: Random walks on complex networks. Phys. Rev. Lett. 92, 118701 (2004)
13. Bonaventura, M., Nicosia, V., Latora, V.: Characteristic times of biased random walks on complex networks. Phys. Rev. E 89, 012803 (2014)
14. Erdős, P., Rényi, A.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5, 17–60 (1960)
15. Watts, D.J., Strogatz, S.H.: Collective dynamics of "small-world" networks. Nature 393, 440–442 (1998)
16. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286, 509–512 (1999)

Transition Graph Analysis of Sliding Tile Puzzle Heuristics

Iveta Dirgová Luptáková and Jiří Pospíchal

Abstract Sliding tile puzzle or n-puzzle is a standard problem for solving a game by a tree search algorithm such as A*, involving heuristics. A typical example of such a puzzle is the 15-puzzle, which consists of a square frame containing 4 × 4 numbered square tiles, one of which is missing. The goal is to position the tiles in the correct order by sliding moves of the tiles, which use the empty space. The problem is NP-complete, and for puzzles involving a greater number of tiles, the search space is too big for standard tree search algorithms. This type of puzzle is therefore quite often used for the analysis and testing of heuristics. This paper aims to obtain a better characterization of popular heuristics used for this kind of problem by analyzing the transition graph of admissible moves. Our analysis shows that both the Manhattan distance and Tiles out of place heuristics work properly only near the goal; otherwise, the information they provide for a single move is next to useless, and IDA* with these heuristics works, over more consecutive moves, mainly due to the reduction of branching of the search tree.

Keywords Sliding tile puzzle · n-puzzle · Heuristic · Transition graph · Game search graph · Shortest path

1 Introduction

For combinatorial NP-complete problems, the search for heuristics and their analysis has been an ongoing process since the foundation of artificial intelligence. One of the most popular problems, used typically in schoolbook artificial intelligence examples for the study of tree search heuristics, is the sliding tile (resp. sliding block) n-puzzle, the 15-puzzle in its most popular form [17].

I. Dirgová Luptáková (B) · J. Pospíchal
Faculty of Natural Sciences, University of St. Cyril and Methodius in Trnava, 91701 Trnava, Slovak Republic
e-mail: [email protected]
J. Pospíchal
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_13


Sliding tile puzzles have a history spanning more than a century and a half [18]. The 15-puzzle is a set of 15 square tiles numbered from 1 to 15, in a square frame with space for 4 × 4 tiles, with one empty space instead of a tile. The 15 numbered tiles are initially placed in a random order, and the goal is to rearrange them by sliding moves into the order 1–15 from left to right and top to bottom. Lifting any piece off the board is prohibited. The theoretical analysis of the puzzle commenced early after its introduction, when Johnson and Story [9, 21] in 1879 showed that only half of the possible initial arrangements can be solved. They used the properties of permutations defining the ordering of the numbered tiles. An advanced theoretical analysis using Cayley graphs and groups can be of practical use in solving n-puzzles and similar problems by parallel programming [4], or in improving heuristics by application of the Fourier transform to noncommutative groups [22].

There exists a wide variety of sliding block puzzles [19, 20]. The basic types are n-puzzles or (n² − 1) puzzles (8, 15, 24, …), where the square tiles numbered from 1 to 8, 15, or 24, … are placed in an n × n frame (3 × 3, 4 × 4, 5 × 5, …) with one tile missing. The shape of the frame can be generalized from a square to a rectangle composed of n × m tiles; the common variants used in research are the 14-puzzle (3 × 5) and the 19-puzzle (4 × 5). At the beginning of the 1900s other variants of the puzzle appeared, named e.g. Dad's Puzzle or Klotski, containing blocks of different shapes, like 4 basic square tiles merged into a larger square, or 2 basic square tiles merged into a rectangle, and a free space equivalent to two basic square tiles. Yet other variants used only horizontally and vertically oriented rectangular tiles, with more empty space and movement restricted by tile orientation, like Rush Hour, with the goal to get the red car-tile through the gap in the frame. In the 1930s Ma's Puzzle (resp. the Devil's Nightcap) used non-rectangular shapes; its goal was to join its 2 L-shaped pieces together. Sokoban and Rubik's cube often use similar strategies to obtain a solution.

The search space ranges from the 8 puzzle (10^5), Ma's puzzle (5 × 10^5), Rush Hour 6 × 6 (10^10), 15 puzzle (10^13), 24 puzzle (10^25), up to the 48 puzzle (10^48), with brute-force search time from milliseconds for the 8 puzzle to billions of years for the 24 puzzle [20]. The uninformed strategies used for slide-tile puzzles range from random walk, breadth-first search (obvious disadvantage: the memory required for holding two complete search depths and detecting duplicate positions), depth-first search and depth-first iterative deepening (search space graphs have cycles, already visited nodes must be avoided), up to bidirectional search (when do the trees meet?) [20]. The informed algorithms, i.e. heuristic search (A*, Iterative Deepening A*, etc.), use a function f(n) = g(n) + h(n), where g(n) is the length of the shortest path from the start and h(n) is an estimated distance from a goal. The estimate must not overestimate the cost. Typical heuristics used for this purpose are the number of tiles out of place or the Manhattan distance (along axes). Further improvements involve weighting the heuristics, which might overestimate the cost, and pattern databases [5] storing the exact distance to the goal for a subproblem [20]. Even evolutionary approaches like a genetic algorithm [8] and genetic programming [7] were tried.


As the n-puzzles are popular with human solvers, even a cognitive approach was analyzed and compared with a machine solution [14]. In our paper, we study the properties of heuristics used in informed algorithms. Such studies have already been approached from various angles. A basic comparison of A*, Greedy BFS, Depth Limited A* Search, and hill-climbing, with and without the Tiles out of place and Manhattan distance heuristics, was done in [10, 11]. A phase transition in heuristic search was studied in [3], and a theoretical overview of the method of analysis of the transition graph is given in [13]. Node ordering in IDA* was studied in [16], and a good analysis of the Tiles out of place and Manhattan distance heuristics was done early in [6]. NP-completeness and an approximation algorithm giving a bounded suboptimal solution are studied in [15]. Graphplan and heuristic state space planners are studied in [2, 12]. Bootstrapping improvement of heuristics for large state spaces, taking days of computational time, was studied in [1]. Large-scale sliding puzzles are solved in milliseconds, but non-optimally, by combining solutions from sub-solutions of size 2 × 3 in [23]. Heuristics for non-optimal solutions in large-scale problems were also studied by [24]. In this paper, we intend to analyze in more detail the Manhattan distance and Tiles out of place heuristics, which are the most popular heuristics for IDA* and other algorithms, using the transition graph.

1.1 A Simple Example: 5-Puzzle Transition Graph

The search space even for one of the smallest practically used n-puzzles, the 8-puzzle, is around 10^5, which cannot be effectively visualized. Therefore, to help the visualization, as a simple test case we used a rectangle composed of 2 × 3 tiles, which we named the 5-puzzle, since it contains 5 numbered tiles. An example of the start state and the goal state can be seen in Fig. 1. The start and goal states from Fig. 1 can also be used for an illustration of the Manhattan distance and Tiles out of place heuristics. The Manhattan distance is the distance between two points measured along axes at right angles. In Fig. 1 it means measuring the distances between the positions of same-numbered tiles in the start and goal states, where the distance unit equals the side of a tile. It means that for tile 1 the Manhattan distance is zero, and for tile 2 the distance is 2 (when we disregard other tiles, it takes

Fig. 1 5-puzzle, a rectangle composed of 2 × 3 tiles with 5 numbered tiles


Fig. 2 5-puzzle, frequencies of states with a given minimum distance from the goal state

one move to the right and one move up to move tile 2 from its position in the start state to its position in the goal state). For tile 3 the Manhattan distance is one, for tile 4 it is three, and for tile 5 it is one. The Manhattan distance between the start and goal states in Fig. 1 is therefore 0 + 2 + 1 + 3 + 1 = 7. The Tiles out of place distance is much simpler; it is 4, since the positions of 4 tiles in the start state do not match their positions in the goal state. What does the state space look like? As in all n-puzzles, the transition graph is composed of two unconnected components; if we switched just tiles 1 and 2 from the goal state to get a start state, we could never reach the goal state by just sliding tiles. The state space for the 5-puzzle is 6! = 720, but one component has just 360 states. For each of these states, we can find the minimum distance (number of edges of a shortest path) to the goal-state node. The frequencies of such distances are shown in Fig. 2; only one of the states has the largest distance, 21. Such an analysis can easily be found for the most popular slide puzzles. For example, for the 8-puzzle in [16], the largest distance is 31 and the distance with the maximum number of nodes is 24, compared with 14 in Fig. 2.
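The two heuristics and the exhaustive distance computation described above are easy to reproduce. The Python sketch below, written for the 2 × 3 5-puzzle with 0 denoting the empty space (the state encoding and function names are ours, not from the paper), computes the Tiles out of place and Manhattan distances for arbitrary states and the exact minimum distances of all states in the goal's component by breadth-first search.

```python
from collections import deque

ROWS, COLS = 2, 3
GOAL = (1, 2, 3, 4, 5, 0)  # row-major goal state, 0 = empty space

def tiles_out_of_place(state, goal=GOAL):
    """Number of numbered tiles not in their goal position (the empty space is ignored)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """Sum of horizontal + vertical distances of each numbered tile to its goal cell."""
    goal_pos = {tile: divmod(i, COLS) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, COLS)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

def neighbours(state):
    """States reachable by sliding one tile into the empty space."""
    i = state.index(0)
    r, c = divmod(i, COLS)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            j = nr * COLS + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def distances_from_goal(goal=GOAL):
    """Exact minimum number of moves to the goal for every reachable state (BFS)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        state = queue.popleft()
        for nxt in neighbours(state):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return dist

dist = distances_from_goal()
print(len(dist), max(dist.values()))  # expected per the text: 360 states, largest distance 21
```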

2 Detailed Analysis of the State Space

The transition graph of 360 states for the 5-puzzle is shown in Fig. 3. By coloring its edges, we also show whether the Tiles out of place distance of their vertices can correctly guide us towards the goal. The figure for the Manhattan distance would look very similar. The numbers of colored edges are distributed roughly equally between


Fig. 3 5-puzzle, half of the state space, with the goal state marked blue and the state farthest from the goal marked with a red dot. Red edges show where the direction of the edge towards the goal corresponds to an increase of the Tiles out of place distance of its vertices from the goal state, green edges correspond to a decrease of the distance, and vertices of yellow edges have the same distance

those that correctly guide, incorrectly guide, and do not guide us towards the goal, both for the Tiles out of place and for the Manhattan distance. However, the distribution of colors is not random. This can be seen in Fig. 4, where we show in detail the situation in the search space around the goal state for both the Tiles out of place distance and the Manhattan distance. It is apparent that in both cases these heuristics can guide the search towards the goal quite efficiently.

Fig. 4 5-puzzle, detailed situation of the search space near the goal; the coloring of edges is similar to Fig. 3 and corresponds to Tiles out of place on the left and Manhattan distance on the right-hand side


Fig. 5 5-puzzle, histogram of numbers of edges versus distance from the goal; the coloring of the histogram bars is equivalent to the coloring of edges in Figs. 3 and 4

On the left-hand side of Fig. 4, we mark each edge green if the number of displaced tiles in the nodes of the edge decreases going along the edge in the direction of the goal state. It means that the decrease in the number of misplaced tiles shows us a correct direction toward the goal state. A yellow-marked edge means the number of displaced tiles does not change going from one node/state of the edge to the other. In this case the heuristic is useless to show us a direction. The red edges show us that the number of misplaced tiles misleads us; they direct us away from the goal. The situation is similar for the Manhattan distance on the r.h.s. of Fig. 4. In Fig. 5 we can see a similar histogram as in Fig. 2, only for edges instead of vertices. We take as a distance the distance of the farther vertex of the edge from the goal. Again, the green color means the Tiles out of place, resp. Manhattan, heuristic can guide us towards the goal. Figure 5 supports the results of Fig. 4. It is apparent that edges close to the goal are mostly green or yellow, while farther from the goal the proportion of the colors is roughly the same, save for random variance. It means that the guidance of both heuristics is near random farther from the goal. This can also be shown by the analysis of the four shortest paths leading from the farthest state towards the goal, shown in Fig. 6. For both heuristics, the heuristic distances drop reliably near the goal, but farther from the goal, the heuristic distances of states increase or decrease along the shortest paths practically randomly.

3 Conclusions

Both the Manhattan distance and Tiles out of place heuristics work reliably only near the goal; otherwise they are nearly as bad as random moves. It can be concluded that the IDA* algorithm with these heuristics works mainly due to the (nearly random) reduction of branching of the search trees. The advantage becomes significant only over more consecutive moves.


Fig. 6 5-puzzle, analysis of four shortest paths (colored red, green, blue and black) leading from the goal to the farthest state

Acknowledgements The work was in part supported by the grant APVV-17-0116 Algorithm of collective intelligence: Interdisciplinary study of swarming behaviour in bats, and the grant VEGA 1/0145/18 Optimization of network security by computational intelligence.


References

1. Arfaee, S.J., Zilles, S., Holte, R.C.: Learning heuristic functions for large state spaces. Artif. Intell. 175(16–17), 2075–2098 (2011)
2. Blum, A.L., Langford, J.C.: Probabilistic planning in the graphplan framework. In: European Conference on Planning. Springer, Berlin, Heidelberg, pp. 319–332 (1999)
3. Cohen, E., Beck, J.C.: Problem difficulty and the phase transition in heuristic search. In: AAAI, pp. 780–786 (2017)
4. Cooperman, G., Finkelstein, L., Sarawagi, N.: Applications of Cayley graphs. In: International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes. Springer, Berlin, pp. 367–378 (1990)
5. Felner, A.: Early work on optimization-based heuristics for the sliding tile puzzle. In: Workshops at the Twenty-Ninth AAAI Conf. on Artificial Intelligence, pp. 32–38 (2015)
6. Gaschnig, J.: Exactly how good are heuristics?: Toward a realistic predictive theory of best-first search. IJCAI 5, 434–441 (1977)
7. Hauptman, A., Elyasaf, A., Sipper, M., Karmon, A.: GP-Rush: using genetic programming to evolve solvers for the Rush Hour puzzle. In: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation. ACM, pp. 955–962 (2009)
8. Igwe, K., Pillay, N., Rae, C.: Solving the 8-puzzle problem using genetic programming. In: Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference. ACM, pp. 64–67 (2013)
9. Johnson, W.W.: Notes on the '15 Puzzle. I. Amer. J. Math. 2, 397–399 (1879)
10. Mathew, K., Tabassum, M.: Experimental comparison of uninformed and heuristic AI algorithms for N Puzzle and 8 Queen puzzle solution. Int. J. Dig. Inf. Wireless Commun. (IJDIWC) 4(1), 143–154 (2014)
11. Mishra, A.K., Siddalingaswamy, P.C.: Analysis of tree based search techniques for solving 8-puzzle problem. In: Innovations in Power and Advanced Computing Technologies (i-PACT), 2017. IEEE, pp. 1–5 (2017)
12. Nguyen, X., Kambhampati, S.: Extracting effective and admissible state space heuristics from the planning graph. In: AAAI/IAAI, pp. 798–805 (2000)
13. Nishimura, N.: Introduction to reconfiguration. Algorithms 11(4), 52 (2018)
14. Pizlo, Z., Li, Z.: Solving combinatorial problems: The 15-puzzle. Memory Cognit. 33(6), 1069–1084 (2005)
15. Ratner, D., Warmuth, M.: The (n²−1)-puzzle and related relocation problems. J. Symbol. Comput. 10(2), 111–137 (1990)
16. Reinefeld, A.: Complete solution of the eight-puzzle and the benefit of node ordering in IDA*. In: International Joint Conference on Artificial Intelligence, pp. 248–253 (1993)
17. Russell, S., Norvig, P.: Solving problems by searching. In: Artificial Intelligence: A Modern Approach, pp. 65–123 (2008)
18. Slocum, J., Sonneveld, D.: The 15 puzzle book: how it drove the world crazy. Slocum Puzzle Foundations (2006)
19. Spaans, R.G.: Improving sliding-block puzzle solving using meta-level reasoning. Master's thesis, Institutt for datateknikk og informasjonsvitenskap (2010)
20. Spaans, R.G.: Solving sliding-block puzzles. Specialization project at NTNU (2009). https://www.pvv.org/~spaans/spec-cs.pdf. Last visited 15.5.2018
21. Story, W.E.: Notes on the '15 Puzzle. II. Amer. J. Math. 2, 399–404 (1879)
22. Swan, J.: Harmonic analysis and resynthesis of sliding-tile puzzle heuristics. In: IEEE Congress on Evolutionary Computation (CEC). IEEE, pp. 516–524 (2017)
23. Wang, G., Li, R.: DSolving: a novel and efficient intelligent algorithm for large-scale sliding puzzles. J. Exp. Theor. Artif. Intell. 29(4), 809–822 (2017)
24. Wilt, C., Ruml, W.: Effective heuristics for suboptimal best-first search. J. Artif. Intell. Res. 57, 273–306 (2016)

Stochastic Optimization of a Reinforced Concrete Element with Regard to Ultimate and Serviceability Limit States

Jakub Venclovský, Petr Štěpánek, and Ivana Laníková

Abstract The main goal of the paper is to present an algorithm for stochastic optimization of a steel-reinforced concrete element's cross section. Firstly, the deterministic problem is introduced and described. A description of the uncertainties involved in the process and the stochastic reformulation of the problem follows. Afterwards, the algorithm itself is introduced. This algorithm is based on an internal cycle of deterministic optimization using the reduced gradient method and an external cycle of stochastic optimization using regression analysis. From the initialization state, the algorithm assesses the probabilities that the ultimate and serviceability limit states are satisfied. Using regression analysis of these assessed probabilities, the algorithm finds a new possible solution. The probabilities are assessed on an orthogonal grid via a modified bisection method. The paper concludes with a presentation of the performed calculations and their results.

Keywords Deterministic optimization · Stochastic optimization · Design of structural elements

1 Introduction

It is possible to see various applications of mathematical optimization in civil engineering (structural design, reconstruction of transportation networks, etc.). Initially, deterministic approaches were introduced to solve these issues (see [1]). But despite their complexity, these approaches are insufficient to capture the probabilistic nature of the mentioned problems and thus provide only suboptimal solutions. Hence, the prevailing effort is to reconsider these deterministic approaches and deal with the uncertainties involved in said issues in a less straightforward way (see [2–4]). This paper focuses on optimization of the structural design. There are various approaches to the optimization. The difference between deterministic and stochastic

J. Venclovský (B) · P. Štěpánek · I. Laníková
Faculty of Civil Engineering, Brno University of Technology, Veveří 331/95, Brno, Czech Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_14


optimization was already mentioned. This paper is aimed at stochastic optimization. Further differences derive from the subject of optimization. In this case, it is the shape of the cross section. It is also possible to distinguish various optimization algorithms. There are analytic algorithms (e.g. [5]) and simulation algorithms (e.g. [6–8]). Analytic algorithms are based on probabilistic analysis of the problem and it is known that they cannot be used to achieve a high precision. Simulation algorithms are based on discretization of the probability into particular scenarios followed by simulation using these scenarios (every particular scenario is actually deterministic). Since the precision that needs to be achieved can be very high (even 10^−6), it seems that the only possibility to achieve such precision is to use a simulation and a heuristic algorithm (see e.g. [9]) simultaneously.

2 Deterministic Approach

It is necessary to begin by introducing the problem in its deterministic form. In the deterministic approach, all probability inputs are replaced with their design values. Furthermore, the principle of safety coefficients is applied. A simplified definition of the optimization task therefore is:

min_x f(x), subject to:   (1)

L(x) ≤ R(x),   (2)

w(x) ≤ wmax,   (3)

x ∈ C.   (4)

All conditions and variables are described in more detail in the following text. It is necessary to mention that, besides the vector of design variables x, there are other inputs into the task (1–4), namely the prescribed loads and material characteristics.

Vector of Design Variables. This paper restricts itself to one-dimensional construction elements, beams. The length of the beam is obviously set and cannot change. Vector x in the task described by (1–4) therefore contains variables which describe the shape of the beam's cross section, including the area of reinforcing steel. In a possible simple case the cross section is rectangular and x = (b, h, As1, As2), see Fig. 1.

Objective Function. The objective function f(x) in (1) can represent anything from the simple cost of materials to a multi-criteria function describing environmental aspects through the construction, maintenance and disposal of the structure. To maintain simplicity, the objective function which calculates the cost of materials is used in


Fig. 1 Simple rectangular cross section

this paper:

f(x) = Vc(x) Ccv + Ws(x) Csw,   (5)

where Vc is the volume of concrete, Ccv is the cost of concrete per unit of volume, Ws is the weight of reinforcing steel and Csw is the cost of steel per unit of weight.

Ultimate Limit State. The condition (2) represents the ULS in task (1–4), where L(x) is the effect of load and R(x) is the structural resistance. This description of the ULS is very simplified. In fact, the ULS consists of many equations and conditions limiting the values of strain of concrete and steel. In order to evaluate the ULS it is necessary to calculate the vector of deformations first. This is achieved using a matrix calculation, which is in principle based on solving the following differential equation via the finite element method:

E I w^(4) = q,   (6)

where E is the Young modulus of elasticity, I is the moment of inertia, w is the deflection and q is the prescribed uniformly distributed load over a unit of length. Using the FEM, the beam e is discretized into n elements ei:

e = {ei, i = 1, . . . , n}.   (7)

The notation of the corresponding matrix calculation then is:

K U = F,   (8)

where K is the stiffness matrix, U is the vector of deformations and F is the vector of load. (N.B. Vector F is a vector of point forces; the values of the distributed load q are transferred into vector F via the so-called influence lines.) Using this equation system, the internal forces are calculated. From them, the so-called strain parameters (see [6]) εj, Kyj, Kzj, j = 1, . . . , n + 1, are derived in


all nodes between elements ei and subsequently, the strains of concrete or steel at points characterized by coordinates y, z can be computed as:

εi(y, z) = εci + z Kyi + y Kzi.   (9)

For the values of strain, the following conditions must then be met:

εi(y, z) ≥ εc,min, (y, z) ∈ cross section vertices,   (10)

εi(y, z) ≤ εs,max, (y, z) ∈ reinforcing steel positions,   (11)

where εc,min is the minimal allowed strain of concrete and εs,max is the maximal allowed strain of steel.

Serviceability Limit State. As can be seen from condition (3), the SLS in this paper is restricted to limiting the deflection. The condition (3) is again very simplified and actually represents a large number of equations. The evaluation of the SLS is similar to the evaluation of the ULS, with the difference that the possible formation of cracks must be taken into account. These cracks then affect the bending stiffness Bi:

Bi = E Ii,   (12)

which affects the moment of inertia Ii, one of the inputs into the differential equation and therefore the matrix calculation. Whether the cracks form is determined from the vector of internal forces. It is therefore necessary to find the state of equilibrium (of the variables' values in the mentioned equations, see [6]). The final condition corresponds to (3):

wi(x) ≤ wmax,   (13)

where wi(x) is the deflection taken directly from the vector of deformations U and wmax is the maximal allowed deflection.

Design Variables' Boundaries. Condition (4) defines the boundaries of the design variables' values. It also represents conditions following from mutual relations among these variables, e.g. the maximal and minimal area of steel reinforcement in relation to the area of the concrete cross section.

Optimization Algorithm. The optimization itself uses the reduced gradient algorithm, specifically the solver CONOPT from the optimization software GAMS. This sets some rules for the performed calculations. All relations (equations) and their first derivatives must be continuous. In some cases (stress–strain diagrams, the mentioned stiffness in the SLS), this is achieved using Hermite interpolation (see [6] again).
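To make the matrix formulation (6)–(8) more tangible, the following Python sketch assembles the standard Euler–Bernoulli beam stiffness matrix K and a consistent load vector F for a simply supported beam under a uniform load and solves K U = F for the nodal deflections. The material and load values are generic placeholders of our own choosing; this is not the paper's cross-section model, which couples EI to the optimized design variables and to cracking.

```python
import numpy as np

def beam_deflections(EI, length, q, n_elem):
    """Nodal deflections of a simply supported beam under uniform load q,
    solved from K U = F with cubic (Hermite) beam elements."""
    L = length / n_elem
    n_dof = 2 * (n_elem + 1)                      # (deflection, rotation) per node
    K = np.zeros((n_dof, n_dof))
    F = np.zeros(n_dof)
    # standard element stiffness matrix and consistent nodal load vector
    k_e = EI / L**3 * np.array([[ 12,    6*L,  -12,    6*L],
                                [ 6*L, 4*L**2, -6*L, 2*L**2],
                                [-12,   -6*L,   12,   -6*L],
                                [ 6*L, 2*L**2, -6*L, 4*L**2]])
    f_e = q * L * np.array([0.5, L / 12, 0.5, -L / 12])
    for i in range(n_elem):                       # assembly over elements
        dofs = slice(2 * i, 2 * i + 4)
        K[dofs, dofs] += k_e
        F[dofs] += f_e
    # simply supported: zero deflection at both end nodes (DOFs 0 and 2*n_elem)
    free = [d for d in range(n_dof) if d not in (0, 2 * n_elem)]
    U = np.zeros(n_dof)
    U[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return U[0::2]                                # deflection DOFs only

# illustrative values only: EI in kNm^2, length in m, q in kN/m, 10 elements as in Sect. 4
w = beam_deflections(EI=30e3, length=5.0, q=32.5, n_elem=10)
print(w.max(), 5 * 32.5 * 5.0**4 / (384 * 30e3))  # midspan value vs analytic 5qL^4/(384EI)
```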


3 Stochastic Approach

The designing process involves a lot of uncertainties. Among these are:
• randomness of physical quantities used in the design (as its natural characteristic),
• statistical uncertainties during the description of a quantity caused by a lack of data,
• model uncertainties caused by inaccuracies in the calculation model in comparison with real structural behavior,
• uncertainties caused by inaccuracies of the limit states definitions,
• human element deficiencies within the design procedure and the execution and usage of the structure.

In the deterministic approach all these uncertainties are included in the calculation by using design values and/or via various safety coefficients. It is obvious that, for a desired probability of structure failure, the stochastic approach provides a more precise and improved solution. By embodying the uncertainties, the original task (1–4) is reformulated into its stochastic form:

min_x M(f(x, ξ)), subject to:   (14)

P(L(x, ξ) ≤ R(x, ξ)) ≥ 1 − α,   (15)

P(L(x, ξ) ≤ R(x, ξ) ∧ w(x, ξ) ≤ wmax) ≥ 1 − β,   (16)

x ∈ C,   (17)

where M(·) represents the mean, P(·) represents probability, ξ represents uncertainty (possible scenarios), α is the desired probability that the ULS condition is met and β is the desired probability that both the ULS and SLS conditions are met. The values α and β are not prescribed directly. Actually, the corresponding reliability indices αR, βR are prescribed (see [10]). The relation between the probability values and the reliability indices is:

α = Φ_N(−αR),   (18)

(analogically for β and βR), where Φ_N(·) is the distribution function of the standard normal distribution.
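Equation (18) is easy to verify numerically; the following sketch (using SciPy, our choice of library) reproduces the probability levels used later in Sect. 4.

```python
from scipy.stats import norm

def prob_from_reliability_index(idx: float) -> float:
    """1 - alpha (or 1 - beta) corresponding to a reliability index, via Eq. (18)."""
    return 1.0 - norm.cdf(-idx)

print(prob_from_reliability_index(3.8))  # ~0.9999277, i.e. 99.99277 % for the ULS
print(prob_from_reliability_index(1.5))  # ~0.9331928, i.e. 93.31928 % for the SLS
```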

4 Example

The proposed solution is presented directly on a simple example.


Fig. 2 Scheme of the simply supported beam

Fig. 3 Cross section of the beam, x = (b1 , b2 , b3 , h1 , h2 , h3 , As1 , As3 )

The task is to design a 5 m long simply supported beam loaded with a uniformly distributed load q and a normal force N, as shown in Fig. 2. The cross section of the beam is described by 8 variables b1, b2, b3, h1, h2, h3, As1, As3; 6 of them describe the shape of the cross section and 2 describe the area of reinforcing steel (see Fig. 3). In the FEM calculation, the beam is divided into 10 elements e1, …, e10, each 0.5 m in length. The distributed load q (kN/m) and the normal force N (kN) are random variables with gamma distribution. The probability distributions of q and N are:

q ∼ Γ(3250; 0.01),   (19)

N ∼ Γ(148; 0.05).   (20)

(N.B. To better understand the probability distributions: the 5% and 95% quantiles of the distribution of q are 31.57 kN/m and 33.44 kN/m, respectively, and the 5% and 95% quantiles of the distribution of N are 6.43 kN and 8.43 kN, respectively.) The probability values α and β from task (14–17) are determined from the corresponding reliability indices αR, βR, which for the considered reference period of 50 years and the medium consequences class (see [10]) are:

αR = 3.8 ⇒ 1 − α = 99.99277%,   (21)

βR = 1.5 ⇒ 1 − β = 93.31928%.   (22)
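As a sanity check of the distributions (19)–(20), the quantiles quoted above can be reproduced with SciPy, assuming the usual shape–scale parameterization of the gamma distribution (our reading of the notation Γ(a; θ)):

```python
from scipy.stats import gamma

q_dist = gamma(a=3250, scale=0.01)   # distributed load q (kN/m)
n_dist = gamma(a=148, scale=0.05)    # normal force N (kN)

print(q_dist.ppf([0.05, 0.95]))  # approx. [31.57, 33.44] kN/m
print(n_dist.ppf([0.05, 0.95]))  # approx. [6.43, 8.43] kN
```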

There are some other restrictions regarding the design variables:

h1, h2, h3 ≥ 100 mm,   (23)

h1 + h2 + h3 ≤ 350 mm,   (24)

b1, b3 ≥ 200 mm,   (25)

b1, b3 ≤ 900 mm,   (26)

b2 ≥ 100 mm.   (27)

For the widths b1, b3 the following condition also needs to be fulfilled:

bk ≥ 2 × 50 + (4 Ask) / (π 20²) × (50 + 20), k = 1, 3.   (28)

This condition states that the said width has to accommodate the required area of reinforcing steel; it is considered that the reinforcing steel is composed of bars 20 mm in diameter, the minimum distance between the surface of any steel bar and the concrete surface is 50 mm, and the minimum distance between any two steel bars is also 50 mm. The considered costs for the objective function (5) are:

Ccv = 2500 CZK per m³ of concrete,   (29)

Csw = 30 CZK per kg of steel.   (30)

5 Solution Algorithm

The algorithm uses regression analysis to iterate towards the solution of the task. In this chapter, the algorithm is described on the example defined in Chap. 4, including the method of probability assessment.


5.1 Probability Assessment

The probability is assessed on an orthogonal mesh via a method that can be described as a modified bisection method. The principle of this method is to find the boundary between the scenarios which satisfy the deterministic conditions of the task and the scenarios which do not. This has to be done separately for the ULS and the SLS. Condition (17) must always be satisfied. Besides this condition, the only deterministic condition for the ULS is:

L(x, ξ) ≤ R(x, ξ).   (31)

The deterministic condition for the SLS then is:

L(x, ξ) ≤ R(x, ξ) ∧ w(x, ξ) ≤ wmax.   (32)

Note that by assessing conditions (31) and (32) on the probability mesh, the left-hand sides of conditions (15) and (16) are assessed. The fact that condition (32) is superior to condition (31) is used whenever possible (the former condition is checked first and its results are used in the latter condition if possible). The mesh layout is denser toward the expected location of the mentioned boundary.

Probability Discretization. In the example defined in Chap. 4, the orthogonal mesh in its maximum size consists of 514 × 514 rectangular areas. The boundaries of these areas are specified using quantiles of the probability distributions of the random variables q and N. For the distributed load q, the probabilities whose quantiles define the boundaries of the mesh are (all in %, given as from : step : to):

0 : 3 : 60;
60 : 1.5 : 72;
72 : 0.5 : 90;
90 : 0.25 : 95;
95 : 0.125 : 97.5;
97.5 : 0.0375 : 99;
99 : 0.01 : 99.9;
99.9 : 0.001 : 99.99;
99.99 : 0.0001 : 99.999;
99.999 : 0.00001 : 100.   (33)

For the normal force N, the probabilities whose quantiles define the borders are in the opposite order, i.e. (all in %):

0 : 0.00001 : 0.001; 0.001 : 0.0001 : 0.01; etc.   (34)


Each of these areas (and their respective probability values, determined by their boundaries) is represented by the point (scenario) specified as the quantile of the average value of the area's probabilistic borders, i.e. for the distributed load q as quantiles of (all in %):

1.5 : 3 : 58.5; 60.75 : 1.5 : 71.25; etc.   (35)

Analogically for the normal force N.

Assessment Principle. The principle of the probability assessment is based on the bisection method. From the full mesh of 514 × 514 points, a mesh of 258 × 258 points is made by joining 2 adjacent scenarios of both the distributed load q and the normal force N (4 scenarios in total) into 1 scenario. (N.B. The first and last scenarios of the full mesh always remain, i.e. for the distributed load q the quantiles of 1.5% and 99.999995% are present in all meshes. Analogically for the normal force N. This is the reason for the meshes' sizes: 514 = 2^9 + 2, 258 = 2^8 + 2.) This is done repeatedly, creating meshes of 130 × 130, 66 × 66, …, 4 × 4 points. The probability assessment itself starts with the smallest mesh of 4 × 4 points (4 = 2^1 + 2). These 16 scenarios either satisfy or dissatisfy the deterministic conditions (31) and (32). Using convolution with a proper kernel (done via the Fast Fourier Transform), the boundary between satisfying and dissatisfying scenarios is found. Subsequently, in the 6 × 6 mesh (6 = 2^2 + 2), only the descendants of the boundary scenarios from the 4 × 4 mesh are tested. The whole process is repeated until the final 514 × 514 mesh is reached.
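A sketch of the mesh construction for q (the mesh for N is analogous, with the ranges reversed): the helper below turns the (from : step : to) ranges of (33) into boundary probabilities, representative scenario probabilities as in (35), and the corresponding load values via the gamma quantile function. Names and structure are ours.

```python
import numpy as np
from scipy.stats import gamma

# (from, step, to) ranges in %, following Eq. (33)
Q_RANGES = [(0, 3, 60), (60, 1.5, 72), (72, 0.5, 90), (90, 0.25, 95),
            (95, 0.125, 97.5), (97.5, 0.0375, 99), (99, 0.01, 99.9),
            (99.9, 0.001, 99.99), (99.99, 0.0001, 99.999), (99.999, 0.00001, 100)]

def mesh_probabilities(ranges):
    """Boundary probabilities of the mesh areas and their representative scenarios
    (midpoints of adjacent boundaries, cf. Eq. (35)), both returned as fractions."""
    bounds = [ranges[0][0]]
    for lo, step, hi in ranges:
        bounds.extend(np.arange(lo + step, hi + step / 2, step))
    bounds = np.array(bounds) / 100.0
    reps = (bounds[:-1] + bounds[1:]) / 2.0
    return bounds, reps

bounds, reps = mesh_probabilities(Q_RANGES)
print(len(reps))                                     # 514 areas per axis
q_scenarios = gamma(a=3250, scale=0.01).ppf(reps)    # representative q values in kN/m
print(q_scenarios[0], q_scenarios[-1])
```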

5.2 Calculation Initialization

The calculation is initialized by selecting a given number of scenarios of the random variables. For each scenario the deterministic optimization is performed and the probability that the thus acquired solution satisfies conditions (15) and (16) is assessed. The example defined in Chap. 4 is initialized by 16 scenarios, which are made of combinations of values of q and N as follows:

q ∈ {30.25; 31.75; 33.25; 34.75},   (36)

N ∈ {2.25; 6.75; 8.25; 9.75}.   (37)


5.3 Regression Analysis

The acquired points are fitted with a polynomial function via the least squares method. For the probabilities it is a 3rd degree polynomial, for the objective function a 1st degree polynomial. See Figs. 4 and 5 for the results of this regression for the objective function and the probability of the ULS (the probability of the SLS is very similar to the probability of the ULS and is therefore not shown). The degrees of the polynomials were chosen appropriately with regard to the values of probabilities and objective function values observed on a greater sample than just the mentioned 16 scenarios. The regression analysis of the probabilities is weighted. For condition (15), the error in each point t is multiplied by:

1 / |p_t,(15) − α|,   (38)

Fig. 4 Polynomial regression of objective function’s values obtained by deterministic optimization in initialization points

Fig. 5 Polynomial regression of the probabilities that the solution obtained by deterministic optimization in the initialization points will satisfy the ULS


where p_t,(15) is the assessed left-hand side of condition (15) for point t; analogically for condition (16) and β. This way, the regression resembles interpolation near the solution.
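A minimal sketch of such a weighted fit in the two variables (q, N), using an ordinary design matrix of monomials up to degree 3 and weighting each residual by the reciprocal distance of the assessed probability from its required level (our reading of Eq. (38)); the data points here are placeholders, not the paper's assessed probabilities.

```python
import numpy as np

def poly_features(q, N, degree=3):
    """Monomials q**i * N**j with i + j <= degree, stacked column-wise."""
    cols = [q**i * N**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    return np.column_stack(cols)

def weighted_poly_fit(q, N, p, target, degree=3, eps=1e-12):
    """Weighted least-squares fit of assessed probabilities p(q, N);
    weight = 1 / |p - target|, eps guards against division by zero."""
    A = poly_features(q, N, degree)
    w = 1.0 / (np.abs(p - target) + eps)
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], p * sw, rcond=None)
    return coeffs

# placeholder data: 16 scenarios and made-up probabilities near the ULS target
rng = np.random.default_rng(0)
q = np.repeat([30.25, 31.75, 33.25, 34.75], 4)
N = np.tile([2.25, 6.75, 8.25, 9.75], 4)
p = 0.99993 + 0.0002 * rng.standard_normal(16)
coeffs = weighted_poly_fit(q, N, p, target=0.9999277)
print(poly_features(q, N) @ coeffs)   # fitted probabilities at the 16 scenarios
```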

5.4 Iteration

After initialization, the iterative process follows. According to the regression analysis, the point that satisfies conditions (15) and (16) and at the same time has the minimal value of the objective function is selected as a possible solution. For this scenario the deterministic optimization is performed and the probabilities that the thus acquired solution satisfies the deterministic conditions are assessed. If the assessed probabilities satisfy conditions (15) and (16) and at least one of them is also sufficiently close to its desired value, the algorithm ends and the current deterministic solution is the solution of the whole task. In the opposite case, this newly assessed point is added as another point to the regression analysis. Regression, new point selection, deterministic optimization and probability assessment are all performed again. This iterative procedure continues until the assessed probabilities satisfy conditions (15) and (16) and at least one of them is also sufficiently close to its desired value.

5.5 Heuristic Algorithm

The proposed algorithm does not solve minimization task (14) but actually the task:

min_ξ min_x f(x, ξ).   (39)

Therefore, the algorithm is heuristic and provides only a suboptimal solution.

6 Calculation Results

Table 1 contains the values of the design variables which were found as the solution to the example defined in Chap. 4. The probability that the solution satisfies the ULS condition was assessed as 99.99281% (the desired value was 99.99277%). The probability that the solution satisfies the SLS condition was assessed as 99.93238% (the desired value was 93.31928%); obviously, the ULS condition was more difficult to achieve in this case. Both probabilities were subsequently tested by Monte Carlo simulation. From 10^5 scenarios, 68 failed the deterministic condition for the SLS, resulting in a 99.932%


Table 1 Values of design variables b1, b2, b3, h1, h2, h3 (mm) and As1, As3 (mm²) in finite elements e1, …, e10 as a solution to the example defined in Chap. 4

       e1      e2      e3      e4      e5      e6      e7      e8      e9      e10
b1     200.0   489.3   553.1   707.1   837.8   837.8   707.1   553.1   489.3   200.0
b2     100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
b3     630.6   900.0   557.6   696.3   806.4   806.4   696.3   557.6   900.0   630.6
h1     150.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   150.0
h2     100.0   150.0   150.0   150.0   150.0   150.0   150.0   150.0   150.0   100.0
h3     100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0   100.0
As1    102.3   160.7   141.0   170.6   195.0   195.0   170.6   141.0   160.7   102.3
As3    347.0   1149.6  2053.7  2676.1  3170.1  3170.1  2676.1  2053.7  1149.6  347.0

probability, which is in accordance with the assessed value. For the ULS, there were 8 scenarios that failed, resulting in a 99.992% probability. This is lower than desired but within the tolerance of the testing. The solution was found for the scenario q = 34.2438 kN/m, N = 4.3080 kN. Its objective function value is 4318.1 CZK. Evidently, the values from Table 1 are not suitable for actual design purposes. It would be necessary to make some further adjustments to actually use this output, which is not the aim of this paper. Table 1 contains some peculiar values (b3 in e1, e2, e9 and e10, and also some asymmetric values). This is caused by the deterministic optimization algorithm, the GRG method. Since the only function of the width b3 is to accommodate the reinforcement area As3, this value can obviously be lowered in the mentioned elements. Nevertheless, the optimization algorithm kept the current values, because the minimum (defined as a reduced gradient sufficiently close to zero) was already reached.

7 Conclusions

An algorithm to stochastically optimize the design of reinforced concrete structural elements was developed. This algorithm was successfully tested on a simple example. The probability assessment of the proposed algorithm was tested by the Monte Carlo method with a small inaccuracy, which requires further attention. Future work should focus on reaching a higher precision of the probability of failure, which will probably require a rework of the probability assessment procedure, as well as on using the algorithm in problems with more than 2 random variables (the current probability assessment procedure would probably also behave unacceptably in these conditions).


References

1. Chakrabarty, B.K.: A model for optimal design of reinforced concrete beam. J. Struct. Eng. 108, 3238–3242 (1992)
2. Frangopol, D.M., Kongh, J.S., Ghareibeh, E.S.: Reliability-based life-cycle management of highway bridges. J. Comput. Civil Eng. 15, 27–34 (2001)
3. Ziemba, W.T., Wallace, S.W.: Applications of stochastic programming. In: SIAM conference (2004)
4. Cajanek, M., Popela, P.: A two-stage stochastic program with elliptic PDE constraints. In: Proceedings of 16th International Conference on Soft Computing MENDEL 2010, Brno, pp. 447–452 (2010)
5. Zampachova, E.: Approximations in stochastic optimization and their applications. Ph.D. thesis, Brno University of Technology (2009)
6. Plsek, J.: Design optimisation of concrete structures. Ph.D. thesis, Brno University of Technology (2011)
7. Klimes, L., Popela, P.: An implementation of progressive hedging algorithm for engineering problems. In: Proceedings of 16th International Conference on Soft Computing MENDEL 2010, Brno, pp. 459–464 (2010)
8. Sklenar, J., Popela, P.: Integer simulation based optimization by local search. Procedia Comput. Sci. 1(1), 1341–1348 (2010)
9. Sandera, C., Popela, P., Roupec, J.: The worst case analysis by heuristic algorithms. In: Proceedings of 15th International Conference on Soft Computing MENDEL 2009, Brno, pp. 109–114 (2009)
10. EN 1990:2002+A1—Eurocode: Basis of structural design

Popular Optimisation Algorithms with Diversity-Based Adaptive Mechanism for Population Size

Radka Poláková and Petr Bujok

Abstract An adaptive mechanism for the population size, which is a parameter of many optimisation algorithms, was recently proposed. The mechanism is based on controlling the population diversity. It was very successful when implemented into the differential evolution algorithm and its adaptive versions. The mechanism is employed in several popular optimisation algorithms here. The goal of the research was to find out whether the mechanism can also increase their performance. The experimental tests are carried out on the CEC2014 benchmark set.

Keywords Swarm intelligence based algorithm · Bio-inspired algorithm · Population size · Adaptation of population size · Population diversity · Experimental comparison

1 Introduction

A new adaptive mechanism for the population size [1] has recently been proposed, originally for the differential evolution algorithm [2]. When it was implemented into the basic version of the algorithm, it increased its efficiency significantly in more than 80% of all tested cases. No deterioration of results occurred there. The mechanism was also implemented into several adaptive versions of the algorithm. Overall, the modified versions of the adaptive algorithms were significantly better than the original versions in about 60% of the tested cases and significantly worse only in about 10% of the tested cases [1]. These results lead to a natural effort to apply the proposed mechanism also in other optimisation methods. The ABC [3], Bat [4], Cuckoo [5], DFO [6], Firefly [7], Flower

R. Poláková (B)
Centre of Excellence IT4Innovations, Division of OU, Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, Ostrava, Czech Republic
e-mail: [email protected]
P. Bujok
Department of Informatics and Computers, University of Ostrava, Ostrava, Czech Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_15


[8], PSO [9], and SOMA [10] algorithms were selected for this study (the same as in our previous one [11]). All of them except for the Flower algorithm are classed as Swarm intelligence based algorithms. The Flower algorithm is a Bio-inspired algorithm [12]. Each of these algorithms solves an optimisation problem of the following form. Let us have a real function f (the so-called objective function) and a D-dimensional search space S, S = ∏_{j=1}^{D} [a_j, b_j], a_j < b_j, j = 1, 2, . . . , D. A global minimum point x*, for which f(x*) ≤ f(x) holds for all x = {x1, x2, . . . , xD} ∈ S, is searched for. The remaining part of this paper is organised as follows. The mechanism of diversity-based population size adaptation proposed in [1] is presented in Sect. 2. The eight algorithms included in our experiments are described in Sect. 3. In the next section, the carried-out experiment is described, and then the experiment's results are presented and discussed. The last section concludes the paper.

2 Diversity-Based Adaptation of Population Size

An adaptive mechanism which adapts the population size was proposed recently [1]. In the mechanism, originally proposed for the differential evolution algorithm (DE), the population diversity is controlled by either enlargement or reduction of the population size. When the population diversity is large, the evolutionary process of DE has a bigger power to create new points in different areas of the search space than in the case of lower diversity. On the other side, function evaluations can be saved when the population diversity is high enough. By removing one point from the population, a function evaluation is saved and the diversity is not decreased so much. By adding a random point into the population, its diversity can be increased and the requirement on the amount of function evaluations does not increase so much. Thus, a point is removed or added when suitable in order to exploit fully the time assigned to the search and to reach a better solution. The population diversity is measured by the DI characteristic defined by the expression

DI = √( (1/N) Σ_{i=1}^{N} Σ_{j=1}^{D} (x_ij − x̄_j)² ),   (1)

where N is the current size of the population and x̄_j is the mean of the jth coordinate of the points in the current generation of population P,

x̄_j = (1/N) Σ_{i=1}^{N} x_ij.   (2)

DI is the square root of the average squared distance between a population point and the centroid x̄ = (x̄_1, x̄_2, . . . , x̄_D).


When the initial population is created, the diversity is measured and labelled as DIinit. Then, it is used as the reference value in the definition of the relative measure RD of the diversity in the current generation of the population (iteration of the cycle in an optimisation algorithm):

RD = DI / DIinit.   (3)

The relative number of currently depleted function evaluations is defined by

RFES = FES / MaxFES,   (4)

where FES is the current count of function evaluations and MaxFES is the count of function evaluations allowed for the search. The size of the population is changed depending on the current relative diversity. It is suggested to keep the relative diversity RD near its required value rRD, which decreases linearly from the value 1 at the beginning of the search to a value near 0 at the end of the search (Fig. 1). It means changing the population size only when the current relative diversity is less than 0.9 × rRD or larger than 1.1 × rRD. This is suggested for the first nine tenths of the search; in the last tenth, the zero value is strictly required. The RD characteristic is computed after each generation. The size of the population is increased by 1 and a random point from the search space is added when RD is less than 0.9 × rRD. The population size is decreased by 1 and the worst point is excluded when RD is larger than 1.1 × rRD. There are minimal and maximal

Fig. 1 Required relative diversity rRD and the area of allowed relative diversity


values of population size used in the approach, Nmin and Nmax . The search process starts with Ninit . Nmin = 8, Nmax = 5 × D, and Ninit = 50 are suggested and also used in our experiments in this paper (as in [1]).
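A compact Python sketch of the mechanism as we read Eqs. (1)–(4) and the rule above; the linear schedule rRD = 1 − RFES over the first nine tenths of the search is our interpretation of Fig. 1, and the function is meant to be called once per generation inside any of the population-based algorithms of Sect. 3.

```python
import numpy as np

def diversity(pop):
    """DI of Eq. (1): root mean square distance of the population points from their centroid."""
    centroid = pop.mean(axis=0)                      # x-bar_j of Eq. (2)
    return np.sqrt(np.mean(np.sum((pop - centroid) ** 2, axis=1)))

def adapt_population_size(pop, fitness, fobj, di_init, fes, max_fes,
                          lower, upper, n_min=8, n_max=None):
    """One application of the diversity-based resize rule, called after a generation."""
    D = pop.shape[1]
    n_max = n_max if n_max is not None else 5 * D
    rd = diversity(pop) / di_init                    # Eq. (3)
    rfes = fes / max_fes                             # Eq. (4)
    r_rd = 1.0 - rfes if rfes < 0.9 else 0.0         # assumed linear schedule for rRD
    if rd < 0.9 * r_rd and len(pop) < n_max:         # diversity too low: add a random point
        new = np.random.uniform(lower, upper, size=(1, D))
        pop = np.vstack([pop, new])
        fitness = np.append(fitness, fobj(new[0]))
        fes += 1
    elif rd > 1.1 * r_rd and len(pop) > n_min:       # diversity too high: drop the worst point
        worst = np.argmax(fitness)                   # minimisation: largest objective value
        pop = np.delete(pop, worst, axis=0)
        fitness = np.delete(fitness, worst)
    return pop, fitness, fes
```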

3 Algorithms in Experiments

The ABC (Artificial Bee Colony) algorithm was proposed by Karaboga in 2005 [3]. The algorithm models the behavior of a bee flock. Each bee in the hive is either employed or unemployed; an unemployed bee is either an onlooker or a scout bee. Thus, the flock consists of three types of bees: employed bees, onlooker bees, and scouts. The jth position of an employed bee xi is updated by y_i,j = x_i,j + U(−1, 1) × (x_i,j − x_r,j), where j is a randomly selected index from {1, 2, . . . , D} of the position to be updated (D is the dimension of the problem), r is a randomly selected bee different from the current bee xi, and U(−1, 1) is a random number from the uniform distribution with the parameters given in parentheses. Then a greedy selection between xi and yi proceeds. Onlooker bees select a food source based on information from the employed bees; the probabilities are computed based on the employed bees' fitness (objective functions). When a food source is selected, the onlooker bee generates its new position similarly to an employed bee (but starts from the selected position instead of its own position), and then the greedy selection is applied. If the food source of an employed bee is exhausted (the objective function cannot be improved through more than limit trials), the bee becomes a scout bee. The scout bee finds its food source randomly (yi is a random point from the search space). The limit parameter is the only input parameter.

The Bat algorithm simulates the echolocation behavior of real bats, controlled by emission rate and loudness. When a prey is located, the bats adjust the parameters of their signal. The artificial representation of this type of animal uses a parameter setting which follows the original publication of Yang [4]. The maximal and minimal frequencies are set to frmax = 2 and frmin = 0, respectively. A local-search loudness parameter is initialized to Ai = 1.2 for each bat-individual and reduced, if a new bat position is better than the old one, using the coefficient α = 0.9. The emission rate parameter is initialized for each bat-individual to ri = 0.1 and increased by the parameter γ = 0.9 in the case of a successful offspring.

The Cuckoo (Cuckoo search) algorithm was introduced by Yang and Deb [5]. This algorithm was inspired by the cuckoo birds' nest parasitism. There are N nests (cuckoos/birds) in the population. In each iteration of the algorithm, a new solution is generated by a Lévy flight random walk. Then a random nest is selected, and if the new solution is better, the original is rewritten by it. A pa-fraction of nests is abandoned and new nests are generated. Then only the best nests are invited into the next iteration. The probability of the cuckoo eggs being laid in a bird-host nest is set to pa = 0.25 and the control parameter of the Lévy flight random walk is set to λ = 1.5 in our study.

The DFO (Dispersive Flies Optimisation) algorithm [6] was proposed in 2014 by al-Rifaie. The algorithm is a model of the swarming behaviour of flies over food sources in nature. The behaviour consists of two tightly connected mechanisms, one is the


formation of the swarms and the other is their breaking or weakening. A population of flies is randomly initialised in the search space. In each iteration, each component of each member of the population is randomly and independently recomputed, taking into account the component's value, the corresponding value of the best neighbouring fly (the one with the least objective function value when minimising; a ring topology is considered), and the corresponding value of the best fly in the whole swarm. The only control parameter, called the disturbance threshold, is set to dt = 1 × 10^-3.

The Firefly algorithm models the life of a swarm of fireflies in a simplified form. The most important aspect of the swarm's life for the inner operation of the Firefly algorithm is how one firefly attracts another. In this algorithm, there is a population of fireflies moving inside the search space. In two nested cycles, two particular fireflies are compared and the worse one moves towards the better one. This artificial model of fireflies' attraction has several control parameters. They are set to recommended values: the randomisation parameter α = 0.5, the light absorption coefficient γ = 1, and the attractiveness is updated using its initial β0 = 1 and minimal βmin = 0.2 values. The algorithm was proposed by Yang in 2007 [7].

From the family of algorithms modelling the life of plants, the Flower algorithm (flower pollination algorithm for global optimisation) [8] was selected. It was proposed in 2012. The Flower algorithm has an initial population of points. In each generation (iteration), for each point, a new position is computed by either global pollination or local pollination. The global pollination is done using a step vector L which obeys a Lévy distribution, with probability p, and p = 0.8 in our experiments. If the new point is better than the original one, the point (the new position of the flower) is placed into the population instead of the original one. The parameter controlling the Lévy distribution is set to λ = 1.5, as in the Cuckoo algorithm.

The next algorithm included in our study, which also belongs to the swarm intelligence based algorithms, is PSO (Particle Swarm Optimization) [9], originally proposed in 1995. The very popular and often studied PSO works with a swarm of animals (x_i, i = 1, 2, \ldots, N) which move inside the search space from their random initial positions x_{i,0} with speeds v_i adjusted in each generation using the best solutions already encountered in the search process (the global best g_{best} and the local bests p_{i,best}). The speeds are also initialised randomly. The global best is the best position in the swarm's history and the local best is the up-to-now best historical position of the current particle. The new position of a particle is computed as x_{i,G+1} = x_{i,G} + v_{i,G+1}. The new speed v_{i,G+1} is computed as v_{i,G+1} = w v_{i,G} + \varphi_{1} u_{1} \times (p_{i,best} - x_{i}) + \varphi_{2} u_{2} \times (g_{best} - x_{i}), where w, \varphi_{1}, and \varphi_{2} are input parameters of the algorithm and u_{1}, u_{2} are random D-dimensional vectors whose elements are drawn from the uniform distribution on the interval [0, 1]. This basic variant of PSO, with the particles' velocity update slightly enhanced by the variation coefficient w and the coefficient c, is used in our experiments. The control parameter of variation w is set for each generation as a linear interpolation from the maximal value wmax = 1 to the minimal value wmin = 0.3. The parameter controlling the local and the global part of the velocity update is set to c = 1.05. The speeds are updated by


v_{i,G+1} = w_{G+1} v_{i,G} + c u_{1} \times (p_{i,best} - x_{i}) + c u_{2} \times (g_{best} - x_{i}) in the slightly enhanced version of PSO.

SOMA (Self-Organising Migrating Algorithm) was proposed in 2000 as a model of a hunting pack of predators [10]. In each generation, each predator jumps towards the leader of the pack and takes the best of all the positions visited during the movement to the leader as its new position. The movement is not necessarily made in all dimensions. SOMA has several control parameters and particle strategies. The best settings based on our previous experiments with the algorithm were used in this paper. The parameter controlling the length of an individual's way towards the leader is set to PathLength = 2. The parameter which determines the length of the single jumps on the way to the leader is set to Step = 0.11. Each dimension is selected for changing with probability Prt = 0.1. The direction of movement is chosen for each predator and migration step separately. There are several strategies of the individuals' (predators') movement; the best performing strategy in our previous experiments, all-to-one (the leader is the best point), is applied here.
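For illustration, the slightly enhanced PSO update described above can be sketched in Python as follows. This is an illustrative sketch, not the authors' code; the array shapes and the clipping of positions back into the search space are assumptions.

import numpy as np

def pso_step(pos, vel, p_best, g_best, generation, max_generations,
             low, high, w_max=1.0, w_min=0.3, c=1.05):
    """One generation of the enhanced PSO velocity and position update."""
    n, dim = pos.shape
    w = w_max - (w_max - w_min) * generation / max_generations  # linear interpolation of w
    u1 = np.random.uniform(0.0, 1.0, size=(n, dim))
    u2 = np.random.uniform(0.0, 1.0, size=(n, dim))
    vel = w * vel + c * u1 * (p_best - pos) + c * u2 * (g_best - pos)
    pos = np.clip(pos + vel, low, high)                          # keep particles inside S
    return pos, vel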

4 Experiments and Results

The eight popular optimisation algorithms described in the previous section were tested at three levels of dimension (10, 30, and 50) on the CEC 2014 benchmark set [13]. The tests were carried out as required by [13]: each run was stopped when FES (the number of depleted function evaluations) exceeded MaxFES, with MaxFES set to D × 10^4. For each function, dimension, and algorithm, 51 independent runs were carried out. The diversity-based adaptive mechanism for population size described in Sect. 2 was implemented into these optimisation algorithms. The obtained algorithms, namely ABCdiv, Batdiv, Cuckoodiv, DFOdiv, Fireflydiv, Flowerdiv, PSOdiv, and SOMAdiv, were tested under the same conditions. The results of each algorithm and its modified version were compared at each dimension for each function by the Wilcoxon rank-sum statistical test with the significance level 0.05. Summarised results of these comparisons are shown in Table 1. The results of the same experiment for DE, in which the most popular strategy DE/rand/1/bin was used and the input parameters were set to F = 0.8 and CR = 0.5, are also included. For each algorithm, the number of wins of its version with the diversity-based mechanism (# Wins), the number of wins of its original version (# Losses), and the number of cases where a tie occurred between the results of both compared algorithms (# ≈) are shown. One can see that for DE no loss appeared for the version with the diversity-based mechanism, and only for a small number of test problems was this version not better than the original one. Such a great success does not occur for the algorithms involved in this study, but the overall results are very promising. In the experiments of this paper, the algorithms with the new mechanism for population size adaptation were better in 41% of the 720 tested cases, and the original algorithms were better in about 17% of the tested cases. Regarding the dimensions separately, the best results appeared for dimension 30, where the algorithms with the diversity-based mechanism were successful


in almost half of the relevant cases. For the other two tested dimensions, 10 and 50, the rates of the diversity-based versions' wins were about 32% and 42%, respectively. The rates of the diversity-based versions' losses were about 10%, 18%, and 23% for dimensions 10, 30, and 50, respectively. The highest rate of statistical agreement between both versions' results appeared in dimension 10. Only the DFO algorithm is better in its original version according to our experimental results. The other algorithms involved in our experiments are better when the diversity-based mechanism is employed. For some of them, there is a small difference between the number of cases where the modified version won and the number of cases where the original version won (ABC, Cuckoo). The difference is quite large for other algorithms. For ABC, there is almost no influence made by the implementation of the new adaptive diversity-based mechanism. The most positive influence appeared for the PSO and Bat algorithms. The version with the adaptive mechanism was less successful in D = 10 for the Bat algorithm. The implementation of the mechanism was also quite helpful in the case of Cuckoo in D = 10, Firefly in dimensions 30 and 50, Flower in dimensions 10 and 30, and SOMA in dimensions 10 and 30. The implementation brought a deterioration of the algorithm's results in the case of Cuckoo in D = 50 and Firefly in D = 10 (and also for DFO overall and in dimensions 30 and 50). The results were almost the same for both versions of the algorithms in all remaining cases (Cuckoo in D = 30, Flower in D = 50, and SOMA in D = 50).

Table 1 Counts of wins and losses of the algorithm's version with the diversity-based mechanism versus the original one with fixed population size, according to the significance of the Wilcoxon rank-sum test, for all tested algorithms and also for DE

Algorithm           Metric      D=10   D=30   D=50   All
ABC                 # Wins         4      9      6     19
                    # Losses       0      5      8     13
                    # ≈           26     16     16     58
Bat                 # Wins         4     20     25     49
                    # Losses       3      3      3      9
                    # ≈           23      7      2     32
Cuckoo              # Wins        23     15      5     43
                    # Losses       4     10     14     28
                    # ≈            3      5     11     19
DFO                 # Wins         3      2      0      5
                    # Losses       3      6      5     14
                    # ≈           24     22     25     71
Firefly             # Wins         1     15     21     37
                    # Losses       7      4      1     12
                    # ≈           22     11      8     41
Flower              # Wins        18     16     11     45
                    # Losses       6      6     12     24
                    # ≈            6      8      7     21
PSO                 # Wins        11     24     23     58
                    # Losses       1      1      1      3
                    # ≈           18      5      6     29
SOMA                # Wins        14     15     10     39
                    # Losses       1      8     11     20
                    # ≈           15      7      9     31
All (without DE)    # Wins        78    116    101    295
                    # Losses      25     43     55    123
                    # ≈          137     81     84    302
DE                  # Wins        25     24     27     76
                    # Losses       0      0      0      0
                    # ≈            5      6      3     14
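To make the comparison procedure concrete, the following Python sketch shows how a single win/loss/tie decision behind Table 1 could be computed, assuming SciPy is available and that orig and div hold the 51 final error values of the original and modified algorithm on one function and dimension. The direction rule based on medians is an illustrative assumption, not code from the paper.

import numpy as np
from scipy.stats import ranksums

def compare_runs(orig, div, alpha=0.05):
    """Classify one function/dimension comparison as a win, loss, or tie for the 'div' version."""
    _, p = ranksums(div, orig)          # two-sided Wilcoxon rank-sum test
    if p >= alpha:
        return "tie"                    # no significant difference (counted in the '# ≈' row)
    return "win" if np.median(div) < np.median(orig) else "loss"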


All sixteen algorithms tested in our experiments were also compared by the Friedman statistical test. The null hypothesis about the equality of the algorithms' results was rejected in each tested dimension; the significance level was set to 0.05. The algorithms' mean ranks (and absolute ranks, in parentheses) for each dimension are displayed in Table 2. The overall mean ranks are shown in the last column of the table. From the table, it is clear that the version with the adaptive population size mechanism is better than its original version for all algorithms except DFO. The overall winner, the Firefly algorithm, is also the winner in the two larger tested dimensions.

The report [13], where the CEC2014 benchmark set and the conditions for the usage of its test problems are defined, requires the solution to be recorded at 14 stages, 13 stages during the search and the last one at the end of the search. At these stages, the values of the DI characteristic were also stored in order to study not only the evolution of the solutions of the original and modified algorithms, but also the development of the population diversity in them. The graphs for these two characteristics were made from the medians of the obtained values. The implementation of the adaptive diversity-based mechanism brought the largest improvement for the PSO results. When the graphs generated from the medians of the obtained data were studied, the following was observed. For the major part of the graphs for the PSO algorithm, the development of the DI characteristic is very similar for PSO and PSOdiv, but there is one different feature of these graphs which can be observed: the original algorithm reaches the zero value of population diversity much earlier than the modified version of the algorithm (see Fig. 2a).

Table 2 Mean and absolute ranks from Friedman tests

Algorithm     D = 10        D = 30        D = 50        Average
Fireflydiv     6.08 (6)      3.93 (1)      3.27 (1)      4.43
Firefly        5.82 (4)      4.73 (2)      4.30 (2)      4.95
Cuckoodiv      4.32 (1)      5.60 (4)      6.50 (7)      5.47
Cuckoo         6.02 (5)      5.27 (3)      5.42 (3)      5.57
Flowerdiv      4.77 (2)      6.10 (5)      7.10 (9)      5.99
ABCdiv         6.55 (8)      6.15 (6)      5.90 (5)      6.20
SOMAdiv        5.68 (3)      6.37 (7)      6.60 (8)      6.22
ABC            6.92 (10)     6.65 (9)      5.60 (4)      6.39
SOMA           6.73 (9)      6.43 (8)      6.15 (6)      6.44
Flower         6.32 (7)      7.03 (10)     7.40 (10)     6.92
PSOdiv         8.88 (11)     9.13 (11)     9.10 (11)     9.04
PSO            9.92 (12)     10.60 (12)    10.67 (12)    10.39
Batdiv         13.57 (14)    13.17 (13)    13.17 (13)    13.30
Bat            13.43 (13)    13.87 (14)    13.87 (14)    13.72
DFO            15.53 (16)    15.40 (15)    15.30 (15)    15.41
DFOdiv         15.47 (15)    15.57 (16)    15.67 (16)    15.57
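For illustration, the Friedman comparison behind Table 2 could be carried out along the following lines. This is a sketch assuming SciPy; the per-problem ranking used to obtain the mean ranks reflects the usual procedure and is an assumption, not code from the paper.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_summary(results, alpha=0.05):
    """results: dict mapping algorithm name -> array of results, one value per benchmark problem."""
    names = list(results)
    data = np.column_stack([results[n] for n in names])   # problems x algorithms
    stat, p = friedmanchisquare(*data.T)                   # null hypothesis: equal performance
    ranks = rankdata(data, axis=1)                         # rank algorithms per problem (1 = best error)
    mean_ranks = dict(zip(names, ranks.mean(axis=0)))
    return p < alpha, mean_ranks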


Fig. 2 Development of solution and population diversity

In the modified version of the algorithm, the diversity is controlled to be zero only in the last tenth of the search, and this is probably why the use of the mechanism is successful in PSO. The mechanism's implementation was successful only for two out of three tested dimensions for the Bat, SOMA, and Flower algorithms: for Bat in 30 and 50 dimensions, for Flower in 10 and 30 dimensions, and for SOMA in 10 and 30 dimensions. For the SOMA algorithm, the population diversity in the original algorithm often decreases exponentially (as in Fig. 2b). By enforcing its correction in the algorithm, an improvement of the solution occurred for a large part of the problems in 10 and 30 dimensions. For the Flower algorithm, the population diversity changed only in the initial parts of the search for most problems, and it changed relatively fast there (Fig. 2c, d). For some problems, the diversity was almost constant during the whole search (as in Fig. 2e). The implementation of the diversity mechanism managed to support the convergence of the algorithm for 10 and 30 dimensions; that is, the population diversity was smaller than for the original version of the algorithm at the end of the search. For the Bat algorithm in 30 and 50 dimensions, the population diversity was relatively small already at the beginning of the search and then it either only slightly decreased or it remained


small and almost constant during the whole search (as in Fig. 2f). By implementing the mechanism in the algorithm, it was possible to encourage exploration of the search space at the beginning of the search in such cases (the diversity in the algorithm with the mechanism was substantially larger in the initial parts of the search than in the original algorithm) and to improve the algorithm's results. In contrast, for D = 10 the diversity of this algorithm was large in the initial parts of the search and decreased exponentially; the development of the diversity in the algorithm with the mechanism was very similar to that of the original algorithm in this dimension (Fig. 2g). For the Firefly algorithm, the development of population diversity for the original and modified algorithms was very similar for a part of the problems (similarly as in Fig. 3a-c). For the remaining problems, they differ only in the final parts of the search, where the diversity mechanism decreased the population diversity of the original Firefly algorithm (as in Fig. 2h, i). In D = 10, this reduction of diversity at the end of the search, or respectively the adding and removing of points in the population, did not have the expected effect. In contrast, the influence of the mechanism on the improvement of results is relatively large in the higher dimensions. For the Cuckoo algorithm, a slow decrease of population diversity prevailed in the original algorithm (Fig. 3d). In a large part of the tasks, the diversity was almost constant during the whole search (Fig. 3e), but there were also cases where the diversity decreased to zero (Fig. 3f). At D = 10, the population diversity did not change in a part of the cases after the implementation of the diversity mechanism (e.g. Fig. 3e, f). In the remaining cases, the population diversity decreased at the end of the search in comparison with the diversity in the original algorithm (see Fig. 3d). In any case, the application of the diversity mechanism improved the algorithm's results in D = 10. In D = 30, the success of the mechanism was not so large, and the application was not successful in D = 50. For the ABC algorithm and D = 10, the most frequent case is that the diversity decreases at the beginning of the search and then remains constant until the end of the search without decreasing to 0 (see Fig. 3h), or it is almost constant during the whole search (as in Fig. 3g). In both higher tested dimensions, the situation is very similar, with the only difference that after the decrease the diversity often increases a little and then remains constant until the end of the search (as in Fig. 3j). The diversity development in ABC with the mechanism is very similar to the one in the original algorithm (Fig. 3g). For a part of the problems, an increase occurred where the diversity decreased fast in the original algorithm at the beginning of the search (Fig. 3i, j), or, on the other hand, a decrease occurred where the diversity in the original algorithm was constant or decreased slowly at the end of the search (Fig. 3h, j). However, the diversity control in the ABC algorithm had almost no influence on the quality of its results. The population diversity is constant during the whole search in the original DFO algorithm in all 90 tested cases (see Fig. 3k, m). The algorithm's modification caused a decrease only for D = 10 (in the second part of the search, and the decrease was not large) (as in Fig. 3k). DFO's solutions of the problems were deteriorated by the mechanism's application.


Fig. 3 Development of solution and population diversity

5 Conclusions

The adaptive diversity-based mechanism for setting the population size was implemented into several popular optimisation algorithms. Experiments were carried out on the benchmark set developed for the competition organised at the CEC2014 world congress. Overall, the new modifications of the algorithms behaved more effectively than the original versions in more than 40% of the tested cases, whilst the original versions of the algorithms outperformed the new versions in about 17% of the tested cases. Only the


DFO algorithm’s results were deteriorated by the implementation of the new adaptive mechanism. The largest improvement appeared for the PSO algorithm in our experiments.

References

1. Poláková, R., Tvrdík, J., Bujok, P.: Adaptation of population size according to current population diversity in differential evolution. In: 2017 IEEE Symposium Series on Computational Intelligence (SSCI) Proceedings, pp. 2627–2634. IEEE (2017)
2. Storn, R., Price, K.: Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
3. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University, Kayseri, Turkey (2005)
4. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: Gonzalez, J., Pelta, D., Cruz, C., Terrazas, G., Krasnogor, N. (eds.) NICSO 2010: Nature Inspired Cooperative Strategies for Optimization, Studies in Computational Intelligence, vol. 284, pp. 65–74 (2010)
5. Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214 (2009)
6. al-Rifaie, M.M.: Dispersive flies optimisation. In: Federated Conference on Computer Science and Information Systems, 2014, ACSIS—Annals of Computer Science and Information Systems, vol. 2, pp. 529–538 (2014)
7. Yang, X.S.: Nature-Inspired Optimization Algorithms. Elsevier, Amsterdam (2014)
8. Yang, X.S.: Flower pollination algorithm for global optimization. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 7445 LNCS, pp. 240–249 (2012)
9. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: 1995 IEEE International Conference on Neural Networks Proceedings, vols. 1–6, pp. 1942–1948. IEEE Neural Networks Council (1995)
10. Zelinka, I., Lampinen, J.: SOMA—self organizing migrating algorithm. In: Matousek, R. (ed.) MENDEL, 6th International Conference on Soft Computing, pp. 177–187. Brno, Czech Republic (2000)
11. Bujok, P., Tvrdík, J., Poláková, R.: Adaptive differential evolution vs. nature-inspired algorithms: An experimental comparison. In: 2017 IEEE Symposium Series on Computational Intelligence (IEEE SSCI), pp. 2604–2611 (2017)
12. Fister, I., Jr., Yang, X.S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization. Elektrotehniški Vestnik 80(3), 116–122 (2013)
13. Liang, J.J., Qu, B., Suganthan, P.N.: Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization (2013). https://www.ntu.edu.sg/home/epnsugan/

Training Sets for Algorithm Tuning: Exploring the Impact of Structural Distribution

Mashal Alkhalifa and David Corne

Abstract Algorithm tuning, especially in the context of metaheuristic algorithms, is a growing area of research and practice, which concerns configuring the parameters of an algorithm on the basis of a ‘training set’ of problem instances. There is much interest (but little research so far) in the relationship between the training set and generalization performance. In general terms, we expect that, if an algorithm is targeted at problems of a certain type, then the training set of problem instances should represent a diverse collection of problems of that type. However, many questions around this area are still the topic of research. In this paper, we explore how certain structural aspects of the training set problem instances (in contrast, for example, to problem size) affect generalization performance. Specifically, we look at spaces of simple TSP instances where the number of cities is constant, but with different spatial distributions. We find that, when the training problem sets are suitably difficult, an algorithm tuned for a specific spatial distribution tends towards being the best algorithm on test sets following that distribution. Conversely, and significantly, if the target problem set is not from a ‘particularly difficult’ spatial distribution, a better optimizer for that distribution may be produced by choosing a different spatial distribution for the training set. This has implications for the development of real-world optimizers for specific problem classes, such as, for example, vehicle routing problems with particular depot and customer distributions.

Keywords Algorithm tuning · Parameter tuning · Combinatorial optimization

M. Alkhalifa (B) · D. Corne Department of Computer Science, Heriot-Watt University, Edinburgh E14 4AS, UK e-mail: [email protected] D. Corne e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. K˚udela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_16



1 Introduction

There is a wealth of different optimization algorithms in both the research literature and in everyday industrial practice. Each of these algorithms has parameters which need to be set before use, and, to a varying extent, algorithm performance will be sensitive to these parameters. Algorithm tuning, especially in the context of metaheuristic algorithms, is a growing area of research and practice, which concerns configuring the parameters of an algorithm in order to optimize its overall performance on a target set of problem instances. For example, the algorithm may be for production scheduling in a particular factory, and the target set of instances, in that case, would be future problem instances that will presumably resemble typical previous examples of daily scheduling problems faced by that factory. It is important and instructive to stress, again, that the set of target instances is, in all but pathological scenarios, a set of unknown future instances that we have never encountered before. Meanwhile, in order to perform well on such instances, we need to prepare our algorithm by finding parameters that work well across a provided training set of problem instances. In common and intuitively reasonable practice, this training set is typically constructed by either directly using previously known instances, or by constructing synthetic instances with similar properties, or by a combination of both. As an example of the latter (and arguably most common) approach, we may have a handful of instances available from recent days of production in our factory; we will include those in the training set, but we will also attempt to make the training set more representative and robust by constructing further synthetic instances based on these. However, given what we understand about the target set, what is the best way to construct the training set? There is much interest (but little research so far) in this question. The key issue is the relationship between the training set and the generalization performance. In general terms, we expect that, if an algorithm is targeted at problems of type A, then the training set should also represent a diverse collection of problems of type A. However, this glib statement leaves many open questions, and most of those are around the necessarily vague nature of the ‘type’ of instance we are interested in. For example, if we have only one real example problem instance available, how exactly should we ‘fill out’ the training set? Should we generate random instances of strictly the same size (e.g. the number of jobs to process)? Or, should the instances have a range of sizes centred on the expected sizes of target instances? What about the other statistics that might be pertinent? E.g. should we ensure that the distribution of job processing times in our synthetic instances matches those in our small set of real examples? And, if we have too few real examples with which to determine statistical properties, what distribution should we use? In this paper, we start to explore this question by focusing on a single simple scenario, and we look at how certain structural aspects of the training set affect generalization performance. Specifically, we look at spaces of simple travelling salesperson problem (TSP) instances where the number of cities is constant, but with different spatial distributions of the cities. The TSP is arguably a good place to


start in investigations of this type. In our context, this can be justified by the close connection between the TSP and a very wide range of real-world scenarios in which algorithms need to be designed for specific targets. For example, a manufacturer at a certain location L may have to solve, every day, a vehicle routing problem (VRP) in which its fleet of vehicles needs to deliver items from its depot at location L to each of a set of N customers (N varies daily) at locations in the region. The VRP is an extension of the TSP in which, in effect, we must assign delivery tasks to vehicles and simultaneously solve a TSP for each of the vehicles. If our task was to tune the parameters of an algorithm for solving this manufacturer’s daily VRP, then it seems natural to consider the distribution of customer locations that we would expect in a daily task for this customer. It may be instructive to consider a pathological case. Imagine that the manufacturer, and all of the manufacturer’s customers, were situated alongside the A1 road in the UK, between Newcastle and Leeds. This is essentially a straight line, and the solution to any TSP (and, though less trivially, any VRP) involving locations spatially distributed on a straight line is very simple—e.g. we simply travel along the line visiting each customer in turn and head back after we have visited the furthest. In such a case, we might expect the algorithm to be suitably well-tuned by only a small number of examples; if, say, the algorithm applied a set of heuristics, and its parameters guided the application probabilities of these heuristics, then it would quickly learn to only apply the ‘go to nearest unvisited customer’ on this distribution of problems. But in the more general case, where the customers are distributed spatially in 2D (and, more correctly, with the distance data between customers underpinned by a non-Euclidean and non-symmetric matrix arising from the road network), we would expect tuning performance to be more sensitive to the training set. In this paper, we, therefore, start to explore how the spatial distribution of locations in a training set of TSP instances affects the generalization performance of algorithm tuning. The main question of interest in this context is this: (Q1) If an algorithm A has been tuned on a specific spatial distribution D, will A be the most suitable algorithm to use on future instances from distribution D? The answer would seem, intuitively, to likely be yes—indeed this seems to be the implicit assumption in most work on algorithm tuning. However, as we shall see, this is not always the case, and the reason for this seems to be tied up with the underlying difficulty of the problem sets in question. In fact, we find that this intuition seems to be true when the training problem sets are suitably ‘difficult’. However, if the target problem set is not from a ‘difficult’ spatial distribution—perhaps like our A1 example, although more realistic—a better optimizer for that distribution may be tuned by choosing a different spatial distribution for the training set. This has implications for the development of real-world optimizers for specific problem classes, ranging from vehicle routing problems through to many other combinatorial challenges in logistics and transport. The remainder of the paper is set out as follows. In Sect. 
2 we briefly review related work on algorithm tuning, focusing on what seems to be understood so far about the relationship between training sets of problem instances and generalization performance. In Sect. 3 we describe various preliminaries of our experimental setup, such as the simple ‘inner’ optimization algorithm, its parameters, the tuning


algorithm, and the TSP spatial distributions that we explore. Section 4 describes our experiments and summarizes the results; finally, we discuss and conclude in Sect. 5.

2 Related Work

Over the past 20 years or so it has been increasingly recognized that human expertise, even the expertise of algorithm designers, is ill-suited to the task of configuring the parameters of an optimization algorithm. Automated approaches to this task, generally known as ‘algorithm tuning’, have been proposed since the early 2000s (e.g. [1, 2]), and arguably the most prominent approach is the so-called ‘ParamILS’ [3], which is based on iterated local search; meanwhile, various other algorithmic and statistical frameworks are under research for the same purposes, such as ‘iterated statistical racing’ [4, 5] and sequential model-based configuration [6, 7]. Since tuning studies were initiated, the approach has repeatedly been found to be very effective in designing algorithms with far better performance than their human-configured or human-designed counterparts, in a wide range of application domains [8–10]. Broadly speaking, there is a wide spectrum of algorithm tuning practice, and a similar spectrum of algorithmic frameworks to achieve the tuning process. At one end of the spectrum, the task is to search the space of numerical parameter settings for a relatively fixed algorithm design; this is the area of the spectrum that we study in this paper. At the other end of the spectrum, however, the task is more akin to finding the ideal way to assemble an algorithm from a given collection of components; in this approach, all aspects of the algorithm design, including flow-of-control and strategy decisions, as well as parameters, may be subject to manipulation. However, both approaches are conceptually identical in that they aim to find an algorithm that performs ideally on a given space of problems. At either end of the spectrum, we require a ‘training set’ of problem instances to work with, and the performance of the designed algorithm will in some way be affected by the choice of training instances. However, there has been very little investigation of the relationship between the choice of training instances and generalization performance. The typical approach in algorithm tuning studies is simply to procure a set of instances, divide those instances into a training and a test set, and then demonstrate that tuning over the training set provides improved performance on the test set when compared with the default parameters. Broadly speaking, in such an approach, we can note that the training set and test set invariably belong to the same ‘instance stable’, and hence these researchers are relying on the implicit intuition that, for good generalization performance, the training instances should be from the same broad problem instance distribution as the target instances. With that in mind, we would claim that little work has yet explored the idea of subverting this standard practice of keeping the training and test sets from the same distribution. Nevertheless, some work has started to explore this space. Notably, Liao et al. [11] comprehensively investigated the performance of various state-of-the-art evolutionary algorithms for solving function optimization problems. Their algorithm suite


included evolution strategies, differential evolution, memetic algorithms and more. They used default and common parameter settings first, and then tuned the algorithms, each on a variety of benchmark sets, and evaluated the tuned versions. They found that the outcomes of the tuning process generally improved performance significantly; this was of course expected and consistent with the literature. However, they also showed that the nature of the benchmark set used for training had a very significant impact on the relative ranking of the tuned algorithms. Meanwhile, Corne and Reynolds [12] explored the idea of investigating an algorithm’s performance on a highly structured space of problem instances; e.g. one subspace of instances included job shop scheduling problems where the job processing times were generated from a fixed interval [x, y]; in a neighbouring subspace of instances, the job processing times were generated from [x, y + δ], and so forth. They found that algorithms had distinct ‘footprints’ across this instance space, in terms of the subspaces in which the algorithm specializes (in terms of its performance relative to others); by extension, we can imagine that algorithm tuning morphs such footprints in a way guided by the chosen training set. This area has been further explored by Smith-Miles and co-authors [13–15], with a focus on identifying ways to generate test sets of instances for the evaluation of algorithms, although not yet with a view towards generating test sets for algorithm configuration or tuning. Overall, we note that algorithm tuning is a rich area of current research, very briefly reviewed above. Close consideration of the sets of instances used during the tuning process, and examination of how this relates to generalization performance, has been very little studied, although the literature shows patches of work that impinge on this area. In this paper, we describe an investigation into this area, by focusing on a simple optimizer, a simple tuner, and subspaces of the well-known TSP problem, and we explore how tuned algorithm performance varies with the instance set used during the tuning process.

3 Experiment Preliminaries

3.1 A Simple Algorithm and Its Tuner

We use a straightforward evolutionary algorithm (EA) in this paper. The EA is elitist generational, retaining the fittest in each generation (breaking ties randomly), but otherwise replacing the remaining population with new offspring in each generation. To generate each new offspring, a parent is selected with tournament selection, and a mutation operator is applied. There are three mutation operators available, which each operate at a rate that varies linearly between two fixed values. There are therefore eight parameters: population size P, tournament size T, and the six mutation parameters s1, s2 and s3, and f1, f2 and f3. The mutation parameters are associated with the three mutation operators as follows: mutation operator n begins operating with a rate sn (its starting rate) and finishes with a rate of fn (its finishing rate). As


indicated, while the algorithm runs, the rate is changed in a linear fashion between the two. The mutation operators are, respectively: adjacent-swap mutation, random-pair-swap mutation, and three-gene inversion. Finally, for solving TSP instances of size N (with N cities), the chromosome is simply a permutation of N distinct characters, each representing a city. It is convenient to call the above algorithm the ‘inner EA’. Meanwhile, the tuning algorithm is also an EA, which we will refer to as the ‘outer EA’. The task of the outer EA is to search through the space of parameters for the inner EA. Thus, in the ‘outer EA’ context, a candidate solution is an array of eight numbers: two integers, representing population size and tournament size, and six real numbers representing the mutation parameters. In this way, a candidate solution in the outer EA represents a specific algorithm. To calculate a fitness value for such a candidate algorithm, we run the inner EA (instantiated with the appropriate parameters) on each of a set of problem instances, for an a priori fixed number of evaluations. We record the best solution for each of a fixed number of runs (usually 5, in our experiments) on each member of the set, and consider the median of these to represent performance on that instance. The fitness of the algorithm is then taken to be the mean of its performances over the training set instances. Though it is recognized that results on different problem instances are not necessarily directly comparable, they are all from the same distribution, and we regarded this as a simple and sufficient approach to characterizing performance for the purposes of the experiments. Finally, it is worth recalling that there is, of course, a range of state-of-the-art and sophisticated algorithms both for addressing the TSP and for algorithm tuning. However, our current research question is focused entirely on the relationships between training sets and generalization performance, and we therefore choose to use straightforward algorithms to probe these issues, to enhance the degree to which the effects due to this relationship can be highlighted. The extent to which our findings generalize across a selection of state-of-the-art optimizers and tuners remains an interesting question for further research.
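To illustrate how the outer EA scores one candidate configuration of the inner EA, the following Python sketch follows the procedure just described (a fixed number of runs per training instance, the median per instance, and the mean over the training set). The run_inner_ea callable is a hypothetical stand-in for the inner EA and is passed in rather than defined here; this is a sketch, not the authors' code.

import statistics

def configuration_fitness(params, training_set, run_inner_ea,
                          runs_per_instance=5, budget=5000):
    """params = (P, T, s1, f1, s2, f2, s3, f3); lower fitness means shorter tours on average."""
    per_instance_medians = []
    for instance in training_set:
        lengths = [run_inner_ea(instance, params, budget)   # best tour length found in one run
                   for _ in range(runs_per_instance)]
        per_instance_medians.append(statistics.median(lengths))
    return sum(per_instance_medians) / len(per_instance_medians)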

3.2 Training and Test Sets

An instance of the well-known traveling salesperson problem (TSP) is specified by a set of N locations (usually called cities), and a distance matrix (not necessarily symmetric) providing the distance between each distinct ordered pair of the N locations. In this paper (in common with many others where the TSP is used as an exploratory tool), we specify a TSP simply by generating a list of N 2D co-ordinates, with the distances calculated as Euclidean distances. We generate subclasses of TSP by distributing locations around ‘regions’. A region is simply a fixed location (not necessarily one of the TSP locations). There are two key dimensions of variation: the nature of the distribution around each region, and the number and relative placement of regions. A key parameter included in the


descriptions below is the range, r. This controls the spread of locations in a region, and also the distances between regions.

• Nature of distribution: we look at two kinds, uniform and Gaussian (a code sketch of this generation scheme is given after the lists below). If uniform is chosen, and the region’s centre is (x, y), then co-ordinate x (y) is independently chosen from within a fixed uniform range of size r centred on x (y). This scatters the TSP locations uniformly within a square centred on the region. If Gaussian is chosen, then co-ordinate x (y) is independently chosen from a Gaussian distribution centred on x (y) with a standard deviation of 0.1r.
• Number and placement of regions: we look at several cases as follows:
  – 2-region-short: the cities are distributed across two regions, with a short distance (half of the range) between the centres of the regions.
  – 2-region-long: as 2-region-short, but the distance between region centres is the range r.
  – n-region-[long|short]-linear: more than 2 regions, where the regions are arranged in a straight line; can be long or short in terms of the distance between regions.
  – 4-region-[long|short]-square: 4 regions arranged with their centres forming a square of size 0.5r (short) or r (long).

Altogether, we generated 18 different sets of TSP instances. 10 of each set were retained as a training set, and the other 10 were retained as a test set. In every case, there were 50 locations in total, distributed evenly across the regions. We categorise and briefly describe these sets as follows:

• 1-region sets: uniform and Gaussian, named 1-region-u and 1-region-g
• 2-region sets: 2-region-short-u, 2-region-long-u, 2-region-short-g, 2-region-long-g
• 3-region sets: 3-region-short-u, 3-region-long-u, 3-region-short-g, 3-region-long-g; in each case the 3 regions were aligned in a straight line
• 4-region sets: 4-region-short-u-square, 4-region-long-u-square, 4-region-short-g-square, 4-region-long-g-square, 4-region-short-u-linear, 4-region-long-u-linear, 4-region-short-g-linear, 4-region-long-g-linear.
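For illustration, the region-based instance-generation scheme described above could be implemented along the following lines. This is a sketch under the stated conventions; the coordinate frame and the example range value in the usage line are assumptions.

import random

def make_instance(n_cities, centres, r, distribution="uniform"):
    """Distribute n_cities evenly over the given region centres."""
    cities = []
    for i in range(n_cities):
        cx, cy = centres[i % len(centres)]
        if distribution == "uniform":              # square of side r centred on the region
            x = random.uniform(cx - r / 2, cx + r / 2)
            y = random.uniform(cy - r / 2, cy + r / 2)
        else:                                      # Gaussian with standard deviation 0.1 r
            x = random.gauss(cx, 0.1 * r)
            y = random.gauss(cy, 0.1 * r)
        cities.append((x, y))
    return cities

# e.g. a 2-region-short-g instance: two centres half a range apart, Gaussian spread
instance = make_instance(50, [(0.0, 0.0), (0.5 * 10.0, 0.0)], r=10.0, distribution="gaussian")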

4 Experiments and Results

4.1 Evaluation Approach

For the purposes of evaluation, we needed to generate reference results for each of the problem instances in our 18 sets of 20 TSP instances. To achieve this, we simply ran our inner EA with a fixed parameter set for 20,000 evaluations, repeating that 10 times for each of the 360 instances, and we recorded the best result on each of these instances.

Table 1 An example performance matrix for instance sets A, B, and C

Trained algorithm   Ranking on Atest   Ranking on Btest   Ranking on Ctest
ALGA                1                  2                  2
ALGB                3                  3                  1
ALGC                2                  1                  3

The core of our experimental approach was as follows. Given a collection of different TSP instance training sets:

1. For each TSP training set TS: run the outer EA (the tuning algorithm) on the training set, and store the resulting optimized algorithm configuration as ALGTS.
2. For each optimized algorithm ALGTS, run it on each test set, and record the mean overall result for each test set.

For example, if our collection of TSP instance sets comprised sets labelled A, B, and C, then after step 1 this procedure would provide three tuned algorithms: ALGA, ALGB and ALGC. Following step 2, we would then have identified the performance of these three algorithms on each of the three test sets. Finally, we rank the performance of each of the three algorithms on each test set, which leads to a performance matrix, e.g. as illustrated in Table 1. Each row shows the performance of a tuned algorithm on the test instances associated with each of the instance sets. For example, ALGA was trained (by definition) on Atrain (which shares exactly the same generating spatial distribution as instance set Atest). In this illustration, we see that ALGA had the best performance on Atest, and was second best on each of the other two test sets. Meanwhile, ALGB had the worst performance on Atest and Btest, and performed best on Ctest. A sketch of this procedure is given below.

It is instructive to consider what our prior expectations might be for these performance matrices. Our prior expectation is that the tuning process will specialize an algorithm towards performing well on problem instances from the same precise spatial distribution that it was trained on. In other words, we generally expect the diagonal of such a matrix (given that the rows and columns follow the same ordering of their associated instance sets) to be all ones. Or, failing a clear ‘all 1s’ result that might occur due to statistical noise or other factors, we might expect that the mean rank score on the diagonal would be significantly lower than the mean rank off the diagonal.
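For illustration, steps 1 and 2 and the subsequent ranking can be sketched as follows. Here tune and evaluate are hypothetical stand-ins for the outer EA and for evaluating a tuned algorithm on a test set; this is not the authors' code.

def performance_matrix(training_sets, test_sets, tune, evaluate):
    """Return matrix[trained_on][test_set] = rank of the tuned algorithm on that test set."""
    tuned = {name: tune(ts) for name, ts in training_sets.items()}    # step 1: ALG_TS per training set
    scores = {name: {t: evaluate(alg, test_sets[t]) for t in test_sets}
              for name, alg in tuned.items()}                          # step 2: mean result per test set
    matrix = {}
    for t in test_sets:                                                # rank the tuned algorithms per test set
        ordered = sorted(scores, key=lambda name: scores[name][t])     # lower mean tour length is better
        for rank, name in enumerate(ordered, start=1):
            matrix.setdefault(name, {})[t] = rank
    return matrix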

4.2 Uniform/Gaussian Experiments

In our first set of experiments, we mixed uniform and Gaussian test cases into the same collection, and experimented only with 1-region and 2-region structures. Two important experimental settings were that every TSP instance had a total of 50 locations, and that the inner EA was always run for a maximum of 5,000 evaluations.


Table 2 Tuned algorithm performance outcomes; inner-EA set at 5000 evaluations

Trained algorithm    1-u   1-g   2-short-u   2-short-g   2-long-u   2-long-g
1-region-u            6     6        6           6           6          6
1-region-g            4     4        3           4           4          4
2-region-short-u      3     2        4           2           3          3
2-region-short-g      1     1        2           1           1          1
2-region-long-u       5     5        5           5           5          5
2-region-long-g       2     3        1           3           2          2

This is a relatively small number of evaluations, but it is quite common in circumstances where speedy good solutions are more important than a focus on optimal results at a relaxed timescale (this circumstance is also a very compelling scenario for justifying algorithm tuning). The outcome of these experiments is summarized in the performance matrix of Table 2.

The outcomes from Table 2 defied our initial expectations, and seemed to show that the generalization performance of tuning was certainly not dominated by spatial distribution. Instead, with this particular set of problems and experimental settings, the ranking of algorithm performance in each case seems to more simply reflect what we would regard as the inherent ‘difficulty’ of each problem set. Broadly speaking, we might expect the difficulty of a TSP instance to be related to the variance in distance between nearest neighbours; this would likely be maximized in the case of 2-region-short-g (from this problem set) and minimized by 1-region-u. This initial test led us to think that, if there was any significant effect on generalization from the spatial distribution itself, rather than the inherent difficulty, this might be easier to tease out by reducing the variation in inherent difficulty across the problem instance sets. In our next experiment, we attempted this simply by reducing the number of evaluations of the inner EA to 1000; in other words, a more difficult task was set in every case. The outcome of that set of experiments is in Table 3.

Table 3 Tuned algorithm performance outcomes; inner-EA set at 1000 evaluations

Trained algorithm    1-u   1-g   2-short-u   2-short-g   2-long-u   2-long-g
1-region-u            5     6        6           6           6          6
1-region-g            4     4        5           4           3          3
2-region-short-u      2     3        4           5           4          4
2-region-short-g      1     2        3           1           1          2
2-region-long-u       6     5        1           3           5          5
2-region-long-g       3     1        2           2           2          1

In Table 3 we can see a slight movement towards our a priori hypothesis; for example, the algorithm tuned for 1-region-u now performs better (relatively speaking) on 1-region-u than it does on all other test sets; also, the diagonal mean rank is now very slightly lower than the off-diagonal mean rank (3.3 vs. 3.53), but this is far from significant, and the main effect seems to be the inherent difficulty of the problem set.


But this alone is interesting, since it suggests that for certain sets of target problem instances, the best approach to tuning an optimizer for those targets may be to use a training set of cases which are more difficult in some precise way.

Finally, we attempted to further remove variations in difficulty by removing ‘uniform’ instances from the collection of instance sets. In the next experiment, we therefore only used Gaussian sets, and only considered ‘short’ distances between regions. We also added some additional structures (which were described in Sect. 3.2), and half of the collection of 8 instance sets involved 100 locations. The set of problems, and the corresponding outcomes, can all be seen in Table 4.

Table 4 Tuned algorithm performance outcomes; inner-EA set at 1000 evaluations

Trained algorithm   2-50   4-50-s   3-50-l   4-50-l   2-100   4-100-s   3-100-l   4-100-l
2-50                  5       6        5        7       7        7         7         7
4-50-square           6       5        4        5       5        5         4         5
3-50-linear           1       1        1        2       1        1         2         1
4-50-linear           2       2        2        1       3        4         5         4
2-100                 8       8        8        8       8        8         8         8
4-100-square          4       3        3        3       4        3         2         3
3-100-linear          7       7        7        6       5        6         6         6
4-100-linear          3       4        6        4       2        2         3         2

From the outcomes in Table 4, it is still the case that no solid patterns come through. There is further movement towards the diagonal showing a better mean rank than off the diagonal (3.88 vs. 5.14); however, this table also provides further validation of the finding that the structural nature of the training set strongly affects performance across a wide space of test instances. Again, this is arguably related to the inherent difficulty of the training set instances, although much work needs to be done to isolate and define ‘difficulty’ in this context. Another point worth noting is the performance of algorithms tuned for 50-city problems on 100-city test sets, and vice versa. Algorithms tuned on size-50 instances scored a mean rank of 3.44 on the size-50 test instances, and a mean rank of 4.25 on the size-100 test instances. Meanwhile, algorithms tuned on size-100 instances scored a mean rank of 5.75 on the size-50 test instances, and a mean rank of 4.75 on the size-100 test instances. So, in both cases we can see a weak signal for the tuning process specializing towards the size of the instance. However, this effect is somewhat swamped by the influence of the precise nature of the training instances; for example, the algorithm trained on the 3-50-region-linear instances was best or second best on all test sets, including performing best on 3 of the 4 100-instance test sets.


5 Discussion and Future Work

The general area that concerns us in this paper is: given an algorithm capable of finding good solutions to optimization problems of a specific type, and further, given a subspace of those optimization problems, how can we best configure the algorithm for that subspace? Obviously, this is an extraordinarily wide area with many variables at play; answers to this question will in general depend greatly on details of the problem scenarios involved—which may range from, for example, anything from VLSI circuit design through to train driver scheduling—and clearly also depend on the algorithms involved, how we define a problem subspace, and so forth. But, for immediately practical purposes, the question can be seen as asking the following: when we come to configure an algorithm for a given target class of problems, does it suffice to simply use a training set of problems from that class? In other words: intuition suggests that a diverse set of example problems from the target class will be sufficient for tuning purposes, but is that intuition sound? In this paper we have started to explore that question in the context of a simple optimization algorithm, a simple tuning algorithm, and a simple but appropriate combinatorial problem, the TSP. Specifically, we defined subclasses of TSP with different spatial distributions of the cities, and explored the performance of algorithms that were tuned on different subsets. Our findings were to some extent inconclusive, showing a complex relationship between the performance profiles of different tuned algorithms across the various test problem sets. However, within the complexity, there were certain clear signals as follows:

1. If we wish to tune an algorithm for a target set of TSP instances where the cities come from spatial distribution D, it is sometimes advisable to use a different spatial distribution for the training set.
2. Tuning algorithms by using training sets of problems that are ‘difficult’ leads to tuned algorithms that perform well across a wider space of test problems.

It is important not to over-generalize, since there are many factors at play. One such factor is the simplicity of the algorithms that we deployed in these experiments. With few parameters to tune, the tuning process will have been limited in its ability to tailor algorithms towards specific distributions, making the above outcomes more likely. Then again, the latter is a useful insight that should come into play when tuning ‘simple’ algorithms; that is, in circumstances where we want to tune a ‘simple’ algorithm, we would do well to ensure that the training set comprises more challenging instances than we expect in the target set. This is a quite reasonable scenario in industry contexts, where simple few-parameter algorithms may sometimes be preferred for their robustness. A main area of uncertainty, however, and a key area for future research, is to understand how to characterize the ‘difficulty’ of a problem instance in this context, and then to understand how that relates to the relationship between training sets and generalization. Other interesting areas for further work are concerned with how


to identify other structural features of problem instances that are salient from the tuning viewpoint. If we knew the key such features, this may enable us to construct suitable training sets confidently for a given target set. For example, in TSPs and VRPs, the clustering coefficient and similar features of the spatial distribution may be informative.

Acknowledgements We are grateful to King Abdulaziz City for Science and Technology (KACST) for their support of the first author.

References

1. Audet, C., Orban, D.: Finding optimal algorithmic parameters using derivative-free optimization. SIAM J. Optim. 17(3), 642–664 (2006)
2. Nannen, V., Eiben, A.E.: A method for parameter calibration and relevance estimation in evolutionary algorithms. In: Cattolico, M. (ed.) Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2006, pp. 183–190. ACM Press, New York, NY (2006)
3. Hutter, F., Hoos, H.H., Stützle, T.: Automatic algorithm configuration based on local search. In: Holte, R.C., Howe, A. (eds.) Proceedings of the Twenty-Second Conference (AAAI'07), pp. 1152–1157. AAAI Press/MIT Press, CA (2007)
4. Balaprakash, P., Birattari, M., Stützle, T.: Improvement strategies for the F-race algorithm: sampling design and iterative refinement. In: Bartz-Beielstein, T., Blesa, M.J., Blum, C., Naujoks, B., Roli, A., Rudolph, G., Sampels, M. (eds.) Hybrid Metaheuristics, LNCS, vol. 4771, pp. 108–122. Springer, Heidelberg, Germany (2007)
5. Birattari, M., Yuan, Z., Balaprakash, P., Stützle, T.: F-race and iterated F-race: an overview. In: Bartz-Beielstein et al. [16, pp. 311–336]
6. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: Coello Coello, C.A. (ed.) Learning and Intelligent Optimization, 5th International Conference, LION 5, Lecture Notes in Computer Science, vol. 6683, pp. 507–523. Springer, Heidelberg, Germany (2011)
7. Hutter, F., Hoos, H., Leyton-Brown, K.: An evaluation of sequential model-based optimization for expensive blackbox functions. In: Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1209–1216. ACM (2013)
8. Fukunaga, A.S.: Automated discovery of local search heuristics for satisfiability testing. Evol. Comput. 16(1), 31–61 (2008)
9. Lopez-Ibánez, M., Stützle, T.: Automatic configuration of multi-objective ACO algorithms. In: Dorigo, M., et al. (eds.) Swarm Intelligence, 7th International Conference, ANTS 2010, Lecture Notes in Computer Science, vol. 6234, pp. 95–106. Springer, Heidelberg, Germany (2010)
10. Marmion, M.-E., Mascia, F., Lopez-Ibanez, M., Stützle, T.: Automatic design of hybrid stochastic local search algorithms. In: Blesa et al. [34, pp. 144–158]
11. Liao, T., Molina, D., Stützle, T.: Performance evaluation of automatically tuned continuous optimizers on different benchmark sets. Appl. Soft Comput. 27, 490–503 (2015)
12. Corne, D.W., Reynolds, A.P.: Optimisation and generalisation: footprints in instance space. In: International Conference on Parallel Problem Solving from Nature, pp. 22–31. Springer, Berlin, Heidelberg (2010)
13. Smith-Miles, K., Lopes, L.: Generalising algorithm performance in instance space: a timetabling case study. In: International Conference on Learning and Intelligent Optimization. Springer, Berlin, Heidelberg (2011)



Choosing an Optimal Size Range of the Product “Pipe Clamp” Ivo Malakov and Velizar Zaharinov

Abstract The paper presents results from the approbation of an approach for choosing an optimal size range of the product “Pipe clamp”. The latter is particularly suitable for size range optimization, as it is produced in large series, and an appropriate reduction of the variety of sizes would lead to a significant effect. Market research is carried out, an optimality criterion is chosen, and a mathematical model of the problem is proposed. The model accounts for a specific feature of the problem—each clamp can be applied only to a certain range of pipe diameters. On the basis of a known optimization method a recurrent dependency is derived. The latter is used for calculation of the objective function and for the development of application software. The proposed approach is universal and can be used for size range optimization of other products, after building the particular demand and cost models. Keywords Size ranges · Optimization · Mathematical model · Approach · Costs models · Pipe clamp · Algorithm · Dynamic programming · Application software

1 Introduction A significant part of the mass products in mechanical engineering, electronics, electrical engineering, the automotive industry, and other industry branches are produced in size (parametric) ranges, which are at the basis of the modular product families [3, 5, 7, 13, 21–23]. Significant investments are required for their design, production and operation, and their effectiveness relies largely on the elements that build the size ranges [16, 19, 20]. That is why achieving good economic results for the manufacturers, as well as for the users, requires precise and scientifically sound determination of the elements included in the size ranges, and of the values of their main parameters. To achieve this, it is necessary to solve the problem of choosing an optimal size range, which for minimum costs and/or maximum effectiveness (profit) in the


Fig. 1 Pipe clamp with one screw

fields of production and operation, has to completely satisfy a certain product demand in terms of quantity [18]. This problem is particularly topical under the current conditions of strong competition caused by the globalisation of the world economy, as it significantly predetermines the marketing success of the manufactured products [15, 22]. The paper discusses a size range of the product “Pipe clamp” with one screw (Fig. 1). The elements of the size range are to be used for fastening pipes with an outer diameter in the range of 12–65 mm. The design of the clamp is innovative, and the assembly and disassembly are carried out without a tool, using only one hand. To market the clamp, it is necessary to determine which elements of the product’s size range have to be produced. The product has a broad application in the construction industry when building pipelines and pipe installations in residential, industrial, office, public and other types of buildings. Its main function is the fastening of pipes, through which different fluids flow, to walls, ceilings and other bases. This product is particularly suitable for size range optimization, as it is produced in large series, and an appropriate reduction of the variety of sizes, and the related lowering of costs, would lead to a significant effect. There are size ranges of clamps of different manufacturers offered on the market (Foerch, Sina Befestigungssystem, Wuerth, Berner, Fischer, Flamco, HILTI, Erico, Gerlach Zubehörtechnik, RECA, Muepro, Walrawen, Inka Fixing, MEFA Befestigungs- und Montagesysteme, Sikla, Techkrep, etc.), which are characterized by different numbers of sizes, application ranges, and different values of the main product parameters (Tables 1 and 2). The data in the tables are taken from the companies’ online catalogues, accessed in January 2019 [2, 8–11]. The analysis of the specialized literature shows that there is no information about the way the size ranges of the product are built, as this is know-how of the manufacturing companies, and they do not publish their research on the subject. Additionally, in known publications dealing with size range optimization [4, 6, 14, 16, 20, 22], the problem of products that are characterized by a certain application range is not solved. The pipe clamp is such a product, as each clamp can be used for fastening pipes with an outer diameter corresponding to a certain clamping range. Solving the problem without taking this feature of the product into account leads to size ranges that include elements with overly wide application ranges, which cannot reliably fulfil their main function due to design limitations.

Table 1 Size ranges of pipe clamps (application ranges in mm):
Foerch: 12–14, 15–19, 20–23, 25–28, 32–35, 40–44, 48–51, 53–58, 60–65
Sina: 11–15, 15–19, 21–23, 26–28, 32–35, 40–43, 48–51, 52–56, 57–60, 62–64
Wuerth: 11–15, 15–19, 21–23, 26–28, 32–35, 40–43, 48–51, 52–56, 57–60, 62–65
Berner: 15–16, 17–19, 20–23, 25–28, 32–35, 40–43, 44–49, 50–55, 54–58, 64–69
Fischer: 12–14, 15–19, 20–24, 25–30, 32–37, 40–44, 45–50, 56–63
Flamco: 12–14, 15–18, 19–22, 23–26, 27–30, 32–35, 36–39, 40–44, 48–52, 53–58, 58–61
Hilti: 10–14, 15–20, 20–26, 26–32, 32–38, 38–45, 45–53, 54–63
Erico: 12–14, 15–19, 21–23, 26–28, 32–35, 40–43, 44–49, 48–52, 54–58, 57–61, 63–67

Table 2 Size ranges of pipe clamps (application ranges in mm):
Gerlach: 12–16, 15–19, 20–23, 26–30, 31–36, 40–43, 48–53, 50–55, 54–59, 60–64
RECA: 11–15, 15–19, 21–23, 26–28, 32–35, 40–43, 48–51, 52–56, 57–60, 62–64
Muepro: 12–14, 15–18, 19–22, 23–27, 26–30, 32–35, 37–41, 42–46, 48–51, 53–58, 59–64
Walrawen: 11–14, 15–18, 20–23, 25–28, 31–35, 36–39, 40–43, 44–45, 48–51, 53–56, 57–63
Inka: 12–15, 16–20, 21–25, 26–30, 32–37, 42–46, 48–52, 54–58, 60–65
MEFA: 12–20, 21–27, 28–35, 38–45, 48–56, 57–63
Sikla: 12–15, 16–19, 20–23, 25–28, 31–35, 42–45, 48–53, 54–58, 59–63
Techkrep: 11–15, 16–20, 20–24, 25–28, 32–35, 39–46, 48–53, 54–58, 59–66

This makes it difficult to choose the elements that build the size range of the innovative pipe clamps, and their corresponding parameters, which is one of the fundamental prerequisites for competitiveness and a good market share for the manufacturer. The aim of the publication is to show the results from the application of a developed approach, methods and application software for size range optimization of the product “Pipe clamp”.


2 Size Range Optimization Choosing an optimal size range is related to solving the following problem [18]: determine the elements (the values of their main parameters) in a product size range, and the required quantity of each size range element, such that the size range satisfies all demand with respect to type and quantity and optimizes a chosen objective function under given constraints. Solving the so defined problem is carried out by an approach that consists of the following main stages [19]:
Stage 1. Choice of main parameters.
Stage 2. Determining demand.
Stage 3. Choice of optimality criterion.
Stage 4. Determining the functional relationship between the optimality criterion and the influencing factors.
Stage 5. Establishing a mathematical model.
Stage 6. Choice of a mathematical method.
Stage 7. Algorithmic and software development.
Stage 8. Solving the problem—choosing an optimal size range.
Stage 9. Sensitivity analysis of the optimal solution.

2.1 Stage 1. Choice of Main Parameters Every product is characterized by a number of parameters, some of which are main (basic) and others secondary (additional). The main parameters of the product determine the capability of executing a number of predetermined functions, and the secondary parameters are all the rest, which have secondary significance for the product and do not define the product’s ability to execute its appointed tasks. The appropriate determination of the set of main parameters, with respect to which the optimization will be carried out, predetermines to a significant degree the effectiveness of the size ranges. The complexity and the amount of work needed for solving the optimization problem depend on the choice of parameters, too. For the product in question, one main parameter is chosen, $\bar{x}_l$—the outer diameter of the fastened pipes, as it determines the most important operational (functional) indicators of the product, it is stable, and it does not depend on technical improvements or production technology. A large range of pipes with different outer diameters is used in practice, which requires a great variety of values for the product’s main parameter—the outer diameter of the fastened pipes. Table 3 shows the nomenclature of pipes’ outer diameters in the chosen range 12–65 mm, and the designations of the respective elements (sizes) of the


Table 3 Outer diameters of pipes in the chosen range 12–65 mm (clamp size designation and outer diameter in mm, as specified in DIN 1768/1786/1754, DIN 8075, DIN 17455, DIN 8061/62, DIN 2448/58, DIN 2440/41 and Wiku):
$\bar{x}_1$ = 12, $\bar{x}_2$ = 13, $\bar{x}_3$ = 13.5, $\bar{x}_4$ = 14, $\bar{x}_5$ = 15, $\bar{x}_6$ = 16, $\bar{x}_7$ = 17.2, $\bar{x}_8$ = 18, $\bar{x}_9$ = 19, $\bar{x}_{10}$ = 20,
$\bar{x}_{11}$ = 21.3, $\bar{x}_{12}$ = 22, $\bar{x}_{13}$ = 23, $\bar{x}_{14}$ = 25, $\bar{x}_{15}$ = 26, $\bar{x}_{16}$ = 26.9, $\bar{x}_{17}$ = 28, $\bar{x}_{18}$ = 32, $\bar{x}_{19}$ = 33, $\bar{x}_{20}$ = 33.7,
$\bar{x}_{21}$ = 35, $\bar{x}_{22}$ = 40, $\bar{x}_{23}$ = 42.4, $\bar{x}_{24}$ = 43, $\bar{x}_{25}$ = 44, $\bar{x}_{26}$ = 45, $\bar{x}_{27}$ = 46, $\bar{x}_{28}$ = 48.3, $\bar{x}_{29}$ = 50, $\bar{x}_{30}$ = 51,
$\bar{x}_{31}$ = 53, $\bar{x}_{32}$ = 54, $\bar{x}_{33}$ = 55, $\bar{x}_{34}$ = 56, $\bar{x}_{35}$ = 57, $\bar{x}_{36}$ = 58, $\bar{x}_{37}$ = 60.3, $\bar{x}_{38}$ = 63.5, $\bar{x}_{39}$ = 64, $\bar{x}_{40}$ = 65


pipes’ size range. As is evident from the table, in the range at hand there is a need for $\bar{L} = 40$ pipe sizes, which build the initial size range $\bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_l, \ldots, \bar{x}_{\bar{L}}\}$, $l = 1 \div \bar{L}$.

2.2 Stage 2. Determining Demand The demand for each clamp size included in the initial size range $\bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_l, \ldots, \bar{x}_{\bar{L}}\}$, $l = 1 \div \bar{L}$, is determined from market research, i.e. the elements of the set $\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^l, \ldots, \bar{N}^{\bar{L}}\}$, $l = 1 \div \bar{L}$, are determined, where $\bar{N}^l$ is the quantity of the required products of size $\bar{x}_l$. The obtained results are shown in Fig. 2. The product’s main parameter values—the outer diameter of the fastened pipe—are plotted along the abscissa, and the corresponding quantities along the ordinate.

2.3 Stage 3. Choice of Optimality Criterion For solving the problem, the proposed optimality criterion is the total costs for the size range, which have to be minimized. These costs include production costs and operational costs. The proposed criterion takes into account the contradicting interests of users and manufacturers, and leads to a size range that is a compromise between the users’ requirement for a high size range density, which ensures a fuller, more accurate satisfaction of demand and reduces to a minimum the losses (the additional costs) from the discrepancy between supply and demand, and the manufacturers’ requirement for size ranges with a lower density, which aims at bigger batches and lower production costs.

Fig. 2 Product demand from market research


Therefore, the chosen criterion for evaluation of the alternative size ranges, and for choosing an optimal size range, is of the following kind:

$$R = \sum_{j=1}^{L} TS(x_{l_j})\, N^{l_j} \left(\frac{N_{TS}}{N^{l_j}}\right)^{\nu_1} + \sum_{j=1}^{L} TP(x_{l_j})\, N^{l_j} \qquad (1)$$

where $R$ are the total costs for all elements in the size range; $L$—the number of elements in the analysed size range; $x_{l_j}$—the main parameter value of the $l_j$-th size in the current size range; $TS(x_{l_j})$—the costs for producing the $l_j$-th size; $TP(x_{l_j})$—the operational costs for the $l_j$-th size; $N^{l_j}$—the quantity (count) of the $l_j$-th product size, $N^{l_j} \in N = \{N^{l_1}, N^{l_2}, \ldots, N^{l_L}\}$, where $N$ is the set of product demand for the current size range; $\nu_1$—the coefficient describing the intensity of change in costs in relation to the change in the production quantity, taking into account the “learning rate” [12]; $N_{TS}$—the coefficient accounting for the production scale factor.

2.4 Stage 4. Determining the Functional Relationship Between the Optimality Criterion and the Influencing Factors Determining the functional relationship between the optimality criterion and the influencing factors (parameters, coefficients) is carried out on the basis of production analysis, production cost data and operational costs data for the product “Pipe clamp”. STATGRAPHICS is used for information processing. The relationship between production costs and the product’s main parameter is given by the expression (Fig. 3):

$$TS(x_{l_j}) = \sqrt{a + b\, x_{l_j}^2}, \quad \text{where } a = 0.0941207297,\ b = 0.0000169074279 \qquad (2)$$

The production costs model has a correlation coefficient of 0.980367531, $R^2$ = 96.1120497%, $R^2$ (with correction for the degrees of freedom) = 95.5566282%, i.e. the model explains 95.5566282% of the variation of the dependent variable “production costs”. The coefficients (parameters) $N_{TS}$ and $\nu_1$ are determined from production analysis and are:

$$N_{TS} = 50000, \quad \nu_1 = 0.18 \qquad (3)$$

The relationship between operational costs and the product’s main parameter is given by the expression:

$$TP(x_{l_j}) = \frac{1.6}{d + e\, x_{l_j}}, \quad \text{where } d = 2.45701992,\ e = -0.0150975731 \qquad (4)$$
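The functional form of the production-cost model (2) can be estimated by ordinary least squares after squaring both sides, since $TS^2$ is linear in $x^2$. The following Python sketch illustrates this step on hypothetical (diameter, cost) pairs; the data values, variable names and the use of numpy are illustrative assumptions and not the authors' actual STATGRAPHICS workflow.

```python
import numpy as np

# Hypothetical manufacturer data: outer diameter x (mm) and unit production cost (EUR).
# These values are illustrative only; the coefficients in the paper come from real data.
x = np.array([15.0, 19.0, 23.0, 28.0, 35.0, 46.0, 51.0, 58.0, 65.0])
cost = np.array([0.31, 0.32, 0.33, 0.34, 0.36, 0.38, 0.39, 0.41, 0.42])

# TS(x) = sqrt(a + b * x^2)  =>  TS(x)^2 = a + b * x^2, which is linear in x^2.
X = np.column_stack([np.ones_like(x), x ** 2])   # design matrix [1, x^2]
y = cost ** 2                                    # squared costs
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate of a and b

def TS(xi, a=a, b=b):
    """Production-cost model of Eq. (2)."""
    return np.sqrt(a + b * xi ** 2)

print(f"a = {a:.6f}, b = {b:.10f}")
print("fitted unit costs:", np.round(TS(x), 3))
```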


Fig. 3 Approximation of T S(xl j ) based on manufacturer data

After substituting (2)–(4) in (1), the relationship for determining the total costs is obtained:

$$R = \sum_{j=1}^{L} N^{l_j} \sqrt{0.0941207297 + 0.0000169074279\, x_{l_j}^2} \left(\frac{50{,}000}{N^{l_j}}\right)^{0.18} + \sum_{j=1}^{L} \frac{1.6\, N^{l_j}}{2.45701992 - 0.0150975731\, x_{l_j}} \qquad (5)$$

The change in the total costs (curve 3) in relation to the number of sizes in the size range is shown in Fig. 4. The production costs curve (curve 1) is obtained for $TP(x_{l_j}) = 0$, and the operational costs curve (curve 2) is obtained for $TS(x_{l_j}) = 0$. As shown in the figure, the objective function (curve 3) is a continuous convex function that has one global minimum. This property will be used for determining the condition for stopping the calculations according to the chosen optimization method.
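A minimal Python sketch of the total-cost criterion (5) is given below; it evaluates the production, operational and total costs of a candidate size range, assuming the demand covered by each selected size is known. The demand figures and the example size range in the call are placeholders, since the market-research data behind Fig. 2 are not reproduced here.

```python
import math

A, B = 0.0941207297, 0.0000169074279      # production-cost coefficients, Eq. (2)
D, E = 2.45701992, -0.0150975731          # operational-cost coefficients, Eq. (4)
N_TS, NU1 = 50_000, 0.18                  # scale factor and learning-rate exponent, Eq. (3)

def TS(x):                                # production costs per piece, Eq. (2)
    return math.sqrt(A + B * x * x)

def TP(x):                                # operational costs per piece, Eq. (4)
    return 1.6 / (D + E * x)

def total_costs(sizes, quantities):
    """Total costs R of a size range, Eq. (5).

    sizes      -- main parameter values x_{l_j} of the selected sizes (mm)
    quantities -- demand N^{l_j} covered by each selected size (pieces)
    """
    production = sum(TS(x) * n * (N_TS / n) ** NU1 for x, n in zip(sizes, quantities))
    operation = sum(TP(x) * n for x, n in zip(sizes, quantities))
    return production + operation

# Illustrative call with an example size range and made-up demand quantities.
sizes = [15, 19, 23, 28, 35, 46, 51, 58, 65]
quantities = [120_000] * len(sizes)        # hypothetical demand per size
print(round(total_costs(sizes, quantities), 2))
```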

Fig. 4 Change in costs, where: 1—Production costs; 2—Operational costs; 3—Total costs


2.5 Stage 5. Establishing a Mathematical Model The mathematical model of the problem for choosing an optimal size range is the following: For a given demand $\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^l, \ldots, \bar{N}^{\bar{L}}\}$, $l = 1 \div \bar{L}$, of products $\bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_l, \ldots, \bar{x}_{\bar{L}}\}$, $l = 1 \div \bar{L}$, and the feasible possibilities for satisfying the different demands, i.e. the elements of the applicability matrix $\Phi_{\bar{L} \times \bar{L}} = \|\varphi_m^p\|_{\bar{L} \times \bar{L}}$, find $L^*$, $Z^* = \{x^*_{l_1}, x^*_{l_2}, \ldots, x^*_{l_j}, \ldots, x^*_{l_{L^*}}\}$, $Z^* \subseteq \bar{Z}$, $j \in \{1, 2, \ldots, \bar{L}\}$, and $N^* = \{N^{*l_1}, N^{*l_2}, \ldots, N^{*l_j}, \ldots, N^{*l_{L^*}}\}$, $L^* \le \bar{L}$, for which the chosen effectiveness (optimality) criterion must have a minimal value:

$$\min R(L, \bar{x}_1, \ldots, \bar{x}_{\bar{L}}, \bar{N}^1, \ldots, \bar{N}^{\bar{L}}, \Phi_{\bar{L} \times \bar{L}}) = \sum_{j=1}^{L} G\{x_{l_j}, N^{l_j}(\varphi_m^p, N^p)\} = \sum_{j=1}^{L} TS(x_{l_j})\, N^{l_j}(\varphi_m^p, N^p) \left(\frac{N_{TS}}{N^{l_j}}\right)^{\nu_1} + \sum_{j=1}^{L} TP(x_{l_j})\, N^{l_j}(\varphi_m^p, N^p), \quad L = 1 \div \bar{L} \qquad (6)$$

for the following conditions:

$$\sum_{j=1}^{L} N^{l_j} = \sum_{j=1}^{L^*} N^{*l_j} = \sum_{l=1}^{\bar{L}} \bar{N}^l = N_0 \qquad (7)$$

$$x_{l_L} = x^*_{l_{L^*}} = \bar{x}_{\bar{L}} \qquad (8)$$

$$x_{l_j} \in \bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_l, \ldots, \bar{x}_m, \ldots, \bar{x}_{\bar{L}}\}, \quad \forall j = 1 \div \bar{L} \qquad (9)$$

where $\bar{L}$ is the number of elements in the initial size range $\bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_l, \ldots, \bar{x}_m, \ldots, \bar{x}_{\bar{L}}\}$, determined after demand research, $\dim \bar{Z} = \bar{L}$; $L$—the number of elements in the currently analysed size range $Z = \{x_{l_1}, x_{l_2}, \ldots, x_{l_j}, \ldots, x_{l_L}\}$, $j \in \{1, 2, \ldots, \bar{L}\}$. There is a unique representation between the elements of the current and the initial size range, whereby each element $x_{l_j}$ corresponds to one element $\bar{x}_m$. The elements of the current size ranges are a combination of $L$, $L = 1 \div \bar{L}$, elements out of the $\bar{L}$ possible elements of the initial size range, taking into account the allowed application ranges; $L^*$—the number of elements in the optimal size range $Z^* = \{x^*_{l_1}, x^*_{l_2}, \ldots, x^*_{l_j}, \ldots, x^*_{l_{L^*}}\}$, $j = 1 \div L^*$, $L^* \le \bar{L}$; $\Phi_{\bar{L} \times \bar{L}} = \|\varphi_m^p\|_{\bar{L} \times \bar{L}}$—the applicability matrix, whose elements give information about the allowable application ranges for the products in the initial size range $\bar{Z} = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_{\bar{L}}\}$ that satisfy


the demand $\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^{\bar{L}}\}$, where $\varphi_m^p = 1$ if the product of type $m$, $m = 1 \div \bar{L}$, can satisfy the demand $\bar{N}^p$ of product $\bar{x}_p$, $p = 1 \div \bar{L}$, and $\varphi_m^p = 0$ otherwise; $N^{l_j}(\varphi_m^p, N^p)$—the demand for product $x_{l_j}$ that is an element of the currently analysed size range $Z = \{x_{l_1}, x_{l_2}, \ldots, x_{l_j}, \ldots, x_{l_L}\}$, whose index $l_j$ corresponds to the index $m$ of product $\bar{x}_m \in \bar{Z}$, element of the initial size range, as $x_{l_j} = \bar{x}_m$, where $N^{l_j}(\varphi_m^p, N^p) = \sum_{p=l_{j-1}+1}^{l_j} \varphi_{l_j}^p N^p$; $N_0$—the total quantity of production of all elements in the

size range. Condition (7) means that all analysed size ranges, including the optimal one, must satisfy all demand in terms of quantity; condition (8) means that every size range, including the optimal one, must include the element of the initial size range with the maximum value of its main parameter, $\bar{x}_{\bar{L}}$; and condition (9) means that the elements of all size ranges are chosen from the set of elements of the initial size range, determined after demand research. The demand set $\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^l, \ldots, \bar{N}^{\bar{L}}\}$, $l = 1 \div \bar{L}$, is defined in tabular form and is determined in Stage 2. The values of the elements of the applicability matrix are determined by experts, members of the team developing the new product. The values are shown in Fig. 5.
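To make the role of the applicability matrix concrete, the short sketch below shows how the demand $N^{l_j}$ covered by a selected size is aggregated from the demands of the individual diameters it replaces. The numbers are illustrative, and the indexing convention (the previously selected size covers all diameters up to its own index) is an assumption consistent with the definitions above.

```python
def covered_demand(phi, demand, m, lo):
    """Demand N^{l_j} covered by the selected size m.

    phi    -- applicability matrix, phi[m][p] == 1 if size m may replace diameter p
    demand -- list of demands N^p of the individual diameters
    m      -- index of the selected size
    lo     -- index of the previously selected size (demands lo+1 .. m are covered)
    """
    return sum(phi[m][p] * demand[p] for p in range(lo + 1, m + 1))

# Toy example with 5 diameters and one selected size m = 3 covering diameters 1..3.
phi = [[1 if abs(m - p) <= 2 and p <= m else 0 for p in range(5)] for m in range(5)]
demand = [100, 250, 300, 150, 200]
print(covered_demand(phi, demand, m=3, lo=0))   # 250 + 300 + 150 = 700
```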

Fig. 5 Applicability matrix


2.6 Stage 6. Choice of a Mathematical Method The formulated problem for choosing an optimal size range belongs to the problem class of discrete programming. By its nature the problem refers to distribution problems, but it differs from them by the presence of a variable number of arguments and variable values of the arguments, which complicates its solution. Additionally, the problem is characterized by a large number of possible size ranges that have to be analysed [18]. This number, without taking into account the allowed application ranges of the discussed products, is determined from the expression $\sum_{k=1}^{\bar{L}-1} C_{\bar{L}-1}^{k} = 2^{\bar{L}-1} - 1$, where $k$ is the number of elements in a size range established from the elements of the initial size range with $\bar{L} - 1$ elements [4]. For $\bar{L} - 1 = 39$ it is necessary to process $2^{39} - 1 = 5.4975581 \times 10^{11}$ different size ranges. That leads to a significant number of calculations and imposes the use of a method for a directed search of the optimal solution. Furthermore, the method used must allow for respecting the application ranges of each product in the size range. This leads to a lower number of variants, but complicates the finding of a solution. One of the efficient methods for solving this class of problems is the dynamic programming method [1]. The method is based on the optimality principle of the American mathematician R. Bellman, stating: “the optimal strategy has a property such that no matter what is the initial state of the system in question, and the first stage solution, the following solutions (the solutions of the separate stages) must constitute an optimal strategy with respect to the state obtained as a result from the first solution”. On the basis of this principle the following recurrent dependencies for determining the total costs are obtained:
– for $l = 1$, $m = l \div \bar{L}$:

$$R_m^1 = \frac{1}{\prod_{p=1}^{m} \varphi_m^p} \left[ TS(\bar{x}_m) \sum_{p=1}^{m} N^p \left(\frac{N_{TS}}{\sum_{p=1}^{m} N^p}\right)^{\nu_1} + TP(\bar{x}_m) \sum_{p=1}^{m} N^p \right] \qquad (10)$$

– for $l = 2 \div \bar{L}$, $m = l \div \bar{L}$, $m' = (l - 1) \div (m - 1)$:

$$R_m^l = \min_{m'} \left\{ R_{m'}^{l-1} + \frac{1}{\prod_{p=m'+1}^{m} \varphi_m^p} \left[ TS(\bar{x}_m) \sum_{p=m'+1}^{m} N^p \left(\frac{N_{TS}}{\sum_{p=m'+1}^{m} N^p}\right)^{\nu_1} + TP(\bar{x}_m) \sum_{p=m'+1}^{m} N^p \right] \right\} \qquad (11)$$


where $R_m^l$ are the minimum total costs for satisfying the demand for products with main parameter value $\bar{x}_m$ with a number of $l$ sizes. As the objective function is a continuous convex function and has one global minimum, the calculation of $R_m^l$, $l = 1 \div \bar{L}$, continues until the following condition is met:

$$R_{\bar{L}}^{l} \le R_{\bar{L}}^{l+1} \qquad (12)$$

2.7 Stage 7. Algorithmic and Software Development An algorithm based on the proposed recurrent dependencies (10), (11), and condition (12) is developed for solving the problem. A software application is developed using the algorithm. The application uses as input Excel tables and writes the solution in the same format. The application is realised in the Python programming language [17].
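A compact Python sketch of the recurrence (10)–(11) with the stopping condition (12) is given below. It is a simplified illustration and not the authors' application: the function names, the feasibility check used to encode the application ranges, and the toy demand and applicability data are assumptions made for the example.

```python
import math

A, B = 0.0941207297, 0.0000169074279       # Eq. (2)
D, E = 2.45701992, -0.0150975731            # Eq. (4)
N_TS, NU1 = 50_000, 0.18                    # Eq. (3)

def TS(x): return math.sqrt(A + B * x * x)  # production costs per piece
def TP(x): return 1.6 / (D + E * x)         # operational costs per piece

def segment_cost(x_m, demand, lo, hi):
    """Cost of covering demands lo+1..hi with the single size x_m (bracket of (10)/(11))."""
    n = sum(demand[lo + 1:hi + 1])
    if n == 0:
        return 0.0
    return TS(x_m) * n * (N_TS / n) ** NU1 + TP(x_m) * n

def optimal_size_range(x, demand, applicable):
    """Dynamic-programming search for the cheapest size range.

    x          -- outer diameters of the initial size range (ascending)
    demand     -- demand of each diameter
    applicable -- applicable(m, p): True if size x[m] may replace diameter x[p]
    Returns (best total costs, selected indices).
    """
    L = len(x)
    INF = float("inf")
    best = [[INF] * L for _ in range(L + 1)]    # best[l][m]: costs with l sizes, last size m
    prev = [[None] * L for _ in range(L + 1)]
    for m in range(L):                          # Eq. (10): one size covers demands 0..m
        if all(applicable(m, p) for p in range(m + 1)):
            best[1][m] = segment_cost(x[m], demand, -1, m)
    answer, choice = best[1][L - 1], (1, L - 1)
    for l in range(2, L + 1):                   # Eq. (11)
        for m in range(l - 1, L):
            for mp in range(l - 2, m):          # last size of the (l-1)-element prefix
                if best[l - 1][mp] == INF:
                    continue
                if not all(applicable(m, p) for p in range(mp + 1, m + 1)):
                    continue
                cand = best[l - 1][mp] + segment_cost(x[m], demand, mp, m)
                if cand < best[l][m]:
                    best[l][m], prev[l][m] = cand, mp
        if best[l][L - 1] < answer:             # the largest size must be included, Eq. (8)
            answer, choice = best[l][L - 1], (l, L - 1)
        elif answer < INF:                      # costs no longer decrease: stop, Eq. (12)
            break
    l, m = choice                               # reconstruct the selected sizes
    selected = []
    while m is not None and l >= 1:
        selected.append(m)
        m, l = prev[l][m], l - 1
    return answer, sorted(selected)

# Tiny illustrative run: a size may replace diameters up to 8 mm smaller than itself.
x = [12, 15, 19, 23, 28, 35, 40, 46, 51, 58, 65]
demand = [10_000] * len(x)
applicable = lambda m, p: p <= m and x[m] - x[p] <= 8
cost, chosen = optimal_size_range(x, demand, applicable)
print(round(cost, 2), [x[i] for i in chosen])
```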

2.8 Stage 8. Solving the Problem—Choosing an Optimal Size Range The problem (6)–(9) is solved with the aid of the developed tools (mathematical model of the optimization problem, cost models for production and operational costs, algorithm and application software). The results are shown in Table 4, where $R^*$ are the minimum (optimal) total costs, €; $L^*$—the number of elements in the optimal size range; $R_{\bar{L}}^{\bar{L}}$—the total costs for the size range that includes all sizes, i.e. the initial size range, €. The data for the chosen optimal size range are given in Table 5. It includes $L^* = 9$ sizes. The production quantities of the elements in the optimal size range (1), and of the elements in the initial size range (2), are shown in Fig. 6. The total costs curve, determined in the solving process, is shown in Fig. 7.

Table 4 Results from solving the problem:
$R^*$ = 1 289 080.52 €; $L^*$ = 9; $R_{\bar{L}}^{\bar{L}}$ = 1 360 835.79 €; $(R_{\bar{L}}^{\bar{L}} - R^*)/R^*$ = 6%

Table 5 Optimal size range (size $x^*_{l_j} \in X^*$, $j = 1 \div 9$; corresponding size $\bar{x}_m \in \bar{X}$; application range in mm):
1: $x^*_{l_1} = \bar{x}_5 = 15$, application range 12–15
2: $x^*_{l_2} = \bar{x}_9 = 19$, application range 16–19
3: $x^*_{l_3} = \bar{x}_{13} = 23$, application range 20–23
4: $x^*_{l_4} = \bar{x}_{17} = 28$, application range 25–28
5: $x^*_{l_5} = \bar{x}_{21} = 35$, application range 28–35
6: $x^*_{l_6} = \bar{x}_{27} = 46$, application range 40–46
7: $x^*_{l_7} = \bar{x}_{30} = 51$, application range 45–51
8: $x^*_{l_8} = \bar{x}_{36} = 58$, application range 53–58
9: $x^*_{l_9} = \bar{x}_{40} = 65$, application range 57–65

Fig. 6 Production quantity of the elements in the optimal size range, where 1—$\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^l, \ldots, \bar{N}^{\bar{L}}\}$, 2—$N^* = \{N^{*l_1}, N^{*l_2}, \ldots, N^{*l_j}, \ldots, N^{*l_{L^*}}\}$

Data regarding the size ranges of known companies with the same number of elements are given in Table 6. As it shows, the found size range of the innovative clamp coincides to a significant extent with the size ranges of the leading manufacturers in the field. The optimal size range found for the product “Pipe clamp” has the following features:
– it reduces the number of sizes from 40 to 9, i.e. by 77.5%;
– it reduces the total costs in comparison to the size range that includes all possible sizes by 71 755.27 €, i.e. by 6%;


Fig. 7 Change in the total costs

Table 6 Comparison between manufacturers of “Pipe clamp” (type / application range in mm):
New clamp: 15 / 12–15, 19 / 16–19, 23 / 20–23, 28 / 25–28, 35 / 28–35, 46 / 40–46, 51 / 45–51, 58 / 53–58, 65 / 57–65
Foerch: 14 / 12–14, 19 / 15–19, 23 / 20–23, 28 / 25–28, 35 / 32–35, 44 / 40–44, 51 / 48–51, 58 / 53–58, 65 / 60–65
Inka fixing: 15 / 11–15, 20 / 16–20, 24 / 20–24, 28 / 25–28, 35 / 32–35, 46 / 39–46, 53 / 48–53, 58 / 54–58, 66 / 59–66
MEFA: 15 / 12–15, 20 / 16–20, 25 / 21–25, 30 / 26–30, 37 / 32–37, 46 / 42–46, 52 / 48–52, 58 / 54–58, 65 / 60–65
Tech-krep: 15 / 12–15, 19 / 16–19, 23 / 20–23, 28 / 25–28, 35 / 31–35, 45 / 42–45, 53 / 48–53, 58 / 54–58, 63 / 59–63

– it reduces the production costs in comparison to the size range that includes all possible sizes by 82 091.21 €, i.e. by 26%, for an incurred loss for the user of only 10 336.43 €, i.e. only 1% (Table 7).

2.9 Stage 9. Sensitivity Analysis of the Optimal Solution Sensitivity analysis is performed for the found optimal solution. The main purpose is to determine the influence of the different components and coefficients on the

Table 7 Production costs and operational costs (€):
Manufacturer: $R_{\bar{L}}^{\bar{L}}$ = 404,580.42; $R^*$ = 322,489.22
User: $R_{\bar{L}}^{\bar{L}}$ = 956,255.37; $R^*$ = 966,591.80

solution, and to point out the most important ones, for which the most accurate data must be obtained. That will allow fewer resources to be spent on determining the values of the other factors. For the purpose of sensitivity analysis, numerical experiments are made, including solving the problem for different values of the main coefficients and components constituting the objective function of the mathematical model. The sensitivity analysis includes the following components:
1. Changing the demand function $\bar{N} = \{\bar{N}^1, \bar{N}^2, \ldots, \bar{N}^l, \ldots, \bar{N}^{\bar{L}}\}$ while keeping the production quantity $N_0$;
2. Changing the production quantity $N_0$;
3. Changing the application range of the different elements from the initial size range;
4. Changing the component $TS(x_{l_j})$, representing the production costs;
5. Changing the component $TP(x_{l_j})$, representing the operational costs;
6. Changing the coefficient $N_{TS}$, used for taking into account the production scale factor;
7. Changing the coefficient $\nu_1$, used for taking into account the production learning rate.

The comprehensive results from the sensitivity analysis will be the subject of a following publication.
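As a simple illustration of the kind of numerical experiment involved, the sketch below recomputes the total costs of a fixed size range for several values of the learning-rate coefficient $\nu_1$ (component 7 above). The demand figures are placeholders, and the sweep values are only an assumed example of how one sensitivity component could be varied.

```python
import math

A, B = 0.0941207297, 0.0000169074279      # Eq. (2)
D, E = 2.45701992, -0.0150975731          # Eq. (4)
N_TS = 50_000

sizes = [15, 19, 23, 28, 35, 46, 51, 58, 65]     # the optimal range of Table 5
quantities = [120_000] * len(sizes)              # hypothetical demand per size

def total_costs(nu1):
    prod = sum(math.sqrt(A + B * x * x) * n * (N_TS / n) ** nu1
               for x, n in zip(sizes, quantities))
    oper = sum(1.6 / (D + E * x) * n for x, n in zip(sizes, quantities))
    return prod + oper

for nu1 in (0.12, 0.15, 0.18, 0.21, 0.24):       # sweep around the nominal 0.18
    print(f"nu1 = {nu1:.2f}  ->  R = {total_costs(nu1):,.2f}")
```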

3 Conclusion The paper presents the approbation of an approach for size range optimization of technical products which are characterized by a certain application range, using an example size range of the product “Pipe clamp”. The following important results are obtained:
• The demand for the product “Pipe clamp” is determined in relation to the outer diameter of the fastened pipes.
• A mathematical model is developed for the problem of choosing an optimal size range of a technical product with constraints on the applicability of the elements in the size range, i.e. the application range.
• An analytical dependency is proposed for the optimality criterion “total costs”, which includes production and operational costs.


• The functional dependency between total costs and the influencing parameters is determined through analysis of industry data.
• A recurrent relationship is proposed for calculation of the total costs on the basis of Bellman’s optimality principle, taking into account the application range, and a stopping condition for the calculations is proposed.
• An algorithm and a software product are developed for solving single-parameter problems of choosing an optimal size range, allowing the inclusion of the elements’ application ranges.
• The optimal size range of the product “Pipe clamp” is determined taking into account predetermined application ranges for the elements of the size range. The optimal size range reduces the number of sizes from 40 to 9, i.e. by 77.5%, reduces the total costs in comparison to the size range that includes all possible sizes by 6%, and reduces the production costs in comparison to the size range that includes all possible sizes by 26%, for an incurred loss for the user of only 1%. Furthermore, the found optimal size range coincides to a significant extent with the size ranges of leading manufacturers producing similar products.
The proposed approach has a universal nature and can also be used for size range optimization of other products.

References 1. Bellman, R.: Dynamic Programming. Princeton University Press, New York (1957) 2. Berner Product Catalogue: Mein Berner für das Installationshandwerk (2019) 3. Chakarski, D., Vakarelska, T., Dimitrova, R., Tomov, P., Nikolov, St.: Design of a specialized automated production machine for machining of openings based on innovative technology. In: Machine Building and Machine Science, Year 4, Book 4, pp. 29–35. TU-Varna (2009). ISSN 1312-8612 4. Dashenko, A.I., Belousov, A.P.: Design of Automated Lines (in Russian). High School, Moscow (1983) 5. Dichev, D., Kogia, F., Nikolova, H., Diakov, D.: A Mathematical model of the error of measuring instruments for investigating the dynamic characteristics. J. Eng. Sci. Technol. Rev. 11(6), 14– 19 (2018). ISSN 1791-2377 6. D’Souza, B., Simpson, T.W.: A genetic algorithm based method for product family design optimization. Eng. Optim. 35(1), 1–18 (2003) 7. Du, X., Jiao, J., Tseng, M.M.: Architecture of product family for mass customization. In: Proceedings of 2000 IEEE International Conference on Management of Innovation and Technology, Singapore, vol. 1, pp. 437-443 (2000) 8. Erico Product Catalogue: Nvent CADDY: Connect and Protect. Fixing, Fastening, and Support Products (2019) 9. Flamco Product Catalogue: Fixing Technology (2017) 10. Foerch Product Catalogue: Building Services. Pipe Clamps, SML Joining Technique (2019) 11. Gerlach Zubehörtechnik GmbH. Sanitär- und Heizungsbefestigungen Metallverarbeitung 12. Groover, M.: Automation, Production Systems and Computer-Integrated Manufacturing, 2nd edn. Prentice Hall (2001). ISBN 9780130889782 13. Guergov, S.: A review and analysis of the historical development of machine tools into complex intelligent mechatronic systems. J. Machi. Eng. 18(1), 107–119 (2018) ISSN 1895-7595 (Print), ISSN 2391-80-71 (Online)


14. Kipp, T., Krause, D.: Computer aided size range development—data mining versus optimization. In: Proceedings of ICED 09, vol. 4, Product and Systems Design, pp. 179–190. Palo Alto (2009) 15. Lotz, J., Freund, T., Würtenberger, J., Kloberdanz, H.: Principles to develop size ranges of products with ergonomic requirements, using a robust design approach. In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, pp. 1250–1257 (2015) 16. Lotz, J.: Beherrschung von Unsicherheit in der Baureihenentwicklung. Dissertation, Darmstadt (2018) 17. Lutz, M.: Learning Python, 5th edn. O’Reilly Media (2013) ISBN 978-1449355739 18. Malakov, I., Zaharinov, V., Tzenov, V.: Size ranges optimization. Elsevier, Procedia Engineering 100, 791–800 (2015) 19. Malakov, I.: Optimization of parametric ranges of technical products (in Bulgarian). In: Complex Automation of the Discrete Production, TU-Sofia, Sofia, pp. 25–48 (2015). ISBN 978619-167-153-3 20. Pahl, G., Beitz, W.: Konstruktionslehre. Methoden und Anwendung. Springer, Berlin/Heidelberg (2007) 21. Sand, J.C., Gu, P., Watson, G.: Home: house of modular enhancement—A tool for modular product redesign. Concurr. Eng.: Res. Appl. 10(2), 153–164 (2002) 22. Simpson, T.W.: Product platform design and customization. In: Artificial Intelligence for Engineering Design, Analysis and Manufacturing, vol. 18, pp. 3–20. Cambridge University Press (2004) 23. Todorov, G., Dimitrov, L., Kamberov, K.: MEMS actuator designs characterization based on numerical analysis approach. In: International Conferences on Multi-Material Micro Manufacture, 4M/International Conferences on Micro Manufacturing, ICOMM, pp. 341–345 (2009)

Cybernetics

Data Aggregation in Mobile Wireless Sensor Networks Represented as Stationary Edge-Markovian Evolving Graphs Martin Kenyeres and Jozef Kenyeres

Abstract Over the past years, mobile wireless sensor networks have significantly attracted the attention of both the industry and the academy as they outperform their static variant in many aspects. This paper addresses the average consensus algorithm, a data aggregation mechanism, over mobile wireless sensor networks modeled as stationary edge-Markovian evolving graphs with a varied birth-rate and death-rate of the edges. We evaluate the performance of four various initial configurations of this algorithm by applying the mean square error in several scenarios in order to find the best performing initial configuration of the average consensus algorithm as well as to show for which graph parameters the algorithm achieves the highest performance. Keywords Distributed computing · Average consensus · Mobile wireless sensor networks · Stationary edge-Markovian evolving graphs

1 Introduction The mobile wireless sensor networks (MWSNs), a subclass of the wireless sensor networks (WSNs) formed by mobile sensor nodes, have found wide application in monitoring and data collecting over the last few years [1, 2]. Mobility of the sensor nodes can be ensured in several ways, e.g., the sensor nodes are equipped with mobilizers to control their location, the sensor nodes are fastened to various mobile objects such as cars, animals, robots, etc. [2]. The ability of the sensor nodes to move results in many advantages in various applications (e.g., in healthcare monitoring, traffic monitoring, social interaction, etc.) and, unlike static WSNs, allows monitoring


moving objects such as vehicles, animals, packages, chemical clouds [1]. In general, MWSNs outperform their static variant in many aspects such as the channel capacity, the probability of successful transmissions, the selection of a better location for sensing, etc., and therefore significantly attract the attention of both the industry and the academy [1]. The fundamental design issue of the sensor nodes is their affordability, whereby WSN-based applications usually suffer from insufficient robustness to potential threats [3, 4]. Therefore, WSNs are often equipped with data fusion mechanisms to obtain increased confidence in the measurements executed by the sensor nodes with low energy requirements in spite of the negative factors affecting the operation of WSN-based applications [5]. In the literature, there are many data fusion mechanisms proposed for this purpose [6–13]. In many real-life scenarios in networking, a strong relationship between the existence/the absence of a direct connection between two entities at two subsequent iterations can be observed (e.g., link failures, P2P networks, social environment, etc. [14]). For this purpose, the edge-Markovian evolving graphs (edge-MEGs) find wide application in various areas [15–21]. In this paper, we address the average consensus algorithm (AC), a distributed iterative algorithm for distributed averaging, distributed summing, distributed network size estimation, etc. [6]. The wide application of edge-MEGs in networking motivates us to analyze AC over MWSNs modeled as stationary edge-Markovian evolving graphs (stationary edge-MEGs). They allow modeling the mobility rate of the sensor nodes without taking the geographical position into account. We analyze AC with four different mixing parameters over several sets formed by 100 stationary edge-MEGs with different parameters in order to find the best performing initial configurations using the mean square error over the iterations (MSE(k)). The next section of this paper is focused on papers concerned with AC over MWSNs. Section 3 deals with the definition of AC, its convergence conditions, and its mathematical model over MWSNs. The next section is concerned with the applied research methodology. Section 5 consists of the experimentally obtained results and a discussion about the observable phenomena.

2 Related Work The literature contains several papers concerned with AC over mobile networks. Schwarz and Matz [22] prove that the convergence rate of AC can be accelerated by adding mobile sensor nodes to the network. Moreover, they derive a sufficient condition for agents in mobile networks to achieve the consensus despite mobility of the sensor nodes—the convergence conditions valid for static AC have to be met at each iteration. Duan et al. [23] confirm the findings revealed by Schwarz et al., i.e., mobility of the nodes accelerates AC, and propose an optimizing mechanism. Furthermore, it is shown that the convergence rate in mobile networks can be accelerated by increasing the number of mobile agents. Spanos et al. [24] prove that disconnected


mobile networks are able to correctly estimate an aggregate function when they are merged during the estimation process. Moreover, they prove that the preservation of the sum of all the inner states ensures the proper functioning of AC although the sensor nodes are mobile. Kingston and Beard [25] find that the consensus in networks with mobile agents can be obtained when the union of the interaction graph over a finite interval is strongly connected. Tan et al. [26] present a weighted average consensus algorithm that employs cubature Kalman filtering over mobile networks. The authors derive and analyze the stability conditions over non-linear systems with non-linear observation and state equations. Kia et al. [27] show how to utilize extra dynamics to improve the convergence rate and to achieve robustness to the initial conditions. Moreover, the authors analyze the application of the dynamic average consensus algorithm in cyber-physical systems. In contrast to the related work, in this paper we analyze AC in MWSNs modeled as stationary edge-MEGs. As mentioned above, edge-MEGs are an appropriate mathematical tool to model MWSNs, and the absence of papers dealing with AC over these graphs motivates us to carry this research out.

3 Mathematical Model of Average Consensus Algorithm Over Stationary Edge-Markovian Evolving Graphs As mentioned earlier, we model MWSNs as stationary edge-MEGs as follows: $G(k) = (V, E(k))$. Let us define the set of all the vertices, which are distinguishable from each other according to a unique identity number and represent the sensor nodes in a network, as $V = \{v_1, v_2, \ldots, v_n\}$, where $n$ represents the size of a network. As we analyze AC over stationary edge-MEGs, this set is invariant over the iterations [22]. Furthermore, let $E(k)$ be the set gathering all the edges, representing the direct links in a network (the edge between $v_i$ and $v_j$ is labeled as $e_{ij}$). Since we model MWSNs as stationary edge-MEGs, each edge can take two states [28]:

$$\mathbf{M} = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix} \qquad (1)$$

Here, $\mathbf{M}$ is the transition matrix, whose rows and columns correspond to the two edge states 0 (the edge is absent) and 1 (the edge is present). Thus, the state of every edge in a graph is selected according to a two-state Markovian process with two probabilities $p$ and $q$ [28]. Here, $p$ is the birth-rate and $q$ is the death-rate of the edges, and so these are the probabilities that determine the existence/the absence of an edge at each iteration [14]. Therefore, the formation/the extinction of an edge depends on the existence/the absence of this edge at the previous iteration [14]. Furthermore, we assume that these probabilities are the same for each edge in a graph. In AC, each sensor node initiates its initial inner state for example according to a local measurement. In our work, we assume that each initial inner state is a random


variable of the uniform distribution determined by the range (0–200), which can be expressed as follows:

$$x_i(0) \sim U(0, 200), \quad \forall v_i \in V \qquad (2)$$

Here, $k$ is the label of an iteration (we assume that $k = 0$ poses the beginning of the algorithm), and $x_i(k)$ represents the inner state of $v_i$ at $k$. At each iteration, each sensor node broadcasts its current inner state into the adjacent area as well as collects the current inner states of its neighbors [6]. According to the current inner state and the collected information, each sensor node updates its inner state as follows [14]:

$$x_i(k+1) = \sum_{j=1}^{n} [\mathbf{W}(k)]_{ij}\, x_j(k), \quad \forall v_i \in V \qquad (3)$$

Here, $\mathbf{W}$ is the weight matrix of AC, affecting several aspects such as the convergence rate, the robustness of the algorithm, the initial configuration, etc. [6]. In this paper, we use the following matrix, which is determined by the mixing parameter $\varepsilon$ [29] as follows:

$$[\mathbf{W}]_{ij} = \begin{cases} \varepsilon, & \text{if } e_{ij} \in E \\ 1 - d_i\,\varepsilon, & \text{if } i = j \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

where $d_i$ is the degree of node $v_i$. According to [22], AC works correctly in the mobile networks when these three following conditions are met at each iteration:

$$\mathbf{1}^T \times \mathbf{W}(k) = \mathbf{1}^T \qquad (5)$$

$$\mathbf{W}(k) \times \mathbf{1} = \mathbf{1} \qquad (6)$$

$$\rho\left(\mathbf{W}(k) - \frac{1}{n} \cdot \mathbf{1} \times \mathbf{1}^T\right) < 1 \qquad (7)$$

Here, $\rho(\cdot)$ is the spectral radius of the analyzed matrix/vector, and $\mathbf{1}$ is an all-ones vector [6]. As shown in [30], the convergence of AC in all the arbitrary graphs excluding those that are bipartite regular is ensured when $\varepsilon$ is from (8):

$$\varepsilon \in \left(0, \frac{1}{\Delta}\right] \qquad (8)$$

Here, $\Delta$ is the maximum degree of a graph.
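The sketch below builds the weight matrix of (4) for a given graph and mixing parameter and checks the conditions (5)–(7). It is a small illustration in Python with numpy, written as an assumed re-implementation, whereas the paper's own experiments were carried out in Matlab; the example graph is arbitrary.

```python
import numpy as np

def weight_matrix(adj, eps):
    """Weight matrix of Eq. (4): eps on edges, 1 - d_i * eps on the diagonal, 0 elsewhere."""
    degrees = adj.sum(axis=1)
    return eps * adj + np.diag(1.0 - degrees * eps)

def satisfies_conditions(W):
    """Checks the convergence conditions (5)-(7) for a single weight matrix."""
    n = W.shape[0]
    ones = np.ones(n)
    col_sums_ok = np.allclose(ones @ W, ones)                 # Eq. (5)
    row_sums_ok = np.allclose(W @ ones, ones)                 # Eq. (6)
    J = np.outer(ones, ones) / n
    spectral_ok = max(abs(np.linalg.eigvals(W - J))) < 1.0    # Eq. (7)
    return col_sums_ok and row_sums_ok and spectral_ok

# Small example: a path graph on 4 nodes and eps = 1 / (max degree), i.e. configuration "1".
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
eps = 1.0 / adj.sum(axis=1).max()
print(satisfies_conditions(weight_matrix(adj, eps)))
```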


4 Applied Research Methodology In this section, we introduce the applied methodology in our research and the used metric for performance evaluation. All the experiments are executed in Matlab2018b using the software proposed by the authors of this paper and built-in Matlab functions provided by MathWorks. In our analyses, we examine AC with four various initial configurations, i.e., $\varepsilon$ takes these four values:
• $\varepsilon = 0.25 \cdot \frac{1}{\Delta}$ (abbreviated as 0.25)
• $\varepsilon = 0.50 \cdot \frac{1}{\Delta}$ (abbreviated as 0.5)
• $\varepsilon = 0.75 \cdot \frac{1}{\Delta}$ (abbreviated as 0.75)
• $\varepsilon = 1 \cdot \frac{1}{\Delta}$ (abbreviated as 1)

As mentioned earlier, we model MWSNs as stationary edge-MEGs formed by 50 nodes. Let us recall that these graphs are determined by the birth-rate $p$ and the death-rate $q$. In this paper, we perform three analyses:
– the birth-rate $p$ and the death-rate $q$ take the same value. We change these values in order to vary the mobility of the sensor nodes. We assume that low values of these probabilities represent a low mobility rate, i.e., a sensor node establishes the connection to the other nodes with difficulty; however, the established connection lasts for a longer time. Higher values of these probabilities pose a high mobility rate, i.e., a sensor node easily establishes the connection to another node, but the connection is not too stable and is soon terminated.
– the birth-rate $p$ is varied, and the death-rate $q$ is preserved.
– the birth-rate $p$ is preserved, and the death-rate $q$ is varied.
As mentioned above, we use the mean square error over the iterations (MSE(k)) as a metric for performance evaluation. It is defined as follows [31–33]:

$$MSE(k) = \frac{1}{n} \sum_{i=1}^{n} \left( x_i(k) - \frac{\mathbf{1}^T \times \mathbf{x}(0)}{n} \right)^2 \qquad (9)$$

We examine AC over 100 stationary edge-MEGs2 and depict only MSE averaged over all 100 graphs in the following range of the iterations .
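For readers who wish to reproduce the qualitative behaviour, the following Python sketch simulates AC over a randomly generated stationary edge-MEG and reports MSE(k). The node count, the initialisation of the edge set, the iteration count and the random seed are arbitrary choices, and the original experiments were carried out in Matlab, so this is only an assumed, illustrative re-implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_edges(edges, p, q):
    """One step of the two-state Markov chain of Eq. (1) for every potential edge."""
    born = (edges == 0) & (rng.random(edges.shape) < p)
    died = (edges == 1) & (rng.random(edges.shape) < q)
    new = edges.copy()
    new[born], new[died] = 1, 0
    new = np.triu(new, 1)
    return new + new.T                     # keep the adjacency matrix symmetric

def run_ac(n=50, p=0.2, q=0.2, eps_factor=1.0, iters=25):
    x = rng.uniform(0, 200, n)             # initial inner states, Eq. (2)
    target = x.mean()                      # the average the nodes should agree on
    edges = step_edges(np.zeros((n, n), dtype=int), p, q)
    mse = []
    for _ in range(iters):
        degrees = edges.sum(axis=1)
        eps = eps_factor / max(degrees.max(), 1)           # mixing parameter, Eq. (8)
        W = eps * edges + np.diag(1.0 - degrees * eps)     # weight matrix, Eq. (4)
        x = W @ x                                          # update rule, Eq. (3)
        mse.append(np.mean((x - target) ** 2))             # MSE(k), Eq. (9)
        edges = step_edges(edges, p, q)                    # next topology
    return mse

mse = run_ac()
print(["%.2e" % v for v in mse[:5]], "...", "%.2e" % mse[-1])
```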

5 Experimental Section and Discussion The first experiment is focused on an analysis of MSE(k) when the birth-rate p and the death-rate q are equal to one another. We vary these two parameters as follows: • p = 5%, q = 5% 2 100

graphs are generated for each pair p, q.




p = 10%, q = 10% p = 15%, q = 15% p = 20%, q = 20% p = 25%, q = 25% p = 30%, q = 30% p = 35%, q = 35% p = 40%, q = 40%.

From the results shown in Fig. 1, we can see that MSE is decreased as the number of the iteration increases for each  and each pair p, q (the same character is observed

Fig. 1 MSE (dB) as function of number of iterations for four different mixing parameters and for varied birth-rate and death-rate



also in time-invariant graphs [33]). Lowest MSE (and therefore, the highest precision of the inner states) is achieved by the configuration with  = 1 over the whole examined iteration interval and for each pair p, q. Then, as  decreases, MSE increases at each iteration and for each pair p, q. The maximal difference at the same iteration between MSE of  = 1 and MSE of  = 0.25 takes even approximately 217 dB (for p, q = 40% and at the 25th iteration). Furthermore, higher values of both p and q result in lower MSE, and therefore higher mobility rate ensures higher precision of the inner states. Moreover, it is seen that an increase in mobility rate most significantly affects the configuration with  = 1 (the maximal difference3 takes approximately 102 dB). With a decrease in , this difference becomes smaller—the maximal difference3 for  = 0.25 takes only approximately 5 dB. So, mobility has a significantly higher impact on higher values of . The next experiment deals with an analysis of the impact of a variant birth-rate p (the death-rate q is preserved) on MSE of the examined four initial configurations. In this analysis, these eight scenarios are analyzed: • • • • • • • •

p = 5%, q = 40% p = 10%, q = 40% p = 15%, q = 40% p = 20%, q = 40% p = 25%, q = 40% p = 30%, q = 40% p = 35%, q = 40% p = 40%, q = 40%.

From the results (Fig. 2), we can see that an increase in the iteration number and an increase in  result in lower MSE for each analyzed scenario. The maximal difference between  = 1 and  = 0.25 is approximately 217 dB. Moreover, an increase in the birth-rate p ensures lower MSE for each examined initial configuration. Like in the previous analysis, MSE of the initial configuration with  = 1 is the most significantly affected by the change of p, and the difference in MSE is getting smaller when the values of  are decreased (the maximal difference4 for  = 1 takes approximately 172 dB, and the maximal difference4 for  = 0.25 takes only approximately 19 dB). The third experiment is concerned with an analysis of how the change of the death-rate q (now, the birth-rate p is constant) affects MSE of the analyzed initial configurations of AC. Here, the birth-rate p and the death-rate q take these values: • • • • • •

p = 10%, q = 5% p = 10%, q = 10% p = 10%, q = 15% p = 10%, q = 20% p = 10%, q = 25% p = 10%, q = 30%

3 I.e., 4 i.e.,

the difference between MSE for p,q = 5% and for p,q = 40% at the 25th iteration. the difference between MSE for p = 5% and for p = 40% at the 25th iteration.



Fig. 2 MSE (dB) as function of number of iterations for four different mixing parameters and for varied birth-rate and constant death-rate

• p = 10%, q = 35% • p = 10%, q = 40%. From the results depicted in Fig. 3, it can be seen that an increase in the iteration number and an increase in  have the same character as in two previous analyses. Here, the maximal difference between  = 1 and  = 0.25 is equaled to approximately 186 dB. Furthermore, we can see that an increase in the death-rate q causes MSE to increase for each . Again,  = 1 is the most affected by the change of q among the



Fig. 3 MSE (dB) as function of number of iterations for four different mixing parameters and for constant birth-rate and varied death-rate

examined initial configurations, and this difference decreases when the value of  is decreased (the maximal difference5 for  = 1 takes approximately 103 dB, and the maximal difference5 for  = 0.25 takes approximately 15 dB).

5 i.e.,

the difference between MSE for q = 5% and for q = 40% at the 25th iteration.



6 Conclusion In this paper, we analyze AC with four different mixing parameters over MWSNs modeled as stationary edge-MEGs in order to find the best performing initial configurations in terms of MSE(k) in three extensive analyses. In all three analyses (#1—the birth-rate and the death-rate are equal, #2—the birth-rate varies, the death-rate is constant, #3—the birth-rate is constant, the death-rate varies), an increase in the iteration number and in the values of the mixing parameter ensures lower MSE for each pair p, q. In analysis 1, it is proven that higher mobility ensured by higher values of both p and q results in lower MSE. In analysis 2, we show that an increase in p causes lower MSE. Analogically, a decrease in q ensures lower MSE (analysis 3). The final conclusion is that the initial configuration with the highest possible mixing parameter among the examined ones achieves the highest performance in all three analyses, for each pair p, q, and at each examined iteration and therefore poses the best configuration over MWSNs modeled as stationary edge-MEGs. Acknowledgements This work was supported by the VEGA agency under the contract No. 2/0155/19, by CHIST ERA III—Social Network of Machines (SOON), and by CA15140— Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO).

References 1. Yaseem, Q., Albalas, F., Jararwah, Y., Al-Ayyoub, M.: Leveraging fog computing and software defined systems for selective forwarding attacks detection in mobile wireless sensor networks. Trans. Emerging Telecommun. Technol. 29(4), 1–13 (2018). https://doi.org/10.1002/ett.3183 2. Sabor, N., Sasaki, S., Abo-Zahhad, M., Ahmed, S.M.: A comprehensive survey on hierarchicalbased routing protocols for mobile wireless sensor networks: review, taxonomy, and future directions. Wirel. Commun. Mob. Commun. (2017). https://doi.org/10.1155/2017/2818542 3. Kenyeres, M., Kenyeres, J., Skorpil, V.: The distributed convergence classifier using the finite difference. Radioengineering 25(1), 148–155 (2016). https://doi.org/10.13164/re.2016.0148 4. Zidi, S., Moulahi, T., Alaya, B.: Fault detection in wireless sensor networks through SVM classifier. IEEE Sens. J. 18(1), 340–347 (2018). https://doi.org/10.1109/JSEN.2017.2771226 5. Izadi, D., Abawajy, J.H., Ghanavati, S., Herawan, T.: A data fusion method in wireless sensor networks. Sensors 15(2), 2964–2979 (2015). https://doi.org/10.3390/s150202964 6. Kenyeres, M., Kenyeres, J., Skorpil, V., Burget, R.: Distributed aggregate function estimation by biphasically configured metropolis-hasting weight model. Radioengineering 26(2), 479–495 (2017). https://doi.org/10.13164/re.2017.0479 7. Tsai, Y.R., Chang, J.: Cooperative information aggregation for distributed estimation in wireless sensor networks. IEEE Trans. Signal Process. 59(8), 3876–3888 (2011). https://doi.org/10. 1109/TSP.2011.2153847 8. Coluccia, A., Notarstefano, G.: A Bayesian framework for distributed estimation of arrival rates in asynchronous networks. IEEE Trans. Signal Process. 64(15), 3984–3996 (2016). https://doi. org/10.1109/TSP.2011.2153847 9. Li, J., AlRegib, G.: Distributed estimation in energy-constrained wireless sensor networks. IEEE Trans. Signal Process. 57(10), 3746–3758 (2009). https://doi.org/10.1109/TSP.2009. 2022874



10. Schizas, I.D., Ribeiro, A., Giannakis, G.B.: Consensus in ad hoc WSNs with noisy links—Part I: distributed estimation of deterministic signals. IEEE Trans. Signal Process. 57(10), 350–364 (2008). https://doi.org/10.1109/TSP.2007.906734 11. Boubiche, S., Boubiche, D.E., Bilami, A., Toral-Cruz, H.: Big data challenges and data aggregation strategies in wireless sensor networks. IEEE Access 6, 20558–20571 (2018). https:// doi.org/10.1109/ACCESS.2018.2821445 12. Das, U., Namboodiri, V.: A quality-aware multi-level data aggregation approach to manage smart grid AMI traffic. IEEE Trans. Parallel Distrib. Syst. 30(2), 245–256 (2019). https://doi. org/10.1109/TPDS.2018.2865937 13. Zhang, J., Hu, P., Xie, F., Long, J., He, A.: An energy efficient and reliable in-network data aggregation scheme for WSN. IEEE Access 6, 71857–71870 (2018). https://doi.org/10.1109/ ACCESS.2018.2882210 14. Clementi, A., Monti, A., Pasquale, F., Silvestri, R.: Information spreading in stationary Markovian evolving graphs. IEEE Trans. Parallel Distrib. Syst. 22(9), 1425–1432 (2011). https://doi. org/10.1109/TPDS.2011.33 15. Casteights, A., Flocchini, P., Quattrociocchi, W., Santoro, N.: Time-varying graphs and dynamic networks. Int. J. Parallel Emergent Distrib. Syst. 27(5), 387–408 (2012). https://doi.org/10. 1080/17445760.2012.668546 16. Baumann, H., Crescenzi, P., Fraigniaud, P.: Parsimonious flooding in dynamic graphs. Distrib. Comput. 24(1), 31–44 (2011). https://doi.org/10.1007/s00446-011-0133-9 17. Clementi, A.E.F., Macci, C., Monti, A., Pasquale, F., Silvestri, R.: Flooding time of edge— Markovian evolving graphs. SIAM J. Discr. Math. 24(4), 1694–1712 (2010). https://doi.org/ 10.1137/090756053 18. Beccheti, L., Clementi, A., Pasquale, F., Resta, G., Santi, P., Silvestri, R.: Flooding time in opportunistic networks under power law and exponential intercontact times. IEEE Trans. Parallel Distrib. Syst. 25(9), 2297–2306 (2014). https://doi.org/10.1109/TPDS.2013.170 19. Ogura, M., Preciado, V.M.: Stability of spreading processes over time-varying large-scale networks. IEEE Trans. Netw. Sci. Eng. 3(1), 44–57 (2016). https://doi.org/10.1109/TNSE. 2016.2516346 20. Du, R., Wang, H., Fu, Y.: Continuous-time independent edge-Markovian random graph process. Chin. Ann. Math. Ser. B 37(1), 73–82 (2016). https://doi.org/10.1007/s11401-015-0941-5 21. Baumann, H., Crescenzi, P., Fraigniaud, P.: Parsimonious flooding in dynamic graphs. Distrib. Comput. 24(1), 31–4 (2011). https://doi.org/10.1007/s00446-011-0133-9 22. Schwarz, V., Matz, G.: On the performance of average consensus in mobile wireless sensor networks. In: Proceeding of the 2013 IEEE 14th Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 175–179, IEEE Press, New York (2013). https://doi. org/10.1109/SPAWC.2013.6612035 23. Duan, X., He, J., Cheng, P., Chen, J.: Exploiting a mobile node for fast discrete time average consensus. IEEE Trans. Control Syst. 24(6), 1993–2001 (2016). https://doi.org/10.1109/TCST. 2016.2521802 24. Spanos, D.P., Olfati-Saber, R., Murray, R.M.: Dynamic consensus on mobile networks. In: Proceeding of the 16th IFAC World Congress, pp. 1–6. Czech Academy of Sciences, Czech Republic (2005). https://doi.org/10.1109/SPAWC.2013.6612035 25. Kingston, D.B., Beard, R.W.: Discrete-time average-consensus under switching network topologies. In: Proceeding of the 2006 American Control Conference, pp. 3551–3556. IEEE Press, New York (2006). https://doi.org/10.1109/ACC.2006.1657268 26. 
Tan, Q., Dong, X., Li, Q., Ren, Z.: Weighted average consensus-based cubature Kalman filtering for mobile sensor networks with switching topologies. In: Proceeding of the 2017 13th IEEE International Conference on Control and Automation (ICCA), pp. 271–276. IEEE Press, New York (2017). https://doi.org/10.1109/ICCA.2017.8003072 27. Kia, S.S., Van Scoy, B., Cortes, J., Freeman, R.A., Lynch, K.M., Martinez, S.: Tutorial on dynamic average consensus: the problem, its applications, and the algorithms. 1–66. (2018). arXiv:1803.04628



28. Wasserman, S.: Analyzing social networks as stochastic processes. J. Am. Stat. Assoc. 75(370), 280–294 (1980). https://doi.org/10.1080/01621459.1980.10477465 29. Xiao, L., Boyd, S.: Fast linear iterations for distributed averaging. Syst. Control. Lett. 53(1), 65–78 (2004). https://doi.org/10.1016/j.sysconle.2004.02.022 30. Schwarz, V., Hannak, G., Matz, G.: On the convergence of average consensus with generalized metropolis-hasting weights. In: Proceeding of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 333–336. IEEE Press, New York (2014). https:// doi.org/10.13164/re.2017.0479 31. Skorpil, V., Stastny, J.: Back-propagation and k-means algorithms comparison. In: Proceedings of the 2006 8th international Conference on Signal Processing, pp. 1871–1874. IEEE Press, New York (2006). https://doi.org/10.1109/ICOSP.2006.345838 32. Pereira, S.S., Pages-Zamora, A.: Mean square convergence of consensus algorithms in random WSNs. IEEE Trans. Signal Process. 58(5), 2866–2874 (2010). https://doi.org/10.1109/TSP. 2010.2043140 33. Aysal, T.C., Oreshkin, B.N., Coates, M.J.: Accelerated distributed average consensus via localized node state prediction. IEEE Trans. Signal Process. 57(4), 1563–1576 (2009). https://doi. org/10.1109/TSP.2008.2010376

The Simple Method of Automatic Measurement of Air Infiltration in the Smart Houses Milos Hernych

Abstract Forced ventilation systems are increasingly being used in low-energy and passive houses. This article describes a method allowing continuous automatic measurement of air infiltration and the real performance of controlled ventilation in an intelligent house equipped with a PLC-based control system without the need to use a tracer gas or special instrumentation and measuring equipment. The prerequisite of the method is the integration of the control and regulation of individual systems of the building into a common control system. Keywords Air infiltration · Smart houses · HVAC

M. Hernych (B)
Technical University of Liberec, Studentska 2, 46117 Liberec, Czech Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_19

1 Introduction

Ventilation of residential and other buildings is a prerequisite for the safe and healthy living of people. Primarily, this is not the supply of air for breathing, as is often mistakenly stated—the average human consumption of oxygen is between 0.020 and 0.025 m3 per hour, whereby consumption decreases during sleep and increases during physical activity. The main reason for ventilation is the removal of harmful substances that arise from the presence of persons (water vapor, smells and odours and carbon dioxide) and products generated in the interior of the building (VOC—volatile organic compounds, carbon monoxide, various other gases, etc.).

Ventilation needs are addressed by legislation and standards differently in different countries. Generally, they are derived either from the floor area, the internal volume of the building, or the number of people who are or may be in the building. In the Czech Republic, they are governed by e.g. Government Decree 361/2007 Coll., laying down the conditions for the protection of health at work, as amended; Decree 410/2005 Coll. on the hygienic requirements of premises and the operation of facilities and establishments for the education and training of children and adolescents; and Decree 268/2009 Coll. on the technical requirements of buildings.


These regulations are based on the determination of the ventilation values from the number of persons present in the building and the prevailing activities they perform there, or from the internal volume of the building. The least amount of ventilation is required in school facilities, where the value is set at a minimum of 20 m3 h−1, whereas in residential spaces in civil buildings the value is a minimum of 25 m3 h−1 or 0.5 of the volume of the space per hour. In public and production areas it is from 25 to 90 m3 h−1, depending on the type of work performed and the load.

The vast majority of buildings in the private sector (family houses, flats) and in the public sector are not equipped with any forced ventilation system, and air exchange is left to the users or to natural air infiltration. In newly built or renovated buildings there is now an effort to reduce energy losses, and technologies are used to minimize natural air infiltration. Particularly in passive houses, the airtightness of the building is a direct condition, which is documented using the Blower door test. These houses cannot do without mechanical ventilation systems with heat recovery from the exhaust air. From an operational point of view, however, the problem is how to provide the required air exchange over time so as to avoid unnecessarily large or, on the contrary, insufficient ventilation. Large air exchange has a negative effect on indoor humidity, which can fall well below hygienically acceptable values, and unnecessarily increases heating costs, whereas small air exchange reduces the comfort felt and poses a risk of health complications.

Influences that increase the uncontrolled exchange of air include the climatic conditions as well as imperfections of the building structure: despite the effort to ensure the greatest possible airtightness of the building envelope, degradation of the material takes place over time (e.g. window seals, vapor barrier passages, etc.). Especially in windy weather, there is increased infiltration of air into the building. Mechanical ventilation, on the contrary, can pose the opposite problem: the equipment is designed taking into account the nominal values declared by the manufacturer; however, the actual ventilation performance depends on many factors such as the design of the specific building (lengths of individual pipes, location of air intakes in the building, etc.) or its operation (gradual clogging of filters or recuperators, manipulation of air inlets by users, etc.). Therefore, the actual ventilation values may differ significantly from the assumed values.

2 Contribution

The airtightness of a building is determined using the Blower door test, during which a large-diameter ventilator is installed together with a seal in a suitable opening of the building (door, window), while all other holes in the walls are sealed. This ventilator generates a 50 Pa underpressure or overpressure, and the volume of air that passes through the building under this underpressure or overpressure is measured. For passive houses in the Czech Republic, the value is set to a maximum of 0.6 of the indoor volume per hour. This test is mainly used to find hidden leaks and imperfections and to certify the building, but it is not suitable for determining the real operational air permeability of the building.


For the purpose of determining the size of air infiltration, the tracer gas method is used. This consists of creating a homogeneous concentration of gas (for example, nitrous oxide N2O, helium, freon CF2Cl2, sulfur hexafluoride SF6 or carbon dioxide CO2) in the interior of the building and measuring its loss over time. We can deduce the general differential equation for the concentration of the tracer gas

V \frac{dc(t)}{dt} + A c(t) = \sum_i P_i(t) + A c_e,    (1)

where
V ... total volume of air in the building [m3],
c(t) ... concentration of tracer gas inside the building [−],
A ... rate of air exchange in the building—air infiltration [m3 s−1],
P_i(t) ... immediate volume of production/consumption by the source/appliance [m3 s−1],
c_e ... concentration of tracer gas in the outdoor environment [−].

The method provides sufficiently accurate air infiltration values; however, due to the required measurement time being in the range of hours, it can be affected by atmospheric conditions such as changes in atmospheric pressure or wind intensity during the measurement. At the same time, most of the gases used can be a risk to the inhabitants of the house or the environment. One suitable tracer gas is carbon dioxide, which is a natural part of the environment and is also significantly linked to the human metabolism. For every five consumed oxygen molecules, the human organism produces four carbon dioxide CO2 molecules [1]. Humans produce this gas in considerable quantities, depending on the physical load. Therefore, CO2 is generally a good indicator of the need for ventilation. The German chemist Max Joseph von Pettenkofer (1818–1901), a Bavarian provincial school inspector, noticed this in the mid-nineteenth century and surveyed schools to find out the link between the carbon dioxide concentration and the percentage of people dissatisfied with the indoor climate. The limit value was set at a concentration of 0.1% (1000 ppm) of CO2, which is now known as the Pettenkofer criterion.

Carbon dioxide is ideal for the purposes of continuous monitoring of air infiltration, because it is always available in inhabited buildings, it is free of charge, and there is no need to supplement or dose it. If we introduce a simplification of Eq. (1) in which we consider that there is no significant CO2 source or consumer in the building at the time of measurement, we get the following equation

V \frac{dc(t)}{dt} + A c(t) = A c_e,    (2)

and its analytical solution

c(t) = c_e + (c_0 - c_e) e^{-\frac{A}{V} t} \quad \text{for } t \geq 0,    (3)

where c_0 is the initial concentration of carbon dioxide in the interior at time t = 0. The air infiltration A from Eq. (3) is

A = -\frac{V}{t} \ln \frac{c(t) - c_e}{c_0 - c_e}.    (4)

It follows from the foregoing that if we meet the input condition of the absence of a carbon dioxide source, i.e. especially humans, we can easily determine the air infiltration A from the values at two arbitrary points of the transient characteristic of the decrease in the carbon dioxide concentration

A = -\frac{V}{t_2 - t_1} \ln \frac{c(t_2) - c_e}{c(t_1) - c_e},    (5)

where
A ... rate of air exchange in the building—air infiltration,
V ... total volume of air in the building [m3],
c(t_1), c(t_2) ... concentration of CO2 inside the building at the times t_1 and t_2 [−],
c_e ... concentration of carbon dioxide in the outdoor environment [−].
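To make the use of Eq. (5) concrete, the following short sketch (Python is used here purely for illustration; the article itself contains no code) computes the infiltration from two readings, with the classroom volume of 420 m3, the outdoor level of about 390 ppm and the first two concentrations of the experiment described below plugged in as an example.

```python
import math

def air_infiltration(c_t1, c_t2, c_e, volume, dt_hours):
    """Air infiltration A [m3/h] from two indoor CO2 readings, Eq. (5).

    c_t1, c_t2 ... indoor CO2 concentrations at times t1 and t2 [ppm]
    c_e        ... outdoor CO2 concentration [ppm]
    volume     ... total volume of air in the building V [m3]
    dt_hours   ... elapsed time t2 - t1 [h]
    """
    return -(volume / dt_hours) * math.log((c_t2 - c_e) / (c_t1 - c_e))

# Values taken from the experiment below: V = 420 m3, outdoor level about
# 390 ppm, 1102 ppm at 14:40 and 1032 ppm at 15:40 on day D+0.
A = air_infiltration(1102.0, 1032.0, 390.0, 420.0, 1.0)
print(f"A = {A:.1f} m3/h")   # approx. 43.5 m3/h, cf. Table 2
```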

3 Verification

For practical verification of the calculation, the classroom of the Laboratory of Logical Control in building A of the Faculty of Mechatronics and Interdisciplinary Studies at the Technical University of Liberec was selected. This laboratory is equipped with a forced ventilation system with indoor circulation and the necessary technology: CO2 sensors (Protronics ADS-CO2-D with analog output 0 to 10 V) and a Tecomat PLC with a software application for data archiving. The classroom has a total volume V = 420 m3, equivalent to a small family house. The first part of the experiment was aimed at achieving a high concentration of carbon dioxide during teaching; the lesson started on the first day at 10:40 and ended (with a short break—the first peak of the concentration) at 14:40. The second part of the experiment started after closing the door, with an initial concentration of 1102 ppm CO2. For the next 3 days the classroom was closed and the concentration dropped to an outside value of about 390 ppm (Tables 1, 2 and 3). The results of the infiltration calculations in the tables obtained by the practical experiment show that the described method is applicable for the indicative determination of the instantaneous ventilation value, mainly in the first tens of minutes and the first hours. Results at low concentrations close to the external environment are affected by measurement errors and atmospheric influences.


Table 1 CO2 concentration in ppm during the first hour of the experiment; the infiltration A values are calculated in m3 h−1 from time 14:40 on day D+0

Time                           14:40  14:50  15:00  15:10  15:20  15:30  15:40
CO2 concentration c(t) (ppm)    1102   1090   1075   1065   1057   1048   1032
Infiltration A (m3 h−1)            –  42.83  48.71  44.83  41.13  39.75  43.47

Table 2 CO2 concentration in ppm during the 3 days of the experiment; the infiltration A values are calculated in m3 h−1 from time 14:40 on day D+0

Day D+0
Time                           14:40  15:40  16:40  17:40  18:40  19:40  20:40  21:40  22:40  23:40
CO2 concentration c(t) (ppm)    1102   1032    985    940    895    860    832    795    763    722
Infiltration A (m3 h−1)            –  43.47  37.70  36.14  36.07  34.89  33.37  33.85  33.94  35.60

Days D+1 to D+3
Day                              D+1    D+1    D+1    D+1    D+1    D+1    D+2    D+2    D+2    D+3
Time                            0:40   1:40   2:40   8:40  14:40  20:40   2:40   8:40  14:40  14:40
CO2 concentration c(t) (ppm)     688    657    630    522    475    460    435    425    420    400
Infiltration A (m3 h−1)        36.58  37.45  38.06  39.32  32.19  32.47  32.22  30.13  27.71  24.48

4 Resume

This simple method can be used repeatedly and with any configuration of the ventilation system. For example, it is possible to calibrate the performance of the ventilation used, to determine the condition of the ventilation unit's filters, the effect of the wind on the air infiltration, etc. It is possible to use the obtained data to ensure ventilation that meets the hygienic regulations. In addition to the CO2 concentration sensor, an essential condition is access to the information required to correctly evaluate the suitability of starting the measurement, i.e. the current state of the machine ventilation (set ventilation performance, opening of ventilation ducts, etc.), closure of all windows, meteorological information and of course information about the presence of people. The ideal building is one in which all systems are controlled by one control system, which intelligent houses generally have. The algorithm that needs to be implemented in the control system is very simple: when there are no people present, the initial value of the CO2 concentration c_0 is recorded, and after a certain period of time (in the range of tens of minutes, or even better hours) the concentration is read again. By subsequently substituting these values into formula (5) we can determine the current size of the air infiltration. This measurement can be performed repeatedly, e.g. whenever an alarm is activated in a building.
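A possible shape of this supervisory routine is sketched below; it is only an illustration in Python, not the PLC implementation used by the author, and the callables read_co2_ppm and building_idle are hypothetical stand-ins for the signals (presence, window contacts, ventilation state) that the building control system would provide.

```python
import math
import time

def measure_infiltration_when_idle(read_co2_ppm, building_idle,
                                   volume_m3=420.0, c_e=390.0, wait_hours=1.0):
    """One automatic infiltration measurement, following the algorithm above.

    read_co2_ppm  ... callable returning the current indoor CO2 level [ppm]
    building_idle ... callable returning True while nobody is present, the
                      windows are closed and the ventilation state is known
    Returns the infiltration A [m3/h] per Eq. (5), or None if the idle
    condition was broken during the waiting period.
    """
    if not building_idle():
        return None
    c0 = read_co2_ppm()
    time.sleep(wait_hours * 3600.0)        # tens of minutes to hours
    if not building_idle():
        return None                        # measurement invalidated
    c_t = read_co2_ppm()
    return -(volume_m3 / wait_hours) * math.log((c_t - c_e) / (c0 - c_e))
```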

Table 3 CO2 concentration in ppm during the first 12 h of the experiment; the infiltration A values are calculated in m3 h−1 for all combinations of the times t1 and t2

Time                           14:40  15:40  16:40  17:40  18:40  19:40  20:40  21:40  22:40  23:40   0:40   1:40   2:40
CO2 concentration c(t) (ppm)    1102   1032    985    940    895    860    832    795    763    722    688    657    630

Infiltration A (m3 h−1) between time t1 (rows) and time t2 (columns):

t1 \ t2    15:40  16:40  17:40  18:40  19:40  20:40  21:40  22:40  23:40   0:40   1:40   2:40
14:40      43.47  37.70  36.14  36.07  34.89  33.37  33.85  33.94  35.60  36.58  37.45  38.06
15:40             31.93  32.37  33.60  32.74  31.36  32.25  32.58  34.62  35.82  36.85  37.57
16:40                    33.03  34.44  33.02  31.21  32.31  32.69  35.01  36.30  37.39  38.13
17:40                           35.85  33.01  30.61  32.13  32.62  35.33  36.77  37.94  38.70
18:40                                  30.17  27.98  30.89  31.81  35.23  36.92  38.24  39.06
19:40                                         25.80  31.26  32.36  36.50  38.27  39.58  40.33
20:40                                                36.72  35.64  40.06  41.39  42.34  42.75
21:40                                                       34.57  41.74  42.95  43.75  43.95
22:40                                                              48.91  47.14  46.81  46.30
23:40                                                                     45.38  45.76  45.43
0:40                                                                             46.13  45.46
1:40                                                                                    44.78


Fig. 1 The graph of the CO2 concentration change in the classroom within 3 days

Reference 1. Slavíková, J.: Fyziologie dýchání (in Czech). Praha: Univerzita Karlova, p 54 (1997). ISBN 80 7066-658-7

Evaluation of Surface Hardness Using Digital Microscope Image Analysis Patrik Kutilek, Jan Hejda, Vaclav Krivanek, Petr Volf, and Eva Kutilkova

Abstract The most frequently used method for hardness testing is the progressive loading test. This test measures surface hardness and requires direct measurement of the indentation’s depth. However, the diameter of the impression left by an indent can also be used to determine surface hardness. The authors designed a method and have custom-written a program for measuring and evaluating surface hardness using a digital microscope and digital microscopic image analysis. The proposed procedure and software are based on the calculation of an impression’s diameter left by an indent from digital microscopic images. The evaluation procedure and the software were tested on Ti and TiN samples of biocompatible surfaces. Measurements demonstrated that the calculated hardness results were largely similar in five different operations for measuring the sample. As a result, it seems that evaluation using surface hardness measured by three test operations appears to be sufficient. The new software and procedures proposed allows for the evaluation of surface hardness in a wide range of metal surfaces, while providing credible results. Keywords Surface hardness · Optical microscope · Image analysis · Rockwell hardness · Materials testing P. Kutilek (B) · J. Hejda · P. Volf · E. Kutilkova Faculty of Biomedical Engineering, Czech Technical University in Prague, Sitna Sq. 3105, 272 01 Kladno, Czech Republic e-mail: [email protected] URL: https://www.cvut.cz/en/faculty-of-biomedical-engineering J. Hejda e-mail: [email protected] P. Volf e-mail: [email protected] E. Kutilkova e-mail: [email protected] V. Krivanek Faculty of Military Technology, University of Defence, Kounicova 65, 662 10 Brno, Czech Republic e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 R. Matoušek and J. K˚udela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_20

237

238

P. Kutilek et al.

1 Introduction

Hardness is a key parameter for all material surfaces. For hardness measurement, the Rockwell HR15N scale is generally used [1]. Unfortunately, the current situation in existing institutions provides neither uniform evaluation conditions nor software based on an evaluation of an imprint left by the indenter. International standards valid for the EU, US and other regions describe hardness tests that can be applied to hardness testers. Although each tester allows for a direct hardness measurement depending on the depth of an imprint, it does not enable the impression's area to be used to evaluate the hardness. The use of optical microscopes for determining surface hardness has already been mentioned within the international standards' context. Nevertheless, these devices or suitable software are not commonly used when testing surface hardness [2, 3]. A suitable measurement and evaluation procedure for hardness by means of a digital microscope and versatile digital microscopic image analysis software has not yet been described. So far, only analytical programs and/or digital microscopes as parts of expensive testers have been used. These programs and microscopes are immensely expensive, and moreover, as they are not primarily designed to perform the evaluation in line with international standards, they cannot be applied without their original system. The aim of this article is to describe the methodology and systems used for measurement and evaluation of the Rockwell Hardness tests using a digital microscope and custom-developed image analysis software. The proposed measurement method and software may be used for the evaluation of data obtained by any hardness tester.

2 Materials and Methods

2.1 International Standards for the Evaluation of Rockwell Hardness Using a Microscope

Based on a comparison of international standards, identical measurement conditions and procedures were determined by the authors [4]. The international standards recommend a nominal preliminary test force of 29.42 N and a nominal total test force of 147.1 N. The important characteristics, such as the stylus geometry, are also determined according to international standards [4–7]. The standards define the stylus geometry of the Rockwell C indenter. The recommended stylus geometry for the Rockwell C is a diamond spherical tip of 200 µm. Thus, the recommended spherical diamond tip with the radius of 200 µm (Rockwell C) was used. Every international standard describes the way to create and evaluate more than one surface failure, with five indentations being recommended. The damaged surface can then be evaluated by a microscope displaying imprints left by the indenter and identifying the Rockwell Hardness.


2.2 Measuring Instruments

The basic and inexpensive version of the Revetest Xpress Scratch Tester (RSX-SAE-0000) (CSM Instruments SA) was the only device used to produce impressions left by an indenter, Fig. 1. The system is generally regarded as the paradigm system for surface characteristics. The measured samples can be organic or inorganic, with metallized or passivated layers, with different friction and wear protective coatings. Surfaces can include metals, alloys, glass, minerals, organic materials, etc. In 2013 the cheapest model of the Revetest Xpress Scratch Tester without any supplements (load or indentation depth sensor, video microscope, PC monitor, etc.) cost 30,000 €. The scratch tester's most expensive and important parts are the load sensors, the indentation depth sensors and the feedback-controlled force actuator, which sets the loading rate and indenter speed. Hardness tests can be performed with a load range between 1 and 200 N. The loading rate can be set at rates from 0 to 500 N/min. In this experiment a spherical Rockwell C diamond stylus, i.e. a conical diamond indenter with an angle of 120° and with a spherical tip of 200 µm, was used [4, 6]. The hardness tester comes with software that allows the user to predefine the measurement procedure on a PC (in this case a laptop with Windows 7 was used). The test mode included a progressive load [4–7]. Thus, the tester complies with the standards, above all ISO 6508-1. After the procedure was defined, it was exported using a USB memory stick subsequently inserted into the tester. A simple press of the "start" button on the tester started the test procedure.

Fig. 1 The basic and cheapest version of the Revetest Xpress Scratch Tester, which is only used to create an imprint, but not to calculate the surface hardness (A). The low-cost Dino-Lite USB Digital Microscope with 100× magnification is used to capture images of the impressions (B)


Fig. 2 Section of the user graphic interface for loading the picture of the standard and defining the length of the standard (A.), Dino-Lite calibration sample used as standard (B.) [8]

After creating the impression, the sample was prepared for further observation using the optical microscope. In the case described here, the optical microscope was not a part of the tester. The inexpensive Dino-Lite USB Digital Microscope (Dino-Lite ProX AM4000/AD4000 series) (AnMo Electronics Corp.) with adjustable magnification (from 10× to 200× without cap) and a shortened clear plastic end-cap was used to capture images of the imprints (Fig. 1B). The low-cost professional digital microscope uses a higher-resolution camera (1.3 megapixel, 1280 × 1024 pixels). The camera can save images in JPEG or BMP format. Crucially, the MatLab software can utilize both JPEG and BMP image formats. The imaging results can be viewed on any computer via a USB 2.0 output and the DinoCapture (for Windows) or DinoXcope (for MAC) software. The digital camera offers 8 software-controlled white LEDs which can be used to modify the lighting conditions. The plastic slide (calibration sample CS-20/C20, min. pitch 0.2) was used as a standard for measurement calibration before measuring the impressions (Fig. 2). The Dino-Lite MS35B Rigid Table Top Pole Stand was used to hold the microscope (Fig. 2). The MS35B tabletop stand allows for as much as 20 cm of vertical working space above a sample. The focus block provides 2.5 cm of vertical movement to fine-tune an image for optimum detail. In 2013, the price of the digital microscope did not exceed 500 €, and the stand cost just 90 €. Each piece of software (DinoCapture and the custom-written software for analysis of the imprints) was installed on the same HP Compaq6735s laptop with an AMD Turion X2 dual core processor and the Windows 7 operating system.


2.3 Microscope Image Analysis Software

The new image analysis software was designed to identify the diameter of an imprint left by an indenter and calculate the Rockwell Hardness. Several issues needed to be resolved in the software's new design. One of the most important issues was determining the surface hardness from the diameter of the impression left by an indenter in a digital microscope's images. In order to meet these requirements, the custom-written software program used the images captured by the Dino-Lite Digital Microscope. To determine the diameters of the imprint, the program compared the picture of the standard with the one of the imprint. After making the imprint on a sample with the Revetest Xpress Scratch Tester, the DinoCapture software of the Dino-Lite Digital Microscope took two pictures: the picture of the impression and the picture of the Dino-Lite calibration sample. Both pictures must be taken using the same settings (i.e. using the same magnification, resolution, physical image sizes and contrast) and then analyzed by the image analysis software. First, the graphical user interface was used to load the picture of the standard and then to define its length, see Fig. 2. Using a mouse cursor, the relevant length of the Dino-Lite calibration sample used as a standard was marked out. The real size of the marked section (i.e. the real physical dimension) was then filled in the corresponding row in the GUI. Subsequently, the graphical user interface was used to load the picture of the impression and to define its relevant diameter. The mouse cursor marked out the relevant diameter of a surface failure. The program calculated the real physical diameter of the impression left by the indenter. The computational algorithm was based on the ratio of the real size of an object and the size of the object in an image [8–11]. The real physical diameter D of the impression was calculated using Eq. (1):

D = e \cdot \frac{d}{l_e}    (1)

The e stands for the real physical size of the marked section of the Dino-Lite calibration sample used as a standard, entered by the user. The d is the diameter of the surface failure in the digital microscope's image of the surface. The l_e is the length of the calibration standard in the digital microscopic image of the Dino-Lite's calibration sample. Thus, d and l_e are not real physical lengths of the standard and of the imprint, but represent dimensions of the shapes in the images expressed by the number of pixels. The digital microscope's images need to be of the same size and resolution.

The program is also capable of calculating the hardness in accordance with the Rockwell Hardness 15N (HR15N) scale. To calculate an average value of the diameter for the calculation of hardness, at least three diameters are measured in the image of the imprint. The hardness calculation is based on the size of the calculated average diameter of the impression and the geometry of the indenter.


Moreover, the average value of the diameter must be calculated five times for five different imprints. The procedure for determining the surface hardness using the shape dimensions of the impression left by an indenter is described in detail in [4, 7, 12]. The procedure described can be used to calculate the Rockwell hardness. The calculated values are then displayed in the program's window and can be saved as a simple text file together with the images of the impressions. All calculations are implemented in the custom-written MatLab program. Likewise, the GUI of the program is designed and created in MatLab (MatLab R2010b, Mathworks Inc.).
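The scaling step of Eq. (1) and the averaging of several diameter readings can be illustrated by a short sketch; Python is used here only for brevity (the chapter's own implementation is a MatLab program) and the pixel values are made-up examples, not measured data.

```python
def impression_diameter_mm(d_px, le_px, e_mm):
    """Real diameter D of the impression according to Eq. (1).

    d_px  ... diameter of the impression in the image [pixels]
    le_px ... length of the marked section of the calibration sample
              in its image [pixels]
    e_mm  ... real physical length of that marked section [mm]
    Both images must share magnification, resolution and image size.
    """
    return e_mm * d_px / le_px

# At least three diameter readings per imprint are averaged before the
# hardness is evaluated; the pixel values below are illustrative only.
readings_px = [412, 405, 418]
diameters = [impression_diameter_mm(d, le_px=640, e_mm=1.0) for d in readings_px]
mean_diameter_mm = sum(diameters) / len(diameters)
print(f"average impression diameter: {mean_diameter_mm:.3f} mm")
```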

2.4 Measurement and Evaluation Procedure

All the impressions were made by the basic and most inexpensive version of the Revetest Xpress Scratch Tester. The progressive loading test mode was applied, as it complies with all international standards [4, 7, 12]. In accordance with the international standard (ISO 6508-2), the conditions used for testing were a nominal preliminary test force of 29.42 N and a nominal total test force of 147.1 N. The imprints were evaluated using the digital microscopic image procedure and the new software. The cheap Dino-Lite Digital Microscope with a shortened clear plastic end-cap allowing 100× magnification was used to capture the images of the surface. The pictures of the sample surface and those of the Dino-Lite calibration sample had to be taken using the same magnification, resolution, physical image sizes and contrast. This can be achieved by means of the picture of the calibration sample (i.e. a transparent plastic slide), which is placed on the sample with the surface failure shortly before or immediately after the picture of the surface is taken, see Fig. 2. The sample is always placed in the same position under the microscope's lens, i.e. at the same distance from the lens. The sample with the impression is placed under the microscope with 100× magnification, focused on the specimen and the calibration sample which is placed on the specimen. After the picture of the calibration sample is taken and the transparent plastic slide is removed from the specimen, pictures of the imprints are taken.

The custom program described above uses images captured by the digital microscope to determine the hardness (Fig. 3). Loading the picture, marking the relevant length of the standard in the picture and filling in the value of the real physical size in the row of the GUI are performed during calibration. After the program is calibrated, the evaluation of the impression can take place. The image of the surface is loaded, and the mouse cursor is used to mark off the relevant imprint diameter (Fig. 4). The procedure is repeated five times for each sample surface. For each impression, i.e. surface failure, at least three diameter readings are used to determine the average diameter. The calculated values of the average diameters are then used to calculate the hardness values, which are finally displayed in the program window.


Fig. 3 Section of the graphical user interface for loading the surface pictures and calculating the diameters

Fig. 4 Diagram determining the Rockwell Hardness using the new software

3 Application of Microscope Image Analysis Software and Results

The new software and the designed procedures were verified on samples of Ti and TiN on stainless steel biocompatible surfaces. The surface materials were chosen as an example of commonly used and developed materials for biomedical applications. The recommended standard measurement conditions for the measurement of the Rockwell Hardness (HR15N) by the tester were used. The surface hardness evaluation was completed using the aforementioned measurement and evaluation methods.


Table 1 Table of hardness determined by the new software

Surface material   HR15N1  HR15N2  HR15N3  HR15N4  HR15N5  Average HR15N
TiN                    82      80      84      85      85             83
Ti                     63      68      68      67      71             67

The environmental conditions included a temperature of 20 ◦ C and relative humidity of 55%. Five impressions under the same conditions were made on each sample of the material. Table 1 shows a summary of the hardness identified in each material by the optical microscope and the average diameters of the impressions.

4 Discussion

The imprint diameters and hardness were determined using the method and procedure described above; the results are summarized in Table 1. It was found that the calculated hardness is broadly similar in the five operations on the measured sample. This shows the homogeneity of the surface material and the accuracy and repeatability of the proposed measurement and evaluation procedures. Thus, an evaluation using the diameter of an impression left by an indenter measured by five test operations seems to be sufficient. Three diameters were measured for each impression and the average value of the diameter was determined. In the case of TiN on stainless steel, the hardness is much higher than the Ti hardness. As demonstrated, the new software and procedures allow for the evaluation of surface hardness in a wide range of substrate metals and thin films, while providing credible results. The results must, however, be seen only as a way of demonstrating the ability of the outlined procedures and the new software.

5 Conclusion The new software and designed procedures are based on the use of a microscope. The algorithms for calculating the hardness were designed in accordance with the international standard requirements. The user interface of the software allows the user to identify the imprint’s diameter and hardness using a digital microscope’s images.


No difficulties were encountered with the Ti substrate and TiN layer evaluation using a microscope and the new software. The research tested, demonstrated and proved the application of selected and modified measurement conditions of international standards to measure the surface hardness of biocompatible materials. Consequently, the application of the newly designed measurement and evaluation procedure, and the custom-written program, offers wide applicability in the research of new biocompatible surfaces.

Acknowledgements This work was done in the framework of research project SGS17/108/OHK4/1T/17 sponsored by the Czech Technical University in Prague. The work presented in this article has also been supported by the Czech Republic Ministry of Defence (University of Defence development program "Research of sensor and control systems to achieve battlefield information superiority").

Conflict of Interest None to report.

References 1. Liggett, W.S., Low, S.R., Pitchure, D.J., Song, J.: Capability in Rockwell c scale hardness. J. Res. Natl. Inst. Stand. Technol. 105(4), 511 (2000) 2. Germak, A. Origlia, C.: New possibilities in the geometrical calibration of diamond indenters. In: Proceedings of IMEKO XIX World Congress, vol. 1, pp. 382–385. IMEKO (2009) 3. Kim, J.D., Yoon, M.C., Ryu, J.Y.: Rockwell hardness modeling using indented volume. In: Applied Mechanics and Materials, vol. 152, pp. 312–315. Trans Tech Publications (2012) 4. ISO 6508-1: Metallic Materials—Rockwell Hardness Test—Part 1: Test Method. Geneva (2015) 5. ISO 2039-2: Plastics—Determination of Hardness—Part 2: Rockwell Hardness. Geneva (1987) 6. ISO 6508-2: Metallic Materials—Rockwell Hardness Test—Part 2: Verification and Calibration of Testing Machines and Indenters. Geneva (2015) 7. ASTM E18: Standard Methods for Rockwell Hardness and Rockwell Superficial Hardness of Metallic Materials. Conshohocken (2017) 8. Kutilek, P., Socha, V., Fitl, P., Smrcka, P.: Evaluation of the adhesion strength using digital microscope and international standards. In: Proceedings of the 16th International Conference on Mechatronics—Mechatronika, pp. 519–525 (2014) 9. de La Bourdonnaye, A., Doskoˇcil, R., Kˇrivánek, V., Štefek, A.: Practical experience with distance measurement based on the single visual camera. Adv. Mil. Technol. 7(2), 49–56 (2012) 10. Doskoˇcil, R., Fischer, J., Kˇrivánek, V., Štefek, A.: Measurement of distance by single visual camera at robot sensor systems. In: Maga, D., Štefek, A., Bˇrezina, T. (Eds.) Proceedings of the 15th International Conference of Mechatronika 2012 (Prague, 2012), pp. 143–149. IEEE, Czech Technical University in Prague 11. Kutilek, P., Hozman, J.: Determining the position of head and shoulders in neurological practice with the use of cameras. Acta Polytech. 51(3), 32–38 (2011) 12. Low, S.R. III: NIST Recommended Practice Guide: Rockwell Hardness Measurement of Metallic Materials (2001)

Eigenfrequency Identification of the Hydraulic Cylinder Petr Noskievič

Abstract The paper is focused on the identification of the eigenfrequency of the linear hydraulic servo drive, which consists of the flow control valve and the hydraulic cylinder. The behaviour of the closed loop controlled hydraulic drive is given by the dynamic properties of the control valve and the hydraulic cylinder. The dynamic properties of the hydraulic cylinder determine the structure and tuning of the controller of the closed loop position control system. For that reason it is important to know the eigenfrequency of the cylinder. The eigenfrequency of the cylinder depends on the moving mass and on the stiffness of the hydraulic cylinder, that means it depends on the type of the hydraulic cylinder and on the hydraulic capacities of the cylinder chambers and connecting pipelines. Due to the dependence of the hydraulic capacity on the piston position, the stiffness and the eigenfrequency vary with the piston position. The identification of the eigenfrequency using the self-excited oscillations obtained by arranging a non-linear element in the feedback is presented in the paper. The identified curve describing the eigenfrequency of the cylinder in dependence on the piston position can be used for the controller tuning.

Keywords Identification · Oscillations · Eigenfrequency · Hydraulic cylinder

P. Noskievič (B)
VŠB-Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava-Poruba, Czech Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_21

1 Introduction

Closed loop controlled linear hydraulic drives are very important drives in production equipment in different application areas. To achieve the demanded technological parameters, strong requirements on the drive dynamics are often defined. The design of the closed loop control system needs knowledge of the dynamic characteristics of the main components, the control valve and the hydraulic cylinder. Some of them, mainly the data of the servovalve, can be given in the data sheets, some parameters can be read from the drive specification, and some must be estimated or experimentally identified.


The hydraulic cylinder is characterized as a low damped system with a low frequency, which depends on the type of the cylinder, the geometrical parameters of the cylinder, the mass and also on the hydraulic capacities of the connecting pipelines. Although the oil is less compressible than the air in pneumatic drives, the hydraulic capacities have a great influence on the dynamic properties of the drive. The knowledge of the eigenfrequency and of its course in dependence on the piston position is useful for the design of the controller of the closed loop position control system. The paper describes the identification of the eigenfrequency of the hydraulic drive using the self-excited oscillation obtained by arranging a non-linear element in the feedback.

1.1 Brief Review of the Relay Feedback Identification Methods

The relay feedback identification method is based on the relay feedback method for automatic tuning of the controller introduced in [1]. From the oscillation, which occurs and has a constant period, the critical gain and frequency can be obtained and finally used for the identification of the model parameters. The quality of the relay identification depending on the relay parameters is analyzed in [2, 3]. The application of this method for the identification of systems described by different transfer functions is presented in [4]. The frequency of the oscillation allows the eigenfrequency of the controlled system to be estimated, and this is important for the control of hydraulic drives [5]. The identification of the eigenfrequency of the pneumatic cylinder is presented in [6, 7]; the possibility to identify the rotary and also the linear hydraulic drive is described in [8, 9]. The identification of the linear hydraulic drive with position feedback described in [8] supposes that the servovalve dynamics is better than the hydraulic cylinder dynamics and can be neglected. The influence of the servovalve dynamics on the identification of the eigenfrequency of the hydraulic cylinder has been taken into account in this article.

2 Analysis of the Identified System

The typical structure of the hydraulic servo drive is a serial connection of the flow control valve and the hydraulic cylinder, Fig. 1. The control valve can be of different types which differ in the dynamics, for example the servovalve, the proportional valve or the direct controlled valve. The main property of the control valve is the continuous control of the flow through the variable hydraulic resistances into the chambers of the hydraulic cylinder. The servovalve can be characterized as a well damped dynamic system. The double acting single rod or double rod hydraulic cylinder is connected with the control valve using pipelines that are as short as possible.


Fig. 1 Hydraulic drive with nonlinear element in the feedback

For the system analysis and identification, the non-linear properties of the hydraulic elements can be described by linearized mathematical models. The hydraulic cylinder can be characterized as a low damped second order term with an integrator, described by the transfer function

G_1(s) = \frac{K_M}{T_{0M}^2 s^2 + 2 \xi_M T_{0M} s + 1}.    (1)

The servovalve is described by the transfer function

G_2(s) = \frac{K_{SV}}{T_{0SV}^2 s^2 + 2 \xi_{SV} T_{0SV} s + 1}.    (2)

The block diagram of the basic closed loop position control system is shown in Fig. 2. The dynamic properties of the closed loop system depend on the parameters of the terms in the open loop and mainly on the ratio of the eigenfrequencies [5]

\kappa = \frac{f_{SV}}{f_M} = \frac{\omega_{SV}}{\omega_M} = \frac{T_M}{T_{SV}}.    (3)

Fig. 2 Block diagram of the closed loop position control


The typical course of the normalized critical gain for the proportional controller is shown in Fig. 3. It is apparent that only for a ratio κ higher than approximately 2.5 does the normalized critical gain not vary; it does not depend on the eigenfrequency of the servovalve and depends only on the eigenfrequency of the cylinder. In this case it is possible to estimate the eigenfrequency of the hydraulic cylinder using the self-excited oscillations method. For lower values of κ the estimated eigenfrequency is influenced by the lower eigenfrequency of the servovalve and does not correspond to the eigenfrequency of the cylinder; it describes the whole drive. In the case of higher κ the bandwidth of the servovalve is higher than the natural frequency of the hydraulic cylinder, the dynamics of the servovalve can be neglected, and in the following analysis the servovalve can be described only by the gain K_SV. Then the transfer function of the closed loop system has the form

G(s) = \frac{K_0}{T_{0M}^2 s^3 + 2 \xi_M T_{0M} s^2 + s + K_0},    (4)

where K_0 is the gain of the open loop system,

K_0 = K_R K_{SV} K_{sn} K_M,    (5)

and K_R is the gain of the proportional controller and K_sn is the gain of the transducer. The stability analysis of this linear system allows the critical gain K_0crit at the margin of stability to be expressed as

Fig. 3 Normalized critical gain K 0crit /ωM for the proportional controller in dependence on κ


Fig. 4 Hydraulic stiffness

K_{0crit} = \frac{2 \xi_M}{T_{0M}} = 2 \xi_M \omega_{0M}.    (6)

In this expression ω_0M is the natural frequency of the hydraulic cylinder and ξ_M is the damping ratio of the hydraulic cylinder. These parameters limit the critical gain of the closed loop system, the controller gain and finally also the quality of the control. The formula for the calculation of the natural frequency of the hydraulic cylinder can also be derived using the results of mathematical modelling. Under the assumption of lumped parameters, the spring-mass model of the hydraulic cylinder shown in Fig. 4 can be used. The stiffness of the oil in the cylinder chambers and connected pipelines is given by the formulas

K_{HA} = \frac{K S_A^2}{V_A}, \qquad K_{HB} = \frac{K S_B^2}{V_B},    (7)

K_H = K_{HA} + K_{HB}.    (8)

Finally, the natural frequency of the hydraulic cylinder is given by

f_M = \frac{1}{2\pi} \sqrt{\frac{K}{m} \left( \frac{S_A^2}{S_A x + V_{A0}} + \frac{S_B^2}{S_B (x_{max} - x) + V_{B0}} \right)}.    (9)

The natural frequency of the hydraulic cylinder depends on the piston position; for the double rod cylinder it has its minimum value in the mid-position. For the single rod cylinder the position of the minimum value does not vary significantly. A typical course of the natural frequency in dependence on the piston position for the hydraulic cylinder characterized by the parameters D = 0.08 m, d = 0.063 m, h = 0.5 m and mass m = 300 kg is shown in Fig. 5. This natural frequency is the angular frequency of the non-damped oscillations. The relation between the natural frequency ω_0M and the angular frequency ω_M of the observed oscillations of the system at the margin of stability depends on the damping ratio ξ_M of the hydraulic cylinder and is given by

\omega_M = \omega_{0M} \sqrt{1 - \xi^2}.    (10)

Fig. 5 Natural frequency of the hydraulic cylinder (eigenfrequency [Hz] versus piston position [m]; calculated values marked *)
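Equation (9) is easy to evaluate numerically. The sketch below (illustrative Python, not code from the paper) uses the cylinder data quoted for Fig. 5 (D = 0.08 m, d = 0.063 m, stroke h = 0.5 m, m = 300 kg); the oil bulk modulus K and the dead volumes V_A0, V_B0 are assumed values, since the chapter does not state them, so the printed frequencies only indicate the shape of the dependence and will not reproduce Fig. 5 exactly.

```python
import math

def natural_frequency_hz(x, K, m, S_A, S_B, x_max, V_A0, V_B0):
    """Natural frequency f_M of the cylinder at piston position x, Eq. (9)."""
    K_H = K * (S_A**2 / (S_A * x + V_A0) + S_B**2 / (S_B * (x_max - x) + V_B0))
    return math.sqrt(K_H / m) / (2.0 * math.pi)

# Single-rod cylinder parameters quoted for Fig. 5.
D, d, h, m = 0.08, 0.063, 0.5, 300.0
S_A = math.pi * D**2 / 4.0                 # piston-side area [m2]
S_B = math.pi * (D**2 - d**2) / 4.0        # rod-side annulus area [m2]
K = 1.4e9                                  # assumed oil bulk modulus [Pa]
V_A0 = V_B0 = 2.0e-4                       # assumed dead volumes [m3]

for x in (0.05, 0.25, 0.45):
    f = natural_frequency_hz(x, K, m, S_A, S_B, h, V_A0, V_B0)
    print(f"x = {x:.2f} m: f_M = {f:.1f} Hz")
```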

The hydraulic cylinder is characterized by low damping. For ξ_M = 0.2 the expression (10) takes the form

\omega_M \doteq 0.98 \, \omega_{0M}.    (11)

For ξ_M less than 0.2 the difference between ω_M and ω_0M is less than 2%. This difference is smaller than the error that typically occurs in experimental identification. For this reason, it is possible in some cases to identify the angular frequency ω_M as the natural frequency ω_0M.

3 Identification Using the Self-excited Oscillations

The identification method based on the self-excited oscillations is very simple and allows the angular frequency of the oscillations at the margin of stability to be identified in a very simple way. The oscillations occur after arranging the nonlinear element in the feedback. It is possible to obtain the same result after placing the nonlinear element in the forward path of the closed loop system. It is suitable to use an element with a relay characteristic as the nonlinear element, see Fig. 6.

Fig. 6 Relay characteristic of the non-linear element


Fig. 7 Self-excited oscillations of the piston of the hydraulic cylinder. Upper plot—piston position, lower plot—output from the non-linear element

The closed loop hydraulic positioning system with the nonlinear element in the feedback is shown in Fig. 1. After the oscillations occur, the typical form of the output signal is shown in Fig. 7. Using the theory of this identification method, described for example in [4], the critical gain of the closed loop system is given by

r_{0crit} = \frac{4M}{\pi A_y}.    (12)

The values of the variables A_y and M can be obtained from the course of the self-excited oscillations and from the output of the relay characteristic, see Figs. 6 and 7. The angular frequency of the critical oscillations can be expressed as

\omega_{crit} = \frac{2\pi}{T_{crit}}.    (13)

In accordance with the expression (6) for the critical gain of the hydraulic positioning system, the values given by the formulas (12) and (6) should be the same, that means

K_{0crit} = r_{0crit},    (14)


resp.

2 \xi_M \omega_{0M} = r_{0crit},    (15)

and the critical frequency or period determines the angular frequency of the piston oscillations ω_M, which can be calculated from

\omega_M = \frac{2\pi}{T_{crit}}.    (16)

Using the expression (10), the natural frequency of the hydraulic cylinder is given by

\omega_{0M} = \frac{2\pi}{T_{crit} \sqrt{1 - \xi_M^2}}.    (17)

From the expression (14) the damping ratio of the hydraulic cylinder can be calculated:

\xi_M = \frac{r_{0crit}}{2 \omega_{0M}}.    (18)

After substitution of the expression (17) into the expression (18) we obtain the expression for the damping ratio which respects the difference between the natural frequency ω_0M and the angular frequency ω_M of the hydraulic cylinder:

\xi_M = \frac{r_{0crit} T_{crit}}{\sqrt{16\pi^2 + T_{crit}^2 r_{0crit}^2}}.    (19)

Provided that the described difference (11) between ω_0M and ω_M for lower damping is accepted, the simplified expressions can be used for the direct calculation of the natural frequency and damping ratio of the hydraulic cylinder from the measured values:

\omega_{0M} \cong \omega_M = \frac{2\pi}{T_{crit}}    (20)

and

\xi_M = \frac{r_{0crit} T_{crit}}{4\pi}.    (21)
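The evaluation chain of Eqs. (12), (19) and (17) can be summarised in a few lines; the sketch below is illustrative Python, and the numerical values of T_crit, A_y and M are placeholders of the kind read from a record such as Fig. 7, not values reported in the paper.

```python
import math

def relay_identification(T_crit, A_y, M):
    """Cylinder parameters from the relay (self-excited oscillation) test.

    T_crit ... period of the sustained oscillation [s]
    A_y    ... amplitude of the piston position oscillation
    M      ... magnitude of the relay output
    Returns (omega_0M [rad/s], xi_M [-]).
    """
    r0_crit = 4.0 * M / (math.pi * A_y)                               # Eq. (12)
    xi_M = r0_crit * T_crit / math.sqrt(                              # Eq. (19)
        16.0 * math.pi ** 2 + (T_crit * r0_crit) ** 2)
    omega_0M = 2.0 * math.pi / (T_crit * math.sqrt(1.0 - xi_M ** 2))  # Eq. (17)
    return omega_0M, xi_M

# Placeholder readings (illustrative only, not the paper's measured data).
omega0, xi = relay_identification(T_crit=0.0159, A_y=0.0005, M=0.08)
print(f"f_0M = {omega0 / (2.0 * math.pi):.1f} Hz, damping ratio xi_M = {xi:.2f}")
```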


Fig. 8 Simulation model of the hydraulic drive with the relay in the feedback

4 Application of the Method

The presented method was applied for the experimental identification of the eigenfrequency of the hydraulic cylinder using the simulation of the nonlinear simulation model of the hydraulic drive created in MATLAB-Simulink [10]. The structure of the simulation model of the identification experiment, built from subsystems of the servovalve and the hydraulic cylinder, is shown in Fig. 8. The piston position is controlled using the proportional controller and position feedback. At the given time, after achieving the desired position, the command value of the controller is switched off and the relay in the feedback is switched on. The course of the drive response and of the occurring self-excited oscillations is shown in Fig. 7. The simulation was done for nine different piston positions. From the stored responses the magnitude and period of the oscillation were evaluated and afterwards, using the presented formulas, the eigenfrequency was calculated. The results achieved for the drive characterized by the servovalve frequency fsv = 150 Hz, ratio κ ≈ 2.5, are shown in Figs. 9 and 10; the results achieved for the control valve frequency fsv = 45 Hz, κ ≈ 0.7, are presented in Figs. 11 and 12. In this case the identified eigenfrequency is influenced by the lower eigenfrequency of the control valve, which works as a low pass filter.
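The author's experiment was built in MATLAB-Simulink on the nonlinear drive model; purely as an illustration of the principle, the following Python sketch runs a relay test on the linearised loop G2(s)·G1(s)·1/s with unit gains, where fsv = 150 Hz is taken from the text and the remaining parameter values (fm, damping ratios, relay magnitude) are assumptions, and extracts the limit-cycle period and amplitude needed by Eqs. (12) to (21).

```python
import numpy as np

def simulate_relay_test(f_sv=150.0, xi_sv=0.7, f_m=65.0, xi_m=0.25,
                        M=1.0, t_end=0.5, dt=1e-5):
    """Relay test on the linearised loop G2(s)*G1(s)*1/s with unit gains.

    The relay acts on the position error (the paper notes the same result is
    obtained with the non-linear element in the feedback). Returns the
    limit-cycle period T_crit [s] and the amplitude A_y of the piston
    position about the set-point w = 0.
    """
    w_sv, w_m = 2.0 * np.pi * f_sv, 2.0 * np.pi * f_m
    q = qd = v = vd = y = 0.0          # valve output, cyl. velocity, position
    prev_e, t_cross, y_log = 0.0, [], []
    for k in range(int(t_end / dt)):
        e = -y                                     # error w - y, with w = 0
        u = M if e > 0.0 else -M                   # ideal relay
        # servovalve, Eq. (2):  q'' + 2 xi_sv w_sv q' + w_sv^2 q = w_sv^2 u
        qdd = w_sv ** 2 * (u - q) - 2.0 * xi_sv * w_sv * qd
        # cylinder, Eq. (1):    v'' + 2 xi_m w_m v' + w_m^2 v = w_m^2 q
        vdd = w_m ** 2 * (q - v) - 2.0 * xi_m * w_m * vd
        qd += qdd * dt
        q += qd * dt
        vd += vdd * dt
        v += vd * dt
        y += v * dt                                # integrator -> position
        if prev_e <= 0.0 < e:                      # rising zero crossing
            t_cross.append(k * dt)
        prev_e = e
        y_log.append(y)
    T_crit = float(np.mean(np.diff(t_cross[-10:])))   # settled limit cycle
    tail = y_log[len(y_log) // 2:]
    A_y = 0.5 * (max(tail) - min(tail))
    return T_crit, A_y

T_crit, A_y = simulate_relay_test()
print(f"T_crit = {T_crit * 1e3:.2f} ms, A_y = {A_y:.4g}")
```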

5 Conclusions

The application of the identification method based on the self-excited oscillation using the non-linear element in the feedback was introduced in this paper. The method was used for the identification of the eigenfrequency of the hydraulic cylinder, which

Fig. 9 Identified eigenfrequency of the hydraulic drive characterized by the ratio κ ≈ 2.5, fsv = 150 Hz (identified eigenfrequency marked +, natural eigenfrequency marked ×; eigenfrequency [Hz] versus piston position [m])


Fig. 10 Identified damping ratio of the hydraulic drive characterized by the ratio κ ≈ 2.5, fsv = 150 Hz

is one of the two main components of the linear hydraulic drive operated in a closed loop position control system. The second element is the flow control valve, which can be of different types and can differ in the dynamic properties. The focus on the influence of the dynamic properties of the servovalve, characterized by its eigenfrequency, on the efficiency of this method to identify the cylinder eigenfrequency is the next contribution of this paper. The eigenfrequency of the cylinder can be calculated

Fig. 11 Identified eigenfrequency of the hydraulic drive characterized by the ratio κ ≈ 0.7, fsv = 45 Hz (identified eigenfrequency marked +, natural eigenfrequency marked ×)


Fig. 12 Identified damping ratio of the hydraulic drive characterized by the ratio κ ≈ 0.7, fsv = 45 Hz


using the known formula mentioned above already in the design phase, before the cylinder is realized. If the self-excited oscillation method is used for the experimental identification of the hydraulic drive, the eigenfrequency of the cylinder which corresponds to the calculated and expected value can be obtained only if the dynamics of the control valve is better than that of the cylinder. Otherwise the results are influenced by the servovalve dynamics, and then the result of the identification is an estimate of the eigenfrequency of the whole linear hydraulic drive as a serial connection of the control valve and the hydraulic cylinder.

The presented results were achieved using computer simulation of the closed loop control system with the relay in the feedback. From the simulated responses, the values of the critical period and amplitude were automatically evaluated and afterwards the 2nd order model parameters—the eigenfrequency and damping ratio—were calculated. The influence of the control valve properties on the identification is also observable in the calculated values of the damping ratio. If the valve dynamics cannot be neglected, the damping ratio is higher than the expected value for the hydraulic cylinder. The simplicity of the method is its great advantage. However, its use on a real system—a hydraulic drive connected with a machine or technological equipment—must be evaluated separately for each industrial application to avoid damage to the equipment after the self-excited oscillation occurs.

Acknowledgements This work was supported by the European Regional Development Fund in the Research Centre of Advanced Mechatronic Systems project, project number CZ.02.1.01/0.0/0.0/16_019/0000867 within the Operational Programme Research, Development and Education.

References 1. Åström, K.J., Hägglund, T.: Automatic tuning of simple regulators with specification on phase and amplitude margins. Automatica 20, 645–651(1984) 2. Prokop, R., Korbel, J., Matuš˚u, R.: Relay feedback identification of dynamical SISO systems— analysis and settings. Latest Trends Syst II, 450–454. ISBN 978-1-61804-244-6 3. Korbel, J., Prokop. R.: Accuracy of relay identification depending on relay parameters. In: 25th DAAAM International Symposium on Intelligent Manufacturing and Automation, pp. 370– 375. Elsevier Ltd. 4. Macháˇcek, J.: Identifikace soustav pomocí nelinearity ve zpˇetné vazbˇe (Identification using the nonlinearity in the feedback. In Czech.), Automatizace 41(9), 559–564. ISSN 0005-125X 5. Noskieviˇc, P.: Auswahlkriterium der Reglerstruktur eines lagegeregelten elektrohydraulischen Antriebes (Criterion for selection of control law for hydraulic position servo drives). O+P: Zeitschrift für Fluidtechnik. 39(1), 49–51 6. Noskieviˇc, P.: Simulation of a pneumatic servo drive. In: Proceedings of the 5th Bergen International Workshop on Advances in Technology (Chap. 6), 14 pp., Bergen University College, ISBN 82-7709-073-0 (2004) 7. Noskieviˇc, P.: Identification of the pneumatic servo system using the self-excited oscillations. In: Proceedings of the 6th JFPS International Symposium on Fluid Power, pp. 352–357, Tsukuba, Japan (2005)


8. Ichiyanagi, T., Nishiumi, T.: On line identification of hydraulic servo actuators by the selfexcited oscillation method (Application to angular velocity control system). Int. J. Fluid Power 12(2), 5–14 (2011) 9. Ichiyanagi, T., Nishiumi, T., Kuribayashi, T.: Real time system identification of a hydraulic servo motor by the revised self-excited oscillation method. In: 21st International Conference on Hydraulics and Pneumatics, pp. 55–62, VŠB-Technical University of Ostrava, ISBN 97880-248-2430-7, Ostrava (2011) 10. Noskieviˇc, P.: Modelování a identifikace systém˚u (System Modeling and Identification, In Czech). I.vydání, Montanex a.s., Ostrava. ISBN 80-7227-030-2 (1999)

Robotic System for Mapping of the Environment for Industry 4.0 Applications Emília Bubeníková, Rastislav Pirník, Marián Hruboš, and Dušan Nemec

Abstract The article deals with the design of a robotic system for the exploration of an unknown environment. The system has to be able to map the environment and collect information about its surroundings, which can be used in Industry 4.0. Mapping utilizes the robot's proximity sensor system without using cameras. Such an approach is more complex, but it is not as sensitive to the ambient light as camera systems are. Supplementary information (e.g. temperature, carbon monoxide or carbon dioxide concentration …) is collected using the Waspmote technology by Libelium, which is a modular system for sensoric networks, easily extendible by almost any desired sensor. The main aim of the article is the hardware and software design of the robot's sensor system sensing nearby obstacles.

Keywords Robotic system · Environment map · Industry 4.0

E. Bubeníková · R. Pirník (B) · M. Hruboš · D. Nemec
Department of Control and Information Systems, Faculty of Electrical Engineering, University of Žilina, 01026 Žilina, Slovak Republic
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
R. Matoušek and J. Kůdela (eds.), Recent Advances in Soft Computing and Cybernetics, Studies in Fuzziness and Soft Computing 403, https://doi.org/10.1007/978-3-030-61659-5_22

1 Introduction

Mobile robotics influences many aspects of human life. Most applications of mobile robots can be found in industry, research, surveillance and security, the military, hospitals, etc. [1, 2]. In industry, mobile robots are mainly used for the transportation of materials and products across the manufacturing process [3]. Robotic stores [4, 5], where robots optimize the placement of individual items and automatically deliver them to the production machines, are nowadays widely used. Another reason for using mobile robots is a dangerous or uncomfortable environment, e.g. factories with loud noise, high temperature, poisonous gasses, etc. Robots may work individually; their integration with human operators and with other robots is a current problem. Besides the many tasks solved by the robot, the main task is the autonomy of the whole system; therefore the system has to be capable of conducting the given tasks without the intervention of the human operator.


In mobile robotics, autonomy means the ability to move and navigate in an unknown environment while avoiding collisions with obstacles. This brings a research challenge for new methods of movement control based on perception of the environment. Currently, the most used measurement methods utilize laser range finders, cameras and other optical sensors [6–9]. Robotic devices, drones and autonomous vehicles, digital assistants and artificial intelligence are the technologies that will provide the next phase of development of applications called IoT. The combination of these disciplines makes it possible to develop autonomous systems combining robotics and machine learning. This new hyperconnected world offers many benefits to businesses and consumers, and the processed data produced by hyperconnectivity allow stakeholders in IoT value-network ecosystems to make smarter decisions and provide a better customer experience. Artificial intelligence (AI), robotics, machine learning, and swarm technologies will provide the next phase of development of IoT applications such as the Internet of Mobile Things (IoMT), the Autonomous Internet of Things (A-IoT), the Autonomous System of Things (ASoT), the Internet of Autonomous Things (IoAT), the Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which advance by using IoT technology [10–12].

In order to plan the movement of a mobile robotic system precisely, it is crucial to know where the robot is located within the environment and what the environment itself looks like [13]. Therefore, the localization of the robot and the mapping of the environment are the two most important requirements for the correct operation of a mobile robot. The environment is represented by a two- or three-dimensional map; the position of the robot is represented by its coordinates or by its distance from known features in the environment. The article [14] reviews classical path-planning techniques, and [15] reviews the newest map-learning and path-planning strategies in mobile robot navigation. An autonomous robotic system integrates solutions for the following problems (see Fig. 1):

• localization,
• path planning,
• motion control,

Fig. 1 Block diagram of the mobile robot [16]


• obstacle avoidance.

Many works, based on complete knowledge of the robot and the environment, use a global planning method such as artificial potential fields, connectivity graphs, cell decomposition, etc. These methods build paths (sets of sub-goals) which are free of obstacles [17–21].

Under the term map we understand a representation of the environment as a list of objects and their positions, or vice versa, a list of locations with defined properties, M = (m_1, m_2, …, m_N). The map determines empty areas, which are suitable for the movement of the robot, and obstacles, which pose a risk of collision during the movement of the robot. The records of the list can be indexed in two ways:

• Feature-based maps: Each element m_i of the list M represents the properties (position, dimensions) of one object within the environment, while i is its identifier (index). The map is therefore a list of positions of the key objects (landmarks). In order to identify and distinguish landmarks, a priori knowledge about the environment is required, which is the main disadvantage of this type of representation.
• Location-based maps: Each element m_i of the list M represents one location within the environment, while i is the identifier of the location and m_i represents the occupancy of that location. In a planar environment, the identifier has the form of Cartesian coordinates (x, y). The main disadvantage of such maps is their large volume (high memory demand).

Under the term mapping we understand the storage of the measured data in a suitable data structure after their transformation into a selected (environment-global) coordinate system. Plain direct storage of the measured data is inefficient in the long term due to high computational and memory demands. In the current literature we may find the following types of maps:

• Discretized mapped space: the space is divided into a finite number of elements (grid maps), while each element holds the information whether it is occupied or not (a minimal grid-map sketch is given after this list).
• Set of important points: only the important points of the environment measured by the robot's sensors are stored. Each element of the map holds the information about its position and the shape and profile of its surroundings.
• Topological map: describes the mapped space by its geometric properties. Each element of the map holds the position of its points and defines the interconnections among them.
• Metric map: points in the space are stored as vectors. We may store all measured points or only the significant ones. The main disadvantage is the sensitivity to sensor noise, since the precise position of a single point is difficult to estimate.
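As a rough illustration of the location-based (grid-map) representation described above, the following sketch stores occupancy values in a fixed-size two-dimensional array and converts between world coordinates and cell indices. It is only a minimal example: the cell size, the map dimensions and the three-state occupancy encoding (free/occupied/unknown) are assumptions made for this illustration, not parameters of the robot's actual implementation.

```c
#include <stdio.h>

/* Hypothetical grid-map parameters (not taken from the article). */
#define MAP_W      200      /* number of cells in x              */
#define MAP_H      200      /* number of cells in y              */
#define CELL_SIZE  0.05f    /* cell edge length in metres (5 cm) */

typedef enum { CELL_UNKNOWN = -1, CELL_FREE = 0, CELL_OCCUPIED = 1 } cell_state_t;

typedef struct {
    signed char cell[MAP_H][MAP_W];   /* occupancy of each location */
} grid_map_t;

/* Initialise all cells as unknown. */
static void map_init(grid_map_t *m) {
    for (int y = 0; y < MAP_H; y++)
        for (int x = 0; x < MAP_W; x++)
            m->cell[y][x] = CELL_UNKNOWN;
}

/* Convert a world coordinate (metres) into cell indices; returns 0 if outside the map. */
static int map_world_to_cell(float wx, float wy, int *cx, int *cy) {
    *cx = (int)(wx / CELL_SIZE);
    *cy = (int)(wy / CELL_SIZE);
    return (*cx >= 0 && *cx < MAP_W && *cy >= 0 && *cy < MAP_H);
}

/* Mark the cell containing the point (wx, wy) with the given state. */
static void map_set(grid_map_t *m, float wx, float wy, cell_state_t s) {
    int cx, cy;
    if (map_world_to_cell(wx, wy, &cx, &cy))
        m->cell[cy][cx] = (signed char)s;
}

int main(void) {
    static grid_map_t map;            /* static: too large for the stack on a small MCU */
    map_init(&map);
    map_set(&map, 1.23f, 0.87f, CELL_OCCUPIED);   /* an obstacle detected at (1.23 m, 0.87 m) */
    printf("cell state: %d\n", map.cell[(int)(0.87f / CELL_SIZE)][(int)(1.23f / CELL_SIZE)]);
    return 0;
}
```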


Localization is the estimation of the current position of the robotic system within the current static map of the environment based on sensor readings. Localization methods can be categorized into absolute and relative methods. While absolute localization methods utilize only the current sensor readings, relative methods require knowledge of the previous position of the robot. Some localization methods require a map of the environment (the position is estimated with respect to the a priori known map), other localization methods use only the readings of the sensors. Mathematically, localization is a search for a suitable transformation of the measured points into the coordinate system of the map while achieving the maximum overlap. There are two possible ways: we may try to match all currently measured points with the map, or we may match only the significant features. Both ways have their pros and cons. The ideal approach combines them: first, we match the significant points in order to roughly estimate the position, then we match all measured points to achieve precise localization (a simple overlap-scoring sketch is given at the end of this section).

Nearby obstacles are commonly sensed by ultrasonic (US) range finders. These sensors have many convenient properties, e.g. simple data processing, safety and low cost. On the other hand, the measurements of ultrasonic sensors are distorted by errors arising from their working principle (e.g. secondary reflections …). Infrared (IR) proximity sensors can also be used, because they share the convenient properties of ultrasonic sensors, but their low working range limits their practical deployment in mobile robotics. The goal of our proposed design is to utilize the advantages and eliminate the mentioned disadvantages of both sensor types. The result is a sensor system fusing measurements from IR and US sensors together.

In order to achieve autonomy of the mobile robotic system, navigation is required in addition to localization. The main task of navigation is to plan a trajectory from the current position to the target position with respect to known obstacles while minimizing the cost of the trajectory (minimal time, minimal distance …). In some cases, only static obstacles occur, but in many cases we also have to consider moving obstacles.
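To make the maximum-overlap formulation above concrete, the sketch below scores a candidate robot pose by transforming the measured points into the map frame and counting how many of them fall into occupied cells of a grid map; the pose with the highest score would be kept. For self-containedness the map here is a tiny hard-coded array, and the brute-force scoring is an assumption made only for illustration: the article does not prescribe a particular matching algorithm.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical, simplified occupancy grid (1 = occupied), 0.1 m cells. */
#define MAP_W 8
#define MAP_H 8
#define CELL  0.1f
static const unsigned char MAP[MAP_H][MAP_W] = {
    {0,0,0,0,0,0,0,0},
    {0,0,0,1,1,1,0,0},
    {0,0,0,0,0,1,0,0},
    {0,0,0,0,0,1,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
};

typedef struct { float x, y, theta; } pose_t;   /* candidate robot pose in the map frame */

/* Count how many measured points (given in the robot frame) land in occupied cells
 * after being transformed by the candidate pose, i.e. the "overlap" of scan and map. */
static int overlap_score(pose_t p, const float pts[][2], int n) {
    int score = 0;
    for (int i = 0; i < n; i++) {
        /* rotate by theta and translate into the map (global) coordinate system */
        float mx = p.x + cosf(p.theta) * pts[i][0] - sinf(p.theta) * pts[i][1];
        float my = p.y + sinf(p.theta) * pts[i][0] + cosf(p.theta) * pts[i][1];
        int cx = (int)(mx / CELL), cy = (int)(my / CELL);
        if (cx >= 0 && cx < MAP_W && cy >= 0 && cy < MAP_H && MAP[cy][cx])
            score++;
    }
    return score;
}

int main(void) {
    /* Two obstacle points measured by the range sensors, in the robot frame (metres). */
    const float scan[][2] = { {0.30f, 0.00f}, {0.30f, 0.10f} };
    pose_t a = {0.25f, 0.15f, 0.0f};   /* two candidate poses to compare */
    pose_t b = {0.05f, 0.65f, 0.0f};
    printf("score(a) = %d, score(b) = %d\n",
           overlap_score(a, scan, 2), overlap_score(b, scan, 2));
    return 0;
}
```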

2 HW and SW Design of the Robotic System

The hardware and software requirements for the designed mobile robotic system were the following:

• Space mapping should be done using a combination of sensors (proximity reflective sensors, infrared proximity sensors, laser range finders, ultrasonic distance sensors).
• The robot has to be capable of sensing and processing its own physical properties (e.g. temperature, power consumption, acceleration, direction of movement).
• Detection of subsystem malfunctions.
• The subsystems have to communicate with each other. Communication between the robot and the ground station should be done via a WiFi network.
• The robot has its own independent propulsion system.
• Sensing of the environmental characteristics will be done using the Meshlium technology.


2.1 Hardware Design

The constructed robotic system (Fig. 2) has a chassis with a circular footprint, which is more convenient for turning in tight spaces. Propulsion is provided by two stepper motors combined with an omnidirectional non-driven ball wheel. There are 12 modulated proximity reflective sensors working in the IR spectrum all around the circumference of the robot. The sensors are capable of detecting close objects up to a distance of 20 cm. At the front and rear side there are two sensor heads, each equipped with an ultrasonic distance sensor SRF-04 and an infrared sensor GP2Y0A02YK0F and driven by a stepper motor. The heading and attitude of the robot are estimated using a magnetic compass and an accelerometer. Temperature sensors, placed on both sides of the main PCB, are used to detect overheating of the power controllers of the stepper motors.

Fig. 2 Placement of sensors and propulsion system on the real robot [22]

Since the used MCU (microcontroller unit) had little RAM, we added external SRAM memory; a MicroSD card is used for storing the program, and an EEPROM memory for storing the configuration and calibration parameters. The block diagram of the interconnections among the motors and the MCU via the controllers is shown in Fig. 3. The control unit uses the Udoo Quad development board with the AT91SAM3X8E processor from Microchip. Its main advantages are the built-in SRAM memory, the 84 MHz clock with low power consumption, and the variety of I/O ports and communication interfaces required for the connection of peripherals.

The robot is driven by stepper motors, which, compared to DC and servo motors, do not require an additional external sensor of the motor's position. The distance travelled by each wheel of the robot is easily computed by counting the steps of the stepper motor; only the step count, the radius of the wheel and the angular size of one step need to be known (see the sketch below). If DC motors were used, the distance would have to be computed by integrating the motor's speed over time, which would introduce additional cumulative errors. Another advantage of stepper motors is the ability of precise speed control, since the speed of a stepper motor is determined by the switching frequency of its coils.
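As a minimal illustration of the step-counting odometry described above, the following sketch converts the step counts of the left and right motors into travelled distances and performs a simple differential-drive pose update. The steps-per-revolution, wheel radius and wheel base used here are placeholder values, and the differential-drive model is a plausible but assumed kinematic model, not the robot's actual firmware.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Placeholder geometry, NOT the real robot's parameters. */
#define STEPS_PER_REV 2048.0   /* full steps per wheel revolution (assumed gearing)  */
#define WHEEL_RADIUS  0.03     /* wheel radius in metres                             */
#define WHEEL_BASE    0.20     /* distance between the two driven wheels in metres   */

/* Distance travelled by one wheel: steps * (angular size of one step) * wheel radius. */
static double wheel_distance(long steps) {
    double step_angle = 2.0 * M_PI / STEPS_PER_REV;   /* angular size of one step [rad] */
    return (double)steps * step_angle * WHEEL_RADIUS;
}

/* Differential-drive pose update from the step counts of both motors. */
static void update_pose(double *x, double *y, double *theta, long steps_l, long steps_r) {
    double dl = wheel_distance(steps_l);
    double dr = wheel_distance(steps_r);
    double d  = 0.5 * (dl + dr);            /* distance travelled by the robot's centre */
    double dt = (dr - dl) / WHEEL_BASE;     /* change of heading                        */
    *x     += d * cos(*theta + 0.5 * dt);
    *y     += d * sin(*theta + 0.5 * dt);
    *theta += dt;
}

int main(void) {
    double x = 0.0, y = 0.0, theta = 0.0;
    update_pose(&x, &y, &theta, 1024, 1024);  /* both wheels: half a revolution forward */
    printf("x = %.3f m, y = %.3f m, theta = %.3f rad\n", x, y, theta);
    return 0;
}
```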



Fig. 3 Block diagram of the connection between stepper motors and MCU [22]

The switching technique can also influence the size of one step, the overall torque and the power consumption. A disadvantage of stepper motors is their lower speed compared to DC motors. On the other hand, DC motors can be regulated using PWM, with the necessary feedback control provided by the MCU. We have used the unipolar stepper motor 28BYJ-48. The stepper motors are driven by the DRV8805 driver from Texas Instruments. Its allowed power supply voltage is in the range 8.2–60 V, and the maximal current per phase (coil) is 1.5 A. The driver is designed to be controlled by 3.3 V TTL logic signals. It supports three different switching modes: 2-phase (full stepping), 1–2-phase (half stepping) and single phase (the so-called wave mode). The driver is initialized by a 20 µs long RESET signal. The mode of the driver is set by the SM0 and SM1 pins, and the direction is determined by the DIR pin. After the initialization, the driver executes one step on each rising edge of the signal on the STEP pin (a minimal control sketch is given below). Additional information about the voltage levels and timing of the driver can be found in the manufacturer's datasheet [21].
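The following sketch illustrates the control sequence just described (RESET pulse, mode and direction pins, one step per rising edge on STEP). The GPIO and delay helpers, their names and the pin numbers are hypothetical placeholders for whatever HAL the MCU firmware provides, and the SM0/SM1 encoding of the stepping mode is assumed; the code is a sketch under these assumptions, not the robot's actual firmware.

```c
#include <stdint.h>
#include <stdio.h>

/* Stub GPIO/delay helpers so the sketch compiles on a PC; on the real MCU these
 * would be replaced by the HAL calls of the firmware (names are hypothetical). */
static void gpio_write(int pin, int level) { printf("pin %d -> %d\n", pin, level); }
static void delay_us(uint32_t us)          { (void)us; /* busy-wait on real hardware */ }

/* Hypothetical pin assignment (the article does not list the pin numbers). */
enum { PIN_RESET = 10, PIN_SM0 = 11, PIN_SM1 = 12, PIN_DIR = 13, PIN_STEP = 14 };

/* Initialize the DRV8805: apply a 20 us RESET pulse, then select the stepping
 * mode (SM0/SM1) and the direction (DIR), as described in the text above. */
static void drv8805_init(int sm0, int sm1, int dir_forward) {
    gpio_write(PIN_RESET, 1);
    delay_us(20);                 /* 20 us long RESET signal          */
    gpio_write(PIN_RESET, 0);
    gpio_write(PIN_SM0, sm0);     /* full / half / wave stepping      */
    gpio_write(PIN_SM1, sm1);
    gpio_write(PIN_DIR, dir_forward);
}

/* Execute n steps; the driver steps on each rising edge of STEP.  The pulse
 * period determines the switching frequency and therefore the motor speed. */
static void drv8805_step(uint32_t n, uint32_t period_us) {
    for (uint32_t i = 0; i < n; i++) {
        gpio_write(PIN_STEP, 1);          /* rising edge -> one step */
        delay_us(period_us / 2);
        gpio_write(PIN_STEP, 0);
        delay_us(period_us / 2);
    }
}

int main(void) {
    drv8805_init(0, 0, 1);       /* assumed: SM0 = SM1 = 0 selects full stepping, forward */
    drv8805_step(4, 1000);       /* 4 steps at a 1 kHz step rate                          */
    return 0;
}
```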

2.1.1 Sensor Subsystem for Measuring Distance

The limiting parameters of each distance sensor are its working range and angle. If we tried to cover the whole circumference of the robot by static sensors, even a large number of sensors would not cover the surroundings completely (there would still be some blind angles). Increasing the number of sensors also increases the overall cost of the system. As static sensors, it is possible to use proximity reflective sensors, which detect objects within a couple of centimeters from the robot.

Distance is measured by three types of sensors: the ultrasonic sensor SRF-04, proximity reflective sensors in the IR spectrum and the IR triangulating sensor GP2Y0A02YK0F. Each sensor has a non-zero error, but using sensor fusion it is possible to achieve a more precise estimate of the distance. Thanks to the sensor fusion, it is also possible to detect a malfunction of a sensor (a simple fusion sketch is given below). The placement of the sensors on the real robot can be seen in Fig. 2 [22].
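As a rough illustration of such a fusion, the sketch below combines the distance estimates of the ultrasonic and IR sensors by inverse-variance weighting and flags a possible sensor malfunction when the two readings disagree by more than a threshold. The variances, the threshold and the weighting scheme are assumptions made only for this example; the article does not specify the fusion algorithm actually used.

```c
#include <math.h>
#include <stdio.h>

/* Assumed standard deviations of the two sensors (placeholders, in cm). */
#define SIGMA_US   1.0    /* ultrasonic SRF-04                       */
#define SIGMA_IR   3.0    /* IR triangulating sensor                 */
#define FAULT_GAP 20.0    /* disagreement threshold in cm (assumed)  */

typedef struct {
    double distance_cm;   /* fused distance estimate                 */
    int    fault;         /* 1 if the two sensors disagree strongly  */
} fused_range_t;

/* Inverse-variance weighted fusion of one ultrasonic and one IR reading. */
static fused_range_t fuse_range(double us_cm, double ir_cm) {
    fused_range_t out;
    double w_us = 1.0 / (SIGMA_US * SIGMA_US);
    double w_ir = 1.0 / (SIGMA_IR * SIGMA_IR);
    out.distance_cm = (w_us * us_cm + w_ir * ir_cm) / (w_us + w_ir);
    out.fault = fabs(us_cm - ir_cm) > FAULT_GAP;   /* possible malfunction */
    return out;
}

int main(void) {
    fused_range_t r = fuse_range(82.0, 88.0);   /* example readings in cm */
    printf("fused = %.1f cm, fault = %d\n", r.distance_cm, r.fault);
    return 0;
}
```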


• The ultrasonic sensor SRF-04 works at a frequency of 40 kHz; the maximal measured distance is approx. 300 cm and the minimal is approx. 1–2 cm. The maximal power consumption is 50 mA with a 5 V power supply. The sensor measures the time of flight of the transmitted acoustic signal. The measurement is repeatable 10 µs after the end of the last measurement. The sensor covers a cone approx. 30° wide. The interface of the sensor consists of two digital I/O pins: the first one triggers the measurement (transmits the acoustic signal), and the second one asserts a logical 1 value when the reflected signal is received. Additionally, the full time of flight (to the obstacle and back to the sensor) is reflected by the ECHO pin.
• Proximity reflective sensors in the IR spectrum: detect objects close to the robot. Each sensor transmits a modulated IR signal and measures the amount of received reflected IR light. The modulation suppresses the disturbance caused by ambient IR radiation. The IR sensors are connected to the MCU using multiplexers in order to decrease the required number of the MCU's I/O ports.
• Infrared triangulating sensor SHARP GP2Y0A0A710: the measuring range of the sensor is from 50 to 500 cm. The output information (reflecting the measured distance) has the form of an analog signal (voltage level). Due to the working principle, the sensor readings are independent of the ambient temperature and thermal radiation. In order to calibrate the sensor for mobile robotics, we have measured its characteristic, i.e. the relation between the distance from the object L in [cm] and the sensor's output voltage u in [V]. The main issue of the sensor is that its output characteristic is not monotonic. For distances lower than L_threshold = 50 cm the voltage increases with distance; for distances greater than L_threshold the voltage decreases with distance. The measured characteristic can be approximated in the range 0–50 cm by the following 3rd-order polynomial:

L