English · 313 pages · 2022
Lecture Notes in Mechanical Engineering
José Machado · Filomena Soares · Justyna Trojanowska · Sahin Yildirim · Jiří Vojtěšek · Pierluigi Rea · Bogdan Gramescu · Olena O. Hrybiuk Editors
Innovations in Mechatronics Engineering II
Lecture Notes in Mechanical Engineering

Series Editors:
Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia
Francesco Gherardini, Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy
Vitalii Ivanov, Department of Manufacturing Engineering, Machines and Tools, Sumy State University, Sumy, Ukraine

Editorial Board Members:
Francisco Cavas-Martínez, Departamento de Estructuras, Construcción y Expresión Gráfica, Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain
Francesca di Mare, Institute of Energy Technology, Ruhr-Universität Bochum, Bochum, Nordrhein-Westfalen, Germany
Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia
Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA
Justyna Trojanowska, Poznan University of Technology, Poznan, Poland
Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering. Topics in the series include:

• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology
To submit a proposal or request further information, please contact the Springer Editor of your location:

• China: Ms. Ella Zhang at [email protected]
• India: Priya Vyas at [email protected]
• Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]
• All other countries: Dr. Leontina Di Cecco at [email protected]

To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at https://link.springer.com/bookseries/11693 or contact [email protected]

Indexed by SCOPUS. All books published in the series are submitted for consideration in Web of Science.
More information about this series at https://link.springer.com/bookseries/11236
José Machado · Filomena Soares · Justyna Trojanowska · Sahin Yildirim · Jiří Vojtěšek · Pierluigi Rea · Bogdan Gramescu · Olena O. Hrybiuk
Editors
Innovations in Mechatronics Engineering II
Editors

José Machado, Department of Mechanical Engineering, University of Minho, Guimarães, Portugal
Filomena Soares, Department of Industrial Electronics, University of Minho, Guimarães, Portugal
Justyna Trojanowska, Poznan University of Technology, Poznan, Wielkopolskie, Poland
Sahin Yildirim, Department of Mechatronics Engineering, Faculty of Engineering, Erciyes University, Kayseri, Turkey
Jiří Vojtěšek, Faculty of Applied Informatics, Tomas Bata University in Zlín, Zlín, Czech Republic
Pierluigi Rea, Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, Cagliari, Italy
Bogdan Gramescu, Department of Mechatronics and Precision Mechanics, Polytechnic University of Bucharest, Bucharest, Romania
Olena O. Hrybiuk, Institute of Information Technology and Learning Tools, National Academy of Educational Sciences, Kyiv, Ukraine
ISSN 2195-4356  ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-031-09384-5  ISBN 978-3-031-09385-2 (eBook)
https://doi.org/10.1007/978-3-031-09385-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
This volume of Lecture Notes in Mechanical Engineering gathers selected papers presented at the Second International Scientific Conference (ICIE’2022), held in Guimarães, Portugal, on 28–30 June 2022. The conference was organized by the School of Engineering of the University of Minho, through the MEtRICs and Algoritmi Research Centres. The aim of the conference was to present the latest engineering achievements and innovations and to provide a chance for exchanging views and opinions on the creation of added value for industry and for society. The main conference topics include (but are not limited to):

• Innovation
• Industrial engineering
• Mechanical engineering
• Mechatronics engineering
• Systems and applications
• Societal challenges
• Industrial property
The organizers received 139 contributions from 16 countries around the world. After a thorough peer-review process, the committee accepted 81 papers written by 335 authors from 15 countries for the conference proceedings (an acceptance rate of 58%), which were organized in three volumes of the Springer Lecture Notes in Mechanical Engineering. This volume, titled “Innovations in Mechatronics Engineering II”, focuses on cutting-edge control algorithms for mobile robots, automatic monitoring systems and intelligent predictive maintenance techniques. The papers cover advanced scheduling, risk-assessment and decision-making strategies, and their applications in industrial production, training and education, and service organizations. Last but not least, the volume addresses important issues with a good balance of theoretical and practical aspects. This book consists of 26 chapters, prepared by 93 authors from 8 countries.
Extended versions of selected best papers from the conference will be published in the following journals: Sensors, Applied Sciences, Machines, Management and Production Engineering Review, International Journal of Mechatronics and Applied Mechanics, SN Applied Sciences, Dirección y Organización, Smart Science, Business Systems Research and International Journal of E-Services and Mobile Applications. Special thanks go to the members of the International Scientific Committee for their hard work during the review process. We acknowledge all who contributed to the staging of ICIE’2022: authors, committees and sponsors. Their involvement and hard work were crucial to the success of ICIE’2022.

June 2022
José Machado Filomena Soares Justyna Trojanowska Sahin Yildirim Jiří Vojtěšek Pierluigi Rea Bogdan Gramescu Olena O. Hrybiuk
Contents
Machine Multi-sensor System and Signal Processing for Determining Cutting Tools Service Life . . . 1
Edward Kozłowski, Katarzyna Antosz, Dariusz Mazurkiewicz, Jarosław Sęp, and Tomasz Żabiński

Automatic Anomaly Detection in Vibration Analysis Based on Machine Learning Algorithms . . . 13
Pedro Torres, Armando Ramalho, and Luis Correia

Deep Neural Networks in Fake News Detection . . . 24
Camelia Avram, George Mesaroş, and Adina Aştilean

Cork-Rubber Composite Blocks for Vibration Isolation: Determining Mechanical Behaviour Using ANN and FEA . . . 36
Helena Lopes and Susana P. Silva

Deep Neural Networks: A Hybrid Approach Using Box&Jenkins Methodology . . . 51
Filipe R. Ramos, Didier R. Lopes, and Tiago E. Pratas

Semi-adaptive Decentralized PI Control of TITO System with Parameters Estimates Quantization . . . 63
Karel Perutka

Simulation of Cyber-Physical Intelligent Mechatronic Component Behavior Using Timed Automata Approach . . . 72
Adriano A. Santos, António Ferreira da Silva, and Filipe Pereira

Classification of Process from the Simulation Modeling Aspect System Dynamics and Discrete Event Simulation . . . 86
Jacek Jan Krzywy and Marko Hell

Risk Assessment in Industry Using Expected Utility: An Application to Accidents’ Risk Analysis . . . 98
Irene Brito, Celina P. Leão, and Matilde A. Rodrigues

Two-Criterion Optimization of the Worm Drive Design for Tool Magazine . . . 111
Oleg Krol and Volodymyr Sokolov

Design and Simulation for an Indoor Inspection Robot . . . 123
Erika Ottaviano and Giorgio Figliolini

A Review of Fault Detection Methods in Smart Pneumatic Systems and Identification of Key Failure Indicators . . . 132
Philip Coanda, Mihai Avram, Daniel Comeaga, Bogdan Gramescu, Victor Constantin, and Emil Nita

Algorithmization of Functional-Modular Design of Packaging Equipment Using the Optimization Synthesis Principles . . . 143
Oleg Zabolotnyi, Olha Zaleta, Tetiana Bozhko, Taras Chetverzhuk, and José Machado

Implementation of the SMED Methodology in a CNC Drilling Machine to Improve Its Availability . . . 155
Arminda Pata and Agostinho Silva

Optoelectronic Systems for the Determination of the Position of a Mobile System . . . 164
Laurențiu Adrian Cartal, Daniel Băcescu, Lucian Bogatu, Tudor Cătălin Apostolescu, Georgeta Ionașcu, and Andreea Stanciu

Cocktail Party Graphs and an Optimal Majorization for Some of the Generalized Krein Parameters of Symmetric Association Schemes . . . 174
Vasco Moço Mano and Luís Almeida Vieira

The Structure of Automated Control Systems for Precision Machining of Parts Bearing . . . 182
Ivanna Trokhymchuk, Kostiantyn Svirzhevskyi, Anatolii Tkachuk, Oleg Zabolotnyi, and Valentyn Zablotskyi

Study Case Regarding the Evaluation of Eye Refraction in an Optometry Office . . . 193
Alionte Andreea Dana, Negoita Alexandra Valentina, Staetu Gigi Nelu, and Alionte Cristian Gabriel

Analysis and Comparison of DABC and ACO in a Scheduling Problem . . . 203
Ana Rita Ferreira, Ângelo Soares, André S. Santos, João A. Bastos, and Leonilde R. Varela

Experimental Teaching of Robotics in the Context of Manufacturing 4.0: Effective Use of Modules of the Model Program of Environmental Research Teaching in the Working Process of the Centers “Clever” . . . 216
Olena Hrybiuk and Olena Vedishcheva

A Simulation-Based Approach to Reduce Waiting Times in Emergency Departments . . . 232
Aydin Teymourifar

Multimethodology Exploitation Based on Value-Focused Thinking: Drones Feasibility Analysis for National Defense . . . 245
Ygor Logullo de Souza, Miguel Ângelo Lellis Moreira, Bruno Thiago Rego Valeriano Silva, Mischel Carmen Neyra Belderrain, Christopher Shneider Cerqueira, Marcos dos Santos, and Carlos Francisco Simões Gomes

An Application of Preference-Inspired Co-Evolutionary Algorithm to Sectorization . . . 257
Elif Öztürk, Pedro Rocha, Filipe Sousa, Margarida Lima, Ana M. Rodrigues, José Soeiro Ferreira, Ana C. Nunes, Cristina Lopes, and Cristina Oliveira

Unstable Systems as a Challenging Benchmark for Control Engineering Students . . . 269
Frantisek Gazdos and Zdenek Charous

Deep Learning in Taekwondo Techniques Recognition System: A Preliminary Approach . . . 280
Paulo Barbosa, Pedro Cunha, Vítor Carvalho, and Filomena Soares

New Conceptions of the Future in Cyber-MixMechatronics Engineering and Claytronics . . . 292
Gheorghe Gheorghe and Florentina Badea

Author Index . . . 303
Machine Multi-sensor System and Signal Processing for Determining Cutting Tools Service Life

Edward Kozłowski¹, Katarzyna Antosz²(B), Dariusz Mazurkiewicz³, Jarosław Sęp², and Tomasz Żabiński⁴

¹ Faculty of Management, Lublin University of Technology, Nadbystrzycka 38, 20-618 Lublin, Poland
[email protected]
² Faculty of Mechanical Engineering and Aeronautics, Rzeszow University of Technology, Powstańców Warszawy 8, 35-959 Rzeszów, Poland
{katarzyna.antosz,jsztmiop}@prz.edu.pl
³ Mechanical Engineering Faculty, Lublin University of Technology, Nadbystrzycka 36, 20-618 Lublin, Poland
[email protected]
⁴ Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszów, Poland
[email protected]
Abstract. The constantly growing data resources are a challenge for manufacturing companies in every industry. This is due, inter alia, to a significant increase in the number of devices that generate data. Currently, the concept of Industry 4.0 and related technologies facilitate the collection, processing, and use of large amounts of data. To this end, this article discusses the use of data mining methods (wavelet analysis and logistic regression) to develop a model supporting the decision-making process in determining the service life of a cutting tool. The developed model supports the identification of the parameters influencing the condition of the cutter. The predictive ability of the obtained model was assessed with indicators of classification quality.

Keywords: Cutter condition · Wavelet analysis · Logistic regression · Quality prediction
1 Introduction

The cutting tool is one of the core elements of a machine tool, whose condition is significant both for production process efficiency and for the reliability of the technological machine. The cutting tool is also the element most often replaced due to wear, which makes it an important cost-creating element of production. On the other hand, machine tools, together with other technological machines typically used in production systems of the high-technology industry, form complex systems functioning as Industry 4.0 elements.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 1–12, 2022. https://doi.org/10.1007/978-3-031-09385-2_1
All this creates several technological and research challenges, for example [1, 2], related to service life, health monitoring and reliability, especially with respect to predicting future states to enable inference and the implementation of executive activities in terms of failure-preventing servicing. The current development of condition monitoring technology and machine learning makes more and more fault diagnosis methods available, mostly related to signal processing and deep learning technology [3]. For example, for the purpose of the cutter reliability decision-making process, a multi-sensor measurement system integrated with signal processing was applied in [1, 2]. Signal-processing-based fault diagnosis or remaining useful life prediction relies on feature extraction and pattern recognition. Feature extraction based on the wavelet transform makes signal decomposition possible, while advanced statistical analysis yields fault features. On the other hand, the Support Vector Machine (SVM) can be successfully applied as a classifier in pattern recognition [4]. All these methods usually work well when analysed theoretically, with the use of simulation data or with limited access to data [5–8]. There is very limited feedback on their results when applied to real data obtained from industrial measurement systems. New research with real-world data for the verification of advanced data processing methods is necessary, especially in view of the Industry 4.0 vision, in which data digitalization and data processing are expected to bring major changes in manufacturing. Novel technologies will enable a stepwise increase of productivity in manufacturing companies. From this perspective, the milling machine multi-sensor system described in this article, with its proposed and verified data processing model, may be considered a decision-making tool for determining cutting tool service life, extending the time of effective tool use in the production process.
Effective prediction of the cutting tool state and its remaining useful life makes its replacement time as close to optimal as possible. The structure of the paper is as follows. Section 2 presents the research problem and methodology. Section 3 presents a short literature review on the materials and methods used in the research. In Sect. 4 the results and discussion are presented. Finally, in Sect. 5, the conclusions are presented.
2 Problem Formulation and Research Methodology

The main goal of the presented research was to develop a predictive model that allows the state of the cutting tool (milling cutter) edge to be identified. To develop the model, data obtained from the sensors installed on the technological machine (force and torque signal readings) were used. The collected sensor signals were preprocessed by applying the Discrete Wavelet Transformation (DWT). In the analyzed case, the identification consisted in determining whether the cutter is sharp or blunt. The obtained value of the variable is qualitative, which corresponds to a classification task. Many classifiers can be used to analyze a classification problem; in this study, logistic regression was used.
3 Materials and Methods

3.1 Data Collection – Test Stand

The tests were carried out on a specially organized test stand. The elements of the test stand, their characteristics and the data collection methods are presented in Table 1.
Table 1. The test stand – elements and parameters

1. Machine: Haas VM-3 CNC machine with 12,000 RPM direct-drive spindle
2. Milling cutter: four-blade solid carbide with TiAlN (aluminum titanium nitride) coating; rotational speed of the spindle during experiments: 860 rpm
3. Material: Inconel 718
4. Multi-component sensor: type CL16 ZEPWN; parameters: force measurement max 10 kN, torque measurement max 1 kNm, accuracy class 0.5, sensitivity 1 mV/V; collected signals: forces (P1x, P2y, P3z) and moments (M1x, M2y, M3z)
5. Data collecting system: industrial computer (Beckhoff C6920); system: EtherCAT-based distributed I/O; real-time task developed using the ST (Structured Text) language; environment: TwinCAT 3 Beckhoff and custom Matlab/Simulink projects
Data was collected in real time with a sampling interval of 2 ms. The duration of the signal buffer stored in one file was 640 ms. During the tests, data was collected from various tasks in the milling process on the machine.

3.2 Discrete Wavelet Transformation (DWT)

To design a classifier, the signals obtained from the force sensors (P1x, P2y, P3z) and moment sensors (M1x, M2y, M3z) were analyzed after preprocessing. To extract the necessary information from these signals the wavelet transformation was used. In contrast to harmonic analysis, wavelet analysis utilizes wavelet functions, which usually are irregular, asymmetric, and not periodic. Based on the wavelet transformation the signal is decomposed into components, which are created as wavelets with different scales (scale/frequency) and positions (time/space) [9, 10]. For a certain scale and position the wavelet coefficients describe the extent of similarity of the wavelet function to the fragment of the analyzed signal. By applying the wavelet transform we estimate the coefficients for wavelets of various scales and positions. These coefficients provide the essential information about the signal.

Let $\mathbb{N}$ denote the set of natural numbers, $\mathbb{R}$ the set of real numbers and $\mathbb{Z}$ the set of integers. Let $\{x_t\}_{t\in\mathbb{Z}}$ be a time series, $\Psi(t)$ the orthogonal wavelet basis (mother wavelet) and $\varphi(t)$ the scaling function (father wavelet) corresponding to the wavelet $\Psi(t)$. For decomposition level $j \in \mathbb{N}$ we define the sequence of mother wavelets $\{\Psi_{jk}\}_{k\in\mathbb{Z}}$ as follows:

$$\Psi_{jk}(t) = \frac{1}{2^{j-1}}\,\Psi\!\left(\frac{t}{2^{j}} - k\right) \quad (1)$$
and the sequence of father wavelets $\{\varphi_{jk}\}_{k\in\mathbb{Z}}$ as follows:

$$\varphi_{jk}(t) = \frac{1}{2^{j-1}}\,\varphi\!\left(\frac{t}{2^{j}} - k\right) \quad (2)$$

Thus the time series $\{x_t\}_{t\in\mathbb{Z}}$ can be expressed as:

$$x_t = \sum_{k=-\infty}^{\infty} c_{jk}\,\varphi_{jk}(t) + \sum_{i=-\infty}^{j}\,\sum_{k=-\infty}^{\infty} d_{ik}\,\Psi_{ik}(t) \quad (3)$$
where $c_{jk}$ is the scaling coefficient and $d_{ik}$ is the detailed coefficient. For the Haar decomposition [11, 12] the mother wavelet has non-zero values on the interval $[0, 1)$. Because the observed signal (time series) $\{x_t\}_{1\le t\le n}$ has a finite number of observations, the level $j$ should meet the condition $1 \le j \le m = \max\{s \in \mathbb{N} : 2^s \le n\}$. For simplicity, we assume that $n = 2^s$. From the above, the time series $\{x_t\}_{1\le t\le n}$ can be presented as follows:

$$x_t = \sum_{k=0}^{n/2^{j}-1} c_{jk}\,\varphi_{jk}(t) + \sum_{i=0}^{j}\,\sum_{k=0}^{n/2^{i}-1} d_{ik}\,\Psi_{ik}(t) \quad (4)$$
for $1 \le j \le m$. From (4), for a given level $j$, $1 \le j \le m$, the time series $\{x_t\}_{t\in\mathbb{Z}}$ can be presented in different forms. Following [8], we define the projection operator of the time series $\{x_t\}_{1\le t\le n}$ for level $j$ in the basis $\{\varphi_{jk}(t)\}_{0\le k\le n/2^{j}-1}$:

$$P^{j} x_t = \sum_{k=0}^{n/2^{j}-1} c_{jk}\,\varphi_{jk}(t) \quad (5)$$
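The pyramid behind Eqs. (1)–(5) can be illustrated with a small Haar decomposition. The sketch below is not the authors' implementation: it uses the orthonormal per-step normalization $1/\sqrt{2}$ (the normalization in (1)–(2) above may differ by a constant), and all function names are illustrative.

```python
import math

def haar_step(c):
    """One Haar DWT step: pairwise sums give the next-level scaling
    coefficients c_jk, pairwise differences give the detail coefficients d_jk."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (c[2 * k] + c[2 * k + 1]) for k in range(len(c) // 2)]
    detail = [s * (c[2 * k] - c[2 * k + 1]) for k in range(len(c) // 2)]
    return approx, detail

def haar_dwt(x, level):
    """Decompose a length-2^m series into level-j scaling coefficients plus
    detail coefficients per level, mirroring the finite decomposition (4)."""
    c, details = list(x), []
    for _ in range(level):
        c, d = haar_step(c)
        details.append(d)
    return c, details

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]   # n = 2^3 samples
c3, details = haar_dwt(x, 3)                      # full 3-level decomposition
```

Because this normalization makes the transform orthonormal, the signal energy is preserved across all coefficients, which is a convenient correctness check for any implementation.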
In the further part of the paper the sequence $\{c_{jk}\}_k$ will depict the information contained in the observed signal $\{x_t\}_{1\le t\le n}$. More about the DWT can be found in [11, 12].

3.3 Logistic Regression

Our aim is to recognize the cutter state based on strict or preprocessed measurements obtained from transducers and sensors. To classify the cutter state we apply logistic regression. Let $D = \{Y, X\}$ be a data set, where the vector $Y \in \mathbb{R}^n$ contains the realizations of the response variable, while the matrix $X \in \mathbb{R}^{n\times m}$ contains the realizations of the predictors (e.g. series of input variables):

$$Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix} = \begin{bmatrix} x_{(1)} \\ x_{(2)} \\ \vdots \\ x_{(n)} \end{bmatrix} \quad (6)$$
For the $i$-th case, when the cutter is blunt we accept $y_i = 1$; otherwise, when the cutter is sharp, we take $y_i = 0$. Thus, by observing the values of the input variables
$x_{(i)} = (x_{i1}, x_{i2}, \ldots, x_{im}) \in \mathbb{R}^m$ it is necessary to classify the cutter state. The task consists in defining a classifier $f : \mathbb{R}^m \to \{0, 1\}$, which allows the cutter state to be assessed based on the observation of the predictors. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $Y : \Omega \to \{0, 1\}$ a random variable with a binomial distribution (see e.g. [12, 13]). Logistic regression models are usually applied to estimate the probability of realization of the response variable (see e.g. [13, 15, 16]). The value $P(Y = 1|x)$ denotes the success probability based on the observation of the predictors $x \in \mathbb{R}^m$, and $P(Y = 0|x)$ denotes the failure probability, while the ratio of success to failure:

$$\Lambda(x) = \frac{P(Y=1|x)}{P(Y=0|x)} = \frac{P(Y=1|x)}{1 - P(Y=1|x)} \quad (7)$$
means the odds. The success probability $p(x) \in (0, 1)$; thus from (7) we have $\Lambda(x) \in (0, \infty)$ and $\ln(\Lambda(x)) \in (-\infty, \infty)$. The logarithm of the odds is called the log-odds or logit. Logistic regression describes the dependence between the logit and the predictors as follows:

$$\ln \Lambda(x) = \ln\!\left(\frac{p(\beta_0, \beta, x)}{1 - p(\beta_0, \beta, x)}\right) = \beta_0 + x^T \beta \quad (8)$$

where $\beta_0 \in \mathbb{R}$, $\beta = (\beta_1, \ldots, \beta_m) \in \mathbb{R}^m$. The aim of logistic regression consists in determining the probability of success $P(Y = 1|x) = p(\beta_0, \beta, x)$ based on the observation of the predictors $x \in \mathbb{R}^m$. From Eqs. (7)–(8) we can determine the success probability as follows:

$$p(\beta_0, \beta, x) = \frac{e^{\beta_0 + x^T \beta}}{1 + e^{\beta_0 + x^T \beta}} \quad (9)$$
To estimate the unknown parameters $(\beta_0, \beta) \in \mathbb{R}^{m+1}$ we apply the maximum likelihood method. We define the likelihood function as a product of the probabilities of successes and failures; from (9) we have:

$$L(\beta_0, \beta, Y, X) = \prod_{i=1}^{n} p\big(\beta_0, \beta, x_{(i)}\big)^{y_i}\,\big(1 - p(\beta_0, \beta, x_{(i)})\big)^{1-y_i} \quad (10)$$

The maximum likelihood method consists in maximizing the objective function (10); by solving the task

$$\max_{\beta_0, \beta} L(\beta_0, \beta, Y, X) \quad (11)$$

we obtain the estimators of the unknown parameters $\beta_0$ and $\beta$. Usually we solve the auxiliary task

$$\max_{\beta_0, \beta} \ln L(\beta_0, \beta, Y, X) \quad (12)$$

instead of the task (11), where the logarithm of the likelihood function equals:

$$\ln L(\beta_0, \beta, Y, X) = \sum_{i=1}^{n} \left( y_i \big(\beta_0 + x_{(i)}^T \beta\big) - \ln\!\left(1 + e^{\beta_0 + x_{(i)}^T \beta}\right) \right) \quad (13)$$

To solve the auxiliary task (12) the Newton–Raphson algorithm was applied.
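As a sketch of how (12)–(13) can be maximized with the Newton–Raphson algorithm, the toy example below fits a one-predictor logistic model in pure Python. It is an illustration under simplified assumptions (a single predictor, no regularization), not the code used in the study, and the data are made up.

```python
import math

def sigmoid(z):
    """Numerically stable logistic function, Eq. (9) with z = b0 + b1*x."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def fit_logistic_newton(xs, ys, iters=25):
    """Maximize the log-likelihood (13) for one predictor via Newton-Raphson."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0               # gradient of ln L w.r.t. (b0, b1)
        h00 = h01 = h11 = 0.0       # entries of the negated Hessian
        for x, y in zip(xs, ys):
            p = sigmoid(b0 + b1 * x)
            g0 += y - p
            g1 += (y - p) * x
            w = p * (1.0 - p)       # IRLS weight
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        if abs(det) < 1e-12:        # degenerate design: stop early
            break
        # Newton step: (b0, b1) <- (b0, b1) + (-H)^{-1} g, via 2x2 inverse
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# toy data with overlapping classes, so the maximizer of (12) exists
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 1, 0, 1, 1]
b0, b1 = fit_logistic_newton(xs, ys)
```

With overlapping classes the log-likelihood (13) is concave with a finite maximizer, so the iteration converges; on perfectly separable data the coefficients would diverge, which is one motivation for the penalty discussed in the next section.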
3.4 Elastic Net

Sometimes the predictors used in the linear system (8) are correlated. Then the direct application of the Newton–Raphson algorithm to solve the task (12) does not provide the expected effect. Due to the multicollinearity of the predictors we obtain large absolute values of the estimators of the unknown parameters $\beta_0$ and $\beta$, which implies instability of the forecasts. The problem thus depends on the selection of appropriate predictors, or transformations of predictors, to be included in the logistic regression (8). On the one hand, the input variables should influence the value of the response variable; on the other, they should not generate multicollinearity. To overcome the multicollinearity problem we usually apply techniques such as singular value decomposition, regularization, or least angle regression. In our task, to reduce the absolute values of the estimators of the unknown parameters on the one hand, and to select the important predictors for the model (8) on the other, we apply the elastic net method (see e.g. [14, 16, 17]), which consists in adding a penalty depending on the values of the estimators of the unknown parameters to the objective function. Such a technique implies a shrinkage of the values of the estimators. From the above, we solve the task:

$$\max_{\beta_0, \beta} \ln L(\beta_0, \beta, Y, X) - \lambda P_{\alpha}(\beta) \quad (14)$$

where $\lambda > 0$ and the penalty $P_{\alpha}(\beta)$ is defined as a linear combination of the norms of $\beta$ in the spaces $L_1$ and $L_2$:

$$P_{\alpha}(\beta) = \frac{1-\alpha}{2}\,\|\beta\|_{L_2} + \alpha\,\|\beta\|_{L_1} \quad (15)$$
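The penalty (15) itself is straightforward to evaluate. The sketch below follows Eq. (15) literally, with an unsquared $L_2$ norm; note this is an assumption about the notation, since the common glmnet convention uses the squared $L_2$ norm in the first term.

```python
import math

def elastic_net_penalty(beta, alpha):
    """P_alpha(beta) = (1 - alpha)/2 * ||beta||_L2 + alpha * ||beta||_L1, Eq. (15).
    alpha = 0 keeps only the L2 (ridge-like) term, alpha = 1 only the L1 (LASSO) term."""
    l2 = math.sqrt(sum(b * b for b in beta))   # Euclidean norm of beta
    l1 = sum(abs(b) for b in beta)             # sum of absolute values
    return (1.0 - alpha) / 2.0 * l2 + alpha * l1

beta = [3.0, -4.0]                             # illustrative coefficient vector
ridge_like = elastic_net_penalty(beta, 0.0)    # pure L2 term: 0.5 * 5.0 = 2.5
lasso_like = elastic_net_penalty(beta, 1.0)    # pure L1 term: 3.0 + 4.0 = 7.0
```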
For $\alpha = 0$ we have the classical Tikhonov regularization (ridge regression), while for $\alpha = 1$ we have the Least Absolute Shrinkage and Selection Operator (LASSO); the elastic net is a compromise between ridge regression and LASSO. The application of the elastic net method allowed us to obtain a classifier based on logistic regression (8) with more accurate and stable detection of the cutter state.

3.5 The Quality Assessment of Prediction Models – Receiver Operating Characteristics (ROC) Analysis

For the quality assessment of the developed prediction models, Receiver Operating Characteristics (ROC) analysis was used. ROC analysis is one of the most important methods used in the evaluation of models in machine learning. This analysis uses the confusion matrix (TP – true positive, TN – true negative, FP – false positive, FN – false negative) to assess the accuracy of the developed classifier. The errors of the developed classifier are defined as FP and FN. The classifier quality assessment is carried out by assessing whether objects have been properly classified into the positive and negative classes [18–22]. In this paper, the indicators presented in Table 2 were used to evaluate the performance of the developed models. To calculate the indicators, confusion matrices were generated for all predicted models. The following assumptions were made: the sharp cutter is the negative case (N) and the blunt cutter is the positive case (P). The values specified in the confusion matrix
Table 2. The quality assessment of prediction models – indicators

| Indicator | Description | Equation |
|---|---|---|
| Acc – Accuracy | Properly classified cases (positive and negative) relative to the total number of predictions (ability of the prediction model) | Acc = (TP + TN)/(TP + TN + FP + FN) |
| TPR – True Positives Rate | True positive cases relative to all true positive and false negative cases | TPR = TP/(TP + FN) |
| TNR – True Negatives Rate | True negative cases relative to all true negative and false positive cases | TNR = TN/(TN + FP) |
| PPV – Positive Predictive Value | True positive cases relative to all cases classified as positive (true and false) | PPV = TP/(TP + FP) |
| NPV – Negative Predictive Value | True negative cases relative to all cases classified as negative (true and false) | NPV = TN/(TN + FN) |
| PV – Prevalence | True positive and false negative cases relative to the total number of predictions | PV = (TP + FN)/(TP + TN + FP + FN) |
| DR – Detection Rate | True positive cases relative to the total number of predictions | DR = TP/(TP + TN + FP + FN) |
| DPV – Detection Prevalence | All cases classified as positive (true and false) relative to the total number of predictions | DPV = (TP + FP)/(TP + TN + FP + FN) |
are as follows: TP (True Positive) is the number of cases for which the cutter state was properly recognized as blunt; TN (True Negative) is the number of cases for which the cutter state was properly recognized as sharp; FP (False Positive) is the number of cases for which the cutter state was not properly recognized (blunt instead of sharp); FN (False Negative) is the number of cases for which the cutter state was not properly recognized (sharp instead of blunt).
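The indicators in Table 2 can be computed directly from the four confusion-matrix counts. A minimal sketch follows; the counts used below are made up for illustration and are not the study's results.

```python
def classification_indicators(tp, tn, fp, fn):
    """Compute the Table 2 indicators from confusion-matrix counts
    (positive = blunt cutter, negative = sharp cutter)."""
    total = tp + tn + fp + fn
    return {
        "Acc": (tp + tn) / total,     # overall accuracy
        "TPR": tp / (tp + fn),        # true positive rate (sensitivity)
        "TNR": tn / (tn + fp),        # true negative rate (specificity)
        "PPV": tp / (tp + fp),        # positive predictive value
        "NPV": tn / (tn + fn),        # negative predictive value
        "PV":  (tp + fn) / total,     # prevalence of the positive class
        "DR":  tp / total,            # detection rate
        "DPV": (tp + fp) / total,     # detection prevalence
    }

# hypothetical counts, for illustration only
ind = classification_indicators(tp=80, tn=100, fp=5, fn=15)
```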
4 Results and Discussion

The main aim of the research was to develop a model that allows the cutter state (sharp or blunt) to be recognized. During the research, carried out on the test stand (Table 1), 2172 observations of the signals P1x, P2y, P3z, M1x, M2y and M3z were obtained. The obtained data were then pre-processed using wavelet analysis. Wavelets of various types and levels were used for the preprocessing; the types and levels used are presented in Table 3.
8
E. Kozłowski et al.

Table 3. The used types and levels of wavelets.

Type                  | l = 3    | l = 4                        | l = 5
Daubechies (d)        | -        | d12, d14, d16, d18, d20      | d2, d4, d6, d8, d10
Least asymmetric (la) | -        | la12, la14, la16, la18, la20 | la8, la10
Best localized (bl)   | -        | bl14, bl18, bl20             | -
Coiflet (c)           | c24, c30 | c12, c18                     | c6
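The wavelet pre-processing step can be illustrated with the simplest member of the Daubechies family, d2 (the two-tap Haar filter). This is only a sketch of the decomposition idea, not the paper's actual pipeline; the longer filters in Table 3 (d10, la8, c6, ...) require a full filter-bank implementation, e.g. via PyWavelets:

```python
import numpy as np

def haar_dwt(signal, levels):
    """Multi-level DWT with the d2 (Haar) filter: at each level the current
    approximation is split into pairwise sums (new approximation) and
    pairwise differences (detail), both scaled by 1/sqrt(2)."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        pairs = approx[: len(approx) // 2 * 2].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    return approx, details

# Example: decompose a noisy sinusoid (a stand-in for a force signal) to l = 3.
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(64) / 16) + 0.1 * rng.standard_normal(64)
a3, dets = haar_dwt(x, 3)
print(len(a3), [len(d) for d in dets])   # 8 [32, 16, 8]
```

Because the Haar transform is orthonormal, the energy of the signal is preserved across the approximation and detail coefficients, which is what makes the coefficients usable as features.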
Table 4. The quality of prediction models - the indicator values.

Wavelet | Level | Acc    | TPR    | TNR    | PPV    | NPV    | PV     | DR     | DPV
d2      | 5     | 0.8964 | 0.7599 | 1.0000 | 1.0000 | 0.8459 | 0.4314 | 0.3278 | 0.3278
d10     | 5     | 0.8973 | 0.7631 | 0.9992 | 0.9986 | 0.8475 | 0.4314 | 0.3292 | 0.3297
d18     | 4     | 0.8959 | 0.7599 | 0.9992 | 0.9986 | 0.8458 | 0.4314 | 0.3278 | 0.3283
d20     | 4     | 0.8969 | 0.7620 | 0.9992 | 0.9986 | 0.8469 | 0.4314 | 0.3287 | 0.3292
la8     | 5     | 0.8978 | 0.7641 | 0.9992 | 0.9986 | 0.8481 | 0.4314 | 0.3297 | 0.3301
la10    | 5     | 0.8969 | 0.7620 | 0.9992 | 0.9986 | 0.8469 | 0.4314 | 0.3287 | 0.3292
la12    | 4     | 0.8964 | 0.7609 | 0.9992 | 0.9986 | 0.8464 | 0.4314 | 0.3283 | 0.3287
bl14    | 4     | 0.8973 | 0.7631 | 0.9992 | 0.9986 | 0.8475 | 0.4314 | 0.3292 | 0.3297
bl18    | 4     | 0.8950 | 0.7577 | 0.9992 | 0.9986 | 0.8446 | 0.4314 | 0.3269 | 0.3273
bl20    | 4     | 0.8955 | 0.7588 | 0.9992 | 0.9986 | 0.8452 | 0.4314 | 0.3273 | 0.3278
c6      | 5     | 0.8983 | 0.7652 | 0.9992 | 0.9986 | 0.8487 | 0.4314 | 0.3301 | 0.3306
c12     | 4     | 0.8950 | 0.7577 | 0.9992 | 0.9986 | 0.8446 | 0.4314 | 0.3269 | 0.3273
c18     | 4     | 0.8969 | 0.7620 | 0.9992 | 0.9986 | 0.8469 | 0.4314 | 0.3287 | 0.3292
c24     | 3     | 0.8955 | 0.7577 | 1.0000 | 1.0000 | 0.8447 | 0.4314 | 0.3269 | 0.3269
c30     | 3     | 0.8955 | 0.7577 | 1.0000 | 1.0000 | 0.8447 | 0.4314 | 0.3269 | 0.3269
Legend: the highest Acc value was obtained by the c6 (l = 5) model; the highest TNR and PPV values were obtained by the d2 (l = 5), c24 (l = 3) and c30 (l = 3) models.
To develop the predictive models, logistic regression (8) and the elastic net were used for all wavelets. The use of the elastic net yielded a classifier based on logistic regression with more accurate and stable detection of the cutter state. For the prepared data base, the learning time did not exceed 0.05 s and the tool condition detection time did not exceed 0.00003 s. The indicators presented in Table 2 were used to assess the quality of the developed models. Table 4 presents the obtained indicator values for the chosen prediction models.
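A hedged sketch of such an elastic-net logistic-regression classifier with scikit-learn (the synthetic data below merely stands in for the wavelet features; the α and λ values are those reported for the c6 model, mapped to scikit-learn's l1_ratio and C = 1/λ):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the wavelet-coefficient features
# (2172 observations, as in the study; the features themselves are fake).
X, y = make_classification(n_samples=2172, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Elastic-net logistic regression: l1_ratio plays the role of alpha and
# C = 1/lambda sets the regularization strength.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.7895, C=1.0 / 0.0753, max_iter=10_000)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

The `saga` solver is used because it is the scikit-learn solver that supports the combined L1/L2 (elastic-net) penalty.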
Machine Multi-sensor System and Signal Processing
The results presented in Table 4 show that the prediction ability of the developed models for individual wavelets is comparable. The value of the Accuracy indicator is in the range from 0.8950 to 0.8983. The highest value of Acc was obtained for the wavelet c6 at level l = 5, and the lowest value for the wavelets la16, bl18 and c12 (all at the same level l = 4). Furthermore, the highest recognition of the negative class (sharp), with TNR = 1, is obtained by the prediction models for the wavelets d2 at level l = 5 and c24 and c30 at level l = 3. For the same models, the positive predictive value is also the highest (PPV = 1). Additionally, the detection rate for the true positive class (blunt), DR, and the detection prevalence, DPV, were the highest for the wavelet c6 at level l = 5: 0.3301 and 0.3306, respectively. The value of the PV indicator for all wavelets was equal to 0.4314. Table 5 presents the confusion matrix for the model with the highest Acc value (wavelet c6 at level l = 5). Analyzing the results presented in Table 5, it should be noted that the highest impact on the obtained value of the Acc indicator comes from the number of cases for which the cutter state was not properly recognized (FN, False Negative - sharp instead of blunt).

Table 5. Confusion matrix for the prediction model with the highest Acc value.

                 Reference
Prediction    Blunt    Sharp
Blunt           717        1
Sharp           220     1234
Moreover, 221 cases out of 2172 were not properly recognized, which means that the prediction error of the analyzed model is ca. 10% ((1 - Acc) * 100%). Furthermore, the value of FN also affects the True Positive Rate and Negative Predictive Value indicators. Therefore, for the developed predictive models, the values of these indicators were also comparable: the TPR indicator was in the range from 0.7577 to 0.7652 and the NPV indicator in the range from 0.8446 to 0.8487 (Table 4). For a model with high predictive ability, these values should be close to 1.00. An additional element of the analysis of the developed predictive models was the influence of the regularization parameters (α and λ) on the Acc indicator value. Fig. 1 presents the influence of the regularization parameters (α and λ) for the model with the highest Acc value (wavelet c6, l = 5), while Fig. 2 presents it for the d2 model at level l = 5 (Acc = 0.8964), one of the models with the highest TNR and PPV values. When analyzing the graphs presented in Figs. 1 and 2, it should be noted that for the given α and λ parameters, the distributions of the Acc values obtained for the models differ significantly. In the model developed for the c6 wavelet, Acc values close to the maximum are obtained for a wide range of the parameters α and λ; this model is more stable (Fig. 1). It is different in the case of the d2 wavelet model: only for selected values of the α and λ parameters are the Acc values close to the maximum Acc value.
Fig. 1. The influence of the regularization parameters (α and λ) for the model with the highest Acc value (wavelet c6, level l = 5).
The obtained Acc values for this model change abruptly - the model is less stable (Fig. 2).
Fig. 2. The influence of the regularization parameters (α and λ) for the chosen model with the highest TNR and PPV values (d2, level l = 5).
For the analyzed models presented in Figs. 1 and 2, the optimal values of the parameters α and λ were determined. For the wavelet c6 at level l = 5, the values α = 0.7895 and λ = 0.0753 were estimated; for the wavelet d2 at level l = 5, they were α = 0.8421 and λ = 0.0426. With these values, the analyzed models obtained the highest Acc value, and thus the lowest prediction error.
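The parameter study of Figs. 1 and 2 can be reproduced in outline by scanning a grid of α (mixing) and λ (strength) values and recording Acc for each pair. A sketch under the same scikit-learn mapping (l1_ratio = α, C = 1/λ), again on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

best = (0.0, 0.0, 0.0)                        # (alpha, lambda, Acc)
for alpha in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):  # elastic-net mixing parameter
    for lam in (0.01, 0.05, 0.1, 0.5):        # regularization strength
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=alpha, C=1.0 / lam, max_iter=10_000)
        acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        if acc > best[2]:
            best = (alpha, lam, acc)

print(f"best alpha={best[0]}, lambda={best[1]}, Acc={best[2]:.4f}")
```

Plotting Acc over the full (α, λ) grid would give surfaces analogous to Figs. 1 and 2, and the spread of near-maximal cells indicates how stable a model is with respect to regularization.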
5 Conclusions

Currently, many manufacturing companies face the problem of huge amounts of data. These data are generated, inter alia, by the various types of devices that carry out operations in the technological process, as well as by production process monitoring systems. A properly conducted analysis of production data can reveal important information useful for predictive modelling in manufacturing. The main goal of the presented research was to develop predictive models that allow the state of the cutter to be identified. To develop the models, the data obtained from the sensors installed on the technological machine were used. During the research, models with different wavelets were developed. For the developed models, the Accuracy indicator was in the range from 0.8950 to 0.8983. The highest value of Acc was obtained for the wavelet c6 and the lowest values for the wavelets la16, bl18 and c12. The presented models allow the condition of the cutter to be identified in an easy way and thus will reduce the use of worn cutters in the manufacturing process. The developed model can be used on other machines equipped with the same type of sensors; for machines with different types of sensors, a model can be developed according to the proposed methodology.
References

1. Kozłowski, E., Antosz, K., Mazurkiewicz, D., Sęp, J., Żabiński, T.: Integrating advanced measurement and signal processing for reliability decision-making. Eksploatacja i Niezawodnosc - Mainten. Reliab. 23(4), 777–787 (2021)
2. Antosz, K., Mazurkiewicz, D., Kozłowski, E., Sęp, J., Żabiński, T.: Machining process time series data analysis with a decision support tool. In: Machado, J., Soares, F., Trojanowska, J., Ottaviano, E. (eds.) icieng 2021. LNME, pp. 14–27. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-79165-0_2
3. Yao, D.C., Liu, H.C., Yang, J.W., Li, X.: A lightweight neural network with strong robustness for bearing fault diagnosis. Measurement 159, 1–11 (2020)
4. Kozłowski, E., Mazurkiewicz, D., Żabiński, T., Prucnal, S., Sęp, J.: Machining sensor data management for operation-level predictive model. Expert Syst. Appl. 159, 1–22 (2020)
5. Li, H., Wang, W., Li, Z., Dong, L., Li, Q.: A novel approach for predicting tool remaining useful life using limited data. Mech. Syst. Signal Process. 143, 106832 (2020). https://doi.org/10.1016/j.ymssp.2020.106832
6. Zhao, R., Yan, R., Chen, Z., Mao, K., Wang, P., Gao, R.X.: Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 115, 213–237 (2019)
7. Arrazola, P., Özel, T., Umbrello, D., Davies, M., Jawahir, I.: Recent advances in modelling of metal machining processes. CIRP Ann. 62, 695–718 (2013)
8. Arrais-Castro, A., Varela, M.L.R., Putnik, G.D., Ribeiro, R.A., Machado, J., Ferreira, L.: Collaborative framework for virtual organisation synthesis based on a dynamic multi-criteria decision model. Int. J. Comput. Integr. Manuf. 31(9), 857–868 (2018)
9. Daubechies, I.: Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 41, 909–996 (1988)
10. Edwards, T.: Discrete Wavelets Transform: Theory and Implementation. Stanford University (1991)
11. Daubechies, I.: Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics (1992). https://doi.org/10.1137/1.9781611970104
12. Walnut, D.F.: An Introduction to Wavelet Analysis. Springer, Boston (2004)
13. Freedman, D.A.: Statistical Models: Theory and Practice. Cambridge University Press (2009)
14. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer, New York (2009)
15. James, G., Witten, D., Hastie, T., Tibshirani, R.: An Introduction to Statistical Learning. Springer, New York (2013)
16. Fox, J., Weisberg, S.: An R Companion to Applied Regression. SAGE Publications (2019)
17. Rymarczyk, T., Kozłowski, E., Kłosowski, G., Niderla, K.: Logistic regression for machine learning in process tomography. Sensors 19(15), 1–19 (2019)
18. Fawcett, T.: An introduction to ROC analysis. Pattern Recogn. Lett. 27, 861–874 (2006). https://doi.org/10.1016/j.patrec.2005.10.010
19. Fawcett, T.: Using rule sets to maximize ROC performance. In: Proceedings of the IEEE International Conference on Data Mining (ICDM-2001), pp. 131–138. IEEE (2001)
20. Provost, F., Fawcett, T., Kohavi, R.: The case against accuracy estimation for comparing induction algorithms. In: Proceedings of ICML-98, pp. 445–453. Morgan Kaufmann, San Francisco (1998)
21. Powers, D.: Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. J. Mach. Learn. Technol. 2, 37–63 (2011)
22. Antosz, K., Jasiulewicz-Kaczmarek, M., Paśko, Ł., Zhang, C., Wang, S.: Application of machine learning and rough set theory in lean maintenance decision support system development. Eksploatacja i Niezawodnosc - Maint. Reliab. 23(4), 695–708 (2021)
Automatic Anomaly Detection in Vibration Analysis Based on Machine Learning Algorithms

Pedro Torres(1,2)(B), Armando Ramalho(1,3), and Luis Correia(1)

1 Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal
{pedrotorres,aramalho,lcorreia}@ipcb.pt
2 SYSTEC - Research Center for Systems and Technologies, Porto, Portugal
3 CEMMPRE - Centre for Mechanical Engineering, Materials and Processes, Coimbra, Portugal

Abstract. This paper presents an approach for automatic anomaly detection through vibration analysis based on machine learning algorithms. The study focuses on induction motors in a predictive maintenance context, but can be applied to other domains. Vibration analysis is an important diagnostic tool in industrial data analysis for predicting anomalies caused by defects in equipment or in its use, allowing its lifetime to be increased. It is not a new technique and is widely used in industry; however, with the Industry 4.0 paradigm and the need to digitize every process, it gains relevance for automatic fault detection. The Isolation Forest algorithm is implemented to detect anomalies in vibration datasets measured in an experimental apparatus composed of an induction motor and a coupling system with shaft alignment/misalignment capabilities. The results show that it is possible to detect anomalies automatically with a high level of precision and accuracy.

Keywords: Industry 4.0 · Anomaly detection · Isolation forest · Vibration analysis · BigML

1 Introduction
According to the Europe Predictive Maintenance Market - Industry Trends and Forecast to 2027 report [1], the predictive maintenance market is expected to grow at a CAGR (Compound Annual Growth Rate) of 39.6% over the forecast period 2020 to 2027. The increased use of new and emerging technologies to gain valuable insights for decision making has contributed to the growth of the predictive maintenance market. Several vertical end users are increasingly in need of reducing costs and downtime, which has spurred the growth of the predictive maintenance market.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 13–23, 2022. https://doi.org/10.1007/978-3-031-09385-2_2

Industry 4.0 marks the revolution of digitization of traditional manufacturing industries supported by modern technologies such as automation, interconnectivity, real-time data processing and intelligence based on machine learning
techniques. With the growth of automation technologies, the sensing component is the basis of perception, fundamental to the smart factory concept. Condition monitoring techniques are maintenance methodologies used to monitor the operating conditions of equipment in real time, through the measurement and extraction of information that allows understanding of its health, wear, degradation or significant changes in operation. The collected data are used to find trends, predict failures and estimate the remaining lifetime of an asset. Through vibration analysis, by analysing the frequency, amplitude, phase, position and direction of vibrations in machinery, it is possible to identify many common faults. For example, it is possible to differentiate between wear on a specific gear or bearing, a lack of lubrication, an imbalance, a misalignment, a loose mounting and an electrical fault. Fault detection can be carried out before a machine is stopped, reducing downtime to its bare minimum. Early detection and predictive maintenance can also prevent more serious faults from developing. Machine learning algorithms have the ability to analyze large amounts of data and automatically perform diagnoses without human intervention, based on history and correlations with failure situations, but also by self-learning. Increasingly, they are the appropriate tool for decision making with high levels of accuracy. Fig. 1 shows a general architecture in the context of Industry 4.0, which illustrates the overview of fault detection systems in electrical machines based on vibration analysis, and on which this work is based. The Operational Technology (OT) and Information Technology (IT) parts were aligned to design an Industrial Control System (ICS) in the laboratory for acquiring, controlling and monitoring the operating status of rotating machines, producing reports and automatic alerts, and recommending actions to take, as a prescriptive maintenance system.
Fig. 1. Architecture overview for data acquisition and its interconnection with high-level industry management systems.
The remainder of this paper is organized as follows: Sect. 2 describes the state of the art and related work. Section 3 describes the materials and methods for the automatic anomaly detection. Section 4 presents the experimental results achieved and, finally, Sect. 5 presents the conclusions and future work.
2 Related Work
This section presents some scientific works related to vibration analysis in rotating machines using different machine learning techniques, which help to support the validity of the work in this paper and its importance in the current context of the industry. Any vibration measurement experiment for diagnosing machine operation must be in line with ISO 22096:2007. Vibration signals carry very important information for predictive maintenance applications, which is why they are widely used. The paper [2] presents and describes some condition maintenance techniques in a predictive maintenance context; in particular, it gives an overview of some vibration analysis tools, such as ICA (independent component analysis), TFA (time-frequency analysis), ED (energy distribution) and CD (change detection). In [3], anomaly detection techniques using machine learning models such as K-Nearest Neighbour (KNN), Support Vector Regression (SVR) and Random Forest (RF) were applied to vibration data for early fault detection of industrial electric motors. According to the authors, the Random Forest presented the best performance compared to SVR and KNN, based on a lower number of false positives and the detection time. In [4], a method is proposed for providing a visual explanation of the predictions of a convolutional neural network (CNN)-based anomaly detection system. The CNN takes the monitored machine's vibration data as input and predicts whether the machine's state is healthy or anomalous. Exploring the relation between motor speed and vibration signals, [5] proposes a CNN-based deep learning approach for automatic motor fault diagnosis. In the same research line, [6] establishes a comparison of motor fault diagnosis using RNN (Recurrent Neural Networks) and k-means in vibration analysis. For accurate detection, it is important that the acquisitions and the entire experimental acquisition chain are also calibrated and that the best sensors are used for each application. Accelerometers are widely used in this type of application; however, other types of sensors, such as magnetoresistive sensors, have shown their potential due to their high sensitivity. In [7], magnetoresistive sensors were compared with accelerometers in the acquisition of vibration signals to validate their accuracy in vibration analysis condition monitoring systems. Most of the research work presented in this section involves extensive mathematical formulations in the development and implementation of the algorithms, as well as in their adaptation to fault detection scenarios. In real implementations, it is sometimes not feasible to implement these solutions in industrial controllers capable of running in real time. On the other hand, the solution proposed in
this work uses BigML [8] as a machine learning tool, as it is a freely available, web-based tool, accessible even to developers with little mathematical background in artificial intelligence algorithms. With this approach, anomaly detection processes can be applied more quickly and efficiently in real-world vibration analysis scenarios.
3 Materials and Methods
According to [9], vibration analysis methodology in rotating machines helps to detect unbalance, misalignment, looseness, bearing faults, gear defects, belt wear and tear, pump cavitation and others. The analysis is usually performed by measuring mechanical movements with accelerometers or magnetic sensors. Typically, the signal amplitude is analyzed in the time and frequency domains through an FFT computation. Based on the ISO 10816 Vibration Severity Chart standard, it is possible to categorize the severity of the problem into 4 classes depending on the power and size of the rotating machine. According to the same standard, typical faults produce unusual low-frequency vibrations, so the analysis range should be from 10 to 1000 Hz. To be more precise in analysing the data, it is known that imbalances, misalignments and looseness are recorded at frequencies up to 300 Hz. The relationship between the failures that occur in rotating machines and the frequency spectrum is illustrated in Fig. 2.
Fig. 2. Typical machine faults distributed over the frequency spectrum.
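The spectrum computation described above can be sketched with NumPy: a sampling rate of 2000 Hz gives the 0–1000 Hz analysis band, and a 2 s record gives the 0.5 Hz resolution used in the experiments below; the 9.5 Hz test tone merely mimics the motor-speed peak discussed later:

```python
import numpy as np

fs = 2000.0             # sampling rate -> analysis band up to fs/2 = 1000 Hz
T = 2.0                 # record length -> frequency resolution 1/T = 0.5 Hz
t = np.arange(0.0, T, 1.0 / fs)

# Synthetic vibration record: a 9.5 Hz "motor speed" tone plus one harmonic.
x = 1.0 * np.sin(2 * np.pi * 9.5 * t) + 0.3 * np.sin(2 * np.pi * 19.0 * t)

amplitude = 2.0 * np.abs(np.fft.rfft(x)) / len(x)   # single-sided spectrum
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)           # 0, 0.5, 1.0, ... 1000 Hz

print(freqs[np.argmax(amplitude)])                   # dominant peak: 9.5
```

Truncating the analysis to 300 Hz then amounts to keeping only the first 601 bins of `amplitude` and `freqs`.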
In the state of the art there are some algorithms that are currently widely used to implement anomaly detection solutions. Robust Covariance [10] is an algorithm that detects anomalies and outliers by means of the Mahalanobis distance. One-class SVM [11] is an unsupervised algorithm that learns a decision function for novelty detection: classifying new data as similar or different to the training set. The Local Outlier Factor (LOF) [12] algorithm is an unsupervised anomaly detection method which computes the local density deviation of a given data point with respect to its neighbours. It considers as outliers the samples that have a substantially lower density than their neighbours. For this work, the implementation of Isolation Forest through the BigML framework was considered. Isolation Forest (IF) [13, 14] is an unsupervised model, without the need for predefined labels, based on decision trees and extensively used for outlier detection. In an Isolation Forest, randomly sub-sampled data is processed in a tree structure based on randomly selected features. The samples that travel deeper into the tree are less likely to be anomalies, as more cuts were required to isolate them. Similarly, the samples which end up in shorter branches indicate anomalies, as it was easier for the tree to separate them from other observations. As illustrated in Fig. 3, the algorithm tries to "isolate" outliers from the normal points. In order to isolate a data point, the algorithm recursively generates partitions on the sample by randomly selecting an attribute and then randomly selecting a split value for that attribute, between the minimum and maximum values allowed for it. According to the original isolation forest proposal [13], the anomaly score s of an instance x is calculated as:

s(x, n) = 2^(-E(h(x)) / c(n))    (1)

where h(x) is the path length of a point x, E(h(x)) is the average of h(x) over a collection of isolation trees, and c(n) is the constant that normalizes the average path length for a sample of n points:

c(n) = 2H(n - 1) - 2(n - 1)/n    (2)

where the harmonic number H(i) can be estimated by ln(i) + 0.5772156649 (Euler's constant).
Fig. 3. Overview of the isolation forest method.
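Equations (1) and (2) can be evaluated directly. A small sketch (note the characteristic property that a point whose average path length equals c(n) receives a score of exactly 0.5, while quickly isolated points score close to 1):

```python
import math

EULER_GAMMA = 0.5772156649   # Euler's constant, as used in the text

def c(n):
    """Eq. (2): c(n) = 2*H(n-1) - 2*(n-1)/n, with H(i) ~ ln(i) + gamma."""
    H = math.log(n - 1) + EULER_GAMMA
    return 2.0 * H - 2.0 * (n - 1) / n

def anomaly_score(avg_path_length, n):
    """Eq. (1): s(x, n) = 2^(-E(h(x)) / c(n))."""
    return 2.0 ** (-avg_path_length / c(n))

n = 256                                  # sub-sample size per tree
print(anomaly_score(c(n), n))            # average point -> exactly 0.5
print(anomaly_score(1.0, n))             # isolated in one cut -> close to 1
```

In practice the path lengths E(h(x)) come from the fitted trees; tools such as BigML (used in this paper) or scikit-learn's IsolationForest handle that part and expose only the resulting scores.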
4 Experimental Results
This section presents the experimental results achieved in the laboratory through different tests in an experimental apparatus consisting of a single-phase 0.2 kW/3000 rpm motor and a shaft alignment/misalignment system, as illustrated in Fig. 4. Vibration acquisitions were performed with a DYTRAN model 3134D piezoelectric accelerometer with a sensitivity of 500 mV/g through a PCB PIEZOTRONICS 482A21 ICP signal conditioner, a National Instruments data acquisition board (NI DAQ 6008) and a data acquisition virtual instrument developed in LabVIEW. The accelerometer and the data acquisition chain were calibrated with a PCB PIEZOTRONICS 394C06 handheld portable shaker that oscillates at 159.2 Hz. The data acquisition setup is aligned with the Operational Technology (OT) part of Fig. 1.
Fig. 4. Experimental apparatus.
According to the ISO 10816 standard, all acquisitions were performed up to 1000 Hz, with a resolution of 0.5 Hz. Different experiments were carried out, namely misalignments and loosening at different rotation speeds, which allowed the creation of a diversity of datasets for analysis. Fig. 5 shows the frequency spectrum of one acquisition. As expected, signals with information about the motor operating status are identified at low frequencies, up to 300 Hz. For better interpretation of the signals, a representation truncated at 300 Hz is used, as shown in Fig. 6. Three groups of signals with greater amplitude are perfectly visible, which means that the detection of any anomaly will have to go through the analysis of these signals. This acquisition was performed with the motor at 570 rpm, measured with a strobe lamp. This means that the first peak, at 9.5 Hz (570 rpm/60), corresponds to the motor speed. From the spectral analysis it is also possible to pre-determine a threshold above which a value can be considered an outlier, a potential anomaly. This threshold corresponds to the mean plus 3 × the standard deviation, as shown in Fig. 6.
Fig. 5. Frequency spectrum of an acquisition.
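The pre-threshold mentioned above (mean plus 3 × standard deviation) is straightforward to apply to the amplitude spectrum. A minimal sketch with an illustrative spectrum whose peaks are placed at three of the frequencies later listed in Table 1 (9.5, 19 and 28 Hz); the noise floor and peak amplitudes are invented for the example:

```python
import numpy as np

# Illustrative amplitude spectrum: a low noise floor (0-300 Hz at 0.5 Hz
# steps) with three strong peaks at 9.5, 19 and 28 Hz (bins 19, 38, 56).
rng = np.random.default_rng(42)
amplitudes = rng.uniform(0.0, 0.05, size=601)
amplitudes[[19, 38, 56]] = (0.86, 0.40, 0.30)

# Pre-threshold: mean plus 3 x standard deviation of the spectrum.
threshold = amplitudes.mean() + 3.0 * amplitudes.std()
outlier_freqs = np.flatnonzero(amplitudes > threshold) * 0.5   # bin -> Hz
print(outlier_freqs)   # peaks at 9.5, 19.0 and 28.0 Hz
```

This simple rule flags candidate bins only; the isolation forest then scores them without any manually chosen threshold.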
In this experiment, a misalignment of the motor shaft was imposed, which implies a significant increase in the amplitude of the frequencies associated with the motor, as better illustrated in Fig. 7. This misalignment caused an imbalance in the motor, which is why all the peaks gained amplitude. At the motor speed frequency, the peak reaches a vibration velocity V_RMS = 0.858 mm/s, which according to the ISO 10816 vibration severity standard corresponds, for small machines as is the case here (Class I), to a satisfactory severity. The automatic anomaly detection was implemented in the BigML framework, exploiting the potential of the unsupervised isolation forest algorithm. Figure 8 shows a snapshot of the BigML output with the identification of the top 5 anomalies detected. The algorithm was configured to search for 20 anomalies along the spectrum and automatically identified 16 anomalies with an anomaly score of 90.61%, as presented in Table 1. These results are very promising and show that the isolation forest is an adequate algorithm for this kind of analysis.
Fig. 6. Frequency spectrum [0–300 Hz].
Fig. 7. Frequency spectrum [0–35 Hz].
Table 1. Automatically detected anomalies.

Frequency [Hz] | Anomaly score [%]
9.0            | 90.61
9.5            | 90.61
10.0           | 90.61
18.5           | 90.61
19.0           | 90.61
28.0           | 90.61
90.5           | 90.61
91.0           | 90.61
100.0          | 90.61
100.5          | 90.61
109.0          | 90.61
109.5          | 90.61
199.5          | 90.61
200.0          | 90.61
200.5          | 90.61
209.5          | 90.61
Fig. 8. BigML - automatic anomaly detection results.
5 Conclusions and Future Work
The work focuses on the automatic detection of anomalies in rotating machines through machine learning algorithms. The baseline of the work is condition monitoring through vibration analysis, extensively used in industry, but with the added value of seeking to detect failures as early as possible, in an intelligent way, indispensable for predictive maintenance scenarios. Machine learning algorithms have the capability to process large amounts of data and identify patterns with high levels of accuracy, which is why this contribution to automatic anomaly detection is worth exploring. The paper explores the well-known isolation forest algorithm; due to its outlier detection capabilities, it is considered a good algorithm for detecting anomalies in vibration analysis scenarios. Anomalies cause elevations of frequency peaks, and abnormal variations can be easily identified as potential faults. The results show that the algorithm achieves scores higher than 90% in the detection of potential anomalies. The results were compared with a human analysis of the frequency spectrum, which confirmed the frequencies corresponding to the misalignments and imbalances imposed on the motor. As future work, it is expected to integrate this approach in a real-time monitoring framework, aligned with the Reference Architectural Model for Industrie 4.0 (RAMI 4.0), with online reports through dashboards and alerts on smart devices. Other machine/deep learning algorithms will be explored to establish comparisons of precision and robustness.
References

1. Europe Predictive Maintenance Market - Industry Trends and Forecast to 2027. https://www.databridgemarketresearch.com/reports/europe-predictive-maintenance-market. Accessed 23 Jan 2022
2. Popescu, T.D., Aiordachioaie, D., Culea-Florescu, A.: Basic tools for vibration analysis with applications to predictive maintenance of rotating machines: an overview. Int. J. Adv. Manuf. Technol. 118, 2883–2899 (2021). https://doi.org/10.1007/s00170-021-07703-1
3. Egaji, O.A., Ekwevugbe, T., Griffiths, M.: A data mining based approach for electric motor anomaly detection applied on vibration data. In: 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), pp. 330–334 (2020). https://doi.org/10.1109/WorldS450073.2020.9210318
4. Saeki, M., Ogata, J., Murakawa, M., Ogawa, T.: Visual explanation of neural network based rotation machinery anomaly detection system. In: 2019 IEEE International Conference on Prognostics and Health Management (ICPHM), pp. 1–4 (2019). https://doi.org/10.1109/ICPHM.2019.8819396
5. Han, J.-H., Choi, D.-J., Hong, S.-K., Kim, H.-S.: Motor fault diagnosis using CNN based deep learning algorithm considering motor rotating speed. In: 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA), pp. 440–445 (2019). https://doi.org/10.1109/IEA.2019.8714900
6. Choi, D.-J., Han, J.-H., Park, S.-U., Hong, S.-K.: Comparison of motor fault diagnosis performance using RNN and K-means for data with disturbance. In: 2020 20th International Conference on Control, Automation and Systems (ICCAS), pp. 443–446 (2020). https://doi.org/10.23919/ICCAS50221.2020.9268271
7. Dionisio, R., Torres, P., Ramalho, A., Ferreira, R.: Magnetoresistive sensors and piezoresistive accelerometers for vibration measurements: a comparative study. J. Sens. Actuator Netw. 10, 22 (2021). https://doi.org/10.3390/jsan10010022
8. BigML. https://bigml.com/. Accessed 28 Feb 2022
9. SenseGrow. https://www.sensegrow.com/. Accessed 19 Jan 2022
10. Robust covariance estimation and Mahalanobis distances relevance. https://scikit-learn.org/stable/auto_examples/covariance/plot_distances.html. Accessed 23 Jan 2022
11. One-Class SVM. https://scikit-learn.org/stable/auto_examples/svm/plot_oneclass.html. Accessed 23 Jan 2022
12. Local Outlier Factor. https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html. Accessed 23 Jan 2022
13. Liu, F.T., Ting, K.M., Zhou, Z.-H.: Isolation forest. In: Eighth IEEE International Conference on Data Mining (ICDM 2008), pp. 413–422 (2008). https://doi.org/10.1109/ICDM.2008.17
14. Buschjäger, S., Honysz, P.-J., Morik, K.: Generalized isolation forest: some theory and more applications - extended abstract. In: 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pp. 793–794 (2020). https://doi.org/10.1109/DSAA49011.2020.00120
Deep Neural Networks in Fake News Detection

Camelia Avram(B), George Mesaroş, and Adina Aştilean

Technical University of Cluj-Napoca, Cluj-Napoca, Romania
[email protected]
Abstract. The primary objective of this paper is to develop a method by which a Deep Neural Network can be used to perform fake news detection without implementing any complex Natural Language Processing techniques. A key element of the proposed structure is a data processing component that manipulates the news article data (shape, data type, etc.) in the database so that it can be used inside a Deep Neural Network structure, without the need for additional complex NLP techniques. A Deep Neural Network model that can perform binary classification on a given set of news articles was built. The method was implemented in Python using the Jupyter Notebook framework. The algorithm has an accuracy of at least 70% on the testing data set, leading to conclusive results regarding whether the considered material belongs to the fake news category.

Keywords: Fake news detection · Deep Neural Network · Machine learning
1 Introduction

In recent years, a growing problem has been the widespread development of fake news. Due to the quick rise of social media, as well as many independent news outlets and blogs, enabled by increased internet accessibility and online traffic, fake news has become more prevalent than ever. This has led to growing concern, especially in the western world, and has helped spread misinformation, as well as greatly aided the rise of many populist parties and politicians. The phenomenon is also enhanced by the constant need for media outlets to direct traffic through their systems (i.e., sell more subscriptions, increase the number of clicks on their websites, etc.). As a result, journalists are always obligated to generate articles quickly, often sacrificing the accuracy of the information, since taking the time to check the facts they include in their articles could mean losing the advantage of being the first to write on the subject. This, combined with the never-ending pursuit of sensationalism (i.e., headlines and subjects that attract people through their shock value), has led to a rapid decline in the quality of the content present in today's media, especially online. As a result, modern media often rely on the misinterpretation and exaggeration of trivial information to enhance their apparent importance and attract attention or generate panic, renouncing established journalistic standards [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 24–35, 2022. https://doi.org/10.1007/978-3-031-09385-2_3

Even though fact-checking organizations exist that employ professional researchers and fact-checkers to manually check the credibility of certain claims in news articles,
Deep Neural Networks in Fake News Detection
25
consumers still significantly struggle to identify proper criteria for deciding whether a source of information is credible. Studies show that around 46% of readers gauge the credibility of sites and articles using criteria that cannot in any way be accurately correlated with an article's truthfulness, basing their analysis in part on the appeal of the overall visual design of a site, including layout, typography, font size, and color schemes.
2 State of the Art

The tendency to rely on the overall visual appeal of a source to assess credibility occurs more often with certain categories of news sources, especially websites. Consumer comments relating credibility to visual design issues tended to be more frequent with financial websites (54.6%), search engines (52.6%), travel-related outlets (50.5%), and e-commerce platforms (46.2%), while they were less frequent with health-related websites (41.8%), news sites (39.6%), and sites related to non-profit organizations (39.4%) [2]. Machine learning is broadly defined as the study of computer algorithms that are capable of automatically improving through experience. Supervised learning includes classification algorithms, as well as different types of regression [3–5]. Deep Neural Networks are defined as a subclass of Artificial Neural Networks (computing systems inspired by the biological neural networks in animal brains) with multiple layers between the input and output layers [6]. These layers constitute the internal logic of the network, and together they can find a correct mathematical manipulation (linear or nonlinear) that leads to a correspondence between input and output. The Deep Neural Network moves from layer to layer and computes the probability of each output [7–12]. The Sigmoid (or logistic) function can also be used as an activation function by neurons in Neural Networks. Its main advantages as an activation function are its smooth gradient, which prevents sudden jumps in output values, and its clear predictions, since its outputs saturate very close to 0 or 1 for inputs outside a narrow range around zero. A very notable technique used in Deep Learning is backpropagation. [14] defines back-propagation (or "backprop") as an example of dynamic programming used for fitting a neural network. In backpropagation, the network computes the gradient of the loss function for a single training example with respect to the weights of the network.
This process is performed very efficiently, as opposed to a more naïve direct computation of the gradient with respect to each individual weight, reducing computational complexity. Due to this efficiency, it becomes feasible to use gradient methods to train complex, multilayer networks by updating the weights to minimize loss [13]. Convolutional Neural Networks (CNNs), also known as space invariant artificial neural networks (SIANN) are defined as specialized neural networks that process data that has a “known grid-like topology”. Convolutional Neural Networks present three important characteristics that improve Machine Learning: sparse interactions, parameter sharing, and equivariant representations [15, 16]. At a fundamental level, a typical CNN layer operates in three stages [17]. The first stage involves the layer performing several parallel convolutions to generate a set of linear activations. In the second stage (sometimes called the “detector stage”), each of
26
C. Avram et al.
the previously generated linear activations passes through a nonlinear activation function (e.g., the ReLU function). Finally, during the third stage, a pooling function is used to further change the layer output by replacing it at certain locations with a summary statistic of the nearby outputs [25]. When applied to facial recognition, CNNs achieved a large decrease in error rate [18], and in 2012, when applied to the MNIST database (a large database of handwritten digits commonly used for training image processing systems), an error rate of 0.23% was reported [19]. Natural Language Processing, commonly known as "NLP", is a field that combines computer science, linguistics, information engineering, and artificial intelligence. NLP is concerned with describing the interactions between computers and natural languages. Due to the widespread nature of the field and the advanced research performed over the years, many NLP tasks and problems have been defined, one of which is Grammar Induction [20, 21]. [22] examines the use of various deep learning structures, particularly Deep Neural Networks, on text. There is an important distinction to be made between the use of DNNs on bodies of text and their use on images. In [23] a Recursive Neural Network (RNN) is used to represent word morphemes as vectors. A morpheme is defined as "the smallest meaningful unit in a language". In [24] fact-checking is defined as "the assignment of a truth value to a claim made in a particular context", considering it a naturally binary classification task. Still, some popular approaches expand on the classification aspect by including multiple categories (e.g., "partially true"), as it is often the case that statements are not completely true or false, given the complexity of human language and scientific information. [27] defines the task of fake news detection, from an algorithmic perspective, as the prediction of the probability of a given news article (report, exposé, editorial, etc.)
being intentionally deceptive (i.e., containing intentionally deceptive information). Certain tools, based on the presented Artificial Intelligence techniques and algorithms, aim to mimic the fact-checking abilities of journalists and professional fact-checkers while removing the disadvantages that stem from the tedious nature of the task. [27] also provides several examples of relevant typical techniques employed to detect deception in news articles. One commonly used approach for the training of "classifiers" is making use of sets of word and category frequencies in Support Vector Machines. Essentially, a mathematical model sufficiently trained on examples pre-classified into one of two categories (binary classification) can predict instances of future deception based on numeric clustering and distances. [26, 28] present a Deep Learning approach to fake news detection with 94.21% accuracy on test data. As a general observation, binary classification using a classical Deep Neural Network for the problem of fake news detection (as opposed to stance detection) has not been successfully attempted or sufficiently documented. Existing systems tend to be heavily dependent on a complex layer that applies various NLP techniques to pre-process the data in order to simplify the task of the Deep Neural Network. As such, a DNN model is able to classify an article as either containing fake news or not (binary classification) only with intensive data pre-processing.
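The word/category-frequency approach with Support Vector Machines mentioned above can be sketched with scikit-learn. The toy texts and labels below are invented for illustration and are not from any dataset discussed in this paper:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Tiny pre-classified corpus (1 = deceptive, 0 = truthful) -- illustrative only.
texts = ["shocking miracle cure doctors hate",
         "council approves budget for road repairs",
         "celebrity secret revealed you won't believe",
         "quarterly inflation figures published today"]
labels = [1, 0, 1, 0]

# Word-frequency features feed a linear SVM, in the spirit of the approach in [27].
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LinearSVC().fit(X, labels)

# Classify an unseen headline (no claim about the result; toy model).
prediction = clf.predict(vectorizer.transform(["miracle secret they hide"]))
```

In practice such a classifier would be trained on thousands of labeled articles rather than four sentences.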
3 Fake News Detection System

A Deep Neural Network-based design method is proposed to solve the problem of fake news binary classification. A modular approach is preferred so that each individual component can be designed, implemented, tested, and optimized independently. Deep Neural Networks perform well in pattern-recognition tasks since they possess multiple layers that can gradually perform more and more complex analysis of the input data [29]. At a fundamental level, they use a set of parameters (weights and biases) and essentially construct a mathematical function that maps the input data to the output data. To implement a Deep Neural Network, there are multiple important elements that will ultimately influence the accuracy of the model, including, perhaps most importantly, the number of hidden layers and the number of neurons in each hidden layer. Like any Neural Network, this one will have one input layer and one output layer. The number of neurons in the input layer will be equal to the normalized length of the articles, whereas the output layer will only have one neuron. There is no fully reliable way of algorithmically computing the number of hidden layers or the number of neurons in each layer. As a result, arbitrary values must be chosen, and these values should be treated as hyperparameters and tuned as necessary through an experimental trial-and-error process, with a balance between accuracy and execution time in mind. Aside from these aspects, multiple components are part of the overall Deep Neural Network structure (see Fig. 1):
• Parameter initialization module
• Forward propagation module
• Cost computation module
• Backward propagation module
• Parameters update module
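The interaction of these five modules can be sketched with a single-neuron (logistic regression) instance in NumPy. This is an illustrative sketch only, not the authors' implementation, and the function names are assumptions:

```python
import numpy as np

def init_params(n_features, seed=0):             # parameter initialization module
    rng = np.random.default_rng(seed)
    return {"W": rng.standard_normal((1, n_features)) * 0.01,
            "b": np.zeros((1, 1))}

def forward(X, p):                               # forward propagation module
    Z = p["W"] @ X + p["b"]
    return 1.0 / (1.0 + np.exp(-Z))              # sigmoid activation

def compute_cost(A, Y):                          # cost computation module
    m = Y.shape[1]
    return float(-(Y * np.log(A) + (1 - Y) * np.log(1 - A)).sum() / m)

def backward(X, A, Y):                           # backward propagation module
    m = Y.shape[1]
    dZ = A - Y
    return {"dW": dZ @ X.T / m, "db": dZ.sum(axis=1, keepdims=True) / m}

def update(p, g, lr):                            # parameters update module
    return {"W": p["W"] - lr * g["dW"], "b": p["b"] - lr * g["db"]}

# Toy run: learn the logical AND of two inputs (4 examples, 2 features each).
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
Y = np.array([[0., 0., 0., 1.]])
p = init_params(2)
for i in range(2000):
    A = forward(X, p)
    cost = compute_cost(A, Y)                    # a full run would print this every 100 iterations
    p = update(p, backward(X, A, Y), lr=0.75)
```

A deep network adds more layers inside `forward` and `backward`, but the overall loop of the five modules stays the same.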
Another very important value that is initially assigned arbitrarily and tuned as a hyperparameter in the final phase is the learning rate, which influences the weight (i.e., ratio) given to the gradients (derivatives) computed by backpropagation when minimizing the cost. Simply put, the learning rate determines by how much the parameters are updated at each step. The network should also construct a graph of the cost over the iterations and print the cost values at an arbitrary frequency (e.g., every 100 iterations) for a relevant visualization of the learning process. Figure 1 illustrates the general structure of the Deep Neural Network system, where:
• "Articles" represents the Articles Matrix generated by the Data Processing Layer.
• "Labels" is a vector of verdicts as taken from the dataset.
• "Layers Dims" represents a matrix with the dimensions of each layer.
• "Iterations" represents the number of iterations the network will go through in its learning process.
• "Print Cost" is a Boolean value that decides whether the network should print the cost.
• "OUT" represents the output of the network, i.e., the updated parameters that are ultimately used to make predictions on new data.

The cost is computed as follows, with (1):

J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right]    (1)

where:
• m is the number of training examples,
• Y is the vector of labels,
• Ŷ is the output of the activation function of the output layer (i.e., the prediction vector).

Special attention was given to the design of the data processing layer. Thus, an initial layer was designed with the purpose of extracting only the useful data, i.e., the text corpus and the label, and converting the text into a usable format. The considered inputs of this layer were the number of training examples, as well as the raw input data (training or test). These were converted into an Article Matrix that can be used in multiplications and other operations by the network. Since the aim of this paper was to examine the accuracy of a Deep Neural Network in fake news detection without the use of complex preprocessing techniques such as NLP, a text-to-ASCII conversion was considered desirable. Another intuitive, yet crucial aspect is that, unlike images, which can easily be compressed without much impact, articles come in different sizes, and it is impossible to perform a similar compression.
Fig. 1. The structure of the Deep Neural Network.
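Assuming the cost in (1) is the standard binary cross-entropy used with a sigmoid output layer, a quick numeric check with invented values:

```python
import numpy as np

Y = np.array([[1., 0., 1.]])            # labels
Y_hat = np.array([[0.9, 0.1, 0.8]])     # sigmoid outputs of the final layer
m = Y.shape[1]

# Binary cross-entropy: -(1/m) * sum of y*log(y_hat) + (1-y)*log(1-y_hat).
cost = -(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat)).sum() / m
# cost = -(ln 0.9 + ln 0.9 + ln 0.8) / 3 ≈ 0.1446
```

The cost shrinks toward zero as the predictions approach the labels, which is the quantity the training loop drives down.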
However, to perform matrix operations, the dimensions must match. As a result, the data processing layer should also normalize the sizes of the articles, Fig. 2.
Fig. 2. The general structure of the data processing layer
The following main operations were performed to implement this layer:
• Eliminate initial empty sections and tags.
• ASCII conversion.
• Normalize size of articles.
• Create a matrix of articles, where each column is a vector of numbers representing each article in ASCII format.
• Normalize ASCII values between 0 and 1.
• Output Article Matrix of shape (Normalised_Article_Length, Number_Of_Examples).

An initial component is required for initializing the parameters that will be used to compute the predictions. Normally, some arbitrary or random set of parameters, i.e., weights and biases, is chosen.
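The data-processing operations above can be sketched as follows. The helper name `articles_to_matrix` is hypothetical, and plain ASCII input is assumed; this is not the authors' implementation:

```python
import numpy as np

def articles_to_matrix(articles):
    # ASCII conversion: each article becomes a vector of character codes.
    coded = [[ord(c) for c in a if ord(c) < 128] for a in articles]
    # Normalize size by zero-padding every article to the longest length;
    # zeros do not affect the network's matrix multiplications.
    n = max(len(v) for v in coded)
    mat = np.zeros((n, len(coded)))
    for j, v in enumerate(coded):
        mat[: len(v), j] = v
    # Scale the ASCII values into [0, 1].
    return mat / 127.0

# Two toy "articles"; each column of M is one article.
M = articles_to_matrix(["Breaking news!", "Shorter."])
# M has shape (Normalised_Article_Length, Number_Of_Examples) = (14, 2)
```

A real data-processing layer would also strip empty sections and tags before the conversion, as listed above.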
4 Implementation

The software application is written in Python in the Jupyter Notebook environment, due to the many advantages that make them widely used, especially for programming systems that make use of Artificial Intelligence. The model assumes some arbitrary values for the hyperparameters (the learning rate and the number of iterations), but they can be set to any value. The weights and biases represent the learnable parameters of the Deep Neural Network model. The model executes each iteration in a for loop, prints the cost every 100 iterations, and constructs a plot of the cost. For this to happen, proper sizes must be determined for every weight and bias matrix. These sizes are highly dependent on the size of each individual layer and, as such, the first component that should be designed is one that defines the sizes of each layer. The weights are initialized using random values, whereas the biases are initialized with zero values. The sizes of the hidden layers are given arbitrarily and used as hyperparameters to tune the network for better accuracy. After hyperparameter tuning, the most suitable learning rate was found to be 0.75 and the optimal number of iterations was found to be 2000. Normalization of the sizes is then performed by adding zeros at the end of each article. As the Deep Neural Network will perform mostly matrix multiplications, zeros
do not influence this operation. After this step, each article is a vector of ASCII numbers corresponding to each character (with zeros at the end). These vectors are then appended as the columns of a matrix. The values of the matrix are finally normalized between 0 and 1. The matrix is then split into training and test data. The weights are initialized with random values, scaled down for additional efficiency in computing, and the biases are initialized as zeros. The assert statement is used here, and in many other similar functions, as a debugging aid. It tests whether the sizes of the output of each function correspond to the expected sizes. If there is a problem with the sizes, an AssertionError exception is raised. A prediction module is implemented that follows the design in Fig. 3 and calls the Model_forward function to compute the probability that an article contains fake news. If the predicted probability is over 50%, the prediction is True; otherwise, it is False.
Fig. 3. The structure of the prediction module
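The thresholding step of the prediction module can be sketched as follows, where `probabilities` stands in for the output of the forward pass (the text's Model_forward); the helper name is an assumption:

```python
import numpy as np

def predict(probabilities):
    # Probability over 50% -> True ("contains fake news"), otherwise False.
    return probabilities > 0.5

# Toy forward-pass outputs for three articles.
probs = np.array([[0.92, 0.31, 0.55]])
preds = predict(probs)
```

The Boolean vector `preds` is then compared against the labels to measure accuracy.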
The output of the linear function is passed through an activation function which, depending on the layer, is either a ReLU (or tanh) or a Sigmoid function.
Fig. 4. The forward activation module
Figure 4 illustrates the structure of the Activation component of the forward propagation subsystem, where:
• Activation is a string that dictates the type of activation function that will be used by the current layer (ReLU, tanh, or sigmoid),
• A_prev is the value of the activation from the previous layer (or the input data),
• Linear cache and Activation cache are tuples storing data needed for the backward propagation, as above. They are combined to form another tuple, simply called Cache.
Multiple such structures are cascaded together, each representing a layer of the Deep Neural Network (Fig. 5).
Fig. 5. The structure of the forward propagation module
The outputs of each layer are the value of the activation function and the caches, which are stacked together to form the final cache, used for backward propagation. "AL" represents the final output of the forward propagation and contains the vector of predictions created by the Neural Network. This structure, together with a backward propagation component, is executed repeatedly over several iterations to update the parameters of the model. Each layer takes as input the activation values from the previous layer, as well as the cache. The first three hidden layers, as discussed above, use ReLU or the hyperbolic tangent as their activation functions, whereas the final layer, the output layer, uses a Sigmoid function to automatically normalize the outputs. These outputs represent the probability (between 0 and 1) that a given article contains misleading information and thus can be classified as fake news, and they are later used to make a final binary prediction.
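The cascade of layers described above can be sketched as follows. The parameter naming (`W1`, `b1`, …) and the helper `model_forward` are assumptions for illustration, not the authors' code:

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def model_forward(X, params, n_layers):
    # ReLU hidden layers followed by a sigmoid output layer.
    caches = []
    A = X
    for l in range(1, n_layers):
        Z = params["W" + str(l)] @ A + params["b" + str(l)]
        caches.append((A, Z))            # linear + activation cache for backprop
        A = relu(Z)
    Z = params["W" + str(n_layers)] @ A + params["b" + str(n_layers)]
    caches.append((A, Z))
    AL = sigmoid(Z)                      # "AL": the vector of predictions
    return AL, caches
```

Each cache tuple stores the inputs needed to compute that layer's gradients during backward propagation.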
5 Simulation and Testing

The Deep Neural Network model was run multiple times, with varying values for the hyperparameters. A testing dataset, which represents a subsection of the initial dataset, was used to test the applicability of the results. Any DNN can perform well on the training dataset, as it learns certain patterns that help it map the inputs to the outputs. However, the key aspect of a good Deep Neural Network is whether those assumptions (which translate into the parameters W and b) can be successfully applied to new data. The testing data consists of 200 articles, while the training data has 1000 articles. To assess the accuracy of the proposed method, the following relation (2) was used:

Accuracy = \frac{Y \cdot \hat{Y}^{T} + (1 - Y) \cdot (1 - \hat{Y})^{T}}{m}    (2)

The dot product of the labels and the predictions shows how many correct predictions of "True" (i.e., "contains fake news") were made, as each time the values differ (Y is 1 and Ŷ is 0, or vice versa) the result of this product is 0. The same is true for the second part of the formula, which computes how many predictions of "False" were correct. The accuracy on the test data is used as a measure of the capability of the Neural Network, and hyperparameter tuning is performed after every re-run to ensure maximum accuracy is achieved. Indeed, during the initial phases of testing, the accuracy on the training dataset was acceptable (i.e., over 70%), but on the validation dataset, it was under
50%. This was not acceptable, as it meant the network initially performed worse than blind guessing. However, the results improved after changing the hyperparameters, especially the learning rate, layer sizes, and number of iterations. For example, different values were tried for the learning rate (0.075, 0.2, 0.5, etc.), with smaller values resulting in irrelevant changes and values higher than 0.75 resulting in the network not being able to consistently decrease the cost over its learning process. As such, a value of 0.75 was chosen for the learning rate. This process was replicated for all mentioned hyperparameters. The results were assessed not only by computing the accuracy but also by printing the cost every 100 iterations and constructing a graph for better visualization of the progress made by the Neural Network. The real-time visualization of the cost values helps save time when running the network repeatedly for hyperparameter tuning and testing, as it can easily be seen when the DNN gets stuck, since the cost stops changing entirely for hundreds of iterations. A data processing layer was designed and implemented, which enables Deep Neural Networks to receive articles as input without the use of Natural Language Processing. Due to this, the performance of a pure Deep Neural Network structure on text, and specifically on the task of binary classification for fake news detection, could be analyzed. This analysis was performed by showcasing the evolution of the cost, as well as computing the accuracy on both the training and validation datasets. These results are shown in Fig. 6 and Fig. 7.
Fig. 6. Evolution of the cost values over the iterations
Fig. 7. Graph of the cost values over the iterations
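Assuming relation (2) sums the correctly predicted "True" and "False" cases and divides by the number of examples, a small worked example with invented labels and predictions:

```python
import numpy as np

Y = np.array([[1., 0., 1., 1., 0.]])        # ground-truth labels
Y_hat = np.array([[1., 0., 0., 1., 0.]])    # binarized predictions of the network
m = Y.shape[1]

# Y @ Y_hat.T counts correct "True" predictions; the complement term counts
# correct "False" predictions; mismatched pairs contribute 0 to both.
accuracy = ((Y @ Y_hat.T + (1 - Y) @ (1 - Y_hat).T) / m).item()
# one "True" was missed, so accuracy = 4/5 = 0.8
```

The same expression applied to the 200-article test set gives the reported test accuracy.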
6 Conclusions

To perform fake news detection without the help of NLP, a Deep Neural Network was used to perform binary classification on news article corpora, and a proof of concept was developed. The Deep Neural Network was constructed from scratch, without using an existing model, providing more control over the internal structure and functionality. This leads to a completely unique solution, superior in flexibility, though not in terms of final accuracy. The Deep Neural Network model can be improved in multiple directions. As mentioned previously, this fake news detection system has two crucial aspects, related to the input data and the DNN structure. Further hyperparameter tuning could result in higher accuracy. A more complex structure, with more and larger hidden layers, can potentially lead to higher accuracy. Another approach to improving the performance of the DNN is related to the data, as well as the data processing layer. For the experimental purposes of this paper, the dataset was reduced to 1000 articles. But it is intuitive that using a much larger dataset (containing hundreds of thousands of articles) could result in more accurate predictions and, more importantly, more generally applicable patterns learned by the network. A dataset could be constructed specifically for this binary classification task without the use of NLP, one which would consider the need to have articles of similar sizes. Such a dataset would mean there is no longer a need to adjust the lengths of the articles by adding zeros and, as such, all parameters would be trained on the same number of examples, resulting in a smaller chance of inaccuracies. Intuitively, Natural Language Processing techniques are of great value in a task such as fake news detection, as using these techniques can increase the accuracy further; however, the purpose of this paper was to analyze the performance of a pure DNN on such a task.
34
C. Avram et al.
Another approach to improving the DNN would be to add another layer on top of it, namely a Generator component, and use the DNN as a Discriminator in a Generative Adversarial Network. These models, presented in [30], represent a relatively new method for enhancing the accuracy of Neural Network classifiers by introducing a Generator component which generates fake data from random noise and tries to fool the Discriminator.
References

1. Stephens, M.: A History of News, pp. 55–57, 3rd edn. Oxford University Press, New York, USA (2007)
2. Fogg, B.J.: Stanford Guidelines for Web Credibility. A Research Summary from the Stanford Persuasive Technology Lab. Stanford University. www.webcredibility.org/guidelines. Accessed 10 Jan 2022
3. Bengio, Y.: Learning Deep Architectures for AI, pp. 13–18. Now Publishers, Norwell, Massachusetts, United States (2009)
4. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, pp. 2–55, 3rd edn. Prentice Hall, Upper Saddle River, New Jersey (2003)
5. Intel: The Future of AI in Agriculture. https://www.intel.com/content/www/us/en/big-data/article/agriculture-harvests-big-data.html. Accessed 10 Jan 2022
6. The Guardian: The superhero of artificial intelligence: can this genius keep it in check? https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificialintelligencedeepmind-alphago. Accessed 10 Jan 2022
7. Ferrucci, D., Levas, A., Bagchi, S., Gondek, D., Mueller, E.T.: Watson: beyond Jeopardy! Artif. Intell. 199–200, 93–105 (2013)
8. Cortes, C., Vapnik, V.N.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
9. Hosmer, D.W., Lemeshow, S.: Applied Logistic Regression, pp. 2–46, 2nd edn. Wiley Publishing (2000)
10. Borucka, A., Grzelak, M.: Application of logistic regression for production machinery efficiency evaluation. Appl. Sci. 9, 1–16 (2019)
11. Han, J., Morag, C.: The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning, pp. 195–201. Springer, Malaga-Torremolinos, Spain (1995)
12. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)
13. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 15, pp. 315–321. Montreal, Canada (2011)
14. Goodfellow, I., Bengio, Y., Courville, A.: Back-Propagation and Other Differentiation Algorithms, pp. 200–220. MIT Press (2016)
15. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
16. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning, pp. 326–339. MIT Press (2016)
17. Yi Tao, Z., Chellappa, R.: Computation of optical flow using a neural network. In: IEEE International Conference on Neural Networks, vol. 2, pp. 71–78 (1998)
18. Lawrence, S., Giles, C.L., Ah Chung, T., Back, A.D.: Face recognition: a convolutional neural network approach. IEEE Trans. Neural Networks 8, 98–113 (1997)
19. Ciresan, D., Meier, U., Schmidhuber, J.: Multi-column deep neural networks for image classification. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 3642–3649. Institute of Electrical and Electronics Engineers (IEEE), New York, NY (2012)
20. Klein, D., Manning, C.D.: Natural language grammar induction using a constituent-context model. In: Advances in Neural Information Processing Systems. Stanford University, California, USA (2002)
21. Alrehamy, H.H., Walker, C.: SemCluster: unsupervised automatic keyphrase extraction using affinity propagation. In: Chao, F., Schockaert, S., Zhang, Q. (eds.) UKCI 2017. AISC, vol. 650, pp. 222–235. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-66939-7_19
22. Ho Chiung, C., Baharim, K.N., Abdulsalam, A., Alias, M.S.: Deep neural networks for text: a review. In: Proceedings of the Sixth International Conference on Computer Science and Computational Mathematics (2017)
23. Luong, T., Socher, R., Manning, C.D.: Better word representations with recursive neural networks for morphology. In: CoNLL, pp. 104–113 (2013)
24. Vlachos, A., Riedel, S.: Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pp. 18–22. London, United Kingdom (2014)
25. Amazeen, M.A.: Checking the fact-checkers in 2008: predicting political ad scrutiny and assessing consistency. J. Political Market. 15(4), 433–464 (2008)
26. Morgan, M., Barker, D.C., Bowser, T.: Fact-checking polarized politics: does the fact-check industry provide consistent guidance on disputed realities? Forum 13(4) (2015)
27. Conroy, N., Rubin, V., Chen, Y.: Automatic deception detection: methods for finding fake news. In: Proceedings of the Association for Information Science and Technology, vol. 52, no. 1 (2015)
28. Thota, A., Tilak, P., Ahluwalia, S., Lohia, N.: Fake news detection: a deep learning approach. SMU Data Sci. Rev. 1(3) (2018)
29. Baker, D., Chaudhry, A.K., Thun-Hohenstein, P.: Stance detection for the fake news challenge: identifying textual relationships with deep neural nets, Stanford. https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1174/reports/2760230.pdf. Accessed 11 Jan 2022
30. Goodfellow, I.J., et al.: Generative adversarial nets. Machine Learning, Université de Montréal, Montreal (2014)
Cork-Rubber Composite Blocks for Vibration Isolation: Determining Mechanical Behaviour Using ANN and FEA Helena Lopes1(B)
and Susana P. Silva2
1 MEtRICs Research Center, University of Minho, Campus of Azurém, 4800-058 Guimarães,
Portugal [email protected] 2 Amorim Cork Composites, Rua Comendador Américo Ferreira Amorim, 260, 4535-186 Mozelos VFR, Portugal
Abstract. The static and dynamic performance of cork-rubber composites within the context of vibration isolation depends on the material hardness and sample geometry. The development of a new composite requires several iterations before the project's requirements are achieved. The application of modelling techniques at an early stage could help reduce product development time. As a first step towards the development of new cork-rubber composites, an existing database was used to create an artificial neural network (ANN) for the prediction of the static apparent compression modulus based on the sample's shape factor and Shore A hardness. Based on the results given by the ANN model, the dynamic performance of square cross-section block samples subjected to compression loading was analysed using finite element analysis, drawing on data from previous experimental tests of a different block sample composed of the same material. A comparison between previous experimental tests and the results of the presented approach was conducted to validate the methodology.

Keywords: Artificial neural networks · Compression · Cork-rubber composites · Finite element analysis · Stiffness · Vibration isolation
1 Introduction

Within the scope of Industry 4.0, the development of new products by organizations faces new challenges in meeting clients' and markets' needs. Creating custom products sometimes requires implementing new engineering approaches capable of achieving specific requirements while simultaneously ensuring that development time constraints are met [1–3]. The development and application of modelling and simulation strategies have provided new insights in different areas, from product development [4–7] to automated manufacturing systems [8–10]. The present work focuses on the development of cork-rubber composites. Several works have been carried out using modelling and simulation tools to assess the final performance [4, 5] and manufacturing process [6, 7] of agglomerated cork and rubber materials.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): ICIENG 2022, LNME, pp. 36–50, 2022. https://doi.org/10.1007/978-3-031-09385-2_4
Cork-Rubber Composite Blocks for Vibration Isolation
37
Cork-rubber composites are materials obtained through the vulcanization of synthetic or natural rubber compounds filled with cork granules obtained from the remains of cork stoppers production and harvest-related activities of Quercus suber L. trees [11–14]. These elastomeric materials can be applied as part of vibration and acoustic isolation systems between foundation and objects to be isolated, to reduce or avoid transmission of vibrations [12, 15–17]. To be able to accomplish its function properly according to the surrounding conditions, the characterization of cork-rubber composite materials must be performed in terms of static and dynamic performance, damping, creep resistance, among other properties [17–19]. In Fig. 1, an illustrative description of the mechanical behaviour of a system composed of a cork-rubber composite block subjected to compression, considering a friction contact between the block and rigid plates surfaces, is presented. Generally, the static deformation of an elastomer under compression should not exceed 10 to 15% of its thickness to guarantee reliable performance over long periods [17–19].
Fig. 1. Compression of cork-rubber composites with frictional contact between surfaces.
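The 10–15% deflection rule of thumb above can be expressed as a quick sizing check. The helper function and the 25 mm block thickness below are hypothetical, for illustration only:

```python
# Rule of thumb from the text: the static compression of an elastomer block
# should not exceed 10-15% of its thickness for reliable long-term performance.
def max_static_deflection(thickness_mm, limit=0.15):
    return limit * thickness_mm

# Hypothetical 25 mm cork-rubber block: allowable static deflection window.
lower = max_static_deflection(25.0, limit=0.10)   # conservative bound, 2.5 mm
upper = max_static_deflection(25.0, limit=0.15)   # upper bound, 3.75 mm
```

In a design workflow, the predicted deflection under the rated load would be checked against this window before a geometry is accepted.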
In these loading conditions, the mechanical performance of cork-rubber materials depends on material hardness, as observed in other elastomers. Usually, an increase in hardness leads to an increase in Young's modulus, as reported by several studies and developed models regarding rubber materials [20–22]. Due to the contact between the elastomer and the loading surfaces, geometry also influences the mechanical behaviour of a vibration isolator. The apparent compression modulus (Ec) is applied to describe this effect. Several authors have derived and presented mathematical models describing the relationship between the apparent compression modulus and the elastomer's geometry, assuming different Poisson's ratios [23–26]. To assess the dynamic performance of elastomers, different experimental methods can be applied, such as free, forced non-resonance or resonance vibrations [27]. For vibration isolation blocks, such as cork-rubber composites, it is common to evaluate the materials based on standard DIN 53513 [28]. As a result of static and dynamic characterization, the ratio between dynamic and static stiffness, or dynamic stiffness coefficient (Kd), can also be determined [17]. In the present study, two computational resources were used to study the static and dynamic performance of cork-rubber composites: finite element analysis (FEA) and artificial neural networks (ANN), a phenomenological modelling approach based on the
H. Lopes and S. P. Silva
collection of large datasets. A study on the application of FEA for the performance prediction of cork-rubber composite materials under static and dynamic compression loading was presented in previous articles [29, 30]. Regarding ANN, the development of a model able to predict the static behaviour of these composites is presented in [31]. The goal of this paper is to present and evaluate a modelling approach for predicting the mechanical performance of cork-rubber composites used in vibration isolation applications, based on the sample's hardness and geometry. Using historical data related to the material characterization of several types of cork-rubber composites, an ANN model was used to predict the behaviour of samples under a static compressive load in terms of apparent compression modulus. Based on the results given by the ANN, a first insight into the sample's dynamic behaviour can be obtained according to the methodology presented. Through the application of FEA, the dynamic behaviour of other geometries of the same material can be determined. To validate the approach presented in this paper, a comparison between experimental and simulation results of the two models is also presented and discussed.
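The geometry effect mentioned above (apparent compression modulus rising with shape factor) can be illustrated with the classical Gent–Lindley model [23] for a bonded, nearly incompressible rubber block, Ec = E0(1 + 2S²), where S is the shape factor. This is only an illustrative sketch: the cork-rubber composites studied here follow their own shape-factor relation [29].

```python
def gent_lindley_ec(e0, s):
    """Gent-Lindley estimate for a bonded, nearly incompressible
    elastomer block: Ec = E0 * (1 + 2 * S**2)."""
    return e0 * (1.0 + 2.0 * s ** 2)

# The geometry effect: a squatter block (higher shape factor S) appears
# stiffer in compression even though the material is the same.
print(gent_lindley_ec(5.0, 0.5))   # slender block: Ec = 7.5
print(gent_lindley_ec(5.0, 1.25))  # squat block: Ec = 20.625
```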
2 Materials and Methods

2.1 Methodology

A general description of the approach that led to the development of this study is presented in Fig. 2. The first step consisted of using an artificial neural network model to predict the static behaviour of a cork-rubber composite material under a compressive load, based on its hardness and geometry. The samples used in this study are square cross-section blocks, and the parameter chosen to quantify their geometry was the shape factor: the ratio between the loaded area and the total area free to bulge. Using dynamic performance data of one sample of a material with a specific geometry (the standard sample), the performance of other sample geometries can be deduced from FEA output data. The first step of the dynamic analysis consists of determining the static performance of a standard sample with a compression behaviour equivalent to that given by the ANN model. Using the dynamic stiffness coefficient of a standard sample, an equivalent Young's modulus at each stress level can be estimated and used as input for FEA. After the FEA is conducted, the output data (namely displacement) is processed to obtain the dynamic parameters: the dynamic stiffness and the natural frequency of the system.

2.2 Material Data

The database considered for the development of the ANN model was composed of 492 data points from previous experimental characterization of cork-rubber composite samples with equal cross-section area and different thicknesses. The experimental characterization consisted of a quasi-static compression test until a certain load was reached, chosen according to the sample's hardness. Each sample was successively compressed three times at a constant rate of 5 mm/min, with only the third compression being recorded. All samples were tested on the same equipment. Load-deflection curves
Cork-Rubber Composite Blocks for Vibration Isolation
Fig. 2. Overview of the implemented methodology.
for each material were retrieved and, due to the linear behaviour of the cork-rubber composites up to 10% strain [29], the apparent compression modulus (Ec) for each sample was calculated by Eq. 1:

Ec = σ / 0.1    (1)

where Ec is the apparent compression modulus and σ corresponds to the stress at 10% strain. Hardness Shore A, shape factor and apparent compression modulus were registered for each sample. A description of the database properties is presented in Table 1.

Table 1. Statistics of the database used for ANN development

Variables                             Minimum   Maximum   Mean    Standard deviation
Shape factor                          0.26      3.39      0.65    0.50
Hardness Shore A                      33        88        60.88   16.40
Apparent compression modulus (MPa)    1.18      43.21     9.27    8.95
Based on the quasi-static compression test results and on the values of the dynamic stiffness coefficient of a cork-rubber composite material (Kd), the corresponding dynamic compression modulus (Ed) is determined by Eq. 2:

Ed = Kd · Ec    (2)
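Equations 1 and 2 can be combined into a short computation (illustrative values only; the 0.5 MPa stress and the dynamic stiffness coefficient of 1.8 are placeholders, not measured data):

```python
def apparent_compression_modulus(stress_at_10pct):
    """Eq. 1: Ec = sigma / 0.1, with sigma the stress at 10% strain (MPa)."""
    return stress_at_10pct / 0.1

def dynamic_compression_modulus(kd_coeff, ec):
    """Eq. 2: Ed = Kd * Ec, with Kd the dynamic stiffness coefficient."""
    return kd_coeff * ec

ec = apparent_compression_modulus(0.5)     # 0.5 MPa at 10% strain -> Ec = 5 MPa
ed = dynamic_compression_modulus(1.8, ec)  # illustrative Kd = 1.8 -> Ed = 9 MPa
```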
The values corresponding to the dynamic stiffness coefficient used in this study were obtained through previous experimental data of two materials with different hardness ranges. For each material, a standard sample was chosen (squared cross-section block 60 × 60 × 30 mm) and its results of dynamic stiffness coefficient at different stress levels were applied to determine the input values for finite element analysis.
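The shape factors used throughout this study follow directly from the definition given in Sect. 2.1 (loaded area over total area free to bulge). A minimal helper, checked against the standard sample and the geometries that appear later in Table 3:

```python
def shape_factor(width, depth, thickness):
    """Shape factor = loaded area / total area free to bulge.
    For a rectangular block the bulging area is the four lateral faces."""
    loaded = width * depth
    free_to_bulge = 2.0 * (width + depth) * thickness
    return loaded / free_to_bulge

print(shape_factor(60, 60, 30))    # standard sample -> 0.5
print(shape_factor(150, 150, 30))  # Table 3 geometry -> 1.25
print(shape_factor(60, 60, 50))    # Table 3 geometry -> 0.3
```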
2.3 Numerical Procedures

Development of Artificial Neural Network. The artificial neural network was developed with the MATLAB Deep Learning Toolbox [32]. Data from 492 experimental tests of several cork-rubber composite materials were used to develop an ANN for the prediction of the apparent compression modulus. Hardness Shore A and shape factor were introduced as input parameters for the model. A representation of the data used for the development of the ANN model is presented in Fig. 3.
Fig. 3. Data used for the development of the artificial neural networks model.
To apply the early stopping method, the data were divided into three datasets for the training (70%), validation (15%) and testing (15%) stages, and each variable was normalized into the range [−1, 1]. The performance function for the development of the ANN was the mean squared error. The architecture of the neural network was composed of a single hidden layer with three neurons. A schematic figure of the developed ANN is presented in Fig. 4. The activation functions on the hidden and output layers were hyperbolic tangent sigmoid and linear, respectively. For the training of the neural network, the Levenberg-Marquardt back-propagation algorithm was applied. A detailed study on the development process of the ANN model is presented in previous work [31].

Finite Element Analysis. FEA was applied as part of the process to predict the dynamic performance of a system composed of a cork-rubber composite under sinusoidal compressive loading. The performance under dynamic loading was evaluated through the dynamic stiffness and natural frequency of the system, using data obtained from the Harmonic Response module of Ansys Workbench. The 3-D model was composed of a square cross-section block representing the cork-rubber composite and a point mass added to the top surface of the material, equivalent to the static compressive stress imposed on the sample during dynamic loading. The load amplitude corresponded to 10% of the mean compressive load applied, similarly to the experimental test setup. On the opposite side, a fixed support condition was applied, representing the contact between the block and the rigid plate surfaces.
Fig. 4. The architecture of the artificial neural network.
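The network is small enough to write out explicitly. The sketch below reproduces its structure in NumPy (the paper used the MATLAB Deep Learning Toolbox; the weights here are random placeholders, not the trained values, and the Table 1 minima and maxima are used for the [−1, 1] scaling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from the text: 2 inputs (hardness Shore A, shape factor),
# a single hidden layer of 3 tanh neurons, and 1 linear output (Ec).
W1, b1 = rng.normal(size=(3, 2)), np.zeros((3, 1))
W2, b2 = rng.normal(size=(1, 3)), np.zeros((1, 1))

def minmax_scale(x, lo, hi):
    """Normalise a variable into the range [-1, 1], as done before training."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def predict(hardness, shape_factor):
    # Input ranges taken from Table 1.
    x = np.array([[minmax_scale(hardness, 33.0, 88.0)],
                  [minmax_scale(shape_factor, 0.26, 3.39)]])
    h = np.tanh(W1 @ x + b1)  # hidden layer: hyperbolic tangent sigmoid
    y = W2 @ h + b2           # output layer: linear
    return y.item()           # still in normalised units here

print(predict(60.0, 0.5))
```

Training used the Levenberg-Marquardt back-propagation algorithm with early stopping; reproducing that optimiser is outside the scope of this sketch.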
For this analysis, cork-rubber composites were considered as an isotropic material following Hooke's law, due to the linear behaviour observed up to 10% strain in quasi-static compression testing [29]. The material properties necessary for the analysis include density, equivalent Young's modulus, Poisson's ratio and damping ratio. The values of density, Poisson's ratio and damping ratio of the materials considered in the analysis are presented in Table 2.

Table 2. Properties of cork-rubber composite materials A and B.

Properties          Material A   Material B
Density (kg m⁻³)    1000         1000
Poisson's ratio     0.31         0.31
Damping ratio       0.09         0.11
At each stress level, an equivalent Young's modulus (E0eq) was introduced in the analysis, according to the methodology presented in Fig. 2. The approach to determine each equivalent Young's modulus value consisted of two consecutive stages. First, an iterative process is used to find the stiffness value (k) of a single degree of freedom (SDOF) model equivalent to the displacement amplitude experimentally obtained (or expected to be obtained) by the standard sample (da), considering similar testing conditions, by applying Eq. 3:

da = Xa = Fa / ( k · √[ (1 − 4π²·fe²·Fm / (k·g))² + tan²δ ] )    (3)

where Xa is the displacement amplitude (m) of the SDOF model, Fa is the load amplitude (N), tanδ is the loss factor, fe is the exciting frequency (Hz), Fm is the mean load (N) and g is the gravitational acceleration (9.81 m/s²).
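This iterative first stage can be implemented with a simple bisection on k. A sketch with illustrative numbers only (the loads and loss factor below are placeholders); the bracket is chosen above the resonance stiffness, where the amplitude of Eq. 3 decreases monotonically with k:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def displacement_amplitude(k, fa, fm, fe, tan_delta):
    """Eq. 3: displacement amplitude of the hysteretically damped SDOF
    model; the suspended mass is the mean load over gravity, Fm / g."""
    r2 = 4.0 * math.pi ** 2 * fe ** 2 * fm / (k * G)  # frequency ratio squared
    return fa / (k * math.sqrt((1.0 - r2) ** 2 + tan_delta ** 2))

def solve_stiffness(da_target, fa, fm, fe, tan_delta):
    """Bisection for the stiffness k whose SDOF amplitude matches da_target."""
    k_res = 4.0 * math.pi ** 2 * fe ** 2 * fm / G  # resonance stiffness
    lo, hi = 2.0 * k_res, 1.0e12                   # bracket above resonance
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if displacement_amplitude(mid, fa, fm, fe, tan_delta) > da_target:
            lo = mid  # amplitude too large -> spring must be stiffer
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values: Fm = 3600 N mean load, Fa = 360 N (10% of Fm),
# fe = 5 Hz and a loss factor of 0.18.
k = solve_stiffness(2.0e-4, 360.0, 3600.0, 5.0, 0.18)
```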
Based on the relations between Young's and apparent compression moduli for different shape factors, described in [29], the equivalent Young's modulus was determined through Eq. 4:

E0eq = rsf · Edeq = rsf · (k·T) / A    (4)

where A is the block cross-section area, T is the initial thickness and rsf is the shape-factor-dependent parameter (0.867 for a standard sample of 60 × 60 × 30 mm). The output given by the finite element analysis is the displacement amplitude (daeq) at the exciting frequency considered, in this case 5 Hz. Using this value, the dynamic stiffness of the cork-rubber composite block (kd) expected to be experimentally obtained for a sample at a certain compressive stress level is calculated according to Eq. 5; the natural frequency of the system is then determined from this dynamic stiffness value.

kd = (Td / T) · (Fa · cosδ) / daeq    (5)

where Td is the sample thickness when submitted to a certain compressive stress level.
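Equations 4 and 5 and the subsequent natural frequency calculation can be sketched as follows. The numbers are illustrative only, and the SDOF natural-frequency relation with suspended mass Fm/g is an assumption consistent with the model of Eq. 3, not a formula stated explicitly in the text:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def equivalent_young_modulus(r_sf, k, thickness, area):
    """Eq. 4: E0eq = r_sf * Edeq = r_sf * k * T / A."""
    return r_sf * k * thickness / area

def dynamic_stiffness(t_loaded, t_initial, fa, delta, da_eq):
    """Eq. 5: kd = (Td / T) * Fa * cos(delta) / daeq."""
    return (t_loaded / t_initial) * fa * math.cos(delta) / da_eq

def natural_frequency(kd, fm):
    """Assumed SDOF relation with suspended mass Fm / g:
    fn = sqrt(kd * g / Fm) / (2 * pi)."""
    return math.sqrt(kd * G / fm) / (2.0 * math.pi)

# Illustrative values for a standard 60 x 60 x 30 mm sample (r_sf = 0.867).
e0 = equivalent_young_modulus(0.867, 2.0e6, 0.030, 0.060 * 0.060)  # Pa
kd = dynamic_stiffness(0.0285, 0.030, 360.0, math.atan(0.18), 2.0e-4)
fn = natural_frequency(kd, 3600.0)
```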
3 Results and Discussion

3.1 Static Behaviour

An artificial neural network with one hidden layer composed of three neurons was developed for this study, presenting mean squared errors of 3.61 and 5.08 at the training and testing stages, respectively. A representation of the static apparent compression modulus as a function of hardness and shape factor given by the developed neural network is presented in Fig. 5, together with the data points used for its development. As can be observed, the values predicted by the ANN model generally follow the tendency of the experimental data: the apparent compression modulus increases with increasing shape factor and hardness. However, due to the lack of experimental data during the modelling stage, predictions from the ANN for samples with a shape factor above 2 and hardness values above 50 Shore A must be avoided. As a starting point for the dynamic performance analysis of two different cork-rubber composites, the developed ANN was used to determine each material's apparent compression modulus according to geometry and hardness range. Table 3 presents the inputs and outputs of the ANN model for cork-rubber composite materials with hardness values of 45, 60 and 80 Shore A and two different block configurations with shape factor values below 2. The apparent compression modulus results obtained with the ANN demonstrate the effect of increasing hardness and shape factor. As expected, and in accordance with what is described by several authors regarding other elastomers [20–26], higher values of these two variables result in increased stiffness of cork-rubber composites under static compression loading.
Fig. 5. Results of the developed ANN and correspondent experimental points used for its development.
Table 3. Inputs and outputs from the ANN model. The first two columns are inputs; the last column is the output.

Geometry             Shape factor   Hardness Shore A   Apparent compression modulus (MPa)
150 × 150 × 30 mm    1.25           45                 3.69
                                    60                 7.34
                                    80                 26.02
60 × 60 × 50 mm      0.30           45                 2.02
                                    60                 5.01
                                    80                 13.06
3.2 Input for Dynamic Performance Analysis

Using the apparent compression modulus results given by the ANN model, the dynamic performance can be predicted using finite element analysis. Two samples with 60 × 60 × 30 mm geometry from different cork-rubber materials were previously tested dynamically. The hardness range of Material A is 45 to 60 Shore A, while Material B ranges from 60 to 80 Shore A. The results of the dynamic stiffness coefficient at different compressive stress levels of these standard samples are presented in Fig. 6. In all dynamic experimental tests, the load amplitude corresponded to 10% of the mean compressive load applied, and an exciting frequency of 5 Hz was considered.
Fig. 6. Dynamic stiffness coefficient of standard samples (60 × 60 × 30 mm).
Based on the results obtained by the ANN model, standard samples with compression behaviour similar to the specimens with the characteristics presented in Table 3 were derived. The apparent compression modulus of the 60 × 60 × 30 mm standard samples was determined following [29], assuming equal Young's modulus between the standard and the other samples. Using the data presented in Fig. 6, the standard sample's dynamic compression modulus at each stress level was calculated. This value was then used as input for the mathematical procedure to find the equivalent Young's modulus to be entered in each FEA, according to the scheme presented in Fig. 2, considering an exciting frequency of 5 Hz. The results obtained for the equivalent Young's modulus at different stress levels are depicted in Fig. 7 for samples of the two cork-rubber materials with different geometries. Figure 7a presents the equivalent Young's modulus values entered in the FEA for Material A, with a hardness range between 45 and 60 Shore A, while Fig. 7b concerns Material B, with a range between 60 and 80 Shore A.
Fig. 7. Input values of Young’s modulus for the FEA of samples with different levels of hardness: a) material A; b) material B.
Samples of the same material and hardness are expected to have equal Young's moduli. Material A samples with different hardness present similar values of equivalent Young's modulus, with a maximum difference below 5 MPa between the two geometries analysed. The same holds for Material B with a hardness of 60 Shore A. However, for the sample of Material B with higher hardness, there is a significant difference between the equivalent Young's modulus values of the different geometries. This difference is related to the apparent compression modulus retrieved from the ANN model and used to determine the equivalent Young's modulus entered in the FEA. The data used to construct the ANN model present a wide spread of static apparent compression modulus values for hardness above 80 Shore A (Fig. 3), related to different compounding formulations of different materials. Also, as can be verified in the data from Fig. 3, the apparent compression modulus of samples with higher shape factors (between 1 and 3.5) for hardness equal to or above 80 Shore A is greater than that of samples with lower shape factors. Thus, the apparent compression modulus obtained for Material B with hardness 80 Shore A and a shape factor of 1.25 could be overestimated by the ANN model, increasing the equivalent Young's modulus value entered in the dynamic FEA.

3.3 Determination of Dynamic Properties with FEA Results

After conducting the FEA for each sample, considering an exciting frequency of 5 Hz, the dynamic properties of the cork-rubber composite materials under compression were determined at several stress levels. Two block samples with different geometries were simulated: 60 × 60 × 50 mm and 150 × 150 × 30 mm. The results obtained for Material A are presented in Fig. 8, and the results regarding Material B are depicted in Fig. 9.
The shaded areas in the graphs are bounded by the predicted dynamic behaviour of samples with the minimum and maximum hardness values, obtained using FEA. Previous experimental results for each material are also depicted for comparison. The results obtained for the two materials under study showed that material hardness and sample dimensions affect the performance under dynamic loading. As also seen for static compression behaviour, an increase in hardness increases the dynamic stiffness of a sample subjected to the same load. The same effect is observed when the shape factor increases: for the two materials studied, samples with 150 × 150 × 30 mm geometry presented higher values of dynamic compression modulus and natural frequency compared to samples with lower shape factors.
Fig. 8. Material A samples results: (a) dynamic compression modulus; (b) natural frequency.
Due to the standard sample data utilized, this analysis is limited to the prediction of sample behaviour within a specific range of imposed stress. Comparison between the simulation and previous experimental results shows that the dynamic behaviour of each sample is within the range expected for each type of material, taking into account the specified limits of hardness, except for the Material A 60 × 60 × 50 mm sample, whose experimental results lie at the upper limit predicted by the employed simulation methodology. Material B presents a wide range of dynamic performance parameters due to the introduction of a higher Young's modulus than expected for a sample with a shape factor greater than 1 and a hardness level of 80 Shore A, as presented in the previous section.
Fig. 9. Material B samples results: (a) dynamic compression modulus; (b) natural frequency.
4 Conclusions

As a tool to predict the static and dynamic performance of cork-rubber composites, a methodology was developed combining two modelling techniques for the prediction of the properties of block samples for vibration isolation: artificial neural networks and finite element analysis. Based on a large database of static compression behaviour data of different cork-rubber composite materials, an artificial neural network was developed to predict the apparent compression modulus, using shape factor and hardness Shore A as input data. Generally, the developed model showed good prediction capacity compared with the experimental data. The application of finite element analysis in conjunction with the results obtained from the ANN model can give an insight into the dynamic performance of cork-rubber composites. Based on the results of a single standard sample of the same material, such as the dynamic stiffness coefficient, the performance of other geometries can be assessed through a mathematical procedure in which an equivalent Young's modulus input for the FEA is deduced. The FEA results can then be retrieved and processed to determine properties such as the dynamic compression modulus or the natural frequency of the system composed of a cork-rubber composite block, according to the mean compression level applied.
In this case study, the combined application of ANN and FEA produced results in good agreement with previous results for two block samples with a hardness range between 45 and 80 Shore A. This study focused on square cross-section blocks, which can limit the application of the developed methodology. Other cross-section geometries, such as circular, rectangular or other polygonal shapes, should also be considered in the development of future models. Similarly, the prediction of additional mechanical properties, considering different parameters, should also be investigated. Applying modelling techniques such as those proposed in this article can provide a first estimate of the performance of a material with different dimensions or under different compression loads, while saving time, resources and energy and reducing the number of iterations during the development of new cork-rubber materials.

Acknowledgments. Helena Lopes was supported by scholarship SFRH/BD/136700/2018 financed by Fundação para a Ciência e Tecnologia (FCT-MCTES) and co-funded by the European Social Fund through Norte2020 (Programa Operacional Regional Norte). This work has been supported by the FCT within the R&D Units Project Scope: UIDP/04077/2020 and UIDB/04077/2020. The authors are also grateful to Amorim Cork Composites for providing all materials and resources for this study.
References

1. Trojanowska, J., Zywicki, K., Varela, M.L.R., Machado, J.M.: Shortening changeover time – an industrial study. In: 2015 10th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6 (2015)
2. Sousa, R.A., Varela, M.L.R., Alves, C., Machado, J.: Job shop schedules analysis in the context of industry 4.0. In: 2017 International Conference on Engineering, Technology and Innovation: Engineering, Technology and Innovation Management Beyond 2020: New Challenges, New Approaches, ICE/ITMC 2017 – Proceedings, pp. 711–717. IEEE (2017)
3. Arrais-Castro, A., Varela, M.L.R., Putnik, G.D., Ribeiro, R.A., Machado, J., Ferreira, L.: Collaborative framework for virtual organisation synthesis based on a dynamic multi-criteria decision model. Int. J. Comput. Integr. Manuf. 31, 857–868 (2018). https://doi.org/10.1080/0951192X.2018.1447146
4. Fernandes, F.A.O., Pascoal, R.J.S., Alves de Sousa, R.J.: Modelling impact response of agglomerated cork. Mater. Des. 58, 499–507 (2014). https://doi.org/10.1016/j.matdes.2014.02.011
5. Banić, M., Stamenković, D., Miltenović, A., Jovanović, D., Tica, M.: Procedure for the selection of rubber compound in rubber-metal springs for vibration isolation. Polymers 12, 1737 (2020). https://doi.org/10.3390/polym12081737
6. Delucia, M., Catapano, A., Montemurro, M., Pailhès, J.: Pre-stress state in cork agglomerates: simulation of the compression moulding process. Int. J. Mater. Form. 14(3), 485–498 (2021). https://doi.org/10.1007/s12289-021-01623-x
7. Bera, O., Pavličević, J., Ikonić, B., Lubura, J., Govedarica, D., Kojić, P.: A new approach for kinetic modeling and optimization of rubber molding. Polym. Eng. Sci. 61, 879–890 (2021). https://doi.org/10.1002/pen.25636
8. Gangala, C., Modi, M., Manupati, V.K., Varela, M.L.R., Machado, J., Trojanowska, J.: Cycle time reduction in deck roller assembly production unit with value stream mapping analysis. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Costanzo, S. (eds.) WorldCIST 2017. AISC, vol. 571, pp. 509–518. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56541-5_52
9. Campos, J.C., Machado, J.: Pattern-based analysis of automated production systems. In: IFAC Proceedings Volumes, pp. 972–977 (2009)
10. Kunz, G., Machado, J., Perondi, E., Vyatkin, V.: A formal methodology for accomplishing IEC 61850 real-time communication requirements. IEEE Trans. Ind. Electron. 64, 6582–6590 (2017). https://doi.org/10.1109/TIE.2017.2682042
11. Silva, S.P., Sabino, M.A., Fernandes, E.M., Correlo, V.M., Boesel, L.F., Reis, R.L.: Cork: properties, capabilities and applications. Int. Mater. Rev. 50, 345–365 (2005). https://doi.org/10.1179/174328005X41168
12. Knapic, S., Oliveira, V., Machado, J.S., Pereira, H.: Cork as a building material: a review. Eur. J. Wood Wood Prod. 74(6), 775–791 (2016). https://doi.org/10.1007/s00107-016-1076-4
13. Fernandes, E.M., Pires, R.A., Reis, R.L.: Cork biomass biocomposites. In: Jawaid, M., Tahir, P.M., Saba, N. (eds.) Lignocellulosic Fibre and Biomass-Based Composite Materials, pp. 365–385. Elsevier, Cambridge (2017)
14. Parra, C., Sánchez, E.M., Miñano, I., Benito, F., Hidalgo, P.: Recycled plastic and cork waste for structural lightweight concrete production. Sustainability 11, 1876 (2019). https://doi.org/10.3390/su11071876
15. Pereira, H.: Cork: Biology, Production and Uses (2007). http://search.ebscohost.com/login.aspx?direct=true&db=edsebk&AN=195056&site=eds-live
16. Gil, L.: Cork composites: a review. Materials 2, 776–789 (2009). https://doi.org/10.3390/ma2030776
17. Rivin, E.I.: Passive Vibration Isolation. ASME Press, New York (2003)
18. Jones, D.I.G.: Handbook of Viscoelastic Vibration Damping. Wiley, Chichester (2001)
19. Racca, R.H., Harris, C.M.: Shock and vibration isolators and isolation systems. In: Harris, C.M., Piersol, A.G. (eds.) Harris' Shock and Vibration Handbook. McGraw-Hill (2002)
20. Gent, A.N.: On the relation between indentation hardness and Young's modulus. Rubber Chem. Technol. 31, 896–906 (1958). https://doi.org/10.5254/1.3542351
21. Qi, H.J., Joyce, K., Boyce, M.C.: Durometer hardness and the stress-strain behavior of elastomeric materials. Rubber Chem. Technol. 76, 419–435 (2003). https://doi.org/10.5254/1.3547752
22. Kunz, J., Studer, M.: Determining the modulus of elasticity in compression via the Shore A hardness. Kunststoffe Int., pp. 92–94 (2006)
23. Gent, A.N., Lindley, P.B.: The compression of bonded rubber blocks. Proc. Inst. Mech. Eng. 173, 111–122 (1959). https://doi.org/10.1243/PIME_PROC_1959_173_022_02
24. Lindley, P.B.: Compression moduli for blocks of soft elastic material bonded to rigid end plates. J. Strain Anal. Eng. Des. 14, 11–16 (1979). https://doi.org/10.1243/03093247V141011
25. Horton, J.M., Tupholme, G.E., Gover, M.J.C.: Axial loading of bonded rubber blocks. J. Appl. Mech. 69, 836–843 (2002). https://doi.org/10.1115/1.1507769
26. Williams, J.G., Gamonpilas, C.: Using the simple compression test to determine Young's modulus, Poisson's ratio and the Coulomb friction coefficient. Int. J. Solids Struct. 45, 4448–4459 (2008). https://doi.org/10.1016/j.ijsolstr.2008.03.023
27. Gent, A.N.: Engineering with rubber. In: Gent, A.N. (ed.) Engineering with Rubber, pp. I–XVIII. Hanser (2012)
28. DIN 53513: Determination of the viscoelastic properties of elastomers on exposure to forced vibration at non-resonant frequencies (1990)
29. Lopes, H., Silva, S., Machado, J.: Analysis of the effect of shape factor on cork–rubber composites under small strain compression. Appl. Sci. 10 (2020). https://doi.org/10.3390/app10207177
30. Lopes, H., Silva, S.P., Machado, J.: A simulation strategy to determine the mechanical behaviour of cork-rubber composite pads for vibration isolation. Eksploat. i Niezawodn. – Maint. Reliab. 24, 80–88 (2022). https://doi.org/10.17531/ein.2022.1.10
31. Lopes, H., Silva, S.P., Machado, J.: Application of artificial neural networks to predict mechanical behaviour of cork-rubber composites. Neural Comput. Appl. 33(20), 14069–14078 (2021). https://doi.org/10.1007/s00521-021-06048-w
32. Beale, M., Hagan, M., Demuth, H.: MATLAB Deep Learning Toolbox™ User's Guide R2019b. The MathWorks, Inc. (2019)
Deep Neural Networks: A Hybrid Approach Using Box&Jenkins Methodology

Filipe R. Ramos¹(B), Didier R. Lopes², and Tiago E. Pratas³

¹ FCUL and CEAUL, University of Lisbon, 1749-016 Lisbon, Portugal
[email protected]
² OpenBB, Inc., Virginia, USA
[email protected]
³ Paratus Capital, Dubai, UAE
[email protected]
Abstract. The articulation of statistical, mathematical and computational techniques for the modelling and forecasting of time series can help in the decision-making process. When dealing with the intrinsic challenges of financial time series, Machine Learning methodologies, in particular Deep Learning, have been pointed out as a promising option. Previous works highlight the potential of Deep Neural Network architectures, but also their limitations with regard to computational complexity. Some of these limitations are analysed in this work, where a hybrid approach is proposed in order to benefit from the knowledge and solidity of Box&Jenkins methodologies and the viability of applying robust cross-validation to the neural network – Group k-Fold. Through the construction of complete and automated computational routines, the proposed model is tested on the modelling of two financial time series with disturbances in their historical data: the Portuguese Stock Index 20 (PSI 20) and the Standard & Poor's 500 Exchange-Traded Fund (SPY). The approach is compared to neural network models with Multilayer Perceptron and Long Short-Term Memory architectures. Besides reducing the computational time by 20%, the proposed model shows good forecasting quality in terms of Mean Absolute Percentage Error.

Keywords: Deep Neural Networks · Forecasting · Prediction error · Time series
1 Introduction

Due to the fast-paced globalisation of the economy, competition across organisations has grown, not only for the best talent but also for the best technology. In Hang [1], forecasting is acknowledged as a fundamental tool for creating competitive advantages in the market. Box and Jenkins [2] is a mandatory reference when it comes to time series modelling and forecasting. The mathematical approach of linear models proposed by the authors (Autoregressive Moving Average models – ARMA) is popular in both the academic and professional fields. However, the scientific literature has found many

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 51–62, 2022. https://doi.org/10.1007/978-3-031-09385-2_5
F. R. Ramos et al.
real (financial) time series that seem to follow non-linear behaviour, for which the Box&Jenkins approach is, by itself, insufficient to represent their dynamics [3]. The continuous instability of the economy is leading to strong speculation and uncertainty about the economic future, which dramatically increases the complexity of modelling and predicting a financial time series [4]. This can be reflected in non-stationarity, non-linearity, asymmetries, and particularly in the presence of structural breaks in trends [5] of the time series. These dynamics have an impact on the forecasting performance of time series models; for example, classical econometric forecasting models (such as ARMA) show poor performance around these types of structural breakpoints [6]. Due to the challenges that exist in this field, over the past decade different Artificial Intelligence techniques have been applied, such as Artificial Neural Networks (ANN)/Deep Neural Networks (DNN) [7–9]. Research on nonlinear methodologies based on Neural Networks (NN), discussed extensively in the nineties and abandoned due to computational limitations [10], has reappeared in recent works. Although a wide scope of areas benefits from ANN methodologies, the research highlights that success is usually centred around financial difficulties and bankruptcies in the decision-making process, and particularly stock price prediction [7]. In Zhang et al. [11], two points are identified as critical for the success of these methodologies: (i) ANN are capable of extracting hidden patterns from raw data, since they combine multiple sources of information and can capture complex dynamics, and (ii) for a large amount of data, it is possible to build more complex ANN architectures, capable of learning from higher-dimensional samples.
Therefore, scientific research, along with the computational progress seen in recent years – due to the use of graphics processing units (GPUs) – has assumed a fundamental role in bringing ANN to a larger audience. This is seen in both simpler and more complex DNN structures, e.g., Multilayer Perceptron (MLP) and Recurrent Neural Networks (RNN), respectively. The latter has been pointed out as a promising technique to model and forecast economic/financial time series (e.g. [12, 13]). In addition to the proposal of new ANN architectures, the combination of ANN with other methodologies (hybrid models) has also shown itself to be promising [8, 9]. Specifically, when involving time series, for example: [14] provided a hybrid ARMA–ANN method; [15] used a combination of a discrete wavelet transform and an ANN. Most recently, [16] investigated the impact of sentiment analysis on the prediction of the Tehran Stock Exchange. When it comes to ANNs, the work of [17] is critical, as it introduces the concept of LSTM models. It is important to note that most of these authors (experts) have gained a lot of experience developing and tuning these ANN; owing to that, they are able to develop new hybrid models given their understanding of the inner workings of an ANN. This does not apply to newcomers to the field, as there is a steep learning curve [18]. As noted in [19], some proposals can be successful because they work better in specific tasks. It is therefore important to explore modifications and under what circumstances they apply, namely to financial time series. In this way, starting from preliminary studies on forecasting stock prices (e.g. [13]), although DNN are pointed out as promising, limitations are mentioned, namely in terms of computational cost. This is where efforts will be focused, seeking models with good predictive quality and reduced computational execution time.
Deep Neural Networks: A Hybrid Approach
53
2 Neural Networks

2.1 Deep Neural Networks

The fundamental unit of the ANN is the artificial neuron; neurons are organised in layers. The MLP network, which can be trained using the Backpropagation algorithm [20], can be seen as one of the first steps towards DNN, since it presents several hidden layers of artificial neurons. In more complex architectures, such as RNN, apart from the learning that occurs in each training epoch, there is an additional learning input. This type of architecture relies on an enhanced Backpropagation algorithm, e.g. Backpropagation Through Time (BPTT) [21], notable not only for its applicability but also for its promise in learning long-term dependencies (e.g. Long Short-Term Memory – LSTM), as discussed in [22]. LSTMs [17], a subset of RNNs, can not only learn long-term dependencies but also select which information to preserve – whether more recent or older data – according to what minimises the cost function. The differences in the learning process among the MLP, RNN and LSTM architectures, all of which are Deep Neural Networks, occur essentially at the low level of the neuron, as shown in Fig. 1; a detailed explanation and the full mathematical background can be found in [23].
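The gate-level behaviour the figure refers to can be illustrated with a single LSTM cell step written directly in NumPy (a didactic sketch only, with illustrative sizes and random weights; it is not the implementation used in the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell.

    W: (4n, m) input weights, U: (4n, n) recurrent weights, b: (4n,) biases,
    stacked in the order [forget, input, candidate, output].
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0:n])        # forget gate: what to discard from the cell state
    i = sigmoid(z[n:2*n])      # input gate: what new information to store
    g = np.tanh(z[2*n:3*n])    # candidate values for the cell state
    o = sigmoid(z[3*n:4*n])    # output gate: what to expose as the hidden state
    c = f * c_prev + i * g     # new cell state (long-term memory)
    h = o * np.tanh(c)         # new hidden state (short-term output)
    return h, c

# Tiny example: input size 3, hidden size 2
rng = np.random.default_rng(0)
m, n = 3, 2
W = rng.normal(scale=0.1, size=(4 * n, m))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)
h, c = lstm_cell_step(rng.normal(size=m), np.zeros(n), np.zeros(n), W, U, b)
print(h.shape, c.shape)  # (2,) (2,)
```

The gating products `f * c_prev` and `i * g` are exactly the mechanism that lets the cell keep or discard information from different points in time.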
Fig. 1. Comparison of the hidden cells of MLP, RNN and LSTM
2.2 From Network Training to Model Assessment

Training an ANN works by using a set of training data to adjust the synaptic weights of the network. Afterwards, the model's performance and generalisation capability can be assessed on an independent subset of data via cross-validation (Cv) [24], in order to ensure that the estimation and the type of model chosen are suited for forecasting. Among the Cv methodologies cited in the literature, some stand out: Forward Chaining, k-Fold and Group k-Fold. To choose the best Cv methodology, one needs to consider not only the type of data, but also the pros and cons of each methodology, as explained in [23]. For time series, the Forward Chaining methodology seems to be the one that best fits our goal, as it respects the sequential order of the data, which is usually characterised by temporal correlation. Although iid data is assumed in Machine Learning, this rarely holds in time series, which is why the other methodologies can show limitations. However, if there were a way to minimise the autocorrelation of the data, Group k-Fold would be the better methodology.1

1 For more details and all mathematical background, see [23].
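As an illustration of why Forward Chaining suits temporally correlated data, a minimal pure-Python splitter can be sketched as follows (fold sizes and counts are arbitrary; libraries such as scikit-learn provide analogous `TimeSeriesSplit` and `GroupKFold` utilities):

```python
def forward_chaining_splits(n_samples, n_folds):
    """Yield (train_idx, val_idx) pairs where the validation data always
    comes strictly after the training data, preserving temporal order."""
    fold = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train = list(range(0, k * fold))
        val = list(range(k * fold, (k + 1) * fold))
        yield train, val

# Each validation fold starts after the last training index
for train, val in forward_chaining_splits(12, 3):
    print(len(train), len(val), max(train) < min(val))
```

With 12 observations and 3 folds, the training window grows (3, 6, 9 points) while each validation fold sits entirely in its future, so no future information leaks into training.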
54
F. R. Ramos et al.
3 Methodological Considerations

In terms of methodological procedures, three DNN architectures were considered (MLP, RNN and LSTM). In addition, the Box&Jenkins methodology was combined with the DNN, and a hybrid Box&Jenkins-DNN (BJ-DNN) model is proposed. Regarding pre-processing, the benefits of these transformations should be highlighted [23]; given the importance of stationarity of the time series in ARMA models, the benefit of feeding a stationary time series to the network was evaluated, eliminating the presence of autocorrelation and making the Group k-Fold methodology viable (see Sect. 2.2). Regarding DNN architectures, [25] mention the long-term memory capacity of LSTM networks but highlight their high computational cost compared to MLP; due to this, both networks were considered. In terms of computational implementation, all notebooks used in this study are available as open source in [26]. To organise the development process, the code was separated into three notebooks: (1) ExploratoryDataAnalysis.ipynb; (2) DeepNeuralNetwork.ipynb; and (3) DNN_OurApproach.ipynb. The last one, derived from DeepNeuralNetwork.ipynb, presents an original approach to building a more efficient DNN model. All notebooks were developed from scratch, based on the scientific literature (e.g. [27, 28]). Regarding DNN, the code allows implementing three architectures: MLP, RNN and LSTM. The approach taken to build the code was: (i) pre-process the data before feeding it to the neural network; (ii) define the cross-validation methodology; and (iii) define the set of neural network hyperparameters (e.g., number of layers, number of neurons per layer, number of training epochs, activation functions, optimisation algorithm, and others). A multi-grid was used to explore several possible combinations in order to define an accurate model.
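The multi-grid exploration of hyperparameter combinations mentioned above can be sketched with `itertools.product` (the grid values below are illustrative, not the ones used in the notebooks):

```python
from itertools import product

# Illustrative hyperparameter grid
grid = {
    "n_layers": [1, 2],
    "n_neurons": [16, 32],
    "epochs": [50, 100],
    "activation": ["relu", "tanh"],
}

# Expand the grid into one dict per candidate configuration
keys = list(grid)
combos = [dict(zip(keys, values)) for values in product(*grid.values())]
print(len(combos))  # 2 * 2 * 2 * 2 = 16 candidate configurations
```

Each combination would then be trained and scored with the chosen cross-validation scheme, keeping the configuration with the lowest validation error.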
In sequential terms, the following steps were taken: (1) importing the data; (2) pre-processing and/or transforming the data; (3) defining the ANN architecture and hyperparameters; (4) training and validating the model; (5) assessing the model (if the model performs poorly, we return to step (2) or (3)); (6) forecasting. To assess a model, forecast values are compared against real price data that the model has not seen (test set). This comparison yields the forecasting 'error'. The most common performance/error metrics are the Mean Absolute Error (MAE) and the Mean Absolute Percentage Error (MAPE) [29]. Considering the time series {yt}t∈T with past observations for periods 1, . . . , t, and letting yt+h be an unknown future value at t + h and ŷt+h its forecast, the prediction error corresponds to the difference between these two values, that is,
e_{t+h} = y_{t+h} - \hat{y}_{t+h}   (1)

where MAE and MAPE are defined, respectively, by

MAE = \frac{1}{s}\sum_{i=1}^{s}\left|e_{t+i}\right|, \quad MAPE = \frac{1}{s}\sum_{i=1}^{s}\left|\frac{y_{t+i}-\hat{y}_{t+i}}{y_{t+i}}\right| \times 100   (2)
where s corresponds to the number of observations in the forecasting samples (forecasting window).
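Equations (1) and (2) translate directly into code; a minimal sketch over a forecasting window of s observations:

```python
def mae(actual, forecast):
    """Mean Absolute Error over the forecasting window, Eq. (2)."""
    s = len(actual)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / s

def mape(actual, forecast):
    """Mean Absolute Percentage Error (in %); assumes no zero actuals."""
    s = len(actual)
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / s

y_true = [100.0, 102.0, 101.0]   # illustrative closing prices
y_hat = [101.0, 101.0, 100.0]    # illustrative forecasts
print(round(mae(y_true, y_hat), 4))   # 1.0
print(round(mape(y_true, y_hat), 4))  # 0.9902
```

MAPE is scale-free, which is why it is used below to compare errors across the PSI 20 and SPY series.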
4 Results: An Application to Financial Market Time Series

In this research, the daily closing prices of two financial time series were considered: PSI 20 (Portuguese Stock Index 20) and SPY (Standard & Poor's 500 ETF).2

4.1 Time Series Analysis

Regarding the PSI 20, the data considered in this study cover working days in the period between 02-01-2002 and 27-09-2019, as shown in Fig. 2.
Fig. 2. PSI 20 time series: Graphical representation.
From that chart it is possible to observe a non-linear pattern with several periods characterised by distinct behaviours: in 2008, the world financial crisis; in April 2011, the bailout requested by Portugal; in 2014, the bankruptcy of BES and its exclusion from the index. These phenomena can be better discerned in the annual boxplots (see Fig. 3), where the sample amplitude and the IQR of the boxplots corresponding to the years 2008/9, 2011 and 2014 are considerably high, highlighting the occurrence of (negative) structural breaks.
Fig. 3. PSI 20 time series: graphical representation of annual boxplots.
Regarding SPY, the period considered was between 29-01-1993 and 27-09-2019, as shown in Fig. 4.

2 For the PSI 20, the data are available at https://stooq.pl/. For the SPY, the data are available at https://finance.yahoo.com/ ('SPY'). In both cases, the choice of the referred time period is justified by the possibility of comparative analysis with the results presented in [13].
Fig. 4. SPY time series: Graphical representation.
In Fig. 4, it is possible to observe non-linear segments with distinct behaviours: the events of 2001 and the world financial crisis of 2008. These patterns can be better discerned in the annual boxplots (Fig. 5), where the sample amplitude and the IQR of the boxplot corresponding to 2008 are considerably high. This shows the occurrence of a (negative) structural break, in which the minimum value dipped below the minimum observed since the end of the nineties.
Fig. 5. SPY time series: graphical representation of annual boxplots.
To analyse some features of the time series, Table 1 reports the test statistic and p-value for the following hypothesis tests: normality (Jarque-Bera test and skewness and kurtosis tests), stationarity/existence of a unit root (ADF test and KPSS test) and independence (BDS test).3 As expected, at any significance level, normality, stationarity and independence are rejected for both the PSI 20 and SPY series. In fact, for both time series, there is statistical evidence to: (i) assume the non-normality of the data distribution (all normality tests performed reject the null hypothesis); (ii) assume the non-stationarity of the series (the null hypothesis of the ADF test is not rejected, and the KPSS statistic exceeds the critical reference values); (iii) infer that the data are not iid, since the null hypothesis of iid data is rejected by the BDS test.

3 For more details about the hypothesis tests, see [23].

Table 1. PSI 20 and SPY time series: normality, stationarity and independence tests

                    Normality tests                      Unit root/stationarity tests   Independence test
                    Kurtosis  Skewness  Jarque-Bera      ADF       KPSS                 BDS (Dim.2–Dim.6)
PSI 20  statistic   3.2793    17.8311   495.5003         -2.3407   2.7392               11.0889 – 20.6448
        p-value     0.0010*   0.0000*   0.0000*          0.1592    ----                 0.0000*
SPY     statistic   2.9758    23.9444   743.7061         0.5406    13.367               14.9813 – 30.9223
        p-value     0.0029*   0.0000*   0.0000*          0.9861    ----                 0.0000*
* The null hypothesis is rejected at the 1%, 5% and 10% significance levels

4.2 Modelling and Forecasting

From the preliminary studies ([13, 30, 31]), which are aligned with the scientific literature, both time series show statistical properties (non-stationarity, non-linearity,
asymmetries, structural breaks) that increase the complexity of the modelling and prediction tasks when classical methodologies (e.g. autoregressive models) are used. The results discussed in [13] show that, despite the instability present in financial time series, DNN models seem able to capture the pattern of the series. When comparing MLP with LSTM models, the latter show a smaller forecast error; however, LSTM models require considerable computational time.4 In order to broaden horizons based on reflections about the previous models and their performance, possible changes in the models can be considered. Besides what is described in Sect. 3 for the implementation of the proposed BJ-DNN model, further details are given below.

1. Regarding network inputs, our research showed that, due to the nature of the data (existence of structural breaks and/or regime shifts), there is no particular advantage in using a large data set. In addition, the necessary computational cost does not translate into any information gain: historical data with patterns different from the more recent data will hurt the training of the network and, therefore, its quality.

2. Regarding cross-validation, different procedures are proposed. Following the usual data splitting into a training sample (training and validation) and a test sample, the most recent data is only used for testing. As a result, the most recent dynamics are not known to the network and therefore do not contribute to hyperparameter adjustment and optimisation. In addition, when we examine the error distributions for validation and test, they are very similar, meaning the validation error is a replication of the test error (see Fig. 6).
Therefore, for the combination of the Box&Jenkins methodology with the MLP network, two options were considered: (1) reducing the historical data, keeping the more recent data, which is considered more relevant; (2) not holding out a test sample from the training data, so the latest data is used for training rather than forecasting. These two options lead to a substantial reduction in computational cost and an improvement in the prediction error. In terms of forecasting, as shown in Fig. 7 (whose out-of-sample forecasts are presented in more detail in Fig. 8), it is possible to verify that there is a good adjustment

4 Based on the conclusions of [25], RNN architectures do not produce substantially different results from MLP. Therefore, only the results from MLP and LSTM were considered.
Fig. 6. Comparison of training, cross-validation and test error distributions
Fig. 7. BJ-DNN model (fitting and forecasting) of the time series: (A) PSI 20; (B) SPY
between the predictions and the real data, with the forecast line following the monotonic behaviour of the real data line very closely.
Fig. 8. Forecasting out-of-sample of the time series: (A) PSI 20; (B) SPY
4.3 Comparing Results

For a more detailed analysis, the MAPE values for the out-of-sample forecast were calculated for three time horizons: 1 day, 1 week (5 business days) and 1 month (21 business days). Retrieving the results obtained in [22] for the DNN models using MLP and LSTM architectures, the ranges of MAPE values (lower and upper bounds, 5% trimmed)5 for the three DNN models (MLP, LSTM and BJ-DNN) are presented in Table 2. From the observed values, the reduction of the extremes of the MAPE ranges for the BJ-DNN model, compared with the MLP and LSTM models, is apparent. We can therefore conclude that the models from the chosen methodologies not only capture the fluctuation dynamics of the time series but also present good forecasting properties. One example is the weekly forecast, where in the best scenario the MAPE values sit around 0.50% and in the worst scenario they do not surpass 1%. Regarding monthly predictions, the average value of each range is under 0.80%. Furthermore, although the prediction error is expected to increase with the time horizon, the amplitude of the forecasting range decreases as the horizon increases – from 0.53% to 0.31% for the PSI 20 and from 0.54% to 0.26% for the SPY time series (from 1 day to 1 month) – which is a very positive indicator for the proposed model. This would not happen with the other models. To sum up, the quality of the models obtained with the proposed approach represents an advantage in both cases. It is also important to note that the PSI 20 is the time

5 The parameters of the neural network (weights and bias) benefited from a pseudo-random initialisation instead of a fixed seed [32]. In addition, to avoid outlier results, the forecasting was run in a loop (60 runs) and the 5% worst and best results were discarded.
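The 5%-trimmed ranges reported in Table 2 can be reproduced with a small helper over the per-run MAPE results (a sketch of the described procedure; the run values below are invented for illustration):

```python
def trimmed_range(values, trim=0.05):
    """Return (min, max) after discarding the trim fraction of the
    worst and best results, as done over the 60 forecasting runs."""
    ordered = sorted(values)
    k = int(len(ordered) * trim)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return kept[0], kept[-1]

# 60 hypothetical MAPE results (in %), one per run
runs = [0.3 + 0.01 * i for i in range(60)]
low, high = trimmed_range(runs)
print(round(low, 2), round(high, 2))  # 0.33 0.86
```

Trimming the 3 best and 3 worst of 60 runs removes outliers caused by the pseudo-random weight initialisation before reporting the range.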
Table 2. Prediction errors of the three DNN models (MAPE)*

                 PSI 20                                        SPY
Model    1 Day          1 Week         1 Month         1 Day          1 Week         1 Month
MLP      0.83% – 1.39%  1.31% – 1.84%  1.57% – 2.01%   0.55% – 0.94%  0.60% – 1.03%  1.02% – 1.82%
LSTM     0.87% – 1.41%  1.19% – 1.63%  0.97% – 1.52%   0.48% – 0.83%  0.50% – 0.96%  0.79% – 1.12%
BJ-DNN   0.39% – 0.92%  0.51% – 0.89%  0.71% – 1.02%   0.11% – 0.65%  0.48% – 0.93%  0.61% – 0.87%
* Minimum – maximum values (trimmed by 5%) obtained over a total of 60 runs
series that shows many more disturbances in its historical data, and it is for this series that the forecasting properties of the proposed model stand out.
5 Conclusions

With this work, it was possible to acknowledge the advantages of implementing Machine Learning models and their importance in new forecasting methodologies. By introducing 'theoretical' modifications to the methodological implementation, the goal was to obtain smaller forecasting errors while finding strategies to decrease the computational cost. To this end, we proposed a hybrid model that combines DNN architectures (specifically the MLP architecture) with the Box&Jenkins methodology – the BJ-DNN model. With this methodological mindset, we analysed the data window to be used as training input, giving the network more recent observations to the detriment of older ones whose dynamics differ from the current ones. This made it possible not only to increase the quality of the forecasts (compared to the DNN models analysed) but also to significantly reduce the training and validation time of the model. Given that the proposed BJ-DNN model allows a Group k-Fold methodology to be selected for Cv, this resulted in an approximately 20% reduction in the computational time spent in the modelling/forecasting process, compared with Forward Chaining Cv and an LSTM model. It is also important to highlight that the quality of the forecasts achieved provides a great tool to guide decision-making in the financial markets, as the proposed model has much stronger forecasting properties than the models analysed. This is particularly important not only for short-term volatility forecasting (such as 1-day periods) but also for forecasting expectations over longer horizons where the uncertainty is higher (such as 1-month periods). Although this work is promising, we recognise the limitations of our research.
Namely: (i) further delving into the analytical perspective of the BJ-DNN model – so that the results obtained have stronger theoretical backing; (ii) increasing the number and diversity of the time series used – so that the extrapolated conclusions are more robust and less biased towards the data used. Given these limitations, it is proposed as future work not only to delve further into the theory behind BJ-DNN, but also to test the results on time series with different characteristics (such as a clear trend and/or seasonality). In addition, other hybrid DNN models can be explored and compared against the one proposed in this work. Since the main advantage introduced by BJ-DNN is the reduction in computational time, measuring it objectively is a great topic for future research. In summary, despite the mentioned limitations, this work opens new perspectives for future research. Furthermore, the changes introduced in our approach have proven to be quite promising, combining artificial intelligence with a critical human look at the data and models.
References

1. Hang, N.T.: Research on a number of applicable forecasting techniques in economic analysis, supporting enterprises to decide management. World Sci. News 119, 52–67 (2019)
2. Box, G., Jenkins, G.: Time Series Analysis, Forecasting and Control. Holden-Day, San Francisco (1976)
3. Clements, M., Franses, P., Swanson, N.: Forecasting economic and financial time-series with non-linear models. Int. J. Forecast. 20(2), 169–183 (2004)
4. Chatfield, C.: The Analysis of Time Series: An Introduction, 6th edn. Chapman and Hall/CRC (2016)
5. Valentinyiendr, M.: Structural breaks and financial risk management. Magyar Nemzeti Bank Working Paper (2004)
6. Pesaran, M.H., Timmermann, A.: How costly is it to ignore breaks when forecasting the direction of a time series? Int. J. Forecast. 20(3), 411–425 (2004)
7. Tkáč, M., Verner, R.: Artificial neural networks in business: two decades of research. Appl. Soft Comput. 38, 788–804 (2016)
8. Tealab, A.: Time series forecasting using artificial neural networks methodologies: a systematic review. Future Comput. Inf. J. 3(2) (2020)
9. Sezer, O.B., Gudelek, M.U., Ozbayoglu, A.M.: Financial time series forecasting with deep learning: a systematic literature review: 2005–2019. Appl. Soft Comput. 90, 106181 (2020)
10. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Networks 5(2), 157–166 (1994)
11. Zhang, Y., Guo, Q., Wang, J.: Big data analysis using neural networks, vol. 49, pp. 9–18 (2017)
12. Nikou, M., Mansourfar, G., Bagherzadeh, J.: Stock price prediction using deep learning algorithm and its comparison with machine learning algorithms. Intell. Syst. Account. Finan. Manage. 26(4), 164–174 (2019)
13. Lopes, D.R., Ramos, F.R., Costa, A., Mendes, D.: Forecasting models for time-series: a comparative study between classical methodologies and deep learning. In: SPE 2021 – XXV Congresso da Sociedade Portuguesa de Estatística. Évora, Portugal (2021)
14. Babu, C.N., Reddy, B.E.: A moving-average filter based hybrid ARIMA-ANN model for forecasting time series data. Appl. Soft Comput. J. 23, 27–38 (2014)
15. Chandar, S.K., Sumathi, M., Sivanandam, S.N.: Prediction of stock market price using hybrid of wavelet transform and artificial neural network. Indian J. Sci. Technol. 9(8), 1–5 (2016)
16. Ghahfarrokhi, A.H., Shamsfard, M.: Tehran stock exchange prediction using sentiment analysis of online textual opinions. Intell. Syst. Account. Finan. Manage. 27(1), 22–37 (2020)
17. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
18. Greff, K., Srivastava, R.K., Koutník, J., Steunebrink, B.R., Schmidhuber, J.: LSTM: a search space odyssey. IEEE Trans. Neural Networks Learn. Syst. 28(10), 2222–2232 (2015)
19. Jozefowicz, R., Zaremba, W., Sutskever, I.: An empirical exploration of recurrent network architectures. In: ICML – International Conference on Machine Learning (2015)
20. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986)
21. Pineda, F.: Generalization of back propagation to recurrent and higher order neural networks (1987)
22. Koutník, J., Greff, K., Gomez, F., Schmidhuber, J.: A clockwork RNN. In: 31st International Conference on Machine Learning, ICML 2014, vol. 5, pp. 3881–3889 (2014)
23. Ramos, F.R.: Data Science na Modelação e Previsão de Séries Económico-financeiras: das Metodologias Clássicas ao Deep Learning. PhD Thesis, Instituto Universitário de Lisboa, ISCTE Business School, Lisboa, Portugal (2021)
24. Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Statist. Surv. 4, 40–79 (2010)
25. Ramos, F.R., Lopes, D.R., Costa, A., Mendes, D.: Explorando o poder da memória das redes neuronais LSTM na modelação e previsão do PSI 20. In: SPE 2021 – XXV Congresso da Sociedade Portuguesa de Estatística. Évora, Portugal (2021)
26. Lopes, D.R., Ramos, F.R.: Univariate Time Series Forecast (2020). https://github.com/DidierRLopes/UnivariateTimeSeriesForecast
27. Ravichandiran, S.: Hands-On Deep Learning Algorithms with Python: Master Deep Learning Algorithms with Extensive Math by Implementing Them Using TensorFlow. Packt Publishing Ltd (2019)
28. Chollet, F.: Deep Learning with Python, 2nd edn. Manning Publications (2021)
29. Willmott, C., Matsuura, K.: Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Res. 30(1), 79–82 (2005)
30. Ramos, F., Costa, A., Mendes, D., Mendes, V.: Forecasting financial time series: a comparative study. In: JOCLAD 2018, XXIV Jornadas de Classificação e Análise de Dados. Escola Naval, Alfeite (2018)
31. Costa, A., Ramos, F., Mendes, D., Mendes, V.: Forecasting financial time series using deep learning techniques. In: IO 2019 – XX Congresso da APDIO 2019. Instituto Politécnico de Tomar, Tomar (2019)
32. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pp. 249–256. JMLR Workshop and Conference Proceedings, Sardinia (2010)
Semi-adaptive Decentralized PI Control of TITO System with Parameters Estimates Quantization Karel Perutka(B) Tomas Bata University in Zlin, Nam. T. G. Masaryka 5555, 76001 Zlin, Czech Republic [email protected]
Abstract. The paper presents one method of decentralized control. It combines PI control, pre-identification and quantization of the identified parameters of the system model. The controller parameters were computed using the stability boundary locus. The parameters of the controlled system were pre-identified using the least squares method from the response for each combination of requested values. Changes of the parameter estimates of the system were realized from the previous estimates using three quantized values between them. The method was verified on a system with three inputs and three outputs, with interactions realized by the P-structure of the model.

Keywords: Adaptive control · Decentralized control · PI control · Three input three output systems
1 Introduction

Decentralized control is one of the approaches to the control of systems with multiple inputs and multiple outputs, with many implementations, as described by Bakule [1]. This approach makes it possible to simplify the controller [2], for example as a controller matrix with plants only in the main diagonal [3], or even as a set of single-input single-output controllers. In this case, traditional well-known controllers can be used, for example a PI controller based on Nyquist stability analysis [4], pole placement [5], or a linear quadratic controller [6]. A decentralized controller is often extended with another approach so that it can be used on more systems; traditionally, it is extended to act as an adaptive, switching, or robust controller. The decentralized adaptive controller is the most popular approach because it can be used on systems with slightly changing parameters, for example adaptive decentralized output feedback tracking control design for uncertain interconnected nonlinear systems [7] or decentralized simple adaptive control for large space structures [8]. In practice, decentralized adaptive control of tension-only elements can be mentioned [9]. A decentralized switching controller can be used on more time-varying systems or on systems with nonlinearities, such as systems with actuator dead zone [10], uncertain systems [11], hierarchical systems [12], large-scale systems [13], and nonlinear large-scale

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 63–71, 2022. https://doi.org/10.1007/978-3-031-09385-2_6
64
K. Perutka
systems [14]. Decentralized switching control is also combined with predictive control [15]. The decentralized robust controller is often used in practice, for example on a coupled tank system [16] or in air conditioning [17]. The paper is organized in the following way. Firstly, an introduction to the area of decentralized control is given. It is followed by the theoretical background, in which a brief description of the theoretical methods used in this paper is provided. After that, the method of decentralized control first described in this paper is presented. The next chapter presents the results of this method obtained by simulation in the MATLAB and Simulink software package on a selected model of a system with three inputs and three outputs. The paper closes with the conclusion and the references.
2 Theoretical Background

This chapter provides a brief and necessary overview of the known methods that were implemented in the approach to decentralized control presented in this paper.

2.1 PI Controller Parameters Computation

The parameters of the single-input single-output PI controllers were computed according to the method published by Tan et al. in 2006 [18] using the stability boundary locus. The method is based on plotting the stability boundary locus in the (kp, ki)-plane and then computing the stabilizing values of the parameters of a PI controller without linear programming.

2.2 Pre-identification Using the Least Squares Method

The so-called "pre-identification" method, described by Perutka in 2009 [19], is based on identifying the system from its step response. It identifies the parameters of a model consisting of a set of single-input single-output systems, as a simplification of the multivariable system, for each change of one of the setpoints of each subsystem. This data set is then used to compute the controller parameters for decentralized control before the control is performed. Identification using the least squares method is a well-known approach and its description can be found, for example, in Ding et al. [20].

2.3 System Model Using the P-Structure

The "P-structure" of a system with multiple inputs and multiple outputs is a model in which the interactions are added to the direct output of each subsystem. This approach was used for systems with two inputs and two outputs, for example, by Kubalcik et al. in 2003 [21]; the principle of this approach to modelling systems with interactions is shown in Fig. 1, where it is realized for the decentralized control of a system with three inputs and three outputs with interactions.
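The least squares pre-identification of Sect. 2.2 can be sketched for a first-order discrete model y(k) = a·y(k−1) + b·u(k−1) identified from a step response (an illustrative reconstruction in Python with made-up parameters, not the original implementation):

```python
import numpy as np

# Simulate the step response of a known first-order system
a_true, b_true = 0.9, 0.5
u = np.ones(50)                      # unit step input
y = np.zeros(50)
for k in range(1, 50):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Least squares estimate: stack regressors [y(k-1), u(k-1)]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(round(a_hat, 3), round(b_hat, 3))  # 0.9 0.5
```

In the pre-identification scheme, one such fit would be performed per subsystem and per combination of set-point changes, building the data set used later to compute the controller parameters.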
Fig. 1. Decentralized control of a three input three output system defined using the P-structure (block diagram: diagonal controllers GR11(s), GR22(s), GR33(s) and subsystem plants GSij(s) with cross-coupling interaction paths)
3 One Method of Decentralized Control

This chapter describes the method of decentralized control, which can be used on systems with an arbitrary number of inputs and outputs, provided both numbers are the same. In this paper, the method is verified on a system with three inputs and three outputs (TITO), and therefore the title of the paper includes "TITO Systems". The system with three inputs and three outputs GS(s) can be described by a matrix in the following form, where GSij(s) are plants with single input j and single output i:

GS(s) = \begin{pmatrix} GS11(s) & GS12(s) & GS13(s) \\ GS21(s) & GS22(s) & GS23(s) \\ GS31(s) & GS32(s) & GS33(s) \end{pmatrix}   (1)
The model of the system with three inputs and three outputs GSM(s) can then be described by a matrix in the following form, where GSMij(s) are plants with single input j and single output i:

GSM(s) = \begin{pmatrix} GSM11(s) & GSM12(s) & GSM13(s) \\ GSM21(s) & GSM22(s) & GSM23(s) \\ GSM31(s) & GSM32(s) & GSM33(s) \end{pmatrix}   (2)
And then the controller of the system of three inputs and three outputs GR (s) can be described by the matrix in the following form, where GRij (s) are controllers with single input j and single output i ⎛
⎞ 0 0 GR11 (s) GR (s) = ⎝ 0 0 ⎠ GR22 (s) 0 0 GR33 (s)
(3)
The model of the controlled system GSM(s) is unique for each step change of one of the requested values, so for k combinations of changes of the requested values there exist k matrices of the model of the controlled system GSM(s). The parameters of the plants of the model in each of the k matrices are obtained by pre-identification from the step response of the system, for example by least squares. In order to avoid an immediate change of the model when one of the requested values changes, a so-called parameter estimates quantization is performed, which distinguishes this method from gain scheduling. The parameter estimates quantization is realized according to the following approach:

1. From the response, for each output compute the time when it reaches 0.99 of its value in infinity.
2. Take the lowest of all the times obtained; it is denoted TL.
3. Decide how many quantizations to perform; in general, n quantizations are performed.
4. Divide TL into n linearly spaced parts TLn.
5. For each TLn, compute the model parameter estimates as n·GSMij.
6. Compute this for each parameter of the model of the controlled system GSM(s).
7. Repeat steps 1 to 6 for each change of the requested value.

Once the quantization is finished, the controllers' parameters are computed and then the immediate control is performed. Due to the fact that in some cases decentralized control can become unstable in the presence of strong cross-influence, the proposed method is suggested for systems without strong cross-influence.
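One possible reading of the quantization procedure above is a gradual, linearly spaced transition of the parameter estimates from the previous model to the newly identified one; the sketch below is our interpretation, with illustrative names and values (three quantized values in between, as in the abstract):

```python
def quantized_estimates(prev, new, n):
    """Return n intermediate parameter estimates moving gradually from
    the previous model parameters to the newly identified ones, so the
    model does not change abruptly when a set-point changes."""
    return [
        [p + (q - p) * step / (n + 1) for p, q in zip(prev, new)]
        for step in range(1, n + 1)
    ]

prev = [2.0, 3.0, 1.0]   # previous estimates of one subsystem model (invented)
new = [2.2, 3.1, 1.1]    # estimates identified after the set-point change (invented)
for est in quantized_estimates(prev, new, 3):
    print([round(v, 3) for v in est])  # gradually approaches the new estimates
```

Each intermediate estimate would be applied over one of the linearly spaced intervals TLn before the next one takes over.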
4 Results

This chapter presents the results of the method described above, demonstrated on the selected system with three inputs and three outputs. The system GS(s) is described by the equation

GS(s) = \begin{pmatrix} \frac{2}{s^2+3s+1} & \frac{0.1}{s+1.6} & \frac{0.2}{s+1.1} \\ \frac{0.3}{s+1.2} & \frac{1.5}{s^2+5s+6} & \frac{0.1}{s+1.3} \\ \frac{0.2}{s+1.5} & \frac{0.2}{s+1.8} & \frac{1.7}{s^2+4s+3} \end{pmatrix}   (4)
The initial values of the model of the system G_SM(s) are described as

G_SM(s) = \begin{pmatrix} \frac{2.2}{s^2+3.1s+1.1} & \frac{0.21}{s+1.4} & \frac{0.12}{s+1.2} \\ \frac{0.31}{s+1.1} & \frac{1.4}{s^2+5.2s+6.6} & \frac{0.12}{s+1.2} \\ \frac{0.22}{s+1.4} & \frac{0.21}{s+1.9} & \frac{1.6}{s^2+3.9s+2.7} \end{pmatrix}    (5)
There are three PI controllers, whose initial values of k_p and k_i are k_{p11} = 2.5, k_{i11} = 1.1
(6)
kp22 = 1, ki22 = 1
(7)
kp33 = 1.1, ki33 = 0.9
(8)
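The diagonal PI structure with gains of this kind can be exercised in a small forward-Euler sketch. The plant gains and poles below are placeholders (a 2×2 system of first-order elements), and the scheme replaces the paper's MATLAB/Simulink setup, so it mirrors only the structure of decentralized PI control, not the reported results.

```python
# Illustrative forward-Euler simulation of decentralized PI control on a
# 2x2 plant of first-order elements g_ij(s) = K_ij / (s + a_ij).
# Plant numbers are placeholders; kp/ki follow Eqs. (6)-(7).
dt, T = 0.01, 30.0
K = [[2.0, 0.2], [0.3, 1.5]]       # gains, diagonal loops dominant
a = [[1.0, 1.5], [1.2, 1.1]]       # pole locations
kp, ki = [2.5, 1.0], [1.1, 1.0]    # diagonal PI controllers
x = [[0.0, 0.0], [0.0, 0.0]]       # one state per plant element
integ = [0.0, 0.0]                 # PI integrator states
w = [1.0, 0.0]                     # set-point step on loop 1 only
for _ in range(int(T / dt)):
    y = [x[0][0] + x[0][1], x[1][0] + x[1][1]]  # outputs incl. coupling
    u = []
    for i in range(2):
        e = w[i] - y[i]
        integ[i] += e * dt
        u.append(kp[i] * e + ki[i] * integ[i])
    for i in range(2):
        for j in range(2):
            x[i][j] += dt * (-a[i][j] * x[i][j] + K[i][j] * u[j])
y_final = [x[0][0] + x[0][1], x[1][0] + x[1][1]]
```

With integral action in both loops, y_final approaches the set-points [1, 0] despite the cross-coupling — the qualitative behavior the figures below illustrate for the three-loop case.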
The next three figures, Figs. 2, 3 and 4, show the control results for each of the three subsystems, obtained in the MATLAB and Simulink software environment. The Simulink model used is shown in Fig. 5. The step change of the set-point occurs at time 20 s for the first subsystem, at 40 s for the second subsystem and at 60 s for the third subsystem. The control histories of all three subsystems show acceptable results, which demonstrates that the presented method can be applied. The simulation results on the selected three-input, three-output system confirm that the proposed method can be used for linear systems with multiple inputs and multiple outputs. However, there are some limitations. The order of the controller plant must be increased if the controlled system has four or more inputs and outputs. Stability must be ensured. And the method is not recommended for systems with strong cross-influence. Due to cross-influences, the parameters of the decentralized controllers and of the corresponding SISO system models change with each change of a set-point.
K. Perutka
Fig. 2. History of control of 1st subsystem
Fig. 3. History of control of 2nd subsystem
Fig. 4. History of control of 3rd subsystem
Fig. 5. Simulink model
5 Conclusion

The paper presented one approach to decentralized control of systems with three inputs and three outputs. The method combined PI control, decentralized control and pre-identification of the controlled system. First, the parameters of all three single-input single-output controllers were computed according to the results of pre-identification and two quantization steps; after that, the control was performed. The method was verified
on the chosen system with interactions realized by the so-called P-structure of the controlled system.
Simulation of Cyber-Physical Intelligent Mechatronic Component Behavior Using Timed Automata Approach

Adriano A. Santos1,2(B), António Ferreira da Silva1,2, and Filipe Pereira3

1 CIDEM, School of Engineering of Porto, Polytechnic of Porto, 4249-015 Porto, Portugal
[email protected]
2 INEGI - Institute of Science and Innovation in Mechanical and Industrial Engineering, Rua Dr. Roberto Frias, 400, 4200-465 Porto, Portugal
3 Algoritmi R&D Centre, University of Minho, Guimarães, Portugal

Abstract. This article presents a new approach for the simulation of Cyber-Physical Intelligent Mechatronic Components using process evolution based on the timed approach. The purpose of this article is to address the implementation of Intelligent Mechatronic Component control from the bench point of view and its simulation on a virtual system composed of a single programmable logic controller and a touchscreen. To this end, we develop a systematic approach for modeling cyber-physical systems based on timed automata. The proposed methodology makes it possible to define, in a systematic way, the formalisms and tools to model the controller and the respective plant. These global models can be used to simulate and validate systems with development tools such as the UPPAAL software, so the proposed approach intends to define their development systematically. To present and explain the proposed methodology, a Modular Production System for distributing objects was used as the physical element. A virtual platform based on the Simatic TIA Portal was developed to monitor and validate the methodology.

Keywords: Controller modelling · Cyber-physical system · Intelligent mechatronic component · Simulation · Timed automata
1 Introduction

The challenges of high demand for products, often customized, require quick responses capable of satisfying the most specific tastes. This continuous demand for products requires great flexibility of the productive structures, so modular systems must have a high capacity for interaction and adaptation to these new postures of the global markets. It is indeed in the field of modularity that Cyber-Physical Systems (CPS) present themselves as an excellent tool to implement modular structures as well as to emulate plant specifications [1]. On the other hand, considering that CPS are hybrid and distributed systems operating in Real Time (RT), the validation and verification of the control model require a simulation of the conceptual model based on the real model. So,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 72–85, 2022. https://doi.org/10.1007/978-3-031-09385-2_7
these models must, necessarily, be composed of several software models and computing platforms, in addition to the physical processes, where the feedback between processes and computational systems leads to the modeling of actuators, sensors and physical dynamics associated with the operation and communication times [2]. For this, it is necessary to model the behavior of the system considering the interaction between the controller and the plant. In the real world, systems behave differently at the controller and plant levels, since the evolution of the system in the real world has different time scales, that is, different time intervals needed to allow the evolution from one place to another [3]. Thus, due to these lags, it is necessary to create additional coordination variables, or to convert the current ones to equivalent roles, so that the evolution of the models can be guaranteed during the simulation process. Given these conditions, CPS, given the heterogeneity and diversity of the disciplines covered, present themselves as very important modeling tools, since they make it possible to integrate not only the physical process but all the computational, communication and control architectures. So, when developing a CPS, we must consider not only the complexity of the system and the scale of the problem, but also its particularities, requirements, and characteristics. Furthermore, these must be carefully modeled to ensure the simulation and testing of models (controller and plant) that allow the application of verification and validation techniques that improve system reliability [2] and increase quality-of-experience (QoE) and quality-of-service (QoS) levels [4]. On the other hand, with the increasing complexity of systems and, consequently, of the engineering problems associated with them, it becomes necessary to work with systems that can emulate their physical and computational principles. So, if we consider CPS as evolutionary and intelligent systems [5] as well as a critical and fundamental element of Industry 4.0 [6], taking into account not only the industrial benefits of their use but also the numerous challenges to be overcome [7], they will undoubtedly be an important tool for the simulation and verification of any plant. Based on these assumptions we developed the work (case) presented in this article. In this paper, we present an approach to the simulation of a cyber-physical system using a single programmable logic controller (PLC) to develop code for the controller and to validate the developed code. The paper is organized as follows. Section 2 presents an overview of the modeled system and the timed approach to the problem. Section 3 introduces the simulation process and presents the implementation of the supervisory control and the proposed approach for the simulation based on a single PLC. Discussion of the proposed methodology is also provided in Sect. 3, and the conclusions are presented in Sect. 4.
2 System Modeling

The study example is developed using the structure of a mechatronic device, the Modular Production System (MPS®) Distribution Station (DS) from FESTO [8]. As depicted in Fig. 1, the DS consists of two parts: the Stack Magazine (SM) module and the Changer Module (CM). The main function of the DS is to transfer the parts contained in the magazine cylinder, up to 8 workpieces, to the downstream station. Once in the home position, the
Fig. 1. The experimental production system, distribution station (Siemens KTP700 panel, Siemens S7-1200 PLC, personal computer, Distribution Station) [8].
plant starts its operation as soon as the start button is pressed. The operating sequence, prerequisites and starting position of the distribution station plant are described as follows:

Step 1 – The rotary drive always moves to the downstream station (right) when pressure is turned on;
Step 2 – The linear actuator (stack magazine) pushes a part out of the magazine if workpieces are detected in the stack magazine and the Start button is pressed;
Step 3 – The rotary drive rotates to the magazine position (pick-up position, to the left) once the workpiece has been pushed out of the magazine to the mechanical stop;
Step 4 – The vacuum is turned on as soon as the sensor (electrical limit switch) detects that the rotary drive is in the magazine position. When the part is captured, the part-detection sensor in the suction cup changes to the on state, and the linear cylinder (LC) returns;
Step 5 – The rotary drive rotates to the downstream station. When the electrical limit switch detects that the rotary drive is at the downstream station, the part-detection sensor changes to the off state;
Step 6 – When the part-detection sensor changes to the off state (part released), the rotary drive rotates back to the magazine position.

2.1 Problem Description

The modular production system (DS), shown in Fig. 1, can be controlled by a microcontroller or a PLC. The LC and the rotary drive are driven by the distribution control valve (CR) and the distribution control valves (RMR and RML), respectively; see Table 2. Several plastic pieces are placed in the magazine barrel, shown in Fig. 2, from which they can be ejected by the LC to the mechanical stop. An inductive sensor (scr, LC retracted) detects the position of the workpiece, and the solenoid (RML) automatically drives the rotary drive to the mechanical stop position. Workpieces are collected with a suction cup and transferred by the rotary unit to the downstream station. The vacuum switch (smp sensor) detects the partial vacuum at the vacuum suction cup when a workpiece is picked up, and an output signal is generated by the
Fig. 2. Stack magazine module (a) and changer module (b) [8].
vacuum switch. In this work, a Siemens S7-1200 (CPU 1214C) PLC is used to control this modular process; the TIA Portal software is used to program the PLC, and the "S7-PLCSIM" software is used for simulation. The distribution station modeling was based on the operating sequence and its temporal dynamics. The respective Sequential Function Chart (SFC) command of the system was elaborated considering the steps listed above, in addition to the functionalities described in Fig. 2. The codes and designations defined for the inputs and outputs are presented in Table 1 and Table 2.

Table 1. Labels, description, and bits of all digital inputs.

Inputs   Description                                                               PLC address
ON       Start system button                                                       %I1.1
OFF      Stop system button                                                        %I1.0
sca      Sensor that detects linear cylinder advanced, inductive sensor            %I0.1
scr      Sensor that detects linear cylinder retracted, inductive sensor           %I0.2
smp      Sensor that detects piece in rotary drive                                 %I0.3
srl      Sensor that detects rotary drive rotated left, electrical limit switch    %I0.4
srr      Sensor that detects rotary drive rotated right, electrical limit switch   %I0.5
spiece   Sensor that detects piece in magazine, through-beam sensor                %I0.6
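The six-step operating sequence above can be sketched as a simple event-driven state machine. The transition table below is an illustrative reading of the steps — state names are hypothetical, while the events reuse the sensor labels of Table 1 and the actions reuse the output labels of Table 2; it is not the real SFC.

```python
# Illustrative state machine for the distribution-station sequence
# (Steps 1-6). Events are sensor names from Table 1; actions are output
# names from Table 2. A reading of the text, not the actual control code.
TRANSITIONS = {
    ("wait_start", "ON"):    ("eject_part", "CR"),    # Step 2: push part out
    ("eject_part", "sca"):   ("rotate_left", "RML"),  # Step 3: go to pick-up
    ("rotate_left", "srl"):  ("pick_part", "VAC"),    # Step 4: vacuum on
    ("pick_part", "smp"):    ("rotate_right", "RMR"), # Step 5: to downstream
    ("rotate_right", "srr"): ("release", "RDP"),      # Steps 5/6: free piece
    ("release", "srl"):      ("wait_start", None),    # Step 6: back home
}

def run(events, state="wait_start"):
    """Feed a sensor-event trace through the sequence; return the actions."""
    actions = []
    for ev in events:
        nxt = TRANSITIONS.get((state, ev))
        if nxt:
            state, act = nxt
            if act:
                actions.append(act)
    return state, actions
```

Feeding one full sensor trace, run(["ON", "sca", "srl", "smp", "srr", "srl"]), walks a complete cycle and returns to the waiting state.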
2.2 Controller Modeling

The use of models of physical systems aims to simulate the physical behavior of those systems instead of carrying out experiments on the real plant. Models make it possible, safely and economically, to perform many tests considering, for example, a discrete-time (DT) or a discrete-event (DE) approach, or a combination of both.
Table 2. Labels, description, and bits of all digital outputs.

Output   Description                 PLC address
CR       Order for cylinder return   %Q0.0
VAC      Activation of the vacuum    %Q0.1
RDP      Rotary drive frees piece    %Q0.2
RML      Rotary drive moves left     %Q0.3
RMR      Rotary drive moves right    %Q0.4
Then, the modular approach will be used to "divide and conquer", i.e., to divide the different components of the physical system, making the modeling less complex, more efficient, and closer to the correct model. The physical characteristics of the MPS – an Intelligent Mechatronic Component (IMC) as introduced by [9], defined as a product composed of a mechatronic component (physical part) and an embedded control device (computing device, software components, dataset, and control logic) – and the plant components identified during the problem analysis were divided and modeled in three modules. The modular approach to system command realizes the system control program in interactive blocks according to the composite SFC model presented in Fig. 3, which shows the SFC controller model for the DS used in the modular production system. This approach is one of many possible solutions to control the DS; different Grafcets (SFCs) can be used with similar results and correct functioning of the MPS. According to the controller specifications, it is possible to deduce the system behavior: the "ON-OFF" grafcet translates system activation to standby (activation/deactivation); the system is on hold until parts are placed in the magazine. The "Feed Magazine" grafcet translates the actuation of the double-acting cylinder when parts are detected in the magazine by the spiece sensor. The cylinder moves, positioning the part at the pick-up location, the mechanical stop. The times associated with this movement depend on the compressed-air pressure and, consequently, on the stroke; in this case they were not considered. The third grafcet (Rotary Drive) controls the movement of the rotary cylinder, from the mechanical stop to the delivery position, and the vacuum system. It has a temporal transition associated with the deactivation of the vacuum signal, which results in a delay of the activation of step 20.
The specification model is elaborated in two steps: first, the SFC corresponding to the intended behavior is elaborated; then the specification, previously developed in SFC, is converted to timed finite automata. In this paper, the Timed Finite Automata (TFA) formalism is used [10].

2.3 Command Specification Template

The SFC specification model is converted to TFA based on the original formalism proposed by Alur and Dill [10] and the input formalism of the UPPAAL software [11], for two main reasons [3]: "it supports a timed input language (timed finite automata); and achieves, with a single model, formal simulation and verification in a unique environment, without the need for translation between formalism from the simulation to the
Fig. 3. Implementation of the DS control model SFC code (grafcets: ON-OFF, Feed Magazine, Rotary drive).
formal verification environments". As this article presents the modeling of the physical part of a mechatronic system of a FESTO MPS, the need for time modeling is justified by the need to use a non-deterministic formalism – because the behavior of the physical part of a mechatronic system is non-deterministic. For the rotary drive, the model in timed finite automata is presented in Fig. 4. The model in the figure (the rotary drive moves the part from the pick-up place to its delivery platform) starts its evolution when it receives the message "START_PE?" and ends its evolution by sending the message "END_PE". Sequentially, what happens in this model is the following: while the "START_PE" message is received, the variables corresponding to the rising and falling edges are updated; then, in the transition between C7 and C8, the SFC transition conditions are calculated; then, the internal variables (corresponding to the steps of the SFC) are calculated in the evolution between C8 and C9; then, the timings and counts are updated between C9 and C10, sending the message "T_C" to the respective timing models; and, finally, the outputs are calculated in the evolution between C10 and C11.

2.4 Timing Modeling

Basically, timings in the SFC command specification are modeled with delay and advance, a temporal-logic evolution [12]. As with the reasoning followed for modeling edges and counts in timed finite automata, the function blocks of the IEC 61131-3 standard are adopted as the basis for creating the timed finite automata models of these timings. Figure 5 shows the delay (a) and advance (b) timings, respectively. The
Fig. 4. TFA model corresponding to the algebraic translation of the SFCs presented in Fig. 3.
evolution of these models is carried out when they receive the message "T_C", sent by the model of the controller's command specification. Thus, in the case of the modeling presented in Fig. 5a), the temporal evolution of the model is performed with a TON function block [2], on-delay activation, which, as the name suggests, delays the evolution of the grafcet from state 24 to state 20 (see Fig. 3). This is the approach used to simulate and validate the system's controller model in the case under study (IMC). TON goes to STANDBY as soon as IN equals 1 and returns to OFF when IN goes to zero, if Q remains at 0. Otherwise, if Q is 1, the model goes to ON, allowing the passage from state 24 to 20. With IN = 0 and Q = 0 the model goes back to OFF. Figure 5b) presents a temporal evolution of the model realized with a TOF [13], an off-delay timer function block (advance activation). This approach anticipates the passage of states, that is, the immediate evolution of states. TOF immediately switches to STANDBY as soon as IN equals 1, and Q takes the value 1 (see graph). Switching to the ON state can only be achieved when IN returns to 0; the time delay then delays the deactivation. The use of this block would lead to an immediate, cyclic evolution of the states, in which time would not be perceived, since the deactivation of TOF would only occur after the state transition. This is the main reason for not adopting this approach in the developed simulation model (see Fig. 8).
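The TON behavior just described can be sketched as follows; this is an illustrative software model of the IEC 61131-3 on-delay timer, not the vendor implementation.

```python
class TON:
    """Illustrative IEC 61131-3 on-delay timer: Q becomes true only after
    IN has been continuously true for PT seconds; dropping IN resets ET."""
    def __init__(self, pt):
        self.pt = pt      # preset time
        self.et = 0.0     # elapsed time
        self.q = False    # status bit
    def update(self, in_, dt):
        if in_:
            self.et = min(self.et + dt, self.pt)
            self.q = self.et >= self.pt
        else:
            self.et = 0.0
            self.q = False
        return self.q

# Delaying the grafcet evolution from state 24 to state 20: the transition
# only fires once the timer output Q is true.
timer = TON(pt=0.5)
outputs = [timer.update(True, 0.1) for _ in range(5)]
```

After five 0.1 s scans with IN held true, Q goes true on the last scan; a single false scan resets the elapsed time, matching the pulse behavior described for Fig. 6a).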
Fig. 5. a) Delayed timing, b) advance timing [10].
3 Simulation Process

The control of the example presented in this work was developed based on discrete-event systems (DES) automation. Its development is supported by an RT implementation running on a PLC programmed in the Ladder Logic Diagram (LD) programming language [14]. On the other hand, as current production systems fit a DES classification – a finite-state system with time requirements resulting from the intrinsic evolution of the process, expressed as formulae in temporal logic [12] – it was decided to implement a timed automata (TA) framework that captures the formalisms and behavior of the controller (task simulation and interaction with the plant). Thus, a modular approach was assumed that can portray and validate the behavior of the CPS, which translates, for example, into the division of the plant into three modules that represent the functioning of the system under analysis; see Fig. 3.

3.1 Simulator Implementation

To implement the behavioral simulation of the system, it was considered that there would be a need to simulate not only the physical behavior of the components but also the direct orders of the controller. In this phase, the control focuses on the activation of the solenoid valves controlling the double-acting cylinder (Magazine, mono-stable valve) and the Rotary Drive (bi-stable valve) associated with the vacuum injector. Figure 5 illustrates the temporal behavior of the rotary drive, showing the relationship between the controller output signals according to their evolution in the model. However, the models developed are specific to a given problem and therefore limited to the application. In this situation, understanding the expected behavior of the system and the trajectory of the parts is fundamental for constructing the model that will simulate it. In this example, the model in Fig. 4 was expressly built to translate
the movement of the part as it "moves" from the collection point (mechanical stop) to the evacuation site (sending the part to the next station).

Simulation Supervisory Control

The supervisory control of the simulation is developed to ensure the occurrence of all events and guarantee their execution. Thus, to portray what has been mentioned, consider an automaton, designated G, with all its sets of states, finite events, partial transition functions and states representing the completion of a given task or operation, as defined in [13]. Knowing that q represents a feasible event for each state of G, we must consider q0 as the initial state of many other states q that represent the completion of a given task. Consequently, a simple finite automaton is constituted by an initial state labeled q0 and a subsequent state labeled q1. The arrows interconnecting the two states represent the automaton transition functions. These are labeled e1, the transition from 0 to 1 [q0, e1, q1], and e2, the transition from 1 to 0 [q1, e2, q0], corresponding to events. This representation means [13] "if state q0 is active and the event e1 occurs then set state q1 and reset state q0". Based on the above, and since in a simulation environment the real events will not occur according to the evolution of the process, it is necessary to associate with each transition the time needed for it; the system will wait, ensuring it never reaches specific undesirable states [12]. This concept is defined by the following expression: q_{n+1} = q_n • (e_n + t), where q_n is the initially active state, e_n the transition event, t the deterministic value of the delay time, "•" the join operator applied to the state evolution and "+" the delay operator applied to the event on the time axis; in this case the transition function from 0 to 1 is defined as [q0, e1 + t, q1].
So, the implementation of the delayed event in a PLC can be carried out using the standard timer functions, in this case the On-Delay Timer (TON) defined by the IEC 61131-3 standard. The TON timer delays setting the output to true (Q = 1) for a predefined (deterministic) period after a true physical input signal (IN = 1), i.e., the output is kept off until the elapsed time exceeds the preset value (PT), as shown in Fig. 6a). Thus, while the input signal IN remains true (ON), the Q output (status bit) remains false (OFF) while the elapsed time ET increases until it reaches the preset value at the PT input. At this moment, the Q output (status bit) is set to true (ON) as a function of the elapsed time. If the input signal IN becomes false (OFF), the Q output returns to false (OFF) and ET is reset to zero. However, if the input IN becomes true (ON) only for a period shorter than the time defined at the PT input – just a pulse – the Q output remains false (OFF). The LD implementation of the delay event (TON), associated with any transition, is shown in Fig. 6b).

Proposed Approach for the Simulation

The discrete-event signals coming from the plant are, as mentioned before, monitored and controlled according to the SFC defined in Fig. 3. Sensor readings, the controller inputs, stimulate the evolution of the control system, giving rise to control actions that are manifested in the outputs to the plant. So, when an event occurs in the plant, the supervisor control (PLC) changes its state synchronously, according to the active state and the received event. The supervisor works as state feedback, whose
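The delayed transition [q0, e1 + t, q1] can be sketched like this — an illustrative interpretation in which the event must persist for t seconds before the transition fires; the function name and the event trace are hypothetical.

```python
# Illustrative delayed transition [q0, e1 + t, q1]: the event e1 only
# moves the automaton from q0 to q1 after it has persisted for t_delay,
# mimicking the TON-based implementation described in the text.
def delayed_transition(event_trace, t_delay, dt):
    state, held = "q0", 0.0
    for e1 in event_trace:
        if state == "q0":
            held = held + dt if e1 else 0.0   # TON-like elapsed time
            if held >= t_delay:
                state = "q1"                  # delayed event fires
    return state
```

A 0.3 s pulse with t = 0.5 s leaves the automaton in q0 (the pulse is "just a pulse", as for the TON), while holding the event for 0.5 s moves it to q1.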
Fig. 6. Evolution of the proposed approach and timing diagram of on-delay timer (TON) [15].
control actions are output signals from the controller and, consequently, inputs to the plant; see Fig. 7a). The controlled component, integrated in a closed feedback loop, performs its action, informing the system of its new state.
Fig. 7. a) Supervisory control of DES, b) simulation control system.
The proposed approach for CPS simulation is based on a single PLC. So, for the implementation of the simulation it is also necessary to define the events and actions that allow the evolution of its SFC. In this new module, events are assigned to states (steps) and define when the actuators are activated. These are the physical representation of the activation of each of the actuators, i.e., the validation of the virtual movement of the actuators. On the other hand, actions, normally assigned to outputs, are transformed into events assigned to transitions to define the conditions of evolution from one state to another (Fig. 7b)). In this approach, the simulation and validation process converts events into actions and outputs into events, forcing the evolution of the "virtual" SFC. Figure 8a) represents the CPS control LD code conversion algorithm. In the simulation, events take a complementary designation "S", for example "S_srr", and become actions that interact directly with the Feed Magazine and Rotary Drive modules. These actions virtually force, based on a memory address, the evolution of the respective control modules, generating the signals needed to turn on the actuators. Then, the actions performed by the controller are converted into events with an associated delay time to guarantee the evolution of the physical system. Events (step i action) are assigned to transitions to define the evolution conditions that guarantee the change from one simulated state to another. Fig. 8b) shows a single Grafcet, run in the same PLC controller, that translates the code developed for the physical system simulation, as well as a part of the Rotary Drive control code. The arrows show where the control's actions and events are inserted and what roles they now assume. Table 3 shows the conversion
Fig. 8. a) Simulation development algorithm, b) LD code for the simulation.
of the sensor event signals of the control system (Feed Magazine and Rotary Drive) into virtual outputs (memory) essential for the evolution of the simulation module.

Table 3. Simulation signals, internal memory.

Inputs   Description                                    PLC address
S_sca    Simulation of the sensor cylinder advanced     %M180.1
S_scr    Simulation of the sensor cylinder retracted    %M180.2
S_smp    Simulation of the detects-piece sensor         %M180.3
S_srl    Simulation of the sensor rotary rotate left    %M180.4
S_srr    Simulation of the sensor rotary rotate right   %M180.5
S_vac    Simulation of the vacuum activation            %M181.0
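The inversion just described — controller actions becoming delayed events that set the simulated sensor memories — can be sketched as follows. The function name, the delay dictionary, and the S_ prefix mapping are illustrative assumptions; the real conversion is done in LD code on the PLC.

```python
# Illustrative sketch of the simulation-side inversion: when the controller
# turns an actuator output on, the matching simulated sensor memory (S_*)
# is set only after a delay, mimicking the physical movement time.
def update_virtual_sensors(outputs, delays, t_now, started, sensors):
    for name, on in outputs.items():
        if on:
            started.setdefault(name, t_now)            # action began
            if t_now - started[name] >= delays[name]:
                sensors["S_" + name] = True            # delayed event fires
        else:
            started.pop(name, None)
            sensors["S_" + name] = False
    return sensors

started, sensors = {}, {}
update_virtual_sensors({"sca": True}, {"sca": 0.5}, 0.0, started, sensors)
update_virtual_sensors({"sca": True}, {"sca": 0.5}, 0.6, started, sensors)
```

After 0.6 s with the output held on, S_sca is set; dropping the output clears it again, so the virtual SFC evolves at a pace comparable to the physical plant.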
Figure 9 illustrates one of the "frames" of movement that users can observe on the touch console or on the monitor of the personal computer or laptop used to monitor the evolution of the system. The simulation is performed on the virtual platform running the S7-PLCSIM software (MiL, simulation environment task) and on the physical workbench, the IMC – Festo MPS (HiL, simulation environment task), resulting from the combination of the SIMATIC TIA Portal with SIEMENS WinCC software. Note that in this case we are simulating only a single CPS and therefore with centralized control. Decentralizing the system, distributing control over several PLCs, would transport our simulation into an environment with special requirements, probably in terms of real time and communications between the various nodes of the system. That approach would lead us to adopt, for example, simulation methodologies based on communication requirements, as recommended by [3].
Fig. 9. HMI – Process simulation on Siemens KTP700 Basic PN or on a PC.
4 Conclusions

In this paper, a systematic approach to CPS modeling and simulation using timed automata was presented. The proposed approach systematically defines the creation of a simulation and validation model considering both the plant and the controller. Based on MiL and HiL, on the formalism proposed by Alur and Dill, and on the corresponding simulation and validation tools, such as the UPPAAL software, a virtual platform capable of testing the simulation code was developed. A case study was also presented, discussed, and used to develop the proposed methodology, from which the main characteristics of this approach to modeling CPS and MIC can be drawn. The proposed approach can be used to simulate previously developed models (system control) and to formally verify the same models in the same development environment. Compared with the one described in [2], the proposed approach is an easy-to-use and practical technique to implement on a single PLC using delay functions. It is also shown that the use of a single PLC can be an efficient solution when applied to low-level real-time control systems, namely regarding the interaction of the control with the actuators. The use of a monolithic Grafcet provided a
correct perception of the evolution of the simulator regardless of the number of states to be simulated. The number of memory bits used in the PLC is not an impediment, since current PLCs offer an almost unlimited number of these elements. The use of modular simulation models, as opposed to monolithic ones, will be a very valuable alternative for approaching highly complex systems and for industrial application. This approach allows systematizing the creation of global models considering both the plant and the controller. From this point of view, it could thus work as a platform that facilitates learning, in addition to the simulation and validation of the virtual control, as well as a workbench that can be accessed via the Internet, integrated into a virtual and physical remote laboratory.

Acknowledgements. We acknowledge the financial support of FCT – Portuguese Foundation for the Development of Science and Technology, through CIDEM, under the Project UID/EMS/0615/2019, and of INEGI and LAETA, under project UIDB/50022/2020.
References
1. Santos, A.A., da Silva, A.F.: Simulation and control of a cyber-physical system under IEC 61499 standard. Procedia Manuf. 55, 72–79 (2021). https://doi.org/10.1016/j.promfg.2021.10.011
2. Canadas, N., Machado, J., Soares, F., Barros, C., Varela, L.: Simulation of cyber physical systems behaviour using timed plant models. Mechatronics 54, 175–185 (2018). https://doi.org/10.1016/j.mechatronics.2017.10.009
3. Kunz, G., Machado, J., Perondi, E., Vyatkin, V.: A formal methodology for accomplishing IEC 61850 real-time communication requirements. IEEE Trans. Ind. Electr. 64(8), 6582–6590 (2017). https://doi.org/10.1109/TIE.2017.2682042
4. Lampropoulos, G., Siakas, K., Anastasiadis, T.: Internet of things in the context of industry 4.0: an overview. Int. J. Entrepr. Knowl. 7(1), 4–19 (2019). https://doi.org/10.37335/ijek.v7i1.84
5. Putnik, G.D., Ferreira, L., Lopes, N., Putnik, Z.: What is a cyber-physical system: definitions and models spectrum. FME Trans. 47(4), 663–674 (2019). https://doi.org/10.5937/fmet1904663P
6. Castro, H., et al.: Cyber-physical systems using open design: an approach towards an open science lab for manufacturing. Procedia Comput. Sci. 196, 381–388 (2022). https://doi.org/10.1016/j.procs.2021.12.027
7. Samala, T., Manupati, V.K., Machado, J., Khandelwal, S., Antosz, K.: A systematic simulation-based multi-criteria decision-making approach for the evaluation of semi-fully flexible machine system process parameters. Electronics 11(2), 233 (2022). https://doi.org/10.3390/electronics11020233
8. Ebel, F., Pany, M.: FESTO Distributing Station Manual. Denkendorf, April 2006
9. Vyatkin, V.: Intelligent mechatronic components: control system engineering using an open distributed architecture. In: Proceedings of EFTA 2003 – IEEE Conference on Emerging Technologies and Factory Automation (Cat. No. 03TH8696), vol. 2, pp. 277–284 (2003). https://doi.org/10.1109/ETFA.2003.1248711
10. Alur, R., Dill, D.L.: A theory of timed automata. Theoret. Comput. Sci. 126(2), 183–235 (1994). https://doi.org/10.1016/0304-3975(94)90010-8
11. Behrmann, G., David, A., Larsen, K.G.: A tutorial on Uppaal 4.0. https://www.it.uu.se/research/group/darts/papers/texts/new-tutorial.pdf. Accessed 2 Jan 2022
12. Campos, J.C., Machado, J.: Pattern-based analysis of automated production systems. IFAC Proc. Vol. (IFAC-PapersOnline) 42(4), 972–977 (2009). https://doi.org/10.3182/20090603-3-RU-2001.0425
13. Uzam, M.: A general technique for the PLC-based implementation of RW supervisors with time delay functions. Int. J. Adv. Manuf. Technol. 62, 687–704 (2012). https://doi.org/10.1007/s00170-011-3817-1
14. International Standard IEC 61131-3, Programmable Controllers – Part 3: Programming languages, IEC (2013)
15. SIMATIC, S7-1200 Programmable Controller – System Manual, V4.4 11/2019, A5E02486680-AM (2019). https://support.industry.siemens.com/cs/attachments/109772940/s71200_system_manual_en-US_en-US.pdf?download=true. Accessed 19 Jan 2022
Classification of Process from the Simulation Modeling Aspect – System Dynamics and Discrete Event Simulation

Jacek Jan Krzywy1(B)
and Marko Hell2
1 Poznan University of Technology, Plac Marii Skłodowskiej-Curie 5, 60-965 Poznań, Poland
[email protected]
2 Faculty of Economics, Business and Tourism, University of Split, Cvite Fiskovića 5,
21000 Split, Croatia [email protected]
Abstract. The problem of process simulation modeling is a significant issue for business analysts. Properly developed process models can be used not only to understand a process at both the operational and the management level but also to identify bottlenecks and support decision making. The aim of this article is to classify processes from the point of view of simulation modeling using Discrete Event Simulation (DES) and System Dynamics (SD). This classification of processes is based on the Process Classification Framework (PCF) methodology developed by the American Productivity & Quality Center (APQC). Literature studies were carried out on articles in which processes were modeled using the methods above. For the DES method, 129 articles were analyzed and the processes assigned to the appropriate category; in the case of SD, 138 articles were analyzed and likewise assigned to the proper process category.

Keywords: Discrete event simulation · System dynamics · APQC · PCF
1 Introduction

The problem of modeling and simulating processes is a topic that numerous scientists and practitioners have taken up many times. They have repeatedly indicated the need for appropriate methods and tools to facilitate efficient and effective process simulation modeling. They have also emphasized that an important element in the modeling and simulation process is selecting a proper simulation modeling technique. Choosing the right technique is not an easy task: on the one hand, a very large number of techniques is available; on the other hand, there are often no clear guidelines on which method should be used for simulation modeling of a selected process. The choice of technique should therefore depend on the complexity of the project being implemented, the purpose of the model to be constructed, and, of course, the characteristics of the analyzed process.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 86–97, 2022. https://doi.org/10.1007/978-3-031-09385-2_8
Classification of Process from the Simulation Modeling Aspect
87
When starting to consider the classification of processes, it is worth presenting how the concept of a process has been defined in the literature. Hammer defines a process as a related group of activities whose collective effect provides value for the customer; he also points out that this is value for which the customer is willing to pay [1–3]. Stabryła defines a process as a sequence of tasks that have clearly defined functions and are arranged in an appropriate time sequence [4]. Researchers who deal with process modeling believe that properly designed processes are an important factor in the efficient functioning of a company, and that the effectiveness of its systems depends on the level of understanding of the processes in the company [5]. Giaglis and Doukidis claim that properly modeled processes can be used to manage change in an organization more effectively: well-described processes can be analyzed, monitored, and updated, which makes decision making in the organization easier [6]. Properly developed process models can be used for:
• Understanding the process at both the operational and the management level,
• Identifying bottlenecks,
• Supporting decision making [7].
In business practice, the stage of business process modeling is often only the first step on the way to optimization; process simulation is often required to perform effective process optimization. In the literature, process simulation is often classified as a change management tool in an organization. Simulation helps to answer a number of questions related to the time of process implementation, labor costs, the effectiveness of implemented changes, the involvement of staff, and the utilization of technical resources.
In addition, process simulation allows predicting how the introduced changes will affect the company in the future and is a very important tool for avoiding the implementation of wrong decisions [8]. The information presented above describes process modeling and the benefits of process simulation; the focus will now be shifted to process classification. According to the American economic expert Michael E. Porter, processes can be divided into two main categories: core processes and supporting processes [9]. In the following years, further classifications were made: according to R. S. Kaplan and R. Cooper, processes are classified as innovative, operational, and after-sales service; in addition, they distinguished the division of processes into necessary, essential, and irrelevant [10]. Subsequent researchers classified processes according to the task criterion, including logistic, regulatory, control, and information processes [11]. Grajewski, in turn, believes that processes in enterprises are most often divided into core, supporting, and management processes [12]. In recent years, the most work on process classification has been done by the APQC organization, which has prepared a very extensive document containing the Process Classification Framework. A detailed description and the scope of the PCF are characterized in the next part of the article.
This article aims to classify processes from the simulation modeling point of view using Discrete Event Simulation and System Dynamics. The aforementioned classification of processes is based on the PCF (Process Classification Framework) methodology
88
J. J. Krzywy and M. Hell
developed by the APQC (American Productivity & Quality Center) organization. This article uses the latest version of the tool - version 7.2.1, published in 2019.
2 Techniques of Simulation Modeling

Simulation of business processes is always related to some technique of simulation modeling. Analysts use various techniques to model and simulate business processes. The choice of the appropriate method depends primarily on the process's complexity, its purpose, its level of detail, and the level of the modeled process (strategic, tactical, operational). The most frequently used techniques for simulation modeling of business processes are [13]:
• DES – Discrete Event Simulation
• SD – System Dynamics
• ABS – Agent-Based Simulation
• Monte Carlo
• Intelligent Simulation
• Distributed Simulation
Discrete Event Simulation is a method that simulates a process as a sequence of individual events carried out over time. It enables the analysis of dynamic behaviors and their mutual interactions. Using DES for process analysis allows selecting the most favorable scenarios of process changes. It can also help identify the resources needed to run a process efficiently and examine the relationships between the various elements of the process. With this method, we can verify whether a planned change will have a positive or a negative impact (analysis of the consequences of changes before implementation). Practitioners using DES say that it should be used to analyze problems that are:
• Limited in time or limited in resources,
• Dependent on the influence of many characteristics of individuals,
• Related to the analysis of the experience of individual units,
• Dependent on the impact of alternative scenarios,
• Related to the occurrence of decision points.
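As a minimal illustration of the event-driven mechanics behind DES (a generic sketch, not taken from any of the surveyed tools), the loop below keeps a time-ordered event queue and lets each handler schedule follow-up events; the arrival process and its parameters are invented for the example.

```python
import heapq

def run_des(initial_events, horizon):
    """Tiny discrete-event engine: repeatedly pop the earliest event,
    advance the clock to its timestamp, and execute its handler, which
    may schedule further events. Purely didactic sketch."""
    queue = list(initial_events)        # entries: (time, seq, handler)
    heapq.heapify(queue)
    seq = len(queue)                    # tie-breaker for equal timestamps
    log = []

    def schedule(t, handler):
        nonlocal seq
        heapq.heappush(queue, (t, seq, handler))
        seq += 1

    while queue:
        t, _, handler = heapq.heappop(queue)
        if t > horizon:
            break
        log.append(t)                   # record when each event fired
        handler(t, schedule)
    return log

# A hypothetical arrival process: every event schedules the next one
# 2.0 time units later, until the horizon is reached.
def arrival(t, schedule):
    if t + 2.0 <= 6.0:
        schedule(t + 2.0, arrival)

timestamps = run_des([(0.0, 0, arrival)], horizon=6.0)
print(timestamps)  # [0.0, 2.0, 4.0, 6.0]
```

The same queue-and-handler pattern underlies full DES packages, which add entities, resources, and statistics collection on top of it.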
Discrete event simulation is well suited to the analysis of processes occurring in, among others, production, marketing, healthcare, logistics, and the military [14–17]. System Dynamics is a method based on continuous simulation modeling in which feedback loops may occur in the systems. SD deals with modeling the system's overall behavior rather than with detailed task analysis. When modeling processes according to SD assumptions, time delays and feedback loops in the system should be taken into account; this makes it possible to learn about the behavior of the system and its basic principles of functioning. The effectiveness of System Dynamics lies in knowing how the entire system works, not in analyzing and measuring short-term goals, which scientists believe may lead to inaccurate conclusions. The developed
SD models are in fact a macroscopic model of the tested system, which can be used to study how the structure of the system directly affects the behavior of the entire system [18]. The main features of System Dynamics, such as feedback loops, delays, and stock-and-flow diagrams, are very helpful in describing how even seemingly simple systems exhibit non-linearity. It is worth adding that SD uses both quantitative and qualitative methods in its analyses: qualitative methods describe the structure of the model's behavior, while quantitative methods are related to the analysis of the feedback loops. According to Forrester, the creator of System Dynamics, feedback loops are technical elements of the system that describe phenomena occurring within decision points. Decision-making elements are related to activities that directly impact the surrounding system, and the information obtained from a feedback loop can be an important element in making future decisions [19]. The third method, Agent-Based Simulation, is one of the newer techniques of process simulation modeling, based on the interaction of autonomous "agents". These agents are described by simple rules and by their interactions with other agents, which directly impact their behavior. When modeling individual agents, one can capture the large variety of their attributes and behaviors, which shows the functioning of the system as a whole. By modeling the process from scratch, agent by agent and behavior by behavior, it is easy to observe self-organization in such systems. The main point is that the resulting patterns of behavior, which were not previously designed into the system, emerge from the interaction of the agents. The focus on modeling the heterogeneity of objects in the system and their self-organization are the main features of Agent-Based Simulation that distinguish it from other simulation modeling techniques.
It is also worth mentioning that this kind of modeling uses mutual learning of agents and their mutual adaptation to each other so that the system works best. Summing up the considerations regarding ABS, the main advantages of this type of modeling are: providing an actual system description, modeling flexibility, and taking emergent phenomena into account [20, 21]. The Monte Carlo process simulation method is used in economic activities to assess the level of exposure to a specific risk. This type of simulation is most often used for the financial valuation of derivatives and the creation of various management scenarios. In general, simulations based on the Monte Carlo method are used by many financial companies to evaluate various stochastic integrals related to the probability of certain situations. Process simulation using the Monte Carlo method inverts the usual statistical problem: calculating random values in a deterministic way is replaced by estimating deterministic quantities using random values. Simulation with this method is often used to predict attacks and cyber threats: instead of using point estimates, the range of the damage event and its costs are entered as input data for the simulation. As a result, many possible outcomes of the event can be identified, and the results obtained can then be presented in graphical form to show the areas with the highest probability of loss risk [22–24].
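The "deterministic quantity from random samples" idea can be sketched in a few lines. In the sketch below, the incident probability, the exponential cost model, and the 30,000 loss threshold are invented purely for illustration:

```python
import random

def simulate_losses(n_trials=100_000, p_incident=0.3,
                    mean_cost=10_000.0, seed=42):
    """Monte Carlo estimate of loss exposure: sample whether an incident
    occurs and, if so, draw its cost from an exponential distribution.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        if rng.random() < p_incident:                # does an incident occur?
            losses.append(rng.expovariate(1.0 / mean_cost))  # its cost
        else:
            losses.append(0.0)
    expected_loss = sum(losses) / n_trials           # deterministic quantity
    p_large = sum(l > 30_000 for l in losses) / n_trials  # tail probability
    return expected_loss, p_large

expected_loss, p_large = simulate_losses()
# Analytically, the expected loss is p * mean = 0.3 * 10,000 = 3,000,
# so the sampled estimate should land close to that value.
```

Plotting the full list of sampled losses (e.g., as a histogram) gives exactly the graphical loss-risk picture described above.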
Intelligent Simulation is another method of process simulation modeling. This technique integrates artificial intelligence with simulation methods. It mainly focuses on using artificial intelligence to solve problems involving real-life variability or complex problems such as planning. It is also worth adding that neural networks and genetic algorithms have contributed significantly to the optimizations used in process simulation [13]. Distributed Simulation assumes that the simulation functions are distributed over a network. This assumption fits well with the current trend of decentralization in organizations. The method is mainly related to a distributed, i.e., high-level, architecture. Distributed Simulation is currently used to simulate problems in transportation, electricity generation, and the military industry; in the military it may refer to training commanders in various war scenarios or to defense training [25]. Having described the particular methods of process simulation modeling, it is worth emphasizing that processes can also be simulated with the simultaneous use of several methods, so-called hybrid simulation. In some cases, solving complex problems requires not one simulation technique but a combination of techniques. The most common combination of process simulation methods is that of Discrete Event Simulation and System Dynamics.
3 Process Classification Framework

The American Productivity & Quality Center (APQC) was founded in 1977 and is one of the world's leading authorities in benchmarking, best practices, process and performance improvement, and knowledge management. The main goal of the organization is to help companies improve their processes. In the early 1990s, APQC and a group of business practitioners began working on a tool supporting the implementation of projects related to process improvement: the Process Classification Framework (PCF). In the following years, the PCF evolved into a taxonomy of business processes, and it functions in this form to this day. The developed taxonomy allows companies to compare internal and external processes with companies from any industry. Thanks to PCF terminology, organizations can name and organize their processes, and the PCF has been developed in such a way that it can be adapted by any organizational unit [26]. The PCF classifies processes on five main levels: level 1 – Category, level 2 – Process Group, level 3 – Process, level 4 – Activity, level 5 – Task. An example of a process hierarchy is presented below (Fig. 1). The PCF distinguishes thirteen main categories of processes, which represent the highest level of processes in the organization. Because the classification in this article is based on process categories, a detailed breakdown is presented in Table 1.
Fig. 1. An example of a process hierarchy according to the Process Classification Framework (Source: [26])

Table 1. Categories of processes according to the Process Classification Framework

Hierarchy ID | Name
1.0 | Develop vision and strategy
2.0 | Develop and manage products and services
3.0 | Market and sell products and services
4.0 | Deliver physical products
5.0 | Deliver services
6.0 | Manage customer service
7.0 | Develop and manage human capital
8.0 | Manage information technology (IT)
9.0 | Manage financial resources
10.0 | Acquire, construct, and manage assets
11.0 | Manage enterprise risk, compliance, remediation, and resiliency
12.0 | Manage external relationships
13.0 | Develop and manage business capabilities
4 Methodology

The methodology of the research work consists of the following steps. The first step was to choose the database in which the articles would be searched. The next stage was the selection of articles falling within the subject scope of the analysis, using the search criteria "System Dynamics" and "Discrete Event Simulation". The search term "System Dynamics" in the selected database returned over 1 million results, while articles for
the term "Discrete Event Simulation" were matched with over 9,000 results. It is obviously impossible to analyze such a large number of articles; therefore, additional restrictions on the searched phrases were introduced. The first limitation concerned document types (we search only in articles), and the second limitation was open access. After that, a representative research sample size was selected, on the basis of which conclusions could be drawn. For this purpose, the formula for the minimum sample size was used [27]:

N_min = N_p · α² · f(1 − f) / (N_p · e² + α² · f(1 − f))   (1)

where:
N_min – minimum sample size
N_p – size of the sample population
α – confidence level for the results
f – fraction size
e – assumed maximum error, expressed as a fraction

Once the size of a representative group was determined, the next step was to select sampling techniques and methods. In this article, non-random sampling was chosen, specifically purposive (judgmental) sampling. As a result, it was possible to select the articles included in the representative group. The next step was to prepare a form to collect detailed data on the analyzed articles, containing the following data: the title of the journal, the authors, the area of research, and the category of the analyzed process. The category of the analyzed process is the most important element of this analysis. It was assigned based on the Process Classification Framework methodology developed by the American Productivity & Quality Center (APQC), according to the latest version of the framework (version 7.2.1, generated on 01/09/2020), available for download from the official website of the APQC organization.
The aggregation of the obtained results and their presentation in graphic form constituted the penultimate stage of the research. The presented results were the basis for the conclusions of the study and allowed us to answer the initial question: which processes are most often modeled using System Dynamics, and which using Discrete Event Simulation.
5 Selection of the Number of Articles for Research (According to the Adopted Work Methodology)

5.1 Process Modeling Using System Dynamics
• The searched database: Web of Science,
• Typed phrase: System Dynamics – 1,096,131 papers,
• First limitation: we search only in articles – 902,943 papers,
• Second limitation: open access – 319,434 papers,
• Assumptions for the selection of a representative sample:
• Population size (N_p): 319,434,
• Confidence level for the results (α): 1.96 – the statistically calculated ratio taken as the standard value for a confidence level of 95%,
• Fraction size (f): 0.9 – we assume that the majority of the population will have the tested trait due to the search criterion (we search only among articles that contain the phrase "system dynamics", not all articles in the database),
• Assumed maximum error (e): 5%.

Based on the data entered into the formula, we obtain the number of articles to be analyzed, which is 138:

N_min = N_p · α² · f(1 − f) / (N_p · e² + α² · f(1 − f)) = 138.24   (2)
5.2 Process Modeling Using Discrete Event Simulation
• The searched database: Web of Science,
• Typed phrase: Discrete Event Simulation – 9,643 papers,
• First limitation: we search only in articles – 6,239 papers,
• Second limitation: open access – 1,834 papers,
• Assumptions for the selection of the sample:
• Population size (N_p): 1,834,
• Confidence level for the results (α): 1.96 – the statistically calculated ratio taken as the standard value for a confidence level of 95%,
• Fraction size (f): 0.9 – we assume that the majority of the population will have the tested trait due to the search criterion (we search only among articles that contain the phrase "discrete event simulation", not all articles in the database),
• Assumed maximum error (e): 5%.
Based on the data entered into the formula, we obtain the number of articles to be analyzed, which is 129. The calculation is presented below:

N_min = N_p · α² · f(1 − f) / (N_p · e² + α² · f(1 − f)) = 128.62   (3)
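For reference, formula (1) can be checked with a few lines of code; the function below simply transcribes the formula and reproduces the two sample sizes above (up to rounding):

```python
def minimum_sample_size(population, z=1.96, fraction=0.9, error=0.05):
    """Minimum sample size, formula (1):
    N_min = N_p * z^2 * f * (1 - f) / (N_p * e^2 + z^2 * f * (1 - f))."""
    g = z ** 2 * fraction * (1 - fraction)   # z^2 * f * (1 - f)
    return population * g / (population * error ** 2 + g)

# System Dynamics population (319,434 open-access articles) -> about 138
print(round(minimum_sample_size(319_434), 2))
# Discrete Event Simulation population (1,834 articles) -> about 129
print(round(minimum_sample_size(1_834), 2))
```

Note that the formula saturates: for the large SD population it gives almost the same sample size as for an infinite population, since the N_p·e² term dominates the denominator.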
6 Results

6.1 Research Results for System Dynamics

According to the research methodology, 138 articles on simulation modeling with the use of System Dynamics were analyzed. According to the Process Classification Framework, each of the analyzed articles was assigned a process category (see Fig. 2). When
analyzing the results obtained, it should be emphasized that the processes modeled using SD assumptions were classified into ten different process categories. The largest group comprises processes from the first category, i.e., those related to Develop Vision and Strategy; this group included as many as 65% of all analyzed articles. Deliver Physical Products is the group to which 13% of the articles were assigned, and the remaining eight process categories account for 22% of the analyzed articles.
Fig. 2. Assigned process categories – system dynamics (frequency of use, percent)
6.2 Research Results for Discrete Event Simulation

After analyzing 129 articles related to DES modeling, it was possible to assign the modeled processes to six main categories (see Fig. 3): Develop Vision and Strategy, Develop and Manage Products and Services, Deliver Physical Products, Deliver Services, Manage Information Technology (IT), and Manage Enterprise Risk, Compliance, Remediation, and Resiliency. As can be seen from the obtained results, Discrete Event Simulation is most often used for simulation modeling of processes related to Deliver Physical Products, Deliver Services, and Manage Information Technology (IT). It is also worth noting that DES was used to model operational processes as well as processes related to the development of the company's mission and strategy.

6.3 Combined Research Results for Discrete Event Simulation and System Dynamics

The results obtained for the above-mentioned simulation techniques were combined to indicate which process category is most often modeled using both DES and SD (see Fig. 4). It is worth noting that as many as 102 of the analyzed articles fell into the first category, Develop Vision and Strategy. The second most numerous group is the category related to Deliver Physical Products, followed by the process category related to Deliver Services, to which 52 articles were assigned.
Fig. 3. Assigned process categories – discrete event simulation (frequency of use, percent)
Fig. 4. Combined research results for discrete event simulation and system dynamics (number of articles per process category, SD vs. DES)
7 Discussion

This article aimed to answer the question of which categories of processes are most often modeled using Discrete Event Simulation and which using System Dynamics. For this purpose, literature studies were carried out on articles in which processes were modeled using the aforementioned methods. For the DES method, 129 articles were analyzed and the processes assigned to the appropriate category; in the case of SD, 138 articles were analyzed and likewise assigned to the appropriate process category. The analysis of the obtained results showed that simulation modeling with the use of SD most often concerns strategic issues, such as defining business concepts and long-term visions or developing a business strategy, while DES is most often used to model operational issues. These may include, but are not limited to, issues related to planning of supply chain resources, production planning, warehouse and logistics management, and service delivery resource management.
When choosing the appropriate simulation modeling method, it is worth taking into account the purpose of the modeled process and its complexity. When modeling detailed issues, it is worth using the DES method, which enables the analysis of individual events carried out over time. When modeling more general issues, for example those related to trend analysis or the influence of certain factors on the entire system, it is worth using SD methods. It is worth mentioning that there are also common process areas in which processes modeled using both DES and SD can be found, including Develop Vision and Strategy and Deliver Physical Products. This indicates that these methods can not only interpenetrate but also complement each other. In the case of modeling very complex and detailed issues, it is possible to integrate the two techniques. This integration can take place in two directions. In the first case, we start by modeling very detailed processes using DES and then extend the analyzed problem to model the overall behavior of the system using SD. In the second case, we start modeling by getting to know the behavior of the system and its basic principles of functioning; then, in order to understand not only how the system works but also how its individual elements work, we begin detailed modeling of the indicated elements using DES. The integration of these two methods is certainly not easy to implement, but its skillful use may contribute to a more complete picture of the analyzed situation and to more reliable results of the process simulation.

Acknowledgement. The research was carried out as part of the Applied Doctorate Program of the Ministry of Education and Science carried out in the years 2020–2024 (Agreement No. DWD/4/232020).
Classification of Process from the Simulation Modeling Aspect
Risk Assessment in Industry Using Expected Utility: An Application to Accidents' Risk Analysis

Irene Brito1(B), Celina P. Leão2, and Matilde A. Rodrigues3

1 Centre of Mathematics and Department of Mathematics, University of Minho, Guimarães, Portugal
[email protected]
2 Centro ALGORITMI, University of Minho, Guimarães, Portugal
[email protected]
3 Health and Environment Research Centre, School of Health of the Polytechnic Institute of Porto, Porto, Portugal
[email protected]
Abstract. Expected utility theory can be relevant for decision-making under risk, when different preferences should be taken into account. The goal of this paper is to present a quantitative risk analysis methodology, based on expected utility, where the risk consequences are determined quantitatively, the risk is modelled using a loss random variable, and the expected utility loss is used to classify and rank the risks. Considering the relevance of risk management to reduce workers' exposure to occupational risks, the methodology is applied to the analysis of accidents in industry, where six different contact mode of injury categories are distinguished. The ranking of the injury categories is determined for three different utility functions. The results indicate that the slope of the utility function influences the ranking of the contact mode of injury categories. The choice of the utility function may thus be relevant for the risk classification in order to prioritize different aspects of risk consequences.

Keywords: Risk assessment · Risk analysis · Expected utility · Utility function · Accidents · Industry

1 Introduction
In the theory of decision making under risk, the expected utility theory developed by von Neumann and Morgenstern (1947) describes the representation of preference relations on risky alternatives using expected utility. The expected utility model is used to model how decision makers choose between uncertain or risky prospects [1]. According to that model, there exists a utility function, which depends on the individual's preferences, to appraise different risky outcomes, and a decision maker chooses the outcome which maximizes expected utility. Expected utility theory has applications in the context of economic and actuarial sciences; however, it can also be useful for applications in industrial settings, as will be shown in the present paper. There exist applications of the theory to risk analysis in industry (e.g. in the examples presented below), where utility functions are introduced; however, we could not find applications of expected utility where the utility functions are applied to random risk consequences.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 98–110, 2022. https://doi.org/10.1007/978-3-031-09385-2_9

Considering the risk analysis in industry, one important aim is to order risks quantitatively, so that higher-ranked risks can be identified and risk-handling approaches can be applied, such as risk-controlling, risk-avoiding and risk-mitigating actions [2]. One common method used in industry, e.g. in many applications of safety analysis or in systems development, is to evaluate risks based on the determination of the level of risk (see [3]) or of the risk factor (see e.g. [4]), both of which are expressed as combinations of the occurrence probability of a certain risk event and the size of its consequence. For example, in the risk analysis of accidents, the risk matrix is a popular and common approach to evaluate occupational risks, where the probability of the occurrence of accidents and the consequences of accidents are categorised and each cell of the matrix is associated with a level of risk. Several risk matrices have been used and proposed. In the end, risk can be classified in relation to its acceptance, or ranked, e.g. [3] as intolerable, undesirable, tolerable and negligible; or as high, medium high, medium and low. Another approach, for example in the failure analysis in systems development (see [4]), more used for complex systems, is to determine a risk factor RF, which depends on the probability of failure P and on the consequence of failure (or a measure of the consequence of failure) C.
A failure can be generalized to a broader sense of risk, and the risk factor RF can therefore be defined in terms of the probability of risk occurrence and the consequence of risk, which can be interpreted as a loss. A simple formulation is to define the risk factor as the product of both factors [4]:

RF = P C. (1)
The values for C are determined based on the classification of different risk categories by assigning to C a value between 0 (negligible impact) and 1 (high or catastrophic impact). This classification can be performed by experts in the various technological areas, and several tables with decision criteria have been developed to facilitate the assessment and to evaluate the consequences of risk (see e.g. [5–9]). Some tables, for example, consist of 4 risk categories: catastrophic, critical, marginal, negligible; the risk factor RF then ranges between 0 and 1, where 0 means that there is no risk and 1 means that there is a high or maximal risk [9]. Based on this risk assessment formulation, Ben-Asher [9] developed a risk assessment method using utility theory, which improves the previous model (1). Due to symmetries, drawbacks were identified in the formula for the risk factor: e.g. high probabilities with negligible consequences and low probabilities with high (catastrophic) consequences can be ranked equally. However, as remarked
in [9], most people emphasize risks with higher catastrophic consequences, so that more attention should be paid to the latter case and it should therefore be ranked differently from the first case. Using utility theory, the new model proposed by [9] was formulated as

RF = P u(C), (2)
where u is a utility function, called a utility-based loss function. The loss value for the worst possible outcome is defined by u(Cworst) = 1 and, in the absence of risk, the value is given by u(Cbest) = 0. For other intermediate consequences Ci, the values are determined by asking the agent (or risk management board) for which value of p ∈ (0, 1) he or she would be indifferent between getting the outcome Ci with certainty and a lottery yielding the outcome 1 (worst outcome) with probability p and 0 with probability 1 − p. The utility-based loss values are then defined by u(Ci) = p, so that Ci is the certainty equivalent of the lottery. In this context, a typical utility function is convex, increasing more steeply for higher consequence values. This resolves the mentioned drawback. Other applications of utility theory to risk analysis in industrial settings were proposed in [10], where risk matrices were established that integrate risk attitudes quantified by utility functions; in [11], where the assessment of safety risks in the oil and gas industry was considered; and in [12], where, in the context of port vulnerability analysis in the shipping and port industries, the ranking of vulnerability levels was determined by utility values. In this work we present a methodology for the analysis of risks in industry, where the risk evaluation and classification is based on expected utility theory. The main difference between this methodology and the utility-based methodologies mentioned above is that this is a quantitative methodology, where the risk consequences are determined numerically and modelled as random variables and expected utility is used to assess the risks, whereas the other methodologies are semi-quantitative, the consequences being determined qualitatively and ranked subjectively through the utilities. Here, a loss random variable is defined and the expected utility loss is used to rank the different risks.
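The symmetry drawback of (1) and its resolution by the convex utility in (2), discussed above, can be illustrated numerically. The probabilities, consequence values and the cubic utility below are illustrative assumptions, not values from the paper:

```python
def rf_linear(p, c):
    # Classical risk factor, Eq. (1): RF = P * C
    return p * c

def rf_utility(p, c, u=lambda c: c ** 3):
    # Utility-based risk factor, Eq. (2): RF = P * u(C), with a convex
    # utility-based loss function satisfying u(0) = 0 and u(1) = 1.
    return p * u(c)

frequent_minor = (0.9, 0.1)     # high probability, negligible consequence
rare_catastrophic = (0.1, 0.9)  # low probability, catastrophic consequence

# Eq. (1) ranks both scenarios equally (0.09 in each case) ...
assert rf_linear(*frequent_minor) == rf_linear(*rare_catastrophic)
# ... while Eq. (2) with a convex u ranks the catastrophic scenario higher.
assert rf_utility(*rare_catastrophic) > rf_utility(*frequent_minor)
```

Any convex u with u(0) = 0 and u(1) = 1 produces the same qualitative separation; the cube is only the simplest choice.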
Different utility functions are introduced, representing more or less aggressive decision-makers, so that a given risk can be classified differently, as more or less severe, according to the properties related to the first and second derivative of the utility function, which model the decision-maker's risk attitude. The risk analysis based on expected utility is applied to a case study of risk analysis of occupational accidents in the furniture industry, considered in [13], and the results are compared with the results of that previous study. In [13], the risks, associated with lost days due to different injury categories, were modelled by loss random variables, and expected loss, loss variance and risk measures, such as Value-at-Risk and Tail-Value-at-Risk (or Expected Shortfall), were used to analyse the risk levels of different accident categories. The data used in the case study correspond to accidents in the furniture industry in Portugal in the year 2010, this industry being one of the most relevant activity sectors in Portugal [14], consisting predominantly of small and medium-sized enterprises [15].
2 Risk Assessment Based on Expected Utility
Consider the following decision problem. Suppose that a decision maker with wealth w must decide between two random losses X and Y. A simple decision model that one can apply in order to choose between X and Y is the expected value model. According to the expected value principle, a decision maker compares E[w − X] with E[w − Y] and chooses the random amount which maximizes the expected value: he would prefer X to Y if E[w − X] > E[w − Y], he would choose Y if E[w − X] < E[w − Y], and in the case E[w − X] = E[w − Y] the decision maker would be indifferent between X and Y. In the expected utility theory developed by von Neumann and Morgenstern (1947), a utility function u(·): R → R is introduced that represents the preference ordering of the decision maker. The decision maker judges the utility of a given quantity instead of the simple value of that quantity, so that he takes into account the utility of the wealth u(w) instead of w. The utility function is an increasing function, u′(·) > 0, since utility increases with wealth. According to the expected utility model, if the decision maker must decide between two random losses X and Y, he or she compares the expected utilities E[u(w − X)] and E[u(w − Y)] and opts for the quantity having the higher expected utility: he chooses X if E[u(w − X)] > E[u(w − Y)], or Y if E[u(w − X)] < E[u(w − Y)], and if E[u(w − X)] = E[u(w − Y)] the decision maker is indifferent between X and Y. The decision that an individual takes depends on the attitude towards risk, which is characterized by the shape of the utility function, namely its second derivative. A decision maker can be classified into three risk attitude categories: risk-seeking, risk-neutral and risk-avoiding (or risk-averse). For a risk-seeking individual the utility function is convex, u″(·) > 0, for a risk-avoiding individual the utility function is concave, u″(·) < 0, and for a risk-neutral individual the utility function is linear, u″(·) = 0.
The following example illustrates this characterization.

Example 1. Consider a lottery where the decision maker can win 2 monetary units with probability 0.5 or 0 with the same probability, and let X denote the random variable representing the corresponding monetary outcome. The expected utilities for the three different utility functions u1(x) = x² (risk-seeking), u2(x) = x (risk-neutral) and u3(x) = √x (risk-avoiding) are given by:

E[u1(X)] = 2, E[u2(X)] = 1, E[u3(X)] = √2/2 ≈ 0.71. (3)

Comparing the expected value of the monetary outcome of the lottery, E[X] = 1, with the certainty equivalents corresponding to the different utilities (a certainty equivalent is the amount of cash that an individual would accept with certainty instead of facing the lottery), determined through CE(X; ui) = ui⁻¹(E[ui(X)]), i = 1, 2, 3:

CE(X; u1) = √2 ≈ 1.41, CE(X; u2) = 1, CE(X; u3) = 1/2, (4)
one can observe that the lottery is valued higher by a risk-seeking decision maker, CE(X; u1) > E[X], than by a risk-avoiding decision maker, CE(X; u3) < E[X]. For the risk-neutral utility, the certainty equivalent coincides with the expected value.

In risk theory, a risk can in certain situations be modeled by a loss random variable X, a random risk, of the form (see for example [1]):

X = IB, (5)

where I is the indicator random variable, which takes the value I = 1 if a risk event (a loss) has occurred and I = 0 if no risk event (no loss) has occurred, and B represents the random amount of loss. The indicator random variable I can be described by a Bernoulli(p) distribution, 0 ≤ p ≤ 1. The occurrence probability of the risk event is p = P(I = 1), and 1 − p = P(I = 0) corresponds to the probability of no risk occurrence. If I = 1, the loss X is drawn from the distribution of the random variable B; if I = 0, then X = 0, meaning that no loss has occurred. The moments of X can be calculated using the law of iterated expectations. The expected loss is determined as follows:

E[X] = E[E[X | I]] = p E[B]. (6)

In the particular case where the amount of loss is fixed, B = b, the formula for the expected loss simplifies to

E[X] = p b. (7)

The expected utility loss of X = IB is given by

E[u(X)] = p E[u(B)] + (1 − p) u(0) (8)

and the expected utility loss of X = Ib, considering a fixed loss B = b, by

E[u(X)] = p u(b) + (1 − p) u(0). (9)

Using the normalization condition u(0) = 0, formulas (8) and (9) become, respectively,

E[u(X)] = p E[u(B)] (10)

and

E[u(X)] = p u(b). (11)
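A minimal numerical sketch of the loss model X = IB under the normalization u(0) = 0, cross-checking formulas (7) and (11) by Monte Carlo simulation; the values of p and b and the quadratic utility below are illustrative assumptions, not data from the case study:

```python
import random

p = 0.12              # P(I = 1): occurrence probability of the risk event
b = 33.0              # fixed loss amount (the case B = b)
u = lambda x: x ** 2  # a convex utility-based loss function with u(0) = 0

expected_loss = p * b             # Eq. (7):  E[X] = p * b
expected_utility_loss = p * u(b)  # Eq. (11): E[u(X)] = p * u(b)

# Monte Carlo cross-check: simulate X = I * b directly.
random.seed(0)
n = 200_000
draws = [b if random.random() < p else 0.0 for _ in range(n)]
mc_loss = sum(draws) / n
mc_utility_loss = sum(u(x) for x in draws) / n

# Both empirical averages should be close to the closed-form values.
assert abs(mc_loss - expected_loss) < 0.5
assert abs(mc_utility_loss - expected_utility_loss) < 10
```

With a random B, one would instead average u over draws of B on the event I = 1, matching formula (10).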
Comparing the previous formulas for the expected loss and the expected utility loss of a random risk with the definitions of the risk factor used in the industry context, (1) and (2), the latter depending on the utility function, the expected loss defined in (7) resembles formula (1) and (11) resembles formula (2), where
p plays the role of the risk occurrence probability P and b the role of C, since a risk consequence can be perceived and represented as a loss in a certain sense. In the expected utility framework, which we adopt here, the consequences or losses will also be considered random variables, so that the definitions (6) and (10) represent the analogous risk factors for random consequences. In the risk assessment approach based on expected utility, we will use the utility function to characterize the outcome of the risk representing a loss (or a consequence), so that, as in [9], the utility function u is a utility loss function (or utility-based loss function). Thus, u(·) is an increasing function of loss or consequence, u′(·) > 0. We will apply the definitions (11) and (10), where the losses are modelled as fixed values or random variables, respectively. In order to assess and classify risks in industry, we will use the expected utility losses to rank the risks. We will say that a risk X is higher or more severe than the risk Y, or simply that X is riskier than Y, if the expected utility loss of X is higher than that of Y, and we will represent this risk relation order by X ≻ Y. Thus, we have that

X ≻ Y ⇔ E[u(X)] > E[u(Y)], (12)

meaning that X is riskier than Y.
3 A Case Study – Risk Assessment of Industrial Accidents
We will apply the expected utility based risk analysis method to a case study that was considered in [13]. The aim is to classify and assess injury categories of occupational accidents that occurred in the furniture industry in Portugal in 2010. Official accident report data were provided by the Portuguese Office of Strategy and Planning (GEP) and are aligned with the European Statistics on Accidents at Work (ESAW III). The six categories of contact modes of injury, denoted by Ci, i = 1, . . . , 6, are presented in Table 1.

Table 1. Contact mode of injury categories.

Ci | Injury category
C1 | Contact with electrical voltage, temperatures, hazardous substances
C2 | Horizontal or vertical impact with or against a stationary object (victim in motion)
C3 | Struck by object in motion, collision with
C4 | Contact with sharp, pointed, rough, coarse material agent
C5 | Trapped, crushed, etc.
C6 | Physical or mental stress
The occurrence of a safety risk, which in the present case is an occupational accident belonging to one of the six injury categories, is accompanied by a consequence whose severity can be measured by the number of lost work days. Therefore, here the loss (or consequence, or severity) will be measured in terms
of the number of lost work days implied by an accident. Different injuries will lead to different numbers of lost work days, where the case of zero lost work days may also occur. The risk of accident can then be characterized by its occurrence probability, estimated from past accident frequencies, and by its loss, measured in terms of lost work days. The risk corresponding to the injury category Ci will be modeled using the loss random variable

Xi = Ii Bi, (13)

where Ii ∼ Ber(pi), with pi being the accidents' occurrence probability in category Ci, and the random variable Bi represents the number of lost work days in category Ci. We will consider two cases: in the first case a fixed estimated number of lost work days will be used, Bi = bi, and in the second case a random variable Bi will be used, where in addition the numbers of zero and non-zero lost work days are taken into account. The estimated number of lost work days due to an accident of category Ci, i = 1, . . . , 6, will be defined by

bi = bTi / ni, (14)

where bTi represents the total number of lost days associated with accidents of category Ci and ni is the number of accidents in category Ci. The occurrence probability of an accident in category Ci is given by

pi = ni / n, (15)

where ni stands for the number of accidents in category Ci and n is the total number of accidents that occurred in the furniture industry. The probability that an accident belonging to category Ci will lead to at least one lost work day can be estimated from the number of accidents that had one or more lost work days as a consequence:

qi = ni,≥1 / ni, (16)
where ni,≥1 denotes the number of accidents in category Ci leading to at least one lost work day. Table 2 contains the results for accidents of each category Ci.

Table 2. Results for the contact modes of injury categories.

Ci | ni   | ni,≥1 | qi   | bTi   | bi    | pi
C1 | 97   | 77    | 0.79 | 1135  | 11.70 | 0.02
C2 | 523  | 361   | 0.69 | 17457 | 33.38 | 0.12
C3 | 958  | 585   | 0.61 | 18082 | 18.87 | 0.22
C4 | 1406 | 1188  | 0.84 | 53661 | 38.17 | 0.33
C5 | 331  | 270   | 0.82 | 13594 | 41.07 | 0.08
C6 | 998  | 809   | 0.81 | 27062 | 27.12 | 0.23
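The derived columns of Table 2 follow directly from the raw counts via (14)–(16). A short sketch recomputing them (the counts are taken from the table; the dictionary layout is our own):

```python
counts = {  # Ci: (n_i, n_{i,>=1}, bT_i), raw counts from Table 2
    "C1": (97, 77, 1135),
    "C2": (523, 361, 17457),
    "C3": (958, 585, 18082),
    "C4": (1406, 1188, 53661),
    "C5": (331, 270, 13594),
    "C6": (998, 809, 27062),
}
n_total = sum(n for n, _, _ in counts.values())  # total number of accidents

table2 = {
    c: {
        "q": n_geq1 / n,   # Eq. (16): share of accidents with >= 1 lost day
        "b": bT / n,       # Eq. (14): mean number of lost work days
        "p": n / n_total,  # Eq. (15): occurrence probability of category Ci
    }
    for c, (n, n_geq1, bT) in counts.items()
}

# The recomputed values match the rounded Table 2 entries, e.g. for C5:
assert round(table2["C5"]["b"], 2) == 41.07
assert round(table2["C5"]["q"], 2) == 0.82
```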
Case 1: Risk Model with a Fixed Number of Lost Work Days

Considering the risk model Xi = Ii bi, we will apply the expected utility loss (11), E[u(Xi)] = pi u(bi), using three different utility functions exemplifying the different risk attitudes: the linear utility u1(x) = x, a quadratic utility u2(x) = x² and the exponential utility u3(x) = eˣ. Note that for the linear utility the expected utility corresponds to the expected value. The other two utility functions are convex, where the exponential utility increases more for higher losses than the quadratic utility. Table 3 contains the calculated expected utility losses for each injury category.

Table 3. Expected utility losses of injury categories.

Ci | E[u1(Xi)] | E[u2(Xi)] | E[u3(Xi)]
C1 | 0.23      | 2.74      | 2411.43
C2 | 4.01      | 133.71    | 3.77 × 10³
C3 | 4.15      | 78.34     | 3.45 × 10⁷
C4 | 12.60     | 480.79    | 1.25 × 10¹⁶
C5 | 3.29      | 134.94    | 5.49 × 10¹⁶
C6 | 6.24      | 169.16    | 1.38 × 10¹¹

The contact mode of injury categories can be ordered with respect to their risks using the results of Table 3 and the representation (12) for each utility function (see Table 4).

Table 4. Risk ordering of contact mode of injury categories using expected utility losses.

E[u1(Xi)]: C4 ≻ C6 ≻ C3 ≻ C2 ≻ C5 ≻ C1
E[u2(Xi)]: C4 ≻ C6 ≻ C5 ≻ C2 ≻ C3 ≻ C1
E[u3(Xi)]: C5 ≻ C4 ≻ C6 ≻ C3 ≻ C2 ≻ C1
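The orderings in Table 4 can be reproduced from the rounded Table 2 values via (11). A sketch for the linear and quadratic utilities (the exponential utility is handled identically but produces very large values):

```python
# Rounded occurrence probabilities p_i and mean lost work days b_i (Table 2).
p = {"C1": 0.02, "C2": 0.12, "C3": 0.22, "C4": 0.33, "C5": 0.08, "C6": 0.23}
b = {"C1": 11.70, "C2": 33.38, "C3": 18.87, "C4": 38.17, "C5": 41.07, "C6": 27.12}

def ranking(u):
    # Eq. (11): E[u(X_i)] = p_i * u(b_i); order the categories by decreasing
    # expected utility loss, i.e. from riskiest to least risky.
    eul = {c: p[c] * u(b[c]) for c in p}
    return sorted(eul, key=eul.get, reverse=True)

print(ranking(lambda x: x))       # u1 → ['C4', 'C6', 'C3', 'C2', 'C5', 'C1']
print(ranking(lambda x: x ** 2))  # u2 → ['C4', 'C6', 'C5', 'C2', 'C3', 'C1']
```

The two printed orderings coincide with the first two rows of Table 4.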
In this case, the utility function is applied to the loss model depending on the accident occurrence probability and on the estimated number of lost work days, which is fixed, with the effect that only the number of lost work days, bi, is influenced by the utility function. From the results in Table 4 one concludes that the injury category C1 is classified as the lowest risk category with all three utility functions. The linear and the quadratic utility classify C4 as the highest risk category, followed by C6. However, the exponential utility classifies C5 as the highest risk category, followed by C4. The reason for this difference can be explained as follows. As one can see from Table 2, C5 has the highest estimated number of lost days, b5 = 41.07, and a low occurrence probability, p5 = 0.08, whereas C4 has the highest occurrence probability,
p4 = 0.33, and its number of lost days is also considerably large, b4 = 38.17. With an increasing slope of the utility function, the impact on bi also increases. Thus, the increasing slope penalizes the risk, and the penalization is more accentuated for higher values of bi. As a consequence, C5 is considered with u1 the fifth risk category, with u2 the third, and with u3 (the exponential utility, having the highest slope) the highest risk category. For low values of bi, as in C1, the slope has little impact, leaving C1 at the same risk level. In general, one can conclude that attention should first be paid to the injury category C4 and then to the category C6, which occupy higher risk positions in the different orderings. There is further evidence to consider C5 as the third risk category: if one wants to weight the number of lost days more, and the aim is to reduce accidents with a higher number of lost days even though their occurrence probability is low, then more attention should be paid to injury category C5. With the given model, the results of Table 4 suggest the following ranking:

C4 ≻ C6 ≻ C5 ≻ C2, C3 ≻ C1, (17)
where the position of C2 and C3 varies more with the utility function and it is not clear, in general, which of the two should be prioritized. This depends effectively on the decision-makers' attitude reflected by the utility function. One can conclude that the utility function influences the risk ordering of the injury categories, where different utility functions weight the number of lost days differently. A utility function with a higher slope, such as the exponential utility, could be employed if one wants to prioritize risks with a higher number of lost work days.

Case 2: Risk Model with a Random Number of Lost Work Days

Now, we will consider the risk model Xi = Ii Bi, where Bi is a random variable. In this case, we will further take into account the occurrence probability of accidents with non-zero lost work days, given by qi (see (16)). The distribution of Bi can be defined by P(Bi = bi) = qi, P(Bi = 0) = 1 − qi. The expected utility loss for category Ci, using (10), is then given by

E[u(Xi)] = pi qi u(bi). (18)

Table 5. Expected utility losses of injury categories.

Ci | E[u1(Xi)] | E[u2(Xi)] | E[u3(Xi)]
C1 | 0.18      | 2.16      | 1905.03
C2 | 2.76      | 92.26     | 2.60 × 10¹³
C3 | 2.53      | 47.79     | 2.10 × 10⁷
C4 | 10.58     | 403.87    | 1.05 × 10¹⁶
C5 | 2.69      | 110.65    | 4.50 × 10¹⁶
C6 | 5.05      | 137.02    | 1.12 × 10¹¹
Calculating the expected utility losses for the utility functions u1(x) = x, u2(x) = x² and u3(x) = eˣ, one obtains the results in Table 5. The injury categories can be ordered with respect to their risks using the results of Table 5 and the representation (12) for each utility function (see Table 6).

Table 6. Risk ordering of injury categories using expected utility losses.

E[u1(Xi)]: C4 ≻ C6 ≻ C2 ≻ C5 ≻ C3 ≻ C1
E[u2(Xi)]: C4 ≻ C6 ≻ C5 ≻ C2 ≻ C3 ≻ C1
E[u3(Xi)]: C5 ≻ C4 ≻ C2 ≻ C6 ≻ C3 ≻ C1
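Under the second model, the only change to the computation is the extra factor qi from (18). A sketch reproducing the linear-utility ordering of Table 6 from the rounded Table 2 values:

```python
# Rounded inputs p_i, q_i and b_i from Table 2.
p = {"C1": 0.02, "C2": 0.12, "C3": 0.22, "C4": 0.33, "C5": 0.08, "C6": 0.23}
q = {"C1": 0.79, "C2": 0.69, "C3": 0.61, "C4": 0.84, "C5": 0.82, "C6": 0.81}
b = {"C1": 11.70, "C2": 33.38, "C3": 18.87, "C4": 38.17, "C5": 41.07, "C6": 27.12}

# Eq. (18) with the linear utility u1(x) = x.
eul = {c: p[c] * q[c] * b[c] for c in p}
order = sorted(eul, key=eul.get, reverse=True)
print(order)  # → ['C4', 'C6', 'C2', 'C5', 'C3', 'C1'], the u1 row of Table 6
```

Substituting u2 or u3 for the linear utility in the comprehension yields the other two rows analogously.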
Analysing the results presented in Table 6, one can observe that, as with the previous model, C4 and C6 occupy the first and second risk positions with u1 and u2, respectively, while with u3, C5 occupies the first and C4 the second risk position. The values of the newly introduced quantity qi are in fact also higher for C4, C5 and C6 (see Table 2), so that these categories remain unchanged in the ranking, also due to the fact that in formula (18) the utility function continues to influence only the quantity bi. Considering the low-risk injury categories, C1 is also classified by this model as having the lowest risk with all three utility functions. The difference is that, furthermore, the category C3 is now classified by all three utilities as the second lowest risk category, whereas in the other model C3 appeared in the third, fourth and fifth risk positions, depending on the utility function. The reason for C3 now appearing in the fifth risk position with all three utilities is that C3 has the lowest probability of accidents with non-zero lost work days, q3 = 0.61 (see Table 2). In general, if one should rank the injury categories with the proposed model and the three utilities, the results of Table 6 suggest the following ranking:

C4 ≻ C6 ≻ C5 ≻ C2 ≻ C3 ≻ C1. (19)
With this second model, the aspect of the probability of non-zero lost work days was further taken into account, so that, with the low probability q3, the category C3 was consistently classified as the second lowest risk category. Considering the impact of the number of lost work days on the risk classification, the exponential utility, with its higher slope, is more sensitive to this aspect than the other utilities, and also in this case the expected utility loss with u3 prioritizes the risk of injury category C5. In contrast, the expected utility loss with u1 and u2 ranks C4 as the highest risk category, which has the highest occurrence probability and the highest probability of non-zero lost work days. Comparing the ranking of the six injury categories of industrial accidents obtained with the expected utility loss models with the classification obtained with other risk measures, such as the Value-at-Risk (VaR), in the study conducted in [13], where the proposed ranking was C4 ≻ C6 ≻ C2 ≻ C5 ≻ C3 ≻ C1,
one can conclude the following. The ranking based on VaR coincides with the expected loss (the expected utility loss with linear utility u1) with a random number of lost days (cf. Table 6). In general, the expected utility loss models (cf. (17) and (19)) and VaR prioritize the risk of category C4 followed by C6, and all classify C1 as the category with minimum risk. The main difference lies in the classification of the intermediate risk levels. For example, considering the third risk position in the ranking, the expected utility loss models select C5, whereas VaR selects C2. Here one can observe the role of the utility function, which penalizes C5, the category that in fact has the highest estimated number of lost work days.
4 Conclusions
In this work we proposed a risk analysis approach based on expected utility. The industry risk was modelled by a loss random variable, and the expected utility loss was used to classify and rank industry risks. The role of the utility function is to weight certain aspects of risk differently, and it can represent the risk attitude of a decision-maker. The methodology was applied to the risk analysis of accidents in industry, categorized into six contact mode of injury categories. A loss random variable was defined, depending on the accident occurrence probability and on the number of lost work days, which was considered fixed in a first model and random in a second model. Then, the expected utility loss was calculated for each injury category. In the second model, the number of non-zero lost work days was further distinguished for each injury category. Different utility functions, a linear, a quadratic and an exponential utility, were used to determine the expected utility loss. The results showed that different utility functions provide different rankings of the categories: the higher the slope of the utility function, the stronger the penalization of categories with a high number of lost work days. The introduction of the utility function in the risk assessment can therefore be useful to prioritize certain aspects of risk, penalizing a given risk event (or risk consequence) more or less. The utility function therefore also serves to model the preferences and the risk attitude of the decision-maker. This analysis method, besides being quantitative, incorporates a qualitative or subjective evaluation through the utility function. The selection of the utility function, based e.g. on empirical studies of risk judgement in industry, is a topic for further study; a possible relation of the utility function to existing risk matrices can also be investigated.
The risk analysis approach can be extended to other domains, to situations where it is possible to quantify the risk consequence, with an associated occurrence probability, and represent it by a loss random variable. The case study serves as an example of how to apply the approach and how to model the risk. One of the main new achievements is the introduction of the utility function to take the decision-maker's risk attitude into account, with the possibility of ranking the risks accordingly using expected utility. In the future we will use more recent data, apply the different risk methodologies, based on expected utility, on
VaR and on the risk matrix, and analyse and compare the results. We also plan to apply the methodologies to different industry sectors.

Acknowledgments. This work was partially financed by Portuguese Funds through FCT (Fundação para a Ciência e a Tecnologia) within the Projects UIDB/00013/2020, UIDP/00013/2020, UIDB/00319/2020, UIDB/05210/2020.
Two-Criterion Optimization of the Worm Drive Design for Tool Magazine

Oleg Krol and Volodymyr Sokolov

Volodymyr Dahl East Ukrainian National University, 59-a Tsentralnyi Prospect, Severodonetsk 93400, Ukraine
[email protected]
Abstract. A complex search procedure for a rational design of the worm drive mechanism of a tool magazine for a multioperational metal-cutting machine in the automatic tool change mode is considered. The design of the chain tool magazine of a drilling-milling-boring machining center is analyzed on the basis of three-dimensional models constructed in an integrated CAD environment. Rendering of tooling for the technological operations of drilling and milling has been completed. 3D modeling of a worm gear using specialized applied programs for the design of mechanical transmissions has been carried out. A procedure for two-criterion optimization of the design characteristics of the worm gear of the tool magazine drive by the method of Lagrange multipliers is proposed. The main criteria are contact endurance and wear resistance in the contact zone of the worm and the worm wheel. The optimized parameters are chosen as the numbers of teeth of the worm and the worm wheel, as well as the worm diameter coefficient. An algorithm for finding rational design solutions when constructing a worm gear is proposed. A numerical analysis of the results of solving the system of equations reflecting the dependence of the main transmission criteria on the optimized parameters is carried out. Keywords: Chain tool magazine · Gear drive · Multi-criteria optimization
1 Introduction One of the main directions of development for mechanical engineering and metalworking is increasing the productivity and flexibility of metal-cutting machine tools and machine-tool complexes. This is due to a significant increase in the range of parts in medium- and small-scale production and the need to automate their production on modern multioperational machines and machining centers. This can be achieved by extensively equipping the main equipment with subsystems for the automated change of workpieces and cutting tools. When using automatic tool changers, it is rational to use tool blocks consisting of cutting and auxiliary tools. The tool blocks are fitted with identical seats and are pre-dimensioned (or pre-measured). The use of Automatic Tool Changers (ATC) is one of the main means of reducing the downtime of high-performance equipment. The ATC type and design determine the © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 111–122, 2022. https://doi.org/10.1007/978-3-031-09385-2_10
accuracy of the tool set and, consequently, the processing accuracy, the tool change time (i.e., productivity), as well as the entire layout of the multioperational machine [1, 2]. For example, the machining process on a CNC lathe is considered from a systemic point of view in [3], and a reduction of the processing cycle time of about 18% is reported, achieved both through optimal cutting conditions (main time) and through auxiliary time, including the tool change component. An important role is played by the ATC drive, which implements the basic motions of the Machine Tool Magazine (MTM) in accordance with the existing technological process. Errors arising in the process of positioning and tool change depend largely on the mechanical transmissions, their manufacturing technology and optimal design solutions. Cutting and auxiliary tools, the means of presetting and controlling them, and tooling systems play an important role in achieving high cost-effectiveness of expensive CNC equipment [4].
2 Literature Review In the process of designing multioperational machines, it is necessary to take into account the directions of development of their tooling systems. In this regard, the results obtained in [5] are of interest: a comprehensive study of strategies for improving the tooling support of machine equipment was carried out there. The authors identified a number of factors influencing the creation of a sustainable production process in two directions. The first direction reflects the static aspect, considering the physical and mechanical properties of the cutting tool used and its level of completeness. The second is related to the quasi-static aspect and concerns the positioning and storage location of cutting tools in the form of tool blocks. This formulation of the problem presupposes the emergence of two conflicting goals. The first goal is to ensure that production orders are adhered to, which increases inventory levels and transportation costs. In this direction, research was carried out in [6], where a mathematical model was created that reflects the minimum stock of tools and the average delay in the process of identifying a tool. This model most adequately reflects the situation associated with the storage of tools in a centralized tool plant or a localized storage. The second goal is high tool availability and reduced machining times. Based on a systematization of the factors influencing the choice of options for the localization of the tooling component, recommendations are proposed for resolving this conflict of goals. The essence of this approach is to determine the optimal storage area within a set of options: – centralized tool plant; – localized storage; – machine tool magazine. As is known, in the operation of multioperational machines such as machining centers with a variety of tooling, processing errors are formed more intensively than in conventional CNC machines.
They can lead to relative linear and angular
displacements of the forming units of the machine, in which the cutting tool and the workpiece are mounted. Errors of this kind, as well as backlash in the links of the mechanical transmissions of the MTM drive, affect the accuracy of the movement of the actuating devices of the machine. Problems associated with the correction of angular displacements of the actuating devices are also noted. It should be borne in mind that the reduction of the error level directly depends on the design solutions laid down by the designers. The efficiency of machining center operation is largely influenced by the ATC operation. At the same time, the accuracy and quality of blank shaping is associated with the level of errors arising from the interaction of the tool magazine, autooperator and machine tool spindle. Therefore, in [7, 8] an analysis is given of the errors associated with the positioning of tools in the MTM, which can lead to failures in the ATC operation of the machining center. In addition, an early warning system for tool magazine failures plays an important role in the operational phase. Such a system, analyzing vibration signals at the positions of removing a tool from the magazine cradle and mounting it in the mating conical bore of the spindle, is presented in [7]. The use of fuzzy complex assessment in problems of predicting MTM failures and identifying the most critical components is proposed in [8]. It is noted that among the various causes of failure, the component associated with tool jamming due to inaccurate positioning accounts for 21% of the total volume of failures. In [9], VERICUT-based software was developed for medium-size five-axis milling machines equipped with Chain Tool Magazines (CTM). This software is used in the tasks of modeling the main shaping processes, including ensuring high accuracy of tool positioning in the CTM during tool change.
In [10], the issues of increasing accuracy and reducing the cost of processing by improving the components of the technological system of a CNC lathe are considered. Analytical dependencies are formed to determine the errors of the machining process in the case of using a multi-cutter turret; these dependencies allow optimal design and technological decisions to be made. A sufficiently effective procedure for preliminary correction of the longitudinal and transverse overhang of the tool in the multi-cutter head is highlighted, which allows the machining errors along the X- and Y-axes to be reduced by up to 30%. It is important that such positioning indicators as accuracy, resolution and repeatability depend on the design characteristics of the MTM drive, whose main mechanical transmission is a worm gearing. In the drive mechanisms of machining centers, the use of worm gears is complicated by the problem of strict synchronization of the rotations of the driving and driven shafts [11, 12]. A similar situation occurs during the reverse movement of the worm, when the driven worm wheel is stationary for a certain period. The misalignment of the rotations of the worm transmission elements causes inaccuracy in the MTM positioning. Based on the analysis of the problem of increasing the accuracy of executive movements of the MTM by reducing the gaps in the transmission, it can be concluded that there are two directions of research. The first, technological, is associated with the search
for a new technology for the production of worm transmission elements. The second, constructional, is associated with the use of optimal design solutions in worm gearing. Improvement of the design requires an analysis of the engagement zone, which is characteristic of various types of mechanical gears (spur, helical, bevel, worm, etc.) [13, 14]. Engineering research on structures of various types of mechanical transmissions is carried out in advanced computer-aided design systems and specialized author software. Thus, a study of dual-path gearing in the ABAQUS environment, aimed at modeling and determining contact stresses on the lateral surfaces of the teeth and bending stresses at the tooth root, is carried out in [15]. The author emphasizes that the level of bending stresses in the tooth root on the side opposite to the applied load is two times lower than that of the contact stresses on the lateral sides of mating teeth, which corresponds to the theory of gearing. In applied terms, two specialized subprograms [16] within the ABAQUS program have shown their effectiveness. They allow solid models to be imported from other CAD systems (in this case, from CATIA). The proposed software product makes it possible to assess the level of contact stresses, the pressure on the lateral surfaces of the tooth, the length of the contact lines, and the wear resistance of the contacting surfaces. In [17], the endurance strength of the parts of a two-stage gear mechanism under contact and bending stresses is researched. An experimental analysis of the endurance strength indicators in the most dangerous section at the tooth apex, with a safety factor equal to 1.4, is carried out. The main research toolkit is the finite element method, used to analyze the stress-strain state of the output link – a hollow shaft with internal gearing. The results of this analysis are planned to be used in the problem of optimizing the design of the gear mechanism.
At the same time, the scenario of the optimization procedure (criterion and method) is not presented in that work. Optimization problems for the parameters of worm gears are presented in [18–21]. Thus, in [18, 19], analytical and experimental approaches are presented to solving the problem of choosing a rational type of gearing and finding the optimal values of its parameters. A systematization of design criteria (power loss along the length of the contact line, load capacity, efficiency, etc.), presented as functions of several variable parameters that determine the transmission quality, has been carried out. In [20–22], the procedure of multicriteria synthesis of transmissions with different types of worms is considered, as well as the effect on the optimization criteria of such geometrical and kinematic characteristics of engagement as the length and shape of the contact lines, the reduced curvature, the profile angle of the turns, etc. In this paper, a new approach to the search for optimal parameters of worm gearing by the Lagrange multipliers method on the basis of a multi-criteria optimization procedure is proposed. In contrast to [21, 22], where the problem of modifying the geometry of the gearing for increased load capacity is solved, this work takes the peculiarity of worm gearing into account: first of all, the high sliding velocity and, consequently, the increased wear. Therefore, wear resistance is used as one of the criteria. A similar approach is considered in [19], but without a multicriteria formulation; there, contact and bending endurance are taken as criteria, with an emphasis on the analysis of the geometry of the worm and wheel generating lines.
Based on the above analysis of the problems of creating tooling support for multioperational machines based on tool magazines of various types, the problem statement can be formulated as follows: to create a complex of three-dimensional models of a chain tool magazine as a basis for determining the main technical characteristics, and to develop an optimization procedure for choosing a rational design of an MTM drive mechanism based on a worm transmission.
3 Research Methodology The efficiency of the functioning of multioperational machines and machine-tool complexes depends, among other things, on uninterrupted operation and the speed of changing workpieces and cutting tools. Modern machines are equipped with various types of tool magazines with differing toolsets. At the initial steps of the design stage, it becomes necessary to create new sets of 3D models of disk- and chain-type tool magazines for machining centers of two and three standard sizes using the theory of rational choice [22, 23]. When creating projects for such storages, it is necessary to create separate sections of three-dimensional models of auxiliary and cutting tools using various specialized applications [24, 25], including rendering technologies. As noted above, for machine tools of the drilling-milling-boring group, on which complex housing parts are processed with a significant number of cutting tools, it is most expedient to use a Chain Tool Magazine (CTM). Let us consider the CTM design of a multioperational machine of the drilling-milling-boring type. Its main purpose is the storage and delivery to the change position of tool blocks with metal-cutting tools. The variant of locating the CTM on the spindle head of the machine does not require additional movements of the manipulator, the spindle head or the magazine to ensure the necessary mutual position of the magazine and the spindle when changing the tool. In this case, the tool change can be carried out at any position of the spindle. However, this arrangement option is advisable only for a sufficiently small number of cutting tools and a small mass of the magazine itself; with a large mass of the magazine, there is a significant loss of time on the auxiliary strokes of the spindle head.
The mass of the magazine and tools also affects the accuracy of processing workpieces, since the inertia forces arising at the moments of starting and braking the magazine when searching for a tool act on the machine-fixture-tool-workpiece system and the microgeometry of the machined surface [26]. In addition, the degree of filling of the magazine with tools, and hence its varying mass, causes different loads on the carrier system of the machine, which leads to a displacement of the spindle axis and also affects the stability of positioning. In connection with the above, we adopt a CTM arrangement for the machining center that involves mounting it on a column (Fig. 1), taken out of the cutting zone of the machine. The tool magazine chain, in two versions, consists of 36 (60) links and is driven by a gearbox. To create a complex of 3D models of tool storages equipped with a variety of tool blocks for milling and axial machining operations (drilling, countersinking, etc.), we
will use the integrated CAD KOMPAS-3D with the technology of comprehensive end-to-end 3D design developed by the ASCON group of companies [27, 28] and the built-in Artisan rendering module. The use of 3D modeling with the principle of associativity in this task allows, along with the procedure for generating the CTM structure, the necessary views and sections to be drawn on the drawings and the initial data to be formed for the tasks of design and technological preparation of production [29]. On the basis of the constructed model of the structure, precise lists of the equipment, products and materials used in the model are formed – specifications and lists of materials. In the environment of the integrated CAD KOMPAS-3D, a solid model of a chain tool magazine with 36 cradles, in which various tool blocks are mounted, has been developed (Fig. 2).
Fig. 1. Fragment of the CTM drawing for multioperational machine with an autooperator
Fig. 2. Three-dimensional model of a chain tool magazine: a – general view; b – longitudinal section; c – cross section; d – even and odd fastening cells
The capacity of the tool magazine – 36 cradles (tool blocks) – was chosen on the basis of an analysis of typical technological processes (milling, drilling, countersinking, boring, etc.) for processing housing parts. In addition, in the process of making this decision, the indicator of rational duplication of the cutting tool for the most commonly used technological passes was taken into account. The creation of a photorealistic image of tooling, using the example of drilling and milling operations, is shown in Fig. 3. For this, the Artisan rendering module built into
the KOMPAS system is used, whose advantages are the simplicity and speed of setting up a complete scene (Snapshot) and the ability to view and generate several snapshots for software rendering. The expansion of such capabilities, using the example of the perception of graphic information by visualizing complex objects with Augmented Reality (AR) and VR applications, is presented in [30]. The Artisan module uses a combination of high-quality hardware OpenGL rendering for setup and viewing, along with ray-tracing rendering for high-quality images and global illumination. The module contains a toolkit for combining materials and lighting, texture and relief. In this case, the textures contain reflections and transparency of elements such as a mirror or glass (Fig. 3).
Fig. 3. Rendering of tooling: a – drilling block; b – high-speed drilling head; c – face-milling cutter
The tool change with this variant of the magazine design is carried out programmatically (using the ATC system). The movement of the chain is driven by an electric motor through a worm gear. The kinematic diagram of the CTM drive is shown in Fig. 4a. The CTM drive consists of a housing containing a worm transmission (Fig. 4b, c), which passes the motion from the electric motor to the CTM drive sprocket, and a cylindrical gear leading to the circular displacement transducer. In the factory version of the worm gear, a worm with an uneven pitch is used, which allows, as wear progresses, the backlash in the worm gear to be adjusted by engaging worm turns of greater thickness. To assess the design parameters of the worm transmission, 3D models have been developed in the KOMPAS environment (Fig. 4b, c) [21, 25].
Fig. 4. 3D simulation of worm transmission
4 The Main Results

An analysis of the stress-strain state based on a 3D model of the tool magazine design and its drive showed a high level of dynamic loads with impacts during frequent tool changes. This determines the choice of the optimization criterion for the basic drive worm gear associated with contact endurance. The parameterization tool of CAD KOMPAS-3D allows the ranges of the design parameters of the worm drive to be outlined preliminarily. In CTM operation with a large number of various tools and dynamic changeovers, the problems of durability and guaranteed service life under sufficiently intensive wear of the drive mechanism parts come to the fore. This paper proposes a solution to a multicriteria optimization problem for the CTM worm gear drive, in which the criteria limiting the performance of power worm gears are: 1) contact endurance (σH); 2) wear resistance (γ). The set of optimized basic parameters of the worm transmission consists of the number of worm threads z1, the number of worm wheel teeth z2 and the worm diameter factor q. This set {z1, z2, q} depends not only on the gear ratio u but also on which performance criterion is set as the main one. We optimize the parameters of the worm gearing for this pair of performance criteria according to the following algorithm.

1. Let us introduce new variables y and s: y = q/z2; s = q/z1.
2. Let us represent the criterion of contact endurance [13, 14] as a function of the variable y:

σH = Cσ · √((1 + y)³/y), (1)

where Cσ = (5300/aW) · √(KH · T2/aW); T2 – torque on the worm wheel, N·m; aW – center distance, mm; KH – loading coefficient.

3. Let us represent the criterion of wear resistance [12] as a function of the variable s:

γ = Cγ · (1 + y) · √(1 + s²)/s, (2)

where

Cγ = KH · T2 · ω1/(2.6 · aW · cos α) = const, (3)

where ω1 – the circular frequency of worm rotation; α – the angle of the worm profile at the pitch diameter.

4. Substituting into function (1) the dependence for Cσ and replacing the calculated stress σH by the allowable value [σH], we solve the resulting equation with respect to T2:

T2 = ([σH]/5300)² · aW³ · y/(KH · (1 + y)³) (4)
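The algebra connecting Eqs. (2)–(5) can be checked numerically: substituting the torque T2 from Eq. (4) into Eqs. (3) and (2) must give the same value of γ as the combined form of Eq. (5). A minimal sketch, in which all input values are illustrative assumptions rather than data from the paper:

```python
import math

# Illustrative (assumed) inputs, not the paper's data
sigma_H = 200.0               # allowable contact stress [sigma_H], MPa
a_w = 100.0                   # center distance a_W, mm
K_H = 1.2                     # loading coefficient
omega1 = 150.0                # circular frequency of worm rotation, rad/s
alpha = math.radians(20.0)    # worm profile angle at the pitch diameter
y, s = 0.117, 1.75            # varied variables from the paper's example

# Eq. (4): worm wheel torque implied by the contact endurance criterion
T2 = (sigma_H / 5300.0) ** 2 * a_w ** 3 * y / (K_H * (1 + y) ** 3)

# Eqs. (3) and (2): wear criterion via the constant C_gamma
C_g = K_H * T2 * omega1 / (2.6 * a_w * math.cos(alpha))
gamma_23 = C_g * (1 + y) * math.sqrt(1 + s * s) / s

# Eq. (5): the same criterion after substituting T2
C_gs = omega1 / (2.6 * math.cos(alpha)) * (sigma_H * a_w / 5300.0) ** 2
gamma_5 = C_gs * y * math.sqrt(1 + s * s) / (s * (1 + y) ** 2)

assert abs(gamma_23 - gamma_5) < 1e-9 * gamma_5  # the two routes agree
```

The agreement holds for any positive inputs, since Eq. (5) is an exact algebraic substitution of (4) into (2)–(3).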
5. Substituting the found value of T2 into (3), dependence (2) takes the form:

γ = C*γσ · y · √(1 + s²)/(s · (1 + y)²), (5)

where C*γσ = (ω1/(2.6 · cos α)) · ([σH] · aW/5300)² = const.

After the transformations, we obtain the objective function γ = γ(s, y) that combines the two criteria for the performance of a worm transmission – contact endurance of the teeth (σH) and wear resistance (γ). For the two-criterion function {σH + γ} presented in this way, we introduce the Lagrange function, which takes the form:

L = γ + λ · g, (6)

where λ – Lagrange multiplier; g = s − y · u = 0 – the equation of connection of the varied variables; u – gear ratio. Using the obtained relations, we find the minimum of the two-parameter function (2) as the solution of a system of three equations with three unknown parameters s, y, λ:

C*γσ · y · [s − 2 · (1 + s²)]/(2 · s² · (1 + y)² · √(1 + s²)) + λ = 0;
C*γσ · (1 − y) · √(1 + s²)/(s · (1 + y)³) − λ · u = 0; (7)
s − y · u = 0.

The system of Eqs. (7) is reduced to a one-parameter equation of the following form:

(1 + y² · u²) · (1 − y) + 0.5 · (1 + y) · [y · u − 2 · (1 + y² · u²)] = 0 (8)
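The reduction from system (7) to Eq. (8) can be verified numerically: expressing λ from each of the first two equations of (7) (with s = y·u substituted and the constant C*γσ set to 1, since it cancels), the two expressions coincide exactly at the root of Eq. (8). A small sketch, with the bisection bracket chosen as an assumption for the paper's u = 15:

```python
import math

u = 15.0  # gear ratio from the paper's example

def lam_from_eq1(y):
    # lambda isolated from the first equation of (7), with C = 1 and s = y*u
    s = y * u
    return -y * (s - 2.0 * (1.0 + s * s)) / (
        2.0 * s * s * (1.0 + y) ** 2 * math.sqrt(1.0 + s * s))

def lam_from_eq2(y):
    # lambda isolated from the second equation of (7), with C = 1 and s = y*u
    s = y * u
    return (1.0 - y) * math.sqrt(1.0 + s * s) / (u * s * (1.0 + y) ** 3)

def f8(y):
    # left-hand side of the one-parameter Eq. (8)
    t = 1.0 + y * y * u * u
    return t * (1.0 - y) + 0.5 * (1.0 + y) * (y * u - 2.0 * t)

# Find where the two lambda expressions agree (bracket 0.01..0.3 assumed)
g = lambda y: lam_from_eq1(y) - lam_from_eq2(y)
lo, hi = 0.01, 0.3
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
y_star = 0.5 * (lo + hi)

# The same y must annihilate the left-hand side of Eq. (8)
assert abs(f8(y_star)) < 1e-6
```

This confirms that Eq. (8) is the λ-elimination of system (7), with the (1 + y)³ denominator in the second equation consistent with differentiating (5) with respect to y.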
Based on dependence (8), the search for the optimal parameters {z1, z2, q} of the worm transmission is realized. Consider an example of calculating Eq. (8). Using the relations for the varied variables s = q/z1 and y = q/z2, we compose the following calculated dependencies:

For u = 28…80 → z1 = 1. Then: q = z1 · s = s;
For u = 14…40 → z1 = 2. Then: q = z1 · s = 2 · s; (9)
For u = 7…20 → z1 = 4. Then: q = z1 · s = 4 · s.

For all combinations [u, z1, q] the condition z2 = q/y = z1 · u is fulfilled. Naturally, the calculated values of q and z2 must correspond to the boundary conditions q = 6.3…24; z2 ≈ 28…80 (Standard 19672-74). Since the three ranges of gear ratios u given in (9) partially overlap each other, for each pair of found optimal parameters [s, y] two variants of their values are possible.
Consider a numerical solution to the two-criterion optimization problem. Given: u = 15; φ = 1.2°; f = tan φ ≈ 0.02, which corresponds to the teeth of a tin bronze wheel at a worm sliding speed of VS ≈ 5 m/s. Determine z1, z2, q that will be optimal for the two performance criteria. Solution. From Eq. (8) for the given constants, we find: y = 0.117; s = 1.75. The value u = 15 is included in two of the ranges given in formulas (9): for z1 = 2 and for z1 = 4. Therefore, the calculations are performed for each of them.

1. For the number of worm threads z1 = 2:

q = z1 · s = 2 · 1.75 = 3.5; z2 = q/y = 3.5/0.117 ≈ 30. (10)

Note that this value of z2 coincides with the calculation by the formula z2 = z1 · u = 2 · 15 = 30.

2. For the number of worm threads z1 = 4:

q = z1 · s = 4 · 1.75 = 7.0; z2 = q/y = 7/0.117 ≈ 60. (11)

The same result for z2 is obtained via the gear ratio u: z2 = z1 · u = 4 · 15 = 60. Thus, for the given conditions, we can recommend the optimal transmission parameters: z1 = 4; q = 7.1; z2 = 60. The final value q = 7.1 is taken from Standard 19672-74 as the closest to the calculated value q = 7.0.
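The whole search can be sketched in a few lines: solve the printed Eq. (8) for y by bisection, recover s from the constraint s = y·u, and screen the candidate z1 values from the ranges in (9) against the boundary conditions of Standard 19672-74. Solving Eq. (8) as printed gives y ≈ 0.119, close to the paper's rounded y = 0.117; the bisection bracket is an assumption.

```python
# Sketch of the search procedure around Eq. (8) for the paper's example u = 15.
u = 15.0

def f8(y):
    # left-hand side of the one-parameter Eq. (8)
    t = 1.0 + y * y * u * u
    return t * (1.0 - y) + 0.5 * (1.0 + y) * (y * u - 2.0 * t)

lo, hi = 0.01, 0.5            # assumed bracket with a sign change of f8
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f8(lo) * f8(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
y = 0.5 * (lo + hi)
s = y * u                     # from the constraint g = s - y*u = 0

candidates = []
for z1, (u_min, u_max) in {1: (28, 80), 2: (14, 40), 4: (7, 20)}.items():
    if u_min <= u <= u_max:   # gear-ratio ranges from (9)
        q = z1 * s
        z2 = z1 * u           # equivalent to q / y
        # boundary conditions q = 6.3..24, z2 = 28..80 (Standard 19672-74)
        if 6.3 <= q <= 24.0 and 28 <= z2 <= 80:
            candidates.append((z1, q, z2))
```

For u = 15 only the z1 = 4 variant survives the q ≥ 6.3 bound (z1 = 2 yields q ≈ 3.6, mirroring the rejected q = 3.5 in the paper), giving q ≈ 7.15 and z2 = 60, in line with the recommended set z1 = 4, q = 7.1, z2 = 60.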
5 Conclusions

1. The procedure for creating a set of 3D models of a chain tool magazine with a capacity of 36 tool blocks for a drilling-milling-boring machining center has been implemented.
2. Three-dimensional modeling of the CTM worm gear drive has been performed using the specialized program “Shafts and mechanical transmission-3D”, which provides a sharp reduction in the design time of mechanical gears and tool housing parts. With the help of the C3D Modeler geometric kernel integrated with the KOMPAS-3D CAD system, the complicated task of constructing the combinations of roundings characteristic of worm and worm wheel designs has been solved.
3. The two-criterion optimization of the worm transmission of the CTM drive by the method of Lagrange multipliers has been carried out. It is shown that optimal transmission parameters {z1, z2, q} are possible only for two- and four-thread worms, and the two-thread variant is excluded by the limitation on the parameter q, whose calculated value q = 3.5 is out of range. The optimal values of the worm transmission parameters z1 = 4; q = 7.1; z2 = 60 were obtained for a worm sliding speed of the order of VS ≈ 5 m/s. The obtained optimal set of parameters underlies an effective procedure for searching for a rational design of the CTM drive.
In the future, the problem of developing an algorithm for constructing the intersection curves of surfaces that pass through the pole of the tooth flank surface and coincide with one of its supported objects will be considered.
References

1. Bushuev, V.: Metal-Cutting Machine. Mashinostroenie, Moscow (2012)
2. Tsankov, P.: Modeling of vertical spindle head for machining center. J. Phys: Conf. Ser. 1553, 012012 (2020). https://doi.org/10.1088/1742-6596/1553/1/012012
3. Manoj, K., Kar, B., Agrawal, R., Manupati, V.K., Machado, J.: Cycle time reduction in CNC turning process using six sigma methodology – a manufacturing case study. In: Machado, J., Soares, F., Trojanowska, J., Ottaviano, E. (eds.) icieng 2021. LNME, pp. 135–146. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-79165-0_13
4. Paulo Davim, J. (ed.): Machining and Machine Tools. Research and Development. Woodhead Publishing, Sawston Cambridge (2013)
5. Schaupp, E., Abele, E., Metternich, J.: Evaluation of relevant factors for developing an optimal tool storage strategy. In: 5th CIRP Global Web Conference Research and Innovation for Future Production, Procedia CIRP 55, pp. 23–28 (2016)
6. Rochov, Ph., Hund, E., Gruss, M., Nuhuis, P.: Development of a mathematical model for the calculation of the tool appropriation delay depending on the tool inventory. Logist. J. Rev. 1–11 (2015). ISSN: 860-7977
7. Chen, C., Tian, H., Zhang, J., et al.: Study on failure warning of tool magazine and automatic tool changer. J. Vibroeng. 18(2), 883–899 (2016)
8. Li, G., Wang, Y., He, J., et al.: Fault forecasting of a machining center tool magazine based on health assessment. Math. Probl. Eng. 2020, 1–10 (2020)
9. Yan, Y., Yin, Y., Xiong, Z., Wu, L.: The simulation and optimization of chain tool magazine automatic tool change process. Adv. Mater. Res. 834–836, 1758–1761 (2014)
10. Gasanov, M., Kotliar, A., Basova, Y., Ivanova, M., Panamariova, O.: Increasing of lathe equipment efficiency by application of Gang-Tool holder. In: Gapiński, B., Szostak, M., Ivanov, V. (eds.) MANUFACTURING 2019. LNME, pp. 133–144. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16943-5_12
11. Levitan, Y., Obmornov, V.P.: Worm Reducers. Mashinostroenie, Leningrad (1985)
12. Chernavsky, S.A., Itskovich, G.M., Kiselev, V.A., et al.: Mechanical Transmission Design: Textbook. Mashinostroenie, Moscow (1976)
13. Litvin, F.L.: The Theory of Gearing. Nauka, Moscow (1968)
14. Litvin, F.L., Qi, F., Fuentes, A.: Computerized Design, Generation, Simulation of Meshing and Contact of Face-Milled Formate Cut Spiral Bevel Gears. NASA/CR-2001 (2001)
15. Pacana, J., Kozik, B., Budzik, G.: Strength analysis of gears in dual path gearing by means of FEM. Diagnostyka 16(1), 41–46 (2015)
16. Fudali, P., Pacana, J.: Application development for analysis of bevel gears engagement using FEM. Diagnostyka 16(3), 47–51 (2015)
17. Kačalová, M., Pavlenko, S.: Strength and dynamic analysis of a structural node limiting the multi-output gear mechanism. Acta Polytechnika 57(5), 316–320 (2017)
18. Bernatsky, I.P., Vjushkin, N.I., Gerasimov, V.K., Komkov, V.N.: Rational choice of the parameters of engagement of worm cylindrical gears. In: Toothed and Worm Gears. Mashinostroenie, Moscow (1974)
19. Shevchenko, S., Mukhovaty, A., Krol, O.: Gear transmission with conic axoid on parallel axes. In: Radionov, A.A., Kravchenko, O.A., Guzeev, V.I., Rozhdestvenskiy, Y.V. (eds.) ICIE 2019. LNME, pp. 1–10. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-22041-9_1
122
O. Krol and V. Sokolov
Design and Simulation for an Indoor Inspection Robot Erika Ottaviano(B)
and Giorgio Figliolini
LARM, Laboratory of Robotics and Mechatronics, University of Cassino and Southern Lazio, via Di Biasio 43, 03043 Cassino, FR, Italy {ottaviano,figliolini}@unicas.it Abstract. In this paper, we propose the design and simulation of an inspection robot that can be effective in indoor or outdoor surveys. Robots are traditionally employed wherever human limits are reached. In the context of the inspection of structures and infrastructure, automatic or semi-automatic solutions can be used for monitoring purposes, securing access through manholes and pipes. Indeed, the proposed robot can be effective for the survey of indoor sites that are difficult or almost impossible to access, and for which tele-operated survey by mobile robots can be a solution. As an illustrative case study, we refer to the inspection of box girder bridges, in which obstacles are represented by steps, longitudinal stiffeners, and debris. We propose the design and simulation of a hybrid rover developed for indoor and outdoor inspections, equipped with suitable sensors to detect defects, such as cracks or corrosion, according to the nature of the inspected elements. Keywords: Mobile robots · Simulation · Robotic inspection · Structural health monitoring
1 Introduction Robots were initially introduced and used in industry for pick-and-place, manipulative, or specific tasks such as painting or welding, particularly when working conditions are dangerous or unsafe. More recently, new applications have focused on robotic systems capable of controlled maneuvers and of inspecting inaccessible or confined unsafe sites, to identify possible damage or defects, such as cracks, corrosion, and/or mechanical defects. A possible solution for the automatic or semi-automatic inspection of structures is represented by mobile robots. Robots are used for education [1], defense, service [2], exploration [3], rescue [4], manufacturing [5], cleaning, and entertainment. Additionally, in recent years, growing interest has been devoted to Structural Health Monitoring (SHM) for structures and infrastructures [6, 7]. Mobile solutions involving wheels, tracks, or a combination of the two represent the most versatile, robust, and energy-saving systems for dealing with flat or uneven terrain, even in the presence of obstacles. Some of the solutions reported in the literature make use of wheels/tracks, such as those reported in [8–12]; other solutions use legs [13–15] because of their versatility, although the high complexity of their control generally limits their use. If the © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 123–131, 2022. https://doi.org/10.1007/978-3-031-09385-2_11
124
E. Ottaviano and G. Figliolini
surface is relatively flat and the type and dimensions of the obstacles are almost known, hybrid solutions are the most suitable ones [16–19]. Among many possible recent applications for the inspection of sites of interest, we have focused our attention on a specific type of infrastructure, represented by box girder bridges, for which the surface is quite large but generally difficult to enter. Box girders are commonly adopted for bridges with a large span, thanks to their high torsional stiffness and to minimize the self-weight of the bridge. In order to reach this goal, internal stiffeners are designed; they can be seen as obstacles and as possible locations of defects. Therefore, these internal stiffeners must be carefully inspected. In addition, although box sections offer high torsional stiffness, internal cross frames can additionally be considered to mitigate the distortion if one web is subject to greater shear than another. In this paper we report the design and simulation of a hybrid rover that can be used for the inspection of confined spaces; nevertheless, it can also be used for outdoor inspection, carrying the same or additional instrumentation.
2 Mechatronic Design 2.1 Mobile Hybrid Robot The hybrid robot is shown in Fig. 1, in which the basic components are illustrated. The main idea is to keep the mechanical design simple, in order to realize a less expensive and robust solution. The robot is operated by tracks and legs, and the battery is carried on-board. Table 1 reports the robot's main features. The overall cost of the robot is kept reasonable by using commercial components and a reduced number of parts, with direct-drive connections.
Fig. 1. Hybrid robot design: a) a 3D view; b) front view; c) side view.
Design and Simulation for an Indoor Inspection Robot
125
2.2 Sensorization The following criteria can be listed for sensorization, namely the ability to move in an indoor cluttered environment that includes obstacles, steps, and debris. Therefore, the robot should be realized in a small and compact size, being efficient in terms of mobility and carrying the inspection sensor with a viewing angle of 180°. Taking into account the most advanced systems for inspection [20], a thermal camera has been considered as the main sensor. In order to be able to move in an indoor environment, an additional front camera with adequate lighting has to be considered. Tele-operated control can be used, either wireless or wired. Indeed, cabling usually includes not only the power wire but also a wire for Ethernet communications, and a string for towing in case of failure. Wheels/tracks should be chosen according to the environment, both for the design of the tire tread and the material, and must ensure interchangeability, as in the solutions adopted for the Quince robot [21]. An important issue is the capacity to upgrade the entire control system, as well as individual elements and the CCD (Charge-Coupled Device) camera, also for repair. Water resistance is a further desirable requirement, in order to prevent damage to the electronics and actuation system, so that the environment does not penetrate the robot chassis. Further requirements are related to the overall stiffness, the presence of LED (Light Emitting Diode) lights for the camera system, and, finally, easy portability. The control scheme is designed taking into account the design principles reported in [22, 23]. The hybrid rover can be operated by remote control; additionally, the HMI is used to visualize the sensors on board. A front camera is mounted and used for robot localization, which is very important in the case of an indoor inspection with thick walls, for which wireless control is not possible.
The above-mentioned camera is used for moving in the environment and allows infrared vision when a dusty, dark environment is the common scenario, as for boxed girder bridges or for any closed environment. A second sensor by FLIR is used to take both visual and thermal images; this is the main sensor used for inspection purposes. It is mounted on a wrist designed ad hoc for compactness and large orientation capabilities. It is designed to have three rotations and is controlled by a virtual joystick. Table 1 summarizes the main characteristics of the robot. As can be noted, its compact design allows entry through the common openings used for inspection; the robot uses two DOFs for the tracks and 1 (optionally 2) actuator(s) for the legs (Figs. 1 and 2).

Table 1. Robot features.

Description         Specification
Robot               Hybrid robot
Size (L × H × W)    355 × 300 × 407 mm
Mass                4.5 kg
Max speed           Up to 0.6 m/s
Actuation           24 VDC, 47 rpm
Max payload         Up to 7 kg
DOFs                3 (4)
Fig. 2. Overall sensorization system: 1) gimbal with the FLIR sensor onboard; 2) set-up of the robot with sensorization.
3 Simulation The designed robot was tested in several operating conditions to verify its operational features in realistic scenarios and environments. In particular, as previously stated, the system was tested for infrastructure inspection. A boxed girder bridge was designed, referring to a real structure, as reported in Fig. 3. Box girders, in steel and in steel-concrete composite form, are commonly employed in the case of long spans, for which their high torsional stiffness is of great benefit and the self-weight of the bridge has to be minimized. A standard configuration makes use of so-called open trapezoidal girders, which present steel bottom flanges, inclined steel webs, and a thin steel flange on the upper part of each web. Each of these structural parts may present damage or cracks and therefore needs to be inspected.
Fig. 3. Box girder bridge taken as an illustrative example: a) real structure; b) the designed one.
Another important issue is related to the access to the site to be inspected. Reinforced concrete bridges use a deck slab that creates closed cells. For each of these cells, it is mandatory to provide internal access, realized by openings that are large enough and well placed, for performing inspection and maintenance. With these specifications in mind, the modelled scenario for the robot may consist of flat or inclined surfaces, with steps or obstacles to climb, as shown in the simulations of Figs. 4, 5 and 6. In Fig. 4 a motion sequence of the robot surpassing an internal stiffener is reported.
Fig. 4. Motion sequence for overpassing internal stiffeners according to Fig. 3.
Fig. 5. Simulation results for Fig. 4: a) torques for the left and right motors (M1 and M2) and legs (M3); b) coordinates PCM of the center of mass (CM).
Simulation of the robot interacting with different scenarios is an efficient tool for testing its performance, and it can be used together with experimental activity carried out with step-field pallets, which represent the gold-standard evaluation method, as reported in [24]. For the proposed simulation, the obstacle is an internal stiffener whose height is 80 mm. Figure 5a) reports the needed input torque for the left (M1) and right (M2) actuators, and for the additional leg (M3); the plots related to the rear legs show similar behavior and are therefore not repeated. Figure 5b) shows the COM (Center Of Mass) coordinates of the rover. Figure 6 gives the velocity and the acceleration of the COM during the simulation. These results show a good behavior of the robot in overpassing obstacles during its operation: the robot can overpass an obstacle that is 135% of the track height without overturning; the simulations included the influence of the sensor and the gimbal on board. Simulation software tools are efficient for large-scale systems designed and tested to operate in urban environments, as described in [25, 26]. The hybrid rover has been built and assembled, and the first experiments are ongoing in a laboratory environment.
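The 135% climbing limit observed in simulation can be turned into a simple feasibility check before a mission. The sketch below is purely illustrative: the helper name is invented, and the 60 mm track height used in the example is an assumed value, not a figure taken from the paper.

```python
# Hypothetical helper: decide whether an obstacle is within the rover's
# empirical climbing limit of 135% of the track height reported by the
# simulations. All numbers except the 80 mm stiffener are assumptions.

def can_overpass(obstacle_height_mm: float, track_height_mm: float,
                 max_ratio: float = 1.35) -> bool:
    """True if the obstacle height is below max_ratio * track height."""
    return obstacle_height_mm <= max_ratio * track_height_mm

# The 80 mm internal stiffener against an assumed 60 mm track:
print(can_overpass(80.0, 60.0))  # 80 <= 1.35 * 60 = 81 -> prints True
```

Such a check only covers geometry; the torque and center-of-mass results of Fig. 5 remain necessary to confirm that the climb is dynamically feasible.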
Fig. 6. Simulation results of Fig. 4 for the COM: a) velocities vx, vy, vz; b) accelerations ax, ay, az.
4 Conclusions In this paper, the main characteristics and simulations are proposed for a hybrid rover designed for inspection purposes. In particular, the developed 3D model permits the realization of simulation tests using a realistic scenario and proper operational conditions. The first simulation results show good characteristics of the rover during its operation in an indoor environment constituted by a box girder bridge, taken as an illustrative case study. Design simulations will be used, together with internal sensor outcomes, to build control strategies for improving mobility when vision is limited. Acknowledgments. This paper is part of a project that has received funding from NATO, Science for Peace and Security Programme Multi-Year Project Application, G5924 – “IRIS – Inspection and security by Robots interacting with Infrastructure digital twinS”.
References 1. Papadopoulos, I., Lazzarino, R., Miah, S., Weaver, T., Thomas, B., Koulouglioti, C.: A systematic review of the literature regarding socially assistive robots in pretertiary education. Comput. Educ. 155, 103924 (2020). https://doi.org/10.1016/j.compedu.2020.103924 2. Sprenger, M., Mettler, T.: Service robots. Bus. Inf. Syst. Eng. (2015). https://doi.org/10.1007/ s12599-015-0
3. Candela, A., Edelson, K., Wettergreen, D.: Mars rover exploration combining remote and in situ measurements for wide-area mapping. In: Proceedings of International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS 2020), Paper no. 5024 (2020) 4. Swathi, S., Sowandarya, P.: Rescue robot-a study. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 3(3), 158–161 (2014) 5. Knast, P.: Similarities and differences in the process of automating the assembly of rigid bodies and elastic elements of pneumatic tires. Assembly Tech. Technol. (2021). https://doi. org/10.15199/160.2021.3.4 6. Yu, S.N., Jang, J.H., Han, C.S.: Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel. Autom. Constr. 16, 255–261 (2007) 7. Tung, P.C., Hwang, Y.R., Wu, M.C.: The development of a mobile manipulator imaging system for bridge crack inspection. Autom. Constr. 11, 717–729 (2002) 8. Ottaviano, E., Rea, P., Castelli, G.: THROO: a tracked hybrid rover to overpass obstacles. Adv. Robot. 28(10), 683–694 (2014). https://doi.org/10.1080/01691864.2014.891949 9. Rea, P., Ottaviano, E.: Design and development of an inspection robotic system for indoor applications. Robot. Comput. Integr. Manuf. 49, 143–215 (2018) 10. Rea, P., Ottaviano, E., Castillo-García, F.J., Gonzalez-Rodríguez, A.: Inspection robotic system: design and simulation for indoor and outdoor surveys. In: Machado, J., Soares, F., Trojanowska, J., Yildirim, S. (eds.) icieng 2021. LNME, pp. 313–321. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-79168-1_29 11. Rea, P., Pelliccio, A., Ottaviano, E., Saccucci, M.: The heritage management and preservation using the mechatronic survey. Int. J. Archit. Heritage 11(8), 1121–1132 (2017). https://doi. org/10.1080/15583058.2017.1338790 12. Ottaviano, E., Rea, P.: Design and operation of a 2-DOF leg-wheel hybrid robot. Robotica 31(8), 1319–1325 (2013) 13. 
Figliolini, G., Rea, P., Conte, M.: Mechanical design of a novel biped climbing and walking robot. In: Parenti Castelli, V., Schiehlen, W. (eds.) ROMANSY 18 Robot Design, Dynamics and Control. CICMS, vol. 524, pp. 199–206. Springer, Vienna (2010). https://doi.org/10.1007/978-3-7091-0277-0_23 14. Figliolini, G., Ceccarelli, M., Di Gioia, M.: Descending stairs with EP-WAR3 biped robot. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, vol. 2, pp. 747–752 (2003) 15. Boston Dynamics. https://www.bostondynamics.com/products/spot. Accessed 28 Feb 2022 16. Maurtua, I., et al.: MAINBOT - mobile robots for inspection and maintenance in extensive industrial plants. Energy Procedia 49, 1810–1819 (2014) 17. PETROBOT. http://petrobotproject.eu/. Accessed 28 Feb 2022 18. Leggieri, S., Canali, C., Cannella, F., Caldwell, D.G.: Self-reconfigurable hybrid robot for inspection. In: 2a Conference Robotica e Macchine Intelligenti I-RIM 2020, paper 49 (2020) 19. Lopez-Lora, A., Sanchez-Cuevas, P.J., Suarez, A., Garofano-Soldado, A., Ollero, A., Heredia, G.: MHYRO: Modular HYbrid RObot for contact inspection and maintenance in oil & gas plants. In: 2020 IEEE/RSJ International Conference IROS, Las Vegas (virtual) (2020) 20. EVO II. https://advexure.com/collections/autel-evo-ii-series. Accessed 28 Feb 2022 21. Quince robot. https://www.rm.is.tohoku.ac.jp/quince_eng/. Accessed 28 Feb 2022 22. Sorli, M., Figliolini, G., Pastorelli, S., Rea, P.: Experimental identification and validation of a pneumatic positioning servo-system. In: Power Transmission and Motion Control, PTMC 2005, Bath, pp. 365–378 (2005) 23. Ottaviano, E., Vorotnikov, S., Ceccarelli, M., Kurenev, P.: Design improvements and control of a hybrid walking robot. Robot. Auton. Syst. 59(2), 128–141 (2011)
24. Jacoff, A., Downs, A., Virts, A., Messina E.: Stepfield pallets: repeatable terrain for evaluating robot mobility. In: Performance Metrics for Intelligent Systems (PerMIS), Gaithersburg, pp. 29–34 (2008) 25. Ottaviano, E., Ceccarelli, M., Palmucci, F.: An application of CaTraSys, a cable-based parallel measuring system for an experimental characterization of human walking. Robotica 28(1), 119–133 (2010) 26. Gonzalez-Rodriguez, A., Castillo-Garcia, F.J., Ottaviano, E., Rea, P., Gonzalez-Rodriguez, A.G.: On the effects of the design of cable-driven robots on kinematics and dynamics models accuracy. Mechatronics (2017). https://doi.org/10.1016/j.mechatronics.2017.02.002
A Review of Fault Detection Methods in Smart Pneumatic Systems and Identification of Key Failure Indicators Philip Coanda(B)
, Mihai Avram , Daniel Comeaga , Bogdan Gramescu , Victor Constantin , and Emil Nita University “Politehnica”, Bucharest, Romania [email protected]
Abstract. Smart pneumatic systems represent a major part of overall actuation systems, comparable to electric and hydraulic systems in market share. Considering this, fault detection and maintenance represent a key point in the long-term reliability of pneumatic systems. In addition, proper resource management is needed to ensure long-term sustainability given the current global situation. This paper aims to review the main areas where smart pneumatic systems are used, highlight component failure modes, and identify the main failure detection methods and the parameters of interest to be monitored. Keywords: Smart pneumatic systems · Failure analysis · Acoustic emission
1 Introduction As research, development, and continuous industrialization have demonstrated considerable growth over the past decades, driven by the demands of the consumer and industrial markets, it is important to note that ongoing social development is primarily based on technical systems that can process and produce material goods for consumption. To function properly, most industrial systems that work in the production area are built around actuation systems. We can divide these actuation systems into four categories, based on the type of energy used: electric, hydraulic, pneumatic, or mixed. Today, pneumatic systems represent a competitive alternative to conventional electric or hydraulic systems. Given their working principles, they provide high reliability even under difficult working conditions, are easy to implement and maintain, and can also be used in applications where rigorous hygiene is required, such as the pharmaceutical, food, or medical industries. Moreover, classic pneumatic actuation systems can be used in areas where electromagnetic interference may occur, since no electronics are necessary for classic actuation systems. According to the market study [1] conducted in 2018 by BCC Research, the global pneumatic equipment market is expected to reach $113.9 billion in 2022, up from $82.5 billion in 2017. This confirms the increased demand for pneumatic equipment and drive systems, and thus the need for intelligent maintenance methods, in line with today's industry standards. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 132–142, 2022. https://doi.org/10.1007/978-3-031-09385-2_12
A Review of Fault Detection Methods in Smart Pneumatic Systems
133
As the IoT (Internet of Things) market is also on the rise and an increasing number of industrial facilities aim to adopt Industry 4.0 standards, fault detection in smart pneumatic systems is a subject of major interest, given its implications for the overall reliability and productivity of the facilities. Furthermore, proper detection and identification of failures can also help reduce the overall carbon footprint, since proper maintenance can prolong the useful life of the system. In the following, a detailed analysis of operating areas, failure mode analysis, and failure detection methods will be presented to better understand the failure mechanisms and to identify the key parameters to be monitored in smart pneumatic systems.
2 Smart Pneumatic Systems: Constructive Characteristics and Operation Areas Given the advantages of smart pneumatic systems [2–4], many industries have adopted this type of system as their main actuation system. As sensors and communication interfaces can be easily adapted and connected today, offering the means to create a distributed network of systems, smart pneumatic systems are used to create added value for the processes they conduct. Although smart pneumatic systems can be found in a variety of industrial systems, the main components found in almost every smart pneumatic system are the following:

• An air preparation system – used to ensure the quality of air needed for proper operation.
• Pneumatic air pipes – used to transport compressed air from the source to the system.
• Pneumatic valves – used to control how air is supplied.
• Pneumatic motor (cylinder) – used to transform pneumatic power into mechanical power. Various pneumatic motors (cylinders) are available on the market, depending on the application.
• Sensors and transducers – used to analyze or control specific processes or parameters of the smart pneumatic system. Today, more and more sensors and transducers are being used, creating added value for the system in the overall industrial process.

In Fig. 1, a classical and a smart pneumatic system can be observed. Taking into account the advantages of smart pneumatic systems, especially in the control and communication areas, the study of the literature highlights multiple areas where smart pneumatic systems are used, industrial activities included. An intelligent electro-pneumatic actuator is presented in the paper [5] by Glebov et al., used for precision positioning and to increase travel speed in industrial robots. Paper [6] deals with the problem of the maximum applied force when handling specific objects with a pneumatic gripper. Forces must be adapted depending on the handled object, so a smart controller is designed for this specific task.
Paper [2] presents an intelligent pneumatic motor equipped with a MEMS encoder for accurate position control, and paper [7] presents experimental research on wireless control of fluid actuation systems. As we can observe, smart actuators and pneumatic systems are used in multiple areas, with compounding benefits for the overall system.
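As a rough illustration of the kind of closed-loop position control these smart actuators perform (this is not the controllers of [2, 5–7]; the class name, gains, saturation limits, and the toy example below are all hypothetical), a discrete PID loop comparing an encoder reading with a setpoint and producing a saturated valve command can be sketched as:

```python
class PIDPositionController:
    """Minimal discrete PID for a pneumatic axis (illustrative only)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float,
                 u_min: float = -1.0, u_max: float = 1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_mm: float, position_mm: float) -> float:
        """One control step: encoder reading in, valve command out."""
        error = setpoint_mm - position_mm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = (self.kp * error + self.ki * self.integral
             + self.kd * derivative)
        # Saturate to the normalized valve command range.
        return max(self.u_min, min(self.u_max, u))

# Hypothetical gains; a large position error saturates the valve command.
ctrl = PIDPositionController(kp=0.8, ki=0.1, kd=0.05, dt=0.01)
print(ctrl.update(setpoint_mm=100.0, position_mm=40.0))  # prints 1.0
```

In a real smart pneumatic axis, the valve command would drive a proportional valve, and anti-windup handling of the integral term would be needed whenever the output saturates.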
134
P. Coanda et al.
Fig. 1. A classical pneumatic system is presented on the left side of the figure and a smart pneumatic system by Aventics on the right side.
3 Failure Mode Analysis of Smart Pneumatic Systems When speaking about smart pneumatic systems, as stated above, multiple configurations can be observed, depending on the application. Still, a minimum set of components present in every smart pneumatic system can be identified as follows: an air preparation system, a series of air pipes for air transportation, a pneumatic valve, a pneumatic motor, and, of course, a series of sensors and transducers. Each component has a different failure mode, mainly because of its constructive characteristics. For example, if we consider the pneumatic cylinder, some failure mechanisms appear differently in a rodless pneumatic cylinder compared to a single-rod pneumatic cylinder. On the other hand, radial loads can generate friction, damaging the piston rod and seals. As each component has a different failure mechanism, Table 1 treats the main possible causes and effects. Due to the electronics and sensors added to the pneumatic system, more components can fail, due to different failure mechanisms. This adds complexity to system maintenance, as the new failure mechanisms do not follow the same failure modes as the pneumatic components. Filtering the possible causes and effects in Table 1, we can determine that seal damage is the cause that generates the most failures, and the effect generated by it is the alteration of the pressure-flow parameters, which is, generally speaking, caused by a leak. This is not a new conclusion, as it is widely known that leaks at the pneumatic system level are most likely to appear when wear arises in different components. As will be presented in the following chapter, leak detection solutions are the main approaches in fault detection methods. Some of these solutions rely on sensors that are already available within the system; others require new sensors to be used.
Table 1. Analysis of the failure modes of the main components of the smart pneumatic system.

Component: Pneumatic motor
Fault: Fluid leaks; cylinder blockage; intermittent operation
Possible cause: Accumulation of deposits (poor air quality); oxidation; seal damage; usage above specified limit values
Effect: Operation below specified parameters; increased temperature due to friction; alteration of pressure-flow parameters

Component: Pneumatic valve
Fault: Blocked in one position; fluid leaks
Possible cause: Accumulation of debris (poor air quality); seal damage; faulty control
Effect: Operation below parameters or faulty operation; alteration of pressure-flow parameters

Component: Flow section
Fault: Throttle blockage; fluid leaks
Possible cause: Accumulation of debris (poor air quality); seal damage
Effect: Alteration of pressure-flow parameters

Component: Air preparation system
Fault: Fluid leaks; improper air preparation (lubrication, drying, filtration)
Possible cause: Poor maintenance of the air preparation system; failure to comply with technical service instructions; seal damage
Effect: Alteration of pressure-flow parameters; fluid delivery with altered parameters (e.g. humidity); increased risk of collateral damage

Component: Sensors and transducers
Fault: Contamination; fluid leaks; sensor locks
Possible cause: Failure to comply with environmental conditions for sensors or transducers; exceeding the maximum limits allowed by the manufacturer; faulty power supply; accumulation of debris; seal damage
Effect: Wrong sensor reading; alteration of pressure-flow parameters; damage to the equipment where the sensor is mounted; operation below system parameters
3.1 Pneumatic Cylinder: An In-Depth Analysis of Failure Modes Considering that the pneumatic cylinder is the effector element that converts pneumatic power into mechanical power, multiple papers [8–10] concentrate their attention on experimental analysis, failure analysis, and fault detection for it. Since the failure modes of pneumatic cylinders have been widely known for a long time, given that their constructive characteristics remain the same even in new designs, and since the same constructive characteristics are shared over a broad range of products, techniques such as accelerated life testing are also used to determine or validate the reliability of a pneumatic cylinder used under specific conditions. This type of experimental analysis can be observed in papers
[11–14], where pneumatic cylinders are used under extreme and difficult conditions and, after a given time or when all samples fail, a statistical analysis is performed and the information of interest is extracted. To validate the reliability data, the pneumatic cylinders are analyzed at the end of the test to verify whether the failure occurred according to real-life conditions. Accelerated life testing procedures for pneumatic equipment are presented in ISO standards, specifically ISO 19973-1 to -6 and ISO/TR 16194. Depending on the test plan, accelerated testing methods can be used to determine the reliability parameters of the pneumatic cylinder or to determine new failure mechanisms. As the failure mechanisms of pneumatic cylinders are widely known, this type of analysis can be used to validate the failure mechanisms of newly developed pneumatic cylinders that do not share the same constructive characteristics as classic pneumatic cylinders. One of the simplest pneumatic cylinders is presented in Fig. 2 – a single-rod, double-acting pneumatic cylinder. The main failure mechanism centers on the wear of the seals presented in Fig. 2. These seals are most of the time built from a rubber-plastic compound and, considering the working principle of the pneumatic cylinder, they are exposed to constant friction with the cylinder's tube. As friction generates heat (in addition to the heat of the environment around the pneumatic cylinder), the structural characteristics of the seals change and wear arises. Of course, multiple parameters are involved in the wear of these seals, among them: the correct application, design, and mounting of the pneumatic cylinder, as radial loads increase the friction; proper lubrication and proper overall parameters of the air used for actuation; and correct temperature margins, as temperature helps to evaporate the lubricants, generating in this way higher friction in the seals.
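The statistical step of such life tests often amounts to fitting a Weibull distribution to the observed failure cycle counts. The sketch below is a generic median-rank-regression fit, not the procedure prescribed by the ISO standards cited above; the function names and the cycle counts in the example are invented for illustration.

```python
import math

def weibull_mrr(cycles):
    """Median-rank regression Weibull fit; returns (beta, eta).

    beta is the shape parameter (wear-out if > 1), eta the scale
    parameter (characteristic life, the 63.2% failure point).
    """
    data = sorted(cycles)
    n = len(data)
    xs, ys = [], []
    for i, t in enumerate(data, start=1):
        f = (i - 0.3) / (n + 0.4)               # Bernard's median rank
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f))) # linearized Weibull CDF
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))   # least-squares slope
    eta = math.exp(mx - my / beta)              # from the intercept
    return beta, eta

def b10_life(beta, eta):
    """Cycles by which 10% of units are expected to have failed."""
    return eta * (-math.log(0.9)) ** (1.0 / beta)

# Invented failure data for five cylinders tested to failure (cycles):
beta, eta = weibull_mrr([1.2e6, 1.5e6, 1.8e6, 2.1e6, 2.6e6])
print(f"beta={beta:.2f}, eta={eta:.0f}, B10={b10_life(beta, eta):.0f}")
```

A shape parameter well above 1 would be consistent with the wear-out seal-degradation mechanism described in the text, while a B10 life gives a practical maintenance-interval figure.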
Fig. 2. Image of a simple double-acting pneumatic cylinder from [15]. This figure presents the main architecture and components of a classic pneumatic cylinder.
A Review of Fault Detection Methods in Smart Pneumatic Systems
As the seals wear during usage, the pneumatic cylinder suffers losses: air under pressure escapes either to the atmosphere, in the case of rod seal wear, or between chambers, in the case of piston seal wear. Normally, wear appears in both seals, as friction and radial loads affect the whole piston rod. Environmental parameters also have a strong impact on wear propagation; temperature, for example, is a good accelerator of wear.
4 Fault Detection Methods for Smart Pneumatic Systems

The preceding chapters presented the constructive characteristics and failure mode analysis of smart pneumatic systems, with an in-depth overview of pneumatic cylinders. This chapter reviews fault detection methods, considering the main failure mechanisms that occur in this type of system. Determining the failure-generating factor, and hence monitoring the parameters of interest to limit or prevent a possible failure, has attracted major interest in recent years in the research areas where smart pneumatic systems are actively used. Whereas at the end of 1992 pneumatic cylinder failure analysis was performed by identifying the components with the highest risk of failure within the cylinder [8], today, thanks to advances in processing power, failure analysis and reliability estimation methods have evolved toward modern approaches such as neural networks and intelligent monitoring of process parameters [16–21]. One of the most common signs of failure is the presence of air leaks in the system, which confirms the conclusion of the previous chapter that losses contribute strongly to poor system operation. This problem is treated in several specialized papers, depending on the type of system and its usage, emphasizing that the study of these methods remains of continuous interest for both industry and research. For example, paper [21] focuses on identifying leaks within the system and estimating their size. A specially designed experimental setup introduces leaks at different places, and a multitude of pressure and flow sensors capture how these parameters change within the system when a leak arises. Multiple measurements are considered, and a specific diagnostic model is proposed on the basis of the experimental data.
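As a minimal illustration of the idea behind such pressure/flow-based diagnostics (not the actual diagnostic model of [21]), a leak can be flagged when the supply flow exceeds what the actuators account for; the flow values and threshold below are hypothetical.

```python
# Illustrative residual-based leak check: a leak shows up as supply flow
# that the actuators cannot account for. All numbers are hypothetical.

def leak_residual(supply_flow, actuator_flows):
    """Flow unaccounted for by the actuators (L/min)."""
    return supply_flow - sum(actuator_flows)

def classify_leak(residual, threshold=2.0):
    """Flag a leak when the residual exceeds a calibration threshold."""
    return residual > threshold

# Healthy system: supply flow matches total consumption
r_ok = leak_residual(41.0, [20.5, 20.3])
# Faulty system: about 5 L/min escapes between supply and actuators
r_leak = leak_residual(46.0, [20.5, 20.3])

print(classify_leak(r_ok), classify_leak(r_leak))
```

The residual magnitude itself can serve as a crude leak-size estimate, which is the quantity the diagnostic model in [21] is built to predict more accurately.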
Thesis [22] presents in detail multiple analysis methods aimed at detecting failures, following the same principle of introducing leaks into the system with a specific block of pneumatic valves. The novelty of this thesis is that leak detection is performed on a multi-actuator pneumatic system, so localization of the leak is also necessary. Multiple approaches are presented for both system modelling and testing. For pneumatic cylinders, in addition to monitoring pressure and flow parameters, some research addresses acoustic emission (AE), measuring the high-frequency vibrations that arise when there is a leak in the system. Papers [23–25] focus on leak analysis and identification in pneumatic cylinders using acoustic emission. When air under pressure escapes between the chambers or outside the cylinder, a series of high-frequency vibrations arises in the cylinder tube, whose intensity changes with the size of the leak and the location of the measurement. The advantage of this method
is that it does not require additional pressure or flow sensors within the system: the acoustic emission sensor can be used with a portable device, and with a single sensor multiple cylinders can be verified during operation. This type of analysis can also be extended to other pneumatic components. In addition, an acoustic emission workflow analysis will be performed on a pneumatic cylinder to highlight the working principle.

4.1 Acoustic Emission: A Noninvasive Fault Detection Method

While classical methods for fault detection in smart pneumatic systems are widely described in the literature, noninvasive fault detection methods are now on the rise, driven by technological progress and the availability of acoustic emission sensors and amplifiers at a reasonable price. This type of analysis is widely used for detecting failures in pressurized containers and pipes, with multiple types of analyses available in the literature [26–30]. Figure 3 presents the working principle of acoustic emission fault detection in pneumatic cylinders. The AE sensor is placed on the pneumatic cylinder tube, where the high-frequency vibrations generated by leaks can be measured. The signal is then amplified and analyzed using specific digital signal processing techniques to extract the needed information. Most of the time, leakage propagation follows a pattern that can be transferred to other components sharing the same characteristics.
Fig. 3. Acoustic emission fault detection workflow for pneumatic cylinders. Leakage generates high-frequency stress waves which are measured and analyzed using specific digital signal processing techniques.
The workflow for creating a fault detection method from acoustic emission measurements is presented in Fig. 4. During the lifetime of the pneumatic cylinder, measurements are performed periodically to better understand the propagation of the failure and the behavior of the cylinder as the failure becomes more severe. Once the data are acquired, proper filtering is needed, as different materials and different failures generate acoustic stress waves in different frequency bands. For pneumatic cylinders, the acoustic stress waves must be separated from the many other vibrations that arise at the system level. The specific literature [24, 25] suggests 400 kHz as the upper frequency limit for this type of analysis. Filtering must also suppress shocks within the system, which can degrade the overall data quality. The band can then be split into 100 kHz intervals, for example: 100–200 kHz, 200–300 kHz, and 300–400 kHz.
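The band-splitting step can be sketched as follows, assuming a hypothetical 2 MHz sampling rate and Butterworth band-pass filters; a synthetic signal stands in for a real AE record.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2_000_000  # assumed sampling rate: 2 MHz (Nyquist 1 MHz, above the 400 kHz limit)

def band_energy(signal, low_hz, high_hz, fs=FS, order=4):
    """RMS of the signal restricted to one analysis band."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    return np.sqrt(np.mean(filtered ** 2))

# Synthetic stand-in for an AE record: a 250 kHz leak-like tone buried in
# low-frequency machine vibration, so only the 200-300 kHz band should respond.
t = np.arange(0, 0.005, 1 / FS)
signal = 0.5 * np.sin(2 * np.pi * 5_000 * t) + 0.1 * np.sin(2 * np.pi * 250_000 * t)

bands = [(100_000, 200_000), (200_000, 300_000), (300_000, 400_000)]
energies = {f"{lo//1000}-{hi//1000} kHz": band_energy(signal, lo, hi)
            for lo, hi in bands}
print(energies)
```

In a real analysis, the band limits and filter order would be chosen from the material- and failure-specific frequency content discussed above.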
Fig. 4. Workflow for detecting faults using acoustic emission for pneumatic cylinders. Data are acquired and analyzed periodically during the lifetime of the pneumatic cylinder until failure. When the pneumatic cylinder fails, a failure analysis is performed to verify the failure mechanism. The acquired data are then used in further analyses.
Specific parameters can be derived from the filtered acoustic emission data, such as the RMS, and analyses such as the Fourier transform, to identify the frequency spectrum, and the wavelet transform, for nonstationary signals, can be applied [31]. Finally, a failure analysis is performed to verify that the pneumatic cylinder wear was normal, i.e., that no other failure mechanism occurred. Modifications of these parameters, such as frequency spectrum shifts or higher RMS amplitudes, are expected as the wear level increases. Correlating the measured wear level with the acoustic emission parameters can yield a novel, noninvasive fault detection method. Using acoustic emission to characterize failures in smart pneumatic systems, and specifically in pneumatic cylinders, can create the conditions for improved fault detection at the system level once the failure modes of the components of smart pneumatic systems have been characterized and measured. Using specific digital signal processing techniques, the authors aim in further experiments to create a method that detects failure not by measuring the elements one by one, but by measuring the acoustic emission stress waves at the system level. This approach can lead to an improved fault detection method for mechanical systems in general, not only for intelligent pneumatic systems.
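A minimal sketch of the feature-extraction step, computing the RMS and dominant frequency of periodic AE records (synthetic data; a real analysis would add wavelet features for nonstationary signals):

```python
import numpy as np

def ae_features(x, fs):
    """Basic AE trend features of one record: RMS and dominant frequency."""
    rms = np.sqrt(np.mean(x ** 2))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return rms, dominant

# Hypothetical periodic measurements: a growing leak raises the amplitude of
# the leak-related spectral component, and with it the band RMS.
fs = 2_000_000
t = np.arange(4096) / fs
healthy = 0.02 * np.sin(2 * np.pi * 220_000 * t)
worn = 0.08 * np.sin(2 * np.pi * 220_000 * t)

rms_h, f_h = ae_features(healthy, fs)
rms_w, f_w = ae_features(worn, fs)
print(rms_h, rms_w, f_h, f_w)
```

Tracking such features over the cylinder's lifetime, and correlating them with the wear found in the final failure analysis, is the core of the workflow in Fig. 4.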
5 Conclusions

This paper began with a brief description of the constructive characteristics of smart pneumatic systems and a brief analysis of the failure modes of their principal components. Then, an in-depth overview of the pneumatic cylinder was presented, the latter being one of the most important components within the system. Furthermore, a detailed analysis of the fault detection workflow using acoustic emission in pneumatic cylinders was performed, and the steps for creating a noninvasive fault detection method were presented. The study of the solutions mentioned above offers a perspective on the parameters of interest that can be monitored to determine failures and leaks within smart pneumatic systems. This analysis is the basis for the design of experimental setups for further testing and analysis in the research of fault detection methods in smart pneumatic systems. Therefore, a series of parameters of interest of the smart pneumatic system can be determined, listed in order of their importance:

• Pressure of the pneumatic cylinder chambers.
• Flow rate at different locations within the system.
• Velocity and acceleration of the pneumatic cylinder.
• Acoustic emission analysis: measurement of high-frequency vibrations.
Acoustic emission fault detection represents a noninvasive method that can be exploited to create a distributed fault detection system in line with Industry 4.0 standards. Its main advantage is that no other sensor (pressure, flow, or displacement) is needed within the pneumatic system, and one AE sensor can be used to measure multiple pieces of equipment. In addition, failures can be characterized and the results reused in other systems that share the same constructive characteristics, providing a broader application scope for the analysis.
References

1. Pneumatic Equipment Market: Size, Share and Technology Report. https://www.bccresearch.com/market-research/instrumentation-and-sensors/pneumatic-equipment-technologies-and-global-markets-report.html. Accessed 11 Sept 2021
2. Suzumori, K., Tanaka, J., Kanda, T.: Development of an intelligent pneumatic cylinder and its application to pneumatic servo mechanism. In: Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 479–484 (2005)
3. Faudzi, A.A.M., Suzumori, K., Wakimoto, S.: Distributed physical human machine interaction using intelligent pneumatic cylinders. In: 2008 International Symposium on Micro-NanoMechatronics and Human Science, pp. 249–254 (2008)
4. Faudzi, A.A.M., bin Osman, K., Rahmat, M.F., Mustafa, N.D., Azman, M.A., Suzumori, K.: Controller design for simulation control of intelligent pneumatic actuators (IPA) system. Procedia Eng. 41, 593–599 (2012)
5. Glebov, N., Kruglova, T., Shoshiashvili, M.: Intelligent electro-pneumatic module for industrial robots. In: 2019 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon), pp. 01–04 (2019)
6. Kaitwanidvilai, S., Parnichkun, M.: Force control in a pneumatic system using hybrid adaptive neuro-fuzzy model reference control. Mechatronics 15, 23–41 (2005)
7. Wang, C.-L., Renn, J.-C.: Study on the motion control of pneumatic actuator via wireless bluetooth communication. In: 2018 IEEE International Conference on Applied System Invention (ICASI), pp. 601–604 (2018)
8. Belforte, G., Raparelli, T., Mazza, L.: Analysis of typical failure situations in pneumatic cylinders under load. Lubr. Eng. 48, 840–845 (1992)
9. Chen, J., Qi, X., Liu, B., Wang, D.: Analysis of failure mechanism and stress influence on cylinder. In: Proceedings of 2011 International Conference on Electronic Mechanical Engineering and Information Technology, pp. 3543–3546 (2011)
10. Jiménez, M., Kurmyshev, E., Castañeda, C.E.: Experimental study of double-acting pneumatic cylinder. Exp. Tech. 44(3), 355–367 (2020). https://doi.org/10.1007/s40799-020-00359-8
11. Chen, J., Zio, E., Li, J., Zeng, Z., Chong, B.: Accelerated life test for reliability evaluation of pneumatic cylinders. IEEE Access 6, 75062–75075 (2018)
12. Chen, J., Wu, Q., Bai, G., Ma, J., Wang, Z.: Accelerated life testing design based on wear failure mechanism for pneumatic cylinders. In: 2009 8th International Conference on Reliability, Maintainability and Safety, pp. 1280–1285 (2009)
13. Chang, M.S., Shin, J.H., Kwon, Y.I., Choi, B.O., Lee, C.S., Kang, B.S.: Reliability estimation of pneumatic cylinders using performance degradation data. Int. J. Precis. Eng. Manuf. 14(12), 2081–2086 (2013). https://doi.org/10.1007/s12541-013-0282-9
14. Chang, M.S., Kwon, Y.I., Kang, B.S.: Design of reliability qualification test for pneumatic cylinders based on performance degradation data. J. Mech. Sci. Technol. 28(12), 4939–4945 (2014). https://doi.org/10.1007/s12206-014-1115-1
15. File:Pneumatic actuators.jpg – SolidsWiki. http://www.solidswiki.com/index.php?title=File:Pneumatic_actuators.jpg&mobileaction=toggle_view_desktosp. Accessed 27 Dec 2021
16. Li, X., Kao, I.: Analytical fault detection and diagnosis (FDD) for pneumatic systems in robotics and manufacturing automation. In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2517–2522 (2005)
17. Alvarez, G.P.: Real-time fault detection and diagnosis using intelligent monitoring and supervision systems. In: Fault Detection, Diagnosis and Prognosis (2020)
18. Kaškonas, P., Nakutis, Ž.: Leakage diagnostics in pneumatic systems using transient patterns (2006)
19. Li, X., Zhao, L., Zhou, C., Li, X., Li, H.: Pneumatic ABS modeling and failure mode analysis of electromagnetic and control valves for commercial vehicles. Electronics 9, 318 (2020)
20. Pneumatic actuator fault diagnosis based on LS-SVM and SVM. Chinese Journal of Sensors and Actuators (2013, No. 11). http://en.cnki.com.cn/Article_en/CJFDTotal-CGJS201311025.htm. Accessed 10 Aug 2020
21. Nakutis, Ž., Kaškonas, P.: An approach to pneumatic cylinder on-line conditions monitoring. Mechanika 4(72), 41–47 (2008)
22. Zhang, K.: Fault Detection and Diagnosis for Multi-Actuator Pneumatic Systems (2011)
23. Augutis, V., Saunoris, M.: Investigation of high frequency vibrations of pneumatic cylinders. Ultrason. Acoust. Meas. 51, 21–26 (2004)
24. Mahmoud, H., Vlasic, F., Mazal, P., Jana, M.: Leakage analysis of pneumatic cylinders using acoustic emission. Insight Non-Destr. Test. Cond. Monit. 59(9), 500–505 (2017)
25. Mahmoud, I.H., Mazal, A.P.: Diagnosis of pneumatic cylinders using acoustic emission methods, p. 78 (2019)
26. Bui Quy, T., Kim, J.-M.: Leak detection in a gas pipeline using spectral portrait of acoustic emission signals. Measurement 152, 107403 (2020)
27. Hu, Z., Tariq, S., Zayed, T.: A comprehensive review of acoustic based leak localization method in pressurized pipelines. Mech. Syst. Signal Process. 161, 107994 (2021)
28. Yu, L., Li, S.Z.: Acoustic emission (AE) based small leak detection of galvanized steel pipe due to loosening of screw thread connection. Appl. Acoust. 120, 85–89 (2017)
29. Li, S., Song, Y., Zhou, G.: Leak detection of water distribution pipeline subject to failure of socket joint based on acoustic emission and pattern recognition. Measurement 115, 39–44 (2018)
30. Sharif, M.A., Grosvenor, R.I.: Internal valve leakage detection using an acoustic emission measurement system. Trans. Inst. Meas. Control. 20, 233–242 (1998)
31. Terchi, A., Au, Y.H.J.: Acoustic emission signal processing. Meas. Control 34, 240–244 (2001)
Algorithmization of Functional-Modular Design of Packaging Equipment Using the Optimization Synthesis Principles

Oleg Zabolotnyi1(B), Olha Zaleta1, Tetiana Bozhko1, Taras Chetverzhuk1, and José Machado2
1 Lutsk National Technical University, 75 Lvivska Street, Lutsk 43018, Ukraine
[email protected]
2 Department of Mechanical Engineering, MEtRICs Research Centre, University of Minho, 4804-533 Guimarães, Portugal
Abstract. At the present stage of development of the packaging industry, the range of packaging equipment has increased significantly due to the expansion of its additional functions, aimed primarily at improving the aesthetic, operational, and ergonomic properties of packaging. Further improvement of packaging machines should aim at increasing the efficiency of their work. This should be done comprehensively, considering both the rationality of the overall design and the parameters of each individual element. This goal can be achieved by augmenting traditional design methods with optimization methods. The theoretical research presented in this article is based on modeling the structure of the packaging machine using the method of functional-modular design and the provisions of system analysis. The result of this work is an algorithm for optimization synthesis, which can be used to determine the rational layout of a technological machine of arbitrary structural complexity. The study provides a detailed description of the early stages of design, shows how to systematize and formalize the initial data, formulates the optimization problem, and gives the sequence of its solution.

Keywords: Optimization synthesis · Efficiency parameter · Module · System · Functional-modular design · Layout · Set · Variability
1 Introduction

At the present stage of development of various industries, technological packaging complexes are becoming widespread. This is due to the growing range of packaged products and the emergence of new packaging materials and types of packaging. Such trends are the driving force behind the continuous structural improvement of packaging machines (PM) in order to expand their functionality [1]. For a long time, the creation of high-tech packaging equipment was achieved by modernizing existing PM, adding auxiliary functions and increasing versatility in order to expand the range of packaged products [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 143–154, 2022. https://doi.org/10.1007/978-3-031-09385-2_13

However, this approach is
O. Zabolotnyi et al.
gradually exhausted and often shows a decrease in efficiency, as machine designs accumulate an excessive number of mechanisms [3]. In addition, the success of solving applied engineering problems, even when it requires choosing the best option among the possible alternatives, rarely follows a strict methodology and depends mainly on the professional experience of the designer. Structural synthesis by automated design methods is mainly reduced to the creation of electronic models in the relevant software environments and has a limited ability to choose the optimal construction [4]. It has also been noted that the next level of progress will involve the widespread inclusion of machine sensors and big data analytics [5]. The aim of this study is to form a clear, universal algorithm for optimizing the structure of PM, based on the formalization of synthesis procedures, and to find the best option for the structure of the PM among the possible alternatives.
2 Literature Review

Typical technological machines, which are part of most modern production complexes, belong to the multi-position technological machines of sequential or parallel action [6]. Their structure is formed on the basis of the modular principle, understood as a method of building a technical system in which the layout is formed, in a certain order, from a set of typical functional modules (FM) [7]. A "functional module" is a functionally independent and structurally complete set of mechanisms united by a common functional purpose [8]. The order of placement and the number of these modules are determined by the sequence of the technological packaging process and the number of technological operations (working positions) of which it consists. At the same time, material (transfer of products), energy (drive of moving elements), and information (control and management of work) connections must be provided between them [9]. The main task during the design of a PM is to ensure the purpose of its creation, which is set by the technical task and implemented by the service function: the manufacture of products of a given quality in the cheapest way. The same service function can be implemented in different ways, so there is a certain number of possible principles of operation of the machine. To determine which principle of operation is best, each variant must be implemented constructively as a set of FM and the connections between them, and the optimization problem must be solved on the basis of the obtained data [11].
In the general case, optimization synthesis involves three procedures [12]:

1) determination of the elements of synthesis, which requires decomposing the service function (dividing it into the simplest elements) and selecting the technical means for implementing the lower-level functions;

2) synthesis, which consists in creating the various permissible combinations of elements and reduces to forming the set of alternatives for the structure of the PM. The purpose of synthesizing a structure variant is to determine the
list of elements that form the object of design and the way these elements are related to each other;

3) optimization, which consists in evaluating the obtained variants of the machine structure using known methods and then choosing the best among them. The best variant is selected by an algorithm that searches for the best values of the performance indicators of the PM, called the optimality criteria. Solving the structural optimization problem means choosing the parameters so as to ensure the extreme value of the optimality criterion while respecting the accepted constraints. The problem of optimization synthesis of objects of discrete structure belongs to the integer combinatorial optimization problems [13], which are solved using two groups of methods:

1) approximate methods, which do not always lead to objectively better solutions (for example, iterative, stochastic, and genetic algorithms) [14];

2) precise (formal) methods, which always lead to the optimal solution (methods of complete and directed sorting) [15].

The problem of creating an effective methodology for describing and evaluating the structure of a PM by different characteristics requires a systematic approach to machine analysis [3]. Two aspects of the description of the PM are taken into account:

1. the functional description, whose elements are the set of simple functions F and the set of relations Q between them, which determine the principle and sequence of operation of the PM;

2. the structural description, whose elements are the FM x ∈ X and the connections R between them, which create the layout of the PM.

The functional description is more general, as each technical function can be implemented by many FM design options, whereas each FM implements only the function for which it was created. This explains why the functional design of a PM precedes the structural design [10].
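Under the assumption of a small hypothetical FM catalogue with a single effectiveness score per module, the three procedures can be sketched as follows (the function names, module names, and scores are invented for illustration):

```python
from itertools import product

# Procedure 1 (given): for each simple function, the candidate FM types that
# can implement it, each with one illustrative effectiveness score.
catalogue = {
    "dose product":   {"piston_doser": 0.90, "screw_doser": 0.80},
    "feed container": {"stack_feeder": 0.85, "rotary_feeder": 0.95},
    "seal package":   {"heat_sealer": 0.88},
}

# Procedure 2: synthesis - every admissible combination of one FM per function
functions = list(catalogue)
variants = list(product(*(catalogue[f] for f in functions)))

# Procedure 3: optimization - pick the variant with the best combined score
def score(variant):
    return sum(catalogue[f][fm] for f, fm in zip(functions, variant))

best = max(variants, key=score)
print(len(variants), best)
```

The exhaustive `max` over all variants corresponds to the precise (complete sorting) methods; for large catalogues, the approximate methods mentioned above replace this full enumeration.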
3 Research Methodology

In the broad context of the improvement of packaging machines, it is advisable to search for a rational design based on the achievements in the development of individual FM sizes for various purposes, and to take their parameters into account during the structural synthesis of the machine. When sorting the FM in accordance with the specified functional scheme of the PM, a significant number of possible layout options can be obtained. Moreover, the more FM sizes are considered for each technological operation, the more difficult it is to make an informed empirical choice of the best layout [16]. After all, for the most effective implementation of the technological process, in addition to the
correct spatial location of the FM, one needs to establish which set of FM will provide the highest productivity, reliability, accuracy, etc. [17]. When processing the large amount of information collected about the optimization object, it is important to process it in the correct sequence and to choose adequate solution methods. The parameters of effective operation of the modules in particular, and of the machine as a whole, should be taken as the optimization criteria, and the task of finding the best option for the structure of the packaging machine should be treated as a task of optimization synthesis [18]. Figure 1 presents the algorithm we developed for performing the optimization synthesis of a PM. This algorithm is based on the principles of the SADT (Structured Analysis and Design Technique) methodology, which involves the construction of three levels of the conceptual model of the machine: functional (f-model), functional-structural (fs-model), and structural (s-model) [19].
[Fig. 1 flowchart: starting from the technical task, the f-model stages define the service function, divide it into basic, additional, and service functions, determine the order of the functions, and decompose the basic functions into auxiliary ones by model levels; the fs-model stages construct the functional scheme of the machine and choose the technical means of realization of the functions and their types; the s-model stages generate the set of variants of the machine structure, determine the optimal variant using the methods of optimization synthesis, and construct the layout of the machine.]
Fig. 1. Algorithm of optimization synthesis of packaging machine
Each of the levels of the conceptual model includes a number of stages that are performed sequentially. At the first level, the most important function of the PM, called the service function, is identified; as a rule, it corresponds to the primary purpose of any technological machine, and it is divided into main and auxiliary functions. At the second level, the technical devices (FM) for implementing the simple functions are selected. At the third level, the optimization of the whole PM is performed.
4 Results

Any useful action of an object is expressed through the function it performs. According to the algorithm described above, the initial description of the object is given in the form of an f-model. Building the f-model of an object (in our case, a packaging machine) consists in decomposing the service function, i.e., sequentially deriving the functions of a given level from the functions of the previous one and establishing the links between them. At the first level of decomposition, the service function is considered the result of the joint action of several basic functions. The basic functions, in turn, are divided into the auxiliary functions that implement them. The decomposition of a function through the levels of the f-model is carried out until it is broken down into the simplest components, each of which can clearly be implemented by a particular technical device. The main working elements of the f-model are diagrams. A diagram contains blocks and arcs: blocks display functions of different levels, and arcs connect the blocks and reflect the links and subordination relationships between them. We illustrate the construction of the f-model on the example of the process of packaging viscous products in polymer containers (Fig. 2). This f-model is the basis for creating an intermediate fs-model. In this case, the elements of the fs-model are FM such as: the dosing module, the container supply module, the aluminum plate supply module (additionally, a cover supply module can be used, if provided for by the packaging design), the plate heat-welding module (which also fixes the lids on the containers), the dating module, the module for removing finished products, the drives of all movable devices and the control system elements, and the frame that fixes these modules in the correct position relative to each other in space.
To represent the fs-model in the form of a diagram, matrix, or graph, where the FM and the logical relationships between them are conditionally denoted, it is necessary to present the obtained data in a formalized form [20]. To do this, we used the algebra of predicates [21]. As already mentioned, the packaging machine as an object of design is considered in two aspects: as a set of functions obtained as a result of the decomposition of the service function

F = {f1, f2, f3, …, fn},   (1)
which is equal to the set of technological transitions performed by the machine

M = {m1, m2, m3, …, mn},   (2)
and a set of FM to perform these functions

X = {x1, x2, x3, …, xn}.   (3)
The same technological transition mk can be performed by several different types of FM

Ek = {xk1, xk2, xk3, …, xkn},   (4)
therefore, for the machine as a whole, there is a set E of functional modules that may be part of it

E = {E1, E2, E3, …, Ei},   (5)
which, combining with each other according to certain dependencies (functional connections), form a set of variants of the structure equal to

N = X1 × X2 × X3 × … × Xj.   (6)
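Equation (6) can be instantiated directly: with hypothetical sets Ek of admissible FM types (the x-identifiers below are placeholders), the variant set is the Cartesian product of those sets, and each variant contains exactly one module per transition.

```python
from itertools import product

# Hypothetical E_k sets: the admissible FM types for each technological
# transition m_k. Exactly one element of each set appears in a variant.
E = {
    "m1": ["x11", "x12", "x13"],
    "m2": ["x21", "x22"],
    "m3": ["x31", "x32"],
}

# The variant set N is the Cartesian product of the E_k sets
variants = list(product(*E.values()))

# Its size is the product of the set cardinalities
expected = 1
for options in E.values():
    expected *= len(options)

print(len(variants), expected)
```

This also shows why the variant count grows multiplicatively with the number of candidate modules per transition, which is what motivates the optimization step that follows.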
Implementation of the service function S of the PM (the technological operation of packaging) is possible provided that all the functions (technological transitions) that constitute it are performed:

F(n) = f1 ∧ f2 ∧ f3 ∧ … ∧ fi,  S ↔ F(n);
M(n) = m1 ∧ m2 ∧ m3 ∧ … ∧ mn,  F(n) ↔ M(n).   (7)
This requires the use of appropriate technical devices (FM), which form the structure of the machine

X(n) = x1 ∧ x2 ∧ x3 ∧ … ∧ xn,   (8)
moreover, the same technological transition in the machine can be performed by only one of the types of FM

E(n) = xk1 ∨ xk2 ∨ xk3 ∨ … ∨ xkn.   (9)
Then

S: M(n) × Xi(n) → N(i) = (x11 ∨ x12 ∨ x13 ∨ … ∨ x1n) ∧ (x21 ∨ x22 ∨ x23 ∨ … ∨ x2n) ∧ (x31 ∨ x32 ∨ x33 ∨ … ∨ x3n) ∧ … ∧ (xk1 ∨ xk2 ∨ xk3 ∨ … ∨ xkn).   (10)

Since

∀ xi ∈ X(n) ∃ fi ∈ F(i) → ∃ xkn ∈ E(n) ∃ fi ∈ F(i),   (11)

then

∃ xkn ∈ E(n) ∃ fi ∈ F(i) ∼ ∃ xkn ∈ E(n) ∃ mi ∈ M(n).   (12)
However, taking into account the interchangeability of FM to perform certain functions:

∃ xki ∈ Xj(k) ∃ mk ∈ M(n) ¬∃ xkn ∈ Xj(k) ∃ mk ∈ M(n).   (13)

Subsequently, the optimization synthesis procedures are performed in the following sequence.
[Fig. 2 diagram: the service function "Make the packaging" is decomposed into basic functions (feed the container, put the product in the container, close the container with a lid, attach an aluminum plate, set the date, take away the finished product), their auxiliary functions (for example: set a stack of containers, separate one container from the stack, pass the container to the position, move the container one step, measure the dose, move the dose to the container, grab the lid, put the lid on the neck of the container, press the lid to the end of the cup in the axial direction, grab the plate, press the plate to the butt of the container, heat the plate along the contour, weld the plate to the throat of the container), and additional functions (sync operations, adjust operation parameters, regulate the process), connected by information (signal) links.]
Fig. 2. Functional model of the service function of packaging viscous products in plastic containers
O. Zabolotnyi et al.
1. Selection of optimality criteria. Each FM is characterized by certain technical and economic parameters that determine the effectiveness of its work:

xin : p1, p2, …, pn, (14)

and these parameters can therefore be used as optimality criteria to assess the quality of the overall structure of the PM:

xin : p1, p2, …, pn → Xi : p1, p2, …, pn. (15)
The choice of the parameters by which the FM, and the PM in general, will be evaluated depends on the design objectives.
2. Database formation. At this stage, a list of the types of FM that can be part of a PM for a given purpose is compiled, and each FM is assigned the values of the optimality criteria. These values can be known in advance or determined by experimental research.
3. Formulation of the optimization problem. As a rule, a single-criterion optimization problem can be represented as:

F1(X) = p1 → max, p2 ≤ p2max, (16)

or

F2(X) = p2 → min, p1 ≥ p1min. (17)

In the first version of the problem the technical parameter p1 (productivity, reliability, accuracy, energy consumption, etc.) is maximized, and the economic parameter p2 (capital costs for manufacturing the PM, production costs, etc.) acts as a boundary condition (not more than the preset p2max); in the second statement, on the contrary, the economic parameter p2 is minimized, and the technical parameter acts as a boundary condition (not less than the preset value p1min). The problem of multicriteria optimization is formulated as follows:

F(X) = (f1(X), f2(X), …, fS(X)) → max (min), (18)
that is, the optimality criteria accepted for consideration are combined into one integral criterion and the direction of the extremum is indicated. 4. Synthesis of alternative variants of PM structure (finding the domain of admissible solutions of the optimization problem). The formation of variants of structures occurs according to a certain algorithm by combining different types of FM with each other in accordance with the order of functions
(technological transitions) required for the technological operation (technological process) of packaging. The specificity of generating PM structures depends on the method used to implement this procedure, but it must provide a rational combination of FM that does not contradict the principle of operation of the PM. Consider the case when variants of the structure of the PM can be formed from some set of FM E, which is divided into subsets by functional purpose:

E = ∪(i=1..3) Ei = {x | x1n ∈ E1 ∧ x2n ∈ E2 ∧ x3n ∈ E3}. (19)
Assume that the number of variants of the structure of the PM X is N = 3 · 2 · 5 = 30, where 3, 2 and 5 are the numbers of candidate FM for, respectively, the 1st, 2nd and 3rd technological transition (Fig. 3).
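The combinatorial generation of variants can be sketched in a few lines of Python. This is an illustrative sketch only; the FM names x11 … x35 follow the paper's notation, and every variant picks exactly one FM per transition, per Eq. (9).

```python
from itertools import product

# Candidate FM subsets for the three technological transitions (Fig. 3 notation).
E1 = ["x11", "x12", "x13"]                 # 3 candidate FM for transition m1
E2 = ["x21", "x22"]                        # 2 candidate FM for transition m2
E3 = ["x31", "x32", "x33", "x34", "x35"]   # 5 candidate FM for transition m3

# Each structure variant selects exactly one FM per transition, so the set of
# variants is the Cartesian product of the subsets.
variants = list(product(E1, E2, E3))

print(len(variants))   # 30, i.e. N = 3 * 2 * 5
print(variants[0])     # ('x11', 'x21', 'x31')
```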
[Figure 3 is a table-like graph: the rows list the technological transitions m1, m2, m3 with their FM subsets E1 (x11–x13), E2 (x21–x22) and E3 (x31–x35), and the 30 columns X1–X30 mark with 0/1 which FM enters each variant of the packaging machine structure.]

Fig. 3. Generating the variants of the machine structure
The formalized representation of this graph is expressed as follows:

S : m1(x11 ∨ x12 ∨ x13) ∧ m2(x21 ∨ x22) ∧ m3(x31 ∨ x32 ∨ x33 ∨ x34 ∨ x35). (20)
5. Finding the optimal solution that best satisfies the condition of the problem. This procedure consists in weeding out unpromising variants in the following sequence: the whole set of variants of the structure of the PM, N, obtained during the generation is limited by the optimality criteria p1 and p2:

{ Σ(i=1..n) p1i · xi ≥ p1, xi ∈ Xj, i, j = 1, …, n },  { Σ(i=1..n) p2i · xi ≤ p2, xi ∈ Xj, i, j = 1, …, n }. (21)
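The screening of Eq. (21) and the single-criterion formulation of Eq. (16) can be sketched as follows. All parameter values and variant names below are invented for illustration; they are not taken from the paper.

```python
# Hypothetical variants: each carries a technical score p1 (to be at least
# p1_min) and a cost p2 (to be at most p2_max).
variants = {
    "X1": (0.92, 140.0),   # (p1, p2)
    "X2": (0.75, 90.0),
    "X3": (0.88, 180.0),
    "X4": (0.95, 120.0),
}
p1_min, p2_max = 0.80, 150.0

# Screening (Eq. 21): keep only variants satisfying both boundary conditions.
admissible = {n: v for n, v in variants.items()
              if v[0] >= p1_min and v[1] <= p2_max}

# Eq. 16: among the admissible variants, maximize the technical parameter p1.
best = max(admissible, key=lambda n: admissible[n][0])
print(sorted(admissible), best)   # ['X1', 'X4'] X4
```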
As a result, we obtain the set N′, from which we step by step eliminate dominated solutions, narrowing the search space first to the set of priority options X′, then to the set of dominant variants X″ and finally to the set of final variants X‴, within which the optimal result, the best structure of the PM Xopt, is determined:

Xopt ∈ X‴, X‴ ⊂ X″ ⊂ X′. (22)
The method of finding the optimal variant of the PM structure should be based on two principles: selection of a set of dominant alternatives from which the optimal one is then chosen, and exclusion of the possibility that potentially more effective variants are eliminated in favor of those accepted for further consideration. 6. Creation of a PM layout (s-model) by visualizing the obtained results in the form of a 2D or 3D model.
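The narrowing to dominant variants can be illustrated with a minimal Pareto-dominance check over two criteria, maximizing p1 and minimizing p2. The variant names and values are invented for illustration.

```python
# Illustrative candidate variants: (p1, p2) with p1 to maximize, p2 to minimize.
variants = {
    "X1": (0.92, 140.0),
    "X4": (0.95, 120.0),
    "X7": (0.90, 130.0),
    "X9": (0.95, 125.0),
}

def dominates(a, b):
    """True if variant a dominates b: at least as good on both criteria,
    strictly better on at least one (p1 maximized, p2 minimized)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

# Keep only variants not dominated by any other variant.
dominant = [n for n, v in variants.items()
            if not any(dominates(w, v) for m, w in variants.items() if m != n)]
print(dominant)   # ['X4']
```

Here X4 dominates every other candidate, so the dominant set collapses to a single variant; with conflicting criteria the set would usually contain several alternatives, from which the designer selects Xopt.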
5 Conclusion

1. Technological machines of packaging production are developed mainly at the parametric level, by improving local characteristics, and the problem of structure optimization is often solved by empirical methods [22], which reduces the efficiency of design work and the objectivity of the results. The article proposes a systematic approach to the design of machines of functional-modular structure, which provides a comprehensive description of the design object and the involvement of exact methods of finding optimal solutions.
2. The developed SADT-model of the structure of the service function of the machine is the basis for organizing and formalizing the data set used to solve the optimization problem.
3. The proposed algorithm can be applied not only to packaging machines, but also to the problem of optimizing a wide range of equipment of functional-modular structure.
4. The presented method of processing and presenting data on the structural elements and parameters of the design object can be the basis for a software implementation of the procedures for generating variants of the object structure and finding the better variants among the allowable number of possible alternatives.

We plan to develop a special computer program that can process an array of input data (the number of design elements, the values of the optimization criteria) and produce a number of results that the designer is able to process analytically using the Pareto method. This task is the goal of our further scientific work.
References 1. Götz, G., Kiefer, L., Richter, C., Reinhart, G.: A design approach for the development of flexible production machines. Prod. Eng. Res. Devel. 12(3–4), 331–340 (2018). https://doi. org/10.1007/s11740-018-0804-5
2. Mahalik, N.P.: Processing and packaging automation systems: a review. Sens. Instrumen. Food Qual. 3, 12–25 (2009). https://doi.org/10.1007/s11694-009-9076-2 3. Chan, F., Chan, H., Choy, K.: A systematic approach to manufacturing packaging logistics. Int. J. Adv. Manuf. Technol. 29, 1088–1101 (2006). https://doi.org/10.1007/s00170-005-2609-x 4. Millhouse, T.: Virtual machines and real implementations. Minds Mach. 28, 465–489 (2018). https://doi.org/10.1007/s11023-018-9472-7 5. Arrais-Castro, A., Varela, M.L.R., Putnik, G.D., Ribeiro, R.A., Machado, J., Ferreira, L.: Collaborative framework for virtual organisation synthesis based on a dynamic multi-criteria decision model. Int. J. Comput. Integr. Manuf. 31(9), 857–868 (2018). https://doi.org/10. 1080/0951192X.2018.1447146 6. Usubamatov, R., Alwaise, A.M.A., Zain, Z.M.: Productivity and optimization of sectionbased automated lines of parallel-serial structure with embedded buffers. Int. J. Adv. Manuf. Technol. 65, 651–655 (2013). https://doi.org/10.1007/s00170-012-4204-2 7. Mirabedini, S.N., Iranmanesh, H.: A scheduling model for serial jobs on parallel machines with different preventive maintenance (PM). Int. J. Adv. Manuf. Technol. 70(9–12), 1579– 1589 (2013). https://doi.org/10.1007/s00170-013-5348-4 8. Bazrov, B.M.: Modular design of machine tools. Russ. Engin. Res. 31, 1084–1086 (2011). https://doi.org/10.3103/S1068798X11110049 9. Gauss, L., Lacerda, D.P., Sellitto, M.A.: Module-based machinery design: a method to support the design of modular machine families for reconfigurable manufacturing systems. Int. J. Adv. Manuf. Technol. 102(9–12), 3911–3936 (2019). https://doi.org/10.1007/s00170-019-03358-1 10. Yakovenko, I., Permyakov, A., Naboka, O., Prihodko, O., Havryliuk, Y.: Parametric optimization of technological layout of modular machine tools. In: Ivanov, V., Trojanowska, J., Pavlenko, I., Zajac, J., Perakovi´c, D. (eds.) DSMIE 2020. LNME, pp. 85–93. Springer, Cham (2020). 
https://doi.org/10.1007/978-3-030-50794-7_9 11. Kudryavtsev, Y.M.: Structurally-parametrical optimization technological process by Dijkstra’s method in system Mathcad. In: Materials Science Forum, vol. 931, pp.1238–1244. Trans Tech Publications, Ltd (2018). https://doi.org/10.4028/www.scientific.net/msf.931.1238 12. Ravichandran, K., Masoudi, N., Fadel, G.M., Wiecek, M.M.: Parametric optimization for structural design problems. In: Proceedings of the ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 2B: 45th Design Automation Conference. Anaheim, California, USA, 18–21 August (2019). V02BT03A022. ASME. https://doi.org/10.1115/DETC2019-97860 13. Marmion, M.É.: Local search and combinatorial optimization: from structural analysis of a problem to efficient algorithms design. 4OR-Q J. Oper. Res. 11, 99–100 (2013). https://doi. org/10.1007/s10288-012-0204-1 14. Papadrakakis, M., Lagaros, N.D., Tsompanakis, Y., et al.: Large scale structural optimization: computational methods and optimization algorithms. ARCO 8, 239–301 (2001). https://doi. org/10.1007/BF02736645 15. Sousa, R.A., Varela, M.L.R., Alves, C., Machado, J.: Job shop schedules analysis in the context of industry 4.0. In: 2017 International Conference on Engineering, Technology and Innovation: Engineering, Technology and Innovation Management Beyond 2020: New Challenges, New Approaches, ICE/ITMC 2017 - Proceedings, 2018-January, pp. 711–717 (2018). https://doi.org/10.1109/ICE.2017.8279955 16. Usubamatov, R., Ismail, K.A., Sah, J.M.: Mathematical models for productivity and availability of automated lines. Int. J. Adv. Manuf. Technol. 66, 59–69 (2013). https://doi.org/10. 1007/s00170-012-4305-y 17. Usubamatov, R., Sin, T.C., Ahmad, R.: Mathematical models for the productivity rate of automated lines with different failure rates for stations and mechanisms. Int. J. Adv. Manuf. Technol. 82(1–4), 681–695 (2015). 
https://doi.org/10.1007/s00170-015-7005-6
18. Guo, X., Cheng, G.D.: Recent development in structural design and optimization. Acta Mech. Sin. 26, 807–823 (2010). https://doi.org/10.1007/s10409-010-0395-7 19. Ahrens, M., Richter, C., Hehenberger, P., Reinhart, G.: Novel approach to establish modelbased development and virtual commissioning in practice. Eng. Comput. 35(3), 741–754 (2018). https://doi.org/10.1007/s00366-018-0622-6 20. Nikitchenko, N.S.: Equitone predicate algebras and their applications. Cybern. Syst. Anal. 39, 97–112 (2003). https://doi.org/10.1023/A:1023829327704 21. Nikitchenko, N.S.: Propositional compositions of partial predicates. Cybern. Syst. Anal. 36, 149–161 (2000). https://doi.org/10.1007/BF02678660 22. Chetverzhuk, T., Zabolotnyi, O., Sychuk, V., Polinkevych, R., Tkachuk, A.: A method of body parts force displacements calculation of metal-cutting machine tools using CAD and CAE technologies. Ann. Emerg. Technol. Comput. (AETiC) 3(4), 37–47 (2019). Print ISSN 2516-0281, Online ISSN 2516-029X Published by International Association of Educators and Researchers (IAER) (2019). https://doi.org/10.33166/AETiC.2019.04.004. http://aetic. theiaer.org/archive/v3/v3n4/p4.html
Implementation of the SMED Methodology in a CNC Drilling Machine to Improve Its Availability Arminda Pata1(B)
and Agostinho Silva2
1 D. Dinis Higher Institute (ISDOM), Marinha Grande, Portugal
[email protected] 2 CEI – Zipor, Companhia de Equipamentos Industriais, São João da Madeira, Portugal
Abstract. This case study aims to improve the availability of a critical CNC drilling machine by implementing Lean tools. An analysis of the original scenario is performed by filming the entire process to identify the main problems, after which the Overall Equipment Effectiveness (OEE) is calculated. In response to the identified problems, an action plan is developed and implemented to find the root cause of the high number of delays in drilling the components, applying several Lean tools, namely 5S, visual management, maintenance, and enhanced training to increase the operators' skills. The implementation of the SMED methodology reduced the drilling time of part A by 35.23%, from 105 min to 68 min. In part B there was a 39.91% reduction in drilling time, a gain of about 44 min. The availability of the drilling machine was thus improved to 90%, contributing to the increase of the global efficiency of the process, with an OEE of about 85%.

Keywords: Industry 4.0 · Mold · Project · Production · Industrial engineering
1 Introduction

With the increasing competitiveness in the mold industry, it is important that companies take measures to continuously eliminate waste and create value in what they produce. The management of metal cutting processes is now a topic of strategic importance for these industries because of their strong connection with the production of molds for the aeronautic, automotive, and aerospace industries. The efficient use of machines can contribute to leveling the machining operations. It is indispensable to control the load of each piece of equipment and to minimize the risk of failures that can lead to downtime in the production of mold components. However, due to the complexity of each project (each mold is unique, and the customer repeatedly requests changes), the risk of delays in machining the components cannot be completely avoided [1]. Increasingly, companies aim to evolve and keep up with people's needs, while remaining competitive in a globalized market. The current competitive environment, generated mainly by

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 155–163, 2022. https://doi.org/10.1007/978-3-031-09385-2_14
consumption and by an increasingly demanding and rigorous market, pushes companies toward an even greater commitment to the continuous improvement of their products and processes, always trying to identify what waste needs to be eliminated. In the last decades, there have been profound changes in project-based production systems for injection molds due to globalization, accelerating competition, innovation and technology, and the need to implement methodologies that allow producing molds at the best price, with high quality, and in the shortest time possible. Nowadays, developing a mold to customer specifications is not everything. Mold industries are forced to equip themselves with tools and methodologies that help them keep up with the needs of an increasingly demanding and competitive market. It is necessary to implement methodologies that minimize the manufacturing times of a mold [2, 3]. Currently, with the increase in competitiveness and consumer expectations, there is a pressing need for companies to improve the quality of their products and to reduce the deadlines and costs of the final product. For this to happen, however, it is important to keep the manufacturing spaces organized, clean, and standardized [4–6]. These aspects can be improved in the bench section, allowing a more efficient assembly of molds and reducing the setup time spent searching for tools, accessories, and machined parts, which accounts for most of the interruptions of the assembly process. Measuring how equipment, and the way it is run, contributes to the performance of industrial companies is of utmost importance, as several key aspects depend on it to determine not only their success, but also their survival. The performance of equipment directly determines the productivity of production processes, influences the efficiency of the workforce, and contributes to the level of product quality and customer satisfaction [2].
2 Improvement of Overall Equipment Efficiency

The mold industry operates a production-to-order system called Engineer-to-Order, where the product is unique and totally designed and manufactured according to the client's specifications. This system is distinguished from the others (Make-to-Order; Make-to-Stock; Assemble-to-Order; Buy-to-Order; Ship-to-Stock) by the design phase, since it is during this task that the customer's needs are absorbed [7–10]. Planning in this type of industry requires operations and decisions that will not be repeated, as each mold is unique, and rarely are two molds made the same. The deadlines for mold making are agreed upon at the start, up to the shipment of the mold and the first plastic parts. Non-compliance with planning is traditionally justified by the tight deadlines and the high number of components and accessories to be integrated in the mold [11–13]. Due to the unique characteristics of the molds, defined in the project, the planning of the operations for the manufacturing of the various parts of the mold is very exhaustive and dependent on the know-how of the technicians [14], because it is constantly undergoing changes and updates that result in machine downtime and changes of operations. Therefore, not only is the mold planning not fulfilled, but it may compromise the planning of other molds in production. It is also important to highlight the disorder in planning caused by rework and adjustments to parts already manufactured, which may have been required by errors or by improvements after mold testing [15–17]. The concept of Overall Equipment Effectiveness (OEE) was first written about in 1989 in a
book called TPM Development Program: Implementing Total Productive Maintenance, edited by Seiichi Nakajima of the Japan Institute of Plant Maintenance. Before OEE, equipment performance was monitored through Availability or downtime. Later, it was discovered that it was possible to have the same downtime for the same equipment in different timeframes but get a different output. For example, if the performance of a line is measured over 100 h and during this time it has a breakdown for 10 h, the Availability will be 90% and the downtime 10%. If the same line over another 100 h had 10 breakdowns of 1 h duration (a total of 10 h), then the Availability would still be 90% and the downtime would be 10%. For the first time, it was possible to measure the effectiveness of a piece of equipment in producing good output, recognizing that equipment is only effective if it is available when needed, running at the optimal speed, and producing output that is perfect or within specification [2]. The OEE is considered a basic and fundamental method for measuring equipment performance. It consists of a tripartite equipment performance analysis tool, based on availability, performance, and quality. It is used with the goal of identifying losses in a piece of equipment, to improve asset performance and reliability [3]. The Toyota Production System (TPS) seeks mainly to reduce waste and improve the efficiency of production systems. This thinking gave rise to Lean Manufacturing and Just-in-Time (minimum stocks), as well as techniques such as the SMED (Single Minute Exchange of Die) methodology, also known as Rapid Tool Exchange (RTF) [18, 19]. According to Vieira & Cambruzzi (2020), when faced with the need for a quick tool change, it is necessary to find out how things are done in order to improve the method in which they are done.
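The availability example above can be sketched in a few lines of Python; as the text notes, the availability figure alone cannot distinguish one long breakdown from many short ones.

```python
# Availability = operating time / planned time, as in the 100 h example above.
def availability(planned_hours, downtime_hours):
    return (planned_hours - downtime_hours) / planned_hours

one_long_stop = availability(100, 10)            # a single 10 h breakdown
many_short    = availability(100, sum([1] * 10)) # ten 1 h breakdowns

# Both scenarios yield 90% availability despite very different failure patterns.
print(one_long_stop, many_short)   # 0.9 0.9
```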
In this way, it is up to the company to analyze how the setup processes occur, reduce waste, and propose solutions and improvements that can reduce the setup time of machines and equipment, starting a continuous improvement process that begins with the use of the SMED tool and focuses on the elimination of waste. The SMED concept represents the minimum time required to prepare the equipment and operators to change from one type of activity to another, measured from the end of the last good part of a given batch until the output of the first good part of the next batch.
3 Case Study and Objectives

The mold industry is increasingly competitive, and the price and delivery time of each mold are therefore decisive in the awarding of services in this type of industry. Companies have to be able to reduce costs and guarantee the best deadlines. Many companies are dedicated to providing services only, offering CNC machining services, so it is necessary that these companies use methodologies and tools that help them to improve the performance and availability of their equipment. Only through rigorous production planning and true sequencing that makes the most of existing equipment can enterprises achieve their main goal, customer satisfaction. The subject under study is explored in practice in a work context and aims to present solutions and implement some suggestions for improvement. Its purpose is to study current practices and propose solutions to the problems identified. It is intended to provide solutions that contribute improvements to future developments in order to remove as much time loss as possible from the manufacturing of molds, contributing
to the increase of the company's competitiveness. Generically, the study focuses on the application and monitoring of the SMED tool, in an experimental way, in a mold manufacturing industry. The case study analyzes a project with the machining service of four parts (A, B, C, D). In following this project, we will consider two of the parts, denominated A and B. The deadline agreed with the client for the execution of this work was 8 weeks from the date of delivery of the material. We began planning the project, defining the work to be developed and the equipment available to execute it. Different processes are used in the machining of the four pieces. The sequence of services to be performed is: (1) receiving the order from the customer, (2) gauging, (3) drilling, (4) rough machining, (5) grinding, (6) finishing machining, (7) dimensional control, (8) shipping.
Fig. 1. Services: (a) Gauging; (b) Drilling; (c) Grinding; (d) Rectification; (e) Finishing Machining; (f) Measurement, 1st phase; (g) Measurement, 2nd phase.
For the execution of the gauging service on the four pieces of this project, 11 days of work are estimated on the two machines available. Gauging (Fig. 1(a)) is always the first service to be executed, because only after it has been executed can one move on to the next service. It is the service with the least complexity, executed on conventional machines. This service is done on two machines, the G2 and the G3. All parts pass through both machines: on the G2 machine the faces of the parts are machined over their thickness, commonly known as face milling, and on the G3 machine the tops of the parts are machined. The drilling service (Fig. 1(b)) can be performed immediately after the gauging, or it can be performed in a third phase, after the rough machining service. In the specific case, for part 1 and part 2 the drilling service was performed immediately after the gauging; for the remaining parts, it was performed after the rough machining service. This sequence of services depends mainly on one factor, the geometry of the piece, and in the case study under analysis we faced both situations. To execute the drilling service on the four pieces, two machines are available, D1 and D2, with an estimated time of 18 days of work. The rough machining service (Fig. 1(c)) is performed after gauging or after drilling. It is a service performed on a geometry defined by the customer, within a tolerance that can vary from project to project. For the project under analysis, the tolerance to be respected in the pieces varies between 0.5 mm and 1.5 mm. Considering the size of the parts in this project, its execution depends on a single CNC machine. This machine,
M1, is the only one with capacity for the dimension of the parts of this project, and its execution time is 20 days of work. The rectification service (Fig. 1(d)) is performed to ensure the parallelism of the part. This service ensures that in the final stage of machining, the finishing stage, all the geometry is perfectly centered. Depending on the part's geometry, grinding can be performed on one or both sides of the part. This definition depends on whether the workpiece has one or two faces, and is called base grinding and thickness grinding, respectively. Similarly to the rough machining service, the grinding service, given the size of the parts, can only be performed on one grinder, the R2 machine. This machine is the only one with the capacity for the size of the parts in this project, and its execution time is 5 days of work. The finishing machining service is the last service to be performed (Fig. 1(e)). At this stage, it is intended that the machining of the part geometry is met with zero tolerance. This service is performed on CNC machines of great accuracy and high levels of demand. The rigor required in this phase of the work is extremely important, taking into account that small errors are big problems. For the finishing machining work in this project, we estimate 33 days of work on two machines, the M2 machine and the M3 machine. Given the complexity of some part geometries, after this phase it is required to measure the parts to ensure that the machining conforms to the geometry. The company does not have its own resources to execute the measurement service (Fig. 1(f, g)), so this service is contracted to external entities. In this project, by customer requirement, the GOM system, a high-precision 3D control system, was contracted. The GOM system allows the extraction of conformity reports, enabling the customer to verify the conformity of the parts before they are shipped.
The GOM system has a significant cost and is therefore only applied to the most complex parts. The fulfillment of the deadlines previously agreed upon depends on the resources and machines available and on the overall efficiency of the equipment. In the project under analysis, the sequencing has to respect two distinct restrictions: (1) when there is more than one piece of equipment for the same service, as is the case of the finishing machining and drilling services, where each service has 2 machines; (2) when there is only one machine for the service, as is the case of rough machining and grinding, where there is only one machine to perform the services. There is thus a critical point in the project, and only a correct definition of the sequence of tasks can generate the best results. According to the planning, the critical problem is in the drilling machine. In order to save as much time as possible in the changes of parts to be drilled and machined, and to minimize setups, the aim of this study is to implement the SMED tool. For this, in an experimental way, the changeover of two parts (A and B) was filmed during one day, while their drilling was done. These parts belong to a mold. The drilling was done on a Cheto IXN 2000 CNC machine. The aim of the study is to implement measures to increase the availability of the drilling machine.
4 Results and Impact of the Implementation of SMED on the OEE

The OEE indicator is an easy-to-use, efficient tool and, like other indicators, its purpose is to help managers, leaders, and decision-makers evaluate the performance of an enterprise and redirect investments in an
agile and efficient way. The total productive maintenance (TPM) concept, launched by Nakajima (1988) in the 1980s, provided a quantitative metric called overall equipment effectiveness (OEE) to measure the productivity of individual pieces of equipment in a plant. It identifies and measures losses in important aspects of manufacturing, namely availability, performance, and quality rate, and thus supports the improvement of equipment effectiveness and productivity. The OEE concept is becoming increasingly popular and has been widely used as an essential quantitative tool for measuring productivity, especially in semiconductor manufacturing operations. Manufacturers in other industries have also embraced it to improve their asset utilization. The industrial application of OEE, as it is today, varies from one industry to another. Although the basis for measuring efficiency is derived from the original OEE concept, manufacturers have customized OEE to fit their specific industrial needs. In addition, the term OEE has been extended in the literature into different terms according to the application concept, such as overall plant effectiveness (OPE), overall throughput effectiveness (OTE), production equipment effectiveness (PEE), overall asset effectiveness (OAE), and total equipment effectiveness performance (TEEP) [2]. However, the OEE and its factors Availability, Efficiency and Quality are only indicators, nothing more; they only serve to inform us of the existence of problems, of the potential for equipment utilization, and of whether we are getting better or worse. When a problem exists, it has to be investigated and studied, and measures must be implemented to solve it, permanently eliminating the root cause, duly consolidated by standardized work. The OEE is, above all, a tool to support continuous improvement.
If this concept is present in all stakeholders in the system (operators, supervisors and management), then the ideal environment will be established for the OEE to become a very useful management indicator. It was on the basis of this indicator being below 85% that we chose to implement SMED on the CNC drilling machine.
Fig. 2. Drilling of parts A and B.
After the recordings were made (Fig. 2), all the operations were analyzed, the time spent on each one was counted, and a detailed list of all the activities performed was created. Then the internal setup activities were separated from
the external setup activities. A list of activities was created, and based on this list a work orientation was proposed: to start performing the external tasks while the machine is running, and not while the machine is stopped. Based on the work orientation, a comparison was made of the times before and after the SMED tool was applied. After analyzing the results, it is possible to determine the percentage of stops in the drilling of each part: to make the holes, the machine was stopped 57% of the time for part A and 68% of the time for part B. Comparing the drilling time of each part, part A has more holes and therefore a higher percentage of drilling time, while part B has more stoppages because, besides coinciding with the company's scheduled break, a drill broke, which involved a new preparation of the program and the preparation of a new drill. Such occurrences directly influenced the percentage of downtime of the machine while it was drilling part B. After all the times were analyzed, all operations were classified as internal setup or external setup operations, in order to understand which operations necessarily have to be done with the machine stopped and which can be done with the machine running. From the collection of opinions and the sharing of empirical and scientific knowledge, the following were determined to be external tasks: tool checking, program preparation, workpiece change, questions, cleaning, and breaks. Internal tasks included: drilling, workpiece preparation and self-checking. The "Self-Control" operation has to be done with the machine stopped, which makes it an internal setup operation; since it ensures a good job without errors, it was kept as internal setup rather than eliminated. One example of improvement concerns the long changeover time: it was suggested that the operator should prepare the next part while the machine is drilling. The same was suggested for the tool change, as is the case with the drill change. Changing a drill takes a long time, so it is important that the operator prepares the next drills to be used while the machine is working and puts them on the carousel for quick tool changes. Another important aspect is the exchange of information between shifts. It is very important that the operator leaves a record of the job status before the next shift arrives; this prevents the next operator from wasting time looking for the information needed to continue the work or asking the bosses what to do next. After the tables were analyzed and the improvement opportunities implemented, parts A and B were produced again, and the setup times before and after suppressing tasks were compared (Table 1).

Table 1. Part drilling times before and after SMED
This is the case with the drill change: changing a drill takes a long time, so it is important that the operator prepares the next drills while the machine is working and places them on the carousel for quick tool changes. Another important aspect is the exchange of information between shifts. It is very important that the operator leaves a record of the job status before the next shift arrives. This prevents the next operator from wasting time looking for the information needed to continue the work, or asking the supervisors what to do next. After the tables were analyzed and the improvement opportunities implemented, parts A and B were produced again and the setup times before and after suppressing tasks were compared (Table 1).

Table 1. Part drilling times before and after SMED

Parts | Drilling time before SMED (min) | Drilling time after SMED (min) | Availability before (%) | Availability after (%)
A | 105.00 | 68.00 | 43.00 | 90.28
B | 109.50 | 65.50 | 32.00 | 90.04
It can be seen that the implementation of the SMED methodology allowed a 35.23% reduction of the drilling time of part A, from 105 min to 68 min. For part B there was a 39.91% reduction in drilling time, a gain of about 44 min. The overall efficiency of the mold-making process should be above 85%.
A. Pata and A. Silva
For example, an OEE of about 85% results from availability (90%), efficiency (95%), and quality (99%). In this study, the availability of the drilling machine was improved to 90%, contributing to the increase of the overall efficiency of the process, with an OEE of about 85%. However, it is not enough to implement SMED and quick tool-change systems to reduce setups and adjustments through the strategy of reducing tool-change time. To further increase the availability of the drilling machine, future work should also implement 5S, Poka-Yoke, visual management and standardized work.
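As a minimal sketch (assuming the usual multiplicative OEE definition, which matches the figures quoted above), the three factors combine as follows:

```python
def oee(availability: float, efficiency: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * efficiency * quality

# Figures quoted in the text: availability 90%, efficiency 95%, quality 99%
# combine into an OEE of about 85%.
overall = oee(0.90, 0.95, 0.99)
```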
5 Conclusion

Several industrial sectors with heavily integrated value chains are subject to an enormous demand for shorter production cycles and product innovation. This puts strong pressure on mold-manufacturing lead times and creates a clear need to accommodate an increasing number of product modifications during mold manufacturing, without deteriorating the final cost or the dates of first samples and shipment. The implementation of management strategies capable of handling this unique type of production, with unstable specifications and a critical delivery date, is essential to the smooth operation of mold-manufacturing systems. Thus, to ensure continuously improved performance, business excellence and market leadership, technical knowledge must be combined with scientific knowledge to develop new tools applied to the moldmaking industry. In this context, there was a need to study the overall efficiency of the mold-manufacturing process of a specific company, and it was found urgent to implement measures to increase the availability of a drilling machine. The use of SMED became fundamental to increase the availability of the drilling machine, decrease setup times and increase productivity. The main practical contributions of this paper were the conversion of internal setup activities into external ones, the elimination of waste in the process, a significant reduction in setup times and the fitting of pneumatic clamps on the machine. From the analysis of the OEE, its evolution and its application in industry, it is concluded that OEE is a valuable measure that provides information about the sources of lost time and lost production. When companies routinely hit capacity constraints, many immediately consider adding overtime, hiring workers for new shifts, or even buying new equipment to increase production capacity.
For these companies, the OEE tool can be very useful in optimizing the performance of existing capacity. OEE is a valuable tool that can help reduce overtime expenses and allow large capital investments to be postponed. It helps reduce process variability, shorten changeover times and improve operator performance, benefits that substantially improve the bottom line of the drilling operation and increase the competitiveness of companies. This topic offers several opportunities for further study: although the article has focused on the drilling operation, other equipment, such as grinding machines, can be studied as well.
Optoelectronic Systems for the Determination of the Position of a Mobile System

Laurenţiu Adrian Cartal1(B), Daniel Băcescu1, Lucian Bogatu1, Tudor Cătălin Apostolescu2, Georgeta Ionaşcu1, and Andreea Stanciu3

1 POLITEHNICA University of Bucharest, Splaiul Independentei nr. 313, Bucharest, Romania
[email protected]
2 "Titu Maiorescu" University of Bucharest, Calea Văcăreşti nr. 189, Bucharest, Romania
3 National Institute of Research and Development in Mechatronics and Measurement Technique, Şos. Pantelimon nr. 6-8, Bucharest, Romania
Abstract. The article deals with two optoelectronic systems that determine the position of a light source placed on a mobile system. The mobile system moves in a plane or in a workspace whose topography is known a priori. One of the optoelectronic systems consists of electric actuators and moving mechanical elements and the other is of the solid-state type. In both optoelectronic systems, the determination of the position of the light source is based on the triangulation method.

Keywords: Optoelectronic system · Triangulation · Stereoscopic base · Image sensor · Wide-angle lens
1 Introduction

The determination of the position, in the plane or in space, of a mobile system can be done with the help of internal transducers on board the mobile system or with external transducers that are not on board. The use of external transducers has the advantage of simplifying the construction of the mobile system and reducing its size. This is especially important if the mobile system is small, such as a mobile minirobot. Another advantage is that such an external transducer can be incorporated into a tracking and adaptive-control system for the mobile system (minirobot) [1–3]. The tracking system determines, at any time, the position of the minirobot in real space, which is known a priori. Any command given to the real minirobot is first analyzed by the computing unit in the virtual workspace, which decides the best trajectory along which the minirobot can move so as to avoid the obstacles in real space. With the help of the tracking system and the adaptive control of the mobile system, virtual transducers, such as proximity or tilt transducers, can also be implemented. With such a system, the speed and acceleration of the mobile system can also be determined. In general, optoelectronic systems for determining the position of a mobile system are based on the triangulation method [2–7].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 164–173, 2022. https://doi.org/10.1007/978-3-031-09385-2_15
This article presents two optoelectronic systems that allow the determination of the position of a light source placed on the mobile system. The mobile system has a movement in a plane or in a workspace whose topography is known a priori. One of the optoelectronic systems consists of electric actuators and moving mechanical elements and the other is of the solid-state type.
2 The Optoelectronic System with Electric Actuators

The position of a light source P mounted on a mobile system (minirobot) can be determined using the triangulation principle, whose diagram is shown in Fig. 1. If the stereoscopic base B between the two optoelectronic transducers measuring the angles θ1 and θ2 is known, and the origin of the coordinate system O is placed in the middle of the stereoscopic base, then the x and z coordinates can be calculated with the relations:

$$x = \frac{B}{2}\,\frac{\sin(2\theta_2 - 2\theta_1)}{\sin(2\theta_2 + 2\theta_1)}, \qquad z = B\,\frac{\sin 2\theta_2 \cdot \sin 2\theta_1}{\sin(2\theta_2 + 2\theta_1)} \tag{1}$$

The y coordinate can be obtained at the intersection of the line perpendicular to the zOx plane with the a priori known workspace topography. The two optoelectronic transducers are equipped with rotating mirrors M1 and M2 and with lenses Ob1 and Ob2, which have image focal points F1 and F2.
Fig. 1. Diagram of the optoelectronic system with electric actuators
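Relation (1) can be transcribed directly into code; the sketch below is illustrative only (the degree-valued inputs and function name are my own, and the factor 2 reflects the doubling of the deflection angle by the rotating mirrors):

```python
from math import radians, sin

def triangulate(B: float, theta1_deg: float, theta2_deg: float):
    """x and z of the light source P from the stereoscopic base B and the
    mirror angles theta1, theta2, following relation (1)."""
    t1, t2 = radians(2.0 * theta1_deg), radians(2.0 * theta2_deg)
    x = (B / 2.0) * sin(t2 - t1) / sin(t2 + t1)
    z = B * sin(t2) * sin(t1) / sin(t2 + t1)
    return x, z

# Symmetric viewing angles put P on the z axis: x = 0 and z = (B/2)*tan(60 deg).
x, z = triangulate(1.0, 30.0, 30.0)
```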
The designed optoelectronic transducer, shown in Fig. 2, consists of a rotating mirror 1 driven by a DC servomotor with encoder 2, a lens 3 and a photosensitive strip 4. The strip is positioned in the image focal plane of the lens 3. When the normal of the rotating mirror coincides with the bisector of the angle between the Ox axis and the light beam coming from the light source on the minirobot, then the
Fig. 2. 3D model of optoelectronic transducer with DC servomotor and encoder
light point image reaches the photosensitive strip, and a signal is transmitted to read the encoder, yielding the angle θ (θ1 or θ2). The angles θ1 and θ2 are calculated with an error that increases as the point P moves away from the origin; increasing the stereoscopic base is necessary to keep the measurement errors within an acceptable range. The 3D model of the optoelectronic system with electric actuators that has been conceived is shown in Fig. 3. The system is composed of two optoelectronic transducers 1 and 1', as shown in Fig. 2, two rails 2 and 2', body 3, plate 4, two DC servomotors with encoders 5 and 6, laser source 7 and two rotary tables 8 and 9. The two rails are guided by means of rollers fixed in the body 3. They move symmetrically with respect to the body of the installation and allow precise modification of the stereoscopic base between the two optoelectronic transducers. The rails are driven by the servomotor 5 by means of a pinion and two racks. The body 3, together with the rails, can rotate relative to the plate 4, driven by the servomotor 6, so that the light source on the minirobot is kept in the visual field of the optoelectronic transducers. As the minirobot gets smaller, the light source can become comparable in size to the robot itself. In this case a laser source 7 is used, mounted on the two rotary tables 8 and 9, which allow it to rotate in two perpendicular planes. The role of the laser source is to track the minirobot and illuminate it so that the light it reflects is detected by the two optoelectronic transducers. In this case, a sphere that reflects the laser light must be mounted on the minirobot; if a sphere cannot be mounted, the minirobot can be painted with a reflective dye. To avoid interference, the laser wavelength is chosen very close to the wavelength of peak spectral sensitivity of the optoelectronic sensor strip.
Fig. 3. 3D model of the optoelectronic system with electric actuators
Fig. 4. The cross section through the body and the plate of the optoelectronic system
Figure 4 shows a cross section through the body and the plate of the optoelectronic system. The prototype of the optoelectronic system with electric actuators is presented in Fig. 5.
Fig. 5. The view of the prototype of the optoelectronic system
3 Solid-State Optoelectronic System

Another optoelectronic system conceived to determine the position of a mobile system that emits or reflects light is of the solid-state type (Fig. 6). It consists of two optoelectronic transducers comprising lenses Ob1 and Ob2, which have a focal length f, and image sensors IS1 and IS2 placed in the image focal planes of the two lenses. The lenses Ob1 and Ob2 have optical axes OA1 and OA2. As in the previous case, it is assumed that the stereoscopic base B between the two optoelectronic transducers is known, and the origin O of the coordinate system is placed in the middle of the stereoscopic base. Linear image sensors can be used to determine the position of the light source P attached to a mobile system moving in a plane. The computing units of the image sensors IS1 and IS2 provide the coordinates of the pixels u1 and u2 on which the images of the light source are formed by the lenses Ob1 and Ob2, measured with respect to the ends O1 and O2 of the sensors. In this case, using the triangulation method, the x and z coordinates of the point P can be determined with the
relations:

$$x = \frac{B}{2}\,\frac{d\,(u_2 - u_1)}{d\,(u_1 + u_2) - U}, \qquad z = \frac{B\,f}{d\,(u_1 + u_2) - U} \tag{2}$$
where U is the length of the linear image sensor, and d is the pixel size in the u1 or u2 direction.
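A direct transcription of relation (2) follows, as a hedged sketch (the function name and the numeric example are mine, not the paper's):

```python
def stereo_position(B: float, f: float, d: float, U: float, u1: float, u2: float):
    """x and z of point P from the pixel coordinates u1 and u2, counted from
    the sensor ends O1 and O2, following relation (2)."""
    denom = d * (u1 + u2) - U
    x = (B / 2.0) * d * (u2 - u1) / denom
    z = B * f / denom
    return x, z

# A point on the symmetry axis of the system gives u1 = u2, hence x = 0.
x, z = stereo_position(B=1.0, f=0.01, d=1e-5, U=0.005, u1=300, u2=300)
```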
Fig. 6. Solid-state optoelectronic system diagram (the figure shows the lenses Ob1 and Ob2 with optical axes OA1 and OA2, the image sensors IS1 and IS2 of length U at focal distance f, the pixel coordinates u1 and u2 measured from the ends O1 and O2, the stereoscopic base B, and the point P(x, z))
The development of the solid-state optoelectronic system with image sensors requires the design of a wide-angle ("fisheye") lens whose optical scheme must be known in order to calculate the field aberration and the distortion produced by such a lens. Knowing these aberrations, the image produced by the lens on the image sensor can be corrected. A technique for determining the aberrations is vectorial raytracing. The computational relations for vectorial raytracing are based on knowing the direction cosines of the incident ray and the point where this ray intersects the plane tangent to the first diopter. Thus, the starting quantities in vectorial raytracing can be calculated if the coordinates of the object point and the position and size of the entrance pupil are known. If the optical system works with very large field angles (which can exceed 180°), the distortion aberrations and the field curvature are very large and the entrance pupil, calculated with the paraxial relations, no longer makes sense. Consequently, replacement relations must be found based on another calculation scheme, indicating a replacement entrance pupil with which to construct the emergent ray.
The radius of the replacement entrance pupil is calculated using known formulas. The position of its center is placed in space at an arbitrary distance, chosen by the user, along the direction of the main pupillary ray, and its plane must be perpendicular to that ray. The sequence of calculations is as follows:

1. The position of the aperture diaphragm of the optical system is established.
2. The position and size of the exit pupil are calculated.
3. The direction of the emerging main pupillary ray is established.
4. An inverse vectorial raytracing is performed with this direction, resulting in the intersection of the optical ray with the first diopter and the direction cosines of the emerging main pupillary ray.
5. Switch to the normal reference system of the optical system.
6. The distribution of points in the entrance pupil, constructed in the plane tangent to the first diopter, is calculated.
7. The calculated point distribution is placed in a plane perpendicular to the principal pupillary ray at a known distance from the first diopter.
8. All optical rays leaving this distribution are parallel to the direction of the main pupillary ray (because in such optical systems the object is considered to be at infinity).
9. Knowing the point where the ray of light starts and its direction, the point where it intersects the first diopter is calculated.
10. Knowing the point where the optical ray intersects the sphere of the first diopter and its direction, the relations of vectorial raytracing can be applied and the ray emerging from the optical system calculated.

The intersection of an optical ray with the sphere of the first diopter can be calculated if the point $(x, y, z)_{pi}$ from which the optical ray $i$ starts, its direction $(L, M, N)_i$ and the equation of the sphere of the first diopter are known:

$$(x - a)^2 + (y - b)^2 + (z - c)^2 = r_1^2 \tag{3}$$

where $r_1$ is the modulus of the radius of the first diopter of the optical system and $(a, b, c)$ are the coordinates of the center of the sphere of the first diopter, calculated with the relations:

$$a = o \cdot |r_1|, \qquad b = p \cdot |r_1|, \qquad c = q \cdot |r_1| \tag{4}$$

where $(o, p, q)$ is the radius versor of the first diopter. Under these conditions, the parametric equations of the line representing the optical ray in question are:

$$x = \lambda L_i + x_{pi}, \qquad y = \lambda M_i + y_{pi}, \qquad z = \lambda N_i + z_{pi} \tag{5}$$
By introducing these equations into the equation of the sphere of the first diopter, one obtains:

$$\lambda^2 L_i^2 + 2\lambda L_i (x_{pi} - a) + (x_{pi} - a)^2 + \lambda^2 M_i^2 + 2\lambda M_i (y_{pi} - b) + (y_{pi} - b)^2 + \lambda^2 N_i^2 + 2\lambda N_i (z_{pi} - c) + (z_{pi} - c)^2 - r_1^2 = 0 \tag{6}$$

Setting:

$$A = L_i^2 + M_i^2 + N_i^2, \quad B = 2\left[L_i (x_{pi} - a) + M_i (y_{pi} - b) + N_i (z_{pi} - c)\right], \quad C = (x_{pi} - a)^2 + (y_{pi} - b)^2 + (z_{pi} - c)^2 - r_1^2 \tag{7}$$

the quadratic equation

$$A\lambda^2 + B\lambda + C = 0 \tag{8}$$

results, with solutions:

$$\lambda_{1,2} = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A} \tag{9}$$
Of the two solutions, the one with the smallest modulus, $\lambda_{\min}$, is chosen and introduced into the parametric equations of the line, giving the coordinates of the point of intersection with the first diopter:

$$x_{1i} = \lambda_{\min} L_i + x_{pi}, \qquad y_{1i} = \lambda_{\min} M_i + y_{pi}, \qquad z_{1i} = \lambda_{\min} N_i + z_{pi} \tag{10}$$

Knowing the coordinates of the point of intersection between the optical ray and the first diopter of the optical system, the vectorial raytracing relations give the cosines of the angles of incidence $i_{1i}$ and emergence $i'_{1i}$ for the first diopter:

$$\cos i_{1i} = L_i (o - \rho_1 x_{1i}) + M_i (p - \rho_1 y_{1i}) + N_i (q - \rho_1 z_{1i}), \qquad \cos i'_{1i} = \sqrt{1 - \left(\frac{n_1}{n'_1}\right)^2 \left(1 - \cos^2 i_{1i}\right)} \tag{11}$$

where:

$$\rho_1 = \frac{1}{r_1} \tag{12}$$

With these values, the input data of the vectorial raytracing for the second diopter can be calculated, and so on up to the last diopter. The optical scheme of a wide-angle lens that can be used within the optoelectronic system for determining the position of a light source placed on a mobile system is presented in Fig. 7, where F and F' are the object- and image-side principal foci, H and H' are the object- and image-side principal points, Dd is the aperture diaphragm, Pi is the entrance pupil and Pe is the exit pupil.
Fig. 7. Optical scheme of a wide-angle lens
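Steps 9–10 of the procedure, i.e. intersecting the parametric ray (5) with the sphere of the first diopter (3) via the quadratic (8) and keeping the root of smallest modulus (10), can be sketched as follows (an illustration with my own naming, not the authors' code):

```python
import math

def intersect_first_diopter(p, direction, center, r1):
    """Intersect the ray x = p + lam * direction with the sphere of radius |r1|
    centred at `center`: solve A*lam^2 + B*lam + C = 0 (relations (6)-(9)) and
    keep the root of smallest modulus, lam_min (relation (10))."""
    L, M, N = direction
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    A = L * L + M * M + N * N
    B = 2.0 * (L * dx + M * dy + N * dz)
    C = dx * dx + dy * dy + dz * dz - r1 * r1
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                      # the ray misses the first diopter
    lam_min = min(((-B + math.sqrt(disc)) / (2.0 * A),
                   (-B - math.sqrt(disc)) / (2.0 * A)), key=abs)
    return tuple(p[k] + lam_min * direction[k] for k in range(3))
```

For instance, an axial ray launched from z = −2 towards a unit sphere at the origin first meets it at (0, 0, −1).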
4 Conclusions

Two optoelectronic systems were designed for the adaptive control of minirobots in relation to the topography of the terrain on which they move. The optoelectronic system with electric actuators was developed before the solid-state one and, although more complex from a mechanical point of view, has the advantage of being cheaper. The solid-state optoelectronic system, having no electric actuators or moving mechanical elements, is simpler, more reliable and has a high measurement frequency. Furthermore, thanks to the wide-angle lenses, the field of view is large enough to cover the entire workspace, so no rotation of the system is needed. The solid-state optoelectronic system can be adapted to control a drone moving in 3D space by replacing the linear image sensors with surface image sensors.
References

1. Băcescu, D., Panaitopol, H., Băcescu, D.M., Bogatu, L.: Research and manufacturing of virtual sensors in terms of mechatronical concepts. In: The 5th International Conference of Mechatronics "Mechatronics 2004", Elektronika: konstrukcje, technologie, zastosowania, nr. 8-9/2004, Warsaw, Poland, pp. 70–72 (2004)
2. Petrache, S., Duminică, D., Cartal, L.A., Apostolescu, T.C., Ionaşcu, G., Bogatu, L.: A new method for establishing the performances of an optoelectronic sensor. In: Gheorghe, G.I. (ed.) ICOMECYME 2017. LNNS, vol. 20, pp. 204–219. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-63091-5_24
3. Băcescu, D., Petrache, S., Alexandrescu, N., Duminică, D.P.: Adaptive control of a virtual mini robot with autonomous displacement using a virtual sensor system. The Romanian Review Precision Mechanics, Optics & Mechatronics, no. 39, pp. 169–172 (2011)
4. Băcescu, D., Panaitopol, H., Băcescu, M.D., Bogatu, L.: Optoelectronic sensors family for determine of the normal direction of the plane with variable space movement. Revista Mecatronica (2), 16–18 (2004)
5. Băcescu, D., Panaitopol, H., Băcescu, D.M., Bogatu, L., Petrache, S.: Optoelectronic sensor with quadrant diode patterns used in the mobile robots navigation. In: Jablonski, R., Turkowski, M., Szewczyk, R. (eds.) Recent Advances in Mechatronics, pp. 136–140. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73956-2_28
6. Petrache, S., Alexandrescu, N., Băcescu, D.: Increasing the position tracking accuracy of an autonomous mobile minirobot, an essential condition for the remote command and control of the motions. J. Control Eng. Appl. Inform. 13(2), 50–55 (2011)
7. Zhao, Z., et al.: Multi-camera-based universal measurement method for 6-DOF of rigid bodies in world coordinate system. Sensors 20(19), 21 (2020)
Cocktail Party Graphs and an Optimal Majorization for Some of the Generalized Krein Parameters of Symmetric Association Schemes

Vasco Moço Mano1(B) and Luís Almeida Vieira2

1 University of Porto, Porto, Portugal
[email protected]
2 Faculty of Engineering, University of Porto, Porto, Portugal
[email protected]
Abstract. In this paper we deduce a majorant for some of the generalized Krein parameters of a symmetric association scheme and we analyze the particular case of Cocktail Party Graphs to show the optimality of this majorant.

Keywords: Algebraic combinatorics · Association scheme · Matrix analysis · Strongly regular graph · Cocktail party graph

1 Introduction
The present text takes as a starting point the work developed in [1] on symmetric association schemes. Along this text we adopt the notation introduced in the above mentioned paper, which is an extension of the work published on other papers such as [2] or [3]. In our previous work we established a generalization of the Krein parameters associated to a symmetric association scheme which, in turn, allowed us to deduce some necessary conditions for the existence of these complex combinatorial structures. This work is organized in the following way. In Sect. 2 we introduce the basic concepts on symmetric association schemes together with a brief survey on the work previously developed. Section 3 is devoted to the deduction of a majorant for some of the generalized Krein parameters of a symmetric association scheme. Section 4 contains a short introduction on strongly regular graphs as a particular case of symmetric association schemes. Finally, in Sect. 5, we consider the family of Cocktail Party Graphs to show that the majorant previously deduced is optimal.
2 Symmetric Association Schemes and Some Results

In this section we will briefly introduce the concept of symmetric association scheme. We follow the notation introduced in [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 174–181, 2022. https://doi.org/10.1007/978-3-031-09385-2_16
Let $n$ and $d$ be natural numbers. Given a finite set $A$ of order $n$ and nonempty subsets $S_i$, $i = 0, 1, \ldots, d$, of $A \times A$, a symmetric association scheme with $d$ classes is simply a pair $(A, \{S_i\}_{i=0}^{d})$ verifying:

(i) $S_0 = \{(a, a) : a \in A\}$;
(ii) if $(a, b) \in S_i$, then $(b, a) \in S_i$, for all $a, b$ in $A$ and $i$ in $\{0, 1, \ldots, d\}$;
(iii) for all $i, j, l$ in $\{0, 1, \ldots, d\}$ there is an integer $\alpha_{ij}^{l}$ such that, for all $(a, b)$ in $S_l$,

$$|\{c \in A : (a, c) \in S_i \text{ and } (c, b) \in S_j\}| = \alpha_{ij}^{l}.$$
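As a quick illustration (my own example, not from the paper), the 5-cycle C5 with its distance relations forms a symmetric association scheme with d = 2 classes; writing the relations S1 (pairs at distance 1) and S2 (pairs at distance 2) as 0/1 matrices, conditions (ii) and (iii) become matrix symmetry and the constancy of the intersection numbers:

```python
import numpy as np

n = 5
M0 = np.eye(n, dtype=int)                                  # S0: the diagonal pairs
M1 = np.array([[1 if (i - j) % n in (1, n - 1) else 0      # S1: distance-1 pairs
                for j in range(n)] for i in range(n)])
M2 = np.ones((n, n), dtype=int) - M0 - M1                  # S2: distance-2 pairs

# (ii) each relation is symmetric:
assert np.array_equal(M1, M1.T) and np.array_equal(M2, M2.T)
# (iii) the number of intermediate elements c is constant on each class; e.g.
# |{c : (a,c) in S1 and (c,b) in S1}| equals 2, 0, 1 on S0, S1, S2 respectively:
assert np.array_equal(M1 @ M1, 2 * M0 + 0 * M1 + 1 * M2)
assert np.array_equal(M1 @ M2, 0 * M0 + 1 * M1 + 1 * M2)
```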
The definition presented is due to Bose and Shimamoto [4]. By (ii), every relation $S_i$ is symmetric in $A \times A$, hence the reason for calling every scheme defined in this way a symmetric one. In our work we only consider symmetric association schemes; for a more general definition of association schemes, which are not necessarily symmetric, see [5]. The parameters $\alpha_{ij}^{l}$ are called the intersection numbers of the symmetric association scheme and can be organized in the intersection matrices $L_0, \ldots, L_d$, where $(L_i)_{lj} = \alpha_{ij}^{l}$; in particular, $L_0$ is the identity matrix. Instead of dealing with $S_0, S_1, \ldots, S_d$, one can consider the respective adjacency matrices $M_0, M_1, \ldots, M_d$, where each $M_i$ is a matrix of order $n$ such that $(M_i)_{ab} = 1$ if $(a, b) \in S_i$, and $(M_i)_{ab} = 0$ otherwise. As expected, the matrices of $(A, \{S_i\}_{i=0}^{d})$ satisfy properties corresponding to (i)–(iii):

(I) $M_0 = I_n$;
(II) $\sum_{i=0}^{d} M_i = J_n$;
(III) $M_i^{\top} = M_i$, $\forall i \in \{0, 1, \ldots, d\}$;
(IV) $M_i M_j = \sum_{l=0}^{d} \alpha_{ij}^{l} M_l$, $\forall i, j \in \{0, 1, \ldots, d\}$;
where $I_n$ and $J_n$ denote the identity and the all-ones matrices of order $n$, respectively, and $M^{\top}$ denotes the transpose of $M$. Note that property (II) is equivalent to stating that $\{M_i\}_{i=0}^{d}$ is a free set. It is also known (see, for instance, [6]) that the product of the elements of $\{M_i\}_{i=0}^{d}$ is commutative. The symmetric matrices $M_0, M_1, \ldots, M_d$ of a symmetric association scheme span a commutative algebra, called the Bose–Mesner algebra (see [7]) and denoted by $\mathcal{B}$, with dimension $d + 1$. This is an algebra with respect to the usual matrix product and also to the Schur product of matrices, defined as $(X \circ Y)_{ij} = X_{ij} Y_{ij}$ for any two matrices $X$ and $Y$ of the same order. Note that $\mathcal{B}$ is an associative and Abelian algebra with unit $J_n$ with respect to $\circ$. The Bose–Mesner algebra $\mathcal{B}$ has a unique basis of minimal orthogonal idempotents $\{F_0, F_1, \ldots, F_d\}$, that is, $F_i^2 = F_i$, $F_i F_j = 0$ for all $i \neq j$, and $\sum_{i=0}^{d} F_i = I_n$. The idempotents $F_i$ can be calculated as projectors associated to the matrix $M_1$ of the association scheme using the formula:

$$F_i = \prod_{l=0,\, l \neq i}^{d} \frac{M_1 - \lambda_l I_n}{\lambda_i - \lambda_l}, \tag{1}$$
providing that $M_1$ has $d + 1$ distinct eigenvalues, where the $\lambda$'s are the eigenvalues of $M_1$.

Given a symmetric association scheme $(A, \{S_i\}_{i=0}^{d})$, the Krein parameters, discovered by Scott [8], are the numbers $\beta_{ij}^{l}$, with $0 \le i, j, l \le d$, defined by

$$F_i \circ F_j = \sum_{l=0}^{d} \beta_{ij}^{l}\, F_l.$$

The Krein parameters $\beta_{ij}^{l}$ of $(A, \{S_i\}_{i=0}^{d})$ can be interpreted as the dual parameters of the intersection numbers $\alpha_{ij}^{l}$. In [1] we generalized these parameters. The generalized Krein parameters of $(A, \{S_i\}_{i=0}^{d})$ are the real numbers $\beta_{n_0, n_1, \ldots, n_d}^{l}$, with $l \in \{0, 1, \ldots, d\}$ and $n_0, n_1, \ldots, n_d \in \mathbb{N}$, such that

$$F_0^{\circ n_0} \circ F_1^{\circ n_1} \circ \cdots \circ F_d^{\circ n_d} = \sum_{l=0}^{d} \beta_{n_0, n_1, \ldots, n_d}^{l}\, F_l, \tag{2}$$
where $A^{\circ m}$ denotes the $m$-th Schur power of the matrix $A$, with $A^{\circ 0} = J_n$. We finish this section with some results involving the parameters of a symmetric association scheme that were introduced and proved in [1]. The first result states a general upper and lower bound for the generalized Krein parameters of a symmetric association scheme.

Theorem 1. Let $(A, \{S_i\}_{i=0}^{d})$ be a symmetric association scheme with $d$ classes. Then, for $l \in \{0, 1, \ldots, d\}$ and $n_0, n_1, \ldots, n_d \in \mathbb{N}$, the generalized Krein parameters of $(A, \{S_i\}_{i=0}^{d})$ satisfy the double inequality: $0 \le \beta_{n_0, n_1, \ldots, n_d}^{l} \le 1$.

Besides each generalized Krein parameter being a real number that necessarily lies in $[0, 1]$, we also have the next equality relating all the generalized Krein parameters in a given direction.

Theorem 2. Let $(A, \{S_i\}_{i=0}^{d})$ be a symmetric association scheme with $d$ classes. Then, for any $l \in \{0, 1, \ldots, d\}$ and $m \in \mathbb{N}$, we have that

$$\sum_{n_0 + n_1 + \cdots + n_d = m} \binom{m}{n_0, n_1, \ldots, n_d}\, \beta_{n_0, n_1, \ldots, n_d}^{l} = 1, \tag{3}$$

where $0 \le n_i \le m$ for each $i \in \{0, 1, \ldots, d\}$, and

$$\binom{m}{n_0, n_1, \ldots, n_d} = \frac{m!}{n_0!\, n_1! \cdots n_d!}.$$

Theorem 2 establishes in (3) that the sum of all the generalized Krein parameters in a given direction with a given total sum of exponents, weighted by the corresponding multinomial coefficients, is always equal to 1. Besides the lower and upper bounds presented in Theorem 1, this constitutes an extra condition that the generalized Krein parameters must satisfy for the symmetric association scheme to be able to exist.
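As an illustrative numerical check (my own sketch, not part of the paper), relation (3) can be verified on the 5-cycle scheme: the minimal idempotents $F_l$ are assembled from an eigendecomposition of $M_1$, and each parameter is read off through $\beta^l = \operatorname{tr}(X F_l)/\operatorname{tr}(F_l)$, which follows from $X = \sum_l \beta^l F_l$ and the orthogonality of the $F_l$:

```python
import numpy as np
from math import factorial

# Adjacency matrix M1 of the 5-cycle C5 (a 2-class symmetric association scheme).
n = 5
M1 = np.array([[1.0 if (i - j) % n in (1, n - 1) else 0.0 for j in range(n)]
               for i in range(n)])

# Minimal idempotents F_l: sum the spectral projectors of each distinct eigenvalue.
w, V = np.linalg.eigh(M1)
eigs = sorted(set(np.round(w, 8)))
F = [sum(np.outer(V[:, k], V[:, k]) for k in range(n) if np.round(w[k], 8) == e)
     for e in eigs]
d = len(F) - 1  # here d = 2

def beta(exps, l):
    """Generalized Krein parameter: Schur-multiply the F's and read off the
    coefficient of F_l via tr(X F_l) / tr(F_l)."""
    X = np.ones((n, n))          # the zeroth Schur power is J_n
    for Fi, e in zip(F, exps):
        for _ in range(e):
            X = X * Fi           # '*' is the entrywise (Schur) product
    return np.trace(X @ F[l]) / np.trace(F[l])

def multinomial(m, exps):
    r = factorial(m)
    for e in exps:
        r //= factorial(e)
    return r

# Relation (3): for each direction l, the multinomially weighted sum over all
# exponent vectors with n0 + n1 + n2 = m equals 1.
m = 2
for l in range(d + 1):
    total = sum(multinomial(m, (a, b, m - a - b)) * beta((a, b, m - a - b), l)
                for a in range(m + 1) for b in range(m + 1 - a))
    assert abs(total - 1.0) < 1e-9
```

The check works because $\sum_{n_0+n_1+n_2=m} \binom{m}{n_0,n_1,n_2} F_0^{\circ n_0} \circ F_1^{\circ n_1} \circ F_2^{\circ n_2} = (F_0+F_1+F_2)^{\circ m} = I_n$, whose coefficients in the basis $\{F_l\}$ are all 1.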
3 A Majorant for Some of the Generalized Krein Parameters of a Symmetric Association Scheme
In this section we consider the generalized Krein parameters as defined above in (2). Now, for $i, j \in \{0, 1, \ldots, d\}$, let $n_i, n_j \neq 0$ and $n_l = 0$ for all $l \in \{0, 1, \ldots, d\} \setminus \{i, j\}$. Since we defined every null Schur power of any matrix to be the all-ones matrix, we can write:

$$F_0^{\circ n_0} \circ F_1^{\circ n_1} \circ \cdots \circ F_d^{\circ n_d} = F_i^{\circ n_i} \circ F_j^{\circ n_j}.$$

The corresponding spectral decomposition can be written as:

$$F_i^{\circ n_i} \circ F_j^{\circ n_j} = \sum_{l=0}^{d} \beta_{0 \ldots 0\, n_i\, 0 \ldots 0\, n_j\, 0 \ldots 0}^{l}\, F_l.$$

Regarding these generalized Krein parameters, we can establish the following result.

Theorem 3. Let $(A, \{S_i\}_{i=0}^{d})$ be a symmetric association scheme with $d$ classes. Then, for $l \in \{0, 1, \ldots, d\}$ and $n_0, n_1, \ldots, n_d \in \mathbb{N}$ such that all but two, $n_i$ and $n_j$, are equal to 0, the generalized Krein parameters of $(A, \{S_i\}_{i=0}^{d})$ satisfy:

$$\beta_{0 \ldots 0\, n_i\, 0 \ldots 0\, n_j\, 0 \ldots 0}^{l} \le \frac{1}{\binom{n_i + n_j}{n_j}}. \tag{4}$$
Proof. Let $\otimes$ denote the Kronecker product between matrices, defined as
$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix},$$
where $A$ is any $m \times n$ matrix and $B$ is any matrix. Let $\sigma_\otimes$ be the following sum of Kronecker products:
$$\sigma_\otimes = \underbrace{F_i \otimes \cdots \otimes F_i}_{n_i \text{ times}} \otimes \underbrace{F_j \otimes \cdots \otimes F_j}_{n_j \text{ times}} + \underbrace{F_i \otimes \cdots \otimes F_i}_{n_i - 1 \text{ times}} \otimes F_j \otimes F_i \otimes \underbrace{F_j \otimes \cdots \otimes F_j}_{n_j - 1 \text{ times}} + \cdots + \underbrace{F_j \otimes \cdots \otimes F_j}_{n_j \text{ times}} \otimes \underbrace{F_i \otimes \cdots \otimes F_i}_{n_i \text{ times}}.$$
$\sigma_\otimes$ consists of $\binom{n_i+n_j}{n_j}$ summands of Kronecker products between the idempotents $F_i$ and $F_j$, such that in each summand $F_j$ appears $n_j$ times among the $n_i + n_j$
V. M. Mano and L. A. Vieira
possible positions and $F_i$ appears in the remaining positions, over all possible permutations. Using the properties of the Kronecker product, namely that $(A \otimes B)(C \otimes D) = (AC \otimes BD)$ for any four square matrices of equal dimension (see [9, Lemma 4.2.10]), and since $F_i$ and $F_j$ are orthogonal idempotents with respect to the usual matrix product, we can conclude that $\sigma_\otimes$ is an idempotent matrix. Thus its eigenvalues are 0 and 1. Since the Schur product is commutative, $\binom{n_i+n_j}{n_j} F_i^{\circ n_i} \circ F_j^{\circ n_j}$ is a principal submatrix of $\sigma_\otimes$. Hence, by the eigenvalue interlacing theorem (see [10, Theorem 4.3.15]), it follows that
$$0 \le \binom{n_i + n_j}{n_j} \beta^l_{0 \ldots 0 n_i 0 \ldots 0 n_j 0 \ldots 0} \le 1 \iff 0 \le \beta^l_{0 \ldots 0 n_i 0 \ldots 0 n_j 0 \ldots 0} \le \frac{1}{\binom{n_i + n_j}{n_j}}.$$
Theorem 3 provides a potentially tighter upper bound for some of the generalized Krein parameters of symmetric association schemes relative to the one presented in Theorem 1. Later in this paper we present an example of a family of symmetric association schemes with two classes in which this majorant is asymptotically attained.
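The majorant in (4) can be tabulated directly; a quick sketch (ours, illustrative only):

```python
from math import comb

def krein_majorant(n_i, n_j):
    """Right-hand side of (4): 1 / C(n_i + n_j, n_j)."""
    return 1 / comb(n_i + n_j, n_j)

# the bound never exceeds the generic bound 1 of Theorem 1,
# and is strictly smaller whenever n_i, n_j >= 1
for n_i, n_j in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    b = krein_majorant(n_i, n_j)
    assert b <= 1
    print(n_i, n_j, b)
```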
4 Strongly Regular Graphs: A Particular Type of Symmetric Association Schemes
It is natural to observe a certain correspondence between symmetric association schemes and graphs. Because of the nature of the elements of $\{M_i\}_{i=0}^{d}$, it is straightforward to consider the simple and undirected graphs $G_1, G_2, \ldots, G_d$, sharing a common vertex set $V$, of which $M_1, M_2, \ldots, M_d$ are the respective adjacency matrices. The case with only one class yields the matrices $M_0 = I_n$ and $M_1 = J_n - I_n$. This one-class association scheme corresponds to the graph $G_0$, in which each vertex is connected only to itself, together with the complete graph $G_1$; thus it is of no interest. When we move on to the next case and consider a scheme with two classes, we come face to face with a much more interesting family of graphs: the strongly regular graphs. These graphs are simple, regular and undirected, with the following extra regularity condition: the number of common neighbors of every pair of vertices is fixed and depends only on the nature of that pair, that is, on whether the two vertices are adjacent or non-adjacent. Therefore, a strongly regular graph is usually paired with a specific parameter set $(n, k, a, c)$, where $n$ is the number of vertices, $k$ is the regularity, $a$ is the number of common neighbors of every
pair of adjacent vertices and $c$ is the number of common neighbors of every pair of non-adjacent vertices. Indeed, the matrices of a symmetric association scheme with two classes are $M_0 = I_n$, $M_1$, $M_2 = J_n - M_1 - I_n$, where $M_1$ and $M_2$ can be regarded as the adjacency matrices of a strongly regular graph and its complement, respectively. Given a graph $G$, its complement, $\overline{G}$, is the graph whose adjacent vertices are the non-adjacent vertices of $G$ and vice-versa. Also, it is well known that, if $G$ is strongly regular with parameter set $(n, k, a, c)$, then $\overline{G}$ is also strongly regular, with parameter set $(n, \overline{k}, \overline{a}, \overline{c})$, where $\overline{k} = n - k - 1$, $\overline{a} = n - 2 - 2k + c$ and $\overline{c} = n - 2k + a$. Conversely, given the adjacency matrix $M$ of a strongly regular graph, the matrices $I_n$, $M$, $J_n - M - I_n$ constitute an association scheme with two classes. The eigenvalues of the adjacency matrix of a strongly regular graph are $k$, $\theta$ and $\tau$, where $\theta$ and $\tau$ can be written in terms of the parameters of the graph:
$$\theta = \frac{a - c + \sqrt{(a-c)^2 + 4(k-c)}}{2} \tag{5}$$
$$\tau = \frac{a - c - \sqrt{(a-c)^2 + 4(k-c)}}{2}. \tag{6}$$
Using formula (1), one can obtain the elements of the unique basis of orthogonal idempotents associated to a strongly regular graph with adjacency matrix $A$, written in the basis $\{I_n, A, J_n - A - I_n\}$:
$$F_0 = \frac{\theta-\tau}{n(\theta-\tau)} I_n + \frac{\theta-\tau}{n(\theta-\tau)} A + \frac{\theta-\tau}{n(\theta-\tau)} (J_n - A - I_n),$$
$$F_1 = \frac{-\tau n+\tau-k}{n(\theta-\tau)} I_n + \frac{n+\tau-k}{n(\theta-\tau)} A + \frac{\tau-k}{n(\theta-\tau)} (J_n - A - I_n),$$
$$F_2 = \frac{\theta n+k-\theta}{n(\theta-\tau)} I_n + \frac{-n+k-\theta}{n(\theta-\tau)} A + \frac{k-\theta}{n(\theta-\tau)} (J_n - A - I_n).$$
For an extended survey on strongly regular graphs refer to [11].
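As a sanity check on these formulas, the following sketch (ours, not part of the paper) evaluates $F_0$, $F_1$, $F_2$ for the pentagon, the strongly regular graph with parameter set $(5, 2, 0, 1)$, and verifies that they are pairwise orthogonal idempotents summing to the identity.

```python
import numpy as np

def srg_idempotents(n, k, a, c, A):
    """F0, F1, F2 in the basis {I, A, J-A-I}, per the formulas above."""
    disc = np.sqrt((a - c)**2 + 4*(k - c))
    theta, tau = (a - c + disc)/2, (a - c - disc)/2        # eqs (5), (6)
    I, J = np.eye(n), np.ones((n, n))
    B = J - A - I
    s = n*(theta - tau)
    F0 = ((theta - tau)*I + (theta - tau)*A + (theta - tau)*B)/s   # = J/n
    F1 = ((-tau*n + tau - k)*I + (n + tau - k)*A + (tau - k)*B)/s
    F2 = ((theta*n + k - theta)*I + (-n + k - theta)*A + (k - theta)*B)/s
    return F0, F1, F2

# pentagon C5: strongly regular with parameter set (5, 2, 0, 1)
A = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],
              [0,0,1,0,1],[1,0,0,1,0]], float)
F0, F1, F2 = srg_idempotents(5, 2, 0, 1, A)
for F in (F0, F1, F2):
    assert np.allclose(F @ F, F)                     # idempotent
assert np.allclose(F0 @ F1, 0) and np.allclose(F1 @ F2, 0)   # orthogonal
assert np.allclose(F0 + F1 + F2, np.eye(5))          # resolution of identity
```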
5 Cocktail Party Graphs: An Example
As we saw in Sect. 4, a symmetric association scheme with two classes is equivalent to a strongly regular graph together with its complement. In this section we study a particular family of strongly regular graphs called the Cocktail Party Graphs. A Cocktail Party Graph of order $n$ is a graph with $2n$ vertices divided into $n$ columns such that every pair of vertices is adjacent except the pairs belonging to the same column. These graphs are denoted by $K_{n \times 2}$ in [12]. The first three Cocktail Party Graphs, $K_{2\times2}$, $K_{3\times2}$ and $K_{4\times2}$, are depicted in Fig. 1.
Fig. 1. The first three members of the Cocktail Party Graph family, K2×2, K3×2 and K4×2, respectively.
The name of this family of graphs comes from the situation that arises when a party for couples is held and everyone is supposed to shake hands with everyone at the event except themselves and their partner. As mentioned above, every Cocktail Party Graph is a strongly regular graph.

Proposition 1. A Cocktail Party Graph of order $n$ is strongly regular with parameter set $(2n, 2n-2, 2n-4, 2n-2)$.

The statement in Proposition 1 can be deduced straightforwardly from the definition of a Cocktail Party Graph of order $n$ and the meaning of the parameters of a strongly regular graph. The fact that the total number of vertices in $K_{n\times2}$ is $2n$ is obvious. It is also clear that every vertex is adjacent to all vertices of the graph except itself and its companion in its column, which makes $K_{n\times2}$ a regular graph of valency $2n-2$. It should be equally clear that any pair of non-adjacent vertices, that is, two vertices belonging to the same column, have $2n-2$ common neighbors. Finally, given a pair of adjacent vertices $x$ and $y$ in $K_{n\times2}$, if we consider the two vertices which are not adjacent to $x$ and $y$, say $x^*$ and $y^*$, respectively, then we can easily conclude that $x$ and $y$ have $2n-4$ common neighbors, corresponding to the total number of vertices in the graph minus $x$, $y$, $x^*$ and $y^*$.

We now make use of this family of Cocktail Party Graphs to show that the majorant introduced in Theorem 3 is as good as it can be. Consider the Cocktail Party Graphs $K_{l\times2}$. Since these graphs are strongly regular with parameter set $(2l, 2l-2, 2l-4, 2l-2)$, we can compute their eigenvalues as a function of $l$, using (5) and (6): $k = 2l-2$, $\theta = 0$ and $\tau = -2$. Calculating the generalized Krein parameter $\beta^1_{011}$, we obtain:
$$\beta^1_{011} = \frac{(-\tau n + \tau - k)(\theta n + k - \theta)}{n^2(\theta-\tau)^2} + \frac{(n + \tau - k)(-n + k - \theta)\theta}{n^2(\theta-\tau)^2} + \frac{(\tau - k)(k - \theta)(-\theta - 1)}{n^2(\theta-\tau)^2}.$$
Substituting $k$, $\theta$ and $\tau$ in the previous formula by the corresponding values calculated above, we conclude that
$$\beta^1_{011} = \frac{l-1}{2l}. \tag{7}$$
Analyzing (7), we observe that, as the value of $l$ increases, the value of $\beta^1_{011}$ converges to the majorant obtained in (4), that is, to $1/\binom{1+1}{1} = 1/2$. This means that we can find a suitable Cocktail Party Graph such that the corresponding generalized Krein parameter is as close as we want to the upper bound $1/2$. In other words, the majorant presented in Theorem 3 is the best possible for these generalized Krein parameters of symmetric association schemes, as it cannot be improved.
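The convergence can also be checked numerically; the sketch below (ours, not part of the paper) builds the adjacency matrix of $K_{l\times2}$, computes $\beta^1_{011}$ from the spectral projectors, and confirms formula (7).

```python
import numpy as np

def cocktail_party(l):
    """Adjacency matrix of K_{l x 2}: the complete graph on 2l vertices
    minus a perfect matching (vertex 2i is non-adjacent to vertex 2i+1)."""
    n = 2*l
    A = np.ones((n, n)) - np.eye(n)
    for i in range(l):
        A[2*i, 2*i + 1] = A[2*i + 1, 2*i] = 0
    return A

def beta_011(l):
    """beta^1_{011} for K_{l x 2}: the coefficient of F1 in the Schur
    product F1 o F2, extracted as tr(M F1) / tr(F1)."""
    A = cocktail_party(l)
    w, V = np.linalg.eigh(A)
    theta, tau = 0.0, -2.0                 # eigenvalues besides k = 2l - 2
    F1 = V[:, np.isclose(w, theta)] @ V[:, np.isclose(w, theta)].T
    F2 = V[:, np.isclose(w, tau)] @ V[:, np.isclose(w, tau)].T
    M = F1 * F2                            # Schur (entrywise) product
    return np.trace(M @ F1) / np.trace(F1)

for l in (2, 5, 50):
    assert np.isclose(beta_011(l), (l - 1) / (2*l))   # formula (7)
# the values 1/4, 2/5, 49/100 approach the majorant 1/2 from (4)
```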
Acknowledgments. In this work Luís Vieira was partially supported by the Center of Research of Mathematics of University of Porto (UID/MAT/00144/2013), which is funded by FCT (Portugal) with national (MEC) and European structural funds through the program FEDER, under the partnership agreement PT2020.
References
1. Mano, V.M., Vieira, L.A.: A generalization of the Krein parameters of a symmetric association scheme. In: Machado, J., Soares, F., Trojanowska, J., Ivanov, V. (eds.) icieng 2021. LNME, pp. 453–460. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-78170-5_39
2. Mano, V., Vieira, L.: Symmetric association schemes and generalized Krein parameters. Int. J. Math. Models Methods Appl. Sci. 9, 310–314 (2015)
3. Mano, V.M., Martins, E.A., Vieira, L.A.: Some results on the Krein parameters of an association scheme. In: Bourguignon, J.-P., Jeltsch, R., Pinto, A.A., Viana, M. (eds.) Dynamics, Games and Science. CSMS, vol. 1, pp. 441–454. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16118-1_23
4. Bose, R.C., Shimamoto, T.: Classification and analysis of partially balanced incomplete block designs with two associate classes. J. Am. Statist. Assoc. 47, 151–184 (1952)
5. Delsarte, Ph.: An algebraic approach to the association schemes of coding theory. Philips Res. Rep. Suppl. 10 (1973)
6. Graham, R.L., Grötschel, M., Lovász, L. (eds.): Handbook of Combinatorics, vol. 1. The MIT Press, North-Holland (1995)
7. Bose, R.C., Mesner, D.M.: On linear associative algebras corresponding to association schemes of partially balanced designs. Ann. Math. Statist. 30, 21–38 (1959)
8. Scott, L.L., Jr.: A condition on Higman's parameters. Notices Am. Math. Soc. 20, A-97, 721-20-45 (1973)
9. Horn, R., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
10. Horn, R., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)
11. van Lint, J.H., Wilson, R.M.: A Course in Combinatorics. Cambridge University Press, Cambridge (2004)
12. Brouwer, A.E., Cohen, A.M., Neumaier, A.: Distance-Regular Graphs. Springer-Verlag, Berlin (1989). https://doi.org/10.1007/978-3-642-74341-2
The Structure of Automated Control Systems for Precision Machining of Parts Bearing

Ivanna Trokhymchuk, Kostiantyn Svirzhevskyi, Anatolii Tkachuk, Oleg Zabolotnyi, and Valentyn Zablotskyi
Lutsk National Technical University, 75, Lvivska Street, Lutsk 43018, Ukraine [email protected]
Abstract. One of the ways to improve the quality of bearing parts is the introduction of automatic control systems, which improve the technological reliability of machines and increase the accuracy of machining of roller bearing rings. A significant advantage of these systems is that technological factors can be compensated by cheaper means than alternatives such as increasing the rigidity of the technological system, processing in modes with lower productivity, processing using more passes, using manual methods to compensate for wear of cutting tools, or maintaining the required rigidity of the machine and the accuracy of its elements through periodic repairs. Those methods are associated either with a loss of cyclical productivity or with significant unproductive losses. The errors of automated control systems should therefore be considered as processing errors, that is, as a scattering field of the dimensions of parts manufactured on a machine equipped with an automatic control system for precision machining. The share of the error of the automated control system itself in the total balance of the size error is quite small and does not exceed 10…20%. The task of improving accuracy and ensuring stability in technological systems is complex, so it is solved only comprehensively, by improving the accuracy of all elements of the technological system. Thus, automated control systems for precision machining of bearing parts are one of the main subsystems of flexible automated production in the concept of Industry 4.0.

Keywords: Accuracy · Active control · Machining process · Signal · Tolerance field
1 Introduction

To determine the expected accuracy of the automated technological system that controls the process of manufacturing bearing parts, it is advisable to consider the errors of the system that occur under normal operating conditions. Normal conditions are understood as the set of physicochemical parameters of the system under which the influence of external factors on the system is minimal. The measurement limits of each of the parameters that characterize normal operating conditions are the main characteristic. Failure to comply with these conditions causes additional errors that can reduce the accuracy and reliability of the system and distort the quality characteristics.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 182–192, 2022. https://doi.org/10.1007/978-3-031-09385-2_17

The criterion
for the accuracy of the automatic technological system is the total processing error. During research and calculations, however, there is the problem of identifying its components and their shares in the complex error. The sources of element-by-element methodological errors caused by imperfection of the measurement method may be non-compliance with the principles of construction of measurement schemes, ignoring shape errors, control without taking into account the temperature of the part, and control without taking into account deformations. Deviation from the theoretical schemes that allow obtaining the highest accuracy is dictated by the achievable simplification of the technological system as a whole. One way of reducing the influence of technological factors on the accuracy of processing is the inclusion of a control unit in the technological system [1]. But here it must be considered that the introduction of each new link into the machine tool - tool - part - fixture system can become a source of additional failures. Thus, it is necessary, if possible, to avoid complex multi-circuit control systems, using single-circuit systems based on direct measurement methods, which allow comprehensive compensation of processing errors by technological methods [2].
2 Literature Review

The efficiency of flexible automated readjustment production largely depends on the specification of input parameters for the formation of control programs. The optimal number and informational weight of these parameters can be identified through comparative analysis and implemented in the appointment of priority factors influencing dimensional accuracy [3]. Adjusting devices synthesized on the basis of optimal algorithms choose the value of the adjusting pulse using all available information about the previous pulses and the sizes of the machined parts, which is especially important because in real conditions the process of offsetting the machine is random. If debugging level changes were a deterministic process, the debugging levels could be determined with any degree of accuracy [4]. In the machining of products, along with the deterministic component, often systematic and linear, there is a random component, which determines the nature of the process. In such circumstances, the magnitude of the uniform systematic offset of the debugging per part may also vary from implementation to implementation. Thus, only a statistical assessment of the level of debugging of the machine on each cycle will ensure effective control of the accuracy of such processes. Active size control systems are used in almost all bearing manufacturing operations. Such systems allow compensating both functional and accidental errors in the course of processing [5]. Thus, these systems have the greatest compensatory properties and, accordingly, correspond to the simplest mathematical models [6, 7]. The high-frequency component of technological errors is one of the most difficult to compensate, as it arises as a result of fluctuations in the values of allowances for processing. To reduce it, it is necessary to increase the accuracy of previous operations [8]. Therefore, active control systems are used in all grinding operations.
In general, active control includes any method of control whose results, manually or automatically, influence the technological process [9]. Active control can be considered a "size control" system. The main factors that determine the scattering of the sizes of the parts are the
dimensional wear of the cutting tool and the thermal and force deformations of the technological system [10]. The geometric or static accuracy of the machine tool mainly determines the shape errors of the workpieces and has no significant effect on the scattering of dimensions [11]. The main purpose of the method of active dimensional control is to eliminate the impact of cutting tool wear and of the thermal and force deformations of the technological system on the accuracy of machining. Measurement errors are part of the total error of active control [12]. The share of the static error of the devices in the total processing error is quite insignificant; in some cases, the share of the error of the devices is only 2…4% of the total error of the control. The rest of the error is determined mainly by the influence of technological factors. Grinding of the holes and rolling tracks of roller bearing rings is in the vast majority of cases performed according to the scheme of centerless grinding. The bearing ring with its base end is mounted on the surface of the cartridge with a magnetic grip, and the other base surface, the outer raceway, rests on two supports whose working surface is made in the shape of the raceway. These supports are installed at a given angle to the center of the ring, and the center of the conditional circle, around the perimeter of which the working surfaces lie, is slightly above the geometric center of the ring. Thus, some eccentricity is created, due to which an additional force acts on the ring during grinding, forcing it to be constantly pressed to the supports [13].
3 Research Methodology

The geometric or static accuracy of the machine determines the shape errors of the workpieces and does not significantly affect the scattering of dimensions [1, 8]. The main use of the method of active control of sizes consists in eliminating the influence of cutting tool wear and of the thermal and force deformations of technological systems on the accuracy of processing. Measuring errors are part of the total error of active control, but the share of the static error of the instruments in the total processing error is quite small. The share of instrument error is only 2…4% of the total error. The remaining errors are determined by the influence of technological factors. Grinding of the holes and rolling tracks of roller bearing rings is performed according to the scheme of centerless grinding. Figure 1 shows a diagram of the centerless grinding of the hole of the inner ring of a roller bearing. The bearing ring with its base end is mounted on the surface of the cartridge with a magnetic grip, and the other base surface, the outer raceway, rests on two supports whose working surface is made in the shape of the raceway. The base supports are installed at a certain angle to the center of the ring, and the center of the conditional circle, around the perimeter of which the working surfaces lie, is slightly above the geometric center of the ring. Thus, an eccentricity is created, due to which a force acts on the ring during grinding, forcing it to be constantly pressed against the base supports. During grinding, the ring is held by the magnetic chuck, resisting and rubbing the raceway on the surfaces of the base supports, and rotates at a frequency of 200…600 rpm. The direction of rotation is opposite to the direction of rotation of the grinding wheel.
Fig. 1. Scheme of centerless grinding of the hole of the inner ring of the roller bearing
The cutting tool, the grinding wheel, has an independent drive. For internal grinding machines, the spindle is a motor that provides a high speed with minimal vibration. The grinding wheel is mounted on a frame, which is installed in the spindle and provides rotational movement and rectilinear movement along the axis. In addition, the spindle can move at different speeds in the transverse direction. A feature of the control system is that it is a single-acting device that includes both linear elements and elements with nonlinearities in their transmission characteristics [14]. Moreover, all the elements are usually connected in series (Fig. 2).
Fig. 2. Scheme of operational control: MT – machine tool; WZ – working area of machining part; 1 - workpiece; 2 - primary measuring transducer; 3 - control system; 4 - control unit; 5 - the executive body of the machine
The static accuracy of such control systems is affected by wear of the contact tips, errors in the position of the measuring head relative to the controlled surface of the part, temperature and force deformations of the system, and errors in the shape of the part. The primary measuring transducer 1 is installed in the processing area WZ of the part and is used to convert the linear size of the part (in this case, the diameter) into an intermediate signal. The signal is processed by the control system 2, at the output of which control signals are formed. The control signals enter control unit 4 and are implemented by the executive bodies of machine tool 5, which, in turn, move the cutting tool. Errors in the shape of the part, such as deviations from roundness, cut and waviness, for the
186
I. Trokhymchuk et al.
case of machining of cylindrical surfaces, affect not only the result of the process but also the accuracy of measurement in the case of static dimensional monitoring. Let the initial size of the workpiece coincide with the line O1O1 (Fig. 3), and let the size to be obtained, the adjustment size, coincide with the line O2O2. Accordingly, the positions of the lines O1O1 and O2O2 determine the value of the allowance for processing D. The head of the automatic control means measures the detail size xi at some moment in time ti. Due to the presence of shape errors in the measurement plane, this size is not constant and in a first approximation is characterized by a variable, for example sinusoidal, signal with amplitude Ai and period T0, where

Ai = (xi max − xi min)/2 (1)

xi av = (xi max + xi min)/2 (2)
Fig. 3. Graph of resizing during discrete monitoring
The period T0 is determined by the type of shape errors and the speed of movement of the part in the measurement plane. Thus, if the control of the diameter of a cylindrical part rotating at speed n is carried out by a two-point diametric measuring device, and the main error of the shape manifests itself as an oval, then T0 = n/2. For workpieces with a cut in which the number of faces is z, the period is T0 = n/z. The measurement time tmeas is different for different types of devices. For devices with gauge plugs that provide surface contact with the product, the control time tmeas is determined by the value of the longitudinal feed of the tool and the number of double strokes per minute, i.e., the period Tlf, and in the extreme case may be equal to this period, tmeas ≤ Tlf. In practice, there is more often some idle time tidling. When inspecting internal surfaces (holes) with a caliper, the size will be fixed, without taking into account the deformations of the product and caliber, as its smallest acceptable size, and, accordingly, the line of adjustment, or the size of the caliper, should
deviate from the smallest limit size hmin of the detail tolerance field by the value hd. Then the tolerance field Tx = hmax − hmin must be at least:

Tx = 2Ad + hd (3)
where Ad is the amplitude of the error at the end of processing and hd is the change in size between two measurement cycles at the end of processing, equal to twice the thickness of the layer removed in one double stroke. Without taking the dynamic errors into account, in the case of operation in quasi-static mode, a device that has point periodic contact with the surface of the hole will fix as acceptable the largest size of the product hi max, and the debug line should deviate from the largest size by hd. In this case, the tolerance field is determined from condition (3). The measurement time tmeas in such devices must be such that during the control period all points of the controlled product pass through the measuring tips; it is necessary to comply with the condition tmeas > 60/n, where n is the workpiece speed in revolutions per minute. Before starting the measurement, the probes of the measuring tips must be spread apart to a value E that exceeds the allowance D. It should be noted that the stroke Hi max of the probes will increase until it reaches the value Nd work. The cycle time of the measuring elements is defined as

Tpp = tmeas + 2Hi max/vav + tidling (4)
where Nd work = D + E is the stroke of the probes during calibration at the end of processing and vav is the average speed of calibration (feed) of the probes. In some cases, in order to reduce the wear of the tips, the measuring head rotates together with the part, thereby controlling a random size in the range from hi min to hi max. But this method reduces the accuracy of control, as the tolerance in this case will be:

Tx ≥ 4Ad + hd (5)
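The cycle-time relation (4) and the lower bound on tmeas can be sketched numerically; the values below are illustrative only, not taken from the paper.

```python
def cycle_time(t_meas, H_max, v_av, t_idling):
    """Cycle time of the measuring elements, eq. (4):
    Tpp = tmeas + 2*Hi_max/vav + tidling."""
    return t_meas + 2*H_max/v_av + t_idling

def min_measure_time(n_rpm):
    """All points of the part must pass the measuring tips during control,
    so tmeas > 60/n with n the workpiece speed in rpm."""
    return 60.0 / n_rpm

# illustrative values: part at 400 rpm, probe stroke 0.5 mm,
# average probe calibration speed 10 mm/s, 0.05 s idle time
t_meas = 1.1 * min_measure_time(400)   # 0.165 s, satisfies tmeas > 60/n
T_pp = cycle_time(t_meas, H_max=0.5, v_av=10.0, t_idling=0.05)
```

With these numbers Tpp comes to about 0.32 s per measuring cycle, dominated by the measurement time itself.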
Devices that measure one random size reduce the accuracy of processing in general. The change of the size during continuous control, taking errors of form into account, is shown in Fig. 4. The line of adjustment coincides with the line O2O2, and the line O1O1 characterizes the allowance B; removal of the allowance takes place along the SS line and may be performed at a non-constant speed; at the beginning of the measurement process this speed is characterized by the angle αD, and at the end of the processing cycle by the angle αd. In the presence of shape errors, the measuring system will issue a variable signal, in a first approximation close to sinusoidal, with variable amplitude Ai but constant period T0. This period depends on the frequency of rotation of the part and the type of shape errors. The size obtained as a result of processing can be determined as

x = xH − D + F(t) + f(t) (6)
Fig. 4. Graph of resizing during continuous monitoring
where xH is the size that determines the level of adjustment of the machine tool; F(t) is a non-random function of time, or mathematical expectation, that describes the line CC, for example by the least-squares method; and f(t) is a centered random function of time. For slow processes, i.e. for quasi-static modes, the actuation of the system occurs at the first crossing of the debugging level line O2O2 by the size line, after which the feed is turned off and the average size of the details remains unchanged (solid sinusoidal line). But the actuation of the actuator that stops the feed at time t0 (at point O) may not occur. Then the operation occurs at the time t'0 of the next intersection of the variable-size line with the level O2O2 (point O1). But the average size x'av 0 of the detail (dashed sinusoidal line) at this point will differ from xav 0. The difference x'av 0 − xav 0 depends not only on the amplitude Ad of the shape error at the end of the processing cycle but also on the cutting speed at this time and on the size of the angle αd. Based on the graph-analytical method, it can be determined that the difference x'av 0 − xav 0 can be approximately Ad/2 or can exceed this value. The size scattering range for this class of control systems is

Tx ≥ (5/2)Ad (7)
For small feeds at the end of the processing cycle, the tolerance value may be Tx ≈ 2Ad. Devices for active control of parts during machining provide information about the current size of the part in one cross-section. In the presence of shape errors, and given the uncertainty or instability of these errors in both magnitude and geometric shape, the dimensions of individual sections of parts may exceed the tolerance field. In practical cases, the tolerance of the shape error is not indicated in the drawing; it can take any value and type within the tolerance field for the size (Fig. 5). If the controlled section I-I lies in the average position along the length of the part, and the active control device is set in the middle of the tolerance field, then a change of the shape error from concavity (contour 1) to convexity (contour 2) can take the size of the extreme sections outside the tolerance T, in plus or in minus, respectively. Therefore, the scattering of the dimensions of these sections can be 2T, even with absolutely accurate issuance of the final command to stop processing. Locating the controlled section in position II-II eliminates the effect of convexity and concavity of the part, but in this case the danger is the manifestation of direct (contour 3) or reverse (contour 4) taper.
Fig. 5. Typical errors in the shape of cylindrical parts
Dynamic errors of the active control system are determined from the analysis of a typical structural scheme (Fig. 6). In this scheme, the input signal is the size xinput(t) of the surface being machined. The signal from the processing zone arrives in the control zone with some delay, determined by the constant τ1. The signal is received by a transducer, which is an oscillating or aperiodic link with a second-order transfer function, and is transmitted through an amplifier with gain k2 to the trigger and its relay with time-delay constant τ2.
Fig. 6. Block diagram of the active control system
Next, the signal is fed to the actuator relay and, with the time constant of the machine tool, to the actuator of the machine tool.
4 Results

Increased requirements for dimensional accuracy and quality of the machined surfaces at the final stage of grinding can be realized by means of adaptive systems for the formation of control commands (Fig. 7).
In such systems, the end of machining with a given final rate of removal of the allowance is ensured regardless of such variables as the amount of allowance for processing, the feed rate, and the cutting properties of the grinding wheel. In circuits using adaptive devices, the operation of the command to enable the final stage of processing is determined by the current rate of removal of the allowance. There is a linear relationship between the rate of removal of the allowance vp and the value of the allowance for finishing Dv:

Dv = Tr(vv − vk) (8)

where vv is the rate of removal of the allowance at the beginning of finishing; vk is the rate of removal of the allowance at the end of processing; and Tr = tan α is the cutting constant for the machine tool. To ensure a constant final rate of removal of the allowance vk, each initial value of the speed vv must correspond to a certain value of the allowance for final processing Dv. To fulfill this condition, it is necessary that the command for final processing be given when

Dv + Dset0 − Tr vv = 0 (9)
where Dset 0 - set during debugging the value of the allowance, at which vp = 0. Turning to the values of the allowance for processing D and the removal he rates vp , we obtained two states of the system: 1) the mode of grinding processing with giving Dv + Dsv 0 − Tr vp > 0
(10)
2) the feed is switched off and grinding goes into the process of fine grinding Dv + Dsv 0 − Tr vp ≤ 0
(11)
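The switching conditions (10) and (11) amount to a simple sign test. A minimal sketch, using the symbols of the text (D_v allowance for finishing, D_set0 debugged setpoint, T_r cutting constant, v_p current removal rate); the numeric values are invented for illustration:

```python
def grinding_mode(D_v, D_set0, T_r, v_p):
    """Return the machine state implied by Eqs. (10)-(11):
    feed stays engaged while D_v + D_set0 - T_r*v_p > 0,
    otherwise the feed is switched off and fine grinding begins."""
    return "feed" if D_v + D_set0 - T_r * v_p > 0 else "fine grinding"

# Illustrative values only: a large remaining allowance keeps the feed on.
print(grinding_mode(D_v=0.05, D_set0=0.01, T_r=0.4, v_p=0.1))  # feed
```

As the allowance D_v is consumed during the cycle, the left-hand side of (10) crosses zero and the same call returns "fine grinding", which is exactly the moment the adaptive command system switches off the feed.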
In the scheme (Fig. 7), the workpiece is monitored by the measuring head 1. The signal from the measuring transducer is converted in measuring circuit 2 and amplified by unit 3. The electrical signal from amplifier 3, proportional in magnitude to the current allowance U_D, is transmitted to the reading device 4 and to the input of the command shaper 7. The shaper operates by comparing the signal U_D with the signal U_set coming from the control device 9. While U_D ≤ U_set, the shaper is in the off state; when U_D > U_set, the command relay is turned on and the control signal is fed to the circuit of the machine tool 12. A converter 6, connected to the output of amplifier 3, differentiates the signal U_D; the signal dU_D/dt, transmitted to amplifier 5, is therefore proportional to the current rate of allowance removal. At the output of amplifier 5, with gain k3, the polarity of the signal is inverted, giving −k3(dU_D/dt). This signal is fed to command shaper 10. The same shaper receives the voltage U_set0 from the static-level setter 8, and its other input is fed the signal U_D from the output of amplifier 3.

The Structure of Automated Control Systems for Precision Machining

Fig. 7. Scheme of the device with an adaptive command system: 1 - measuring head; 2 - signal converter; 3, 5 - amplifiers; 4 - reading device; 6 - signal differentiation device; 7, 10 - command generation devices; 8 - setter of static level of operation; 9, 11 - task commanders; 12 - control scheme of the machine tool

The signals arriving at the input of shaper 10 correspond to the linear terms of Eqs. (10) and (11). The proposed scheme thus provides closed feedback on the rate of allowance removal in the final stage of grinding, using the feed-tracking mode.
5 Conclusions

The problem of ensuring the accuracy of parts whose manufacturing cycle includes a grinding operation is complex; in automated production it should be addressed at the level of the configuration of the technological system, during design and technological preparation of production, and by optimizing the sequence of processes and processing modes. To ensure product accuracy, it is necessary to follow principles and methods that provide consistent, continuous monitoring of dimensional and other interrelated geometric parameters of the part surfaces; the defining stage for the result, however, is the final grinding operations.

The efficiency of automated production depends on the specification of the input parameters used to form the control programs. The optimal number and informational weight of these parameters can be identified by comparative analysis and implemented when assigning priority to the factors influencing dimensional accuracy. The introduction of automated monitoring and process control improves quality indicators but requires additional costs that affect the cost of production. Automated production requires a consistent process approach, especially at the preparation stage, in order to build a technological process that guarantees the necessary quality indicators; its economic evaluation is carried out both at the beginning of a comprehensive analysis of the technological system and after it.
Study Case Regarding the Evaluation of Eye Refraction in an Optometry Office

Alionte Andreea Dana1, Negoita Alexandra Valentina2, Staetu Gigi Nelu3, and Alionte Cristian Gabriel2(B)

1 Department of Mechanism and Robot Theory, University Politehnica of Bucharest, Bucharest, Romania
2 Department of Mechatronics and Precision Mechanics, University Politehnica of Bucharest, Bucharest, Romania
[email protected]
3 Valahia University of Targoviste, Targoviste, Romania
Abstract. This paper presents basic knowledge regarding refractive eye testing for far and close distance as used in an optometric office, and a comparison is made between subjective and objective methods. Nowadays, depending on the optometrist's experience, each optometric office has a custom mélange of methods, and the main issue is what to do with the different results obtained by each refractive testing method. The comparison methodology took into account complete eye testing based on the most commonly used testing methods, performed for monocular and binocular vision, first for far vision and then for close distance. Keywords: Eye testing · Fogging method · Skiascopy
1 Introduction

According to the Centers for Disease Control and Prevention [1], the vision examination is designed to test distance vision, measure refractive error, measure the shape of the cornea, and determine the distance eyeglass prescription. Vision loss is common in adults, and more than 90% of older people require corrective lenses. The importance of normal vision in children has also motivated many screening initiatives worldwide with governmental support [2]. The following sections present the literature overview, the methodology, which focuses on full eye testing with a minimum number of tests, and the results of applying the testing methodology to 55 clients, showing the differences between the objective and subjective refractive testing methods.
2 Literature Review

For people without special needs, visual performance can be assessed according to their needs (road traffic, industry, reading, etc.), the aim being a qualitative balance of vision under the condition of best visual acuity. The testing methodology involves
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 193–202, 2022. https://doi.org/10.1007/978-3-031-09385-2_18
A. A. Dana et al.
a succession of eye testing methods that have different outcomes, implementations, and performances, in order to find the best measures that can be taken to solve a visual problem. Visual performance testing aims to show the client the exact nature of his or her visual problem. Nowadays, depending on the optometrist's knowledge and competence and on the optical devices available, each optometric office has its own mélange of methods. Thus, the main issue consists in interpreting the different results obtained by each refractive testing method. Eye refraction test methods are divided according to the following criteria [3]:

1. Involvement of the tested person:
• Subjective methods
• Objective methods
2. The number of eyes involved in testing:
• Monocular testing
• Binocular testing
3. The distance between the eye and the object:
• Far distance: more than 6 m between the person and the object
• Close distance: 0.33 m between the person and the object
• Intermediate distance: between the close and the far distance

The literature [1–4] focuses on single eye testing methods regarding acuity, speed, and ease of application.
3 Methodology

The methodology presented in this paper is based on a complete set of eye tests, from far-distance to close-distance vision and from monocular to binocular testing. Usually [2], only one testing method, subjective or objective, is used in the optometric office. The following sections present the testing methodology, with the aim of comparing at least two methods used for refractive eye testing.

3.1 Subjective Method Used for the Determination of Monocular Refraction for Far Vision - Fogging Method

This test must be done monocularly [4] and can be started with the eye that has the lowest far visual acuity. After testing of this eye is finished, the method is applied to the other eye. The following steps can be used [5]:
1. The untested eye is covered.
2. The spherical compensation is evaluated for the tested eye.
3. At the beginning, a spherical lens 3 spheric diopters stronger than the estimated compensation is placed in front of the tested eye, so that the client sees in fog. If the +3 spheric diopter supplement does not provide a foggy view of the 0.1-acuity optotype, this additional power is increased until the client reports blurring of the test.
4. Gradually reduce the power of the spherical lens in steps of 0.25 spheric diopters until the client is able to read the 0.1-acuity optotype.
5. The client is shown optotypes with 0.2 acuity. Two possibilities can occur:
• If the client can read them, the client is shown optotypes with an acuity of 0.3, which the client must not be able to read. If the client can also read optotypes with an acuity of 0.4, then the spherical power was reduced too much in the previous step. The spherical power is adjusted until the client can read only up to the optotypes with an acuity of 0.3.
• If the client fails to read the optotypes with an acuity of 0.2, the power of the spherical lens has to be decreased until the client manages to read them, and the optotype can then be changed to one of increasing acuity.
6. The previous step is repeated until the client can read the optotypes with an acuity of 0.5. At each step, it is observed whether the client can read without any change in spherical power; if the client can read three successive acuities, the spherical power was lowered too much in the previous step, and one must return to it. After changing the spherical power, if the client can read the test at one acuity, it is acceptable for the client to read only the test at the next acuity before passing to another acuity.
7. Astigmatism is tested using a Pareto chart [6].
8. When astigmatism testing is finished, the 0.6-acuity optotype is displayed, and the modification of the spherical power is resumed according to the client's ability to see the optotypes with acuities of 0.6/0.7/0.8/0.9/1.0.
9. The client may not be able to see the optotype with acuity 1.0 because the prescribed correction is not precise enough, there are disorders of the ocular media, or for other causes. In this case, while the spherical power is changed in order to see the 1.0 optotype, the client will at first report an improvement of the image, and then a worsening or stagnation of image quality without reaching that acuity.
10. If an acuity of 1.0 is not reached, two methods are recommended:
• Check the power and direction of the correction cylinder. Some astigmatism tests do not have sufficient resolution. It is recommended to use a high-resolution astigmatism test (e.g., the arrow and circle + square test). If no such test is available, or if very good accuracy is desired, the use of a cross-cylinder is recommended. If neither a very precise astigmatism test nor cross-cylinders are available, a solution is to look for the correct orientation of the compensation cylinder by rotating its axis, determining the position that gives the client the best image quality.
If acuity has still not been obtained even after this adjustment, the power of the cylinder is varied around the previously determined value to obtain the best possible sharpness.
• Using a disc with a central perforation. If the client reports an improvement in image quality when looking through this disc, then the reduced acuity is also due to eye disorders.
11. The spherical power can be verified using several methods:
• Using the red-green test with ±0.25 spheric diopter lenses. The client is shown two identical tests, one on a green background and one on a red background, containing the best acuity observed in determining the monocular compensation, and is asked to focus on the best acuity the client can see.
– The client is asked whether the tests on the green background and the red background are seen equally clearly. If so, the spherical compensation is correct; if not, the client is asked on which background it is easier to read.
– If the letters are seen more clearly on the green background, a lens with a power of +0.25 spheric diopters is added and the question is repeated.
– If the letters are seen more clearly on the red background, a lens with a power of −0.25 spheric diopters is added and the question is repeated.
– The final compensation will be equal to the initial one plus the added lenses, if applicable.
• The cross-cylinder method can be used with the following steps: checking the orientation of the astigmatism axis, checking the cylindrical power, cross-cylinder control of the spherical power, and changing the power in positive/negative steps.
– A lens with a power of +0.25 spheric diopters is added over the spherical compensation to be verified. The client is asked if the image has improved. If the client sees as well or even better, the spherical compensation is increased by +0.25 spheric diopters and the question is repeated. If the client sees worse, the +0.25 spheric diopter lens is removed and the compensation is retained.
– A lens with a power of −0.25 spheric diopters is added over the spherical compensation to be verified. The client is asked if the image has improved. If the client sees as well or worse, the −0.25 spheric diopter lens is removed and the compensation is retained. If the client sees better, the compensation is reduced by 0.25 spheric diopters and the question is repeated.
12. Some clients may not reach an acuity of 1.0 or 0.8 even after checking the orientation and power of the compensation cylinder or the spherical power. In this case, the change of the spherical power is stopped at the highest acuity value that still results in an improvement of the perceived image quality when the tests are applied.
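The staircase logic of steps 3-6 of the fogging method can be sketched in code. The client-response function `can_read` and all numeric values are hypothetical assumptions introduced for illustration; in the real procedure this role is played by the client's verbal responses:

```python
# Hypothetical sketch of the fogging staircase (steps 3-6 only, up to
# acuity 0.5, before astigmatism testing). Powers are in spheric diopters.

ACUITIES = [0.1, 0.2, 0.3, 0.4, 0.5]

def fogging_sphere(estimate, can_read, step=0.25, fog=3.0):
    """Reduce an initially fogged (+3 D) lens in 0.25 D steps until each
    acuity level on the ladder becomes readable; assumes the client does
    eventually read each acuity as the fog is reduced."""
    power = estimate + fog               # step 3: start fogged
    while not can_read(power, ACUITIES[0]):
        power -= step                    # step 4: un-fog until 0.1 is read
    for acuity in ACUITIES[1:]:          # steps 5-6: climb the acuity ladder
        while not can_read(power, acuity):
            power -= step
    return power
```

A simple simulated client (readable acuity improving as the residual fog over the true refraction shrinks) is enough to exercise the loop; the tolerance model used for testing is likewise an assumption, not clinical data.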
If two methods give different results, the value that indicates the best acuity is chosen.

3.2 Subjective Method Used for Binocular Vision Testing - Polarized Test

After compensating the monocular spherical power for both eyes, the client is asked to look through the trial spectacles. The compensating powers for monocular vision may differ from those for binocular vision. The polarized red-green test [7] can be used to check the binocular vision compensation. The test is divided into four squares. The upper squares (green background on the left, red background on the right) emit light polarized at 135°. The lower squares (green background on the left, red background on the right) emit light polarized at 45°. The client has a filter with a 135° polarization plane in front of the right eye and a filter with a 45° polarization plane in front of the left eye. The right eye therefore sees only the black rings/symbols in the upper squares, and the left eye sees only the black rings/symbols in the lower squares. If the rings in the four squares are seen as equally black, there is balance. The test being at 5 m, −0.25 spheric diopters is added to both eyes. Using additions of spheric diopters, the sharpness of the symbols on the same vertical is equalized. The compensation is then reduced simultaneously in both eyes, in steps of 0.25 spheric diopters, until maximum binocular acuity is obtained. After that, the sharpness is equalized horizontally by adding equal additions to the two eyes.

3.3 Objective Method Used for the Determination of Monocular Refraction for Far Vision - Skiascopy/Retinoscopy

This section presents a complex objective test method, like the one described above, named skiascopy or retinoscopy. Skiascopy consists in observing the direction of movement of light in the client's pupil as the image of a light source moves across the retina.
The movement takes place in one direction or the other according to whether the client's far point is located in front of or behind the observer, as can be seen in Fig. 1. The client's eye is illuminated by a flat or concave mirror. The observer's eye is located on the axis of the beam, looking through a hole in the center of the mirror. The light beam can be linear or circular. A rectilinear filament lamp, perpendicular to the axis, illuminates the client's eye through L and the reflecting mirror. If the image S0 is linear, its retinal image provides a clear ophthalmoscopic image under the following conditions:

• the lamp is moved relative to the lens so that the image S0 is obtained in a position convenient for skiascopy;
• the filament is rotated in a plane perpendicular to the optical axis.
Fig. 1. Skiascope/retinoscope components and functioning [9].
A bright spot on the client's face is a strip of light whose direction is that of the image S0. If the client's ametropia is spherical, or if the direction of the image is parallel to the refractive meridian R1, then the ophthalmoscopic image is parallel to the image S0, so the band lies straight on the face. If not, the two directions are different. The observer is placed at 667 mm and a first attempt is made without a correction lens, the client being asked to fixate an object as far away as possible, in a direction about 200 mm to the right of the observer's head if the left eye is examined, or up to 200 mm to the left if the right eye is examined. In this way only the pupil is illuminated, and dazzling of the fovea is avoided. Skiascopy, at working distances of 0.4 m, 0.5 m, or 0.667 m, is classified into [8]:

• static skiascopy: the client fixates a distance test;
• dynamic skiascopy: the client fixates a test at 0.5 m.

Dynamic Skiascopy [9]. The method is similar to static skiascopy; the differences consist mainly of the following:

• The client is in binocular view.
• In dynamic skiascopy, the two eyes fixate the same point.
• The client looks at a short distance.

In static skiascopy, the result must be corrected for the working distance: static skiascopy refraction = result obtained − 2.5 spheric diopters for 0.4 m, − 2.0 spheric diopters for 0.5 m, or − 1.5 spheric diopters for 0.667 m. The observer projects the light beam into the client's eye and, through the hole, observes a yellow-orange reflection in the client's eye in order to see the shadow movement. The observer moves the skiascope in the meridian and can see the shadow moving:
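The working-distance corrections quoted above are simply the dioptric equivalent 1/d of the working distance (1/0.4 m = 2.5 D, 1/0.5 m = 2.0 D, 1/0.667 m ≈ 1.5 D). A minimal helper illustrating this relationship; the function name is an assumption of this sketch:

```python
def static_skiascopy_refraction(gross_result_d, working_distance_m):
    """Net refraction (in diopters) = gross skiascopy result minus the
    dioptric value of the working distance, 1/d."""
    return gross_result_d - 1.0 / working_distance_m

# Neutralization reached with no correcting lens at 0.4 m implies -2.5 D:
print(static_skiascopy_refraction(0.0, 0.4))  # -2.5
```

This makes explicit why each of the three standard working distances has its own subtraction constant.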
• either "with" movement (the same direction as the mirror), or
• "against" movement (the opposite direction).

In the cases of hypermetropia, emmetropia, and low myopia (the flat mirror gives "with", a concave mirror gives "against", and an electric ophthalmoscope gives "with"), plus spheric diopters must be added until the image is neutralized. In the case of high myopia (the flat mirror gives "against", a concave mirror gives "with", and an electric ophthalmoscope gives "against"), minus spheric diopters must be added until the image is neutralized. Flat- or spherical-mirror skiascopy has the following drawbacks: the shadow and its displacement are not very clear, the determination of the neutral point has rather high uncertainty, and in the case of astigmatism the direction of the axis is very uncertain. Another parameter is the travel speed of the shadow: rapid movement indicates small ametropia, close to neutral, while slow movement indicates large ametropia. The neutralization point is the point at which the movements of the skiascope no longer produce a movement of the shadow and the reflection is very bright. The brightness increases as the neutralization point is approached; the point can be checked by advancing 10 cm toward the client, where a "with" shadow shift should be noticed, or by moving 10 cm away from the client, where an "against" movement should be noticed. The optometrist must make sure that both eyes of the client are open; the optometrist's right eye examines the client's left eye and, of course, the left eye examines the client's right eye. Scissor movements, perceived as simultaneous opposite movements for the same movement of the mirror, occur in cases of keratoconus, irregular astigmatism, lens dislocation, or aphakia. Due to aberrations, a dilated pupil produces an ambiguous effect. The cornea is not perfectly spherical, and the marginal and central rays do not meet at the same point; it is important to consider only the central area.
Sometimes the observer is deceived by the movements at the periphery, which are more pronounced. If the observer does not sit exactly on the axis of the observed eye, a false astigmatism will be induced in an eye with spherical ametropia, with a value that grows with the eccentricity of the observer's position.
4 Results and Discussion

The subjective and the objective methods were applied to 55 clients (see Table 1). As can be seen, the differences between the methods are small, especially for small spherical powers and for older clients. The average of the values found with the subjective and the objective methods was used for the left and right eyes and for binocular testing.
Table 1. The average differences between the subjective and objective methods.

[The printed column layout of this table was lost during text extraction; the values are reproduced in their original order, grouped into rows where the structure is clear.]

| Right eye | Left eye | Spheric diopters addition | Average method differences |
|---|---|---|---|
| −6.25/−1.5/170 | −6.25/−1.50/0 | 1 | |
| −2.5/−0.75/130 | −2.25/−1.0/35 | 1 | |
| −1.25 | −2.0 | 1 | |
| −1.5 | −0.75/−1.25/170 | 1 | |
| −1.5 | −1.75 | 1 | |
| −1.75 | −1.25 | | |
| −2 | −2.25 | 1 | |
| −3 | −3.50 | 1 | |
| −3.5 | −3.75 | 1 | |
| 0.5 | 1 | | |
| −3.75 | −3.75 | 1 | |
| −2.25/−0.5/150 | −1.50 | 0.5 | |
| −1/−1/180 | −1.0/−0.75/5 | 0.5 | |
| −1.25/−0.25/130 | −1.25 | 2 | |
| 1.7 | 0.75 | 0.5 | |
| 1.25 | 1.75 | 1.5 | |
| 0.5 | 0.5 | 1 | |
| 1 | 0.75 | 0.5 | |
| 1.00/0.25/175 | 1.25 | 2 | 0.25 |
| 0.75/−0.75/30 | 0.75/−0.75/160 | | |
| 0.50/0.50/50 | 0.25/0.75/55 | 2.5 | 0.25 |
| 2.25 | +2.25 | 4.5 | 0.25 |
| 2.25 | 2.0/0.50/60 | 2 | 0.25 |
| 2 | 1.75 | 2.25 | 0.25 |
| 2 | 2 | 2.75 | 0.25 |
| 2 | 2.25 | 3 | 0.25 |
| 1.75 | 2 | 2.75 | 0.25 |
| 1.5 | 1.75 | 2 | 0.25 |
| 1.5 | 1.5 | 2.25 | 0.25 |
| 1 | 1 | 1.75 | 0.25 |
| 1 | +0.25 | 1.25 | 0.25 |
| 0.75 | 0.25/0.75/30 | 2.25 | 0.25 |
| 0.5 | 0.5 | 1.5 | 0.25 |
| −1 | −1.25 | 0.25 | 0.25 |
| −1 | −0.75 | 0.25 | |
| 0 | −0.50 | 0 | |
| −0.5/10 | −0.25/−0.50/165 | 0 | |
| −0.25/−0.5/170 | −0.50/10 | 0 | |
| −0.25/160 | 0.50/−0.25/170 | 0 | |
| 0.75/1.25/90 | 0.75/2.0/70 | 0 | |
| 0.75/1.25/120 | 3.25/60 | 0 | |
| 0.25/0.25/30 | 0.75 | 0 | |
| 0.25/−1/175 | 0.5/−0.75/15 | 1.75 | 0 |
| 0.25/0.50/80 | 0.25/0.50/90 | 1 | 0 |
| 0 | −2.25/170 | 0 | |
| −1.25/5 | −1.25/10 | 0 | |
| 0.75/0.75/20 | 0.25/0.50/35 | | |
| −0.50/180 | −0.5/0 | 0 | |
| 0.5 | +0.50 | 0 | |
| 0.5 | +1.0 | 0.5 | |
| 0.25 | −1 | 0 | |
| 0.25 | 0.25 | 0.75 | 0 |
| −0.25 | 0 | 0.5 | 0 |
| −0.25 | −0.25/60 | | |
| −0.75 | −0.75 | 0 | |
| −0.75 | −1/−0.25/5 | 0 | |
| −0.75 | −0.775 | 0 | |
| 2 | | | |
| 0 | 0 | 0 | |
5 Conclusions

According to the results, it can be concluded that the subjective testing method can be used for assessing vision, while the objective methods can be used for verification. For now, it cannot be concluded which method is best. Testing is more precise for older clients, because the ability to accommodate is lost with aging. Another important fact is that the testing does not take into account the dynamics of the factors that can influence vision, for example the illumination, the tiredness of the client, etc. In future work, it will be of interest to assess the quantitative and qualitative influence of these factors according to the needs and vision of the clients.
References
1. National Center for Health Statistics: Vision Procedures Manual. https://www.cdc.gov/nchs/data/nhanes/nhanes_05_06/VI.pdf. Accessed 19 Nov 2020
2. Fotouhi, A., Khabaz Khoob, M., Hashemi, H., Ali Yekta, A., Mohammad, K.: Importance of including refractive error tests in school children's vision screening. Arch. Iran. Med. 14(4), 250–253 (2011)
3. Flitcroft, D.I., et al.: IMI – defining and classifying myopia: a proposed set of standards for clinical and epidemiologic studies. Invest. Ophthalmol. Vis. Sci. 60(3), M20–M30 (2019)
4. Lai, S., Gomez, N., Wei, J.P.: Method of determining a patient's subjective refraction based on objective measurement. J. Refract. Surg. 20(5), S528–S532 (2004)
5. Leandro, J.E., et al.: Adequacy of the fogging test in the detection of clinically significant hyperopia in school-aged children. J. Ophthalmol. 2019 (2019)
6. Huang, Y.T., et al.: Astigmatism management with astigmatism-correcting intraocular lens using two toric calculators - a comparative case series. Clin. Ophthalmol. 15, 3259–3266 (2021)
7. Zhao, L.Z., Zhang, Y., Wu, H., Xiao, J.: The difference of distance stereoacuity measured with different separating methods. Ann. Transl. Med. 8(7), 468 (2020)
8. Mulhaupt, M., Michel, F., Schiefer, U., Ungewiss, J.: Introduction of a novel video retinoscope - application of a conventional retinoscope and video retinoscope in teaching - a comparative pilot study. Ophthalmologe 118(8), 854–858 (2021)
9. Cordero, I.: Understanding and looking after a retinoscope and trial lens set. Community Eye Health 30(98), 40–41 (2017)
10. Bartsch, D.U.G., Bessho, K., Gomez, L., Freeman, W.R.: Comparison of laser ray-tracing and skiascopic ocular wavefront-sensing devices. Eye 22(11), 1384–1390 (2008)
Analysis and Comparison of DABC and ACO in a Scheduling Problem

Ana Rita Ferreira1, Ângelo Soares1, André S. Santos2(B), João A. Bastos1,3, and Leonilde R. Varela4
1 Institute of Engineering, Polytechnic Institute of Porto - ISEP/IPP, Porto, Portugal
{1180733,1180758,jab}@isep.ipp.pt
2 Interdisciplinary Studies Research Center (ISRC), Institute of Engineering, Polytechnic Institute of Porto - ISEP/IPP, Porto, Portugal
[email protected]
3 INESC TEC, Institute of Engineering, Polytechnic Institute of Porto - ISEP/IPP, Porto, Portugal
4 Algoritmi Research Centre, University of Minho, Guimarães, Portugal
[email protected]
Abstract. The present study consists of the comparison of two metaheuristics on a scheduling problem (SP), namely the minimization of the makespan in a flowshop. The two selected metaheuristics were DABC (Discrete Artificial Bee Colony) and ACO (Ant Colony Optimization). For the performance analysis, the metaheuristics were tuned with an extensive DOE study, and subsequently several tests were performed. Thirty-one evenly distributed instances were generated for an in-depth analysis, and each one was subjected to three runs for each metaheuristic. Through the results obtained, it was possible to conclude that DABC performs better than SA and ACO, while SA and ACO perform similarly on the chosen problem. These conclusions were supported by descriptive statistics and statistical inference. Keywords: Metaheuristic · SA · DABC · ACO
1 Introduction

The article focuses on the study of the performance of two metaheuristics, DABC and ACO (implemented in VBA, Excel), on a scheduling optimization problem, in this case the minimization of the makespan in a flowshop, which in other words means minimizing the time needed to complete a set of jobs that run through the same machines in the same order. Solving flowshop problems is important for organizing a certain number of jobs that will be processed by a set of machines [1]. To solve these problems, as well as other manufacturing optimization problems [2–7], metaheuristics are an excellent option [8–11]. Temporally, metaheuristics emerged at the end of the 20th century as a response to the existing gap regarding a method capable of producing good results for NP-hard
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 203–215, 2022. https://doi.org/10.1007/978-3-031-09385-2_19
A. R. Ferreira et al.
and large problems in a short execution time. Thus, metaheuristics are defined by [8] as high-level problem-independent frameworks that offer a path to search for satisfactory solutions to practical problems within a reduced timeframe. In addition to diversity, intensity, and a good quality/time trade-off, the flexibility of this type of model is also highly emphasized, mainly due to its ability to adapt to most optimization problems (on the other hand, extra care is needed with correct parameterization to obtain good performance) [9–11]. The scientific community has shown that metaheuristics are a viable and often superior alternative to more traditional optimization methods, and consequently they became the method of choice for solving most real-life optimization problems [10]. In [10], multiple applications are listed, both in academic research and in practical applications, such as specialized software for production scheduling, vehicle routing [12], and nurse scheduling [13], among others. The article is divided into six sections. The second section is dedicated to the literature review, with a contextualization of the algorithms. Section 3 focuses on the parameterization of the algorithms. The fourth section encompasses the analysis of the results obtained by the metaheuristics. In the fifth section the statistical analysis is carried out. In the last section, the conclusions are discussed.
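The flowshop objective described in the introduction, the makespan of a job permutation over machines visited in the same order, can be computed with the classic completion-time recurrence. The instance data below are invented for illustration (the article's own instances and VBA implementation are not reproduced here):

```python
def makespan(order, p):
    """p[j][m] = processing time of job j on machine m;
    `order` is the job permutation being evaluated."""
    m = len(p[0])
    completion = [0] * m          # running completion time on each machine
    for j in order:
        for k in range(m):
            prev = completion[k - 1] if k else 0
            # job j starts on machine k when both the machine is free and
            # the job has finished on the previous machine
            completion[k] = max(completion[k], prev) + p[j][k]
    return completion[-1]

# Two jobs on two machines: order [0, 1] finishes at time 9.
print(makespan([0, 1], [[3, 2], [1, 4]]))  # 9
```

This is the fitness function that both DABC and ACO would evaluate for every candidate permutation; swapping the order of the two jobs above yields a makespan of 7, which is what such metaheuristics search for.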
2 Literature Review
Metaheuristics are characterized by their ability to solve complex and large-sized problems with an excellent balance between solution quality and execution time.

2.1 DABC (Discrete Artificial Bee Colony)
Artificial Bee Colony (ABC) is a population-based metaheuristic, grounded in swarm intelligence and inspired by a beehive searching for food sources, developed by [14] and [15]. ABC uses three types of bees to search the solution space: employed, onlooker and scout bees. ABC was developed for continuous optimization problems; however, several adaptations of the algorithm exist for discrete optimization problems, and Discrete Artificial Bee Colony (DABC), proposed by [16] to solve the Euclidean TSP, is one of these variants. In DABC, bees explore discrete solutions within a predetermined neighbourhood structure. Food sources represent solutions that are explored by employed bees. Onlooker bees wait in the hive for the performance of the food sources; they then choose and explore the most promising ones (food sources with better performance are more likely to be explored in depth, because they attract a greater number of onlooker bees). The role of scout bees is to look for new food sources.
Initially, each employed bee is allocated a food source. Then the algorithm cycles through three phases until the stopping criterion is met. In the employed bee phase, bees explore a solution in the vicinity of the food source assigned to them. If the candidate solution is better than the food source, the new solution replaces it. In the onlooker bee phase, each bee observes the performance of the food sources and, depending
Analysis and Comparison of DABC and ACO in a Scheduling Problem
205
on these results, selects the one that seems most promising. This selection is made based on the calculation of a probability [17, 18]. Next, the bees look for a solution in the vicinity of the selected food source and analyse it. If the candidate solution is better than the selected food source, the food source is replaced. The scout bee phase only happens when a food source is abandoned. Abandonment occurs after a certain number of iterations without improvement of that food source. In this process, the employed bee becomes a scout bee and starts looking for a new food source [16, 18]. The DABC procedure is expressed as pseudocode in Table 1 [16].

Table 1. DABC algorithm
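The three phases described above can be sketched as follows. This is an illustrative Python implementation, not the authors' VBA code; the function names are ours, the defaults echo the values used later in Sect. 3 (20 food sources, limit 100, 4 scout insertion moves, 500 iterations), and restarting scouts from the best-known solution is an assumption about the restart strategy:

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine.
    p[j][m] = processing time of job j on machine m."""
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def insertion_neighbour(seq):
    """Remove one job and reinsert it at another position (insertion move)."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def dabc(p, n_sources=20, limit=100, iters=500, scout_moves=4):
    jobs = list(range(len(p)))
    sources = [random.sample(jobs, len(jobs)) for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=lambda s: makespan(s, p))
    for _ in range(iters):
        # Employed bee phase: explore the vicinity of every food source.
        for i in range(n_sources):
            cand = insertion_neighbour(sources[i])
            if makespan(cand, p) < makespan(sources[i], p):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bee phase: better sources attract more bees.
        fits = [1.0 / (1 + makespan(s, p)) for s in sources]
        for _ in range(n_sources):
            i = random.choices(range(n_sources), weights=fits)[0]
            cand = insertion_neighbour(sources[i])
            if makespan(cand, p) < makespan(sources[i], p):
                sources[i], trials[i] = cand, 0
        # Scout bee phase: abandon sources stuck for more than `limit` rounds.
        for i in range(n_sources):
            if trials[i] > limit:
                s = best[:]
                for _ in range(scout_moves):
                    s = insertion_neighbour(s)
                sources[i], trials[i] = s, 0
        best = min(sources + [best], key=lambda s: makespan(s, p))
    return best, makespan(best, p)
```

For a tiny 3-job, 2-machine instance, `dabc(p, n_sources=5, limit=10, iters=30)` returns a permutation of the jobs together with its makespan.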
2.2 ACO (Ant Colony Optimization)
The first version of this metaheuristic appeared in 1992. It is a stochastic search method inspired by the behaviour of ants: each ant tries to travel the shortest possible distance between the nest and the food source, and its journey is influenced by two factors, the distances and the pheromones deposited by other ants that have passed through. In this model, artificial ants are essentially randomized construction procedures that generate solutions based on (artificial) pheromone trails and heuristic information
associated with the solution components. The greater the amount of pheromone left on a route, the greater the probability of that route being selected by the artificial ants [19–21]. In this algorithm, a job is first selected randomly as a starting point. Then, to select the next job, the selection probability of every remaining option is calculated (weighing the "distance" between jobs and the pheromone level between those jobs) by the equation in [22]. Subsequently, the roulette-wheel method is used to select the job that comes next. After the sequences of all the ants have been built, the pheromone levels must be updated; this equation is also shown in [22]. The value of τi,j (the amount of pheromone to deposit) depends on whether that path has been selected: it takes the value 0 if j is not the job after i, or otherwise increases the pheromone level by an amount calculated as in [22] when i precedes j. This procedure is repeated until the stopping criterion is met. ACO is expressed as pseudocode in Table 2.

Table 2. ACO algorithm
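The two mechanisms just described, roulette-wheel selection and pheromone update, can be sketched with the classic Ant System rules (selection weight τ^α · η^β, evaporation followed by deposit). This is a hedged illustration: the paper's exact equations are those of [22] and may differ in detail, and the function names here are ours:

```python
import random

def select_next(current, unvisited, tau, eta, alpha=3.0, beta=1.0):
    """Roulette-wheel choice of the next job: weight tau^alpha * eta^beta
    (classic Ant System rule; eta is the heuristic 'closeness' of jobs)."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta
               for j in unvisited]
    return random.choices(unvisited, weights=weights)[0]

def evaporate_and_deposit(tau, tours, costs, rho=0.5, q=1.0):
    """Every trail decays by the evaporation rate rho, then each ant
    deposits q / tour_cost on the arcs of its own tour."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for tour, cost in zip(tours, costs):
        for i, j in zip(tour, tour[1:]):
            tau[i][j] += q / cost
    return tau
```

With a uniform trail matrix, all jobs are equally likely at first; after the update, arcs used by cheap tours accumulate pheromone faster than the evaporated baseline.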
3 Parameterization
The best combination of parameters was obtained with an in-depth DOE study, which examined the impact of the parameters on each metaheuristic [23]. The process consists of the statistical planning of experiments to obtain statistical evidence about the effect of each parameter and, mainly, the interactions between them. The parameters were therefore not analysed separately, because the objective was to find an overall combination of parameters that guaranteed a good performance of the metaheuristic. Note that the effort of the parameterization process is proportional to the number of parameters analysed and/or the number of levels of these parameters [18, 23, 24].
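The full-factorial enumeration underlying such a DOE study can be sketched as follows. The factor names and levels are taken from the DABC study in Sect. 3.1; Python is used purely for illustration, not the authors' VBA implementation:

```python
from itertools import product

# Factors and levels screened for DABC (Sect. 3.1):
factors = {
    "limit": [100, 150, 200],                 # abandonment limit l
    "scout_moves": [3, 4, 5],                 # insertion moves in scout phase
    "neighbourhood": ["transpose", "swap", "insertion"],
}

# One run per treatment combination of the full-factorial design.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 3 * 3 * 3 = 27 treatment combinations
```

Each entry of `runs` is one parameter setting to be executed (typically replicated) so that main effects and interactions can be estimated from the resulting makespans.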
SA will also be used in the tests as a baseline. This metaheuristic was parameterized according to current techniques. The selected parameters were initial temperature (Ti) = 4408, epoch length (L) = 100, α = 0.99, and the insertion neighbourhood structure.

3.1 DABC (Discrete Artificial Bee Colony)
For this metaheuristic it was decided that the population size would not be treated as a parameter, since DABC is not very sensitive to this factor [25]. However, it was necessary to define which value to use for the tests: [26] used a population of 40 bees, [27] used 80 bees, and [16, 28–30] used 20 bees, so this last value was selected for the tests in this study. Following [14], the number of employed bees, the number of onlooker bees and the number of food sources were set equal. The number of insertion movements used to generate a new food source (scout bee phase) was considered a parameter, as was the limit number (l), which corresponds to the number of iterations without improvement after which a food source is abandoned. The neighbourhood structure responsible for generating a new solution in the employed bee phase and in the onlooker bee phase was also studied.
In terms of number of movements, at least 3 insertion movements (levels 3, 4, 5) were considered to generate a new food source in the scout bee phase [18, 31]. Regarding the limit number, authors report values ranging between 20 and 50 iterations [32, 33] for 10 jobs. It should also be noted that [16] proposes an expression that depends on the dimension and the population size, which leads to higher limit numbers, while in [31], for a population of 20 bees, a limit number of 20 is used. In this sense, as the dimension of this problem is 150, a limit number of 150 was stipulated, defining the levels for study as 100, 150 and 200 (a variation of 33%).
Finally, the 3 neighbourhood structures selected to generate the neighbour solutions in the employed and onlooker bee phases were transpose, swap and insertion. The DOE-based experiments allowed us to conclude that the best results were obtained for the combination: limit number (l) = 100, insertion movements = 4 and neighbourhood structure = insertion.

3.2 ACO (Ant Colony Optimization)
ACO is typically applied to graph problems; however, it is possible to adapt it to flowshop problems. For this conversion, the authors used the equation suggested in [34]. In this study, the number of ants and the initial pheromone value were not considered parameters and were defined in accordance with values reported in the literature. For the number of ants, [35] uses 20 ants for a reduced number of jobs (18) compared to the instances studied in this article. In [22] the use of as many ants as jobs is recommended, which in this case would correspond to 150 ants and would require more computational effort, so this option was excluded. The number of ants matters for the results obtained: [22] points out that better results are associated with a greater number of ants but, according to [36], this factor can be offset by a high number of iterations. For this reason, the authors chose 10 ants.
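The three neighbourhood structures can be sketched as follows (illustrative Python; the function names are ours):

```python
import random

def transpose(seq):
    """Swap two adjacent jobs."""
    s = seq[:]
    i = random.randrange(len(s) - 1)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def swap(seq):
    """Swap two jobs at arbitrary positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """Remove one job and reinsert it at another position."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s
```

Each move returns a new permutation and leaves the original sequence untouched, which is what a neighbourhood-based search needs in order to compare candidate and incumbent.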
Regarding the initial pheromone quantity, the authors used the equation suggested in [22, 37]; the resulting value of τ0 is approximately 0.001. As for the ACO parameterization, it is necessary to define several parameters, namely α, which represents the importance of pheromones (τ), and β, which represents the importance of the "distance" between jobs (η). The other parameter studied is ρ, which represents the evaporation rate of pheromones. Information was collected for the study of the 3 parameters: the best reported values of α are between 1 and 5 [22, 35, 36, 38, 39], of β between 1 and 10 [22, 35, 36, 38–40] and of ρ between 0.1 and 0.5 [35, 38, 39]. The DOE-based experiments allowed us to conclude that the best results were obtained for the combination: α = 3, β = 1 and ρ = 0.5.
4 Computational Study
Regarding the number of iterations, for SA the stopping criterion was 10,000 iterations, which means that in each run 10,000 solutions are analysed. To keep the number of solutions analysed uniform across all metaheuristics, and thus compare their performance correctly, 500 iterations were used in DABC (population size = 20) and 1,000 iterations in ACO (number of ants = 10). The computational experiments were run in Visual Basic for Applications, in Excel, on an Intel Core i7-8550U CPU @ 1.80 GHz with 8.0 GB of RAM. To draw more substantiated conclusions, 30 instances were generated based on the initial instance. The new instances were obtained from a uniform distribution with minimum and maximum processing times of 1 and 99, respectively. For each instance, 3 runs were performed with each algorithm, and the best solution of each algorithm was selected. The test results of the instances created are shown in Table 3.

Table 3. Instances results

t    SA    DABC  ACO
0    9330  9151  9338
1    8186  7827  8129
2    8345  8249  8282
3    7915  7706  8017
4    8039  7856  8117
5    8315  8119  8278
6    8092  7803  7995
7    8293  8168  8228
8    8228  7920  8155
9    8314  8054  8218
10   8072  7796  8067
11   8070  7754  8060
12   8364  8222  8348
13   7993  7680  7995
14   8487  8237  8492
15   7997  7920  7982
16   8082  7848  8090
17   8060  7877  8072
18   8215  7870  8147
19   8263  7978  8208
20   8458  8298  8379
21   8155  7836  8017
22   8066  7837  8027
23   8516  8406  8499
24   8274  7953  8195
25   8093  7955  8136
26   8128  7854  8099
27   8327  8228  8296
28   8031  7723  7977
29   8182  7973  8102
30   8268  8116  8275
Figure 1 allows the graphical visualization of the results for the thirty-one instances. The DABC metaheuristic stands out, always obtaining superior performance compared to the other metaheuristics. ACO and SA have more similar performances, with a minimal advantage for ACO.

[Chart: makespan (OF) of SA, DABC and ACO per instance]
Fig. 1. Solutions of each metaheuristic in the 31 instances.
5 Statistical Analysis
5.1 Descriptive Statistics
The results of the computational study suggest a superior performance of DABC compared to the other metaheuristics. To reinforce these conclusions, a detailed analysis of the results was carried out. The results were normalized by the relative deviation of each metaheuristic's solution from the best solution found in each instance (Eq. 1), where f(s)MH represents the makespan of the best solution of that metaheuristic and f(s)BST the value of the best known solution of the instance [18, 41]:

(f(s)MH − f(s)BST) / f(s)BST    (1)
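Applied to instance t = 0 of Table 3 (SA = 9330, DABC = 9151, ACO = 9338), Eq. (1) gives, for example (illustrative Python):

```python
def relative_deviation(f_mh, f_bst):
    """Eq. (1): relative deviation of a metaheuristic's makespan
    from the best known solution of the instance."""
    return (f_mh - f_bst) / f_bst

best = 9151                                # DABC found the best makespan
dev_sa = relative_deviation(9330, best)    # ~0.0196, i.e. about 2%
dev_aco = relative_deviation(9338, best)   # ~0.0204
dev_dabc = relative_deviation(9151, best)  # 0.0 by construction
```

DABC's deviation is 0 on every instance, since it always produced the best solution, which is why its bars and boxplots in the figures below collapse to zero.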
In Fig. 2 it is possible to observe the frequency of the relative deviation of the metaheuristic solutions from the best solution found, in 1% intervals. In all instances, DABC found the best solutions, leading to deviations of 0%. For the other metaheuristics there is a dispersion across the different intervals: for SA the most frequent intervals are 1–2% and 3–4%, while for ACO the highest frequencies occur in the intervals 2–3% and 3–4%. The boxplots in Fig. 3 confirm once again the conclusions about the performance of DABC (0% deviation from the best known solution), which leads to a null dispersion. It is visible that the dispersion of ACO is marginally lower than that of SA (smaller distance between the 1st and the 3rd quartile).
[Bar chart: frequency of solutions per deviation interval (0–1% to 4–5%) for SA, DABC and ACO; all 31 DABC solutions fall in the 0–1% bin]
Fig. 2. Bar graph of the frequency of relative deviation to the best solution found.
Fig. 3. Boxplots of the relative deviation to the best solution found.
Table 4 presents the mean, median, standard deviation and variance of the relative deviations of the metaheuristic solutions from the best solution found. As expected, DABC presents null values for all analysed statistics. Figure 3 already anticipated a higher median and standard deviation for SA than for ACO.

5.2 Statistical Inference
To reinforce what has been concluded so far about the performance of the metaheuristics on the flowshop problem, a 1-way ANOVA (Analysis of Variance) was used, which extends the Student's t parametric hypothesis test to the means of multiple populations [42]. In Eq. 2, μDABC, μACO and μSA are the means of the metaheuristics' relative deviations from the best solution found. The hypotheses analysed
Table 4. Descriptive statistics of the relative deviations from the best solution found

                    SA      DABC    ACO
Mean                2.82%   0.00%   2.44%
Median              2.92%   0.00%   2.46%
Standard deviation  1.08%   0.00%   1.07%
Variance            1.17%   0.00%   1.14%
by ANOVA are: H0 : μDABC = μACO = μSA
(2)
H1 : There is a difference between the means of the relative deviations
(3)
We intend to analyse whether there is evidence that the populations do not have an identical average performance; in concrete terms, whether at least one population has an average performance different from the others. The ANOVA result is shown in Table 5: there is statistical evidence to reject the null hypothesis with 95% confidence (p-value of 0.0000), so at least one population does not have an average performance identical to the others.

Table 5. ANOVA results

                Sum of squares  df  Mean square  F      Sig.
Between groups  0.014512        2   0.007256     94.36  0.0000
Within groups   0.006921        90  0.000077
Total           0.021433        92
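The F statistic in Table 5 can be reproduced from the sum-of-squares decomposition. The sketch below is an illustrative implementation of one-way ANOVA, not the statistical software used by the authors:

```python
def one_way_anova(groups):
    """One-way ANOVA decomposition: returns SS_between, SS_within and F."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    return ss_between, ss_within, f

# Cross-check of Table 5: F = MS_between / MS_within
f_table5 = (0.014512 / 2) / (0.006921 / 90)  # ~94.36
```

Dividing the reported sums of squares by their degrees of freedom reproduces the F value of 94.36, confirming the internal consistency of Table 5.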
As concluded above, at least one population performed differently from the rest. To identify which populations present statistically different performance, a post hoc test was performed, in this case the Scheffé test [1], shown in Table 6.

Table 6. Scheffé test results

Metaheuristics  Mean difference  Lwr. CI     Upr. CI     Std. error  Sig.
DABC - ACO      -0.0244067       -0.0299506  -0.0188628  0.0055439   0.0000
SA - ACO        0.0037787        -0.0017652  0.0093226   0.0055439   0.2425
SA - DABC       0.0281854        0.0226415   0.0337293   0.0055439   0.0000
The results demonstrate that there are statistically significant differences between the performance of DABC and ACO and between DABC and SA, since a p-value of 0.0000 is observed; with 95% confidence, it is not possible to assume that these metaheuristics have identical mean relative deviations from the best solution found. Furthermore, as the SA-ACO pair has a p-value of 0.2425, no statistical evidence was found that one of these two metaheuristics outperforms the other. Thus, the metaheuristic that stands out, and stands out positively, is DABC, because μDABC − μACO < 0 and μSA − μDABC > 0.
6 Conclusion
In this article, the authors focused on the performance of two metaheuristics on a scheduling problem. After studying the performance of the two metaheuristics, running tests and carrying out a statistical analysis, it was concluded that DABC presents better results than ACO. In a first phase, tests were performed on thirty-one instances; the results of the computational study show that DABC performed better than the rest (100% of the time), while ACO is, on average, 3% away from the best solution. In a second phase, it was statistically proved that the average performance of DABC was superior to that of ACO. As for ACO, its average performance was statistically indistinguishable from SA, which is a well-known metaheuristic. Regarding the performance of the ACO metaheuristic, the authors already expected that its results would not be better than those of DABC. The authors believe that the main reason for this performance is that ACO is a metaheuristic aimed at graph problems, which forced a significant adaptation for the scheduling problem, an adaptation that is not consensual in the literature (there are several conversion options). Future work may focus on developing an adaptation of the ACO metaheuristic that allows it to respond better to discrete problems.

Acknowledgement. This work was supported by national funds through the FCT - Fundação para a Ciência e Tecnologia through the R&D Units Project Scopes: UIDB/00319/2020 and EXPL/EME-SIS/1224/2021.
References
1. Ross, S.M.: Introductory Statistics, 4th edn (2017)
2. Arrais-Castro, A., Varela, M.L.R., Putnik, G.D., Ribeiro, R.A., Machado, J., Ferreira, L.: Collaborative framework for virtual organisation synthesis based on a dynamic multi-criteria decision model. Int. J. Comput. Integr. Manuf. 31(9), 857–868 (2018). https://doi.org/10.1080/0951192X.2018.1447146
3. Sousa, R.A., Varela, M.L.R., Alves, C., Machado, J.: Job shop schedules analysis in the context of industry 4.0. In: 2017 International Conference on Engineering, Technology and Innovation: Engineering, Technology and Innovation Management Beyond 2020: New Challenges, New Approaches, ICE/ITMC 2017 - Proceedings, pp. 711–717, January 2018. https://doi.org/10.1109/ICE.2017.8279955
4. Gangala, C., Modi, M., Manupati, V.K., Varela, M.L.R., Machado, J., Trojanowska, J.: Cycle time reduction in deck roller assembly production unit with value stream mapping analysis. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Costanzo, S. (eds.) WorldCIST 2017. AISC, vol. 571, pp. 509–518. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56541-5_52
5. Trojanowska, J., Żywicki, K., Varela, M.L.R., Machado, J.M.: Shortening changeover time - an industrial study. In: Proceedings of the 2015 10th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6 (2015). https://doi.org/10.1109/CISTI.2015.7170373
6. Vieira, G.G., Varela, M.L.R., Putnik, G.D., Machado, J.M., Trojanowska, J.: Integrated platform for real-time control and production and productivity monitoring and analysis. Rom. Rev. Precis. Mech. Opt. Mechatron. 2016(50), 119–127 (2016)
7. Reddy, M.S., Ratnam, C.H., Agrawal, R., Varela, M.L.R., Sharma, I., Manupati, V.K.: Investigation of reconfiguration effect on makespan with social network method for flexible job shop scheduling problem. Comput. Ind. Eng. 110, 231–241 (2017)
8. Ng, K.K.H., Lee, C.K.M., Chan, F.T.S., Lv, Y.: Review on meta-heuristics approaches for airside operation research. Appl. Soft Comput. 66, 104–133 (2018). https://doi.org/10.1016/J.ASOC.2018.02.013
9. Hussain, K., Mohd Salleh, M.N., Cheng, S., Shi, Y.: Metaheuristic research: a comprehensive survey. Artif. Intell. Rev. 52(4), 2191–2233 (2018). https://doi.org/10.1007/s10462-017-9605-z
10. Sörensen, K., Glover, F.: Metaheuristics. Encycl. Oper. Res. Manag. Sci. 62, 960–970 (2013). https://www.opttek.com/sites/default/files/Metaheuristics.pdf. Accessed 23 Dec 2021
11. Gandomi, A.H., Yang, X.S., Alavi, A.H.: Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. Comput. 29(1), 17–35 (2013). https://doi.org/10.1007/S00366-011-0241-Y
12.
Sörensen, K., Sevaux, M., Schittekat, P.: Multiple neighbourhood search in commercial VRP packages: evolving towards self-adaptive methods. Stud. Comput. Intell. 136, 239–253 (2008). https://doi.org/10.1007/978-3-540-79438-7_12
13. Burke, E.K., De Causmaecker, P., Vanden Berghe, G., Van Landeghem, H.: The state of the art of nurse rostering. J. Sched. 7(6), 441–499 (2004). https://doi.org/10.1023/B:JOSH.0000046076.75950.0B
14. Karaboga, D.: An Idea Based on Honey Bee Swarm for Numerical Optimization, Kayseri, October 2005. https://abc.erciyes.edu.tr/pub/tr06_2005.pdf. Accessed 23 Dec 2021
15. Pham, D.T., Ghanbarzadeh, A.: The Bees Algorithm. Tech. Note, Manuf. Eng. Centre, Cardiff Univ., UK, September 2005. https://www.researchgate.net/publication/260985621. Accessed 23 Dec 2021
16. Karaboga, D., Gorkemli, B.: A combinatorial Artificial Bee Colony algorithm for traveling salesman problem. In: International Symposium on Innovations in Intelligent Systems and Applications, pp. 50–53, June 2011. https://doi.org/10.1109/INISTA.2011.5946125
17. Liu, Y.F., Liu, S.Y.: A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem. Appl. Soft Comput. 13(3), 1459–1463 (2013). https://doi.org/10.1016/J.ASOC.2011.10.024
18. Santos, A.: Auto-Parametrização de Meta-Heurísticas para Problemas de Escalonamento em Ambiente Industrial. Ph.D. thesis, Guimarães (2020)
19. Goos, G., et al.: Evolutionary multi-criterion optimization. In: 5th International Conference, EMO 2009, p. 16, April 2009. https://link.springer.com/content/pdf/10.1007%2F978-3-642-01020-0.pdf. Accessed 23 Dec 2021
20. Lee, Z.J., Su, S.F., Chuang, C.C., Liu, K.H.: Genetic algorithm with ant colony optimization (GA-ACO) for multiple sequence alignment. Appl. Soft Comput. 8(1), 55–78 (2008). https://doi.org/10.1016/J.ASOC.2006.10.012
21. Jaiswal, U., Aggarwal, S.: Ant colony optimization. Int. J. Sci. Eng. Res. 2(7) (2011). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.301.1091&rep=rep1&type=pdf. Accessed 23 Dec 2021
22. Souza, F.G.: Método meta-heurístico de colônia de formigas e sua aplicação na alocação de petróleo. Escola Politécnica da Universidade de São Paulo, São Paulo (2019)
23. Montero, E., Riff, M.C., Neveu, B.: A beginner's guide to tuning methods. Appl. Soft Comput. 17, 39–51 (2014). https://doi.org/10.1016/J.ASOC.2013.12.017
24. Lye, L.M.: Tools and toys for teaching design of experiments methodology. In: Proceedings of the Annual Conference - Canadian Society for Civil Engineering, vol. 2005, pp. 1–9 (2005)
25. Akay, B., Karaboga, D.: Parameter tuning for the artificial bee colony algorithm. In: Nguyen, N.T., Kowalczyk, R., Chen, S.-M. (eds.) ICCCI 2009. LNCS (LNAI), vol. 5796, pp. 608–619. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04441-0_53
26. Akay, B., Karaboga, D.: Artificial Bee Colony (ABC), harmony search and bees algorithms on numerical optimization, March 2009. https://www.researchgate.net/publication/267718673. Accessed 23 Dec 2021
27. Jahjouh, M.M., Arafa, M.H., Alqedra, M.A.: Artificial Bee Colony (ABC) algorithm in the design optimization of RC continuous beams. Struct. Multidisc. Optim. 47(6), 963–979 (2013). https://doi.org/10.1007/s00158-013-0884-y
28. Li, Y., et al.: A discrete artificial bee colony algorithm for distributed hybrid flowshop scheduling problem with sequence-dependent setup times. Int. J. Prod. Res. 59(13), 3880–3899 (2020). https://doi.org/10.1080/00207543.2020.1753897
29.
Gong, D., Han, Y., Sun, J.: A novel hybrid multi-objective artificial bee colony algorithm for blocking lot-streaming flow shop scheduling problems. Knowl. Based Syst. 148, 115–130 (2018). https://doi.org/10.1016/J.KNOSYS.2018.02.029
30. Celik, M., Karaboga, D., Koylu, F.: Artificial bee colony data miner (ABC-miner). In: 2011 International Symposium on Innovations in Intelligent Systems and Applications, pp. 96–100, June 2011. https://doi.org/10.1109/INISTA.2011.5946053
31. Pan, Q.K., Fatih Tasgetiren, M., Suganthan, P.N., Chua, T.J.: A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. Inf. Sci. (Ny) 181(12), 2455–2468 (2011). https://doi.org/10.1016/J.INS.2009.12.025
32. Qing, L.J., Qi Han, Y.: A hybrid multi-objective artificial bee colony algorithm for flexible task scheduling problems in cloud computing system. Cluster Comput. 23(4), 2483–2499 (2020). https://doi.org/10.1007/S10586-019-03022-Z/FIGURES/6
33. Peng, K., Pan, Q.K., Gao, L., Zhang, B., Pang, X.: An improved artificial bee colony algorithm for real-world hybrid flowshop rescheduling in steelmaking-refining-continuous casting process. Comput. Ind. Eng. 122, 235–250 (2018). https://doi.org/10.1016/J.CIE.2018.05.056
34. Widmer, M., Hertz, A.: A new heuristic method for the flow shop sequencing problem. Eur. J. Oper. Res. 41(2), 186–193 (1989). https://doi.org/10.1016/0377-2217(89)90383-4
35. Serbencu, A., Minzu, V.: Hybridized ant colony system for tasks to workstations assignment. In: 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016, February 2017. https://doi.org/10.1109/SSCI.2016.7850060
36. Moreira de Souza, C.: Otimização por Colônia de Formigas para o Problema de Programação Job-shop Flexível Multiobjetivo, Universidade Federal de São Carlos, São Carlos (2018)
37. Goliatt, P.V.Z.C., Angelo, J.S., Barbosa, H.J.C.: Colônia de Formigas. In: Manual de computação evolutiva e metaheurística (2012)
38.
Lin, B.M.T., Lu, C.Y., Shyu, S.J., Tsai, C.Y.: Development of new features of ant colony optimization for flowshop scheduling. Int. J. Prod. Econ. 112(2), 742–755 (2008). https://doi.org/10.1016/J.IJPE.2007.06.007
39. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 26(1), 29–41 (1996). https://doi.org/10.1109/3477.484436
40. Groleaz, L., Ndojh Ndiaye, S., Solnon, C.: ACO with automatic parameter selection for a scheduling problem with a group cumulative constraint, April 2020. https://doi.org/10.1145/3377930.3389818
41. Silberholz, J., Golden, B.: Comparison of metaheuristics. In: Gendreau, M., Potvin, J.-Y. (eds.) Handbook of Metaheuristics, pp. 625–640. Springer, Boston (2010). https://doi.org/10.1007/978-1-4419-1665-5_21
42. Rochon, J., Gondan, M., Kieser, M.: To test or not to test: preliminary assessment of normality when comparing two independent samples. BMC Med. Res. Methodol. 12(1), 1–11 (2012). https://doi.org/10.1186/1471-2288-12-81/TABLES/4
Experimental Teaching of Robotics in the Context of Manufacturing 4.0: Effective Use of Modules of the Model Program of Environmental Research Teaching in the Working Process of the Centers “Clever” Olena Hrybiuk1(B)
and Olena Vedishcheva2
1 Institute of Information Technologies and Learning Tools of NAESU, Kiev, Ukraine
[email protected] 2 Lutsk Specialized School of I-III Degrees No. 5 of Lutsk City Council, Kiev, Ukraine
Abstract. Chemical kinetics has huge practical significance, as it makes it possible to determine whether a process is feasible, as well as the conditions under which it occurs. This, ultimately, allows new methods of synthesis of various substances, including drugs, to be developed and environmental problems to be solved. It is worth noting that, because of the complexity of the tasks, the requirements for methods of studying the kinetics of reactions are significantly increasing. The use of high-tech equipment allows more in-depth research in theoretical physics, methods for measuring plasma parameters, the kinetics of chemical reactions, and the synthesis of single crystals and polycrystals of ultrapure ceramic materials, which confirms the relevance of this study. The subject of the study is fog computing and its balanced use. The object of research is a climate control system using the concept of the Internet of Things. The purpose of the experimental study was to develop a hardware-software system with a balanced use of fog computing that makes it possible to monitor the state of the environment. As part of the experimental study, a device for recording changes in the temperature of a substance was created, together with the corresponding software for processing data from the microcontroller. A system for data analysis and determination of the amount of heat Q for different substances was developed in a spreadsheet processor. The paper presents the development of a device for investigating the course of a chemical reaction. The developed device can be used by installing software on a personal computer, which avoids spending money on the purchase of expensive equipment, manual calculations and learning new software.
Keywords: Computer oriented methodological systems of research learning · Variational models · Manufacturing · Educational robotics · Modeling · Engineering
1 Introduction
Experimental studies confirm the influence of images on the productivity of thinking not only among children during artistic creativity, but also in other activities, especially
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 216–231, 2022. https://doi.org/10.1007/978-3-031-09385-2_20
in scientific and technical creativity. The formation and development of scientific and visual thinking is an important component of the formation of students' intelligence. Around a person, as well as within a person, countless different chemical processes constantly occur, during which energy is converted and new chemical compounds are formed while the original ones are destroyed. Clarifying the mechanism and direction of these processes is one of the main problems not only in solving specific problems of chemistry and chemical technology, but is also an important element of a common understanding of the structure and functioning of the world. An idea of the mechanism of chemical reactions cannot be obtained without knowledge of the rate of chemical transformations or, in other words, the kinetics of chemical reactions [1]. The subject area covers the concepts of the Internet of Things (IoT), fog computing and information control systems. The combination of all these technologies and approaches allows us to present not only a system that can meet the business goals of some companies, but also to create an authoring system using a computer-based methodological approach of research teaching. The microclimate of a room is the set of physical factors and environmental conditions that determine its thermal state and affect human heat transfer [2]. The main and most important factors that shape the indoor microclimate are temperature and humidity. The resulting conditions depend on many factors, such as the season, the time of day and the outdoor weather. Correction of the indoor microclimate is carried out with the help of complex, specialized climate control systems. Chemical kinetics has huge practical significance, as it allows one to determine the possibility or impossibility of a particular process, as well as the conditions under which it occurs.
This, ultimately, allows developing new methods of synthesis of various substances, including drugs, and solving environmental problems. Undoubtedly, due to the complexity of the tasks, the requirements for methods for studying the kinetics of reactions are significantly increasing. The use of high-tech equipment allows more in-depth research in theoretical physics, methods for measuring plasma parameters, the kinetics of chemical reactions, and the synthesis of mono- and polycrystals of ultrapure ceramic materials [3]. Kinetic methods of analysis are characterized by high accuracy and are therefore often used in the determination of small and ultra-small contents. The application of kinetic methods for the determination of micro-impurities in pure and ultrapure substances and materials is especially effective, which confirms the relevance of the experimental work [1]. The relevance of this research work is determined by the need to develop a new direction of applied research, namely the use of variable models of computer-based methodological systems of research teaching in robotics, and the management and dissemination of research methods in education. The aim of the work is to develop, substantiate and experimentally test variable models of computer-oriented methodological systems of research teaching in robotics in educational institutions, including the creation of a device for recording changes in the temperature of a substance and software for processing data from the microcontroller [4–6], implemented with the help of the technosphere of educational organizations according to the principle of variability, which promotes the activation of cognitive activity [1]. Students' motivation for knowledge and for choosing engineering professions is achieved through their interest in research and innovative practices, including calculation and graphic work [2, 3].
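A minimal sketch of the Q determination performed in the spreadsheet module could look as follows. The substance list, the specific-heat values (standard reference figures) and the function names are illustrative assumptions, not taken from the paper:

```python
# Specific heat capacities in J/(kg*K); standard reference values,
# not measured by the device described in the paper.
SPECIFIC_HEAT = {"water": 4186.0, "ethanol": 2440.0, "aluminium": 897.0}

def heat_q(substance, mass_kg, t_start_c, t_end_c):
    """Q = m * c * dT: amount of heat gained (or released, if negative)
    by a substance, from the temperature curve logged by the device."""
    return mass_kg * SPECIFIC_HEAT[substance] * (t_end_c - t_start_c)

# 0.25 kg of water warmed from 21.5 C to 24.0 C during a reaction:
q = heat_q("water", 0.250, 21.5, 24.0)  # 2616.25 J
```

Dividing the logged temperature curve into intervals and summing the per-interval Q values is the kind of calculation a table processor automates once the microcontroller data is imported.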
218
O. Hrybiuk and O. Vedishcheva
2 Research Methodology Interactive learning with pedagogically weighted use of individual components of CBLE is provided through the use of interactive programs, expositions, laboratory and demonstration equipment, relevant software and content, active forms of organization of the educational process, and research and project activity of students [1]. CBLE "Clever" is based on scientific concepts, including the results of previous research by leading scientists: the theoretical aspects of the concepts of G.T. Frischkorn [7], P. Plichart [8], K. Schildkamp [9], S.M. Wilson [10], M.J. Tomasik [11], and M.A. Bahrin [12], which are the foundation of new state educational standards and are focused on the practical educational and cognitive activity of students and the formation of the younger generation as the basis of a new knowledge society [13, 14]; scientific and technical creativity and handicraft [15–17]; the international initiatives MINT, STEM, NBIC [18–21] and others (European Society for Engineering Education, etc.) [22–24]; principles of convergent natural sciences and engineering education [25, 26]; PMBOK (Project Management Institute); principles of blended and adaptive learning; and the practice of training specialists in the field of high-performance computing [27–30]. The main tasks are: analysis of the characteristics of the signal at the output of the device; selection and justification of hardware for the task implementation; selection and justification of a software environment for data processing; hardware development based on the selected equipment; creation of an algorithm for reading, digitizing and analyzing the signal; filtering of the useful signal to eliminate interference and induced noise; and development of a visualization user interface [1]. The result is an analysis system that makes it possible to determine the type of a chemical reaction and its key parameters without manual calculations [3].
Based on the analysis of the scientific literature of domestic and foreign researchers, our own experience in the application of COMSRL, and the results of experimental research, the main provisions used by teachers for choosing information resources are formulated [2]. The integration of resource selection with methods of teaching robotics, taking into account the levels of intellectual development of students in the real conditions of the educational process, distinguishes the proposed study from others in this field [1]. In the process of pedagogically balanced and methodologically motivated selection of information resources, it is necessary to take into account psychophysiological and psychological-pedagogical factors, among which the features of the intellectual development of students are of great importance (see Table 1). The purpose of the experiment was to determine the feasibility of using COMSRL in the process of research teaching of robotics at school and to assess the attitude of students to the identified resources [3, 4]. The correlations found between indicators of students' preference for the use of COMSRL and their levels of intellectual development for certain groups of information resources are used to adjust the method of research teaching of robotics, with the aim of pedagogically appropriate and methodically motivated selection of educational resources that minimizes contradictions while taking into account the intellectual level of a specific group of students [6].
Experimental Teaching of Robotics in the Context of Manufacturing 4.0
Table 1. Differences between gifted boys and gifted girls (taking into account self-actualization profiles; the results were significant at the level of certainty p ≤ 0.05).

Scales                         Gifted boys (avg, %)   Gifted girls (avg, %)   t-criterion
Orientation in time                   57.9                   53.0                2.4*
Values                                55.0                   63.4                2.14
A look at human nature                40.8                   50.6                2.54
The need for knowledge                60.8                   52.9                2.37
Creativity                            57.3                   67.6                2.105
Autonomy                              55.9                   51.0                2.03
Spontaneity                           43.6                   48.0                2.13
Self-understanding                    49.5                   48.5                1.96
Autosympathy                          52.8                   51.5                1.94
Contact                               52.5                   49.0                2.00
Flexibility in communication          49.6                   44.0                2.17
3 Basic Definitions and Classification of Microcontrollers Microprocessor systems (MS) based on microprocessors (MP) are most often used as embedded systems to solve object management problems. An important feature of this application is real-time operation, i.e., providing a response to external events within a certain time interval. Such embedded, highly specialized control MS, made in the form of separate chips that work in real time, are called microcontrollers. A microcontroller (MC) is a functionally complete MS implemented on a single VLSI (very-large-scale integration) chip. The microcontroller includes: a processor, RAM, ROM, ports for connecting external devices, ADC modules for analog signal input, timers, interrupt controllers, controllers of various interfaces, etc. The simplest MC occupies a chip area of no more than 1 cm² and has only eight pins [1]. Controllers are usually created to solve a single problem or a group of related problems. They usually do not have the ability to connect additional nodes and devices, such as large memory or input and output devices. Their system bus is often inaccessible to the user. The structure of the controller is simple and optimized for maximum speed. In most cases, the running programs are stored in non-volatile memory and do not change. Structurally, the controllers are available in a single-board version. The main advantages of using systems with MC are: significantly increased flexibility; significantly reduced cost; reduced development and modification time; and increased system reliability due to the reduced number of housings and connections [4]. The digital part of the device is represented by the Arduino Uno microcontroller board. The Arduino platform is popular due to the convenience and simplicity of its programming language, as well as its open architecture and software code [1]. The device is programmed via USB without the use of external programmers. Arduino allows a computer to go beyond the
virtual world into the physical and interact with it. Arduino-based devices can receive information about the environment through different sensors and can control different actuators. The microcontroller on the board is programmed using the Arduino language (based on the Wiring language) and the Arduino development environment (based on the Processing environment). Arduino-based designs can run on their own or interact with software on a computer (Flash, Processing, MaxMSP). The original schematic drawings (CAD files) are publicly available and can be used by users [4].
4 The Use of Arduino to Control the Parameters of Houseplants An information management system is always a set of different practices and technologies that are combined to achieve an effective final result, goal, or business goal. Obviously, the final form of the system and the set of technologies and practices used depend on the type of the system itself, its purpose, potential users and other requirements. Therefore, such systems are usually characterized by the following features: good scalability, clarity, ease of use, flexibility in configuration, as well as the availability of support and a fast development cycle. The developed and described system of the computer-oriented methodological approach to research training complies with all the above-mentioned rules, but differs fundamentally from the systems used by enterprises and studied in educational institutions. The subject area under study lies at the intersection of technologies that are rarely used in traditional information systems. The obtained results can be used in environmental monitoring systems that work over the Internet, including without human intervention. The topic of this paper assumes that the research subject area will cover the concepts of the Internet of Things (IoT), fog computing, and information control systems [2]. The combination of all these technologies and approaches makes it possible to present not only a system that meets the business goals of a number of enterprises, but also such a system using a computer-oriented methodological system of research training (COMSRL). This feature will allow the proposed system to be developed further and to remain relevant and in demand [5]. Characteristics of the Object of Study. The climate control system primarily involves the accumulation of environmental data in the context of using the concept of the Internet of Things.
Since the Internet of Things also involves machine-to-machine (M2M) interoperability, it is a matter of using computers for processes that were previously performed either by humans, manually, or with computers but without automation. The object of research is the climate control system using COMSRL and the concept of the Internet of Things. Such a system can be considered in isolation, as it can be used as a separate system (or as part of a larger information system and adaptive software and hardware solution) [3]. Thanks to commercial developments in the field of hardware and the use of open hardware and software complexes, well-developed, scalable and distributed control and monitoring systems have been created, which can independently make decisions for pre-planned situations. All these aspects, together with the technologies of the Internet of Things and fog computing, are the subject of the experimental research. Problem Statement. The main task of any information and management system is to solve a problem that is clearly set and regulated by the client, customer or other
interested person [15]. To perform the tasks of monitoring and climate control, the model of collecting and processing information locally is used, i.e., the actual application of fog computing, with a client-server architecture to obtain instructions from the server and transmit information for long-term storage and further processing (MQTT is well suited for handling interactions between devices (M2M)). The resources used in the experimental study were divided into hardware and software [4]. The hardware resources are: the Raspberry Pi platform or its analogues (Banana Pi, Orange Pi); temperature sensors; photoresistors for light monitoring; humidity sensors; a solid-state relay for switching and supplying or disconnecting voltage in certain parts of the system; and additional elements for connecting and powering the components. The software resources include a number of software packages, among which are operating systems, compilers, development environments, version control systems and others. To implement the experimental study the following components were used: 1) the Armbian operating system, a GNU/Linux distribution well suited for Internet of Things projects; 2) the gcc compiler for compiling programs written in C for Linux operating systems; 3) a number of development environments that allow code to be written faster and more efficiently with fewer errors; 4) ready-made compiled libraries and libraries in source form for interaction with hardware and for data transfer between nodes and the cloud; 5) the Git version control system, which allows software versions to be developed and tracked flexibly. Input and Experimental Data. Although the system in many cases operates separately, most settings and configurations are made on the server and sent to the device. This approach allows not only the parameters of data transmission and processing to be configured, but also the system as a whole.
The list of incoming messages is given in Table 2.

Table 2. Input information.

Name of the incoming message                       Identifier            Submission form                     Time and frequency                              Source
Data processing configuration file                 CONFIG_PROCESS_DATA   Arbitrary configuration text file   Arbitrary                                       Employee of the IT department through the company's official servers
Response configuration on data class coincidence   CONFIG_MATCH          Arbitrary configuration text file   Arbitrary                                       Employee of the IT department through the company's official servers
Data from sensors                                  SENSOR_DATA           Text messages                       Once per unit of time set in the configuration  Sensors
This principle of construction has its advantages: flexible remote configuration, an easy way to set parameters, the ability to quickly scale and distribute configurations, and obtaining only the data necessary for analysis and processing. Since the system works according to pre-established rules, it is necessary to describe the principle of their installation and the procedures that can be performed, giving specific examples. The rule configuration file has a simple structure and sets the basic rules for the tracking parameters described below. For example, a rule can be set: at critical temperatures (above 80 °C) and low humidity (below 20%) there is a high probability of fire. In response to this coincidence, a signal is given to a mechanical relay that switches on the voltage supply to the automated irrigation system, which should increase humidity and prevent fire. This is only one example of a rule [1]. Output Data. The list and description of the output information files are given in Table 4. The tested experimental system requires constant monitoring of the condition of the nodes, as failure or disconnection of one of them is unacceptable. For such situations, a fixed period of time is set at which each node should join the cloud and report its status, such as the state of the platform itself, its temperature, the visibility of sensors, the amount of free resources, and so on [1, 2]. An important aspect of this mechanism is that if all devices send these data at different times, it is impossible to get a complete picture of events, because it will not be known which devices are currently active. On the other hand, if all platforms report at the same time, this can overload the server. To describe the sequence of actions and possible cases of interaction in the context of time, we build a diagram of the sequence of processes/actions [1, 3]. Development of Project Solutions. Analysis of the Components Used.
The use of various sensors, software and other aids makes it possible to actively use the hardware capacity of the COMSRL platform. A thorough analysis of the hardware components of the experimental study is given below. We use DALLAS DS18B20 sensors for temperature monitoring (see Fig. 1). This choice was made on the basis of several factors, namely good expert reviews of the company and of this specific product, fairly reasonable prices, and our previous successful experience with this device. This sensor works on the 1-Wire principle and is supported by the relevant libraries. Parasitic power supply (via the data transmission channel) is possible, as well as power via a separate pin at 3.3 or 5.0 V. Two versions are available on the market: one for air monitoring and the other for water or soil temperature monitoring. The only difference is that the sensor designed for liquid and soil is isolated from external influences. The next component is a combined DHT11 temperature and humidity sensor (see Fig. 2). In the context of the experimental study, several of its uses are considered: continuous monitoring of humidity and testing of other temperature sensors, such as the DALLAS one. In order to calibrate the sensors, determine the error and adjust the scale, the readings of several DS18B20 and DHT11 sensors are compared. This approach makes it possible to model the system better, as well as to demonstrate its capabilities for the scaling procedure. As light-responsive sensors, photoresistors are used, which are reliable, cheap, and have a simple connection scheme. Figure 3 shows the functioning scheme. This solution is suggested in the context of connecting a DHT11 sensor for monitoring humidity and temperature, and connecting a photoresistor through a capacitor.
This circuit is taken as a basis, with the addition of DALLAS sensors and a mechanical relay to switch the power supply.
Fig. 1. DS18B20 temperature sensors
Fig. 2. DHT11 temperature and humidity sensors
As a result, the COMSRL platform was designed to enable additional decision-making and to scale up environmental processes [4]. The presence of a mechanical relay allows the concept of fog computing to be implemented, where the detection of an event and the response to it occur within a short period of time, close enough to the end device. If the information received is not crucial, it is not transmitted to the cloud. This primarily concerns closing and opening the electrical circuit to turn on and off devices that affect the environment under study and its climatic conditions: for example, a humidifier (to increase humidity, or to decrease it when the device is turned off), a lighting device that automatically turns on at a certain critical level of illumination, or a heater or cooler that responds to the current temperature and corrects it. All these operations are fully automated and do not send unnecessary data to the server, including operations that can be performed without cloud computing. This approach allows full advantage to be taken of the concept of fog computing. The concept of the relay connection is shown in Fig. 4.
Fig. 3. Conceptual connection scheme
Fig. 4. Relay connection concept
The significance of using the designed relay is that, having a low-voltage board operating from 3.3 and 5.0 V, one can control the closing and opening of a circuit at standard mains voltage, i.e., 220 V, which is sufficient for the efficient/ergonomic
connection of most devices. GPIO pins are used to collect information, transmit it, supply power, and provide grounding on the hardware platforms. Although this principle of connection frightens beginners at first, the main thing is to follow the instructions and connect the devices carefully. One of the advantages of our Orange Pi platform is that its pins are fully compatible with the Raspberry Pi. Since their designations differ substantially in the COMSRL project documentation, it makes sense to check everything carefully before connection. The disadvantage of this system is the fact that the connection is direct, without any hardware or software protection. This greatly expands the capabilities and simplifies the connection, but it can have some unpleasant consequences in the context of information security. First of all, incorrect connection of components leads to their breakdown. The next threat is the possibility of damaging the hardware of the platform by mechanical means, or by incorrect assembly and connection of the COMSRL circuit [2]. In order to preserve the system, it is advisable to use protective casings and auxiliary materials for heat dissipation. To prevent hardware components from burning out, it is necessary to minimize the impact of the human factor, including interference with the hardware of the system. User Interface Development. The user interface is an important part of any information control system, because it allows you to "interact" with the machine effectively, get results, and provide new operating conditions [1]. Two large categories of interfaces are used in the experimental study: 1) GUI (Graphical User Interface) - a graphical user interface, represented by a set of windows, widgets and schemes with the addition of basic interfaces for interaction. 2) CLI (Command Line Interface) - a command-line interface, which gained popularity during the active use of Unix-like operating systems.
For a long time, such an interface was perceived as the only possible and most convenient one, although for obvious reasons this type of interface cannot simplify working with a PC for ordinary users. Despite the significant advantages of the GUI over the CLI, this study uses the CLI. First of all, for the development and configuration of embedded systems, this tool remains a much better, and often the only possible, option for interaction. Therefore, today it is mainly popular among technicians, system administrators and developers. Among the advantages of using such an interface are: easy automation and scripting of any system tasks (programs) that do not have a graphical interface; control of services for which no GUI is provided; launch speeds that are much faster than in a graphical interface; the possibility to display more information by minimizing the space used for decoration; and the possibility to work through standardized data lines with low-power and embedded devices. Design of the Support Part. Software. To support the performance of our system and its smooth operation, as well as to write and configure programs, the experimental study used a number of fairly reliable programs: the GCC compiler version 4.8, the standard compiler for C on most Linux distributions; the vi/vim text editors and their extensions that come with the operating system; the Git version control system, which makes it possible to keep a history of development, trace any changes and, if necessary, return to previous versions of settings; and the WiringPi library, which provides access to the GPIO through a simple API in C/C++. To test the effectiveness of the COMSRL platform with the previously proposed sensors, it is recommended to connect them by pre-activating the test programs that are
part of the software platform. In order to conduct the initial testing, the platform is set up according to the technical requirements and guidelines. First of all, you need to connect the sensors correctly and accurately. To do this, it is recommended to use the project documentation for the hardware platform, including activating the GPIO pins on it [1]. Based on a thorough analysis of the schematic image, it can be concluded that there are several pins for power supply, at 3.3 and 5.0 V, several pins for grounding, and a number of pins for general input and output. Depending on the type of sensor, it is possible to use or limit the use of the power supply pins. For example, the ds18b20 sensor can operate in parasitic mode and connect via only one wire to one of the I/O pins. The study recommends using a more reliable circuit and connecting the sensors using three wires and a resistor. The connection diagram is shown in Fig. 5. A 4.7 kΩ resistor is a crucial requirement, as without it the sensor will not be recognized by the platform. A peculiarity of this sensor is that each unit receives its own unique identification number at the manufacturing stage. We can check its serviceability at this stage. It is recommended to perform this check in two steps: activate the 1-Wire connection mode on the platform; then check for the presence of the sensor in the appropriate directory (Fig. 6).
Fig. 5. Hardware documentation
Fig. 6. Connection diagram ds18b20
Let us proceed to the directory which is responsible for displaying devices connected to the main bus (/sys/bus/w1/devices). The name "28-15f13f126461" displayed in this directory is a unique sensor number. Of course, it can only be seen in this way at the connection stage. It is possible to check the correspondence of a sensor to its unique number only by experiment, by alternately heating the sensors and monitoring the corresponding temperature change. This peculiarity is taken into account in the process of installing new sensors (the number must be known before the installation stage). It is recommended to check the file w1_slave, which is responsible for storing information about the sensor [1]. Therefore, to obtain the exact temperature, it is advisable to read these two values and process them correctly. However, it is necessary to anticipate two problems: a time delay, because in this mode the temperature is recorded once a second at best, and, in the case of a system crash, no record at all. The operation of reading from the disk is also several tens of times more expensive in terms of processor time, so in the experimental study we will use a software
implementation of this approach, which involves reading information directly from the pins and processing it in real time. It is recommended to clarify the version control system of the COMSRL taking into account the terms of reference and project documentation [1]. A new repository is created and a file with the COMSRL program is added. Then we proceed to the temperature and humidity sensor DHT11. The same steps are performed as for the ds18b20, after which the program is compiled and started. If the program fails to start correctly, it is necessary to check that the pin numbering in the official documentation matches that in the library. It is recommended to check which pin numbers are used in the experiment in this library [2].
Fig. 7. Tracing changes in the control system
Fig. 8. Recompiled program
If the pin number in the source code of the program is specified incorrectly, the following rule is used for correction: change the number in the source code. This change is quite simple, because when writing the program we used a C macro to set this value. Macros function like constants and are expanded by the preprocessor before the program is compiled. Replace the "#define DHT_PIN 7" macro with a macro whose pin number corresponds to the actual connection, "#define DHT_PIN 23". Recompile the program code to activate COMSRL and add these changes to the version control system [3]. Figure 7 shows
that the version control system traced our changes. It is recommended to recompile the COMSRL program and restart it [1]. Figure 8 clearly shows that sometimes there is a problem with incorrect data that cannot be displayed correctly. This problem is solved later by increasing the frequency of the processor. You can also observe the increase in humidity during direct contact with the skin. The next step is to run our temperature sensors simultaneously and compare their performance. It is recommended to calibrate them if necessary, change the approach to the analysis, or simply check their serviceability. Below are the results of running the two programs simultaneously and an attempt to compare their performance. Conclusion: the ds18b20 sensor responds faster to changes in temperature for two reasons: first, the sensor is less isolated from the environment and heats up faster; second, processing and data transfer on it are faster. Because the DHT11 responds more slowly to temperature changes, it should not be used where rapid changes must be monitored and responded to quickly. On the other hand, if an accuracy of thousandths of a degree is not required and sharp changes in temperature are not expected, then its use is efficient and justified, because it also offers humidity measurement at a lower cost. Such aspects are the most crucial and are taken into account during the development of the COMSRL software, including all the disadvantages of the hardware, differences in configuration, and inaccuracies in the project documentation.
5 Design of a Variable Model for Thermochemical Reactions Development of a Microcontroller Program. In order to obtain a graph of a thermochemical reaction, the dependence of temperature on time, we need to build our own program. The DS18B20 temperature sensor plays the role of the measuring and digitizing device. One measurement per second is enough to get an accurate graphic image. It is very important to make the measurements with the same period. To do this, the experimental study uses the Arduino internal memory as a buffer. The Arduino UNO has 2 KB of RAM, which allows on the order of 1000 two-byte integer values to be stored, occupying most of the available memory [1]. Transferring data via the USB port takes a long time to open the port and establish a connection, so the data would arrive slowly and at an uneven rate. Instead, the data enters the buffer at a constant rate, limited only by the hardware of the controller. After one second of data has been read, the first data packet is sent for processing on a PC. The authors' code implementing this functionality is given below [3, 4]. Microcontrollers in Thermochemical Reactions. As a result of the experimental study, a program was created that uses the hardware and software to carry out eco-monitoring of the environment. Before describing it, we need to define what a thermochemical reaction is. Thermochemistry is a branch of chemical thermodynamics, the task of which is to determine and study the thermal effects of reactions, as well as to establish their interdependence with various physicochemical parameters [1]. Another task of thermochemistry is to measure the heat capacities of substances and establish their heats of phase transitions. If a certain amount of heat is applied to the system, then according to the law of energy conservation it will change the internal energy of the system and the
system will do work against external forces: Q = ΔU + A. A thermodynamic function called enthalpy, denoted by the symbol H, is introduced [2]. According to their thermal effect, reactions are divided into exothermic and endothermic [24]. The basis of the developed device is the Arduino UNO and the DS18B20 temperature sensor. The sensor reads temperature data from the substance, and the controller transfers the data to an output file, which is loaded into a spreadsheet. A data analysis system developed in the spreadsheet determines the amount of heat Q for different substances, with the possibility of their further mathematical modeling. On the basis of the developed system it is possible to conduct laboratory, introductory, lecture and practical classes for pupils and students for a better understanding of educational topics (thermochemical reactions, amount of heat, enthalpy, laws of thermodynamics, etc.) [1]. Technical Support. Technical support is the set of technical means created for data processing in COMSRL [6]. It includes electronic computers that process information, means of preparing data on computer media, means of collecting and registering information, means of data transmission via communication channels, means of data collection, storage and issuance of environmental monitoring results, ancillary equipment and organizational machinery. The automated information system for analysis is implemented using a three-level client-server architecture, which involves the following program components: a client application (usually called a "thin client" or terminal) connected to an application server, which in turn is connected to a database server. The complex of technical means (CTM) ensures the operation of the automated system.
The main purposes of the CTM are: ensuring the sustainable operation of the automated system; centralized data storage (database maintenance); system data backup and recovery; and ensuring the required level of security for protected resources and provided data. The CTM equipment is placed at the facility taking into account safety requirements and the operating conditions of the technical means [5]. The placement of the CTM is chosen according to the following requirements: requirements for access to the premises; requirements for the size of the premises; requirements for the climatic characteristics of the premises; and the operating conditions of the equipment. Minimum hardware requirements for one workstation are defined for the effective operation of COMSRL [1].
6 Conclusions

The purpose of the experimental study was to create a software implementation of a climate control system for houseplants that depends on the state of the environment and responds to its changes, including the use of fog computing technologies. As a result of the project, the software and hardware system of COMSRL was developed, which can control and monitor the climate either automatically, according to pre-planned scenarios, or directly under human control [1, 15]. This approach solves several problems at once: the overflow of repositories and databases, which are overloaded with information messages from all smart devices connected to the network; the problem of data-channel bandwidth, since a large amount of data is processed locally or is not sent or stored at all; and the problem of rapid response to sudden climate changes, with the initial response to such changes made at the edge of the “smart grid”, without the involvement of
Experimental Teaching of Robotics in the Context of Manufacturing 4.0
human resources, but using pre-planned scenarios [24]. The proposed software and hardware implementation is based on the basic concepts of fog computing and applies them to achieve maximum results in accordance with the resources and needs of a particular user and business [14]. Methods of analysis of chemical kinetics have important practical applications and are used for research in various fields of science. The ability to operate with these methods directly depends on the efficiency of the equipment and software used by the COMSRL [1, 2]. Taking into account the high cost of equipment for eco-monitoring and chemical analysis, the task of developing more accessible means remains relevant. In the course of the experimental research, the signal characteristic at the output of the device was analyzed, and the hardware for the implementation of the tasks and the software environment for data processing were selected and substantiated. The hardware part is developed on the basis of the chosen COMSRL equipment [2, 24]. An algorithm for reading, digitizing, and analyzing the signal has been created. Filtering of the useful signal was performed to eliminate interference. Visualization programs were developed for the user interface of the COMSRL. The author’s device can be used by installing the COMSRL software on a personal computer, which avoids spending money on expensive equipment and supports manual calculation and learning new software [3].
References

1. Hrybiuk, O.O.: Research learning of the natural science and mathematics cycle using computer-oriented methodological systems. Monograph, pp. 307–349. Drahomanov NPU, Kyiv (2019)
2. Hrybiuk, O.: Improvement of the educational process by the creation of centers for intellectual development and scientific and technical creativity. In: Hamrol, A., Kujawińska, A., Barraza, M.F.S. (eds.) MANUFACTURING 2019. LNME, pp. 370–382. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-18789-7_31
3. Hrybiuk, O.: Experience in implementing computer-oriented methodological systems of natural science and mathematics research learning in Ukrainian educational institutions. In: Machado, J., Soares, F., Trojanowska, J., Yildirim, S. (eds.) icieng 2021. LNME, pp. 55–68. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-79168-1_6
4. Hrybiuk, O.: Engineering in educational institutions: standards for Arduino robots as an opportunity to occupy an important niche in educational robotics in the context of Manufacturing 4.0. In: Proceedings of the 16th International Conference on ICT in Education, Research and Industrial Applications. Integration, Harmonization and Knowledge Transfer, vol. 27–32, pp. 770–785 (2020)
5. Hrybiuk, O.O.: Perspectives of introduction of variational models of computer-oriented environment of studying subjects of natural sciences and mathematical cycle in general educational institutions of Ukraine. Collection of Scientific Works of the KPNU. Pedagogical Series, vol. 22, pp. 184–190. KPNU (2016)
6. Hrybiuk, O.: Problems of expert evaluation in terms of the use of variative models of a computer-oriented learning environment of mathematical and natural science disciplines in schools. In: Zeszyty Naukowe Politechniki Poznańskiej. Seria: Organizacja i Zarządzanie, vol. 79, pp. 101–119. Wydawnictwo Politechniki Poznańskiej (WPP), Poznań (2019)
7. Frischkorn, G.T., Schubert, A.-L.: Cognitive models in intelligence research: advantages and recommendations for their application. J. Intell. 6, 34 (2018)
O. Hrybiuk and O. Vedishcheva
8. Plichart, P., Jadoul, R., Vandenbelle, L., Latour, T.: TAO: a collaborative distributed computer-based assessment framework built on semantic web standards. In: Paper Presented at the International Conference on Advances in Intelligent Systems (AISTA 2004), Luxembourg, pp. 15–18 (2004)
9. Schildkamp, K., Lai, M.K., Earl, L.: Data-Based Decision Making in Education: Challenges and Opportunities. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-4816-3
10. Wilson, S.M., Floden, R.E., Ferrini-Mundy, J.: Teacher preparation research: current knowledge, gaps, and recommendations. University of Washington Center for the Study of Teaching and Policy, Washington, DC (2001)
11. Tomasik, M.J., Berger, S., Moser, U.: On the development of a computer-based tool for formative student assessment: epistemological, methodological, and practical issues. Front. Psychol. 9, 2245 (2018)
12. Bahrin, M.A.K., Othman, M.F., Azli, N.H.N., Talib, M.F.: Industry 4.0: a review on industrial automation and robotic. Jurnal Teknologi 78, 6–13 (2016)
13. Zheng, P., et al.: Smart manufacturing systems for Industry 4.0: conceptual framework, scenarios, and future perspectives. Front. Mech. Eng. 13(2), 137–150 (2018)
14. Hrybiuk, O.O.: Psychological and pedagogical requirements for computer-oriented systems of teaching mathematics in the context of improving the quality of education. Humanitarian Bull. SHEI (31), IV(46), 110–123 (2013)
15. Hrybiuk, O.O.: Pedagogical design of a computer-oriented environment for the teaching of disciplines of the natural-mathematical cycle. Sci. Notes 7(3), 38–50 (2015)
16. Theorin, A., et al.: An event-driven manufacturing information system architecture for Industry 4.0. Int. J. Prod. Res. 55(5), 1297–1311 (2017)
17. Kipper, L.M., et al.: Scientific mapping to identify competencies required by Industry 4.0. Technol. Soc. 64, 101454 (2021)
18. Wang, X.V., Wang, L., Mohammed, A., Givehchi, M.: Ubiquitous manufacturing system based on cloud: a robotics application. Robot. Comput. Integr. Manuf. 45, 116–125 (2017)
19. Muhuri, P.K., Shukla, A.K., Abraham, A.: Industry 4.0: a bibliometric analysis and detailed overview. Eng. Appl. Artif. Intell. 78, 218–235 (2019)
20. Elsisi, M., Tran, M.Q., Mahmoud, K., Lehtonen, M., Darwish, M.M.: Deep learning-based Industry 4.0 and Internet of Things towards effective energy management for smart buildings. Sensors 21(4), 1038 (2021)
21. Posada, J., et al.: Visual computing as a key enabling technology for Industrie 4.0 and industrial internet. IEEE Comput. Graph. Appl. 35(2), 26–40 (2015)
22. Villani, V., Pini, F., Leali, F., Secchi, C.: Survey on human–robot collaboration in industrial settings: safety, intuitive interfaces and applications. Mechatronics 55, 248–266 (2018)
23. Kadir, B.A., Broberg, O., Souza da Conceição, C.: Designing human-robot collaborations in Industry 4.0: explorative case studies. In: DS 92: Proceedings of the DESIGN 2018 15th International Design Conference, pp. 601–610 (2018)
24. Hrybiuk, O.O.: The variative model for research training for math students using computer-oriented methodical system. Inf. Technol. Learn. Tools 77(3), 39–65 (2020)
25. Choi, S., Jung, K., Noh, S.D.: Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurr. Eng. 23(1), 40–63 (2015)
26. Kayembe, C., Nel, D.: Challenges and opportunities for education in the Fourth Industrial Revolution. Afr. J. Public Aff. 11(3), 79–94 (2019)
27. Kwon, D., Hodkiewicz, M.R., Fan, J., Shibutani, T., Pecht, M.G.: IoT-based prognostics and systems health management for industrial applications. IEEE Access 4, 3659–3670 (2016)
28. Lee, C., Lim, C.: From technological development to social advance: a review of Industry 4.0 through machine learning. Technol. Forecast. Soc. Change 167, 120653 (2021)
29. Longo, F., Nicoletti, L., Padovano, A.: Smart operators in Industry 4.0: a human-centered approach to enhance operators’ capabilities and competencies within the new smart factory context. Comput. Ind. Eng. 113, 144–159 (2017)
30. Xie, M., Zhu, M., Yang, Z., Okada, S., Kawamura, S.: Flexible self-powered multifunctional sensor for stiffness-tunable soft robotic gripper by multimaterial 3D printing. Nano Energy 79, 105438 (2021)
A Simulation-Based Approach to Reduce Waiting Times in Emergency Departments

Aydin Teymourifar1,2(B)

1 CEGE – Centro de Estudos em Gestão e Economia, Católica Porto Business School, Porto, Portugal
2 INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
[email protected]
Abstract. The goal of this work is to utilize a simulation model to reduce waiting times before examinations in emergency departments. Different types of arrivals to an emergency department, basic processes, and possible queues are explained. The implementation is conducted in the Rockwell Arena software, and some related details are provided. The validity of the simulation model is demonstrated. Then, scenarios are developed based on a limited boosting of resources to reduce waiting times. It is analyzed how waiting times can be reduced and which resources should be increased for this aim. The Process Analyzer software is used for the analysis of the scenarios, whose details are given. It is also analyzed how the utilization rate of resources changes across the scenarios. Data from the literature are used for the experimental results. Since the analyses are based on generalizable approaches, they can be applied in similar emergency departments. The results show that with a reasonable increase in resources, it is possible to significantly reduce waiting times. Implications that may be useful in the management of emergency departments are presented.

Keywords: Emergency department · Waiting time · Discrete event simulation · Rockwell Arena · Process Analyzer

1 Introduction
Diminishing the waiting time of patients has always been a substantial matter in health management. One of the major reasons for this is that long waiting times negatively affect patients’ perception of service quality [1–3]. Emergency departments (EDs) are among the most dynamic health units, and the reduction of waiting times is even more vital there [1]. Simulation is consistently among the methods used for the analysis of dynamic and complex environments such as EDs [4–14]. The outline of the work is shown in Fig. 1. A simulation model developed by the author in the Rockwell Arena software is applied to reduce patients’ waiting times in an ED. The main processes in EDs, such as registration, triage, and examination, are included in the model. Usually, there are limited resources to carry out the processes, so waiting queues form in front of them. The aim of this study is to reduce the waiting times before examinations. Although data from a case study in the literature is used for the experimental results, the simulation model is generalizable; thus, it can be applied in many similar EDs. There are many articles in the literature that deal with a similar topic using a similar method. But unlike the others, this study tries to find a solution to reduce waiting times by considering all the resources used for the registration, triage, and examination processes. Different from many studies, the details of the simulation model are presented, making it usable by others for similar situations. Therefore, the article is intended to contribute both to the literature and to readers with administrative purposes.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): icieng 2022, LNME, pp. 232–244, 2022. https://doi.org/10.1007/978-3-031-09385-2_21
Fig. 1. Outline of the study.
The remainder of the paper is organized as follows. In Sect. 2, details about the system and the simulation model are described. The experimental results are given and discussed in Sect. 3. Conclusions, as well as future works, are provided in Sect. 4.
2 Description of the System and the Simulation Model
As seen in Fig. 2, patients come to an ED either by ambulance or through their own means. In an ED, patients are tagged with the colors red, yellow, and green, which denote high-, intermediate-, and low-risk patients, respectively. Some EDs use additional colors, which are not the subject of this article. Figure 2 shows the processes that this study focuses on.
Fig. 2. General flows in an ED [2].
Also, the relevant queues are seen in Fig. 2, which are the waiting queues for the admission part, triage, and examination. Admission is only for patients who come with their own means. Since the condition of the patients arriving by ambulance is serious, they are directly tagged in red and are transferred to the resuscitation unit quickly. Red zone patients are taken under intensive care and observation after the resuscitation process. Other patients are first registered in the admission part, and then they go to the triage department and are tagged as green or yellow. In fact, some of the patients who do not come by ambulance may pass through the triage and be tagged as red. Patients of the yellow area are usually kept under observation after the examination process. This may also be true for green-tagged patients, but only for a small proportion of them; they usually just leave the ED after a basic examination. However, most of the yellow-zone patients undergo some medical analyses and tests after the first examination. Intensive care, observation, and medical analysis processes are not shown in Fig. 2 because this study focuses only on reducing the time that patients wait before the examination. Since there is no waiting for red-zone patients, only the waiting times of patients in the green and yellow areas are implied here. Nevertheless, to illustrate the general processes in an ED, some details are also given in the following sections. In the next parts of this section, some details of the simulation of the system in the Arena software are discussed. A method as in Fig. 3 is used to adjust the time. In Fig. 3(a), when each entity arrives in the system, the time is assigned to it as an attribute. The variable H used for this purpose is shown in Fig. 3(b).
Fig. 3. Time adjustment.
Arrivals of patients into the system begin as in Fig. 4(a), the details of which are given in Fig. 4(b). Arrivals are set by defining a variable as in Fig. 4(c). This variable has 24 lines, each showing the time in seconds between patients’ arrivals to the ED within an hour of the day.
Fig. 4. Arrivals to the ED.
Arrival types shown in Fig. 2 are separated by a decision module in Fig. 4(d). Patients arriving by ambulance go directly to the red area via the route in Fig. 4(e), while the others move to the admission and register there, as in Fig. 4(f). In all figures, the pink elements are made using the station and route modules in Arena. As seen in Fig. 5(a) and (b), the station module in Arena is used for the acceptance zone, in which, as in Fig. 5(c) and (d), the registration process is done by the admission staff. Then, patients are sent to the triage area via the route shown in Fig. 5(e). The triage section also starts with a station module, as seen in Fig. 6(a). The process in Fig. 6(b) is done by a triage nurse shown in Fig. 5(d). With the decision module in Fig. 6(c), patients are divided into red-, yellow-, and green-tagged ones. Their percentages are given in the experimental results section. As an example, Fig. 6(d) shows the patients tagged as red. They are then sent to the respective area by the route modules shown in Fig. 6(e). Patients who are tagged as green or yellow in the triage section enter the examination area as shown in Fig. 7(a). Using the decision module in Fig. 7(b), they are directed to specific areas. The doctor–nurse teams that examine these patients are the same, but the yellow-tagged patients have priority, which is provided by the Hold module in Fig. 7(c). The condition used in this module is as follows: SETSUM(Examination Doctors,5) < 4 && SETSUM(Examination Nurses,5) < 4 && SETSUM(Examination Beds,5) < 4 && NQ(Yellow Examination Process.Queue)==0. This represents the waiting area for the green examination, where patients wait until at least one examination team is free and there are no patients waiting in the yellow area. In the Arena software, SETSUM symbolizes the number of busy resources, while NQ stands for the number of entities waiting in a queue. The values 4 and 5 are, respectively, the number of resources and the related index. The condition adequately represents the priority of yellow-zone patients over green-tagged ones and the situation in which resources are focused on the yellow zone when a patient arrives there.

Fig. 5. Acceptance and registration part of the ED.

Fig. 6. Triage unit of the ED.
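The queue discipline expressed by this condition can be sketched in a few lines of Python (an illustration of the priority rule only, not of the Arena model; the tags and patient names are invented for the example): yellow-tagged patients are always examined before green-tagged ones, first-come-first-served within each tag.

```python
import heapq

# Illustrative sketch of the yellow-over-green priority rule described above.
YELLOW, GREEN = 0, 1  # lower value = higher priority

def service_order(arrivals):
    """arrivals: list of (tag, name) in arrival order; returns exam order."""
    queue = []
    for seq, (tag, name) in enumerate(arrivals):
        # The arrival sequence number breaks ties, giving FIFO within a tag.
        heapq.heappush(queue, (tag, seq, name))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

order = service_order([(GREEN, "g1"), (YELLOW, "y1"), (GREEN, "g2"), (YELLOW, "y2")])
print(order)  # ['y1', 'y2', 'g1', 'g2']
```

In the actual model the decision is made only when an examination team becomes free; the static ordering above merely demonstrates the discipline.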
Fig. 7. Examination processes in the green and yellow zones.
Figures 8(a), (b) and (c) show the red, observation, and exit zones, respectively, although, as mentioned before, these parts do not affect the outputs of this study. The resources in the red and observation zones are different from those in the green and yellow examination zones. When necessary, a team is even called from the other parts of the ED and/or from nearby health units to the red zone, because patients in this area can never be kept waiting and must undergo an immediate resuscitation process. This is shown as the resuscitation and intensive care unit (ICU) process in Fig. 8(a). Red zone patients are directed
to the observation zone after the resuscitation process. Patients go out of the ED through the components shown in Fig. 8(c).
Fig. 8. (a) Red, (b) observation and (c) exit zones.
Expressions are defined for the outputs of the simulation, as seen in Fig. 9(a), (b), (c) and (d), and they are employed as seen in Fig. 9(e). They are characterized as user-defined outputs.
Fig. 9. Defining outputs and the simulation model.
3 Experimental Results
For the experimental results, data from a case study in the literature is used; the case study covers multiple EDs, but only the data for one of them is considered.
The time between arrivals at the chosen hospital is as in Table 1, i.e. the variable shown in Fig. 4(c) is filled with the values given in Table 1 [2].

Table 1. Average time between arrivals of patients to the ED (in seconds).

Time intervals   Value
02:00–08:59      391
09:00–16:59      106
17:00–01:59      82
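As an illustrative sanity check (not part of the original study), the mean interarrival times in Table 1 imply an expected daily patient volume:

```python
# Expected number of daily arrivals implied by the mean interarrival
# times in Table 1. Interval lengths are 7, 8 and 9 hours (24 h in total).
INTERVALS = [
    (7, 391),   # 02:00-08:59: one arrival every 391 s on average
    (8, 106),   # 09:00-16:59
    (9, 82),    # 17:00-01:59
]

arrivals_per_day = sum(hours * 3600 / mean_gap for hours, mean_gap in INTERVALS)
print(round(arrivals_per_day))       # ~731 patients per day
print(round(30 * arrivals_per_day))  # ~21938 per 30-day simulated month
```

The implied monthly volume is of the same order of magnitude as the monthly patient totals used below for validation.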
About 8% of patients arrive at the ED by ambulance, which is represented in Fig. 4(d). As seen in Fig. 5(d), the numbers of resources in the admission, triage, and examination zones are two, one, and four, respectively. The examination team consists of a doctor, a nurse, and the facilities necessary for the examination. In Fig. 5(d), these are defined separately, but it is also possible to collect them all under one examination resource. Furthermore, in Fig. 4(d), the needed facility is named the examination bed, which actually stands for all the requirements of the examination. As seen in Fig. 5(c), the registration process follows a Normal(1,0.2) distribution. Like admission, the triage process shown in Fig. 6(b) also follows a Normal(1,0.2) distribution. In the triage section, 3%, 12%, and 85% of the patients are tagged as red, green, and yellow, respectively, which is shown in Fig. 6(c). Commonly, green-tagged patients outnumber yellow ones in EDs, but in the case study it is the opposite; the reason for this and related details can be found in reference [2]. As shown in Fig. 7(d), the duration of the examinations in the green area follows a Normal(4,1.5) distribution; in the yellow zone, it follows a Normal(5,1.5) distribution [2]. The data is employed within the simulation model illustrated in Sect. 2. The simulation period is 30 days, i.e. one month, the warm-up period is the first 3 days, and the number of replications is 3. To validate the obtained results, a comparison is made with the relevant study in the literature. For this aim, the monthly total number of patients, the numbers of green, yellow, and red patients, the waiting times of green- and yellow-tagged patients, and the utilization rate of resources are used. They are reported in the literature as 18723, 2297, 15872, 554, 68.17, 9.88, and 0.54, respectively [2], while here they are found to be 18852, 2202, 16130, 520, 69.24, 9.61, and 0.54.
Therefore, it can be said that the simulation is valid. The waiting times are in minutes. Twenty-six scenarios are designed based on increases of up to two units in the resources, which are the admission staff, triage nurses, and examination teams; so, together with the current situation, there are 27 cases in total. The maximum increase of two units per resource can be interpreted as a constraint, since in real life such increases are not easy. The results of the scenarios are acquired from the Process Analyzer. For this aim, the file of the Arena model whose validity has been confirmed is used inside the Process Analyzer software. As seen in Fig. 10, resources are selected in the Controls part, while simulation outputs are selected under Responses.
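The 27-case grid just described can be enumerated mechanically; the sketch below reproduces it in Python (the resource names are shorthand for this illustration, not identifiers from the Arena model):

```python
from itertools import product

# Each of the three resources (admission staff, triage nurses, examination
# teams) may be increased by 0, 1, or 2 units over the current levels
# (2, 1, 4), giving 3**3 = 27 cases: the current state plus 26 scenarios.
CURRENT = {"admission": 2, "triage": 1, "examination": 4}

scenarios = [
    {res: CURRENT[res] + inc for res, inc in zip(CURRENT, increases)}
    for increases in product(range(3), repeat=3)
]

print(len(scenarios))   # 27
print(scenarios[0])     # {'admission': 2, 'triage': 1, 'examination': 4} (current state)
print(scenarios[-1])    # {'admission': 4, 'triage': 3, 'examination': 6} (maximum increase)
```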
Fig. 10. Using the Process Analyzer software to analyze scenarios.
The obtained results are presented in Table 2. In the first column of the table, scenarios are denoted by S. The numbers of resources are written in the second, third, and fourth columns. In this table, the waiting times are in minutes. As seen, waiting times in both green and yellow zones fall significantly, especially as the number of examination teams increases. Increases in the admission and triage staff also reduce waiting times. As seen in Fig. 11, there is no significant difference between scenarios in terms of the utilization rate of resources. There are scenarios like S-23, S-25, and S-26, which appear to have better results regarding the waiting times. However, it can be said that the results do not change considerably after S-18. Therefore, for the employed data, it can be declared that raising the number of examination teams by two units yields a substantial improvement. Thus, for example, by applying S-18, savings can be made in resources compared to S-26. Obviously, these results may be different for other case studies.

Table 2. Outputs for the current state and the scenarios.

Scenario   Admission staff   Triage nurses   Examination teams   Utilization rate   Avg. waiting time,   Avg. waiting time,
                                                                 of resources       yellow zone          green zone
Current    2                 1               4                   0.545              9.61                 69.24
S-1        2                 2               4                   0.544              9.39                 67.99
S-2        2                 3               4                   0.541              8.23                 55.14
S-3        3                 1               4                   0.546              9.59                 72.24
S-4        3                 2               4                   0.538              8.69                 54.47
S-5        3                 3               4                   0.539              8.23                 54.86
S-6        4                 1               4                   0.54               8.98                 64.02
S-7        4                 2               4                   0.543              8.42                 52.43
S-8        4                 3               4                   0.542              8.40                 59.52
S-9        2                 1               5                   0.539              2.25                 7.79
S-10       2                 2               5                   0.546              2.11                 9.62
S-11       2                 3               5                   0.545              1.94                 8.31
S-12       3                 1               5                   0.543              2.36                 8.55
S-13       3                 2               5                   0.545              1.92                 8.98
S-14       3                 3               5                   0.546              1.75                 7.52
S-15       4                 1               5                   0.546              2.21                 7.95
S-16       4                 2               5                   0.543              1.79                 7.33
S-17       4                 3               5                   0.541              1.79                 7.55
S-18       2                 1               6                   0.547              1.43                 3.85
S-19       2                 2               6                   0.547              0.94                 3.58
S-20       2                 3               6                   0.545              0.92                 3.52
S-21       3                 1               6                   0.545              1.47                 4.06
S-22       3                 2               6                   0.551              0.97                 3.67
S-23       3                 3               6                   0.544              0.89                 3.66
S-24       4                 1               6                   0.545              1.40                 3.93
S-25       4                 2               6                   0.54               0.92                 3.37
S-26       4                 3               6                   0.546              0.93                 3.57
Fig. 11. Variations of (a) utilization rate of resources, average waiting times in (b) yellow and (c) green zones according to the scenarios.
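As a small illustration of how Table 2 can be read (the selection criterion is an assumption for this sketch, not the authors'; the values are transcribed from the table), one can search, among some of the six-examination-team scenarios, for the one that keeps the green-zone waiting time under 4 min with the fewest added resource units:

```python
# (name, admission staff, triage nurses, examination teams,
#  avg. wait yellow, avg. wait green) -- values from Table 2.
SCENARIOS = [
    ("S-18", 2, 1, 6, 1.43, 3.85),
    ("S-19", 2, 2, 6, 0.94, 3.58),
    ("S-20", 2, 3, 6, 0.92, 3.52),
    ("S-23", 3, 3, 6, 0.89, 3.66),
    ("S-25", 4, 2, 6, 0.92, 3.37),
]
CURRENT = (2, 1, 4)  # current admission, triage, examination levels

def added_units(s):
    """Total resource units added relative to the current state."""
    return sum(s[i + 1] - CURRENT[i] for i in range(3))

feasible = [s for s in SCENARIOS if s[5] < 4.0]  # green-zone wait under 4 min
best = min(feasible, key=added_units)
print(best[0], added_units(best))  # S-18 2
```

The search returns S-18, consistent with the observation that the results do not change considerably after S-18.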
4 Conclusion and Future Works
Since waiting times in healthcare units, especially in EDs, affect patients’ perception of quality, reducing them has always been among the goals of managers. This paper describes the general processes and possible queues in an ED. On this basis, a simulation model is utilized within the Rockwell Arena software. Unlike many studies in the literature, details of the simulation process in Arena are given. The queues that a patient waits in for an examination and the resources used there are determined. Unlike many others, this study focuses only on decreasing waiting times for examinations; nevertheless, some other relevant parts are also described. In an ED, patients are tagged as red, yellow, or green. Patients who are labeled as red cannot be kept waiting, as they urgently need resuscitation; therefore, waiting times for them are not considered. In this study, waiting times concern green- and yellow-tagged patients. The sections where these patients wait are the queues before registration, triage, and examination. In these areas, the resources that affect waiting times are the registration staff, triage nurses, and the teams of doctors and nurses performing the examinations. In addition, the facilities required for the examination are also among the influencing resources. Therefore, to reduce waiting times, the appropriate level of these resources must be found. But as in many optimization problems, there are constraints as well. In particular, it may not be easy to increase the number of doctor–nurse teams and the required facilities. Besides, it should be noted that in this study, waiting times cover only the time before the start of the examination for green- and yellow-tagged patients. They may be directed to other parts of the ED or even be kept under observation after the initial examination, whose analysis is not the subject of this paper; another study will be conducted on these processes in the future.
Scenarios in which increases in resources impact the waiting times of patients are designed and implemented in the Process Analyzer software. An upper limit has been defined for the increase of the resources, because it can be difficult to add staff; likewise, physical facilities need to grow when the examination teams are increased, which may not be easy. For the used case study, the most significant improvement in waiting times occurs when the number of doctor–nurse teams performing the examinations is increased. However, increasing the triage and admission staff also resulted in a reduction in waiting times. These results are valid for patients in both the green and yellow zones. Since the urgency of yellow-zone patients is higher than that of green-tagged patients, reducing waiting times is even more important for them. The described simulation model represents a general case and can be used for similar EDs. Although different results may be obtained for different case studies, the method can be generalized. The rate of patients’ arrival to an ED varies across time intervals; therefore, waiting times change throughout the day. In this case, it may make sense to change resource levels over time, and in future studies, it is planned to conduct a study based on this idea. In addition, for the case study used in
this work, the reason why there is no improvement in the utilization rate of the resources may be that the crowding is concentrated in certain time intervals; this will be investigated in future works. Registration, triage, and examination are among the most influential processes in an ED, and the level of resources used for them affects waiting times. In this work, unlike many studies in the literature, a model is developed to reduce waiting times by focusing on these processes and the related resources. Different from many papers, details about the model are provided. On the other hand, it should be emphasized that one of the main limitations of this work is that only some aspects of specific activities are included in the presented scenarios. In future studies, the model will be generalized by considering more processes. Moreover, it is planned to design a more comprehensive simulation model, including more than one ED. In this case, considering the distance of the patients to EDs, an example of the applications of sectorization problems [15] in health management will be presented.

Acknowledgements. Financial support from Fundação para a Ciência e a Tecnologia (through project UIDB/00731/2020) is gratefully acknowledged. The author would like to thank the editors and the anonymous referees for their valuable comments, which helped to significantly improve the manuscript.
References

1. Teymourifar, A., Kaya, O., Ozturk, G.: Contracting models for pricing and capacity decisions in healthcare systems. Omega 100, 102232 (2021)
2. Kaya, O., Teymourifar, A., Ozturk, G.: Analysis of different public policies through simulation to increase total social utility in a healthcare system. Socio-Econ. Plan. Sci. 70, 100742 (2020)
3. Kaya, O., Teymourifar, A., Ozturk, G.: Public and private healthcare coordination: an analysis of contract mechanisms based on subsidy payments. Comput. Ind. Eng. 146, 106526 (2020)
4. Lin, C.H., Kao, C.Y., Huang, C.Y.: Managing emergency department overcrowding via ambulance diversion: a discrete event simulation model. J. Formosan Med. Assoc. 114(1), 64–71 (2015)
5. Cabrera, E., Taboada, M., Iglesias, M.L., Epelde, F., Luque, E.: Simulation optimization for healthcare emergency departments. Procedia Comput. Sci. 9, 1464–1473 (2012)
6. Ruohonen, T., Neittaanmaki, P., Teittinen, J.: Simulation model for improving the operation of the emergency department of special health care. In: Proceedings of the 2006 Winter Simulation Conference, pp. 453–458. IEEE (2006)
7. Komashie, A., Mousavi, A.: Modeling emergency departments using discrete event simulation techniques. In: Proceedings of the Winter Simulation Conference 2005, p. 5. IEEE (2005)
8. Cabrera, E., Taboada, M., Iglesias, M.L., Epelde, F., Luque, E.: Optimization of healthcare emergency departments by agent-based simulation. Procedia Comput. Sci. 4, 1880–1889 (2011)
9. Duguay, C., Chetouane, F.: Modeling and improving emergency department systems using discrete event simulation. Simulation 83(4), 311–320 (2007)
10. Ahmed, M.A., Alkhamis, T.M.: Simulation optimization for an emergency department healthcare unit in Kuwait. Eur. J. Oper. Res. 198(3), 936–942 (2009)
11. Baesler, F.F., Jahnsen, H., DaCosta, M.: The use of simulation and design of experiments for estimating maximum capacity in an emergency room. In: Winter Simulation Conference, vol. 2, pp. 1903–1906 (2003)
12. Uriarte, A.G., Zúñiga, E.R., Moris, M.U., Ng, A.H.: How can decision makers be supported in the improvement of an emergency department? A simulation, optimization and data mining approach. Oper. Res. Health Care 15, 102–122 (2017)
13. De Angelis, V., Felici, G., Impelluso, P.: Integrating simulation and optimisation in health care centre management. Eur. J. Oper. Res. 150(1), 101–114 (2003)
14. Liu, Z., Rexachs, D., Epelde, F., Luque, E.: A simulation and optimization based method for calibrating agent-based emergency department models under data scarcity. Comput. Ind. Eng. 103, 300–309 (2017)
15. Teymourifar, A., Rodrigues, A.M., Ferreira, J.S.: A comparison between simultaneous and hierarchical approaches to solve a multi-objective location-routing problem. In: Gentile, C., Stecca, G., Ventura, P. (eds.) Graphs and Combinatorial Optimization: from Theory to Applications. ASS, vol. 5, pp. 251–263. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-63072-0_20
Multimethodology Exploitation Based on Value-Focused Thinking: Drones Feasibility Analysis for National Defense

Ygor Logullo de Souza1(B), Miguel Ângelo Lellis Moreira2,3, Bruno Thiago Rego Valeriano Silva1, Mischel Carmen Neyra Belderrain1, Christopher Shneider Cerqueira1, Marcos dos Santos3,4, and Carlos Francisco Simões Gomes2
1 Aeronautics Institute of Technology, São José dos Campos, SP 12228-900, Brazil
[email protected]
2 Fluminense Federal University, Niterói, RJ 24210-346, Brazil
3 Naval Systems Analysis Center, Rio de Janeiro, RJ 20091-000, Brazil
4 Military Institute of Engineering, Urca, RJ 22290-270, Brazil
Abstract. There is a hypothetical need within Brazilian National Defense for the implementation of RPAS (Remotely Piloted Aircraft Systems), commonly known as drones, in tactical and strategic operations. Although RPAS offer countless applications and can even replace conventional aircraft flown by onboard pilots, we identified a lack of proper exploration of the problematic situation, starting from the following questions: Why is it necessary to use RPAS in National Defense operations, and what is the purpose of a particular implementation? Furthermore, if Brazilian security sectors and military forces wish to implement these systems, what would be the best choices? Develop or buy technology? Use them in which areas and missions? In this context, we propose to build a value model that structures the problematic situation based on Value-Focused Thinking (VFT) combined with Systems Thinking approaches. The proposed multimethodology seeks to provide a better understanding of the problem, identifying the strategic and fundamental objectives, which makes it possible to quantify them through an assigned utility value, indicating an enlightened solution. The result is an objective-driven approach and a holistic understanding to support this strategic decision in Brazilian National Defense.

Keywords: Systems thinking · Systems engineering · Decision process
1 Introduction

Based on the Brazilian National Defense Policy [1], Brazil has a prominent role in the world environment, privileging peace and defending dialogue as the means to resolve disputes between States. Nevertheless, it is worth mentioning that the country seeks continuous development of its defense strategies, expressed through political, economic, psychosocial, military, and scientific-technological development [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): ICIENG 2022, LNME, pp. 245–256, 2022. https://doi.org/10.1007/978-3-031-09385-2_22
Contextualizing technological development for defense, Remotely Piloted Aircraft Systems (RPAS), usually known as drones, have attracted considerable interest in recent years, motivated by their numerous and varied applications [2]. For military applications in particular, RPAS have become essential devices in modern military operations, with increased demand due to successful battlefield deployments, providing features such as persistent surveillance, tactical reconnaissance, low risk to human life, and low cost when compared to traditional aircraft systems [3]. As addressed in [4], the last two decades have seen an expansion of the development and implementation of the technology: initially concentrated in the United States, RPAS are today widely deployed in strategic military operations in Europe and Asia. Brazil, in turn, has been showing interest in the use, acquisition, and technological development of RPAS [5]. Complementarily, [6] points out that a group of aircraft from different technological industries is currently undergoing tests in activities related to national defense, specifically in logistics, surveillance, sensing, reconnaissance, and combat operations. Even though RPAS provide countless applications and, in many cases, can substitute for conventional aircraft piloted by onboard crews, a better understanding of the problem in question is needed in order to define the objectives to be achieved and the variables present in the problem, starting from the following question: Why is there a need for RPAS in national defense operations, and what does one seek to achieve with this implementation?
Based on this questioning, it is possible to understand the problematic situation as a wicked problem, involving multiple stakeholders, multiple perspectives, conflicts of interest, intangible information, and uncertainties [7, 8]. Operational Research (OR) provides, through its approaches and methodologies, the analysis of complex problems on a technical and scientific basis, supporting their structuring and understanding and, in specific cases, the identification of a favorable solution [9]. Complementary to the analysis of unstructured problems, Systems Thinking seeks to provide a better alignment between how we think and how the real world works, given that the real world operates as a system: a complex network of many variables that interact with each other [10]. According to [11], Systems Thinking also enables a better understanding and conception of complex phenomena, including the description of complex entities and their relationships, approaches that make it possible to illustrate complex scenarios without necessarily simplifying them, and support for holistic thinking. Concerning problem structuring, Value-Focused Thinking (VFT) [12], even though not presented as a methodology, makes it possible to understand a given problematic situation based on the values of the decision-maker in the context under analysis, not restricting itself to identifying alternatives but enabling the decision-maker to focus on essential activities before actually seeking the final solution to the problem [13]. The great benefits of VFT lie in its ability to generate better alternatives for a decision analysis of any nature and to uncover objectives that can lead to a more productive collection of information [14].
This study presents a multimethodological approach based on the integration of the concepts of Systems Thinking, VFT, and Multi-Criteria Decision Analysis (MCDA). With the proposed framework, the approach seeks to clarify the main points, uncertainties, and real needs regarding the implementation of RPAS in Brazilian National Defense operations. The paper is divided into five sections. After the introduction, Sect. 2 explores the theoretical concepts of the approaches and methodologies underlying the analysis of the problem. Section 3 explains the proposed framework, presenting its main viability points for decision analysis. Section 4 carries out the case study, detailing the problematic situation and the application of the proposed approach. Section 5 presents the final considerations and future works.
2 Theoretical Concepts

To reflect on the validity of a proposed plan, it is useful to bring up the boundary judgments of the various stakeholders and their implications, for example, to systematically vary them and check how the situation can look different. Boundary judgments and value judgments are closely interconnected [15].

2.1 Systems Thinking

The values elicited delimit the boundaries of the systemic intervention. Thus, research practice seeks to involve the largest possible number of values from different stakeholders without compromising the understanding of the problem. In this way, it does not restrict itself to listening to the parties involved but includes them effectively in the analysis of the problematic situation [16, 17]. Defining boundaries, values, and facts defines what needs to be considered in the design of a system. The Critical Systems Heuristics and Systemic Intervention approaches, coming from Systems Thinking (ST), explore the definition of boundaries in a decision, planning, or improvement effort. Midgley [18] conceptualizes this act as Boundary Critique. In the real world, change occurs all the time; exploring the boundaries of a system thus helps to minimize unforeseen situations and environmental changes, requiring constant review as part of learning the process [19]. ST models and approaches seek to recognize the complexity of reality, describe complex entities and their relations, illustrate complex phenomena without undue simplification, and support holistic thinking [11]. The four fundamental patterns of ST are distinctions, systems, relationships, and perspectives (DSRP). Thinking is creating mental models, and ST seeks to improve the construction of these mental models [10, 20]. Making distinctions is the understanding that mental models establish an identity and exclude the other.
In the literature, this can be understood as the exploration of a set of boundaries, as discussed earlier, going a little further: “the creation of an identity means that the other is an identity from another perspective” [21]. ST may thus help in understanding a situation that is too messy.
2.2 Value-Focused Thinking

Understood as a philosophy rather than a method, VFT, proposed by Keeney [12], emphasizes the importance of considering what constitutes value in a given context, not restricting itself to the identification of alternatives but organizing the problem around essential activities: identifying, structuring, analyzing, and understanding the objectives to be achieved in greater depth. Notably, even though VFT is not classified as a Problem Structuring Method (PSM), the approach has been used to understand and structure objectives and, consequently, decision problems [22]. Based on the VFT approach, [22] emphasize that the reason a decision-maker wants to make a decision is to gain some value; [12] states that these values provide the fundamentals of interest in a decision situation and that, once clarified, they lead to a better definition of the objectives. As presented in [12], the hierarchy of fundamental objectives is a representation of the desirable consequences, and the objectives define the decision context, which bounds the set of alternatives to be elicited; the author states that, in essence, the decision context answers the following question: what is the set of alternatives that achieve the fundamental objectives? As addressed in [23], the VFT approach integrates different types of decision analysis models, focusing on modeling that enables the treatment of subjective and non-deterministic information. Another important point is the strong presence of VFT in high-impact decision analyses for a given country, with applications in governmental and military environments in the areas of environment, energy, and defense [23, 24]. The approach has proven effective in military-strategic decisions such as those presented in [25, 26].
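Once the fundamental objectives and their attributes are fixed, such a value model is commonly scored with an additive multi-attribute value function, V(a) = Σᵢ wᵢ·vᵢ(xᵢ). The short sketch below illustrates the idea only; the objective names, weights, and single-attribute value functions are hypothetical examples, not values elicited in this study.

```python
# Minimal sketch of an additive multi-attribute value model in the spirit of
# VFT (Keeney). Objective names, weights, and value functions below are
# hypothetical illustrations, not taken from the study.

def additive_value(weights, value_fns, attributes):
    """V(a) = sum_i w_i * v_i(x_i), with the weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * value_fns[obj](attributes[obj]) for obj, w in weights.items())

weights = {"autonomy": 0.6, "surveillance": 0.4}
value_fns = {
    "autonomy": lambda x: x / 10.0,       # 0..10 expert score -> 0..1
    "surveillance": lambda x: x / 100.0,  # % coverage -> 0..1
}
alternative = {"autonomy": 7, "surveillance": 80}
print(round(additive_value(weights, value_fns, alternative), 2))  # 0.74
```

An alternative scoring 7/10 on autonomy and 80% on surveillance coverage would thus receive an overall value of 0.74 under these illustrative weights.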
In [26], VFT was used to find system concepts that could satisfy operational requirements and to evaluate systems that would take the United States into the aerospace domain and meet the capabilities required for this purpose, corroborating that the approach is appropriate for situations such as the one addressed in this study.

2.3 Requirements, Capabilities and Needs

Requirements can be understood as desired effects, achieved through the development of a certain potential, the capability [27]. Within the scope of an operational requirement as a desired effect, a set of metrics is needed to assess how a system performs its functions in the operational environment [27], focusing on the effectiveness achieved when a change in the behavior or functioning of a system is desired in an operational context. Similarly, [12] emphasizes the importance of the attributes generated for evaluating alternatives with respect to the achievement of the fundamental objectives. This discussion therefore leads to an alignment between the concept of operational requirements as desired effects and the fundamental objectives as desirable consequences. For the development of the system and compliance with operational requirements, the concepts and techniques of Systems Engineering can be used, enabling a better global solution [27]. As exposed in [11], only when the problem is well structured and clearly defined for stakeholders does a qualitative and quantitative investigation of the details make sense
to systematically design and develop the outline of the solution space. This is a justification for the “general to specific” principle in the systems engineering process model [11]. To originate a new system, it is first necessary to carry out a needs analysis [27]. This process should produce arguments that support the motivation for the project, answering: why is the system necessary? The Initial Capability Description is a product of this task of determining needs, and the definition of operational requirements emerges from this analysis [27]. The needs of a system are described in a hierarchical structure of objectives, which represents the limits of the solution space and the benefits of doing something to deal with the risk, as opposed to doing nothing [28]. This gives shape to, and points out what needs to be done in, the needs analysis.
3 Framework Proposal

Based on the approaches exposed above, this study explores the creation of mental models to shape the future decision. In search of a methodological transcription of this approach, a multimethodological framework is presented. Of course, no representation of a system and no shared mental model exhausts all possible identities and other distinctions, actions, and relationships of a perspective. However, as discussed earlier, expanding the set of elements considered helps to minimize the effects of future uncertainties. Figure 1 exposes the proposed framework. Integrating multiple methodological concepts, the approach starts by contextualizing the problematic situation, after which a systemic approach is explored. First, it seeks to understand and define who the stakeholders of the system under analysis are, seeking a broader analysis that is not restricted to the stakeholders of the organizations involved but exposes the stakeholders of the given problem, as explored in [15, 30]. This leads to the construction of two groups: those involved and those affected. In addition, since restrictions on communication with all individuals are very common, the model uses the power-interest relationship defined in [31], not in the “pure” form proposed by those authors but more aligned with [30], in the sense of focusing on constructive exploration with a focus on values, seeking the expansion of boundaries, bringing values to the fore and, consequently, information based on DSRP, allowing the adaptation of the mental models built for future decisions. Complementary to the systemic approach integrated into the model, stakeholder analysis based on power and interest is used to elicit the key stakeholders of the problem.
Managing these stakeholders is an essential process for refining elicitation, helping to deal with the various organizational restrictions, such as difficult access to managers with greater responsibilities, without losing the essence of what was exposed. In the sequence of the proposal, the structuring of the problematic situation [13] and the operational analysis of systems [26] begin. Here, the VFT approach is used as support, enabling the identification and structuring of the objectives and the possible directions to be taken with a given evaluation. In short, this stage of the model seeks to clarify the objectives to be achieved, transcribing them into the operational, quantitative, and qualitative requirements of the systems, determining the needs to be considered and the capabilities these systems are intended to fill.
Based on VFT, the importance of needs analysis for the development of a new system is understood, structuring the needs into a hierarchy of objectives and, as previously discussed, promoting the definition of the space of solutions or alternatives through the decision context. Thus, agreeing with [23]: “Since we are designing systems for future environments, we have learned that a value-focused approach is the best way to think about the future and encourage the design of innovative solutions to meet future needs”. Considering the proposed framework, all this movement requires a cognitive and creative effort from stakeholders; for this reason, a facilitator is necessary in the implementation of the model, as approached in [32], which provides an analysis closer to the one desired, given that a group carries out the analysis.
Fig. 1. Proposed framework flowchart
4 Case

The purpose of this analysis is to carry out an intervention to understand the real needs regarding the use of RPAS in National Defense operations carried out by the Armed Forces. We thus seek a better understanding of the problematic situation, providing a structured final decision and, considering the real need for RPAS, enabling a decision analysis model for evaluating the most favorable technologies with regard to the established needs. As discussed in [33], decision-making in political and military environments involves different levels and areas, interconnecting strategic, tactical, and operational analysis in favor of a direction aligned with the objectives in a given problematic situation. It must also be considered that decision-making in the political and military sphere is complex: the chosen solution can generate influences not only in the military sphere but also in other areas of society [34]. Based on the decision context and the proposed framework, the analysis proceeds with the identification of stakeholders related to the problematic situation [35]. In this scenario, the identified stakeholders can be allocated into four groups: Clients, representing entities with an interest in the implementation of the technology; Decision-Makers, the entities with interest and final decision-making power in the problem; Experts, entities with a relative level of interest and responsibility for the operational analysis of the needs to be established at a strategic level; and Affected, the group of individuals directly affected by the final decision. Figure 2 exposes the stakeholders in their respective groups.
Fig. 2. Set of stakeholders
Once the set of stakeholders is established, the entities are distributed in a power-interest diagram, making it possible to represent the respective levels of influence of each entity involved in the problematic situation. Figure 3 presents the power-interest diagram elaborated.
Fig. 3. Power-interest grid of stakeholders
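A power-interest grid such as the one in Fig. 3 amounts to classifying each entity into a quadrant from two scores. The sketch below illustrates this; the quadrant labels follow the usual reading of Ackermann and Eden [31], while the stakeholder names and scores are hypothetical placeholders, not the values elicited in the study.

```python
# Illustrative power-interest classification. Quadrant labels follow the
# common reading of Ackermann and Eden's grid; stakeholder names and the
# (power, interest) scores are hypothetical, not the study's elicited data.

def quadrant(power, interest, threshold=0.5):
    if power >= threshold and interest >= threshold:
        return "players"          # manage closely
    if power >= threshold:
        return "context setters"  # keep satisfied
    if interest >= threshold:
        return "subjects"         # keep informed
    return "crowd"                # monitor

stakeholders = {
    "Defense Ministry": (0.9, 0.9),
    "Armed Forces operators": (0.3, 0.8),
    "Technology suppliers": (0.7, 0.4),
    "General public": (0.2, 0.3),
}
for name, (p, i) in stakeholders.items():
    print(f"{name}: {quadrant(p, i)}")
```

In this toy example, the Defense Ministry lands in the "players" quadrant (high power, high interest), while the operators are "subjects" to be kept informed.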
To enable the collection of data and information concerning the problematic situation, we carried out a series of interviews for a better understanding of the problem through the points of view and experiences of people directly connected to the context under analysis. The interviews were made with officers from the Brazilian Armed Forces with vast experience in national defense operations who had already worked in RPAS employment environments. An officer with experience in decision-making and one with experience in the Defense Ministry were also interviewed. To be effective in a decision with multiple stakeholders, [12] suggests that the facilitator guides the elicitation of objectives from each stakeholder, structures each set of objectives, and aggregates the hierarchies into a single fundamental objectives hierarchy. Thereby, this study interviewed three representatives from the Brazilian Defense Ministry to elicit objectives. A literature review focusing on legislation – the National Defense Policy and National Defense Strategy [1] – was also made to search for objectives related to RPAS use. During the interviews, the perception was remarkable that RPAS should be part of a system that integrates other values, such as “Reach technological proximity to the world”, which would be influenced by the National Defense Objective of “Promote Technological and Productive Autonomy in Defense” [1]. Another recurrent objective elicited from the interviews was “Maximize Surveillance”, due to the large border and territorial waters that Brazil is accountable for; thinking about Security, surveillance of the Amazon Forest and of metropolitan favelas is also a concern. Considering the values brought by the stakeholders, it is reasonable to deduce that there is great concern about technological backwardness and about surveillance in its different aspects. Minimize technological backwardness (replacing “Promote Technological and Productive Autonomy” and aggregating the stakeholders’ perspectives) and Maximize surveillance may thus be specified.
Developing technology is a concern not only about developing the “aircraft” entity but about all the life-cycle features. Thus, these may be specifications of the objective “Minimize technological backwardness”. They also aggregate the perception of different important features, like sensors and data transmission to control the aircraft and receive the information they collect. Regarding surveillance, the possibilities were maritime, land border, and own territory. All of them could be specified into many other objectives – the land border can be split into air traffic and land traffic; own territory means the different biomes and tasks in metropolitan areas and situations like favelas or natural disasters – although this work’s goal is to look for the needs, not to specify the requirements. Those were the main aspects brought by the stakeholders. In addition, minimizing cost and protecting human resources were mandatory. Thus, they shall be treated as restrictions: screening criteria that determine whether an alternative, option, project, or technology should be assessed at all. Brazil does not have the ambition of the US to dominate air and space but, as already described, the Defense Ministry and Armed Forces must focus on values when making strategic decisions and in the system decision process as treated here – predominantly when decision opportunities arise once strategic objectives and values are specified. Thus, Fig. 4 shows the objectives hierarchy that represents the needs identified in this study. The exploration of the problematic situation was not exhausted here (Fig. 5 represents the need for a follow-up of the problem); other objectives should therefore be elicited in future studies, achieving a value model like the one proposed in [26, 36].
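The screening step described above — mandatory restrictions filtering alternatives before any value assessment — can be sketched as a simple pass/fail gate. The cost ceiling, field names, and alternatives below are hypothetical illustrations, not data from the study.

```python
# Sketch of screening by mandatory restrictions (cost ceiling, protection of
# human resources) applied before any value scoring. All names, fields, and
# limits here are hypothetical illustrations.

def passes_screening(alt, budget_limit=100.0):
    """An alternative is assessed only if it meets every restriction."""
    return alt["cost"] <= budget_limit and alt["protects_personnel"]

alternatives = [
    {"name": "RPAS fleet A", "cost": 80.0, "protects_personnel": True},
    {"name": "RPAS fleet B", "cost": 120.0, "protects_personnel": True},
    {"name": "Manned patrol", "cost": 60.0, "protects_personnel": False},
]
shortlist = [a["name"] for a in alternatives if passes_screening(a)]
print(shortlist)  # ['RPAS fleet A']
```

Only alternatives surviving this gate would proceed to evaluation against the value hierarchy.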
Fig. 4. Objectives hierarchy (needs)
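A needs hierarchy like the one in Fig. 4 can be held as a simple nested structure for later refinement into requirements. The sub-objectives below follow the ones mentioned in the text; the exact wording and structure of the published figure may differ.

```python
# The needs hierarchy can be represented as a nested mapping from fundamental
# objectives to sub-objectives. Entries follow the sub-objectives mentioned in
# the text; the exact wording of Fig. 4 may differ.

needs = {
    "Minimize technological backwardness": [
        "Develop full life-cycle capability",
        "Develop sensors and data transmission",
    ],
    "Maximize surveillance": [
        "Maritime surveillance",
        "Land border surveillance",
        "Own territory surveillance",
    ],
}

def leaves(tree):
    """Flatten the hierarchy into its lowest-level objectives."""
    return [leaf for subs in tree.values() for leaf in subs]

print(len(leaves(needs)))  # 5
```

Keeping the hierarchy as data makes it straightforward to attach weights and attributes to the leaves later, when the value model is completed in future work.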
The design of the decision context was initiated, and an action plan derives from it. Eliciting alternatives should be part of this plan, but first an analysis of requirements and functions is necessary – a follow-up specification of the needs. This is suggested because a system needs to deliver the values to the stakeholders, and the System Decision Process presented in [36] may support it. Another possibility is maintaining the value model produced and creating alternatives by analyzing it. If this value tree is considered, RPAS are plausible alternatives, though not on their own: they should be integrated and combined with other systems, i.e., RPAS form a system of systems. Furthermore, technology is always moving forward; developing resilient engineered systems, for example, is an opportunity to successfully complete future missions against evolving threats.
Fig. 5. Proposal of objectives hierarchy
5 Conclusion

This study proposed an approach for thinking about the reasons to acquire RPAS, as the Armed Forces have shown the will to purchase them or have already done so. A decision is an initial expenditure of resources to be transformed into values; it is therefore necessary to think about which values, and how to be efficient and maximize that transformation. To reach this goal, it is suggested to analyze the stakeholders that will influence the decision’s mental model; Systems Thinking approaches may support the messy situation, designing the boundaries, exposing them, and hence increasing transparency in the decision. VFT, proposed by Keeney, is a tool to support decision-making, depicting the values and desirable consequences that the decision must achieve. For RPAS as a system, it is important to think about its life cycle and to shape its design to maximize the achievement of values. Thereby, Systems Engineering (SE) may enable a better global solution, and an important first task in SE is needs analysis. Here, it was suggested to use ST to understand the problem and to facilitate VFT, which depicts the values and, hence, the needs. We proposed using VFT in our approach for defining a value model, but when the problem is too messy, exhibiting wicked-problem characteristics, it is necessary to use ST to understand the whole situation and explore the mental models, getting as close as possible to reality. In the end, it was possible to initiate the design of the needs, without, however, exhausting the situation. Minimizing technological backwardness and maximizing surveillance are needs in Defense and Security in Brazil. The stakeholder interviews suggested that these two needs are urgent, although it is important to highlight that RPAS acquisition alone is not the solution, as an alternative-focused thinking (AFT) approach to the decision might propose for this situation.
A broader facilitation process is indicated and necessary to specify those needs into functions and requirements, building a value model and reaching a resilient system that can achieve all the objectives depicted.
References

1. Brasil: Política Nacional de Defesa - Estratégia Nacional de Defesa. Ministério da Defesa, Brasília (2020)
2. Giones, F., Brem, A.: From toys to tools: the co-evolution of technological and entrepreneurial developments in the drone industry. Bus. Horiz. 60, 875–884 (2017). https://doi.org/10.1016/j.bushor.2017.08.001
3. Hamurcu, M., Eren, T.: Selection of unmanned aerial vehicles by using multicriteria decision making for defence. J. Math. 2020 (2020). https://doi.org/10.1155/2020/4308756
4. Bergen, P., Salyk-Virk, M., Sterman, D.: The World of Drones. https://www.newamerica.org/international-security/reports/world-drones/
5. Santos, M. dos, Costa, I.P. de A., Gomes, C.F.S.: Multicriteria decision-making in the selection of warships: a new approach to the AHP method. Int. J. Anal. Hier. Process 13(1) (2021). https://doi.org/10.13033/ijahp.v13i1.833
6. Moreira, M.Â.L., Gomes, C.F.S., dos Santos, M., do Carmo Silva, M., Araujo, J.V.G.A.: PROMETHEE-SAPEVO-M1 a hybrid modeling proposal: multicriteria evaluation of drones for use in naval warfare. In: Thomé, A.M.T., Barbastefano, R.G., Scavarda, L.F., dos Reis, J.C.G., Amorim, M.P.C. (eds.) IJCIEOM 2020. SPMS, vol. 337, pp. 381–393. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56920-4_31
7. Mingers, J., Rosenhead, J.: Problem structuring methods in action. Eur. J. Oper. Res. 152, 530–554 (2004)
8. Rosenhead, J.: Past, present and future of problem structuring methods. J. Oper. Res. Soc. 57, 759–765 (2006)
9. Costa, I.P.d.A., Moreira, M.Â.L., Costa, A.P.d.A., Teixeira, L.F.H.d.S., Gomes, C.F.S., Santos, M.D.: Strategic study for managing the portfolio of IT courses offered by a corporate training company: an approach in the light of the ELECTRE-MOr multicriteria hybrid method. Int. J. Inf. Technol. Decis. Making 21, 1–29 (2021). https://doi.org/10.1142/S0219622021500565
10. Cabrera, D., Cabrera, L.: Systems Thinking Made Simple: New Hope for Solving Wicked Problems. Odyssean Press, Ithaca (2015)
11. Haberfellner, R., De Weck, O., Fricke, E., Vössner, S.: Systems Engineering: Fundamentals and Applications. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-13431-0
12. Keeney, R.L.: Value-Focused Thinking: A Path to Creative Decisionmaking. Harvard University Press, London (1992)
13. Françozo, R., Belderrain, M.C.N., Bergiante, N., Pacheco, B.C.S., Piratelli, C.L.: Value-focused thinking na prática: análise do desenvolvimento e aplicações no período (2010–2018). In: LI Simpósio Brasileiro de Pesquisa Operacional, Limeira (2019)
14. Françozo, R., Belderrain, M.C.N.: A problem structuring method framework for value-focused thinking. EURO J. Decis. Process. 10, 100014 (2022). https://doi.org/10.1016/j.ejdp.2022.100014
15. Ulrich, W., Reynolds, M.: Critical systems heuristics: the idea and practice of boundary critique. In: Reynolds, M., Holwell, S. (eds.) Systems Approaches to Making Change: A Practical Guide, pp. 255–306. Springer, London (2020). https://doi.org/10.1007/978-1-4471-7472-1_6
16. Midgley, G.: An introduction to systems thinking for tackling wicked problems. Public lecture given at the University of Leicester, p. 29 (2014)
17. Ulrich, W.: A Primer to Critical Systems Heuristics for Action Researchers. Centre for Systems Studies, Hull (1996)
18. Midgley, G.: Complexity and philosophy: systems thinking, complexity and the philosophy of science. Emerg. Complex. Organ. 10, 55–73 (2008)
19. Midgley, G.: Systems thinking, complexity and the philosophy of science. Emerg. Complex. Organ. 10, 55–73 (2008)
20. Cabrera, D., Colosi, L., Lobdell, C.: Systems thinking. Eval. Program Plann. 31, 299–310 (2008)
21. Cabrera, D., Cabrera, L., Powers, E., Solin, J., Kushner, J.: Applying systems thinking models of organizational design and change in community operational research. Eur. J. Oper. Res. 268, 932–945 (2018)
22. Morais, D.C., Alencar, L.H., Costa, A.P.C.S., Keeney, R.L.: Using value-focused thinking in Brazil. Pesquisa Operacional 33, 73–88 (2013). https://doi.org/10.1590/S0101
23. Parnell, G.S., et al.: Invited review—survey of value-focused thinking: applications, research developments and areas for future research. J. Multi-Crit. Decis. Anal. 60, 49–60 (2013). https://doi.org/10.1002/mcda
24. Keeney, R.L.: Applying value-focused thinking. Milit. Oper. Res. 13, 7–17 (2008). https://doi.org/10.5711/morj.13.2.7
25. Logullo, Y., Bigogno-Costa, V., Silva, A.C.S., Belderrain, M.C.: A prioritization approach based on VFT and AHP for group decision making: a case study in the military operations. Prod. [online] 32 (2022). https://doi.org/10.1590/0103-6513.20210059
26. Parnell, G.S., Conley, H.W., Jackson, J.A., Lehmkuhl, L.J., Andrew, J.M.: Foundations 2025: a value model for evaluating future air and space forces. Manage. Sci. 44, 1336–1350 (1998)
27. Lessa, N.d.O.: Avaliação de Arquiteturas de Sistemas de Defesa Baseada no Conceito de Capacidade (2016)
28. Kossiakoff, A., Sweet, W.N., Seymour, S.J., Biemer, S.M.: Systems Engineering Principles and Practice. John Wiley & Sons (2011)
29. Johnston, D.: Capability Life Cycle Manual. Australian Defence Force (2020)
30. Gregory, A.J., Atkins, J.P., Midgley, G., Hodgson, A.M.: Stakeholder identification and engagement in problem structuring interventions. Eur. J. Oper. Res. 283, 321–340 (2020)
31. Ackermann, F., Eden, C.: Strategic management of stakeholders: theory and practice. Long Range Plan. 44, 179–196 (2011)
32. Franco, L.A., Montibeller, G.: Facilitated modelling in operational research. Eur. J. Oper. Res. 205, 489–500 (2010)
33. de Almeida, I.D.P., et al.: Study of the location of a second fleet for the Brazilian navy: structuring and mathematical modeling using SAPEVO-M and VIKOR methods. In: Rossit, D.A., Tohmé, F., Mejía Delgadillo, G. (eds.) ICPR-Americas 2020. CCIS, vol. 1408, pp. 113–124. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-76310-7_9
34. Haerem, T., Kuvaas, B., Bakken, B.T., Karlsen, T.: Do military decision makers behave as predicted by prospect theory? J. Behav. Decis. Making 24, 482–497 (2011). https://doi.org/10.1002/bdm.704
35. Gomes, C.F.S., Santos, M.d., Teixeira, L.F.H.d.S., Sanseverino, A.M.d.B., Barcelos, M.: SAPEVO-M: a group multicriteria ordinal ranking method. Pesquisa Operacional 40, 1–20 (2020). https://doi.org/10.1590/0101-7438.2020.040.00226524
36. Parnell, G.S., Driscoll, P.J., Henderson, D.L.: Decision Making in Systems Engineering and Management. John Wiley & Sons, Hoboken (2011)
An Application of Preference-Inspired Co-Evolutionary Algorithm to Sectorization

Elif Öztürk3,5, Pedro Rocha3, Filipe Sousa1, Margarida Lima2, Ana M. Rodrigues2,3(B), José Soeiro Ferreira3,4, Ana C. Nunes1,6, Cristina Lopes2, and Cristina Oliveira2

1 ISCTE - University Institute of Lisbon, Lisbon, Portugal
2 CEOS.PP, ISCAP, Porto, Portugal
3 INESC TEC - Technology and Science, Porto, Portugal
[email protected]
4 FEUP - Faculty of Engineering, University of Porto, Porto, Portugal
5 FEP.UP - Faculty of Economics, University of Porto, Porto, Portugal
6 CMAFcIO - Faculty of Sciences, University of Lisbon, Lisbon, Portugal
Abstract. Sectorization problems pose significant challenges arising from the many objectives that must be optimised simultaneously. Several methods exist to deal with these many-objective optimisation problems, but each has its limitations. This paper analyses an application of Preference-Inspired Co-Evolutionary Algorithms with goal vectors (PICEA-g) to sectorization problems. The method is tested on instances of different sizes and difficulty levels, and with various configurations of mutation rate and population size. The main purpose is to find the best configuration for PICEA-g to solve sectorization problems. Performance metrics are used to evaluate these configurations regarding the solutions' spread, convergence and diversity in the solution space. Several test trials showed that large and medium-sized instances perform better with low mutation rates and large population sizes; the opposite holds for the small instances.

Keywords: Sectorization problems · Co-Evolutionary Algorithms · Many-objective optimisation

1 Introduction
Sectorization, the division of a whole – region, network, area – into subsets, usually appears in real-life situations, such as school/health districting, maintenance operations, political districting or the design of sales territories. These are multi-objective problems, since it is common to wish for balanced, compact or connected sectors. Multi-objective optimisation problems (MOP) require optimising several conflicting objectives simultaneously. Various algorithms have been developed, and among the most well known are MOEA, MOGA, NSGA-II and SPEA, which
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
J. Machado et al. (Eds.): icieng 2022, LNME, pp. 257–268, 2022. https://doi.org/10.1007/978-3-031-09385-2_23
work well for up to 3 objectives (NSGA-II being the most adopted method to solve MOPs, also used by many authors to deal with sectorization and related problems [3,21]). However, with more than 3 objectives the performance of these algorithms degrades [7]. Problems containing four or more objectives are called many-objective problems (MaOP) and emerge as a particular case of MOPs. MaOPs pose harder challenges and require significantly more effort in the solution strategy [13]. The deterioration of the performance of MOP methods arises from the following difficulties when many objectives are included: the decreasing search capacity of Pareto dominance, the increasing complexity of the approximation to the Pareto Front (PF), and the complications in solution visualisation [7]. The evaluation of the solutions' fitness is done through an important concept known as Pareto dominance, which enables classifying solutions as dominated or non-dominated. However, in some situations the entire solution set may be non-dominated, collapsing into the same Pareto frontier. This reduces the search ability of Pareto dominance, which is already very challenging because the dimensionality of the objective space increases in proportion to the number of objectives. When this happens, the hyper-surface of the PF gets larger, and the number of solutions needed to approximate the entire PF increases extensively. Ultimately, decision-making becomes harder, as the visualisation of solutions is complicated by many objectives. Hence, the algorithms developed to solve MaOPs try to overcome these difficulties where MOP algorithms are weak. In the literature, multiple techniques deal with these challenges. The present paper overviews them and selects a Co-Evolutionary Algorithm to solve sectorization problems, called Preference-Inspired Co-Evolutionary Algorithm with goal vectors (PICEA-g). This method uses preferences to lead the solutions in the solution space, to obtain more desirable solutions and facilitate decision-making. Since sectorization problems can involve many conflicting objectives, PICEA-g emerged as a promising exploration path. The current work contains the first application of the method to sectorization problems and offers preliminary results on its performance. Therefore, it constitutes a relevant contribution to the sectorization literature. The remainder of the paper is structured as follows. Section 2 presents the literature review of methods proposed to cope with MaOPs. Section 3 includes the framework of PICEA-g as well as the genetic operators and objectives selected. Section 4 shows the results and critically discusses PICEA-g performance on different instances and configurations. The conclusions are in Sect. 5.
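The Pareto-dominance test that underlies the fitness evaluation discussed above can be stated in a few lines. The following is a minimal illustrative sketch (our own naming, not the authors' code), assuming all objectives are minimised:

```python
def dominates(a, b):
    """True when objective vector a Pareto-dominates b (minimisation):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep only the solutions not dominated by any other."""
    return [a for a in solutions
            if not any(dominates(b, a) for b in solutions if b is not a)]
```

For example, among the objective vectors (1, 2), (2, 1) and (2, 2), the first two are mutually non-dominated while the third is dominated by both.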
2 Literature Review
Some techniques proposed in the literature enhance the performance of well-known MOP algorithms when many objectives are in question. For instance, to improve the Pareto-dominance evaluation, the following modifications are presented: (i) the use of modified dominance instead of Pareto dominance to reduce the number of non-dominated solutions, such as ε-dominance [5], α-domination
[6], (ii) the introduction of different ranks for non-dominated solutions to create higher selection pressure towards the PF [8], and (iii) the use of performance evaluation mechanisms other than Pareto dominance. These mechanisms fit into two main groups: indicator-based and scalarising function-based. The former uses an indicator function to measure the quality of the solutions. The best-known indicator-based algorithm is Hypervolume Estimation (HypE) [1]. HypE employs Monte-Carlo simulation to estimate the hypervolume values used as an indicator to evaluate the solutions. The latter, on the other hand, evaluates the fitness through a scalarising function, such as the weighted sum or weighted Tchebycheff (Chebyshev). The most well-known scalarising function-based algorithm is the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D) [22], which decomposes the problem into scalar sub-problems and optimises them together. Moreover, NSGA-III can be included in the same category [2], where predefined reference points are distributed over the objective space to keep a diverse solution set and help it converge. A proposed alternative to improve the PF approximation uses preference-based procedures [4,17]. Prior integration of the preferences into the algorithm reduces the search in the objective space by concentrating on a more representative sub-space, increasing the chance of finding improved solutions. In practice, the concept of simultaneously evolving the candidate solutions with respect to the preferences is known as preference-inspired algorithms. The preference points used in these algorithms are randomly generated and are only used to increase the selection pressure on the candidate solutions [15]. Purshouse et al. [14] presented preference-inspired evolutionary algorithms that used target objective vectors as preference solutions (PICEA). The candidate solutions are then co-evolved according to their dominance over the preference solutions.
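The two scalarising functions mentioned in this section (weighted sum and weighted Tchebycheff, as used e.g. in MOEA/D) reduce an objective vector to a single value. A minimal sketch, with illustrative weights and ideal point (not taken from any cited implementation):

```python
def weighted_sum(f, w):
    """Weighted-sum scalarisation of an objective vector f."""
    return sum(wi * fi for wi, fi in zip(w, f))

def weighted_tchebycheff(f, w, z):
    """Weighted Tchebycheff scalarisation: the worst weighted deviation
    of f from the ideal (reference) point z."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))
```

For example, with weights (0.5, 0.5) and ideal point (1, 1), the vector (3, 5) scores 4.0 under the weighted sum and 2.0 under the weighted Tchebycheff.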
Soon after, Wang et al. [18] proposed the idea of goal vectors to lead the candidate solutions, calling it a preference-inspired evolutionary algorithm with goals (PICEA-g). If a candidate solution dominates more goal vectors (while fewer candidate solutions dominate those goal vectors), it has a higher chance of proceeding to the next generation [10]. PICEA-g was compared with several MaOP algorithms, and its superiority is shown in [18]. In the literature, several applications of PICEA-g addressing real-world problems can be found, showing that it may be modified and adapted for a new application to a specific problem, as seen in the dynamic districting and routing problem by Lei et al. [9]. The authors implemented the method with a minor modification, mating neighbouring solutions to improve the offspring during the co-evolution. Moreover, Long et al. [11] implemented PICEA-g for the multi-period location-routing problem, integrating the Tchebycheff method to decompose the objective space while improving the diversity of the solutions. It is also possible to find papers on workflow scheduling [10,12] with modifications of the original PICEA-g appropriate to their specific problems. In all these applications, PICEA-g gave promising results and exemplary performance on MaOPs. For this reason, this paper focuses on this method to tackle current challenges in sectorization problems.
3 Approximation Method: PICEA-g
This section explains the PICEA-g method step by step, following procedures and equations based on Wang et al. [18]. The generation of the full PF is a challenging problem due to existing limitations (convergence efficiency and computational cost), so an adequate representation is desired. A good representation of the PF requires a sufficient number of solutions that provide ample coverage across its length while in close proximity to it, yet maintaining a certain degree of dispersion among the solutions. When these requirements are not met, as in Fig. 1, the representation quality of the PF is significantly reduced. In Fig. 1a the solutions are shown as not converging to the PF, where close proximity is desired. Alternatively, Fig. 1b shows a situation where the solutions converged to a specific region, very close to the PF, but all very close to one another. This leads to almost redundant solutions, with very small distinctions between adjacent solutions, where a higher distinction, or separation, between adjacent solutions is desired. Finally, in Fig. 1c the solutions are in close proximity to the PF and dispersed enough so that few redundant solutions exist, but there is insufficient coverage of the full PF, as seen by the empty gaps along it. These complications arise due to, among other factors, the maximum size of the population, the total number of generations, and the guiding procedure that pushes the solutions into new unexplored regions of the solution space. Tweaking the population size and number of generations will not overcome these challenges, due to heavy computational constraints. An efficient approach to guide the solutions can dramatically improve the convergence efficiency to the PF, generating better quality solutions and reducing the cost of dealing with trade-offs between solution quality, solution variety, and computational cost.
This convergence process can be improved using the PICEA-g algorithm, which produces goal vectors that guide the generation of new solutions into different regions of the solution space, closer to the PF, and into its regions without solution representation. PICEA-g uses the evolution process that is common to all Evolutionary Algorithms: it starts with an initial solution population of size N defined by S(t) (where t is the generation index), and through crossover and mutation operations produces offspring Sc(t) that are evaluated using the fitness
(a) No convergence
(b) No dispersion
(c) No representation
Fig. 1. The different problems that can appear in the PF convergence
function Fs (Eq. 1, using the goal vectors of the PICEA-g method), which then allows filtering the full population (parents S and offspring Sc) through a truncation operation, producing a new solution population of size N, defined as S(t + 1). This cycle is repeated until a stopping criterion is met, usually a given number of generations. A diagram of this process is shown in Fig. 2.

F_s = 0 + \sum_{g \in G \cup G_c \mid s \preceq g} \frac{1}{n_g}    (1)

The PICEA-g algorithm generates new goal vectors, defined as Gc(t), using random selection. The new goal vectors Gc are added to the population of existing goal vectors G, evaluated using the fitness function Fg (Eq. 2) and truncated considering their fitness and the population size. The fitness evaluation procedure follows a dominance metric where the solution population is compared against the preference population, and vice versa. This process happens in the Evaluation block of Fig. 2.

F_g = \begin{cases} 1, & n_g = 0 \\ \frac{1}{1+\alpha}, & \text{otherwise} \end{cases}, \qquad \alpha = \frac{n_g - 1}{2N - 1}    (2)

Depending on the goal vectors' distance to the ideal PF and their location in the solution space, their usefulness as a means of comparing different solutions varies significantly. As specified in [19], goal vectors closer to the PF help the convergence of solutions to it, and depending on the solution-space region where the goal vectors are located, they push the solutions to cover new sections of the PF. Using a random generation process for the goal vectors is a reasonable starting approach (since it does not have any specific optimisation that facilitates the convergence of the solution population to the PF), producing a baseline performance against which improvements in the goal-vector generation process can be compared. The fitness of a given preference (Fg), given by the expression in Eq. 2, attributes a fitness value to each one of the goal vectors.
Fig. 2. Solution and preference population evolution process in PICEA-g.
The parameters N and ng represent, respectively, the maximum population size and the number of solutions that satisfy preference g. By penalising the goal vectors that are satisfied by a large number of solutions, this process creates an incentive to eliminate them from the population. Goals that are satisfied (dominated) by fewer solutions are given a high fitness value and remain in the solution space, pushing the generation of new solutions that try to satisfy (dominate) them. The fitness of a given solution (Fs) depends on the aggregated quality of all the individual preferences g it satisfies. This fitness expression, seen in Eq. 1, states that the fitness of a given solution s is the sum of the relative quality of each preference it satisfies (as 1/ng, where ng is the number of solutions that satisfy preference g). If s does not satisfy any g, its fitness value is 0. A preference g that is satisfied by many solutions provides a very small contribution to the individual fitness of each one, but a preference g satisfied by only a few contributes significantly more. This creates the incentive to generate solutions outside the solution-space region where most other solutions are concentrated. This constant interaction between the two populations (solutions and preferences) is the basis of the PICEA-g approach.

3.1 Genetic Operators
PICEA-g is based on evolutionary algorithms composed of specific operators, such as an encoding scheme, crossover and mutation. For the encoding scheme, the solutions were encoded using a 'matrix form binary grouping' (MFBG) genetic encoding system. MFBG is a binary matrix whose rows represent the basic units and whose columns represent the sectors. The feasibility of MFBG requires that: (i) a basic unit cannot be assigned to more than one sector, and (ii) each sector must have at least one basic unit and cannot be empty. Thus, the sum of each row must be one, and the sum of each column must be one or more. The crossover operator is a multi-point crossover, which randomly selects multiple rows from the two parent solutions and switches them to generate two offspring solutions. This eliminates the need to set a crossover probability, due to the advantageous design of the genetic encoding system used. Although simple, this method also improves the diversity of the following generations thanks to the random selection of multiple points. The mutation operator is applied to every offspring with a given probability, in order to introduce some randomness into the population. The mutation is implemented row by row over the whole chromosome, reassigning a basic unit to another sector when the drawn mutation value exceeds the defined probability threshold.
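A minimal sketch of the MFBG encoding and the two operators follows (an illustrative implementation of our own, not the authors' code; it guarantees the row constraint, while the column constraint would need a repair step the paper does not detail):

```python
import random

def random_solution(n_units, n_sectors, rng=random):
    """Build a feasible MFBG matrix: row i holds a single 1 marking the
    sector of basic unit i; every sector receives at least one unit
    (assumes n_units >= n_sectors)."""
    sol = [[0] * n_sectors for _ in range(n_units)]
    units = list(range(n_units))
    rng.shuffle(units)
    for j in range(n_sectors):          # one unit per sector first
        sol[units[j]][j] = 1
    for i in units[n_sectors:]:         # remaining units go anywhere
        sol[i][rng.randrange(n_sectors)] = 1
    return sol

def crossover(p1, p2, rng=random):
    """Multi-point row crossover: randomly chosen rows are swapped
    between the two parents, producing two offspring."""
    c1 = [row[:] for row in p1]
    c2 = [row[:] for row in p2]
    for i in range(len(p1)):
        if rng.random() < 0.5:
            c1[i], c2[i] = p2[i][:], p1[i][:]
    return c1, c2

def mutate(sol, p_mut, rng=random):
    """Row-by-row mutation: with probability p_mut a basic unit is
    reassigned to a (possibly different) random sector."""
    n_sectors = len(sol[0])
    for row in sol:
        if rng.random() < p_mut:
            row[row.index(1)] = 0
            row[rng.randrange(n_sectors)] = 1
    return sol
```

Both operators preserve the one-sector-per-unit constraint (each row still sums to one), but either may leave a sector empty, which is why a feasibility repair is usually needed.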
3.2 Objectives
Three objectives commonly used in sectorization problems are considered in the solution method: Equilibrium, Compactness and Contiguity. Equilibrium is the balance between sectors. This indicator can refer to multiple sector characteristics, such as balance in demand, balance in workload or working
hours, etc. In the current approach, equilibrium was defined as the deviation from the mean, adopted from [16], and is shown in Eq. 3 and Eq. 4.

\bar{q} = \frac{\sum_{j=1}^{J} \sum_{i} x_{ij} \times y_i}{J}    (3)

In Eq. 3, \bar{q} represents the mean demand of the sectors, where J is the total number of sectors, x_{ij} indicates whether basic unit i is assigned to sector j, and y_i is its corresponding demand. Equation 4 refers to the deviation from the mean demand of the sectors. Here, q_j is the total demand of the basic units in sector j. A better equilibrium level corresponds to a smaller standard deviation from the mean.

std_{eq} = \sqrt{\frac{1}{J-1} \sum_{j=1}^{J} (q_j - \bar{q})^2}    (4)

Compactness refers to the density of each sector. It is quite a relevant objective for most sectorization problems, especially those further concerned with routing or travelling. Equation 5 shows the mathematical representation of compactness in this study. Here, d gives the compactness level of the chromosome, or solution. It is the sum of the distances between the centroid o_j and the point p_j furthest from the centroid in each sector j. The smaller d gets, the more compact the sectors are.

d = \sum_{j=1}^{J} \mathrm{dist}(o_j, p_j)    (5)
Contiguity indicates the connectivity of a sector. It is a common objective in sectorization problems that evaluates the flexibility of moving from one basic unit to another within the sectors. The measure for this objective is also adopted from [16]. The authors represent contiguity through a square matrix of size equal to the number of basic units, setting the value 1 for all feasible paths between two nodes in the same sector, and 0 otherwise. Equation 6 represents the contiguity of sector j. Here, the numerator is the sum of the links in a sector, and the denominator is the maximum number of links if all the basic units are linked. Thus, c_j takes the value 1 when the sector is fully connected.

c_j = \frac{\sum_{i=1}^{i_j} \sum_{w=1}^{i_j} m^j_{wi}}{i_j (i_j - 1)}, \qquad m^j_{wi} = \begin{cases} 1 & \text{if there is a path between } w \text{ and } i \text{ in sector } j \\ 0 & \text{otherwise} \end{cases}    (6)

The contiguity of the chromosome (\bar{c}) is calculated by the formula in Eq. 7. Here, the numerator is the sum of the sector-based contiguities calculated in Eq. 6, weighted by the sector sizes i_j, and the denominator is the total number of basic units I. The value \bar{c} varies between 0 and 1, 1 being the best and 0 the worst. In order to evaluate all objectives as minimisation, it is refactored as (1 - \bar{c}), which also switches the best and worst limit values.

\bar{c} = \frac{\sum_{j=1}^{J} c_j \, i_j}{I}    (7)
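The three objectives above can be computed as in the following sketch. The assignment is given as a unit-to-sector mapping; the inputs `demand`, `coords` (2-D coordinates) and `edges` (link list of the instance graph) are illustrative assumptions of ours, as the paper does not fix these data structures:

```python
import math
from collections import defaultdict

def equilibrium(assign, demand, J):
    """Eqs. 3-4: standard deviation of sector demands around the mean."""
    q = [0.0] * J
    for i, j in enumerate(assign):
        q[j] += demand[i]
    q_bar = sum(q) / J
    return math.sqrt(sum((qj - q_bar) ** 2 for qj in q) / (J - 1))

def compactness(assign, coords):
    """Eq. 5: sum over sectors of the distance from the centroid to the
    basic unit furthest from it; smaller means more compact."""
    sectors = defaultdict(list)
    for i, j in enumerate(assign):
        sectors[j].append(coords[i])
    total = 0.0
    for pts in sectors.values():
        ox = sum(p[0] for p in pts) / len(pts)
        oy = sum(p[1] for p in pts) / len(pts)
        total += max(math.hypot(p[0] - ox, p[1] - oy) for p in pts)
    return total

def contiguity(assign, edges):
    """Eqs. 6-7, returned as 1 - c_bar so all objectives are minimised."""
    sectors = defaultdict(set)
    for i, j in enumerate(assign):
        sectors[j].add(i)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    I, total = len(assign), 0.0
    for units in sectors.values():
        comp = {}                        # label connected components (DFS)
        for u in units:
            if u in comp:
                continue
            stack, comp[u] = [u], u
            while stack:
                x = stack.pop()
                for y in adj[x]:
                    if y in units and y not in comp:
                        comp[y] = u
                        stack.append(y)
        sizes = defaultdict(int)
        for u in units:
            sizes[comp[u]] += 1
        linked = sum(s * (s - 1) for s in sizes.values())  # ordered pairs with a path
        ij = len(units)
        cj = 1.0 if ij <= 1 else linked / (ij * (ij - 1))
        total += cj * ij
    return 1.0 - total / I
```

With these definitions, a fully connected sectorization yields a contiguity value of 0 (the best) and a fully disconnected one tends towards 1.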
4 Results and Discussion
The performance of PICEA-g is tested using the 3 specified objectives over 10 sectorization instances, each with a different size and difficulty. These instances are generated following a gamma distribution, where shape and scale parameters are created randomly for each basic unit produced. The demand of each basic unit is created following a uniform distribution. Finally, graph connectivity is taken into account when the links between the basic units are generated. The aim is to find a pattern that shows the best parameter composition for each instance type. To that end, 3 performance metrics are used, namely Error Ratio (ER), Inverted Generational Distance (IGD) and Spacing (S). ER is a cardinality metric and measures the proportion of the solutions on the approximation PF over the population size. IGD is a convergence and distribution metric: it estimates the distance of the approximation PF from the reference PF and the distribution of the solutions in the objective space. Finally, S is a spread metric that measures the deviation of the distances between the solutions on the approximation PF. All these metrics are considered better when they have lower values. They were adopted from Yen and He [20]. Table 1 shows the instances' characteristics. The selected configuration parameters consisted of combinations of mutation rate (0.00, 0.02, 0.04, 0.06, 0.08 and 0.10) and population size (50, 60, 70, 80, 90, 100), evolved for a total of 1000 generations. Name refers to the gamma (γ) instance set. Nodes is the number of basic units to be sectorized, while Sectors is the number of sectors. Each instance was tested with every combination of population size and mutation rate through PICEA-g, each combination being run 20 times. In total, every instance is tested 720 times: 36 configurations with 20 trials each.

It is possible to obtain the instances and observe the results for all instances from the following link: https://drive.inesctec.pt/s/EQn6yCD3jdap3TW. According to these results, large and middle-size instances performed better on the selected performance metrics when the mutation rate was lower, whereas smaller instances produced better results with a higher mutation rate. This behaviour is not unexpected, since the solution set of a large instance shows a higher degree of population diversity compared to smaller ones and manages to converge to the PF successfully when no significant disturbances (mutations) are present. With a high mutation probability, this convergence slows or even stops. On the other hand, in the smaller instances, the diversity

Table 1. Instances characteristics
It is possible to obtain the instances and observe the results for all instances from the following link: https://drive.inesctec.pt/s/EQn6yCD3jdap3TW. According to the results presented in the link, large and middle-size instances performed better regarding selected performance metrics when the mutation rate was lower. However, smaller instances produced better results with higher mutation rate. This behaviour is not unexpected, since the large instance solutions set shows a higher degree of population diversity compared to the smaller ones and manages to converge to the PF successfully when no significant disturbances (mutations) are present. With high mutation probability, this convergence reduces or even stops. On the other hand, in the smaller instances, the diversity Table 1. Instances characteristics N ame
γ2
N odes
690 56 873 432 102 288 204 528 350 1000
Sectors 30
γ3 γ8 5
30
γ9 10
γ10 γ11 γ14 γ18 γ28 γ49 10
10
10
30
10
30
in the initial solution set is more constrained. In this situation, a higher mutation probability compensates for this limitation, creating disturbances that increase the chances of finding better solutions. In the remainder of the section, we focus on the performance of two instances as representative examples of our experiments. Figures 3a and 3b show the performance of two instances, γ8 and γ3 respectively, on the different metrics. We selected these instances to show how PICEA-g behaves on large and small instances. The bars show the mean value of the performance metrics over 20 runs, and the lines are the standard deviations from the mean for each configuration.
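The three metrics introduced above can be implemented compactly. A sketch following their usual definitions (the paper's exact variants, taken from Yen and He [20], may differ in detail):

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def error_ratio(approx, reference):
    """ER: fraction of approximation-front points not on the reference PF."""
    ref = set(reference)
    return sum(p not in ref for p in approx) / len(approx)

def igd(approx, reference):
    """IGD: mean distance from each reference point to its nearest
    approximation point (convergence and distribution)."""
    return sum(min(_dist(r, a) for a in approx) for r in reference) / len(reference)

def spacing(approx):
    """S: standard deviation of nearest-neighbour distances (spread)."""
    n = len(approx)
    d = [min(_dist(approx[i], approx[k]) for k in range(n) if k != i)
         for i in range(n)]
    d_bar = sum(d) / n
    return math.sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))
```

For all three, lower values indicate a better approximation front, matching the convention used in the discussion above.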
(a) 873 nodes
(b) 56 nodes
Fig. 3. The performance of two instances for selected performance metrics
Figure 3a shows that when the mutation rate is higher, the IGD and S performance gets worse for the same population size, although ER remains in the same range. Thus, the solutions' convergence, distribution and spread are better with the lower mutation rates applied to the algorithm. On the other hand, Fig. 3b shows that higher mutation rates improve the IGD performance while worsening S; ER again appears to lie in similar ranges for the different configurations. Since IGD is a convergence and distribution metric, it is possible to say that the convergence of the solutions is better with a higher mutation probability, although the spread of the solutions on the PF decreases. This result shows that the solutions may converge well but not cover the objective space sufficiently.
Moreover, Figs. 4 and 5 show the performance of the selected metrics plotted against each other. The Pareto-dominance concept is used for this comparison: after the performance metrics are measured for each configuration and run, the configurations are placed on Pareto frontiers according to their dominance with respect to the three selected metrics. As can be seen, for γ3 with 56 nodes the sequence of Pareto frontiers shows that the solutions perform better with a higher mutation probability. On the other hand, a lower mutation probability performs better in γ8 with 873 nodes. Unfortunately, the produced results were not sufficient to clearly identify the best parameters for each instance type, requiring further research. The algorithm is implemented in Python 3.9.6, and the method was executed on a PC with an Intel Xeon Gold 6148 @ 2.4 GHz (20 cores, 40 threads), 96 GB RAM and a Windows x64 operating system. The computation time of the PICEA-g algorithm was 3.5 s per generation for a population size of 100.
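The layering of configurations into successive Pareto frontiers used for Figs. 4 and 5 can be sketched as follows (a minimal illustration of our own, restating the dominance test for self-containment; all metrics minimised):

```python
def dominates(a, b):
    # minimisation: no worse in every metric, strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontiers(points):
    """Peel points into successive non-dominated layers: front 0 holds
    the non-dominated points, front 1 those dominated only by front 0,
    and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

Each configuration's (ER, IGD, S) triple would be one point; the index of the frontier it lands on gives its rank in the comparison.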
(a) Population size 60
(b) Population size 100
Fig. 4. The performance of selected metrics over each other: case of γ3
(a) Population size 60
(b) Population size 100
Fig. 5. The performance of selected metrics over each other: case of γ8
5 Conclusion
This work presented the first application of PICEA-g to sectorization problems. The current configuration used three performance metrics for the evaluation: ER, IGD and S. The instances were selected based on their small, medium and large sizes (number of basic units) and distinct features (connectivity and number of sectors), producing different difficulty levels. In order to estimate the performance of PICEA-g on different sectorization problems, a combination of diverse configuration parameters was selected (mutation rates, maximum population sizes, total number of generations). According to the preliminary results obtained, smaller instances show better performance when higher mutation rates are selected, while larger instances show better performance with lower mutation rates. A possible explanation for this behaviour is that the low degree of distinction between solutions (individuals) in smaller instances leads them to become stuck in a local minimum, which requires a significant amount of disturbance (mutation) to escape. On the other hand, the highly distinct solutions in larger instances have difficulty converging to a neighbouring local minimum if the mutation rate is very high, showing better performance with a smaller rate of disturbance. Although these are preliminary results, the experiments revealed that PICEA-g is a promising method to solve sectorization and related problems. More precise insights require further analysis with different configuration parameters and instance types. Thus, future work consists of improving the parametrisation of the method depending on the instance type, improving performance, and comparing with other methods usually referenced in the sectorization literature.

Acknowledgements.
This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project 'POCI-01-0145-FEDER-031671'.
References

1. Bader, J., Zitzler, E.: HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol. Comput. 19(1), 45–76 (2011)
2. Deb, K., Jain, H.: An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans. Evol. Comput. 18, 577–601 (2013)
3. Farughi, H., Mostafayi, S., Arkat, J.: Healthcare districting optimization using gray wolf optimizer and ant lion optimizer algorithms. J. Optim. Ind. Eng. 12(1), 119–131 (2019)
4. Fonseca, C.M., Fleming, P.J.: Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans. Syst. Man Cybern.-Part A: Syst. Hum. 28, 26–37 (1998)
5. Hernández-Díaz, A.G., Santana-Quintero, L.V., Coello, C.A.C., Molina, J.: Pareto-adaptive ε-dominance. Evol. Comput. 15(4), 493–517 (2007)
6. Ikeda, K., Kita, H., Kobayashi, S.: Failure of Pareto-based MOEAs: does non-dominated really mean near to optimal? In: Proceedings of the 2001 Congress on Evolutionary Computation, vol. 2, pp. 957–962. IEEE (2001)
7. Ishibuchi, H., Tsukamoto, N., Nojima, Y.: Evolutionary many-objective optimization: a short review. In: 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 2419–2426. IEEE (2008)
8. Kukkonen, S., Lampinen, J.: Ranking-dominance and many-objective optimization. In: Congress on Evolutionary Computation, pp. 3983–3990. IEEE (2007)
9. Lei, H., Wang, R., Laporte, G.: Solving a multi-objective dynamic stochastic districting and routing problem with a co-evolutionary algorithm. Comput. Oper. Res. 67, 12–24 (2016)
10. Lei, H., Wang, R., Zhang, T., Liu, Y., Zha, Y.: A multi-objective co-evolutionary algorithm for energy-efficient scheduling on a green data center. Comput. Oper. Res. 75, 103–117 (2016)
11. Long, S., Zhang, D., Liang, Y., Li, S., Chen, W.: Robust optimization of the multi-objective multi-period location-routing problem for epidemic logistics system with uncertain demand. IEEE Access 9, 151912–151930 (2021)
12. Paknejad, P., Khorsand, R., Ramezanpour, M.: Chaotic improved PICEA-g-based multi-objective optimization for workflow scheduling in cloud environment. Futur. Gener. Comput. Syst. 117, 12–28 (2021)
13. Purshouse, R.C., Fleming, P.J.: Evolutionary many-objective optimisation: an exploratory analysis. In: The 2003 Congress on Evolutionary Computation, CEC 2003, vol. 3, pp. 2066–2073. IEEE (2003)
14. Purshouse, R.C., Jalbă, C., Fleming, P.J.: Preference-driven co-evolutionary algorithms show promise for many-objective optimisation. In: Takahashi, R.H.C., Deb, K., Wanner, E.F., Greco, S. (eds.) EMO 2011. LNCS, vol. 6576, pp. 136–150. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19893-9_10
15. Qiu, Q., Yu, W., Wang, L., Chen, H., Pan, X.: Preference-inspired coevolutionary algorithm based on differentiated resource allocation strategy. IEEE Access 8, 205798–205813 (2020)
16. Rodrigues, A.M., Ferreira, J.S.: Measures in sectorization problems. In: Barbosa-Póvoa, A.P.F.D., de Miranda, J.L. (eds.) Operations Research and Big Data. SBD, vol. 15, pp. 203–211. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24154-8_24
17. Thiele, L., Miettinen, K., Korhonen, P.J., Molina, J.: A preference-based evolutionary algorithm for multi-objective optimization. Evol. Comput. 17(3), 411–436 (2009)
18. Wang, R., Purshouse, R.C., Fleming, P.J.: Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans. Evol. Comput. 17(4), 474–494 (2012)
19. Wang, R., Purshouse, R.C., Fleming, P.J.: Preference-inspired co-evolutionary algorithm using adaptively generated goal vectors. In: 2013 IEEE Congress on Evolutionary Computation, pp. 916–923 (2013)
20. Yen, G.G., He, Z.: Performance metric ensemble for multiobjective evolutionary algorithms. IEEE Trans. Evol. Comput. 18, 131–144 (2014)
21. Zhang, K., Yan, H., Zeng, H., Xin, K., Tao, T.: A practical multi-objective optimization sectorization method for water distribution network. Sci. Total Environ. 656, 1401–1412 (2019)
22. Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11, 712–731 (2007)
Unstable Systems as a Challenging Benchmark for Control Engineering Students

Frantisek Gazdos and Zdenek Charous

Faculty of Applied Informatics, Tomas Bata University in Zlin, Nam. T.G. Masaryka 5555, 760 01 Zlin, Czech Republic
[email protected]
Abstract. The paper highlights the importance of using unstable models in the process of training experts in the field of control engineering, to gain a better understanding of the underlying problems they pose. For this purpose, one particular model, represented by the two-wheeled unstable transporter Inteco, is presented briefly and tested on the basic stabilization problem. Classical PID and LQ control approaches are utilized in this paper, including also the derivation of a simplified non-linear model of the system and its linearization for control purposes. Some of the experimental results are presented and discussed, followed by an overall evaluation of the model and its operation. Consequently, this paper can serve as an inspiration for other departments training experts in control systems design and similar engineering fields.

Keywords: Control engineering · Inteco · LQ control · Modelling · PID control · Two-wheeled transporter · Unstable systems
1 Introduction

Unstable processes are a natural part of our lives and they appear in many engineering applications, including e.g. various types of combustion systems, reactors or distillation columns. Some of them are naturally unstable, others are designed unstable purposely, to gain better maneuverability and increase the speed of command responses [1–3]. Such systems, however, need special attention and effort from control engineers to provide safe control solutions. The designers have to understand the basic limitations resulting from the system instability – if they fail, the consequences may be catastrophic [4–7]. Therefore, prospective control engineers have to be systematically prepared for such challenging tasks. During their studies it is advisable to provide them with real-time platforms where they can study and test such types of systems safely. Scaled-down real-time laboratory models have proved to be useful for such purposes [8–10]. Although they are smaller and simpler than their real industrial counterparts, they offer enough possibilities to train basic practical skills and understanding related to the control of various types of systems, including unstable ones. Typical models from the unstable domain usually include various levitation systems (mostly magnetic), ball & beam or ball & plate models, inverted
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): ICIENG 2022, LNME, pp. 269–279, 2022. https://doi.org/10.1007/978-3-031-09385-2_24
pendulums in various configurations, miscellaneous rotor systems, two-wheeled vehicles of different designs, etc. [10]. This paper briefly investigates the possibilities of one such model, represented by the two-wheeled transporter by Inteco [11, 12]. Control of two-wheeled vehicles is of great interest in the control engineering community; e.g. the survey paper [13] can serve as a comprehensive introduction to the topic of modelling and control of two-wheeled robots in general, while more recent results in this research area are represented e.g. by the references [14–19]. This paper is devoted to one specific implementation of two-wheeled unstable transporters designed by Inteco. One of the goals is to test it for the purpose of including it in the education process of prospective control engineers, to raise their awareness of the problems related to controlling unstable processes. In this work the basic task of stabilization is studied using two popular approaches, namely classical PID control and the LQ control approach. The first part introduces the tested real-time model together with its simplified mathematical model based on the Euler-Lagrange equations, which is further linearized at the unstable equilibrium point (upper position) for control system design purposes. The next section is devoted to testing the two common control approaches – PID and LQ control – on the basic task of stabilization; some of the obtained experimental results are presented and discussed there. The last section is dedicated to a summary of experiences with this model, inspiration for further work and some concluding remarks.
2 Two-Wheeled Unstable Transporter Inteco

2.1 General Description

The two-wheeled unstable transporter utilized in this work and presented in Fig. 1 is a product of the Polish company INTECO [11]. It is a self-balancing mobile autonomous system, to some extent similar to the inverted pendulum, i.e. it is unstable and nonlinear in nature. Stabilization in the upright position can be achieved by controlling the torques of the two wheels, driven by DC motors using PWM-type signals. The controller algorithm uses measurements from an Inertial Measurement Unit (IMU), including a gyroscope and accelerometer, and from encoders. The transporter is battery-powered and uses wireless communication to run control algorithms designed and prepared in the real-time MATLAB/Simulink environment [20]. Besides the basic control task related to stabilization of the vehicle, more complex tasks, including trajectory tracking, can also be realized. For this purpose a state observer based on the Kalman filter is used to estimate the state variables in real time, including: the average angle of rotation of the wheels, the yaw angle from the vertical axis, the angle of rotation around the vertical axis and their derivatives [11, 21, 22].

2.2 Mathematical Model

Let us assume the scheme of the transporter according to Fig. 2 below [21, 22], with the parameters (either measured, calculated or identified) defined in Table 1 and in the following text. Further, for the purposes of deriving the dynamics model using the Euler-Lagrange equations, let us define the generalized coordinates of the system as follows:
Fig. 1. Inteco two-wheeled unstable transporter [11, 21]
Fig. 2. Two-wheeled unstable transporter – coordinates and forces [21]
• θ – the average angle of rotation of the wheels, expressed as θ = (θr + θl)/2, with its derivative equal to the angular speed,
• ψ – the yaw angle from the vertical axis,
• φ – the angle of rotation around the vertical axis, expressed as φ = R(θr − θl)/W,

where θr and θl denote the angles of rotation of the right and left wheel respectively and can be expressed as θr = θ + φW/(2R), θl = θ − φW/(2R). Although a different choice of the generalized variables is also possible, the selection suggested above simplifies the resultant equations without a loss of generality of the model and is more illustrative and understandable [21, 22]. If we choose the state variables as

$$ x = [x_1, x_2, x_3, x_4, x_5, x_6]^T = [\theta, \dot{\theta}, \psi, \dot{\psi}, \varphi, \dot{\varphi}]^T, \tag{1} $$
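The wheel-angle relations above invert each other exactly; a minimal numeric check (Python used here for illustration, although the paper itself works in MATLAB/Simulink; R = 0.075 m follows from 2R = 0.15 m in Table 1):

```python
# Relations between the wheel angles (theta_r, theta_l) and the generalized
# coordinates theta and phi defined above; R and W are taken from Table 1.
R = 0.075  # wheel radius [m] (half of the 0.15 m diameter)
W = 0.4    # width of the vehicle [m]

def to_generalized(theta_r, theta_l):
    """theta = (theta_r + theta_l)/2, phi = R*(theta_r - theta_l)/W."""
    return (theta_r + theta_l) / 2.0, R * (theta_r - theta_l) / W

def to_wheel_angles(theta, phi):
    """Inverse map: theta_r = theta + phi*W/(2R), theta_l = theta - phi*W/(2R)."""
    return theta + phi * W / (2.0 * R), theta - phi * W / (2.0 * R)
```

A round trip through both maps recovers the individual wheel angles, which is why this choice of coordinates loses no information about the wheel motion.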
Table 1. Parameters of the transporter [21].

m = 0.32 kg                     Weight of the wheel
2R = 0.15 m                     Diameter of the wheel
Jw = mR² = 0.0013 kg·m²         Moment of inertia of the wheel
M = 5.41 kg                     Weight of the vehicle
W = 0.4 m                       Width of the vehicle
L = 0.102 m                     Height of the mass center of the vehicle
Jψ = mL²/3 = 0.104 kg·m²        Moment of inertia of the vehicle tilt axis
Jφ = 0.0484 kg·m²               Moment of inertia of the vehicle related to the axis of rotation
Jm = 0.00119 kg·m²              Moment of inertia of the DC motor & gearbox including gearbox ratio
Rm = 1 Ω                        Resistance of the DC motor winding
kt = 0.025 N·m/A                Torque constant of the DC motor
ke = 0.025 V·s/rad              Voltage constant of the DC motor
fm = 0.00024                    Friction coefficient between the vehicle and DC motor
then, after the derivation, the resultant non-linear state-space model has the following form [21, 22]:

$$
\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= \frac{d\,(F_{x_1}+c) - b\,(F_{x_3}+e)}{ad - b^2} \\
\dot{x}_3 &= x_4 \\
\dot{x}_4 &= \frac{a\,(F_{x_3}+e) - b\,(F_{x_1}+c)}{ad - b^2} \\
\dot{x}_5 &= x_6 \\
\dot{x}_6 &= \frac{2F_{x_5} - 2ML^2 x_6 x_4 \sin x_3 \cos x_3}{\tfrac{1}{2}mW^2 + ML^2\sin^2 x_3 + \tfrac{W^2}{2R^2}(J_w + J_m) + J_\varphi}
\end{aligned}
\tag{2}
$$

with the parameters a, b, c, d, e calculated as (here g denotes the gravity constant):

$$
\begin{aligned}
a &= (2m + M)R^2 + 2J_w + 2J_m, \\
b &= f_1(x_3) = MRL\cos x_3 - 2J_m, \\
c &= f_2(x_3, x_4) = MRL\,x_4^2 \sin x_3, \\
d &= ML^2 + J_\psi + 2J_m, \\
e &= f_3(x_3, x_6) = ML^2 x_6^2 \sin x_3 \cos x_3 + MgL\sin x_3,
\end{aligned}
\tag{3}
$$

and the generalized forces $F_{x_1}$, $F_{x_3}$, $F_{x_5}$ defined as follows:

$$
\begin{aligned}
F_{x_1} &= \frac{k_t}{R_m}(u_r + u_l) + 2\left(\frac{k_t k_e}{R_m} + f_m\right)(x_4 - x_2), \\
F_{x_3} &= -\frac{k_t}{R_m}(u_r + u_l) + 2\left(\frac{k_t k_e}{R_m} + f_m\right)(x_2 - x_4), \\
F_{x_5} &= -\frac{k_t}{R_m}\,\frac{W}{2R}(u_r - u_l) - \frac{W^2}{2R^2}\left(\frac{k_t k_e}{R_m} + f_m\right) x_6,
\end{aligned}
\tag{4}
$$
where u_l, u_r denote the left and right motor input voltages respectively and form the vector of system inputs $u = [u_1, u_2]^T = [u_l, u_r]^T$ [21, 22]. The whole model described by Eqs. (2)–(4) illustrates the complexity and nonlinearity of the system. For the purposes of control system design, the non-linear model (2)–(4) has been linearized at the unstable equilibrium state x0 (upper position):

$$
x_0 = [x_1, x_2, x_3, x_4, x_5, x_6]^T = [0, 0, 0, 0, 0, 0]^T, \qquad u_0 = [u_l, u_r]^T = [0, 0]^T. \tag{5}
$$

Then, the matrices A, B of the resultant state-space realization

$$
\frac{d}{dt}x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t) \tag{6}
$$

have the following general forms [21]:

$$
A = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & a_{22} & a_{23} & a_{24} & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & a_{42} & a_{43} & a_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & a_{66}
\end{bmatrix}, \qquad
B = \begin{bmatrix}
0 & 0 \\
b_{21} & b_{22} \\
0 & 0 \\
b_{41} & b_{42} \\
0 & 0 \\
b_{61} & b_{62}
\end{bmatrix} \tag{7}
$$

with the constants a_ij, b_ij calculated using the parameters from Table 1 as [21, 22]:

$$
\begin{aligned}
a_{22} &= -0.4733, & a_{23} &= -29.1464, & a_{24} &= 0.4733, \\
a_{42} &= 0.1401, & a_{43} &= 29.1651, & a_{44} &= -0.1401, & a_{66} &= -2.6349, \\
b_{21} &= b_{22} = 109.3664, & b_{41} &= b_{42} = -32.3752, & b_{61} &= -b_{62} = -0.5055,
\end{aligned}
\tag{8}
$$

and the matrix C (6×6) in the form of the identity matrix (provided that the outputs of the system are equal to the states); the matrix D (6×2) has zero elements only. The state-space description outlined above can further be used to design e.g. the state LQ controller, as discussed in the next chapter.
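The linearized model can be assembled numerically from (7)–(8); the following short sketch (Python with NumPy used for illustration, although the paper works in MATLAB) confirms that the upright equilibrium is indeed unstable, i.e. A has an eigenvalue with positive real part:

```python
import numpy as np

# Entries of the linearized state-space model (7) with the numeric values (8).
a22, a23, a24 = -0.4733, -29.1464, 0.4733
a42, a43, a44 = 0.1401, 29.1651, -0.1401
a66 = -2.6349
b21 = b22 = 109.3664
b41 = b42 = -32.3752
b61, b62 = -0.5055, 0.5055   # b61 = -b62

A = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, a22, a23, a24, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, a42, a43, a44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, a66],
])
B = np.array([
    [0.0, 0.0],
    [b21, b22],
    [0.0, 0.0],
    [b41, b42],
    [0.0, 0.0],
    [b61, b62],
])

# The inverted-pendulum mode gives A an eigenvalue in the right half-plane,
# which is exactly what makes this benchmark challenging.
unstable = max(np.linalg.eigvals(A).real) > 0
```

Such a quick eigenvalue check is a useful first exercise for students before any controller is designed.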
3 Control System Design

This section compares two common approaches to control system design on the basic task of stabilization of the two-wheeled unstable transporter Inteco. The trajectory tracking problem has also been addressed; however, due to the limited space of this conference contribution, it has been decided to include it later in an extended version of this paper. The suggested control approaches include:

• Classical PID control,
• LQ control.
Details of the design of the control laws for these two approaches are described in the following sections, including some of the experimental results.

3.1 Classical PID Control

The classical Proportional-Integral-Derivative (PID) control algorithm has been tested first, with the continuous-time transfer function of the controller

$$
C(s) = P + I\,\frac{1}{s} + D\,\frac{Ns}{s+N}, \tag{9}
$$

where P, I, D denote the proportional, integral and derivative components respectively, and N defines the properties of the filter for the derivative part (here s is the complex variable of the Laplace transform). Design and further tuning of the controller have been performed using the model-based design (MBD) approach, with the help of a simulation model of the transporter based on the nonlinear mathematical model (2)–(4) implemented in the MATLAB/Simulink environment. This approach finally provided the controller parameters [22]

$$
P = 9.5, \quad I = 0.01, \quad D = 0.53, \tag{10}
$$

as a trade-off between the performance and robustness of the resultant control loop (the filter parameter has been set to N = 1). A part of the real-time experiments using this controller for both motors is presented below in Fig. 3. During the experiment, at time 15 s a disturbance was injected programmatically (a pulse of height 0.02 rad and width 1 s) to test the robustness of the control loop.
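For illustration, a minimal discrete-time realization of the controller (9) with the tuned parameters (10) could look as follows (a Python sketch; the sampling period Ts and the forward-Euler discretization are assumptions, not taken from the paper):

```python
class PIDWithFilter:
    """Discrete sketch of C(s) = P + I/s + D*N*s/(s+N), forward-Euler."""

    def __init__(self, P, I, D, N, Ts):
        self.P, self.I, self.D, self.N, self.Ts = P, I, D, N, Ts
        self.integral = 0.0   # state of the integral term
        self.xf = 0.0         # state of the derivative filter N/(s+N)

    def step(self, error):
        self.integral += self.I * error * self.Ts
        # D*N*s/(s+N) realized as D*N*(error - xf), where xf = N/(s+N) * error
        u_d = self.D * self.N * (error - self.xf)
        self.xf += self.Ts * self.N * (error - self.xf)
        return self.P * error + self.integral + u_d

# Tuned values (10) with filter N = 1; Ts = 10 ms is an assumed sampling period.
pid = PIDWithFilter(P=9.5, I=0.01, D=0.53, N=1.0, Ts=0.01)
```

The filtered derivative is what keeps the controller usable on the real hardware: with a pure derivative term, the gyroscope measurement noise would be amplified directly into the motor commands.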
Fig. 3. Stabilization of the transporter – PID controller
As can be seen from the response, the controlled variable x3 (i.e. the yaw angle from the vertical axis) oscillates slightly around the unstable equilibrium state, with an amplitude of approximately ±0.01 rad (about 0.5 deg). The injected perturbation in the form of the pulse disturbs the control response, but it is suppressed in a few seconds, preserving the previous behavior. During the experiments it was observed that, although the tilt setpoint had been set to zero, the transporter still tended to move slowly forward. This phenomenon is probably caused by an imperfect distribution of the transporter's weight with respect to the axes of symmetry. The behavior has been suppressed by setting the reference to a non-zero value of the order of hundredths of a radian [22]. Further, the experiments with the PID controller revealed an overall sensitivity of the resultant control system to initial conditions and disturbances.

3.2 LQ Control

For Linear-Quadratic (LQ) optimal control, the goal is to find the optimal state-feedback control law

$$ u(t) = -Kx(t) $$
(11)
given by the optimal state-feedback (gain) matrix K such that the following quadratic cost function is minimized:

$$
J(u) = \frac{1}{2}\int_0^{\infty} \left[ x^T(t)\,Q\,x(t) + u^T(t)\,R\,u(t) \right] dt. \tag{12}
$$
Here, Q, R are weighting matrices selected by the designer, usually of diagonal form with positive or zero elements. The solution of this problem leads to the well-known matrix Riccati equation [23] and can be obtained e.g. in the MATLAB environment with the help of the Control System Toolbox, using the function lqr [20, 23]. Again, using a simulation model of the system implemented in MATLAB/Simulink based on the non-linear mathematical model (2)–(4), the weighting matrices Q (6×6) and R (2×2) were selected with the diagonal elements set to the following values (with all other elements zero):

$$
q_{11} = 0.5, \; q_{22} = 1.5, \; q_{33} = 20000, \; q_{44} = 0.1, \; q_{55} = 20, \; q_{66} = 750, \qquad r_{11} = r_{22} = 1000. \tag{13}
$$
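The lqr computation can also be reproduced outside MATLAB by solving the algebraic Riccati equation directly and forming K = R⁻¹BᵀP; a sketch of this (Python with SciPy, using the numeric A, B from (7)–(8)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized model (7)-(8).
A = np.array([
    [0, 1, 0, 0, 0, 0],
    [0, -0.4733, -29.1464, 0.4733, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0.1401, 29.1651, -0.1401, 0, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, -2.6349],
], dtype=float)
B = np.array([
    [0, 0],
    [109.3664, 109.3664],
    [0, 0],
    [-32.3752, -32.3752],
    [0, 0],
    [-0.5055, 0.5055],
], dtype=float)

# Weighting matrices (13).
Q = np.diag([0.5, 1.5, 20000.0, 0.1, 20.0, 750.0])
R = np.diag([1000.0, 1000.0])

# Solve the continuous-time algebraic Riccati equation and form the optimal
# gain K for u = -K x, which is what MATLAB's lqr computes as well.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The resulting closed loop A - B K must have all eigenvalues in the left
# half-plane, i.e. the LQ gain stabilizes the linearized transporter.
closed_loop_stable = max(np.linalg.eigvals(A - B @ K).real) < 0
```

Students can use such a script to experiment with the Q, R weights offline before running the controller on the real transporter.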
Consequently, the optimal gain matrix K was calculated using the lqr function as

$$
K = -\begin{bmatrix}
0.0158 & 0.0514 & 4.1139 & 0.5304 & 0.1000 & 0.1077 \\
0.0158 & 0.0514 & 4.1139 & 0.5304 & -0.1000 & -0.1077
\end{bmatrix}, \tag{14}
$$

where the gain elements K14, K24 related to the state variable x4 were further manually reduced a bit due to vibrations caused by a noisy angular velocity signal from the gyroscope, as suggested in [21]. Again, a part of the real-time experiments is presented in Fig. 4 below, where at time 15 s a disturbance was injected similarly as in the PID experiment. As can be seen in the figure, similarly to the PID experiment,
the controlled variable oscillates slightly around the unstable equilibrium state, with a somewhat higher amplitude of approximately ±0.02 rad (about 1 deg) for the suggested setting of the weighting matrices (13). Here, the injected perturbation seems to be suppressed relatively faster compared to the PID experiments. This has been confirmed by several experiments and demonstrates the overall good robustness of the resultant control system to both initial conditions and disturbances. However, it is important to note that the performance of the resultant control loop strongly depends on the choice of the weighting matrices Q, R in the optimization problem (12). The suggested choice (13) represents a trade-off between the stabilization and tracking problems, taking into account also the actuator capabilities [21, 22]. Both discussed control approaches are compared in Fig. 5, where the controlled variable has been recalculated to degrees and the control input to both motors is also presented (scaled to the range from −1 to +1 MU, i.e. Machine Units, used in the MATLAB/Simulink model).
Fig. 4. Stabilization of the transporter – LQ controller
Here, it is important to note that, since the system itself is unstable and nonlinear in nature, with relatively fast dynamics, and also faces disturbance-, noise- and saturation-related problems in the real-time environment, its behavior is by no means deterministic and every experiment differs slightly, depending on the initial conditions, the disturbances present, etc. Moreover, the model behavior is strongly influenced by the surface and its quality. However, as outlined above and generally expected, the LQ control approach has provided a more robust control loop, coping better with different conditions and disturbances, provided it has been reasonably tuned using the weighting matrices (in general, on average, the LQ approach provided 10–20% smaller over/undershoots in response to the injected disturbances, and the settling time was also about 50% shorter compared to the suggested PID setting).
Fig. 5. Stabilization of the transporter – Comparison of PID and LQ approach
4 Conclusion

A real-world unstable system, with all its nonlinearities, actuator constraints and omnipresent disturbances, can be a real challenge for control engineers. They have to understand the fundamental limitations of such systems in order to design safe control solutions. Therefore, it is advisable to train students of this field also using such systems, to deepen their understanding of the underlying problems. Scaled-down real-time laboratory models can be a suitable solution. In this paper, one such model, namely the Inteco two-wheeled unstable transporter, has been introduced and tested for such purposes. It allows both the simpler stabilization task and more challenging trajectory tracking problems to be addressed and tested using different control algorithms implemented in the MATLAB/Simulink environment. In this contribution, the task of stabilization using two common approaches (usually known to students) has been presented – classical PID and LQ control. The next goal is to test more advanced control algorithms as well, including e.g. the nowadays popular Model Predictive Control (MPC) approach [24] – if it is suitable for this model with regard to the used HW and SW components – not only for the task of stabilization but for trajectory tracking problems as well. The first impression so far is that this model can be a real challenge for control engineering students, with its unstable behavior, present nonlinearities, constraints and sensitivity to disturbances and noise. As such, it seems more suitable for postgraduate degrees – Master's or Doctoral study programmes related to control engineering, automation, robotics and similar fields – as demonstrated by the work [22]. Besides this, as suggested in the referred thesis, it is advisable to check the state of the transporter's batteries and its mechanical condition (esp. the screw connections) regularly, as these factors can strongly influence the behavior of the
whole system. In conclusion, this paper can serve as an inspiration for other departments training experts in control systems design and similar engineering fields.
References

1. Chidambaram, M.: Control of unstable systems: a review. J. Energy Heat Mass Transfer 19, 49–57 (1997)
2. Padma Sree, R., Chidambaram, M.: Control of Unstable Systems. Alpha Science Int. Ltd., Oxford (2006)
3. Gazdoš, F.: Introducing a new tool for studying unstable systems. Int. J. Autom. Comput. 11(6), 580–587 (2014). https://doi.org/10.1007/s11633-014-0844-z
4. Middleton, R.H.: Trade-offs in linear control system design. Automatica 27, 281–292 (1991)
5. Skogestad, S., Havre, K., Larsson, T.: Control limitations for unstable plants. In: Proceedings of the 15th Triennial World Congress, pp. 485–490. IFAC, Barcelona (2002)
6. Stein, G.: Respect the unstable. IEEE Control Syst. Mag. 23, 12–25 (2003)
7. Formalskii, A.M.: Stabilisation and Motion Control of Unstable Objects. Walter de Gruyter GmbH, Berlin (2015)
8. Horacek, P.: Laboratory experiments for control theory courses: a survey. Ann. Rev. Control 24, 151–162 (2000)
9. Leva, A.: A simple and flexible experimental laboratory for automatic control courses. Control Eng. Pract. 14, 167–176 (2006)
10. Gazdos, F.: Using real-time laboratory models in the process of control education. In: Machado, J., Soares, F., Veiga, G. (eds.) HELIX 2018. LNEE, vol. 505, pp. 1097–1103. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-91334-6_151
11. INTECO. http://www.inteco.com.pl. Accessed 3 Feb 2022
12. Knapik, D., Kolek, K., Rosol, M., Turnau, A.: Autonomous, reconfigurable mobile vehicle with rapid control prototyping functionality. IFAC-PapersOnLine 52, 13–18 (2019)
13. Chan, R.P.M., Stol, K.A., Halkyard, C.R.: Review of modelling and control of two-wheeled robots. Ann. Rev. Control 37, 89–103 (2013)
14. Zhang, J., Zhao, T., Guo, B., Dian, S.: Fuzzy fractional-order PID control for two-wheeled self-balancing robots on inclined road surface. Syst. Sci. Control Eng. (2021). https://doi.org/10.1080/21642583.2021.2001768
15. Uddin, N., Harno, H.G., Caesarendra, W.: Vector-based modeling and trajectory tracking control of autonomous two-wheeled robot. IAENG Int. J. Comput. Sci. 48, 1049–1055 (2021)
16. Díaz-Téllez, J., Gutierrez-Vicente, V., Estevez-Carreon, J., Ramírez-Cárdenas, O.D., García-Ramirez, R.S.: Nonlinear control of a two-wheeled self-balancing autonomous mobile robot. In: Batyrshin, I., Gelbukh, A., Sidorov, G. (eds.) MICAI 2021. LNCS (LNAI), vol. 13068, pp. 348–359. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-89820-5_28
17. Isdaryani, F., Salam, R., Feriyonika, F.: Design and implementation of two-wheeled robot control using MRAC. J. Telecommun. Electron. Comput. Eng. (JTEC) 13, 25–30 (2021)
18. Srichandan, A., Dhingra, J., Hota, M.K.: An improved Q-learning approach with Kalman filter for self-balancing robot using OpenAI. J. Control Autom. Electr. Syst. 32(6), 1521–1530 (2021). https://doi.org/10.1007/s40313-021-00786-x
19. Nguyen, D.-M., Nguyen, V.-T., Nguyen, T.-T.: A neural network combined with sliding mode controller for the two-wheel self-balancing robot. IAES Int. J. Artif. Intell. 10, 592–601 (2021)
20. Eshkabilov, S.: Beginning MATLAB and Simulink: From Novice to Professional. Apress, New York (2019)
21. Two-Wheeled Unstable Transporter User's Manual. Inteco Ltd., Krakow (2018)
22. Charous, Z.: Modelling and control of the two-wheeled unstable transporter Inteco. Master's thesis, Tomas Bata University in Zlin, Faculty of Applied Informatics, Zlin (2021)
23. Dorato, P., Abdallah, C., Cerone, V.: Linear-Quadratic Control: An Introduction. Krieger Pub. Co., Melbourne (2000)
24. Camacho, E.F., Bordons, C.: Model Predictive Control. Springer, London (2007). https://doi.org/10.1007/978-0-85729-398-5
Deep Learning in Taekwondo Techniques Recognition System: A Preliminary Approach

Paulo Barbosa1, Pedro Cunha1,2, Vítor Carvalho1,2, and Filomena Soares2

1 2Ai - School of Technology, IPCA, Barcelos, Portugal
[email protected], [email protected], [email protected]
2 Algoritmi Research Center, School of Engineering, Minho University, Guimarães, Portugal
[email protected]
Abstract. Using an approach focused on skeleton-based action recognition along with deep learning methodologies, this study presents the application of classification models to enable real-time assessment of Taekwondo athletes. For this purpose, a purpose-built dataset of Taekwondo movements was used as data for training, validation and inference of Long Short-Term Memory (LSTM), Convolutional Long Short-Term Memory (ConvLSTM) and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) models. The results obtained allow concluding that, for the defined system application and data structure, the LSTM model achieved the best results. The obtained accuracy of 0.9910 confirms the reliability of the model for application in the proposed system. Regarding response time, the LSTM model also obtained the best result, with 288 ms.

Keywords: Computer vision · Deep learning · Human action recognition · Neural networks · Taekwondo
1 Introduction

Athletes' performance evaluation is extremely important for the athlete and for his coach, as it allows obtaining relevant information about the level reached by the athlete, adapting training methodologies, and determining his evolution over time. For coaches in any sport, evaluating the performance of athletes is a difficult task. The Taekwondo martial art emerged in Korea and has been practiced as an ancestral martial art for more than 1500 years. At the Seoul Olympic Games (1988) Taekwondo was presented as a Korean martial art, becoming an Olympic sport at the Sydney Olympic Games (2000) [1], contributing to the increasing popularity of this martial art worldwide. The beginning of Taekwondo practice in Portugal is due to Grand Master David Chung Sun Yong [2]. Since then, the number of Portuguese athletes federated in Taekwondo has reached more than 4100 [3]. In any sport, the performance assessment of athletes is a difficult task that must be fulfilled by coaches. However, technological development has contributed to making this task less difficult, through the development of several systems to assist coaches in
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. J. Machado et al. (Eds.): ICIENG 2022, LNME, pp. 280–291, 2022. https://doi.org/10.1007/978-3-031-09385-2_25
evaluating athletes' performance. Regarding Taekwondo, investment in the development of applications aimed at the sport, using technological solutions capable of assisting in the evaluation of performance in a training environment, has not been relevant. In several modalities, technology has been used in sport to analyze and improve athletes' performance. Some of the developed systems allow obtaining information from the data of the movements performed by the athlete, being able to present values of speed, acceleration, applied force and displacement, among other characteristics [4–6]. In Taekwondo, the athlete's performance evaluation methods are still very traditional, consisting of on-the-spot visual interpretations and manual recordings during training sessions. This approach, in addition to being very time-consuming, does not provide quick feedback to athletes or allow the trainer to apply changes and adjust the training process [7]. The paper organization is presented in Fig. 1. The study presented in this paper is part of a project whose main objective is to develop a prototype capable of assisting in the assessment of the performance of Taekwondo athletes in real time during training. The main results will be obtained through statistical analysis, which will allow the identification and quantification of the movements performed by the athlete during training, thus achieving an analysis of the evolution of the athlete's performance through the processing of the data acquired during training; and through biomechanics and movement analysis, which will allow calculating the acceleration, velocity and applied force related to the movement. The developed system is composed of a framework together with a 3D camera for data acquisition of the athletes' movements, used to calculate and provide the speed, acceleration and force applied to the athlete's hands and feet. It also includes wearable devices placed on the athlete's body using Velcro straps, along with the 3D camera [8, 9]. This article is organized into five chapters: the state of the art is presented in the second chapter; the description of the methodologies used in the third chapter; the preliminary results in the fourth chapter; and, finally, the final comments in the fifth chapter.
2 State of the Art

The development and adjustment of new technologies with the objective of facilitating the process of evaluating athletes' performance in sport has aroused the interest of the scientific community. According to the literature [7, 10], there are systems that make it possible to analyze the performance of athletes in sport, but it turns out that there are few technology-supported tools available to assist in Taekwondo training. Different published studies, using different approaches to recognize movements performed by people, are presented in this chapter, based on an analysis of the available references on the monitoring of human body movements, especially on the recognition of these same movements [11, 12]. The area of study known as Human Action Recognition (HAR) has contributed to a great development in computer vision. It consists of the process of identifying, analyzing, and interpreting the actions that a person is taking, and has allowed the realization of several studies aiming to better understand how it is feasible to recognize the movements and actions of the human body.
Fig. 1. Paper structure flowchart.
Most of the studies carried out, as presented in [13], use optical sensors, namely cameras with depth sensors, to collect movement data, making it possible to obtain human pose information that will then represent the skeleton data. This opportunity to recognize human activities through technology will make it possible to solve many current problems, such as automatic screening of video content by feature extraction, identification of violent acts in video surveillance systems, assistance in the automatic conduct of vehicle tasks, and interaction with robot vehicles, among others [12]. Other approaches allow skeleton-based action recognition through data obtained by locating the human joints in a three-dimensional environment to perform motion recognition. Usually, depending on the system, these data are composed of a matrix of
20 joints considered to define the human body. To perform skeleton-based action recognition, other information is used besides the joint data. Depending on the system, motion sensors are sometimes used to obtain data, to compensate for the absence of data (e.g., due to occlusions), or to complement the data collection, in addition to the position of the joints in a three-dimensional environment [14, 15]. Other studies have presented systems that use depth sensors to acquire information in addition to the raw information from the Infrared (IR) sensor and from the Red-Green-Blue (RGB) sensor. Other studies reveal that different joints can receive different attention from a Long Short-Term Memory (LSTM) neural network; in order for the network to focus more on specific joints, such as the hands and legs, an Attention Mechanism is introduced [16]. Most studies in the area of pattern categorization are based on deep learning methods, that is, neural networks, essentially due to their good performance in tasks such as object detection and image classification [17]. Recent research also applies deep learning techniques, more specifically neural networks, to the recognition of human action [13, 17]. In their research, the authors of [18] present a fine-grained comparison between different deep learning methods applied to action recognition challenges. Considering the presented studies, the most significant single-type architectures were neural networks of different types, such as Convolutional Neural Networks (CNN) and LSTM networks [19, 20]. Other works that combine different types of networks, presenting hybrid solutions, were also relevant [21, 22].
3 Methodology

To find a solution with satisfactory results for the research question raised, different approaches to the interpretation and identification of human motion were used. In view of this, different approaches and methodologies used in other studies were tested, in order to detect the methodology that best fits the objectives of the study. To complete this task, the system shown in Fig. 2 was designed.
Fig. 2. HAR system diagram.
To acquire the data, two similar devices were used, an Orbbec Astra 3D camera and a Microsoft Kinect camera, with the data processed through deep learning methods to enable the identification and quantification of the athlete's techniques. In this chapter, the suggested methodology is discussed, with a brief description of the datasets used and the proposed
P. Barbosa et al.
approaches to solving the problem of categorizing the type of movement performed by the athlete. 3.1 Taekwondo Movements Dataset The dataset was created with data from specific movements performed by taekwondo athletes. To perform this task, a system previously developed as part of the main project [5, 9] was used, which collects movement data as the positions of the athletes' joints in three-dimensional space, more specifically the Cartesian coordinates of multiple joints. This data acquisition was carried out at the IPCA and at the University of Minho, with practitioners of the martial art taekwondo. The resulting dataset consists of four classes, each representing a different technique/movement performed by the athlete.
Fig. 3. Raw data from Right Hand Joint during Jirugui movement.
Figure 3 shows, over a sequence of 80 samples, the raw data of the Right Hand joint during the Jirugui movement, in coordinates x, y and z, respectively. The main purpose of this dataset is to collect information about the movements of the taekwondo athlete in order to use it to train deep learning classification methods. Table 1. Taekwondo movements collected.
AP Tchagui: characterized by the movement of one of the legs in the form of a front kick.
Miro Tchagui: identified as a pushing front kick.
Jirugui: identified as a frontal punch.
(The table's Execution column illustrates each technique.)
Deep Learning in Taekwondo Techniques Recognition System
The taekwondo techniques collected for the dataset are described in Table 1. In addition to these, data of non-movement (standing) were also collected to create a class that enables the system to distinguish between moving and stopped. 3.2 Pre-processing The way the data are delivered is crucial to obtaining better results in the recognition of movement patterns when deep learning techniques are applied. Thus, pre-processing plays an important role in the whole classification task. Its role is even more relevant in the case of sequential data, as important parameters must be defined, such as the time window size and the overlap of data between time windows. When it comes to evaluating the performance of a martial art, the movements are all performed at high speed, which allows us to conclude that, for most athletes, two seconds are enough to get from the starting point to the end of the technique. Thus, a window size of 80 points was chosen, since a window of 80 samples is enough to capture the full two seconds of the movement. 3.3 Deep Learning Methods The use of CNNs in various deep learning applications has proven to be a success since AlexNet won the ImageNet competition in 2012 [23]. The good results obtained, initially in image classification and object recognition, contributed to the application of these techniques also to sequential data types, in particular 1D raw sequential data and even video-type data [24]. The original concept of convolution layers is to slide a convolution kernel over the image and, from that convolution, extract features that will be different or unique in each motion class. LSTM belongs to the category of recurrent neural networks (RNN), developed for time-series data problems such as audio files, GPS paths, text recognition, etc. All these challenges require that the information of the current moment also take the preceding and subsequent moments into account [25].
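The sliding-window segmentation described in Sect. 3.2 can be sketched as follows. The 80-sample window comes from the text; the 50% overlap is an illustrative assumption, since the paper does not state the overlap used:

```python
# Sketch of the sliding-window segmentation of Sect. 3.2. A recording is a
# list of per-sample joint coordinates; it is cut into fixed-length windows.
# WINDOW = 80 samples (about two seconds of movement, per the text).
# STEP = 40 gives a 50% overlap between consecutive windows (assumed value).

WINDOW = 80
STEP = 40

def segment(recording):
    """Split one recording (list of samples) into overlapping windows."""
    windows = []
    for start in range(0, len(recording) - WINDOW + 1, STEP):
        windows.append(recording[start:start + WINDOW])
    return windows
```

For example, a 200-sample recording yields four 80-sample windows starting at samples 0, 40, 80 and 120.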
Recurrent cells were created for this purpose. Due to problems such as the vanishing gradient, and the difficulty of retaining long-term (previous) information while considering only short-term (current) information, in 1997 Hochreiter and Schmidhuber solved this problem with the introduction of LSTM cells [26]. These were created with a structure, as the name implies (Long Short-Term), which makes it possible to define the significance of the information transmitted from t − 1, allowing the cell to decide whether this information should be considered in the subsequent cell and whether it should be transmitted to t + 1. The authors of [19] used the WISDM dataset, with data acquired through a simple sensor that identifies the x, y and z coordinates of human movement, revealing that this network model attains good results in classifying human activities such as running, sitting, standing, etc. The study presented in [16] introduces the “Global Context-Aware Attention LSTM” model, which adds the ability to assign greater relevance to specific joints of the skeleton. This is a useful feature, since not all joints introduce useful
information; in some cases they can even induce noise in the model. With the suggested strategy, the model obtains superior results in relation to the simpler LSTM model. The hybrid LSTM model was presented as an architecture that starts with two Convolutional Neural Network (CNN) layers. These layers make it possible to extract spatial features from the motion sequence; the features are then used as input to an LSTM layer that extracts temporal features, thus allowing the classification of a sequence. Another relevant model in this same study context is the ConvLSTM architecture, which consists of an LSTM with built-in convolution and permits a good spatio-temporal correlation of features [22].
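The gating behaviour of the LSTM cell described above (deciding how much of the t − 1 information to keep and how much to pass on to t + 1) can be sketched with a scalar cell. This is a didactic illustration only; the weight names are assumptions, not values from any trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM cell step with scalar state; weights live in the dict w.
    The forget gate decides how much t-1 information to keep, the input
    gate how much new information to write, and the output gate how much
    of the cell state to expose toward t+1."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    c = f * c_prev + i * g   # new cell state (long-term memory)
    h = o * math.tanh(c)     # new hidden state (short-term output)
    return h, c
```

With all weights at zero, every gate opens halfway and the candidate is zero, so the cell state is simply halved at each step, which makes the gate arithmetic easy to check by hand.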
4 Preliminary Results After testing all the models presented in the methodology on the NTU dataset and analyzing the results obtained with each of them, it was decided to apply the models with the best results to the taekwondo dataset [27]. This dataset was created with the system described above, using two different sensors, Orbbec Astra and Microsoft Kinect. It is composed of 4 classes, with 200 samples per class, consisting of arm and leg techniques along with a non-movement position, performed by taekwondo athletes. These data were used for training, validation and inference with the deep learning models presented above (Figs. 4, 5 and 6). The data were split into 90% for training and 10% for validation.
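The 90%/10% split mentioned above can be sketched as follows. Shuffling and the fixed seed are illustrative assumptions; the paper does not state how the split was drawn:

```python
import random

def split_dataset(samples, labels, val_fraction=0.10, seed=0):
    """Shuffle and split into 90% training / 10% validation, as in the text."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)        # reproducible shuffle (assumed)
    n_val = int(len(idx) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    train = ([samples[i] for i in train_idx], [labels[i] for i in train_idx])
    val = ([samples[i] for i in val_idx], [labels[i] for i in val_idx])
    return train, val
```

For the 800-sample dataset (4 classes × 200 samples) this yields 720 training and 80 validation samples.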
Fig. 4. Training results and confusion matrix from LSTM model.
Fig. 5. Training results and confusion matrix from ConvLSTM model.
Fig. 6. Training results and confusion matrix from CNN LSTM model.
The figures show the training results of the different models, in which the training loss and accuracy determine how well the model fits the training data over the course of training. The best result should present the minimum possible loss and the maximum possible accuracy. As in the LSTM model, 18 epochs were also used for the ConvLSTM model. After training for this number of epochs, the validation results of this model give an accuracy of 0.9730, making it the model with the worst results among those considered in this study. This may be explained by some confusion both between non-movement and the Miro Tchagui movement, and between non-movement and the Jirugui movement. The last model trained was the CNN LSTM, a hybrid model composed of CNN layers followed by an LSTM; its training reached an accuracy of 0.9820 on the validation data (Fig. 6). In the CNN LSTM model there was also some confusion in the decision between the AP Tchagui and Miro Tchagui movements, which are similar techniques.

Table 2. Accuracy results from all tested models.

Model       Accuracy
LSTM        0.9910
CNN LSTM    0.9820
ConvLSTM    0.9730
According to the results presented above, all the trained models achieved very satisfactory results. The best performance was that of the LSTM model, which, despite being the simplest model, presents the best results on the validation data, with an accuracy of 0.9910. It is followed by the CNN LSTM model, which achieved an accuracy of 0.9820 on the same validation data. Finally, the ConvLSTM model obtained an accuracy of 0.9730 (Table 2). Besides the validation results, it is also important to determine which of the models presents the best time performance in real-time operation, since the main objective of this project is to infer taekwondo athletes' movements in real time. To test the inference response time under conditions close to the intended real application, an HTTP server was created that is responsible for responding to inference requests sent by clients. The client is thus responsible for sending the request via HTTP POST with the sequence of the movement as a JSON object (Fig. 7). On the server side, the data are pre-processed and then fed to the previously trained model, resulting in the class corresponding to the movement inferred by the network (Fig. 8). All models were tested to check whether there are significant response-time differences between them. From the results presented in Table 3, it was possible to conclude that the differences in response time are not significant; this leads us to define the model that best suits our needs as the one with the best inference accuracy, the LSTM model.
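A minimal sketch of such an HTTP inference service, using only the Python standard library, is shown below. The class names are taken from Table 1; the `infer` function is a placeholder for the trained LSTM model, and the JSON field names (`sequence`, `class`) are assumptions, since the paper does not give the payload schema:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

CLASSES = ["AP_Tchagui", "Miro_Tchagui", "Jirugui", "Non_Movement"]

def infer(sequence):
    """Placeholder for the trained LSTM model: a real deployment would
    pre-process the 80-sample window and call model.predict()."""
    return CLASSES[3] if not sequence else CLASSES[2]

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        sequence = json.loads(body)["sequence"]          # movement window
        reply = json.dumps({"class": infer(sequence)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):   # silence per-request logging
        pass

def serve_once(port=0):
    """Start the server on an ephemeral port; it answers one request."""
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    return server, server.server_address[1]
```

A client then POSTs the movement window and reads back the inferred class, mirroring the request/response exchange of Figs. 7 and 8.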
Fig. 7. POST inference.
Fig. 8. POST inference response.
Table 3. Summary of HTTP inference response time.

Model       HTTP response time   Total params
LSTM        288 ms               82 904
ConvLSTM    289 ms               197 368
CNN LSTM    282 ms               276 184
5 Final Remarks This article presents a study, part of a project, which aims to develop a system to evaluate the performance of taekwondo athletes in real time. It focuses on identifying an appropriate deep learning methodology for recognizing the movements of taekwondo athletes, by testing different approaches used in previous studies. Applying the different approaches from the Human Action Recognition literature to our dataset allowed us to conclude that, for recognizing the movements of taekwondo athletes, the models with LSTM layers obtained the better results. This may be directly related to the temporal characteristics of the data used in this study.
Both hybrid methods, CNN LSTM and ConvLSTM, obtained validation accuracies above 97%. As future work within the project, the next step will be the integration of the inference server with the real-time framework developed in the main project, which will allow the recognition and counting of athletes' movements in real time. This will also make it possible to continue acquiring more, and more complete, data of the movements performed in taekwondo, with the aim of expanding and improving the dataset. Acknowledgements. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.
References
1. História do Taekwondo. https://lutasartesmarciais.com/artigos/historia-taekwondo. Accessed 28 Jan 2022
2. Sun-Yong, D.: Chung Sun-Yong. https://www.wikisporting.com/index.php?title=Chung_Sun-Yong. Accessed 09 Feb 2022
3. Praticantes desportivos federados: total e por todas as federações desportivas. https://www.pordata.pt/Portugal/Praticantes+desportivos+federados+total+e+por+todas+as+federa%c3%a7%c3%b5es+desportivas-2227-178606. Accessed 09 Feb 2022
4. Arastey, G.: Computer Vision in Sport | Sport Performance Analysis. https://www.sportperformanceanalysis.com/article/computer-vision-in-sport
5. Cunha, P., Carvalho, V., Soares, F.: Real-time data movements acquisition of taekwondo athletes: first insights. In: Machado, J., Soares, F., Veiga, G. (eds.) HELIX 2018. LNEE, vol. 505, pp. 251–258. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-91334-6_35
6. Nadig, M., Kumar, S.: Measurement of velocity and acceleration of human movement for analysis of body dynamics. In: International Journal of Advanced Research in Computer Science & Technology (IJARCST 2015), vol. 3, pp. 37–40 (2015)
7. Pinto, T., Faria, E., Cunha, P., Soares, F., Carvalho, V., Carvalho, H.: Recording of occurrences through image processing in taekwondo training: first insights. In: Tavares, J.M.R.S., Natal Jorge, R.M. (eds.) ECCOMAS 2017. LNCVB, vol. 27, pp. 427–436. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-68195-5_47
8. Cunha, P., Carvalho, V., Soares, F.: Development of a real-time evaluation system for top taekwondo athletes SPERTA. In: SENSORDEVICES 2018, The Ninth International Conference on Sensor Device Technologies and Applications, pp. 140–145 (2018)
9. Cunha, P., Barbosa, P., Ferreira, F., Fitas, C., Carvalho, V., Soares, F.: Real-time evaluation system for top taekwondo athletes: project overview. In: BIODEVICES 2021 - 14th International Conference on Biomedical Electronics and Devices, pp. 209–216 (2021). https://doi.org/10.5220/0010414202090216
10. Zhuang, Z., Xue, Y.: Sport-related human activity detection and recognition using a smartwatch. Sensors 19, 1–21 (2019). https://doi.org/10.3390/s19225001
11. Wang, P., Li, W., Ogunbona, P., Wan, J., Escalera, S.: RGB-D-based human motion recognition with deep learning: a survey. Comput. Vis. Image Underst. 171, 118–139 (2018). https://doi.org/10.1016/j.cviu.2018.04.007
12. Kong, Y., Fu, Y.: Human action recognition and prediction: a survey. arXiv abs/1806.11230 (2018)
13. Zhang, H., et al.: A comprehensive survey of vision-based human action recognition methods. Sensors 19, 1005 (2019). https://doi.org/10.3390/s19051005
14. Jiang, W., Yin, Z.: Human activity recognition using wearable sensors by deep convolutional neural networks. In: 23rd ACM International Conference on Multimedia, pp. 1307–1310. Association for Computing Machinery, New York (2015). https://doi.org/10.1145/2733373.2806333
15. Zhang, Y., Zhang, Y., Zhang, Z., Bao, J., Song, Y.: Human activity recognition based on time series analysis using U-Net. arXiv abs/1809.08113 (2018)
16. Liu, J., Wang, G., Hu, P., Duan, L., Kot, A.: Global context-aware attention LSTM networks for 3D action recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1647–1656 (2017). https://doi.org/10.1109/CVPR.2017.391
17. Ren, B., Liu, M., Ding, R., Liu, H.: A survey on 3D skeleton-based action recognition using learning method. arXiv:2002.05907, pp. 1–8 (2020)
18. Wang, L., Huynh, D., Koniusz, P.: A comparative review of recent kinect-based action recognition algorithms. IEEE Trans. Image Process. 29, 15–28 (2019). https://doi.org/10.1109/TIP.2019.2925285
19. Pienaar, S., Malekian, R.: Human activity recognition using LSTM-RNN deep neural network architecture. In: IEEE 2nd Wireless Africa Conference, Piscataway, pp. 1–5 (2019)
20. Zhu, W., et al.: Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks. In: AAAI Conference on Artificial Intelligence. AAAI (2016)
21. Zhao, R., Wang, K., Su, H., Ji, Q.: Bayesian graph convolution LSTM for skeleton based action recognition. In: IEEE/CVF International Conference on Computer Vision, pp. 6882–6892 (2019). https://doi.org/10.1109/ICCV.2019.00698
22. Sanchez-Caballero, A., Fuentes-Jimenez, D., Losada-Gutiérrez, C.: Exploiting the ConvLSTM: human action recognition using raw depth video-based recurrent neural networks. arXiv preprint arXiv:2006.07744 (2020)
23. Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., Muller, P.-A.: Deep learning for time series classification: a review. Data Min. Knowl. Disc. 33(4), 917–963 (2019). https://doi.org/10.1007/s10618-019-00619-1
24. Khan, A., Sohail, A., Zahoora, U., Qureshi, A.S.: A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 53(8), 5455–5516 (2020). https://doi.org/10.1007/s10462-020-09825-6
25. Mittal, A.: Understanding RNN and LSTM. https://towardsdatascience.com/understanding-rnn-and-lstm-f7cdf6dfc14e. Accessed 09 Feb 2022
26. Understanding LSTM Networks – colah's blog. https://colah.github.io/posts/2015-08-Understanding-LSTMs. Accessed 05 Feb 2022
27. Barbosa, P., Cunha, P., Carvalho, V., Soares, F.: Classification of taekwondo techniques using deep learning methods: first insights. In: BIODEVICES 14th International Joint Conference on Biomedical Engineering Systems and Technologies, pp. 201–208 (2021). https://doi.org/10.5220/0010412402010208
New Conceptions of the Future in Cyber-MixMechatronics Engineering and Claytronics Gheorghe Gheorghe1,2,3,4(B)
and Florentina Badea5
1 Valahia University of Targoviste, Târgoviste, Romania
[email protected]
2 Politehnica University of Bucharest, Bucharest, Romania 3 Titu Maiorescu University of Bucharest, Bucharest, Romania 4 Academy of Technical Sciences of Romania, Bucharest, Romania 5 The National Institute of Research-Development for Mechatronics and Measurement
Technique Bucharest, Bucharest, Romania
Abstract. This scientific paper presents the generative evolution from Mechatronics to Cyber-Mechatronics and from Mechatronic Systems to Cyber-Mechatronic Systems, highlighting the scientific and technological advances in these fields, the new and complex innovative concepts of Mechatronics and Cyber-Mechatronics science and engineering, and the constructive and applied architectures of Mechatronic and Cyber-Mechatronic Systems, which merge the physical world with the virtual world. These cyber-mixmechatronic systems are the concern of the world's researchers and engineers; based on the research results currently obtained in these fields of excellence, with the highest societal impact of the 21st century, they are expected to contribute to the sustainable development of Industry 4.0 and of the computerized society through the development of Cyber-MixMechatronics Engineering and Claytronics. Keywords: Mechatronics · Cyber-mechatronics · Mechatronic systems · Cyber-mechatronic systems · Claytronics
1 Introduction This paper presents, in an adaptive structure, the achievements of research in the field of Cyber-MixMechatronics Engineering and Claytronics concepts, in order to motivate the high and future-oriented level of this research, with an impact worthy of the 21st century. The ideas conceived and presented in the form of models/prototypes are aligned in order to support the achievements of the research in the two fields of excellence, Cyber-MixMechatronics and Claytronics, with applicability in society.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 292–301, 2022. https://doi.org/10.1007/978-3-031-09385-2_26
New Conceptions of the Future in Cyber-MixMechatronics Engineering
1.1 New Scientific Cyber-MixMechatronics Concepts in the Future 1.1.1 Ideas for New Scientific Cyber-MixMechatronics Concepts in the Future This section summarizes the scientific concepts of complex intelligent 3D electronically controlled cyber-mixmechatronic systems with 3D probe holders, for measurement, integrated control and other intelligent industrial services, with telemonitoring and remote control, intelligent robotization and technologization, and multi-applicability, followed by the keywords of the innovation: a) the new concept of an integrated 3D intelligent cyber-mixmechatronic system with two 3D gripper holders for adaptive measurement and control (see Fig. 1).
Fig. 1. The new concept of intelligent 3D mechatronic mix system for adaptive measurement and control
This new concept of an intelligent 3D mix-mechatronic system for adaptive measurement and control includes a multi-complex structure and architecture, as follows: – the mass support subsystem for the cyber-mixmechatronic system; – 3D intelligent cyber-mixmechatronic equipment for adaptive measurement and control; – a PC central unit with complex architecture; – a command and control unit for the 3D intelligent cyber-mixmechatronic equipment; – a barrier system with sensors for the protection of the workspace of the cyber-mixmechatronic system; – an electronic unit for digital display of adaptive measurement and control process data; – gripper subsystems holding 3D electronic probes for contacting the measuring points and for adaptive control of the parts to be checked; – parts to be checked (components in the automotive industry); – a package of specialized software for adaptive measurement and control processes. This system is already applied in the automotive industry (at Renault-Dacia S.A., Pitesti, Romania) to perform coordinate-based control in the intelligent manufacturing
G. Gheorghe and F. Badea
processes of the car and to remotely monitor this process, eliminating the physical presence of the service team at the respective beneficiary. This 3D cyber-mixmechatronic system is an absolute novelty in Romania, approaching this complex cyber-mixmechatronic concept for the first time; it is designed to perform remote coordinate control (3D, 4D) and remote monitoring in intelligent metrological and/or industrial processes, through the signals and information of the ultra-precise movements and angles of the 3D measuring/control probe, within the program designed on the PC and the command, modeling and remote-position evaluation software, so that the position-palpation-movement information packages are constituted as vector packages for complex mathematical processing that can be done both locally and remotely. The 3D/4D cyber-mixmechatronic system is based on an intelligent architecture of specialized sensors and actuators and on specialized static and dynamic working modules that transmit information to intelligent 4G/5G mechatronic units, which process, store and transmit it to monitoring and command entities/centers or databases, through the cybernetic integronic system, which merges with the virtual system (Internet, Intranet). The cyber-mixmechatronic system is a cyber-physical subsystem, based on mechatronic and cyber-mechatronic concepts mixed with other components such as Intranet/Internet, IoT and specialized programs and software, which performs multiple functions of intelligent measurement and control, locally and/or remotely, and participates in process automation and cybernetization (in this case, metrological measurement, industrial measurement and other types of processes), using the scientific discoveries of industrial space and cyberspace. b) the new concept of a cyber-mixmechatronic system of robotization and intelligent technologization with remote monitoring and remote control (Fig. 2).
Fig. 2. The new concept of cyber-mixmechatronic system of robotization and intelligent technology with remote monitoring and remote control
This new concept of a cyber-mixmechatronic system of robotization and intelligent technologization with remote monitoring and remote control includes a multi-complex architecture of modular infrastructures, consisting of: – an intelligent robotic system with telemonitoring and remote control; – an IP camera subsystem for visualization of the technological measurement and control processes and their transfer to the remote control center through cyberspace; – a sensor barrier system for the robotic workspace; – a central PC system; – a robot command and control system; – 3D electronic probe systems integrated in the robot grippers for part measurement and control processes; – cyberspace; – a remote monitoring and remote control center. In the new architectural ensemble, the information flow of the cyber-mixmechatronic system of intelligent robotization with remote monitoring and remote control runs from the “input quantities”, of non-electrical nature, which are transformed into electrical quantities, then amplified, divided and displayed as “output quantities”, a result that can be controlled, monitored or configured remotely through the interconnections of cyberspace and the remote control and remote monitoring center [1]. This cyber-mixmechatronic intelligent robotization and technologization system, developed in several hardware and software infrastructures and constructive and functional architectures, can be used in any industrial/societal process, depending on its complexity, for the remote control and maintenance of manufacturing and intelligent production lines. The cyber-mixmechatronic intelligent robotization and technologization system is structured, functionally and holistically, according to the requirements of the industrial/societal process where it is integrated, so that the systems become intelligent control equipment or technological equipment for industrial services.
In this sense, the gripper of the system can also support a 3D electronic probe, for contacting the measuring points on the surfaces of the part to be checked, in addition to its function as a gripper for technological devices related to different services. c) the new concept of an intelligent cyber-mixmechatronic damping system with remote monitoring and remote control for smart cars (Fig. 3). This new concept of an intelligent cyber-mixmechatronic damping system comprises a multi-complex and multi-functional structure, consisting of: – an intelligent cyber-mixmechatronic damping system for intelligent cars (consisting of an electromagnet, rheological fluid, acceleration sensor, acceleration sensor interface, high voltage source, intelligent remote control equipment, 4G GPRS modem and antenna); – cyberspace (Internet and industrial Ethernet); – a remote control and remote monitoring center (PC unit, monitor, WAN Internet router and PC with specialized software).
Fig. 3. New concept of intelligent cyber-mixmechatronic damping system with remote monitoring and remote control for smart cars
In the architectural ensemble of the intelligent cyber-mixmechatronic damping system, the transfer of information takes place from the non-electrical input quantities of the system, which are transformed into electrical quantities subjected to processes of amplification, division and digital display, and then into output quantities that can be controlled, configured and monitored remotely, with the effect that defects on the surface of streets and highways are not felt by the circulating intelligent vehicles. d) the new concept of a multiplicative 3D cyber-mixmechatronic system for remote control and remote monitoring for the mechatronics industry (Fig. 4).
Fig. 4. The new concept of the multiplicative 3D cyber-mixmechatronic system for the mechatronics industry
This multiplicative 3D cyber-mixmechatronic system is structured in a complex architecture consisting of: • the multiplicative 3D cyber-mixmechatronic system (3D system, 3D electronic probe, PC central unit, security barriers, special equipment); • cybernetic space (communication bus, PLC, 5G GPRS modem, antenna, WAN Internet); • a remote monitoring and remote control center (router, PC unit, 3D software). The multiplicative 3D cyber-mixmechatronic system can be used in the following industries: – in the smart automotive industry, by integrating it into automated manufacturing and assembly lines; – in the precision mechanics and mechatronics industry, also through its integration in intelligent manufacturing and production lines; – in the aerospace industry; – in the electronics and automation industry; – in the optics industry; – in the environment and energy industry.
2 Developing the Convergence of Complexity in Cyber-MixMechatronics Engineering and Claytronics 2.1 Developing the Convergence of Complexity in Cyber-MixMechatronics Engineering The complex convergence relationship between the two concepts, “Cyber-MixMechatronics Engineering” and “Claytronics”, consists in the fact that both aim at the simultaneous development of new and innovative technical and technological sciences: for the sustainable development of Industry 4.0 on a large and medium scale (Cyber-MixMechatronics) and on a micro and nano scale (Claytronics), and for the simultaneous and sustainable development of the information and post-information society, by using the related intelligent technologies and the innovative micro-nano technologies of excellence. During all phases of the design process it is necessary to build models. Models are very important hierarchical tools for complex activities such as design engineering. In the engineering of high-performance products, mathematical modelling and simulation, i.e. experimenting with computer-based models, is an increasingly important technique for solving problems, evaluating solutions and making decisions. The models are prone to inconsistencies that need to be addressed in order to verify the consistency of the design of such mechatronic and cyber-mixmechatronic models. Today, the concepts of “time to market” and “rapid development” are very important aspects of the mechatronic and cyber-mechatronic product development process.
The evolution of market requirements in recent years has profoundly transformed the way the designer thinks and acts during all stages of product development. In fact, nowadays, time to market, quality standards, environmental impact, safety and cost-effectiveness are becoming essential requirements that affect the entire product development lifecycle. In order to improve the performance of new products, the interactions between their various fields and mechatronics are increasingly exploited, which has led to greater product complexity. Today, design activities take place in a multidisciplinary environment, often involving engineers from different work environments on a common product. Therefore, a new approach to product design and engineering can be considered a key point for optimizing the design process. The role of consistency is to characterize dependencies in order to identify the conditions that must be met for objects and models to fit together [3]. One of the key aspects in the modern development of cyber-mixmechatronic systems is the strict integration of mechanical, control, electrical, electronic and software issues, starting from the very first design phase, as shown in Fig. 5.
Fig. 5. Interaction in cyber-mixmechatronics design
In the modelling process, caution is required regarding the purpose of the model. Non-routine simulations, which tend to be more exploratory than routine simulations, should be facilitated by flexible models, i.e. models that can be easily configured and easily adapted for different purposes [4]. A cyber-mixmechatronic module designates the “smallest” indivisible mechatronic subsystem within the set of cyber-mixmechatronic subsystems, structured on several hierarchical levels corresponding to the required degree of detail, as shown in Fig. 6. Figure 7 shows the example of a hierarchical decomposition of a global cyber-mixmechatronic system, where the hierarchical structuring makes it possible to recognize and describe the internal interactions, as well as the integration of all systems involved at all levels of cyber-mixmechatronic coupling.
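The hierarchical decomposition described above can be represented as a simple tree of subsystems with modules as leaves. The concrete subsystem names below are illustrative assumptions, not taken from Figs. 6 and 7:

```python
# Toy sketch of a hierarchical decomposition: a global cyber-mixmechatronic
# system broken into subsystems, with indivisible modules as leaves. Each
# node is a dict; the hierarchy depth corresponds to the degree of detail.

def depth(node):
    """Number of hierarchical levels below and including this node."""
    children = node.get("children", [])
    return 1 + (max(depth(c) for c in children) if children else 0)

system = {
    "name": "global cyber-mixmechatronic system",
    "children": [
        {"name": "measurement subsystem",            # assumed example name
         "children": [{"name": "3D probe module"},
                      {"name": "sensor module"}]},
        {"name": "control subsystem",                # assumed example name
         "children": [{"name": "PLC module"}]},
    ],
}
```

Here the global system has three hierarchical levels: system, subsystems, and modules.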
Fig. 6. Multilevel cyber-mixmechatronic model
Fig. 7. The hierarchical structure of a global cyber-mixmechatronic system
2.2 Developing the Convergence of Complexity in Claytronics 2.2.1 Preamble Programmable matter refers to a technology that allows one to control and manipulate three-dimensional physical artifacts that imitate the shape, movement, visual appearance, sound and tactile qualities of the original object. 2.2.2 Claytronics Conception Claytronics comprises individual components, called catoms or claytronic atoms, which can move in three dimensions relative to other catoms, adhere to other catoms to maintain a 3D shape, and compute information about the state of the environment as a whole. To create a dynamic 3D object, ideas from various fields, such as modular and reconfigurable robotics, will be integrated.
G. Gheorghe and F. Badea
2.2.3 Claytronic System
Compiling the specifications will give each catom a target point for achieving the desired overall shape. At this point, the catoms begin to move toward each other using generated forces, either magnetic or electrostatic, and adhere to each other. The catoms on the surface will display an image that shows the color and texture characteristics of the source object. If the source object starts to move, a concise description of the movements will be broadcast, allowing the catoms to update their positions by moving past each other. The end result is that the ensemble will behave like a single coordinated system.

2.2.4 The Motivation and the Purpose of Claytronics
An essential motivation for this work is that technology has reached a point where a programmable-matter system can realistically be built on design principles that will eventually allow it to scale to millions of sub-millimeter catoms. The purpose of Claytronics is to make the system usable now and scalable in the future. Thus, the design principle behind both hardware and software is scalability: the hardware must scale down to micron-sized catoms and up to ensembles of millions of catoms. Claytronics will be a test case for a problem we face today: how to build complex, massively distributed dynamic systems. It is also a step towards the real integration of computers into our lives, by embedding them in the artifacts around us and by making them interact with the world [5].

2.2.5 Application as a Claytronics Transformer System
Fig. 8 shows an application of the transformer-type claytronics system: a programmable material defined as a group of small modular robots, each a few centimeters in size. These robots can communicate with each other through various types of sensors. Each unit of programmable matter is known as a catom, or claytronics atom.
Fig. 8. The application of the transformer-type claytronics system
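The shape-formation behaviour described in Sect. 2.2.3 can be sketched as a toy simulation: each catom repeatedly steps one grid cell toward an assigned target point. The grid-based, one-cell-per-step motion model is an invented stand-in for the magnetically or electrostatically driven motion of real catoms.

```python
# Toy sketch of catoms converging to target points on a 2D grid.
# Positions, targets and the motion model are illustrative
# assumptions, not the actual claytronics mechanism.

def step_toward(pos, target):
    """Move one grid cell along each axis toward the target."""
    x, y = pos
    tx, ty = target
    dx = (tx > x) - (tx < x)   # -1, 0 or +1 per axis
    dy = (ty > y) - (ty < y)
    return (x + dx, y + dy)

def form_shape(catoms, targets, max_steps=100):
    """Iterate until every catom has reached its assigned target."""
    for _ in range(max_steps):
        if catoms == targets:
            break
        catoms = [step_toward(p, t) for p, t in zip(catoms, targets)]
    return catoms

start   = [(0, 0), (5, 1), (2, 7)]
targets = [(3, 3), (3, 4), (4, 4)]   # desired overall shape
print(form_shape(start, targets))    # → [(3, 3), (3, 4), (4, 4)]
```

If the source object moves, the same loop can simply be re-run with updated targets, mirroring the broadcast-and-update behaviour described above.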
Each catom is a self-modular robot that contains its own computer. Catoms can move, communicate and interact with each other, together forming a self-contained robot. Furthermore, if one of the units in the ensemble is defective, the ensemble as a whole is not compromised: the damaged catom is simply removed and the whole unit works as before. Robots made of such catoms behave as if the robot could be disassembled into several units and reassembled into a completely different desired shape. Today, there are millions of self-modular robots that can combine on their own into different forms depending on the program.
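The fault-tolerance property just described can be sketched as follows: a catom flagged as defective is removed from the ensemble, and the remaining catoms keep operating unchanged. The ensemble representation below is an invented illustration, not an actual claytronics data model.

```python
# Sketch of catom fault tolerance: removing a defective unit
# leaves the rest of the ensemble working as before.
# The ensemble model below is an illustrative assumption.

def remove_defective(ensemble, defective_ids):
    """Drop defective catoms; the healthy ones are unaffected."""
    return [c for c in ensemble if c["id"] not in defective_ids]

ensemble = [{"id": i, "status": "ok"} for i in range(5)]
ensemble[2]["status"] = "defective"

healthy = remove_defective(ensemble, {2})
print([c["id"] for c in healthy])                 # → [0, 1, 3, 4]
print(all(c["status"] == "ok" for c in healthy))  # → True
```

The key design point is that no catom holds state the ensemble cannot live without, so removal requires no repair step.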
3 Conclusions
The real technological revolution is only beginning, and we can rejoice that we are both its witnesses and its participants. Emphasis is placed on the development of new concepts for the future:
– Regarding consistency and the perspective of modelling multidisciplinary cyber-mixmechatronic systems: models can be considered simplified representations of the real world, the coherence of such distributed design models can be verified automatically, and future challenges can thereby be addressed.
– Regarding the development of claytronics systems (e.g., giant, powerful and autonomous claytronics robots) for border protection against enemies: such robots could take the place of soldiers and save their lives, and claytronics technology can make this dream possible.
References
1. Gheorghe, G.: Ingineria Cyber-MixMecatronica si Clatronica. Cefin Publishing House, Bucharest, Romania (2017). ISBN 978-606-8261-26-3
2. Gheorghe, G.: From mechatronics and cyber-mixmechatronics to claytronics. Int. J. Model. Optim. 7(5) (2017). https://doi.org/10.7763/IJMO.2017.V7.598
3. Benoit, P., Julien, B.: Designing a quasi-spherical module for a huge modular robot to create programmable matter. Auton. Robots 42(8), 1619–1633 (2018). ISSN 1573-7527
4. Abdullah, A., Othon, M., Igor, P.: On efficient connectivity-preserving transformations in a grid. In: ALGOSENSORS 2020, pp. 76–91 (2020)
5. Gianlorenzo, D., Othon, M.: Algorithmic Aspects of Cloud Computing. LNCS, vol. 13084. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-19759-9. ISBN 978-3-030-93042-4
Author Index
A
Antosz, Katarzyna, 1
Apostolescu, Tudor Cătălin, 164
Aștilean, Adina, 24
Avram, Camelia, 24
Avram, Mihai, 132
B
Băcescu, Daniel, 164
Badea, Florentina, 292
Barbosa, Paulo, 280
Bastos, João A., 203
Belderrain, Mischel Carmen Neyra, 245
Bogatu, Lucian, 164
Bozhko, Tetiana, 143
Brito, Irene, 98
C
Cartal, Laurențiu Adrian, 164
Carvalho, Vítor, 280
Cerqueira, Christopher Shneider, 245
Charous, Zdenek, 269
Chetverzhuk, Taras, 143
Coanda, Philip, 132
Comeaga, Daniel, 132
Constantin, Victor, 132
Correia, Luis, 13
Cunha, Pedro, 280
D
da Silva, António Ferreira, 72
Dana, Alionte Andreea, 193
de Souza, Ygor Logullo, 245
dos Santos, Marcos, 245
F
Ferreira, Ana Rita, 203
Ferreira, José Soeiro, 257
Figliolini, Giorgio, 123
G
Gabriel, Alionte Cristian, 193
Gazdos, Frantisek, 269
Gheorghe, Gheorghe, 292
Gomes, Carlos Francisco Simões, 245
Gramescu, Bogdan, 132
H
Hell, Marko, 86
Hrybiuk, Olena, 216
I
Ionașcu, Georgeta, 164
K
Kozłowski, Edward, 1
Krol, Oleg, 111
Krzywy, Jacek Jan, 86
L
Leão, Celina P., 98
Lima, Margarida, 257
Lopes, Cristina, 257
Lopes, Didier R., 51
Lopes, Helena, 36
M
Machado, José, 143
Mano, Vasco Moço, 174
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. Machado et al. (Eds.): icieng 2022, LNME, pp. 303–304, 2022. https://doi.org/10.1007/978-3-031-09385-2
Mazurkiewicz, Dariusz, 1
Mesaroș, George, 24
Moreira, Miguel Ângelo Lellis, 245
N
Nelu, Staetu Gigi, 193
Nita, Emil, 132
Nunes, Ana C., 257
O
Oliveira, Cristina, 257
Ottaviano, Erika, 123
Öztürk, Elif, 257
P
Pata, Arminda, 155
Pereira, Filipe, 72
Perutka, Karel, 63
Pratas, Tiago E., 51
R
Ramalho, Armando, 13
Ramos, Filipe R., 51
Rocha, Pedro, 257
Rodrigues, Ana M., 257
Rodrigues, Matilde A., 98
S
Santos, Adriano A., 72
Santos, André S., 203
Sęp, Jarosław, 1
Silva, Agostinho, 155
Silva, Bruno Thiago Rego Valeriano, 245
Silva, Susana P., 36
Soares, Ângelo, 203
Soares, Filomena, 280
Sokolov, Volodymyr, 111
Sousa, Filipe, 257
Stanciu, Andreea, 164
Svirzhevskyi, Kostiantyn, 182
T
Teymourifar, Aydin, 232
Tkachuk, Anatolii, 182
Torres, Pedro, 13
Trokhymchuk, Ivanna, 182
V
Valentina, Negoita Alexandra, 193
Varela, Leonilde R., 203
Vedishcheva, Olena, 216
Vieira, Luís Almeida, 174
Z
Żabiński, Tomasz, 1
Zablotskyi, Valentyn, 182
Zabolotnyi, Oleg, 143, 182
Zaleta, Olha, 143