Digitizing Production Systems: Selected Papers from ISPR2021, October 07-09, 2021 Online, Turkey (Lecture Notes in Mechanical Engineering) 3030904202, 9783030904203

This book contains selected papers from the International Symposium for Production Research 2021 (ISPR2021), held online on October 7–9, 2021.


English · 916 pages · 2021


Table of contents:
Foreword
Preface
Digitizing Production Systems—Future of Manufacturing
Organization
Editors
Co-editors
Honorary Chairs
Symposium Chairs
International Honorary Committee
Organizing Committee
Scientific Committee
Reviewers
Aiming Higher in Conceptualizing Manageable Measures in Production Research
1 Introduction
2 Measurement, Property Rights and Transaction Costs
3 Relevant Advances in Measurement Theory and Practice
4 Standards, Exceptions to the Rule and Locally Situated Adaptive Capacities
5 Three Semiotic Levels of Complexity Packaged in Metrological Assemblages
6 Rasch’s Distinctions between Semiotic Levels in Measurement
7 Conclusion
References
AutoReman—The Dawn of Scientific Research into Robotic Disassembly for Remanufacturing
1 Introduction
2 AutoReman
3 Focus of AutoReman
4 Results
5 Conclusion
References
Contents
Artificial Intelligence Applications
A Comparative Analysis on Improving Covid-19 Prediction by Using Ensemble Learning Methods
1 Introduction
2 Related Work
3 Method
3.1 Data Understanding and Data Preparation
3.2 Modeling
3.3 Evaluation
4 Findings
5 Deployment: Covid-19 Prediction Application
6 Conclusions
References
Developing Intelligent Autonomous Vehicle Using Mobile Robots’ Structures
1 Introduction
2 The Current State
3 Characteristics of the Mock-Up
4 Conclusions
References
Digital-Body (Avatar) Library for Textile Industry
1 Motivation for Body-Size Library
1.1 Returns in Online Clothing Sales
1.2 Daily Fluctuations
1.3 Garment Sizing System Problem
1.4 Inside the Fight in the Fitting Room
2 Human Height
2.1 Body Growth
2.2 Body Sizes in the Broader Sense
3 State of the Art of Body Scanning Technologies
4 Body Dimension Extracted from 3-D Body Scans
5 Sizing and Design for Apparel Industry
6 Virtual Human Model
7 The Trend of Mass Customization in Dress Production
8 Garment Design and Virtual Assembly Application
9 Body Digital Library
9.1 Computer Vision Scenario for Body Library
9.2 Dynamic Models with Python – TensorFlow-Real World Scenarios
10 Dynamic Models with Python Control Flow and Important Features of Digital Body-Size Library
11 Conclusion
References
Stereoscopic Video Quality Assessment Using Modified Parallax Attention Module
1 Introduction
2 Proposed Model
2.1 Modified ASPP Module
2.2 PAM Module
2.3 Data Preparation
3 Datasets and Experiments
3.1 Datasets
3.2 Experiments
4 Results and Discussions
5 Conclusions
References
Decision Making
A Design of Collecting and Processing System of Used Face Masks and Contaminated Tissues During COVID-19 Era in Workplaces
1 Introduction
2 Background
3 Methodology
4 Application
5 Conclusions
References
Identification of Optimum COVID-19 Vaccine Distribution Strategy Under Integrated Pythagorean Fuzzy Environment
1 Introduction
2 The Proposed Methodology
3 Case Study
4 Conclusions
References
Implementation of MCDM Approaches for a Real-Life Location Selection Problem: A Case Study of Consumer Goods Sector
1 Introduction
2 Case Study
3 Conclusions
References
Energy Management
Applications and Expectations of Fuel Cells and Lithium Ion Batteries
1 Introduction
1.1 Working Principles and Types of Fuel Cells
1.2 Working Principle and Types of Li-ion Batteries
2 Comparison of Fuel Cell and LIB
2.1 Applications of Batteries and Fuel Cells
2.2 Specific Energies
2.3 Cost Analyses
2.4 Hydrogen Cost
2.5 Raw Materials
2.6 Manufacturing of LIBs and Fuel Cells
3 Conclusion
References
Integration of Projects Enhancing Productivity Based on Energy Audit Data in Raw Milk Production Sector — a Case Study in Turkey
1 Introduction
2 Material and Method
3 Results and Discussion
4 Conclusion
References
Healthcare Systems and Management
A Fuzzy Cognitive Mapping Approach for the Evaluation of Hospital’s Sustainability: An Integrated View
1 Introduction
2 Literature Review
3 Material and Methods
4 Conclusion
References
Digital Maturity Assessment Model Development for Health Sector
1 Introduction
2 Literature Review
3 Overview of DEMATEL
4 Digital Maturity Model Criteria for Health Sector
5 Results of DEMATEL
6 Discussion and Conclusion
Appendix
References
Using DfX to Develop Product Features in a Validation Intensive Environment
1 Introduction
2 Concept Development
3 Characteristics of the Mock-up
4 Conclusions
References
Industrial Applications
Collaborative Robotics Making a Difference in the Global Pandemic
1 Introduction
2 Covid-19 Pandemic and the Manufacturing Industry
3 Collaborative Robots in Manufacturing
4 Collaborative Robots Implementation Worldwide
5 Social Distancing and Collaborative Robots
6 Case Studies of Collaborative Robots During the Covid-19 Pandemic
7 Conclusions and Looking Forward
References
Determination of Strategic Location of UAV Stations
1 Introduction
2 Literature Review
3 Methodology
4 Application of the Model and Analysis Results
5 Conclusion and Future Research
References
Evaluation of Aluminium Alloy (AlSi9Cu3(Fe)) Porosity by Destructive and Non-destructive Method (Computed Tomography)
1 Introduction
2 Experimental Approach
3 Conclusion and Discussion
References
Improvement of Solid Spreader Blade Design Using Discrete Element Method (DEM) Applications
1 Introduction
2 Literature Review
3 Material and Method
4 Results and Discussion
5 Conclusion
References
Investigation of Influence by Different Segmentation Parameters on Surface Accuracy in Industrial X-ray Computed Tomography
1 Introduction
2 Literature Review
3 Materials and Methods
4 Results and Discussions
5 Conclusions
References
Product Development for Lifetime Prolongation via Benchmarking
1 Introduction
2 Processing of a Torque Hinge
3 Benchmarking Process
4 Benchmark Results and Discussions
5 Conclusions
References
Research into the Influence of the Plastic Strain Degree on the Drawing Force and Dimensional Accuracy of the Production of Seamless Tubes
1 Introduction
2 Experiment Description and Analysis of the Material of the Initial Tube (Semi-Product)
3 Experimental Investigation
4 Determination and Calculation of the Important Parameters in the Drawing Process
5 Evaluation of the Influence of Reduction on the Drawing Force and Dimensional Accuracy of Tubes
6 Conclusions
References
Industry Robotics, Automation and Industrial Robots in Production Systems
An Example of Merit Analysis for Digital Transformation and Robotic Process Automation (RPA)
1 Introduction
2 Literature Search for Robotic Process Automation and Merit Analysis
3 Merit Analysis Study for RPA
4 Conclusions
References
Assembling Automation for Furniture Fittings to Gain Durability and to Increase Productivity
1 Introduction
2 The Automation Assembly Cell Design and Mounting
3 Testing and Analysis
4 Discussions
5 Conclusions
References
Use of Virtual Reality in RobotStudio
1 Solution Description
2 Creation of Smart Component - Conveyor
3 Creation of the Tool
4 Creation of a Smart Component - Tool Changer
5 Creation of Points and Trajectory of the Path
6 Creation of the Rapid Control Program
7 Use of Virtual Reality in the RobotStudio Program
8 Robot Path Programming, Speed Adjustment, Zone and Robot Configuration in Virtual Reality
9 Conclusion - Vision of the Possibilities of Using Virtual Reality in the Future
References
Industry 4.0 Applications
Direct-Drive Integrated Motor/controller with IoT and GIS Management for Direct Sowing and High Efficiency Food Production
1 Introduction
2 GIS and Soil Mapping for Precision Agriculture
3 Motor Controller Challenges
4 Motor Design and Simulation
5 Electronic Controller and Internet of Things
6 Motor Testing
7 Conclusions
References
Evaluation of Fused Deposition Modeling Process Parameters Influence on 3D Printed Components by High Precision Metrology
1 Introduction
2 Experimental Setup
2.1 Test Part Geometry
2.2 Additive Manufacturing – Methods, Parameters
2.3 Measurement Methods
3 Results
3.1 Manufactured Parts
3.2 Measurement Results
3.3 CT Results
3.4 Regression Analysis
3.5 Surface Quality Analysis Using Focus Variation Microscopy
4 Conclusions
References
Implementation of Industry 4.0 Elements in Industrial Metrology – Case Study
1 Introduction
2 Machine Tools Accuracy
2.1 Geometric, Working and Manufacturing Accuracy
2.2 Factors Affecting the Manufacturing Accuracy of a Machine Tool
2.3 Machine Tool Measurement
2.4 Machine Tool Error Compensation Approaches
3 Description of the Experiment
4 UVSSR CELL and Implemented I4.0 Elements
4.1 UVSSR CELL Actual Layout
4.2 Automatic Cell Cycle Description
4.3 Virtual Commissioning of the Single Purpose Measuring Station
4.4 Data and Communication Structure
4.5 Cloud and Edge Computing
4.6 Virtual and Augmented Reality
5 Conclusion
References
Multi-criteria Decision Support with SWARA and TOPSIS Methods to the Digital Transformation Process in the Iron and Steel Industry
1 Introduction
2 Industry 4.0 and Digital Transformation
2.1 Industry 4.0 Technologies
3 Methodology
3.1 SWARA Method
3.2 TOPSIS Method
4 Multi-criteria Integration Assessment of Industry 4.0 Technologies in the Iron-Steel Industry
4.1 Determination of Evaluation Criteria
4.2 Identification of Alternatives
4.3 Determination of Criterion Weights by SWARA Method
4.4 Evaluation of Alternatives with the TOPSIS Method
5 Conclusions
References
Sensor Based Intelligent Measurement and Blockchain in Food Quality Management
1 Introduction
2 Method
3 Conclusions
References
Study of the Formation of Zinc Oxide Nanowires on Brass Surface After Pulse-Periodic Laser Treatment
1 Introduction
2 Results and Discussion
3 Conclusions
References
Lean Production
A New Model Proposal for Occupational Health and Safety
1 Introduction
2 Lean Philosophy and Occupational Health and Safety
3 A Lean Occupational Health and Safety Model
3.1 The Stages of the Model
3.2 Model Validation
4 Conclusions
References
An Integrated Value Stream Mapping and Simulation Approach for a Production Line: A Turkish Automotive Industry Case
1 Introduction
2 Literature Review
2.1 Lean Thinking
2.2 Lean Manufacturing
3 Methodology
3.1 Value Stream Mapping for the Company
3.2 Machine Downtime and Analysis of Machine Downtime Data
3.3 Simulation in ARENA Software
4 Conclusions
References
Eliminating the Barriers of Green Lean Practices with Thinking Processes
1 Introduction
2 Green Lean
3 Ways to Increase Success in Lean Green: Application of Thinking Processes
3.1 Current State Analysis
3.2 Root Cause Solutions with ECs
3.3 Solution Suggestions with FRT
4 Conclusions
References
Miscellaneous Topics
Competency Gap Identification Through Customized I4.0 Education Scale
1 Introduction
2 Competence Assessment in Industry 4.0 Environment
3 Methodology
3.1 Data Collection
4 Findings of Need Analysis
4.1 Collected Data
5 Responses Obtained
6 Scale Factors and Sub-Factors
7 Conclusion and Further Studies
References
Design of a Routing Algorithm for Efficient Order Picking in a Non-traditional Rectangular Warehouse Layout
1 Introduction
2 Problem Statement
3 Solution Approaches for Order Picking Problem
3.1 Modified S-Shape Heuristic
3.2 Modified Aisle-by-Aisle Heuristic
3.3 Modified Combined Heuristic
3.4 Genetic Algorithm
4 Storage Assignment Policies
5 Framework for Experimental Study
5.1 Warehouse Layout
5.2 Locations of Items in Pick-Lists Subject to Random & ABC Class-Based Storage Assignment Policy
5.3 Parameter Tuning for Genetic Algorithm
6 Experimental Results
7 Conclusion
References
Education in Engineering Management for the Environment
1 Introduction
2 Human Centered Systems and Mechatronics
3 Engineering Management at TU Wien
4 MSc in Engineering Management – A Closer Look
4.1 The History
4.2 Facts and Figures
4.3 Modules Taught in 2020–2021
5 MSc in Engineering Management and the Environment
6 Summary and Outlook
References
Hybrid Flowshop Scheduling with Setups to Minimize Makespan of Rubber Coating Line for a Cable Manufacturer
1 Introduction
2 Production Flow & Problem Definition
3 Mathematical Models
3.1 Mathematical Model for Identical Parallel Machines in Isolation Stage (Adapted from [4])
3.2 Mathematical Model for 2-Stage Hybrid Flowshop in Bunching & Sheathing Stages (adapted from [5])
3.3 NP-Hardness of the Problem
4 Implementation of Genetic Algorithm
5 Parameter Tuning
6 Verification of Genetic Algorithm with a Small Size Problem
7 Solving Actual Size Problem with Genetic Algorithm
8 Design of User Interface
9 Conclusion
References
Improving Surface Quality of Additive Manufactured Metal Parts by Magnetron Sputtering Thin Film Coating
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
References
Labor Productivity as an Important Factor of Efficiency: Ways to Increase and Calculate
1 Introduction
2 Importance of Increasing Labor Productivity, Influencing Factors and Sources of Reserves
2.1 Material and Technical Factors
2.2 Organizational Factors
2.3 Economic Factors
2.4 Social Factors
3 Some Points of Calculating the Level and Dynamics of Labor Productivity
4 Conclusions and Recommendation
References
Risk Governance Framework in the Oil and Gas Industry: Application in Iranian Gas Company
1 Introduction
2 Literature Review
3 IRGC Risk Governance Framework in South Pars Gas Company
3.1 Pre-assessment
3.2 Appraisal
3.3 Characterization and Evaluation
3.4 Management
3.5 Cross-Cutting Aspects of Risk Management
4 IRGC Risk Governance Framework in South Pars Gas Company
4.1 Calculate the Average Matrix
4.2 Calculate the Normalized Direct- Relation Matrix
4.3 Acquire the Total Relation Matrix
4.4 Calculate Fuzzy Weighted Supermatrix
4.5 Acquire Fuzzy ANP Weighs
5 Conclusions
References
Operations Research Applications and Optimization
A Nurse Scheduling Case in a Turkish Hospital
1 Introduction
2 A Nurse Scheduling Case in a Turkish Hospital
3 Literature Review
4 Mixed Integer Programming
5 Problem Definition
6 Methodology
7 Results
8 Conclusions
References
An Analytical Approach to Machine Layout Design at a High-Pressure Die Casting Manufacturer
1 Introduction
2 Methodology
2.1 Product Classification Using ABC Analysis
2.2 Machine Sequencing Using Hollier Method
2.3 Machine Layout Optimization via Mathematical Model
3 Implementation and Results
4 Conclusions
References
Hybrid Approaches to Vehicle Routing Problem in Daily Newspaper Distribution Planning: A Real Case Study
1 Introduction
2 Literature Review
3 Methodology
3.1 Data Collection and Preparation
3.2 K-Means Clustering Process
3.3 Proposed Model and Assumptions
3.4 Particle Swarm Optimization
3.5 Simulated Annealing Algorithm
4 Computational Results
5 Conclusion
Appendix-I
References
Monthly Logistics Planning for a Trading Company
1 Introduction
2 Problem Definition
3 Methodology
4 Data Collection
4.1 Assignment Problem
4.2 Vehicle Routing Problem
5 Implementation and Results
5.1 Current Situation and Proposed Solution
6 Conclusion
References
Solving the Cutting Stock Problem for a Steel Industry
1 Introduction
2 Problem Definition
3 Methodology
3.1 Mathematical Model
3.2 Data Collected
3.3 Product Requirements and Their Weights
3.4 Output Data
4 Conclusion
References
Quality Management
Comparison of Optical Scanner and Computed Tomography Scan Accuracy
1 Introduction
2 Experimental Investigation
3 Discussion and Conclusion
References
Researches Regarding the Development of a Virtual Reality Android Application Explaining Orientation Tolerances According to the Latest GPS Standards Using 3D Models
1 Introduction
2 Literature Review
3 Results
4 Conclusions
References
Simulation and Modelling
Comparing the Times Between Successive Failures by Using the Weibull Distribution with Time-Varying Scale Parameter
1 Introduction
2 One-Way ANOVA with Time-Varying Parameter Weibull Distribution
3 Simulation Study
4 Application
5 Conclusions
References
Modelling of a Microwave Rectangular Waveguide with a Dielectric Layer and Longitudinal Slots
1 Introduction
2 Statement and Solution of the Problem
3 Research Results
4 Conclusions
References
Supply Chain Management and Sustainability
A Literature Analysis of the Main European Environmental Strategies Impacting the Production Sector
1 Introduction
2 Literature Review
3 Methodology and Analysis
4 Conclusions
References
A Raw Milk Production Facility Design Study in Aydın Region, Turkey
1 Introduction
2 Literature Review
3 Methodology
4 Facility Plant Location Selection
5 Capacity Planning
6 Distribution Routes
7 Financial Analysis
8 Conclusions
References
Recent Developments in Supply Chain Compliance in Europe and Its Global Impacts on Businesses
1 Introduction
2 The New Supply Chain Compliance Due Diligence and Compliance Rules in Europe
3 The New Supply Chain Due Diligence and Compliance Rules in Germany
3.1 Aim of Legislation
3.2 Scope of Legislation
3.3 Obligations of Companies
3.4 Sanctions and Enforcement
4 Conclusion
References
Sustainable Factors for Supply Chain Network Design Under Uncertainty: A Literature Review
1 Introduction
2 Literature Review
3 Sub-factors of Sustainability
4 Conclusions and Future Research Directions
References
Capstone Projects
A Decision Support System for Supplier Selection in the Spare Parts Industry
1 Introduction
2 Literature Review and Modelling Perspective
3 Methodology: Analytical Hierarchy Process
4 Implementation of the Study
5 Discussions and Conclusion
References
A Discrete-Time Resource Allocated Project Scheduling Model
1 Introduction
2 Modelling and Solution Methodology
3 Heuristic Solution Methods
4 Decision Support System
5 Conclusion
References
An Optimization Model for Vehicle Scheduling and Routing Problem
1 Introduction
2 Literature Review
3 Problem Definition
4 Results
5 Conclusion
References
Applying Available-to-Promise (ATP) Concept in Multi-Model Assembly Line Planning Problems in a Make-to-Order (MTO) Environment
1 Introduction
2 Literature Review
3 Problem Formulation and Solution Methodology
4 Conclusions
References
Capacitated Vehicle Routing Problem with Time Windows
1 Introduction
2 Problem Definition
3 Literature Review
4 Modeling and Solution Methodology
5 Computational Results and Decision Support System
6 Conclusion
References
Designing a Railway Network in Cesme, Izmir with Bi-objective Ring Star Problem
1 Introduction
2 Literature Review
3 Problem Definition
4 Methodology
5 Results
6 Conclusion
References
Distribution Planning of LPG to Gas Stations in the Aegean Region
1 Introduction
2 Problem Definition
3 Literature Review
4 Modeling and Solution Methodology
5 Numerical Study and Discussion of Results
6 Decision Support System
7 Conclusion
References
Drought Modelling Using Artificial Intelligence Algorithms in Izmir District
1 Introduction
2 Materials and Methods
3 Computational Results and Decision Support System for Drought Modeling
4 Conclusion
References
Electricity Consumption Forecasting in Turkey
1 Introduction
2 Literature Review
3 Methodology
4 Conclusion
References
Forecasting Damaged Containers with Machine Learning Methods
1 Introduction
2 Literature Review
3 Problem Definition
4 Data
5 Methodology
6 Results
7 Conclusion
References
Logistics Service Quality of Online Shopping Websites During Covid-19 Pandemic
1 Introduction
2 Literature Review
3 Methodology and Data Analysis
4 Results
5 Conclusion
6 Limitations and Suggestions for Future Research
References
Optimal Inventory Share Policy Search for e-Grocery Food Supply Network
1 Introduction
2 Problem Definition
2.1 Assumptions
3 Literature Review
4 Modelling and Solution Methodology
4.1 Notations
4.2 Lateral Inventory Share Policies
4.3 Simulation Goal and Variables
5 Computational Results
5.1 Verification
5.2 Validation and Results
6 Conclusion
References
Parallel Workforce Assignment Problem for Battery Production
1 Introduction
2 Literature Review
3 Problem Definition
4 Simulation Model
5 Decision Support System and Computational Results
6 Conclusion
References
Production and Inventory Control of Assemble-to-Order Systems
1 Introduction
2 Literature Review
3 Problem Definition and Model
3.1 Features of the General Problem
3.2 Described Problems of Assemble-to-Order Systems
4 Solution Methods
4.1 Solution Approaches for Described Problems
4.2 Simulation
4.3 Genetic Algorithm
5 Computational Results
6 Conclusion
References
Shipment Planning: A Case Study for an Apparel Company
1 Introduction
2 Problem Definition
3 Literature Review
4 Modeling and Solution Methodology
4.1 Mixed Integer Linear Model
4.2 Heuristic Method
5 Computational Results
6 Decision Support System
7 Conclusion
References
Spare Parts Inventory Management System in a Service Sector Company
1 Introduction
2 Problem Definition
3 Literature Review
4 Solution Methodology
5 Computational Results
6 Conclusion
References
Storage and Order Picking Process Design for Perishable Foods in a Cold Warehouse
1 Introduction
2 System Analysis
3 Problem Definition
4 Literature Review
5 Modeling and Solution Methodology
6 Computational Results and Decision Support System
7 Conclusion
References
Uniform Parallel Machine Scheduling with Sequence Dependent Setup Times: A Randomized Heuristic
1 Introduction
2 Problem Description and Analysis
3 Heuristic Model
4 Conclusion
References
Vehicle Routing Problem with Multi Depot, Heterogeneous Fleet, and Multi Period: A Real Case Study
1 Introduction
2 Literature Review
3 Problem Definition
4 Results
5 Conclusion
References
Water Resource Management Using a Multiperiod Water Pricing Model in Izmir District
1 Introduction
2 Problem Definition and System Analysis
3 Literature Review
4 Water Pricing Model
5 Computational Results and Decision Support System for Pricing Model
6 Conclusion
References
Author Index


Lecture Notes in Mechanical Engineering

Numan M. Durakbasa · M. Güneş Gençyılmaz (Editors)

Digitizing Production Systems: Selected Papers from ISPR2021, October 07–09, 2021, Online, Turkey

Lecture Notes in Mechanical Engineering

Series Editors:
Francisco Cavas-Martínez, Departamento de Estructuras, Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain
Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia
Francesca di Mare, Institute of Energy Technology, Ruhr-Universität Bochum, Bochum, Nordrhein-Westfalen, Germany
Francesco Gherardini, Dipartimento di Ingegneria, Università di Modena e Reggio Emilia, Modena, Italy
Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia
Vitalii Ivanov, Department of Manufacturing Engineering, Machines and Tools, Sumy State University, Sumy, Ukraine
Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA
Justyna Trojanowska, Poznan University of Technology, Poznan, Poland

Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering. Topics in the series include:

• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology

To submit a proposal or request further information, please contact the Springer Editor of your location:
China: Ms. Ella Zhang at [email protected]
India: Priya Vyas at [email protected]
Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]
All other countries: Dr. Leontina Di Cecco at [email protected]

To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at http://www.springer.com/series/11693 or contact [email protected]

Indexed by SCOPUS. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/11236

Numan M. Durakbasa · M. Güneş Gençyılmaz

Editors

Digitizing Production Systems: Selected Papers from ISPR2021, October 07–09, 2021, Online, Turkey


Editors Numan M. Durakbasa Faculty of Mechanical and Industrial Engineering, Institute of Production Engineering and Photonic Technologies TU Wien (Vienna University of Technology) Vienna, Austria

M. Güneş Gençyılmaz Faculty of Engineering, Industrial Engineering Department İstanbul Aydın University İstanbul, Turkey

ISSN 2195-4356  ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-030-90420-3  ISBN 978-3-030-90421-0 (eBook)
https://doi.org/10.1007/978-3-030-90421-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Foreword

Dear Colleagues and Friends,

It is a great pleasure to welcome you to the “International Symposium for Production Research (ISPR 2021)”, Antalya/Turkey, from 7 to 9 October 2021, with the overall theme of “Digitizing Production Systems”. As in all previous years, we continue to cherish and benefit from our successful collaboration with the Society for Production Research, İstanbul, Turkey, in the organization of this event.

The purpose of the symposium is to bring together researchers, scientists and leading world experts from universities, companies, institutions, communities, associations and societies around the world to share ideas, experiences, research results and visions on production and operations management and technology. The symposium also provides a forum for experts and professionals to discuss the challenges and opportunities of, and to advance innovation on, the theme of this year’s symposium.

We hope that this once again inevitable “online” gathering will offer the participants an opportunity for a scholarly exchange of ideas and questions despite the absence of physical presence and face-to-face discussions. We wish you all a productive symposium, and we sincerely hope that this pandemic will be behind us soon and that we can all return to our regular format of conferences and symposia in the near future.

October 2021

Kurt Matyas
Vice-Rector for Academic Affairs
TU Wien


Preface

This book consists of the selected papers presented at the 21st International Symposium for Production Research (ISPR2021), held from 7 to 9 October 2021. While the original plan was to host the gathering in Antalya, Turkey, the continued threat of the COVID-19 pandemic compelled the organizers to convert it to a virtual symposium. Thanks to the rapid improvements in Internet technologies, we could easily switch our face-to-face meetings to virtual events. This technical infrastructure was provided by the Vice-Rectorate for Digitalisation and Infrastructure of TU Wien, and we are very grateful to Univ. Prof. Dipl.-Ing. Dr.techn. Dr.h.c.mult. Josef Eberhardsteiner, Vice-Rector, for his kind consideration and support.

This symposium was organized by the Society for Production Research, İstanbul, Turkey, and TU Wien, Austria, for the fifth year in a row. The generic theme of “Industry 4.0” was first adopted in the symposium held in 2016 and maintained in the following four symposia in 2017, 2018, 2019 and 2020, but each time with an updated emphasis on the relevant developments and progress on the various aspects of this theme. In other words, the symposia have maintained the same main theme since 2016 but studied a different aspect of production systems and production management in the subsequent years, with the specific purpose of drawing the attention of researchers to cases of Industry 4.0 applications in this new industrial era.

The world of science and technology is under the increasing influence of the requirements of Industry 4.0. Transition to a new era seems inevitable for every sector of industry. Given the importance of this theme, ISPR2021 hosted numerous distinguished speakers from both academia and industry to hear their views on the applications of Industry 4.0 to the various components of production systems.
A total of over 300 participants attended this year’s symposium—academics, practitioners and scientists from 20 countries, who contributed 17 invited talks and 73 papers on the plenary, special, workshop and ordinary sessions. The symposium

vii

viii

Preface

programme included keynote addresses (opening/closing session), breakout sessions and workshop discussions. This book contains 71 refereed selected papers in 14 categories shown in the contents of the book. We would like to express our gratitude to Vice-Rector Prof. Kurt Matyas, also Honorary Chairman of the Scientific Committee of this symposium, for his leadership and generous support. Our thanks also go to Prof. Christian Bauer, Dean of the Faculty of Mechanical and Industrial Engineering; Prof. Detlef Gerhard, Former Dean of the Faculty of Mechanical and Industrial Engineering; and Prof. Friedrich Bleicher, Head of the Institute for Production Engineering and Photonic Technologies, for their interest and support for this symposium. On a sad note, Prof. Dr. Ayhan Toraman, one of the Honorary Chairmen of the Scientific Committee and Co-founder of the Society for Production Research and the first President of the Society (2006), passed away on 11 February 2021. We will always remember and cherish his memory and his valuable scientific contributions and leadership. We would like to thank all the keynote and invited speakers whose contributions enhanced the success of the symposium. In organizing this event, our colleagues in Vienna and İstanbul contributed endless hours of hard work, energy and wisdom to make this event a success. On the Vienna side, our sincere thanks go to the staff of the Research Group Production Metrology and Adaptronic Systems of the Institute for Production Engineering and Photonic Technologies, in particular, to Dipl. Ing. Erol Güçlü. On the İstanbul side, we are very grateful to Prof. Dr. Hatice Camgöz Akdağ and Assis. Prof. Dr. Nükhet Tunçbilek who were involved in every aspect of the organization from the very beginning. Our special thanks go to Ms. Tuğçe Beldek, Ind. Eng., and Mr. Aziz Kemal Konyalıoğlu, Ind. Eng., our young colleagues, research assistants and Ph.D. candidates at İstanbul Technical University and Mr. 
Tufan Tunçbilek, Comp. Engineer from the Society, also research assistants at TU Wien, for their hard and dedicated work, and finally to Prof. Dr. Bahadır Tunaboylu, for his valuable role and efforts in identifying some of the invited speakers. We would like to express our gratitude to the board members of the Society for Production Research in Istanbul for their strong support and involvement in the successful organization of the symposium. Our very special thanks go to our colleagues and the participants of this symposium. Undoubtedly, they were the core component of this organization. We would like to recognize and thank our dear colleagues who graciously accepted to join the honorary and scientific committees or who served as peers in this event. Finally, no such event is possible without the generous support of patrons and sponsors. In this regard, we would like to thank Dr. Michael Ludwig, the Mayor of Vienna, for his continued generous support of this symposium series and all the corporations and individuals who provided invaluable financial and intellectual contributions.


And last but not least, we are grateful to Ms. Petra Jantzen, Editor from Springer Nature, for her competent guidance, professionalism and patience.

M. Güneş Gençyılmaz
Numan M. Durakbasa

Digitizing Production Systems—Future of Manufacturing

Due to the rapid development of technology, new demands are placed on products and production systems. As a result, products are becoming increasingly complex and customized. New developments in industry are leading to an increase in the level of automation and diversity of production, resulting in a new concept: Industry 4.0. The visionary concept of Industry 4.0 focuses on digitizing the value chain of a product in the entire production system and improving the productivity, effectiveness, efficiency, flexibility, accuracy and quality of production through a variety of new and advanced technologies and an intelligent and optimized production environment. The ongoing rapid development of information technology and global networks, as well as the convergence of virtual and real worlds, are the key factors in smart industrial production. Production processes will be designed dynamically and efficiently. The production of the future will be characterized by a fully integrated and highly flexible production chain with integrated sophisticated systems under the guidance of international standards.

Cooperation between next-generation manufacturers and technology developers has become an indispensable requirement for developing solutions to the trends of automation, IT and metrology in production environments. New forms of cooperation between research institutions and the manufacturing industry will provide development capacity for high-quality and innovative products. The development of the information highway, technology and advanced engineering data exchange techniques will make the transition to Industry 4.0 efficient and productive by means of a collaborative and interactive environment. On the basis of these technological developments, the manufacturing sector faces radical structural changes, with the digital transformation offering enormous opportunities but also presenting high-level challenges.
These innovative developments require industrial plants to undergo a digital transformation driven by smart technologies and connected devices. In the process, digitalization poses major challenges for production companies worldwide. This means that the digitizing

production system plays an essential role in the effective and efficient development of future industrial plants.

October 2021

M. Güneş Gençyılmaz
Numan M. Durakbasa

Organization

ISPR2021 was organized by TU Wien, Austria, and the Society for Production Research, Turkey. The symposium took place in Antalya, Turkey.

Editors

Numan M. Durakbasa, Austria
M. Güneş Gençyılmaz, Turkey

Co-editors

Peter Kopacek
Selim Zaim
Jorge Martin Bauer
Serpil Erol
Semra Birgün
Kemal Güven Gülen
Alptekin Erkollar
Mahmut Tekin
Jan Torgersen
Nükhet Tunçbilek

Honorary Chairs

Kurt Matyas, TU Wien, Austria
Ayhan Toraman, Society for Production Research, Turkey (Late)

Symposium Chairs

Numan M. Durakbasa, TU Wien, Austria
M. Güneş Gençyılmaz, İstanbul Aydın University, Turkey


International Honorary Committee

Albert Weckenmann, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Alice E. Smith, Auburn University, USA
Andrew Kusiak, The University of Iowa, USA
Christian Bauer, TU Wien, Austria
Daniela Popescu, Technical University of Cluj-Napoca, Romania
Detlef Gerhard, Ruhr-University Bochum, Germany
Duc Truong Pham, University of Birmingham, UK
Friedrich Bleicher, TU Wien, Austria
Lubomir Šooš, Slovak University of Technology, Slovakia
Sorin Popescu, Technical University of Cluj-Napoca, Romania
Stanislaw Adamczak, Kielce University of Technology, Poland
Vidosav Majstorovic, University of Belgrade, Serbia

Organizing Committee

Alp Baray, Turkey
Lukas Kräuter, Austria
Nükhet Tunçbilek, Turkey
David Riepl, Austria
Hatice Camgöz Akdağ, Turkey
Haluk Soyuer, Turkey
Hür Bersam Bolat, Turkey
Ece Soyuer, Austria
Jorge Bauer, Austria
Tuğçe Beldek, Turkey
Aziz Kemal Konyalıoğlu, Turkey
Liane Höller, Austria
Gökçen Baş, Austria
Gamze Uğur Tuncer, Austria
Selim Zaim, Turkey
Güneş Gençyilmaz, Turkey
Erol Güçlü, Austria
Semra Birgün, Turkey
Serpil Erol, Turkey
Berna Dengiz, Turkey
Kemal Güven Gülen, Turkey
Alptekin Erkollar, Turkey
Mahmut Tekin, Turkey
Eva Maria Walcher, Austria
Ferhan Çebi, Turkey
Dursun Delen, USA
Günther Poszvek, Austria
Şakir Esnaf, Turkey

This book was prepared for publication by Tuğçe Beldek, Turkey, and Aziz Kemal Konyalıoğlu, Turkey.

Scientific Committee

S. Adamczak, Kielce University of Technology, Poland
H. Camgöz Akdağ, İstanbul Technical University, Turkey
A. Akdoğan, Yıldız Technical University, Turkey
K. Altinel, Bosphorus University, Turkey
İ. Ar, Yıldırım Beyazıt University, Turkey
B. Baki, Karadeniz Technical University, Turkey
Ş. Baray, İstanbul University, Turkey
G. Baş, TU Wien, Austria
N. Başoğlu, İzmir Institute of Technology, Turkey
J. Bauer, National Technological University, Buenos Aires, Argentina
A. Baykasoğlu, Dokuz Eylül University, Turkey
E. Bayraktar, The American University of the Middle East, Kuwait
F. Berto, Norwegian University of Science and Technology, Norway
S. Birgün, Doğuş University, Turkey
P. Blecha, Brno University of Technology, Czechia
F. Bleicher, TU Wien, Austria
H. B. Bolat, İstanbul Technical University, Turkey
F. Bozbura, Bahçeşehir University, Turkey
İ. Böğrekci, Adnan Menderes University, Turkey
I. Budak, Faculty of Technical Sciences in Novi Sad, Serbia
A. Bulgak, Concordia University, Canada
İ. Çavuşoğlu, Marmara University, Turkey
L. A. Crisan, Universitatea Tehnica Cluj-Napoca, Romania
F. Cus, University of Maribor, Slovenia
F. Çebi, İstanbul Technical University, Turkey
D. Delen, Oklahoma State University, USA
M. Demirbağ, University of Essex, UK
P. Demircioğlu, Adnan Menderes University, Turkey
B. Dengiz, Başkent University, Turkey
T. Dereli, İskenderun University, Turkey
M. Dinçmen, İstanbul Technical University, Turkey
M. Dragomir, Technical University of Cluj-Napoca, Romania
Á. Drégelyi Kiss, Óbuda University, Hungary
N. Durakbaşa, TU Wien, Austria
B. Durmuşoğlu, İstanbul Technical University, Turkey
A. Erkollar, Sakarya University, Turkey
S. Erol, Gazi University, Turkey
Ş. Esnaf, İstanbul University, Turkey
S. Firat, Marmara University, Turkey
W. Fisher, University of California, Berkeley, USA
G. Gençyılmaz, İstanbul Aydın University, Turkey
D. Gerhard, Ruhr-University Bochum, Germany
S. Gözlü, İstanbul Technical University, Turkey
S. Grozav, Technical University of Cluj-Napoca, Romania
K. Gülen, Namık Kemal University, Turkey
A. Güngör, Pamukkale University, Turkey
E. Hekelová, City University Bratislava, Slovakia

F. Holesovsky, J. E. Purkyně University in Ústí nad Labem, Czech Republic
Z. Irani, University of Bradford, UK
V. Işler, Hasan Kalyoncu University, Turkey
C. Kahraman, İstanbul Technical University, Turkey
İ. Kara, Başkent University, Turkey
T. Katoka, Kindai University, Japan
M. Klumpp, Duisburg University, Germany
D. Kocaoğlu, Portland State University, USA
T. Koç, İstanbul Technical University, Turkey
P. Kopacek, TU Wien, Austria
M. Králik, Slovak University of Technology in Bratislava, Slovakia
L. Kräuter, TU Wien, Austria
C. Kubat, İstanbul Gelişim University, Turkey
U. Kula, The American University of the Middle East, Kuwait
A. Kulakli, The American University of the Middle East, Kuwait
J. Kundrak, University of Miskolc, Hungary
A. Kusiak, University of Iowa, USA
V. Majstorovic, University of Belgrade, Serbia
I. Mankova, Technical University of Kosice, Slovakia
A. Markopoulos, National Technical University of Athens, Greece
K. Matyas, Vienna University of Technology, Austria
M. Novák, J. E. Purkyně University in Ústí nad Labem, Czech Republic
Y. Omurtag, Robert Morris University, USA
N. Özçakar, İstanbul University, Turkey
D. Özdemir, Bilgi University, Turkey
A. Özok, Okan University, Turkey
E. Öztemel, Marmara University, Turkey
J. Peterka, Slovak University of Technology, Slovakia
D. Pham, University of Birmingham, UK
M. Pokusová, Slovak University of Technology in Bratislava, Slovakia
S. Popescu, Technical University of Cluj-Napoca, Romania
D. Popescu, Technical University of Cluj-Napoca, Romania
E. Pucher, TU Wien, Austria
A. Sağbaş, Namık Kemal University, Turkey
B. Sağbaş, Yıldız Technical University, Turkey
J. Sanz Juan, Polytechnic University of Valencia, Spain
A. Sharif, University of Bradford, UK
A. Smith, University of Auburn, USA

Ľ. Šooš, Slovak University of Technology in Bratislava, Slovakia
A. Sorguç, Middle East Technical University, Turkey
H. Soyuer, Ege University, Turkey
K. Stepien, Kielce University of Technology, Poland
P. Tamas, Budapest University of Technology and Economics, Hungary
B. Tan, Koç University, Turkey
M. Tanyaş, Maltepe University, Turkey
F. Taşgetiren, Yaşar University, Turkey
H. Taşkin, Sakarya University, Turkey
M. Tekin, Selçuk University, Turkey
J. Torgersen, Norwegian University of Science and Technology, Norway
O. Torkul, Yalova University, Turkey
G. Ulusoy, Sabancı University, Turkey
G. Varga, University of Miskolc, Hungary
K. Velíšek, Slovak University of Technology, Slovakia
J. Wang, National University of Singapore
A. Weckenmann, University of Erlangen-Nürnberg, Germany
M. Yalçintaş, İstanbul Commerce University, Turkey
M. Yenisey, İstanbul University, Turkey
M. E. Yurci, Yıldız Technical University, Turkey
S. Zaim, İstanbul Zaim University, Turkey
W. Zebala, Cracow University of Technology, Poland
M. Zerenler, Selçuk University, Turkey

Reviewers

Akdogan, A.; Aktin, T.; Baray, Ş.; Bauer, J.; Berto, F.; Beyca, Ö.; Börühan, G.; Budak, I.; Bulut, Ö.; Büyüksaatçi, S.; Camgöz-Akdağ, H.; Crisan, L.; Çebi, F.; Demircioglu, P.; Dragomir, M.; Dregelyi-Kiss, A.; Durakbasa, N.; Ekinci, E.; Erkollar, A.; Ersoy, P.; Esnaf, Ş.; Gençyılmaz, G.; Gergin, Z.; Gökçe, M.; Grozav, S.; Gülen, K.; Kabadurmuş, Ö.; Kandiller, L.; Kazançoğlu, Y.; Kopacek, P.; Králik, M.; Kräuter, L.; Majstrovic, V.; Maňková, I.; Mullaoğlu, G.; Novák, M.; Öner, A.; Öner, E.; Özgür, A.; Özkan, S.; Öztop, H.; Öztürkoğlu, Y.; Öztürkoğlu, Ö.; Paldrak, M.; Pokusová, M.; Soyuer, H.; Staiou, E.; Torgersen, J.; Tunçbilek, N.; Üney-Yüksektepe, F.; Varga, G.; Yetkin, B.; Zaim, S.

Aiming Higher in Conceptualizing Manageable Measures in Production Research

William P. Fisher, Jr. 1,2,3

1 Living Capital Metrics LLC, Sausalito, CA, USA
2 Research Institute of Sweden, Gothenburg, Sweden
3 BEAR Center, GSE, University of California, Berkeley, USA
wpfi[email protected]

Abstract. The economics of commercial production are dependent in several key respects on the market institutions that create the context for profitable transactions. Markets are not created as much by trade as by the institutions that structure the standards taken for granted in the background of agreements and contracts. Property rights, scientific rationality, access to capital, and transportation and communications networks are all essential to a functional economy. These key elements are integrated into today’s market institutions only for manufactured capital, however, as they are mistakenly deemed irrelevant or inaccessible to human, social and natural capital. Costs associated with measuring and managing human, social and natural capital are, then, minimized and externalized whenever possible. Contrary to common opinion, however, proven, well-documented and long-standing resources exist for bringing scientific rationality to bear on intangible assets in meaningful and useful ways. The role of measurement standards in creating common product definitions and enforceable property rights, and in lowering transaction costs—and so in creating economically effective market institutions—is well understood, but virtually no attention has been paid to opportunities for extending those definitions, rights and lower costs into the domains of intangible assets. One crucial aspect of the problem is the widespread but mistaken assumption that quantification inherently requires the homogenization and smothering of unique individual differences. On the contrary, standards are the means by which both global harmonization and mass customization become possible, as is eminently apparent in the way universally accessible musical scales and tuned instruments set the stage for creative improvisation. Everyday language provides the model prototype for metrology as well as for the complex interrelations of formal, abstract and concrete meanings that are deployed in measurement. 
Designing and implementing standardized communications media adaptable to local circumstances are inherently complex, but solutions have been available, tested and in use for decades. Multilevel semiotic systems for communicating and managing the development and growth of human, social and natural processes set the stage for new developments in production research.


Keywords: Measurement · Intangible assets · Market institutions · Mass customization · Standards

1 Introduction

The “Eureka!” moment of individual creative insight remains the widely assumed model of innovation despite the repeatedly established fact that evolutionary leaps in business development follow entirely from integrations of diverse and previously disconnected groups [1–3]. Thus, the simultaneity of democratic, economic, industrial, and scientific revolutions in Western Europe in the late eighteenth and early nineteenth centuries was not a coincidence. Each sphere of action co-evolved in tandem with the others. The introduction of the metric system, for instance, was explicitly motivated by political concerns for fair taxation and trade, as well as by science’s need for comparable quantities. “Just as the French Revolution had proclaimed universal rights for all people, the savants argued, so too should it proclaim universal measures” [4] (p. 3). Absent the complementary and interdependent effects of efforts in all three of these domains (political, economic, and scientific), the advances in overall quality of life achieved globally over the last 200 years and more are inconceivable. Today’s urgent needs to improve on those advances in more sustainable political and economic regimes require a new iteration on the previous instance of co-evolving domains.

The history and philosophy of science have documented in considerable detail how new technologies and new forms of social organization co-evolve [5–12]. This body of work is useful in envisioning how advances in the measurement and management of human, social and natural capital must entail transformed forms of social organization if their potential for informing sustainable policies and practices is to be realized. The technical complexities of established measurement models and methods [13–17] constitute a paradigm shift in how quantification is conceived; enacting this shift requires new forms of social and economic organization.
After spelling out the role of measurement in establishing common product definitions, property rights and lower transaction costs, the primary features of rigorously defined, meaningful and useful quantification will be described. In closing, some possibilities for future development are taken up.

2 Measurement, Property Rights and Transaction Costs

In an early contribution to the theory of institutional economics and the understanding of how markets are not defined solely by the exchange activities of buyers and sellers, North [18] (pp. 18–19, 36) points out that:

One must be able to measure the quantity of a good in order for it to be exclusive property and to have value in exchange. Where measurement costs are very high, the good will be a common property resource. The technology of measurement and the history of weights and measures is a crucial part of economic history since as measurement costs were reduced the


cost of transacting was reduced. ...so long as some characteristic of a good that has economic value is not measured, then there is divergence between private and social cost.

Advances in the science of measurement ought, then, to feed back on economics. The U.S. National Institute of Standards and Technology periodically documents the economic impact of improved measurement precision. Local measurement costs are reduced to negligible levels by means of standards that are themselves quite expensive, necessitating large investments on the part of society as a whole. Continuous quality improvements in product design leverage common definitions to make markets predictable years in advance, as the example of Moore’s law demonstrates [19]. Unfortunately, probabilistic measurement approaches are so rarely understood that their potential for informing whole new classes of economic domains in this way remains woefully underdeveloped [20]. Motivations for taking up the task of understanding and leveraging advances in measurement follow from the realization that, in accord with North’s observations:

Economic theory suggests that changes in transaction costs have a first-order impact on the production frontier. Lower transaction costs mean more trade, greater specialization, changes in production costs, and increased output [21] (p. 370).

Given the role of measurement as a crucial factor in the institutional arrangements that make market exchanges possible, it would then seem incumbent upon the field of production research to ask:

• What is the state of the art in the measurement of human, social and natural capital?
• Is there any hope that measurement can provide information of a high enough quality to lower transaction costs and confer property rights in these domains?

With these questions in mind, we will now turn to some of the features of probabilistic measurement that ought to be much more widely known and applied.
Not only are shared standards for quality-assured metrologically traceable quantities a viable goal, but they also do not entail the mechanized, manipulative and reductionist forms of control and objectivity often taken for granted as necessary to measurement.

3 Relevant Advances in Measurement Theory and Practice

The public, as well as large portions of those who consider themselves experts in measurement research and practice, are generally not aware of many of the features of measurement models commonly employed in psychology and the social sciences. For instance:

• The mathematical structures informing probabilistic models for log-interval measurement are the same as those informing the measurement of basic physical quantities (length, mass, charge, etc.), as illustrated in Fig. 1 [14, 17, 22–26].


Fig. 1. Same mathematical form of models across the sciences [14]

• Mathematical proofs show when and how test and survey data are both necessary and sufficient for estimating interval measures [27, 28].
• Historical and conceptual connections exist between psychometric and econometric approaches to forming identified models, those that retain their properties across data sets and so provide a reliable basis for informing policy and practice [29].
• Experimental evidence demonstrates the reproduction of physical units of measurement from ordinal observations, as shown in Fig. 2 [30, 31].

Fig. 2. Centimetres reproduced from ordinal ratings [30]

• Applications of metrological and psychological measurement models to the same data give the same results, as shown in Fig. 3 [32].
• Log-interval metric standards are already in use across a wide range of fields (pH acidity, decibel loudness, the Richter scale, information, etc.).
• Dramatic improvements in measurement quality stand to be made not only in social and psychological domains but also in various areas of the natural sciences that have not yet taken advantage of advanced modelling methods, such as clinical chemistry and vision science [33–37].

Aiming Higher in Conceptualizing Manageable Measures in Production Research

xxiii

Fig. 3. Common results from engineering and Rasch methods applied to counting data [32]

• Metrologists describe probabilistic measurement models as:
  – “…paradigmatic of measurement” [25].
  – “…not simply a mathematical or statistical approach, but instead a specifically metrological approach to human-based measurement” [38].
• Joint meetings of metrologists and psychometricians have been sponsored by IMEKO since 2008, with the 2016 IMEKO TC 1-7-13 Joint Symposium held at the University of California, Berkeley (Fig. 4) having nearly equal numbers of participants from each group [39, 40].

Fig. 4. IMEKO 2016 TC 1-7-13 Joint Symposium, University of California, Berkeley [39, 40]

• Item banking, adaptive instrument administration and theory-based automatic item generation are technical means for customizing the content of questions asked in the course of making measurements without compromising the unit of comparison [41–44].
• Tens of millions of measurements are made annually of reading comprehension, health status and other variables at high enough levels of reliability and precision [44–46] to support studies of the inferential fit-for-purpose stability required for substantiating legally defensible property rights and pricing mechanisms.
• Mass customized implementations of measurements in education and health care combine specific, qualitative recommendations and individual-level instructional or clinical feedback with standardized quantitative expressions [47–49]; these tools will be essential to the management and growth of personally owned shares of human, social and natural capital stocks.
• Thousands of publications in peer-reviewed scientific journals document the development and use of probabilistic models for measurement in research and practice.

Fig. 5. Theoretical versus empirical text complexity for 719 children’s magazine articles [44]

Fig. 6. Reading growth curves for 15 cohorts of North Carolina students, 1995–2014 [45]

Metrological networks of instruments traceable to unit standards have been of decisive importance in the efficiency and effectiveness of scientific and economic markets. These networks refine and extend everyday language by mimicking its projection of shared, collectively determined conceptual ideals that are embodied in words used to negotiate local meanings.
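The probabilistic measurement models discussed in this section are typified by Rasch's dichotomous model, in which the probability of an affirmative response depends only on the difference between a person measure and an item calibration. The following minimal Python sketch (the function name and the example values are illustrative assumptions, not drawn from the cited references) shows the property that makes such models yield interval, log-odds units rather than merely ordinal scores:

```python
import math

def rasch_probability(theta: float, delta: float) -> float:
    """Dichotomous Rasch model: probability of a correct/affirmative
    response from a person with measure theta to an item with
    calibration delta, both expressed in logits (log-odds units)."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# The model depends only on the difference (theta - delta), so equal
# differences anywhere on the scale give equal probabilities -- the
# invariance that makes the logit an interval unit.
p1 = rasch_probability(theta=1.0, delta=0.0)   # 1 logit above the item
p2 = rasch_probability(theta=2.5, delta=1.5)   # also 1 logit above

# The log-odds of a response recover the person-item difference,
# turning ordinal (right/wrong) observations into interval measures.
difference = math.log(p1 / (1.0 - p1))  # = theta - delta = 1.0
```

Fitting such a model to a matrix of ordinal responses is what calibrates items to a common unit; once calibrated, any subset of items (as in item banking and adaptive administration) estimates measures in that same unit.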


Fig. 7. Repeated measures of individual student reading growth trajectory targeting desired outcomes [44]

4 Standards, Exceptions to the Rule and Locally Situated Adaptive Capacities

Over the course of the second half of the twentieth century, philosophers and historians of science [9, 10, 50–57] contradicted accepted perspectives and provoked explorations in new directions. One of the puzzling consequences that followed from these explorations was the realization that science does not measure to discover laws; rather, the laws must already be in hand for measurements to be made. Quantification is not, it turns out, the only, or even the most important, feature of measurement and mathematical thinking [58, 59]. Shared measurement standards and quantity definitions are, of course, essential to communication, collaboration and the creation of profitable competitive markets. Practical applications of new advances would be impossible in the absence of metrological quality-assured alignments and coordinations. But contrary to unexamined assumptions, useful measurement standards adapt to or reveal unique local circumstances; they do not necessarily conceal or ignore them. Standards contextualize exceptions to the rule in two ways, one general and the other specific.

Display of Anomalies across Observations

First, as Kuhn [55] (p. 205) put it,

To the extent that measurement and quantitative technique play an especially significant role in scientific discovery, they do so precisely because, by displaying significant anomaly, they tell scientists when and where to look for a new qualitative phenomenon. To the nature of that phenomenon, they usually provide no clues. When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search.


Similarly, Cook, writing in 1914, saw that laws are “the instrument of science; not its aim” [60] (p. 428). All learning proceeds from what is already known, and uses the conformity or the contrast between what is expected and what is observed to comprehend something new. In like fashion, scientific advances often take place when consistent anomalies and perturbations contradicting expectation occur in the context of shared standards. The process at this general level is one of realizing that the exception to the rule is an answer to a question that has not yet been asked. Most such anomalies are then ignored as mere noise until their repeated occurrence motivates a new way of looking and thinking. As Cook, Kuhn and Rasch all independently note, Neptune, for instance, was discovered because multiple repeated measurements partially conforming to and partially contradicting expectation displayed a consistently proportionate departure from the model of planetary motion [14, 55, 60]. As Kuhn [55] says, though Neptune, like Uranus, might have been discovered by accident, it is hard to imagine how phenomena like electron spin and the neutrino could have been discovered except by means of persistently observed anomalies. Similar examples of unexpected observations leading to what were ultimately highly useful products include the glue that would not stick (Post-it Notes), disappearing electromagnetic effects (X-rays), rubber hardened when accidentally left on a warm stove (vulcanization), etc.

Display of Anomalies within Observations

The second, more specific, way measurement standards contextualize unique local circumstances focuses on adaptations that facilitate practical accommodations while also remaining generally communicable.
Social and historical studies of science in action have documented the seemingly paradoxical need for practical systems and products to encompass both formal, computational demands for communicable generality, and informal, workplace demands for locally unique adaptations [12, 61, 62]. Unique characteristics can be perceived, after all, only in contrast to a general expectation: “we can but use the instruments we have” [60] (p. 431). This kind of simultaneously abstract and concrete phenomenon is taken for granted as an everyday commonplace, but also reveals important complexities in how standards function. For instance, clocks and calendars formatted in common terms make it easy for globally distributed participants to schedule online meetings at convenient times and to show up when everyone else does. But that general navigability has nothing at all to do with the specific circumstances accompanying the experience of the same time in different places on the globe or at different times of year. Two locations at the same longitude but separated by a great distance into northern and southern latitudes entail very different kinds of consequences for relevant activities. A picnic dinner on a sunny beach in swimwear may be an attractive option in Santiago, Chile, for an early evening in January, while, at exactly the same time on the same day in Boston, one would plainly prefer a sweater by a warm fire in a cozy den. The lived experience of time obviously varies subtly day by day and place by place, and we accommodate local variations to the point that no one remarks much even on differences in when dawn and dusk take place on opposite ends of a given time zone at the same latitude.


The same kind of unique qualitative variation occurs as a consequence of shared standards for the tuning of musical instruments. Far from resulting in a boring and lifeless homogenization, universally distributed uniform standards for musical scales have instead played important roles in repeated explosions of creativity. The ease with which different instruments are harmonized is accentuated by the associated ways creative individuals introduce noticeable amounts of dissonance that colour their personal statements in immediately recognizable ways. Standardized media not only structure creativity in the writing and performance of music, but also inform the music industry’s broader creativity in producing and distributing musical recordings. In the same way that musicians, recording and acoustic engineers, and industry legal counsel, accountants and investors all leverage technical standards, so, too, might educators, students, clinicians, employees, patients, environmentalists, activists and many others also all be situated in contexts enabling them to succeed as new kinds of entrepreneurs. That is, given the achievement of the reading, organizational and social skills needed to function at a given job description’s level of expertise, and given measurements calibrated in a quality-assured quantity value, questions need to be raised concerning the form of the legal, financial and communications standards through which property rights, lower transaction costs and efficient markets might be obtained. But anomalous exceptions to the rule are not just ancillary by-products of standards; in complex ways, they are integral to the creation and functioning of standards.

5 Three Semiotic Levels of Complexity Packaged in Metrological Assemblages

The objectivity of data as the focus of descriptive models is still widely assumed to be the sine qua non criterion of scientific conduct. Systematic study, however, has led historians and philosophers of science to the conclusions that science is inherently participatory, and that the role of data is complemented by equally important roles played by instruments calibrated to uniform, quality-assured standards, and by explanatory theory [9–11, 50–53, 57, 62–67]. Contrary to Kuhn’s emphasis on the influence of paradigms as resulting primarily in convergent thinking among scientists, closer consideration suggests that “a spectrum of convergent, divergent, and reflective modes of thought may instead be a more appropriate indicator of collective intelligence and thus the healthy functioning of a scientific field” [65] (p. 1365). Because of the way that language serves as the medium of thought, the ultimately sociolinguistic nature of collective intelligence marks it as fundamentally semiotic.

In his intensive ethnographic study of communities of theoreticians, instrument makers and experimentalists in microphysics, Galison [66, 67], for instance, had expected to find developments to be fairly well coordinated across areas, even if they did not occur in rigid lockstep. What he found, however, was that these groups of practitioners separated themselves into distinct communities with largely incommensurable beliefs. “Each subculture has its own rhythms of change, each

xxviii

W. P. Fisher

Fig. 8. Continuum of field-defining activities [65]

has its own standards of demonstration, and each is embedded differently in the wider culture of institutions, practices, inventions, and ideas” [67] (p. 143). Galison then proposes an open-ended, three-part model allowing each community partial autonomy in relations of rough parity in which none have a special priority (Fig. 9). Anthropologically, Galison argues the situation is akin to the ways in which culturally incompatible, or warring, groups will interact economically by setting up neutral trading zones. In the same way that the objects traded may have markedly different valuations ascribed to them by different cultures, so, too, does an intercalated periodization of relations between communities of theoreticians, instrumentalists and experimentalists in microphysics also allow them to agree to disagree. And so, Galison [66] (p. 46) says,

Fig. 9. Intercalated periodizations of relations across microphysics communities [66]

Theorists and experimenters, for example, can hammer out an agreement that a particular track configuration found on a nuclear emulsion should be identified with an electron and yet hold irreconcilable views about the properties of the electron, or about philosophical interpretations of quantum field theory, or about the properties of films.

Galison [66] (pp. 46, 49) continues, admitting that “at first blush, representing meaning as locally convergent and globally divergent seems paradoxical”. But observing the process in practice leads to the semiotic realization that people

have and exploit an ability to restrict and alter meanings in such a way as to create local senses of terms that speakers of both ‘parent’ languages recognize as intermediate between the two. The resulting pidgin or creole is neither absolutely dependent on nor absolutely independent of global meanings.… It seems to be a part of our general linguistic ability to set broader meanings aside while regularizing different lexical, syntactic, and phonological elements to serve a local communicative function. So too does it seem in the assembly of meanings, practices, and theories within physics.

Independently derived variations on Galison’s linguistic point have been proposed by other researchers conducting social studies of science and also by developmental psychologists and by philosophers. The somewhat divergent and somewhat convergent character of what Galison calls the trading zone is also featured, for instance, in Star’s notion of the boundary object [12, 61–62, 68]. Boundary objects [62] (p. 392):

• “…are objects which are both plastic enough to adapt to local needs and the constraints of several parties employing them, yet robust enough to maintain a common identity across sites”.
• “…are weakly structured in common use, and become strongly structured in individual site use”.
• “…may be abstract or concrete”.
• “…have different meanings in different social worlds but their structure is common enough … to make them recognizable, a means of translation”.

Returning to the theme of how cross-sector partnerships are essential to applying standards in complementary scientific, legal, financial and communications domains, “The creation and management of boundary objects is a key process in developing and maintaining coherence across intersecting social worlds” [62]. Figure 10 illustrates the relationships of various stakeholders allied across social and economic boundaries by means of their translations of ostensibly shared standards. Each allied sector can communicate with the others only by passing through the passage points in which standards are translated by means of consensus processes.

Fig. 10. Alliances, obligatory passage points, and boundary objects [62]

This hierarchy of increasingly complex levels of meaning and social organization is, as Galison suggests, not found only in the organization of science but is a characteristic of language in general. Concurring with Galison, in a review of a book on creating scientific concepts [69], Lakoff, a philosopher and linguist known for his work on metaphor, wrote: “It should be obvious. Scientists are human beings and their scientific theories reflect normal human mechanisms of thought”. Einstein [70] (p. 290) similarly held that “The whole of science is nothing more than a refinement of everyday thinking”. And Bohr [71] (pp. 187–188) also said, “Ultimately, we human beings depend on our words. We are hanging in language…. We are suspended in language in such a way that we cannot say what is up and what is down”. The simultaneously concrete and abstract nature of linguistic objects that are alternately weakly or strongly structured is also implicated in Ricoeur’s [72] philosophy of narrative identity, where the stories we tell about ourselves are similarly cast as “variations on an invariant”. And psychological theories of skill development [73] and hierarchical complexity [74] addressing how hidden background assumptions become cognitively integrated objects of conceptual operations also document the simultaneity of the general and the specific [75].

As the medium of thought, language functions as the prototype metrological system [76] (p. 11), and semiotics becomes the discipline through which the creation of meaning is theorized and practised [77–80]. And so, in this context, Golinski [81] (p. 35) remarks that “Practices of translation, replication, and metrology have taken the place of the universality that used to be assumed as an attribute of singular science”. It should be noted that, in Galison’s [66] view, allies do not so much translate between their different languages as negotiate new common languages (pidgins) when they need to communicate. Each sector, however, has to work out replications of the ostensibly shared object in its own semiotic terms. In this process, metrology plays a key role. In a ten-page section (pp. 247–257) of his 1987 book, Science in Action, Latour [9] expands on the “paramount importance of metrology”, saying:

• “Every time you hear about a successful application of a science, look for the progressive extension of a network”.
• “The predictable character of technoscience is entirely dependent on its ability to spread networks further”.
• “Facts and machines are like trains, electricity, packages of computer bytes or frozen vegetables: they can go everywhere as long as the track along which they travel is not interrupted in the slightest. This dependence and fragility is not felt by the observer of science because ‘universality’ offers them the possibility of applying laws of physics, of biology, or of mathematics everywhere in principle. It is quite different in practice” [i.e. you cannot phone someone who does not have a phone; you cannot demonstrate Ohm’s law without a voltmeter, wattmeter, and ammeter; you cannot fix a machine without access to tools; try telling the time without a clock].
• “In all these mental experiments you will feel the vast difference between principle and practice, and that when everything works according to plan it means that you do not move an inch out of well-kept and carefully sealed networks”.
• “Metrology is the name of this gigantic enterprise to make of the outside a world inside which facts and machines can survive”.


• “Scientists build their enlightened networks by giving the outside the same paper form as that of their instruments inside. [They can thereby] travel very far without ever leaving home”.
• “There is a continuous trail of readings, checklists, paper forms, telephone lines, that tie all the clocks together. As soon as you leave this trail, you start to be uncertain about what time it is, and the only way to regain certainty is to get in touch again with the metrological chains. Physicists use the nice word constant to designate these elementary parameters necessary for the simplest equation to be written in the laboratories. These constants, however, are so inconstant that the US, according to the National Bureau of Standards, spends 6 per cent of its Gross National Product, that is, three times what is spent on R & D, just to maintain them stable!”
• “That much more effort has to be invested in extending science than in doing it may surprise those who think it is naturally universal”.

We must, of course, disagree with Latour concerning his assumption that metrology confers certainty, while still accepting the validity of his overriding point as to the immense value obtained from quality-assured traceability to standards. Securing this value, as Latour notes, is incredibly expensive. But because those investments lower transaction costs and support legally defensible property rights, they pay significant returns [82–88]. It may be that, beyond the difficulty of communicating these complex ideas (which must include how they relate to the poetic and metaphorical [89–91]), the primary challenge will be figuring out how to organize ecological economies putting the multilevel metrology of human, social and natural capital to work.

6 Rasch’s Distinctions between Semiotic Levels in Measurement

Though he does not approach the matter systematically, Rasch nonetheless provides useful orientations to each of the three levels of semiotic meaning in language. Our task is one of balancing and separating these in the course of defining and operationalizing translations, replications and the metrology of boundary objects for human, social and natural capital. The intercalated periodizations of communities of theoreticians, instrumentalists and experimentalists will likely also characterize the somewhat convergent and somewhat divergent relations of researchers in psychology, the social sciences and other areas in which probabilistic models for measurement and associated metrological standards are relevant.

Semiotics in Everyday Language, in Science and in Typical Psychometrics

In the semiotics of everyday language, the concept of the “chair” corresponds with various languages’ words (“chair”, “椅子”, “la silla”, “крісло”, “ ”) and the infinite variety of actual chairs in the world. Even for something as relatively simple and concrete as a chair, it is immediately apparent that no real chair could possibly embody every aspect of the “chair” ideal. Some chairs have four legs and others, three, two or one; some have a back, and others do not; etc. The concrete meaning of the abstract word “chair” representing a formal ideal “chair” has to be worked out in the moment of use. The way words and images are suspended between formal ideals and concrete things is central to learning how to imagine new ways of managing what we measure. No one ever grasps the perfect form of an ideal state, nor is anyone unaware of that ideal when grappling with concrete situations. Words embody and represent ideals in ways that are endlessly adaptable to the infinite variation of unique local circumstances. In each instance of their use in conversation, standardized pronunciations, alphabets, grammars, vocabularies, semantics and syntaxes mediate social intercourse to conceive, gestate, midwife and nurture living meaning. The use of standardized words and semantics in language does not reduce what counts as something to some narrowly defined ideal. Instead, standards emerge as products of repeatedly observed things and are used as media for revealing contrasts between the formal universal ideal and the concrete local thing. Even something as everyday and simple as a chair never embodies all of the features of an ideal chair, but people know what we mean even when we say “Pull up a chair” and everyone is sitting on the ground.

Science extends the semiotic triangle of idea–word–thing into the theory–instrument–data relation. Scientific research investigates empirical patterns in repeatably observable things and learns how to synthesize them. When an object of investigation acts as an agent convincingly provoking agreement across observers as to its status as a real thing in the world, it is transformed into a product of the observers’ agreement expressed as a standard way of recognizing and communicating it [53, 92].
The “metre” then functions as a theoretical ideal never observed in the real world (the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second). The “metre” instrumental to measurement is embodied in a readable technology or inscription device traceable to a defined quantity value via a quality-assured system of replications. And the metre measured as the length of a distance along the ground is the datum of experience.

Social and psychological measurement (psychometrics) as typically practised with scores from tests and surveys, however, disconnects from the semiotics of meaning usually taken for granted in the use of language [93]. Here, no effort is invested in relating an unrealistic theory to a standardized representation relevant to the infinite array of all possible concrete instances of the object involved. Instead, the work of naming a repeating pattern is cut short. Instead of setting the goal of explaining and understanding a phenomenon well enough to reproduce it from theoretical specifications, and instead of setting up number words that retain their meaning across instruments and data sets, scores are accepted as inherently tied to and dependent on the local particulars of the questions asked and the persons responding. The patterns that are sought in the data are statistical, and consistencies are expressed only in descriptive statistical terms as functions of variation and significance tests. The fact that these numbers change in unpredictable and unreliable ways across data sets is accepted as an unavoidable fact of life. This counterproductive perspective is unnecessary and wasteful.

Rasch’s Semiotics

The Danish mathematician and measurement theoretician Rasch, together with his students and colleagues, has elaborated a quite different perspective. Rasch was in a direct line of intellectual descent from James Clerk Maxwell. Maxwell’s method of analogy is now widely recognized as an important factor in his scientific creativity [60, 94], as it also proved to be in Rasch’s. As recounted in detail in [29], in 1935, Rasch spent time in Oslo studying with Ragnar Frisch, forming a long-standing association with Tjalling Koopmans and making the acquaintance of Koopmans’s advisor, Jan Tinbergen. Tinbergen had studied physics in Leiden with Paul Ehrenfest, a student of Ludwig Boltzmann. Boltzmann considered Maxwell’s epistemological ideas as significant as his physics, and so expounded, developed and admired them over the entire course of his career [95] (p. 136). Ehrenfest followed suit, and Tinbergen conceived the possibility of mathematical modelling in economics on the basis of Maxwell’s work [96] (p. 24). Implicitly taking up Maxwell’s method of analogy, Rasch [14] (pp. 110–115) explicitly relates the form of his model for measurement to the form of Maxwell’s analysis of Newton’s second law [97]. His analogy is to say that force is to mass and acceleration as reading comprehension is to reader ability and text difficulty. Rasch readily accepts the fact that this model asserts an unrealistic ideal, saying:

To take a parallel from elementary physics: A “mathematical pendulum” is defined as “a heavy point, swinging frictionless on a weightless string in vacuum”. A contraption like that was never seen; thus as a model for the motion of a real pendulum, it is “unrealistic” [98].

Rasch then also recognizes, just as Boltzmann did [96] (p. 28), that the point is not that the ideal model should be true, but that it should be useful [14] (pp. 37–38). The fact that the laws projected via measurements as the semiotic media of understanding are unrealistic idealizations of geometric linearity is well recognized by historians of science [99, 100]. As Butterfield [99] (pp. 16–17) put it,

the law of inertia is not the kind of thing you would discover by mere photographic methods of observation—it required a different kind of thinking-cap, a transposition in the mind of the scientist himself; for we do not actually see ordinary objects continuing their rectilinear motion in that kind of empty space.... we do not in real life have perfectly spherical balls moving on perfectly smooth horizontal planes—the trick lay in the fact that it occurred to Galileo to imagine these.
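Rasch’s analogy can be written out explicitly. In the standard multiplicative presentation of the model (a sketch of the textbook formulation, not a quotation from Rasch’s text), Newton’s second law gives acceleration as the ratio of force to mass, while the Rasch model gives the odds of person v succeeding on item i as the ratio of ability to difficulty:

```latex
% Newton's second law in ratio form
A_{vi} = \frac{F_i}{M_v}
% Rasch's parallel: the odds of a correct response
\frac{P_{vi}}{1 - P_{vi}} = \frac{\xi_v}{\delta_i},
\qquad \xi_v = e^{\beta_v},\quad \delta_i = e^{d_i},
% so that, in logit form,
P_{vi} = \frac{e^{\beta_v - d_i}}{1 + e^{\beta_v - d_i}}
```

The structural point of the analogy is that, as with force, mass and acceleration, any two of the three terms determine the third, so that ability and difficulty parameters can be separated and estimated independently of one another.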

Just as Galileo, Newton, Maxwell and others “were discussing not real bodies as we actually observe them in the real world but geometrical bodies moving in a world without resistance and without gravity”, so, too, do Boltzmann and Rasch realize that we act on, share and make use of an unrealistic model as the formal conceptual aspect of a medium of thinking. The value obtained in imagining these idealizations was definitively articulated by Kant [101] (p. 20), who pointed out that


reason has insight only into that which it produces after a plan of its own, and that it must not allow itself to be kept, as it were, in nature’s leading-strings, but must itself show the way with principles of judgment based upon fixed laws, constraining nature to give answer to questions of reason’s own determining.

Like Kuhn, Cook and Rasch, Butterfield and Kant hold that measurements do not accumulate into evidence of lawful regularities; rather, measurement is itself already structured in the form of lawful regularity. Because we are attending to all three of the points of the semiotic triangle in relation to one another, idealization at the level of the formal statement of the model is balanced by and separated from its representation in an abstract standard and its applicability in concrete situations. The practical value of the modelled regularity comes to bear, then, in the guidance it provides for theory development and for understanding what is measured.

Rasch encouraged his son-in-law, Prien, to develop a theory of a mathematics ability construct useful in constructing tests [102]. Wright [103] remarks on not having understood what Prien was trying to do when first encountering the effort in the 1960s in Denmark, but said it began to make sense to him later in the light of Stenner’s [44–46] analogous results from a predictive theory of reading comprehension (see Fig. 5). Clearly articulating what it is that makes items exhibit the properties they do is highly useful in efforts aimed at producing statistically identical measurements from different instruments [44, 104, 105]. The practical value of theory is extended by the efficiency it brings to instrument design and assembly. Automatic item generation, whether validated by theory or experiment, or by inferences as to how behaviours implicate responses to unasked questions [43, 106–109], entails very large reductions in the costs associated with instrument design and assembly [110].
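The claim that different instruments can yield statistically equivalent measures can be illustrated with a toy computation. The sketch below assumes the dichotomous Rasch model and two hypothetical instruments whose item difficulties are already fixed in a common logit metric (the item sets and ability value are illustrative, not drawn from the cited studies): expected raw scores on the two instruments differ markedly, yet inverting each instrument’s expected-score function recovers the same measure.

```python
import math

def p_correct(ability, difficulty):
    """Dichotomous Rasch model: probability of success, parameters in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def expected_score(ability, difficulties):
    """Expected raw score on an instrument = sum of success probabilities."""
    return sum(p_correct(ability, d) for d in difficulties)

def measure_from_score(score, difficulties, lo=-6.0, hi=6.0):
    """Invert the (monotone) expected-score function by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if expected_score(mid, difficulties) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Two hypothetical instruments calibrated in a common logit metric.
easy_items = [-2.0, -1.0, -0.5, 0.0, 0.5]
hard_items = [0.5, 1.0, 1.5, 2.0, 2.5]
true_ability = 1.0

s_easy = expected_score(true_ability, easy_items)
s_hard = expected_score(true_ability, hard_items)
b_easy = measure_from_score(s_easy, easy_items)
b_hard = measure_from_score(s_hard, hard_items)

# Raw scores are instrument-dependent; the recovered measures are not.
print(f"raw scores: {s_easy:.2f} vs {s_hard:.2f}")
print(f"measures:   {b_easy:.2f} vs {b_hard:.2f}")  # both 1.00
```

The same logic underlies instrument equating in practice: once item difficulties are fixed in a shared metric, whether by empirical calibration or by theoretical prediction, any calibrated subset of items reports on that metric.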
Rasch [111] anticipated the semiotic role of the instrument, saying,

With all of this available to us, we will have an instrumentarium with which many kinds of problems in the social sciences can be formulated and handled with the same types of mathematical tools that physics has at its disposal—without it becoming a case of superficial analogies.

Wright, the leading exponent advancing Rasch’s work [112–113], agreed, saying, “Science is impossible without an evolving network of stable measures” [17] (p. 33). Science can hardly be said to exist if research does not result in new things being brought into language by means of conceptually determined and meaningful representations.

7 Conclusion

Probabilistic models for log-interval measurement have been extensively investigated, tested, documented and applied in a new class of quantitative theory, methods and instruments. The proven science available in this body of work comprises an untapped resource relevant to the production needs of a wide range of industries. Just as Galileo’s capacity to imagine a geometry of perfectly spherical balls rolling on frictionless planes led to the emergence of a new science of physics, so, too, will Rasch’s capacity to imagine a geometry of measured constructs affected by nothing but probabilistic differences between abilities and difficulties lead to the emergence of a new science of complex stochastics. In the light of that science, the role of measurement in the establishment of common product definitions, legally defensible property rights and lower transaction costs will come to the fore in new ways. Barriers to the application of advances in measurement theory and practice seem to be functions largely of unexamined assumptions that prevent imaginative visualizations of opportunities that may be highly profitable, both financially and substantively, in the fulfilment of human potential, the richness of community life and the enhancement of environmental quality [75, 93]. Though the challenges are truly monumental, not trying the semiotic keys we hold in the locks on the chains that bind us would be even more catastrophic and tragic than failing in the effort.

Acknowledgment. Though no funding directly supported production of the results reported here, thanks are due to a wide range of colleagues and organizations who have contributed to advancing the cause, most recently Jan Morrison and the Research Institute of Sweden.

References

1. Hargadon, A.: How Breakthroughs Happen: The Surprising Truth About How Companies Innovate. Harvard Business School Press, Cambridge (2003)
2. Kuenkel, P.: J. Corp. Citizensh. 58, 119–136 (2015)
3. Russell, M.G., Smorodinskaya, N.V.: Technol. Forecast. Soc. Change 136, 114–131 (2018)
4. Alder, K.: The Measure of All Things: The Seven-Year Odyssey and Hidden Error That Transformed the World. The Free Press, New York (2002)
5. Flichy, P.: Understanding Technological Innovation: A Socio-Technical Approach. Edward Elgar, Northampton (2007)
6. Jasanoff, S.: States of Knowledge: The Co-Production of Science and Social Order. International Library of Sociology. Routledge, New York (2004)
7. Murray, F.: Res. Policy 31(8–9), 1389–1403 (2002)
8. Hutchins, E.: Philos. Psychol. 27(1), 34–49 (2014)
9. Latour, B.: Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press, New York (1987)
10. Latour, B.: We Have Never Been Modern. Harvard University Press, Cambridge (1993)
11. Latour, B.: Reassembling the Social: An Introduction to Actor-Network-Theory. Clarendon Lectures in Management Studies. Oxford University Press, Oxford (2005)
12. Bowker, G., Timmermans, S., Clarke, A.E., Balka, E. (eds.): Boundary Objects and Beyond: Working with Leigh Star. MIT Press, Cambridge (2015)
13. Fisher, W.P., Jr., Wright, B.D. (eds.): Int. J. Educ. Res. 21(6), 557–664 (1994)
14. Rasch, G.: Probabilistic Models for Some Intelligence and Attainment Tests. Danmarks Paedogogiske Institut, Copenhagen (1960). Reprint, with Foreword and Afterword by B.D. Wright. University of Chicago Press, Chicago (1980)


15. Wilson, M.R.: Constructing Measures: An Item Response Modeling Approach. Lawrence Erlbaum Associates, Mahwah (2005)
16. Andrich, D., Marais, I.: A Course in Rasch Measurement Theory: Measuring in the Educational, Social, and Health Sciences. Springer, Cham (2019). https://doi.org/10.1007/978-981-13-7496-8
17. Wright, B.D.: Educ. Meas. Issues Pract. 16(4), 33–45, 52 (1997)
18. North, D.C.: Structure and Change in Economic History. W. W. Norton & Co., New York (1981)
19. Miller, P., O’Leary, T.: Account. Organ. Soc. 32(7–8), 701–734 (2007)
20. Fisher, W.P., Jr., Stenner, A.J.: Fundamentals of measurement science. In: International Measurement Confederation (IMEKO) TC1-TC7-TC13 Joint Symposium, Jena, Germany, 2 September 2011
21. Benham, A., Benham, L.: In: Ménard, C. (ed.) Institutions, Contracts and Organizations: Perspectives from New Institutional Economics, pp. 367–375. Edward Elgar, Cheltenham (2000)
22. Narens, L., Luce, R.D.: Psychol. Bull. 99(2), 166–180 (1986)
23. Luce, R.D., Tukey, J.W.: J. Math. Psychol. 1(1), 1–27 (1964)
24. Pendrill, L.: Quality Assured Measurement: Unification Across Social and Physical Sciences. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28695-8
25. Cano, S., Pendrill, L., Melin, J., Fisher, W.P., Jr.: Measurement 141, 62–69 (2019)
26. Mari, L., Wilson, M., Maul, A.: Measurement Across the Sciences. Springer Series in Measurement Science and Technology. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-65558-7
27. Andersen, E.B.: Psychometrika 42(1), 69–81 (1977)
28. Fischer, G.H.: Psychometrika 46(1), 59–77 (1981)
29. Fisher, W.P., Jr.: J. Interdisc. Econ. 1–32 (2021). OnlineFirst
30. Stephanou, A., Fisher, W.P., Jr.: J. Phys. Conf. Ser. 459 (2013)
31. Pelton, T., Bunderson, V.: J. Appl. Meas. 4(3), 269–281 (2003)
32. Pendrill, L., Fisher, W.P., Jr.: Measurement 71, 46–55 (2015)
33. Fisher, W.P., Jr., Burton, E.: J. Appl. Meas. 11(2), 271–287 (2010)
34. Powers, M., Fisher, W.P., Jr.: J. Phys. Conf. Ser. 1065(132009) (2018)
35. Powers, M., Fisher, W.P., Jr.: Measurement: Sensors (2021, in press)
36. Camargo, F.R., Henson, B.: J. Phys. Conf. Ser. 459(1) (2013)
37. Massof, R.W., McDonnell, P.J.: Investig. Ophthalmol. Vis. Sci. 53(4), 1905–1916 (2012)
38. Pendrill, L.: NCSLi Meas. J. Meas. Sci. 9(4), 22–33 (2014)
39. Wilson, M., Fisher, W.P., Jr.: J. Phys. Conf. Ser. 772(1), 011001 (2016)
40. Wilson, M., Fisher, W.P., Jr.: Measurement 145, 190 (2019)
41. Choppin, B.: Nature 219, 870–872 (1968)
42. Wright, B.D., Bell, S.R.: J. Educ. Meas. 21(4), 331–345 (1984)
43. Barney, M., Fisher, W.P., Jr.: Annu. Rev. Organ. Psychol. Organ. Behav. 3, 469–490 (2016)
44. Stenner, A.J., Fisher, W.P., Jr., Stone, M.H., Burdick, D.S.: Front. Psychol. Quant. Psychol. Meas. 4(536), 1–14 (2013)
45. Williamson, G.L.: Cogent Educ. 5(1464424), 1–29 (2018)
46. Fisher, W.P., Jr., Stenner, A.J.: Measurement 92, 489–496 (2016)
47. Chien, T.W., Linacre, J.M., Wang, W.C.: Examining student ability using KIDMAP fit statistics of Rasch analysis in Excel. In: Tan, H., Zhou, M. (eds.) Advances in Information Technology and Education. CCIS, vol. 201, pp. 578–585. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22418-8_80
48. Fisher, W.P., Jr., Oon, E.P.-T., Benson, S.: Educ. Des. Res. 5(1), 1–33 (2021)
49. Wright, B.D., Mead, R., Bell, S.R.: BICAL: Calibrating Items with the Rasch Model. MESA Research Memorandum 23C. University of Chicago, Statistical Laboratory, Department of Education, Chicago, 170 pp. (1980)


50. Ackermann, J.R.: Data, Instruments, and Theory: A Dialectical Approach to Understanding Science. Princeton University Press, Princeton (1985)
51. Harding, S.: Whose Science? Whose Knowledge? Thinking from Women’s Lives. Cornell University Press, Ithaca (1991)
52. Heelan, P.A.: Space Perception and the Philosophy of Science. University of California Press, Berkeley (1983)
53. Ihde, D.: Instrumental Realism: The Interface Between Philosophy of Science and Philosophy of Technology. The Indiana Series in the Philosophy of Technology. Indiana University Press, Bloomington (1991)
54. Kuhn, T.S.: The Structure of Scientific Revolutions. University of Chicago Press, Chicago (1970)
55. Kuhn, T.S.: The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press, Chicago (1977)
56. Toulmin, S.E.: The Philosophy of Science: An Introduction. Hutchinson’s University Library, New York (1953)
57. Toulmin, S.E.: Crit. Inq. 1, 93–111 (1982)
58. Fisher, W.P., Jr.: In: Morales, A. (ed.) Renascent Pragmatism: Studies in Law and Social Science, pp. 118–153. Ashgate Publishing Co., Brookfield (2003)
59. Mari, L., Maul, A., Irribara, D.T., Wilson, M.: Measurement 100, 115–121 (2016)
60. Cook, T.A.: The Curves of Life. Dover, New York (1914/1979)
61. Star, S.L., Ruhleder, K.: Inf. Syst. Res. 7(1), 111–134 (1996)
62. Star, S.L., Griesemer, J.R.: Soc. Stud. Sci. 19(3), 387–420 (1989)
63. Bud, R., Cozzens, S.E. (eds.): Invisible Connections: Instruments, Institutions, and Science. SPIE Institutes, vol. 9. SPIE Optical Engineering Press, Bellingham (1992)
64. Hankins, T.L., Silverman, R.J.: Instruments and the Imagination. Princeton University Press, Princeton (1999)
65. Woolley, A.W., Fuchs, E.: Organ. Sci. 22(5), 1359–1367 (2011)
66. Galison, P.: Image and Logic: A Material Culture of Microphysics. University of Chicago Press, Chicago (1997)
67. Galison, P.: In: Biagioli, M. (ed.) The Science Studies Reader, pp. 137–160. Routledge, New York (1999)
68. Fisher, W.P., Jr., Wilson, M.: Pensamiento Educativo: Revista de Investigación Educacional Latinoamericana 52(2), 55–78 (2015)
69. Nersessian, N.J.: Creating Scientific Concepts. MIT Press, Cambridge (2008)
70. Einstein, A. (1936): In: Seelig, C., et al. (eds.) Ideas and Opinions, pp. 290–323. Bonanza Books, New York (1954)
71. Petersen, A.: Quantum Physics and the Philosophical Tradition. MIT Press, Cambridge (1968)
72. Ricoeur, P.: In: Wood, D. (ed.) On Paul Ricoeur: Narrative and Interpretation, pp. 188–199. Routledge, New York (1991)
73. Fischer, K.W., Farrar, M.J.: Int. J. Psychol. 22(5–6), 643–677 (1987)
74. Commons, M.L.: World Futures J. New Paradigm Res. 64, 305–320 (2008)
75. Fisher, W.P., Jr.: Sustainability 12(9661), 1–22 (2020)
76. Weitzel, T.: Economics of Standards in Information Networks. Physica-Verlag, New York (2004)
77. Pattee, H.H.: Universal principles of measurement and language functions in evolving systems. In: Casti, J.L., Karlqvist, A. (eds.) Complexity, Language, and Life: Mathematical Approaches. Biomathematics, vol. 16, pp. 268–281. Springer, Heidelberg (1986). https://doi.org/10.1007/978-3-642-70953-1_10


78. Pattee, H.H., Raczaszek-Leonardi, J.: Laws, Language and Life: Howard Pattee’s Classic Papers on the Physics of Symbols with Contemporary Commentary. In: Barbieri, M., Hoffmeyer, J. (eds.) Biosemiotics, vol. 7. Springer, New York (2012). https://doi.org/10.1007/978-94-007-5161-3
79. Brier, S.: Integr. Rev. 9(2), 220–263 (2013)
80. Maran, T.: Sign Syst. Stud. 35(1/2), 269–294 (2007)
81. Golinski, J.: Osiris 27(1), 19–36 (2012)
82. Gallaher, M.P., Rowe, B.R., Rogozhin, A.V., Houghton, S.A., Davis, J.L., Lamvik, M.K., Geikler, J.S.: Economic Impact of Measurement in the Semiconductor Industry, 07-2, 191 pp. National Institute for Standards and Technology, Gaithersburg (2007)
83. National Institute for Standards and Technology: Assessing Fundamental Science: A Report from the Subcommittee on Research, Committee on Fundamental Science. National Standards and Technology Council, Washington, DC (1996)
84. National Institute for Standards and Technology: Outputs and Outcomes of NIST Laboratory Research. Gaithersburg, MD (2009)
85. Swann, G.M.P.: John Barber’s Pioneering Work on the Economics of Measurement Standards. School of Business, University of Nottingham, 2 December 2005
86. Ashworth, W.J.: Science 306(5700), 1314–1317 (2004)
87. Barber, J.M.: In: Dobbie, R., Darrell, J., Poulter, K., Hobbs, R. (eds.) Review of DTI Work on Measurement Standards, Annex 5. Department of Trade and Industry, London (1987)
88. Barzel, Y.: J. Law Econ. 25, 27–48 (1982)
89. Fisher, W.P., Jr.: Metaphors and measurement: an invited symposium on validity. In: Maul, A. (ed.) International Meeting of the Psychometric Society, Lincoln, Nebraska, 12 July 2012
90. Fisher, W.P., Jr.: Theory Psychol. 13(6), 753–790 (2003)
91. Fisher, W.P., Jr.: Metaphor as measurement, and vice versa: convergence and separation of figure and meaning in a Mawri proverb. Social Science Research Network (2011)
92. Wise, M.N. (ed.): The Values of Precision, pp. 352–361. Princeton University Press, Princeton (1995)
93. Fisher, W.P., Jr.: Symmetry 13(1415) (2021)
94. Nersessian, N.J.: In: Malament, D. (ed.) Reading Natural Philosophy: Essays in the History and Philosophy of Science and Mathematics, pp. 129–166. Open Court, LaSalle (2002)
95. Boumans, M.: In: De Marchi, N. (ed.) Non-Natural Social Science: Reflecting on the Enterprise of “More Heat Than Light”, pp. 131–156. Duke University Press, Durham (1993)
96. Boumans, M.: How Economists Model the World into Numbers. Routledge, New York (2005)
97. Fisher, W.P., Jr., Stenner, A.J.: In: Zhang, Q., Yang, H. (eds.) Pacific Rim Objective Measurement Symposium 2012 Conference Proceedings, pp. 1–11. Springer-Verlag, Berlin (2013)
98. Rasch, G.: Rasch Meas. Trans. 24(4), 1309 (1973/2011)
99. Butterfield, H.: The Origins of Modern Science (Revised Edition). The Free Press, New York (1957)
100. Burtt, E.A.: The Metaphysical Foundations of Modern Physical Science (Revised Edition). Doubleday Anchor, Garden City (1954)
101. Kant, I.: Critique of Pure Reason (Unabridged) (Translated by Smith, N.K.). St. Martin’s Press, New York (1965)
102. Prien, B.: Stud. Educ. Eval. 15, 309–317 (1989)
103. Andrich, D.: In: Linacre, J.M. (ed.) Rasch Measurement Transactions, Part 1, pp. 1–4. MESA Press, Chicago (1995)
104. Melin, J., Cano, S., Pendrill, L.: Entropy 23(2), 212 (2021)
105. De Boeck, P., Wilson, M. (eds.): Explanatory Item Response Models. Springer-Verlag, New York (2004). https://doi.org/10.1007/978-1-4757-3990-9


106. Attali, Y.: In: International Conference on Artificial Intelligence in Education, pp. 17–29. Springer, Cham (2018)
107. Bejar, I., Lawless, R.R., Morley, M.E., Wagner, M.E., Bennett, R.E., Revuelta, J.: J. Technol. Learn. Assess. 2(3), 1–29 (2003)
108. Embretson, S.E.: Psychometrika 64(4), 407–433 (1999)
109. Gierl, M.J., Haladyna, T.M.: Automatic Item Generation: Theory and Practice. Routledge, New York (2012)
110. Kosh, A., Simpson, M.A., Bickel, L., Kellog, M., Sanford-Moore, E.: Educ. Meas. Issues Pract. 38(1), 48–53 (2019)
111. Rasch, G.: Rasch Meas. Trans. 24(1), 1252–1272 (1972/2010)
112. Fisher, W.P., Jr.: In: Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J.W., Williams, R. (eds.) SAGE Research Methods Foundations. Sage Publications, Thousand Oaks (2020)
113. Wilson, M., Fisher, W.P., Jr.: Psychological and social measurement: the career and contributions of Benjamin D. Wright. In: Cain, M.G., Rossi, G.B., Tesai, J., van Veghel, M., Jhang, K.-Y. (eds.) Springer Series in Measurement Science and Technology. Springer Nature, Cham (2017)

AutoReman—The Dawn of Scientific Research into Robotic Disassembly for Remanufacturing

D. T. Pham1, R. J. Cripps1, M. Castellani1, K. Essa1, M. Saadat1, C. Ji1, S. Z. Su1, J. Huang1, Y. Wang1, M. Kerin1, Zude Zhou2, Quan Liu2, Wenjun Xu2, Yilin Fang2, Jiayi Liu2, Ruiya Li2, Yongquan Zhang2, Yuanjun Laili3, and F. Javier Ramirez4

1 University of Birmingham, Birmingham, UK
2 Wuhan University of Technology, Wuhan, China
3 Beihang University, Beijing, China
4 University of Castilla-La Mancha, Albacete, Spain

Abstract. Product disassembly is usually the first step in remanufacturing, a process in which used products are returned to a good-as-new condition with a guarantee at least equal to that for the original product. How disassembly is undertaken can affect the efficiency and capability of remanufacturing. Because it is complex, disassembly tends to be performed manually and is labour intensive. Efforts have been made to introduce intelligent robots into disassembly to make remanufacturing "smarter". Like other applications of Industry 4.0 automation technologies, this should lead to increased productivity, reduced failure rates, more resource-efficient operations and improved product quality and working environments. This paper covers research conducted as part of the 5-year Autonomous Remanufacturing (AutoReman) research programme funded by the UK Engineering and Physical Sciences Research Council at the University of Birmingham. The aim of the work, which investigates disassembly science, devises intelligent disassembly strategies and plans, and develops human–robot collaborative disassembly techniques, is to allow disassembly to be performed reliably either with minimal human intervention or collaboratively by man and machine.

Keywords: Remanufacturing · Disassembly · Disassembly automation · Disassembly planning · Robots · Cobots · Human–robot collaboration

1 Introduction

Remanufacturing is "the process of returning a used product to at least OEM original performance specification from the customers' perspective and giving the resultant product a warranty that is at least equal to that of a newly manufactured equivalent" (Ijomah 2002). Remanufacturing is a sizable industry. For example, in


the USA, there are more than 73,000 companies engaged in remanufacturing. They employ 350,000 people and have turnovers totalling $53 billion (Ortegon et al. 2014). Remanufacturing can be more sustainable than manufacturing de novo "because it can be profitable and less harmful to the environment …" (Matsumoto and Ijomah 2013). Several industry sectors have reported substantial energy savings and CO2 emission reductions (up to 83% and 87%, respectively, for the automotive sector; Ortegon et al. 2014). In addition to benefits relating to energy consumption and the environment, there are other practical reasons for a company to remanufacture its products: high demand for spare parts, brand protection from independent operators and long lead times for new components (Seitz 2007). A key step in remanufacturing is disassembly of the "core", the returned product to be remanufactured. In many ways, disassembly is more challenging than assembly to robotize due to variability in the condition of the core (Vongbunyong et al. 2015): unlike assembly of new or remanufactured products, which is deterministic because the components to be assembled are of known geometries, dimensions and states, disassembly is stochastic, as it has to contend with used products of uncertain shapes, sizes and conditions. Thus, disassembly tends to be carried out manually and is very labour intensive, given the complexity of the operations involved. Before AutoReman, a project formally titled "Robotic disassembly as a key enabler of autonomous remanufacturing", there had been relatively little research into automating disassembly. Early efforts include those of Hesselbach et al. (1994), who studied the automatic dismantling of printed circuit boards to recover valuable materials and components for reuse. Bueker et al. discussed the adoption of machine vision to recognize objects (bolts and wheels) during disassembly (Bueker et al.
1999) and to control an autonomous station for removing wheels from used cars (Bueker et al. 2001). Eckterth et al. (1998) and Feldmann et al. (1998) highlighted a key benefit of intelligent and innovative disassembly strategies in recycling, namely the complete elimination of the cost of manufacturing recovered components that are in a directly reusable condition. Work on automatic, semi-autonomous and autonomous disassembly was described in Torres et al. (2004, 2009), Vongbunyong et al. (2013, 2015) and Barwood et al. (2015). The Torres and Vongbunyong teams focused on dismantling electronic equipment (PCs and LCD screens), their motivation presumably being the need for industry to comply with regulations governing waste electrical and electronic equipment (e.g. the WEEE Directive 2013). Barwood et al. also considered the disassembly of various automotive components with electronic control units to recover critical materials. Machine vision was again proposed as the main sensing modality, although the Torres team also discussed combining visual and force feedback to track disassembly trajectories (Gil et al. 2007). A common feature of the research reported prior to AutoReman is that it tended to be very much ad hoc and empirical, not supported by a strong underpinning theory or scientific basis. No research group had considered developing a scientific


approach to disassembly problems akin to that successfully employed by pioneers of assembly research. To be able to design autonomous or semi-autonomous systems for robustly handling the uncertainties inherent in disassembly, a fundamental understanding of disassembly was required. Such an understanding did not exist, and there had been no research directed at building a knowledge base on disassembly and then applying it systematically to design and build disassembly systems for autonomous remanufacturing.

2 AutoReman

The five-year AutoReman project started in 2016 with a detailed investigation of disassembly processes in order to derive the necessary fundamental understanding. The aim was to use the acquired basic process knowledge methodically to create plans, models, algorithms and tools to enable robotic systems to carry out disassembly with minimal human intervention or in a collaborative fashion by man and machine. In many situations, robots will function autonomously alongside people who will perform tasks that are too difficult or too costly to robotize. In other cases, robots will assist human operatives, as equal co-workers or as subordinates to humans. AutoReman developed learning and sensing tools and investigated collaborative working methods to enable these different scenarios. The goal was to enable the cost-effective robotization of a critical step in remanufacturing. This would unlock the potential of the process and make it feasible for many more companies to adopt, helping to expand the UK remanufacturing industry, which was valued at £2.35 billion in 2009 (Resource Recovery Forum 2010). AutoReman fitted within the Manufacturing the Future strategic theme of its funding agency, the Engineering and Physical Sciences Research Council (EPSRC). With its focus on developing novel robotic disassembly systems for remanufacturing and recycling applications, AutoReman was well aligned with the EPSRC's Innovative Production Processes and Sustainable Industrial Systems sub-themes. The project also contributed to the Manufacturing Informatics sub-theme, as it contained a substantial element of developing and applying knowledge-based tools and techniques for next-generation manufacturing processes and systems.
Furthermore, the research addressed by AutoReman was central to four of the areas covered by the Manufacturing the Future theme, namely manufacturing technologies (disassembly processes), robotics (collaborative robots), sensors and instrumentation (force and visual feedback) and resource efficiency (recycling). AutoReman was the first project in the world to undertake disassembly research scientifically and to create autonomous disassembly systems based on the knowledge and understanding derived from a fundamental investigation into disassembly. In the same way as basic assembly research at the MIT Draper Laboratory, Hitachi Central Research Laboratory and the University of Canterbury underpinned advances in robotic assembly, AutoReman was designed to support developments in robotic disassembly. This in turn should facilitate remanufacturing, an area of significant future growth (Foresight 2013).


As will be seen later, deliverables from AutoReman included learning and intelligent collaboration strategies for avoiding jamming, wedging and collision of parts during disassembly to preserve their integrity and enable direct reuse. The knowledge resulting from this research also helped the derivation of rules for design to facilitate robotic disassembly. Overall, the project should lead to lower remanufacturing costs, making remanufacturing attractive to more companies. It is well known that remanufacturing enables more efficient use of resources, thus contributing to making businesses more sustainable and helping the economy to expand, while also reducing impact on the environment. For the UK, wider adoption of remanufacturing could mean annual savings of some £364M, an increase in gross value added (GVA) of £2 billion a year, 50 Mt less waste and 20 Mt more materials recycled back into the economy by 2020 (Lee et al. 2007; Waste and Resources Action Programme 2015).

3 Focus of AutoReman

AutoReman focused on disassembly as the least explored and arguably the most important step in multi-stage remanufacturing. AutoReman did not investigate the other stages, which include cleaning, inspection, reprocessing or replacement of worn parts, assembly and testing. The rationale was that cleaning can be automated easily; much work had been done on machine inspection, and inexpensive inspection equipment was widely available; reprocessing is product and component specific; and there are no generic research issues to address in reprocessing or in part replacement and final testing. Assembly was excluded from consideration because a large amount of effort had also been spent on automating assembly (as will be seen below) and there is little difference between assembly in manufacturing and in remanufacturing, assuming new or as-new components are used in both cases. The hypothesis of AutoReman was that understanding the science of disassembly should enable the development of successful strategies for disassembly and the creation of autonomous disassembly systems. AutoReman has proved this hypothesis by:

1. Conducting a fundamental investigation into disassembly science (Objective 1)
2. Using the understanding obtained through achieving Objective 1 to devise and test autonomous strategies for a range of basic disassembly operations (Objective 2)
3. Building the infrastructure needed to integrate and implement the strategies developed in step 2, including autonomous planning and collaboration systems for handling complex disassembly tasks (Objective 3)
4. Demonstrating the robustness to uncertainties of the disassembly strategies, plans and autonomous systems created in steps 2 and 3 on tasks involving real products (Objective 4)


4 Results

Work in AutoReman was divided into five work streams (Fig. 1). The results obtained in the different streams are detailed below.

Fig. 1. Five work streams of AutoReman

A. Work Stream 1: Disassembly Science

The project began with a study of more than four hundred products to determine commonly occurring disassembly operations and the physical and cognitive effort required of human operators to perform them (Table 1). Borrowing the approach used by previous assembly researchers, the AutoReman team divided a disassembly operation into elementary generic tasks such as unscrewing, removal of pins from holes with small clearances, separation of press-fit components, extraction of elastic parts (e.g. O-rings and circlips) and breaking up of "permanently" assembled components (e.g. those that are nailed, stapled, riveted, bonded, soldered or welded together). The team methodically identified the characteristics of those generic disassembly tasks, then developed solutions for the most frequent tasks and integrated the solutions into a system to perform the whole operation (Zhang et al. 2020; Li et al. 2020; Huang et al. 2020; Huang et al. 2021). Although AutoReman did not study assembly per se, the project built on extensive previous work on the mechanics of assembly operations conducted over the past five decades by many researchers. These have included engineers and scientists from Russia (e.g. Savishchenko and Bespalov 1965 and Karelin and Girel

Table 1. Common disassembly operations

1967), Japan (Goto et al. 1974; Takeyasu et al. 1976; Arai 1989), the USA (Nevins and Whitney 1977; Whitney 1979; Gustavson 1985; Caine et al. 1989; Zohoor and Shahinpoor 1991) and elsewhere (e.g. McCallion and Wong 1975; McCallion et al. 1979; Pham 1982; Simons et al. 1982; Martin 1984; Hopkins et al. 1991; Qiao et al. 1995; Haskiya et al. 1998; Setchi and Bratanov 1998; Newman et al. 2001; Xia et al. 2005; Fei et al. 2006; Usubamatov and Leong 2011; Jain et al. 2013; Park et al. 2014). In studying disassembly and designing autonomous disassembly systems, the AutoReman team learnt from the approaches adopted in prior research on assembly processes and, where appropriate, adapted tools and techniques previously created for autonomous assembly. For example, the team focused on contact problems in disassembly and, unlike previous researchers working on disassembly, the team placed strong reliance on contact force and moment sensing in addition to visual feedback in devising strategies for robotic disassembly (Fig. 2). This is because, in disassembly as in assembly, contact forces and moments can provide useful information to guide the process, thereby avoiding problems like jamming and wedging. The team applied tools such as finite-element analysis to model contact stresses and deformations and develop strategies to prevent damage to components during disassembly (Fig. 3).
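As a concrete illustration of how contact wrench readings can flag trouble before damage occurs, the sketch below classifies the contact state during a peg extraction from force/torque measurements. This is a deliberately simplified monitor in the spirit of classical peg-in-hole analyses, not AutoReman's actual implementation; the thresholds, friction coefficient and moment arm are hypothetical values chosen for illustration.

```python
# Simplified contact-state monitor for peg extraction, illustrating how
# force/moment readings can flag incipient jamming or wedging.
# Thresholds are hypothetical; a real system would derive them from the
# geometry and friction analysis of the specific part pair.

from dataclasses import dataclass

@dataclass
class Wrench:
    fx: float   # lateral force (N)
    fz: float   # axial (extraction) force (N)
    my: float   # moment about the lateral axis (N*m)

def contact_state(w: Wrench, mu: float = 0.2, clearance_arm: float = 0.01) -> str:
    """Classify the contact using crude force/moment ratios.

    'free'      - negligible lateral loading: extraction can continue
    'one_point' - single-point contact: comply laterally to re-centre
    'jamming'   - lateral/axial ratio exceeds the friction cone: back off
    """
    if abs(w.fz) < 1e-6:        # no axial load being applied yet
        return "free"
    lateral_ratio = abs(w.fx) / abs(w.fz)
    moment_ratio = abs(w.my) / (clearance_arm * abs(w.fz))
    if lateral_ratio < 0.05 and moment_ratio < 0.05:
        return "free"
    if lateral_ratio > mu or moment_ratio > 1.0:
        return "jamming"
    return "one_point"
```

A supervisory loop would then select a corrective motion for each state, for example retracting and re-aligning on "jamming".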


Fig. 2. Active compliance strategy for removing a peg from a hole (Zhang et al. 2021)
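The active compliance strategy of Fig. 2 can be reduced to an admittance-style law: sensed lateral force is mapped to a corrective lateral velocity, so the peg yields to contact instead of fighting it. The gains, the linear contact model and the toy simulation below are illustrative assumptions, not the controller of Zhang et al. (2021).

```python
# Admittance-style active compliance for peg extraction (one lateral axis).
# The robot pulls axially at constant speed while sensed lateral force is
# converted into a corrective lateral velocity that re-centres the peg.
# Gains, time step and the elastic "contact model" are illustrative only.

def admittance_step(f_lateral: float, k_f: float = 0.002) -> float:
    """Map sensed lateral force (N) to corrective lateral velocity (m/s)."""
    return -k_f * f_lateral

def simulate_extraction(offset0: float, stiffness: float = 1000.0,
                        dt: float = 0.01, steps: int = 200) -> float:
    """Toy 1-D closed loop: contact force grows linearly with the lateral
    offset, and the admittance law drives the offset towards zero.
    Returns the residual offset after `steps` control cycles."""
    offset = offset0
    for _ in range(steps):
        f_lateral = stiffness * offset        # crude elastic contact model
        offset += admittance_step(f_lateral) * dt
    return offset
```

With these numbers the loop contracts the offset by 2% per cycle; a stiffer contact or larger gain converges faster but risks oscillation, which is one reason such gains are tuned against a contact model in practice.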


Fig. 3. Finite-element modelling of contact stresses in disassembly operations: (A) unscrewing and (B) removing a ring from a shaft
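Beyond force modelling, simple parameterized search motions recur throughout such systems; for instance, the spiral search later used to locate a bolt head under positional uncertainty (Fig. 5) can be generated as an Archimedean spiral of probe points. The sketch below is a generic waypoint generator with hypothetical pitch and step values, not the published algorithm of Li et al. (2020).

```python
# Archimedean spiral waypoint generator for searching a small region
# around the nominal position of a fastener. Pitch, angular step and
# search radius are illustrative values only.

import math

def spiral_search_waypoints(cx: float, cy: float, pitch: float = 0.001,
                            step_angle: float = math.pi / 8,
                            max_radius: float = 0.01):
    """Yield (x, y) probe points on an Archimedean spiral centred on the
    nominal fastener position (cx, cy). `pitch` is the radial growth per
    full turn (m); the search gives up once `max_radius` is exceeded."""
    theta = 0.0
    while True:
        r = pitch * theta / (2.0 * math.pi)
        if r > max_radius:
            return  # searched the whole uncertainty region without success
        yield (cx + r * math.cos(theta), cy + r * math.sin(theta))
        theta += step_angle
```

At each waypoint the tool would be pressed lightly against the surface, with force feedback indicating when it has dropped onto the bolt head.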


B. Work Stream 2: Practical Disassembly Strategies

The investigation included analysing the physical effort, manipulation, control and information needs of those tasks in order to group them into: (i) tasks performable by robots operating autonomously alongside humans and (ii) procedures requiring machines and people to collaborate, because the work may be too heavy or otherwise too difficult for people or may demand excessive manipulation, control and information processing skills of machines. The team then selected tasks in group (i) and developed and integrated force/torque/tactile sensors, vision systems and other sensors, together with the associated strategies, to enable robots to perform those tasks autonomously. The team devised two levels of strategies. At the reactive low level, the team employed passive accommodation (Kim et al. 2011; Balletti et al. 2012) or active impedance control (Marvel and Falco 2012) (Fig. 4). Both passive accommodation and active impedance control increase the degree of autonomy of the robotic disassembly system by enabling it to adapt itself and cope with uncertainties. At the higher deliberative level, the AutoReman team developed knowledge-based strategies for pattern recognition and decision-making. The disassembly system requires a pattern recognition capability to interpret feedback from sensors and take appropriate decisions based on the interpreted information. The team populated the knowledge base of the system with rules acquired from

Fig. 4. Passive compliance device for disassembly


Fig. 5. Spiral search strategy for locating the head of a bolt (Li et al. 2020)

observing and interviewing disassembly workers. To handle uncertainties at this level, the team combined Bayesian and fuzzy logic methods as appropriate (Pham and Ruz 2009; Pham et al. 2009) (Fig. 5). The team also implemented machine learning techniques, considering the ANFIS adaptive-network-based fuzzy inference system (Jang 1993) and the team's own RULES family of rule-induction and clustering algorithms (Pham and Afify 2005; Bigot et al. 2006; Pham et al. 2011) to allow robots to learn skills from human teachers and to improve their performance with experience. The team formulated machine-learning-based strategies for both the reactive and the deliberative levels. In the former case, the team used learning to produce a continuous mapping between the feedback signals and the output controlling the disassembly unit. In the latter case, learning generated discrete decision rules from a set of examples provided by a teacher.

C. Work Stream 3: Disassembly Planning

The AutoReman team investigated adding to the autonomy of the robotic system by enabling it to plan and replan the disassembly sequence. This was achieved by extracting information from the product CAD model through geometric reasoning and applying planning rules developed using the results of the study of the mechanics of disassembly (Work Stream 1) and observations of, and interviews with, disassembly operators (Work Stream 2). Before deployment, the sequence was validated in simulation by applying geometric, kinematic, static (finite-element) and dynamic modelling. It was assumed that CAD models were available for all products, which should become true in the long term; reverse engineering could be used to create CAD models where they do not exist. The team considered four planning scenarios: a single robot performing a relatively simple disassembly sequence; a group of robots, each working on its own


tasks; a group of robots collaborating with one another; and robots collaborating with humans. For the third and fourth scenarios, multi-agent techniques building on Owliya et al. (2012) were investigated. Related to this work stream was the research of Torres et al. (2009) and Vongbunyong et al. (2015). However, as noted previously, AutoReman differed from previous work in two ways: the use of analytical models of disassembly processes to complement empirical knowledge, and the focus on contact force and moment feedback as a means of providing information on the operation when it is important to preserve the integrity of the components being disassembled. The results of this work stream, which also included research into disassembly line balancing and optimization of decision-making in designing the robotic disassembly process, have been widely published; for example, see Liu et al. (2018), Fang et al. (2019), Laili et al. (2019), Zhou et al. (2019), Fang et al. (2020), Liu et al. (2020), Ramirez et al. (2020), Wang et al. (2020), Xu et al. (2020) and Laili et al. (2022).

D. Work Stream 4: Collaborative Disassembly by Humans and Machines

This work stream addressed tasks in group (ii), which require man–machine collaboration because the work may be too heavy or otherwise too difficult for people, or may demand excessive manipulation, control and information processing skills of machines. Using the plans generated in Work Stream 3, the AutoReman team developed optimal collaboration strategies exploiting the complementarity of robots and humans to achieve complex operations (Liu et al. 2018; Liu et al. 2019; Xu et al. 2020).
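The planning and allocation problems discussed above share a common core: sequencing precedence-constrained tasks and deciding who executes each one. The following is a minimal illustration using the Python standard library's topological sorter, not the project's published planners; the product model, task names and "robotizable" flags are invented for the example.

```python
# Toy precedence-constrained disassembly plan with a naive robot/human
# allocation. The task set, precedence graph and robotizable flags are
# invented for illustration; real planners also optimize cost, time,
# tool changes and line balancing.

from graphlib import TopologicalSorter

def plan(robotizable: dict, precedence: dict) -> list:
    """robotizable: task -> True if it belongs to group (i) (robot alone),
                    False if it needs human-robot collaboration (group (ii)).
    precedence: task -> set of tasks that must be completed first.
    Returns a feasible (task, executor) sequence respecting precedence."""
    order = TopologicalSorter(precedence).static_order()
    return [(t, "robot" if robotizable[t] else "human+robot") for t in order]

# Hypothetical water-pump-like core.
robotizable = {"unscrew_cover": True, "remove_cover": True,
               "extract_impeller": True, "press_out_bearing": False}
precedence = {"remove_cover": {"unscrew_cover"},
              "extract_impeller": {"remove_cover"},
              "press_out_bearing": {"remove_cover"}}
sequence = plan(robotizable, precedence)
```

Replanning after an unexpected observation amounts to rebuilding the graph over the remaining tasks and re-sorting, which is one reason graph-based representations are attractive for disassembly.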
Collaborative robots are generally force-limited so that they are safe around people. Different schemes are needed when robots have roles equivalent to those of humans (for example, when robots and humans share a task as equals and concurrently perform task elements requiring similar effort) and when robots are subordinate to humans, acting as assistants that provide the raw power needed for the task. The first type of role involves little man–machine interaction: the robots simply have to avoid colliding with people, and to collide gently should a collision happen. Currently available collaborative robots can handle this type of role, as demonstrated in the case studies conducted in Work Stream 5. In roles of the second type, there is much interaction between man and machine. The robots need to be strong enough to supply the required power and yet compliant enough to work safely with people. The plan was to investigate strategies to resolve this apparent contradiction, for example by providing the robots with flexible tooling which they can reconfigure and use to leverage their strength without having to exert potentially injury-inducing forces. This work is in progress and will be pursued as part of Phase 2 of AutoReman with a new line of


enquiry building upon the variable stiffness control system research of Babarahmati et al. (2021).

E. Work Stream 5: Demonstrations and Dissemination

In addition to experiments designed to validate the elementary disassembly strategies derived analytically in Work Stream 2 (Figs. 2 and 5), the robotic collaboration techniques devised in Work Stream 4 were also tested. Selected products from three industrial partners were used to demonstrate realistic collaborative disassembly operations. Three robotic disassembly cells were built to showcase the dismantling of three different products: an automotive water pump (Fig. 6, Huang et al. 2020), a turbocharger (Fig. 7, Huang et al. 2021) and an electric vehicle battery (Fig. 8, Peng 2021). These products are sufficiently complex to present reasonable challenges for robotic disassembly and to allow the practical evaluation of most of the strategies and techniques developed in the project for tasks in group (i) and group (ii). Both fully autonomous disassembly (battery disassembly) and disassembly involving man–machine collaboration of type (i) (equal task sharing: pump disassembly) and type (ii) (master–assistant collaboration: turbocharger disassembly) were demonstrated. For the selected products, the disassembly sequences were pre-planned. However, in the case of the fully autonomous battery disassembly cell,

Fig. 6. Robotic cell for collaborative disassembly of water pumps (Huang et al. 2020)


Fig. 7. Robotic cell for collaborative disassembly of turbochargers (Huang et al. 2021)

Fig. 8. Robotic cell for disassembly of electric vehicle batteries (Peng 2021)


online replanning of disassembly operations will be implemented in Phase 2 of AutoReman. The disassembly demonstrations are publicly available for viewing on YouTube (https://www.youtube.com/channel/UC19YoctUtyGAdazYFrYE1MQ). In addition to this video material, AutoReman has produced some fifty journal articles and presentations at international events (http://autoreman.altervista.org/Output.htm) including the series of International Workshops on Autonomous Remanufacturing (IWAR). Through these dissemination activities, the AutoReman team has ensured that the project outputs reach a wide audience.

5 Conclusion

AutoReman has produced results relevant to researchers in robotic assembly and disassembly, and in sustainability, remanufacturing and recycling. The control strategies developed for disassembly robots to adapt to uncertainties will be usable in other robotic applications, including assembly. The proposed autonomous disassembly techniques can be employed for recycling as well as remanufacturing. Sustainability researchers will be able to build on, or adopt elements of, the AutoReman robotic disassembly systems. AutoReman's fundamental investigation of disassembly processes will be of interest to researchers in mechanics; its autonomous planning methods, to manufacturing engineers; and its learning and intelligent collaboration strategies, to computer scientists. From the above discussion, it is clear that AutoReman has achieved all its objectives. However, AutoReman only represents the dawn of scientific research into robotic disassembly for remanufacturing. Much work on the science of disassembly still needs to be completed before one can claim to have understood it fully, and many tools and techniques await development before robots can truly autonomously perform complex disassembly operations and help usher remanufacturing into the digital age (Kerin and Pham 2019; Kerin and Pham 2020).

Acknowledgements. AutoReman was funded by the Engineering and Physical Sciences Research Council (Grant Number EP/N018524/1). The work was supported by BorgWarner, Caterpillar, HSSMI, Meritor, MG Motor, MTC and Recoturbo. The authors are grateful to the project funder and partners, and to the undergraduate and postgraduate students and members of the AutoReman Laboratory at the University of Birmingham and researchers at Wuhan University of Technology, Beihang University and Universidad de Castilla-La Mancha who contributed to the success of the project.

References

Arai, T.: Annals of the CIRP 38, 17–20 (1989)
Babarahmati, K.K., Tiseo, C., Smith, J., Lin, H.C., Erden, M.S., Mistry, M.: (2021). https://arxiv.org/abs/1911.04788v3. Accessed 4 September 2021
Balletti, L., et al.: 12th IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan, pp. 402–408 (2012)


Barwood, M., Li, J., Pringle, T., Rahimifard, S.: Procedia CIRP 29, 746–751 (2015)
Bigot, S., Dimov, S., Pham, D.T.: Proc IMechE, Part C 220, 1433–1447 (2006)
Bueker, U., Druee, S., Goetze, N., Hartmann, G., Kalkreuter, B., Stemmer, R., Trapp, R.: IEEE Symposium on Emerging Technologies and Factory Automation, ETFA, vol. 1, pp. 79–88 (1999)
Bueker, U., Druee, S., Goetze, N., Hartmann, G., Kalkreuter, B., Stemmer, R., Trapp, R.: Robot. Auton. Syst. 35, 179–189 (2001)
Caine, M.E., Lozano-Perez, T., Seering, W.P.: IEEE International Conference on Robotics and Automation, vol. 1, pp. 472–477 (1989)
Eckterth, G., Boenkerm, T., Kreis, W.: IFAC Workshop on Intelligent Assembly and Disassembly, Bled, Slovenia, pp. 105–110 (1998)
Fang, Y.L., Liu, Q., Li, M.Q., Laili, Y.J., Pham, D.T.: Eur. J. Oper. Res. 276, 160–174 (2019). https://doi.org/10.1016/j.ejor.2018.12.035
Fang, Y., Ming, H., Li, M., Liu, Q., Pham, D.T.: Int. J. Prod. Res. 58, 846–862 (2020). https://doi.org/10.1080/00207543.2019.1602290
Fei, Y., Wan, J., Zhao, X.: Industrial robotics: programming, simulation and applications. In: Huat, L.K. (ed.) InTech, pp. 393–416 (2006). http://www.intechopen.com/books/industrial_robotics_programming_simulation_and_applications/study_on_three_dimensional_dual_pegin-hole_in_robot_automatic_assembly. Accessed 4 September 2021
Feldmann, K., Trautner, S., Meedt, O.: IFAC Workshop on Intelligent Assembly and Disassembly, Bled, Slovenia, pp. 1–6 (1998)
Foresight: The Future of Manufacturing: A New Era of Opportunity and Challenge for the UK. The Government Office for Science, London (2013)
Gil, P., Pomares, J., Puente, S.T., Diaz, C., Candelas, F., Torres, F.: Int. J. Comput. Integr. Manuf. 20, 757–772 (2007)
Goto, T., Inoyama, T., Takeyasu, K.: Ind. Robot 1, 225–228 (1974)
Gustavson, R.E.: ASME Trans. J. Mech. Transm. Autom. Des. 107, 112–122 (1985)
Haskiya, W., Maycock, K., Knight, J.A.G.: Proc IMechE, Part B 222, 473–478 (1998)
Hesselbach, J., Friedrich, R., Schuete, G.: Second International Seminar on Life Cycle Engineering, Erlangen, Germany, pp. 254–266 (1994)
Hopkins, S.H., Bland, C.J., Wu, M.H.: Int. J. Prod. Res. 29, 293–301 (1991)
Huang, J., Pham, D.T., Wang, Y., Qu, M., Ji, C., Su, S., Xu, W., Liu, Q., Zhou, Z.: Proc IMechE, Part B 234, 654–664 (2020)
Huang, J., et al.: Comput. Ind. Eng. 143 (2021). https://doi.org/10.1016/j.cie.2021.107189
Ijomah, W.L.: Ph.D. Thesis, Plymouth University (2002)
Jain, R.K., Majumder, S., Dutta, A.: Robot. Auton. Syst. 61, 297–311 (2013)
Jang, J.-S.R.: IEEE Trans. Syst. Man Cybern. 23, 665–685 (1993)
Karelin, N.M., Girel, A.M.: Russian Eng. J. 47, 73–76 (1967)
Kerin, M., Pham, D.T.: J. Clean. Prod. (2019). https://doi.org/10.1016/j.jclepro.2019.117805
Kerin, M., Pham, D.T.: J. Manuf. Technol. Manage. 31, 1205–1235 (2020). https://doi.org/10.1108/JMTM-06-2019-0205
Kim, B.-S., Kim, Y.-L., Song, J.-B.: IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM2011), Budapest, Hungary, pp. 1076–1080 (2011)
Laili, Y., Tao, F., Pham, D.T., Wang, Y., Zhang, L.: Robot. Comput. Integr. Manuf. 59, 130–142 (2019)
Laili, Y., Wang, Y., Fang, Y., Pham, D.T.: Optimisation of Robotic Disassembly for Remanufacturing. Springer Series in Advanced Manufacturing. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-81799-2
Lee, P., Walsh, B., Smith, P.: Quantification of the Business Benefits of Resource Efficiency. Oakdene Hollins, Cambridge (2007)

AutoReman—The Dawn of Scientific Research into Robotic Disassembly

lv

Li, R., et al.: IEEE Trans. Autom. Sci. Eng. 17, 1455–1468 (2020) Liu, J., Zhou, Z., Pham, D.T, Xu, W., Ji, C., Liu, Q.: Int. J. Prod. Res. 56, 3134–3151 (2018). https://doi.org/10.1080/00207543.2017.1412527 Liu, J., Zhou, Z., Pham, D.T., Xu, W., Ji, C., Liu, Q.: Robot. Comput. Integr. Manufact. 61 (2020). https://doi.org/10.1016/j.rcim.2019.101829 Liu, Q., Liu, Z., Xu, W., Tang, Q., Zhou, Z., Pham, D.T.: Int. J. Prod. Res. 57, 4027–4044 (2019). https://doi.org/10.1080/00207543.2019.1578906 Liu, Z.H., Liu, Q., Xu, W.J., Zhou, Z.D., Pham, D.T.: Procedia CIRP, 72, 87–92 (2018). https:// doi.org/10.1016/j.procir.2018.03.172 Martin, K.F.: Trans. Inst. Measur. Control. 6, 329–331 (1984) Marvel, J., Falco, J.: NIST Internal Report NISTIR 7901 (2012). http://dx.doi.org/10.6028/NIST. IR.7901. Accessed 4 Sept 2021 Matsumoto, M., Ijomah, W.: Handbook Sustainable Engineering, pp. 389–408. Springer, Dordrecht (2013). https://doi.org/10.1007/978-1-4020-8939-8 McCallion, H., Wong, P.C.: 4th World Congress on the Theory of Machines and Mechanisms, pp. 347–352. IMechE, London (1975) McCallion, H., Johnson, G.R., Pham, D.T.: Ind. Robot 6, 81–87 (1979) Nevins, J.L., Whitney, D.E.: Computer 10, 24–38 (1977) Newman, W.S., Zhao, Y., Pao, Y.-H.: IEEE International Conference on Robotics and Automation, pp. 571–576 (2001) Ortegon, K., Nies, L., Sutherland, J.W.: CIRP CIRP Encyclopedia of Production Engineering, pp. 1044–1047 (2014) Owliya, M., Saadat, M., Anane, R., Goharian, M.: IEEE Syst. J. 6, 353–361 (2012) Park, H., Kim, P.K., Bae, J.H., Park, J.H., Baeg, M., Park, J.: 11th International Conference on Ubiquitous Robots and Ambient Intelligence, P3-7, Kuala Lumpur, Malaysia (2014) Peng, X.: FYP Report, Autonomous Remanufacturing Laboratory, Department of Mechanical Engineering, University of Birmingham, UK (2021) Pham, D.T.: 3rd International Conference on Assembly Automation, Stuttgart, Germany, pp. 
205– 214 (1982) Pham, D.T., Afify, A.: Proc IMechE, Part C 219, 1119–1137 (2005) Pham. D.T., Haj Darwish, A., Eldukhri, E.E.: Int. J. Comp, Aided Eng. Technol. 1, 250–264 (2009) Pham, D.T., Ruz, G.A.: Proceedings of the Royal Society A, vol. 465, pp. 2927–2948 (2009) Pham, D.T., Suarez-Alvarez, M.M., Prostov, Y.I.: Proceedings of the Royal Society A, vol. 467, pp. 2387–2403 (2011) Qiao, H., Dalay, B.S., Parkin, R.M.: Proc IMechE, Part C 209, 429–228 (1995) Ramírez, F.J., Aledo, J.A., Gamez, J.A., Pham, D.T.: Comput. Ind. Eng. 142. https://doi.org/10. 1016/j.cie.2020.106339 Resource Recovery Forum (2010) Remanufacturing in the UK, Oakdene Hollins, Cambridge, vol. 6 (2020) Savishchenko, V.M., Bespalov, V.G.: Russ. Eng. J. 45, 50–52 (1965) Setchi, R., Bratanov D.: Assembly Automation, vol. 18, pp. 291–301 (1998) Seitz, M.A.: J. Clean Prod. 15, 1147–1157 (2007) Simons, J., Van Brussel, H., De Schutter, J., Verhaert, J.: IEEE Trans. Autom. Control 27, 1109– 1113 (1982) Takeyasu, K., Goto, T., Inoyama, T.: ASME Trans B J. Eng. Ind. 98, 1313–1318 (1976) Torres, F., Gil, P., Puente, S.T., Pomares, J., Aracil, R.: Int J. Adv. Manufact. Technol. 23, 39–46 (2004) Torres, F., Puente, S., Diaz, C.: Control Engineering Practice, vol. 17, pp. 112–121 (2009) Usubamatov, R., Leong, K.W.: Assembly Automation, vol. 31, pp. 358–362 (2011) Vongbunyong, A., Kara, S., Pagnucco, M.: Assembly Automation, vol. 33, pp. 38–56 (2013)

lvi

D. T. Pham et al.

Vongbunyong, A., Kara, S., Pagnucco, M.: Robot. Comput. Integer. Manufact. 34, 79–94 (2015) Wang, Y., et al.: Int. J. Prod. Res. (2020). https://doi.org/10.1080/00207543.2020.1770892 Waste and Resources Action Programme (2015) WRAP’s Plan 2015–2020. https://www.wrap.org. uk. Accessed 4 September 2021 Whitney, D.E.: ASME Trans, J. Dyn. Syst. Meas. Control 101, 8–15 (1979) WEEE Directive 2012/19/EU (2013). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri= CELEX:32012L0019. Accessed 4 September 2021 Xia, Y., Yin, Y., Chen, Z.: Int. J. Adv. Manufact. Technol. 30, 118–128 (2005) Xu, W., Tang, Q., Liu, J., Liu, Z., Zhou, Z., Pham, D.T.: Robot. Comput. Integer. Manufact. 62 (2020). https://doi.org/10.1016/j.rcim.2019.101860 Zhang, Y., et al.: R. Soc. Open Sci. 6 (2019). https://doi.org/10.1098/rsos.190476 Zhou, Z., Liu, J., Pham, D., Xu, W., Ramirez, F., Ji, C., Liu, Q.: Proc. IMechE, Part B 233 (2019). https://doi.org/10.1177/0954405418789975 Zohoor, H., Shahinpoor, M.: J. Manufact. Syst. 10, 99–108 (1991)

Contents

Artificial Intelligence Applications

A Comparative Analysis on Improving Covid-19 Prediction by Using Ensemble Learning Methods . . . 3
Elif Kartal

Developing Intelligent Autonomous Vehicle Using Mobile Robots' Structures . . . 15
Florin Popişter, Mihai Steopan, and Sergiu Ovidiu Someşan

Digital-Body (Avatar) Library for Textile Industry . . . 24
Semih Dönmezer, Numan M. Durakbasa, Gökçen Baş, Osman Bodur, Eva Maria Walcher, and Erol Güçlü

Stereoscopic Video Quality Assessment Using Modified Parallax Attention Module . . . 39
Hassan Imani, Selim Zaim, Md Baharul Islam, and Masum Shah Junayed

Decision Making

A Design of Collecting and Processing System of Used Face Masks and Contaminated Tissues During COVID-19 Era in Workplaces . . . 53
Beril Hekimoğlu, Ezgi Akça, Aziz Kemal Konyalıoğlu, Tuğçe Beldek, and Ferhan Çebi

Identification of Optimum COVID-19 Vaccine Distribution Strategy Under Integrated Pythagorean Fuzzy Environment . . . 65
Tolga Gedikli and Beyzanur Cayir Ervural

Implementation of MCDM Approaches for a Real-Life Location Selection Problem: A Case Study of Consumer Goods Sector . . . 77
Oğuz Emir

Energy Management

Applications and Expectations of Fuel Cells and Lithium Ion Batteries . . . 91
Feyza Zengin, Emin Okumuş, M. Nurullah Ateş, and Bahadır Tunaboylu

Integration of Projects Enhancing Productivity Based on Energy Audit Data in Raw Milk Production Sector — a Case Study in Turkey . . . 107
Aylin Duman Altan and Birol Kayışoğlu

Healthcare Systems and Management

A Fuzzy Cognitive Mapping Approach for the Evaluation of Hospital's Sustainability: An Integrated View . . . 123
Aziz Kemal Konyalıoğlu and İlke Bereketli

Digital Maturity Assessment Model Development for Health Sector . . . 131
Berrak Erdal, Berra İhtiyar, Ece Tuana Mıstıkoğlu, Sait Gül, and Gül Tekin Temur

Using DfX to Develop Product Features in a Validation Intensive Environment . . . 148
Florin Popişter, Mihai Dragomir, and Calin Dan Gheorghe Neamtu

Industrial Applications

Collaborative Robotics Making a Difference in the Global Pandemic . . . 161
Mary Doyle-Kent and Peter Kopacek

Determination of Strategic Location of UAV Stations . . . 170
Beyzanur Cayir Ervural

Evaluation of Aluminium Alloy (AlSi9Cu3(Fe)) Porosity by Destructive and Non-destructive Method (Computed Tomography) . . . 181
Michaela Kritikos, Martina Kusá, and Martin Kusý

Improvement of Solid Spreader Blade Design Using Discrete Element Method (DEM) Applications . . . 192
Atakan Soysal, Pınar Demircioğlu, and İsmail Böğrekçi

Investigation of Influence by Different Segmentation Parameters on Surface Accuracy in Industrial X-ray Computed Tomography . . . 202
Mario Sokac, Marko Katic, Zeljko Santosi, Djordje Vukelic, Igor Budak, and Numan M. Durakbasa

Product Development for Lifetime Prolongation via Benchmarking . . . 210
Turgay Şerbet, Mahir Yaşar, Tezel Karayol, Anil Akdogan, and Ali Serdar Vanli

Research into the Influence of the Plastic Strain Degree on the Drawing Force and Dimensional Accuracy of the Production of Seamless Tubes . . . 219
Maria Kapustova, Michaela Kritikos, Jozef Bilik, Ladislav Morovic, Robert Sobota, and Daynier Rolando Delgado Sobrino

Industry Robotics, Automation and Industrial Robots in Production Systems

An Example of Merit Analysis for Digital Transformation and Robotic Process Automation (RPA) . . . 233
Hakan Erdem and Tijen Över Özçelik

Assembling Automation for Furniture Fittings to Gain Durability and to Increase Productivity . . . 243
Musaddin Kocaman, Cihan Mertöz, Rıza Gökçe, Sedat Fırat, Anıl Akdoğan, and Ali Serdar Vanlı

Use of Virtual Reality in RobotStudio . . . 256
Marian Králik and Vladimír Jerz

Industry 4.0 Applications

Direct-Drive Integrated Motor/controller with IoT and GIS Management for Direct Sowing and High Efficiency Food Production . . . 271
Rodrigo Alcoberro, Jorge Bauer, and Numan M. Durakbasa

Evaluation of Fused Deposition Modeling Process Parameters Influence on 3D Printed Components by High Precision Metrology . . . 281
Alexandru D. Sterca, Roxana-Anamaria Calin, Lucian Cristian, Eva Maria Walcher, Osman Bodur, Vasile Ceclan, Sorin Dumitru Grozav, and Numan M. Durakbasa

Implementation of Industry 4.0 Elements in Industrial Metrology – Case Study . . . 296
Vojtech Stepanek, Jakub Brazina, Michal Holub, Jan Vetiska, Jiri Kovar, Jiri Kroupa, and Adam Jelinek

Multi-criteria Decision Support with SWARA and TOPSIS Methods to the Digital Transformation Process in the Iron and Steel Industry . . . 309
Yıldız Şahin and Yasemin Bozkurt

Sensor Based Intelligent Measurement and Blockchain in Food Quality Management . . . 323
Gizem Şen, İhsan Tolga Medeni, Kamil Öncü Şen, Numan M. Durakbasa, and Tunç Durmuş Medeni

Study of the Formation of Zinc Oxide Nanowires on Brass Surface After Pulse-Periodic Laser Treatment . . . 335
Serguei P. Murzin and Nikolay L. Kazanskiy

Lean Production

A New Model Proposal for Occupational Health and Safety . . . 347
Mesut Ulu and Semra Birgün

An Integrated Value Stream Mapping and Simulation Approach for a Production Line: A Turkish Automotive Industry Case . . . 357
Onur Aksar, Duygu Elgun, Tuğçe Beldek, Aziz Kemal Konyalıoğlu, and Hatice Camgöz-Akdağ

Eliminating the Barriers of Green Lean Practices with Thinking Processes . . . 372
Semra Birgün and Atik Kulaklı

Miscellaneous Topics

Competency Gap Identification Through Customized I4.0 Education Scale . . . 387
Murat Kocamaz, U. Gökay Çiçekli, Aydın Koçak, Haluk Soyuer, Jorge Martin Bauer, Gökçen Baş, Numan M. Durakbasa, Yunus Kaymaz, Fatma Demircan Keskin, İnanç Kabasakal, Erol Güçlü, and Ece Soyuer

Design of a Routing Algorithm for Efficient Order Picking in a Non-traditional Rectangular Warehouse Layout . . . 401
Edin Sancaklı, İrem Dumlupınar, Ali Osman Akçın, Ezgi Çınar, İpek Geylani, and Zehra Düzgit

Education in Engineering Management for the Environment . . . 413
Peter Kopacek and Mary Doyle-Kent

Hybrid Flowshop Scheduling with Setups to Minimize Makespan of Rubber Coating Line for a Cable Manufacturer . . . 421
Diyar Balcı, Burak Yüksel, Eda Taşkıran, Güliz Hande Aslım, Hande Özkorkmaz, and Zehra Düzgit

Improving Surface Quality of Additive Manufactured Metal Parts by Magnetron Sputtering Thin Film Coating . . . 436
Binnur Sağbaş, Hüseyin Yüce, and Numan M. Durakbasa

Labor Productivity as an Important Factor of Efficiency: Ways to Increase and Calculate . . . 444
Ilham Huseynli

Risk Governance Framework in the Oil and Gas Industry: Application in Iranian Gas Company . . . 452
Mohsen Aghabegloo, Kamran Rezaie, and S. Ali Torabic

Operations Research Applications and Optimization

A Nurse Scheduling Case in a Turkish Hospital . . . 467
Edanur Yasan, Tuğba Cesur, Tuba Nur Aslan, Rana Ezgi Köse, Aziz Kemal Konyalıoğlu, Tuğçe Beldek, and Ferhan Çebi

An Analytical Approach to Machine Layout Design at a High-Pressure Die Casting Manufacturer . . . 476
Oğuz Emir and Tülin Aktin

Hybrid Approaches to Vehicle Routing Problem in Daily Newspaper Distribution Planning: A Real Case Study . . . 489
Gizem Deniz Cömert, Uğur Yıldız, Tuncay Özcan, and Hatice Camgöz Akdağ

Monthly Logistics Planning for a Trading Company . . . 500
İlayda Ülkü, Huthayfah Kullab, Talha Al Mulqi, Ali Almsadi, and Raed Abousaleh

Solving the Cutting Stock Problem for a Steel Industry . . . 510
İlayda Ülkü, Ugur Tekeoğlu, Müge Özler, Nida Erdal, and Yağız Tolonay

Quality Management

Comparison of Optical Scanner and Computed Tomography Scan Accuracy . . . 521
Michaela Kritikos, Jan Urminsky, and Ivan Buransky

Researches Regarding the Development of a Virtual Reality Android Application Explaining Orientation Tolerances According to the Latest GPS Standards Using 3D Models . . . 531
Grigore Marian Pop, Radu Comes, Calin Neamtu, and Liviu Adrian Crisan

Simulation and Modelling

Comparing the Times Between Successive Failures by Using the Weibull Distribution with Time-Varying Scale Parameter . . . 541
Nuri Çelik and Aziz Kemal Konyalıoğlu

Modelling of a Microwave Rectangular Waveguide with a Dielectric Layer and Longitudinal Slots . . . 550
I. J. Islamov, E. Z. Hunbataliyev, R. Sh. Abdullayev, N. M. Shukurov, and Kh. Kh. Hashimov

Supply Chain Management and Sustainability

A Literature Analysis of the Main European Environmental Strategies Impacting the Production Sector . . . 561
Denisa-Adela Szabo, Mihai Dragomir, Virgil Ispas, and Diana-Alina Blagu

A Raw Milk Production Facility Design Study in Aydın Region, Turkey . . . 568
Hasan Erdemir, Mehmet Yılmaz, Aziz Kemal Konyalıoğlu, Tuğçe Beldek, and Ferhan Çebi

Recent Developments in Supply Chain Compliance in Europe and Its Global Impacts on Businesses . . . 578
Çiçek Ersoy and Hatice Camgoz Akdag

Sustainable Factors for Supply Chain Network Design Under Uncertainty: A Literature Review . . . 585
Simge Yozgat and Serpil Erol

Capstone Projects

A Decision Support System for Supplier Selection in the Spare Parts Industry . . . 601
Ecem Gizem Babadağ, İdil Bıçkı, Halil Çağın Çağlar, Yeşim Deniz Özkan-Özen, and Yücel Öztürkoğlu

A Discrete-Time Resource Allocated Project Scheduling Model . . . 616
Berkay Çataltuğ, Helin Su Çorapcı, Levent Kandiller, Fatih Kağan Keremit, Giray Bingöl Kırbaş, Özge Ötleş, Atakan Özerkan, Hazal Tucu, and Damla Yüksel

An Optimization Model for Vehicle Scheduling and Routing Problem . . . 630
Tunay Tokmak, Mehmet Serdar Erdogan, and Yiğit Kazançoğlu

Applying Available-to-Promise (ATP) Concept in Multi-Model Assembly Line Planning Problems in a Make-to-Order (MTO) Environment . . . 639
Mert Yüksel, Yaşar Karakaya, Okan Özgü, Ant Kahyaoğlu, Dicle Dicleli, Elif Onaran, Zeynep Akkent, Mahmut Ali Gökçe, and Sinem Özkan

Capacitated Vehicle Routing Problem with Time Windows . . . 653
Aleyna Tanel, Begüm Kınay, Deniz Karakul, Efecan Özyörük, Elif İskifoğlu, Ezgi Özoğul, Meryem Ustaoğlu, Damla Yüksel, and Mustafa Arslan Örnek

Designing a Railway Network in Cesme, Izmir with Bi-objective Ring Star Problem . . . 665
Oya Merve Püskül, Dilara Aslan, Ceren Onay, Mehmet Serdar Erdogan, and Mehmet Fatih Taşgetiren

Distribution Planning of LPG to Gas Stations in the Aegean Region . . . 675
Berfin Alkan, Berfin Dilsan Kikizade, Buse Karadan, Çağatay Duysak, Elif Hande Küpeli, Emre Yağız Turan, Tuğçe Dilber, Erdinç Öner, and Nazlı Karataş Aygün

Drought Modelling Using Artificial Intelligence Algorithms in Izmir District . . . 689
Zeynep İrem Özen, Berk Sadettin Tengerlek, Damla Yüksel, Efthymia Staiou, and Mir Jafar Sadegh Safari

Electricity Consumption Forecasting in Turkey . . . 702
Buğsem Acar, Selin Yiğit, Berkay Tuzuner, Burcu Özgirgin, İpek Ekiz, Melisa Özbiltekin-Pala, and Esra Ekinci

Forecasting Damaged Containers with Machine Learning Methods . . . 715
Mihra Güler, Onur Adak, Mehmet Serdar Erdogan, and Ozgur Kabadurmus

Logistics Service Quality of Online Shopping Websites During Covid-19 Pandemic . . . 725
İlayda Gezer, Hasancan Erduran, Alper Kayıhan, Burak Çetiner, and Pervin Ersoy

Optimal Inventory Share Policy Search for e-Grocery Food Supply Network . . . 735
Berk Kaya, Dilara Dural, Mehmet Sağer, Melike Akdoğan, Asime Bengisu İldeşler, Mert Paldrak, and Banu Yetkin Ekren

Parallel Workforce Assignment Problem for Battery Production . . . 747
Aslıhan Erdoğan, Ayşenur Talı, Berfu Kırcalı, Buket Günal, Ecem Nazlı Dedecengiz, Damla Yüksel, and Banu Yetkin Ekren

Production and Inventory Control of Assemble-to-Order Systems . . . 757
Nisa Saracalıoğlu, Yiğit Sonbahar, Güner Asrın Akdeniz, Meryem Ünsal, Yarkın Yiğit Yıldız, Salih Kaan Çakmakçı, Önder Bulut, Sinem Özkan, and Ceylin Hökenek

Shipment Planning: A Case Study for an Apparel Company . . . 772
Ceren Metinoğlu, Deniz Tetik, Egemen Ozgorenler, İlker Avcı, İrem Ersoy, Mustafa Akgül, Sena Alaboğa, Nazli Karatas Aygun, and Adalet Öner

Spare Parts Inventory Management System in a Service Sector Company . . . 787
Buse Atakay, Özge Onbaşılı, Simay Özcet, İrem Akbulak, Hatice Birce Cevher, Hümeyra Alcaz, İsmail Gördesli, Mert Paldrak, and Efthimia Staiou

Storage and Order Picking Process Design for Perishable Foods in a Cold Warehouse . . . 800
Bensu Güntez, Gizem İşgüzerer, Çağla Güşeç, Merve Özen, Mert Paldrak, and Banu Yetkin Ekren

Uniform Parallel Machine Scheduling with Sequence Dependent Setup Times: A Randomized Heuristic . . . 812
Beste Yıldız, Levent Kandiller, and Ayhan Özgür Toy

Vehicle Routing Problem with Multi Depot, Heterogeneous Fleet, and Multi Period: A Real Case Study . . . 826
Barış Karadeniz, Mehmet Serdar Erdogan, and Yiğit Kazançoğlu

Water Resource Management Using a Multiperiod Water Pricing Model in Izmir District . . . 837
Zeynep İrem Özen, Berk Sadettin Tengerlek, Damla Yüksel, Efthymia Staiou, and Levent Kandiller

Author Index . . . 849

Artificial Intelligence Applications

A Comparative Analysis on Improving Covid-19 Prediction by Using Ensemble Learning Methods

Elif Kartal
Istanbul University, Istanbul, Turkey
[email protected]

Abstract. This study aims to improve Covid-19 prediction, in terms of distinguishing Covid-19 from Flu, by using several well-known ensemble learning methods: majority voting, bagging, boosting, and stacking. In this scope, the performance of base machine learning models was compared with that of the ensemble models (majority voting, C5.0, stochastic gradient boosting, bagged CART, random forest, and stacking) on a public Covid-19 dataset in which observations are labeled as Covid-19 or Flu. Since the task is a classification problem, supervised machine learning algorithms (logistic regression via a generalized linear model, classification and regression trees, artificial neural networks, and support vector machines) are used as base learners. The Cross-Industry Standard Process Model for Data Mining, which consists of six stages (business understanding, data understanding, data preparation, modeling, evaluation, and deployment), is used as the study method. In the model performance evaluation stage, an additional metric was proposed by considering the accuracy and its change interval (max–min). The performance of the models was discussed in terms of accuracy and the proposed metric. A Shiny application was developed using the best-performing model; it enables users to predict Covid-19 status through a web interface and to use the model interactively. Analyses were performed with R and RStudio. Keywords: Bagging · Boosting · Classification · Covid-19 · Ensemble learning · Majority voting · Stacking

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 3–14, 2022. https://doi.org/10.1007/978-3-030-90421-0_1

1 Introduction

The Covid-19 pandemic has spread all over the world since the first cases were reported in Wuhan, China, in December 2019. Since then, multidisciplinary studies involving fields such as medical sciences, computer science, and informatics have been carried out worldwide on the diagnosis and treatment of Covid-19, with the aim of returning to normal life. Machine learning is one of these areas, and one of its focuses is developing systems that make decisions like a human. If the decision-making of the biological brain is carried one step further into artificial models, more rational and accurate results can be obtained. However, these models can still suffer from high bias and variance issues in terms of performance evaluation. Ensemble learning methods (also called committee-based learning or learning multiple classifier systems [1]) are used to deal with these problems and to improve the performance obtained from machine learning algorithms [2]. The main idea behind ensemble learning can be thought of as consulting several different people and collecting their opinions before making an important decision; it is like taking the opinion of more than one doctor and acting on the majority decision before having an operation. From the machine learning perspective, ensemble learning techniques aggregate model outputs [3]. An ensemble learning algorithm combines two or more machine learning algorithms, of the same or different types, and through parallel or sequential processes can make a weak (base) learner more robust and increase its performance [4].

Majority Voting: This technique, also called max voting, is generally used for classification problems [5]. The predictions of the base learners are collected as votes, and the class receiving the majority of the votes is determined as the final decision (class label). Averaging and weighted-average methods work in a similar way, except that the predictions are averaged (or weighted-averaged) in regression tasks, or the class probabilities are averaged in classification tasks [5]. Furthermore, these techniques can serve as selection strategies for choosing appropriate base learners (dynamically weighted average, winner takes all, Bayesian combination, etc.) [6].

Bagging (Bootstrap Aggregating): The main aim of bagging is to decrease instability through the bootstrap method [3].
The bootstrap sampling method creates several subsets of the data by choosing observations randomly with replacement, so one observation can occur multiple times in a subset as well as in other subsets [7, 8]. Bagging performs this bootstrap procedure with a high-variance machine learning algorithm, in most cases decision trees [9]. The method is good at reducing the variance of the estimated prediction: because aggregation averages many noisy but approximately unbiased models, a reduction in variance is obtained [3, 10].

Boosting: This method combines many weak classifiers sequentially and turns them into a strong classifier [11]. In boosting, the base learner is forced to infer something new about the data, and the training subsets are chosen to accomplish this [12]; that is, the base learner (weak classifier) is trained on a differently weighted version of the training samples [11]. Misclassified observations receive higher weights, while correctly classified ones lose weight [13].

Stacking (Stacked Generalization): Unlike bagging and boosting, the stacking method trains several base learners and then feeds their predictions as inputs to a meta-learner that produces the final result [14]. According to Zhang and Li [15], bagging is used for reducing variance, boosting for reducing bias, and stacking for improving the prediction results.

This study aims to improve Covid-19 prediction, in terms of distinguishing Covid-19 from flu, by using several well-known ensemble learning methods: majority voting, bagging, boosting, and stacking. In this scope, the performance of base machine learning models was compared with that of the ensemble models (majority voting, C5.0, stochastic gradient boosting, bagged CART, random forest, and stacking) on a public Covid-19 dataset in which observations are labeled as Covid-19 or flu [16] (there is no additional "healthy" category in the dataset). Since the task is a classification problem, supervised machine learning algorithms (logistic regression via a generalized linear model, classification and regression trees, artificial neural networks, and support vector machines) are used as base learners. The Cross-Industry Standard Process Model for Data Mining (CRISP-DM), which consists of six stages (business understanding, data understanding, data preparation, modeling, evaluation, and deployment), is used as the study method [17, 18]. In the model performance evaluation stage, an additional metric was proposed by considering the accuracy and its change interval (max–min). The performance of the models was discussed in terms of accuracy and the proposed metric. A Shiny application was developed using the best-performing model; it enables users to predict Covid-19 status (to distinguish between Covid-19 and flu) through a web interface and to use the model interactively. Analyses are performed with R and RStudio.

The rest of the paper is organized as follows: Part II reviews the literature on Covid-19 prediction with ensemble learning as well as other machine learning techniques. The study method is introduced in Part III in terms of the CRISP-DM steps. Performance comparison results of the models are provided in Part IV. Details of the developed Shiny application are given in Part V. The last part covers the conclusion.
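The voting and bagging mechanics described above amount to only a few lines of code. The snippet below is an illustrative, language-neutral sketch in Python (the study's own analyses were run in R with RStudio); the toy one-dimensional dataset and the `fit_stump` base learner are hypothetical stand-ins, not the models used in the paper:

```python
import random
from collections import Counter

def majority_vote(predictions):
    """Final class = the label most of the base learners voted for."""
    return Counter(predictions).most_common(1)[0][0]

def bootstrap_sample(data, rng):
    """Draw len(data) observations with replacement: some repeat, some drop out."""
    return [rng.choice(data) for _ in data]

def fit_stump(train):
    """Toy base learner: a decision stump thresholded at the midpoint of the two
    class means (assumes the Covid-19 class has the higher mean on this feature)."""
    flu = [x for x, y in train if y == "Flu"]
    cov = [x for x, y in train if y == "Covid-19"]
    if not flu or not cov:                 # degenerate resample: one class only
        only = "Covid-19" if cov else "Flu"
        return lambda x: only
    t = (sum(flu) / len(flu) + sum(cov) / len(cov)) / 2
    return lambda x: "Covid-19" if x > t else "Flu"

def bagged_predict(fit, train, x, n_models=25, seed=7):
    """Bagging: fit one model per bootstrap sample, then majority-vote."""
    rng = random.Random(seed)
    votes = [fit(bootstrap_sample(train, rng))(x) for _ in range(n_models)]
    return majority_vote(votes)

# Toy data: one "temperature-like" feature per patient (illustrative only).
train = [(36.8 + 0.1 * i, "Flu") for i in range(8)] + \
        [(38.4 + 0.1 * i, "Covid-19") for i in range(8)]
print(bagged_predict(fit_stump, train, 39.5))  # Covid-19
```

Each bootstrap sample shifts the stump's threshold slightly; averaging the resulting votes is what damps the variance of any single stump, which is the effect described in [3, 10].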

2 Related Work

Technological developments, such as computers with powerful computational capacities and larger amounts of storage, have contributed greatly to the diagnosis and treatment of Covid-19. Moreover, solutions based on big data and on advanced machine learning techniques such as deep learning and transfer learning have become very popular during the pandemic. One of the main research areas in the literature is the use of CT scans and X-ray images for Covid-19 detection/prediction. For these purposes, on the one hand, the performance of single machine learning models is compared with that of ensemble learning models; on the other hand, ensemble learning is used to improve the performance of existing models. In this way, systems can be developed to carry out the detection process automatically and to accelerate it.

Tabrizchi et al. [19] used support vector machines, naïve Bayes, artificial neural networks, multilayer perceptrons, and Convolutional Neural Networks (CNNs), as well as two ensemble learning algorithms: Adaptive Boosting (AdaBoost) and Gradient Boosting Decision Tree (GBDT). Analyses were performed on a dataset of 430 Covid-19 and 550 healthy patients. According to the accuracy values obtained in the performance comparison, SVM (0.992) and CNN (0.976) outperformed the rest, while AdaBoost reached 0.960 and GBDT 0.952.

Tang et al. [20] proposed an ensemble deep learning model (EDL-COVID) based on COVID-Net (a well-known deep learning architecture for Covid-19); by creating multiple prediction models, ensemble learning is expected to overcome some drawbacks of deep learning, such as overfitting, high variance, and generalization errors caused by noise and the limited number of datasets. Weighted Averaging Ensembling (WAE) is proposed by considering the variations in class-level accuracies of different machine learning models. A chest X-ray image dataset is used for the analyses. EDL-COVID obtained better results (accuracy = 95%) than COVID-Net (accuracy = 93.3%) for Covid-19 detection.

Li et al. [21] used a pre-trained VGG-16 deep learning model with the help of transfer learning and the stacked generalization ensemble learning method: multiple deep learning models are trained and then integrated by stacked generalization to obtain the final performance. Unlike the previous studies, multi-class classification is performed with Covid-19, common pneumonia, and normal class labels. The authors stated that their proposed model obtained 93.57% accuracy and that it can improve the performance of deep learning models on multi-class classification tasks.

Das et al. [22] used three different pre-trained CNN models (DenseNet20, Resnet50V2, and Inceptionv3) and combined them with the weighted average ensembling technique. According to the study results, the proposed model obtained 91.62% accuracy, performing better than the single CNN models. In addition, the authors developed a GUI-based application for medical staff to detect Covid-19 cases from chest X-ray images.

Gifani et al. [23] used 15 different CNN models, namely EfficientNets (B0–B5), NasNetLarge, NasNetMobile, InceptionV3, ResNet-50, SeResnet 50, Xception, DenseNet121, ResNext50, and Inception_resnet_v2. A publicly available CT scan dataset consisting of 349 Covid and 397 non-Covid CT scans was used for the analysis. The authors used majority voting for the ensemble learning part of the study, combining different numbers of single models (3, 5, 7, etc.) to create the ensemble models.
According to the study results, the ensemble model that combines the predictions of five deep transfer learning architectures (EfficientNetB0, EfficientNetB3, EfficientNetB5, Inception-ResNet-v2 and Xception) with majority voting has higher accuracy (0.85), precision (0.857) and recall (0.852) than the single models. Rafi [24] used a publicly available grayscale X-ray image dataset from a Kaggle competition to identify Covid-19 cases. Although the dataset has multiple categories, 1409 images labeled as Covid-19 and normal were used. The performance of two CNN models, ResNet-152 and DenseNet-121, was compared with an ensemble of the two. Findings indicate that the proposed ensemble deep learning model has better accuracy (98.43%), specificity (99.23%) and sensitivity (98.71%) than the single models. Berliana and Bustamam [25] used Support Vector Classification, Random Forest and k-Nearest Neighbors as base learners and Support Vector Classification as meta-learner for stacking. Model performances were compared on a dataset of 1140 chest X-ray images (570 Covid-19, 570 normal) and 2400 CT scans (1200 Covid-19, 1200 normal), with Gabor features extracted from the images. According to the study results, the ensemble model has an accuracy above 97% for CT scans and 99% for X-ray images. Shrivastava et al. [26] compared the performance of three different CNN architectures, namely ResNet50, InceptionV4 and EfficientNetB0, with their proposed ensemble model based on max voting. Three different datasets were merged and used in the analysis (with


CT scans and X-ray images). The study results show that the ensemble model achieved better performance, with 97.47% accuracy, 98.18% sensitivity and 96.6% specificity. AlJame et al. [27] proposed an ensemble learning model (ERLX) using the stacking method to diagnose Covid-19 from routine blood tests. Extra Trees, Random Forest and Logistic Regression are used as base learners and XGBoost as meta-learner. According to the study results, ERLX performed successfully, with 99.88% accuracy, 98.72% sensitivity and 99.99% specificity.
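Across these studies the combiners themselves are small computations once per-model outputs exist. A hedged Python sketch of weighted averaging (the idea behind the WAE of [20] and [22]) and majority voting (as in [23, 26]); model probabilities, weights and labels below are illustrative, not values from those papers:

```python
from collections import Counter

def weighted_average(prob_lists, weights):
    """Combine per-model class-probability vectors using accuracy-based weights."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    return [sum(w * p[k] for p, w in zip(prob_lists, weights)) / total
            for k in range(n_classes)]

def majority_vote(labels):
    """Return the class predicted by most models (ties broken by first seen)."""
    return Counter(labels).most_common(1)[0][0]

# Three hypothetical models scoring [P(covid), P(non-covid)] for one image:
probs = [[0.80, 0.20], [0.60, 0.40], [0.90, 0.10]]
weights = [0.95, 0.90, 0.93]           # e.g. per-model validation accuracies
combined = weighted_average(probs, weights)
wae_label = max(range(len(combined)), key=combined.__getitem__)

# Five hypothetical model predictions for one CT scan:
vote_label = majority_vote(["covid", "covid", "non-covid", "covid", "non-covid"])
```

Weighting by validation accuracy lets stronger models dominate the average; plain majority voting treats all models equally, which is why [23] experimented with different subset sizes.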

3 Method

In this study, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is used as the study method. CRISP-DM has six stages, namely Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation and Deployment. As the Introduction and Related Work parts of this study provide the business/problem understanding step of CRISP-DM, this section starts with data understanding and preparation, and each step is explained briefly.

3.1 Data Understanding and Data Preparation

The COVID-19 Dataset is used for the analysis [16, 28]. The dataset has two class labels: COVID-19 and Flu; there is no additional "healthy" category. COVID-19 cases (n = 68) and Flu cases (n = 62) were obtained from the Italian Society of Medical and Interventional Radiology (SIRM) database and the Influenza Research Database (IRD), respectively. The cases were first combined and then shuffled randomly. Initially, the dataset had the following attributes: Number, Age, Gender, Fever, Dyspnea, Cough, PO2, CRP, Asthenia, Leukopenia, Exposure to Covid-19, High risk zone, Temp, Blood Test, History and Decision Label. However, because not all patients underwent the same medical tests, the dataset has many missing values (Fig. 1). Therefore, attributes with less than 90% missing values were selected for the analyses. Then nonparametric missing value imputation using random forest (missForest), which is based on an out-of-bag (OOB) imputation error estimate [29, 30], was used to impute the missing values. Moreover, High risk zone and Dyspnea were removed because of their imbalanced categories. The predictive and target attributes of the dataset are given in Table 1.
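missForest fits a random forest per variable and iterates until its out-of-bag error stops improving; the control flow can be sketched with a simple median predictor standing in for the forest (toy column, not the study's data):

```python
import statistics

def impute_iteratively(column, predict=statistics.median, max_iter=10, tol=1e-6):
    """Fill None entries by repeatedly predicting them from the completed data.
    missForest fits a per-variable random forest here; `predict` is a stand-in,
    and the real stopping rule watches the OOB imputation error."""
    observed = [v for v in column if v is not None]
    # Initial fill from observed values only:
    filled = [v if v is not None else predict(observed) for v in column]
    for _ in range(max_iter):
        guess = predict(filled)                      # "refit" on completed data
        new = [orig if orig is not None else guess for orig in column]
        if all(abs(a - b) <= tol for a, b in zip(new, filled)):
            break                                    # imputations stabilized
        filled = new
    return filled

po2 = [96.0, None, 88.0, 92.0, None]                 # toy PO2 column with gaps
completed = impute_iteratively(po2)
```

With the median stand-in this converges after one pass; the random forest version additionally uses the other attributes of each row as predictors, which is what makes missForest effective on mixed-type clinical data.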
3.2 Modeling

Four base learners/machine learning algorithms are used, namely Logistic Regression (via a Generalized Linear Model, GLM), Classification and Regression Tree (CART), Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) with a radial basis kernel function [2]. The performance of these models was then compared with that of the ensemble models (majority voting, C5.0, Stochastic Gradient Boosting, Bagged CART, Random Forest and stacking models). The R packages used in the study, in alphabetical order, are: caret [31], caretEnsemble [32], dplyr [33], forcats [34], gbm [35], ggplot2 [36], hrbrthemes [37], kernlab [38], missForest [29, 30], randomForest [39], shiny [40], shinyjs [41], shinythemes [42] and visdat [43]. Analyses were performed with R and RStudio [44, 45].
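A fair comparison rests on identical resampling for every learner. The study used caret in R; the repeated 5-fold scheme itself is language-agnostic and can be sketched in Python (n = 130 matches the 68 + 62 cases):

```python
import random

def repeated_kfold(n, k=5, repeats=3, seed=42):
    """Yield (train_idx, test_idx) pairs for `repeats` rounds of k-fold CV."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)                 # fresh random partition per repeat
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

splits = list(repeated_kfold(n=130))     # 3 repeats x 5 folds = 15 resamples
```

Fixing the seed means every model sees exactly the same 15 train/test splits, which is what makes the per-resample accuracies comparable across learners.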

8

E. Kartal

Fig. 1. Missing values (NAs) in the dataset

Table 1. Attributes of the dataset

Predictive attributes:
Age: Numeric
Gender: Categorical
Fever: Categorical
Cough: Categorical
PO2: Numeric
Asthenia: Categorical
Leukopenia: Categorical
Temp: Numeric
History: Categorical

Target attribute:
Decision Label: Categorical

3.3 Evaluation

Repeated 5-fold cross-validation (3 times) is used as the performance evaluation technique. Accuracy is used together with a proposed metric (pm), defined below, which combines the mean accuracy with its range of variation. Through the proposed metric, the average accuracy is, in a sense, normalized; a higher pm value indicates better performance. For models i = 1, 2, ..., n:

pm_i = mean(accuracy_i) / (max(accuracy_i) - min(accuracy_i))
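The metric is straightforward to compute from the per-resample accuracies; a Python sketch (the accuracy values below are illustrative, not the study's):

```python
def pm(accuracies):
    """Proposed metric: mean accuracy divided by its range across resamples."""
    mean_acc = sum(accuracies) / len(accuracies)
    acc_range = max(accuracies) - min(accuracies)
    if acc_range == 0:          # all resamples identical; the ratio is undefined
        return float("inf")
    return mean_acc / acc_range

# 15 resample accuracies (5-fold x 3 repeats) for a hypothetical model:
acc = [0.92, 0.96, 0.96, 0.96, 1.00, 0.92, 0.96, 1.00,
       0.96, 0.96, 0.92, 0.96, 0.96, 1.00, 0.96]
score = pm(acc)
```

The metric rewards models that are both accurate on average and stable across resamples: the same mean with a wider accuracy range yields a lower pm.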

4 Findings

The accuracy values of the models obtained in the performance evaluation stage are shown with boxplots in Fig. 2. Dots on the boxes indicate the mean accuracy values of the models; dots outside the boxes are outliers. The accuracy of the generalized linear model (glm) has a wider range, with an outlier on the left side (Fig. 2). On the one hand, the support vector machine model (svm) performed better than the other base learners in terms of mean accuracy; on the other hand, the random forest model (rf) has the highest mean accuracy among the ensemble models.

Fig. 2. Performance evaluation (accuracy)

When base learners and ensemble techniques are compared in terms of mean accuracy, the base learners score lower than the ensemble models; the only exception is the majority voting technique (maj_voting), which the support vector machine model (svm) outperforms. The highest mean accuracy belongs to the random forest model (rf), while both the lowest accuracy and the lowest mean accuracy belong to the generalized linear model (glm) (Fig. 3). The performance improvement from the best performing base learner (svm) to the best performing ensemble model (rf) is approximately 2% (0.021). The proposed metric values of the models (pm_i) obtained in the performance evaluation stage are shown with a bar chart in Fig. 4. According to the overall performance evaluation made with the proposed metric (pm), the generalized linear model (glm) has the lowest pm value, while the stacking model with random forest as meta-learner (stacking_rf) has the highest.


Fig. 3. Performance evaluation (Mean Accuracy)

Fig. 4. Performance evaluation (proposed metric)

5 Deployment: Covid-19 Prediction Application

A Shiny application [46, 47] (Fig. 5) was developed using the best performing model according to the proposed metric (stacking_rf). The application enables users to predict


Covid-19 status (to distinguish between Covid-19 and flu) through a web interface and to use it interactively (https://erdalbalaban.shinyapps.io/covid19prediction/).

Fig. 5. A screenshot from the shiny application

6 Conclusions

This study aims to improve Covid-19 predictions, in terms of the distinction between Covid-19 and Flu, by using several well-known ensemble learning methods, namely majority voting, bagging, boosting and stacking. The performance of four base learners was compared with that of the ensemble models using two performance evaluation measures: the accuracy value, which is frequently used in the literature, and the metric proposed in this study, which is a kind of normalized version of the average accuracy. The highest mean accuracy belongs to the random forest model (mean accuracy_rf = 0.960), while both the lowest accuracy and the lowest mean accuracy belong to the generalized linear model (mean accuracy_glm = 0.884) (Fig. 3). The performance improvement from the best performing base learner (mean accuracy_svm = 0.939) to the best performing ensemble model (rf) is approximately 2% (0.021). According to the overall performance evaluation made with the proposed metric (pm), the generalized linear model (pm_glm = 3.2) has the lowest pm value, while the stacking model with random forest as meta-learner (pm_stacking_rf = 14.8) has the highest. Also, a Shiny application (Fig. 5) was developed using the best performing model according to the proposed metric (stacking_rf). The application enables users to predict Covid-19 status through a web interface and to use it interactively (https://erdalbalaban.shinyapps.io/covid19prediction/). This study shows that the performance of single learners can be improved with ensemble learning models. In the literature, many studies focus on predicting/identifying


Covid-19 by using deep ensemble models on medical images such as CT scans and chest X-rays. However, there is also a need to make predictions for Covid-19 by performing machine learning analysis on datasets of patients' clinical data, so more public datasets of this kind would benefit researchers. Different datasets and machine learning techniques can be used in future studies. Also, since this study focuses on the distinction between Covid-19 and Flu, a "None" or "Healthy" category could be added alongside these. It is believed that the metric proposed in this study may be used for performance evaluation in the field of machine learning. In addition, the developed Shiny application may inspire other researchers to do similar studies.

Acknowledgement. The author would like to thank Prof. Dr. M. Erdal BALABAN for his valuable comments and suggestions.

References

1. Zhou, Z.-H.: Ensemble Methods: Foundations and Algorithms, 1st edn. CRC Press, Boca Raton, FL (2012)
2. Brownlee, J.: How to Build an Ensemble of Machine Learning Algorithms in R. Machine Learning Mastery, 07 February 2016. https://machinelearningmastery.com/machine-learning-ensembles-with-r/. Accessed 14 Feb 2021
3. Narayanachar Tattar, P.: Hands-On Ensemble Learning with R, 1st edn. Packt Publishing Ltd., Birmingham, UK (2018)
4. Kaushik, S.: Ensemble Models in machine learning? (with code in R). Analytics Vidhya, 15 February 2017. https://www.analyticsvidhya.com/blog/2017/02/introduction-to-ensembling-along-with-implementation-in-r/. Accessed 10 Feb 2021
5. Singh, A.: Ensemble Learning | Ensemble Techniques. Analytics Vidhya, 18 June 2018. https://www.analyticsvidhya.com/blog/2018/06/comprehensive-guide-for-ensemble-models/. Accessed 31 Jan 2021
6. Huang, F., Xie, G., Xiao, R.: Research on ensemble learning. In: 2009 International Conference on Artificial Intelligence and Computational Intelligence, November 2009, vol. 3, pp. 249–252 (2009). https://doi.org/10.1109/AICI.2009.235
7. Joshi, P.: Bootstrap Sampling | Bootstrap Sampling In Machine Learning. Analytics Vidhya, 12 February 2020. https://www.analyticsvidhya.com/blog/2020/02/what-is-bootstrap-sampling-in-statistics-and-machine-learning/. Accessed 29 June 2021
8. Saini, B.: Ensemble Techniques - Bagging (Bootstrap aggregating). Medium, 29 January 2021. https://medium.datadriveninvestor.com/ensemble-techniques-bagging-bootstrap-aggregating-c7a7e26bdc13. Accessed 29 June 2021
9. Brownlee, J.: Bagging and Random Forest Ensemble Algorithms for Machine Learning. Machine Learning Mastery, 21 April 2016. https://machinelearningmastery.com/bagging-and-random-forest-ensemble-algorithms-for-machine-learning/. Accessed 29 June 2021
10. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, New York (2008)
11. Shrivastava, R.: Comparative study of boosting and bagging based methods for fault detection in a chemical process. In: 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, March 2021, pp. 674–679 (2021). https://doi.org/10.1109/ICAIS50930.2021.9395905
12. Oakley, J.G.: Access. In: Waging Cyber War, pp. 101–114. Apress, Berkeley, CA (2019). https://doi.org/10.1007/978-1-4842-4950-5_8

A Comparative Analysis on Improving Covid-19 Prediction

13

13. Sultan Bin Habib, A.-Z., Tasnim, T., Billah, M.M.: A study on coronary disease prediction using boosting-based ensemble machine learning approaches. In: 2019 2nd International Conference on Innovation in Engineering and Technology (ICIET), Dhaka, Bangladesh, December 2019, pp. 1–6 (2019). https://doi.org/10.1109/ICIET48527.2019.9290600
14. Yildirim, P., Birant, K.U., Radevski, V., Kut, A., Birant, D.: Comparative analysis of ensemble learning methods for signal classification. In: 2018 26th Signal Processing and Communications Applications Conference (SIU), May 2018, pp. 1–4 (2018). https://doi.org/10.1109/SIU.2018.8404601
15. Zhang, T., Li, J.: Credit risk control algorithm based on stacking ensemble learning. In: 2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA), Shenyang, China, January 2021, pp. 668–670 (2021). https://doi.org/10.1109/ICPECA51329.2021.9362514
16. Hamed, A.: COVID-19 Dataset. Harvard Dataverse, vol. V1, p. UNF:6:RAlD/Ta6J+9xN/Ok+6Cr7A== [fileUNF] (2020). https://doi.org/10.7910/DVN/LQDFSE
17. Balaban, M.E., Kartal, E.: Veri Madenciliği ve Makine Öğrenmesi Temel Algoritmaları ve R Dili ile Uygulamaları, 2nd edn. Çağlayan Kitabevi, Beyoğlu, İstanbul (2018)
18. Shearer, C.: The CRISP-DM model: the new blueprint for data mining. J. Data Warehousing 5(4), 13–22 (2000)
19. Tabrizchi, H., Mosavi, A., Szabo-Gali, A., Felde, I., Nadai, L.: Rapid COVID-19 diagnosis using deep learning of the computerized tomography scans. In: 2020 IEEE 3rd International Conference and Workshop in Óbuda on Electrical and Power Engineering (CANDO-EPE), November 2020, pp. 000173–000178 (2020). https://doi.org/10.1109/CANDO-EPE51100.2020.9337794
20. Tang, S., et al.: EDL-COVID: ensemble deep learning for COVID-19 case detection from chest X-ray images. IEEE Trans. Industr. Inf. 17(9), 6539–6549 (2021). https://doi.org/10.1109/TII.2021.3057683
21. Li, X., Tan, W., Liu, P., Zhou, Q., Yang, J.: Classification of COVID-19 chest CT images based on ensemble deep learning. J. Healthcare Eng. 2021, e5528441 (2021). https://doi.org/10.1155/2021/5528441
22. Das, A.K., Ghosh, S., Thunder, S., Dutta, R., Agarwal, S., Chakrabarti, A.: Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Anal. Appl. 24(3), 1111–1124 (2021). https://doi.org/10.1007/s10044-021-00970-4
23. Gifani, P., Shalbaf, A., Vafaeezadeh, M.: Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans. Int. J. Comput. Assist. Radiol. Surg. 16(1), 115–123 (2020). https://doi.org/10.1007/s11548-020-02286-w
24. Rafi, T.H.: An ensemble deep transfer-learning approach to identify COVID-19 cases from chest X-ray images. In: 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Via del Mar, Chile, October 2020, pp. 1–5 (2020). https://doi.org/10.1109/CIBCB48159.2020.9277695
25. Berliana, A.U., Bustamam, A.: Implementation of stacking ensemble learning for classification of COVID-19 using image dataset CT scan and lung X-ray. In: 2020 3rd International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, November 2020, pp. 148–152 (2020). https://doi.org/10.1109/ICOIACT50329.2020.9332112
26. Shrivastava, P., Singh, A., Agarwal, S., Tekchandani, H., Verma, S.: Covid detection in CT and X-ray images using ensemble learning. In: 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, April 2021, pp. 1085–1090 (2021). https://doi.org/10.1109/ICCMC51019.2021.9418308

14

E. Kartal

27. AlJame, M., Ahmad, I., Imtiaz, A., Mohammed, A.: Ensemble learning model for diagnosing COVID-19 from routine blood tests. Inf. Med. Unlocked 21, 100449 (2020). https://doi.org/10.1016/j.imu.2020.100449
28. Hamed, A., Sobhy, A., Nassar, H.: Accurate classification of COVID-19 based on incomplete heterogeneous data using a KNN variant algorithm. Arab. J. Sci. Eng. 46, 1–12 (2020). https://doi.org/10.1007/s13369-020-05212-z
29. Stekhoven, D.J.: missForest: Nonparametric Missing Value Imputation using Random Forest (2013)
30. Stekhoven, D.J., Buehlmann, P.: MissForest - non-parametric missing value imputation for mixed-type data. Bioinformatics 28(1), 112–118 (2012)
31. Kuhn, M.: caret: Classification and Regression Training (2020). https://CRAN.R-project.org/package=caret
32. Deane-Mayer, Z.A., Knowles, J.E.: caretEnsemble: Ensembles of Caret Models (2019). https://CRAN.R-project.org/package=caretEnsemble
33. Wickham, H., François, R., Henry, L., Müller, K.: dplyr: A Grammar of Data Manipulation (2021). https://CRAN.R-project.org/package=dplyr
34. Wickham, H.: forcats: Tools for Working with Categorical Variables (Factors) (2021). https://CRAN.R-project.org/package=forcats
35. Greenwell, B., Boehmke, B., Cunningham, J., GBM Developers: gbm: Generalized Boosted Regression Models (2020). https://CRAN.R-project.org/package=gbm
36. Wickham, H.: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, New York (2016). https://ggplot2.tidyverse.org
37. Rudis, B.: hrbrthemes: Additional Themes, Theme Components and Utilities for "ggplot2" (2020). https://CRAN.R-project.org/package=hrbrthemes
38. Karatzoglou, A., Smola, A., Hornik, K., Zeileis, A.: kernlab - an S4 package for kernel methods in R. J. Stat. Softw. 11(9), 1–20 (2004)
39. Liaw, A., Wiener, M.: Classification and Regression by randomForest. R News 2(3), 18–22 (2002)
40. Chang, W., et al.: shiny: Web Application Framework for R (2021). https://CRAN.R-project.org/package=shiny
41. Attali, D.: shinyjs: Easily Improve the User Experience of Your Shiny Apps in Seconds (2020). https://CRAN.R-project.org/package=shinyjs
42. Chang, W.: shinythemes: Themes for Shiny (2021). https://CRAN.R-project.org/package=shinythemes
43. Tierney, N.: visdat: Visualising whole data frames. JOSS 2(16), 355 (2017). https://doi.org/10.21105/joss.00355
44. R Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing (2021). https://www.R-project.org/
45. RStudio: RStudio | Open source & professional software for data science teams (2021). https://rstudio.com/
46. RStudio: Shiny (2020). https://shiny.rstudio.com/
47. RStudio: Shinyapps.io (2020). https://www.shinyapps.io/

Developing Intelligent Autonomous Vehicle Using Mobile Robots' Structures

Florin Popişter(B), Mihai Steopan, and Sergiu Ovidiu Someşan

Department of Design Engineering and Robotics, Technical University of Cluj-Napoca, Cluj, Romania
{florin.popister,mihai.steopan}@muri.utcluj.ro

Abstract. This paper analyses the combined use of CAD, CAQ and CAE engineering tools in creating a vehicle that can navigate on its own with the help of ultrasonic sensors, and in improving battery charging time with the help of solar panels. The main objective of this work was to build a smart autonomous vehicle with functional solar panels, making it possible to explore places that people cannot access and to take part in military applications such as landmine detection. The methods used to achieve the proposed theme are voice of the customer, quality criticism, quality function deployment, benchmarking, the Pugh method and, finally, 3D printing.

Keywords: Autonomous robots · 3D printer · Solar panels · Vehicle

1 Introduction

A mobile robot is a robot capable of locomotion; mobile robots are considered a subcategory of robotics. Mobile robots have the ability to move in their environment, not being fixed to one physical location. Mobile robots can also be autonomous, which means they can navigate an uncontrolled environment without the need for guidance. Alternatively, mobile robots can rely on guidance devices that allow them to travel a predefined navigation route in a relatively controlled space (AGV, automated guided vehicle) [1]. Mobile robots, Fig. 1, are a focus of current research in robotics; almost every major university has one or more laboratories that deal with mobile robot research. Besides research, mobile robots can also be found in industrial, military, security and civilian settings [2]. The purpose of the present research was to design and create a vehicle that can navigate autonomously or through teleoperation with the help of various sensors. To improve movement autonomy and battery charging time, a set of solar panels was added (Fig. 2). Solar panel systems in autonomous robotic vehicles have been used for years. A real example is the Mars rovers, where most of the energy is generated by photovoltaic panels. Even so, if there is no sunlight, the rover has to minimize power consumption, as the batteries cannot be charged after they have been completely depleted. The use of

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 15–23, 2022. https://doi.org/10.1007/978-3-030-90421-0_2

16

F. Popişter et al.

Fig. 1. Mobile robot [3]

Fig. 2. The Mars rovers [4]

rechargeable batteries in space missions began with the "Mars Exploration Rovers". NASA has inspired different generations of exploration vehicles; one example is the K9 rover, a rover for remote scientific exploration and autonomous operation [5].

2 The Current State

The problem this paper addresses is the development and creation of an intelligent autonomous vehicle. The vehicle is to be built so that it can navigate autonomously, bypass the obstacles it encounters, transmit information in real time with the help of an integrated camera, and charge itself with the help of the solar panels on its upper case. The vehicle is made in such a way that it can be used in hostile environments, such as areas with slopes, swamps, and even sand dunes.

Developing Intelligent Autonomous Vehicle Using Mobile Robots

17

The concept that was developed refers to the rovers that are launched into space, especially to Mars or the Moon. The aim is to improve transport in areas that are difficult for people to reach and to be completely independent, not depending on an operator. To achieve the above goals, the DMAIC framework was used. DMAIC stands for Define, Measure, Analyse, Improve and Control. Following the steps of DMAIC, a concept and a mock-up were developed; based on the mock-up, a real full-scale mobile platform was built using 3D printing technology for the chassis and supporting elements. In order to complete the DMAIC framework, Fig. 3, and develop a viable concept, several competitive development methods were used. Amongst these are [6, 7, 8, 9]:

• VOCT (Voice of the Customer Table);
• Brainstorming;
• AHP (Analytic Hierarchy Process);
• QFD (Quality Function Deployment);
• Benchmarking.

To support the implementation of the methods, the engineering aspects and the creation of the mock-up, several software packages were used:

• Qualica QFD – for concept development;
• PTC Mathcad and DELMIA – for the dimensioning and creation of the 3D model;
• Arduino CC – for the programming and control of the physical model.

The first step was to identify the needs this robot should meet, based on the current state of the art and feedback from specialists. The top three needs identified using the AHP method were:

1. To be able to move in hostile environments – 16.3% relevance
2. To be able to gather and transmit data and information to the operator – 12.8% relevance
3. To be able to operate autonomously – 10.1% relevance

The next step of the framework involves defining the technical parameters and a functional assessment for the mobile platform; for this, the brainstorming method was used. After defining the characteristics and functions, they were evaluated using the QFD method in order to determine which are the most important. The top three characteristics are:

1. Autonomy greater than 30 min – 12.3%
2. Control at more than 100 m – 10.4%
3. Gross dimensions less than 300 mm – 9%
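The AHP relevance percentages come from pairwise comparisons of the needs. A sketch of the row-geometric-mean approximation of the priority vector, with an illustrative 3 x 3 reciprocal matrix (the paper's actual judgments are not given):

```python
import math

def ahp_priorities(matrix):
    """Approximate the AHP priority vector via normalized row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative reciprocal matrix over three needs: hostile-terrain mobility,
# data transmission, autonomous operation (entry [i][j] = importance of i vs j).
comparisons = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]
weights = ahp_priorities(comparisons)
```

The geometric-mean method is a common stand-in for the exact principal-eigenvector computation; with consistent judgments the two coincide, and the resulting weights preserve the ranking order reported in the text.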


The top three functions are:

1. High recording frequency – 12%
2. Easily replaceable parts – 10%
3. Ability to move on rough terrain – 9.8%

Based on the functions, a set of ideas was generated for each function. For example, for the function "be able to detect obstacles", three options were considered: ultrasonic sensors, infrared sensors or video detection. Since video detection requires high processing power, leading to decreased autonomy, only the first two types of sensors were used in the final concept.

Fig. 3. DMAIC framework [6]

By combining ideas from each of the functions, several concepts were generated. The four concepts most likely to work were evaluated using the Pugh method in order to identify the most adequate one. The final concept consists of a micro mobile platform on wheels with a simple casing, so it is easy to replace in the field; an Arduino-Mega based controller with a Bluetooth module attached; four ultrasonic sensors; and one Wi-Fi IR camera. The mobile platform is actuated by two micro DC motors: one generates the movement of the platform and the other actuates the arms used for surpassing obstacles. The physical parts were correlated with the functions of the mobile platform using a QFD. Knowing the costs of each part and their correlation index, a cost/importance diagram was created. As can be seen in Fig. 4 below, the concept is well balanced with respect to the overall cost and the relevance of each component. Having a list of technical characteristics and a minimal list of components, a 3D model was created, Fig. 5. Once the robot's components were generated in the 3D area,


the search and acquisition steps followed, with most of the components bought from specialized websites such as Robofu, Ardushop and Pololu.
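The Pugh evaluation mentioned above reduces to scoring each candidate against a datum concept as better (+1), same (0) or worse (-1) per criterion. A sketch; concept names, criteria and scores are illustrative, not the paper's actual matrix:

```python
def pugh_select(scores):
    """Return (best concept, net totals) from per-criterion +1/0/-1 scores."""
    totals = {name: sum(s) for name, s in scores.items()}
    return max(totals, key=totals.get), totals

# Criteria order: autonomy, control range, size, cost (hypothetical scores
# against a baseline tracked-chassis datum).
concepts = {
    "tracked datum variant": [+1, 0, -1, -1],
    "wheeled with arms":     [+1, +1, 0, 0],
    "legged":                [0, 0, -1, -1],
    "hovercraft":            [-1, +1, -1, -1],
}
best, totals = pugh_select(concepts)
```

The net score is a deliberately coarse screen: it surfaces the concept with the fewest trade-offs against the datum before detailed engineering effort is spent.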

Fig. 4. Cost/importance diagram

3 Characteristics of the Mock-Up

To operate the robot, two electric motors are required to drive the entire platform and the side arms. Considering the construction of the robot, the following values were obtained:

– outer radius of the wheels: r = 35 mm
– correction coefficient: k = 1.5
– robot's acceleration: a = 1.5 m/s²
– robot's mass: m = 0.88 kg
– medium speed: v = 1.5 km/h
– angular speed: ω = 11.9 rad/s
– inertia force: Fi = 1.32 N
– necessary power: P = Fi·v·k = 2.97 W
– torque: Mt = P/ω = 0.24 N·m
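The chain above can be checked numerically. Note that P = Fi·v·k = 2.97 W only follows if v is taken as 1.5 m/s; with v = 1.5 km/h (about 0.42 m/s) the result would be about 0.83 W. The sketch below therefore assumes v = 1.5 m/s:

```python
# Motor sizing check for the mobile platform (values from the list above).
m = 0.88          # robot mass, kg
a = 1.5           # acceleration, m/s^2
v = 1.5           # speed, m/s (assumed; see the unit note above)
k = 1.5           # correction coefficient
omega = 11.9      # wheel angular speed, rad/s

Fi = m * a        # inertia force, N
P = Fi * v * k    # necessary power, W
Mt = P / omega    # required torque at the wheel, N*m
```

The computed torque is about 0.25 N·m, matching the rounded 0.24 N·m quoted in the list.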

The solar panels used are from Asosmos. They have a power of 10 W at a voltage of 5 V, which results in a current of 2 A; these values are for a single panel. The panels are very light and thin, with a thickness of 6 mm. The solar cells are monocrystalline and are sealed with an epoxy surface. Thanks to the epoxy layer, the panels are also rated IP64, which means they are dust-tight and splash-resistant.


Fig. 5. 3D model of the concept

Ultrasonic sensors are among the most widely used sensors for measuring distance. The sensor emits ultrasound at a high frequency of 40,000 Hz, which travels through the air; if it encounters an obstacle, the sound bounces back to the module, so, knowing the speed of sound, the distance to the object can be calculated. Some technical features:

– supply voltage: 5 V
– current consumption: 15 mA
– error: 3 mm

The batteries used are 3000 mAh Li-ion cells with a voltage of 3.7 V. The concept has rear-wheel drive, coupled to a motor through a two-stage worm-cylindrical gearbox. For the front axle, a motor is used that turns 45° to the left and 45° to the right.
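The time-of-flight calculation the sensor relies on is simple; a sketch assuming a 40 kHz module of the common HC-SR04 type (not named in the paper) and a speed of sound of 343 m/s in air at roughly 20 °C:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degC (assumed)

def echo_to_distance_cm(echo_time_s):
    """Distance from echo round-trip time; the pulse travels out and back,
    so the one-way distance is half the total path."""
    return SPEED_OF_SOUND * echo_time_s / 2 * 100

d = echo_to_distance_cm(0.00117)   # an echo after 1.17 ms -> ~20 cm obstacle
```

On a microcontroller the same arithmetic runs on the measured pulse width of the sensor's echo pin; temperature shifts the speed of sound by roughly 0.6 m/s per °C, which bounds the achievable accuracy.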


The motor used to drive this mass is a metal one without a gearbox, with a maximum supply voltage of 6 V. It starts rotating at 2.5 V, its no-load speed is 2200 rpm, and it weighs 20 g. The two-stage worm-cylindrical gearbox with inclined teeth was chosen, driven by a transmission through V-belts. The power at the drive belt wheel is 720 W and its speed is 1700 rpm. Gear lubrication is done by immersing the gears in an oil bath.

Input data: ma = 1.4 kg; v = 5000 m/min;

Pmin = ma · a · v = 583.333 W    (3.1)

Pm = 720 W. Initial data: Pm = 0.72 kW; nm = 1700 rpm; u = 100 (total transmission ratio); Lh = 8000 h.

Calculus of the transmission ratio on the two stages:

u = itc · ured    (3.2)

itc = 2    (3.3)

ured = u / itc = 50    (3.4)

with ηtc = 0.95 and ηc = 0.97.

Power calculus:

P1 = ηtc · ηrul · Pm = 0.677 kW    (3.5)

P2 = ηtc · ηrul² · ηm · Pm = 0.563 kW    (3.6)

P3 = ηtc · ηrul³ · ηm · ηc · Pm = 0.541 kW    (3.7)

Speed calculus:

n1 = nm / itc = 863.147 rpm    (3.8)

n2 = nm / (itc · uI) = 76.16 rpm    (3.9)

n3 = nm / (itc · uI · uII) = 17 rpm    (3.10)

Calculus of torques:

Tt1 = 9,550,000 · P1 / n1 = 7.492 · 10³ N·mm    (3.11)

Tt2 = 9,550,000 · P2 / n2 = 7.061 · 10⁴ N·mm    (3.12)

Tt3 = 9,550,000 · P3 / n3 = 3.038 · 10⁵ N·mm    (3.13)
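The torque values follow from Tt = 9,550,000 · P / n, with P in kW, n in rpm and Tt in N·mm; a numeric check using the paper's power and speed values:

```python
def torque_nmm(power_kw, speed_rpm):
    """Shaft torque in N*mm from power in kW and rotational speed in rpm."""
    return 9.55e6 * power_kw / speed_rpm

Tt1 = torque_nmm(0.677, 863.147)   # input-shaft torque
Tt2 = torque_nmm(0.563, 76.16)     # intermediate-shaft torque
Tt3 = torque_nmm(0.541, 17.0)      # output-shaft torque
```

The results reproduce the quoted 7.492·10³, 7.061·10⁴ and 3.038·10⁵ N·mm to within rounding of the input values: torque grows down the gear train as speed drops faster than power is lost.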

Thanks to the use of the DMAIC framework, the choice of material and the design of the case are optimal. To make the upper case, the Catia V5 and PrusaSlicer applications were used, and the parts were printed on a Prusa 3D printer. Because the case exceeds the size of the printing bed, the model had to be broken down into two parts, the front and the back; in addition, a camera stand was needed. All these components were created in the Catia V5 software and are presented in the following figures (Fig. 6). Good results were also obtained in the recording area: the resolution provided by the built-in camera is 720 × 480 in normal light conditions, and in diminished light the camera tries to maintain a high clarity of the image thanks to its infrared

Fig. 6. Intelligent autonomous vehicle


sensors incorporated in the case. With the help of the camera, the robot also transmits audio signals via the camera's microphone. Regarding control and transmission, the results are acceptable: during testing, a motor speed of over 200 rpm and a speed of the entire assembly higher than 1.2 m/s were obtained on a 6 V supply. The micro metal HPCB motors have a long life span due to their carbon brushes. The next step in creating a functioning mobile platform was programming the internal microcontroller (Arduino Nano) and creating the interface software used to control the mobile platform from a phone or a tablet.

4 Conclusions

This work has succeeded in making the intelligent autonomous vehicle operate within the established parameters, detect obstacles and report how far it is from them, and operate on different terrain surfaces, as well as in creating a quality case usable for the concept. The combined use of CAD, CAM, CAE and CAQ tools, methods and software packages proved to have several advantages over traditional product development techniques: it focused the development process on the most relevant aspects, and the final concept is balanced with respect to characteristics, functions and cost of parts. The mock-up built from the final concept has a low weight (887 g), small dimensions (200 × 200 × 100 mm) and high autonomy (more than 30 min).

References
1. http://www.robots.ox.ac.uk/
2. Moubarak, P.M., Ben-Tzvi, P.: Adaptive manipulation of a hybrid mechanism mobile robot. http://rmlab.org/pdf/C22_ROSE_2011.pdf
3. https://robotnik.eu/products/mobile-robots/summit-xl-hl/
4. https://www.nasa.gov
5. de Mateo Sanguino, T., González Ramos, J.E.: Smart host microcontroller for optimal battery charging in a solar-powered robotic vehicle. IEEE/ASME Trans. Mechatron. 18, 1039–1049 (2013)
6. https://www.qualica.net/cms/en/help/tools/qfd/
7. Popescu, D., Popescu, S., Bacali, L., Dragomir, M.: Home "smartness" - helping people with special needs live independently. In: Managing Intellectual Capital and Innovation Conference, 27–29 May 2015, Bari, Italy (2015)
8. Popescu, S., Rusu, D., Dragomir, M., Popescu, D., Nedelcu, S.: Competitive development tools in identifying efficient educational interventions for improving pro-environmental and recycling behavior. Int. J. Environ. Res. Public Health 17(1), 156 (2020)
9. Dragomir, M., Neamtu, C., Popescu, S., Popescu, D., Dragomir, D.: With the trio of standards now complete, what does the future hold for integrated management systems? In: Durakbasa, N.M., Günes Gencyilmaz, M. (eds.) ISPR 2018, pp. 769–778. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-92267-6_62

Digital-Body (Avatar) Library for Textile Industry

Semih Dönmezer(B), Numan M. Durakbasa, Gökçen Baş, Osman Bodur, Eva Maria Walcher, and Erol Güçlü

Vienna University of Technology, Vienna, Austria
[email protected], {numan.durakbasa,goekcen.bas,eva.walcher,erol.guclu}@tuwien.ac.at

Abstract. Personalized garment design and the virtual assembly of a garment are among the most important goals for almost all textile companies. Aligning the design of a virtual garment with the feature points of the human body can be considered a success factor. The best solution is therefore to measure the human body, determine its dimensions, and produce a tailor-made garment according to that sizing. Clothing sizes have long been a challenge for the apparel industry because of the difficulty of finding the right size when purchasing an outfit. Customers are often disappointed with their deliveries, which makes them less likely to buy clothes online; increased returns and reduced customer loyalty have negative effects on profit margins. Considering the fit problem, it is necessary to create a body database that every company can use, and the sensible solution is a common digital body library. The aim of the digital-body (avatar) library is to support an ongoing process from measuring the customer's body to dressing it in the customized garment, and thereby to dramatically reduce return rates caused by non-fitting dresses in internet sales. The sizing system needs a redesign; this will be a new paradigm shift toward tailor-made production in the apparel industry. The digital body library will affect every dress sale in the future.

Keywords: 3D body measurement · Return rate over internet selling · Digital library

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 24–38, 2022. https://doi.org/10.1007/978-3-030-90421-0_3

1 Motivation for Body-Size Library

The motivation of this model is to increase customer satisfaction, especially in dress sales made over the internet, and therefore to decrease the rate of returns. Other important reasons are the constant change of the human body, sizing systems that vary from company to company, and the characteristics of fitting rooms. With the growing share of online sales and the resulting return rates, retailers need to offer a higher-quality shopping experience for the purchased product. Fashion stores need a seamless solution that focuses on customization and smart data to create


an enhanced relationship between brands and consumers. A solution to this problem needs to be undertaken: given the wide range of body shapes and sizes, it would be impossible to establish a single global standard, yet the lack of trust associated with shopping for clothes online needs to be resolved. Rising return rates also have important dimensions that many companies and customers unwittingly ignore: a brand's reputation depends on its reliability, and the higher the number of returns, the higher the carbon footprint for shipping and handling. The increasing percentage of online sales and the resulting returns make it clear that fashion sellers need to provide a more reliable shopping experience.

1.1 Returns in Online Clothing Sales

The Barclaycard study comments on the "serial returners" phenomenon and reports that return management has a negative impact on the fashion business: one in five retailers admitted that they were increasing their prices to cover return costs. The report states that customers feel more confident when purchasing more than one size and returning the unsuitable ones. It also mentions that shoppers want size standardization and that retailers need to offer smart technologies to better visualize online clothing shopping. BBC News reported that 63% of consumers return their online clothing purchases, again pointing to the lack of a global standard for the fashion industry. The infographic in Fig. 1 visualizes that, in the absence of a size standard, brands each have their own size charts [1]. On the effects of returns for retailers, the same Barclaycard survey asked retailers how returns impact their business: 57% said that dealing with returns has a negative impact on the day-to-day running of their business, and 33% of online retailers offer free returns but offset the cost by charging for delivery.
20% said they had increased the price of products to cover the cost of returns [2]. Another survey, conducted by Kurt Salmon Associates (1998), found that 70% of women and 50% of men in the USA stated that they cannot find clothing that fits well. Of these consumers, 39% of the women and 34% of the men were so dissatisfied that they were willing to pay more for custom-fit clothing [3].

1.2 Daily Fluctuations

The length of the body depends on the one hand on its posture; on the other hand, it decreases over the course of the day. No two people are alike in all of their measurable characteristics, and this uniqueness has been an object of concern and research since the period of Napoleon. The daily reduction in height is about two centimeters. This effect was first recorded in England in 1724: in a letter of May 16, 1724, to Dr. Mead, Reverend Wasse, a principal at Aynho, Northamptonshire, wrote "concerning the difference in the height of a human body, between morning and night" [4]. The causes of the two effects differ: the difference in length during the day is mainly due to the compression of the intervertebral discs of the spine, whereas the change with body position is primarily due to the joint connections of the legs [5].


1.3 Garment Sizing System Problem

Each company develops its own sizing standards for its customers. Many national sizing standards are in use, such as the US, British, European, Japanese, Korean, and Chinese systems. With the help of these measurements, the bodies and body types of the vast majority of target audiences are defined. Figure 1 shows how today's dress measurements differ among a sampling of major retailers, according to data collected by Time [6].

Fig. 1. Size 8 across the brands [6]

1.4 Inside the Fight in the Fitting Room

"If I have a friend who is a size 6, we can't go shopping together. They literally segregate us." - Melissa McCarthy, actress and designer [6].

2 Human Height

Human height or stature is the distance from the bottom of the feet to the top of the head of a human body standing erect; it is measured using a stadiometer. A person's height is, alongside body weight, a simple biometric characteristic. In this narrow sense, it can be defined as the size of the person standing upright from the sole of the foot to the top of the head. Individual physique can be recorded and evaluated on a somatogram, a triangular model that uses the three-digit classification of somatotyping; all possible somatotypes can be plotted on it. The scientific discipline that deals with human body sizes is called anthropometry. In a broader sense, body size also covers other measurements of the body, including chest circumference, waist height, arm length, etc. The clothing industry wants to know how sizes and proportions are distributed across the population in order to produce the right quantities of the various clothing sizes; this makes it possible to estimate how much of which products companies need to make and how much they will sell. The shoe and hat industries have the same interest. The first thesis is that the population as a whole becomes


larger and thicker, so women especially have a hard time finding suitable clothes, as dresses are designed from earlier measurements. This problem is increasingly felt in internet sales, because average body sizes differ between countries: the tallest people in the world are found in the Netherlands, while the Japanese, for example, are on average considerably shorter [7].

2.1 Body Growth

Growth in length runs ahead; the increase in width and thickness follows. The greatest body growth after infancy (up to 15 cm per year) takes place around the ages of 14 and 15, averaging six centimeters per year. After that, growth gradually slows; development is essentially complete by the age of 19, when the annual increase in size is less than one centimeter. A further slight growth (less than 1 cm per year) can occur up to the age of 24, although there are exceptions to this pattern with increases of up to two centimeters per year [5].

2.2 Body Sizes in the Broader Sense

The SizeGERMANY serial measurement survey was presented in 2009. It was carried out by the Hohenstein Institutes (Bönnigheim) and Human Solutions GmbH (Kaiserslautern) in cooperation with industrial partners. The body measurements of 13,362 men, women, and children between the ages of 6 and 87 were determined at 31 measuring locations throughout Germany. Participants were measured contactlessly with the latest 3D scanner technology in one sitting and three standing positions. From the roughly 400,000 measurement points recorded per run, an electronic twin (scan) of each participant was generated on the PC, on which 44 body measurements such as hip and chest girth were taken for the clothing industry and 53 body measurements for technical ergonomics [8].

3 State of the Art of Body Scanning Technologies

The optical devices used by 3D body scanners are light projectors, CCDs, and light sources (halogen, infrared, or laser). For the human body, lasers must be classified as Class 1 for eye safety. Most 3D body scanners project light rays horizontally. In the Cyberware, Vitronic, and Hamamatsu systems, cameras or mirrors are mounted above and below the projection system; in the TecMath scanner, the cameras are mounted only on the laser projector, which means that the undersides of some body parts may not be well represented. Speed is important for reducing motion artifacts from the human body, and the fast data acquisition of structured light systems is a definite advantage over laser scanning. Structured light and laser triangulation systems achieve similar degrees of coverage of the body surface. All systems, however, try to reduce scanner and cabinet size; this is especially important for the retail industry, where floor space is valuable [9]. With the latest scanning technology, the surface of the human body is captured in four different positions with a density of more than 20 points per cm², and body measurements determined according to European


norms (e.g., EN 13402) are determined at 43 different angles, after which statistical analysis is performed. In this way, body figures specific to the society are determined and designed as virtual three-dimensional mannequin models (avatars). EN 13402-1 covers terms, definitions, and body measurement; the standard also defines a pictogram that can be used on language-neutral labels to indicate one or several of the following body dimensions: head girth; neck girth; chest girth (men); bust girth (women); underbust girth (women); waist girth; hip girth (women); height; inside leg length; arm length; hand girth; foot length; and body mass, measured with a suitable balance in kilograms [10].
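To make the pictogram dimensions concrete, the sketch below models a selected subset of them as a plain Python record. The field names follow the dimensions listed above; the class name, structure, and example values are illustrative assumptions, not taken from any table in the standard.

```python
# Illustrative sketch: a record holding selected EN 13402-1 control
# dimensions. Values are in centimetres (body mass in kilograms).
from dataclasses import dataclass
from typing import Optional

@dataclass
class BodyDimensions:
    """Selected body dimensions from the EN 13402-1 pictogram (assumed subset)."""
    height: float
    chest_girth: Optional[float] = None      # men
    bust_girth: Optional[float] = None       # women
    waist_girth: Optional[float] = None
    hip_girth: Optional[float] = None
    inside_leg_length: Optional[float] = None
    body_mass_kg: Optional[float] = None

# A hypothetical customer record; unmeasured dimensions stay None.
customer = BodyDimensions(height=178, chest_girth=100, waist_girth=88)
print(customer.height)  # → 178
```

A record like this could form one row of the anthropometric database discussed in the next section.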

4 Body Dimensions Extracted from 3-D Body Scans

This international standard specifies procedures for the use of 3D surface scanning systems to obtain data on the shape of the human body, and for the measurements specified in ISO 7250-1 that can be obtained from 3D scans. Although it mainly concerns full-body scanners, it can also be applied to body-part scanners (head, hand, and foot scanners). It is not applicable to devices that measure the location and/or movement of individual measuring points. Key terms:

3D body scanner: a system of hardware and software that creates digital data describing the surface of the human body, or parts of it, in three dimensions.
3D scanner software: the operating system, user interface, programs, algorithms, and instructions associated with a 3D scanning system.
3D scanner hardware: the components of a 3D scanner and any associated computers.
Accuracy: the extent to which the measured value approaches the true value.
Anatomical measuring point: a clearly defined point on the body that can be used to establish anthropometric measurements.
Anthropometric database: a recorded collection of individual body measurements and background information from a group of people [11].
Digital mock-up: a virtual assembly test (VAT) on the digital model, used to check the compatibility of the mechanical interface and the feasibility of assembly. Previously, a physical prototype of a dress was tested on a mannequin; with digital factories in the textile industry, this is now a computer simulation rather than a physical operation. The test can be used to discover and resolve interface and/or assembly problems prior to physical product assembly [12].

5 Sizing and Design for the Apparel Industry

Given the multitude of textile manufacturers, textiles are a battleground. Clothing is the second most important human need after food, and consumers always try to find clothes that best suit their body size. The purpose of a sizing system is to meet consumers' need for ready-to-wear clothing [13]. Dress sizes vary from company to company and now hinder the desired sales on the internet, which is becoming ever more important. The first step for industry research is to bring the dimensions to a standard and to update, extend, or create new body sizing systems. Moreover, the provision of new shape


and size data enables a much wider range of products for apparel design and development, and as the surveys that form the data continue to evolve, they help to build the basis for:
• establishing standards for new shape and sizing systems,
• understanding the effect of shape during the aging process and predicting these shape changes [14].
There are many national sizing standards in use, such as the US, British, European, Japanese, Korean, and Chinese systems, and the body types of the vast majority of target audiences are defined with their help. The body sizing systems used today differ from one another. The normal size is for a person who is neither tall nor short, neither fat nor thin, but rather one with good posture; the regular size covers only about 50% of potential customers. A 3D body measurement scanner in the fashion store could take the place of the dressing room: for manufacturing customized clothes, a quick measurement of each client's body size is necessary. Such a measurement system should satisfy the following conditions:
(1) It should be able to measure customers in a very short time.
(2) It should measure customers contactlessly, without requiring them to be naked or to wear a special measuring dress.
(3) It should measure customers accurately regardless of lighting conditions and the color of their clothes.
(4) It should be able to measure body circumferences without body landmark information.
(5) The measured data should be directly usable in digital body library systems.
(6) It should be easy to build and occupy a small amount of space [15].
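The fit problem caused by divergent brand size charts can be made tangible with a toy sketch: the same chest girth lands in different size labels at different brands. The size ranges below are invented for illustration and do not reproduce any real brand's chart.

```python
# Toy sketch: map a chest girth to a size label using a brand's chart.
# Each chart maps a label to a (low, high) range in centimetres.
def size_for_chest(chest_cm, chart):
    """Return the first size whose [low, high) range contains chest_cm."""
    for label, (low, high) in chart.items():
        if low <= chest_cm < high:
            return label
    return None  # outside the chart -> candidate for made-to-measure

# Two hypothetical brands with slightly shifted ranges:
brand_a = {"S": (86, 94), "M": (94, 102), "L": (102, 110)}
brand_b = {"S": (88, 96), "M": (96, 104), "L": (104, 112)}

# The same body can be an M at one brand and an S at another:
print(size_for_chest(95, brand_a))  # → M
print(size_for_chest(95, brand_b))  # → S
```

This is exactly the disagreement Fig. 1 visualizes, and it is what a shared digital body library would sidestep by working from measurements rather than labels.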

6 Virtual Human Model

The virtual human body used in the fashion field reflects the attributes of different areas of the human body based on physical measurements and shape characteristics. Various types of virtual-human-body-based IT-fashion convergence technology are being attempted today, following the rapid development of the vast online fashion market, including the internet, the mobile market, smart TVs, and virtual fittings in shops and stores. Meanwhile, the increased demand for mass-customized and made-to-measure garments encourages efforts to innovate the traditional process of planning, production, and sales. The use of digital technology in this new ubiquitous environment of the international apparel industry is leading to three-dimensional information on consumers and digital human bodies that reflect somatotype characteristics; consumers can now go online anytime, anywhere, to try on clothes, evaluate style and form, and place orders. Despite such advantages, there is a lack of international standards regarding the sizing system in textile production. The main goal is to define a virtual human body that improves online communication and the reliability of fashion products sold online and in-store through visual confirmation [16].


7 The Trend of Mass Customization in Dress Production

Made-to-measure clothing is still considered a niche in the clothing industry. In practice, it adjusts a basic pattern to an individual customer's measurements, unlike bespoke patterns, which are developed from the customer's measurements. Fashion Apparel Industry 4.0, the "smart apparel factory," is the current trend of automation and data exchange in apparel manufacturing technologies. Combining several major innovations in digital technology, smart production includes the Internet of Things, cloud computing, and cyber-physical systems that communicate and cooperate with each other in real time, used by participants along the value chain and driving a new shift across the economy, with major effects on the fashion market. Customers are no longer satisfied with standardized products that force them to compromise; the internet shapes buying habits by creating needs that have to be satisfied instantaneously. At an increasing rate, people are losing interest in mass-produced items and are seeking a little piece of the manufacturer's DNA, that which makes an item authentic. They are looking for the experience, but not at any price [17].

8 Garment Design and Virtual Assembly Application

In personalized garment design and virtual assembly applications, feature points are usually used to detect the cut planes and feature lines of the virtual garment. Measuring according to body size means defining the characteristics of the human body, more precisely its symbolic structure, and then simulating the morphological form of the consumer, which provides references for the creation of a virtual garment shape. Feature detection, also known as 3D feature extraction, is still an important problem in geometric modeling and computer graphics. The measured properties are generally local curvatures: local geometric features are defined by local Gaussian curvature, ridge and valley lines, prominent end points, mean curvature flow, and the distribution of average geodesic distance on the surface. These definitions are applied to avatars created from a 3D scan of a finished dress. Techniques such as the 3D-Harris and mesh-SIFT methods, which carry 2D image feature detection over to 3D, are finding more and more applications. These methods depend on the geometric properties and distinctness of the points; where semantic meaning is lacking, a feature is defined entirely by the 3D shape itself [18].
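As a minimal 1-D analogue of the curvature-based feature detection named above: points where the discrete second difference of a profile is large correspond to ridge and valley candidates on a surface. The function name, test profile, and threshold below are illustrative assumptions, not any of the cited methods.

```python
# Minimal sketch: 1-D "curvature" feature detection via second differences.
def curvature_features(profile, threshold):
    """Indices whose discrete second difference exceeds threshold in magnitude."""
    feats = []
    for i in range(1, len(profile) - 1):
        # Discrete second difference approximates curvature at sample i.
        c = profile[i - 1] - 2 * profile[i] + profile[i + 1]
        if abs(c) > threshold:
            feats.append(i)
    return feats

# A flat profile with one sharp valley at index 3; the valley and its
# shoulders register as high-curvature points.
profile = [0.0, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0]
print(curvature_features(profile, threshold=1.0))  # → [2, 3, 4]
```

Real methods such as 3D-Harris or mesh-SIFT work on triangle meshes with genuine surface curvature, but the thresholding idea is the same.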

9 Body Digital Library

When a 3D body scanner measures the body, approximately 40 measurements are captured per body, of which about 20 cover the upper part and 20 the lower part. According to apparel manufacturers, this information is sufficient for a well-fitting dress. All of it is stored in a standard form with the person's avatar. When a customer's body is measured, the data is first given to the customer and later also enters the library. Manufacturers then use these data to drive the dress design. Here, the avatar body size and the dress body size must match, so that the data used by the manufacturer for the


dress design are the same as the data the customer uses for dress selection. Only in this way can we tell whether the dress fits the body perfectly.

9.1 Computer Vision Scenario for the Body Library

Computer vision, by definition, is a computer being able to parse the pixels of an image and interpret their contents: given a picture of an unusually curved body, the computer should be able to tell you that it is, say, a large body. The scenario is that the digital body library will affect every dress sale in the future. Starting with complex images such as body parts is actually an interesting way to go, because the differences between such images are very subtle, and there is existing work that can be used as a starting point. A good entry point is TensorFlow Hub, a repository of models. The models come with attached metadata that allows us to explore how they work, together with code samples showing how to use them; one can simply browse the Hub, and the page for a suitable model offers a place where a body image can be uploaded. The returned matches are ordered by likelihood, so a list of potential candidates can be inspected directly in the browser. TensorFlow Hub works in the platforms, cloud, and data-science environments we care about and makes it easy to add any dependencies we need. Libraries will be created to make implementing image recognition easy through a TensorFlow task, the body library; such task libraries support a whole range of scenarios, such as body images, text, and object detection, with more coming online all the time, and they typically run in a Python environment. A model is generally a neural network; data is fed into it by turning body data into tensors, and the network gives back its reply as a prediction.
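The "ordered by likelihood" list of potential matches mentioned above reduces, once a classifier has produced one score per known body model, to a sort over (label, score) pairs. The labels and scores below are invented for illustration; a real system would take them from the model's output layer.

```python
# Sketch: rank candidate body models by classifier score, best first.
def top_matches(scores, k=3):
    """Return the k labels with the highest scores, best first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:k]]

# Hypothetical scores for one uploaded body image:
scores = {"avatar_0412": 0.71, "avatar_0099": 0.18,
          "avatar_2301": 0.07, "avatar_1500": 0.04}
print(top_matches(scores, k=2))  # → ['avatar_0412', 'avatar_0099']
```

The "heavy lifting" described next (handling 500,000 candidates) changes only the size of the score table, not this ranking step.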
When the system can recognize 500,000 body models, sending it a single body image will actually return 500,000 ranked answers, and the task body libraries will do all the heavy lifting. Without explicitly coding them, the task body libraries will be used to manage body images and to train a model to recognize accurate body images. The body library will contain the avatar model and the classification model domain [19] (Fig. 2). In Fig. 3, all captured data items are organized into folders. When the data needs an avatar, labeled samples are used: a collection of images in a folder called "avatar". When we talk about labels, we mean the names of these folders. Model Hub will give us a body model detector, but we still need to roll up our sleeves: neural networks, loss functions, and optimizers may all be needed. An image classifier can be created by initializing it from the folder and subfolders of all body images, splitting the data, and then calling create on the image classifier. If a detector is built on top of confidential data, the producer will not let us share it with unauthorized apparel manufacturers, and it will be kept confidential. Finally, the data pipeline can be written in Swift as well [19].

9.2 Dynamic Models with Python and TensorFlow: Real-World Scenarios

Building a machine learning model is a multi-stage process: collect, clean, and process the body data; prototype the avatar and iterate on the model architecture; train and evaluate the results; and prepare the body library model for production serving. The model library is to be a huge open-source library for computer vision, machine


Fig. 2. Sample of data set for body image

learning, and image processing, and it now plays a major role in real-time operation, which is very important in Industry 4.0. With it, one can process images and identify landmarks of the body, the face, or the whole human body. Integrated with various libraries, it identifies image patterns and their features using a vector space and performs mathematical operations on these features [20]. This library model has to be updated and improved continuously, because such a model is a living thing. TensorFlow's high-level APIs aim to help at each stage of the model's lifecycle, from the beginning of an idea to training and serving large-scale applications. The key steps are developing a machine learning model, seeing what TensorFlow provides at each stage, and then covering some of the new developments needed to continue improving the model workflow. We start from a problem and an associated data set: the body data set from 3D body scanners placed in dressing rooms in shopping centers to capture body sizing data. As a first step, about 4–5 million body-measurement and sizing records from different cities of Europe are needed, and the features in this data set will be used. The data to load is a CSV file with 55 columns of integers. First, TensorFlow's CSV facilities are used: a comma-separated values file is a delimited text file that uses a comma to separate values; each line of the file is a data record, and each record consists of one or more fields separated by commas. Parsing the CSV yields the incoming data as a vector of 55 integers [22].
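The CSV parsing step above can be sketched with the standard library alone; a production pipeline would use TensorFlow's data facilities instead. The assumption that the last of the 55 columns holds the body-type label (and the rest the features) is illustrative, following the description of Fig. 3.

```python
# Sketch of 55-column CSV record parsing, stdlib only. In the real
# pipeline, TensorFlow's CSV readers would do this inside tf.data.
import csv
import io

def parse_records(text):
    """Yield (features, label) from CSV text whose rows hold 55 integers.

    Assumes (for illustration) the last column is the body-type label.
    """
    for row in csv.reader(io.StringIO(text)):
        values = [int(v) for v in row]
        assert len(values) == 55, "each record must have 55 columns"
        yield values[:-1], values[-1]

# One synthetic record containing the integers 0..54:
sample = ",".join(str(i) for i in range(55))
features, label = next(parse_records(sample))
print(len(features), label)  # → 54 54
```

Each yielded tuple corresponds to the (features, label) pairs the data set is expected to return in the parsing discussion below.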


Fig. 3. The features of the body in this data set are used to try to predict the body type that will be found in each customer's body [21]

Parsing data: a data set is expected to return tuples of features and labels, so each row must be parsed into its set of features. The main goal of the parsing function is to correctly separate and group the feature columns. Reading through the details of the body data set shows that body type is a categorical feature that is one-hot encoded. The columns are then zipped up with human-readable column names to get a dictionary of features that can be processed further, and a loss calculation can follow. Using TensorFlow data sets lets us take advantage of the many built-in performance optimizations that data sets provide for this kind of mapping and batching, helping to remove I/O bottlenecks. Input-output (I/O) systems transfer information between computer main memory and the outside world; like any other activity in a computer system, I/O is a concerted effort of hardware and software, and the software executed to carry out an I/O transaction for a specific device is called a device driver [23]. The data still has many feature types, some continuous, some categorical and one-hot encoded, and all of them have to be made meaningful to the model [24]. Defining features: a feature column is a configuration class describing how to transform the raw data so that it matches the model's expectation. If all data is already numeric, as with avatar data, feature columns may not be necessary, but in many real-world applications the data is structured. Numeric feature columns let us easily capture relationships with the shape argument. Defining a feature type acts like any other Keras layer: its primary role is to take in the raw data, including the categorical indices, and transform it into the representations that our neural net is expecting.
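The transformation described above can be reduced to its core logic: one-hot encode a categorical index and concatenate it with the untouched numeric columns. In practice, Keras preprocessing layers or feature columns would do this; the function names and depth below are illustrative.

```python
# Sketch of the feature-transform step: numeric columns pass through,
# a categorical index becomes a one-hot vector.
def one_hot(index, depth):
    """One-hot encode a categorical index into a vector of length depth."""
    vec = [0.0] * depth
    vec[index] = 1.0
    return vec

def transform(numeric_features, category_index, depth):
    """Concatenate raw numeric features with the one-hot encoded category."""
    return list(numeric_features) + one_hot(category_index, depth)

# Hypothetical record: height and chest girth, plus body-type index 2 of 4.
print(transform([172.0, 94.0], category_index=2, depth=4))
# → [172.0, 94.0, 0.0, 0.0, 1.0, 0.0]
```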
This layer will also handle creating and training the embedding for the cover type. Data that needs transformation before it fits into a model, for example categorical string names, must be handled first. TensorFlow provides many feature columns and even ways to combine


individual columns into more complex representations of the data that the model can learn [25]. Building and refining models: the model is defined as layers, hooking the output of each into the input of the next. The first layer does all of the data transformation, followed by some standard densely connected layers; the final layer outputs the class predictions for each of the sizing areas. Once the layer architecture is established, the model can be compiled, which adds the optimizer, metrics, and loss function. At this point it is not yet hooked up to any data; TensorFlow provides a number of optimizers and loss choices, and finally the rubber meets the road: the data set is passed into the model and it is trained. In a real-world situation with large body-size data sets, it is important to leverage hardware accelerators such as GPUs (graphics processing units) or TPUs (tensor processing units) and to use distribution strategies. Next, validation data must be loaded; it is important to apply the same processing procedure to the test data as to the training data. The model's evaluate method is called with the validation data and returns the loss and accuracy obtained on the test data, validating the model on held-out data processed in the same way as the training data. TensorFlow provides a model saving format that works with the whole suite of TensorFlow products: a TensorFlow SavedModel includes a checkpoint with all of the weights and variables, and it also includes the graph built for training, evaluating, and predicting [26]. Clothes shopping is a taxing experience; the visual system absorbs an abundance of information. Can a computer automatically detect pictures of my body size to find me a fitting dress? Accurately classifying images of body sizes as my avatar's items may well be straightforward to do, given quality training data [27].
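The evaluate step above can be sketched in miniature: accuracy is the fraction of held-out labels the classifier predicts correctly (Keras' evaluate additionally returns the loss, omitted here). The labels below are synthetic.

```python
# Sketch of the accuracy half of model evaluation on held-out data.
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    assert len(y_true) == len(y_pred), "label lists must align"
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

# Synthetic held-out body-type labels vs. classifier predictions:
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 0]
print(accuracy(y_true, y_pred))  # → 0.8
```

The key discipline stressed above is that `y_pred` must come from test data processed identically to the training data, otherwise the number is meaningless.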
Body Image Classification. The image classification problem is this: given a set of images that are all labeled with a single category, predict these categories for a novel set of test body images and measure the accuracy of the predictions. There is a variety of challenges associated with this task, including viewpoint variation, scale variation, intra-class variation, image deformation, image occlusion, illumination conditions, and background clutter. How might one write an algorithm that can classify images into distinct categories? Computer vision models take a data-driven approach: instead of trying to specify directly in code what every image category of interest looks like, they provide the computer with many examples of each body size image class and then develop learning algorithms that look at these examples and learn the visual appearance of each class. In other words, a training data set of labeled images is first accumulated and then fed to the computer so that it becomes familiar with the data [27] (Fig. 4). The input of the model is a training data set that consists of N images, each labeled with one of K different classes. This training set is used to train a classifier to learn what each body size class looks like. In the end, the quality of the classifier is evaluated by asking it to predict labels for a new set of body size images it has never seen before, comparing the true labels of these body images to the ones predicted by the classifier. Convolutional Neural Networks: convolutional neural networks (CNNs) are the most popular neural network model for image classification. The big idea

Digital-Body (Avatar) Library for Textile Industry


Fig. 4. Using the body library

behind CNNs is that a local understanding of an image is good enough. The practical benefit is that having far fewer parameters greatly reduces both the time it takes to learn and the amount of data required to train the model. Instead of a fully connected network of weights from each pixel, a CNN has just enough weights to look at a small patch of the image at a time. Sliding this convolution process across an image with a weight matrix produces another image, as a body avatar, of the same size, depending on the measurement.
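A back-of-the-envelope parameter count makes this benefit concrete. The layer sizes below (a 64 × 64 grayscale input, 256 hidden units, 32 filters of size 3 × 3) are illustrative choices, not values from the paper:

```python
# Parameter count for one hidden layer over a 64x64 grayscale image.
H, W = 64, 64

# Fully connected: every input pixel connects to every hidden unit.
fc_units = 256
fc_params = (H * W) * fc_units + fc_units        # weights + biases

# Convolutional: one small shared weight matrix slides over the image.
k, filters = 3, 32
conv_params = (k * k * 1) * filters + filters    # 3x3 kernels + biases

print(fc_params)    # 1048832
print(conv_params)  # 320
```

Three orders of magnitude fewer parameters is what makes the convolutional layer trainable with far less body-image data.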


S. Dönmezer et al.

Image classification research datasets are typically very large. Nevertheless, data augmentation is often used to improve generalization properties. Different schemes exist for rescaling and cropping the images (i.e., single-scale vs. multi-scale training). Multi-crop evaluation at test time is also common, although it is computationally more expensive and yields limited performance improvement. The purpose of cropping is to learn the important features of each object at different scales and positions. Keras does not implement all of these data augmentation techniques out of the box, but they can easily be implemented through the preprocessing function of the ImageDataGenerator module [27].
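As a sketch of the rescale-and-crop idea, the helper below upsamples an image slightly and takes a random crop back to the original size; a function of this shape could be plugged into Keras through the ImageDataGenerator preprocessing hook. The scale factor and crop size are arbitrary assumptions:

```python
import numpy as np

def random_crop(img, crop_h, crop_w, rng=np.random.default_rng()):
    """Crop a random window so the model sees objects at varied positions."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def rescale_and_crop(img, scale=1.15, crop=64):
    """Single-scale training: upscale slightly, then crop back to `crop`."""
    # Nearest-neighbour resize via index arithmetic (no external deps).
    h, w = img.shape[:2]
    nh, nw = int(h * scale), int(w * scale)
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    resized = img[rows][:, cols]
    return random_crop(resized, crop, crop)

img = np.zeros((64, 64, 3), dtype=np.float32)
aug = rescale_and_crop(img)
print(aug.shape)   # (64, 64, 3)
```

Each call yields a slightly different view of the same body image, which is exactly what data augmentation exploits.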

10 Dynamic Models with Python Control Flow and Important Features of the Digital Body-Size Library
• The model will include support for custom and higher-order gradients.
• The model will offer multiple levels of abstraction, which helps to build and train models.
• The model should allow quick training and deployment, no matter what body-size model or platform is used.
• The model should provide flexibility and control through features like the Keras Functional API and Model subclassing.
• The model should be well documented, very easy to understand, and focused on user experience.
• It needs to be multi-backend and multi-platform.
• Easy production of avatar models, and then of dressing-design mock-ups, will support easy and fast prototyping.
• The model will include support for convolutional networks and recurrent networks.
• The model is to be expressive, flexible, and apt for innovative research.
• The model is to be a Python-based framework that makes it easy to debug and explore.
• It will be a highly modular neural network library written in Python.
• The model is to be used for high-performance models and large datasets.
• It should help execute a subpart of a graph and retrieve its result [28].

11 Conclusion
In the period called the industrial revolution, or Industry 4.0, this shift is causing apparel factories to enter a digital transformation. Digitalization in the apparel industry can be used by companies to respond to quality risks as well as to enable real-time design improvements for customization. Processes can be managed using the big data derived from 3D body scanning systems. No two people are alike in all of their measurable characteristics; this uniqueness has been the object of curiosity and research for over 200 years. The aim of the digital-body-size (avatar) library is an ongoing process of customizing clothing by storing the customers' body measurements as metadata and building up their body avatars for preparing dress designs. After all, we need a big digital body-size library: the more body sizes are captured with 3D body scanners, the better fitting dresses can be found for customers in online shopping.


• It will be possible to sell tailor-made dresses. If manufacturers know the size of the customer, they can easily sell well-fitting clothes.
• Likewise, the customer can find a suitable, well-fitting dress from different companies, because the dresses that fit will be selected according to the preference list and presented to the customer.
• Return rates in online dress sales will decrease dramatically; currently, the return rate of clothes sold online is about 30%.
• These data will also serve as ergonomic data for the industry.
• In terms of health, information such as asymmetric disorders and obesity can be classified.
• Body measurements can be classified by country, and anthropometric measurements can be evaluated.
• This concept will be a new paradigm and a new approach for the apparel industry.

References
1. Ohnemus, I.: Returns in online clothing sales: effects and solutions. About EyeFitU, 16 April 2019. https://www.roqqio.com/en/magazine/digitalisation/returns-in-online-clothing-sales-effects-and-solutions
2. Charton, G.: E-commerce returns: 2020 stats and trends. SaleCycle, 15 January 2020. https://www.salecycle.com/blog/featured/ecommerce-returns-2018-stats-trends/
3. Goldsberry, E., Shim, S., Reich, N.: Women 55 years and older: Part II. Overall satisfaction and dissatisfaction with the fit of ready-to-wear. Cloth. Textiles Res. J. 14(2), 121–142 (1996)
4. Wasse: Part of a letter from the Reverend Mr. Wasse, concerning the difference in the height of a human body between morning and night, 16 May 1714. Cited in Wikipedia: Körpergröße eines Menschen. https://de.wikipedia.org/wiki/Körpergröße_eines_Menschen
5. Daffner, F.: Das Wachstum des Menschen. Anthropologische Studie. Cited in Wikipedia: Körpergröße eines Menschen, 1 April 2021. https://de.wikipedia.org/wiki/Körpergröße_eines_Menschen
6. Dockterman, E.: One size fits none. Time (2021). https://time.com/how-to-fix-vanity-sizing/
7. Hermanussen, M.: Auxology – studying human growth and development, 1 April 2021. https://de.wikipedia.org/wiki/Körpergröße_eines_Menschen
8. SizeGERMANY: Hohenstein Institute (2009). https://www.yumpu.com/de/document/view/9117764/size-germany-pdf-224-kb-hohenstein-institute
9. Fan, J., Yu, W., Hunter, L.: Clothing Appearance and Fit: Science and Technology. Woodhead Publishing (2004)
10. EN 13402: Joint European standard for size labelling of clothes. Wikipedia, 16 April 2005. https://en.wikipedia.org/wiki/File:EN-13402-pictogram.png
11. ISO 20685-1: International Standard (2018)
12. ISO 21143: Technical product documentation – requirements for digital mock-up virtual assembly test for mechanical products. International Standard (2019)
13. Schofield, N.A., LaBat, K.L.: Exploring the relationships of grading, sizing, and anthropometric data. Cloth. Textiles Res. J. 23(1), 13–27 (2005). https://doi.org/10.1177/0887302X0502300102
14. Zakaria, N.: Anthropometry, Apparel Sizing and Design. Woodhead Publishing, Sawston (2019)


15. Uhm, T., Park, H., Park, J.: Fully vision-based automatic human body measurement system for apparel application. Measurement 61, 169–179 (2015). https://doi.org/10.1016/j.measurement.2014.10.044
16. Clothing – Digital fittings – Vocabulary and terminology used for the virtual human body. International Standard, 1 July 2016
17. Bellemare, J.: Fashion apparel industry 4.0 and smart mass customization approach for clothing product design, 21 June 2018
18. Xiaohui, T., Peng, X., Liu, L., Qing, X.: Automatic human body feature extraction and personal size measurement. J. Visual Lang. Comput. 47, 9–18 (2018)
19. Money, L.: Cross-platform computer vision made easy with Model Maker. TensorFlow, 19 May 2021. https://www.youtube.com/watch?v=GJvtOAtzZXg
20. OpenCV overview. GeeksforGeeks, 3 October 2019. https://www.geeksforgeeks.org/opencv-overview/
21. tf.compat.v1.enable_eager_execution. TensorFlow Core v2.6.0, 14 May 2021. https://www.tensorflow.org/api_docs/python/tf/compat/v1/enable_eager_execution
22. Comma-separated values. Wikipedia (2021). https://en.wikipedia.org/wiki/Comma-separated_values
23. Hayes, J.: Input-Output Operations. WCB/McGraw-Hill, New York (1998)
24. Allison, K.: TensorFlow high-level APIs: loading data. Coding TensorFlow, 4 December 2018. https://www.youtube.com/watch?v=oFFbKogYdfc&t=36s
25. Allison, K.: TensorFlow high-level APIs: going deep on data and features. TensorFlow (2018). https://www.youtube.com/watch?v=TOP2aLxcuu8&t=28s
26. Allison, K.: TensorFlow high-level APIs: building and refining your models. TensorFlow, 7 January 2019. https://www.youtube.com/watch?v=ChidCgtd1Lw&t=340s
27. Le, J.: The 4 convolutional neural network models that can classify your fashion images. Towards Data Science, October 2018. https://towardsdatascience.com/the-4-convolutional-neural-network-models-that-can-classify-your-fashion-images-9fe7f3e5399d
28. Keras vs TensorFlow: must know differences! Guru99 (2021). https://www.guru99.com/tensorflow-vs-keras.html#2

Stereoscopic Video Quality Assessment Using Modified Parallax Attention Module

Hassan Imani1(B), Selim Zaim2, Md Baharul Islam1, and Masum Shah Junayed1

1 Computer Vision Lab, Faculty of Engineering and Natural Sciences, Bahcesehir University, Istanbul, Turkey
2 Faculty of Engineering and Natural Sciences, Bahcesehir University, Istanbul, Turkey

Abstract. Deep learning techniques are utilized for most computer vision tasks; in particular, Convolutional Neural Networks (CNNs) have shown great performance in detection and classification tasks. Recently, in the field of Stereoscopic Video Quality Assessment (SVQA), 3D CNNs have been used to extract spatial and temporal features from stereoscopic videos, but the importance of the disparity information has not been considered well. Most recently proposed deep learning-based methods use cost volume techniques to produce the stereo correspondence for large disparities. Because the disparities can differ considerably for stereo cameras with different configurations, the Parallax Attention Mechanism (PAM) was recently proposed to capture the stereo correspondence regardless of the disparity changes. In this paper, we propose a new SVQA model using a base 3D CNN-based network and a modified PAM-based left and right feature fusion model. Firstly, we use 3D CNNs and residual blocks to extract features from the left and right views of a stereo video patch. Then, we modify the PAM model to fuse the left and right features while considering the disparity information, and using some fully connected layers, we calculate the quality score of a stereoscopic video. We divide the input videos into cube patches for data augmentation and remove from the training dataset some cubes that confuse our model. Two standard stereoscopic video quality assessment benchmarks, LFOVIAS3DPh2 and NAMA3DS1-COSPAD1, are used to train and test our model. Experimental results indicate that our proposed model is very competitive with the state-of-the-art methods on the NAMA3DS1-COSPAD1 dataset, and it is the state-of-the-art method on the LFOVIAS3DPh2 dataset.

Keywords: Parallax attention mechanism · 3D convolutional neural networks · Stereoscopic video · Quality assessment · Deep learning · Disparity

1 Introduction
With the popularity of mobile phones, unmanned vehicles, and robots, the demand for 3D videos is increasing [1], and the number of 3D movies is almost doubling each year [2]. Also, 3D televisions and cameras are becoming more and more popular. Like 2D content, 3D images and videos go through several post-processing stages, such as sampling and quantization, each of which decreases the quality of the media. Therefore,
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 39–50, 2022. https://doi.org/10.1007/978-3-030-90421-0_4


measuring and monitoring the quality of 3D content is an essential need for 3D systems. Although the quality of 3D media can be assessed by human subjects, this is almost unfeasible in real-world scenarios because of the human resources required. Therefore, there is a need to design objective quality assessment methods. The aim of Stereoscopic Video Quality Assessment (SVQA) is to assess the amount of distortion in the quality of a stereoscopic video. SVQA methods can be categorized into Full Reference (FR), Reduced Reference (RR), and No-Reference (NR) methods [3]. Because the FR- and RR-based methods require the original video on the receiver side, real-world systems cannot rely solely on them: in most scenarios, access to the original video is not possible. The NR-based metrics, on the other hand, do not need the original video to assess the received video's quality. Because of this superiority of NR methods over FR- and RR-based methods, it is increasingly important to develop NR-based methods. Most methods devised for SVQA are machine learning-based: they extract the most discriminative features from the stereo video, consistent with the Human Visual System (HVS), and then train a machine learning regressor to predict the quality of a stereoscopic video. Although several methods have been developed for modelling the HVS [4], the efficacy of the extracted features can be a real challenge because of the complexities of the HVS. Therefore, another technique is needed to deal with the challenge of SVQA. To address the limitations of handcrafted features, deep learning-based techniques have recently been utilized in related research areas. Using several learning layers, deep neural networks can extract many robust features from the input images or videos.
CNNs, one of the well-known classes of deep neural networks relying on the convolution operation, have shown great success in most computer vision tasks, and 2D CNNs in particular have excelled at image classification [6]. However, a 2D CNN can only be applied to a single image, so the natural connection between the frames of a video is ignored. Because of the inherently 3D nature of videos, 3-Dimensional CNNs (3D CNNs) have recently been applied to videos and achieve better performance than their 2D counterparts [5]; for instance, [5] showed the superiority of 3D CNNs over 2D CNNs for predicting motion sickness from stereoscopic video.
Motivation. Recently, deep learning techniques have been applied to SVQA. To the best of our knowledge, Yang et al. [7] proposed the first such SVQA method, using the difference of the stereo videos as input to a deep model. They used a basic 3D CNN model, and the results were satisfying; however, this method did not include the disparity and motion information, which are important for assessing a stereoscopic video's quality. The authors in [8] proposed another deep method for SVQA based on a binocular fusion model, in which the long-term working of the visual pathway is simulated based on findings about the HVS. On the other hand, to capture stereo correspondence over large disparities, recently proposed deep learning-based methods utilize cost volume techniques. Because disparities are not constant across stereo videos with different configurations and resolutions, setting a fixed maximum disparity value is not suitable for large-disparity stereo videos [9]. To handle different disparity values, including large ones, [9] solved the problem by introducing the PAM mechanism, which captures the stereo correspondence regardless of the disparity value.


Contributions. None of the discussed SVQA methods used the disparity information, which is very important in stereoscopic video and can be exploited for SVQA. In this paper, we propose a CNN-based SVQA model. The PAM module uses a global Receptive Field (RF) to handle stereo frames with different disparities. Firstly, we modify the PAM module to make it suitable for extracting spatio-temporal features from the left and right videos; 3D CNNs are used in the first layers of the modified PAM module to extract features from consecutive frames. Secondly, we add some fully connected layers. Finally, the quality score is computed as the output of the fully connected layers. The main contributions of this paper are given below:
– An end-to-end learning structure for SVQA is proposed that applies 3D CNNs. Our model learns the degradation in the spatial, temporal, and disparity channels.
– The PAM structure is modified to accept a video patch as input. The modified PAM module fuses the spatio-temporal features from the left and right video patches.
– The Residual Atrous Spatial Pyramid Pooling (ASPP) module is extended to 3D; its pooling and convolutional layers are modified to suit 3D data processing.
– To refine our datasets and prevent homogeneous parts of the video from acting as outliers, we use an easy-to-compute entropy measure to remove them from the training and testing datasets.
– Experiments on two publicly available datasets show that our method outperforms most of the other state-of-the-art methods.

2 Proposed Model
In this section, the architecture of the proposed model for SVQA, shown in Fig. 1, is described. Firstly, the left and right videos are split into small patches to use as input to the model. As shown in Fig. 1, the left and right video patches are fed into 3D CNN-based residual extraction layers: a 3D convolutional layer followed by a residual block is used to extract the features. Then, the Residual Atrous Spatial Pyramid Pooling (ASPP) module [9] is used to expand the RF and extract features with dense pixel sampling rates and scales. The extracted features from the left and right video patches are fed into the modified PAM module, which fuses the left and right features. One way to avoid overfitting is to use dimension-reduction techniques, so the output of the modified PAM module is passed to a Max Pooling (MP) layer. Then a 3D convolutional layer is added, followed by a Rectified Linear Unit (ReLU) activation function, Batch Normalization (BN), and another MP layer. Another way to avoid overfitting is the Dropout (Drop) layer, which we add before the Fully Connected (FC) layers. Two FC layers with 512 and 1 outputs make the overall network a regression model. Finally, the quality score is calculated as the output of the last FC layer. In the following sections, we describe the main modules of our model.
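A minimal PyTorch sketch of the layers that follow the fused features (MP, 3D convolution with ReLU, BN, MP, dropout, and the two FC layers) is given below. The channel counts and dropout rate are assumptions; the paper specifies only the FC widths of 512 and 1:

```python
import torch
import torch.nn as nn

class QualityRegressionHead(nn.Module):
    """Sketch of the regression head: MP -> Conv3d+ReLU -> BN -> MP
    -> Dropout -> FC(512) -> FC(1). Channel sizes are assumptions."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.pool1 = nn.MaxPool3d(2)
        self.conv = nn.Conv3d(in_ch, 32, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(32)
        self.pool2 = nn.MaxPool3d(2)
        self.drop = nn.Dropout(0.5)
        self.fc1 = nn.Linear(32 * 4 * 16 * 16, 512)
        self.fc2 = nn.Linear(512, 1)        # single quality score

    def forward(self, x):                   # x: (N, C, T, H, W)
        x = self.pool1(x)
        x = self.bn(torch.relu(self.conv(x)))
        x = self.pool2(x)
        x = self.drop(torch.flatten(x, 1))
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

fused = torch.randn(2, 64, 16, 64, 64)      # fused left/right video features
score = QualityRegressionHead()(fused)
print(score.shape)   # torch.Size([2, 1])
```

The two poolings reduce the spatio-temporal extent by a factor of four per axis before the dropout-protected FC layers collapse the features to one score per patch.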


2.1 Modified ASPP Module
The original ASPP module was proposed to expand the receptive field and extract various features. Since our input is inherently 3D, we extended this module to support 3D input. The residual ASPP module is formed by cascading a residual ASPP block and a residual block (Fig. 2). The output features of the first convolutional layer are fed to a residual block that fuses the input features; the resulting features are sent to an ASPP block to produce diverse features, and this procedure is repeated two times. As shown in Fig. 2, in a residual ASPP block, three dilated convolutions are fused to create an ASPP group, and three such ASPP groups are cascaded. The highly discriminative features extracted with the residual ASPP module help the performance of the main SVQA module.
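One ASPP group, extended to 3D as described, might look as follows; the channel count, dilation rates, and fusion-by-summation with a residual connection are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ASPPGroup3D(nn.Module):
    """One ASPP group extended to 3D: three parallel dilated 3D convolutions
    whose outputs are fused, enlarging the receptive field at several pixel
    sampling rates. Channels and dilation rates are assumptions."""
    def __init__(self, ch=32, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the output the same size as the input
        self.branches = nn.ModuleList(
            nn.Conv3d(ch, ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # Fuse the dilated branches and keep a residual connection.
        return x + sum(branch(x) for branch in self.branches)

feat = torch.randn(1, 32, 8, 32, 32)
out = ASPPGroup3D()(feat)
print(out.shape)   # torch.Size([1, 32, 8, 32, 32])
```

Cascading three such groups, as in Fig. 2, grows the effective receptive field while keeping the feature-map size fixed.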

Fig. 1. The architecture of the proposed method for measuring the stereoscopic video quality. The input left and right videos are fed into the 3D convolutional and residual blocks. Then a modified PAM model is used for feature integration. 3D max pooling (MP), batch normalization (BN), MP, dropout (Drop) layers are used respectively. Two fully connected (FC) layers are used for the regression task.


Fig. 2. Residual ASPP Block [9]. Three dilated convolutions create a group in a residual manner.

2.2 PAM Module
Building on self-attention mechanisms [10, 11], Wang et al. [9] proposed 2D PAM to calculate the global correspondence in stereoscopic images. PAM effectively fuses the data from stereo image pairs. The architecture of the modified PAM is shown in Fig. 3. We modified this architecture to make it suitable for our inputs, which have an added time dimension, and we also converted the residual block to 3D. "L" and "R" denote the left and right features that are input to the modified PAM module. They are fed to the 3D residual blocks, and the output is fed into a 1 × 1 × 1 3D convolutional layer. We then apply a batch-wise matrix multiplication and pass the resulting feature maps to a SoftMax layer to create the parallax attention maps MR→L and ML→R. The weighted sum of features over all disparity values is fused with the input left features, creating the output features for the left feature map. The fusion of the right features follows the same process; only the left feature fusion is shown in Fig. 3.
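The attention core just described can be sketched as follows: 1 × 1 × 1 convolutions produce query and key maps, batch-wise matrix multiplication along the width axis yields a W × W score matrix per (time, row) slice, and a softmax gives the parallax attention map MR→L, whose weighted sum over the right features is fused with the left input. The tensor sizes and the simple additive fusion are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def parallax_attention(feat_l, feat_r, q_conv, k_conv):
    """Sketch of the modified PAM core for video features (N, C, T, H, W)."""
    n, c, t, h, w = feat_l.shape
    # Query/key maps from 1x1x1 convs, reshaped so width becomes the
    # attention axis: one (W, C) x (C, W) product per (batch, time, row).
    q = q_conv(feat_l).permute(0, 2, 3, 4, 1).reshape(n * t * h, w, c)
    k = k_conv(feat_r).permute(0, 2, 3, 1, 4).reshape(n * t * h, c, w)
    score = torch.bmm(q, k)                 # (N*T*H, W, W) disparity scores
    att_r2l = F.softmax(score, dim=-1)      # parallax attention map M_{R->L}
    v = feat_r.permute(0, 2, 3, 4, 1).reshape(n * t * h, w, c)
    warped = torch.bmm(att_r2l, v)          # right features warped to left
    warped = warped.reshape(n, t, h, w, c).permute(0, 4, 1, 2, 3)
    return feat_l + warped                  # fuse with the input left features

c = 16
q_conv = nn.Conv3d(c, c, kernel_size=1)
k_conv = nn.Conv3d(c, c, kernel_size=1)
left = torch.randn(1, c, 4, 8, 8)
right = torch.randn(1, c, 4, 8, 8)
out = parallax_attention(left, right, q_conv, k_conv)
print(out.shape)   # torch.Size([1, 16, 4, 8, 8])
```

Because the softmax runs over all width positions, no maximum disparity has to be fixed in advance, which is the point of PAM.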

Fig. 3. The modified PAM module. The input features are fed to a transition residual block, and the output is the integration of the input features. 2D convolutions are converted to 3D.


Fig. 4. Splitting one channel of a video into several video patches. The shown patch is of size 64 × 64 × 16.

2.3 Data Preparation
Since the SVQA datasets designed for estimating stereoscopic video quality contain a minimal number of videos, we do not use them directly for deep learning-based SVQA: the trained model would be sensitive to overfitting. Therefore, in addition to using transfer learning and fine-tuning, we augment the SVQA datasets to avoid overfitting. We do not use commonly used image data augmentation methods such as transformations, flipping, and random cropping, which are not suitable for quality assessment tasks because they can change the content of the stereoscopic video and thereby affect its overall visual quality. Instead, we break each left and right video in three dimensions into small video cubes and use them as input to our model. To create the small cubes, we first select 16 consecutive frames from the left and right videos. Next, we select 3 × 64 × 64 spatial blocks from the stereo videos, based on the experiments in [7], in which this size achieved the best performance. Therefore, RGB cubes of size 3 × 64 × 64 × 16 are chosen from the left and right videos as input to the model. Figure 4 shows how one video is split into several patches. In contrast to the assumption in [8], we believe that different spatial parts of a video experience different degrees of quality degradation. Every stereo video has a quality label based on its real quality, confirmed by subjective tests. For training our model, we need to assign a quality label to each of the stereoscopic cubes, and every stereo cube selected from a video gets the quality label of the main video. However, some video cubes carry no or minimal information about the overall video's quality, particularly cubes with a homogeneous background. By calculating the entropy of each cube, we remove some of the cubes from our training dataset.
Entropy measures the amount of information based on the average uncertainty of the image. To calculate the entropy of a cube, we compute the entropy of each frame in the video cube; the average of the entropy values over the 16 frames is defined as the entropy of that cube. We then define the final entropy value of a stereo cube as the average of the entropy values of the left and right video cubes. We calculated the stereo cube entropy for all training and testing videos and refined our datasets to get rid of the outliers: cubes that receive entropy scores lower than a threshold, T, are removed from the training and testing sets. After some subjective experiments on the videos of our datasets, we decided to set T = 5.1. Based on these experiments, the entropy value for patches with little or no texture is lower than this threshold, while for patches with rich textures it is much higher than the defined threshold.
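The entropy-based cube filtering can be sketched as below: per-frame Shannon entropy, averaged over the 16 frames and then over the two views, compared against T = 5.1. Representing the frames as 8-bit grayscale is an assumption:

```python
import numpy as np

T_ENTROPY = 5.1   # threshold chosen via the paper's subjective experiments

def frame_entropy(frame):
    """Shannon entropy (bits) of an 8-bit grayscale frame."""
    hist = np.bincount(frame.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cube_entropy(cube_l, cube_r):
    """Average frame entropy over the 16 frames, then over the two views."""
    e_l = np.mean([frame_entropy(f) for f in cube_l])
    e_r = np.mean([frame_entropy(f) for f in cube_r])
    return (e_l + e_r) / 2.0

def keep_cube(cube_l, cube_r, threshold=T_ENTROPY):
    return cube_entropy(cube_l, cube_r) >= threshold

rng = np.random.default_rng(0)
flat = np.full((16, 64, 64), 128, dtype=np.uint8)             # homogeneous
textured = rng.integers(0, 256, (16, 64, 64), dtype=np.uint8)  # rich texture
print(keep_cube(flat, flat))          # False: entropy 0, below threshold
print(keep_cube(textured, textured))  # True: near-uniform histogram, ~8 bits
```

Homogeneous-background cubes thus drop out of both the training and testing sets, while textured cubes survive.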

Fig. 5. Samples from the two datasets used for evaluation of the proposed SVQA method. Top two rows: the 100-th right frames of the twelve original videos in the LFOVIAS3DPh2 [12] dataset; bottom two rows: the 100-th right frames of the ten original videos in the NAMA3DS1-COSPAD1 [13] dataset.

3 Datasets and Experiments
To evaluate the efficiency of the proposed SVQA method, we conduct a set of experiments on two publicly available datasets, namely LFOVIAS3DPh2 [12] and NAMA3DS1-COSPAD1 [13].

3.1 Datasets
The LFOVIAS3DPh2 [12] dataset has 12 reference and 288 stereoscopic test videos. The test videos are distorted with H.264 and H.265 compression, blur, and frame freeze, and include both symmetrically and asymmetrically distorted videos. The videos are chosen from the RMIT3DV [14] video dataset, in which all videos were captured with a Panasonic AG-3DA1 camera at full HD 1920 × 1080 resolution. Videos with quality between very bad and excellent have scores between 0 and 5, respectively. The duration of all videos in the dataset is a constant 10 s. Figure 5 depicts the 100-th right frame of each reference video in the dataset.
The NAMA3DS1-COSPAD1 [13] dataset includes 10 reference stereo videos and 100 test videos. The test videos are picked from the NAMA3DS1 [13] dataset. Distortions are added symmetrically and include coding distortions (H.264/AVC and JPEG 2000) and spatial losses (resolution reduction, image sharpening, and downsampling). All videos are in full HD resolution. The scores range from 1 (lowest quality) to 5 (highest quality). Figure 5 depicts the 100-th right frame of each reference video in the NAMA3DS1-COSPAD1 dataset [13].

3.2 Experiments
All our experiments are run on a computing system with the following specifications: i9-10850K CPU at 3.60 GHz, 64 GB memory, and an NVIDIA GeForce RTX 2080 Ti. We split the stereoscopic videos in each dataset into train and test sets before experimenting, following the general rule of 80% training and 20% test, and randomly generate the train and test samples from all videos in the datasets. Stochastic Gradient Descent (SGD) with momentum and a batch size of 128 is used to train the proposed model on the two datasets. All implementations and train-test scripts are in PyTorch 1.7.1. The Nesterov momentum is set to 0.9, and the learning rate is constant and equal to 0.001. Because both our datasets are imbalanced, and the generated datasets contain outliers due to the variation of quality over a video, we use the Huber loss [15] as our regression loss function. It is used in robust regression and is less sensitive to outliers in the data than the mean squared error loss: if the absolute element-wise error is higher than a threshold named beta, it acts like the L1 loss; otherwise, it is a squared term.
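A minimal PyTorch sketch of this training setup (SGD with Nesterov momentum 0.9, learning rate 0.001, batch size 128, Huber loss) is given below; the tiny linear model stands in for the real network:

```python
import torch
import torch.nn as nn

# Stand-in model; the real network is the 3D CNN + modified PAM pipeline.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, nesterov=True)
# SmoothL1Loss is PyTorch's Huber loss: squared below `beta`, L1 above it.
criterion = nn.SmoothL1Loss(beta=1.0)

x = torch.randn(128, 10)             # one batch of 128 samples
y = torch.randn(128, 1)              # MOS-style quality targets
for _ in range(3):                   # a few illustrative steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    loss_value = loss.item()
    optimizer.step()
print(loss_value >= 0.0)   # True: the Huber loss is non-negative
```

The `beta` switch point between the squared and L1 regimes is the threshold the text refers to; the value 1.0 here is PyTorch's default, not a value from the paper.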

4 Results and Discussions
The performance of the proposed method on the two popular SVQA datasets, LFOVIAS3DPh2 and NAMA3DS1-COSPAD1, is presented in this section, together with a discussion of the results and the efficiency of the proposed method. Quantitative evaluation uses three measures: the Linear Correlation Coefficient (LCC), the Spearman Rank Order Correlation Coefficient (SROCC), and the Root Mean Square Error (RMSE). The LCC, also known as the Pearson linear correlation coefficient, is a statistical measure of the linear correlation between two quantities. The SROCC measures the monotonic correspondence between two inputs. Higher LCC and SROCC values and lower RMSE values indicate a good correlation between the predicted and MOS scores. Table 4 compares the proposed method with the existing methods on the NAMA3DS1-COSPAD1 dataset; it shows that the proposed method is the second-best performing method in terms of the RMSE, LCC, and SROCC criteria, with the method proposed by Shuai Ma et al. [9] performing best among the compared methods. Tables 1, 2 and 3 compare the proposed method with the existing methods, including popular 2D IQA/VQA and 3D IQA/VQA methods, on the LFOVIAS3DPh2 dataset. The ALL column of these tables, which shows the overall performance of the algorithms, gives state-of-the-art values for the proposed method, with LCC, SROCC, and RMSE of 0.911, 0.901, and 0.425, respectively. Besides, comparing the overall performance with the other algorithms across all criteria indicates
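The three criteria can be computed directly; the toy scores below illustrate why SROCC rewards monotonic agreement even when the relation is not linear:

```python
import numpy as np
from scipy import stats

def evaluate_scores(predicted, mos):
    """LCC (Pearson), SROCC (Spearman), and RMSE between predicted quality
    scores and subjective MOS values."""
    predicted, mos = np.asarray(predicted, float), np.asarray(mos, float)
    lcc, _ = stats.pearsonr(predicted, mos)
    srocc, _ = stats.spearmanr(predicted, mos)
    rmse = float(np.sqrt(np.mean((predicted - mos) ** 2)))
    return lcc, srocc, rmse

# Perfectly monotone but non-linear predictions: SROCC is 1, LCC is lower.
mos = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.0, 1.2, 1.5, 3.0, 5.0]
lcc, srocc, rmse = evaluate_scores(pred, mos)
print(round(srocc, 3))   # 1.0
```

The MOS values and predictions here are made up for illustration; in the paper these come from the subjective scores of the two benchmark datasets.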

Stereoscopic Video Quality Assessment Using Modified Parallax

47

Table 1. Performance of the proposed SVQA method compared with objective SVQA methods in terms of LCC on the LFOVIAS3DPh2 dataset. The best result is in bold. Sym and Asym represent the results for symmetric and asymmetric distortions, respectively.

| Algorithm           | H.264     | H.265     | Blur      | F.F       | Sym       | Asym      | ALL       |
|---------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| SSIM [17]           | 0.816     | 0.812     | 0.651     | 0.801     | 0.803     | 0.660     | 0.730     |
| MS-SSIM [18]        | **0.901** | 0.873     | 0.802     | 0.897     | 0.885     | 0.716     | 0.819     |
| VIF [19]            | 0.874     | 0.822     | 0.813     | 0.820     | 0.879     | 0.701     | 0.810     |
| NIQE [20]           | 0.646     | 0.540     | 0.351     | 0.648     | 0.641     | 0.501     | 0.578     |
| VQM [16]            | 0.887     | 0.907     | 0.785     | 0.815     | 0.868     | 0.799     | 0.830     |
| STRIQE [21]         | 0.861     | 0.850     | 0.804     | 0.774     | 0.746     | 0.586     | 0.670     |
| FI-PSNR [22]        | 0.722     | 0.677     | 0.464     | 0.717     | 0.688     | 0.648     | 0.660     |
| VQUEMODES [4]       | 0.886     | 0.866     | 0.706     | 0.827     | 0.887     | 0.856     | 0.878     |
| MoDi3D [13]         | 0.720     | 0.761     | 0.432     | 0.839     | 0.740     | 0.669     | 0.690     |
| Our proposed method | 0.842     | **0.908** | **0.887** | **0.917** | **0.911** | **0.899** | **0.911** |

Table 2. Performance of the proposed SVQA method compared with objective SVQA methods in terms of SROCC on the LFOVIAS3DPh2 dataset. The best result is in bold. Sym and Asym represent the results for symmetric and asymmetric distortions, respectively.

| Algorithm           | H.264     | H.265     | Blur      | F.F       | Sym       | Asym      | ALL       |
|---------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| SSIM [17]           | 0.795     | 0.798     | 0.480     | 0.807     | 0.744     | 0.585     | 0.682     |
| MS-SSIM [18]        | 0.895     | 0.857     | 0.706     | 0.898     | 0.864     | 0.638     | 0.778     |
| VIF [19]            | 0.875     | 0.810     | 0.781     | 0.807     | 0.862     | 0.652     | 0.784     |
| NIQE [20]           | 0.667     | 0.388     | 0.349     | 0.445     | 0.559     | 0.443     | 0.501     |
| VQM [16]            | **0.896** | 0.801     | 0.815     | 0.821     | 0.841     | 0.780     | 0.803     |
| STRIQE [21]         | 0.851     | 0.836     | 0.663     | 0.632     | 0.705     | 0.532     | 0.652     |
| FI-PSNR [22]        | 0.723     | 0.622     | 0.398     | 0.768     | 0.655     | 0.603     | 0.611     |
| VQUEMODES [4]       | 0.864     | 0.825     | 0.606     | 0.772     | 0.857     | 0.835     | 0.839     |
| MoDi3D [12]         | 0.687     | 0.671     | 0.396     | 0.627     | 0.682     | 0.593     | 0.661     |
| Our proposed method | 0.722     | **0.871** | **0.911** | **0.911** | **0.872** | **0.884** | **0.901** |

that, after our method, VQUEMODES [4] and VQM [16] are the best performing methods on the LFOVIAS3DPh2 dataset, respectively. These results confirm that our proposed model demonstrates the best overall performance in comparison with the state-of-the-art methods on the LFOVIAS3DPh2 dataset and is the second-best performing method on the NAMA3DS1-COSPAD1 dataset.


Table 3. Performance of the proposed SVQA method compared with objective SVQA methods in terms of RMSE on the LFOVIAS3DPh2 dataset. The best result is in bold. Sym and Asym represent the results for symmetric and asymmetric distortions, respectively.

| Algorithm           | H.264     | H.265     | Blur      | F.F       | Sym       | Asym      | ALL       |
|---------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| SSIM [17]           | 0.528     | 0.521     | 0.546     | 0.393     | 0.596     | 0.557     | 0.596     |
| MS-SSIM [18]        | 0.396     | 0.436     | 0.491     | **0.290** | 0.464     | 0.517     | 0.505     |
| VIF [19]            | 0.444     | 0.510     | 0.444     | 0.375     | 0.476     | 0.529     | 0.508     |
| NIQE [20]           | 0.698     | 0.753     | 0.627     | 0.500     | 0.768     | 0.642     | 0.718     |
| VQM [16]            | 0.422     | **0.376** | 0.411     | 0.381     | 0.496     | 0.446     | 0.480     |
| STRIQE [21]         | 0.464     | 0.471     | 0.533     | 0.416     | 0.665     | 0.601     | 0.647     |
| FI-PSNR [22]        | 0.514     | 0.659     | 0.638     | 0.522     | 0.551     | 0.574     | 0.545     |
| VQUEMODES [4]       | 0.355     | 0.395     | 0.461     | 0.350     | 0.442     | 0.379     | 0.444     |
| MoDi3D [13]         | 0.672     | 0.642     | 0.566     | 0.393     | 0.677     | 0.587     | 0.657     |
| Our proposed method | **0.324** | 0.443     | **0.364** | 0.328     | **0.440** | **0.321** | **0.425** |

Table 4. Performance evaluation of the proposed SVQA method compared with the existing methods on the NAMA3DS1-COSPAD1 dataset. The best result is in bold.

| Algorithm | LCC | SROCC | RMSE |
|---|---|---|---|
| SSIM [17] | 0.7664 | 0.7492 | 0.7296 |
| PQM [23] | 0.6340 | 0.6006 | 0.8784 |
| PHVS-3D [24] | 0.5480 | 0.5146 | 0.9501 |
| SFD [25] | 0.5965 | 0.5896 | 0.9117 |
| Yang [26] | 0.8949 | 0.8552 | 0.4929 |
| MNSVQM [27] | 0.8545 | 0.8394 | 0.4538 |
| 3D CNN [8] | 0.9316 | 0.9046 | 0.4161 |
| 3D CNN SVR [8] | 0.9478 | 0.9231 | 0.3514 |
| Shuai Ma [9] | 0.9597 | 0.9571 | 0.3065 |
| Our proposed method | 0.9499 | 0.9416 | 0.3396 |

5 Conclusions

In this paper, a new NR SVQA method is proposed using CNNs and feature fusion based on the modified parallax attention module. Our method's performance is state-of-the-art on LFOVIAS3DPh2, and it is the second-best performing method on the NAMA3DS1-COSPAD1 dataset. We found that using deep learning for measuring the quality of

Stereoscopic Video Quality Assessment Using Modified Parallax

49

the stereoscopic videos with the existing datasets poses some challenges. Problems such as imbalanced datasets need to be considered when using deep learning for SVQA; an alternative way to overcome this problem in future work may be to use other loss functions. Additionally, motion and saliency can be used as further weighting factors to improve SVQA accuracy. In the future, mixed-dataset training from 2D VQA can be extended to stereoscopic VQA.

Acknowledgment. This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) 2232 Outstanding International Researchers Program, Project No. 118C301.

References

1. Guo, Y., et al.: 3D object recognition in cluttered scenes with local surface features: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2270–2287 (2014)
2. Motion Picture Association of America: Theatrical Market Statistics. Hollywood (2011)
3. Yan, Q., Gong, D., Zhang, Y.: Two-stream convolutional networks for blind image quality assessment. IEEE Trans. Image Process. 28(5), 2200–2211 (2018)
4. Appina, B., et al.: No-reference stereoscopic video quality assessment algorithm using joint motion and depth statistics. In: 25th IEEE International Conference on Image Processing (ICIP). IEEE (2018)
5. Lee, T.M., Yoon, J.-C., Lee, I.-K.: Motion sickness prediction in stereoscopic videos using 3D convolutional neural networks. IEEE Trans. Visual. Comput. Graph. 25(5), 1919–1927 (2019)
6. Karpathy, A., et al.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
7. Yang, J., et al.: Stereoscopic video quality assessment based on 3D convolutional neural networks. Neurocomputing 309, 83–93 (2018)
8. Ma, S., et al.: Stereoscopic video quality assessment based on the two-step-training binocular fusion network. In: IEEE Visual Communications and Image Processing (VCIP). IEEE (2019)
9. Wang, L., et al.: Learning parallax attention for stereo image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019)
10. Zhang, H., Goodfellow, I.J., Metaxas, D.N., Odena, A.: Self-attention generative adversarial networks. In: NIPS (2018)
11. Fu, J., Liu, J., Tian, H., Fang, Z., Lu, H.: Dual attention network for scene segmentation (2018). arXiv preprint arXiv:1809.02983
12. Balasubramanyam, A., et al.: Study of subjective quality and objective blind quality prediction of stereoscopic videos. IEEE Trans. Image Process. 28(10), 5027–5040 (2019)
13. Urvoy, M., et al.: NAMA3DS1-COSPAD1: subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences. In: Fourth International Workshop on Quality of Multimedia Experience. IEEE (2012)
14. Cheng, E., et al.: RMIT3DV: pre-announcement of a creative commons uncompressed HD 3D video database. In: 2012 Fourth International Workshop on Quality of Multimedia Experience. IEEE (2012)
15. Huber, P.J.: Robust Statistics, vol. 523. John Wiley & Sons, Hoboken (2004)
16. Pinson, M.H., Wolf, S.: A new standardized method for objectively measuring video quality. IEEE Trans. Broadcast. 50(3), 312–322 (2004)


17. Wang, Z., et al.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
18. Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multiscale structural similarity for image quality assessment. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2. IEEE (2003)
19. Sheikh, H.R., Bovik, A.C.: Image information and visual quality. IEEE Trans. Image Process. 15(2), 430–444 (2006)
20. Mittal, A., Soundararajan, R., Bovik, A.C.: Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2012)
21. Sameeulla Khan, M., Appina, B., Channappayya, S.S.: Full-reference stereo image quality assessment using natural stereo scene statistics. IEEE Signal Process. Lett. 22(11), 1985–1989 (2015)
22. Lin, Y.-H., Wu, J.-L.: Quality assessment of stereoscopic 3D image compression by binocular integration behaviors. IEEE Trans. Image Process. 23(4), 1527–1542 (2014)
23. Joveluro, P., et al.: Perceptual video quality metric for 3D video quality assessment. In: 2010 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video. IEEE (2010)
24. Jin, L., et al.: 3D-DCT based perceptual quality assessment of stereo video. In: 2011 18th IEEE International Conference on Image Processing. IEEE (2011)
25. Feng, L., et al.: Quality assessment of 3D asymmetric view coding using spatial frequency dominance model. In: 2009 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video. IEEE (2009)
26. Yang, J., et al.: A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain. Inf. Sci. 414, 133–146 (2017)
27. Jiang, G., et al.: No reference stereo video quality assessment based on motion feature in tensor decomposition domain. J. Vis. Commun. Image Represent. 50, 247–262 (2018)

Decision Making

A Design of Collecting and Processing System of Used Face Masks and Contaminated Tissues During COVID-19 Era in Workplaces

Beril Hekimoğlu, Ezgi Akça, Aziz Kemal Konyalıoğlu(B), Tuğçe Beldek, and Ferhan Çebi

Department of Management Engineering, Istanbul Technical University, Istanbul, Turkey
{konyalioglua,beldek,cebife}@itu.edu.tr

Abstract. During the COVID-19 pandemic, many countries made it compulsory to wear masks, or people started using face masks to protect themselves from infection. Masks are the only solution when social distancing cannot be applied; thus, millions of masks are used daily around the world. In Turkey, wearing a mask outside the home is compulsory, and waste management systems are not adequate to manage the tons of waste masks produced at workplaces and houses. Every day, disposable face masks and contaminated tissues that are not disposed of correctly cause massive environmental pollution and pose a possible danger to people. In this study, possible solutions to the problem of "polluting the environment massively by throwing out disposable face masks and contaminated tissues" during the COVID-19 period are presented.

Keywords: Fuzzy analytic hierarchy process · Multi Criteria Decision Making (MCDM) · COVID-19 · Face mask waste · Waste management

1 Introduction

According to the World Health Organization, COVID-19 is an infectious disease caused by a coronavirus [1]. 64.1 million COVID-19 cases had been reported by officials in 217 countries after China's first case report to the World Health Organization (WHO) in December 2019 [2]. Respiratory droplets spread into the air from people's mouths by coughing, speaking, sneezing, and shouting are the primary reason why COVID-19 spreads among people so easily. These droplets can move through the air and land on others' faces, and very small droplets can be inhaled, ultimately causing COVID-19 infection. Masks can be used to put an obstacle between the person and these droplets; past studies have shown that masks covering the nose and mouth are the basic protection against droplets [3]. According to the guidance published on the World Health Organization's website, wearing a mask is the most widely applicable measure to narrow down the contagion of COVID-19 and other viral illnesses. It can be worn by both infected and healthy people to prevent the spread of the virus from people who are infected as a way

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 53–64, 2022. https://doi.org/10.1007/978-3-030-90421-0_5

54

B. Hekimoğlu et al.

of source control and to protect healthy people in case of contact with infected people [4]. Worldwide mask and glove usage and waste is estimated to reach almost 130 billion face masks and 65 billion gloves for each month of the COVID-19 era [5]. In today's world, millions of contaminated face masks, gloves, and supplies used to diagnose, detect, and treat COVID-19 and other human pathogens become irreversibly infectious waste. These infectious wastes will cause environmental and health problems if they are not properly stored, transported, and handled [6]. One of the inevitable problems with used masks is the contagiousness of the waste, which can be the root cause of serious diseases and environmental problems in the long term, so it must be properly managed [7]. At the household level, waste PPE mixes with other wastes without losing its contagious effect, which carries the risk of infecting other residents and waste collectors [8]. Moreover, disposable polymeric materials have been identified as an important source of plastic and microplastic pollution in the environment [9]. In more detail, single-use (disposable) face masks are made of polypropylene, polyurethane, polyacrylonitrile, polystyrene, polycarbonate, polyethylene, or polyester [10]. Plastics are also a threat to aquatic life, especially fish, which are a major part of the food web supporting human existence. Fish unintentionally ingest microplastics that can then reach food for human consumption, raising the concern that food shortages will ultimately threaten global food security [11]. Face masks greatly increase plastic pollution as environmental garbage in both land and water environments, which shows that the global pandemic has not reduced the plastic pollution problem.
While banning face masks remains one of the least desired options right now, public sensitization could greatly assist in the management of these wastes, given masks' positive impact on the ongoing global fight against COVID-19 [12]. This study aims to develop a Mask Waste Collection and Transportation System to support waste management efforts effectively. The problem was examined in detail, and the problem definition is conveyed to the reader in a clear and striking way. A very extensive review was carried out to examine the studies and measures taken on this subject. Based on the information obtained from different sources, a proper system design is proposed for a pilot area in Turkey. The study consists of four main parts: introduction, background, methodology and system design, and finally the conclusion, in which the results are discussed.

2 Background

According to the COVID-19 information page of the Turkish Ministry of Health, after the confirmation of the first case in Turkey, the daily number of people who died from the coronavirus first reached 107 on April 14, and on April 19, during the period in which cases increased all over the world, 127 deaths were recorded, the highest daily toll. With the implementation of restrictions and injunctions in Turkey, as well as worldwide, the number of deaths and cases decreased and the curves flattened. The number of those who lost their lives due to COVID-19 on June

A Design of Collecting and Processing System of Used Face Masks

55

13 fell to 14 in Turkey. The number of daily deaths, which ranged from 10 to 30 at the end of May and the beginning of June, increased to 40–100 from September onward. The number of people who died because of COVID-19, which was 107 on April 14, was 103 on November 17, seven months later, and increased to 141 on November 20 [13]. According to Sangkham, data retrieved on July 31 showed 229,891 total COVID-19 cases; by December 6 there were 539,291 total cases, more than double the July figure. Moreover, the estimated daily face mask usage of the general population was calculated by adapting Nzediegwu and Chang's equation [7]. Table 1 shows the estimated daily face mask usage and medical waste for Turkey.

Table 1. Estimated daily face mask use and medical waste in Asia with confirmed COVID-19 cases [7].

| Country | Population | COVID-19 cases | Urban population (%) | Face mask acceptance rate (%) | Face masks needed per person per day | Total daily face mask use (pieces) | Medical waste (tons/day) |
|---|---|---|---|---|---|---|---|
| Turkey | 84,410,984 | 229,891 | 39 | 80 | 1 | 26,066,112 | 908.07 |
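Table 1's estimate follows the formula of Nzediegwu and Chang as adapted by Sangkham [7]: daily face mask use ≈ population × urban-population share × face-mask acceptance rate × masks per person per day. A sketch with Table 1's inputs (the exact source population figure and rounding differ slightly, so the result approximates rather than reproduces the tabulated 26,066,112):

```python
population = 84_410_984   # Turkey, from Table 1
urban_share = 0.39        # urban population share
acceptance = 0.80         # face mask acceptance rate
masks_per_day = 1         # masks per person per day

# Estimated daily face mask use for the general population
daily_masks = population * urban_share * acceptance * masks_per_day
print(f"{daily_masks:,.0f}")  # ≈ 26.3 million pieces/day, close to Table 1's value
```

The tons-per-day column then follows from an assumed mass per discarded mask, which Sangkham takes from the literature rather than measuring directly.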

In the news of Karar Newspaper on November 12, Dr. Faruk, Head of the Department of Medical Microbiology at KTU Faculty of Medicine Farabi Hospital, predicted an average daily consumption of 75 million masks in Turkey and stated, "If this many masks are thrown into nature, it can bring up a serious challenge for the environment" [14]. Used face masks that are at risk of contamination should be considered medical waste and sent for incineration, as they are treated as coming from a clinical setting. However, a specific waste stream has not yet been adopted for such products used by the general public, and no specific system has been implemented nationally or internationally. At the household level, waste PPE mixes with other wastes, which carries the risk of infecting other residents and waste collectors without losing its contagious effect [15]. According to Circular No. 2020/12 of the Turkish Ministry of Environment and Urbanization's General Directorate of Environmental Management, these wastes are collected separately, transported to a temporary storage area, and cleaned by separate staff in a controlled manner. Moreover, they are kept in temporary storage for at least 72 h and then delivered to the municipality as 'other waste' to be managed within the scope of domestic waste [16].


3 Methodology

Specifying a group of people who use face masks regularly is an important starting point for analyzing the amount of face mask usage. To identify potential users, 121 employees/students who work online and 82 workers who work on-site answered surveys. Although the general age range is 36–50 for both groups, the 25–35 age range mostly works online and the over-50 group mostly works on-site. When the data of the online working group are examined, around 60% of the respondents state that they change 1–3 masks weekly (in the comment section left after this question, some stated that they are usually at home and do not throw away a mask if they were only in non-crowded places). The survey results for employees going to workplaces are different: only around 35% have a separate area where they can dispose of used masks and other PPE, while around 30% change 5–8 masks, 35% change 9–12 masks, and around 5% change more than 12 masks per week. This is quite high compared to the online group (some respondents noted in the comments that the reason is the use of public transportation or mask-changing obligations per shift). Finally, 80% of the respondents stated that if there were a waste bin at their workplace, they would throw their used masks there. As a result, in light of the survey data, it was decided to narrow the scope of the project location to workplaces. A pilot location is planned to be identified first; the project can then be expanded and adapted to other locations according to the outcomes and external factors at the pilot location. Four crowded on-site working environments were selected: İkitelli OSB, Sinan Paşa Shopping Arcade, Zorlu Center Offices, and ITU Ayazağa Campus.
The options were compared in terms of six attributes: Closeness to the Incineration Facility, Number of Workers (Average), Closeness to the Transfer Station, Total Number of Masks in a Working Day, Visitor Average Time Spending, and Average People per m². The AHP method was then used to select the best option. Table 2 illustrates the weights assigned to the six attributes through research and co-decision.

Table 2. The weights assigned to the six attributes through research and co-decision.

| Attribute | Weight |
|---|---|
| Closeness to the incineration facility | 0.08 |
| Number of workers (average) | 0.26 |
| Closeness to the transfer station | 0.14 |
| Total number of masks in a working day | 0.10 |
| Visitor average time spending | 0.24 |
| Average people per m² | 0.18 |
| Total | 1.00 |

Then, Table 3, combining expert opinions and available measurements, is obtained.


Table 3. Pilot location selection.

| Location | Closeness to the incineration facility (0.08) | Number of workers, average (0.26) | Closeness to the transfer station (0.14) | Total number of masks in a working day (0.10) | Visitor average time spending (0.24) | Average people per m² (0.18) | Score |
|---|---|---|---|---|---|---|---|
| İkitelli OSB | 0.26 | 0.61 | 0.39 | 0.56 | 0.09 | 0.16 | 0.34 |
| Sinan Paşa Shopping Arcade | 0.12 | 0.05 | 0.15 | 0.10 | 0.05 | 0.40 | 0.13 |
| Zorlu Center Offices | 0.56 | 0.11 | 0.39 | 0.25 | 0.28 | 0.40 | 0.29 |
| ITU Ayazağa Campus | 0.05 | 0.22 | 0.07 | 0.10 | 0.58 | 0.05 | 0.22 |

As a result, İkitelli OSB gives the best result for pilot area selection.
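The final column of Table 3 is the weighted sum of each location's attribute ratings with the Table 2 weights. A sketch reproducing that column (the location names are transliterated to ASCII, and the two-decimal truncation is inferred from the tabulated values):

```python
# Attribute weights from Table 2, in the same order as the ratings below
weights = [0.08, 0.26, 0.14, 0.10, 0.24, 0.18]

# Attribute ratings per location, from Table 3
ratings = {
    "Ikitelli OSB":               [0.26, 0.61, 0.39, 0.56, 0.09, 0.16],
    "Sinan Pasa Shopping Arcade": [0.12, 0.05, 0.15, 0.10, 0.05, 0.40],
    "Zorlu Center Offices":       [0.56, 0.11, 0.39, 0.25, 0.28, 0.40],
    "ITU Ayazaga Campus":         [0.05, 0.22, 0.07, 0.10, 0.58, 0.05],
}

scores = {loc: sum(w * r for w, r in zip(weights, row)) for loc, row in ratings.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # → Ikitelli OSB 0.34
```

Truncating the four weighted sums to two decimals gives the 0.34 / 0.13 / 0.29 / 0.22 score column of Table 3, with İkitelli OSB ranked first.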

4 Application

The current system for mask disposal is managed in two ways. If the waste is produced at hospitals or any health institution, it is managed as medical waste. If it is produced at houses or workplaces, it should be double-bagged and kept for 72 h before being thrown in the household waste. After that, the waste is transferred to the transfer center and then to landfill. It is not possible to recycle or reuse the masks, which are mainly made of plastic, so millions of masks end up in the soil, creating a possible danger to groundwater, the environment, and animals. The essential needs are as follows: minimizing pollution and waste in the İkitelli OSB area, contributing to energy use and efficiency, and separating used masks, evaluated in the "Other Wastes" category, from hazardous and recyclable wastes. The main goal is to set an example for all workplaces and to educate employees in the İkitelli OSB area on waste masks. For further progress of the project, more details were needed. For this reason, three different departments in the Waste Management Directorship were interviewed by phone. General questions about waste management facilities were answered through the Waste Management Directorship's official correspondence address, and other specific questions were asked of the domestic waste management and recycling departments. When the interviews were analyzed, the working hours, process, and costs of waste management in the recycling department were the most suitable for the project. One


of the recycling trucks can be directed to İkitelli OSB after 5:30 pm for the collection of waste masks, since the monthly cost of a truck is already paid and its idle time can be utilized. One extra shift of three workers is needed, whose schedule will be determined according to the waste amount produced in the pilot area and the capacity of the trucks. After determining the truck's route, the average fuel cost can also be calculated. To calculate the total daily number of visitors in the selected area, the following people are considered: workers, employers, suppliers, and visitors. As stated on the website of İkitelli OSB, total workers number around 300,000, and the current daily total visitor average is 150,000 people. Mahmut M. Aydın, the head of İkitelli OSB, said: 'We have 300 thousand employees in 20 thousand active companies. Our SMEs (small to medium enterprises) have mostly taken their own measures. Our SMEs and industrialists continue their work because there is no curfew or decision to stop work. However, due to the pandemic, the closure of many customs gates around the world has prompted export companies to reduce their capacity or stop production, so we have companies that suspend production and reduce capacity. After the coronavirus was first seen in Turkey on March 11, energy consumption in our region declined 16 percent' [17]. Therefore, one of the assumptions while calculating mask usage and waste is taking 84% of total visitors and workers, since almost all sectors and workplaces faced a shrinkage due to the COVID-19 pandemic. Moreover, he also stated that the spread of the coronavirus is very rapid and that every detail is considered:
'At the entrance to our service building, we provide masks to all guests who come by necessity and have them change their masks; we measure their temperature with thermal cameras and thermometers and have their hands disinfected' [18]. Thus, as can be understood from the above news, every visitor changes their mask, so the other assumption is that each visitor uses and disposes of one mask in İkitelli OSB, independent of time. The total average mask usage is calculated in the table below. Total workers of 300,000 and total daily visitors of 150,000 are multiplied by 0.84 to account for the effect of the shrinking economy on workplaces and the decreased number of unnecessary visits. According to the news mentioned above, every visitor is assumed to use and dispose of one mask. After the mask crisis in Turkey, workers are assumed to use and dispose of two masks on each working day since July 2020. This assumption is also based on the survey results, in which workers stated that on average two masks are used and disposed of. As a result of the calculations and assumptions, İkitelli OSB alone produced 68,166,000 units of face mask waste in 2020. Table 4 shows the calculations for İkitelli OSB. To collect masks, specific bins should be used, and there are numerous types of bins available in the market; however, not all of them meet the requirements for a waste mask bin. To determine what type of bin to use in İkitelli OSB, different kinds of bins were selected for expert evaluation. Mr. Veli and Mr. Sercan, who are responsible for waste management and recycling in Kağıthane, were asked about the criteria and attributes important for bin selection. Their opinions were recorded so that fuzzy TOPSIS could be applied.


Table 4. Face mask waste production in İkitelli, Istanbul, in 2020.

| | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|
| Number of days | 30 | 31 | 30 | 31 | 31 | 30 | 31 | 30 | 31 |
| Number of holiday days | 4 | 7 | 4 | 6 | 6 | 4 | 4 | 5 | 4 |
| Number of working days | 26 | 24 | 26 | 25 | 25 | 26 | 27 | 25 | 27 |
| Number of workers (average)* | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 |
| Number of visitors (average)* | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 |
| Daily mask usage by workers** | 252,000 | 252,000 | 252,000 | 504,000 | 504,000 | 504,000 | 504,000 | 504,000 | 504,000 |
| Daily mask waste by workers | – | – | – | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 | 252,000 |
| Daily mask usage by visitors*** | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 |
| Daily mask waste by visitors | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 | 126,000 |
| Total mask waste daily | 126,000 | 126,000 | 126,000 | 378,000 | 378,000 | 378,000 | 378,000 | 378,000 | 378,000 |
| Total mask waste monthly | 3,276,000 | 3,024,000 | 3,276,000 | 9,450,000 | 9,450,000 | 9,828,000 | 10,206,000 | 9,450,000 | 10,206,000 |

\* March 2020 baseline figures of 300,000 workers and 150,000 daily visitors, multiplied by 0.84.
\** Two masks per worker per working day assumed from July 2020 onward.
\*** One mask per visitor per visit.
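The monthly totals in Table 4 follow directly from the working-day counts and the two waste streams: 126,000 visitor masks per working day throughout, plus 252,000 worker masks per working day from July 2020 onward. A sketch reproducing the stated 2020 total:

```python
# (month, working days) from Table 4, April–December 2020
working_days = [("Apr", 26), ("May", 24), ("Jun", 26), ("Jul", 25), ("Aug", 25),
                ("Sep", 26), ("Oct", 27), ("Nov", 25), ("Dec", 27)]

VISITOR_WASTE = 126_000  # 150,000 visitors × 0.84 × 1 disposed mask per day
WORKER_WASTE = 252_000   # 300,000 workers × 0.84 × 1 disposed mask per day (from July)

total = 0
for month, days in working_days:
    daily = VISITOR_WASTE
    if month in {"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"}:
        daily += WORKER_WASTE  # worker-waste stream starts in July 2020
    total += days * daily

print(f"{total:,}")  # → 68,166,000
```

This matches the 68,166,000 units of face mask waste reported above for İkitelli OSB in 2020.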

Mr. Veli's answers were recorded as Decision Maker 1 (DM1), and Mr. Sercan's answers were recorded as Decision Maker 2 (DM2). There are three alternative bins, listed below. Durability of the materials is considered in terms of both weather conditions and impact resistance.

Alternative 1: The first alternative is brown, 150 L, made of paper, with no lid or handle. It is easy to move.


Alternative 2: The second alternative has different color options, is 660 L, made of plastic, with a lid, handle, and wheels. This bin is heavier than the paper one, but the wheels and handle make it easy to move.

Alternative 3: The third alternative is blue, 1,500 L, made of metal, with a lid, handle, and wheels. Even though this bin is the heaviest, the wheels and handle make it comparably easy to move.

Six criteria were considered for bin selection, and a 5-point rating scale was used. All alternatives, criteria, and the scale were explained clearly to the decision makers; their answers were recorded, and the decision-making process is summarized in the tables below.

C1: Color
C2: Volume
C3: Durability of Material
C4: Lid
C5: Handle/Wheel
C6: Caution Signs

For the fuzzy calculations, Fig. 1 is used to defuzzify the weightings.

Fig. 1. Linguistic scale for importance [19]

Tables 5 and 6 show the alternative ratings and the criteria weightings given by the decision makers, respectively.

Table 5. Alternative ratings for face mask collection.

| Criteria | A1 (DM1) | A1 (DM2) | A2 (DM1) | A2 (DM2) | A3 (DM1) | A3 (DM2) |
|---|---|---|---|---|---|---|
| C1 | JE | JE | JE | JE | JE | JE |
| C2 | JE | MI | JE | MI | SI | ES |
| C3 | JE | JE | VS | SI | ES | VS |
| C4 | JE | JE | VS | ES | ES | ES |
| C5 | MI | MI | VS | VS | VS | VS |
| C6 | ES | ES | SI | MI | JE | JE |


Table 6. Criteria weightage.

| Criteria | DM1 | DM2 |
|---|---|---|
| C1 | JE | MI |
| C2 | ES | VS |
| C3 | VS | SI |
| C4 | VS | ES |
| C5 | MI | SI |
| C6 | SI | VS |

According to the criteria ratings, Table 7 shows the aggregated fuzzy decision matrix for the criteria weights, while Tables 8 and 9 show the normalized and weighted normalized fuzzy decision matrices, respectively.

Table 7. Aggregated fuzzy decision matrix for criteria weights.

| Criteria | Aggregated weight (l, m, u) |
|---|---|
| C1 | (1, 2, 5) |
| C2 | (5, 8, 9) |
| C3 | (3, 6, 9) |
| C4 | (5, 8, 9) |
| C5 | (1, 4, 7) |
| C6 | (3, 6, 9) |
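The aggregated weights in Table 7 can be reproduced by mapping each linguistic term to a triangular fuzzy number (TFN) and aggregating the two decision makers with (min of lower bounds, mean of modes, max of upper bounds). Since Fig. 1's scale is not reproduced here, the term-to-TFN mapping below is an inferred 1–9 scale; it is consistent with Table 7:

```python
# Inferred TFN scale for the linguistic terms (JE, MI, SI, VS, ES)
TFN = {"JE": (1, 1, 3), "MI": (1, 3, 5), "SI": (3, 5, 7), "VS": (5, 7, 9), "ES": (7, 9, 9)}

def aggregate(terms):
    """Aggregate decision makers: min of lows, mean of modes, max of uppers."""
    lows, mids, ups = zip(*(TFN[t] for t in terms))
    return (min(lows), sum(mids) / len(mids), max(ups))

# Table 6: criteria-weight ratings by DM1 and DM2
criteria = {"C1": ("JE", "MI"), "C2": ("ES", "VS"), "C3": ("VS", "SI"),
            "C4": ("VS", "ES"), "C5": ("MI", "SI"), "C6": ("SI", "VS")}

for c, terms in criteria.items():
    print(c, aggregate(terms))
# C1 → (1, 2.0, 5), C2 → (5, 8.0, 9), …, matching Table 7 row by row
```

The same aggregation applied to the Table 5 ratings, followed by normalization, yields the matrices in Tables 8 and 9.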

Table 8. Normalized fuzzy decision matrix.

| Criteria | A1 | A2 | A3 |
|---|---|---|---|
| C1 | (0.333, 0.333, 1.000) | (0.333, 0.333, 1.000) | (0.333, 0.333, 1.000) |
| C2 | (0.200, 0.500, 1.000) | (0.200, 0.500, 1.000) | (0.111, 0.143, 0.333) |
| C3 | (0.111, 0.111, 0.333) | (0.333, 0.667, 1.000) | (0.556, 0.889, 1.000) |
| C4 | (0.111, 0.111, 0.333) | (0.556, 0.889, 1.000) | (0.778, 1.000, 1.000) |
| C5 | (0.111, 0.333, 0.556) | (0.556, 0.778, 1.000) | (0.556, 0.778, 1.000) |
| C6 | (0.778, 1.000, 1.000) | (0.111, 0.444, 0.778) | (0.111, 0.111, 0.333) |

As a result, Alternative 1 has the lowest closeness score (0.361) and Alternative 3 the second lowest (0.422), while Alternative 2 has the highest score, 0.493. Therefore, Alternative 2, the 660-L plastic bin with handles,

Table 9. Weighted normalized fuzzy decision matrix.

| Criteria | A1 | A2 | A3 |
|---|---|---|---|
| C1 | (0.333, 0.667, 5.000) | (0.333, 0.667, 5.000) | (0.333, 0.667, 5.000) |
| C2 | (1.000, 4.000, 9.000) | (1.000, 4.000, 9.000) | (0.556, 1.143, 3.000) |
| C3 | (0.333, 0.667, 3.000) | (1.000, 4.000, 9.000) | (1.667, 5.333, 9.000) |
| C4 | (0.556, 0.889, 3.000) | (2.778, 7.111, 9.000) | (3.889, 8.000, 9.000) |
| C5 | (0.111, 1.333, 3.889) | (0.556, 3.111, 7.000) | (0.556, 3.111, 7.000) |
| C6 | (2.333, 6.000, 9.000) | (0.333, 2.667, 7.000) | (0.333, 0.667, 3.000) |
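From the weighted normalized matrix, fuzzy TOPSIS ranks the alternatives by their distances to the fuzzy positive and negative ideal solutions (FPIS/FNIS). The sketch below takes the element-wise max/min across alternatives as FPIS/FNIS and uses the vertex distance between TFNs; the paper's exact ideal-solution convention is not stated, so the closeness values produced here differ from the reported 0.361/0.493/0.422, but the ranking (A2 > A3 > A1) is the same:

```python
import math

# Table 9: weighted normalized TFNs, rows = criteria C1..C6, columns = A1, A2, A3
M = [
    [(0.333, 0.667, 5.000), (0.333, 0.667, 5.000), (0.333, 0.667, 5.000)],
    [(1.000, 4.000, 9.000), (1.000, 4.000, 9.000), (0.556, 1.143, 3.000)],
    [(0.333, 0.667, 3.000), (1.000, 4.000, 9.000), (1.667, 5.333, 9.000)],
    [(0.556, 0.889, 3.000), (2.778, 7.111, 9.000), (3.889, 8.000, 9.000)],
    [(0.111, 1.333, 3.889), (0.556, 3.111, 7.000), (0.556, 3.111, 7.000)],
    [(2.333, 6.000, 9.000), (0.333, 2.667, 7.000), (0.333, 0.667, 3.000)],
]

def d(a, b):
    """Vertex distance between two triangular fuzzy numbers."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

# Element-wise best/worst across alternatives as the ideal solutions (an assumption)
fpis = [tuple(max(alt[i] for alt in row) for i in range(3)) for row in M]
fnis = [tuple(min(alt[i] for alt in row) for i in range(3)) for row in M]

cc = []
for j in range(3):  # alternatives A1, A2, A3
    d_plus = sum(d(M[i][j], fpis[i]) for i in range(len(M)))
    d_minus = sum(d(M[i][j], fnis[i]) for i in range(len(M)))
    cc.append(d_minus / (d_plus + d_minus))

print([round(c, 3) for c in cc])  # closeness coefficients for A1, A2, A3; A2 ranks first
```

A higher closeness coefficient means the alternative lies nearer the positive ideal and farther from the negative ideal, which is why Alternative 2 is selected.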

lids, and wheels, was selected as the mask collection bin. Figure 2 shows the scores of the alternatives.

Fig. 2. Alternatives’ scoring

5 Conclusions

This study is intended as a non-profit project to prevent masks from ending up in the soil. Within its scope, it aims to keep the large amount of mask waste produced in one pilot region from harming the environment, to raise public awareness, to produce energy, to set an example for other regions, and to protect employees from viruses. As stated before, waste management workers should be well equipped with protective equipment to ensure their health and safety, and repeated collections should be performed to avoid infection and cross-contamination [20]. Moreover, this study tries to create sensitivity to environmental issues and will also research how the public can separate their other wastes; providing the necessary information within Turkey will be critical in this regard. With the pandemic, many things have changed in society, and this change should be evaluated in the context of contributing to energy production in the best way while causing the least damage to the environment. All in all, this study can easily be adjusted to constantly changing circumstances and makes a positive contribution.


References

1. World Health Organization: Coronavirus (2020). https://www.who.int/health-topics/coronavirus#tab=tab_1
2. Pettersson, H., Manley, B., Hernandez, S.: Tracking coronavirus' global spread. CNN Health, 2 December 2020. https://edition.cnn.com/interactive/2020/health/coronavirus-maps-and-cases/
3. Centers for Disease Control and Prevention: Considerations for Wearing Masks (2020). https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/cloth-face-cover-guidance.html
4. World Health Organization: Mask use in the context of COVID-19 (2020). https://www.who.int/publications/i/item/advice-on-the-use-of-masks-in-the-community-during-home-care-and-in-healthcare-settings-in-the-context-of-the-novel-coronavirus-(2019-ncov)-outbreak
5. Islam, S., Kennedy, K.: Single-use masks could be a coronavirus hazard if we don't dispose of them properly. The Conversation, 21 July 2020. https://theconversation.com/single-use-masks-could-be-a-coronavirus-hazard-if-we-dont-dispose-of-them-properly-143007
6. Nzediegwu, C., Chang, S.X.: Improper solid waste management increases potential for COVID-19 spread in developing countries. Resour. Conserv. Recycl. 161, 104947 (2020). https://doi.org/10.1016/j.resconrec.2020.104947
7. Sangkham, S.: Face mask and medical waste disposal during the novel COVID-19 pandemic in Asia. Case Stud. Chem. Environ. Eng. 2, 100052 (2020). https://doi.org/10.1016/j.cscee.2020.100052
8. Allison, A.L., et al.: The environmental dangers of employing single-use face masks as part of a COVID-19 exit strategy. UCL Open: Environment Preprint (2020). https://doi.org/10.14324/111.444/000031.v1
9. Schnurr, R.E.J., et al.: Reducing marine pollution from single-use plastics (SUPs): a review. Mar. Pollut. Bull. (2018). https://doi.org/10.1016/j.marpolbul.2018.10.001
10. Potluri, P., Needham, P.: Technical textiles for protection. In: Woodhead Publishing Series in Textiles (2005). https://doi.org/10.1533/9781845690977.1.151
11. Aragaw, T.A.: Surgical face masks as a potential source for microplastic pollution in the COVID-19 scenario. Mar. Pollut. Bull. 159, 111517 (2020). https://doi.org/10.1016/j.marpolbul.2020.111517
12. Fadare, O.O., Okoffo, E.D.: Covid-19 face masks: a potential source of microplastic fibers in the environment. Sci. Total Environ. 737, 140279 (2020). https://doi.org/10.1016/j.scitotenv.2020.140279
13. Hürriyet: Hafta sonu sokağa çıkma kısıtlaması başladı! [The weekend curfew has begun! Will it be possible to go to the market, bakery, pharmacy, and grocer during the curfew?], 4 December 2020. https://www.hurriyet.com.tr/galeri-son-dakika-haberi-hafta-sonu-sokaga-cikma-kisitlamasi-bu-aksam-basladi-market-firin-eczane-ve-bakkala-gidilebilecek-mi-iste-merak-edilenler-41678572/1
14. Karar: Çöpe atılan maskeler için Bakanlıktan genelge [Ministry circular on masks thrown in the garbage], 12 December 2020. https://www.karar.com/cope-atilan-maskeler-icin-bakanliktan-genelge-1593708
15. Allison, A.L., et al.: The environmental dangers of employing single-use face masks as part of a COVID-19 exit strategy. UCL Open: Environment Preprint (2020). https://doi.org/10.14324/111.444/000031.v1
16. Turkey Ministry of Environment and Urbanization: Tek Kullanımlık Maske, Eldiven Gibi Tek Kullanımlık Hijyen Malzeme Atıklarının Yönetiminde Covid-19 Tedbirleri [COVID-19 measures in the management of single-use hygiene material wastes such as disposable masks and gloves] (2020). https://webdosya.csb.gov.tr/db/cygm/icerikler/gng2020-16-cov-d-19-20200408101457.pdf
17. Dünya: OSB'lerde üretim daraldı, talep 'cezasız öteleme' [Production contracted in OSBs; the demand is 'penalty-free deferral'], 27 March 2020. https://www.dunya.com/ekonomi/osblerde-uretim-daraldi-talep-cezasiz-oteleme-haberi-466033

64

B. Hekimoğlu et al.


Identification of Optimum COVID-19 Vaccine Distribution Strategy Under Integrated Pythagorean Fuzzy Environment

Tolga Gedikli and Beyzanur Cayir Ervural

Department of Industrial Engineering, Konya Food and Agriculture University, Konya, Turkey
{tolga.gedikli,beyzanur.ervural}@gidatarim.edu.tr

Abstract. The recent COVID-19 pandemic has affected the world economically, socially, and culturally. One of the most important ways to recover from its devastating effects is to identify an ideal vaccination strategy. Since it is not possible to vaccinate everyone in the world at the same time, vaccination proceeds by giving priority to certain groups; governments must therefore determine priority groups for allocating COVID-19 vaccines. In this study, four main criteria (age, people's health status, woman status, and job kinds), fifteen sub-criteria, and six possible vaccine alternatives are identified. First, the main and sub-criteria are evaluated using the Interval-Valued Pythagorean Fuzzy Analytic Hierarchy Process (IVPF-AHP). Then, the possible COVID-19 vaccine alternatives are ranked using Interval-Valued Pythagorean Fuzzy WASPAS (IVPF-WASPAS). The results show that the vaccine suitable for people with risky health problems and for the elderly has priority over the other alternatives.

Keywords: COVID-19 · Vaccine · Multi-criteria decision making · Pythagorean fuzzy AHP · Pythagorean fuzzy WASPAS

1 Introduction

COVID-19 first appeared in the city of Wuhan in the Hubei region of China in late 2019, and the World Health Organization (WHO) declared Coronavirus disease (COVID-19) a pandemic on March 11, 2020. All of humanity has been struggling with this relentless disease, which afflicts the whole world. The epicentre of the outbreak, Wuhan, is a major transit hub, and domestic and international movements were important determinants of the rapid spread of the disease, owing to the possibility of unidentified transmission and the mobility of the population [1]. Mortality rates vary considerably between regions and countries, partly due to the varying extent of testing and possibly due to differences in demographics and quality of healthcare. Because the disease is highly contagious, strict measures had to be taken to control transmission of the virus until vaccines were developed, such as limiting outdoor activities, self-isolation, closing borders, lockdowns, and closing schools and universities [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 65–76, 2022. https://doi.org/10.1007/978-3-030-90421-0_6


The inability to deliver the vaccine to all people at the desired level is one of the biggest handicaps of health policy. In past pandemics, serious losses occurred until a vaccine was developed, so vaccine development is a very important and strategic weapon in such epidemics. It has been observed that while powerful, high-income countries have easy access to vaccines, low-income countries are in distress [3]. Before the vaccine, important measures such as hygiene rules, social distancing, quarantine, remote work, and avoiding collective activities were applied; after the vaccine was developed, living conditions gained somewhat more flexibility. Under these circumstances, it is important to use the scarce, limited resources available in the most appropriate way, which is why the vaccine wars of the century have begun. Vaccination alone has not been sufficient for full normalization: analyses show that re-infections, case numbers, intensive care rates, and death rates have partially reappeared [4]. State authorities are responsible for using maximum resources to combat the spread of the disease and protect lives, and for implementing proposed solutions. In pandemic conditions, the most effective way forward is the application of correct, fast, and practical vaccination strategies. However, how the limited amount of vaccines should be distributed at the global and national levels is a crucial decision-making issue [5]. Under these conditions, it is necessary to prioritize and decide which age, gender, and disease groups should be vaccinated in the first stage. In this way, people in vulnerable groups can be identified and a roadmap drawn to fight COVID-19 in a reasonable and successful way.
Most studies in the literature demonstrate serious interest in the identification of an ideal vaccination policy. Lopez and Gunasekaran [6] employed the fuzzy VIKOR (VIseKriterijumska Optimizacija I KOmpromisno Resenje) method to assess H1N1 influenza vaccination tactics. Singh and Avikal [7] applied the AHP method to prioritize preventive actions for curtailing the COVID-19 disease. Bubar et al. [8] proposed model-informed vaccine prioritization strategies by age and serostatus. Dooling [9] classified the population into high-risk and low-risk groups for vaccine provision. Alkan and Kahraman [10] applied the q-rung orthopair fuzzy TOPSIS (technique for order of preference by similarity to ideal solution) method to propose governmental policies against the COVID-19 pandemic. Hezam et al. [11] studied a neutrosophic multi-criteria decision-making (MCDM) approach for defining the priority groups for the COVID-19 vaccine. Ahmad et al. [12] applied the Best Worst Method to the assessment of intervention strategies for COVID-19. Varotsos et al. [13] developed a decision support system as an epidemiological prediction tool to provide several tactics, including vaccination strategies, against COVID-19. Markovič et al. [14] proposed vaccination strategies for best COVID-19 containment, considering higher-risk groups: older people, people with comorbidities and poor metabolic health, and people with a lower quality of life in general. Jentsch et al. [15] used mathematical modelling to compare expected COVID-19 deaths across various prioritization strategies: elderly individuals, children, uniform allocation, and a new strategy based on the contact structure of the citizens.

According to these literature surveys, there is still a considerable gap in studies of COVID-19 vaccination strategies to combat the pandemic. Since fuzzy set methodologies are known to succeed under vague, uncertain, and limited data, we attempt to determine the most appropriate vaccine strategy for the chaotic and ambiguous dynamics in the nature of the pandemic. In this study, we aim to develop optimal vaccine strategies in a vague and indefinite environment utilizing the Interval-Valued Pythagorean Fuzzy Analytic Hierarchy Process (IVPF-AHP) and Interval-Valued Pythagorean Fuzzy Weighted Aggregated Sum Product Assessment (IVPF-WASPAS) methodology, given the limited amount of vaccine doses under restricted competition conditions. A vaccine strategy is proposed for the first time in the literature using a combined IVPF-MCDM method, based on the criteria and alternatives from the study of Hezam et al. [11], taken as a case study. The main criteria, sub-criteria, and alternatives are stated based on reports published by the WHO and the consensus of key experts [11]. Accordingly, four main criteria, fifteen sub-criteria, and six vaccine strategies were considered according to the age, health status, woman state, and job kind indices. Initially, the weights of the main and sub-criteria are calculated employing IVPF-AHP; the vaccination strategies are then ranked utilizing IVPF-WASPAS.

The rest of the paper is organized as follows: the utilized methods, IVPF-AHP and IVPF-WASPAS, are presented in Section 2. The case study, the application steps of the methods, and the obtained analysis results are discussed in Section 3. Conclusions and future research directions are given in Section 4.

2 The Proposed Methodology

A. Pythagorean Fuzzy Analytic Hierarchy Process (PF-AHP)

IVPF-AHP is used to calculate the criterion weights. The linguistic variable table of the IVPF-AHP method is given in Ilbahar et al. [16]. The IVPF-AHP steps are as follows [16].

Step 1: Aggregate the criteria judgments with the IVPF weighted geometric (IVPFWG) operator presented in Eq. (1) and construct the pairwise comparison matrix $R = (r_{ij})_{m \times m}$:

$\mathrm{IVPFWG}(\tilde{P}_1, \tilde{P}_2, \ldots, \tilde{P}_n) = \left( \left[ \prod_{i=1}^{n} (\mu_i^L)^{w_i},\ \prod_{i=1}^{n} (\mu_i^U)^{w_i} \right],\ \left[ \prod_{i=1}^{n} (v_i^L)^{w_i},\ \prod_{i=1}^{n} (v_i^U)^{w_i} \right] \right)$  (1)

Step 2: Find the differences between the lower and upper values of the membership and non-membership functions using Eqs. (2) and (3):

$d_{ij}^{L} = (\mu_{ij}^{L})^2 - (v_{ij}^{U})^2$  (2)

$d_{ij}^{U} = (\mu_{ij}^{U})^2 - (v_{ij}^{L})^2$  (3)

Step 3: Calculate the interval multiplicative matrix $S = (s_{ij})_{m \times m}$ using Eqs. (4) and (5):

$s_{ij}^{L} = 1000^{d_{ij}^{L}}$  (4)

$s_{ij}^{U} = 1000^{d_{ij}^{U}}$  (5)

Step 4: Find the determinacy values $\tau = (\tau_{ij})_{m \times m}$ using Eq. (6):

$\tau_{ij} = 1 - \left( (\mu_{ij}^{U})^2 - (\mu_{ij}^{L})^2 \right) - \left( (v_{ij}^{U})^2 - (v_{ij}^{L})^2 \right)$  (6)

Step 5: Calculate the matrix of weights $T = (t_{ij})_{m \times m}$ using Eq. (7):

$t_{ij} = \frac{s_{ij}^{L} + s_{ij}^{U}}{2}\, \tau_{ij}$  (7)

Step 6: Calculate the normalized priority weights $w_i$ using Eq. (8):

$w_i = \frac{\sum_{j=1}^{m} t_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{m} t_{ij}}$  (8)
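As a rough illustration, Steps 2–6 can be traced in code. The judgment values below are hypothetical placeholders, not the real linguistic-scale numbers, which are taken from Ilbahar et al. [16]:

```python
# Sketch of IVPF-AHP Steps 2-6: from an aggregated IVPF pairwise comparison
# matrix to crisp criterion weights. All judgment values are hypothetical.
def ivpf_ahp_weights(R):
    """R: m x m matrix of (muL, muU, vL, vU) tuples -> normalized weights."""
    m = len(R)
    T = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            muL, muU, vL, vU = R[i][j]
            dL = muL**2 - vU**2                              # Eq. (2)
            dU = muU**2 - vL**2                              # Eq. (3)
            sL, sU = 1000.0**dL, 1000.0**dU                  # Eqs. (4)-(5)
            tau = 1 - (muU**2 - muL**2) - (vU**2 - vL**2)    # Eq. (6)
            T[i][j] = (sL + sU) / 2 * tau                    # Eq. (7)
    total = sum(sum(row) for row in T)
    return [sum(row) / total for row in T]                   # Eq. (8)

EE = (0.1965, 0.1965, 0.1965, 0.1965)  # placeholder "exactly equal" number
hi = (0.55, 0.65, 0.25, 0.35)          # hypothetical "more important" judgment
lo = (0.25, 0.35, 0.55, 0.65)          # its reciprocal
w = ivpf_ahp_weights([[EE, hi], [lo, EE]])
print([round(x, 3) for x in w])        # the first criterion dominates
```

Note how the exponentiation with base 1000 in Eqs. (4)-(5) sharply amplifies even moderate differences between membership and non-membership, which is why dominant criteria (such as C21 later in the case study) receive large weights.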

B. Pythagorean Fuzzy WASPAS

After the criterion weights are obtained, the IVPF-WASPAS method is used to rank the alternatives. The linguistic variables used for the IVPF-WASPAS method are given in Table 1.

Table 1. Scale for the IVPF-WASPAS evaluations.

Linguistic term  μL    μU    vL    vU
VVB              0.03  0.18  0.75  0.90
VB               0.12  0.27  0.66  0.81
B                0.21  0.36  0.57  0.72
MB               0.30  0.45  0.48  0.63
F                0.39  0.54  0.39  0.54
MG               0.48  0.63  0.30  0.45
G                0.57  0.72  0.21  0.35
VG               0.66  0.81  0.12  0.27
VVG              0.75  0.90  0.03  0.18

*VVB: Very very bad, VB: Very bad, B: Bad, MB: Medium bad, F: Fair, MG: Medium good, G: Good, VG: Very good, VVG: Very very good
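The Table 1 scale can be held as a simple lookup. Note its near-symmetry: each term's membership interval is the non-membership interval of its mirror term (the printed G row, with vU = 0.35 rather than 0.36, deviates slightly from this pattern, so the G/B pair is excluded from the check below):

```python
# Table 1 as a lookup: linguistic term -> (muL, muU, vL, vU).
SCALE = {
    "VVB": (0.03, 0.18, 0.75, 0.90), "VB": (0.12, 0.27, 0.66, 0.81),
    "B":   (0.21, 0.36, 0.57, 0.72), "MB": (0.30, 0.45, 0.48, 0.63),
    "F":   (0.39, 0.54, 0.39, 0.54), "MG": (0.48, 0.63, 0.30, 0.45),
    "G":   (0.57, 0.72, 0.21, 0.35), "VG": (0.66, 0.81, 0.12, 0.27),
    "VVG": (0.75, 0.90, 0.03, 0.18),
}
# Mirror pairs swap the membership and non-membership intervals:
for a, b in [("VVB", "VVG"), ("VB", "VG"), ("MB", "MG")]:
    muL, muU, vL, vU = SCALE[a]
    assert SCALE[b] == (vL, vU, muL, muU)
```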


The IVPF-WASPAS steps are as follows [17].

Step 1: Aggregate the expert opinions using the IVPF weighted average (IVPFWA) operator given in Eq. (9), where n denotes the number of experts:

$\mathrm{IVPFWA} = \left( \left[ \sum_{i=1}^{n} w_i \mu_i^L,\ \sum_{i=1}^{n} w_i \mu_i^U \right],\ \left[ \sum_{i=1}^{n} w_i v_i^L,\ \sum_{i=1}^{n} w_i v_i^U \right] \right)$  (9)

Step 2: Calculate the weighted sum model $\tilde{Q}_i^1$ for the alternatives using the operators of interval-valued Pythagorean fuzzy numbers (IVPFNs) given in Eqs. (10), (11), and (12) [18]:

$\tilde{Q}_i^1 = \sum_{j=1}^{n} \tilde{r}_{ij} w_j$  (10)

$\lambda \tilde{p} = \left( \left[ \sqrt{1-\left(1-(\mu^L)^2\right)^{\lambda}},\ \sqrt{1-\left(1-(\mu^U)^2\right)^{\lambda}} \right],\ \left[ (v^L)^{\lambda},\ (v^U)^{\lambda} \right] \right)$  (11)

$\tilde{p}_1 \oplus \tilde{p}_2 = \left( \left[ \sqrt{(\mu_1^L)^2 + (\mu_2^L)^2 - (\mu_1^L)^2 (\mu_2^L)^2},\ \sqrt{(\mu_1^U)^2 + (\mu_2^U)^2 - (\mu_1^U)^2 (\mu_2^U)^2} \right],\ \left[ v_1^L v_2^L,\ v_1^U v_2^U \right] \right)$  (12)

Step 3: Calculate the weighted product model $\tilde{Q}_i^2$ for the alternatives using Eq. (13); Eqs. (14) and (15) [18] are used to complete the computation:

$\tilde{Q}_i^2 = \prod_{j=1}^{n} \tilde{r}_{ij}^{\,w_j}$  (13)

$\tilde{p}^{\lambda} = \left( \left[ (\mu^L)^{\lambda},\ (\mu^U)^{\lambda} \right],\ \left[ \sqrt{1-\left(1-(v^L)^2\right)^{\lambda}},\ \sqrt{1-\left(1-(v^U)^2\right)^{\lambda}} \right] \right)$  (14)

$\tilde{p}_1 \otimes \tilde{p}_2 = \left( \left[ \mu_1^L \mu_2^L,\ \mu_1^U \mu_2^U \right],\ \left[ \sqrt{(v_1^L)^2 + (v_2^L)^2 - (v_1^L)^2 (v_2^L)^2},\ \sqrt{(v_1^U)^2 + (v_2^U)^2 - (v_1^U)^2 (v_2^U)^2} \right] \right)$  (15)

Step 4: Determine the threshold value $\lambda$ and combine the Pythagorean fuzzy weighted sum and weighted product values using Eqs. (11), (12), and (16) [18]:

$\tilde{Q}_i = \lambda \tilde{Q}_i^1 + (1-\lambda) \tilde{Q}_i^2, \quad \lambda \in [0, 1]$  (16)

Step 5: Defuzzify the relative importance scores to determine the final result of each alternative using Eq. (17):

$p = \frac{\mu_L^2 + \mu_U^2 + (1 - v_L^2) + (1 - v_U^2) + \mu_L \mu_U - \sqrt{(1 - v_L^2)(1 - v_U^2)}}{4}$  (17)

Step 6: Rank the alternatives by decreasing score; the alternative with the highest score is the best alternative.
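Steps 2–6 can be sketched as code, realizing the sum in Eq. (10) with the ⊕ and scalar-multiple operators and the product in Eq. (13) with the ⊗ and power operators. The two alternatives and their weights below are illustrative, not the case-study data:

```python
# Sketch of IVPF-WASPAS (Steps 2-6) for one alternative rated on several
# criteria; ratings are (muL, muU, vL, vU) tuples, weights sum to 1.
import math

def times(lam, p):                      # Eq. (11): scalar multiple lam*p
    muL, muU, vL, vU = p
    f = lambda mu: math.sqrt(1 - (1 - mu**2)**lam)
    return (f(muL), f(muU), vL**lam, vU**lam)

def plus(p, q):                         # Eq. (12): IVPFN addition
    s = lambda a, b: math.sqrt(a**2 + b**2 - (a*b)**2)
    return (s(p[0], q[0]), s(p[1], q[1]), p[2]*q[2], p[3]*q[3])

def power(p, lam):                      # Eq. (14): p**lam
    muL, muU, vL, vU = p
    g = lambda v: math.sqrt(1 - (1 - v**2)**lam)
    return (muL**lam, muU**lam, g(vL), g(vU))

def prod(p, q):                         # Eq. (15): IVPFN multiplication
    s = lambda a, b: math.sqrt(a**2 + b**2 - (a*b)**2)
    return (p[0]*q[0], p[1]*q[1], s(p[2], q[2]), s(p[3], q[3]))

def waspas_score(ratings, weights, lam=0.5):
    q1 = times(weights[0], ratings[0])              # Eq. (10): weighted sum
    for r, wj in zip(ratings[1:], weights[1:]):
        q1 = plus(q1, times(wj, r))
    q2 = power(ratings[0], weights[0])              # Eq. (13): weighted product
    for r, wj in zip(ratings[1:], weights[1:]):
        q2 = prod(q2, power(r, wj))
    q = plus(times(lam, q1), times(1 - lam, q2))    # Eq. (16)
    muL, muU, vL, vU = q                            # Eq. (17): defuzzify
    return (muL**2 + muU**2 + (1 - vL**2) + (1 - vU**2)
            + muL*muU - math.sqrt((1 - vL**2) * (1 - vU**2))) / 4

good = [(0.66, 0.81, 0.12, 0.27), (0.57, 0.72, 0.21, 0.35)]  # "VG", "G"
bad  = [(0.12, 0.27, 0.66, 0.81), (0.21, 0.36, 0.57, 0.72)]  # "VB", "B"
w = [0.6, 0.4]
print(waspas_score(good, w) > waspas_score(bad, w))  # True
```

Because of intermediate rounding, a direct re-implementation like this may differ slightly from the rounded values printed in the paper's result tables, but the induced ranking behaves the same way: uniformly better ratings yield a higher defuzzified score.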


3 Case Study

The data set of the problem is taken from Hezam et al. [11], who proposed a framework combining the AHP and TOPSIS approaches to identify a suitable vaccination strategy. Three experts evaluated the criteria weights and the possible COVID-19 vaccine alternatives, using neutrosophic fuzzy numbers for the evaluations. In this study, the same COVID-19 vaccine alternatives and criteria are evaluated with the IVPF-AHP and IVPF-WASPAS methods: the criteria weights are computed with IVPF-AHP, and the COVID-19 vaccine alternatives are then ranked with IVPF-WASPAS.

A. Identifying the Criteria

In this study, four main criteria and fifteen sub-criteria were considered. The main criteria and sub-criteria are defined as shown in Fig. 1.

Age (C1): C11 Sick elderly people, C12 Healthy elderly people, C13 Sick adult people, C14 Healthy adult people, C15 Sick children, C16 Healthy children
People's health state (C2): C21 High risk, C22 Risk, C23 Risk free
Woman (C3): C31 Pregnant, C32 Breastfeeding, C33 Other
Job kinds (C4): C41 Health worker, C42 Basic worker, C43 Other

Fig. 1. Main criteria and sub-criteria.

B. COVID-19 Vaccine Alternatives

The best vaccines are characterized by several factors, including safety and extensive testing before general use, price, quality, and risks. Since not all features of the current vaccines are known, some assumptions have been made about the COVID-19 vaccine alternatives, which are defined as follows [11]:

A1: More appropriate for the elderly.
A2: Appropriate for people with risky health problems.
A3: Appropriate for pregnant and breastfeeding women.
A4: Appropriate for health workers.
A5: Appropriate for young and healthy people.
A6: More appropriate for young adults and children.

C. Application of Pythagorean Fuzzy AHP

The Pythagorean fuzzy AHP method was used to evaluate the criterion weights. The pairwise comparison matrix for the main criteria is given in Table 2, and the pairwise comparison matrix for the sub-criteria is given in Table 3. The criteria weights are then aggregated with the IVPFWG operator.

Table 2. Pairwise comparison matrix of the main criteria.

Main criteria  C1           C2           C3           C4           w
C1             EE, EE, EE   BAI, AI, AI  HI, AAI, AI  AI, AI, BAI  0.25
C2             AAI, AI, AI  EE, EE, EE   HI, AAI, HI  AI, AI, AI   0.36
C3             LI, BAI, AI  LI, BAI, LI  EE, EE, EE   LI, AI, BAI  0.12
C4             AI, AI, BAI  AI, AI, AI   HI, AI, AAI  EE, EE, EE   0.26

By following the steps given in Sect. 2.A, the criterion weights were calculated with the Pythagorean fuzzy AHP method. The weights of the main criteria, the local weights of the sub-criteria, and the global weights are given in Table 4. According to the results obtained, the criteria with the highest weights are C21, C41, C22, C11, and C31, respectively.

Table 3. Pairwise comparison matrices of the sub-criteria.

C1   C11  C12  C13  C14  C15  C16   w
C11  EE   HI   AI   CHI  AI   CHI   0.35
C12  LI   EE   LI   HI   VLI  VLI   0.04
C13  AI   HI   EE   VHI  BAI  VHI   0.19
C14  CLI  LI   VLI  EE   CLI  AI    0.02
C15  AI   VHI  AAI  CHI  EE   VHI   0.32
C16  CLI  VHI  VLI  AI   VLI  EE    0.09

C2   C21  C22  C23   w
C21  EE   VHI  CHI   0.72
C22  VLI  EE   VHI   0.25
C23  CLI  VLI  EE    0.03

C3   C31  C32  C33   w
C31  EE   HI   CHI   0.67
C32  LI   EE   VHI   0.30
C33  CLI  VLI  EE    0.03

C4   C41  C42  C43   w
C41  EE   HI   CHI   0.67
C42  LI   EE   VHI   0.30
C43  CLI  VLI  EE    0.03

D. Application of Pythagorean Fuzzy WASPAS

In this section, after obtaining the criterion weights with IVPF-AHP, the IVPF-WASPAS method is used to evaluate the alternatives. The evaluation matrix of the COVID-19 vaccine alternatives according to the sub-criteria is given in Table 5.

Table 4. Final weights of sub-criteria and main criteria.

Main criteria  Weights  Sub-criteria  Local weights  Global weights
C1             0.253    C11           0.345          0.087
                        C12           0.040          0.010
                        C13           0.191          0.048
                        C14           0.016          0.004
                        C15           0.320          0.081
                        C16           0.088          0.022
C2             0.358    C21           0.719          0.257
                        C22           0.254          0.091
                        C23           0.027          0.010
C3             0.124    C31           0.672          0.084
                        C32           0.297          0.037
                        C33           0.031          0.004
C4             0.264    C41           0.672          0.178
                        C42           0.297          0.078
                        C43           0.031          0.008
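Each global weight in Table 4 is simply the main-criterion weight multiplied by the sub-criterion's local weight, which can be spot-checked:

```python
# Global weight = main-criterion weight x local sub-criterion weight,
# spot-checked against two Table 4 rows (values as printed).
main = {"C1": 0.253, "C2": 0.358, "C3": 0.124, "C4": 0.264}
local = {"C11": 0.345, "C21": 0.719}
assert round(main["C1"] * local["C11"], 3) == 0.087   # C11 global weight
assert round(main["C2"] * local["C21"], 3) == 0.257   # C21, the top criterion
# A few rows differ in the last digit (e.g. C41: 0.264 * 0.672 = 0.177 vs the
# printed 0.178) because the displayed weights are themselves rounded.
```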

Table 5. Evaluation matrix of COVID-19 vaccine alternatives according to sub-criteria.

Expert 1  A1   A2   A3   A4   A5   A6
C11       VVG  G    VVB  VVB  VVB  VVB
C12       VVG  VVB  VVB  VVB  VVB  VVB
C13       B    G    VVB  VVB  F    VVB
C14       B    VVB  VVB  VVB  F    VVB
C15       VVB  G    VVB  VVB  VVB  VVG
C16       VVB  VVB  VVB  VVB  VVB  VVG
C21       B    G    VVB  VVB  VVB  VVB
C22       B    F    VVB  VVB  VVB  VVB
C23       B    VVB  VVB  VVB  VVB  VVB
C31       B    B    VVG  VVB  B    VVB
C32       B    B    VVG  VVB  B    VVB
C33       B    B    B    VVB  VVB  VVB
C41       B    B    VVB  VVG  B    VVB
C42       B    B    VVB  F    B    VVB
C43       B    B    VVB  VVB  B    VVB

Expert 2  A1   A2   A3   A4   A5   A6
C11       G    F    VVB  VVB  VVB  VVB
C12       G    VVB  VVB  VVB  VVB  VVB
C13       B    F    VVB  VVB  G    VVB
C14       B    VVB  VVB  VVB  G    VVB
C15       VVB  F    VVB  VVB  VVB  G
C16       VVB  VVB  VVB  VVB  VVB  G
C21       B    F    VVB  VVB  VVB  VVB
C22       B    B    VVB  VVB  VVB  VVB
C23       B    VVB  VVB  VVB  VVB  VVB
C31       B    B    G    VVB  B    VVB
C32       B    B    G    VVB  B    VVB
C33       B    B    B    VVB  VVB  VVB
C41       B    B    B    G    B    VVB
C42       B    B    B    G    B    VVB
C43       B    B    B    VVB  B    VVB

Expert 3  A1   A2   A3   A4   A5   A6
C11       F    VVG  VVB  VVB  VVB  VVB
C12       F    B    VVB  VVB  VVB  VVB
C13       VVB  VVG  VVB  VVB  VVG  VVB
C14       VVB  VVB  VVB  VVB  VVB  VVB
C15       VVB  VVG  VVB  VVB  VVG  F
C16       VVB  VVB  VVB  VVB  VVB  F
C21       VVB  VVG  VVB  VVB  VVB  VVB
C22       VVB  G    VVB  VVB  VVB  VVB
C23       VVB  VVB  VVB  VVB  VVB  VVB
C31       VVB  B    F    VVB  VVB  VVB
C32       VVB  B    F    VVB  VVB  VVB
C33       VVB  B    B    VVB  VVB  VVB
C41       VVB  B    B    F    VVB  VVB
C42       VVB  B    B    F    VVB  VVB
C43       VVB  B    B    VVB  B    VVB

First, expert opinions are combined with the IVPFWA operator, and the aggregated decision matrix is given in Table 6.
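With equal expert weights (1/3 each), Eq. (9) reproduces the aggregated cells in Table 6 from the Table 5 ratings. For example, criterion C11 for alternative A1 was rated VVG, G, and F by the three experts:

```python
# Eq. (9) (IVPFWA) with equal expert weights, applied to one cell:
# C11 / A1, rated VVG, G, F by experts 1-3 (values from Table 1).
ratings = [(0.75, 0.90, 0.03, 0.18),   # VVG
           (0.57, 0.72, 0.21, 0.35),   # G
           (0.39, 0.54, 0.39, 0.54)]   # F
w = [1/3, 1/3, 1/3]
agg = tuple(round(sum(wi * r[k] for wi, r in zip(w, ratings)), 2)
            for k in range(4))
print(agg)  # (0.57, 0.72, 0.21, 0.36) -- the C11/A1 cell of Table 6
```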

Table 6. Aggregated decision matrix in the form of IVPFNs (each cell: μL, μU, vL, vU).

      A1                   A2                   A3                   A4                   A5                   A6
C11   0.57,0.72,0.21,0.36  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C12   0.57,0.72,0.21,0.36  0.09,0.24,0.69,0.84  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C13   0.15,0.30,0.63,0.78  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90
C14   0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.33,0.48,0.45,0.60  0.03,0.18,0.75,0.90
C15   0.03,0.18,0.75,0.90  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.27,0.42,0.51,0.66  0.57,0.72,0.21,0.36
C16   0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.57,0.72,0.21,0.36
C21   0.15,0.30,0.63,0.78  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C22   0.15,0.30,0.63,0.78  0.39,0.54,0.39,0.54  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C23   0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C31   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90
C32   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.57,0.72,0.21,0.36  0.03,0.18,0.75,0.90  0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90
C33   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.21,0.36,0.57,0.72  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90  0.03,0.18,0.75,0.90
C41   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.15,0.30,0.63,0.78  0.57,0.72,0.21,0.36  0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90
C42   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.15,0.30,0.63,0.78  0.45,0.60,0.33,0.48  0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90
C43   0.15,0.30,0.63,0.78  0.21,0.36,0.57,0.72  0.15,0.30,0.63,0.78  0.03,0.18,0.75,0.90  0.21,0.36,0.57,0.72  0.03,0.18,0.75,0.90


Then, the IVPF-WASPAS method was applied using the steps given in Sect. 2.B. The results, defuzzified values, and alternative ranks obtained with threshold value λ = 0.5 are given in Table 7.

Table 7. Results of IVPF-WASPAS.

Alternatives  μL    μU    vL    vU    Defuzzified value  Rank
A1            0.20  0.35  0.63  0.83  0.32               2
A2            0.40  0.58  0.40  0.58  0.52               1
A3            0.17  0.31  0.69  0.91  0.28               4
A4            0.21  0.35  0.66  0.88  0.32               3
A5            0.14  0.29  0.70  0.91  0.26               5
A6            0.15  0.27  0.74  0.97  0.24               6

Pythagorean fuzzy set-based MCDM methods have been widely preferred recently, since Pythagorean fuzzy sets reflect expert opinions better than intuitionistic fuzzy sets [16]. According to the obtained results, alternative A2 is the best option among the vaccine strategies; the remaining alternatives emerged as A1, A4, A3, A5, and A6, respectively. In other words, the vaccine that is appropriate for people with risky health problems should be used first; if it is not available, the vaccine appropriate for elderly people should be preferred. In Hezam et al.'s study, the alternative COVID-19 vaccines were evaluated with AHP and TOPSIS methods based on neutrosophic fuzzy numbers, and the resulting order of vaccine strategies was A2, A4, A1, A3, A5, and A6. When the two studies are compared, alternative A2 emerges as the best option in both. Similar results were obtained, with only minor differences in the ranking of the other alternatives, which indicates that the evaluated approach produces consistent results.

4 Conclusions

The long duration of vaccine development makes it impossible for the produced vaccines to be used equally and fairly all over the world, leaving some parts of the world deprived of vaccines. Within countries, there are also concerns about the fair distribution of vaccines among different groups. For this reason, until the number of produced vaccines increases, priority groups should be determined and vaccination carried out accordingly. At this point, objective scientific methods are needed to distribute the limited vaccine resources to the most susceptible groups in the most rational way. In this study, four main criteria (age, people's health status, woman status, and job kinds) and fifteen sub-criteria were determined. The IVPF-AHP method was used to evaluate and rank the main and sub-criteria. Then, the six possible alternative COVID-19 vaccines were ranked using the IVPF-WASPAS method. The results give priority to high-risk people, health workers, at-risk people, sick elderly people, and pregnant women. According to the obtained criteria weights, the alternative COVID-19 vaccine suitable for people with risky health problems was selected as the best option among the vaccine strategies. In future studies, alternative vaccines from more realistic settings can be evaluated by considering more groups. In addition, other fuzzy sets, apart from Pythagorean fuzzy sets, can be integrated into MCDM methods as a solution approach.

References

1. Wu, J.T., Leung, K., Leung, G.M.: Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. Lancet 395(10225), 689–697 (2020). https://doi.org/10.1016/S0140-6736(20)30260-9
2. UK Government: Coronavirus (COVID-19) restrictions: what you can and cannot do. GOV.UK
3. UN News: Low-income countries have received just 0.2 percent of all COVID-19 shots given (2021)
4. Vox: A Covid-19 vaccine may not be enough to end the pandemic (2020)
5. OECD Policy Responses to Coronavirus (COVID-19): Access to COVID-19 vaccines: global approaches in a global crisis (2021)
6. Lopez, D., Gunasekaran, M.: Assessment of vaccination strategies using fuzzy multi-criteria decision making. In: Ravi, V., Panigrahi, B.K., Das, S., Suganthan, P.N. (eds.) Proceedings of the Fifth International Conference on Fuzzy and Neuro Computing (FANCCO - 2015). AISC, vol. 415, pp. 195–208. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-27212-2_16
7. Singh, R., Avikal, S.: COVID-19: a decision-making approach for prioritization of preventive activities. 13(3), 257–262 (2020). https://doi.org/10.1080/20479700.2020.1782661
8. Bubar, K.M., et al.: Model-informed COVID-19 vaccine prioritization strategies by age and serostatus. Science 371(6532), 916–921 (2021). https://doi.org/10.1126/science.abe6959
9. Dooling, K.: COVID-19 vaccine prioritization: Work Group considerations. Advisory Committee on Immunization Practices, Atlanta, GA (2020)
10. Alkan, N., Kahraman, C.: Evaluation of government strategies against COVID-19 pandemic using q-rung orthopair fuzzy TOPSIS method. Appl. Soft Comput. 110, 107653 (2021). https://doi.org/10.1016/j.asoc.2021.107653
11. Hezam, I.M., Nayeem, M.K., Foul, A., Alrasheedi, A.F.: COVID-19 vaccine: a neutrosophic MCDM approach for determining the priority groups. Results Phys. 20, 103654 (2021). https://doi.org/10.1016/j.rinp.2020.103654
12. Ahmad, N., Hasan, M.G., Barbhuiya, R.K.: Identification and prioritization of strategies to tackle COVID-19 outbreak: a group-BWM based MCDM approach. Appl. Soft Comput. 107642 (2021). https://doi.org/10.1016/j.asoc.2021.107642
13. Varotsos, C.A., Krapivin, V.F., Xue, Y., Soldatov, V., Voronova, T.: COVID-19 pandemic decision support system for a population defense strategy and vaccination effectiveness. Saf. Sci. 142, 105370 (2021). https://doi.org/10.1016/j.ssci.2021.105370
14. Markovič, R., Šterk, M., Marhl, M., Perc, M., Gosak, M.: Socio-demographic and health factors drive the epidemic progression and should guide vaccination strategies for best COVID-19 containment. Results Phys. 26, 104433 (2021). https://doi.org/10.1016/j.rinp.2021.104433
15. Jentsch, P.C., Anand, M., Bauch, C.T.: Prioritising COVID-19 vaccination in changing social and epidemiological landscapes: a mathematical modelling study. Lancet Infect. Dis. (2021). https://doi.org/10.1016/S1473-3099(21)00057-8
16. Ilbahar, E., Karaşan, A., Cebi, S., Kahraman, C.: A novel approach to risk assessment for occupational health and safety using Pythagorean fuzzy AHP & fuzzy inference system. Saf. Sci. 103, 124–136 (2018). https://doi.org/10.1016/j.ssci.2017.10.025
17. Ilbahar, E., Kahraman, C.: Retail store performance measurement using a novel interval-valued Pythagorean fuzzy WASPAS method. J. Intell. Fuzzy Syst. 35(3), 3835–3846 (2018). https://doi.org/10.3233/JIFS-18730
18. Peng, X., Yang, Y.: Fundamental properties of interval-valued Pythagorean fuzzy aggregation operators. Int. J. Intell. Syst. 31(5), 444–487 (2016). https://doi.org/10.1002/int.21790

Implementation of MCDM Approaches for a Real-Life Location Selection Problem: A Case Study of Consumer Goods Sector

Oğuz Emir

Department of Industrial Engineering, İstanbul Kültür University, İstanbul, Turkey
[email protected]

Abstract. Selecting a suitable warehouse location elevates supply chain performance by reducing lead times and increasing response efficiency. Hence, location selection emerges as a strategic decision-making problem for competitive advantage. In this paper, a real decision-making problem of a multinational company operating in the consumer goods sector is examined. The company's Turkey office is responsible for operations in regions such as the Middle East, Africa, Central Asia, and Eastern Europe. A case study is handled by the logistics network planning team of the company to evaluate a new warehouse request coming from the regional sales team. For this decision problem, two different multi-criteria decision-making methods, TOPSIS and VIKOR, are employed to evaluate four alternative scenarios. In addition, the AHP technique is applied to determine the criteria weights. Both methods revealed the same alternative as the best decision.

Keywords: Multi criteria decision making · Analytical hierarchy process · TOPSIS · VIKOR · Location selection

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 77–88, 2022. https://doi.org/10.1007/978-3-030-90421-0_7

1 Introduction

Nowadays, competition is becoming more and more challenging, and companies have started to realize the need for collaboration and coordination with other organizations in order to survive. Therefore, the importance of supply chain activities gradually increases over time. The performance of the supply chain fundamentally depends on the harmony and coordination of the supply network structure, logistics integration, inventory control policy, and information flow [4]. In particular, the strategic location where a company maintains its operations always carries great importance [6]. The selection of an accurate location provides vital advantages in terms of efficiency and speed in the supply chain network; notably, the flow rate of materials in the chain increases with the reduction in delivery times. An accurate location selection creates an effective supply chain network and enables companies to become strategic players in a challenging market [7].

Literature research shows that many studies have deployed multi-criteria decision-making methods for location selection problems. For example, Tavakkoli-Moghaddam, Mousavi,


and Heydar (2011) focused on a location selection problem and implemented two commonly used multi-criteria decision-making methods, AHP and VIKOR, in an integrated manner. Following the determination of the criteria weights by AHP, the VIKOR method is performed to find the best plant location among alternative locations. Besides, a Delphi method is utilized to evaluate the criteria and alternatives, and an application is conducted to display the efficiency of the proposed methodology [8]. Özcan, Çelebi, and Esnaf (2011) selected four well-known multi-criteria decision-making methods (TOPSIS, AHP, Grey Theory, and ELECTRE) for warehouse location selection. Their case study is implemented in the retail sector, where high uncertainty and product variety exist; each method is used to select the best warehouse location among the alternatives, and a comparative analysis of the methods is reported [5]. Dey, Bairagi, Sarkar, and Sanyal (2016) investigated a warehouse location selection problem with three newly extended fuzzy multi-criteria decision-making methods that evaluate subjective and objective factors together: TOPSIS, MOORA, and SAW are integrated with fuzzy set theory to measure the subjective factors, and the efficiency of these methods for warehouse location selection is demonstrated in a supply chain case study [1]. Emeç and Akkaya (2018) concentrated on a stochastic decision-making process under uncertain conditions: for a retail company with vendors in many regions of Turkey, stochastic AHP calculates the criteria weights and a fuzzy VIKOR method ranks the alternatives to select a convenient warehouse [2]. Kutlu Gündoğdu and Kahraman (2019) contributed to the literature by developing three-dimensional spherical fuzzy sets. They also extended the classical VIKOR method to a spherical fuzzy VIKOR and handled a warehouse location selection problem to demonstrate the applicability and validity of the developed method [3].

Based on the literature review conducted for this study, the TOPSIS and VIKOR methods were selected. Like the location selection problem studied in this paper, real-life problems are usually characterized by non-measurable and conflicting criteria. Therefore, there may not be an optimal solution that meets all the criteria simultaneously, and the final solution is formed from a series of compromise solutions based on the decision makers' preferences; the compromise solution is the nearest and most convenient to the ideal solution. TOPSIS (The Technique for Order Preference by Similarity to an Ideal Solution) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) are frequently used in multi-criteria decision-making problems, and they are very suitable for real-life decision-making problems, as they consider the closeness to the ideal solution with the aim of reaching a compromise solution [9].

Hwang and Yoon developed the TOPSIS method in 1981. The main target is to determine the alternative that is nearest to the ideal solution and furthest from the inferior solution by using the compromise solution concept. The method ranks the alternatives, and the first alternative in the ranking is selected as the best. The VIKOR method is a multi-criteria decision-making technique developed by Opricovic and Tzeng in 2004 to reach compromise solutions in complex systems. The method is designed to provide a compromise solution considering the weights of alternatives under conflicting

Implementation of MCDM Approaches for a Real-Life Location Selection Problem

79

criteria. VIKOR handles the ranking of alternatives in the presence of conflicting criteria. The compromise ranking is achieved by comparing closeness values to the ideal alternative, under the assumption that each alternative is evaluated for every criterion. VIKOR follows a similar principle to TOPSIS in its concern for ideal distance and compromise solutions. The compromise solution achieved by VIKOR provides maximum group benefit for the majority and minimum regret for dissenters [9]. Furthermore, the criteria weights play a significant role in multi-criteria decision-making problems. Thus, another decision-making technique, AHP (Analytic Hierarchy Process), is utilized to determine the criteria weights in this study. The AHP methodology was developed by Thomas L. Saaty in the 1970s while he was working at the Wharton School of Business. Decision-makers compare the criteria using Saaty's 1–9 scale. The technique is practical and easy to use for ranking decision alternatives over all existing criteria [9].

The remainder of the paper is organized as follows: the problem handled by the company's logistics network planning team is introduced in Sect. 2. Then, criteria selection is discussed, and the weighting of criteria with the AHP method is explained. After that, the TOPSIS and VIKOR methods are introduced and their implementation steps are expounded. Lastly, the obtained results and some recommendations for future studies are shared.

2 Case Study

In this study, a real decision-making problem of a multinational company operating in the consumer goods sector is examined. The company's Turkey office is responsible for operations in regions such as the Middle East, Africa, Central Asia, and Eastern Europe. The logistics network planning team needs to evaluate a new warehouse request coming from the regional sales team. The location problem is solved by applying the TOPSIS and VIKOR methods respectively.

A. Problem Definition
In the current state, as shown in Fig. 1, materials produced in China, Spain, Poland, and Germany are sent to Germany and then from Germany to Turkey; some materials can also be sent directly to Turkey. Material flow to Kazakhstan is provided from Turkey, except for materials originating in Russia. Material handling is carried out in Turkey, after which materials are sent directly to the consumers' warehouses in Kazakhstan. Business partners in Kazakhstan have recently increased their annual sales: business plan values and key performance indicators came in above expectations, and the partners have requested a new warehouse in Kazakhstan to reach customers quickly and become strategically closer to them. Therefore, a decision-making analysis is expected from the logistics network team, evaluating the alternative scenarios against the determined criteria. Board members expect a location decision that yields the best scenario in terms of order management processes, material flow time, vehicle utilization, stock cycle rate, and, most importantly, costs. For this reason, six criteria are selected considering the literature review and the criteria specified by the company. The decision criteria for the case are determined as cost (Euro), lead time (day),


Fig. 1. Current state material flow

vehicle utilization (% volume loaded/vehicle capacity), stock cycle rate, information technology infrastructure, and order management respectively. Cost criterion includes transfer between countries, material handling, labelling operations, storage, loading, and unloading costs. Alternative scenarios of the case study are described below. Figures that show material flows in each scenario are illustrated respectively. Scenario 1: A new warehouse will be acquired in Kazakhstan, and all countries will manage their export operations themselves. For Kazakh vendors and consumers, materials will be transferred from the new warehouse. Figure 2 shows the material flow for scenario 1.

Fig. 2. Scenario 1 material flow

Scenario 2: A new warehouse will be acquired in Kazakhstan, and all countries will supply the Kazakhstan warehouse except for Turkey and Russia. Turkey and Russia will continue to deliver materials to consumers as in the current state. Other countries will send the materials to the warehouse in Germany. Then, material handling and storage operations will be conducted in Germany. Materials consolidated in Germany will be transferred to the new warehouse in Kazakhstan. Figure 3 shows the material flow of scenario 2.

Fig. 3. Scenario 2 material flow


Scenario 3a: A new warehouse will be acquired in Kazakhstan, and countries will send the materials to the warehouse in Germany and Turkey except for Russia. Materials delivered to Germany will be transferred to the Turkey warehouse for material handling and storage. Materials consolidated in Turkey and materials received from Russia will be collected in the Kazakhstan warehouse to supply vendors and consumers in Kazakhstan. Figure 4 shows the material flow of scenario 3a.

Fig. 4. Scenario 3A material flow

Scenario 3b: The last scenario is generated based on scenario 3a. In scenario 3a, final deliveries to Kazakh consumers and vendors are planned through the new warehouse. In scenario 3b, however, a new warehouse with a smaller capacity will be acquired: Russia and Turkey will transfer 76% of the volume to the Kazakhstan warehouse and ship the remaining 24% to vendors and consumers directly. Figure 5 shows the material flow of scenario 3b.

Fig. 5. Scenario 3B material flow

B. Criteria Weighting by AHP
Decision-makers compare the criteria using Saaty's 1–9 scale. Table 1 displays the AHP scale used in the pairwise comparisons.

Table 1. AHP scale

Intensity of importance   Definition
1                         Equal importance
3                         Moderate importance
5                         Essential importance
7                         Very strong importance
9                         Extreme importance
2, 4, 6, 8                Intermediate values


Steps followed in AHP are summarized below:
Step 1: Listing decision criteria and creating a hierarchical structure.
Step 2: Creating the pairwise comparison matrix. The logistics network planning team creates a pairwise comparison matrix using the AHP scale. Table 2 shows the pairwise comparison matrix for the criteria defined in the case study.

Table 2. Pairwise comparison matrix

Criteria             Costs  Lead time  Vehicle utilization  Stock cycle rate  IT infrastructure  Order management
Costs (€)            1.0    6.0        5.0                  3.0               6.0                0.5
Lead time            0.2    1.0        0.3                  0.3               2.0                0.2
Vehicle utilization  0.2    3.0        1.0                  0.5               4.0                0.3
Stock cycle rate     0.3    4.0        2.0                  1.0               5.0                0.3
IT infrastructure    0.2    0.5        0.3                  0.2               1.0                0.1
Order management     2.0    5.0        4.0                  3.0               7.0                1.0

Step 3: Creating the normalized pairwise comparison matrix.
Step 4: Calculating the consistency index and consistency ratio. In step 4, the consistency index and ratio are calculated to confirm the weights of each criterion:

Consistency index: CI = (λ_max − n) / (n − 1) = 0.06
Consistency ratio: CR = CI / RI = 0.05 = 5%

Criteria weights are accepted when the consistency ratio is below 10%.
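The weight and consistency computations of Steps 2–4 can be sketched in Python as follows. This is a minimal sketch using the geometric-mean approximation of the priority vector, a common stand-in for the exact eigenvector method; it reproduces the weights of Table 3 to two decimals and a consistency ratio of about 0.06, in line with the values reported above.

```python
import math

# Pairwise comparison matrix from Table 2 (criteria order: cost, lead time,
# vehicle utilization, stock cycle rate, IT infrastructure, order management).
A = [
    [1.0, 6.0, 5.0, 3.0, 6.0, 0.5],
    [0.2, 1.0, 0.3, 0.3, 2.0, 0.2],
    [0.2, 3.0, 1.0, 0.5, 4.0, 0.3],
    [0.3, 4.0, 2.0, 1.0, 5.0, 0.3],
    [0.2, 0.5, 0.3, 0.2, 1.0, 0.1],
    [2.0, 5.0, 4.0, 3.0, 7.0, 1.0],
]
n = len(A)

# Priority vector: geometric mean of each row, normalized to sum to 1.
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# lambda_max estimated as the mean of (A w)_i / w_i over all rows.
aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n

CI = (lambda_max - n) / (n - 1)   # consistency index
RI = 1.24                         # Saaty's random index for n = 6
CR = CI / RI                      # accepted when below 0.10
```

Order management comes out heaviest (≈ 0.36) and cost second (≈ 0.30), as in Table 3.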

Step 5: Determination of criteria weights. The calculations show that the criteria evaluation is consistent, and the weights can be used in the case study. Table 3 displays the weights of the six criteria obtained with AHP.

C. Implementation of TOPSIS for Decision
The main target of TOPSIS is to determine the alternative that is nearest to the ideal solution and farthest from the inferior solution, using the compromise solution concept. The method ranks the alternatives, and the first alternative in the ranking is selected as the best one. The steps of the TOPSIS method are listed below.


Table 3. Criteria weights

Criteria             Weight  Goal
Cost (€)             0.30    Minimum
Lead time (day)      0.05    Minimum
Vehicle utilization  0.10    Maximum
Stock cycle rate     0.15    Minimum
IT infrastructure    0.04    Maximum
Order management     0.36    Maximum

Step 1: Creating the decision matrix.
A = {A_k | k = 1, …, n}, alternatives
C = {C_j | j = 1, …, m}, criteria
w = {w_j | j = 1, …, m}, weights
X = {x_kj | k = 1, …, n; j = 1, …, m}, performance values

If the performance value of an alternative cannot be measured directly, the 5-point scale shown in Table 4 can be used. The performance values for information technology infrastructure and order management are assigned with this scale.

Table 4. 5-point scale

Very poor  1
Poor       2
Average    3
Good       4
Excellent  5

Step 2: Creating the normalized matrix

r_kj(x) = x_kj / sqrt(Σ_{k=1}^{n} x_kj²),  k = 1, …, n; j = 1, …, m

Step 3: Creating the weighted normalized matrix

v_kj(x) = w_j r_kj(x),  k = 1, …, n; j = 1, …, m,  with Σ_{j=1}^{m} w_j = 1

Alternatively, a linear scale transformation can be used:

r_kj = (x_kj − x_j⁻) / (x_j* − x_j⁻),  benefit attribute


r_kj = (x_j⁻ − x_kj) / (x_j⁻ − x_j*),  cost attribute

The best performance value is denoted by "*" and the worst by "⁻". Table 5 shows the decision matrix with the performance values of the alternative scenarios.

Table 5. Decision matrix

Alternatives  Cost (€)  Lead time (day)  Vehicle utilization  Stock cycle rate  IT infrastructure  Order management
Goal          Minimum   Minimum          Maximum              Minimum           Maximum            Maximum
Weights       0.30      0.05             0.10                 0.15              0.04               0.36
Current       2645000   32               1.00                 2.32              3                  5
Scenario 1    2900000   40               0.20                 2.50              5                  3
Scenario 2    2657000   40               0.20                 3.30              4                  4
Scenario 3.a  3056000   25               0.05                 4.60              3                  1
Scenario 3.b  2894000   35               0.75                 3.00              2                  2

Step 4: Determining the positive (PIS) and negative (NIS) ideal solutions

PIS = A⁺ = {v_1⁺(x), …, v_j⁺(x), …, v_m⁺(x)} = {(max_k v_kj | j ∈ J₁), (min_k v_kj | j ∈ J₂) | k = 1, …, n}

NIS = A⁻ = {v_1⁻(x), …, v_j⁻(x), …, v_m⁻(x)} = {(min_k v_kj | j ∈ J₁), (max_k v_kj | j ∈ J₂) | k = 1, …, n}

where J₁ is the set of benefit attributes and J₂ the set of cost attributes.

Step 5: Calculating the Euclidean distances of the performance values to the PIS and NIS

D_k* = sqrt( Σ_{j=1}^{m} (v_kj(x) − v_j⁺(x))² ),  k = 1, …, n

D_k⁻ = sqrt( Σ_{j=1}^{m} (v_kj(x) − v_j⁻(x))² ),  k = 1, …, n


Step 6: Calculating the performance score

C_k* = D_k⁻ / (D_k* + D_k⁻),  k = 1, …, n

Step 7: Ranking the alternatives and finalizing the decision. Table 6 shows the alternatives ranked by performance score in descending order.

Table 6. Performance scores and rankings of alternatives

Alternatives  Performance score  Ranking
Current       0.955              1
Scenario 2    0.644              2
Scenario 1    0.481              3
Scenario 3.b  0.353              4
Scenario 3.a  0.049              5

As a result of the TOPSIS method, the current scenario is ranked as the best alternative.

D. Implementation of VIKOR for Decision
VIKOR follows a similar principle to TOPSIS in its concern for ideal distance and compromise solutions. The compromise solution achieved by VIKOR provides maximum group benefit for the majority and minimum regret for dissenters. The steps of the VIKOR method are given below.

Step 1: Creating the decision matrix and determining the benefit/cost attributes
A = {A_k | k = 1, …, n}, alternatives
C = {C_j | j = 1, …, m}, criteria
w = {w_j | j = 1, …, m}, weights
X = {x_kj | k = 1, …, n; j = 1, …, m}, performance values

f_j* = max_k x_kj and f_j⁻ = min_k x_kj,  benefit attribute
f_j* = min_k x_kj and f_j⁻ = max_k x_kj,  cost attribute

Step 2: Applying normalization

r_kj = (f_j* − x_kj) / (f_j* − f_j⁻)

Step 3: Creating the weighted normalized matrix

v_kj = w_j r_kj


Step 4: Calculating the maximum group utility S_k

S_k = Σ_{j=1}^{m} v_kj,  S* = min_k S_k,  S⁻ = max_k S_k

Step 5: Calculating the minimum individual regret R_k

R_k = max_j v_kj,  R* = min_k R_k,  R⁻ = max_k R_k

Step 6: Calculating Q_k for each alternative

Q_k = v (S_k − S*) / (S⁻ − S*) + (1 − v) (R_k − R*) / (R⁻ − R*)

Step 7: The obtained S_k, R_k, and Q_k values are ranked in ascending order. The alternative with the minimum Q_k is identified as the best option among the existing alternatives. Table 7 gives the S_k, R_k, and Q_k values and rankings for each alternative scenario.

Table 7. S_k, R_k and Q_k values and rankings

Alternatives  S_k   Ranking  R_k    Ranking  Q_k   Ranking
Current       0.05  1        0.024  1        0.00  1
Scenario 1    0.52  3        0.185  3        0.50  3
Scenario 2    0.31  2        0.090  2        0.25  2
Scenario 3.a  0.94  5        0.360  5        1.00  5
Scenario 3.b  0.59  4        0.270  4        0.67  4

Step 8: Two conditions should be met to validate the results; the alternative with the minimum Q_k qualifies as the best alternative only when both conditions are met. The conditions can be expressed as follows.

Condition 1 (C1) – (Acceptable Advantage): Q(P2 ) − Q(P1 ) ≥ D(Q)

D(Q) = 1 / (j − 1) = 1 / (5 − 1) = 0.25

Q(P₂) − Q(P₁) ≥ D(Q): 0.25 − 0 ≥ 0.25

Based on the equation above, the first condition of acceptable advantage is satisfied.

Condition 2 (C2) – (Acceptable Stability in Decision Making): According to condition 2, the alternative with the best Q_k value should also have the best score in at least one of the S_k or R_k values. If one of the conditions is not met, a compromise solution set is suggested instead:

– If condition 2 is not satisfied, alternatives P₁ and P₂ are proposed together.
– If condition 1 is not satisfied, alternatives P₁, P₂, …, P_M are proposed, where P_M is the last alternative satisfying Q(P_M) − Q(P₁) < D(Q).
– Sorting is then performed over the compromise solution set; the best alternative has the minimum value among all alternatives.

In the case study, both conditions 1 and 2 are satisfied: the current scenario has the best Q_k value, and its S_k and R_k values are likewise ranked first. Hence, the current scenario is the highest-ranked alternative selected by VIKOR. The ranking of the alternatives is: Current scenario > Scenario 2 > Scenario 1 > Scenario 3.b > Scenario 3.a. Both TOPSIS and VIKOR thus select the current scenario as the best option in the case study, and it is reported to the board members that maintaining operations as in the current scenario would be appropriate.
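The VIKOR steps above can be sketched in Python over the same decision matrix used for TOPSIS. This is a minimal sketch with the usual consensus weight v = 0.5; it reproduces the ranking of Table 7, with small deviations (e.g. Q ≈ 0.24 rather than 0.25 for scenario 2) attributable to rounding in the published table.

```python
# Decision matrix, weights, and goals from Tables 3 and 5
# (rows: current scenario, scenarios 1, 2, 3.a, 3.b).
X = [
    [2645000, 32, 1.00, 2.32, 3, 5],
    [2900000, 40, 0.20, 2.50, 5, 3],
    [2657000, 40, 0.20, 3.30, 4, 4],
    [3056000, 25, 0.05, 4.60, 3, 1],
    [2894000, 35, 0.75, 3.00, 2, 2],
]
w = [0.30, 0.05, 0.10, 0.15, 0.04, 0.36]
benefit = [False, False, True, False, True, True]
names = ["Current", "Scenario 1", "Scenario 2", "Scenario 3.a", "Scenario 3.b"]
n, m = len(X), len(w)

# f* (best) and f- (worst) per criterion; the single regret formula
# (f* - x) / (f* - f-) then covers both benefit and cost attributes.
f_best = [max(X[k][j] for k in range(n)) if benefit[j]
          else min(X[k][j] for k in range(n)) for j in range(m)]
f_worst = [min(X[k][j] for k in range(n)) if benefit[j]
           else max(X[k][j] for k in range(n)) for j in range(m)]
V = [[w[j] * (f_best[j] - X[k][j]) / (f_best[j] - f_worst[j])
      for j in range(m)] for k in range(n)]

S = [sum(row) for row in V]   # group utility S_k
R = [max(row) for row in V]   # individual regret R_k
v = 0.5                       # weight of the "majority rule" strategy
Q = [v * (S[k] - min(S)) / (max(S) - min(S))
     + (1 - v) * (R[k] - min(R)) / (max(R) - min(R)) for k in range(n)]

order = sorted(range(n), key=lambda k: Q[k])  # ascending Q: best first
DQ = 1 / (n - 1)                              # acceptable-advantage threshold D(Q)
c2 = S[order[0]] == min(S) or R[order[0]] == min(R)  # acceptable stability
```

The current scenario again comes out first with Q = 0, and it also has the best S and R values, so the stability condition holds.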

3 Conclusions

Location selection has recently emerged as a key decision for companies seeking a competitive advantage in the supply chain. A suitable location accelerates supply chain activities and increases efficiency; it also reduces lead times significantly and makes companies more responsive to customer requests. In location selection problems with multiple conflicting criteria, multi-criteria decision-making methods are frequently preferred.

In this paper, a real case study of a multinational company operating in the consumer goods sector is handled. The Kazakhstan sales team has fulfilled its objectives and increased sales significantly in recent years. Business partners have therefore requested a new warehouse facility in Kazakhstan to meet customer demand rapidly and expand the customer portfolio in the region. In line with this request,


board members asked the logistics network planning team for a decision analysis report assessing all alternative scenarios. In this context, six criteria are identified for the scenario analysis based on company objectives and the literature review. The criteria weights are then determined from the opinions of the experts in the team using the AHP method. After specifying the performance values of the alternatives in each scenario, the TOPSIS and VIKOR methods are applied respectively. No significant difference is observed between the outputs of the two methods: the current state is ranked as the best alternative. It is therefore decided not to open a new warehouse in Kazakhstan and to maintain operations as in the current state.

For future studies, the criteria used in similar cases can be extended; for instance, expected sales or demand volume statistics, the available competent workforce, and compliance with national legislation can be included to evaluate the problem from different perspectives. Additionally, different multi-criteria decision-making methods or fuzzy techniques can be implemented to expand the scope.

Acknowledgment. The author would like to thank Elif Nur Acar for information sharing and support, and especially Asst. Prof. Zeynep Gergin for her precious suggestions and useful critiques.

References
1. Dey, B., Bairagi, B., Sarkar, B., Sanyal, S.K.: Warehouse location selection by fuzzy multi-criteria decision making methodologies based on subjective and objective criteria. Int. J. Manag. Sci. Eng. Manag. 11(4), 262–278 (2016)
2. Emeç, Ş., Akkaya, G.: Stochastic AHP and fuzzy VIKOR approach for warehouse location selection problem. J. Enterp. Inf. Manag. 31(6), 950–962 (2018)
3. Kutlu Gündoğdu, F., Kahraman, C.: A novel VIKOR method using spherical fuzzy sets and its application to warehouse site selection. J. Intell. Fuzzy Syst. 37(1), 1197–1211 (2019)
4. Mehmeti, G., Musabelliu, B., Xhoxhi, O.: The review of factors that influence the supply chain performance. Acad. J. Interdisc. Stud. 5(2), 181–186 (2016)
5. Özcan, T., Çelebi, N., Esnaf, Ş.: Comparative analysis of multi-criteria decision making methodologies and implementation of a warehouse location selection problem. Expert Syst. Appl. 38(8), 9773–9779 (2011)
6. Simchi-Levi, D., Kaminsky, P., Simchi-Levi, E.: Designing and Managing the Supply Chain: Concepts, Strategies and Case Studies. McGraw-Hill, New York (2003)
7. Singh, R., Chaudhary, N., Saxena, N.: Selection of warehouse location for a global supply chain: a case study. IIMB Manag. Rev. 30(4), 343–356 (2018)
8. Tavakkoli-Moghaddam, R., Mousavi, S., Heydar, M.: An integrated AHP-VIKOR methodology for plant location selection. Int. J. Eng. Trans. B 24(2), 127–137 (2011)
9. Tzeng, G.H., Huang, J.J.: Multiple Attribute Decision Making: Methods and Applications, 1st edn. Chapman and Hall/CRC, New York (2011)

Energy Management

Applications and Expectations of Fuel Cells and Lithium Ion Batteries

Feyza Zengin1, Emin Okumuş2, M. Nurullah Ateş3, and Bahadır Tunaboylu1(B)

1 Metallurgical and Materials Engineering Department, Marmara University, Istanbul, Turkey
[email protected], [email protected]
2 Energy Institute, TUBITAK, Kocaeli, Turkey
[email protected]
3 RUTE, TUBITAK, Kocaeli, Turkey
[email protected]

Abstract. Energy is a substantial daily need for human beings, and it is still generated largely from fossil fuels. However, fossil fuels are being depleted, their prices fluctuate, and they have an adverse effect on the environment. Leading countries are therefore shifting from fossil fuels to renewable sources such as wind, solar, and biomass energy. Among these alternatives, hydrogen energy is one of the strongest candidates to mitigate greenhouse gas emissions and climate change. Another issue is the storage of the generated energy: the increasing demand for renewable energy is a driving force for the development of more lithium ion batteries (LIBs) and new battery chemistries. In this article, batteries and fuel cells are compared in various aspects such as application areas and cost. The aim of this paper is to give comprehensive information for a better understanding of the future of energy storage applications. According to the literature, the future of energy storage systems will be strongly affected by developments in storage materials and systems.

Keywords: Fuel cell · Lithium · Battery · Hydrogen · Energy · Electricity · Electric vehicle · Fuel cell vehicle

1 Introduction

Fossil fuels are used as an energy source in versatile applications. Beyond their rising prices, they harm the environment, and for these reasons many countries are trying to supply more energy from renewable sources such as wind, solar, and biomass energy. In addition to generating energy from renewable sources, storing this energy is also an important issue, because the sun does not shine all the time and the wind does not blow continuously.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 91–106, 2022. https://doi.org/10.1007/978-3-030-90421-0_8

Among the renewable options, hydrogen energy is one of the most pursued candidates to solve problems such as greenhouse gas emissions, climate change, and energy shortage [1, 2]. Compared with internal combustion engine systems in particular, fuel cells are

92

F. Zengin et al.

generating energy with high efficiency and lower greenhouse gas emissions. During the energy generation process of fuel cells (FCs), hydrogen is used as the reactant, which makes FC technology one of the most environmentally clean and noiseless systems [3]. Fuel cells can be an efficient choice to meet the electricity needs of energy-deprived areas that have no access to the public grid or where transmission and wiring are expensive [4]. FCs are used in versatile application areas from kW- to MW-scale in terms of energy need, such as mobile power systems and combined heat and power (CHP) systems.

Li-ion batteries are found in every corner of our daily lives; they are frequently utilized in portable electronics from cell phones to laptops. With the effect of global warming, the world is trying to utilize renewable energy sources to reduce CO2 emissions, and the increasing demand for renewable energy is driving the development of new battery chemistries. With the growth of the electric vehicle market, the utilization of batteries has increased dramatically: it is reported that 10,000,000 EVs were sold in 2017 [5], and 160,000 MWh of batteries were sold in 2018.

In this article, fuel cells and lithium ion batteries are compared in various aspects. Detailed information is supplied on the working principles, cost analyses, usage areas, and efficiencies of fuel cells and Li-ion batteries.

1.1 Working Principles and Types of Fuel Cells

Hydrogen and hydrocarbon fuels have higher specific energy densities than batteries, as given in Table 1. The working principle of FCs depends on the electrochemical reaction of ions: fuel cells convert the chemical energy of fuels (hydrogen, methanol, ethanol, hydrocarbons, etc.) directly to electrical energy via electrochemical reactions, and, in contrast to batteries, the cell materials themselves are not consumed.
FCs can produce electricity continuously as long as fuel and oxygen are supplied to the system.

Table 1. Energy densities of hydrogen and hydrocarbon fuels [6]

Fuel                   Specific energy (Wh/kg)  Energy density (Wh/L)
H2, 700 bar (25 °C)    33 314                   1 250
H2, liquid (−253 °C)   33 314                   2 359
H2, 1 atm (25 °C)      33 314                   3
NG                     14 889                   10
LNG (−160 °C)          14 889                   6 167
LPG propane            13 778                   7 028
Gasoline               12 888                   9 500
Ethanol                8 333                    6 667
Methanol               5 472                    4 333
NaBH4                  9 285                    –
Ni-MH battery          114                      140–406
Li-ion battery         100–243                  250–731
A fuel cell consists of four main components: anode, cathode, electrolyte, and external circuit [7]. Fuel is supplied to the anode and oxidant to the cathode; the oxidant is typically pure oxygen or air, although other oxidizing gases such as halogens are possible. The overall electrochemical reaction of a hydrogen fuel cell is:

Hydrogen + Oxygen → Water + Energy  (1)

Thermodynamically, this reaction is the combination of two half-cell reactions that take place independently. At the anode, hydrogen molecules are ionized:

H2 → 2H⁺ + 2e⁻  (2)

At the cathode, oxygen reacts with the protons and electrons, generating water and energy:

1/2 O2 + 2H⁺ + 2e⁻ → H2O  (3)

Depending on the nature of the electrolyte, ions (or protons) move through the electrolyte, which is an insulator for electrons, while the electrons are transported through the external circuit, generating electricity.

Fuel cell technologies are named after the structure of their electrolytes. The main types range from proton-exchange membrane fuel cells (PEMFCs), alkaline fuel cells (AFCs), and phosphoric acid fuel cells (PAFCs) to high-temperature molten-carbonate fuel cells (MCFCs) and solid-oxide fuel cells (SOFCs). Properties such as the operating temperature depend on the electrolyte. As the working temperature range and system complexity increase, the efficiency of the fuel cell increases (owing to the amount of heat generated) while the material cost decreases. On the other hand, AFCs, PAFCs, and MCFCs contain highly corrosive electrolytes, which limits their commercialization options. The most commercially promising type is the PEM fuel cell. It is used in three major areas: transportation, stationary, and portable power generation. The generated power ranges from 5 kW to 1 MW in transportation applications, from 1 to 50 MW in stationary applications, and from 100 W to 1 kW in telecommunication applications [8].

94

F. Zengin et al.

1.2 Working Principle and Types of Li-ion Batteries

Batteries are devices that convert chemical energy directly to electrical energy through redox reactions. The main parts of a battery are the anode, the cathode, the separator, and two current collectors (Fig. 1). Li-ions are stored in the anode and cathode electrodes. During charging, Li-ions travel from the cathode to the anode; during discharging, they move back to the cathode through the electrolyte, which generates an electrical current [9]. The function of the electrolyte is to enable ion transfer between the positive and negative electrodes while minimizing side reactions with the Li-ions. Electrolytes generally consist of organic solvents with lithium salts (hexafluorophosphate, etc.) that increase the ionic conductivity. The separator prevents electron transport between the electrodes. The active material, binder, and conductive agent are cast on the current collectors, which are made of Al and Cu. The conductive agent increases the electrical conductivity, while the binder improves the mechanical strength of the electrode and helps electrons circulate to the current collector.

The electrochemical performance of a battery is mostly determined by the chemistry of the anode and cathode active materials (Table 2). Li-ion batteries are categorized according to their cathode chemistries. With the development of EVs, the need for higher energy density and lower mass has risen dramatically [10]. To obtain high power density batteries, research has shifted from intercalation materials to high-capacity active materials such as alloy-type materials [11], conversion-type materials [12], and sulfur cathodes [13]. Due to its high initial coulombic efficiency, long-term cycle stability, non-toxicity, and low cost, graphite is one of the most frequently utilized anode materials. The cathode is one of the most expensive parts of the battery, and several cathode chemistries are available; the cathode of a LIB (Fig. 1) has an important effect on the energy density, specific energy, and overall cost of the battery.

Table 2. Cathode materials and their properties

Cathode material | Main properties | Current main application areas
Lithium cobalt oxide (LCO) | High energy density and cycling stability, high output voltage, high overall cost because of cobalt | Portable electronics
Lithium nickel cobalt manganese oxide (NMC) | High energy density and capacity, high output voltage; nickel increases capacity, manganese improves stability | Electric vehicles, portable electronics
Lithium nickel cobalt aluminium oxide (NCA) | Higher energy density than NMC, high capacity, not as safe as NMC | Electric vehicles, portable electronics
Lithium manganese oxide spinel (LMO) | Moderate capacity and energy density, good safety, short lifetime | Power tools, medical devices
Lithium iron phosphate (LFP) | Higher thermal and chemical stability, lower energy density and capacity than NMC, longer cyclability, cheap and non-toxic materials | Stationary, electric vehicles, power tools

Fig. 1. Schematic representation of a Li-ion battery (LIB)

2 Comparison of Fuel Cells and LIBs

2.1 Applications of Batteries and Fuel Cells

2.1.1 Trains
Apart from cars, fuel cells are utilized in various applications such as passenger trains. One example is the Coradia iLint, in which the diesel traction system was replaced by an electrical traction system whose primary energy supply is a hydrogen fuel cell. The fuel consumption of the train, which has a capacity of 300 people, is 0.25 kg/km, and the power of the fuel cell is 400 kW. The Coradia iLint reached 130,000 km in Germany in 2019; since 2018, two Coradia iLint trains have been operating at Eisenbahnen Verkehrsbetribe Elbe-Weser (EVE) in Lower Saxony. In 2018, the UK announced that all diesel trains will be replaced by 2040, while the French operator SNCF will replace its diesel trains by 2035. SNCF has also ordered its first Alstom hydrogen units, which deliver 50% more power than the iLint, for the Auxerre and Auvergne-Rhône-Alpes regions of France [14].

Fuel cell and hydrogen technology is one of the leading candidates to replace diesel combustion engines. Three reports have been published by the Shift2Rail Joint Undertaking (S2R-JU) and the Fuel Cells and Hydrogen Joint Undertaking (FCH2-JU) about the use of


Fuel Cells and Hydrogen in the Railway applications. In the first report examines hydrogen rail applications in various aspects such as market potential, state of art. Ten case studies which includes multiple units, shunter locomotives, freight locomotives applications are examined. And the last one is about technical and non-technical barriers of the applications. [15]. The main conclusions and outcomes of this study was summarized in Table 3. The S2R-JU and FCH2-JU study results were being developed in close collaboration with an industrial Advisory Board (AB) that has expertise in all aspects of the fuel cell, hydrogen and rail value chain. In total, the AB was consisting of 27 members, of which four are rail OEMs, eight rail operators, one train and locomotive lessor, seven fuel cell suppliers, and seven hydrogen infrastructure suppliers. Three rail applications were the focus of the study: Multiple Units, Shunters and Mainline Locomotives. The CO2 saving and Total CAPEX of each train set can be seen in Table 3. 2.1.2 Stationary Applications Large-size utility scale storage systems and behind the-meter applications are categorized in stationary energy storage systems. According to estimations a sharp increase is expected from current capacity estimations which is 11 GWh to 421 GWh of cumulative capacity by 2030 all over the world [16, 17]. According to International Energy Agency (IEA) total capacity in stationary applications will be 560–640 GWh by 2030, and in 2040 it will be increased to 1.3–2.2 TWh [18]. In 2017, most of the installed capacity in stationary energy storage applications were held by pumped hydro power which is nearly 98% while electrochemical energy storage systems 1.6 GW and in electrochemical storage systems Li-ion batteries are the most widely used category which has 1.3 GW of all electrochemical energy storage systems [19]. 
According to estimations, Li-ion battery deployment in stationary storage applications will reach 103 GWh in 2025 and 777 GWh in 2035 [20]. Stationary fuel cells are designed to supply power at a fixed location; combined heat and power (CHP) and backup power systems are the two most basic categories. The use of hydrogen in stationary applications has advanced since the ENE-FARM project of the 1990s, developed by the Japanese government to supply electric and thermal energy to cities and individual buildings using hydrogen. Deployment of large-scale stationary fuel cells has been increasing steadily since 2014: 410 MW of fuel cells were deployed in 2014, rising to 867 MW globally in 2018 [22]. According to future scenarios, large CHP deployment will reach 14 GW in 2024 and 45 GW in 2030; backup power deployment will grow from 140 MW in 2021 to 400 MW in 2030; and micro-CHP applications of 1–5 kW capacity will reach 1.5 GW in 2024 and 5.7 GW in 2030 [23]. Comparing the stationary applications implemented in 2018, PEMFC is far ahead of the other fuel cell types with 589.1 MW of capacity, while PAFC, SOFC, MCFC, AFC and DMFC account for 97.3, 91, 25.2, 0.1 and 0.4 MW, respectively [24].

Table 3. The main conclusions and outcomes of the S2R-JU and FCH2-JU study

[The table covers ten case studies: multiple units in Montrejeau (France), Aragon (Spain), Groningen (Netherlands) and Brasov (Romania); shunters in Hamburg (Germany), Riga (Latvia) and Gdansk (Poland); and mainline locomotives in Tallinn (Estonia), Frankfurt (Germany) and Kalmar (Sweden). For each case it reports rolling stock, fuel cell size (kW), battery capacity (kWh), hydrogen pressure (bar, 350 bar in all cases), H2 tank (kg), H2 consumption (kg/km and kg/day), CO2 saving (tonnes/yr), load capacity (t), seats and track length (km), and, in its continuation, average power (kW), tractive effort (kN), route (km), diesel and FCH TOC (€/km), catenary and battery costs (€/km), CAPEX per train set and total CAPEX (million €).]

Applications and Expectations of Fuel Cells and Lithium Ion Batteries

99

2.1.3 EV Applications
In 2019, 2.1 million EVs were sold, requiring approximately 100 GWh of batteries. Even during the COVID-19 pandemic, the world auto industry declared that it had met its EV production targets. According to estimations, 1000 GWh of battery production capacity will be required in 2025; LIBs will be deployed in 322 million EVs by 2035, rising to 562 million EVs by 2040 [25]. Key players have begun to establish giga-factories to meet the coming demand; some of these investments are given in Table 4. The charging time of an EV depends on the charging capacity of the station (2.8 kW to 50 kW), and charging an EV battery at high power decreases battery lifetime. Specifications of some EVs are shown in Table 5. In 2018, 5,800 FCEVs were sold, increasing to 12,350 vehicles in 2019; specifications of some of them are summarized in Table 6. These numbers indicate that, driven by developments in Asia, the FCEV market has begun to grow. Thanks to policies promoting fuel cell electric buses, China accounted for 97% (4,300) of FCEBs and 98% (1,800) of FCETrucks worldwide, while Korea announced in 2019 that it aims to increase its manufacturing capacity to 6.2 million FCEV cars, 40,000 FCEBs and 30,000 trucks [27]. In terms of capacity, 84 GW of FCEV fuel cells will be deployed in 2024 and 560 GW in 2030 [23]. When the efficiency of these energy sources is examined, the hydrogen car loses a large amount of energy between production of the energy source and end use in the car: nearly 45% of the energy is lost during electrolysis, with further losses during compression and liquefaction, transportation and filling, and energy conversion in the car (about 20% in total), represented in Fig. 2 as stages b, c, d, e and f, respectively. Overall, the fuel cell vehicle has an energy efficiency of 25–35%.
On the other hand, the situation for batteries is much better: the battery loses only about 20% of its energy across transmission and storage and charging of the car battery, denoted as stages b, c and d in the same figure. Another important parameter in EV applications is vehicle downtime and refilling time. FC vehicles can be refilled within minutes, which lets them compete with internal combustion engine cars in a way other power supply types cannot.

Table 4. Giga-factory investments across Europe

Company   | Year | Capacity        | City
Northvolt | 2024 | 16 GWh–24 GWh   | Salzgitter
Northvolt | 2021 | 32 GWh–40 GWh   | Skellefteå
CATL      | 2022 | 14 GWh–100 GWh  | Erfurt
Farasis   | 2022 | 16 GWh          | Bitterfeld-Wolfen
Tesla     | 202X | 40 GWh          | Berlin-Grünheide
BYD       | 202X | 24 GWh          | EU

Table 5. Specifications of some electric vehicles [26]

Brand      | Model                     | Battery size (kWh) | EPA range (km) | 0–97 km/h (s) | Top speed (km/h) | Peak power (kW) | EPA energy cons. (Wh/km)
Audi       | e-tron                    | 95   | 357 | 5.5 | 200 | 300 | 268
BMW        | i3s                       | 42.2 | 246 | 6.8 | 161 | 135 | 185
Ford       | Mustang Mach-E Route 1 ER | 98.9 | 491 | 6.1 | –   | 216 | 207
Hyundai    | Kona Electric             | 64   | 415 | 7.9 | 167 | 150 | 174
Kia        | Niro EV                   | 64   | 385 | 7.5 | 167 | 150 | 187
MINI       | Cooper SE                 | 32.6 | 183 | 6.9 | 150 | 135 | 190
Nissan     | Leaf S                    | 40   | 240 | 7.4 | 145 | 110 | 189
Porsche    | Taycan                    | 79.2 | 322 | 5.1 | 230 | 300 | 265
Tesla      | Model 3 Standard Plus     | 60   | 423 | 5.3 | 225 | –   | 147
Tesla      | Model S Long Range        | 100  | 652 | 3.1 | 250 | 499 | 175
Volvo      | XC40 Recharge             | 78   | 335 | 4.7 | –   | 300 | 265
Volkswagen | ID4 Pro                   | 82   | 419 | –   | –   | 150 | 211

Fig. 2. Efficiency of fuel cell and Li-ion battery in EV applications
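The stage-by-stage losses just described multiply together; a minimal sketch of that chain follows. The individual stage efficiencies below are illustrative assumptions chosen to land in the 25–35% and roughly 80% ranges quoted in the text, not figures from the paper:

```python
from functools import reduce

def chain_efficiency(stage_efficiencies):
    """Overall efficiency is the product of the per-stage efficiencies."""
    return reduce(lambda acc, eta: acc * eta, stage_efficiencies, 1.0)

# Hydrogen path (assumed stages): electrolysis, compression/liquefaction,
# transport + filling, fuel cell conversion, drivetrain.
h2_path = [0.55, 0.87, 0.95, 0.60, 0.95]

# Battery path (assumed stages): transmission/charging, storage,
# discharge, drivetrain.
battery_path = [0.95, 0.95, 0.94, 0.95]

print(f"fuel cell vehicle: {chain_efficiency(h2_path):.0%}")      # ~26%
print(f"battery vehicle:   {chain_efficiency(battery_path):.0%}") # ~81%
```

Because the losses compound multiplicatively, even modest per-stage inefficiencies in the hydrogen path push the overall figure far below the battery path.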


2.2 Specific Energies
Whether in liquid or compressed form, hydrogen has a specific energy of 33,314 Wh/kg, while Li-ion batteries offer 243 Wh/kg [6]. Vehicle weight matters greatly, especially in the automotive market, so the energy densities of these technologies are critical for weight savings. Fuel cell vehicles can cover longer distances than their battery counterparts, and hydrogen tanks are lighter and smaller than battery packs. To increase the range of a car, a fuel cell vehicle gains range in roughly a 1:1 ratio with added energy storage, which is not the case for a battery electric car: added battery capacity increases range at less than a 1:1 ratio, because the additional pack weight reduces the vehicle's range. Moreover, depleted batteries waiting to be charged must be carried all the way to the charging station, adding dead weight compared with a fuel cell.

Table 6. Specifications of fuel cell vehicles [28]

Specification                          | Hyundai Nexo | Hyundai Tucson ix35 FCEV | Honda Clarity Fuel Cell | Toyota Mirai
Year                                   | 2018   | 2014   | 2016   | 2015
Motor (kW)                             | 120    | 100    | 130    | 113
Max speed (km/h)                       | 179    | 160    | 161    | 179
Range (km)                             | 570    | 426    | 589    | 502
Torque (Nm)                            | 394    | 300    | 300    | 335
Fuel economy (km/kg H2, city/highway)  | 93/85  | 77/81  | 109/106 | 106/106
Curb weight (kg)                       | 1811   | 1830   | 1875   | 1850
Number of cells (stack)                | 440    | 434    | –      | 370
Stack power (kW)                       | 95     | 100    | 100    | 114
Power density (kW/L) @0.6 V            | 3.1    | 1.7    | 3.1    | 3.1
Stack volume (liter)                   | 157    | 144    | 141    | 122
Tank pressure (bar)                    | 700    | 700    | 700    | 700
H2 stored (kg)                         | 6.33   | 5.64   | 5.00   | 5.00
Battery type                           | Li-ion | Li-ion | Li-ion | Ni-MH
Battery voltage (V DC)                 | 240    | 180    | 358    | 245
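The weight argument can be made concrete with the two specific energies quoted in Sect. 2.2 (33,314 Wh/kg for hydrogen, 243 Wh/kg for Li-ion). The onboard energy figure below and the omission of tank and pack overhead are simplifying assumptions for illustration:

```python
def carrier_mass_kg(energy_wh, specific_energy_wh_per_kg):
    """Mass of the bare energy carrier needed for a given onboard energy."""
    return energy_wh / specific_energy_wh_per_kg

onboard_wh = 100_000  # 100 kWh onboard, illustrative figure
h2_kg = carrier_mass_kg(onboard_wh, 33_314)  # hydrogen itself
cells_kg = carrier_mass_kg(onboard_wh, 243)  # Li-ion cells

print(f"hydrogen: {h2_kg:.1f} kg, Li-ion cells: {cells_kg:.0f} kg")
```

In practice the 700-bar tank adds far more mass than the hydrogen it holds, but the two orders of magnitude between the carriers are what drive the 1:1 versus sub-1:1 range scaling described above.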

2.3 Cost Analyses
Several parameters affect the cost of batteries: system boundaries, the type of application or service, and the battery chemistry. Intensive research and development on Li-ion batteries has driven down their total cost. In stationary applications, Li-ion cost fell from 1800–1900 €/kWh in 2010 to 1100–1700 €/kWh in 2015 [29, 30]. According to estimations, overall Li-ion battery costs in 2030 will be 175–406 €/kWh for energy-designated storage applications and 315–666 €/kWh for power-designated applications [21]. For EV batteries, according to BNEF, pack cost decreased to 170–215 €/kWh in 2017. Battery size, cell quality and cell format all affect the overall cost; for example, a cylindrical cell is 30% cheaper than a large prismatic electric vehicle cell [31, 32]. NMC is the most frequently used battery chemistry in EV applications; for example, the Nissan Leaf and BMW i3 use NMC. In electric buses, LFP leads: in China, LFP was used in 88% of applications in 2018. According to estimations, in 2028 NMC chemistry will take 42% of the market share and LFP 58% [33]. The total cost of a fuel cell vehicle stack is 45–50 $/kW and, according to the DOE (Department of Energy), will decrease to 30 $/kW by 2025. One important barrier to widespread deployment of fuel cells is the high cost of the electrolyzer: in 2020, the investment cost for an alkaline electrolyzer was around 750–800 USD per kW. Different countries have adopted electrolyzer production targets (Fig. 3) to decrease the overall cost of the process. With scaling-up activities such as establishing giga-factories for electrolyzers and increasing module sizes, the overall cost is expected to fall by 60% [34, 35].

Fig. 3. Electrolyser production targets of countries

2.4 Hydrogen Cost
Hydrogen is a clean fuel and will play a critical role in converting global energy systems to sustainable systems by 2050. According to the Hydrogen Council (2020), hydrogen utilization increased dramatically in 2020. Hydrogen can be obtained from renewable and non-renewable sources, but the cheapest H2 production routes are thermochemical and biochemical conversion. Hydrogen is used in various applications such as ammonia production (51%) and oil refining (31%). Most developed countries use H2 as a transportation fuel, and it is also used in fuel cells to produce electricity and water vapor. It has the potential to unlock 8% of global energy demand (GED) at a production cost of 2.50 USD/kg. According to estimations, the cost of H2 production will fall below 1.80 USD/kg, and hydrogen could cover up to 15% of GED by 2030. Depending on raw material costs and technologies, the cost of H2 varies between 0.8 and more than 4 USD/kg across regions (Fig. 4). The Hydrogen Council (2017) projects that H2 supply/demand will reach 10 EJ/year by 2050, with demand increasing by 5–10% every year, and that H2 will then cover 18% of GED. It is very clear that H2 will play a vital role in all energy-based sectors, due to its high energy density, low production cost and low carbon content [36].


Fig. 4. Cost of hydrogen production in different regions.

2.5 Raw Materials
The supply of primary raw materials is a critical issue for both Li-ion batteries and fuel cells. For lithium-ion batteries, Co, Li, C, Ni, Mn, Si, Cu and P are the most widely used elements, and in 2017 Co, C, Si and P were categorized as critical raw materials for the EU economy. Nearly half of the critically categorized raw materials are supplied by China; the second biggest supplier is the Democratic Republic of the Congo, where 59% of all Co is mined, a situation that has caused unbalanced prices over the years and Co shortages in the markets because of the country's unstable political conditions. The most important raw material for batteries is lithium. Australia, Chile, Argentina and China host 92% of all lithium mining activity, and four companies supply 88% of all lithium globally. Up to 2012, the amount of lithium mined exceeded market demand; since 2013, although lithium reserves exceed market demand, the markets have consistently experienced lithium shortages, which can be attributed to oligopolistic power in the lithium market [37]. According to some scenarios, production and processing of Li must increase tenfold to satisfy future demand as the usage of Li-ion batteries grows. Pt, Pd, Ru, Rh, Ni, Sr, Zr, Co, Al, Ag and Si are among the raw materials of fuel cells. The biggest suppliers of fuel cell raw materials are China (22%), South Africa (11%) and Russia (7%). Among these, Pt, Pd, Ru, Rh, Co and Si are categorized as critical raw materials by the EU; Pt, Rh, Pd and Ru are essential for fuel cells and are hardly substitutable, in addition to being expensive. The biggest suppliers of the critically categorized raw materials are China (nearly 30%), South Africa (17%) and Russia (14%) [38].
2.6 Manufacturing of LIBs and Fuel Cells
Most researchers concentrate on discovering and improving new materials rather than new manufacturing processes, because the major cost of fuel cell stacks still lies in the materials. Nonetheless, manufacturing cost is an important issue for fuel cells, and alternative manufacturing technologies have been developed. Different technologies are used to manufacture the different components of a fuel cell stack. For catalyst deposition, spray coating and screen printing are currently applied, while tape casting and nanostructured thin-film technologies are emerging thanks to their better product quality. Bipolar plates are manufactured by compression molding and spray coating, with additive manufacturing emerging as an alternative. End plates have been produced by sand casting and die casting, with stamping and welding emerging for their production. Current production technologies are expensive and involve little automation; higher automation levels require high investment and higher demand, which is why facilities cannot realize low costs in techno-economic analyses. The manufacturing of LIBs may be separated into five stages: mixing, coating/drying, slitting, vacuum drying and welding. Ultrasonic mixing and ball milling are used in the mixing stage; dry calendering and dry printing are among the drying technologies; and laser cutting, argon purging and laser welding are used in the remaining stages, respectively. Researchers concentrate most of their efforts on materials to further decrease cost and increase the energy density of lithium-ion batteries, which is the main reason research on manufacturing methods lags behind research on material development. The major contributions to manufacturing cost come from formation and aging (32.16%), coating and drying (14.96%) and enclosing (12.45%); the major contributors to production time are formation and aging, vacuum drying and slurry mixing; and the most energy-consuming steps are drying and solvent recovery (46.84%) and the dry room (29.37%) [39]. Innovations in these steps will have an important effect on the final cost of batteries.

3 Conclusion
When the energy efficiencies of fuel cells and batteries are compared, it is clear that a hydrogen car today consumes 2–3 times more energy than a battery car to cover the same distance. Because much energy is wasted along this chain, green energy must be spent more carefully in the future. Moreover, natural gas and coal are currently used as the main sources of hydrogen: nearly 70 million tonnes of hydrogen are produced each year, causing about 800 million tonnes of carbon dioxide (CO2) emissions [40]. The high costs of electrolyzers and infrastructure are the clearest barriers to the widespread use of hydrogen technologies, while batteries are more accessible for every application at low prices. However, lithium-ion battery technology is maturing in terms of research and development, and most of the big advances have already been accomplished. On the other hand, fuel cell trains are becoming more common day by day, and fuel cells are more compatible with the aviation and space industries in terms of operational security and specific energy. Once enough knowledge is gained to overcome the most important barriers of fuel cell technology, namely efficiency and cost, the storage systems of the future may well be based on fuel cell technology.

References
1. Wang, Y., Chen, K.S., Cho, S.C.: PEM Fuel Cells: Thermal and Water Management Fundamentals. Momentum Press, p. 386 (2013)


2. van Biert, L., Godjevac, M., Visser, K., Aravind, P.V.: A review of fuel cell systems for maritime applications. J. Pow. Sour. 64, 327–345 (2016)
3. Larrosa-Guerrero, A., Scott, K., Head, I.M., Mateo, F., Ginesta, A., Godinez, C.: Effect of temperature on the performance of microbial fuel cells. Fuel 12, 89–94 (2010)
4. Mekhilef, S., Saidur, R., Safari, A.: Comparative study of different fuel cell technologies. Renew. Sustain. Energy Rev. 16(1), 981–989 (2012)
5. Lie, J., Tanda, S., Liu, J.-C.: Subcritical water extraction of valuable metals from spent lithium-ion batteries. Molecules 25(9), 2166 (2012)
6. https://en.wikipedia.org/wiki/Energy_density
7. Hemmat Esfe, M., Afrand, M.: A review on fuel cell types and the application of nanofluid in their cooling. J. Therm. Anal. Calorim. 140(4), 1633–1654 (2019). https://doi.org/10.1007/s10973-019-08837-x
8. Lipman, T., Sperling, D.: Market concepts, competing technologies and cost challenges for automotive and stationary applications. Handbook of Fuel Cells (2010)
9. Chawla, N., Bharti, N., Singh, S.: Recent advances in non-flammable electrolytes for safer lithium-ion batteries. Batteries 5(1), 19 (2019)
10. Wu, J., Cao, Y., Zhao, H., Mao, J., Guo, Z.: The critical role of carbon in marrying silicon and graphite anodes for high-energy lithium-ion batteries. Carbon Energy 1(1), 57–76 (2019)
11. Seng, K.H., Park, M.-H., Guo, Z.P., Liu, H.K., Cho, J.: Self-assembled germanium/carbon nanostructures as high-power anode material for the lithium-ion battery. Angew. Chem. Int. Ed. 51(23), 5657–5661 (2012)
12. Ryu, J., Hong, D., Lee, H.-W., Park, S.: Practical considerations of Si-based anodes for lithium-ion battery applications. Nano Res. 10(12), 3970–4002 (2017). https://doi.org/10.1007/s12274-017-1692-2
13. Yun, J.H., Kim, J.-H., Kim, D.K., Lee, H.-W.: Suppressing polysulfide dissolution via cohesive forces by interwoven carbon nanofibers for high-areal-capacity lithium-sulfur batteries. Nano Lett. 18(1), 475–481 (2018)
14. The Fuel Cell Industry Review (2019). https://www.e4tech.com/news/2018-fuel-cell-industry-review-2019-the-year-of-the-gigawatt.php
15. https://shift2rail.org/publications/study-on-the-use-of-fuel-cells-and-hydrogen-in-the-railway-environment/
16. Ioannis, T., Dalius, T., Natalia, L.: Li-ion batteries for mobility and stationary storage applications. Report No.: JRC113360 (2018)
17. Stampatori, D., Raimondi, P.P., Noussan, M.: Li-ion batteries: a review of a key technology for transport decarbonization. Energies 13, 26–38 (2020)
18. International Energy Agency: World Energy Outlook 2019. IEA, Paris (2019)
19. Tsiropoulos, I., Tarvydas, D.: Li-ion batteries for mobility and stationary storage applications: scenarios for costs and market growth (2018)
20. Bloomberg New Energy Finance (BNEF): New Energy Outlook 2018 (2018)
21. Huisman, J., Ciuta, T., Bobba, S., Georgitzikis, K., Pennington, D.: Raw materials in the battery value chain - final content for the raw materials information system - strategic value chains - batteries section (2020)
22. Felseghi, R., Carcadea, E., Raboaca, M., Trufin, C., Filote, C.: Hydrogen fuel cell technology for the sustainable future of stationary applications. Energies 12, 4593 (2019)
23. Study on Value Chain and Manufacturing Competitiveness Analysis for Hydrogen and Fuel Cells Technologies, FCH contract 192. E4tech (UK) Ltd for FCH 2 JU, in partnership with Ecorys and Strategic Analysis Inc. (2019)
24. Felseghi, R.-A., Carcadea, E., Raboaca, M.S., Trufin, C.N., Filote, C.: Hydrogen fuel cell technology for the sustainable future of stationary applications. Energies 12(23), 4593 (2019)
25. BNEF: Long-Term Electric Vehicle Outlook 2018 (EVO 2018). Bloomberg New Energy Finance (2018)

26. https://insideevs.com/reviews/344001/compare-evs/
27. Hydrogen. International Energy Agency (IEA), Paris (2020)
28. https://www.sciencedirect.com/science/article/am/pii/S0960148119304446
29. Schmidt, O., Hawkes, A., Gambhir, A., Staffell, I.: The future cost of electrical energy storage based on experience rates. Nat. Energy 2(8), 17110 (2017)
30. Hocking, M., JK, P., Young, C.T., Begleiter, D.: Lithium 101, F.I.T.T. for investors: Welcome to the Lithium-ion Age. Deutsche Bank Market Research (2016)
31. IEA: Global EV Outlook 2018
32. Avicenne Energy: Battery market for Hybrid, Plug-in & Electric Vehicles (2018)
33. Adesanya-Aworinde, V.: Electric bus sector is game changer for battery market (2016)
34. IRENA: Renewable Capacity Statistics 2020. IRENA
35. Nel to slash cost of electrolysers by 75%. Recharge News (2021). www.rechargenews.com/transition/nel-to-slash-cost-ofelectrolysers-by-75-with-green-hydrogen-at-sameprice-as-fossil-h2-by-2025/2-1-949219
36. Kannah, R.Y., et al.: Techno-economic assessment of various hydrogen production methods - a review. Biores. Tech. 319, 124175 (2021)
37. Mo, J.Y., Jeon, W.: The impact of electric vehicle demand and battery recycling on price dynamics of lithium-ion battery cathode materials: a vector error correction model (VECM) analysis. Sustainability 10(8), 2870 (2018)
38. European Commission: Materials dependencies for dual-use technologies relevant to Europe's defence sector (2019)
39. Liu, Y., Zhang, R., Wang, J., Wan, Y.: Current and future lithium-ion battery manufacturing. iScience 24, 102332 (2021)
40. IEA: Batteries and hydrogen technology: keys for a clean energy future (2020)

Integration of Projects Enhancing Productivity Based on Energy Audit Data in Raw Milk Production Sector — a Case Study in Turkey

Aylin Duman Altan1(B) and Birol Kayışoğlu2

1 Industrial Engineering Department, Corlu Engineering Faculty, Tekirdag Namık Kemal University, Tekirdağ, Turkey
[email protected]
2 Biosystems Engineering Department, Agricultural Faculty, Tekirdag Namık Kemal University, Tekirdağ, Turkey
[email protected]

Abstract. The aim of this study is to establish the infrastructure of an energy management system, with the integration of PEPs (projects enhancing productivity), in a raw milk production enterprise. For this purpose, an energy audit was conducted, energy conservation opportunities were identified as PEPs, and a cost analysis of the findings was performed. The main originality of the study is that such an energy audit is conducted for the first time in the Turkish raw milk production sector. The study was carried out in a modern raw milk production enterprise in the province of Tekirdağ. As a result of the study, it was determined that the compressed air line systems in operation are appropriately sized, the insulation of the hot water lines is deformed, and energy-efficient luminaires are not used in the lighting system. The study reveals an energy saving potential of 16 kW through the implementation of PEPs such as the use of energy-efficient motors, energy-efficient lighting systems, a more efficient chiller, a more efficient pre-cooling exchanger and compressor suction from the external environment. Seven of the eight proposed projects have a payback period of less than approximately five years.

Keywords: Energy efficiency · Energy management · Milk cooling · Energy audit

1 Introduction
In countries that are highly dependent on foreign energy, the efficient use of energy is of great importance for the national economy. A lack of energy efficiency practice leads to a further increase in energy imports and a large cost burden on the national economy. Since Turkey imports over 75% of its primary energy sources [1], measures that

This article is produced from the first author's Ph.D. thesis, "Establishment of Energy Management System Infrastructure in a Raw Milk Production Enterprise", conducted under the supervision of the second author.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 107–120, 2022. https://doi.org/10.1007/978-3-030-90421-0_9

108

A. D. Altan and B. Kayışoğlu

can be taken to maintain the current account balance focus on energy use. In this context, determining energy efficiency levels on a sectoral basis, eliminating nonconformities in practice, and increasing energy efficiency through technological renewal are strategic priorities. The penetration of Industry 4.0 applications, which aim to bring together all value-added activities through information technologies, has accelerated rapidly in recent years. With the inclusion of Industry 4.0 in sustainable energy management, the concept of Energy 4.0 has emerged: it provides real-time energy management focused on energy efficiency and equipment availability, and adoption of Industry 4.0 also offers enterprises instant traceability of energy consumption in production processes. The first step towards accelerating the transition to sustainable energy systems is an analysis of the current situation. Although laws and regulations have been published in Turkey to accelerate energy efficiency studies in industry and to help enterprises determine a roadmap, agricultural structures have been excluded from their scope. However, as can be seen in Fig. 1 [2], energy intensity is increasing steadily and rapidly in the agricultural sector.

Fig. 1. Sectoral energy intensity (services, industry, agriculture) on the basis of years (Btep/Million TL).

To reduce the growing impact of the agricultural sector on total energy demand, it is necessary to accelerate energy efficiency studies in both animal and plant production. Milk, one of the staple foods, is among the most significant products of the animal production industry. Due to the high level of mechanization and automation in raw milk production enterprises, energy consumption is quite high. Milking (vacuum pumps), milk cooling (cooling tanks), lighting and water heating systems are energy-intensive processes [3], as shown in Fig. 2. In the literature (Table 1), there are various studies addressing energy-intensive activities that can be adapted to milk production processes. Many of them recommend energy saving applications focusing on one or a few processes. This study fills the gap in the existing literature, since it includes a detailed energy audit.

Fig. 2. Energy consumption in raw milk production enterprises (milk cooling 37%, water heating 31%, milking 19%, lighting 10%, other 3%)

This paper aims to build an infrastructure for sustainable energy management and energy-efficient manufacturing in the raw milk production industry. The study includes a detailed energy audit conducted in a raw milk production facility for the first time in Turkey. PEPs requiring relatively low investment have been determined for improving energy-efficient processes and recovering waste energy. Moreover, a cost analysis is conducted to evaluate the feasibility of implementing these projects.

2 Material and Method
A raw milk production plant in Tekirdağ, with a capacity of 2,000 cows and 22,000 tons of milk per year, is considered as a case study. The enterprise has a rotary-type milking system in which 60 cows can be milked at the same time (Fig. 3). All data used for the energy audit were obtained by on-site measurement with the related devices; general information about the measuring devices and measuring points is given in Table 2. Data were obtained during 2016–2017. The energy-intensive systems (vacuum pumps, compressed air systems, cooling systems, lighting systems, hot water systems) were examined in a detailed energy audit. The electrical energy consumption of a system is calculated as:

ECe = Σ(ty · Py) + Σ(tb · Pb)    (2.1)

where ECe (kWh) is the electrical energy consumption of the system, ty (h) is the operating time of the system under load, Py (kW) is the power consumed by the system under load, tb (h) is the idle time of the system and Pb (kW) is the power consumed by the system at idle. The amount of energy saved per year and the payback period, in case the proposed systems are installed instead of the existing ones, are calculated by the following equations:

ES = ECe − ECp    (2.2)
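Equation (2.1) can be applied directly per system; a small sketch with hypothetical load/idle figures for a single vacuum pump follows (all numbers are illustrative, not measurements from the audit):

```python
def energy_consumption_kwh(t_load_h, p_load_kw, t_idle_h, p_idle_kw):
    """Eq. (2.1): annual electrical energy consumption of one system, kWh."""
    return t_load_h * p_load_kw + t_idle_h * p_idle_kw

# Hypothetical vacuum pump: 6 h/day loaded at 7.5 kW,
# 2 h/day idling at 1.2 kW, 365 days/year.
ec_existing = energy_consumption_kwh(6 * 365, 7.5, 2 * 365, 1.2)
# Proposed energy-efficient replacement (assumed figures).
ec_proposed = energy_consumption_kwh(6 * 365, 6.0, 2 * 365, 0.9)

es = ec_existing - ec_proposed  # Eq. (2.2): annual energy saving, kWh/year
print(f"EC_e = {ec_existing:.0f} kWh/yr, EC_p = {ec_proposed:.0f} kWh/yr, "
      f"ES = {es:.0f} kWh/yr")
```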

Table 1. Summary of literature

Study

Sector

Key findings

[4]

Plastic product manufacturing

Increasing energy savings by 86.5% with • Improve pump efficiency •Lower chiller condenser water temperature • Install premium efficiency pump motors • Variable speed process chilled water pumping • Improve chiller performance by increasing load factor • Digital control of process cooling system)

[5]

Raw milk production

15% of the consumed energy was water heating, 21% was milk cooling, 23% was milking system, 14% was lighting, 12% was ventilation

[6]

General

10% of the total industrial consumed energy was compressed air systems

[7]

Raw milk production

Waste heat from cooling process was specified as 525598 kJ -600683 kJ (24 kW-27.8 kW)

[8]

General-experimental

Reduction of 41–50% of greenhouse gas emission and cumulative energy demand with LED luminaire

[9]

Raw milk production-Experimental

Increase of 44.75% pre-cooling efficiency with using a plate heat exchanger

[10]

Raw milk production

The heat recovery system was presented as the best application.(for 60cows) The heat recovery with combined pre-cooler system was presented as the best option (for 200-400cows)

[11]

Raw milk production-experimental

About 53–65% of total heat lost was recovered in heat recovery heat exchanger COP was incerased from 3 to 4.8 with the incorporation heat exchanger

[12]

Raw milk production

The highest return on investment figures were achieved using pre-cooling sysyem, electrical water-heating system and vacuum pump control

[13]

Raw milk production

63% -54% of the energy consumed was from the cooling and milking systems

[14]

Raw milk production

After optimizing the lighting in cow buildings, the annual energy consumption decreases with 13% (continued)

Integration of Projects Enhancing Productivity Based on Energy Audit Data

Table 1. (continued)

[15] Raw milk production: the lighting level was determined to be insufficient, and an energy-efficient lighting system design was proposed.

Fig. 3. Rotary system

Table 2. General information about measuring devices/points

Device              Measuring points
Energy analyzer     Compressed air compressor, pump, fan, motor, chiller
Clamp meter         Compressed air compressor, pump, fan, motor, chiller
Temperature meter   Hot water line, chiller, cooling tank, compressed air compressor
Thermal camera      Hot water line, cooling tank

where ES (kWh/year) is the amount of energy saving per year, EC_e (kWh/year) is the energy consumption of the existing system and EC_p (kWh/year) is the electrical energy consumption of the proposed system.

MS = UPE · ES   (2.3)

where MS (TL/year) is the amount of money saved per year and UPE (TL/kWh) is the unit price of electrical energy.

PP = IC / MS   (2.4)

where PP (year) is the payback period and IC (TL) is the investment cost. The SEC (specific energy consumption) values of the systems are analysed using the following formula:

SEC = EC / MP   (2.5)

where EC (kWh) is the electrical energy consumption of the system and MP (litre) is the amount of milk production.

In addition to the main calculations, the Darcy–Weisbach equation is used to calculate the pressure loss:

h_f = f · (L/D) · (c² / 2g)   (2.6)

where h_f (m) is the pressure loss, g (m/s²) is the gravitational acceleration, L (m) is the pipe length, D (m) is the hydraulic diameter of the pipe, c (m/s) is the mean flow velocity and f is the Darcy friction factor.

The formulations used for the PEPs, taken from references [16–18], are presented below.

A. PEP1 – Energy Saving by the Use of Energy Efficient Motors

The energy saving achieved by replacing the existing electric motors with the proposed energy-efficient motors is calculated as:

ES_m = P_n · t · LR · (1/η_s − 1/η_r)   (2.7)

where ES_m (kWh/year) is the energy saving, P_n (kW) is the nominal power, t (h/year) is the operating time, LR is the loading rate, η_s is the standard-type motor efficiency and η_r is the recommended motor efficiency.

B. PEP2 – Compressor Suction from the External Environment

In the facility, the compressor is in a closed room and draws its air from this room. The energy saving realized by moving the air intake outdoors is calculated with the following equations. The power reduction factor is:

GA_f = 1 − T_d / T_i   (2.8)

where GA_f is the power reduction factor, T_d (K) is the outdoor temperature and T_i (K) is the indoor temperature. In the calculations, the annual average temperature is taken as 15.2 °C and the temperature difference as 7 °C.

ES_c = GA_f · t · P   (2.9)

where ES_c (kWh/year) is the energy saving, t (h/year) is the compressor operating time and P (kW) is the compressor power.

C. PEP3 – Compressor Heat Recovery

In the existing compressor unit, the heat generated while the compressor operates is not utilized and is discharged outside. In the proposed project, the heat extracted from the compressor by air can be used for space heating in the winter months. The heat energy saving from the air heated by the compressor is:

ES_hr = 3600 · q · c_air · ΔT   (2.10)

where ES_hr (kJ/h) is the energy saving, q (kg/s) is the air flow rate, c_air (kJ/kg·K) is the specific heat of the air and ΔT (K) is the temperature increase of the heated air.

D. PEP4 – Change of Lighting Fixtures

With the recommendation of using efficient fixtures to provide the total luminous flux of the current situation, the annual energy saving and the payback period are calculated.

E. PEP5 – Replacement of the Pre-cooling Exchanger in the Milk Cooling System

When the pre-cooling exchanger is replaced with a more efficient system, the energy saving is calculated with the following equations:

LD = Y_p − Y_e   (2.11)

where LD (kW) is the load difference, Y_p (kW) is the load of the proposed system and Y_e (kW) is the load of the existing system.

ES_P = (LD / COP) · t   (2.12)

where ES_P (kWh) is the energy saving, COP is the coefficient of performance and t (h) is the operating time of the system.

F. PEP6 – Replacing the Existing Chiller with a More Efficient Chiller

The energy saving achieved by replacing the existing cooling tank with the proposed one is calculated as:

ES_ec = CL · (1/COP_e − 1/COP_p) · t   (2.13)

where ES_ec (kWh) is the energy saving, CL (kW) is the cooling load of the tank, COP_e is the coefficient of performance of the existing system, COP_p is the coefficient of performance of the proposed system and t (h) is the operating time of the system.

G. PEP7 – Heat Recovery for the Milk Cooling System

The energy recovery in the milk cooling unit is calculated as:

ES_cr = YF · t   (2.14)


where ES_cr (kWh) is the energy saving, YF (kW) is the load difference and t (h) is the operating time of the system.

H. PEP8 – Hot Water Line Insulation

The heat losses in the hot water line are calculated with the following equation:

Q = π · L · (T_i − T_d) / [ 1/(α_i · d_i) + ln(d_d/d_i)/(2λ) + 1/(α_d · d_d) ]   (2.15)

where T_i (°C) is the temperature of the surface to be insulated, T_d (°C) is the temperature of the ambient air, α_i (W/m²·°C) is the inner surface heat transfer coefficient, α_d (W/m²·°C) is the outer surface heat transfer coefficient, d_i (m) is the inner diameter, d_d (m) is the outer diameter, λ (W/m·°C) is the thermal conductivity of the pipe wall/insulation and L (m) is the length.
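The PEP8 calculation can be sketched in code. Below is a minimal implementation of Eq. (2.15); every input value (temperatures, surface coefficients, diameters, conductivity) is an illustrative assumption, not the plant's measured data.

```python
from math import pi, log

def pipe_heat_loss_w(L, Ti, Td, alpha_i, alpha_d, d_i, d_d, lam):
    """Heat loss (W) of a pipe of length L, series resistances of Eq. (2.15)."""
    resistance = (1.0 / (alpha_i * d_i)
                  + log(d_d / d_i) / (2.0 * lam)
                  + 1.0 / (alpha_d * d_d))
    return pi * L * (Ti - Td) / resistance

# Hypothetical case: a 20 m bare steel line, 60 degC surface, 15 degC ambient.
Q = pipe_heat_loss_w(L=20.0, Ti=60.0, Td=15.0, alpha_i=2000.0, alpha_d=10.0,
                     d_i=0.05, d_d=0.06, lam=50.0)
print(f"heat loss of the uninsulated line: {Q:.0f} W")
```

For a bare pipe the outer-surface term dominates the resistance, which is why adding insulation (raising the ln(d_d/d_i)/(2λ) term with a low-λ layer) cuts Q sharply.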

3 Results and Discussion

As a result of the research, the current situation of the processes in the enterprise and the suggestions for increasing energy efficiency (the PEPs) are given below.

A. Vacuum Pumps

There are 3 vacuum pumps in the milking station. One pump operates continuously, one is activated at peak loads and the other waits as a backup. The daily energy consumption of the continuously operating vacuum pump is found to be 60.4 kWh, and its SEC value is calculated as 0.002 kWh/l. Vacuum pumps create the negative pressure required for milking (from the milk receiver to the cooling tank). The pump is active for 15 s and inactive for 5 s in each 20-s period. The milking process takes 14.5 h a day; therefore, the pump is switched on and off 2610 times a day. During milking the vacuum pump switches off at −420 mbar and switches back on when the pressure rises to −350 mbar. It operates at a constant pressure of −420 mbar during the washing process, which takes 3.5 h a day. The pump exhaust temperature is measured as 59 °C. The vacuum pump draws up to a maximum of 6 kW (4.2 kW on average); since the rated power of the motor is 7.5 kW, it is sized appropriately. Vacuum pumps have energy saving potential via PEP applications. One efficiency project that can be implemented is to increase the efficiency class of the electric motor of the vacuum pump; another is to operate the on-off pump motor with a variable speed drive. It has been determined that there is a speed drive application in the current system.

B. Compressed Air System

The compressor used in the compressed air system works for 18 h a day during the milking and washing period; idle time is 6 h. The daily energy consumption of the compressor is calculated with reference to its load (16 kW) and idle (5.44 kW) power draw. The daily energy consumption of the compressor and the SEC value of the compressed air system are calculated as 281.65 kWh and 0.009 kWh/l, respectively.
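The vacuum pump figures above can be cross-checked numerically. The sketch below reproduces the reported 2610 daily on-off cycles and the roughly 60.4 kWh daily consumption, under the assumption (not stated explicitly in the text) that the 4.2 kW average draw applies to the loaded 75% of each milking cycle and to the whole washing period.

```python
# Cross-check of the vacuum pump duty cycle and daily energy figures.
CYCLE_S, ACTIVE_S = 20, 15           # 15 s on / 5 s off per 20 s cycle
MILKING_H, WASHING_H = 14.5, 3.5     # daily milking and washing periods
P_AVG_KW = 4.2                       # average power while running (assumed)

cycles_per_day = MILKING_H * 3600 / CYCLE_S
loaded_hours = MILKING_H * (ACTIVE_S / CYCLE_S) + WASHING_H
daily_energy_kwh = loaded_hours * P_AVG_KW

print(f"on/off cycles per day: {cycles_per_day:.0f}")   # 2610, as reported
print(f"daily energy: {daily_energy_kwh:.1f} kWh")      # ~60.4 kWh, as reported
```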


The average voltage harmonic distortion of the compressor motor is 1.16% and the current harmonic distortion is 1.7%. According to IEEE Std 519-1992, the harmonic distortion values accepted within the standard are 3% for voltage and 5% for current. Measures should be taken if, in measurements made on the main distribution panel, the voltage harmonics are above 3–5% and the current harmonics are above 10–12%. The harmonic values of the compressor motor are therefore within the specified limits, and the harmonics are not at a level that would cause overheating or malfunction in the future. The harmonics increase when the compressor operates at low power (Fig. 4).

Fig. 4. Harmonic values of compressed air compressor

C. Compressed Air Line Control

In the line between the compressor and the tank, the air velocity in the pipe (2.58 m/s) is below the limit value (10 m/s) and the pressure drop in the pipe (0.019 bar/100 m) is below the limit value (0.15 bar/100 m). In the line between the tank and the dryer, the air velocity (4.01 m/s) and the pressure drop (0.045 bar/100 m) are likewise below the limits. According to these measurements, the sizing of the lines is appropriate.

D. Milk Cooling System

The total cooling load of the cooling system is calculated as 80.55 kW; the COP is 2.6. The pre-cooling exchanger covers 18% of the total cooling capacity. However, since the heat exchanger is of the immersed-serpentine type along the tank wall, the full cooling capacity of the well water in the secondary circuit cannot be used, because the water temperature can rise at most to the milk outlet temperature. A higher cooling load can be obtained if a counter-flow plate heat exchanger is used: in counter-flow exchangers the secondary circuit (well water) outlet temperature converges in theory to the primary circuit (milk) inlet temperature, and in practice to within about 2 °C of it. Therefore, with the right heat exchanger application, the capacity of the chiller can be reduced through more pre-cooling. This application has been designed as a PEP; in this case, the pre-cooling load will be 24.8 kW.
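A line check of this kind follows directly from Eq. (2.6). A hedged sketch, in which the flow rate, pipe diameter, friction factor and compressed-air density are illustrative assumptions rather than the audit's measurements:

```python
from math import pi

def line_check(q_m3_s, d_m, f=0.02, rho=8.4):
    """Return (mean velocity m/s, Darcy-Weisbach pressure drop bar/100 m)."""
    c = q_m3_s / (pi * d_m ** 2 / 4.0)               # mean flow velocity
    dp_pa = f * (100.0 / d_m) * rho * c ** 2 / 2.0   # Eq. (2.6) as a pressure drop
    return c, dp_pa / 1e5

# Hypothetical line: 0.008 m3/s of compressed air in a 50 mm pipe.
c, dp = line_check(q_m3_s=0.008, d_m=0.05)
print(f"velocity {c:.2f} m/s (limit 10), drop {dp:.3f} bar/100 m (limit 0.15)")
```

Here the head-loss form of Eq. (2.6) has been multiplied by ρg to express the result directly as a pressure drop, which is how the limit values above are stated.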


It is recommended to replace the motors driving the 3 compressors in the cooling system with higher efficiency motors; the resulting energy saving is calculated in the PEP section. The daily energy consumption and the SEC value of the cooling system are calculated as 459 kWh and 0.015 kWh/l, respectively. In a study conducted in Ireland, 52 enterprises were examined and the average cooling SEC value was determined as 0.013 kWh/l [19]. Since the SEC value of the enterprise examined in this study is higher, the system clearly does not work efficiently. Although the COP value of 2.6 is within acceptable limits, energy savings can be achieved by using a new chiller with a higher COP; this recommendation is presented as a PEP.

E. Water Heating System

The energy saving obtainable by insulating the hot water line is presented as a PEP. For this purpose, thermal camera images are taken of the lines listed in Table 3 to determine the heat losses in the hot water circuit. Accepted values for the fitting and line surface temperatures are applied in the insulation savings calculation.

Table 3. Insulation status of the line based on thermal camera images

Location                                                  Detection
Combi water inlet and outlet line                         Fittings without insulation
Hot water line                                            No insulation
Balance vessel                                            No insulation
Round-trip line                                           No insulation
Combi water inlet and outlet line                         Fittings without insulation
Hot water line                                            No insulation
Heating line                                              Insulation deformation
Boiler line                                               Fittings without insulation
Boiler overview                                           Good body insulation / hot water line without insulation
Boiler primary circuit output / secondary circuit input   No insulation
Boiler primary circuit input                              No insulation
Boiler secondary circuit output                           No insulation
Buffer tank outlet line                                   No insulation
Buffer tank inlet line                                    No insulation

F. PEP Evaluation

1) PEP1 – Energy Saving Provided by the Use of Energy Efficient Motors.


The electric motors in use are those driving the compressed air compressor, the vacuum pump and the cooling tank compressors. Their technical specifications are given in Table 4. The energy saving is calculated for replacing these motors, all of which belong to efficiency class IE1, with IE3-class motors.

Table 4. Technical specifications of the motors

Place of use                Label power (kW)   Number   Number of poles   Efficiency class
Compressed air compressor   15.0               1        4                 IE1
Vacuum pump                 7.5                1        4                 IE1
Cooling tank compressor     7.5                3        2                 IE1
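The PEP1 calculation chains Eq. (2.7) with Eqs. (2.3)–(2.4). In the sketch below, the IE1/IE3 efficiencies, operating hours, loading rate, electricity tariff and motor price are all illustrative assumptions, not values taken from the audit.

```python
def motor_saving_kwh(p_n_kw, t_h, lr, eta_std, eta_rec):
    """Annual saving, Eq. (2.7), when a motor of efficiency eta_std
    is replaced by one of efficiency eta_rec."""
    return p_n_kw * t_h * lr * (1.0 / eta_std - 1.0 / eta_rec)

# Hypothetical 15 kW compressor motor, 6570 h/year at 70% load.
es = motor_saving_kwh(p_n_kw=15.0, t_h=6570, lr=0.7,
                      eta_std=0.88, eta_rec=0.93)   # assumed IE1 -> IE3
ms = es * 1.2        # money saving, Eq. (2.3), assumed tariff 1.2 TL/kWh
pp = 10000.0 / ms    # payback, Eq. (2.4), assumed motor cost 10,000 TL
print(f"saving {es:.0f} kWh/year, payback {pp:.1f} years")
```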

2) PEP2 – Compressor Suction Project from the External Environment. The energy gain obtained when the compressor draws its air from the external environment has been calculated. The construction cost (breaking through the wall, etc.) for the suction grille is added to the investment cost.

3) PEP3 – Heat Recovery Project from the Compressor Exhaust. Air-cooled compressors cool the circulating cooling oil with air: ambient-temperature air passes over the oil exchanger, takes up the heat of the oil and is discharged outside at an increased temperature. This heat potential can be used for space heating. The exhaust line will be routed to the administrative building above by means of a temperature-controlled damper and supplied to the rooms through vents. Thus, the amount of natural gas consumed by the combi boilers will be reduced. The flow rate of the heated air is 1116 Nm³/h, the inlet temperature is 32 °C, the outlet temperature is 65.4 °C and the specific heat of the air is 0.24 kcal/Nm³·K; the annual working hours are calculated with reference to the winter period. The lower heating value of the natural gas distributed in Turkey is taken as 8250 kcal/Nm³.

4) PEP4 – Lighting Fixture Replacement Project. The SEC value of the lighting system (for one barn) is calculated as 0.006 kWh/l. There are 42 luminaires of 400 W metal halide type in total in the facility. The efficacy of metal halide luminaires is normally 80–90 lm/W; however, because the luminaires have been in use for many years, a loss of efficacy is assumed and 60 lm/W is taken. The lumen output of the current system can be provided by 60 LED luminaires of 140 W, whose catalogue efficacy is 120 lm/W. The gains achieved by changing the lighting luminaires have been calculated.

5) PEP5 – Project for Replacing the Pre-Cooling Exchanger.
Above, the cooling loads of the existing heat exchanger and the proposed plate heat exchanger are calculated. This load difference will reduce the capacity of the existing chiller, causing it to consume less electricity. The gains achieved with the use of the proposed system (73-plate M3 base type) have been calculated.

6) PEP6 – Project of Replacing the Existing Chiller with a More Efficient Chiller. When the existing chiller is replaced with an air-cooled chiller with a higher COP, less electricity is consumed for the same cooling load. The gains achieved with the proposed project have been calculated.

7) PEP7 – Heat Recovery Unit for the Milk Cooling System. The heat recovery unit works mainly on the principle of recovering the superheating (sensible) energy of the refrigerant in the superheated vapor phase. Depending on the type of fluid and the manufacturer of the recovery unit, water at temperatures up to 70 °C can be produced; thus, the energy costs of the water heating system can be significantly reduced. The gains are calculated for a heat recovery unit with a thermal capacity of 18 kW and a storage capacity of 450 L.

8) PEP8 – Hot Water Line Insulation. The gains from insulating the hot water line are calculated for 10 uninsulated fittings and a 20 m long uninsulated line.

To sum up, the PEPs analysed with an energy management approach, together with their indicators (energy savings and payback period), are given in Table 5. Considering that the acceptable payback period is about 5 years, all projects except one (PEP2) are viable and feasible. In this way, energy consumption is reduced by 241,116 kWh/year with a total simple payback period of less than about 5 years.

Table 5. Summary table of PEPs

PEPs                                     Energy saving (kWh/year)   Payback period (year)
PEP1                                     11,733                     3
  Compressor motor replacement           3,544                      2.8
  Vacuum pump motor replacement          1,227                      5.3
  Cooling compressor motor replacement   6,962                      2.7
PEP2                                     2,304                      10.9
PEP3                                     22,464                     5.1
PEP4                                     30,366                     4.4
PEP5                                     17,515                     3.45
PEP6                                     42,592                     3.48
PEP7                                     93,960                     1.7
PEP8                                     22,686                     3.36
Total                                    243,420
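The screening logic behind Table 5 can be expressed compactly. The payback threshold of 5.5 years below is an assumption chosen to match the text's treatment of PEP3 (5.1 years) as acceptable while rejecting PEP2:

```python
# PEP: (energy saving kWh/year, payback years), values from Table 5.
peps = {
    "PEP1": (11733, 3.0),  "PEP2": (2304, 10.9),  "PEP3": (22464, 5.1),
    "PEP4": (30366, 4.4),  "PEP5": (17515, 3.45), "PEP6": (42592, 3.48),
    "PEP7": (93960, 1.7),  "PEP8": (22686, 3.36),
}

PAYBACK_LIMIT_Y = 5.5  # assumed screening threshold
viable = {k: v for k, v in peps.items() if v[1] <= PAYBACK_LIMIT_Y}

print("rejected:", sorted(set(peps) - set(viable)))        # only PEP2
print("viable saving:", sum(s for s, _ in viable.values()), "kWh/year")
```

The viable rows sum to 241,316 kWh/year, slightly different from the 241,116 kWh/year quoted in the text (the printed Table 5 total likewise differs from its column sum by the same 200 kWh), which suggests a rounding or typographical slip in one of the source figures.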


4 Conclusion

A number of possible energy saving applications have been identified as PEPs, and the energy saving and payback period of each PEP have been calculated. It has been shown that a range of energy saving applications can be projected, such as the use of energy efficient motors, heat recovery units, a more efficient chiller and a pre-cooling exchanger. A total energy saving potential of 243,420 kWh/year has been determined. By implementing the PEPs with a payback period of less than about 5 years, the enterprise can save about 241,116 kWh/year. This study provides an in-depth understanding of the importance of energy management studies in the raw milk production sector. Detailed energy audits are an essential tool for managers' decision-making on implementing energy saving recommendations. In this context, the dissemination of energy efficient applications in the relevant sector should be adopted as a government strategy. By raising awareness and providing financial support for the sector, important results can be achieved in terms of both the competitiveness of the sector and rural development. The integration of advanced computer simulation programs into detailed energy audits and the evaluation of PEPs should also be encouraged, to meet Energy 4.0 requirements and to ensure rapid adaptation to them.

References

1. Kayışoğlu, B., Diken, B.: The current situation of renewable energy use in Turkey and problems. J. Agric. Mach. Sci. 15(2), 61–65 (2019)
2. Bayrak, R., Bakırcı, F., Sarıkaya, M.: Savunma sanayinde VZA yöntemiyle etkinlik analizi [Efficiency analysis in the defense industry using DEA]. J. Entrepreneurship Dev. 10(2), 26–50 (2016)
3. Upton, J., Murphy, M., French, P., Dillon, D.: Dairy farm energy consumption. In: Teagasc National Dairy Conference. Teagasc Moorepark, Fermoy, Co. Cork (2010)
4. Tillou, M.M., Case, M.E., Case, P.L.: Energy efficiency in industrial process cooling systems: a systems approach. In: Proceedings of the 2003 ACEEE Summer Study on Energy Efficiency in Industry, pp. 290–300. American Council for an Energy-Efficient Economy, Washington, DC (2003)
5. Clarke, S., House, H.: Using Less Energy on Dairy Farms. Ministry of Agriculture, Food and Rural Affairs, Ontario (2010)
6. Saidur, R., Rahim, A.N., Hasanuzzaman, M.: A review on compressed-air energy use and energy savings. Renew. Sustain. Energy Rev. 14, 1135–1153 (2010)
7. Kishev, A.M., Ulimbashev, B.M.: Reduction of energy consumption during milk cooling. Russ. Agric. Sci. 37(2), 185–187 (2011)
8. Principi, P., Fioretti, R.: A comparative life cycle assessment of luminaires for general lighting for the office - compact fluorescent (CFL) vs light emitting diode (LED) - a case study. J. Clean. Prod. 83, 96–107 (2014). https://doi.org/10.1016/j.jclepro.07.031
9. Nejdek, V., Fryč, J., Los, J.: Measurements of flat-plate milk coolers. Acta Univ. Agric. et Silvic. Mendel. Brun. 62(5), 1057–1064 (2014)
10. Corscadden, K.W., Biggs, N.J., Pradhanang, M.: Energy efficient technology selection for dairy farms: milk cooling and electric water heating. Am. Soc. Agric. Biol. Eng. 30, 3 (2014)
11. Sapali, N.S., Pise, M.S., Pise, T.A., Ghewade, V.D.: Investigations of waste heat recovery from bulk milk cooler. Case Stud. Therm. Eng. 4, 136–143 (2014)


12. Upton, J., et al.: Investment appraisal of technology innovations on dairy farm electricity consumption. J. Dairy Sci. 98(2), 898–909 (2015)
13. Duman, A., Gönülol, E., Ülger, P.: A study of determination energy costs and factors affecting the energy efficiency for milking mechanization. In: International Conference on Agricultural, Civil and Environmental Engineering (ACEE-16), 18–19 April, Istanbul, Turkey (2016)
14. Bey, M., Hamidat, A., Benyoucef, B., Nacer, T.: Viability study of the use of grid connected photovoltaic system in agriculture: case of Algerian dairy farms. Renew. Sustain. Energy Rev. 63, 333–345 (2016)
15. Altan, A.D.: Simulation of energy efficient lighting system for energy optimization: a case study of a dairy farm. Int. J. Eng. Res. Dev. 13(11), 28–31 (2017)
16. Anonymous: Energy Manager Training Notes, Book 1. UCTEA Chamber of Mechanical Engineers, Kocaeli (2015)
17. Anonymous: Energy Manager Training Notes, Book 2. UCTEA Chamber of Mechanical Engineers, Kocaeli (2015)
18. Anonymous: Energy Manager Training Notes, Book 3. UCTEA Chamber of Mechanical Engineers, Kocaeli (2015)
19. Murphy, D.M., Upton, J., O'Mahony, J.M.: Rapid milk cooling control with varying water and energy consumption. Biosyst. Eng. 116, 15–22 (2013)

Healthcare Systems and Management

A Fuzzy Cognitive Mapping Approach for the Evaluation of Hospital's Sustainability: An Integrated View

Aziz Kemal Konyalıoğlu1(B) and İlke Bereketli2

1 Department of Management Engineering, Istanbul Technical University, Istanbul, Turkey

[email protected]

2 Department of Industrial Engineering, Galatasaray University, Istanbul, Turkey

[email protected]

Abstract. The relationships among sustainability criteria in a hospital cannot be assessed easily. As there exist many criteria for evaluating whether a hospital is sustainable under the green hospital concept, fuzzy cognitive mapping (FCM) is used to examine the relationships between these qualifications from a macro perspective. Fuzzy cognitive mapping is a very useful tool for evaluating the relationships between qualifications/concepts and seeing how they affect each other; it also makes clear which qualifications should be improved. The adjacent matrix of the mapping shows the negative and positive effects quantitatively. In this study, it is aimed to investigate hospitals' sustainability criteria by using a fuzzy cognitive mapping approach. Based on the numerical analysis, indoor quality and policies and regulations are the most central concepts, while food quality and occupational health and safety are the least central ones in the sustainability evaluation of hospitals. Policies and regulations are also found to be the concept most strongly affecting the others, while energy usage and water usage are relatively strongly affecting concepts in a hospital's evaluation as a green hospital. Many factors affect a hospital seeking to be sustainable and high-qualified; as quality is a broad concept including continuous improvement, many factors should be taken into consideration in terms of sustainability by observing the relationships and correlations between them.

Keywords: Healthcare management · Sustainability · Cognitive mapping · Fuzzy logic · Hospitals' sustainability

1 Introduction

Healthcare is one of the most crucial areas in human life. Effective healthcare management leads people to be aware of sustainability and sustainable healthcare management. Not only sustainable healthcare management but also sustainable hospitals are important for waste management, which is a big part of the sustainability concept. Hospitals try to be sustainable, as sustainability is seen as a major necessity in the current world. That is why hospitals in the health sector are in a race to obtain a green certificate all around the world; however, there exist many variables to be taken into consideration in terms of green thinking, and not only quality certification but also waste avoidance is becoming important for the healthcare sector. Considering that sustainability is also part of healthcare, it is important to note that the sustainability concept is barely implemented and discussed in harmony [1]. Healthcare companies that invest in sustainability initiatives believe that they gain strategic competitive advantages by taking these initiatives into consideration [2]. Implementing sustainability qualifications also gives hospitals the opportunity to avoid waste and to become financially and economically more durable. Besides, evaluating each factor in terms of sustainability leads hospitals to think about total quality management tools. Such implementation seems simpler in public organizations, but in the healthcare area it is very hard, as there are crucial adaptability problems [3].

This study has four main sections: an introduction giving a brief definition of and introduction to healthcare sustainability; a literature review deepening previous studies on sustainability in healthcare and fuzzy cognitive mapping applications in the literature; a materials and methods section investigating the main components of this study and the application of fuzzy cognitive mapping to general sustainability factors in hospitals; and a conclusion presenting the results and conclusions of the application.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 123–130, 2022. https://doi.org/10.1007/978-3-030-90421-0_10

2 Literature Review

A. Sustainability in Healthcare

Sustainability, or sustainable development, is a popular and important topic for the quality of life [4]. A report published by the World Commission on Environment and Development in 1987 defines sustainability as meeting the needs of the present without compromising the ability of future generations to meet their own needs [5]. The complexity of healthcare systems involves increasing demand and decreasing resources. The Alliance for Natural Health (2006) defines a sustainable healthcare system as: "a complex system of interacting approaches to the restoration, management and optimization of human health that has an ecological base, that is environmentally, economically and socially viable indefinitely, that functions harmoniously both with the human body and the non-human environment, and which does not result in unfair or disproportionate impacts on any significant contributory element of the healthcare system". For a sustainable healthcare system, focusing on the present is not enough; future needs should also be considered [4]. One of the strategies of the European Commission is to seek environmentally friendly goods, services and works in life-cycle terms; it has published standards with different criteria for topics such as cleaning, heating, construction, electronic equipment for healthcare systems, food and transportation [5].

In the literature there are various sustainability studies for the healthcare sector. Mortimer studied climate change teaching in medical schools; the following issues are considered: lack of training among educators, selecting the right topics for medical students, and the time needed to develop new materials [6]. Cavicchi, in a study of the Emilia-Romagna Health Service, examined the effects of leadership, performance, culture, technologies and social capital on sustainable development in terms of intellectual capital [7].

B. Fuzzy Cognitive Mapping

Fuzzy cognitive maps (FCMs) are graphical representations of causal relationships, introduced by Kosko in 1986 [8]. FCMs are preferred as a simulation methodology for their flexibility, fuzzy reasoning and abstraction [9]. Owing to the simplicity of modelling and the ease of the methodology, FCMs have several advantages and are used in different study areas such as the military, history, medicine and engineering [10]. There are various examples in the literature. Styblinski and Meyer compared signal flow graphs and FCMs and noted their similarities; the comparison is applied to a circuit analysis, and the results are discussed to relate the FCM and signal flow graph concepts [11]. Georgopoulos et al. addressed a language disorder whose diagnosis is difficult; an FCM computer model is constructed with data gathered from experts who have sufficient knowledge of this medical problem [12]. In the healthcare area, Trochim and Kane explained the relationship between phases in a hospital and conception by using cognitive mapping [17]. Shewchuk et al. used a cognitive map to understand the representation given by healthcare journals [18]. Eldredge et al. planned health promotion programs by using cognitive mapping as well [19].

3 Material and Methods

A. Fuzzy Cognitive Mapping Approach

Fuzzy cognitive mapping is a very useful approach for evaluating and interpreting the relationships between factors or specific groups of concepts. The connections between the factors describe the cognitive models underlying a system [13]. Cognitive maps generally include two main parts: causal beliefs and concepts. The relationship between two variables may be negative or positive; a negative relationship indicates that the effect on the variable is in the opposite direction [14]. Fuzzy cognitive maps are graphs, each with an adjacent matrix denoted W. To build a fuzzy cognitive map (FCM), the first step is to decide the relations between the factors, including the directions of the arrows; secondly, the weights are decided based on Table 1. The concept of fuzzy cognitive maps is similar to neural networks (NN): the factors' outputs work like neurons, as can be seen in Fig. 1. The adjacent matrix of the FCM indicates the influences between the variables. The variable types can be described in terms of outdegree (od) and indegree (id) values: the outdegree is the sum of a row and the indegree is the sum of a column, in absolute values, indicating respectively the strength of the connections a_ij leaving and entering a variable. Transmitter variables have to

Table 1. The transformation between fuzzy values and linguistic values [8]

Fuzzy value   Linguistic value
1.0           The strongest
0.8           Very strong
0.7           Strong
0.6           Moderately strong
0.5           Weak
0.4           Very weak
0.2           The weakest
0.0           Lack

Fig. 1. An example for Fuzzy cognitive Mapping including the effect factors [15]

obtain a zero of outdegree and indegree given in Eqs. (1) and (2) respectively. od (vi ) =

N 

aii

(1)

aki

(2)

k−1

id (vi ) =

N  k−1

A Fuzzy Cognitive Mapping Approach for the Evaluation

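As a small illustration (not part of the original paper), the linguistic scale of Table 1 can be captured as a lookup in Python; the `negative` flag for inverse influences is an added assumption:

```python
# Mapping from Table 1 (fuzzy value per linguistic label).
SCALE = {
    "the strongest": 1.0, "very strong": 0.8, "strong": 0.7,
    "moderately strong": 0.6, "weak": 0.5, "very weak": 0.4,
    "the weakest": 0.2, "lack": 0.0,
}

def to_weight(label, negative=False):
    # Convert a linguistic judgment into a signed fuzzy weight;
    # negative=True encodes an inverse influence between two concepts.
    w = SCALE[label.lower()]
    return -w if negative else w

print(to_weight("Strong"))           # 0.7
print(to_weight("Very weak", True))  # -0.4
```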

The value of each factor/concept is affected by the factors connected to it through the weights, together with its previous value. The values can be found as given in Eq. (3):

A_i^(t+1) = f( A_i^(t) + Σ_{j=1, j≠i}^{n} A_j^(t) · W_ji )   (3)

where A_i is the activation level of the concept C_i and the W_ji are the weights [15]. The function f is a nonlinear function, the sigmoid, given in Eq. (4):

f(x) = 1 / (1 + e^(−x))   (4)
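A minimal numerical sketch of the update rule in Eqs. (3) and (4) follows; the three-concept map and its weights are invented for illustration, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    # Eq. (4): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def fcm_step(A, W):
    # Eq. (3): each concept is driven by its own previous activation
    # plus the weighted activations of the others (diag(W) = 0).
    return sigmoid(A + A @ W)

# Hypothetical 3-concept map; W[j, i] is the influence of concept j on i.
W = np.array([[0.0,  0.6, -0.3],
              [0.4,  0.0,  0.5],
              [0.0, -0.2,  0.0]])
A = np.array([0.5, 0.5, 0.5])  # initial activation levels

# Iterate until the activation vector stabilizes.
for _ in range(50):
    A_next = fcm_step(A, W)
    if np.allclose(A, A_next, atol=1e-6):
        break
    A = A_next
print(A.round(3))  # steady-state activation levels
```

Tools such as MentalModeler run essentially this kind of simulation for a map drawn by experts.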

Implementing fuzzy cognitive mapping is a rather complex process, as there are many connections between the factors. In this study, the factors were evaluated positively or negatively by the experts, and the relations between the factors affecting sustainability were attributed accordingly. In Fig. 2, the fuzzy cognitive mapping is drawn with the MentalModeler program.

B. Fuzzy Cognitive Mapping for Hospitals' Sustainability

The nodes given in Fig. 2 represent the hospitals' sustainability concepts. These concepts, proposed by Sagha Zadeh et al. [16], are included in order to obtain a detailed cognitive map. There are 10 main concepts in the model. The concepts of the cognitive mapping given in Fig. 2 are the most important factors in the sustainability evaluation of a hospital. The blue arrows and red arrows show positive and negative effects on the concepts, respectively. Furthermore, it is clearly seen that the complexity of the cognitive mapping calls for a detailed analysis. The adjacency matrix of the fuzzy cognitive mapping given in Fig. 2 was obtained from the views of the experts, a group of medical doctors working in Istanbul. The fuzzy cognitive mapping includes 83 connections, which implies that the system is very complex, as given in matrix A. Rows and columns are ordered as food quality, indoor quality, policies and regulations, green facility design, energy usage, water usage, water quality, occupational health and safety, air quality, chemical usage; entry a_ij is the influence of row concept i on column concept j:

Food quality                      0.00  0.00  0.00  0.00  0.00  0.20  0.10  0.10  0.00 −0.10
Indoor quality                    0.00  0.00  0.00  0.00 −0.40 −0.30  0.30  0.50  0.30  0.00
Policies and regulations          0.30  0.50  0.00  0.00 −0.50 −0.50  0.30  0.20  0.40 −0.40
Green facility design             0.20  0.20  0.00  0.00 −0.50 −0.50  0.30  0.00  0.30 −0.40
Energy usage                      0.10  0.20  0.00  0.00  0.00  0.25  0.10  0.00  0.10  0.40
Water usage                       0.20  0.10  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
Water quality                     0.00  0.30  0.00  0.00 −0.05  0.00  0.00  0.10  0.10 −0.10
Occupational health and safety    0.05  0.00 −0.10  0.10 −0.08  0.00  0.00  0.00  0.00  0.00
Air quality                       0.00  0.30  0.00  0.10 −0.10 −0.10  0.10  0.00  0.00 −0.10
Chemical usage                   −0.20 −0.10  0.00  0.00  0.30  0.20 −0.20  0.00 −0.20  0.00
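Given the adjacency matrix A (entries as recovered here from the two-page print layout), the outdegree, indegree and centrality values of Table 2 can be recomputed from Eqs. (1) and (2); a Python sketch, with small rounding differences from the published table:

```python
import numpy as np

concepts = ["Food quality", "Indoor quality", "Policies and regulations",
            "Green facility design", "Energy usage", "Water usage",
            "Water quality", "Occupational health and safety",
            "Air quality", "Chemical usage"]

# Adjacency matrix A; A[i, j] is the signed influence of concept i on j.
A = np.array([
    [ 0.00,  0.00,  0.0, 0.0,  0.00,  0.20,  0.1, 0.0,  0.0, -0.1],
    [ 0.00,  0.00,  0.0, 0.0, -0.40, -0.30,  0.3, 0.5,  0.3,  0.0],
    [ 0.30,  0.50,  0.0, 0.0, -0.50, -0.50,  0.3, 0.2,  0.4, -0.4],
    [ 0.20,  0.20,  0.0, 0.0, -0.50, -0.50,  0.3, 0.0,  0.3, -0.4],
    [ 0.10,  0.20,  0.0, 0.0,  0.00,  0.25,  0.1, 0.0,  0.1,  0.4],
    [ 0.20,  0.10,  0.0, 0.0,  0.00,  0.00,  0.0, 0.0,  0.0,  0.0],
    [ 0.00,  0.30,  0.0, 0.0, -0.05,  0.00,  0.0, 0.1,  0.1, -0.1],
    [ 0.05,  0.00, -0.1, 0.1, -0.08,  0.00,  0.0, 0.0,  0.0,  0.0],
    [ 0.00,  0.30,  0.0, 0.1, -0.10, -0.10,  0.1, 0.0,  0.0, -0.1],
    [-0.20, -0.10,  0.0, 0.0,  0.30,  0.20, -0.2, 0.0, -0.2,  0.0],
])

od = np.abs(A).sum(axis=1)   # Eq. (1): row sums of |a_ik|
id_ = np.abs(A).sum(axis=0)  # Eq. (2): column sums of |a_ki|
centrality = od + id_

for name, o, i, c in sorted(zip(concepts, od, id_, centrality),
                            key=lambda t: -t[3]):
    print(f"{name:32s} od={o:.2f} id={i:.2f} centrality={c:.2f}")
```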

Furthermore, analysis of the adjacency matrix shows that indoor quality has the maximum centrality value, which implies that indoor quality is the most important factor in a hospital's


Fig. 2. Fuzzy cognitive mapping approach for sustainability factors in a hospital

sustainability. Indoor quality in a hospital mainly covers the removal of contamination, filtration, mechanical ventilation and disinfection. Policies and regulations can be regarded as the centre of the model, since this concept has the highest outdegree value, 3.00, as seen in Table 2. Chemical usage should be avoided, as it is the most negatively affecting concept. Based on the adjacency matrix, several factors or concepts are directly connected with each other. Policies and regulations can force hospitals to become environmentally friendly through laws. Besides, it can be seen from the adjacency matrix and the fuzzy cognitive mapping in Fig. 2 that policies and regulations push hospitals to save energy and water, which implies that becoming a "green and sustainable hospital" is most feasible when regulations and policies are strict. On the other hand, green facility design is one of the most affected concepts of the system: the other factors severely affect it, with indoor quality exerting the strongest influence. For a better sustainable and green hospital, energy and water usage should be decreased, and policies and regulations should be strengthened. Even though occupational health and safety is underrated in the model weights, it should be more influential, since in the green hospital concept the life and safety of workers and patients are very important.


Table 2. The degree levels and centralities of concepts

Component                        Indegree value   Outdegree value   Centrality value
Indoor quality                   1.70             2.10              3.80
Policies and regulations         0.10             3.00              3.10
Energy usage                     1.93             1.14              3.07
Chemical usage                   1.50             1.20              2.70
Green facility design            0.20             2.21              2.41
Water usage                      2.04             0.30              2.34
Air quality                      1.40             0.80              2.20
Water quality                    1.40             0.65              2.05
Food quality                     1.06             0.50              1.56
Occupational health and safety   0.90             0.33              1.23

4 Conclusion

There are many factors affecting a hospital that is trying to be sustainable and qualified. As quality is a broad concept that includes continuous improvement, in terms of sustainability many factors should be taken into consideration by observing the relationships and correlations between them. In this study, we investigated these factors with fuzzy cognitive maps and observed the effects of the concepts on each other. Based on the numerical analysis, indoor quality (covering disinfection, mechanical ventilation and airflow control) and policies and regulations are the most central concepts, while food quality and occupational health and safety are the least central ones in the sustainability evaluation of hospitals. The analysis also indicates that policies and regulations is the concept with the strongest effect on the others. Energy usage and water usage are relatively strong influences on a hospital's sustainability evaluation as a green hospital. For further research, the concepts and the relationships between the nodes should be extended in order to obtain a more comprehensive analysis, and other mapping methods can be used.

References

1. Weisz, U., Haas, W., Pelikan, J.M., Schmied, H.: Sustainable hospitals: a socio-ecological approach. GAIA 20(3), 191–198 (2011)
2. Sustainability – a new engine of innovation for healthcare? PR Newswire. Accessed 27 Jun 2011
3. Silva, S., Fonseca, A.: Portuguese primary healthcare – sustainability through quality management. Int. J. Qual. Reliab. Manag. 34(2), 251–264 (2017)
4. Faezipour, M., Ferreira, S.: A system dynamics approach to water sustainability in healthcare. In: IIE Annual Conference Proceedings, p. 1. Institute of Industrial and Systems Engineers (IISE) (2012)


5. World Commission on Environment and Development (WCED): Our Common Future. Oxford University Press, Oxford (1987)
6. Mortimer, F.: Emerging sustainability: reflections on working in sustainability and health. Int. Transdiscipl. J. Compl. Soc. Syst. 87
7. Cavicchi, C.: Healthcare sustainability and the role of intellectual capital: evidence from an Italian regional health service. J. Intellect. Cap. 18(3), 544–563 (2017)
8. Kosko, B.: Fuzzy cognitive maps. Int. J. Man Mach. Stud. 24(1), 65–75 (1986)
9. Parsopoulos, K.E., Papageorgiou, E.I., Groumpos, P.P., Vrahatis, M.N.: A first study of fuzzy cognitive maps learning using particle swarm optimization. In: Proceedings of IEEE, pp. 1440–1447 (2003)
10. Stach, W., Kurgan, L., Pedrycz, W., Reformat, M.: Genetic learning of fuzzy cognitive maps. Fuzzy Sets Syst. 153(3), 371–401 (2005)
11. Styblinski, M.A., Meyer, B.D.: Signal flow graphs vs fuzzy cognitive maps in application to qualitative circuit analysis. Int. J. Man Mach. Stud. 35(2), 175–186 (1991)
12. Georgopoulos, V.C., Malandraki, G.A., Stylios, C.D.: A fuzzy cognitive map approach to differential diagnosis of specific language impairment. Artif. Intell. Med. 29(3), 261–278 (2003)
13. Çoban, O., Seçme, N.Y., Seçme, G., Özesmi, U.: An empirical analysis of firms' strategies in the Turkish automobile market. Econ. Bus. Rev. Central South-East. Euro. 8(2), 117–141 (2006)
14. León, M., Mkrtchyan, L., Depaire, B., Ruan, D., Vanhoof, K.: Learning and clustering of fuzzy cognitive maps for travel behaviour analysis. Knowl. Inf. Syst. 39(2), 435–462 (2014). https://doi.org/10.1007/s10115-013-0616-z
15. Özesmi, U., Özesmi, S.L.: Ecological models based on people's knowledge: a multi-step fuzzy cognitive mapping approach. Ecol. Model. 176, 43–64 (2004)
16. Sagha Zadeh, R., Xuan, X., Shepley, M.M.: Sustainable healthcare design: existing challenges and future directions for an environmental, economic, and social approach to sustainability. Facilities 34(5/6), 264–288 (2016)
17. Trochim, W., Kane, M.: Concept mapping: an introduction to structured conceptualization in health care. Int. J. Qual. Health Care 17(3), 187–191 (2005)
18. Shewchuk, R.M., O'Connor, S.J., Williams, E.S., Savage, G.T.: Beyond rankings: using cognitive mapping to understand what health care journals represent. Soc. Sci. Med. 62(5), 1192–1204 (2006)
19. Eldredge, L.K.B., Markham, C.M., Ruiter, R.A., Kok, G., Fernandez, M.E., Parcel, G.S.: Planning Health Promotion Programs: An Intervention Mapping Approach. John Wiley & Sons (2016)

Digital Maturity Assessment Model Development for Health Sector

Berrak Erdal1(B), Berra İhtiyar2, Ece Tuana Mıstıkoğlu1, Sait Gül2, and Gül Tekin Temur1

1 Industrial Engineering, Bahçeşehir University, Istanbul, Turkey
{berrak.erdal,ecetuana.mistikoglu}@bahcesehir.edu.tr, [email protected]
2 Management Engineering, Bahçeşehir University, Istanbul, Turkey
[email protected], [email protected]

Abstract. This research aims to determine the digital maturity levels of hospitals in Turkey, with the purpose of enabling institutions to adapt easily to digital changes and to increase their digital maturity levels. In the study, 6 main criteria and 16 sub-criteria developed for the evaluation of digital maturity levels in the health sector were taken into account. The DEMATEL method, one of the multiple criteria decision making (MCDM) methodologies, was used to evaluate the criteria set obtained. Survey data from 10 decision-makers working in different hospitals and different departments of the Turkish health sector were used. The results of the DEMATEL method were examined in detail and explained throughout the paper. Considering the main criteria and their sub-criteria, the effects of the main criteria on each other were examined first, followed by the effects of the sub-criteria on each other. Local weights of the main and sub-criteria and their global weights were calculated. According to the results, the main criterion with the highest impact on the others was "Organization and Management". With the information derived from the research, a website providing easy access for every hospital in Turkey was created, and a questionnaire was prepared to measure the digital maturity levels of hospitals. This test allows hospitals to make a self-assessment of their digital maturity level and to receive suggestions accordingly from the website. As a result of the research, it has been determined that the health sector in Turkey is open to digitalization and that investments made in this direction will increase the quality of health services.

Keywords: Digital maturity · Health sector · MCDM · DEMATEL · Expert judgment

1 Introduction

With developing technology, digitalization has become an increasingly popular concept throughout the world. Societies have continued to digitalize by improving themselves in the field of technology in line with their needs. Digital transformation has

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 131–147, 2022. https://doi.org/10.1007/978-3-030-90421-0_11


become a necessity for organizations that want to gain a competitive advantage in global markets, especially during the Covid-19 period, which still continues to affect the world. In this process, many sectors in Turkey have started to carry out their work on digital platforms. Digital transformation applications are seen in our country, but compared to other countries Turkey is in the early stages of digital transformation. According to the report announced by the International Institute for Management Development (IMD) in 2019, Turkey ranked 52nd among 63 countries in the World Digital Competitiveness ranking [1]. According to the report published by IMD in 2020, Turkey made progress in digitalization and rose to 44th place among 63 countries [2]. In the 2020 report, Turkey ranked 56th in the Knowledge factor, 42nd in the Technology factor, and 34th in the Future Readiness factor [2]. Turkey's move from 52nd to 44th place results from the improvements made in future readiness, particularly in adaptive attitudes and business agility, compared to the 2019 report [2]. The Turkey's Digitalization Index 2020 report prepared by TÜBİSAD (Türkiye Bilişim Sanayicileri Derneği, Turkish Informatics Industry Association) consists of 4 main components and sub-components in 10 categories, and Turkey took an average of 3.06 points out of 5 [3]. In TÜBİSAD's 2019 report, Turkey scored 2.94 [3]. These studies show that Turkey is developing in terms of digital transformation. Another study conducted in Turkey with the collaboration of TÜSİAD, Samsung, GFK, and Deloitte, namely "CEO Perspective on Digital Transformation in Turkey", aims to present how CEOs and top managers from different sectors understand and manage digital transformation in their organizations [4]. As a result of this study, CEOs and top managers attach great importance and support to the subject of digital transformation.
However, it is observed that digital transformation is applied separately in distinct departments of a company, not throughout the entire company. CEOs and top managers who participated in this study also stated that they clearly understand the requirements and benefits of digital transformation in organizations. In this last study, the top managers and CEOs come from the banking and retail industries or holdings, and there is no participation from the health sector. Studies on the digitalization of the health sector, such as digital hospitals, can be seen in Turkey. The digital hospital concept started in 2013 and has been developing in the health sector ever since. According to the Ministry of Health of Turkey, "Digital hospital can be defined in a broad sense from a hospital where the maximum level of information technologies is used in administrative, financial and medical processes, to a hospital where all kinds of communication tools and medical devices are integrated with each other and other information systems, and healthcare staff and patients can exchange data inside or outside the hospital by using telemedicine and mobile medicine practices" [5]. Information about the digitalization level of hospitals can also be obtained from the digital hospitals website created by the Ministry. The share allocated to health expenditures in Turkey's budget is perceived as the share that the country allocates to health and therefore to the healthy development of its citizens. A high share of education and health expenditures in the budget can be associated with the development level of a country. According to the data of the World Health Organization (WHO), the world average share for 2014 is 15.5% [6]. The share of health expenditures in the country's gross national product (GNP) is also


used as a criterion in determining the development level of countries, as it shows how much of a country's total economic capacity goes to the production of health services. According to WHO's 2014 figures, health expenditures constitute 9.9% of the world's GNP. Turkey is in the 5.1–8% range in this table, mostly among Eastern European and Asian countries [6]. While the ratio of total health expenditure to GDP was 4.4% in 2018, it was 4.7% in 2019; the ratio of current health expenditures to GDP rose from 4.1% in 2018 to 4.3% in 2019 [7]. This research shows that Turkey is open to development in the digitalization of the health sector. In this sense, digital developments and investments in the health sector will have a direct impact on the quality of the health service provided in the country and a positive effect on the country's development level. In the 21st century, many studies have been carried out in the field of Industry 4.0 and digitalization, and these studies have been adapted to many areas. Since the concepts of digitalization and Industry 4.0 cover a wide range, following every development has been challenging for the sectors. Allocating time and budget is an important criterion for companies to integrate Industry 4.0 and technological developments into their systems. With the Covid-19 period, the need for hospitals has increased, so hospitals have carried out work to keep their service quality stable. Ensuring that innovations and developments in the field of digitalization are followed by hospitals is an important element. Hospitals need to meet certain criteria in order to undergo an efficient digital transformation. For this reason, it has been determined that there is a need for a digital maturity model that will shed light on the digital transformation journey of hospitals.
Thus, the criteria open to development on the way to digital maturity will be determined, and hospitals will be encouraged to invest in those areas. A digital maturity model is used as a guide in the digital transformation journey of organizations. At the same time, organizations use such a model to understand their digital maturity level and to perform improvement studies according to its results. In general, a digital maturity model is a test of how effectively an organization uses new technologies to achieve improved results. This study aims to propose a digital maturity assessment model for the use of hospitals. With the criteria of the digital maturity model determined from the literature review, the most important factors affecting digitalization in the health sector were obtained. The importance weights of the digital maturity model criteria for the digital transformation journey of the health sector were computed via a well-known multiple criteria decision making (MCDM) tool, DEMATEL (Decision-Making Trial and Evaluation Laboratory). The most distinctive feature of DEMATEL is its ability to consider the influences and relations among criteria. The Analytic Hierarchy Process (AHP) and the Analytic Network Process (ANP) were other alternatives, but we chose DEMATEL because AHP does not consider the relations between criteria and ANP requires a given network showing those relations. DEMATEL draws the relation map and also calculates the criteria weights by considering these influences. After building the model, collecting data from the experts, running DEMATEL, and obtaining the influences and criteria weights, a digital maturity level scale was created. This scale was transformed into a digital maturity measurement test and served on a website, https://dijitalsaglik.wixsite.com/digihealth. This website aims to measure the digital maturity level of the


hospitals with respect to the outputs of the test and to give suggestions according to the level they are located in. This study aims to create digital awareness among institutions and employees operating in the health sector in our country.

2 Literature Review

In the literature, there are digital maturity assessment models created for general purposes, and none of them includes specific criteria affecting the digital maturity level of the health industry. Some examples are listed as follows:

• IMPULS is a 6-dimensional evaluation method containing 18 items in 5 stages to show readiness. Barriers to development to the next stage are described, as well as advice on how to resolve them. The development of the "IMPULS – Industrie 4.0 Readiness" model is based on a detailed dataset, and information is given regarding measurements, items, and the evaluation approach. The model is well grounded scientifically, and its structure and results are clarified in transparent ways [8].
• The Maturity and Readiness Model for Industry 4.0 is divided into 3 main sections and 13 distinct subsections. This model consists of 4 stages; an evaluation questionnaire is administered to institutions, and the institution's stage of digital maturity is observed [9].
• The Industry 4.0 Maturity Model is an easy-to-use assessment tool that companies can use to assess their maturity level. This model is based on 62 maturity items that are unequally grouped into nine organizational dimensions, and maturity is observed on 5 levels [11]. The Industry 4.0 Maturity Model was developed with a multi-methodological approach including a systematic literature review, conceptual modeling, and qualitative and quantitative methods. The maturity level is evaluated by questionnaires consisting of one closed-ended question per item [8]. According to the results of the questionnaires, weighted points for the dimensions are calculated and the maturity level can be determined.

According to the literature review, there is a gap: no digital maturity assessment tool has been dedicatedly developed for the health sector. In this study, we aimed to build one to fill this important gap. Each sector has its own determinants of digitalization, and they should be rigorously and carefully studied. In this study, we first developed the criteria set directly affecting the maturity level of hospitals and then utilized it in establishing a model. The findings of the model are also used on the website, https://dijitalsaglik.wixsite.com/digihealth. From a technical perspective, the DEMATEL tool of the MCDM field was chosen. The DEMATEL approach helps identify workable solutions by using a relational structure that enhances comprehension of the particular problem by constructing clusters of interconnected items. Using a causal diagram, this structural modeling technique can define the dependencies among the elements of a system, unlike conventional techniques such as AHP and SWARA, which assume that the items in an MCDM problem are independent. The causal diagram describes the contextual relationships and the strengths of influences among the elements using digraphs rather than directionless graphs [10].


3 Overview of DEMATEL

DEMATEL is a decision support tool that helps to create applicable solutions by using a relational network structure to provide an understanding of a particular problem or set of interconnected problems. It identifies the interdependencies between the elements of a system by creating a causal diagram. This diagram emphasizes the strengths of influence between elements by using the basic concepts of contextual relationships and digraphs [10]. DEMATEL is based on graph theory and allows users to analyze and solve problems using a visualization tool. This structural modeling approach uses a graph, also known as a causal influence diagram, to present the interdependencies and the values of influence between variables. All items are divided into cause and effect groups according to the visual relationship of levels between the system variables. This helps researchers better understand the structural relationships between system components and identify solutions to complex system problems. The approach not only transforms the interdependencies into a cause-and-effect set using matrices but also examines the key variables of a complex structural system through the influence relationship diagram. It is increasingly used in various fields, as it is very useful for visualizing complex causal structures among multiple factors [11]. The DEMATEL method provides a more comprehensive analysis and a more consistent result when compared to MCDM methods that do not consider the relationships between criteria.

A. Step 1: Creating the Group Direct-Influence Matrix (Z)

To assess the relations between n criteria F = {F_1, F_2, ..., F_n}, l experts forming a decision group E = {E_1, E_2, ..., E_l} are asked to indicate the direct influences between the digital maturity model criteria using an integer scale of "No Influence (0)", "Low Influence (1)", "Medium Influence (2)", "High Influence (3)", and "Very High Influence (4)". Table 1 shows the definitions of the scale points. Then, the individual direct-influence matrix Z^l = [z_ij^l]_{n×n} provided by each expert is formed [12].

Table 1. DEMATEL method scale

Influence degree   Definition
0                  There is no influence between the compared criteria
1                  Row criterion has a low influence on the column criterion
2                  Row criterion has a medium influence on the column criterion
3                  Row criterion has a high influence on the column criterion
4                  Row criterion has a very high influence on the column criterion

z_ij^l represents the judgment of decision-maker E_l on the degree to which F_i affects F_j. By taking the arithmetic average of the individual judgments, the group direct-influence matrix Z is built:

Z = [z_ij]_{n×n}, where z_ij = (1/l) Σ_{k=1}^{l} z_ij^k   (3.1)
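Step 1 can be sketched as follows; the two 3 x 3 expert matrices are invented for illustration and use the 0-4 scale of Table 1:

```python
import numpy as np

# Hypothetical judgments of l = 2 experts on n = 3 criteria
# (0-4 influence scale, diagonal fixed at 0).
Z1 = np.array([[0, 3, 2],
               [1, 0, 4],
               [2, 1, 0]])
Z2 = np.array([[0, 1, 2],
               [3, 0, 2],
               [2, 3, 0]])

# Eq. (3.1): the group direct-influence matrix is the elementwise
# arithmetic mean of the individual expert matrices.
Z = np.mean([Z1, Z2], axis=0)
print(Z)
```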


B. Step 2: Creating the Normalized Direct-Influence Matrix (X)

After determining Z, the normalized direct-influence matrix X can be established by using the following formula [12]:

X = Z / s, where s = max( max_{1≤i≤n} Σ_{j=1}^{n} z_ij , max_{1≤j≤n} Σ_{i=1}^{n} z_ij )   (3.2)

C. Step 3: Creating the Total-Influence Matrix (T)

Using the normalized direct-influence matrix X, the total-influence matrix T = [t_ij]_{n×n} is computed by summing the direct effects and all the indirect effects, as in Eq. (3.3), where I represents the identity matrix [12]:

T = X + X² + X³ + … = X (I − X)^(−1)   (3.3)
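Steps 2 and 3 can be sketched as follows, continuing with a hypothetical 3 x 3 group matrix Z (values invented for illustration):

```python
import numpy as np

# Hypothetical group direct-influence matrix (n = 3 criteria).
Z = np.array([[0.0, 2.0, 2.0],
              [2.0, 0.0, 3.0],
              [2.0, 2.0, 0.0]])

# Eq. (3.2): normalize by the largest row or column sum.
s = max(Z.sum(axis=1).max(), Z.sum(axis=0).max())
X = Z / s

# Eq. (3.3): the series X + X^2 + ... converges because the spectral
# radius of X is below 1, giving the closed form T = X (I - X)^(-1).
I = np.eye(Z.shape[0])
T = X @ np.linalg.inv(I - X)
print(T.round(3))
```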

D. Step 4: Creating the Influential Relation Map (IRM)

At this step, the vectors R and C are formed as the row sums and column sums of the matrix T, respectively [12]:

R_i = Σ_{j=1}^{n} t_ij ,  C_j = Σ_{i=1}^{n} t_ij   (3.4)

Here r_i, the ith row sum of T, gives the sum of the direct and indirect effects dispatched from factor F_i to the other factors, and c_j, the jth column sum of T, gives the sum of the direct and indirect effects that factor F_j receives from the other factors [12]. For i = j, i, j ∈ {1, 2, …, n}, the horizontal-axis vector (R + C), called "Prominence", shows the overall strength of the influences given and received by a criterion, that is, the degree to which the criterion plays a central role in the system. Similarly, the vertical-axis vector (R − C), called "Relation", indicates the net influence that a criterion exerts on the system. If (r_j − c_j) > 0, criterion F_j has a net influence on the other criteria and belongs to the cause group; if (r_j − c_j) < 0, criterion F_j is on the whole influenced by the other criteria and belongs to the effect group [12].

E. Step 5: Threshold Value for IRM

To explain the structural relations between the factors, the influential relation map (IRM) is built using the information in the matrix T. In some cases, however, the IRM may become too complex to provide useful information for decision-making if all relations are shown on it. Hence, in order to filter out insignificant effects, a threshold value θ is set: only the elements of T whose influence level is greater than θ are shown on the IRM. In the literature, the threshold value θ is generally obtained from the results of a literature review, through experts' discussions,


with the brainstorming technique, etc. For simplicity, the maximum value of the diagonal elements of the matrix T was taken as θ [12].

F. Step 6: Criteria Importance Weights Calculation

Another motivation for using DEMATEL is to calculate the weights of the criteria. Criterion weights, showing their importance for the digital maturity assessment model in the health sector, are calculated through a prominence-based normalization [12]: each criterion's (R + C) value is divided by the sum of all criteria's (R + C) values.
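Steps 4 to 6 can be sketched on a hypothetical total-influence matrix T (values invented for illustration, not from the study):

```python
import numpy as np

# Hypothetical total-influence matrix T for 3 criteria.
T = np.array([[0.35, 0.62, 0.71],
              [0.58, 0.44, 0.79],
              [0.55, 0.58, 0.42]])

# Eq. (3.4): influence dispatched (row sums) and received (column sums).
R = T.sum(axis=1)
C = T.sum(axis=0)

prominence = R + C  # overall involvement of each criterion
relation = R - C    # > 0: cause group, < 0: effect group

# Step 5: show only links above the threshold (max diagonal element of T).
theta = np.diag(T).max()
irm_links = np.argwhere(T > theta)  # (i, j) pairs drawn on the IRM

# Step 6: importance weights by normalizing the prominence values.
weights = prominence / prominence.sum()

for i in range(len(weights)):
    group = "cause" if relation[i] > 0 else "effect"
    print(f"F{i+1}: prominence={prominence[i]:.2f}, "
          f"relation={relation[i]:+.2f} ({group}), weight={weights[i]:.3f}")
```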

4 Digital Maturity Model Criteria for Health Sector

The digital maturity of hospitals and the factors affecting digitalization in the health sector are examined under 6 main criteria and 16 sub-criteria in total (Table 2). Detailed explanations of the sub-criteria specified in the set are given below.

Table 2. Criteria set for the digital maturity model for the health sector

Strategy (C1): Technology Investments (C1.1); Innovation and Technology Management (C1.2)
Organization and Management (C2): Interoperability of Healthcare (C2.1); Leadership (C2.2); Healthcare Standards (C2.3)
Healthcare Workers (C3): Healthcare Worker Training (C3.1); Motivation Among Healthcare Workers (C3.2)
Technology in Health (C4): Electronic Health Record (C4.1); Cyber Security (C4.2); Patient Monitoring (C4.3); Blockchain Technology (C4.4)
Data Management (C5): Big Data & Machine Learning & Artificial Intelligence (C5.1); Health Data Quality (C5.2)
Governance (C6): Regulation (C6.1); R&D Investments (C6.2); Government Support (C6.3)

Strategy (C1).

Technology Investments (C1.1): With the development of technology, health institutions should incorporate technological investments that increase productivity. Efficient technology investment and application can enable hospitals to increase their cost and quality efficiency [13].

Innovation and Technology Management (C1.2): A healthcare organization's responsiveness to and acceptance of innovations often depend on its use of new technology and its corporate culture. Efficient and easy use of technology and the adaptation process are important for the success of innovations in the health sector. For the innovation of


organizational culture to be implemented effectively in healthcare organizations, healthcare professionals should encourage interaction between the different departments of the healthcare organization [14].

Organization and Management (C2).

Interoperability of Healthcare (C2.1): "The capacity of two or more systems or elements to share information and use the information exchanged" is a general definition of interoperability. Making interoperability a key issue in medicine and healthcare is critical, as it requires collaboration from healthcare practitioners, academics, IT professionals, data engineers, and policymakers [15].

Leadership (C2.2): A consistent concept of leadership in digital healthcare can guide healthcare leaders and managers in their work, foster interprofessional and multi-sectoral collaboration, and advance clinical practice, particularly in the context of digitalization [16].

Healthcare Standards (C2.3): Technology standards and specifications provide the basis for achieving interoperability, integration, and scalability through standardized protocols and data models [17].

Healthcare Workers (C3).

Healthcare Worker Training (C3.1): It is vital that hospitals provide training in a digital sense, as it increases the productivity of healthcare providers and the quality of service; however, how the training is designed, implemented, and conducted is also critical in terms of providing beneficial outcomes to the healthcare professionals receiving the training [18].

Motivation Among Healthcare Workers (C3.2): The healthcare workforce is crucial, as it is at the heart of a healthcare system. Motivation is one of the guiding factors for healthcare professionals and can contribute to the success of a healthcare organization in achieving better health and equity goals [19].

Technology in Health (C4).
Electronic Health Record (C4.1): One of the main aspects of the Electronic Health Record is that certified clinicians can create and maintain patient records in a digital format that can be shared with other providers across various healthcare organizations [20]. Cyber Security (C4.2): It is essential to allocate time and resources to protect the security of healthcare technology and the confidentiality of patient data against unauthorized access [21]. Patient Monitoring (C4.3): This system allows physicians to access their patients' health data remotely, so that patients can receive medical care over the internet without going to the hospital. This criterion covers concepts such as telemedicine, remote sensor technology, and smart devices under the heading of patient monitoring [22]. Blockchain Technology (C4.4): Blockchain is recommended as a viable technology for developing a healthcare network that enables patients to monitor how their data is shared, processed, and used [23].

Digital Maturity Assessment Model Development for Health Sector

139

Data Management (C5). Big Data & Machine Learning & Artificial Intelligence (C5.1): These technologies accelerate the diagnosis of diseases and allow a patient's data to be followed closely [24]. Health Data Quality (C5.2): Data content and information quality are an important issue in healthcare, as unstructured data is extremely complex and often inaccurate [25]. Governance (C6). Regulation (C6.1): All kinds of health-related information and promotion are subject to restrictions determined by the state, which has set regulations for any activity that may mislead patients, such as health institutions placing their names on competing websites [26]. R&D Investments (C6.2): R&D investments are among the most important factors in increasing the level of development and welfare of a society, and it is essential to establish a structure that supports R&D activities in the health sector [27]. Government Support (C6.3): The Ministry of Health, which is responsible for the protection of public health and affiliated with the state, is responsible for basic public health functions, mass vaccination, and the protection of environmental health [28].

5 Results of DEMATEL

A survey was constructed based on the criteria table created to measure digital maturity in the health sector, and 10 decision-makers working in different positions in the Turkish health sector participated in it. The survey was specifically constructed according to the DEMATEL method, and the decision-makers completed it in accordance with the scale depicted in Table 1. Decision-makers first evaluated the main criteria in pairwise comparisons, and then evaluated the sub-criteria under each main criterion in the matrices created for the sub-criteria. After performing the steps of DEMATEL, the local main-criteria and sub-criteria weights were calculated. Then, the global criteria weights were calculated by multiplying the local weights, to examine the breakdown of the sub-criteria. The local and global criteria weights are given in Table 3. The outputs in Table 3 are evaluated as follows. While the most important factor affecting digital maturity in the health sector was found to be "C2. Organization and Management", the "C3. Healthcare Workers" criterion was found to be the least effective in digitalization. As the results of the model show, the organizational structure of institutions is of great importance when integrating digital technologies into them. For this reason, it is necessary to have a digital transformation unit within institutions that follows digital innovations and investigates methods for integrating these innovations into the corporate culture. Such a unit will ensure healthier progress of the digitalization process within the organization and ensure that the organization gets maximum efficiency from the related investments. The sub-criterion with the highest global weight within the "C2. Organization and Management" main criterion is "C2.1. Interoperability

Table 3. DEMATEL method distribution of criteria weights.

Main Criterion (Local Weight)        Sub-Criterion (Global Weight)
C2. Organization & Management (0.178)  C2.1 Interoperability of Healthcare (0.06414)
                                       C2.2 Leadership (0.057884)
                                       C2.3 Healthcare Standards (0.055516)
C1. Strategy (0.174)                   C1.1 Technology Investment (0.087006)
                                       C1.2 Innovation and Technology Management (0.087006)
C4. Technology in Health (0.170)       C4.1 Electronic Health Record (0.04559)
                                       C4.2 Cyber Security (0.04211)
                                       C4.4 Blockchain Technology (0.04162)
                                       C4.3 Patient Monitoring (0.041)
C5. Data Management (0.168)            C5.1 Big Data & Machine Learning & Artificial Intelligence (0.084108)
                                       C5.2 Health Data Quality (0.084108)
C6. Governance (0.157)                 C6.1 Regulations (0.054118)
                                       C6.3 Government Support (0.052701)
                                       C6.2 R&D Investments (0.050396)
C3. Healthcare Workers (0.153)         C3.1 Healthcare Workers Training (0.088479)
                                       C3.2 Motivation Among Healthcare Workers (0.064219)

of Healthcare". As can be seen, the ability of healthcare professionals to adopt digital technologies jointly and to share information among themselves is critical for raising digital maturity in institutions. The "C2.2. Leadership" sub-criterion is also important in this regard, since leaders with high digital competencies who ensure coordination within teams have a positive effect on interoperability. The most important feature that distinguishes the DEMATEL method from other MCDM methods is that it examines the effects of the criteria on each other. The influences among the criteria are visualized using the IRM, so that the relations between the criteria can be examined more easily. When the IRM for the main criteria (Fig. 1) is examined, the criterion with the highest influence on the others is "C2. Organization and Management". Optimization attempts and investments in this criterion will directly affect the other criteria, and developments in the digital field will be observed as a result. For instance, teams can be trained about the work of different units so that units within the organization can communicate and coordinate more easily, and training aimed at raising the digital awareness of people in leadership positions can be provided. The existence of a unit within the organization that follows digital innovations and aims to integrate them will increase digital maturity and awareness. This can be a strategic move aimed at improving the organization and management of the institution.
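The DEMATEL steps described above can be sketched in a few lines of linear algebra. The expert matrices of this study are not published, so the following Python fragment is only a generic illustration: the function name is ours, and the prominence-based weighting at the end is one common variant, not necessarily the exact formula used by the authors.

```python
import numpy as np

def dematel_weights(D):
    """Sketch of DEMATEL on an averaged direct-influence matrix D
    (expert ratings on the 0-4 scale, averaged over decision-makers)."""
    n = D.shape[0]
    # Normalize by the largest row sum (a common normalization choice).
    N = D / D.sum(axis=1).max()
    # Total relation matrix T = N (I - N)^-1 (sums all direct and
    # indirect influence paths, assuming I - N is invertible).
    T = N @ np.linalg.inv(np.eye(n) - N)
    R = T.sum(axis=1)          # influence each criterion dispatches
    C = T.sum(axis=0)          # influence each criterion receives
    prominence = R + C         # overall importance of each criterion
    weights = prominence / prominence.sum()
    return weights, R - C      # weights and net cause/effect indices
```

For instance, feeding a 3x3 influence matrix returns three weights summing to one, plus the cause/effect indices used to draw an IRM.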


Fig. 1 IRM for Main Criteria.

When the IRM for the sub-criteria of "C2. Organization and Management" (Fig. 2) is examined, the sub-criterion with the highest impact on the other sub-criteria is found to be "C2.2. Leadership". Due to space limitations, the figures showing the relations among the remaining sub-criteria are given in the Appendix but not discussed in the text. For this study, which proposes a digital maturity assessment tool for healthcare institutions, the basic output of DEMATEL is the attribute weights, because they are used directly on the website that serves as a self-assessment tool for hospitals.

Fig. 2 IRM for C2.

Based on the weight outputs of the DEMATEL method, a website (see https://dijitalsaglik.wixsite.com/digihealth) was built. Via this online tool, hospitals can easily measure their digital maturity level and receive suggestions accordingly. In the test part of the website, 16 questions represent the 16 sub-criteria, and each question carries the weight of the relevant sub-criterion (see Table 3). Considering their performance on each sub-criterion, hospital managers rate each question on a scale of 1 to 10, where 1 is the worst performance score and 10 the best. The digital maturity score of the hospital is obtained by multiplying the weight of each sub-criterion by the score given by the hospital for the relevant


question. This methodology follows a basic factor-rating approach. As stated in Table 4, hospitals are classified into five levels.

Table 4. Digital maturity levels and definitions.

Beginner (1–2.8): This level includes hospitals that have just started their digital transformation journey. To reach digital maturity, these hospitals should grasp all the requirements of digital transformation and implement them.

Learner (2.8–4.6): Hospitals at this level have begun to grasp the requirements of digital maturity and are open to investments in this direction. They are open to learning and improve themselves by doing the research needed to understand digital maturity. Digitalization strategies have started to be created within the hospital, but there are deficiencies in implementation.

Intermediate (4.6–6.4): Hospitals at this level have begun to understand the importance of digitalization, give the necessary attention to technology and innovation management, and are ready to invest in this context. They use electronic health records efficiently and attach importance to protecting these records. Their working environment is suitable for transferring information between teams, and they have no difficulty adapting to innovations in the digital field.

Experienced (6.4–8.2): This level includes hospitals with high awareness that have developed themselves in the field of digitalization. Training is organized in the hospital to increase digital awareness, covering healthcare professionals as well as management. They have the organization and equipment to oversee digitalization strategies within the institution. Work is carried out on data security and data analysis, and on integrating technological developments such as Blockchain technology, Big Data analysis, and Artificial Intelligence into the institution. The necessary infrastructure has been established to monitor patients remotely, and these systems are actively used by doctors to monitor patients' health status.

Pioneer (8.2–10): Hospitals at the pioneer level are leaders in terms of digital maturity and guide hospitals at other levels. These hospitals have fully met the requirements of digital maturity and implemented them in their institutions. They have a unit that monitors digital transformation, actively follow the development of artificial intelligence technologies, and have no difficulty implementing them. Their cybersecurity infrastructure is strong, and they care about the privacy of the personal data of patients and healthcare professionals. Employees in leading positions have high digital awareness and the leadership qualities to transfer their competencies to their teams. The hospital allocates a specific budget to R&D, encourages employees in these studies, and works to increase employee motivation and commitment to the hospital.
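The scoring behind the website is a simple weighted sum followed by a threshold classification. Since the implementation of the online tool is not published, the following Python fragment is only a hypothetical illustration of this factor-rating logic, using the sub-criterion global weights from Table 3 and the level boundaries from Table 4 (all names are ours).

```python
# Global sub-criterion weights from Table 3 (sum to ~1).
WEIGHTS = {
    "C2.1": 0.06414, "C2.2": 0.057884, "C2.3": 0.055516,
    "C1.1": 0.087006, "C1.2": 0.087006,
    "C4.1": 0.04559, "C4.2": 0.04211, "C4.3": 0.041, "C4.4": 0.04162,
    "C5.1": 0.084108, "C5.2": 0.084108,
    "C6.1": 0.054118, "C6.2": 0.050396, "C6.3": 0.052701,
    "C3.1": 0.088479, "C3.2": 0.064219,
}

# Upper bounds of the maturity levels from Table 4.
LEVELS = [(2.8, "Beginner"), (4.6, "Learner"), (6.4, "Intermediate"),
          (8.2, "Experienced"), (10.0, "Pioneer")]

def maturity_score(ratings):
    """ratings: sub-criterion code -> self-assessed score on a 1-10 scale."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def maturity_level(score):
    """Map a weighted score to the maturity level of Table 4."""
    for upper, name in LEVELS:
        if score <= upper:
            return name
    return "Pioneer"  # guard against rounding slightly above 10
```

A hospital rating itself 5 on every question, for example, lands in the Intermediate band, since the weights sum to approximately 1.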

Hospitals that measure their digital maturity on the website can then review the site's recommendations to improve their level. The recommendations for each level are as follows:


Beginner Level: The hospital should understand the requirements of digital maturity and receive training. Hospitals just starting their digital transformation journey can employ experts in the field of digitalization to adapt more efficiently. To obtain maximum efficiency from digitalization, training should be given to employees within the organization, and seminars can be organized to raise digital awareness. An inventory should be made of the equipment used in the digitalization process, and market research should be conducted to achieve maximum efficiency at low cost. Learner Level: Strategies for digitalization have started to be developed in hospitals at this level, but deficiencies have been observed. These strategies should first be implemented in pilot groups, a SWOT analysis should be made, and the deficiencies should be analyzed in detail, because identifying and analyzing them is critical for the process to proceed without error. After the pilot implementation phase is completed, the targets can be integrated into the institution according to the analysis results. Investments in technology and innovation can be made by taking the hospitals leading in digitalization as a reference. Intermediate Level: Hospitals at this level are ready to invest in innovation and technology. It is recommended that they start to implement telemedicine systems that allow remote monitoring of patients. A digital transformation desk should be established within the institution to follow digital developments and supervise the digitalization process. In this way, intermediate-level hospitals will be able to follow developments in the digital field, conduct market research, easily identify their deficiencies, and improve themselves by working on them. Studies can be carried out to ensure data security and data analysis.
They can follow developments in the field of big data and artificial intelligence. Experienced Level: To reach the pioneer level and become leaders in digitalization, hospitals can obtain consultancy services from the digital transformation desks of pioneer-level hospitals. Through such services, they can identify the remaining deficiencies in meeting all the requirements of digital transformation. Awareness of the importance of cybersecurity should be created and the cybersecurity infrastructure should be developed. It is also recommended to allocate a certain part of the budget to R&D studies. Pioneer Level: Hospitals at this level fulfill all the requirements of digital maturity. To avoid disruptions, their processes should be monitored and controlled periodically. Digital transformation work carried out globally should be followed, and joint studies can be conducted with leading hospitals abroad when deemed necessary. Within the country, they can aim to stay up to date by providing consultancy to experienced-level hospitals.


6 Discussion and Conclusion

This research aimed to determine the level of digital maturity in the health sector, identify deficiencies in institutions, advance the sector's digital transformation, and support better healthcare services. A comprehensive literature review was conducted to determine what an organization needs in order to reach digital maturity, and the criteria contributing most to the digital maturity of organizations were identified in the resulting criteria set. After the criteria set was prepared, the DEMATEL method, one of the most common MCDM methods, was used to measure digital maturity in organizations. DEMATEL calculates criteria weights by taking into account the relationships among criteria, and methods that consider such relationships are observed to give more realistic results in studies of this kind. Decision-makers rated the effects of the criteria on each other on a scale of 0–4 (Table 1), and the criteria weights were calculated considering the relationships revealed by DEMATEL. Based on the outputs of the method, a website (see https://dijitalsaglik.wixsite.com/digihealth) where hospitals can measure their digital maturity was designed and published. Its purpose is to benefit the country's health sector by enabling all hospitals to measure their digital maturity levels. The calculated criteria weights were assigned to the questions determined for each criterion during the preparation of the website. The computed digital maturity score is examined at 5 levels, and suggestions were made to benefit hospitals at every level. As a result of the analysis, the importance and necessity of digitalization in the health sector have been observed.
It has been concluded that Turkey is still developing in digital transformation and that the health sector needs advances in the digital field. Hospitals across the country can learn their digital maturity levels by taking the measurement test on the website, https://dijitalsaglik.wixsite.com/digihealth. In this manner, more hospitals in Turkey will be reached, and hospitals will be able to identify their digital weaknesses and areas open to development. This study aims to increase the quality of healthcare by increasing digital maturity in the health sector throughout the country. Acknowledgment. We wish to thank the participants who completed our survey as experts supporting this project.

Appendix

See Figs. 3 and 4.


Fig. 3 IRM for C4

Fig. 4 IRM for C6



References

1. The Institute for Management Development (IMD): World Digital Competitiveness Ranking 2019. Switzerland (2019)
2. The Institute for Management Development (IMD): World Digital Competitiveness Ranking 2020. Switzerland (2020)
3. TÜBİSAD website (2020). https://www.tubisad.org.tr/tr/guncel/detay/TUBISAD-Turkiyenin-dijitallesme-notunu-acikladi/58/2723/0
4. TÜSİAD, Samsung, GFK, Deloitte: CEO Perspective on Digital Transformation in Turkey. Türkiye (2016)
5. Dijital Hastane website (2021). https://dijitalhastane.saglik.gov.tr/
6. Sayım, F.: Türkiye'de Sağlık Ekonomisi İstatistikleri ve Sağlık Harcamalarının Gelişimi. Yalova Sosyal Bilimler Dergisi 7(15), 13–30 (2017)
7. TÜİK website (2019). https://data.tuik.gov.tr/Bulten/Index?p=Saglik-Harcamalari-Istatistikleri-2018-30624
8. Schumacher, A., Erol, S., Sihn, W.: A maturity model for assessing industry 4.0 readiness and maturity of manufacturing enterprises. Procedia CIRP 52, 161–166 (2016)
9. Akdil, K.Y., Ustundag, A., Cevikcan, E.: Maturity and readiness model for industry 4.0 strategy. In: Industry 4.0: Managing the Digital Transformation, pp. 61–94 (2017)
10. Shieh, J.I., Wu, H.H., Huang, K.K.: A DEMATEL method in identifying key success factors of hospital service quality. Knowl.-Based Syst. 23(3), 277–282 (2010)
11. Zhou, X., Shi, Y., Deng, X., Deng, Y.: D-DEMATEL: a new method to identify critical success factors in emergency management. Saf. Sci. 91, 93–104 (2017)
12. Si, S.L., You, X.Y., Liu, H.C., Zhang, P.: DEMATEL technique: a systematic review of the state-of-the-art literature on methodologies and applications. Math. Probl. Eng. 1–33 (2018)
13. Li, L., Rubin, B.: Technology investment in hospital: an empirical study. Int. J. Manage. Enterprise Dev. 1(4), 390 (2004)
14. Thakur, R., Hsu, S.H.Y., Fontenot, G.: Innovation in healthcare: issues and future trends. J. Bus. Res. 65(4), 562–569 (2012)
15. Lehne, M., Sass, J., Essenwanger, A., Schepers, J., Thun, S.: Why digital medicine depends on interoperability. Digital Med. 2(79) (2019)
16. Laukka, E., Pölkki, T., Heponiemi, T., Kaihlanen, A.M., Kanste, O.: Leadership in digital health services: protocol for a concept analysis. JMIR Res. Protocols 10(2) (2021)
17. Memon, M., Wagner, S.R., Pedersen, C.F., Beevi, F.H., Hansen, F.O.: Ambient assisted living healthcare frameworks, platforms, standards, and quality attributes. Sensors 14(3), 4312–4341 (2014)
18. Jeenan, A., Altawaty, A.B., Maatuk, A.M.: Experts opinion on the IT skills training needs among healthcare workers. In: Proceedings of the 6th International Conference on Engineering & MIS 2020 (ICEMIS 2020) (2020)
19. Kjellström, S., Avby, G., Areskoug-Josefsson, K., Gäre, B.A., Bäck, M.A.: Work motivation among healthcare professionals. J. Health Organ. Manag. 31(4), 487–502 (2017)
20. Shahmoradi, L., Darrudi, A., Arji, G., Nejad, A.F.: Electronic health record implementation: a SWOT analysis. Acta Med. Iran. 55(10), 642–649 (2017)
21. Kruse, C.S., Frederick, B., Jacobson, T., Monticone, D.K.: Cybersecurity in healthcare: a systematic review of modern threats and trends. Technol. Health Care 25(1), 1–10 (2017)
22. Farahani, B., Firouzi, F., Chang, V., Badaroglu, M., Constant, N., Mankodiya, K.: Towards fog-driven IoT eHealth: promises and challenges of IoT in medicine and healthcare. Futur. Gener. Comput. Syst. 78, 659–676 (2018)
23. Agbo, C.C., Mahmoud, Q.H., Eklund, J.M.: Blockchain technology in healthcare: a systematic review. Healthcare 7(2), 56 (2019)
24. Addepto website (2021). https://addepto.com/artificial-intelligence-and-big-data-in-healthcare/
25. Pramanik, M.I., Lau, R.Y.K., Demirkan, H., Azad, M.A.K.: Smart health: big data enabled health paradigm within smart cities. Expert Syst. Appl. 87, 370–383 (2017)
26. Işık, T.: Sağlık İletişiminde Dijital İletişim Kanallarının Kullanımı: Sektör Aktörlerinden Acıbadem Hastanesinin Dijital Etkileşim Kanalı ve Sosyal Medya Hesaplarının İncelenmesi. Elektronik Cumhuriyet İletişim Dergisi 1(2), 147–162 (2019)
27. T.C. Sağlık Bakanlığı: Türkiye Kamu Hastaneleri Kurumu, Sağlık Alanı Ar-Ge Faaliyetleri Çalıştay Raporu. Ankara (2015)
28. Yurdoğlu, H., Kundakcı, N.: SWARA ve WASPAS Yöntemleri ile Sunucu Seçimi. Balıkesir Üniversitesi Sosyal Bilimler Enstitüsü Dergisi 20(38), 253–270 (2017)

Using DfX to Develop Product Features in a Validation Intensive Environment

Florin Popişter(B), Mihai Dragomir, and Calin Dan Gheorghe Neamtu

Department of Design Engineering and Robotics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
{florin.popister,mihai.dragomir,calin.neamtu}@muri.utcluj.ro

Abstract. The paper presents the work performed by a team of engineers and doctors from the Technical University of Cluj-Napoca and healthcare institutions in the same Romanian city to develop an innovative hospital bed with smart features. The undertaking is part of the project PN-III-P2–2.1-PED-2019–5430 financed by the Romanian Government and is focused on achieving a novel product that incorporates state-of-the-art IT&C, original mechanisms, and composite materials with antibacterial properties. One of the main constraints in developing new products in a validation-intensive environment such as healthcare is to plan from the beginning the incorporated devices in such a way that they can demonstrate their performance in various standardized tests, thus facilitating the accreditation of the product for medical use. In this article, we use the DfX concept in the design phase of the smart hospital bed and then investigate its impact through simulation. A comprehensive integrative framework of all the approaches used in this case is therefore developed, and future research is proposed to adapt it to the needs of other fields. Keywords: DfX · Smart hospital bed · Product validation · Healthcare

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 148–157, 2022. https://doi.org/10.1007/978-3-030-90421-0_12

1 Introduction

Smart medical beds are an important development trend of recent years in healthcare [1]. Ghersi, Mariño and Miralles [2] identify two stages in the automation of this product: electrically controlled beds, for 40 years starting in 1940, and mechatronic (and recently smart) beds for another 30 years, starting in 1990. The same authors summarize in [1] the main features of the current concept of a smart hospital bed, in response to generic healthcare needs:

• interactivity of the interfaces between the bed and the occupant or staff,
• a large wealth of accessorizing options to accommodate the patient, and
• increased comfort and options to implement standard protocols of hospital care.

Many companies are involved in implementing various technical solutions related to these innovation dimensions and trends, working with new mechanical structures, compact


actuators, precise sensors, ergonomic software, composite materials, or other ideas [3–6]. Figures 1 and 2 present two different types of currently available smart beds: the first [7] is designed for long-term care of patients who are immobilized in bed, and the second [4] for cardiac and progressive care of patients with circulatory and heart problems, as also mentioned in [1].

Fig. 1. Illustration of a smart medical bed - smart B_ADD / medical Bed (source and copyright [7])

Fig. 2. Malvestio® sigma PCU electric bed scale system (source and copyright [4])


2 Concept Development

The hospital bed developed in the project HoPE (code number PN-III-P2–2.1-PED-2019–5430) will improve both patient comfort and the quality of medical procedures by addressing the needs of users and healthcare professionals in an integrated manner. The bed will be equipped with a retractable dome like the one in Fig. 3 below (presented by Intel at CES 2016), which allows a private environment to be created when the patient wants it. This dome features a connection to an air purification system and allows the lighting inside to be controlled (so that a patient can read at night without disturbing others); it has support for a 10-inch tablet and can be equipped with a noise-cancelling system. One of the innovations of this bed is the presence of a sensor system, which allows it to take care of the patient: such a tracking system can direct the modification of the patient's position on the bed.

Fig. 3. Intel relaxing cupola

One of the innovative elements of this project is the mechanism developed by the authors for folding the bed into a chair. This is done using a rod mechanism, illustrated in Fig. 4. A single electric actuator can transform the bed into a chair through a translational movement applied at one point of the bar system. The bed is structured in three segments, which together with the bar mechanism compose a module that can be rotated horizontally and lifted vertically to facilitate various operations, such as transferring the patient or flipping patients with a very low degree of mobility (Fig. 5). Compared to other bed models, the proposed folding mechanism uses a single actuator, while most use two, three, or even four drive motors. The bed can be folded with the patient lying on it. Another novelty is the self-care system, which uses a tracking system and a sensor system in order to take "care" of the patient by changing


the position of the bed so that the patient will naturally change their position. For other models in the literature, there are no reports of a tracking system that allows the avoidance of bedsores.

Fig. 4. Kinematic skeleton of the HoPE bed

Fig. 5. 3D Model of the concept of hospital bed

Another innovation of the bed is the mattress ventilation system, represented by the network of composite materials in Fig. 6, which allows the ventilation of the skin of convalescent patients and even the administration of some drugs in powder or liquid form that can be sprayed through it. The patient can sit directly on this


elastic and comfortable mesh made of composite materials with silver-ion paste. The system is inspired by the Lexus Kinetic Seat Concept [4], as shown in Fig. 6.

Fig. 6. Ventilation system of the mattress and the Lexus Kinetic Seat Concept

The design of the mechanical structure uses composite materials that contain silver ions for the bed construction, thus reducing weight and increasing antimicrobial efficacy. By incorporating components made of composite materials in the bed structure, the product weight will be reduced by 30% compared to a classical bed made of metal and plastic. By adding silver ions directly into the molded polymer composite material, a second advantage is achieved: an increased antimicrobial effect. The "personal environment" system allows the patient to create a private space by opening the dome, temporarily isolating the user from the rest of the room. Inside the dome, the user can control the conditioned air intake through a refreshment and filtration system. The dome is equipped with a tablet that makes it possible to change different settings of the bed and the dome environment (position, lighting, or climate); the system can also be used as a multimedia or e-reader platform. Current beds available on the market use different, separate systems. The feasibility of the project is underlined by the following arguments: the hospital bed concept is composed of several sub-systems which are already feasible, with similar solutions developed by project team members in other fields; the instruments used for realizing the proof-of-concept design are powerful and proven aids that have yielded solutions before; and the state of the art in this field reveals a maturing market niche in search of innovations that could help it grow for the benefit of an ever more health-conscious public and national health systems. A preliminary finite element analysis was performed on the main structural elements of the bed. The product design has been done using CATIA V6, and the study regarding ergonomics and possible patient postures on the bed was simulated in DELMIA V6.

3 Characteristics of the Mock-up

At the moment (Technology Readiness Level 2), the bed features and functions are established and defined; the current development is at the stage of a preliminary 3D model (Fig. 8), on which different simulations and analyses have been performed (Fig. 7).


Fig. 7. Simulation features in DMU kinematics

In terms of characteristics, the bed has a length of 2.5 m, a width of 1.1 m, and can be adjusted in height between 0.5 m and 1 m. The structure of the bed is made of metal and composite materials. To drive the bed, the system uses two electric motors, one of which drives the translation module of the bed. In terms of functions, there are some special mentions, such as: the possibility for the bed to assist the medical staff in performing various procedures, the stimulation of the body's natural healing mechanisms by offering relaxation elements, and the availability of multiple sensor data streams. In moving to Technology Readiness Level 4, the proposed concept seeks validation through experimentation of the bed folding system using a single electric motor and optimization of the mechanism size. Another element to be validated in a proof-of-concept phase is the combination of metal and composite materials used for the mechanical structure of the bed. Based on this level, modifications and improvements can be introduced and then patented. Laboratory prototypes will be built for the dome and the mattress with the ventilation system, to test to what extent these can be integrated and synchronized with a central control system. At the concept level, the "personal environment" has been modelled and all the proposed kinematics issues have been resolved; elements remain to be clarified regarding the sealing of the dome and the coupling system that will provide an internal microclimate. Different scenarios were simulated in order to establish the concept design that creates an ergonomic space for the patient. A few of the most important scenarios from a medical point of view are presented in Figs. 9, 10, 11 and 12.


F. Popişter et al.

Fig. 8. 3D Model of the folded concept

Fig. 9. Body position when the bed is folded

Fig. 10. Thigh position when the bed is folded


Fig. 11. Leg position when the bed is folded

Fig. 12. 3D printed model of the concept

The validation of the mechanism functionality has been accomplished by using DELMIA simulation features available in DMU Kinematics. The mechanism components that are made of composite materials have been designed in the Composite Design module from CATIA V6 and verified using the finite element method (Fig. 7). Also, in the TRL2 stage, the mattress with the ventilation system was designed (Fig. 6), for which a tension subsystem will be created during the transition stage from TRL2 to TRL4. From the point of view of integrating the tracking system, the Microsoft Kinect II equipment has been tested to track the patient continuously. At this stage (TRL 2) the tracking system has been positioned above the dome support. From the point of view of composite materials, different tests were performed using several methods of combining carbon fiber with substances that contain silver ions. The methods used and described in the literature are: direct antimicrobial impregnation of agents in polymers [8], covalent linkages between carbon fiber and binder [9, 10] and component painting [11, 12].


4 Conclusions

The development of a mechanical structure of a hospital bed that enables the bed to be folded into the shape of an armchair using a single electric motor is discussed in the present paper. As a result of achieving this objective, the following outputs have been obtained: the mechanical study of mechanisms and actuation, component stress testing and system study, ergonomic testing regarding the interaction of the bed with the patients and the medical staff, the material costs and the environmental impact study, machining and assembly simulations, detailed designs and execution specifications, the bill of materials, and the laboratory prototype. The integration of the innovative systems to ensure and improve the comfort level of convalescent patients and increase their response to the medical care that they receive is the next important milestone of the new product development project. The research team expects challenges to arise in relation to creating the proper interfaces and connections between the mechanical systems, the electrical actuators and the sensing and software components. Also, once the HoPE bed is complete as a laboratory prototype, we intend to transform the virtual simulation scenarios into physical simulations using stand-ins, in order to validate the entire work of the development team.

References

1. Ghersi, I., Mariño, M., Miralles, M.T.: Smart medical beds in patient-care environments of the twenty-first century: a state-of-art survey. BMC Med. Inform. Decis. Making 18(1), 1–2 (2018)
2. Ghersi, I., Mariño, M., Miralles, M.T.: From modern push-button hospital-beds to 20th century mechatronic beds: a review. J. Phys. Conf. Ser. 705, 012054 (2016). 20th Argentinean Bioengineering Society Congress, SABI 2015 (XX Congreso Argentino de Bioingeniería y IX Jornadas de Ingeniería Clínica), 28–30 October 2015, San Nicolás de los Arroyos, Argentina
3. Arab Hospital – Leading Healthcare Arabic Magazine, published by Arab Health Media. https://thearabhospital.com/articles-eng/hill-roms-smart-beds-commitment-connectivity/. Accessed Jun 2020
4. iF International Forum Design GmbH
5. https://ifworlddesignguide.com/entry/225477-smart-nursing-bed. Accessed Jun 2020
6. Verified Market Research. https://www.verifiedmarketresearch.com/product/smart-hospital-beds-market/. Accessed Sep 2020
7. Ajami, S., Khaleghi, L.: A review on equipped hospital beds with wireless sensor networks for reducing bedsores. J. Res. Med. Sci. 20(10), 1007–1015 (2015). https://doi.org/10.4103/1735-1995.172797
8. iF International Forum Design GmbH. https://ifworlddesignguide.com/entry/189395-smartb-add. Accessed Oct 2020
9. Malvestio. Sigma PCU Electric Bed Scale System. https://malvestio.it/en/prodotti/ward/sigma-cardiac-and-progressive-care-unit/sigma-pcu-electric-bed-346950be/33. Accessed Oct 2020
10. World premiere of Lexus kinetic seat concept. https://newsroom.lexus.eu/world-premiere-of-lexus-kinetic-seat-concept/. Accessed Sep 2020


11. Brody, A., Strupinsky, E., Kline, L.: Active Packaging for Food Applications. Technomic Publishing Co., Lancaster, PA (2001). ISBN 9781587160455
12. Ishitani, T., Ackerman, P., Jagerstad, M., Ohlsson, T. (eds.): Foods and Packaging Materials – Chemical Interactions, p. 177. Royal Society of Chemistry, Cambridge, UK (1995)
13. Gray, J.E., Norton, P.R., Marolda, C.L., Valvano, M.A., Griffiths, K.: Biomaterials 24 (2003). ISSN 0142-9612
14. Ozdemir, M., Yurteri, C., Sadikoglu, H.: Crit. Rev. Food Sci. Nutr. 39(5), 457 (1999). ISSN 1040-8398
15. Appendini, P., Hotchkiss, J.H.: Packag. Technol. Sci. 10(5), 271 (1997)

Industrial Applications

Collaborative Robotics Making a Difference in the Global Pandemic

Mary Doyle-Kent1(B) and Peter Kopacek2

1 Department of Engineering Technology, Waterford Institute of Technology, Waterford, Ireland

[email protected]

2 Institute of Mechanics and Mechatronics, IHRT, TUWien, Vienna, Austria

[email protected]

Abstract. In March 2020 the world as we know it was shaken to the core by the Covid-19 global pandemic. This crisis has had a significant impact on the EU27 economy and triggered unprecedented policy responses across Europe and the globe. Collaborative Robotics, or Cobots, have made a significant difference to many manufacturing plants by facilitating social distancing between workers, introducing flexibility on the factory floor and allowing the plants to manufacture high quality items for use in the medical field. California-based DCL Logistics, a third-party logistics company, decided to employ Cobots to manage a 30% increase in orders in the immediate aftermath of the outbreak, and the Cobots have led to a 300% increase in productivity and a 60% jump in labor cost savings [1]. This paper will look at these flexible robots and discuss the benefits they brought to manufacturing in a time of global crisis.

Keywords: Global pandemic · Covid-19 · Collaborative Robotics · Industry 4.0 · Industry 5.0 · Human centred systems

1 Introduction

As the global pandemic continues to engulf the world in waves, there has been an abrupt impact on the EU27 economy which will continue for the foreseeable future. According to De Vet et al., Europe's economy has experienced a larger impact than the global economy in 2020 and is expected to have a slower recovery in 2021. They forecast that real Gross Domestic Product (GDP) is expected to reach pre-Covid-19 crisis levels only by mid-2022, in both the EU and the euro area. It is important to note that this adjustment is positive compared to previous predictions, but a return of economic activity to pre-crisis levels entails slow growth for the EU economy [2]. The world is reacting to this global pandemic in a myriad of different ways, each national government putting emergency measures into place to try to protect the health and safety of their citizens and, at the same time, to try to keep their economies in as healthy a state as possible. In Europe manufacturing is of critical importance, and finding novel ways of keeping manufacturing plants operating as efficiently and effectively as possible often falls on the shoulders of the technical staff. Automation and digitisation during this global pandemic are being fast-tracked, and this will have a positive impact in the longer term.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 161–169, 2022. https://doi.org/10.1007/978-3-030-90421-0_13


M. Doyle-Kent and P. Kopacek

2 Covid-19 Pandemic and the Manufacturing Industry

Although all businesses were affected by the global pandemic, manufacturing in the EU, according to De Vet et al., suffered a significant negative impact: "In the EU27 experienced a sharp decrease in March and April 2020 (respectively −11.1% and −20% change on the previous period), which coincided with the first wave of the spread of the coronavirus. It was then followed by a rebound in May and June 2020 (respectively 13% and 10.4% change on the previous period) and then by a small but increasing values in the period September-November 2020 that coincided with the resurgence of COVID-19 cases" [2]. As reported by Reuters, when a resurgence in Covid-19 cases in Texas brought many businesses to a halt, eight Cobots kept the company All Axis Machining's metal fabrication facility in Dallas, USA, up and running. They state that the small, nimble Cobots perform multiple jobs, such as machine-tending, sanding, deburring, part inspection and laser marking, leaving the company far less dependent on manual labour. In fact, when all the workers on one shift went into self-quarantine during the pandemic, they reported that it had no impact on the facility's productivity [1]. According to Grover and Karplus, management practices in manufacturing strongly correlate with a company's ability to quickly change the product mix by adjusting operations, and, in return, this may translate into better sales outcomes and fewer factory closures [3]. This flexibility is directly related to many factors, including the technology used to manufacture products. The affordability of automation can be a problem, but the relatively low-cost Cobot promises to pay back the investment in months, making the changeover easier even for small and medium-sized enterprises, and several examples are outlined to demonstrate this [1].
Mark Muro, a Senior Fellow and Policy Director at the Washington-based Brookings Institution, says "the automation drive will result in a net reduction in the workforce as companies invest in technology not just for social distancing, but also to boost productivity and protect profits from the pandemic-induced recession" [1].

3 Collaborative Robots in Manufacturing

Automation and robotics excel in the manufacture of standardised products using standard manufacturing processes in high volumes to an excellent quality standard. When creativity or customisation is expected, the human being is key. The solution is the collaboration of robots and humans. Traditional robots cannot work side by side with humans, but Cobots are designed to work in synchronisation with human employees and were first developed in 2012 in Denmark [4]. A Cobot is not a replacement robot; it assists workers rather than replaces them. These robots are safe around humans because they use force-limiting sensors and rounder geometries than traditional robots. They are lightweight and thus easily moved from task to task. In addition, they are easy to implement and use without a specialised automation Engineer or Technician; in fact, an Operator with Cobot programming skills can deploy one. Another advantage mentioned previously is that Cobots are so affordable that they are a worthwhile investment for any company, regardless of its size.

Collaborative Robotics Making a Difference in the Global Pandemic


Cobots are extremely versatile and can be used for a wide variety of applications, examples of which include: packaging and palletizing, machine tending, industrial assembly, pick and place, quality inspection, injection molding, CNC tending, assembly, polishing, screw driving, gluing, dispensing and welding. They can be easily changed over from one operation to another in a very short time. At the moment they are being used in the larger manufacturing companies, but there is an opportunity to introduce this technology into small to medium-sized companies [5]. Figure 1 shows a family of Cobots from the company Universal Robots [6].

Fig. 1. Cobot example from Universal Robots [5]

As detailed by the ISO 10218 standard, robots can have four types of safety features. They are:

• Safety Monitored Stop
• Hand Guiding
• Speed and Separation Monitoring
• Power and Force Limiting

The safety monitored stop is implemented in environments where the robots operate mostly alone, with occasional human interference. The feature causes the robot to pause (though not shut down) when the safety zone is violated (i.e. a human enters its workspace) [7, 8]. The speed and separation monitoring feature is an extension of the safety monitored stop: instead of adopting a single behaviour throughout the robot's entire workspace, the workspace is divided into several graded safety zones.
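The graded-zone behaviour of speed and separation monitoring can be sketched in a few lines of code. This is an illustrative assumption only, not taken from the standard: the zone boundaries (0.5 m and 1.5 m) and the speed values are hypothetical placeholders, since real limits must come from a cell-specific risk assessment (e.g. per ISO/TS 15066).

```python
def speed_limit(separation_m: float, full_speed_mm_s: float = 1000.0) -> float:
    """Return an allowed robot speed for a given human-robot separation.

    Zone boundaries and speed values are illustrative placeholders only;
    real values come from a risk assessment for the specific cell.
    """
    if separation_m < 0.5:
        return 0.0                      # inner zone: safety-monitored stop
    if separation_m < 1.5:
        return 0.25 * full_speed_mm_s   # middle zone: reduced speed
    return full_speed_mm_s              # outer zone: full speed

# A human walking toward the robot triggers progressively stricter limits:
for d in (2.0, 1.0, 0.3):
    print(d, speed_limit(d))
```

The point of the sketch is the gradation itself: rather than one stop/go decision at a single boundary, the controller maps the measured separation distance to a speed ceiling, zone by zone.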

4 Collaborative Robots Implementation Worldwide

ABI Research state that "various (industrial) robot manufacturers are bringing collaborative robots to the market. This market is expected to rise from $100 million to $1 to $3 billion by 2020." They undertook an assessment of Cobot manufacturers and ranked the leading 12 in a study in 2019. In this study they found that there are currently over 50 manufacturers worldwide, but most do not have a product that is available on a meaningful scale. The "Industrial Collaborative Robots Competitive Assessment" concluded that Universal Robots (UR) were leading, particularly when focusing on implementation volumes. The companies ranked in the assessment were:

• ABB
• Aubo Robotics
• Automata
• Doosan Robotics
• FANUC
• Franka Emika
• Kuka AG
• Precise Automation
• Productive Robotics
• Techman Robot
• Universal Robots
• Yaskawa Motoman

The criteria used were innovation and implementation. The innovation criteria included: payload, software, ergonomics, human-machine interaction experimentation and safety. The implementation criteria focused on: units and revenue, cost and return on investment, partnerships, value-added services and employee numbers [8, 10].

5 Social Distancing and Collaborative Robots

In 2020 the term "social distancing" became part of everyday vocabulary once the modes of transmission of the Covid-19 virus were established. The World Health Organisation describes the modes of transmission of the virus in their July 2020 document, explaining that social distancing was one of the most effective measures against transmission [11]. One of the difficulties in manufacturing plants is that workers need to stand in relatively close proximity on manufacturing lines. While job functions involving marketing, sales, management, finance, and research and development can be carried out virtually, execution on the assembly line needs to happen at specific physical locations. This resulted in disease outbreaks in several different industries. Bui et al. comprehensively studied outbreaks in different sectors. They state that "approximately 12% (1,389 of 11,448) of confirmed COVID-19 cases in Utah were associated with workplace outbreaks. Of the 210 workplace outbreaks occurred in 15 of 20 industry sectors; nearly one half of all workplace outbreaks occurred in three sectors: Manufacturing (43; 20%), Construction (32; 15%) and Wholesale Trade (29; 14%); 58% (806 of 1,389) of workplace outbreak-associated cases occurred in these three sectors." [16]. As stated by Okorie et al., "manufacturing the COVID-19 disruption scope has been largely twofold; an endogenous disruption of manufacturing processes and systems


as well as extreme shifts in demand and supply caused by exogenous supply chain disruption." [17]. Cobots are designed to work with humans in several different modes. "Human-industrial robot collaboration can range from a shared workspace with no direct human-robot contact or task synchronisation, to a robot that adjusts its motion in real-time to the motion of an individual human worker" [8, 9]. Figure 2 illustrates the various types of human-industrial robot collaboration possible, ranging from a completely separate set-up on the left-hand side to a responsive collaboration on the right-hand side.

Fig. 2. Adapted from types of human-industrial robot collaboration [8, 9].

Cobots can be used to facilitate social distancing in a flexible, timely manner due to their ability to be easily and fully integrated into manufacturing lines in a relatively short period of time. In a study undertaken by Doyle-Kent in 2020, a qualitative analysis was carried out in the area of Cobots and their applications in Irish manufacturing plants. It was the first attempt to describe Cobots in an interdisciplinary manner, including the technological, social, ethical, industrial, and educational aspects. Individual concepts can be found, but the idea of gathering these themes together in a cohesive manner was the essence of this research [8]. A number of Cobot Specialists were asked to describe the positives and negatives of introducing Cobots into manufacturing plants. Table 1 shows a summary after a comprehensive thematic analysis was undertaken by the Researcher to analyse the results. It is obvious from this review that Cobots are seen by the Specialists as a very flexible, safe, affordable automation solution that can be easily applied to many problems experienced by Irish companies, including a severe skills shortage in normal times. The detailed answers in this qualitative study also revealed that Cobot Operators, or Coboters, reported feeling more fulfilled and safer in their work environment when working with Cobots. Work output from Cobots is of a very high quality, versatile and


Table 1. Themes resulting from the qualitative study [8].

Theme 1: Health and safety of the workforce is enhanced by introducing collaborative robots.
Theme 2: There are uncertainties about meeting the statutory health and safety requirements by using collaborative robots unguarded.
Theme 3: Collaborative robots are easy to install, to use and to maintain, and do not require a robotics expert on site.
Theme 4: The versatility of collaborative robots makes them uniquely applicable to most environments.
Theme 5: The relatively low cost of a collaborative robot ensures it is within the range of small to medium enterprises.
Theme 6: Collaborative robots may not be financially viable for all businesses.
Theme 7: Collaborative robots improve productivity.
Theme 8: Collaborative robots can fill the skills gaps in industry, and operators will have a more rewarding and interesting work environment due to increased skills and varied work practices.
Theme 9: Most collaborative robots are not working to their full potential in Irish industry, and larger companies have an advantage over smaller ones.

they can operate 24 hours per day. It was recommended that they were a viable option for social distancing, in a time when humans need to stay separated due to the airborne transmission of a deadly virus. In addition, a comprehensive questionnaire was carried out as part of this research: 111 Respondents were asked 37 questions about robotics and automation in their plants. One of the questions asked was "Have you heard of collaborative robotics [Cobot]?" Out of a total of 108 replies, 77 (71.3%) stated yes and 31 (28.7%) no. They were also asked "The current Covid-19 global pandemic makes physical distancing a requirement in the workplace, does this influence your opinion on using Cobots in manufacturing?" Out of a total of 91 replies, 39 (42.9%) stated yes and 52 (57.1%) no. An interpretation of these results is that Cobot Manufacturers need to be much more proactive in how they market their products, to enable Engineering Professionals to have a clear understanding of how to use their products during a pandemic. Whilst gathering information for the literature review, a number of case studies of how Cobots were actually used in the pandemic were highlighted on the Universal Robots LinkedIn page, and a summary of these is outlined in the next section.

6 Case Studies of Collaborative Robots During the Covid-19 Pandemic

The following case studies illustrate how the response to the Covid-19 pandemic has been positively assisted by automation and Cobots. The authors have chosen just a few examples out of the many available online.

Case study 1: Endutec Maschinenbau Systemtechnik GmbH [8, 12]. This machine manufacturing company is also a certified systems integrator. The managing director explained that two years previously they had started to automate parts of


their production with a UR10e Cobot. The reason was to "achieve the fullest possible utilization of our machine capacity while also addressing the shortage of skilled workers. We are always desperately looking for qualified employees. Therefore, we planned to automate as many simple tasks as possible in order to be able to use our staff for higher-value tasks." He went on to say that, in spite of the global pandemic, production ran as normal, the difference being that some of the programmers worked from home and uploaded the programmes onto the company server whilst others worked on site. The only noticeable change, apart from mask wearing and hand sanitizing, was that shop floor production orders from the ERP system had been distributed around the company in paper form; now the employees receive the production order as a PDF in an e-mail. Automation and Cobots are an integral part of what they do, and each task is analyzed to see if automation is possible. They state that the skilled workforce shortage means that they use automation to make their facilities more competitive. Due to Covid-19, their customers are often working from home, which means that they no longer have full and permanent access to their own data; this can cause delays in their orders, but delivery must be on time. Nonetheless, thanks to the Cobot, they meet the reduced delivery times, as it runs through the night and also on weekends. "For us, the crisis has shown that the time and money we invested in automation has more than paid off. I am convinced that other small and medium-sized companies will now also increasingly rely on robot technology to prepare themselves for the future." This case study documents how the UR10e Cobot enabled Endutec to set up a two-shift operation, utilizing its machines to full capacity.

Case study 2: Conquer Manufacturing, India [8, 13].
India has been severely affected by the global pandemic in 2021, but even before this Pradeep was explaining the importance of Cobots to Indian manufacturing [13]. He states that "plant closures, partial layoffs, staggered shifts, labor shortages, stringent hygiene measures and restrictions on the number of people working together at the same time" are some of the realities on the ground. He goes on to state that "according to Business Insider, the seasonally adjusted IHS Markit India Manufacturing Purchasing Managers Index (PMI), a reflection of the health of the manufacturing economy, fell to 27.4 in April (2020), from 51.8 in March", which is the sharpest deterioration in Indian business conditions in the past 15 years. The government is cognisant of the urgent changes that need to be made so that the manufacturing plants can adapt and overcome. Mandatory guidelines (social distancing, testing and tracing, and limiting the number of workers in any location) in Indian manufacturing are a problem for this highly manual workforce. Labour is often unskilled; this, together with the lack of space on the shop floor, is a major barrier to the integration and operation of traditional automation. Cobots overcome these issues, and the following points are key to their successful implementation:

• Social distancing was enabled
• Partial automation was possible
• Quick deployment and flexible redeployment of the Cobots


Democratised automation was the result, because of the fast payback of this relatively low-cost technology and the fact that highly skilled automation experts were not required to deploy and maintain the Cobots.

Case study 3: Reliance Automation [14]. Lacy in 2021 reported on the increase in automation due to the Covid-19 pandemic, quoting the managing director of an automation company in Ireland, Reliance Automation: "We have had two difficult years just as every other business; however, we continue to experience demand for automation resulting from the Covid-19 pandemic situation. Production lines of people who were tightly packed within one meter of each other suddenly needed to distance themselves 2 m apart and there was also a huge increase in demand for Covid-19 related products such as testing and vaccines." She goes on to discuss that food companies, which are usually slow to introduce automation, moved rapidly to incorporate it because of significantly increased demand. Finally, Lacy states that the International Federation of Robotics (IFR) reported that automation has now been deployed 25 times faster than was expected pre-pandemic. As outlined by Universal Robots, Cobots have been used to manufacture face shields, N95 masks and ventilator components. Cobots are also being used for mouth swabs, ultrasounds, and temperature checks [8, 15].

7 Conclusions and Looking Forward

The features of Cobots that make them so versatile during a global pandemic will ensure their uptake post-pandemic. Industry is in the midst of the fourth industrial revolution but is moving steadily into the fifth, where personalisation and customisation are some of the key drivers. Cobots will play an important role in this person-centred manufacturing [5]. Doyle-Kent offered a definition of Industry 5.0: "Industry 5.0 is the human-centred industrial revolution which consolidates the agile, data driven digital tools of Industry 4.0 and synchronises them with highly trained humans working with collaborative technology resulting in innovative, personalised, customised, high value, environmentally optimized, high quality products with a lot size one." [9]. Comprehensively educating Engineers on the benefits of Cobots and how to use them effectively will be key to their widespread use in manufacturing. However, this global pandemic has accelerated exposure to this technology, its possible applications, and the advantages it can bring to manufacturing plants on a global scale.

Acknowledgment. We gratefully acknowledge the support of the CONNEXIONS grant from Waterford Institute of Technology, Ireland.

References

1. Kumar Singh, R.: Coronavirus pandemic advances the march of 'Cobots'. Reuters (2021). https://www.reuters.com/article/us-health-coronavirus-automation-idUSKCN24L18T


2. De Vet, J.M., Nigohosyan, D., Ferrer, J.N., Gross, A.K., Kuehl, S., Flickenschild, M.: Impacts of the COVID-19 pandemic on EU industries. Publication for the Committee on Industry, Research and Energy, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament (2021). https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662903/IPOL_STU(2021)662903_EN.pdf#page42
3. Grover, A., Karplus, V.J.: Coping with COVID-19 (2021). https://openknowledge.worldbank.org/handle/10986/35028
4. Doyle-Kent, M., Kopacek, P.: Industry 5.0: is the manufacturing industry on the cusp of a new revolution? In: Durakbasa, N., Gençyılmaz, M. (eds.) ISPR 2019. LNME, pp. 432–441. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31343-2_38
5. Doyle-Kent, M., Kopacek, P.: Do we need synchronization of the human and robotics to make Industry 5.0 a success story? In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) ISPR 2020. LNME, pp. 302–311. Springer, Cham. https://doi.org/10.1007/978-3-030-62784-3_25
6. Universal Robots: Universal Robots website (2021). https://www.universal-robots.com/uk/education-kit/
7. Kent, M.D., Kopacek, P.: Social and ethical aspects of automation. In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) ISPR 2020. LNME, pp. 363–372. Springer, Cham. https://doi.org/10.1007/978-3-030-62784-3_30
8. Doyle-Kent, M.: Collaborative Robotics in Industry 5.0. Doctoral dissertation, TU Wien (2021). https://doi.org/10.34726/hss.2021.70144
9. International Federation of Robotics: Demystifying Collaborative Industrial Robots (2019). https://ifr.org/downloads/papers/IFR_Demystifying_Collaborative_Robots_Update_2019.pdf
10. ABI Research: Collaborative Robotics Market Exceeds 1 Billion Dollars by 2020 (2020). https://www.abiresearch.com/press/collaborative-robotics-market-exceeds-us1-billion-/
11. World Health Organisation: Modes of transmission of virus causing Covid-19 (2020). https://www.who.int/news-room/commentaries/detail/modes-of-transmission-of-virus-causing-Covid-19-mplications-for-ipc-precaution-recommendations
12. Universal Robots: Manufacturing in the age of Covid-19 (2020). https://blog.universal-robots.com/manufacturing-in-the-age-of-Covid-19-3
13. Universal Robots: Role of Cobots in the Covid-19 era (2020). https://blog.universal-robots.com/in/role-of-Cobots-in-Covid-19era
14. Lacy, E., Reliance Automation: Covid-19 pandemic shifts manufacturing towards automation (2021). https://www.linkedin.com/pulse/Covid-19-pandemic-shifts-manufacturing-towards-automation-emma-lacy/
15. Universal Robots: Cobots vs. Covid-19, Part II (2020). https://blog.universal-robots.com/Cobots-vs.-Covid-19-part-ii
16. Bui, D.P., et al.: Racial and ethnic disparities among COVID-19 cases in workplace outbreaks by industry sector—Utah, March 6–June 5, 2020. Morb. Mortal. Wkly Rep. 69(33), 1133 (2020). https://doi.org/10.15585/mmwr.mm6933e3
17. Okorie, O., Subramoniam, R., Charnley, F., Patsavellas, J., Widdifield, D., Salonitis, K.: Manufacturing in the time of COVID-19: an assessment of barriers and enablers. IEEE Eng. Manage. Rev. 48(3), 167–175 (2020). https://doi.org/10.1109/EMR.2020.3012112

Determination of Strategic Location of UAV Stations

Beyzanur Cayir Ervural(B)

Department of Industrial Engineering, Konya Food and Agriculture University, Meram, Turkey
[email protected]

Abstract. The strategic importance of unmanned aerial vehicles (UAV) has attracted the attention of many researchers. Numerous serious projects and research efforts have gained momentum in recent years, particularly in order to improve the technical performance of such systems and to increase their search and practical capabilities. UAVs, which are among the latest technological inventions, are very advantageous in terms of practicality, flexibility, cost and multitude of usage areas. UAV technology effectively demonstrates its benefits in both civilian applications (disaster management, agricultural applications, health services, etc.) and military platforms (counter-terrorism, reconnaissance activities, anti-smuggling, etc.). Their unique features vary according to implementation purposes, so decision makers can determine the size of the automation system and the altitude, flight time, speed, and capacity of the vehicles according to the application area and specific requirements. In this study, the most suitable mini-UAV ground control stations in the area of responsibility were selected according to the maximum coverage model, regarding different characteristics for civil and military applications. Thus, in order to reach the optimal solution, UAV stations and the service areas to be established within the coverage distance are defined in the specified zones using different scenario analyses. The study is expected to help researchers and strategic decision makers working on this hot topic.

Keywords: Optimization · Location selection · UAV · Maximal covering problem · Strategic decision model

1 Introduction Location models generally attempt to establish facilities in a landscape, often in a network of nodes and arcs, to meet public demands [1]. Quantitative decision models have emerged as useful tools to assist policy makers, strategists, and process managers in determining where to set up sites as the optimal locations of the specified/projected region. In the maximal covering location problem, the goal is to establish a fixed number of facilities on a network to maximize the number of population-weighted demand points covered or served within a given distance or time. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 170–180, 2022. https://doi.org/10.1007/978-3-030-90421-0_14

Determination of Strategic Location of UAV Stations

171

The selection of some strategic locations requires serious optimization applications. Strategically important points such as hospitals, air/naval bases, blood centres, and unmanned aerial vehicle (UAV) stations that provide civil/military services should be placed so as to achieve the most rational, lowest-cost and widest service area [2]. In location selection problems, the aim is to ensure that as many people as possible (the widest coverage area) receive service, while the distance between the facility to be established and the service area is expected to be small. Facilities should be assigned to the points closest to the service zones or centres of need, all of which should be sited conveniently and rationally so that each can be served by its nearest neighbouring facility. The increasing importance and prevalence of UAVs around the world has led to a rapid start of research and studies on the efficient usage of UAVs. Therefore, installing the most suitable UAV station and serving the region with the highest coverage rate is an important decision criterion in technological, social, and economic terms. UAVs serve many different functions across the world, in civilian and commercial undertakings as well as in military goals and efforts. Civilian applications mostly cover healthcare, conservation, cargo transport, measurement for pollution monitoring, agriculture, manufacturing, disaster relief and, nowadays, pandemic response efforts [3]. In military practice, UAVs are employed for reconnaissance, attack and defence, and also for some criminal/terrorist purposes. The main tasks of UAVs can be considered as data transfer/evaluation, flying the aircraft, and providing specific imagery [2]. The obtained information is analysed in combination with other sources to serve a useful common tactical and strategic outcome. It is important to process and transfer the obtained meaningful data in order to take quick action for reliable operations.
In this study, we aimed to decide the strategic locations of UAV stations under certain restrictions, such as population rate, number of daily cases, transportation availability and the number of agricultural and irrigation activities, for both military and civilian aims, considering a high level of service with the maximal coverage model. The use of scenario analysis with different distance ranges and need points at different importance levels provides important results with respect to the objective function. The areas to be covered based on distance (with various field lengths) are determined within a maximal coverage modelling framework. The model aims to maximize the number of centres that receive service with a certain number of facilities to be opened. Scenario analyses were developed and tested for different situations.

2 Literature Review Several studies exist in the literature on the maximum covering location problem [4]. Farahani et al. [5] and Daskin [6] provided comprehensive surveys of various maximum covering location problems and related solution approaches. Albareda-Sambola et al. [7] aimed to minimize cost while maximizing coverage using a capacity- and distance-constrained plant-location approach.


There are numerous studies in the literature focusing on the maximum coverage problem within the location selection problem for UAVs. Caillouet and Razafindralambo [8] proposed deploying UAVs to cover targets, which is a complicated issue since each target should be covered while the operation cost and the UAV altitudes must still guarantee good communication quality. They employed a bi-objective linear programming model to solve the problem with a fair trade-off optimal solution. Otto et al. [9] provided a detailed review of optimization studies on civil applications of UAVs. According to the authors, their study is the first to incorporate drone energy consumption as a function of payload and distance within a drone maximum coverage location problem context. Chauhan et al. [10] studied a maximum coverage facility location problem with drone deployment under real-life UAV battery and weight constraints, and provided two solution techniques based on a greedy search algorithm and a three-stage heuristic. Huang et al. [11] developed an unconstrained mathematical model to search for the optimal sites of UAVs on a street graph to maximize user-equipment coverage and minimize the interference impact. Karatas et al. [12] proposed a bi-objective location-allocation model for UAVs operating in a hostile region, seeking the locations at which to position UAVs for surveillance activities. In that study, the first aim is to maximize the success of search activities and the second is to minimize threats to the UAVs; the authors developed a metaheuristic approach, namely an elitist non-dominated sorting genetic algorithm, to solve such a large-scale problem. As the literature review shows, more applications on the subject are needed, given the limited number of studies and the serious impact of the UAV issue.

3 Methodology

A. Maximal Covering Problem

The Maximal Covering Problem (MCP) is a class of location selection problem that has been applied effectively in areas such as health care, emergency planning, ecology, and security [13]. MCP was modelled by Church and ReVelle [4], and the traditional formulation is given as follows:

Y_j = 1 if demand node j is covered; 0 otherwise
x_i = 1 if candidate site i is sited; 0 otherwise
a_ij = 1 if demand node j can be covered by a facility at candidate site i; 0 otherwise
d_j = demand at node j
P = number of facilities to locate

max Z = Σ_{j∈J} d_j Y_j

subject to

Y_j ≤ Σ_{i∈I} a_ij x_i   ∀ j ∈ J
Σ_{i∈I} x_i ≤ P
x_i ∈ {0, 1} ∀ i ∈ I,  Y_j ∈ {0, 1} ∀ j ∈ J

The objective function maximizes the demand-weighted number of covered points. The first constraint ensures that a demand node counts as covered only if at least one selected site covers it, the second constraint limits the number of facilities to be placed, and the final constraints declare the decision variables binary. Because the maximal covering problem is a combinatorial optimization problem, several solution approaches are used to obtain optimal or near-optimal solutions, such as exact algorithms (branch and bound, Lagrangian relaxation) and heuristic algorithms (greedy search, genetic algorithms, bee algorithm, etc.) for larger instances.
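The greedy heuristic named above can be sketched in a few lines; the demand values and coverage sets below are illustrative, not data from the study:

```python
def greedy_mclp(demand, covers, p):
    """Greedy heuristic for the maximal covering location problem:
    repeatedly open the site whose coverage set adds the most
    demand-weighted uncovered nodes, until p sites are open."""
    open_sites, covered = [], set()
    for _ in range(p):
        best = max(
            (s for s in covers if s not in open_sites),
            key=lambda s: sum(demand[j] for j in covers[s] - covered),
        )
        open_sites.append(best)
        covered |= covers[best]
    # Return the chosen sites and the total covered demand Z.
    return open_sites, sum(demand[j] for j in covered)

# Illustrative instance: 4 candidate sites, 6 demand nodes.
demand = {1: 10, 2: 20, 3: 15, 4: 5, 5: 25, 6: 10}
covers = {"A": {1, 2}, "B": {2, 3, 4}, "C": {5, 6}, "D": {4, 5}}
sites, z = greedy_mclp(demand, covers, p=2)
```

The greedy choice is not guaranteed optimal in general, which is why exact methods such as branch and bound are used when instance size allows.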

4 Application of the Model and Analysis Results The location selection of UAV stations is considered here as a maximum coverage problem. The developed maximal coverage model aims to increase the security level of the region and the satisfaction rate of citizens by meeting the instant needs coming from the security units. In this study, we assumed fifteen eligible sites for effectively allocating UAV stations in rural areas, considering the needs of the relevant region. To create constraints that consider regional differences, weight values were given to each candidate point in light of the opinions of the expert team working in the region, for UAV-related features such as flight availability, number of daily cases, population rate and transportation availability. All these factors are the main criteria taken into account in determining the locations and areas of responsibility of UAV stations. The following tables (Tables 1 and 2) show the importance weights of the determined criteria. For military activities such as intelligence, surveillance, and reconnaissance requests, we considered the significant determinants to be the daily case number, population rate and transportation availability, with importance levels of 0.5, 0.3 and 0.2, respectively. Similarly, for civil activities such as agriculture, irrigation and land control, we evaluated the important indicators as population ratio, civil activities and transportation availability, with importance levels of 0.3, 0.5 and 0.2, respectively. The deployment of UAV ground control stations differs depending on the regional characteristics of each land segment. Agriculture, irrigation and land control works are assumed to be civil activities.
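Under the stated importance levels, the composite weight of a candidate site is a weighted sum of its criterion scores; a minimal sketch, using the military weights from the text and (as an assumption) the first row of Table 1 as criterion values:

```python
# Composite site weight: the text assigns importance levels 0.3 to the
# population rate, 0.5 to the daily case number and 0.2 to the
# transportation availability for military activities.
def composite_weight(pop, cases, transport, w=(0.3, 0.5, 0.2)):
    return w[0] * pop + w[1] * cases + w[2] * transport

# First candidate site of Table 1: 0.0553, 0.0301, 0.071
w1 = composite_weight(0.0553, 0.0301, 0.071)  # matches the tabulated
                                              # 0.0459 up to rounding
```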

Table 1. Importance weights of military applications

Site  Pop. rate (w1)  Daily case number (w2)  Transportation availability (w3)  Wtot
1     0.0553          0.0301                  0.071                             0.0459
2     0.0664          0.0602                  0.119                             0.0738
3     0.0775          0.0904                  0.095                             0.0875
4     0.0620          0.0452                  0.071                             0.0555
5     0.0752          0.0753                  0.048                             0.0697
6     0.0797          0.0964                  0.048                             0.0816
7     0.0974          0.1355                  0.095                             0.1160
8     0.0581          0.0512                  0.048                             0.0526
9     0.0395          0.0211                  0.071                             0.0367
10    0.0470          0.0331                  0.024                             0.0354
11    0.0498          0.0392                  0.024                             0.0393
12    0.0653          0.0512                  0.048                             0.0547
13    0.0686          0.0602                  0.071                             0.0650
14    0.0863          0.1205                  0.095                             0.1052
15    0.0719          0.0904                  0.071                             0.0810

Table 2. Importance weights of civil activities

Site  Pop. rate (w1)  Civil activities (w2)  Transportation availability (w3)  Wtot
1     0.0553          0.051                  0.071                             0.0562
2     0.0664          0.084                  0.119                             0.0860
3     0.0775          0.095                  0.095                             0.0896
4     0.0620          0.034                  0.071                             0.0498
5     0.0752          0.057                  0.048                             0.0608
6     0.0797          0.101                  0.048                             0.0841
7     0.0974          0.118                  0.095                             0.1074
8     0.0581          0.044                  0.048                             0.0489
9     0.0395          0.020                  0.071                             0.0363
10    0.0470          0.030                  0.024                             0.0341
11    0.0498          0.037                  0.024                             0.0383
12    0.0653          0.041                  0.048                             0.0494
13    0.0686          0.061                  0.071                             0.0653
14    0.0863          0.118                  0.095                             0.1041
15    0.0719          0.108                  0.071                             0.0899


Table 3. Distance between facilities (km) — a symmetric 15 × 15 matrix of pairwise distances between the fifteen candidate sites, with zeros on the diagonal.
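The coverage indicator m_ij used in the model below is derived from such a distance matrix together with a coverage radius; a minimal sketch with an illustrative 3 × 3 matrix (not the paper's Table 3), using the 10 km and 20 km radii of the scenarios:

```python
# Build the binary coverage matrix from a symmetric distance matrix
# and a coverage radius: m[i][j] = 1 iff site i can cover node j.
dist = [
    [0.0, 15.56, 19.68],
    [15.56, 0.0, 12.34],
    [19.68, 12.34, 0.0],
]

def coverage_matrix(dist, radius):
    return [[1 if d <= radius else 0 for d in row] for row in dist]

m10 = coverage_matrix(dist, 10.0)  # each site only covers itself
m20 = coverage_matrix(dist, 20.0)  # every pair lies within 20 km
```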

The notation utilized in the study is given below.

i, j = indices of candidate sites and demand nodes, i, j = 1, …, 15
t = type of UAV station, t = 1, 2
y_jt = 1 if demand node j is covered by UAV station type t; 0 otherwise
x_ij = 1 if demand node j is covered by a station located at candidate site i; 0 otherwise
m_ij = 1 if demand node i can be covered by a facility at candidate site j according to the defined distance; 0 otherwise
w_it = importance level of UAV type t located at site i
C = number of facilities to locate

max Z = Σ_{i∈I} Σ_{t∈T} w_it x_ij

subject to

x_ij ≤ Σ_{i∈I} m_ij y_jt   ∀ j ∈ J
Σ_{j∈J} y_jt ≤ C

x_ij ∈ {0, 1} ∀ i ∈ I, j ∈ J;  y_jt ∈ {0, 1} ∀ j ∈ J, t ∈ T

The objective function maximizes the total covered demand. The second constraint ensures that if demand node j is not covered by a station, the corresponding variable x_ij is forced to take the value 0. The third constraint limits the number of opened facilities to C, and the last constraints declare the decision variables binary. As the analysis shows, the obtained results depend on the number of facilities and the distances between the candidate sites (Table 3), according to the developed mathematical model. In the first case, we assume the installation of three UAV stations within ten km and twenty km range circles; we then considered five and eight UAV stations within ten and twenty km range circles, respectively. We evaluated the allocated regions and the allocated station types by expanding the constraints step by step. The optimal solution of the model, obtained using GAMS software, is given in Table 4. According to the analysis results, when the number of UAV stations equals three within a ten km distance, the identified station locations emerge as two stations of both UAV types and six of the first type. Moreover, if the second UAV station is installed, the fifth and twelfth regions receive service, and if the sixth UAV station is installed, the seventh region receives service. For the first case, the objective function equals 0.458. Similarly, if eight UAV stations are installed within the twenty km range, seven stations (2, 3, 4, 5, 6, 12, 15) of the first UAV type and just one station (5) of the second type are assigned to sites (as seen in Table 4). Moreover, if the second UAV station is installed, regions 3, 4, 5, 12 and 15 get service in the identified area (20 km).
If the third UAV station is installed, the close zones are the 2nd, 4th, 12th and 15th regions, all of which can take service; when the fourth UAV station is installed, the fifth region gets service; and if the fifth UAV station is established, the second and sixth regions satisfy their regional requirements. If the sixth UAV station is established, the fifth region satisfies its rural requirements, and if the seventh UAV station is installed, the fifth and sixth regions meet their needs. If the twelfth UAV station is installed, the fifth location meets its demands, and finally, if the fifteenth UAV station is established in the region, the 2nd, 3rd and 5th sites satisfy the regional needs. All facilities were allocated to meet each specified district's military and civil requirements and expectations (fifteen subsets). The optimal maximal covering plan with the minimum number of facilities provided a satisfactory and successful result (Fig. 1).


Table 4. Scenario analyses and the obtained results


Fig. 1. Civil and military applications of UAVs

5 Conclusion and Future Research Ideal location/site selection has always been a substantial research topic; for instance, an appropriate location for a UAV station provides strategic, operational, and tactical advantages over competitors. The most significant purpose of location selection problems is to be close to demand points and to reach all requirement points within the given restrictions. In this study, UAVs, which have become one of the most current topics in recent years, and the optimum positions of UAV stations utilized for civil or military purposes were evaluated under the maximum coverage modelling approach. The aim is to find the UAV station assignments that cover the most convenient targets according to their priority values. In the model, the simultaneous deployment of two different types of UAVs with distinct coverage distances and technical characteristics to diverse points has been examined. The determination of candidate points was made according to expert opinions. To reflect the effect of regional conditions on the model, the weight values of the candidate points where a station would be established were added to the model. UAV station assignments are performed according to criteria such as the number of cases in the region, transportation status and population ratio.


To minimize the uncertainty in the selection of candidate points and to reflect the effect of regional conditions on the model, the weight values of the positions to be located were added to the model. As UAV types develop and terrain conditions change, the possibilities and capabilities of UAV stations will also change. For this reason, it should be taken into account that the covering distance of each UAV system will vary with the situation; the coverage distances in the model are based on the technical possibilities and capabilities of the UAV types. For future studies, the model can be extended by adding new constraints, new decision variables and new dynamic conditions for diverse application areas. Alternatively, stochastic programming can be considered for probabilistic structures or uncertain environments such as weather conditions, transfer efficiency ratios, service ratios, etc.

References
1. Drezner, Z., Hamacher, H.W. (eds.): Facility Location: Applications and Theory. Springer, Heidelberg (2002). https://doi.org/10.1007/978-3-642-56082-8
2. Giordan, D., et al.: The use of unmanned aerial vehicles (UAVs) for engineering geology applications. Bull. Eng. Geol. Env. 79(7), 3437–3481 (2020). https://doi.org/10.1007/s10064-020-01766-2
3. Maddikunta, P.K.R., et al.: Unmanned aerial vehicles in smart agriculture: applications, requirements and challenges. IEEE Sens. J. 21(16), 17608–17619 (2021)
4. Church, R., ReVelle, C.: The maximal covering location problem. Pap. Reg. Sci. Assoc. 32(1), 101–118 (1974). https://doi.org/10.1007/BF01942293
5. Farahani, R.Z., Asgari, N., Heidari, N., Hosseininia, M., Goh, M.: Covering problems in facility location: a review. Comput. Ind. Eng. 62(1), 368–407 (2012). https://doi.org/10.1016/J.CIE.2011.08.020
6. Daskin, M.S.: Network and Discrete Location: Models, Algorithms, and Applications. Wiley, Hoboken (2011)
7. Albareda-Sambola, M., Elena, F., Gilbert, L.: The capacity and distance constrained plant location problem. Comput. Oper. Res. 36(2), 597–611 (2009). https://doi.org/10.1016/J.COR.2007.10.017
8. Caillouet, C., Razafindralambo, T.: Efficient deployment of connected unmanned aerial vehicles for optimal target coverage. In: 2017 Global Information Infrastructure and Networking Symposium, GIIS 2017, pp. 1–8 (2017). https://doi.org/10.1109/GIIS.2017.8169803
9. Otto, A., Agatz, N., Campbell, J., Golden, B., Pesch, E.: Optimization approaches for civil applications of unmanned aerial vehicles (UAVs) or aerial drones: a survey. Networks 72(4), 411–458 (2018). https://doi.org/10.1002/NET.21818
10. Chauhan, D., Unnikrishnan, A., Figliozzi, M.: Maximum coverage capacitated facility location problem with range constrained drones. Transp. Res. Part C Emerg. Technol. 99, 1–18 (2019). https://doi.org/10.1016/J.TRC.2018.12.001
11. Huang, H., Savkin, A.V.: A method for optimized deployment of unmanned aerial vehicles for maximum coverage and minimum interference in cellular networks. IEEE Trans. Ind. Inf. 15(5), 2638–2647 (2019). https://doi.org/10.1109/TII.2018.2875041


12. Karatas, M., Yakıcı, E., Dasci, A.: Solving a bi-objective unmanned aircraft system location-allocation problem. Ann. Oper. Res., 1–24 (2021). https://doi.org/10.1007/s10479-020-03892-2
13. Church, R.L., Murray, A.: Location modeling and covering metrics. In: Location Covering Models. ASS, pp. 1–22. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99846-6_1

Evaluation of Aluminium Alloy (AlSi9Cu3(Fe)) Porosity by Destructive and Non-destructive Method (Computed Tomography)

Michaela Kritikos1(B), Martina Kusá1, and Martin Kusý2

1 Institute of Production Technologies, Faculty of Materials Science and Technology, Slovak University of Technology in Bratislava, Bratislava, Slovakia {michaela.kritikos,martina.kusa}@stuba.sk
2 Institute of Materials Science, Faculty of Materials Science and Technology, Slovak University of Technology in Bratislava, Bratislava, Slovakia [email protected]

Abstract. Nowadays, porosity evaluation is very often part of the process of evaluating a part's quality. Most porosity requirements appear on drawings for the measurement of plastic or aluminium parts that have to be part of an assembled group. Until now, porosity was investigated by a destructive method (the part was destroyed and was not available for further use), which means that producers have to make many parts just for testing. In recent years the world and society have exerted pressure to reduce waste, and the application of non-destructive testing can decrease the waste from part production. A very sophisticated kind of non-destructive testing is computed tomography, which uses an X-ray beam for part scanning. With suitable software it can provide results on defects in the part without destroying it, so the part can still be implemented in the assembled group or used for its intended purpose. However, there are still many questions about the accuracy of the data obtained from the virtual scan and how well it represents reality. This paper deals with the comparison of pores evaluated in chosen cross-sections by a destructive and a non-destructive method. The finding from the experimental investigation was that the CT device can substitute the metallurgical cut very accurately in the case of larger pores with diameters above 1 mm. For pores with a minimum enclosed circle below 0.5 mm, the relative difference could reach up to 40%. The results are very important for the practical use of computed tomography in defectoscopy to prevent parts' destruction. Keywords: Destructive testing · Non-destructive testing · Computed tomography – X-ray · Porosity

1 Introduction Porosity in structural materials generally leads to deterioration of mechanical properties [1], although in the case of specific porous materials, metal foams, the presence of pores © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 181–191, 2022. https://doi.org/10.1007/978-3-030-90421-0_15

182

M. Kritikos et al.

is crucial and leads to unique properties [2]. The main problem is that porosity can critically weaken the strength of a material. In general, porosity can be described as a large number of air or gaseous microbubbles, from dissolved hydrogen originating in the reaction between the molten melt and the moulding material [3]. The presence of pores is related to the production technology, and pores are most often classified as blow holes, shrinkage porosity, gas porosity and shrinkage cavities [1, 4, 5]. Their shape, position, size, distribution and quantity can be used to characterize defects [1]. The presence of pores negatively affects the static tensile strength and, especially, the fatigue strength characteristics [1, 5]. Increased porosity can also negatively affect other useful properties of final products, e.g. the hermetic tightness of castings, which is used as a test method to inspect casting quality [4]. The importance of pores is closely linked to the need to develop techniques that allow their detection, localization and, if necessary, quantification and determination of characteristic dimensions. On the one hand, there are conventional techniques based on the inspection of predetermined metallographic sections. They are significantly limited by high laboriousness and the inability to detect pores whose position does not correspond to the position of the cut. This disadvantage is overcome by modern techniques of non-destructive defectoscopy, which include, for example, X-ray computed tomography, pulse infrared thermography, ultrasound and the eddy current method [4]. However, their effective application depends on the type of material, the shape of the inspected component and the size of the defects. Thermography is effective, for example, for detecting the pores and imperfections of carbon fibre reinforced polymer matrix composites [3].
By contrast, it fails in the case of Al-based metal alloys, for which the X-ray computed tomography (XCT) method has proved very effective [4]. It is the development of XCT that plays an important role in controlling the closed internal porosity of castings. Advanced image visualization techniques enable effective image analysis and evaluation of the inspected part. They also make it possible to analyse, in an automated mode, a sufficiently high number of inspected parts, which enables the possible integration of this inspection technique into the production process. Despite the increasing accuracy of XCT, in many cases verification of sites with indicated increased porosity by means of targeted sections is requested, in order to inspect the conformity of shape, size, distribution, etc. This procedure is most often realized by means of metallographic section preparation combined with conventional optical microscopy (OM), applied in several fields of view that are evaluated separately or in the form of a single (stitched) image. The aim of this article is to compare the non-destructive technique used to determine the shape and size of pores in the monitored sample with the destructive method, which shows the shape and size of pores in individual planes by direct observation.

2 Experimental Approach Current development of the engineering industry (and not only there) shows that product defect analysis is very important, and it is becoming part of almost every quality control process. The most common approach to porosity evaluation has been a destructive


cut of the part, with the amount or size of the pores evaluated by microscopy. In recent years there has been an effort toward non-destructive porosity evaluation. A possible solution is the application of computed tomography for internal part quality control. The experimental investigation in this paper is based on a comparison of two-dimensional porosity evaluation using metallurgical cuts at different heights and microscopy on the one side, and on the other side the application of a CT device scanning the assessed part virtually at the same heights. There are still many questions and distrust about the precision of CT scans, and many companies do not want to use computed tomography for porosity evaluation. The chosen part was made of a die casting alloy, specifically AlSi9Cu3(Fe). The chemical composition of the alloy, determined by a Bruker Tasman Q4 optical emission spectrometer, is shown in Table 1. Exhibiting good casting properties, this alloy represents a typical choice for die cast components with complex shapes and thin walls. The component is cylindrical in shape with a central hole. There are 5 inner fins at the upper part of the component. The most intriguing part of the design is localized just above the dimension 26 mm shown in Fig. 3, where the first cut was placed. Here the segmented double-wall design of the component terminates and transforms into a single wall with a significant reduction of wall thickness. In this region excessive porosity was expected to form.

Table 1. Chemical composition for the die casting alloy - AlSi9Cu3(Fe)

Si (%)  Fe (%)  Cu (%)  Mn (%)  Mg (%)  Cr (%)  Ni (%)  Zn (%)  Pb (%)  Sn (%)  Ti (%)
9.78    0.82    2.34    0.22    0.22    0.04    0.05    0.82    0.06    0.03    0.03

The CT device used in this investigation is a METROTOM 1500 from Zeiss. The software for part scanning was Metrotom OS 2.8. The scanning parameters were set as follows:

– voltage: 200 kV,
– current: 850 µA,
– number of projections: 1700,
– detector resolution: 1024 × 1024 px,
– voxel size: 119.43 µm.

A 2 mm thick copper filter was used for the part's scanning. The scan obtained from computed tomography can be seen in Fig. 3. The data evaluation was performed in VGStudio MAX 3.0 software, where different porosity analyses can be chosen. In this case the VGDefX analysis method was used; it is the most commonly used method for porosity evaluation of aluminium and plastic parts. At first, a three-dimensional porosity evaluation was executed (Fig. 1). It was necessary for determining the area with a high incidence of pores, which would be subjected to metallurgical cutting.


Fig. 1. Full porosity evaluation of the part taking into account probability higher than 1

Fig. 2. Porosity evaluation of the part taking into account probability higher than 2

As can be seen, defects of different sizes were evaluated. Pores with a probability of 1 were also included; however, the occurrence of those pores is questionable. In commercial measurements, pores with a probability higher than 2 are evaluated. If the pores with probability less than 2 are not taken into consideration, the porosity evaluation looks as shown in Fig. 2.


Fig. 3. First cut section

According to the porosity evaluated in the entire part, the cut sections for metallography were chosen. The first cut section was set at a height of 26 mm from the top plane (Fig. 3), another at a height of 25.8 mm, and the last at a height of 24.4 mm. The same sections were used for two-dimensional porosity evaluation virtually on the scan in the software. For the evaluation and comparison of the pores' size, the diameter of the minimum enclosed circle was used. The complementary destructive technique used for porosity determination in the three different metallographic cross-sections was optical microscopy (OM). A Zeiss LSM 700 in white-light OM mode with a Plan Apochromat objective (5× magnification, numerical aperture 0.16) was used to collect bright-field images. Data collection was executed using Zeiss ZEN2009 proprietary software. 2D images of the entire cross-section of the component were collected as an array of individual images organized in a regular matrix of 15 × 12 horizontally and vertically aligned images with 6% overlap. Subsequently, the individual images were stitched into single images, which were further processed in ImageJ FIJI [6] to determine detailed data on porosity – the diameter of the enclosed circle. The comparison of the pores in the first cut section (26 mm) is shown in Fig. 4.
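The minimum enclosed circle of a pore can be computed by brute force over the pore's boundary points; this is a sketch of one standard approach, not the algorithm actually implemented in VGStudio or ImageJ:

```python
from itertools import combinations
from math import dist

def min_enclosing_circle_diameter(points):
    """Brute-force minimum enclosing circle of a small 2-D point set
    (e.g. boundary pixels of a segmented pore); O(n^4), sketch only."""
    def contains(c, r, eps=1e-9):
        return all(dist(c, p) <= r + eps for p in points)

    best = float("inf")
    # Circles defined by two points (the pair forms a diameter)...
    for a, b in combinations(points, 2):
        c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        r = dist(a, b) / 2
        if contains(c, r):
            best = min(best, r)
    # ...and circumcircles defined by three non-collinear points.
    for a, b, p in combinations(points, 3):
        d = 2 * (a[0] * (b[1] - p[1]) + b[0] * (p[1] - a[1]) + p[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            continue  # collinear triple, no circumcircle
        ux = ((a[0] ** 2 + a[1] ** 2) * (b[1] - p[1]) + (b[0] ** 2 + b[1] ** 2) * (p[1] - a[1])
              + (p[0] ** 2 + p[1] ** 2) * (a[1] - b[1])) / d
        uy = ((a[0] ** 2 + a[1] ** 2) * (p[0] - b[0]) + (b[0] ** 2 + b[1] ** 2) * (a[0] - p[0])
              + (p[0] ** 2 + p[1] ** 2) * (b[0] - a[0])) / d
        r = dist((ux, uy), a)
        if contains((ux, uy), r):
            best = min(best, r)
    return 2 * best

# Unit square: the enclosing circle is its circumcircle, diameter sqrt(2).
d = min_enclosing_circle_diameter([(0, 0), (1, 0), (0, 1), (1, 1)])
```

For large point sets, Welzl's expected-linear-time algorithm is the usual choice instead of this brute force.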


Fig. 4. Porosity comparison at the height of 26 mm from the top plane (pore labels I1–I6; scale bar 10 mm)


The visual comparison shows that the distribution of the detected pores was the same and that the defects evaluated in the CT scan were comparable to the real part in this cross-section. The numerical evaluation of the scan can be seen in Table 2. The biggest difference in pore size, 0.238 mm, was found for the largest pore, with a reference dimension of 3.153 mm. The smallest pore-diameter difference was 0.045 mm.

Table 2. Individual parameters for pores found in the cross-section at 26 mm

Pore label | Diameter – microscopy (mm) | Diameter – CT (mm) | Difference (mm) | Relative difference (%)
I1 | 3.153 | 2.915 | 0.238 | 8%
I2 | 0.923 | 0.878 | 0.045 | 5%
I3 | 0.861 | 0.771 | 0.090 | 10%
I4 | 0.840 | 0.719 | 0.121 | 14%
I5 | 0.696 | 0.603 | 0.093 | 13%
I6 | 0.579 | 0.402 | 0.177 | 31%
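The "Difference" and "Relative difference" columns of Table 2 can be reproduced from the paired diameters, as in this Python sketch (the data come from Table 2; the function name is illustrative):

```python
# Pore diameters at the 26 mm cut (Table 2): label -> (microscopy, CT), in mm.
pores_26mm = {
    "I1": (3.153, 2.915), "I2": (0.923, 0.878), "I3": (0.861, 0.771),
    "I4": (0.840, 0.719), "I5": (0.696, 0.603), "I6": (0.579, 0.402),
}

def compare_pores(pores):
    """Absolute difference (mm) and relative difference (%, vs. microscopy)."""
    return {label: (round(d_om - d_ct, 3), round(100 * (d_om - d_ct) / d_om))
            for label, (d_om, d_ct) in pores.items()}
```

For example, pore I6 yields a difference of 0.177 mm and a relative difference of 31%, matching the last row of the table.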

After this evaluation, the experimental investigation continued by grinding to another cut section (25.8 mm). The evaluation of the pores in this area can be seen in Fig. 5. As seen in the figure, the biggest pores detected by the X-ray beam were also revealed by the metallurgical cut and microscopy observation, and the distribution of the pores was very close. The numerical evaluation of the scan can be seen in Table 3. Differences between the pore diameters obtained by CT-scan evaluation and by the metallurgical cut were between 0.098 mm and 0.289 mm. The last step of the experimental investigation was grinding at the height of 24.4 mm. The achieved porosity can be seen in Fig. 6. The same cut section was also set up in the virtual scan from the CT device. The comparison of the visual porosity evaluation shows that the CT device can give very accurate results representing the real state: the pores detected by the CT device are also present in the physical cut, and their shapes are very similar.


Fig. 5. Porosity comparison at the height of 25.8 mm from the top plane (pores labelled II1–II9)

It is very important to address the accuracy of this new porosity method in the industrial process, so that operators can be confident that the results achieved by CT scanning closely represent reality. The only way to compare CT results with reality is to perform destructive metallurgical cuts after the CT scanning. The numerical evaluation of the scan can be seen in Table 4. The biggest pore-size difference between the two methods was 0.241 mm and the smallest was −0.045 mm.


Fig. 6. Porosity comparison at the height of 24.4 mm from the top plane (pores labelled III1–III11; scale bar 10 mm)

Table 3. Individual parameters for pores found in the cross-section at 25.8 mm

Pore label | Diameter – microscopy (mm) | Diameter – CT (mm) | Difference (mm) | Relative difference (%)
II1 | 3.162 | 2.926 | 0.236 | 7%
II2 | 0.975 | 0.693 | 0.282 | 29%
II3 | 0.917 | 0.628 | 0.289 | 32%
II4 | 0.779 | 0.622 | 0.157 | 20%
II5 | 0.693 | 0.592 | 0.101 | 15%
II6 | 0.662 | 0.564 | 0.098 | 15%
II7 | 0.654 | 0.555 | 0.099 | 15%
II8 | 0.651 | 0.529 | 0.122 | 19%
II9 | 0.567 | 0.446 | 0.121 | 21%

Table 4. Individual parameters for pores found in the cross-section at 24.4 mm

Pore label | Diameter – microscopy (mm) | Diameter – CT (mm) | Difference (mm) | Relative difference (%)
III1 | 2.901 | 2.993 | −0.092 | 3%
III2 | 2.351 | 2.288 | 0.063 | 3%
III3 | 2.037 | 2.082 | −0.045 | 2%
III4 | 1.288 | 1.156 | 0.132 | 10%
III5 | 0.876 | 0.635 | 0.241 | 28%
III6 | 0.606 | 0.419 | 0.187 | 31%
III7 | 0.590 | 0.408 | 0.182 | 31%
III8 | 0.507 | 0.303 | 0.204 | 40%
III9 | 0.471 | 0.266 | 0.205 | 44%
III10 | 0.459 | 0.229 | 0.230 | 50%
III11 | 0.274 | 0.102 | 0.172 | 63%

3 Conclusion and Discussion

This experimental investigation dealt with porosity evaluation by a destructive method (metallurgical cut) and a non-destructive method (computed tomography). It is important to pay attention to this topic, because distrust of the results obtained by computed tomography is significant and makes it difficult to apply the method in practice. This paper showed that:

– computed tomography is a relatively accurate method, with scan results representing reality in the analysed cross-section plane,
– the settings of the porosity evaluation (surface determination, deviation factor) can influence the accuracy of the results obtained from computed tomography,
– porosity evaluation by physical cut and microscopy is more time consuming than porosity evaluation by computed tomography – in this study, the porosity evaluation of each physical cut section took 141 min, and the CT application led to a time saving of 72 min,
– pores with a probability less than 2 are also visible in the metallurgical cut, and it is necessary to evaluate them,
– the porosity evaluated by microscopy is closer to the real state,
– the relative differences documented in Tables 2, 3 and 4 show that CT is more precise for larger pores with diameters above 1 mm, typically within 10% of the optical-microscopy results; for pores with a minimum enclosing circle below 0.5 mm, the relative difference can exceed 40%.

Acknowledgment. This research was supported by the research project KEGA No. 022STU-4/2019. The authors express their sincere thanks for the financial contributions.

References
1. Nourian-Avval, A., Fatemi, A.: Characterization and analysis of porosities in high pressure die cast aluminum by using metallography, X-ray radiography, and micro-computed tomography. Materials 13(14), 3068 (2020). https://doi.org/10.3390/ma13143068
2. Lu, G., (Max) Lu, G., Xiao, Z.: Mechanical properties of porous materials. J. Porous Mater. 6, 359–368 (1999). https://doi.org/10.1023/A:1009669730778
3. Mayr, G., Plank, B., Sekelja, J., Hendorfer, G.: Active thermography as a quantitative method for non-destructive evaluation of porous carbon fiber reinforced polymers. NDT & E Int. 44(7), 537–543 (2011). https://doi.org/10.1016/j.ndteint.2011.05.012
4. Wilczek, A., Długosz, P., Hebda, M.: Porosity characterization of aluminium castings by using particular non-destructive techniques. J. Nondestr. Eval. 34(3), 1–7 (2015). https://doi.org/10.1007/s10921-015-0302-z
5. Nicoletto, G., Anzelotti, G., Konečná, R.: X-ray computed tomography vs. metallography for pore sizing and fatigue of cast Al-alloys. Procedia Eng. 2(1), 547–554 (2010). https://doi.org/10.1016/j.proeng.2010.03.059
6. Schindelin, J., Arganda-Carreras, I., Frise, E., et al.: Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012). https://doi.org/10.1038/nmeth.2019

Improvement of Solid Spreader Blade Design Using Discrete Element Method (DEM) Applications

Atakan Soysal1, Pınar Demircioğlu2(B), and İsmail Böğrekçi2

1 EYS Metal R&D Center, Aydin, Turkey

[email protected]

2 Mechanical Engineering Department, Aydın Adnan Menderes University, Aydin, Turkey

{pinar.demircioglu,ibogrekci}@adu.edu.tr

Abstract. The time and fuel consumption required to complete the fertilization process is high, as traditional blade designs are better suited to laying the manure directly behind the machine. When a homogeneous distribution over a wide area is achieved, both time and fuel are saved. This study presents improvements to the blade design of solid manure distribution trailers used in the fertilization of agricultural land, aimed at increasing the spreading width and obtaining a homogeneous distribution in the field. The new design was tested both in the field and in a computer environment. By bending the blades on the beater drums, the throwing surface of the blades was increased, raising the amount of fertilizer thrown per unit time. Collection boxes were used to monitor the distribution in the field. The tractor-driven machine was operated at three different forward speeds (4 km/h, 5 km/h and 6 km/h), and the amount of fertilizer accumulated in the boxes placed along the determined route was observed. Two different fertilizers were used in this experimental study: solid manure (65% moisture) and compost (50% moisture). The bulk densities of these materials were measured as 720 kg/m3 and 600 kg/m3, respectively. After the distribution was completed, the fertilizer collected in each container was weighed and the coefficient of variation was calculated.

Keywords: DEM analysis · Distribution pattern · Field application · Solid spreader · Vertical beater

1 Introduction

The nutrition and development of plants depend on the presence of sufficient nutrients in the soil. Soil contains many mineral substances, but their amounts are not always sufficient; in particular, land on which plants are grown becomes poor in nutrients. Plants take these elements into their bodies and use them in the synthesis of organic compounds and in the biochemical processes necessary for growth and metabolic activities [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 192–201, 2022. https://doi.org/10.1007/978-3-030-90421-0_16

Improvement of Solid Spreader Blade Design Using DEM Applications

193

Fertilizing equipment such as solid spreaders is used to fertilize the land. Fertilizing has two vital purposes: the first is to amend the soil in terms of nutrients, and the second is to improve the biological and physical properties of the soil so that plants grow efficiently [2].

2 Literature Review

In Turkey, a substantial part of the soil is insufficient in organic matter, and this lack of organic matter accelerates the deterioration of the soil. Increasing the organic-matter content of soils and improving soil properties are therefore among the priority issues [3, 4]. 75% of the soil in Turkey has insufficient nitrogen and phosphorus, while only 1.3% has insufficient potassium; only 6% of the soil has enough nitrogen and phosphorus, whereas 80% has enough potassium [5]. The manpower required to fertilize the land is high, and the time needed keeps increasing with the growing needs of plants. To minimize time loss and maximize fertilizing efficiency, demand for manure spreaders is increasing greatly [6]. Maximizing the efficiency of solid manure application shortens fertilizer-throwing time, decreases the harmful effects on human health and provides a uniform spread pattern for the soil. While the use of solid manure is widespread in some parts of the country, mechanization is still not at the desired level; fertilizing land using a solid spreader has been found to be 7 to 10 times more efficient than traditional manual methods [7]. The main task of manure-spreading machines is to distribute the fertilizer uniformly over the field. Excessive manuring causes the vegetative component of the plant to increase, and NO3 and similar plant nutrients that are not used by the plant are washed deep into the soil by rain or irrigation water and mixed with the ground water. Insufficient manuring decreases product yield and quality [8].

3 Material and Method

A. Vehicle Specifications
The solid spreader used in this study has a single axle and two rubber wheels, and is driven by a tractor. A dual vertical beater frame is attached at the rear end of the trailer. The rear cover, chain conveyor and beaters are controlled through the hydraulic controls of the tractor. A general view of the trailer is given in Fig. 1.

194

A. Soysal et al.

Fig. 1. General view of solid spreader [1]

Table 1. Solid spreader specifications [1]

Specification | Value
Overall Length | 6585 mm
Spread Direction | Left, rear, right
Overall Width | 2820 mm
Beater Type | Dual vertical
Overall Height | 3245 mm
Tractor Power | 90 HP
Loading Capacity | 12 m3
PTO Revolution | 540 & 1000 RPM
Weight | 4250 kg
Gearbox Revolution | 870 RPM
Beater Quantity | 2
Shaft Diameter | 139 mm
Shaft Speed | 870 RPM
Diameter of Beater | 674 mm
Beater Angle | 6° or 12°
Beater Material | S235

Technical details of the solid spreader are given in Table 1.

B. Solid Manure and Compost Details
The fertilizers were collected from a dairy farm in the city of Aydin. The first sample was gathered from a screw-press separator, while the second was produced in an in-vessel compost drum. In order to reduce the moisture of the compost further, it was matured by the windrowing method. The moisture contents of the samples were 65% and 50%, respectively.

C. CAD Modelling and DEM Analysis
The blades, together with a simplified trailer, were designed in the Autodesk Inventor program, and the 3D model was then exported for use in the Rocky DEM application. The main idea behind the new blade design was to increase the distribution width as much as possible without sacrificing total mass per unit area. In addition, durability and ease of replacement of the blades were vital for the new design. Disks and blades made of S235 were used in the machine. Beater design is shown in Fig. 2. Compost particle design, injection and DEM analysis were completed in Rocky DEM. Rocky DEM's precise shape representation (including custom convex and concave shapes, flexible

Improvement of Solid Spreader Blade Design Using DEM Applications

195

fibers, and shell particles), combined with its several laws for computing the fluid forces on particles, increases the accuracy of the models. By the same token, adhesive/cohesive materials can be modelled using one of the adhesion models available in Rocky DEM. In addition, all the particles can be initialized in an instant using the volume-fill feature, leading to a significant reduction in the time associated with depositing the particles. Based on analysis of the compost and solid manure particles, the parameters used in the analysis are given in Table 2.

Table 2. Parameters used in DEM analysis [1]

Parameter | Value
Particle size | 5 to 30 mm
Bulk density | 650 kg/m3
Adhesive distance | 0.02 mm
Force fraction | 0.7
PTO speed | 540 RPM
Gearbox exchange ratio | 1.16
Gearbox speed | 870 RPM
Wind speed | 1 m/s
Tractor velocity | 4 km/h

A sample view of the injected particles in the DEM application is given in Fig. 2, along with an image of the actual unit.

Fig. 2. Compost particle injection in DEM analysis [1]


D. Field Test
In order to measure the distribution pattern of the new blade design, a field test was carried out according to the TS EN 13080 standard and in the presence of a committee of the Turkish Standards Institution. Forty-eight boxes with dimensions of 600 × 300 × 250 mm (L × W × D) were used to collect the distributed solid manure and compost. A pattern with 24 boxes on the left, 24 boxes on the right and three boxes at the center was created. A 90 HP tractor was used to drive the solid spreader, which was loaded with 12 m3 of fertilizer. The box placement and test field are shown in Fig. 3.

Fig. 3. Test layout and box placement [1]

After the tractor completed the operation, each box was photographed and then weighed on a precision scale. The weights were used to calculate the coefficient of variation. The distribution was maximum at the center and dwindled towards the sides as the distance from the center increased; beyond a certain range on the left and right, no solid particles were found. Therefore, the folding technique was used before calculating the coefficient of variation (TSE, 1994).

S = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{X})^2}   (3.1)

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} x_i   (3.2)

where S is the standard deviation, n is the number of boxes in the distribution width, x_i is the manure weight in each box after the folding technique, and \bar{X} is the mean.

After the standard deviation was calculated, the coefficient of variation was obtained using the formula below:

C_v = \frac{S}{\bar{X}}   (3.3)

where C_v is the coefficient of variation. In order to comply with the TS EN 13080 standard, the coefficient of variation must be lower than 35%. After calculation, the results were compared with the standard.

4 Results and Discussion

A. Analysis Results
The effective working width was found to be 13 m at maximum. The results on the left and right sides of the field were the opposite of those in the field test; particle size and the direction and velocity of the wind contribute to this difference. The coefficient of variation was less than 35% up to a 13 m working width, so the design complies with the TS EN 13080 standard.

Fig. 4. Distribution on the X coordinate: distributed manure weight (g) versus working width (−7 m to +7 m) in DEM analysis [1]

B. Field Test Results
The effect of the vertical beater design on the distribution width was investigated in detail using DEM applications and a field test. Trials with the TSE committee continued for 10 h. No replacements or maintenance were done on the machine during this period, which gave the opportunity to observe possible damage and wear on the blades during the trial. When the test was completed, a certain amount of manure was found in every box. As shown in Fig. 4, the maximum distribution was at the center of the movement direction; as the range increased to the left and right, the weight of collected manure decreased. The effective working width is shown in Fig. 5.

Fig. 5. Transverse distribution of compost, optimum working width, standard deviation after the folding technique and coefficient of variation [1] (left: 51.24%, right: 48.76%, mean: 1.654, S: 0.55, CV: 33.24%)

The test with compost was carried out with the committee of the Turkish Standards Institution. Since the results were to be officially recorded, more tests were carried out than with the separated manure. Tests were conducted with the rear cover half open and fully open, and at three different speeds; details of the tests are given in Table 3. The angle of the vertical beater assembly was 12° and the wind velocity was measured as 1 m/s, while the maximum allowed value is 3 m/s. In addition, the effective working width was calculated as 13 m, and the distribution difference between the two sides of the field was 2.48%. The 5 km/h tractor speed was chosen to calculate the distribution pattern. The calculated coefficients of variation for certain working widths are given in Table 4. The coefficient of variation for the 13 m working width was calculated as 33.24%; as this value is smaller than 35%, the design complies with the TS EN 13080 standard.

C. Comparison of Old and New Blade Design
In the old design, the same blades were placed both horizontally and vertically, with the purpose of achieving a uniform spread pattern. With the old design,


Table 3. Test details with compost [1]

Manure type | Position of rear cover | Working width (m) | Distributed manure (kg/min) | Manure at 4 km/h (kg/da) | Manure at 5 km/h (kg/da) | Manure at 6 km/h (kg/da)
Compost | Fully open | 13 | 2181 | 2307 | 1846 | 1538
Compost | Half open | 13 | 1672 | 1776 | 1480 | 1230

Table 4. Coefficient of variation for different working widths [1]

Effective working width (m) | Coefficient of variation (CV%)
8.5 | 10.36
9 | 14.27
9.5 | 9.88
10 | 8.01
10.5 | 11.22
11 | 17.17
11.5 | 21.83
12 | 26.06
12.5 | 29.97
13 | 33.24
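The pass/fail decision against the 35% limit in Table 4 reduces to picking the largest working width whose CV stays under the limit, as in this Python sketch (data from Table 4; the function name is illustrative):

```python
# Coefficient of variation (%) per effective working width (m), from Table 4.
cv_by_width = {8.5: 10.36, 9.0: 14.27, 9.5: 9.88, 10.0: 8.01, 10.5: 11.22,
               11.0: 17.17, 11.5: 21.83, 12.0: 26.06, 12.5: 29.97, 13.0: 33.24}

def max_compliant_width(cv_table, limit=35.0):
    """Largest working width whose CV is below the TS EN 13080 limit."""
    compliant = [w for w, cv in cv_table.items() if cv < limit]
    return max(compliant, default=None)
```

With the 35% limit this returns 13 m, consistent with the compliance claim in the text.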

almost all of the manure was spread at the rear of the trailer. Based on the results of the first test, a certain angle was given to each blade. This improvement helped increase the spread pattern on the left and right sides of the trailer; however, the spread pattern was still not uniform, and the results were outside the acceptable limits of the TS EN 13080 standard. Compared to the old model, the design of each blade was changed completely. Ten disks were welded onto the shaft as dual sets, and blades were installed between each dual disk by a pin connection. Three blades were installed 120° apart. The bottom three blades were designed flat, to scrape the surface of the trailer. If a blade is damaged or bent, it can easily be replaced by removing the pin. Since the blades were fixed to the shaft in the old design, their rotation was limited to the rotation of the shaft; in the new design, in addition to rotating with the shaft, the blades can also rotate through a limited angle. Both designs with the frame are shown in Fig. 6.


Fig. 6. Old and new design [1]

5 Conclusion

The effect of the vertical beater design on the distribution width was investigated in detail using DEM applications and a field test. Trials with the TSE committee continued for 10 h, during which no replacements or maintenance were done on the machine; this gave the opportunity to observe possible damage and wear on the blades. The effective distribution width was increased from 8 m to 13 m, and the coefficient of variation stayed below 35% up to the 13 m working width. According to the TS EN 13080 standard, the design is acceptable and is eligible to receive the national certificate of approval. In its current state, the design is suitable for use in industrial applications.

References
1. Soysal, A.: Improvement of Solid Spreader Blade Design Using DEM Applications. Unpublished M.Sc. thesis (2020-M.Sc.-042), Aydın Adnan Menderes University, Turkey (2020)
2. Kus, E., Yildirim, Y.: Katı Ahır Gübresi Dağıtma Makinaları Genel Özellikleri (General properties of solid manure spreading machines). In: 26. Tarımsal Mekanizasyon Ulusal Kongresi (22–23 September 2010), Hatay (2010)
3. Lampkin, N.: Organic Farming. Old Pond Publishing, Ipswich, U.K. (2002)
4. Schoenau, J.J.: Benefits of Long-Term Application of Manure (2006)
5. Taban, S.: Gübre Bilgisi Dersi Notları (Fertilizer science lecture notes). Ankara Üniversitesi, Ziraat Fakültesi (2020)
6. Anonymous: Ülkemizde Makinalaşma (Mechanization in our country) (2015). https://www.renklinot.com/soru-cevap-2/turkiyede-makinelesme-ulkemizi-nasil-etkiliyor.html. Accessed 08 May 2021
7. Long, G.W., Tien, Y.S.: Performance and economic benefit research of a turnplate type manure spreader. Bull. Taichung District Agric. Improv. Stat. 38, 23–36 (1993)
8. Ergüneş, G.: Tarım Makinaları (Agricultural machinery). Lecture notes, pp. 262–272 (2009)

Investigation of Influence by Different Segmentation Parameters on Surface Accuracy in Industrial X-ray Computed Tomography

Mario Sokac1(B), Marko Katic2, Zeljko Santosi1, Djordje Vukelic1, Igor Budak1, and Numan M. Durakbasa3

1 Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia

{marios,zeljkos,vukelic,budaki}@uns.ac.rs

2 Faculty of Mechanical Engineering and Naval Architecture,

University of Zagreb, Zagreb, Croatia [email protected] 3 TU Wien (Vienna University of Technology), Vienna, Austria [email protected]

Abstract. In today's rapidly developing industry, there is a need for more accurate technologies that can cope with the rising demands on manufacturing accuracy. X-ray computed tomography is such a method: it can reconstruct the internal structures of an object without destroying it, which gives it a great advantage over tactile and optical measuring devices. This paper presents a study in which different segmentation parameters are used to reconstruct surface 3D models of an object, in order to investigate their influence on surface accuracy. The results show that varying the segmentation parameters changes the dimensional deviations of the measurands used in both the 2D and 3D analyses.

Keywords: Computed tomography · X-ray CT · Segmentation · Industrial CT images · Image processing

1 Introduction

Over recent years the need for more accurate digital reconstruction of various physical objects has increased, forcing measurement systems to become more versatile. Industrial X-ray computed tomography (CT) answers these demands thanks to its non-destructive abilities and high accuracy. Its applications are wide, ranging from inspection of cavities and cracks [1], particle analysis [2–4] and quality control in different industries [5–8] to additive manufacturing [9–14] and injection moulding [15]. However, certain problems can occur when using X-ray CT, and those problems can be related to numerous factors such as parameter setup [16], the presence of artefacts [17–19], and other factors that can cause measurement errors [20, 21]. There are, however, guidelines for the successful implementation of these systems [22–26].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 202–209, 2022. https://doi.org/10.1007/978-3-030-90421-0_17

Investigation of Influence by Different Segmentation Parameters on Surface Accuracy

203

Surface reconstruction is very important for performing both 2D and 3D dimensional analyses [21]. Surfaces are reconstructed, for the purpose of their extraction, using segmentation, a tool for extracting the 2D boundaries of objects present in X-ray CT data. Two important challenges faced by CT users today are therefore the selection of the X-ray CT parameters and surface reconstruction [27]. Several different methods are used for segmentation; the ISO-50% method is the most commonly used today [27].

2 Literature Review

Segmentation of X-ray CT data is a very important step in the reconstruction of 3D data. In order to carry out dimensional measurements, the threshold values of pixel intensity in the X-ray CT data must be determined carefully, because they are a critical parameter on which the accuracy of the reconstructed surface 3D geometry depends [28]. A typical way to carry out the segmentation process is to use the ISO-50% method. However, that is not always the best choice, since various parameters can influence the reconstructed surface, as described earlier. Many researchers have investigated the use of X-ray CT data for various analyses and measurements. Extracting data with different segmentation methods and comparing their performance helps ensure that accurate results can be achieved [29]. In that regard, different methods have been developed for surface extraction based on the application of different algorithms [30], local surface grey values [31], additional calibration measurements [32], voxel classification [33] or a combination of different segmentation methods [34]. All of this is vital to the performance of X-ray CT systems and the extraction of information from CT data, ensuring that accurate data can be extracted for various dimensional measurements and analyses.
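As a sketch of the ISO-50% idea (not the authors' implementation): the surface threshold is taken midway between the background and material gray-value peaks of the CT histogram, and voxels are then classified against it:

```python
def iso50_threshold(background_peak, material_peak):
    """ISO-50% surface threshold: midpoint of the two histogram peaks."""
    return (background_peak + material_peak) / 2

def segment_slice(gray_slice, threshold):
    """Binary segmentation of one CT slice (1 = material, 0 = background)."""
    return [[1 if v >= threshold else 0 for v in row] for row in gray_slice]
```

For example, with background and material peaks at gray values 5000 and 25000, the ISO-50% threshold is 15000; any locally adaptive refinement of this global threshold is one of the "different segmentation parameters" this paper varies.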

3 Materials and Methods

A study was performed to investigate the influence of segmentation parameters on the surface accuracy of reconstructed 3D models. It serves as preliminary research towards a better understanding of segmentation parameters and their role in extracting object information from X-ray CT data. The case study in this paper uses a common object: a female-to-male direct current (DC) power connector adapter (Fig. 1). A DC connector is a common type of electrical connector for supplying DC power, and as such is used in this study. CT data of the adapter were obtained using a NIKON XT H 225 industrial X-ray CT system (Fig. 2a). Segmentation and surface reconstruction were performed using the method presented in [34]. Four different setups of segmentation parameters were used. The resulting reconstructed surface 3D models were then subjected to CAD inspection analysis and to dimensional analysis, in order to investigate the influence of the segmentation parameters on surface accuracy. For the reference measurements needed in this study, a three-axis coordinate measuring machine, the Contura G2 (CARL ZEISS), was used (Fig. 2b).

204

M. Sokac et al.

The setup used a measurement stylus with a 1 mm diameter and L = 50 mm length. The focus of this study was the metallic part of the DC adapter, because of its importance for the proper functioning of the adapter.

Fig. 1. Object used in this study - DC power connector adapter

Fig. 2. a) Acquisition of CT data using the Nikon XT H 225 X-ray CT system and b) measurement of the DC adapter on the CMM Contura G2 (CARL ZEISS)

4 Results and Discussions

For this analysis, the metallic part of the DC adapter was reconstructed using four different setups of segmentation parameters, labelled S1, S2, S3 and S4, using the method presented in [34]. The surface 3D models were then reconstructed and exported in STL file format for further analysis.


For the purposes of dimensional and comparative analysis, CAD inspection, as part of the 3D analysis, was performed on the surface 3D models generated with all four sets of segmentation parameters. The CAD inspection was carried out in GOM Inspect v2019 software, using the nominal CAD model of the DC adapter. In order to perform the CAD inspection, the surface 3D models had to be aligned with the nominal CAD model: prealignment was used first for coarse alignment, and the local best-fit method was then used for the final alignment. Figure 3 shows the results of the CAD inspection, while Table 1 lists the obtained numerical deviation values.

Fig. 3. CAD inspection of surface 3D models reconstructed on the basis of four segmentation parameter setups: a) S1, b) S2, c) S3 and d) S4

On the basis of the CAD inspection performed on all four 3D models, certain areas with larger deviations can be noticed (shown in red and blue). Setup S1 shows more deviations located in the top area of the adapter (blue) than the others, while for setup S2 the inside of the adapter shows bigger deviations (red) than in the other three setups. Comparing all four setups, the surface 3D model reconstructed with setup S4 shows smaller deviations than those reconstructed with S1, S2 and S3, with the best deviation concentration of −0.018 mm (Table 1). Dimensional analysis, as a 2D analysis, was also carried out for all surface 3D models obtained using the four different segmentation parameter setups. The reference measurements were collected on a CMM Contura G2 by CARL ZEISS (maximum permissible error MPEE = (1.8 + L/300) µm, where L is the measured length in mm), using CALYPSO v4.8 software.


Table 1. CAD inspection results of the four surface 3D models reconstructed on the basis of four segmentation parameter setups for the DC adapter

Parameters | Min./Max. deviation [mm] | Deviation concentration [mm] | Mean distance [mm] | Distance std deviation [mm]
S1 | ±0.200 | −0.027 | −0.018 | +0.072
S2 | ±0.200 | −0.030 | −0.010 | +0.074
S3 | ±0.200 | −0.026 | +0.008 | +0.069
S4 | ±0.200 | −0.018 | +0.023 | +0.070

In order to perform the dimensional analysis, three dimensional characteristics D1, D2 and D3 were defined for the DC adapter; they are shown in Fig. 4. Their values measured on the CMM are shown in Table 2, and the comparison of relative measurement errors is shown in Fig. 5.

Fig. 4. Defined dimensional characteristics D1, D2 and D3 of DC adapter for dimensional analysis on CMM

The relative measurement errors of the dimensional characteristics D1, D2 and D3 for the DC adapter (Fig. 5) show that an increase in error of 0.15 mm occurred for setup S4 on characteristic D3, while no measurement error was indicated for characteristic D2 with segmentation parameters S2. The dimensional analysis thus shows a slight trend of stagnant deviations for segmentation parameters S2 and S3, while dimensional characteristic D3 shows the highest measurement errors for segmentation parameters S1 and S4. This can also be seen by examining dimensional characteristics D1 and D2, which show a maximum measurement error of 0.06 mm across all four segmentation parameter setups.

Investigation of Influence by Different Segmentation Parameters on Surface Accuracy


Table 2. Measurement results obtained on the CMM for dimensional characteristics D1, D2 and D3 of the DC adapter

Parameters   D1 [mm]   D2 [mm]   D3 [mm]
S1           7.03      2.1       5.75
S2           7.01      2.09      5.71
S3           7.03      2.13      5.58
S4           7.05      2.15      5.49
CMM          6.99      2.09      5.64
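The error magnitudes quoted in the text can be reproduced from Table 2 if the reported "relative measurement error" is taken as the difference between the surface-model value and the CMM reference (an assumption; variable names are illustrative):

```python
# Measured values from Table 2 (mm); the last row is the CMM reference.
D = {"S1": (7.03, 2.10, 5.75),
     "S2": (7.01, 2.09, 5.71),
     "S3": (7.03, 2.13, 5.58),
     "S4": (7.05, 2.15, 5.49)}
ref = (6.99, 2.09, 5.64)  # CMM reference for D1, D2, D3

# Per-setup deviations from the CMM reference, rounded to 0.01 mm.
errors = {s: tuple(round(m - r, 2) for m, r in zip(vals, ref))
          for s, vals in D.items()}
print(errors["S4"])  # -> (0.06, 0.06, -0.15): largest error on D3
print(errors["S2"])  # -> (0.02, 0.0, 0.07): no error at all on D2
```

This matches the 0.15 error for S4 on D3 and the 0.06 ceiling on D1 and D2 discussed above.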

Fig. 5. Relative error measurements of dimensional characteristics D1, D2 and D3 for DC adapter

5 Conclusions

This paper presents a preliminary study on segmentation parameters and their influence on surface reconstruction and the accuracy of reconstructed surface 3D models using industrial X-ray CT. Both the CAD inspection and the dimensional analyses conducted in this paper show that the use of different segmentation parameters influences the accuracy of reconstructed surface 3D models. The research conducted in this paper aims to further elaborate the influence of segmentation on the accuracy of surface 3D models. Future research will focus on further investigation of this influence, and one way to gain more insight is to design and manufacture a new workpiece. Unlike the mass-produced part (the DC adapter used in this study), the newly designed workpiece will serve for further research in which many more parameters can be varied, and the involvement of machine learning tools can perhaps aid in the prediction of adequate segmentation parameters. This will, of course, vary depending on the object being scanned on the X-ray CT and the type of material it consists of.


Acknowledgment. This research has been supported by the Ministry of Education, Science and Technological Development through the project no. 451–03-68/2020–14/200156: “Innovative scientific and artistic research from the Faculty of Technical Sciences (activity) domain”. Project IKARUS also supported parts of presented research (European Regional Development Fund, MIS: RC.2.2.08–0042, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb).

References
1. Hanke, R., Fuchs, T., Uhlmann, N.: X-ray based methods for non-destructive testing and material characterization. Nucl. Instrum. Methods Phys. Res. Sect. A 591, 14–18 (2008)
2. Manahiloh, K.N., Abera, K.A., Motalleb Nejad, M.: A refined global segmentation of X-ray CT images for multi-phase geomaterials. J. Nondestr. Eval. 37(3), 1–12 (2018). https://doi.org/10.1007/s10921-018-0508-y
3. Iassonov, P., Gebrenegus, T., Tuller, M.: Segmentation of X-ray computed tomography images of porous materials: a crucial step for characterization and quantitative analysis of pore structures. Water Resour. Res. 45, 1–12 (2009)
4. Zhang, P., Lu, S., Li, J., Zhang, P., Xie, L., Xue, H., et al.: Multi-component segmentation of X-ray computed tomography (CT) image using multi-Otsu thresholding algorithm and scanning electron microscopy. Energy Explor. Exploit. 35, 281–294 (2017)
5. Müller, P.: Coordinate metrology by traceable computed tomography. Ph.D. thesis, Technical University of Denmark (2012)
6. Simon, M., Tiseanu, I., Sauerwein, C., Walischmiller, H.: Advanced computed tomography system for the inspection of large aluminium car bodies. In: 9th European NDT Conference (ECNDT), pp. 1–9 (2006)
7. Fintová, S., Anzelotti, G., Konečná, R., Nicoletto, G.: Casting pore characterization by X-ray computed tomography and metallography. Arch. Mech. Eng. 57, 269–273 (2010)
8. Du Plessis, A., Rossouw, P.: X-ray computed tomography of a titanium aerospace investment casting. Case Stud. Nondestr. Testing Eval. 3, 21–26 (2015)
9. Gapinski, B., Janicki, P., Marciniak-Podsadna, L., Jakubowicz, M.: Application of the computed tomography to control parts made on additive manufacturing process. Proc. Eng. 149, 105–121 (2016)
10. Bibb, W.J., Thompson, D.: Computed tomography characterization of additive manufacturing materials. Med. Eng. Phys. 33, 590–596 (2011)
11. Léonard, F., Tammas-Williams, S., Todd, I.: CT for additive manufacturing process characterisation: assessment of melt strategies on defect population. In: iCT 2016: 6th Conference on Industrial Computed Tomography, pp. 1–8 (2016)
12. Pavan, M., Craeghs, T., Verhelst, R., Ducatteeuw, O., Kruth, J.P., Dewulf, W.: CT-based quality control of laser sintering of polymers. Case Stud. Nondestr. Testing Eval. 6, 62–68 (2016)
13. Townsend, A., Pagani, L., Scott, P., Blunt, L.: Areal surface texture data extraction from X-ray computed tomography reconstructions of metal additively manufactured parts. Precis. Eng. 48, 254–264 (2017)
14. Shah, P., Racasan, R., Bills, P.: Comparison of different additive manufacturing methods using computed tomography. Case Stud. Nondestr. Testing Eval. 6, 69–78 (2016)
15. Reinhart, C., Poliwoda, C., Günther, T.: How industrial computer tomography accelerates product development in the light metal casting and injection moulding industry. In: 10th European Conference on Non-Destructive Testing - ECNDT, Moscow, pp. 1–10 (2010)


16. Welkenhuyzen, F.: Investigation of the accuracy of an X-ray CT scanner for dimensional metrology with the aid of simulations and calibrated artifacts. Ph.D. thesis, KU Leuven, Leuven, Belgium (2016)
17. Hsieh, J.: Computed Tomography: Principles, Design, Artifacts, and Recent Advances. Wiley, Hoboken (2009)
18. Heinzl, C.: Analysis and visualization of industrial CT data. Ph.D. thesis, University of Technology, Vienna, Austria (2008)
19. Boas, F.E., Fleischmann, D.: CT artifacts: causes and reduction techniques. Imaging Med. 4, 229–240 (2012)
20. Kruth, J.P., Bartscher, M., Carmignato, S., Schmitt, R., De Chiffre, L., Weckenmann, A.: Computed tomography for dimensional metrology. CIRP Ann. 60, 821–842 (2011)
21. Hiller, J., Hornberger, P.: Measurement accuracy in X-ray computed tomography metrology: toward a systematic analysis of interference effects in tomographic imaging. Precis. Eng. 45, 18–32 (2016)
22. VDI/VDE 2630-1.1: Computed Tomography in Dimensional Measurement — Basics and Definitions. Beuth Verlag, Berlin (2009)
23. VDI/VDE 2630-1.3: Computed Tomography in Dimensional Measurement — Guideline for the Application of DIN EN ISO 10360 for Coordinate Measuring Machines with CT-sensors (2011)
24. VDI/VDE 2630-1.4: Computed Tomography in Dimensional Metrology — Measurement Procedure and Comparability (2010)
25. VDI/VDE 2630-1.4: Computed Tomography in Dimensional Metrology — Measurement Procedure and Comparability. Beuth Verlag, Berlin (2008)
26. VDI/VDE 2630-2.1: Computed Tomography in Dimensional Measurement — Determination of the Uncertainty of Measurement and the Test Process Suitability of Coordinate Measurement Systems with CT Sensors. Beuth Verlag, Berlin (2013)
27. Borges de Oliveira, F., Stolfi, A., Bartscher, M., De Chiffre, L., Neuschaefer-Rube, U.: Experimental investigation of surface determination process on multi-material components for dimensional computed tomography. Case Stud. Nondestr. Test. Eval. 6, 93–103 (2016)
28. Carmignato, S.: Traceability of dimensional measurements in computed tomography. In: Proceedings of 8th AITeM Conference, Italy, pp. 1–11 (2007)
29. Ontiveros, S., Yagüe, J.A., Jiménez, R., Brosed, F.: Computer tomography 3D edge detection comparative for metrology applications. Proc. Eng. 63, 710–719 (2013)
30. Yagüe-Fabra, J.A., Ontiveros, S., Jiménez, R., Chitchian, S., Tosello, G., Carmignato, S.: A 3D edge detection technique for surface extraction in computed tomography for dimensional metrology applications. CIRP Ann. 62, 531–534 (2013)
31. Townsend, A., Pagani, L., Blunt, L., Scott, P.J., Jiang, X.: Factors affecting the accuracy of areal surface texture data extraction from X-ray CT. CIRP Ann. 66, 547–550 (2017)
32. Kowaluk, T., Wozniak, A.: A new threshold selection method for X-ray computed tomography for dimensional metrology. Precis. Eng. 50, 449–454 (2017)
33. Fujimori, T., Suzuki, H.: Surface extraction from multi-material CT data. In: IEEE Ninth International Conference on Computer Aided Design and Computer Graphics (CAD-CG 2005), Hong Kong, pp. 1–6 (2005)
34. Sokac, M., Budak, I., Katic, M., Jakovljevic, Z., Santosi, Z., Vukelic, D.: Improved surface extraction of multi-material components for single-source industrial X-ray computed tomography. Measurement 153, 1–14 (2020)

Product Development for Lifetime Prolongation via Benchmarking

Turgay Şerbet1, Mahir Yaşar1, Tezel Karayol1, Anil Akdogan2(B), and Ali Serdar Vanli2

1 Mesan Kilit A.Ş., Silivri, İstanbul, Turkey
{turgayserbet,mahiryasar,tezerkarayol}@essentra.com
2 Department of Mechanical Engineering, Yıldız Technical University, Beşiktaş, İstanbul, Turkey
{nomak,svanli}@yildiz.edu.tr

Abstract. Companies apply various methods to maintain or increase their market shares under increasingly competitive world conditions. Benchmarking is one of the best quality improvement tools among these methods, especially in the automotive industry. Benchmarking presents a continuous process for measuring systems, processes, and products within the organization, and a comparison with the organization performing best practice. This scientific method, which is used to improve on the positive aspects of competitors and of our own business, finds widespread use. This study examines the product design and improvement project of a local industrial lock mass manufacturer for the automotive industry. Since quality management system standards in the automotive industry require continuous quality improvement, the use of correct methods in the collection and processing of quality data is essential in these studies. The results achieved here have provided considerable benefit in achieving quality characteristics equivalent to those of comparable products. This paper also presents the product development stages carried out to improve the quality characteristics of a product. Design and mass manufacturing processes are described, and the reconsideration of the quality characteristics of the product based on the data collected via benchmarking is analyzed in detail.

Keywords: Mass production · Quality assurance · Product lifetime · Benchmarking · Position control

1 Introduction

Torque hinges, also called constant torque hinges, provide resistance to the pivoting motion of the hinge itself. Such hinges are more suitable for holding lids, doors, and panels than regular ones. The innovation of torque hinges has therefore been a welcome addition for manufacturers and contractors. Torque hinges retain their angle or position when a force is applied, while regular hinges cannot resist [1]. The door hinge is the type most commonly used in industrial applications. While designing an application that hinges two panels together, determining a suitable level of torque is necessary for reliable operation and an intuitive end-user experience. The possible different levels of torque value

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 210–218, 2022. https://doi.org/10.1007/978-3-030-90421-0_18

Product Development for Lifetime Prolongation via Benchmarking


can help device and system designers to select the ideal hinge solution [2]. A potential function that simultaneously combines gravity compensation and a desired Cartesian stiffness relationship for connection angles was investigated in the literature [3]. Besides, the torque hinge is a type of hinge that provides constant resistance throughout the entire movement, allowing users to easily position doors, display screens and similar assembled parts and safely hold them at the desired angle, such as completely closed, fully open or somewhere in between. The torque hinge is mounted tightly, with the springs on a cylinder in a body. Torque occurs by friction between the springs and the cylinder, and the resulting torque is proportional to the size and the number of the springs and the cylinder. The assembling directions of the springs determine whether the hinge is asymmetrical or symmetrical. The assembly drawing of the torque hinge is given in Fig. 1.

Fig. 1. The assembling figure of the torque hinge

Torque hinges are used in many industries, as shown in Fig. 2. Since they appear in a variety of commercial and industrial settings thanks to their practicality and durability, torque hinges have been widely adopted in the electronics sector. Lifetime is an important factor for torque hinges, as for any other product. Many factors affect the lifetime of the hinges. These factors are determined by the conditions of usage and the environmental conditions. The lifetime of the torque hinges used in similar applications in the market is between 20.000 and 25.000 cycles. Under global competitive conditions, it is important to be able to demonstrate an advantage in terms of lifetime. For this purpose, maximum attention is paid to lifetime prolongation studies, and research and development work on the subject continues intensively. The basis of these studies is quality data and their evaluation with the correct methods. One way to access such data is the widely used benchmarking technique. The benchmarking studies conducted in this work give information not only about the cycle times but also about the material variations of the products. There are many studies on product development, prolonging the lifetime of the product and bracket positioning with the benchmarking technique [4]. In particular, papers deal with the investigation of how the torque values of the human hip and waist joints may be influenced by the position of the exoskeleton joints, by exploiting a novel 3D human multi-body model. The computational approach allows analyzing different configurations and examining human efforts in terms of net joint moments and interfacing forces with the device [5].


T. Şerbet et al.

Fig. 2. Usage areas of torque hinges

The aim of this study is to improve the following requirements of the product. First, however, the required quality characteristics of the product need to be defined clearly. Consumers generally require no fewer than 25.000 cycles at equivalent torque values under market conditions. At the design stage, the exact quality characteristics required of the product were determined as below.

• ±20% maximum allowable deviation of the torque value from the initial torque after 25.000 on-off tests,
• Ability to retain the form under different forces,
• Torque-adjustable hinge designs against asymmetric torque values,
• A patent-free design,
• Meeting the needs of different sectors,
• A standardized sealing design.

There are some difficulties in meeting such requirements due to the manufacturing process parameters. The most important of them are listed below.

• Material type possibilities. While choosing the material, the following features should be considered:
– Hardenability
– High fatigue strength
– Low coefficient of friction
• Heat treatment difficulties. After heat treatment, the following should be noted:
– Hardness values of the springs and the pin should be different
– Attention should be paid to uneven hardness across identical parts


• Difficulties in providing surface roughness at the desired values,
• Production difficulties in dimensional and geometrical tolerances. Fine tolerances should be applied for the pin, the rings and the inner hole diameters of hinge parts. The circularity of the rings is very important for an equal torque value in all hinge positions.
• Problems experienced in the production of the spring:
– Production method of the springs
– Fine tolerances for all dimensions
– Heat treatment
– Surface polishing
• The required torque value cannot be reached after 25.000 on-off tests.

This paper presents the product development stages carried out to improve the quality characteristics of the product. Design and manufacturing processes are described, and the reconsideration of the quality characteristics of the product based on the data from benchmarking is analyzed in detail [6]. Since the benchmarking process is a powerful tool for improving the quality of manufactured products, its gains are used extensively for the purposes of this work.
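The first design requirement above (at most ±20% deviation of the torque from the initial value after the 25.000 on-off tests) can be expressed as a simple acceptance check. A sketch follows; the helper name and the sample torque values are illustrative, not from the paper:

```python
def torque_ok(initial_nm, after_test_nm, max_rel_dev=0.20):
    """Acceptance check: torque measured after the on-off test
    cycles may deviate from the initial torque by at most +/-20%."""
    return abs(after_test_nm - initial_nm) <= max_rel_dev * initial_nm

print(torque_ok(2.0, 1.7))  # -15% drift -> True (acceptable)
print(torque_ok(2.0, 1.5))  # -25% drift -> False (out of tolerance)
```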

2 Processing of a Torque Hinge

A. Material
Since they are used for holding laptop screens, shop cabinets, commercial doors and so on, most people use torque hinges daily, perhaps without realizing it. Depending on the area of use, torque hinge materials span a wide range, from stainless steel to plastics. The body of the hinges is made from a zinc die-cast alloy. The pin, one of the friction elements, is typically made of heat-treated steel, and the other friction element, the ring, is generally made of hardened spring steels.

B. Parts
The body of the hinges is made from zinc by the die casting method. It is powder coated after a deburring operation. The pin is produced on a high-precision computer numerical controlled machine and ground after quenching. The spring is formed in an eccentric press and polished with fine granules after quenching and the nitration process.

C. Assembling
The assembling operation is detailed below.
• Two of the four rings are placed in the body in the clockwise direction and the other two in the anticlockwise direction,
• The pin is driven into the body,
• The o-ring holder part is assembled,
• The two o-rings are attached to the right and left sides of the body,
• The right and left side parts are assembled and completed in a second fixture.


3 Benchmarking Process

Benchmarking presents a continuous process of measuring systems, processes and products within the enterprise and comparing them with the enterprise performing best practice. It was defined by the American Productivity and Quality Center in 1996 as "The process of identifying, understanding, and adapting outstanding practices and processes from organizations anywhere in the world to help your organization improve its performance." There are four steps in the benchmarking process. The first is Planning: defining the subject of benchmarking, the depth of benchmarking and the objectives. The second is Data Collection: internal data collection, external data collection, finding a benchmarking partner, contacting the partner to ensure their consent and cooperation, gathering detailed data from the benchmarking partner, and aggregating data about the partner from other sources. The third is Analysis: converting data to information; sorting, organizing and monitoring the information and data; removing irregular factors (if any); detecting the performance difference with proven best practices; judging the causes of the results; identifying processes which can be improved; formulating new goals; and identifying plans for changes. Finally, Adaptation: creating the plan, implementing the best practices, and connecting the new plan with the current one [7].

A. Planning
The benchmarking study was planned to be conducted on three different international commercial competitors. The cylinder and spring materials of the torque hinges and their critical dimensions were needed. The objective of the benchmarking was to define the best characteristics of the parts through the data collection, analysis and adaptation steps, respectively.

B. Data Collection
The data collection step of the benchmarking study was conducted on three different international commercial competitors.
The pin and spring material data of the competitors' torque hinges were collected, and their critical dimensions were measured.

C. Analysis
The pin and spring material information of the competitors' torque hinges is given in Table 1, and Table 2 gives their critical dimensions. The cycle time data of the competitors' torque hinges can be seen in Table 3.


Table 1. Material info of pins and springs

Material (%)      Pin raw material analysis               Spring raw material analysis
                  A Comp.    B Comp.    C Comp.           A Comp.    B Comp.    C Comp.
C                 0.447      0.41       0.39              0.66       0.605      0.119
Si                0.178      0.231      0.21              0.248      0.26       0.329
Mn                2.101      0.823      0.94              0.83       0.73       0.952
P                 0.005      0.008      0.001             0.014      0.009      0.025
S                 0.317      –          0.01              0.016      –          –
Cr                0.024      1.02       0.8               0.014      0.078      1.662
Mo                0.011      0.17       –                 0.014      0.022      0.221
Ni                0.016      0.002      0.016             0.037      0.026      0.982
Al                0.003      0.003      –                 0.014      0.032      0.005
Cu                0.022      –          –                 0.011      0.001      0.555
Pb                0.158      –          –                 –          –          –
V                 –          0.004      –                 0.006      0.004      0.078
S                 –          0.009      –                 –          0.001      0.009
Sn                –          0.01       –                 –          0.008      0.01
Hardness (HRC)    54–56      48–50      46–48             44–45      44–45      35–36
Analysis results  44SMnPb28  AISI 1040  AISI 4140         Ck 67      Ck 60      AISI 301

Table 2. Dimensional data of pins and springs

                                A Competitor   B Competitor   C Competitor
Pin diameter (mm)               Ø5.8           Ø5.45          Ø5.2
Spring internal diameter (mm)   Ø5.6           Ø5.3           Ø5
Spring external diameter (mm)   Ø8.6           Ø8.3           Ø7.5
Spring thickness (mm)           1.5 × 6        1.5 × 6        Ø8 thickness
Body external diameter (mm)     Ø8.85          Ø8.5           Ø7.75

Table 3. Cycle times of the competitors' hinges

Cycle comparison of competitors   A        B        C
Life of cycle                     25.000   20.000   20.000
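The gap-detection part of the Analysis step reduces, for the Table 3 data, to a simple comparison against the best-practice partner. A minimal sketch (variable names illustrative):

```python
# Cycle lives from Table 3 (European thousands notation: 25.000 = 25,000).
cycles = {"A": 25_000, "B": 20_000, "C": 20_000}

best = max(cycles, key=cycles.get)               # best-practice partner
gap = {k: cycles[best] - v for k, v in cycles.items()}  # performance gap
print(best, gap)  # -> A {'A': 0, 'B': 5000, 'C': 5000}
```

Competitor A thus sets the benchmark target that the lifetime prolongation work in the following sections aims to reach.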


The lifetime tests were conducted on the test setup device designed and developed at the laboratories of the Mesan Kilit A.Ş. Research and Development department. The lifetime test setup device simulates the cycling of torque hinges exactly as in their actual usage. Figure 3 shows an image from the life cycle tests. The device is capable of counting cycles and measuring the torque value automatically. The recorded data is evaluated by the software program to determine whether the maximum allowable deviation of torque is exceeded or not.

Fig. 3. Lifetime test of torque hinges at the cycling device

D. Adaptation
After the planning, data collection and analysis steps, adaptation includes creating the plan, implementing the best practices and connecting the new plan with the current one. It also serves to determine the technical levels of competitors' similar products so that the resulting values can be used in our designs, as presented in the following chapter.

4 Benchmark Results and Discussions

After following the required steps of the benchmarking process, the benchmark results below were obtained. It was evaluated that the biggest challenge for the hinge design was the improvement of the life cycle value of the hinge beyond these results. The solution to the problem was found to be increasing the wear resistance between the pin and the ring by applying tribology science.

• By evaluating the pin and ring hardness values, it was determined that the pin hardness is higher than the ring hardness.


• It was observed that oil is required between the friction elements.
• It was determined that the pin and ring surface qualities are important. It was observed that an additional polishing process needs to be applied to the ring to improve the surface quality.
• It was found that the dimensional accuracy and the form and position tolerances (circularity, coaxiality, perpendicularity) of the friction elements are very important.
• Less heat accumulation was observed during and after the cycle tests.
• It was observed that competitors' pins are made of easily machinable quenched steels.
• Competitors' spring material is generally made of high-carbon spring steels.
• It was observed that the hinges provide asymmetric or symmetric torque depending on the mounting direction of the rings.
• It was determined that the pin diameter needs to be 0.15–0.2 mm greater to achieve the requested torque.

5 Conclusions

This paper presents the product development stages carried out to improve the quality characteristics of torque hinges with the help of the benchmarking technique. In this work, design and manufacturing process data of the products were collected by precise measurement techniques. The collected data were analyzed using the benchmarking technique. The benchmarking analysis results give valuable information not only about design but also about the manufacturing and final processing steps of the product. These analyses enabled us to choose the right material for the part, determine the optimum hardness value, and plan post-process operations such as nitration, coating and polishing, as emphasized below item by item. Additionally, the analyses pointed us toward increasing the cycling capability of hinge pins by 20%. Achieving an almost 20% increase in the resulting product is a satisfactory result.

• The optimum material qualities were determined.
• Ideal hardness values were determined; additionally, surface hardening was performed with nitration to increase the lifetime, reaching 60–67 HRC hardness.
• Alternative oil trials were made to improve the friction conditions under high temperature.
• Different coatings were tested in order to increase the wear life (PVD, CVD).
• A polishing operation with granules is preferred to increase the surface quality of the spring and pin after heat treatment.
• The forming die was better calibrated to achieve circularity in the inner diameter of the ring.
• A lifetime of not less than 20.000 cycles was achieved through product development, which is a 20% improvement over the old design and process.

Acknowledgment. This paper is a part of “Tork Mente¸se Ürün Grubu Geli¸stirme” Project Number MP17003.


References
1. Wang, S.-B., Wu, C.-F.: Design of the force measuring system for the hinged door: analysis of the required operating torque. Int. J. Ind. Ergon. 49, 10 (2015). https://doi.org/10.1016/j.ergon.2015.05.010
2. Froese, M.: How to determine the ideal level of torque when specifying hinges. Fastener Eng. (2019)
3. Albu-Schäffer, A., Ott, C., Hirzinger, G.: A unified passivity-based control framework for position, torque and impedance control of flexible joint robots. Res. Art. (2007)
4. Mestriner, M.A., Enoki, C., Mucha, J.N.: Normal torque of the buccal surface of mandibular teeth and its relationship with bracket positioning: a study in normal occlusion (2006)
5. Panero, E., Muscolo, G.G., Pastorelli, S., Gastaldi, L.: Influence of hinge positioning on human joint torque in industrial trunk exoskeleton (2019)
6. ISO 9001:2015. Quality Management System Standard, Requirements (2015)
7. Carrión-Garcia, A., Grisales, A.M., Papic, L.: Deming's chain reaction revisited. Int. J. Prod. Qual. Manage. 21(2), 264 (2017). https://doi.org/10.1504/IJPQM.2017.083777

Research into the Influence of the Plastic Strain Degree on the Drawing Force and Dimensional Accuracy of the Production of Seamless Tubes

Maria Kapustova1, Michaela Kritikos2(B), Jozef Bilik1, Ladislav Morovic2, Robert Sobota1, and Daynier Rolando Delgado Sobrino3

1 Faculty of Materials Science and Technology in Trnava, Department of Forming of Metals and

Plastics, Institute of Production Technologies, Slovak University of Technology in Bratislava, Bratislava, Slovakia {maria.kapustova,jozef.bilik,robert.sobota}@stuba.sk 2 Faculty of Materials Science and Technology in Trnava, Department of Machining and Computer Aided Technologies, Institute of Production Technologies, Slovak University of Technology in Bratislava, Bratislava, Slovakia {michaela.kritikos,ladislav.morovic}@stuba.sk 3 Faculty of Materials Science and Technology in Trnava, Department of Production Devices and Systems, Institute of Production Technologies, Slovak University of Technology in Bratislava, Bratislava, Slovakia [email protected]

Abstract. The process of tube drawing was carried out under laboratory conditions using a special drawing jig clamped in a testing machine. The aim of the performed experiment was to determine the influence of technological parameters, namely the plastic strain degree, friction and drawing speed, on the magnitude of the drawing force and on the dimensional accuracy of the drawn tubes. The main monitored parameter was the plastic strain degree, the values of which changed when drawing three tubes with outer diameters of Ø14, 16 and 18 mm to a final outer diameter of Ø12 mm. The paper evaluates the influence of the plastic strain degree on the magnitude of the force required for drawing the tubes and on the dimensional accuracy of the wall thickness of the produced tubes. The research has shown that the plastic strain degree had an effect on the thickening of the tube wall, with a greater increase in thickness at higher degrees of deformation. This positive change in thickness was within the permissible tolerances applicable to the production of precision tubes.

Keywords: Seamless tubes · Plastic strain degree · Cold tube drawing · Drawing tool · Drawing preparation

1 Introduction Cold tube drawing is a metal forming process that allows the production of seamless tubes with precise dimensions, quality surface and required mechanical properties. The © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gen˛cyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 219–229, 2022. https://doi.org/10.1007/978-3-030-90421-0_19


M. Kapustova et al.

working principle behind this process has already been analyzed in detail by authors such as [1–5], to cite just a few. The aim of the experiment carried out and described in this paper is to investigate the effect of the degree of plastic deformation (reduction) on the amount of force required to draw a tube through the drawing die. At the same time, the actually obtained values of deformation are also compared with the limit values of the tube drawing process, because exceeding these limits almost certainly leads to the failure of the tube or the occurrence of defects on its surface. Throughout this process, the ductility (plasticity) and formability of the drawn material influence the problem-free course of tube drawing, as they determine the limit value of plastic deformation (limit reduction) for individual draws in multi-operational tube production. The limit of material ductility in conventional tube drawing has been addressed by authors such as [6], where the potential for optimizing the cold drawing process of steel tubes by reducing the number of draws was investigated. Similarly, another study covering the limit ductility of aluminium tubes with variable wall thickness during the fixed-plug drawing process was carried out by [7]. On the other hand, although several tools exist for assessing and evaluating the drawing process, it is precisely simulation software that has proved to be an important tool for monitoring the plastic flow of material during the cold tube drawing process. Besides, the well-known FEM analysis makes it possible to optimize process and technological parameters and to predict their possible effects on the accuracy of drawn tube production, as reported in [8–13].

2 Experiment Description and Analysis of the Material of the Initial Tube (Semi-Product)

For the tube drawing laboratory experiment, three types of initial tubes were used, namely tubes with outer diameters of Ø14, 16 and 18 mm respectively, all with a wall thickness of 2 mm. All of these tubes were drawn to the required outer tube diameter of Ø12 mm. With all this mentioned, the aim of the experiment is to determine the effect of the degree of deformation on the size of the tensile (drawing) force and on the dimensional accuracy of the produced tubes. Usually, the input semi-product (initial tube) for this kind of cold drawing process is a thick-walled tube made by hot rolling or cold drawing. In the case of the experiment presented in this paper, the initial tubes were obtained by a cold-drawing process. These tubes were then cut or sawed into pieces of the required length of 500 mm. One end of each tube was tipped to a diameter of Ø11 mm to facilitate proper holding of the tube by the clamping jaws of the testing machine. The shape and dimensions of the initial tube are shown in Fig. 1. The tube was cold-drawn using a drawing tool (drawing die) with a diameter of Ø12 mm. The die material is a Cr-based alloy steel that meets the mechanical properties required of a tool used in cold tube drawing under laboratory conditions. The geometry and dimensions of the drawing tool can be seen in Fig. 2. A ferritic-pearlitic low-carbon steel of class E235 was used for the production of the drawn tube; this type of steel is suitable for the production of seamless tubes by cold drawing technology. The mechanical properties of the steel are as follows: Yield

Research into the Influence of the Plastic Strain Degree on the Drawing Force


Fig. 1. Sketch of the shape and dimensions of the initial tube

Fig. 2. Shape and dimensions of the Ø12 mm drawing die

strength Re = 235–245 MPa, tensile strength Rm = 343–441 MPa, minimum elongation A5 = 24%. This low-carbon steel E235 has guaranteed weldability, good machinability, and both hot and cold formability. The material used in this experiment was supplied in the normalized-annealed condition (890–950 °C). Table 1 shows the chemical composition of the low-carbon steel E235; the values are in accordance with the standard EN 10305-1.

Table 1. Chemical composition of the steel E235 (wt. %)

Element | C     | Si    | Mn    | P     | S     | Cr    | Cu    | Mo    | Ni    | Sn
Content | 0.180 | 0.230 | 1.180 | 0.015 | 0.014 | 0.050 | 0.200 | 0.020 | 0.080 | 0.016

3 Experimental Investigation

For the purposes of the laboratory experiment, a special fixture was designed and manufactured to perform a technological test of seamless tube drawing. The method of clamping the fixture in the hydraulic machine EU 40, with a nominal force of 400 kN, is documented in Fig. 3. All the test tubes in the seamless tube drawing experiment were numbered and then used for single-draw production. Three test specimens each of tubes with diameters of Ø14, Ø16 and Ø18 mm and a wall thickness of 2 mm were drawn to the required outer diameter of Ø12 mm using a drawing tool (die). The drawing speed was 60 mm/min. Liquid mineral oil was applied to reduce the effect of friction. The advantage of drawing tubes with this special fixture is that the EU 40 testing machine records the development of the drawing force required to produce each seamless tube sample. The shape of the tube before and after drawing is shown in Fig. 4.


M. Kapustova et al.

Fig. 3. The hydraulic testing machine EU 40 with a clamped fixture

Fig. 4. Seamless tube before and after drawing

4 Determination and Calculation of the Important Parameters in the Drawing Process

A total of nine tubes with three different initial outer diameters (Ø14, Ø16 and Ø18 mm) and a wall thickness of 2 mm were drawn through the die to a final outer diameter of Ø12 mm. For the subsequent assessment of the influence of the plastic strain degree on the magnitude of the drawing force and the dimensional accuracy of the final seamless tubes, the following parameters were selected for evaluation: the degree of relative deformation (reduction), the degree of actual deformation, and the coefficient of tube elongation. The main evaluation parameter is the reduction, i.e. the relative change in cross-section, which is calculated as:

R = (S0 − S) / S0 · 100 [%]   (4.1)


R – reduction, S0 – tube cross-section before drawing, S – tube cross-section after drawing,

S0 = π·D0²/4 − π·d0²/4 [mm²]   (4.2)

D0 – outer diameter of the tube before drawing, d0 – inner diameter of the tube before drawing,

S = π·D²/4 − π·d²/4 [mm²]   (4.3)

D – outer diameter of the tube after drawing, d – inner diameter of the tube after drawing. Another important parameter is the actual logarithmic strain ϕ, which is calculated as:

ϕ = ln(S0 / S) [−]   (4.4)

Another parameter is the tube elongation coefficient λ, which is determined as follows:

λ = S0 / S = ((D0 − t0) · t0) / ((D − t) · t) [−]   (4.5)

t0 – tube wall thickness before drawing, t – tube wall thickness after drawing. The values of the evaluated parameters were calculated on the basis of the measured outer and inner diameters of the tubes before and after drawing. Tube diameters were measured with a ZEISS CenterMax coordinate measuring machine, which is used for measuring components in industrial production. All the measured dimensions and calculated values of the evaluated parameters are given in Tables 2, 3, 4, 5, 6 and 7.

Table 2. Measured dimensions of tubes and drawing force (material E235, tube diameter Ø14 mm)

Sample no | D0 [mm] | d0 [mm] | D [mm]  | d [mm] | t0 [mm] | t [mm] | Drawing force [kN]
1         | 13.997  | 9.995   | 11.999  | 7.871  | 2.001   | 2.064  | 12.000
2         | 13.992  | 9.998   | 11.994  | 7.897  | 1.997   | 2.049  | 11.840
3         | 13.998  | 10.006  | 11.997  | 7.903  | 1.996   | 2.047  | 12.200

(D0, d0 – outer/inner diameter before drawing; D, d – outer/inner diameter after drawing; t0, t – wall thickness before/after drawing.)

Table 3. Calculated values of deformation, changes in tube wall thickness and stress (material E235, tube diameter Ø14 mm)

Sample no | S0 [mm²] | S [mm²] | R [%]  | ϕ [−] | λ [−] | Δt [mm]
1         | 75.411   | 64.421  | 14.573 | 0.158 | 1.171 | 0.063
2         | 75.254   | 64.005  | 14.948 | 0.162 | 1.176 | 0.052
3         | 75.260   | 63.987  | 14.979 | 0.162 | 1.176 | 0.051
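Equations (4.1)–(4.5) can be checked directly against the measured dimensions. The sketch below is a verification aid, not part of the original paper's toolchain: it recomputes the Table 3 values for sample no. 1 from the Table 2 dimensions, and confirms that the two forms of the elongation coefficient in Eq. (4.5) agree.

```python
import math

def draw_parameters(D0, d0, D, d):
    """Cross-sections and deformation parameters per Eqs. (4.1)-(4.4).

    D0, d0 - outer/inner tube diameter before drawing [mm]
    D,  d  - outer/inner tube diameter after drawing  [mm]
    """
    S0 = math.pi * (D0 ** 2 - d0 ** 2) / 4   # Eq. (4.2): cross-section before [mm^2]
    S = math.pi * (D ** 2 - d ** 2) / 4      # Eq. (4.3): cross-section after [mm^2]
    R = (S0 - S) / S0 * 100                  # Eq. (4.1): reduction [%]
    phi = math.log(S0 / S)                   # Eq. (4.4): true (logarithmic) strain [-]
    lam = S0 / S                             # elongation coefficient [-]
    return S0, S, R, phi, lam

# Sample no. 1 (Ø14 -> Ø12 draw), dimensions taken from Table 2
S0, S, R, phi, lam = draw_parameters(D0=13.997, d0=9.995, D=11.999, d=7.871)

# Eq. (4.5): the same coefficient via the wall thicknesses t0 = 2.001, t = 2.064
lam_t = ((13.997 - 2.001) * 2.001) / ((11.999 - 2.064) * 2.064)

print(f"S0={S0:.3f} S={S:.3f} R={R:.3f} phi={phi:.3f} lam={lam:.3f} lam_t={lam_t:.3f}")
# -> S0=75.411 S=64.421 R=14.573 phi=0.158 lam=1.171 lam_t=1.171
```

The printed values match the first row of Table 3, and the two forms of λ coincide because d0 = D0 − 2t0 and d = D − 2t hold exactly for these measurements.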

Table 4. Measured dimensions of tubes and drawing force (material E235, tube diameter Ø16 mm)

Sample no | D0 [mm] | d0 [mm] | D [mm]  | d [mm] | t0 [mm] | t [mm] | Drawing force [kN]
4         | 16.015  | 11.931  | 11.986  | 7.771  | 2.042   | 2.108  | 21.480
5         | 16.013  | 11.942  | 11.987  | 7.761  | 2.036   | 2.113  | 21.640
6         | 16.020  | 11.951  | 11.997  | 7.654  | 2.035   | 2.173  | 22.000

Table 5. Calculated values of deformation, changes in tube wall thickness and stress (material E235, tube diameter Ø16 mm)

Sample no | S0 [mm²] | S [mm²] | R [%]  | ϕ [−] | λ [−] | Δt [mm]
4         | 89.639   | 65.405  | 27.035 | 0.315 | 1.371 | 0.065
5         | 89.382   | 65.545  | 26.668 | 0.310 | 1.364 | 0.077
6         | 89.389   | 67.065  | 24.974 | 0.287 | 1.333 | 0.139

5 Evaluation of the Influence of Reduction on the Drawing Force and Dimensional Accuracy of Tubes

The influence of the technological parameters, i.e. the reduction and the coefficient of elongation of the tube, on the drawing force is shown in the graphs. The dependence of the drawing force on the reduction is documented in Fig. 5. The course of the graph shows that as the reduction of the tube cross-section increased, the drawing force required to


Table 6. Measured dimensions of tubes and drawing force (material E235, tube diameter Ø18 mm)

Sample no | D0 [mm] | d0 [mm] | D [mm]  | d [mm] | t0 [mm] | t [mm] | Drawing force [kN]
7         | 18.000  | 13.945  | 11.978  | 7.781  | 2.028   | 2.099  | 27.000
8         | 18.032  | 13.977  | 11.999  | 7.658  | 2.028   | 2.171  | 26.800
9         | 18.028  | 13.971  | 11.984  | 7.798  | 2.029   | 2.093  | 26.400

Table 7. Calculated values of deformation, changes in tube wall thickness and stress (material E235, tube diameter Ø18 mm)

Sample no | S0 [mm²] | S [mm²] | R [%]  | ϕ [−] | λ [−] | Δt [mm]
7         | 101.738  | 65.132  | 35.981 | 0.446 | 1.562 | 0.071
8         | 101.942  | 67.019  | 34.258 | 0.419 | 1.521 | 0.143
9         | 101.960  | 65.037  | 36.214 | 0.450 | 1.568 | 0.065

overcome the resistance of the material to forming also increased. Similarly, a higher drawing force was necessary to achieve a higher strain degree. During tube drawing, the tube's cross-section decreased and its length increased, which affected the coefficient of elongation of the tube. The value of this technological parameter increased with increasing drawing force, as can be seen in Fig. 6. During cold drawing of tubes, the wall thickness of the tube may increase or decrease. There may also be cases where the wall thickness does not change and remains the same as in the inlet tube. Any change in thickness during drawing can be affected by the magnitude of the plastic deformation (reduction), but also by the geometry of the die, material defects, or inappropriate previous heat treatment of the semi-finished product. Table 8 documents how the reduction influences the change in tube wall thickness after drawing. The data in Table 8 show that with a smaller reduction of the tube cross-section, the change in the wall thickness of the tube is also smaller. As the reduction increased, the change in tube wall thickness also increased. In the case of drawing the tube specimens from Ø14 to Ø12, the reduction was about 15% and the change in wall thickness of the tube was in the range of 2.5–3.2%.

Fig. 5. Dependence of drawing force on reduction

Fig. 6. Dependence of drawing force on coefficient of elongation

In the case of drawing the tube specimens from Ø16 to Ø12, the reduction was about 27% and the change in wall thickness of the tube was in the range of 3.2–3.8%. In the case of drawing the tube specimens from Ø18 to Ø12, the reduction was about 36% and the change in wall thickness of the tube was in the range of 3.2–3.5%. The highlighted tube samples no. 6 and 8 in Table 8 show up to a twofold change in the wall thickness of the tube at the calculated reduction. This phenomenon can be attributed to structural defects in the material of the drawn tube.

Table 8. Influence of reduction on the change of wall thickness of tube after drawing

Drawing from → to Ø [mm] | Sample no | R [%]  | λ [−] | Change of wall thickness Δt [%]
Ø14 → Ø12                | 1         | 14.573 | 1.171 | 3.148
Ø14 → Ø12                | 2         | 14.948 | 1.176 | 2.579
Ø14 → Ø12                | 3         | 14.979 | 1.176 | 2.555
Ø16 → Ø12                | 4         | 27.035 | 1.371 | 3.208
Ø16 → Ø12                | 5         | 26.668 | 1.364 | 3.807
Ø16 → Ø12                | 6         | 24.974 | 1.333 | 6.808
Ø18 → Ø12                | 7         | 35.981 | 1.562 | 3.502
Ø18 → Ø12                | 8         | 34.258 | 1.521 | 7.053
Ø18 → Ø12                | 9         | 36.214 | 1.568 | 3.180
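The percentage column of Table 8 is consistent with referring the wall-thickness change to the inlet thickness, Δt% = (t − t0)/t0 · 100. This is an inference from the tabulated values, sketched below as a quick check; for some samples the last digit deviates slightly, which suggests the authors worked with unrounded measurements.

```python
def wall_thickness_change(t0, t):
    """Relative wall-thickness change after drawing [%]: (t - t0) / t0 * 100."""
    return (t - t0) / t0 * 100

# Wall thicknesses before/after drawing (t0, t) from Table 2, samples 1 and 3
dt1 = wall_thickness_change(2.001, 2.064)
dt3 = wall_thickness_change(1.996, 2.047)
print(round(dt1, 3), round(dt3, 3))  # -> 3.148 2.555 (matching Table 8)
```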


The values of the change in the wall thickness of the tube varied in the same way, depending on the drawing force but also on the reduction, in the range of 0.051–0.077 mm. This range was exceeded only in the two disputed tube specimens (no. 6 and 8 in Table 8), where the tube material apparently contained internal defects. The dependence of the change in the wall thickness of the tube on the drawing force is shown in Fig. 7, and the dependence of the change in the wall thickness of the tube on the reduction is documented in Fig. 8.

Fig. 7. Dependence of the change of the tube's wall thickness Δt on drawing force

Fig. 8. Dependence of the change of the tube's wall thickness Δt on reduction

6 Conclusions

Tubes are produced by cold drawing with common but also with increased dimensional accuracy. Precision tubes have narrower dimensional tolerances of diameters as well as wall thicknesses, and precision production places increased demands on the quality of the process parameters as well as on the drawing tool. In conclusion, it can be stated that in the laboratory drawing of tubes through the die, at all reductions of the tube cross-section the tube wall only thickened, and the increase in thickness was greater at higher reductions. The measured change in the wall thickness of the tube was in the range of 2.5 to 3.5%, and in the case of two tube specimens up to about 7%. This change in wall thickness was within the permissible tolerances: depending on the intended use, standard precision tubes are supplied with a wall thickness tolerance of −0/+20% if thinning of the tube wall is not permitted, or, by agreement with the customer, precision seamless tubes are delivered with a thickness tolerance of ±10%. Especially in the production of small-diameter tubes, there must be no change in the wall thickness of the tube when cold drawing is the final operation. Practical experiments are therefore of great importance for monitoring the change in the wall thickness of the tube. The research results can help to further develop and optimize the production process of cold drawing of precision seamless tubes.

Acknowledgments. This work was supported by the Slovak Research and Development Agency under Contract no. APVV-18-0418.

References

1. Hosford, W.F., Caddell, R.M.: Metal Forming: Mechanics and Metallurgy. Cambridge University Press, New York (2011)
2. Kumar, P., Agnihotri, G.: Cold drawing process – a review. Int. J. Eng. Res. Appl. 3(3), 988–994 (2013)
3. Altan, T., Oh, S., Gegel, H.: Metal Forming – Fundamentals and Application, pp. 285–288. ASM, Metals Park, OH (1983)
4. Ridzoň, M.: The Effect of Technological Parameters Influencing the Properties of Seamless Cold-Drawn Tubes, 1st edn. Scientific Monographs. Hochschule Anhalt, Köthen (2012)
5. Groover, M.P.: Fundamentals of Modern Manufacturing, 4th edn. Wiley (2010)
6. Karnezis, P., Farrugia, D.C.J.: Study of cold tube drawing by finite-element modelling. J. Mater. Process. Technol. 80–81, 690–694 (1998)
7. Bui, Q.H., Bihamta, R., Guillot, M., D'Amours, G., Rahem, A., Fafard, M.: Investigation of the formability limit of aluminium tubes drawn with variable wall thickness. J. Mater. Process. Technol. 211, 402–414 (2011)
8. Sawamiphakdi, K., Lahoti, G.D., Kropp, P.K.: Simulation of a tube drawing process by the finite element method. J. Mater. Process. Technol. 27, 179–190 (1991)
9. Acharya, J.Y., Hussein, S.M.: FEA based comparative analysis of tube drawing process. Int. J. Innov. Eng. Res. Technol. 1, 1–11 (2014)
10. Bella, P., Durcik, R., Ridzon, M., Parilak, L.: Numerical simulation of cold drawing of steel tubes with straight internal rifling. Proc. Manufact. 15, 320–326 (2018)
11. Kapustova, M., Sobota, R.: The research of influence of strain rate in steel tube cold drawing processes using FEM simulation. In: Novel Trends in Production Devices and Systems V (NTPDS V), pp. 235–242. Trans Tech Publications, Zurich (2019)
12. Mishra, G.K., Singh, P.: Simulation of seamless tube cold drawing process using finite element analysis. Int. J. Sci. Res. Dev. 3, 1286–1291 (2015)
13. Boutenel, F., Delhomme, M., Velay, V., Boman, R.: Finite element modelling of cold drawing for high-precision tubes. C. R. Mecanique 346, 665–677 (2018)

Industry Robotics, Automation and Industrial Robots in Production Systems

An Example of Merit Analysis for Digital Transformation and Robotic Process Automation (RPA)

Hakan Erdem1(B) and Tijen Över Özçelik2

1 IT Application Support, Daikin, Turkey

[email protected]

2 Industrial Engineering, Sakarya University, Sakarya, Turkey

[email protected]

Abstract. In this study, a merit analysis was carried out within the scope of a project decision for Robotic Process Automation (RPA) technology and its application. In recent years, businesses have shown great interest in RPA technology. RPA makes it possible to record the digital tasks that employees perform intensively and repetitively during the day and then to execute them autonomously and very quickly. A software robot with artificial intelligence can eliminate the human requirement in routine work if the process underlying the work is properly defined. In this context, the RPA study was evaluated with the merit analysis, and the break-even point was obtained.

Keywords: Digital transformation · Merit analysis · Robotic process automation · Break-even point · Software robot

1 Introduction

Digital transformation is the holistic transformation carried out by businesses in human, business-process, and technology elements in order to provide more effective and efficient service and ensure beneficiary satisfaction, in line with the opportunities offered by rapidly developing information and communication technologies and changing social needs. Digital transformation cannot be reduced to a few technologies, but the groundbreaking impact of Web 2.0, mobile, broadband internet, cloud computing, digital media, big data, artificial intelligence, augmented reality, the Internet of Things, and 3D printers has started a new era. With digital technologies, analog records were first processed in the digital environment (automation), and processes were transferred to the digital domain (e-service). At this point, all corporate assets and stakeholder relations are redefined in the digital environment (digital transformation). The digitalization process is not one-sided; businesses can always make their automation more efficient with new technologies and improve the digital technology experience in their services. As digital transformation requires agility and adaptability to new conditions and expectations, even the most successful businesses have difficulty fully completing their transformation. Some of the challenges experienced in the digital transformation process are listed below:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 233–242, 2022. https://doi.org/10.1007/978-3-030-90421-0_20


• There is no single, ready-made package solution.
• There is no clear answer about what the solution is.
• Changing habits is not easy, as technology changes rapidly.
• Digital transformation requires transforming and managing many elements (people, process, and technology) together.
• Jobs and transactions do not wait.
• Digital transformation is continuous.
• Digital transformation requires thinking about the past, present, and future at the same time [1].

Digital transformation is about the differentiation of a business through digitalization. Expressed in more detail, digital transformation is the transformation of a company, an industry, a process, or a business model from its physical or analog state to one that is digital, customer-oriented, fast, scalable, and spreadable in a virtual environment, under the pressure of the digitalization trend. Today, the digitalization of industry is discussed intensively; the concept called Industry 4.0 means that production is customer-oriented, flexible, fully integrated, and as autonomous and optimal as possible. In the process optimization study carried out by the company, it was determined that different units spent a lot of time preparing reports. Although the data source of the prepared reports is the ERP system and the extracted data are similar, there is no integrated approach, since each unit obtains data from different fields. In short, although the fundamental processes of preparing the reports were similar, the reports differed. Although the frequency of data extraction was high, extraction took place at different times for each unit. For this reason, as a result of the research carried out, it was determined that feeding the data received from the ERP to the reporting tool via RPA offers a good solution (scenario). Merit analysis was used to determine what kind of return this scenario would bring to the business.
In this way, the time spent and its cost would be determined, and it would be possible to calculate the operating profit by taking into account the amount of investment required for the new process.

2 Literature Review on Robotic Process Automation and Merit Analysis

Robotic Process Automation (RPA) has become popular in recent years. The Covid-19 pandemic certainly had a huge effect on this, but even before the pandemic, companies had started to search for more practical ways of handling digital processes. The digitalization process can be considered in two steps. The first step was digitizing processes from paper to computer; we are now living through the second step, which is optimizing those digital processes. Many working hours are spent in front of the computer, and daily routines need to be automated to a significant extent. RPA is a very effective method for this, and exactly what many companies are looking for. While interest in robotic process automation studies has increased recently, most of the literature is related to the theoretical structure of RPA, or specifically to the financial side and outsourcing [2]. Businesses have repetitive manual tasks that take up valuable time. They are monotonous and consume time that the human workforce could spend on more critical tasks. Robotic Process Automation (RPA)


automates such processes through software robots. RPA simplifies repetitive tasks for business users by having software robots work with an application the same way a person works with it. RPA aims to take over business processes and repetitive work done by people and hand them to a digital workforce: a scalable team of software robots working alongside the human workforce. People then make judgment calls, handle exceptions, and provide oversight. In other words, people can concentrate on jobs with high added value [3].

Fig. 1. RPA usage areas

Regardless of the program or platform used, RPA can perform any repetitive computer task that follows a specific rule. For example, RPA can receive data from an e-mail and enter it into an Excel table, use the data returned from Excel in an SAP module, and print the newly entered data as a PDF. It can transfer a prepared PDF file to a database, send a notification e-mail about it, and report all these transactions in Word, adding the date. Once programmed, it performs all these tasks one after the other in a matter of minutes. A robot can be started and used both in well-known typical applications and in company-specific applications [4]. Figure 1 expresses the RPA usage areas visually. A software robot is a piece of software that can complete repetitive operations otherwise performed by a human. Software robots can communicate with other systems to perform various repetitive tasks, interpreting events and triggering responses. In short, RPA robots can emulate many user actions. An RPA software robot logs into applications, connects to other systems, copies and pastes data, moves files and folders, and reads from and writes to databases. It can also extract and process structured and semi-structured content from documents, PDF files, e-mails, and forms, and pull data from the web, performing calculations along the way. The average knowledge worker employed in a back-office process has many repetitive, routine tasks that are uninteresting. RPA is a type of software that mimics the activity of a human while performing a task within a process. It can do repetitive things faster, more accurately, and more tirelessly than humans, freeing employees to perform tasks that require human strengths such as emotional intelligence, reasoning, judgment, and customer interaction [5].
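The kind of rule-based step described above can be illustrated in plain Python. This is a generic sketch, not the API of any actual RPA product; the record names and the business rule are invented for illustration only.

```python
import csv
import io

# Data a robot might have extracted from an e-mail attachment
# (held in memory here; a real robot would read the actual file)
incoming = io.StringIO("order;qty\nA-100;5\nB-200;0\nC-300;12\n")

# The robot applies one fixed rule to every record - here, "skip empty
# orders" - and collects what should be posted to the next system.
processed = [
    (row["order"], int(row["qty"]))
    for row in csv.DictReader(incoming, delimiter=";")
    if int(row["qty"]) > 0
]
print(processed)  # -> [('A-100', 5), ('C-300', 12)]
```

The point is not the two lines of filtering logic but that the rule is explicit and deterministic, which is exactly the property a process must have before it is a good RPA candidate.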


RPA tools can be considered in four groups. The first is highly specialized software that can only run on specific processes, such as accounting and finance. A second type produces more general (and slower) solutions that collect and synthesize data from documents or the web and save the result to a document on the desktop. A third group consists of robots designed by expert programmers, within a template framework, for customized work in a particular organization, while a fourth comprises enterprise-safe robots that are very fast, scalable, and reusable [6]. There has been an intense and global increase in RPA studies and marketing in the last few years. Within the scope of RPA and Artificial Intelligence (AI) studies, the aim of software robots is based on developing techniques for imitating human action using a combination of declarative rules and cognitive techniques [7] (Fig. 2).

Fig. 2. Steps for a successful RPA process: evaluation of RPA potential → RPA application development → maintaining RPA benefits

The initial decision within RPA means exploring the potential for process automation. Typically, there are dozens of process types and process steps with different levels of automation within a company. For a successful RPA implementation, processes must be scalable, repeatable, and standardized [8]. If an organization runs complex, non-standardized processes, automation needs to be applied with extra care. Duplicating a complex process with many variables using RPA is often tedious and requires significant investment. In addition, the cost of maintaining and servicing the robots may outweigh the savings they achieve. Therefore, before embarking on an RPA initiative, it is crucial to understand the maturity of business processes and decide which processes are standardized enough to benefit from RPA and which processes would first benefit from harmonization and standardization. Once processes are standardized, the highest automation potential in the organization must be identified. Most systems already have some form of automation, so looking at the current automation rates within processes is crucial for defining RPA targets. In addition, there may be specific situations, based on geography, vendor, or material, where manual work is typical. By comparing automation rates, users can discover where existing automation solutions can be improved. Additional automation can create benefits, such as reducing production times or improving


other process-related performance metrics. The next step covers training the RPA application. As a best practice, users should start training robots on the current workflow. The trained robots start their work as part of a pilot project, and the underlying IT systems track their activities. After multiple executions, the generated process instances can be evaluated using a process mining application. The performance of the different robots and of non-robot-assisted processes should be compared to determine the most effective RPA implementation. After selecting and implementing the most effective RPA practices, continuous monitoring ensures that the impact of the RPA initiative, especially the return on investment, is tracked. Process mining allows the user to see how processes have changed over time with RPA. It also immediately detects when a process is evolving and how the robots must adapt to a changing business environment [9]. Although all this is promising, the use of RPA also carries some risks. While RPA can help do routine work faster and with higher quality, it can also produce errors quickly and precisely, since it lacks human control before acting. For example, an inadequate definition of the data or business rules can result in the wrong parts being ordered quickly and in large quantities. In short, using RPA requires detailed knowledge of the business process; otherwise, it may not be possible to achieve the expected performance. However, these risks can be minimized by handling the RPA implementation properly [10, 15]. In short, although the robotic workforce enables more effective use of the human workforce, the human workforce will continue to exist [11]. Metrics are the preferred criteria for planning in a design process, but collecting metrics can sometimes take months, and the iterative work performed can be very costly. For this purpose, figures of merit are used, which are much more cost-effective and much faster to develop.
Figures of merit can be created as the result of one or two days' work by a group of design and production experts [12]. The term Figure of Merit (FOM) has been used for electronic systems since 1965. FOM is a widely used metric that allows a digital system to be characterized and compared in value with similar systems. It is generally used to compare part, vehicle, or system parameters. There is a short list of questions that allows one to determine whether it is appropriate to use merit analysis; merit analysis can be applied if the answers to these questions are all "yes". The figure of merit is based on a cost-benefit analysis: on controlling the costs of starting production and sustaining a designed, developed, and tested product/system until the best design is achieved. The degree to which direct costs influence design decisions, and the balance between benefit and cost, significantly impact the merit analysis. This level of balance in the cost-benefit analysis is accepted as an indicator of the usability and usefulness of merit analysis. When using merit analysis, some conditions need to be considered:

• The merit analysis should consist of independent cost measures.
• All cost factors should be measurable and calculable.
• There should be a clear and definable standard (such as a threshold that separates what is acceptable from what is not), and the analysis should have a structure that allows comparing different processes and systems [13, 14].


3 Merit Analysis Study for RPA

Within the scope of the application, a merit analysis was carried out for a project planned within the Process Development Unit's studies. For reasons of business data protection, the values used are representative but of realistic proportions. The project aims to increase the reporting performance of the units in the enterprise. In the study, it was determined that, owing to intense monthly, weekly, and daily reporting, different teams try to pull similar data from the system in various ways. For this reason, the aim is to centralize the reports and prepare them automatically with the help of an RPA robot. Below are the steps, calculations, and graphics prepared within the scope of using RPA in the reporting process.

Studies and Calculations Related to Costs:

• Three company employees are envisaged for the relevant work, for 45 working days. The cost is 400 TL per person-day (40 TL per person-hour, 10 work-hours per day). Thus, the cost of the company employees for the project = 400 × 3 × 45 = 54,000 TL.
• Support from the headquarters abroad is planned: 2 people are to help with the project for five person-days each, at a cost of 9,000 TL per person-day. Thus, the cost of international support = 2 × 5 × 9,000 = 90,000 TL.
• For process analysis and development, an agreement with a consulting firm is envisaged, covering the entire project for 250,000 TL.
• Within the project's scope, the Datastok (representative name) database is to be purchased, to ensure that the data taken from the system are sent to a central database. Its cost is 200,000 TL.
• Within the project's scope, the RePeA (representative name) robot was purchased from the consulting firm for RPA activities. Its cost is 300,000 TL. An annual maintenance fee of 15,000 TL (for the first year) is budgeted.
An increase of 15% is targeted for the following years.
• A license is required so that the data drawn from the system flow automatically to the database. The license fee is 20,000 TL per year (for the first year); a 15% increase is budgeted for the following years.

The Calculated Earnings Are as Follows:

An evaluation has been made below, taking into account the reports of the different units. Since these reports will be prepared by the robot when the project is completed, all the resource costs mentioned below are considered earnings. (One person-hour is 40 TL; a month is taken as 22 working days, a year as 250 working days and 45 working weeks. The labor cost increase is accepted as 15% per year.)

• The Sales unit prepares one report daily, and one person spends 1 h on this report. Two reports are prepared weekly, and 4 h are spent on each. One report is prepared monthly, and 6 h are spent on it. Resource spent by the unit for reports:

An Example of Merit Analysis for Digital Transformation and RPA










For daily reports: 1 × 1 × 40 = 40 TL/day; monthly: 40 × 22 = 880 TL; annual: 880 × 12 = 10.560 TL.
For weekly reports: 2 × 4 × 40 = 320 TL/week; annual: 320 × 45 = 14.400 TL.
For monthly reports: 1 × 6 × 40 = 240 TL/month; annual: 240 × 12 = 2.880 TL.
Unit total: 10.560 + 14.400 + 2.880 = 27.840 TL/year.
• The Customer unit prepares two reports daily, and one person spends 1 h on each. Two reports are prepared weekly, with 5 h spent on each. Two reports are prepared monthly, with 8 h spent on each. Resources spent by the unit on reports:
For daily reports: 2 × 1 × 40 = 80 TL/day; monthly: 80 × 22 = 1.760 TL; annual: 1.760 × 12 = 21.120 TL.
For weekly reports: 2 × 5 × 40 = 400 TL/week; annual: 400 × 45 = 18.000 TL.
For monthly reports: 2 × 8 × 40 = 640 TL/month; annual: 640 × 12 = 7.680 TL.
Unit total: 21.120 + 18.000 + 7.680 = 46.800 TL/year.
• The Institutional Planning unit prepares one report daily, and one person spends 1 h on it. Three reports are prepared weekly, with 4 h spent on each. Two reports are prepared monthly, with 12 h spent on each. Resources spent by the unit on reports:
For daily reports: 1 × 1 × 40 = 40 TL/day; monthly: 40 × 22 = 880 TL; annual: 880 × 12 = 10.560 TL.
For weekly reports: 3 × 4 × 40 = 480 TL/week; annual: 480 × 45 = 21.600 TL.
For monthly reports: 2 × 12 × 40 = 960 TL/month; annual: 960 × 12 = 11.520 TL.
Unit total: 10.560 + 21.600 + 11.520 = 43.680 TL/year.
• The Finance unit prepares four reports daily, and one person spends 1 h on each. Three reports are prepared weekly, with 4 h spent on each. Two reports are prepared monthly, with 12 h spent on each. Resources spent by the unit on reports:
For daily reports: 4 × 1 × 40 = 160 TL/day; monthly: 160 × 22 = 3.520 TL; annual: 3.520 × 12 = 42.240 TL.
For weekly reports: 3 × 4 × 40 = 480 TL/week; annual: 480 × 45 = 21.600 TL.
For monthly reports: 2 × 12 × 40 = 960 TL/month; annual: 960 × 12 = 11.520 TL.
Unit total: 42.240 + 21.600 + 11.520 = 75.360 TL/year.
• The Marketing unit prepares four reports daily, and one person spends 1 h on each. Three reports are prepared weekly, with 4 h spent on each. One report is prepared monthly, with 6 h spent on it. Resources spent by the unit on reports:
For daily reports: 4 × 1 × 40 = 160 TL/day; monthly: 160 × 22 = 3.520 TL; annual: 3.520 × 12 = 42.240 TL.


H. Erdem and T. Ö. Özçelik

For weekly reports: 3 × 4 × 40 = 480 TL/week; annual: 480 × 45 = 21.600 TL.
For monthly reports: 1 × 6 × 40 = 240 TL/month; annual: 240 × 12 = 2.880 TL.
Unit total: 42.240 + 21.600 + 2.880 = 66.720 TL/year.
• The Credit Management unit prepares two reports daily, and one person spends 1 h on each. Two reports are prepared weekly, with 5 h spent on each. Two reports are prepared monthly, with 8 h spent on each. Resources spent by the unit on reports:
For daily reports: 2 × 1 × 40 = 80 TL/day; monthly: 80 × 22 = 1.760 TL; annual: 1.760 × 12 = 21.120 TL.
For weekly reports: 2 × 5 × 40 = 400 TL/week; annual: 400 × 45 = 18.000 TL.
For monthly reports: 2 × 8 × 40 = 640 TL/month; annual: 640 × 12 = 7.680 TL.
Unit total: 21.120 + 18.000 + 7.680 = 46.800 TL/year.
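Each unit's annual figure above follows one formula: reports × hours per report × 40 TL, annualized with 22 working days per month, 45 working weeks, and 12 months. A minimal sketch of that calculation (the function and data structure are illustrative, not from the paper):

```python
HOURLY_RATE = 40       # TL per person-hour
DAYS_PER_MONTH = 22    # working days in a month
WEEKS_PER_YEAR = 45    # working weeks in a year
MONTHS_PER_YEAR = 12

def annual_reporting_cost(daily, weekly, monthly):
    """Each argument is a (report_count, hours_per_report) pair."""
    daily_tl = daily[0] * daily[1] * HOURLY_RATE      # cost per working day
    weekly_tl = weekly[0] * weekly[1] * HOURLY_RATE   # cost per working week
    monthly_tl = monthly[0] * monthly[1] * HOURLY_RATE
    return (daily_tl * DAYS_PER_MONTH * MONTHS_PER_YEAR
            + weekly_tl * WEEKS_PER_YEAR
            + monthly_tl * MONTHS_PER_YEAR)

units = {
    "Sales": ((1, 1), (2, 4), (1, 6)),
    "Customer": ((2, 1), (2, 5), (2, 8)),
    "Institutional Planning": ((1, 1), (3, 4), (2, 12)),
    "Finance": ((4, 1), (3, 4), (2, 12)),
    "Marketing": ((4, 1), (3, 4), (1, 6)),
    "Credit Management": ((2, 1), (2, 5), (2, 8)),
}
total = sum(annual_reporting_cost(*u) for u in units.values())
print(total)  # 307200 TL/year across all six units
```

The total matches the FY20 "Total" gain row of Table 1.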

4 Conclusions
As can be seen in the chart, the study calculates that the project becomes profitable from 2022 (FY22) onward; on this basis, project approval appears warranted. Based on the above calculations, Table 1 was created (Fig. 3).

Table 1. Chart of Merit (*FY: Financial Year; all values in TL)

| Explanation | *FY20 | FY21 | FY22 | FY23 | FY24 |
|---|---|---|---|---|---|
| Costs | | | | | |
| Internal Employee | 54.000 | | | | |
| Overseas Support | 90.000 | | | | |
| Local Consultant | 250.000 | | | | |
| DataStok Database | 200.000 | | | | |
| RePeA Robot | 315.000 | 17.250 | 19.838 | 22.813 | 26.235 |
| License Fee | 20.000 | 23.000 | 26.450 | 30.418 | 34.980 |
| Total | 929.000 | 40.250 | 46.288 | 53.231 | 61.215 |
| Cumulative Cost | 929.000 | 969.250 | 1.015.538 | 1.068.768 | 1.129.983 |
| Gains | | | | | |
| Sales Unit Reports | 27.840 | 32.016 | 36.818 | 42.341 | 48.692 |
| Customer Unit Reports | 46.800 | 53.820 | 61.893 | 71.177 | 81.853 |
| Institutional Planning Unit Reports | 43.680 | 50.232 | 57.767 | 66.432 | 76.397 |
| Financial Unit Reports | 75.360 | 86.664 | 99.664 | 114.613 | 131.805 |
| Marketing Unit Reports | 66.720 | 76.728 | 88.237 | 101.473 | 116.694 |
| Credit Control Unit Reports | 46.800 | 53.820 | 61.893 | 71.177 | 81.853 |
| Total | 307.200 | 353.280 | 406.272 | 467.213 | 537.295 |
| Cumulative Earnings | 307.200 | 660.480 | 1.066.752 | 1.533.965 | 2.071.260 |
| Cumulative Merit | −621.800 | −308.770 | 51.215 | 465.197 | 941.276 |

Fig. 3. Merit chart (cumulative cost, cumulative earnings, and cumulative merit over FY20–FY24)
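Table 1's cumulative rows follow from the base-year figures and the 15% escalation assumption. The sketch below (variable names are illustrative, not from the paper) reproduces the cumulative merit to within a few lira of the table, the small differences coming from rounding order, and locates the break-even year:

```python
# Reproduce Table 1's cumulative rows from first-year figures and 15%/year escalation.
ONE_OFF = 54_000 + 90_000 + 250_000 + 200_000 + 300_000  # FY20-only investments, TL
recurring = 15_000 + 20_000   # robot maintenance + license, first-year TL
gain = 307_200                # all six units' annual reporting labor, TL
ESCALATION = 1.15

cum_cost = cum_gain = 0
merits, break_even = [], None
for i, fy in enumerate(["FY20", "FY21", "FY22", "FY23", "FY24"]):
    cum_cost += recurring + (ONE_OFF if i == 0 else 0)
    cum_gain += gain
    merits.append(cum_gain - cum_cost)          # cumulative merit for this year
    if break_even is None and merits[-1] > 0:
        break_even = fy                          # first year in the black
    recurring = round(recurring * ESCALATION)
    gain = round(gain * ESCALATION)

print(break_even)   # FY22
print(merits[0])    # -621800, matching the table's -621.800 TL
```

The break-even in FY22 is what the conclusion above refers to as "profitable from 2022".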

References
1. https://www.dijitalakademi.gov.tr/dijital-donusum-nedir. Accessed 29 June 2021
2. Enriquez, J.G., Jimenez-Ramirez, A., Dominguez-Mayo, F.J., Garcia-Garcia, J.A.: Robotic process automation: a scientific and industrial systematic mapping study. IEEE Access 8, 39113–39129 (2020). https://doi.org/10.1109/ACCESS.2020.2974934
3. Huang, F., Vasarhelyi, M.A.: Applying robotic process automation (RPA) in auditing: a framework. Int. J. Account. Inf. Syst. 35, 100433 (2019). https://doi.org/10.1016/j.accinf.2019.100433
4. https://www.novacore.com.tr/rpa-robotik-surec-otomasyonu-nedir.html. Accessed 29 June 2021
5. Osmundsen, K., Iden, J., Bygstad, B.: Organizing robotic process automation: balancing loose and tight coupling. In: Proceedings of the 52nd Hawaii International Conference on System Sciences (2019)
6. Lhuer, X.: The next acronym you need to know about: RPA (robotic process automation) (2016)
7. Beerbaum, D.: Artificial intelligence ethics taxonomy - robotic process automation (RPA) as business case. SSRN Electron. J. (2021). https://doi.org/10.2139/ssrn.3834361
8. https://www.goodreads.com/quotes/70385-the-purpose-of-business-is-to-create-and-keep-a. Accessed 29 June 2021
9. Geyer-Klingeberg, J., Nakladal, J., Baldauf, F., Veit, F.: Process mining and robotic process automation: a perfect match (2018)
10. Kirchmer, M.: Robotic process automation - pragmatic solution or dangerous illusion? (2017)



11. Madakam, S., Holmukhe, R., Jaiswal, D.: The future digital work force: robotic process automation (RPA). J. Inf. Syst. Technol. Manage. 16, 1–17 (2019). https://doi.org/10.4301/S1807-1775201916001
12. https://pcb.iconnect007.media/index.php/article/98335/producibility-and-other-figures-of-merit/98338/?skin=pcb. Accessed 29 June 2021
13. Josephs, H.C.: A figure of merit for digital systems. Microelectron. Reliab. 4(4), 345–350 (1965). https://doi.org/10.1016/0026-2714(65)90171-X
14. https://resources.pcb.cadence.com/blog/2020-using-figure-of-merit-in-electronic-systems-for-pcba-cost-benefit-analysis
15. Stople, A., Steinsund, H., Iden, J., Bygstad, B.: Lightweight IT and the IT function: experiences from robotic process automation in a Norwegian bank. Paper presented at NOKOBIT 2017, Oslo, 27–29 Nov 2017. NOKOBIT, vol. 25, no. 1. Bibsys Open Journal Systems. ISSN 1894-7719

Assembling Automation for Furniture Fittings to Gain Durability and to Increase Productivity
Musaddin Kocaman1, Cihan Mertöz1, Rıza Gökçe1, Sedat Fırat1, Anıl Akdoğan2(B), and Ali Serdar Vanlı2
1 Design and Production Department, Mesan Plastik ve Metal San. A.Ş., Istanbul, Turkey
{musa.kocaman,cihan.mertoz,riza.gokce,sedat.firat}@mesan.com.tr
2 Mechanical Engineering Department, Yildiz Technical University, Istanbul, Turkey
{nomak,svanli}@yildiz.edu.tr

Abstract. The use of furniture fittings consisting of at least three parts is practically obligatory in demountable wooden panel systems in the furniture manufacturing industry. The strength capacity of the existing fasteners produced by the company Mesan was around 500 N on average with manual assembly. Moreover, at least fifteen workers were required to assemble 1400 products per hour at maximum capacity. In the global furniture manufacturing industry, customers expect to be able to disassemble their end products and require durable connecting fittings. In this study, a high-productivity automation cell that assembles three separate parts into one crucial furniture fitting has been established on an industrial scale to meet these expectations. The design of the fitting was re-engineered for these purposes, and a strength capacity of at least 800 N is finally achieved. Additionally, the designed assembly line combines pneumatic and mechanical systems and can assemble 3400 products per hour in total. Keywords: Automation · Product quality · Durability · Productivity

1 Introduction
Demountable furniture, a style of modern furniture, consists of consumer goods delivered as workpieces ready to be assembled at the place of use, and designed to be disassembled, packaged, transported, and used again in another place. Although the production of demountable furniture is largely the same as that of other furniture classes, it differs in its accessories, which are indispensable for such furniture. Different joining methods are therefore used in furniture assembly; they can be categorized as chemical joining, mechanical joining using accessories, and construction joining by machining the furniture itself. Among these, joining with accessories is the most widely used method, and within it, accessories that combine with an eccentric puller are the most common. Accessories of this type are used especially in demountable modular panel furniture. The
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 243–255, 2022. https://doi.org/10.1007/978-3-030-90421-0_21


M. Kocaman et al.

Fig. 1. Zamak Piece with an eccentric puller structure and connecting shaft (Dufix) on which the plastic is mounted, zamak puller head, spindle positions before panel assembly and rotation of the eccentric head for pulling at the panel joint

accessories are mounted into holes drilled in the surfaces and edges of the panel furniture, and the furniture is joined [1]. As can be seen in Fig. 1, drilling is performed in the wood according to the dimensional properties of the Zamak piece with the eccentric puller structure and of the connecting shaft on which the plastic is mounted. The connecting rod with the plastic mounted on it is called a Dufix; in this study, the assembled structure of the shaft and plastic will be referred to as the Dufix. A hole with a diameter of Ø15 mm is drilled into the wood where the Zamak piece sits, and further holes are drilled in the panel edge for the Dufix part. The Dufix shaft is passed through the hole drilled in the middle of the chipboard edge so that it meets the hole of the Zamak part. Meanwhile, the arrow (V) mark on the Zamak part should point downward, that is, toward the shaft, so that the head of the Dufix shaft fits into the drawing channel of the Zamak part. As seen in Fig. 3, in the joint formed by pulling the shaft with the eccentric head, the puller starts at position 1, the pulling process ends at position 2, and the panel assembly is achieved. The arrow mark on the Zamak eccentric structure sweeps an angle of 210° from position 1 to position 2.


Fig. 2. The movement and structure of the chipboard during the pulling process and the knotty structure on the zamak.

During the assembly of the panel furniture, while the eccentric head turns 210° on the outer surface of the panel, the movement of the plastic and the shaft inside the chipboard is shown in Fig. 2. In Fig. 2(a), the head of the drawing shaft fits into the drawing channel of the Zamak part. As seen in Fig. 2(b), while the Zamak part rotates around its own axis, the eccentric structure on the channel ensures that the shaft is pulled towards the Zamak part. The shaft moving towards the Zamak head causes the plastic to open outwards thanks to the conical structure in which it is placed. As the plastic opens outward, the


teeth on the plastic sink into the wood and provide grip. The Dufix, which cannot move outward during pulling thanks to the teeth sunk into the wood, pulls the wood carrying the Zamak piece towards itself, allowing the two panels to be mounted together. The Zamak piece, which provides the 210° rotation, locks by jumping over the articulated structures on the eccentric channel shown in Fig. 2(d). The purpose of furniture connection accessories is to allow assembly and disassembly of furniture that occupies a large volume or that cannot be moved as a whole because of its dimensions and weight. Their biggest advantage over joining with chemical adhesives is that the furniture can be disassembled after assembly, separated into its components, and reassembled. Since these connection parts can be disassembled, they provide a volume advantage and make the furniture easy to transport in a flat parcel. Various wooden materials such as solid wood, chipboard, and MDF-lam are used in furniture assembly, and these materials have a direct effect on the durability of the furniture. Research on the strength of furniture made with these materials and fasteners has produced tensile strength data; according to these data, fasteners mounted in MDF-lam gave better tensile strength results than those mounted in chipboard. Among the joining methods, the Zamak eccentric head showed moderate strength in cabinet screw-joint strength tests [2]. The points observed as a result of our analyses and evaluations of the Zamak eccentric parts and the Dufix are as follows.
A. Conical Structure
Any gap in the conical structure affects both the pulling distance of the shaft and the opening of the plastic in the wood during the pulling process. After pulling, the module recovers itself, but it oscillates because it cannot reach a sufficient pulling distance. In addition, light leaks through the gap formed at the joints of the panels.


Fig. 3. Conical structure that allows the plastic to be opened in the form of dowels in the wood by pulling the shaft in old (a) and new (b) plastics.


B. The Number of Teeth and Their Geometry
The number and geometry of the teeth that allow the plastic to hold in the wood are important. Examination shows that a dense tooth structure adheres better to the wood, and that a helix given to the tooth structure allows the plastic to be easily removed from the wood and reattached (Fig. 4).


Fig. 4. Tooth structure and number of teeth of old (a) and new (b) plastics

C. Four-Piece Formation of the Nail Structure
The nail structure that joins the two plastics is important both for automation and for product shipment and use. The designed nail should be easy to attach and fix in assembly automation; otherwise the plastics do not close completely, and in that case they separate easily from the shaft. In addition, during product shipment or in kit packaging machines, vibration can cause the plastics to separate from the shaft, leaving the product disassembled (Fig. 5).

Fig. 5. Nail structure that allows two plastics to be mounted together

D. Raw Material Selection
The raw material of the plastic part is of great importance for the stretching of the product and for its opening and acting as a dowel. Products made from soft raw materials were observed to pull in easily but also to release more readily than products made from hard raw materials after the furniture is assembled. Material analyses of competing companies' parts gave the following results.


Competitor 1: raw material PPC, 30% glass fiber added, density 1.0861 g/cm³.
Competitor 2: raw material PA6, 4% glass fiber added, density 1.0828 g/cm³.
Competitor 3: raw material PA66, density 1.0791 g/cm³.
A glass fiber additive increases the mechanical properties of a plastic raw material; even where the average fiber length is short, suitable mechanical properties, especially impact strength, are obtained [3–5]. In the pressure casting process used to manufacture the Zamak part, many issues, such as correct selection of the process parameters, the mold runner design, the gas and sediment porosity arising during the process, and the compression time of the part, directly affect the quality and mechanical values of the part. Porosity causes the part to break more easily; the runner design of the part molds and the number of runners are important in preventing these negative factors [6, 7]. It has also been observed that the knotted structure added to the pulling path in the Zamak part is more effective and eliminates the risk of the joint opening back up.
Tensile and torque tests of fasteners were carried out on samples from the products of two international competitors and one national competitor [8, 9]. The results are given in Table 1.

Table 1. Connecting shaft tensile strength test results

| Part number | Mesan Plastik max. tensile strength (N) | Competitor 1 max. tensile strength (N) | Competitor 2 max. tensile strength (N) | Competitor 3 max. tensile strength (N) |
|---|---|---|---|---|
| 1 | 471 | 791 | 704 | 719 |
| 2 | 458 | 606 | 639 | 678 |
| 3 | 455 | 722 | 635 | 685 |
| 4 | 365 | 680 | 650 | 771 |
| 5 | 505 | 693 | 650 | 762 |
| 6 | 506 | 679 | 683 | 623 |
| 7 | 514 | 709 | 657 | 755 |
| 8 | 439 | 717 | 614 | 645 |
| Average | 465 | 700 | 654 | 705 |

Torque tests were applied to the Zamak parts, and the torque values the parts could withstand were recorded (Table 2). Programmable automation is a form of automation for producing hundreds of thousands of workpieces in mass-production environments. Today, most industrial processes in manufacturing environments are automated with the goal of reducing labor cost and improving productivity. Industrial automation has great advantages

Table 2. Zamak part torque test results

| Part number | Mesan Plastik max. torque (Nm) | Competitor 1 max. torque (Nm) | Competitor 2 max. torque (Nm) | Competitor 3 max. torque (Nm) |
|---|---|---|---|---|
| 1 | 6.25 | 8.22 | 6.35 | 6.95 |
| 2 | 6.35 | 9.15 | 6.79 | 6.40 |
| 3 | 6.36 | 9.30 | 5.84 | 7.00 |
| 4 | 6.19 | 10.29 | 6.10 | 6.97 |
| 5 | 6.03 | 9.86 | 7.27 | 8.41 |
| 6 | 6.09 | 10.40 | 5.71 | 8.82 |
| 7 | 5.34 | 9.46 | 6.88 | 6.70 |
| 8 | 5.86 | 10.47 | 7.20 | 7.57 |
| Average | 6.06 | 9.64 | 6.51 | 7.35 |

in activities that were previously carried out manually. Automated production lines consist of a series of workstations connected by a transfer system that moves parts between the stations, with each station designed for a specific task; modern automated lines are controlled by programmable logic controllers. Assembly operations are traditionally performed manually in single assembly workstations or in multi-station assembly lines. Because of the high labor content and cost, more attention has been given to automating assembly work in recent decades. Assembly operations can be automated using production-line principles if quantities are large, the product is small, and the design is simple; for products that do not meet these requirements, manual assembly is generally preferred. Several works detail manufacturing automation using computer-aided systems, and some papers present the effects of gathering and analyzing data across machines in Industry 4.0, enabling faster, more flexible, and more efficient processes [10–16]. In this study, the factors that cause our furniture fastener to test lower than competing fasteners were investigated. Based on the results, it was foreseen that a new design improving product quality would improve the mechanical values of the product, and the work targeted a product able to withstand loads of 800 N and above. Designs with appropriate shape and geometry were made to increase the mechanical values of the product: while our current products have an average load strength of 480 N, the target is a minimum of 800 N, an increase of approximately 65% in mechanical values. While carrying out these studies, an automation cell was also designed to increase production efficiency and output. Production of 1400 products per hour with 15 people was raised to 3400 products per hour with one person at the automation cell, increasing production efficiency by about 140%. While imported equivalent automation cells consume 20 m³ of air per hour, the new automation cell consumes 8 m³, a 60% gain in energy efficiency.
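The headline efficiency figures quoted above can be checked directly from the throughput and air-consumption numbers in the text (the "about 140%" corresponds to the 1400 to 3400 pieces/h jump; variable names are illustrative):

```python
old_rate, new_rate = 1400, 3400     # products per hour, before and after automation
old_staff, new_staff = 15, 1        # workers on the line
old_air, new_air = 20, 8            # m^3 of compressed air per hour

throughput_gain = (new_rate - old_rate) / old_rate            # ~1.43, i.e. roughly 140%
per_worker = (new_rate / new_staff) / (old_rate / old_staff)  # output per worker ratio
air_saving = (old_air - new_air) / old_air                    # 0.60, i.e. 60%
print(round(throughput_gain * 100), round(per_worker), round(air_saving * 100))  # 143 36 60
```

Per worker, the gain is far larger than the headline throughput figure, since one operator replaces fifteen.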


2 The Automation Assembly Cell Design and Mounting
The automation assembly cell, which mounts the Dufix plastics on the Dufix shaft, was designed as shown in Fig. 6. Pneumatic and mechanical systems are used in the automation system. The assembly cycle is provided by a total of 12 stations placed on and around a turntable, with mounting slots that hold two plastic-and-shaft assemblies at the same time. Operational planning and assembly in the cell are as follows.
Station No. 1. This station places the first plastic part on the assembly line during the assembly of the three-part Dufix product. Since the Dufix is mounted horizontally, the first plastic part placed on the line is called the lower plastic, and the plastic placed after the shaft is called the upper plastic. Parallel gripper jaws holding the lower plastic deposit two plastics at a time into the rotary-table slots. The plastic parts, oriented by the vibratory feeder according to the mounting style and carried on the moving carriage with the linear slide, are held by specially designed parallel gripper jaws and placed into the station slot on the turntable.

Fig. 6. Automation system plan view and 3D model

Station No. 2. After the lower plastics are placed in the assembly slots at Station 1, the rotary table moves to control Station 2, which checks whether the lower plastic from Station 1 is placed in the correct position or is missing.
Station No. 3. This station places the Dufix shaft on the Dufix lower plastic deposited at Station 1.


Station No. 4. This station presses the shaft placed at Station 3 into the slot of the lower plastic, engaging the nails on the plastic. This prevents the rotational movement of the turntable from throwing the spindle off the plastic and losing the mounting position.
Station No. 5. Control Station 5 checks whether the lower plastics and shafts in the slots coming from the shaft-driving Station 4 are suitable for the next operation.
Station No. 6. This station places the upper plastic on the shaft driven into the lower plastic at Station 4. Parallel gripper jaws release two upper plastics to the assembly line at the same time. The plastic parts, oriented by the vibratory feeder according to the mounting style and carried on the moving carriage with the linear slide, are held by specially designed parallel gripper jaws and placed into the station slot on the turntable.
Station No. 7. Control Station 7 checks the upper plastics placed in the slots at Station 6 and enables the upper-plastic driving and nail-locking operations at Stations 8 and 9.
Station No. 8. This station drives the upper plastic placed at Station 6 onto the nails of the lower plastic and tightens the nails to ensure better locking. Of the two products lying side by side on the line, this station processes the part on the right (as seen in the top view); the part on the left is fastened and compressed at the next station.
Station No. 9. This station drives the upper plastic placed at Station 6 onto the tabs of the lower plastic and tightens the nails to ensure better locking on the left-hand part.
Station No. 10. This station takes the assembled products from their seats on the turntable and passes them to the packaging department. Depending on the command from the previous control station, parallel gripper jaws take two finished Dufix plastics, or only the right or left assembly, from the assembly line and place them in the packaging area at the same time. This station works in the opposite direction of Station 1 and discharges the finished product.
Station No. 11. This station discharges incorrectly assembled products from the slots on the turntable.
Station No. 12. Control Station 12 checks that the slots are empty after all operations of a completed cycle and allows the bottom plastic to be placed into the slots


at Station No. 1 again. If one or both slots are not empty, the machine stops and gives an alarm so that the operator can intervene.
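The twelve-station cycle described above can be condensed into a simple sequence model. The sketch below is purely illustrative (the station roles are paraphrased from the text; it implies nothing about the actual PLC program): parts advance one station per table step, and the control stations gate the downstream operations.

```python
# Illustrative model of the rotary-table cycle (roles paraphrased from the text).
STATIONS = {
    1:  "load lower plastic (parallel gripper, two slots at once)",
    2:  "check lower plastic present and positioned",
    3:  "place Dufix shaft on lower plastic",
    4:  "press shaft into plastic slot (nail engagement)",
    5:  "check shaft/lower-plastic sub-assembly",
    6:  "place upper plastic on shaft",
    7:  "check upper plastic placement",
    8:  "nail and tighten upper plastic (right-hand part)",
    9:  "nail and tighten upper plastic (left-hand part)",
    10: "unload good parts to packaging",
    11: "eject incorrectly assembled parts",
    12: "verify slots empty before reloading",
}
CONTROL_STATIONS = (2, 5, 7, 12)

def cycle(slot_ok):
    """Walk one slot through the table; slot_ok maps check stations to pass/fail."""
    for station, role in STATIONS.items():
        if station in CONTROL_STATIONS and not slot_ok.get(station, True):
            return f"rejected at station {station} ({role})"
    return "unloaded at station 10"

print(cycle({2: True, 5: True, 7: True}))  # unloaded at station 10
print(cycle({5: False}))  # rejected at station 5 (check shaft/lower-plastic sub-assembly)
```

In the real cell the rejection itself happens physically at Station 11; here the failing check simply reports where the fault was detected.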

3 Testing and Analysis
After the assembly work, both product performance and machine performance were tested and compared. The old-product and new-product test results, measured under identical conditions, are given in Table 3.

Table 3. Old Dufix and new Dufix maximum tensile test comparison after panel assembly

| Part number | Old Dufix max. tensile strength (N) | New Dufix max. tensile strength (N) |
|---|---|---|
| 1 | 586 | 1160 |
| 2 | 579 | 1226 |
| 3 | 538 | 1294 |
| 4 | 521 | 1361 |
| 5 | 540 | 1140 |
| 6 | 481 | 1281 |
| 7 | 552 | 1284 |
| 8 | 524 | 1291 |
| Average | 540 | 1254 |
| Range | 105 | 221 |
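Table 3's summary rows are plain sample statistics over the eight measurements; the small sketch below (helper names illustrative) reproduces them from the transcribed values:

```python
old_dufix = [586, 579, 538, 521, 540, 481, 552, 524]            # N, after panel assembly
new_dufix = [1160, 1226, 1294, 1361, 1140, 1281, 1284, 1291]    # N, redesigned part

def mean(xs):
    return sum(xs) / len(xs)

def value_range(xs):
    return max(xs) - min(xs)

print(mean(old_dufix), value_range(old_dufix))   # 540.125 105
print(mean(new_dufix), value_range(new_dufix))   # 1254.625 221
improvement = (mean(new_dufix) - mean(old_dufix)) / mean(old_dufix)
print(round(improvement * 100))                  # 132
```

The raw means give an improvement of about 132%, in line with the roughly 130% figure reported in the text.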

The tensile test results of the old and new Dufix parts after panel assembly are shared above; they show a 130% increase in tensile value. Examination of the samples in Table 3 shows that the teeth of the old Dufix part come out of the wood at a force of 586 N, whereas the new Dufix part, after the design revision of the teeth of the plastic part, sank well into the wood, and the shaft came out of the plastic only at 1361 N. The angular and structural changes made to the teeth therefore have a clearly positive effect on the result. In addition, to determine the locking force of the nail structure after the Dufix part is assembled with the shaft, an idle pulling apparatus was made: the plastics were held fixed in the tensile tester and the shaft was pulled until the plastics released from the nail structures. This test verified the design and functionality of the locking nail structure in the plastic design. The force-time graphs of the test are shown below.


As seen in Fig. 7, when the locking force (the maximum load) of the old Dufix part reaches 11.65 N, the plastic parts and the shaft start to come apart, whereas for the new Dufix part the plastics and shaft begin to separate only at a maximum locking force of 63.16 N. When these curves are examined, it is clear that the design work on the locking tab is effective, yielding a very good improvement of 470% in the force value.

Fig. 7. Old dufix part and new dufix part claw locking force

As can be seen in Table 4, after the design changes made to the old Zamak part and the change in the molding method, the density of the part improved by 9% on average. In parallel, when the torque values of the old and new parts are compared, an increase of approximately 43% on average is observed, i.e., an increase in the mechanical values of the product. While the products were being tested and verified, the automation cell was also run at full capacity and values were read from the machine's control screen. The data on the control panel of the automation system over 59 min were analyzed; 3456 products had been counted by the 58th minute (Fig. 8). In this project, we started from production of 1400 pieces/hour with a team of 15 people and aimed to design an automation system that would raise this to 1800 pieces/hour. Relative to the preliminary machine design, the mounting slots on the turntable were changed from one to two each, so that two Dufix assembly operations are carried out at each station at the same time, doubling the targeted output. The disadvantage of this change was an increase in the physical dimensions of the automation system; however, this increase was kept at a producible level, the design progressed with double mounting slots, and the automation system was installed. The finished system is shown below.


Table 4. Old Zamak part and new Zamak part torque value and density comparison

| Part number | Old Zamak max. torque (Nm) | Old Zamak density (g/cm³) | New Zamak max. torque (Nm) | New Zamak density (g/cm³) |
|---|---|---|---|---|
| 1 | 6.30 | 5.40 | 8.40 | 5.95 |
| 2 | 6.10 | 5.43 | 8.16 | 5.98 |
| 3 | 5.45 | 5.33 | 8.05 | 6.00 |
| 4 | 6.15 | 5.39 | 8.62 | 6.03 |
| 5 | 6.50 | 5.40 | 9.30 | 5.95 |
| 6 | 5.40 | 5.40 | 8.50 | 5.90 |
| 7 | 5.60 | 5.77 | 8.60 | 5.95 |
| 8 | 6.47 | 5.65 | 8.05 | 5.95 |
| Average | 5.93 | 5.45 | 8.52 | 5.94 |
| Range | 1.1 | 0.44 | 1.25 | 0.07 |

Fig. 8. Automation assembly unit control panel hourly data analysis and project completion image

4 Discussion

The design changes made while adhering to the working principles of the Dufix plastic-and-spindle part and the eccentric Zamak part have provided significant increases in the targeted and examined mechanical values. This increase is attributed to the appropriate positioning of the force angles that enable the plastic part to grip the wood. After the automation cell was designed, a satisfactory increase in daily unit production was observed. Using modern automation technology therefore lowers production costs, while correct designs raise product quality. Thanks to the automation capability, defects caused by manual assembly have been eliminated and the quality of the parts is kept at the desired level; the numerical values given in Sect. 3 confirm this. Traceability of production has also been improved with the automation system.


M. Kocaman et al.

5 Conclusions

When the project inputs and outputs were compared after the Dufix plastic and shaft design, the eccentric Zamak part design and the automation assembly system design, the study was seen to be successful according to the values obtained. The values obtained by testing under the same conditions are briefly summarized below. The average density of the Zamak eccentric piece was increased from 5.45 g/cm³ with the old design to 5.94 g/cm³ with the new design and the corresponding new mold design, an improvement of approximately 9%. In parallel with this improvement, the torque value rose from 5.93 Nm for the old Zamak part to 8.52 Nm for the new one, an improvement of approximately 44%. For the Dufix product, the shrinkage force was improved through the compatible design of the shaft on the plastic and the conical structure inside the plastic. After the revisions to the tooth structure and tooth count of the plastic, the average tensile value of the Dufix part increased from 540 N to 1254 N, a very good result corresponding to an increase of 130%. In addition, after the revision of the nail structure, which directly affects the assembly of the Dufix shaft into the Dufix plastic, the locking force measured in the tensile test rose from 11.65 N for the old plastic to 63.16 N for the new one, an increase of about 470%. The Dufix product, previously produced at 1400 pieces/hour with a staff of 15 people, was targeted at 1800 pieces/hour in the preliminary design at the beginning of the project. After the new design made during the project phase, this target was exceeded, reaching 3456 units per hour, roughly double the target and a 146% increase over the manual rate.

Acknowledgment. This paper is a part of 1501 TUBITAK Project Number: 3191861.

References

1. Tankut, A.N., Tankut, N.: Ülkemizde demonte mobilya yapımında kullanılan özel bağlantı elemanları (Special fixing units in demounted furniture used in Turkey). Z.K.Ü. Bartın Orman Fakültesi Dergisi 3, 77–94 (2001)
2. Güray, A., Kılıç, M., Özyurt, A.: Mobilya köşe birleştirmelerinde kullanılan farklı birleştirme elemanlarının diyagonal çekme direnci üzerine etkilerinin araştırılması. Pamukkale Üniversitesi Mühendislik Fakültesi Mühendislik Bilimleri Dergisi 8, 131–137 (2002)
3. Tan, Y., Wang, X., Dezhen, W.: Preparation, microstructures, and properties of long-glass-fiber-reinforced thermoplastic composites based on polycarbonate/poly(butylene terephthalate) alloys. J. Reinf. Plast. Compos. 34(21), 1804–1820 (2015)
4. Zeng, F., Le Grognec, P., Lacrampe, M.-F., Krawczak, P.: A constitutive model for semi-crystalline polymers at high temperature and finite plastic strain: application to PA6 and PE biaxial stretching. Mech. Mater. 42, 686–697 (2010)
5. Patlazhan, S., Remond, Y.: Structural mechanics of semicrystalline polymers prior to the yield point: a review. J. Mater. Sci. 47, 6749–6767 (2012). https://doi.org/10.1007/s10853-012-6620-y


6. Pinto, H., Silva, F.J.G.: Optimisation of die casting process in Zamak alloy. In: 27th International Conference on Flexible Automation and Intelligent Manufacturing, FAIM2017, Modena, Italy (2017)
7. Vanli, A.S., Akdogan, A., Durakbasa, M.N.: Tools of industry 4.0 on die casting production systems. In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) ISPR 2019. LNME, pp. 328–334. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31343-2_29
8. TSE EN ISO 6789-1: Assembly tools for screws and nuts – Hand torque tools – Part 1: Requirements and methods for design conformance testing and quality conformance testing: minimum requirements for declaration of conformance (2017)
9. ISO 6892-1:2016: Metallic materials – Tensile testing – Part 1: Method of test at room temperature (2016)
10. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.: Industry 4.0: the future of productivity and growth in manufacturing industries. Boston Consulting Group (2015)
11. Groover, M.P.: Automation. Encyclopædia Britannica (2019)
12. Kovács, G.L.: Changing paradigms in manufacturing automation. In: Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, vol. 4, pp. 3343–3348 (1996)
13. Kovács, G.L., Mezgár, I., Kopácsi, S.: Concurrent design of automated manufacturing systems using knowledge processing technology. Comput. Ind. 17, 257–267 (1991)
14. Udroiu, R.: Introductory chapter: integration of computer-aided technologies in product lifecycle management (PLM) and human lifecycle management (HUM). In: Computer-Aided Technologies – Applications in Engineering and Medicine. IntechOpen, pp. 1256–1276 (2016). https://doi.org/10.5772/66202
15. Akdogan, A., Vanli, A.S.: Introductory chapter: mass production and industry 4.0. In: Mass Production Processes. IntechOpen, pp. 1–2 (2020). https://doi.org/10.5772/intechopen.90874
16. Vanli, A.S., Akdoğan, A., Fırat, S., Kocaman, M.: Production productivity by automation application in manufacturing industry. Acta Materialia Turcica 4, 28–32 (2020)

Use of Virtual Reality in RobotStudio

Marian Králik and Vladimír Jerz

Faculty of Mechanical Engineering, Slovak University of Technology in Bratislava, Bratislava, Slovak Republic
{marian.kralik,vladimir.jerz}@stuba.sk

Abstract. The article describes the two goals of the project. The first is to create a functional simulation of a workplace with a robot; the second is to find a suitable smart application for virtual reality using the RobotStudio program. A manipulation procedure was created for the given workplace layout and manipulated part and subsequently simulated in RobotStudio. An experiment with virtual reality is then described, in which we looked for suitable smart applications combining virtual reality and RobotStudio.

Keywords: RobotStudio · Smart component · Point · Trajectory · Simulation · Virtual reality

1 Solution Description

The creation of the application solution was based on the initial layout of the robotic cell. The cell is currently used for presentation purposes, but after certain modifications it can be included in the production process. In the robotic cell, the component is manipulated, technological operations are performed, and the bar code of each manipulated component is scanned. The layout of the robotic cell is shown in Fig. 1. For the required automated operation of the workplace, it was necessary to supplement it with the following components, which were missing: a console with a camera, an output buffer, and a model that represents a pallet on a conveyor on which the manipulated object is placed. The layout used in the application simulation is shown in Fig. 2, where position A represents an inter-operational buffer whose position has been adjusted as described above, position B the console with the camera, position C the output buffer and position D the trolley located on the conveyor.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 256–268, 2022. https://doi.org/10.1007/978-3-030-90421-0_22


Fig. 1. The layout of robotic cell. A - conveyor, B - tool stand, C - mechanical manipulator, D - magnetic manipulator, E - pneumatic manipulator, F - media knot, G - press cell, H - rotary table, CH - intermediate buffer, I - cell for cleaning internal diameter of the manipulated object, J - plexiglass cell door, K - robot ABB IRB 1100, L - mechanism for centring the position of the manipulated part.

Fig. 2. Layout of the robotic workplace, which was used in the application solution. A - interoperational buffer, B - console with camera, C - output buffer, D - trolley placed on the conveyor

2 Creation of the Smart Component - Conveyor

Due to the geometry of the conveyor model and the need to synchronize the removal of the part (a gear wheel), which is taken off as the trolley passes through the removal position, the conveyor was designed as a Smart component. The logic of the component is shown in Fig. 3.

Fig. 3. The logic of “conveyor” Smart component

The Timer element gives a signal to the Source element at the time interval set by us; the Source then creates a copy of the predefined component, which is afterwards placed in the Queue element. The Smart component includes two sensors that sense preset planes; when the scanned object intersects them, the sensor changes its output signal from 0 to 1. One sensor is used to attach the newly created manipulated part, which has been placed in the Queue element, to the trolley, and the other is used to stop the trolley in a predefined position so that the manipulated object can be gripped by a tool mounted on the robot flange. The Comparer logic elements compare the input values with a defined value and generate an output signal of 1 if the comparison is true and 0 if it is false. The Logic Gate - AND element compares its input values and outputs 1 only if both input signals are 1, otherwise 0. The Counter logic element counts the predefined signals. The Move Along Curve element moves or stops the trolley and the manipulated part attached to it when it receives an impulse from the logic components. The conveyor Smart component is set to generate only two components: the first is generated 7 s after the start of the simulation and the second 14 s after the start. The generation time of the manipulated components and their number can be changed according to the requirements of the work cycle. Move Along Curve requires a guide curve along which the movement is performed. The curve can be created in two ways:

1. Create the required curve directly as one element, which is quite demanding in terms of accuracy, because the curve is created as a connection of the points we entered.
2. Create the individual sections (straight, curved) separately and then merge them into one unit.

With the second approach, it is important to create the given sections in one direction, as this direction determines the direction of movement of the object specified in the Move Along Curve element.
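The element chain described above (Timer → Source → Queue, two plane sensors, Comparer/AND gating and a Counter) can be sketched as a small simulation. This is only an illustrative model of the signal flow; the class and signal names below are hypothetical, not RobotStudio API identifiers.

```python
# Hedged sketch of the "conveyor" Smart component logic: the Timer fires
# the Source at preset times, spawned parts wait in the Queue, one sensor
# attaches a part to the trolley and the other stops the trolley.

from collections import deque

class ConveyorLogic:
    def __init__(self, spawn_times=(7.0, 14.0)):
        self.spawn_times = set(spawn_times)  # Timer: when the Source fires
        self.queue = deque()                 # Queue element: spawned parts
        self.counter = 0                     # Counter element
        self.trolley_moving = True           # Move Along Curve state

    def tick(self, t, attach_sensor, stop_sensor):
        # Source: create a copy of the predefined part at the set times
        if t in self.spawn_times:
            self.queue.append(f"part_{self.counter}")
            self.counter += 1
        # Comparer + Logic Gate AND: attach the queued part to the trolley
        # only when the attach sensor reads 1 and a part is available
        attached = None
        if attach_sensor == 1 and self.queue:
            attached = self.queue.popleft()
        # The stop sensor halts the Move Along Curve motion at the pick position
        self.trolley_moving = (stop_sensor == 0)
        return attached

logic = ConveyorLogic()
logic.tick(7.0, attach_sensor=0, stop_sensor=0)        # part spawned
part = logic.tick(8.0, attach_sensor=1, stop_sensor=0) # part attached
print(part)                   # part_0
logic.tick(9.0, attach_sensor=0, stop_sensor=1)
print(logic.trolley_moving)   # False
```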


3 Creation of the Tool

RobotStudio offers two options for creating a tool whose type is not part of the RobotStudio library. When creating a tool, the origin of the coordinate system of its model must be identical to the origin of the coordinate system of the whole cell [1, 3]. If it is not, it is necessary to place it there. This can be done in two ways: by changing the coordinates of the origin of the object so that they match the origin of the cell's coordinate system, or by using the "umiestniť (place)" command, which opens options to place according to:

• one point
• two points
• three points
• a coordinate system
• two coordinate systems

In the “modelovanie (modelling)” tab, the tool can be created by clicking on the create tool function, where we first select the tool geometry, which must be inserted in the given application solution. Then we choose the weight and center of gravity of the tool [2, 8]. After selecting the data, we will assign a Tool Center Point - TCP to the tool (Fig. 4). TCP can be understood as the point at which the robot approaches the defined points.

Fig. 4. Creating a tool via the Create tool function and assigning to a TCP tool
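Geometrically, the TCP is an offset from the robot flange, so the point the robot actually brings to a target is the flange pose composed with that offset. The planar example below illustrates this with hypothetical numbers; it is a sketch of the geometry, not RobotStudio code.

```python
# Illustrative planar composition: rotate the TCP offset into the world
# frame by the flange heading, then translate by the flange position.

import math

def tcp_world_position(flange_xy, flange_theta, tcp_offset_xy):
    fx, fy = flange_xy
    ox, oy = tcp_offset_xy
    c, s = math.cos(flange_theta), math.sin(flange_theta)
    # World position of the TCP = flange position + rotated offset
    return (fx + c * ox - s * oy, fy + s * ox + c * oy)

# Flange at (100, 50) mm, rotated 90°; TCP 30 mm along the tool's x-axis
x, y = tcp_world_position((100.0, 50.0), math.pi / 2, (30.0, 0.0))
print(round(x, 3), round(y, 3))  # 100.0 80.0
```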

The second way is to create a tool as a mechanism. In this way the moving parts of the tool, such as the opening/closing pliers of a mechanical manipulator, can also be formed. It is necessary to select the tool in the "Typ Mechanizmu (Mechanism type)" section. With this type of tool, it is required to determine the flange relative to which the moving part will move or rotate (Fig. 5), where the tab L1 represents the flange and the tab L2 the movable component of the tool. Afterwards, the "Typ kloubu (type of joint)" – "Rotační (rotational)" or "Posuvný (translational)" (Fig. 5) – is selected, together with its range and direction. As can be seen in Fig. 5, we have chosen a translational movement relative to the flange L1, whereby the movable part L2 can perform a translational movement in the range from 0 to 25 mm. The direction of movement is in the positive direction of the y-axis.

Fig. 5. Selection of the flange and the rotating part of the tool and selection of the type of movement and its range

Subsequently, the tool is assigned its weight, positions are created and then the created tools are attached to the robot. This is done in RobotStudio with the Attach command. Due to the limited length of the article, these procedures will not be described.

4 Creation of a Smart Component - Tool Changer

Gripping and changing of tools can be solved in various ways. For example, it is possible to create a Smart component consisting of "attacher" and "detacher" elements, which, based on a digital input, connect or disconnect the element captured by the sensor at the moment the digital input is received [7, 8]. In our application solution, we used another way of exchanging tools, so that there would be no problems if the sensor captured another object. The solution itself can be seen in Fig. 6, and Fig. 7 shows the operation of the tool exchanger in terms of signals. When creating the tool changer, the logic gate "Not" was used, which negates the input parameter, in this case 1 to 0 and 0 to 1. It also contains the "Show" and "Hide" elements and the flip-flop element "LogicSRLatch". In this solution of tool change, it is necessary to have the tool itself connected to the robot flange and a tool model in the stand where the tools are stored (it is sufficient that this is only a model of the tool, as it is a non-functional element stored only in the stand). The exchanger works in such a way that when it receives a digital input, it displays the tool on the flange that was "invisible" until that moment, and hides the tool model in the stand. Of course, after each tool change, the corresponding TCP, which is assigned to the tool we want to

Fig. 6. Tool change solutions

Fig. 7. Tool changer in terms of signals
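The show/hide exchange of Figs. 6 and 7 can be mimicked in a few lines: a latched digital input drives "Show" on the flange tool and, through the negation, "Hide" on the stand model. This is an illustrative sketch with hypothetical names, not RobotStudio code.

```python
# Hedged sketch of the tool-changer logic (NOT gate + LogicSRLatch
# driving the "Show"/"Hide" elements described in the text).

class ToolChanger:
    def __init__(self, name):
        self.name = name
        self.visible_on_flange = False  # tool starts "invisible" on the flange
        self.visible_in_stand = True    # static model shown in the stand

    def set_digital_input(self, di):
        latched = bool(di)                 # LogicSRLatch holds the last state
        not_di = not latched               # Logic Gate NOT
        self.visible_on_flange = latched   # "Show" the tool on the flange
        self.visible_in_stand = not_di     # "Hide" the model in the stand

magnet = ToolChanger("magnetic_gripper")   # hypothetical tool name
magnet.set_digital_input(1)
print(magnet.visible_on_flange, magnet.visible_in_stand)  # True False
```

As the text notes, a real changer also switches the active TCP after each exchange; that bookkeeping is omitted here.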

work with, must also be changed. The same Smart Component must be created for each tool; only the digital inputs and the objects to be displayed or hidden change. In the subsequent solution, it was necessary to create mechanisms for pressing, for centering the position and for a rotary table. The process of creating the individual mechanisms is almost identical to the creation of a tool through the "vytvoriť mechanizmus (create mechanism)" function. One of the differences is that in the "Typ mechanismu (Type of mechanism)" section, we choose the mechanism; furthermore, for mechanisms the TCP, center of gravity and weight are not selected, unlike in the case of tool creation. Then Smart components were created for gripping and releasing the parts by the tools. For all three Smart Components, the solution was analogous, differing only in that each tool has its own sensor and its own digital input. A LineSensor is used for all three Smart Components; it changes the value of the output signal from 0 to 1 if the scanned object "crosses" the straight line specified by two points. The logic of the given Smart Component can be seen in Fig. 8. It consists of a Logic Gate Not, which negates the input signal and outputs this negation. It also includes a flip-flop that rectifies the signals. The "Attacher" element is used to attach one object to another; in our case, the second object is listed in the "Attacher" element as "Parent". The "Detacher" element is used to disconnect the connected object from another. A LineSensor is used to recognize the object to be connected. The LineSensor element must be created and then connected, in our case, to the tool. The attachment is done with the "attach" command, which is necessary because the

tool is moving around the workplace and if the LineSensor is not attached to the tool, its position would not change with the tool position, but would remain fixed according to the start and end coordinates specified in the LineSensor parameters. In this case, the radius parameter can be understood as the “thickness” of the line. The parameters themselves and the LineSensor, which is mounted on a magnetic tool, are shown in Fig. 8.

Fig. 8. LineSensor parameters and LineSensor mounted on a magnetic tool
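Geometrically, the LineSensor test amounts to checking whether a point of the scanned object comes within the sensor's radius (the line's "thickness") of the segment defined by its start and end points. The sketch below illustrates that check with made-up coordinates; RobotStudio evaluates it against the object's full geometry.

```python
# Illustrative point-to-segment distance check: output 1 when the object
# point lies within "radius" of the sensor segment, otherwise 0.

import math

def line_sensor(start, end, point, radius):
    ax, ay, az = start
    bx, by, bz = end
    ab = (bx - ax, by - ay, bz - az)
    ap = (point[0] - ax, point[1] - ay, point[2] - az)
    ab2 = sum(c * c for c in ab)
    # Project the point onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, sum(u * v for u, v in zip(ab, ap)) / ab2))
    closest = (ax + t * ab[0], ay + t * ab[1], az + t * ab[2])
    return 1 if math.dist(point, closest) <= radius else 0

# Sensor along a 25 mm gripping axis (hypothetical coordinates, mm)
print(line_sensor((0, 0, 0), (0, 0, 25), (1, 0, 10), radius=3))   # 1
print(line_sensor((0, 0, 0), (0, 0, 25), (10, 0, 10), radius=3))  # 0
```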

5 Creation of Points and Path Trajectories

There are two ways to create the points to which the robot will move with the specified end effector in automatic mode:

1. move to the point with the given effector in manual mode, or
2. specify the coordinates and orientation of the point.

Manual mode offers a wide range of options: the specified object can be moved translationally, the selected object can be rotated around the selected coordinate system, the object can be moved within the physical possibilities, or the individual joints of the robot can be moved [4, 8]. Manual mode itself is shown in Fig. 9, where position A marks the translational-movement button and position B the rotary movement. A position we move into in manual mode can be saved as a taught point. The accuracy with which the tool approaches the desired point in automatic mode is higher if the point was created in the second way, by entering the desired position and, if needed, rotating the tool. This is because in manual mode we may not move exactly to the intended point when creating it, which can result in reaching a position different from the one we want. The creation of a point by entering coordinates is shown in Fig. 10. It often happens that the initial orientation when creating a point by entering coordinates is not appropriate. Therefore, it is necessary to check each point individually. This


Fig. 9. Movement in manual mode

Fig. 10. Dialog box for entering the coordinate of a point

can be done using the "Zobrazit Robot v cíli (Show Robot in target)" function, where the robot is displayed with the selected tool at the given point with the corresponding orientation. In Fig. 11 an incorrect orientation of the tool can be seen; it subsequently had to be adjusted by rotating it by about 90° about the x-axis. The corrected orientation is also shown in Fig. 11.

Fig. 11. Incorrect and correct tool orientation after correction
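As a numerical illustration of such a correction, a 90° rotation about the x-axis can be applied to a target orientation expressed as a unit quaternion. The values below are illustrative only; they are not taken from the application solution.

```python
# Hedged sketch: composing a target orientation with a 90° x-axis
# rotation using quaternion multiplication.

import math

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# 90° rotation about x as a quaternion: (cos 45°, sin 45°, 0, 0)
rot_x_90 = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)

identity = (1.0, 0.0, 0.0, 0.0)           # original (incorrect) orientation
corrected = quat_mul(rot_x_90, identity)  # apply the 90° x-axis correction
print([round(c, 4) for c in corrected])   # [0.7071, 0.7071, 0.0, 0.0]
```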

264

M. Králik and V. Jerz

In our application solution, the above-mentioned Action Instructions are used mainly in connection with changing the value of a digital output parameter. Almost all Smart components in our application solution are controlled by changing the value of a digital input. For this change to take place, it is necessary to enter an Action Instruction that changes the value of the corresponding digital output. A digital output is a parameter stored in the virtual controller section, which is responsible for the communication of the robot within its environment; each robot is assigned its own virtual controller. Each digital input controlled by a Smart Component has one digital output assigned to it: changing the value of the digital output from 0 to 1 changes the value of its assigned digital input from 0 to 1, and likewise from 1 to 0. In Fig. 12 the digital outputs can be seen in the "Zdrojový signál (Source signal)" column and the associated digital inputs in the "Cílový signál nebo vlastnost (Target signal or property)" column. The "Cílový objekt (Target object)" column shows the individual Smart Components to which the individual digital inputs are assigned.

Fig. 12. Digital outputs and their associated digital inputs
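The one-to-one cross connection described above can be pictured as a simple mapping from each virtual-controller digital output to its Smart Component input, so that toggling the output toggles exactly one input. All signal and component names below are hypothetical.

```python
# Hedged sketch of the signal cross connections: each digital output
# drives exactly one (Smart Component, digital input) pair.

cross_connections = {
    "do_grip_magnet":  ("SC_MagnetTool", "di_attach"),
    "do_tool_change":  ("SC_ToolChanger", "di_show"),
    "do_conveyor_run": ("SC_Conveyor", "di_start"),
}

inputs = {}  # current value of each Smart Component input

def set_digital_output(name, value):
    target, di = cross_connections[name]
    inputs[(target, di)] = value  # the input mirrors the output 1:1

set_digital_output("do_grip_magnet", 1)
print(inputs[("SC_MagnetTool", "di_attach")])  # 1
```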

6 Creation of the RAPID Control Program

After creating the paths with the appropriate trajectories and Action Instructions, it is necessary to create a RAPID robot control program. This involves the synchronization and loading of all data, trajectories, points, etc. into the robot controller. It is also possible to enter various conditions in RAPID, which determine the operation of the individual trajectories, their number, etc. At the same time, the program can be transferred from RAPID to the real robot itself. In Fig. 13 a part of the RAPID code of the application solution can be seen. Various subroutines/trajectories can be created, but they must be called from the "main" routine, because RobotStudio enforces the RAPID syntax requirements. After creating the RAPID program, it was possible to run the simulation and verify the correctness of the created trajectories, action instructions, functionality of the mechanisms, etc. One cycle lasted about 77 s.


Fig. 13. Part of the RAPID control program for the given application

7 Use of Virtual Reality in the RobotStudio Program

Currently, one of the best ways to present an application solution to a customer is through virtual reality. RobotStudio offers the possibility to describe the individual parts of the cell [8]. Virtual reality allows us to walk around the cell virtually and view the individual parts, or to run the simulation itself and view it in 3D. The description of the parts can be seen in Fig. 14.

Fig. 14. Description of individual parts of the application solution in virtual reality

At the same time, it is possible to present the application solution remotely through the "meeting" function, which allows the customer to connect from a distance. This function supports virtual reality. Virtual reality allows us to grab and explore individual objects within the workplace. In Fig. 15 we can see the "Grab objects" function, which allows us to grab an object and the mechanical tool at the same time.

Fig. 15. Grab objects function and gripped mechanical tool

8 Robot Path Programming, Speed Adjustment, Zone and Robot Configuration in Virtual Reality

Virtual reality also makes it possible to program or edit existing points and trajectories. Manipulating the robot in virtual reality is only possible in manual mode. Points are created by moving the active TCP to the desired position and creating a point by selecting the "Teach" function. This way of creating points is limited by the accuracy with which we can place them; therefore, this method is not recommended for creating a trajectory that requires higher accuracy. In Fig. 16 the "Teach" function can be seen together with the selected movement type MoveL, which is linear motion. At the same time, a trajectory is selected to which the given point is added, "Path_10".

Fig. 16. Creating a point in virtual reality


In Fig. 17 the adjustment of the speed between the individual points can be seen. The adjustment is performed by clicking on the required section, whereupon the possible speeds to which the given section can be set are displayed. The zone is modified in a similar way to the speed; the only difference is that we do not choose a section but the desired point at which we want to make the change. Changing the robot configuration follows the same procedure as changing the zone: clicking on the desired point at which we want to change the configuration.

Fig. 17. Adjusting the speed of individual sections

Virtual reality also allows us to change the positions of individual points. To do this, we select the editing mode, which can be seen in Fig. 18. The position of a point is changed by clicking on it and holding down the button on the left joystick; the selected point then follows the movement of the left joystick.

Fig. 18. Trajectory adjustment mode


9 Conclusion - Vision of the Possibilities of Using Virtual Reality in the Future

For safety reasons, it is not currently possible to control a real robot through virtual reality. This is expected to change in the near future thanks to technological advancements. Such a smart application will make it possible to debug the program under real workplace conditions, which will significantly reduce the possibility of collision [6]. Another option is the use of smart applications that allow offline programming of the robot. The article also points out the possibility of presenting the application solution in virtual reality. Virtual reality can also be used for training purposes [5]. The article presents research results from the project KEGA 027STU-4/2019.

Acknowledgment. This paper was created with the support of the Cultural and Educational Grant Agency of the Ministry of Education, Youth and Sports of the Slovak Republic under the KEGA 027STU-4/2019 contract.

References

1. Záhonová, V., Králik, M.: Aplikácia špeciálnych matematických metód pri riešení vybraných úloh strojárskeho charakteru, 1st edn. FX s.r.o., Bratislava, 109 p. (2008). ISBN 978-80-89313-38-9
2. Božek, P.: Špecializované robotické systémy. ÁMOS, Ostrava (2011). ISBN 978-80-904766-3-9
3. Murray, R.M., Li, Z., Sastry, S.S.: A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton (1994). ISBN 0-8493-7981-4
4. Angeles, J.: Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms, 3rd edn. Springer, Montreal (2007). ISBN 0-387-29412-0
5. Kuna, P., Kozík, T., Kunová, S.: Virtuálna realita a vzdelávanie (Virtual reality and education) [online] (2017). Available: https://repozytorium.ur.edu.pl/bitstream/handle/item/2501/16%20kuna-virtu%c3%a1lna%20realita.pdf?sequence=1&isAllowed=y
6. Komák, M., Králik, M., Jerz, V.: The generation of robot effector trajectory avoiding obstacles. MM Sci. J. 2018, 2367–2372 (2018). ISSN 1803-1269
7. Králik, M., Fiťka, I.: Základy obsluhy priemyselných robotov, 1st edn. Nezisková organizácia Centrum kontinuálneho vzdelávania, Bratislava, 124 p. (2020). ISBN 978-80-973700-0-8
8. Drozd, R.: Project of a robotic workplace for selected applications. Final thesis, supervisor: Assoc. Prof. Marian Králik. Slovak University of Technology in Bratislava, Faculty of Mechanical Engineering, p. 69 (2021)

Industry 4.0 Applications

Direct-Drive Integrated Motor/controller with IoT and GIS Management for Direct Sowing and High Efficiency Food Production

Rodrigo Alcoberro 1,2, Jorge Bauer 1,2, and Numan M. Durakbasa 1

1 TU-Wien, Vienna, Austria
[email protected], [email protected]
2 UTN – FRBA, Buenos Aires, Argentina

Abstract. Industry 4.0 is growing fast in the industrial sector, but it is at a more incipient phase in the agricultural food-production sector, also called the open-air factory. In this paper we present the design, using finite element analysis, and the prototype manufacturing of an integrated motor and controller system for direct-seeding applications. With measurements on the built prototype, the parameters simulated in the motor design were validated. In order to eliminate reduction mechanisms, the motor drives the seed feeder directly. This requires a careful design that minimizes cogging torque and torque ripple as much as possible, since a precise seed-planting position is required. The motor and electronic controller replace the complex mechanism system used to move the seed feeders of each row/groove. By having independent control of the rotation speed of each row, the number of seeds per meter can be governed according to different variables, such as the performance of the soil, whether the seeder is turning, or terrain irregularities, among others. This flexible production method also allows reconfiguring the type of seed to use without changing any gear ratio, as this is done through the communication interface. The management and monitoring of the system are carried out through IoT technology protocols. The system uses GIS and soil surveys to optimize planting performance.

Keywords: Industry 4.0 applications · IoT and production systems · Flexible manufacturing systems · Productivity and performance management · Production planning and control · Simulation and modelling · Sustainability

1 Introduction

In order to achieve higher productivity in the sown fields, different techniques have been developed over the years. These techniques take advantage of new technologies to improve soil performance. The food-production sector, covering fruits, seeds and vegetables, is currently highly technical, and year after year it incorporates new technologies, but there is still much to do. Global positioning technologies and Geographic Information Systems (GIS) are widespread in this productive sector, and little by little, semi-autonomous agricultural machines are beginning to be seen working in the fields.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 271–280, 2022. https://doi.org/10.1007/978-3-030-90421-0_23


Direct sowing was born in this scenario of production optimization and cost reduction. In this method of sowing, also called conservation tillage or zero tillage, the sowing is performed without prior plowing of the field, directly on the stubble. Direct seeding is the central element in what is now called conservation agriculture. It represents a considerable advance in crop-production technology because it makes agriculture harmoniously related to nature. It implements ideas initially proposed by Edward Faulkner in his groundbreaking book, published in the United States in 1947; Faulkner had the audacity to challenge the paradigm that tilling the soil was beneficial [1]. Figure 1 shows the main components of a direct-sowing train: a disc or blades intended to open the furrow (A), a fertilizer dispenser and the seed dispenser (B), the depth-limiting wheel (C), the roller that holds the seeds (D) and the blades that close the soil, leaving the seeds sown (E).

Fig. 1. Direct sowing train

The seed feeder consists of a disk to which the seeds are forced to adhere, generally by applying a vacuum over it. The disk has evenly spaced cells in which the seeds are held by the vacuum produced by a pneumatic system. The disk rotates, dropping the seeds one by one on the ground. The spacing between the seed cells, the rotation speed of the disc and the translation speed of the machine determine the number of seeds per meter introduced into the soil. An average-size planter has 30 seed furrows, which means that it has 30 seed feeders. To rotate these 30 feeders, a complex system of mechanisms, gears and transmissions is currently used. The power is taken from the tractor and, although the efficiency of the system has improved with the use of bearings and better production techniques, the power losses in the set of transmissions and gears are considerable. This complexity also makes the system more expensive and more prone to failure. In addition, with the current system all the discs move at the same speed, regardless of whether the machine is passing through a low-performance zone or is turning. This wastes seeds and fertilizer and/or makes suboptimal use of the soil. Moreover, to change from closely spaced to widely spaced seeding, the machine must be mechanically reconfigured.

Direct-Drive Integrated Motor/controller with IoT

273

In summary, the difficulties that arise with current mechanical drives in seeders are:

• Mechanical complexity, with multiple transmissions and linkages
• The impossibility of varying the distance between plants independently per row
• The impossibility of keeping the seeds equidistant when the seed drill turns
• Low efficiency
• Multiple points of failure

In this work we present a novel electric motor and electronic controller system that moves each seed feeder independently, governed by a central control unit holding geolocated data on soil performance. Although electrically driven seed feeders already exist, this motor design drives the feeder directly, without gears. This increases the efficiency of the system, reduces the possibility of failure and improves its dynamic response, among other benefits.
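The benefit of independent per-row control can be illustrated with a small calculation (our own sketch, not code from the paper): when the machine turns, each row's feeder speed should scale with the turning radius of its own furrow.

```python
def row_speed_factor(turn_radius_m: float, row_offset_m: float) -> float:
    """Scale factor for one row's feeder speed during a turn.

    turn_radius_m: turning radius of the machine centre.
    row_offset_m: lateral offset of the row from the centre
                  (negative towards the inside of the turn).
    """
    if turn_radius_m <= 0:
        raise ValueError("turn radius must be positive")
    return (turn_radius_m + row_offset_m) / turn_radius_m

# An 8 m wide planter turning with a 4 m radius, as in the example below:
# the innermost row (offset -4 m) stands still, the outermost doubles.
print(row_speed_factor(4.0, -4.0))  # 0.0
print(row_speed_factor(4.0, 4.0))   # 2.0
```

A central controller would multiply each row's nominal disc speed by this factor, which a mechanically ganged drive cannot do.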

2 GIS and Soil Mapping for Precision Agriculture
The concept on which precision agriculture is based is to apply the right amount of seed, at the right time and in the right place. The use of information technology improves soil management by taking into account the high variability within the sown area. Precision agriculture involves the use of global positioning systems (GPS), software and electronics to obtain crop data. Precision agriculture technologies make it possible to satisfy one of the demands of modern agriculture: the optimal management of large areas. The main advantage is that test results can be analyzed separately for different sectors within the same plot, and the management within them adjusted accordingly. For example, the yields of two crops may look identical if averages are used, yet be diametrically opposite on a hilltop and in a low-lying area of a given plot. Such insight can only be obtained by producing a yield map. In the same way, the type and dose of fertilizer to apply, the seed density, the sowing date, and the spacing between rows can be analyzed. The use of precision agriculture technologies can help improve margins through an increase in the value of the yield (quantity or quality), a reduction in the quantity of inputs, or both simultaneously [2].

3 Motor Controller Challenges
In this application, the motor and controller assembly faces some significant challenges. Speed control must be precise over a wide range of RPM, and the chosen rotation speed can vary due to different factors and from one moment to the next. Some of those factors are listed below.

1. Distance between plants: this distance varies depending on the type of plant, but also on the performance of a given area of the land.
2. Seeder machine turns: on uneven terrain or at the headlands, the seeder must turn, and at that moment the difference in distance traveled between the feeder in the inner furrow and the feeder in the outer furrow is considerable. Taking


an average planter 8 m wide and a hypothetical turning radius of 4 m, the feeder on the inner side of the turn would stand still while the one on the outer side would move at twice the tractor's translation speed.
3. Yield of each portion of the land: to optimize the amount of seed and fertilizer used, more is sown in the areas of the land with higher yield and less in the others; for this reason, a flexible and independent speed control is desirable for each sowing furrow.
4. Seed type: the recommended number of plants per meter changes depending on the type of seed. In mechanical seeders, the speed change is made by configuring the drives and pulleys designed for this purpose.
5. Machine speed: with the proposed system, the seeds can be dosed in their optimal quantity depending on the real speed of the seeder.

Table 1 shows the minimum and maximum rotational speeds that the motor-controller assembly must be able to handle. As can be seen, the required range spans from approximately 3 RPM to 90 RPM.

Table 1. Motor/controller speed requirements

Scenario       Seed cells   Sowing speed (km/h)   Seed qty (seed/m)   Direct-drive motor speed (RPM)
Corn minimum   35           4                     1.5                 2.86
Corn maximum   35           9                     4                   17.14
Soy minimum    50           3                     10                  10.00
Soy maximum    50           11                    24                  88.00
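The relationship behind Table 1 can be written down directly: the disc must drop (seeds per meter × ground speed) seeds per minute, spread over the disc's cells. A small sketch of this calculation (our own formulation, consistent with the table's values):

```python
def disc_rpm(seed_cells: int, speed_kmh: float, seeds_per_meter: float) -> float:
    """Required seed-disc speed: seeds dropped per minute divided by
    the number of cells on the disc."""
    meters_per_minute = speed_kmh * 1000.0 / 60.0
    return seeds_per_meter * meters_per_minute / seed_cells

print(round(disc_rpm(35, 4, 1.5), 2))   # 2.86  (corn minimum)
print(round(disc_rpm(35, 9, 4), 2))     # 17.14 (corn maximum)
print(round(disc_rpm(50, 3, 10), 2))    # 10.0  (soy minimum)
print(round(disc_rpm(50, 11, 24), 2))   # 88.0  (soy maximum)
```

With independent electric drives, the controller can evaluate this expression per row in real time instead of relying on fixed gear ratios.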

4 Motor Design and Simulation
As mentioned in [3–5], we continuously develop our own software for the design and virtual testing (validation) of permanent magnet electric motors. This software is a collection of functions and scripts written in Matlab/Octave language and compatible with both applications. It allows the 2D CAD of the motor to be created from its geometric parameters and operating simulations of the motor under design to be carried out. It can also be used as a "virtual electric motor test bench". We use FEMM [6] for the finite element calculations, a tool that has been extensively studied and validated in numerous research works [7–9]. Another section


of the software that we use for the design of electric motors is a preprocessor, which derives the geometric parameters of the motor from requirements such as power, torque, RPM and efficiency. Currently the preprocessor and the simulation software are not connected, and the output data must be passed manually from the preprocessor to the simulator. Work is being done on their integration and also on automatic optimization of the design. Some of the output parameters of the preprocessor, which are inputs to the simulator, are: number of poles, number of coils, slot width, polar expansion measurements, rotor diameters, stator diameters, etc. The entire design process is iterative, that is, the input parameters are fed back with the results of the simulations, but this is done manually by a designer with knowledge and experience in motor design, not automatically. This simulation suite allows us to obtain with high fidelity the motor torque production as a function of the phase current, in magnitude and angle, the torque constant, and the losses due to the Joule effect, eddy currents, hysteresis and skin effect, the latter both as a function of the phase current and of the electrical frequency. We can also obtain the phase inductance and resistance, the core saturation as a function of the current, the cogging torque, the torque ripple and the flux linkage. For this research, six different motor architectures were analyzed with these mathematical tools: stators of 36, 39 and 45 coils and rotors with 42, 44, 50 and 52 magnets. The analysis included the winding torque factor, resistance, inductance, torque, currents, topologies, efficiency, assembly data, etc. A spreadsheet with the input data and the results of the calculations mentioned was produced for the chosen topology: 44 poles, 39 windings. Of the six preliminary designs mentioned, three were taken forward and their data entered into the simulator to produce the 2D CAD.
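A quick way to compare such slot/pole candidates is sketched below (our own illustration, using the common rule of thumb that the fundamental cogging order per mechanical revolution is lcm(slots, poles), so larger values suggest lower cogging amplitude):

```python
from math import gcd

def cogging_index(slots: int, poles: int) -> int:
    """Fundamental cogging-torque order per revolution: lcm(slots, poles)."""
    return slots * poles // gcd(slots, poles)

def slots_per_pole_per_phase(slots: int, poles: int, phases: int = 3) -> float:
    return slots / (poles * phases)

# Candidate combinations considered in the paper:
for slots in (36, 39, 45):
    for poles in (42, 44, 50, 52):
        print(slots, poles, cogging_index(slots, poles),
              round(slots_per_pole_per_phase(slots, poles), 2))
# The chosen 39-slot / 44-pole topology gives lcm = 1716 and q = 0.30,
# matching the slot/pole/phase value reported in Table 2.
```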
From this step, the complete CAD from the electromagnetic point of view is obtained, with which the simulations are carried out. After several iterations, the characteristics of the motor obtained are shown in Table 2.

Table 2. Motor results

Parameter                                Value   Units
Voltage                                  12      V
Max output power                         30      W
Max output power @ max battery current   15      W
Max battery current                      1.5     A
Max phase current                        5.5     A
Phase qty                                3
Poles qty                                44
Windings qty                             39
Slots/pole/phase                         0.30
Air gap flux density                     0.84    T
Core flux density (max)                  1.2     T
Back-iron flux density                   1.25    T
Winding fill factor                      41.5    %
Kv                                       10.3    RPM/V
Kt                                       0.92    Nm/A
Phase resistance                         0.46    ohm
Phase inductance                         470     µH

Figure 2 shows the magnetic flux density obtained with FEMM, and Fig. 3 shows the simulated flux linkage for the three phases.

Fig. 2. Motor density plot

5 Electronic Controller and Internet of Things
A VESC 4 [10], an open hardware and firmware controller, is used as the electronic motor controller. The firmware of this controller was modified for the seeding application, with a special focus on precise speed control. This electronic controller provides real-time telemetry, and its serial interface is used to send the telemetry data through the MQTT protocol.


Fig. 3. Motor flux simulation

IoT technology is firmly established in homes and is gradually gaining a place in industry, thanks to its virtues for the control and telemetry of machines and devices. Industry takes time to adopt new technologies, partly because processes have to change and the investment is large, and partly because the control and management technology must be reliable, since a stop in the production line means a lot of lost money. In primary food production, such as farming, this technology is only beginning to be adopted, so there is much to be done. Producers in the sector are, in general, very conservative and need extremely reliable machines that work without problems in extreme climatic conditions. The system proposed in this work transmits the telemetry data through the standard MQTT protocol to a broker, which inserts it into an InfluxDB database. The data currently sent are: rotational speed, input voltage, input (DC) current, phase current, and the temperatures of the motor and the controller. This data, together with the positioning information of the seeder machine, gives us real-time information on the seeding operation. Thresholds are also set to trigger alarms in the event of a temperature rise or current overload in any of the motors. We believe that this technology, together with the deployment of 5G networks, will strongly benefit the agro-industry sector and is a necessary step towards fully autonomous machines.
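As a sketch of what one such telemetry message might look like (the field names and MQTT topic below are our own illustrative choices; the paper only lists the quantities sent):

```python
import json
import time

def telemetry_payload(row, rpm, v_in, i_dc, i_phase, t_motor, t_ctrl):
    """Build one JSON telemetry message for a seed-feeder motor.
    Field names are illustrative, not taken from the paper."""
    return json.dumps({
        "row": row,
        "timestamp": time.time(),
        "rpm": rpm,
        "input_voltage_v": v_in,
        "input_current_a": i_dc,
        "phase_current_a": i_phase,
        "motor_temp_c": t_motor,
        "controller_temp_c": t_ctrl,
    })

msg = telemetry_payload(7, 17.1, 12.3, 1.2, 4.8, 41.0, 38.5)

# With a client such as paho-mqtt, the message would then be published to
# the broker that feeds the InfluxDB database, e.g.:
#   client.publish("seeder/row/7/telemetry", msg)
```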

6 Motor Testing
To test the motor, with the main purpose of validating the simulations carried out, we built a small test bench with a friction brake. We use the open-source application VESC Tool [11]; together with the VESC electronic controller, it can display all telemetry data in real time and store it in a CSV file for further processing. Figure 4 shows the motor manufactured for the tests and the VESC Tool screen displaying the measured real-time data.


Fig. 4. Motor prototype and VESC measurement Tool

A backbone of this whole research effort is testing and improving our own design and simulation toolset. Table 3 shows the differences between the simulated and measured parameters.

Table 3. Measured vs. simulated parameters

Parameter                 Simulated   Measured   Difference
Torque constant (Nm/A)    0.91        0.92       1%
Cogging torque max (Nm)   0.02        0.015      33%
Phase resistance (ohm)    0.43        0.46       6.5%
Phase inductance (µH)     275         470        40%
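The difference column can be reproduced as a relative error referenced to the measured value (our own reading of how the table's percentages were computed; small rounding differences remain):

```python
def rel_diff_percent(simulated: float, measured: float) -> float:
    """Relative difference between simulation and measurement,
    referenced to the measured value."""
    return abs(simulated - measured) / abs(measured) * 100.0

print(round(rel_diff_percent(0.91, 0.92), 1))    # 1.1  (torque constant)
print(round(rel_diff_percent(0.43, 0.46), 1))    # 6.5  (phase resistance)
print(round(rel_diff_percent(275, 470), 1))      # 41.5 (phase inductance)
print(round(rel_diff_percent(0.02, 0.015), 0))   # 33.0 (cogging torque)
```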

Figure 5 shows the close agreement between the back-EMF simulated with our tools and the waveform measured with an oscilloscope.

Fig. 5. BEMF simulated and measured comparison


With the results of the measurements, in conjunction with the simulations of the losses in the ferromagnetic core, the efficiency map shown in Fig. 6 was produced. The figure shows torque (0–3.5 Nm) on the horizontal axis, rotational speed (0–95 RPM) on the depth axis and efficiency on the vertical axis.

Fig. 6. Motor hybrid mode efficiency map

7 Conclusions
In this work, an integrated motor-controller system for direct sowing with IoT connectivity is presented. The self-designed motor is direct-drive, to minimize the parts involved and maximize reliability and efficiency. A VESC 4 design with firmware modified for this application is used as the electronic controller. For control and telemetry, the system communicates through the MQTT protocol and the IoT technology stack, and also offers a wired interface. A self-made software toolset and an iterative design process are used for the motor design and simulation. The design tools are validated and refined with the measurement results of every motor designed, and the measurements show low deviation from the values predicted by the simulation tools. These software tools are continually being improved with new features; in the near future, a set of automatic optimization tools will be added.

References
1. Faulkner, E.: Plowman's Folly. University of Oklahoma Press, reprint edition (2012)
2. United Nations, Food and Agriculture Organization. http://www.fao.org/3/y2638s/y2638s04.htm


3. Durakbasa, N., Bauer, J., Alcoberro, R., Capuano, E.: A computer-simulated environment for modeling and dynamic-behavior-analysis of special brushless motors for mechatronic mobile robotics systems. IFAC, TECIS 49(29), 12–17 (2016)
4. Durakbasa, N., Bauer, J., Pagnola, Alcoberro, R.: Needs and requirements in metrological tests of qualitative characterization of magnets for application in special brushless motors for exoskeletons and bipeds. In: IFAC, ISPR (2017)
5. Alcoberro, R., Durakbasa, N., Bauer, J., Kopacek, P.: A low-cost integrated concept for the hybridisation and electric conversion of cars and other mechatronic vehicles. In: IFAC, TECIS (2021)
6. Meeker, D.: Finite Element Method Magnetics, User's Manual, Version 4.2 (2010). https://www.femm.info/wiki/HomePage. Accessed 13 Feb 2011
7. Mach, M.: Simulation of outer rotor permanent magnet brushless DC motor using finite element method for torque improvement. Model. Simul. Eng. 2012, 1687–5591 (2012)
8. Colton, S.W.: Design and prototyping methods for brushless motors and motor control. Massachusetts Institute of Technology (2010)
9. Pellegrino, G., Luigi, M., Cupertino, F., Gerada, C.: Automatic design of synchronous reluctance motors focusing on barrier shape optimization. IEEE Trans. Ind. Appl. 51(2), 1465–1474 (2015). ISSN 0093–9994
10. VESC Project. https://vesc-project.com/
11. VESC Tool. https://vesc-project.com/vesc_tool

Evaluation of Fused Deposition Modeling Process Parameters Influence on 3D Printed Components by High Precision Metrology

Alexandru D. Sterca1(B), Roxana-Anamaria Calin1, Lucian Cristian1, Eva Maria Walcher2, Osman Bodur2, Vasile Ceclan1, Sorin Dumitru Grozav1, and Numan M. Durakbasa2

1 TCM, Technical University of Cluj-Napoca, Cluj-Napoca, Romania

{vasile.ceclan,sorin.grozav}@tcm.utcluj.ro

2 IFT, TU Wien (Vienna University of Technology), Vienna, Austria

{eva.walcher,osman.bodur,numan.durakbasa}@tuwien.ac.at

Abstract. Additive manufacturing technologies are becoming an integral part of the manufacturing process in fields ranging from medicine and aeronautics to Industry 4.0 concepts, due to their individualized generative design capabilities, high flexibility, compact production techniques, and direct CAD-to-product manufacturing capabilities. This increased utilization of additive manufacturing exposes the need for an in-depth analysis of the quality of parts obtained through these technologies and of the factors that influence dimensional and geometric precision and surface quality. This paper proposes a study of the effects of different parameters of the Fused Deposition Modeling additive manufacturing process on the quality and precision of parts made from PETG material. The parameters varied are temperature and speed. An analysis will also be performed to determine whether deviations vary with the lengths of the features. The results of the study indicate a correlation between process parameters and deviations. This correlation can be used to create a mathematical model and apparatus that predicts geometric and dimensional deviations and part behavior and corrects such errors during the design process, allowing precise parts to be manufactured and, in the future, a tolerance standard to be created, leading to increased interchangeability between parts obtained through additive manufacturing techniques. Keywords: Additive manufacturing · Computed tomography · Precision metrology · Micro coordinate measuring · Geometrical product specification · CMM

1 Introduction
Additive manufacturing is becoming more widespread in fields ranging from medicine to aeronautics and Industry 4.0 concepts. In Industry 4.0, additive manufacturing has received increased attention due to its individualized generative design capabilities, efficient use of digitalization, remote controllability, high flexibility, direct CAD-to-product © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 281–295, 2022. https://doi.org/10.1007/978-3-030-90421-0_24


manufacturing, and the capability of producing components for rapid prototyping and reverse engineering at a very low cost. Against all these advantages, additive manufacturing technologies, and more specifically Fused Deposition Modeling, suffer from limited dimensional and geometric precision and surface quality, which leads to high post-processing costs and reduced component interchangeability. To address this issue, studies [1] have been made to determine the quality of components obtained by additive manufacturing and their behavior in an assembly. To correct for the deviations, studies [2] have established a technique by which measurements taken on a part are used to correct the geometry of a subsequent part, increasing precision through an iterative approach. Other studies [3] have analyzed the influence of 3D printing parameters on quality, strength, mass, and processing times for parts made from PLA material. Further studies [4–8] have addressed the quality of 3D printed parts. This paper proposes a study to determine the influence of the main process parameters on the quality of parts manufactured from PETG material using the Fused Deposition Modeling technique. To this end, a test part geometry will be designed to incorporate the most commonly found complex and simple internal and external geometries. The part geometry will also be designed to maximize and amplify the influence of the process parameters. A customized geometry approach was preferred in order to better isolate the effects of the specific parameters (speed and temperature). The geometry was also chosen to allow the same parts to be used for yield strength measurements in a future study, which will enable the creation of an orthotropic material profile to be used in finite element analysis during the design of 3D printed parts.
In addition to the measurements and analysis performed in other studies, we propose the use of computed tomography to obtain porosity and internal structure data for the tested parts, as well as a 3D model that can be used to corroborate the measurements obtained through other techniques. The main purpose of the study is to determine whether there is a correlation between process parameters, dimensional and geometric deviations, surface quality, and part porosity. If a predictive correlation can be found, it can be used to generate a mathematical model and apparatus for predicting and correcting deviations during the design process on the STL model, leading to reduced costs, material usage, and time consumption by reducing or eliminating failed parts and post-processing operations.
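As an illustration of the kind of predictive model the study aims for, the sketch below fits deviation = b0 + b1·temperature + b2·speed by ordinary least squares on made-up data (not measurements from this study; a real model would use the measured deviations of Table 2):

```python
def solve(a, b):
    """Solve the 3x3 system a*x = b by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for row in range(col + 1, n):
            f = m[row][col] / m[col][col]
            m[row] = [x - f * y for x, y in zip(m[row], m[col])]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        x[row] = (m[row][n] - sum(m[row][c] * x[c]
                                  for c in range(row + 1, n))) / m[row][row]
    return x

def fit(rows):
    """Ordinary least squares for deviation = b0 + b1*temp + b2*speed,
    via the normal equations (X^T X) b = X^T y."""
    X = [[1.0, t, v] for t, v, _ in rows]
    y = [d for _, _, d in rows]
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve(xtx, xty)

# Made-up illustrative data: (temperature degC, speed mm/s, deviation mm).
samples = [(220, 45, 0.065), (230, 60, 0.100), (230, 90, 0.130),
           (240, 45, 0.105), (250, 45, 0.125)]
b0, b1, b2 = fit(samples)

def predict(temp, speed):
    return b0 + b1 * temp + b2 * speed
```

A prediction such as `predict(230, 70)` could then be used to pre-compensate the STL geometry before printing.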

2 Experimental Setup
A test part will be designed and manufactured using the Fused Deposition Modeling method. The test part will contain dimensions and geometries commonly found in functional 3D printed parts and will be manufactured with different parameters and combinations of parameters. Measurements will be performed on the manufactured parts to determine whether there is a correlation between precision and parameter variation.


2.1 Test Part Geometry
In order to perform the study presented in this paper, we propose the use of a test part, the geometry of which is presented in Fig. 1c. The model was created using the SolidWorks software package [9]. The geometry was designed to amplify the effects of the different parameters and of the material behavior. A low surface area zone (Width4) was implemented to amplify the effects of part warping and to provide a zone where thermal effects due to overheating are increased. A high surface area zone (Width1 and Width2), as well as a transition from low to high surface area zones (Width4 to Width1 and Width2, through Angle1 and Angle2), were implemented to determine how the effects of the parameters vary with surface area. Different linear dimensions (indicated as Length, Width, and Height) are used to determine whether there is a correlation between feature dimensions and dimensional deviations. Circular features (Cylinder1, Cylinder2, and Radius1) are used to determine the effects of process parameters on internal and external circular geometries. The coordinate system used for measurements is defined by the planes A, B, and C; plane A will also be used for flatness measurements to determine part deformation. Angular features (Angle1, Angle2, and Angle3) will be used to determine angular size deviations in different planes as well as part deformation. The geometries were also selected to allow yield strength measurements in a future study, by using the thin section defined by Width4 and Height1 (5 × 5 mm) as a breaking point and Cylinder1 and Cylinder2 for gripping the part to be tested.
2.2 Additive Manufacturing – Methods, Parameters
The test part will be manufactured using the Fused Deposition Modeling method on a Leapfrog Creatr HS 3D printer [10] (Fig. 1a and Fig. 1b). For the 3D printing process, a 0.5 mm diameter nozzle and a 0.25 mm layer height will be used.
The material used will be PETG, which according to the material's technical specifications [11] has a thermal contraction coefficient between 0.2% and 1%. PETG was chosen due to its physical and chemical properties, which make it a versatile material for manufacturing functional components. The results obtained for PETG in

Fig. 1. (a) Leapfrog Creatr HS 3D printer, (b) printing process, and (c) test part 3D model


this study should not be used for other materials, due to differences in material properties and behavior. Further studies can be performed in the future to determine dimensional and geometrical deviations across a wide range of materials and whether a predictive model can be derived under those conditions. In order to determine whether there is a correlation between dimensional and geometric deviations and 3D printing process parameters, the test parts will be printed with variations of the main process parameters. The main process parameters tested in this study are speed, temperature and part orientation, the latter due to the orthotropic behavior of 3D printed parts resulting from the layer-by-layer strategy used by additive manufacturing techniques. The parameter values will be varied from low to high in the case of temperature and speed, and between horizontal and vertical in the case of part orientation, for both orientations, yielding a total of 12 parts representing different parameter combinations, as seen in Table 1. The part name notation is "Temp_Speed_Orientation", denoting the parameters used to manufacture the part. Thus, the part labeled 230_60_H was manufactured using a temperature of 230 °C and a speed of 60 mm/s in horizontal orientation. To ensure that manufacturing in the horizontal and vertical orientations for each temperature and speed combination is performed under the same conditions, both orientations are printed at the same time for each parameter variation (Fig. 1a and Fig. 1b).

Table 1. Process parameter variations used for test part manufacturing

No   Temperature (°C)   Speed (mm/s)   Orientation
1    220                45             H
2    220                45             V
3    230                60             H
4    230                60             V
5    230                70             H
6    230                70             V
7    230                90             H
8    230                90             V
9    240                45             H
10   240                45             V
11   250                45             H
12   250                45             V
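The test matrix and the "Temp_Speed_Orientation" naming scheme can be reproduced programmatically (an illustrative sketch, not code from the study):

```python
# Six temperature/speed combinations, each printed in both orientations.
combos = [(220, 45), (230, 60), (230, 70), (230, 90), (240, 45), (250, 45)]
parts = [f"{t}_{s}_{o}" for t, s in combos for o in ("H", "V")]

print(len(parts))   # 12
print(parts[2])     # 230_60_H
```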


2.3 Measurement Methods
1) Coordinate Measuring Machine (CMM). A Zeiss PRISMO Navigator 795 CMM [12] will be used in conjunction with the Zeiss Calypso software [13] to perform high precision measurements of the test part dimensions and geometries. Figure 2a shows the Zeiss PRISMO Navigator 795 CMM with the part to be measured.

Fig. 2. (a) Zeiss PRISMO Navigator 795 CMM, (b) Werth TomoScope XS computed tomograph, (c) Alicona InfiniteFocus

2) Computed Tomography (CT). A CT scan of each part will be performed using a Werth TomoScope XS machine [14]. The results of the CT scan will be used to determine porosity and other volume defects of the parts, and whether there is a correlation between these volume defects and the process parameters. The CT scan results will also be used to perform dimensional and geometric measurements using the specialized software GOM Inspect [15], to corroborate and complete the measurements obtained with the CMM. Figure 2b shows a test part inside the Werth TomoScope XS machine.
3) Focus Variation Microscopy. A surface roughness analysis will be performed on the two extremes of the studied parameters (high speed and high temperature) and on a control part (medium speed and medium temperature) using an Alicona InfiniteFocus, as seen in Fig. 2c [16, 17]. The parts will be analyzed in the vertical orientation, since surface roughness and its variation are most pronounced in the planes perpendicular to the layer orientation, in this case the X-Z and Y-Z planes.

3 Results
The measurements and scans of the manufactured parts will be used to determine the deviations from nominal. An analysis of the results will determine whether there is a correlation between manufacturing parameters and part deviations and whether a predictive mathematical model can be established.
3.1 Manufactured Parts
The parts (Fig. 3a) were printed using a 2.5 mm wall thickness and a linear infill of 15%. The internal structure of the parts can be seen in the CT scan (Fig. 3b), where darker colors represent higher material densities.


Fig. 3. (a) parts obtained through the FDM process (b) CT scan of a printed part

3.2 Measurement Results
The test parts were measured and scanned using CMM and CT in order to determine dimensional and geometric deviations from nominal, as defined by the ISO 1101:2017 standard [18]. The results from the CMM and CT measurements used for the analysis are presented in Table 2 in the form of deviations from nominal.
3.3 CT Results
CT scans obtained from the Werth TomoScope XS machine will be used for volume and surface analysis of the test parts. The volume analysis results will be used to determine the porosity of the parts, to detect holes or inclusions in the part volume, and to determine whether there is a correlation between part porosity and process parameters. Figure 4 shows samples of the CT scan results used for porosity analysis. An analysis of the internal structure of the 3D printed parts indicates that porosity increases with temperature (Fig. 4a). This behavior can be explained by humidity in the 3D printing filament turning into steam and outgassing, and by material overheating leading to a blistering effect, which in turn causes pores to appear in the volume of the parts. A decrease in temperature (Fig. 4d) reduces the number of pores; however, in this case gaps can be observed between layers along the external and internal contours of the parts. This effect can be explained by lower layer adhesion at low temperatures. Low temperatures also shorten cooling times, increasing internal tensions due to contraction; combined with the lower layer adhesion, an effect of layer de-cohesion can occur. A surface analysis will be performed on the scanned geometries using an actual-part-to-CAD comparison. Measurements of the dimensional and geometric characteristics will be performed on the scanned parts to corroborate the results of the CMM measurements and to obtain a more in-depth view of geometric deviations using a higher number of reference points than in the CMM method.
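A porosity figure of the kind discussed here can be sketched as the pore fraction of a thresholded voxel volume (a simplified illustration of the idea; the actual Werth/GOM analysis pipeline is more sophisticated):

```python
def porosity(volume, material_threshold):
    """Fraction of in-part voxels that fall below the density threshold,
    i.e. pores. `volume` is a nested list of voxel grey values; voxels
    valued 0 are treated as background outside the part."""
    part = pores = 0
    for plane in volume:
        for row in plane:
            for v in row:
                if v == 0:
                    continue        # background, outside the part
                part += 1
                if v < material_threshold:
                    pores += 1
    return pores / part

# Tiny illustrative 2x2x2 "scan": grey value 200 = solid, 50 = pore.
scan = [[[200, 200], [200, 50]], [[200, 200], [0, 200]]]
print(round(porosity(scan, 100), 3))  # 0.143 (1 pore among 7 part voxels)
```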
Figure 5 shows examples of surface comparison on CAD obtained by the use of the GOM inspect software. The samples were selected to show the difference in deformation for changed parameters. Surface comparison on CAD was performed for all parts in the dataset. The results indicate part deformations in the form of bending for the parts manufactured with horizontal orientation (Fig. 5a and Fig. 5c) and twisting for the parts manufactured with vertical orientation (Fig. 5b and Fig. 5d). These results are corroborated by the deviation

Table 2. CMM/CT measurement results. [The table data could not be recovered from the source extraction. It lists, for each measured characteristic (lengths, widths, heights, angles, cylinder diameters, radius, positions and cylindricity, with nominal values in mm), the deviations from nominal in mm for the parameter sets 220_45, 230_60, 230_70, 230_90, 240_45 and 250_45, each in horizontal (H) and vertical (V) orientation.]

Evaluation of Fused Deposition Modeling Process Parameters 287







Table 2. (continued) [The table data could not be recovered from the source extraction. The continued rows cover flatness, roundness of cylinders 1 and 2, and cylinder distance for the same parameter sets.]

288 A. D. Sterca et al.


Fig. 4. In-volume CT scans for porosity analysis: (a) 250_45_H, (b) 240_45_H, (c) 230_60_H, (d) 220_45_H

Fig. 5. Surface comparison on CAD for the test parts: (a) 230_60_H, (b) 230_60_V, (c) 250_45_H, (d) 250_45_V

from flatness measured on plane A by both CMM and the CT scan model of the manufactured parts. The analysis indicates a higher level of deformation for parts obtained at lower temperatures and lower printing speeds (Fig. 5c and Fig. 5d). This result can be explained by the effects of lower cooling times and the formation of a temperature gradient throughout the part in the Z-axis direction for parts manufactured with these conditions. These effects lead to a faster contraction of the material and the generation of high internal tensions which lead to the deformation of the part.


3.4 Regression Analysis

In order to determine whether there is a correlation between process parameters and the dimensional and geometric precision of the manufactured parts, a regression analysis [19] was performed on the resultant deviations versus the process parameters used to manufacture the test parts. An analysis was also performed for deviation versus geometry length.

Fig. 6. Regression analysis for Deviation versus (a) speed, (b) temperature, (c) length, and (d) predicted deviations for different lengths.
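A linear fit of the kind plotted in Fig. 6 can be sketched with ordinary least squares; the data below are illustrative, not the paper's measurements:

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least-squares fit y ≈ a*x + b; returns slope, intercept, R²."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a, b = np.polyfit(x, y, deg=1)
    residuals = y - (a * x + b)
    r2 = 1.0 - residuals.var() / y.var()
    return a, b, r2

# Illustrative data: deviation growing with geometry length, as in Fig. 6c.
length_mm = [5, 10, 12.5, 15, 50, 80, 90, 130]
deviation_mm = [0.02, 0.05, 0.04, 0.08, 0.21, 0.30, 0.38, 0.52]

a, b, r2 = fit_linear(length_mm, deviation_mm)
predicted_200mm = a * 200 + b   # extrapolated prediction, cf. Fig. 6d
```

The fitted slope and intercept allow predictions for unmeasured lengths, which is the basis of the geometry compensation discussed below.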

The results indicate a correlation between deviation and speed (Fig. 6a); the model accounts for 12.35% of the deviation variation across the different speed values. The results also indicate a correlation between temperature and deviations (Fig. 6b); this model accounts for 7.76% of the variation. It can be noted from this analysis that deviations decrease as temperature increases. This is consistent with the previous determinations indicating that the source of the deviations is part deformation due to material contraction: a higher temperature leads to a longer cooling time and a more homogeneous temperature distribution throughout the part, reducing internal tensions and material contraction. Regression analysis indicates a very strong correlation between deviations and the length of the measured geometry (Fig. 6c); this model accounts for 40.27% of the variation. This result strongly indicates that the main source of dimensional and geometric


errors in parts obtained by the Fused Deposition Modeling process is material behavior, specifically part warping and twisting due to contraction. This hypothesis is also supported by the fact that the deviations for all of the measured geometries are within the 0.2%–1% margin of the material contraction coefficient given in the material datasheet. Predictions for different parameter variations can be made through the use of regression, as exemplified in Fig. 6d, which can be used for geometry compensation. With a more extensive dataset, an artificial neural network [20] could be developed and trained to high accuracy to predict and compensate geometric errors.

3.5 Surface Quality Analysis Using Focus Variation Microscopy

An Alicona focus variation microscope was used to perform surface roughness measurements on three parts. One part represents the extreme of the temperature range at 250 °C, one represents the extreme of the speed range at 90 mm/s, and the third is a control part with medium process parameters: a temperature of 230 °C and a speed of 60 mm/s. The measurements were performed on a 25 mm long segment encompassing a low surface area geometry (5 × 5 mm) and a transition to a higher surface area geometry. Figure 7 presents the surface profile measurements of the three test parts. The parameters and definitions for determining surface texture by the profile method are defined in the ISO 4287:2010 standard [21].

Fig. 7. Surface profile roughness measurements for (a) 230_60_V, (b) 230_90_V, and (c) 250_45_V

Figure 8 presents the 3D areal scan of the three test parts. The parameters and specifications for determining the surface texture by the areal method are defined in the ISO 25178–2 standard [22]. The surface roughness results for both the surface profile and the 3D areal method are presented in Table 3.


Fig. 8. 3D areal scans of the (a) 230_60_V, (b) 230_90_V, and (c) 250_45_V surface textures

Table 3. Surface roughness results from focus variation microscopy

Part      | Surface profile method          | 3D areal method
          | Ra [µm]   Rq [µm]   Rz [µm]     | Sa [µm]   Sq [µm]   Sz [µm]
230_60_V  | 20.1863   24.4306   112.5838    | 20.6512   25.3948   324.5304
230_90_V  | 33.5738   46.081    177.7358    | 30.4462   41.0966   452.8352
250_45_V  | 32.3410   43.8057   250.1803    | 33.1832   47.5606   876.7487

As can be seen from the surface profile results, both speed and temperature have a major influence on surface quality on the planes perpendicular to the layer orientation (X–Z and Y–Z). Analyzing the surface profile of the high-speed part (Fig. 7b and Fig. 8b), it can be determined that the surface quality in the low surface area geometry is significantly lower than in the high surface area zone, indicating that on small geometries the underlying layers do not have enough time to cool before the next layer is deposited, resulting in displacements and deformations that directly affect surface quality. In the high surface area zone (9.5 to 20.5 mm) the surface quality increases because the time needed to print each layer is longer, allowing better cooling of the underlying layers and thus reducing displacements. The high-temperature part results (Fig. 7c and Fig. 8c) indicate displacements in both the low and the high surface area geometries, resulting in low surface quality. This is a result of overheating of the underlying layers, which creates unfavorable conditions for the deposition of the next layer. These results are corroborated by other studies as well [23]. They indicate that another parameter, layer cooling time, needs to be taken into consideration when designing the 3D printing process.
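The profile parameters reported in Table 3 can be illustrated with simplified versions of the ISO 4287 definitions (no λc filtering, a single evaluation length, and Rz taken as total peak-to-valley rather than a five-segment mean); the profile below is synthetic, not measured data:

```python
import numpy as np

def profile_roughness(z_um):
    """Simplified ISO 4287-style profile parameters:
    Ra - arithmetic mean deviation from the mean line,
    Rq - root-mean-square deviation,
    Rz - peak-to-valley height (the standard averages five sampling lengths)."""
    z = np.asarray(z_um, dtype=float)
    z = z - z.mean()                       # reference to the mean line
    ra = float(np.mean(np.abs(z)))
    rq = float(np.sqrt(np.mean(z ** 2)))
    rz = float(z.max() - z.min())
    return ra, rq, rz

# Illustrative layered FDM surface: 0.2 mm layer pitch, 30 µm amplitude.
x_mm = np.linspace(0.0, 25.0, 2500)
z_um = 30.0 * np.sin(2 * np.pi * x_mm / 0.2)
ra, rq, rz = profile_roughness(z_um)
```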


4 Conclusions

The results of the analysis show that there is a correlation between process parameters and geometric and dimensional precision and surface quality. An analysis of the characteristics of the dimensional and geometric deviations indicates that two additional parameters need to be taken into account when considering 3D printed part quality: layer cooling time and part cooling speed. The influence of these two parameters is important because the results indicate that the majority of the error is caused by material behavior, specifically contraction and internal tensions generated during cooling.

We propose a low part cooling speed to reduce warping. If a heated enclosure is not available for the machine, one method of reducing cooling speed is a controlled reduction of the heated build platform temperature over a longer time. Layer cooling time should also be increased by moving the nozzle away from the part and allowing the layer to cool before printing the next layer, thus avoiding depositing material on an overheated, soft previous layer and resulting in lower displacements and higher surface quality.

The results show a linear variation on which a mathematical model could be built and used to make predictions and corrections to the part geometry. However, in order to build an accurate mathematical model, a more extensive dataset needs to be created and analyzed. Having determined that there is a correlation between process parameters and manufactured part quality, the study can be extended and improved with a larger dataset, which will allow the fine characteristics of the correlation to be determined and an accurate mathematical model to be generated. With a large dataset, an artificial neural network can be developed and trained to high accuracy, which can then be used to make inferences on model geometries and compensate the geometry to correct for known process parameter influence.
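Geometry pre-compensation from such a linear deviation model can be sketched as follows; the coefficients are illustrative, not fitted values from this study:

```python
def compensate_dimension(nominal_mm, slope, intercept):
    """Pre-compensate a nominal dimension using a linear deviation model
    deviation = slope * length + intercept: the input geometry is adjusted
    so the printed result lands on the target dimension."""
    predicted_deviation = slope * nominal_mm + intercept
    return nominal_mm - predicted_deviation

# A contraction-dominated model (negative slope) yields oversized input geometry.
adjusted = compensate_dimension(130.0, slope=-0.004, intercept=0.05)
```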
The same test parts used in this study will be used for yield strength measurements in a future study, allowing for the creation of an orthotropic material profile to be used in finite element analysis during the design process for 3D printed parts. By extending the dataset and increasing the accuracy of the mathematical model, the precision of 3D printed parts can be increased and a tolerance standard for 3D printed parts can be established to allow for increased interchangeability between 3D printed components.

Acknowledgment. This paper and the research presented were made possible through a collaboration between IFT, TU Wien (Vienna University of Technology), Austria, and the Technical University of Cluj-Napoca, facilitated by the CEEPUS program and the OeAD. Computer tomography measurements were made possible by a collaboration with the Werth Messtechnik Österreich GmbH company.


References

1. Durakbasa, N.M., et al.: Additive miniaturized-manufactured gear parts validated by various measuring methods. In: Majstorovic, V.D., Durakbasa, N. (eds.) IMEKO TC14 2019. LNME, pp. 276–290. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-18177-2_25
2. Bodur, O., Stepanek, V., Walcher, E.M., Durakbasa, N.: Precision in additive manufacturing optimization and evaluation of the accuracy of 3D printer based on GPS system. In: Katalinic, B. (ed.) Proceedings of the 31st DAAAM International Symposium, Vienna, Austria, pp. 0963–0972. DAAAM International (2020). ISBN 978-3-902734-29-7, ISSN 1726-9679. https://doi.org/10.2507/31st.daaam.proceedings.134
3. Camposeco-Negrete, C., Varela-Soriano, J., Rojas-Carreón, J.J.: The effects of printing parameters on quality, strength, mass, and processing time of polylactic acid specimens produced by additive manufacturing. Progr. Addit. Manuf. 2021, 1–20 (2021). https://doi.org/10.1007/s40964-021-00198-y
4. Ficzere, P., Lukács, N.: Influence of 3D printing parameters. IOP Conf. Ser. Mater. Sci. Eng. 903, 012008 (2020). https://doi.org/10.1088/1757-899X/903/1/012008
5. Wu, J.: Study on optimization of 3D printing parameters. IOP Conf. Ser. Mater. Sci. Eng. 392, 062050 (2018). https://doi.org/10.1088/1757-899X/392/6/062050
6. Ferretti, P., et al.: Relationship between FDM 3D printing parameters study: parameter optimization for lower defects. Polymers 13, 2190 (2021). https://doi.org/10.3390/polym13132190
7. Hamza, I.: Experimental optimization of fused deposition modeling process parameters: a Taguchi process approach for dimension and tolerance control. In: Proceedings of the International Conference on Industrial Engineering and Operations Management, pp. 1–11. IEOM Society International, Paris (2018)
8. Ahn, D., Kweon, J., Kwon, S., Song, J., Lee, S.: Representation of surface roughness in fused deposition modeling. J. Mater. Process. Technol. 209, 5593–5600 (2009)
9. Dassault Systèmes SolidWorks. https://www.solidworks.com/domain/design-engineering
10. Leapfrog Creatr HS User Manual. http://support.lpfrg.com/support/solutions/articles/11000083231-user-manual
11. Plastic Shrinkage Rate Chart. Up Mold Technology Limited (UMT). https://upmold.com/plastic-shrinkage-rate-chart/
12. Zeiss PRISMO Navigator 795 Specifications. https://www.zeiss.com/metrology/brochures.html?catalog=PRISMO
13. ZEISS CALYPSO. https://www.zeiss.com/metrology/products/software/calypso-overview.html
14. TomoScope® XS, Werth, Inc. (2018). http://werthinc.com/products/tomoscope-xs/
15. GOM Inspect Suite. https://www.gom.com/en/products/gom-suite/gom-inspect-pro
16. Danzl, R., Helmli, F., Scherer, S.: Focus variation – a robust technology for high resolution optical 3D surface metrology. Strojniski Vestnik, 245–256 (2011). https://doi.org/10.5545/sv-jme.2010.175
17. Alicona InfiniteFocus. https://www.alicona.com/en/products/infinitefocus/
18. ISO 1101: Geometrical product specifications (GPS) – Geometrical tolerancing – Tolerances of form, orientation, location, and run-out (2020)
19. Morris, A.S., Langari, R.: Chapter 8 – Display, Recording, and Presentation of Measurement Data. In: Measurement and Instrumentation, pp. 183–205. Butterworth-Heinemann, Oxford (2012). ISBN 9780123819604. https://doi.org/10.1016/B978-0-12-381960-4.00008-5
20. Chowdhury, S., Anand, S.: Artificial Neural Network Based Geometric Compensation for Thermal Deformation in Additive Manufacturing Processes (2016). https://doi.org/10.1115/MSEC2016-8784


21. DIN EN ISO 4287:2010: Geometrical Product Specifications (GPS) – Surface texture: Profile method – Terms, definitions and surface texture parameters (2010)
22. ISO 25178-1:2016: Geometrical product specifications (GPS) – Surface texture: Areal – Part 1: Indication of surface texture (2016)
23. Rastogi, P., Gharde, S., Kandasubramanian, B.: Thermal effects in 3D printed parts. In: Singh, S., Prakash, C., Singh, R. (eds.) 3D Printing in Biomedical Engineering. MHFNN, pp. 43–68. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-5424-7_3

Implementation of Industry 4.0 Elements in Industrial Metrology – Case Study

Vojtech Stepanek, Jakub Brazina, Michal Holub, Jan Vetiska, Jiri Kovar, Jiri Kroupa, and Adam Jelinek

Department of Production Machines, Systems and Robotics, Faculty of Mechanical Engineering, Brno University of Technology, Brno, Czech Republic
{Vojtech.Stepanek,Jakub.Brazina,Adam.Jelinek}@vut.cz, [email protected], {vetiska,kovar,kroupa}@fme.vutbr.cz

Abstract. The presented paper deals with the use of Industry 4.0 elements in industrial metrology, applied to a production cell located at the Institute of Manufacturing Machines, Systems and Robotics, Faculty of Mechanical Engineering, Brno University of Technology. Conventional machine tool measurement methods can only improve the geometric accuracy of the machine tool, but the contribution of dynamic forces and the selected technological parameters are also reflected in the working accuracy during the production of a real part. The aim is to improve the manufacturing accuracy of the machine tool by measuring the machined parts using a single-purpose measuring station. The control system of this station was designed using virtual commissioning technology on Siemens platforms (NX MCD and Simatic PLC). The resulting measured data are stored in a local database. The design includes the creation of an infrastructure where the initial data processing is handled by edge computing and the subsequent visualization is done in the cloud environment MindSphere from Siemens and in virtual reality (Oculus & HTC).

Keywords: Industry 4.0 · Machine tool accuracy · Production systems · Virtual reality · Production data processing · Edge computing · Virtual commissioning

1 Introduction

Currently, there is an effort to develop applications related to information and communication technologies (ICT), implement them in industrial practice, and link them to cyber-physical systems (CPS). By integrating the real and virtual worlds using the Internet environment, cyber-physical manufacturing (CPM) elements are emerging in the industrial field [1]. These elements can be introduced in the design of new concepts of manufacturing systems. Self-optimizing processes can be included here; they can be part of intelligent processes and solve the following tasks [2]:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gen˛cyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 296–308, 2022. https://doi.org/10.1007/978-3-030-90421-0_25

Implementation of Industry 4.0 Elements

• Automatic intelligent measurement by using CNC metrology
• Off-line/on-line CNC programming of measuring instruments
• Automatic changing of workpieces
• Automatic changing of probes and sensors
• Automated evaluation of measuring results.

The process of digitalization of the production cell and its extension with Industry 4.0 elements are described in papers [3, 4]. These include the creation of a virtual model of the manufacturing cell for VR/AR using photogrammetry, data structure design, assessment of the geometric accuracy of CNC axes, and integration of risk analysis and functional safety. Another element to increase automation is the use of the CNC machine tool as a measuring device [5, 6]. By using CPM elements, measurement uncertainty can be efficiently reduced to provide sufficient information about the state of the machine and the environment, ensuring additional measurement capability on CNC machine tools. It is the setup of the CNC machine tool itself that has a significant impact on the resulting dimensional and shape accuracy of workpieces during finishing [7]. Changes in the geometric accuracy of the machine tool can be expected due to changing environmental conditions, but also, for example, due to internal heat sources in the machine tool [8]. One way to react flexibly to these changes is to introduce measuring equipment into the production process.

Research related to the integration of Industry 4.0 elements in order to ensure or increase long-term production capability is part of the presented paper. The design of a single-purpose measuring station, its virtual commissioning, connection to the control system of the production cell, data processing using edge computing, data storage in a database system, connection to virtual reality, and the design of data visualisation in the cloud environment MindSphere are an integral part of the implemented CPM within the test-bed. The designed workstation aims to consider changing production conditions and adapt production to ensure long-term stable production accuracy. The acquired data can be further used for production process traceability and quality control.

2 Machine Tools Accuracy

2.1 Geometric, Working and Manufacturing Accuracy

2.1.1 Geometric Accuracy
Geometric accuracy is one of the basic parameters of the machine and is directly related to the production quality of the mechanical structure of the machine. Geometric accuracy is defined as the closeness of the actual tool position to the nominal position with respect to the workpiece. It is measured in the unloaded state so that unwanted dynamic forces are not introduced into the measurement. On a three-axis machine with a serial TTT kinematic arrangement (e.g. a basic three-axis milling machine), six geometric errors can be defined for each linear axis, plus three errors of perpendicularity of the linear axes with respect to each other. Thus, a three-axis machine has a total of 21 geometric errors.
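The 21-term error budget named above can be enumerated explicitly; the ISO 230-1 style labels below (e.g. EXX for the X-axis positioning error) are a common convention, not notation taken from this paper:

```python
from itertools import combinations

# Six error motions per linear axis, E<direction><moving axis>:
# one positioning, two straightness, three angular (A, B, C about X, Y, Z).
linear_axes = ["X", "Y", "Z"]
directions = ["X", "Y", "Z", "A", "B", "C"]
per_axis_errors = [f"E{d}{a}" for a in linear_axes for d in directions]

# Three squareness errors between the axis pairs.
squareness_errors = [f"squareness {a}-{b}" for a, b in combinations(linear_axes, 2)]

total_errors = len(per_axis_errors) + len(squareness_errors)   # 18 + 3 = 21
```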

298

V. Stepanek et al.

2.1.2 Working Accuracy
Working accuracy is a machine parameter that describes the quality and surface integrity of the manufactured part. The working accuracy test is most often performed at the time of acceptance of the machine by the customer. The test piece for 3-axis machining is specified in the ISO 10791-7 standard or is predefined when placing the machine order. Compared to geometric accuracy, the effects of the component manufacturing technology are reflected here.

2.1.3 Production Accuracy
Production accuracy is the highest link in the chain. Long-term influences such as changes in climatic conditions, operator influence, tool wear, etc. are also reflected in this parameter. To maintain the long-term capability of the manufacturing process, appropriate statistical tools are used.

2.2 Factors Affecting the Manufacturing Accuracy of a Machine Tool

2.2.1 Error in the Geometric Design of the Machine Tool Structure
This is directly related to the accuracy with which the mechanical structure is manufactured. Examples include the perpendicularity of the surfaces, the straightness and flatness of the surfaces for the location of the rolling guiding, the shape of the rolling guiding profile along its entire length, the accuracy of the running of the bearings, and many others. Since it is not possible to produce components with zero shape deviation, this error always occurs in machine tools. Almost all of these errors can be measured, but not all of them can be fully compensated [10].

2.2.2 Temperature Effects
This is another significant factor, as the machine tool cannot be operated in an environment with zero temperature gradient. This gradient is caused by changes in ambient temperature but also by passive effects inside the machine. The most important sources of heat are the spindle, servo drives, the cutting process, and friction in kinematic pairs.
To prevent angular deformations due to temperature, a so-called temperature-symmetric design or a sophisticated machine tempering system is used [10, 11].

2.2.3 Technology Effects
These effects are reflected in the choice of cutting conditions, which have a major influence on the final accuracy of the machined piece. In terms of chip machining technology, the correct choice of tool, chuck, and spindle guidance accuracy, or maximising the rigidity of the machine–tool–workpiece–vise system, can be mentioned. Modern CAM software can already optimize toolpaths to minimize the change in cutting force over time due to changes in chip cross-section.


2.2.4 Influence of the Control System
The correct setting of servo parameters has a direct influence on the dynamic behaviour of the machine. Together with the performance of the trajectory planner, it has a major influence on the accuracy of the tool guidance with respect to the workpiece. The influence of incorrectly set control system parameters is particularly evident at higher feed rates of the machine tool.

2.3 Machine Tool Measurement
The assessment of the geometric accuracy of the vertical three-axis machining center and the increase of geometric accuracy are carried out according to the procedures described in publications [13, 14]. The main methods for measuring machine tool errors include the procedures described in ISO 230-2 and ISO 230-4. Other methods include increasing geometric accuracy by volumetric compensation. The instruments used in these measurements are mainly the Laser Interferometer XL-80 and the Double Ballbar QC20-W from RENISHAW, in addition to the LaserTRACER from ETALON and digital inclinometers from WYLER AG.

2.4 Machine Tool Error Compensation Approaches
Compensation data is always based on a uniform form. For a three-axis machine, the desired tool position in space is defined by the coordinate TCP = (X, Y, Z). For this desired position, the spatial error vector ERRTCP = (ΔX, ΔY, ΔZ) is constructed using compensation tables. The spatial error vector is defined as a function of the 21 three-axis machine errors defined in the previous section and is obtained from FEM simulations or defined on the basis of a set of measurements.

2.4.1 Writing to CEC (Cross Error Compensation) Tables
The control system, which contains the G-code interpreter and the trajectory planner, has tables in its structure into which the various compensation data are written. Each manufacturer has its own specific form of CEC tables. Changing the compensation table can be done in two basic ways. The first is to manually overwrite rows or entire tables.
This service intervention requires a knowledgeable operator, and a gross error in this editing can lead to a machine crash. The second way of editing the compensation table is to use the more advanced functions of the G-code program. As part of the compensation write, the effects of the compensation table are first disabled, the table is unlocked for editing, and then its individual rows and columns are addressed and edited using the compensation data. Compensation can be performed unidirectionally or bidirectionally. Bidirectional compensation has the advantage of eliminating backlash in the motion mechanism.

2.4.2 Movement Axis Scaling
The previous approach defined the exact value of compensation at partial positions of the motion axis, and these data were then approximated piecewise by a smooth broken


line. If the error waveform could be approximated by a single straight line along its entire length, the slope of this line would define the axis scale error. This error can be written into the control system using machine parameters. This approach to writing the compensation is suitable for temperature expansion compensation, since the dependence of the length change on temperature is also linear in nature.

2.4.3 Parametric CAM Programming
Another approach to eliminating workpiece errors is to use reverse engineering. In [12] the principle of error compensation based on data measured on a real piece is explained. For testing purposes, the 3D printing method is used as the manufacturing system, but the principle is also applicable to the field of CNC machine tools. All dimensions are defined parametrically using the synergy of MS Excel as data carrier and Autodesk Inventor as parametric modeler. The virtual model is then exported as a *.stl file and printed. The finished part is inspected using a coordinate measuring machine and the default dimensions of the virtual piece are adjusted based on the measured error data. After two iterations, some dimensions had already been refined by more than 80%. The same principle is also applicable to CNC machines.

2.4.4 Compensating Parameters in the Tool Table
The last of the compensation approaches is used by the Equator™ inspection system, specifically its IPC (intelligent process control) function block. When using these compensations, the Equator™ inspection system is connected to one or more machines simultaneously via an industrial communication network. The manufactured part is inserted into the Equator™ measuring device and the deviations from the required dimensional values are evaluated by comparison with a standard part (reference). Based on these data, the tool length and radius offsets are specified, which are then written back to the machine in the form of compensation data.
This method of compensation is primarily intended for larger batches of workpieces [12].
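The table-based compensation of Sect. 2.4.1 amounts to a piecewise-linear lookup between sampled axis positions. A minimal sketch, with a hypothetical table (a bidirectional table would hold one correction column per travel direction):

```python
import bisect

def axis_compensation(position, table_positions, table_corrections):
    """Piecewise-linear interpolation of a CEC-style compensation table
    mapping axis positions [mm] to correction values [mm]."""
    if position <= table_positions[0]:
        return table_corrections[0]
    if position >= table_positions[-1]:
        return table_corrections[-1]
    i = bisect.bisect_right(table_positions, position)
    x0, x1 = table_positions[i - 1], table_positions[i]
    y0, y1 = table_corrections[i - 1], table_corrections[i]
    return y0 + (y1 - y0) * (position - x0) / (x1 - x0)

# Hypothetical positioning error of one linear axis, sampled every 100 mm.
pos = [0, 100, 200, 300]
corr = [0.0, 0.004, 0.009, 0.011]
c = axis_compensation(150, pos, corr)   # interpolated correction at 150 mm
```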

3 Description of the Experiment

The improvement of the manufacturing accuracy of the machine tool is a long-term goal of the Institute of Production Machines, Systems and Robotics, Faculty of Mechanical Engineering, Brno University of Technology (IPMSR). The aim of this production cell is to automate the measurement of manufacturing accuracy, the processing of measured data, and the compensation of the machine tool. The automation of the whole cell contributes to the collection of the large amount of data needed for accuracy evaluation and machine compensation. Thanks to automation, a faster production cycle time is achieved without the need for operator intervention; automation also contributes to maintaining constant conditions throughout the process and thus to obtaining valid data not affected by possible operator error.


The workpiece to be machined and subsequently measured is a part with steel inserts (Fig. 1, right). The machined part is then measured in a single-purpose measuring station, which acts as a comparison gauge relating the workpiece errors to the dimensions of the standard piece (Fig. 1, left). For this purpose, a special calibration piece was created, and its metrological traceability was ensured by measurements on a coordinate measuring machine at the TU Wien laboratories. The measurements are carried out using eight LVDT-type touch sensors, which exhibit non-linearity within the measurement range; this linearity error was measured using an XL-80 interferometer. The measured data are collected, stored in a database, and processed using Matlab software. Based on the processed data, the machine is compensated. The data are further visualized in a virtual reality environment and in cloud applications, and can also be used in artificial intelligence algorithms to further enhance the accuracy of the machine tool.

Fig. 1. Standard piece (left) and testing workpiece 320 × 320 mm with steel inserts (right)
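The comparator-gauge evaluation described above can be sketched as follows; the readings and the standard-piece error are illustrative values, not measured data:

```python
def comparative_deviation(reading_mm, standard_reading_mm, standard_error_mm):
    """Comparator-gauge evaluation: the LVDT reading on the workpiece is
    referenced to the reading taken on the calibrated standard piece, and the
    standard's own CMM-calibrated error is added back in."""
    return (reading_mm - standard_reading_mm) + standard_error_mm

# Hypothetical single-sensor evaluation, all values in mm.
d = comparative_deviation(0.115, 0.102, -0.003)   # workpiece deviation
```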

4 UVSSR CELL and Implemented I4.0 Elements

4.1 UVSSR CELL Actual Layout
The equipment included in the UVSSR CELL and its use in the teaching process have been described in [4, 9]. Since then, the cell has been extended to collect data from the included devices (Big Data); these data are processed using edge computing and sent to the Siemens MindSphere cloud. The updated state of the cell can be seen in the figure (Fig. 2, left).

4.2 Automatic Cell Cycle Description
In the cycle (Fig. 2, right), the robot removes a part from the workpiece magazine (1). The part is then fed into the machine tool (2) and the machining cycle is started. When the machining cycle is complete, the part is removed and placed in the measuring station (3), where it is measured. Finally, the machined and measured part is put back into the magazine (1).


Fig. 2. UVSSR CELL elements (Left) and real UVSSR CELL layout (Right)

4.3 Virtual Commissioning of the Single-Purpose Measuring Station
Virtual commissioning is a discipline finding increasing application in today's industrial practice, as it responds to the often-opposing demands of customers: reduced production costs, flexibility of the designed devices, and commissioning in the shortest possible time. The principle of virtual commissioning and its advantages have been described in [9]. This paper presents the possibility of using virtual commissioning in the design of a completely new device, a single-purpose measuring station.

Siemens NX Mechatronics Concept Designer was used as the simulation software in which a CAD model of the single-purpose measuring station was created and assembled. The CAD model was subsequently enriched with virtual actuators and sensors, for which the corresponding signal structure was created. This signal structure accurately represents the response of the real device. The signals were then connected via a SharedMemory coupler to the Simit simulation platform. Next, a PLC control program was created and connected to Simit via the PLCSIM Advanced emulator. Within the created co-simulation (Fig. 3), the PLC program could be tuned using virtual hardware. This tuning process was carried out in parallel with the assembly of the real device. After the assembly was completed (Fig. 4), the completed and debugged PLC program was loaded into the single-purpose measuring station without any further modification.

4.4 Data and Communication Structure (MQ Queue/Processing/DB/VR)
Communication between the included peripherals is done via the Profinet industrial fieldbus, which can be seen in the figure (Fig. 5). The main brain of the whole cell is the Siemens Simatic S7-1512C-1 PN PLC, which communicates with the included peripherals and devices (robot, machine tool, measuring station, safety PLC).
The PLC also collects data from sensors and devices, which are read out using the S7.NET library at least ten times per second. The software collection point for this data is the RabbitMQ messaging system using the AMQP protocol. The queuing of incoming data is divided into two basic groups:

• Transient data - a queue of current data intended for immediate consumption; typical consumers are virtual reality and machine-state monitoring algorithms

Implementation of Industry 4.0 Elements

303

Fig. 3. Virtual commissioning of the single purpose measuring station

Fig. 4. Assembled single purpose measuring station

• Persistent data - a queue of data that must always be successfully processed and appropriately stored; the processed data is stored in backup databases

The stored data is organized into units that represent all measured and processed data for one day of machine operation. Under average machine operating conditions, this is approximately four gigabytes of data per database. The result is an organized and backed-up data source that in turn has two main tasks:

• Historization of machine activity - this data is mainly used for virtual reality
• Data source for AI/ML algorithms
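The transient/persistent split described above can be sketched in a broker-agnostic way. The snippet below is an illustrative in-memory model, not the actual RabbitMQ configuration of the cell (the class, field names, and capacities are assumptions): the transient queue is bounded so stale readings are silently dropped, while the persistent queue is drained in full to a durable store.

```python
from collections import deque

class CellDataRouter:
    """Illustrative router splitting PLC readings into two queues."""

    def __init__(self, transient_capacity=10):
        # Transient queue: bounded; old readings are discarded when full,
        # mimicking "consume now or lose it" semantics (VR, monitoring).
        self.transient = deque(maxlen=transient_capacity)
        # Persistent queue: unbounded; every message must reach storage.
        self.persistent = deque()

    def publish(self, reading):
        """Every reading goes to both queues."""
        self.transient.append(reading)
        self.persistent.append(reading)

    def drain_to_store(self, store):
        """Move all persistent messages into a durable store (here a list)."""
        while self.persistent:
            store.append(self.persistent.popleft())

router = CellDataRouter(transient_capacity=3)
for i in range(5):  # five simulated sensor readings
    router.publish({"sample": i, "temp_c": 20.0 + i})

archive = []
router.drain_to_store(archive)
print(len(router.transient), len(archive))  # → 3 5
```

In a real deployment the same split is expressed with broker features (a bounded, non-durable queue versus a durable queue with persistent delivery), but the routing logic is the same.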


Fig. 5. Profinet network

4.5 Cloud and Edge Computing

Part of the design of the presented measurement and data processing system is the implementation of the data part in a cloud environment. Operating a local database and the necessary hardware has advantages for experimentation and simple operation (one production cell in one location). However, for larger and more complex operations, which may run in different locations and on several continents within a single manufacturer, a cloud architecture for the data processing system is preferable [15, 17]. Cloud in general terms means the provision and consumption of computing functions in the form of services available over a network. There are three basic modes of cloud operation: private, public, and hybrid. Private cloud means that computing services are available within the internal corporate network. In a public cloud, on the other hand, the service provider is a third party and the services are available over the Internet. Hybrid cloud is a combination of both approaches. The basic types of cloud services include storage services and computing services [16, 17]. For industrial environments, the MindSphere cloud from Siemens (MdS) is available. Within MdS, both basic types of services can be consumed. Databases can be used to which data is sent automatically or programmatically via APIs through IoT gateway devices. The data is further processed using publicly available or separately programmed applications. The results can be displayed directly in the MdS environment (Fig. 6) but can also be sent to other enterprise systems (database, CRM, etc.). The main advantage of MdS lies in the cost savings of creating and operating the necessary computing infrastructure. This task is taken over by Siemens, and the customer can fully concentrate on functionality and data. Reliable, globally available 24/7 service operation, data backup, redundancy, etc. are inherent in MdS. The connection (Fig. 7) to the industrial cloud opens the possibility to easily scale the entire solution and make it available to other partners. For applications that require low latency due to the tight coupling with the cyber-physical system, and for applications that require pre-processing of data before it is sent, the cloud is extended with edge computing. Edge means that a computing unit connected to the cloud but located close to a given CPS is used. This unit runs containerized applications that process the data locally. The transformed data is then sent to MindSphere, where it is further processed using the services provided. Thus, it is a hybrid mode that allows both direct use of the data in virtual reality at the production cell site in the current mode and automatic processing for further use in the cloud [16]. At the UVSSR CELL workplace, it is possible to view historical data and check the status of the entire workplace within the MindSphere cloud. It is possible to migrate individual applications to run in the cloud environment and thus have all functions associated with a given workplace available within a single system. Integration with the data processing application of the XL-80 laser interferometer system is planned.
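As a sketch of the edge-computing role described above, the following illustrative Python snippet pre-processes a high-rate sensor stream into compact per-window aggregates before anything is sent to the cloud. The window size, field names, and the notion of a "reading" are assumptions for illustration; a real deployment would run this inside a containerized edge application.

```python
from statistics import mean

def aggregate_window(readings, window=10):
    """Reduce a stream of raw samples to per-window summaries.

    Sending min/mean/max per window instead of every sample cuts
    cloud traffic roughly by a factor of `window`.
    """
    summaries = []
    for start in range(0, len(readings) - window + 1, window):
        chunk = readings[start:start + window]
        summaries.append({
            "n": len(chunk),
            "min": min(chunk),
            "mean": round(mean(chunk), 3),
            "max": max(chunk),
        })
    return summaries

# 30 simulated temperature samples rising in 0.5 °C steps from 20.0 °C
samples = [20.0 + 0.5 * i for i in range(30)]
print(aggregate_window(samples, window=10)[0])
# → {'n': 10, 'min': 20.0, 'mean': 22.25, 'max': 24.5}
```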

Fig. 6. Diagram of available MindSphere services [18]

Fig. 7. Cloud connection diagram


4.6 Virtual and Augmented Reality

Virtual and augmented reality serve as visualization tools for displaying current and historical states of the introduced UVSSR CELL workplace. The advantage of visualization using virtual and augmented reality is that data is displayed directly within the workplace, which provides a better overview of the current state of the robotic line. Virtual and augmented reality are two different technologies designed for different applications. The main goal of virtual reality is to allow a user or group of users to monitor the status of a workplace independently of their location. The main advantage of this technology lies precisely in bringing multiple users into a single virtual environment regardless of where they are. Users can collaborate to solve a problem as if they were on site, while having access to all machine operating data, both current and historical. Other areas where virtual reality is applied are operator training and examination of workplace ergonomics. Augmented reality is particularly useful for on-site machine maintenance. Augmented reality devices such as the Microsoft HoloLens 2 allow information about individual conditions and processes to be displayed directly within the real workplace. Together with the ability to display specific operating procedures, such as the disassembly of individual components for maintenance of equipment or production technology, augmented reality is a suitable tool for maintenance and operation of the workplace. Within the UVSSR CELL workplace, virtual and augmented reality are used to display current and historical machine conditions such as temperatures, positions of individual axes, or volumetric deviations in the machining area (Fig. 8). This data gives the operator an overview of the accuracy with which the production line is able to produce final products.

Fig. 8. Volumetric deviations in the machining space in VR - Oculus (Left) and AR - Hololens 2 (Right)

5 Conclusion

This article deals with the implementation of Industry 4.0 elements with a focus on metrology to ensure long-term production accuracy on CNC machine tools. Manufacturing accuracy is affected by a considerable number of external and internal influences, such as temperature or the effects of the chosen machining technology. In order to


develop a functional compensation algorithm, it is necessary to identify the key influences and to minimize their effects in an appropriate way. For this purpose, among others, the UVSSR CELL has long been under development at the Institute of Production Machines, Systems and Robotics, Faculty of Mechanical Engineering, Brno University of Technology. The aim is to automate the production process, the post-process measurement, and the subsequent compensation of the machine tool. For this reason, the production cell has been extended with a single-purpose measuring station using Industry 4.0 elements. The virtual commissioning method was applied as early as the design phase to optimize the design and to set up the PLC control program correctly. Data from workpiece measurements are collected using the presented data structure and used to evaluate the geometric specification of the workpiece by comparison with a standard piece. By implementing Siemens MindSphere cloud services, ML/AI, or edge computing methods, progressive diagnostic and compensation applications can be created that connect directly to local augmented reality (HoloLens 2) or to remote data management and visualization, including connection to higher-level production planning systems. The use of cloud services and ML/AI will be the focus of further research activities aimed at predicting the dimensional accuracy of workpieces.

Acknowledgment. These results were obtained with the financial support of the Faculty of Mechanical Engineering, Brno University of Technology (Grant No. FSI-S-20-6335).

References

1. Majstorovic, V., Stojadinovic, S., Zivkovic, S., Djurdjanovic, D., Jakovljevic, Z., Gligorijevic, N.: Cyber-physical manufacturing metrology model (CPM3) for sculptured surfaces - turbine blade application. Procedia CIRP 63, 658–663 (2017). https://doi.org/10.1016/j.procir.2017.03.093
2. Bauer, J.M., Bas, G., Durakbasa, N.M., Kopacek, P.: Development trends in automation and metrology. IFAC-PapersOnLine 48(24), 168–172 (2015). https://doi.org/10.1016/j.ifacol.2015.12.077
3. Holub, M., Tuma, Z., Kroupa, J., Kovar, J., Blecha, P.: Case study of digitization of the production cell, pp. 253–262. Springer, Cham (2020). ISBN 978-3-030-31343-2
4. Holub, M., et al.: Industry 4.0 in educational process. ISBN 978-3-030-62783-6. https://doi.org/10.1007/978-3-030-62784-3_27. Accessed 26 Oct 2020
5. Holub, M., Jankovych, R., Andrs, O., Kolibal, Z.: Capability assessment of CNC machining centres as measuring devices. Measurement 118, 52–60 (2018). https://doi.org/10.1016/j.measurement.2018.01.007
6. Mutilba, U., Sandá, A., Vega, I., Gomez-Acedo, E., Bengoetxea, I., Yagüe Fabra, J.A.: Traceability of on-machine tool measurement: uncertainty budget assessment on shop floor conditions. Measurement 135, 180–188 (2019). https://doi.org/10.1016/j.measurement.2018.11.042
7. Holub, M., et al.: Experimental study of the volumetric error effect on the resulting working accuracy - roundness. Appl. Sci. 10(18) (2020). https://doi.org/10.3390/app10186233
8. Ramesh, R., Mannan, M., Poo, A.N.: Error compensation in machine tools - a review. Part II: thermal errors. Int. J. Mach. Tools Manuf. 40(9), 1257–1284 (2000). https://doi.org/10.1016/S0890-6955(00)00010-9


9. Brazina, J., Vetiska, J., Stanek, V., Bradac, F., Holub, M.: Virtual commissioning as part of the educational process. In: 2020 19th International Conference on Mechatronics - Mechatronika (ME), pp. 1–7 (2020). https://doi.org/10.1109/ME49197.2020.9286613
10. Introduction to Precision Machine Design and Error Assessment. CRC Press, Taylor & Francis Group, Boca Raton (2011). ISBN 978-0-8493-7887-4
11. Design of CNC Machine Tools IV.0. MM Special. MM publishing, Prague (2018). ISBN 978-80-906310-8-3
12. Gauging system Equator™. Renishaw. https://www.renishaw.com/en/equator-gauging-system--12595. Accessed 28 June 2021
13. Holub, M.: Geometric accuracy of machine tools. In: Davim, J.P. (ed.) Measurement in Machining and Tribology. MFMT, pp. 89–112. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-03822-9_3
14. Marek, J.: Geometric Accuracy, Volumetric Accuracy and Compensation of CNC Machine Tools, vol. 5. IntechOpen, Rijeka (2020). ISBN 978-1-83962-351-6. https://doi.org/10.5772/intechopen.92085
15. Industrial IoT platform: Buy it pre-built or make your own? (2020). https://www.plm.automation.siemens.com/global/en/topic/industrial-iot-platform/70070
16. Jelínek, A.: Production accuracy monitoring. Master thesis, Brno University of Technology, Faculty of Mechanical Engineering, Department of Production Machines, Systems and Robotics. Thesis supervisor: Michal Holub. Brno (2020). https://www.vutbr.cz/studenti/zav-prace/detail/125219. Accessed 28 June 2021
17. Foster, I., Gannon, D.B.: Cloud Computing for Science and Engineering (2017). ISBN 978-0262037242. https://cloud4scieng.org/
18. MindSphere Architecture. Siemens, Munich (1996–2021). https://developer.mindsphere.io/concepts/concept-architecture.html. Accessed 10 Sept 2021

Multi-criteria Decision Support with SWARA and TOPSIS Methods to the Digital Transformation Process in the Iron and Steel Industry

Yıldız Şahin and Yasemin Bozkurt

Industrial Engineering, Kocaeli University, İzmit, Turkey
[email protected], [email protected]

Abstract. Industrial revolutions are seen as the beginning of radical changes in a wide variety of fields. There have been four industrial revolutions since the first one, which emerged in the 18th century, when steam-powered machines laid the foundation of mechanization. With the first three revolutions, mechanization increased; as a result of the fourth industrial revolution, a major digital transformation has become necessary. Following developing technologies closely is almost a necessity in order to survive and succeed in the competitive environment of the globalizing world. Companies that adapt to the fourth industrial revolution can achieve their goals in this struggle for existence. In this study, digital transformation in the iron and steel industry, which affects many sectors, is discussed. In order to achieve success in the transition process, the applicability of Industry 4.0 technologies was evaluated with the SWARA and TOPSIS methods.

Keywords: Iron and steel industry · Technology · Industry 4.0 · SWARA · TOPSIS

1 Introduction

As a result of the increasing globalization and competition that have come with the development of technology, companies have had to adapt to digitalization. With technological developments, industrial revolutions have occurred, and the efficiency, convenience, and similar benefits each revolution provides to humanity have increased constantly. The Industry 4.0 technologies that make digitalization mandatory can be listed as big data, the internet of things, cloud computing, additive manufacturing, simulation, augmented reality, system integration, cyber security, and autonomous robots. These technologies have created cyber-physical systems and smart factories, and the human factor used in factories has shifted from muscle power to brain power. While this may lead to the disappearance of some professions, it will also allow new ones to emerge.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 309–322, 2022. https://doi.org/10.1007/978-3-030-90421-0_26

Steel has been seen as a remarkable metal throughout the industrial revolutions. Steel consumption is increasing in industrialized societies and plays a serious role in almost


every phase of life. This is because iron and steel are very important metals that serve as inputs for many industries. Countries that realize digital transformation with Industry 4.0 hold an important position in the world iron and steel industry. The Turkish iron and steel industry has recently taken steps towards digital transformation. This study aims to identify the Industry 4.0 technologies that should be applied first in the iron and steel industry. The SWARA and TOPSIS methods were used in the evaluation process.

2 Industry 4.0 and Digital Transformation

Industry 4.0 is a concept that was first mentioned at the Hannover Fair in 2011 [1]. It officially started in 2011 with its adoption as an industrial policy by Germany [2]. Industry 4.0 basically consists of the integration of supply chains, production facilities, and service systems in order to establish value-added networks [3–5]. It can be defined as the management of production processes by machines without the need for manpower [6]. The digitalization of products differs greatly from previous industrial revolutions in terms of technology, as it is characterized by the existence of cloud computing and big data systems, as well as new technologies that integrate the biological, physical, and digital worlds, which have a great impact on all disciplines, economies, and industries [7, 8]. The fourth industrial revolution includes technologies in a wide variety of fields that integrate and interact with each other [2]. It creates the factories of the future, where production lines are connected with each other with the help of sensors, data flow is instantaneous, and the software and algorithms of the entire system can thus produce instant reports [9]. With the expectation that digitalization in businesses would reach larger dimensions with Industry 4.0 and become mandatory, digitalization has begun to be called digital transformation [10]. The aim is to increase speed, quality, flexibility, and cost efficiency. With the effect of digital transformation, businesses have communicated with the outside world more than ever before and have established a close relationship with consumers. This communication has become more transparent with the created cyber systems, which has led to the formation of new databases [11]. Technologies have recently become more easily accessible and applicable than ever before. The cost of sensors, processors, and similar components, which are the basis of digital transformation, has decreased over time. In addition, with the help of developing sensor and internet technologies, more data has been produced than ever before. Thanks to increasingly convenient and applicable technologies, the use of digital transformation technologies by companies has increased [12].

2.1 Industry 4.0 Technologies

The digital transformation process is defined as the digitization of all areas where production and service-based economic activities take place. In this process, technologies such as the internet of things, cyber-physical technologies, horizontal and vertical integration, artificial intelligence, cloud computing, big data, autonomous robots, simulation, cyber security, and additive manufacturing play a great role.


1) Internet of Things: IoT technology can be defined as the interconnection of objects that benefit from the interaction of physical systems with each other through a network [13]. Intelligent sensor devices in internet of things technology have the ability to introduce themselves, create a network, store the data they have acquired, and transfer it to cloud services with public analysis capability. Users access these services through easy-to-use web services [14]. With the help of self-organizing smart entities, personalized products will be produced, and even batch sizes of a single product will become possible [15].

2) Horizontal and Vertical Integration: One of the most important goals of the digital transformation process is to provide system integration. It can be defined as all interconnected systems being in constant communication with each other and being able to monitor and control each other when necessary [16]. With the help of various integrations, problems can be answered quickly, productivity and profitability can be achieved in production, the manufacturer's supply chain can be integrated by ensuring the continuity of logistics activities, and energy and time savings can be realized by acting in harmony with the technological infrastructure and internet systems [17].

3) Big Data: One of the most important concepts of the digital era is big data. It is a set of data that is too large to be processed with existing resources and that is difficult to store and analyze [18]. With big data technology, it is possible to transform the collected data into a workable and meaningful form [6]. In the digital economy, not only sensors on the production line but also security cameras, online clicks, status updates on social media, electricity meters, customer call records, and point-of-sale records are used to provide data flow. The ease of access to and analysis of such big data is very important for the competitive advantage and growth of the business. With the help of these data, it is possible for businesses to have information and a say in customer demands, product and service development stages, market structures, and many other imaginable areas [19].

4) Cloud Computing: Cloud computing technology is a service model that provides its users with adaptable and online access to different computing resources. In this way, fixed costs originating from ICT may be reduced [20]. Cloud computing technologies also play a role in extremely important issues such as providing business flexibility, creating new employment areas, establishing cooperation between institutions and organizations, increasing productivity in the economy, and increasing public sector efficiency [21]. There are many advantages that cloud computing systems provide. In addition to these advantages, it is very important to ensure the security and confidentiality of the collected information [17].

5) Cyber Security: Ensuring cyber security is one of the most important issues of the digital age [22]. Cyber threats pose a danger to the software and hardware structure of IT systems as well as to data. Through unauthorized, malicious access, attacks such as modification, damage, and theft of users' software, hardware, and data may occur [23]. The cyber security system undertakes tasks such as identifying attacks coming from the internet, responding to these attacks, and ensuring protection from them [24]. Investments in these technologies and the training of expert cyber security personnel are needed in order to avoid any disruption in this network-driven system [20].


6) Autonomous Robots: With the help of information and communication hardware and software, robots can run artificial intelligence applications and communicate with the external environment. Robots that have the ability to take action without any outside interference, thanks to their decision-making mechanisms, are called autonomous robots [17]. There has been a great increase in the use of robotic technologies with the developments in information and communication technologies. However, the production and development capacity for these robotic systems can be costly and troublesome to set up [17].

7) Augmented Reality: It can be defined as technologies that combine the virtual and the real and, as a result, provide users with three-dimensional imaging and real-time interaction [25]. It helps to close some gaps between product development and product operation [26]. In addition, it allows employees to make decisions more easily and improve working procedures [27].

8) Simulation: It is an imitation of actual physical structures created in the computer environment. Firms test the operation of planned processes with the help of simulations. In this way, they prevent problems that are difficult to predict in many areas and ensure the continuity of the process, providing a competitive advantage [28, 29]. Today, three-dimensional simulations are generally used in manufacturing services [15].

9) Additive Manufacturing and 3D Printers: It is a rapid production technique in which material is added layer by layer from three-dimensional data instead of using classical production methods [30]. Three-dimensional printers are used to transform the three-dimensional computer data into a real object [31]. For this reason, it has a great role in producing customized products for customer needs [32]. Additive manufacturing accelerates and facilitates the production of complex parts. It unifies the design and production process while reducing the amount of machinery used in production [15].

3 Methodology

3.1 SWARA Method

The SWARA (Step-wise Weight Assessment Ratio Analysis) method is a weighting method first proposed by Keršulienė, Zavadskas, and Turskis [33] in 2010. The SWARA method is widely used because it requires fewer comparisons than other methods [34]. It is a subjective method. It is also called an expert-focused method, as the opinions of experts have a significant share in the processes of evaluating the criteria and calculating the weights [35]. In the literature, the SWARA method has been used to determine criterion weights in applied studies in different areas. Material selection for airframe manufacturing, management of currency risk, cargo company selection, cold storage selection, determination of bank preferences of corporate customers, evaluation of automotive sector companies in the Fortune 500 list, region selection for truffle production, supplier selection, server selection, and competency-based IT personnel selection are among these applications [36–45].


3.1.1 Implementation Steps of the SWARA Method

Step 1: Decision makers determine the criterion they consider most important. The criterion considered most important is given a score of 1.00. The other criteria are scored relative to the most important criterion.

Step 2: The average of the importance scores determined by the decision makers for each criterion is calculated using Eq. (1), where l denotes the number of decision makers:

\bar{p}_j = \frac{\sum_{k=1}^{l} p_j^k}{l}, \quad j = 1, \ldots, n \quad (1)

Step 3: The criteria are ranked by their mean importance scores from largest to smallest and compared by taking the differences of consecutive mean importance scores. As a result of this process, the comparative importance of the average value, s_j, is determined for each criterion.

Step 4: Using Eq. (2), the coefficient values c_j of the criteria are calculated. The coefficient c_j of the most important criterion is set to 1:

c_j = \begin{cases} 1, & j = 1 \\ s_j + 1, & j > 1 \end{cases} \quad (2)

Step 5: The adjusted weight s'_j is calculated for each criterion. The s'_j value of the first-ranked criterion is set to 1; the adjusted weights of the other criteria are found with Eq. (3):

s'_j = \begin{cases} 1, & j = 1 \\ s'_{j-1} / c_j, & j > 1 \end{cases} \quad (3)

Step 6: Using Eq. (4), the final weights w_j of the criteria are calculated by dividing each adjusted weight by the sum of the adjusted weights [44]:

w_j = \frac{s'_j}{\sum_{j=1}^{n} s'_j}, \quad j = 1, 2, \ldots, n \quad (4)
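The six steps above translate directly into code. The sketch below is an illustrative Python implementation (function and variable names are our own, not from the paper); applied to the averaged expert scores reported later in Table 1, it reproduces the published weights to within rounding.

```python
def swara_weights(avg_scores):
    """Compute SWARA weights from mean importance scores.

    `avg_scores` must be sorted from most to least important.
    Returns the final weights w_j (summing to 1).
    """
    n = len(avg_scores)
    # Step 3: comparative importance s_j = difference of consecutive means
    s = [0.0] + [avg_scores[j - 1] - avg_scores[j] for j in range(1, n)]
    # Step 4: coefficients c_j = s_j + 1 (c_1 = 1)
    c = [1.0] + [s[j] + 1.0 for j in range(1, n)]
    # Step 5: adjusted weights s'_j = s'_{j-1} / c_j (s'_1 = 1)
    s_adj = [1.0]
    for j in range(1, n):
        s_adj.append(s_adj[j - 1] / c[j])
    # Step 6: normalize so the weights sum to 1
    total = sum(s_adj)
    return [x / total for x in s_adj]

# Averaged scores of the three decision makers (Table 1), TI first, EC last:
means = [0.950, 0.883, 0.850, 0.833, 0.700, 0.683, 0.633, 0.617, 0.500, 0.267]
w = swara_weights(means)
print(round(w[0], 4), round(w[-1], 4))  # → 0.126 0.0664
```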

3.2 TOPSIS Method

The TOPSIS method is a multi-criteria decision-making method developed by Hwang and Yoon in 1981. It has been applied in various fields such as supplier selection, employee and corporate performance evaluation, housing selection, selection of drilling parameters for magnesium AZ91, and evaluation of the effect of edible coatings on the shelf life of fresh strawberries [46–52]. The TOPSIS method examines the alternatives in terms of their proximity to the positive ideal solution and their distance from the negative ideal solution [53]. In addition, ideal and non-ideal solutions can be determined with this method. It is an intuitive, easy-to-understand method that achieves highly reliable results [54].


3.2.1 Implementation Steps of the TOPSIS Method [53]

Step 1: The initial decision matrix A is composed with alternatives in the rows and criteria in the columns, where m is the number of alternatives and n the number of criteria:

A = [a_{ij}], \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n

Step 2: The normalized decision matrix R = [r_{ij}] is calculated with Eq. (5); these calculations are performed for each element:

r_{ij} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^{m} a_{ij}^2}} \quad (5)

Step 3: After the criterion weights w_j are determined, the weighted normalized decision matrix V = [v_{ij}] is composed by multiplying each element of R by the weight of its criterion:

v_{ij} = r_{ij} \cdot w_j, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n \quad (6)

Step 4: While composing the positive ideal solution set A^+ = \{v_1^*, v_2^*, \ldots, v_n^*\}, the largest values are selected for the maximization-oriented (benefit) criteria and the smallest values for the minimization-oriented (cost) criteria, using Eq. (7), where J denotes the set of benefit criteria and J' the set of cost criteria:

A^+ = \left\{ \left( \max_i v_{ij} \mid j \in J \right), \left( \min_i v_{ij} \mid j \in J' \right) \right\} \quad (7)

The negative ideal solution set A^- = \{v_1^-, v_2^-, \ldots, v_n^-\} is created with Eq. (8):

A^- = \left\{ \left( \min_i v_{ij} \mid j \in J \right), \left( \max_i v_{ij} \mid j \in J' \right) \right\} \quad (8)

Step 5: Using the Euclidean distance approach, the distance of each alternative from the positive ideal solution (S_i^+) and from the negative ideal solution (S_i^-) is calculated with Eqs. (9) and (10):

S_i^+ = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^* \right)^2} \quad (9)

S_i^- = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^- \right)^2} \quad (10)

Step 6: Using the distances from the positive and negative ideal solutions, the relative closeness C_i^* to the ideal solution is calculated for each alternative. The value of C_i^* lies between 0 and 1:

C_i^* = \frac{S_i^-}{S_i^- + S_i^+} \quad (11)
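As with SWARA, the TOPSIS steps translate directly into code. The following illustrative Python sketch (names are ours, not from the paper) implements Eqs. (5)–(11), handles both benefit and cost criteria, and is checked here on a tiny two-alternative example:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness C_i* (Eqs. 5-11).

    matrix  : list of rows, one per alternative
    weights : criterion weights w_j (summing to 1)
    benefit : benefit[j] is True for benefit criteria, False for cost
    """
    m, n = len(matrix), len(matrix[0])
    # Step 2: vector normalization (Eq. 5)
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    # Step 3: weighted normalized matrix (Eq. 6)
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Step 4: positive and negative ideal solutions (Eqs. 7-8)
    v_pos = [max(row[j] for row in v) if benefit[j]
             else min(row[j] for row in v) for j in range(n)]
    v_neg = [min(row[j] for row in v) if benefit[j]
             else max(row[j] for row in v) for j in range(n)]
    # Steps 5-6: Euclidean distances (Eqs. 9-10) and closeness (Eq. 11)
    closeness = []
    for row in v:
        s_pos = math.sqrt(sum((row[j] - v_pos[j]) ** 2 for j in range(n)))
        s_neg = math.sqrt(sum((row[j] - v_neg[j]) ** 2 for j in range(n)))
        closeness.append(s_neg / (s_neg + s_pos))
    return closeness

# Two alternatives, two benefit criteria, weights 0.7 / 0.3:
c = topsis([[3, 1], [1, 3]], [0.7, 0.3], [True, True])
print([round(x, 4) for x in c])  # → [0.7, 0.3]
```

The alternative with the higher C_i* is preferred; in the toy example, the first alternative wins because it is strongest on the more heavily weighted criterion.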

4 Multi-criteria Integration Assessment of Industry 4.0 Technologies in the Iron and Steel Industry

Industry 4.0 has a very important role in companies' strong presence in global competition. For this reason, companies are working on integrating Industry 4.0 technologies into their structures. The transition is a gradual process and can take a long time. In this study, it is aimed to determine the Industry 4.0 technologies in which the iron and steel sector should invest first, using multi-criteria decision-making methods. The SWARA method was used in the criterion weighting phase, and the TOPSIS method, which is widely used in the literature and produces successful results, was used in the evaluation and decision-making phase for the alternatives.

4.1 Determination of Evaluation Criteria

While determining the evaluation criteria, studies in the literature were taken as a basis and 10 criteria were determined. These criteria are cost, applicability, qualified workforce, technological infrastructure, compliance with strategies, management and organization, sustainability, security, efficiency, and employment cut [12, 55].

1) Cost (C): The monetary value the firm requires for the investment in question.
2) Applicability (A): The physical suitability of the investment in question for the iron and steel industry.
3) Qualified workforce (QW): The availability of the qualified personnel required for the investment.
4) Technological infrastructure (TI): The availability of the equipment needed to make the investment.
5) Compliance with the strategies (CWS): The compatibility of the investment with the firm's strategies.
6) Management and organization (MO): The support given by management to employees in order to obtain maximum efficiency from Industry 4.0 (planning, training, etc.).
7) Sustainability (S): The lifetime of the benefit that the investment will provide to the company.
8) Security (SC): How safe the investment is for the firm.
9) Efficiency (E): The ability of the investment to generate income for the firm.
10) Employment cut (EC): The decrease in employment due to the investment.


Y. Şahin and Y. Bozkurt

4.2 Identification of Alternatives

Since the study aims to determine the Industry 4.0 technologies to be invested in first, the alternatives were chosen from among Industry 4.0 technologies: the Internet of Things, Horizontal and Vertical Integration, Big Data, Cloud Computing, Cyber Security, Autonomous Robots, Augmented Reality, Simulation, and Additive Manufacturing and 3D Printers.

4.3 Determination of Criterion Weights by SWARA Method

All evaluations were carried out specifically for the iron and steel industry. Therefore, in determining the criterion weights, the opinions of three expert decision makers (DM) working in the field of digital transformation in the relevant sector were used. The averaged opinions obtained from the decision makers were fed into the SWARA method and the criterion weights were determined (Table 1).

Table 1. Determination of criterion weights by the SWARA method

Order  Criterion  DM1   DM2   DM3   Average  s_j    k_j    q_j     w_j
1      TI         1.00  0.95  0.90  0.950    -      1.000  1.0000  0.1260
2      E          0.90  1.00  0.75  0.883    0.067  1.067  0.9375  0.1181
3      S          0.80  0.90  0.85  0.850    0.033  1.033  0.9073  0.1143
4      MO         0.70  0.80  1.00  0.833    0.017  1.017  0.8924  0.1124
5      A          0.90  0.70  0.50  0.700    0.133  1.133  0.7874  0.0992
6      C          0.95  0.50  0.60  0.683    0.017  1.017  0.7745  0.0976
7      CWS        0.65  0.45  0.80  0.633    0.050  1.050  0.7376  0.0929
8      QW         0.75  0.35  0.75  0.617    0.017  1.017  0.7255  0.0914
9      SC         0.85  0.30  0.35  0.500    0.117  1.117  0.6497  0.0818
10     EC         0.50  0.10  0.20  0.267    0.233  1.233  0.5268  0.0664

(Column symbols follow the usual SWARA notation: s_j is the comparative importance of the average value, k_j = s_j + 1 the coefficient, q_j the recalculated weight, and w_j the final weight.)
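The SWARA weights in Table 1 can be reproduced with a short script. This is a sketch following the standard recurrences of Keršuliene et al. [33] (k_j = s_j + 1, q_j = q_{j-1}/k_j, w_j = q_j / Σq_j); the function name and small rounding differences from the table are ours:

```python
def swara_weights(s):
    # s[j]: comparative importance s_j of criterion j relative to criterion j-1,
    # for criteria already sorted from most to least important (s[0] is unused,
    # since the top criterion has no predecessor).
    q = [1.0]                              # q_1 = 1 for the most important criterion
    for sj in s[1:]:
        q.append(q[-1] / (1.0 + sj))       # q_j = q_{j-1} / k_j with k_j = s_j + 1
    total = sum(q)
    return [qj / total for qj in q]        # w_j = q_j / sum of all q

# s_j column of Table 1 (order: TI, E, S, MO, A, C, CWS, QW, SC, EC)
s = [0.0, 0.067, 0.033, 0.017, 0.133, 0.017, 0.050, 0.017, 0.117, 0.233]
weights = swara_weights(s)                 # weights[0] comes out near 0.1260 for TI
```

Running this on the s_j column reproduces the w_j column of Table 1 to within rounding.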

4.4 Evaluation of Alternatives with the TOPSIS Method

Normalization is done by dividing each value in a column by the square root of the sum of the squares of all the values in that column (Table 2). The weighted normalized decision matrix shown in Table 3 is calculated by multiplying the criterion weights obtained by the SWARA method with the values in the normalized decision matrix.
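The normalization and weighting steps just described can be sketched as follows (a minimal illustration with made-up 3 x 2 data, not the paper's full matrix; NumPy is assumed available):

```python
import numpy as np

def weighted_normalize(X, w):
    # Divide each column by the square root of the sum of squares of that
    # column (vector normalization), then scale each column by its weight.
    R = X / np.sqrt((X ** 2).sum(axis=0))
    return R * w

# illustrative example: 3 alternatives x 2 criteria
X = np.array([[69.104, 66.039],
              [39.149, 57.388],
              [54.004, 79.581]])
V = weighted_normalize(X, np.array([0.098, 0.099]))  # weighted normalized matrix
```

By construction, each column of the normalized matrix R = V / w has unit Euclidean norm.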

Multi-criteria Decision Support with SWARA and TOPSIS Methods


Table 2. Initial matrix and criterion weights

Criterion:                              (C)     (A)     (QW)    (TI)    (CWS)   (MO)    (S)     (SC)    (E)     (EC)
Weight:                                 0.098   0.099   0.091   0.126   0.093   0.112   0.114   0.082   0.118   0.066

Alternative
IoT                                     69.104  66.039  86.446  47.622  71.628  74.553  78.195  45.549  81.433  64.633
Horizontal and vertical integration     39.149  57.388  58.723  68.444  96.549  94.727  84.716  62.743  91.258  50.133
Big data                                54.004  79.581  83.300  64.792  82.682  74.889  73.986  46.723  82.682  74.889
Cloud computing                         54.004  64.633  80.000  59.274  71.138  71.138  81.206  53.601  69.521  62.880
Cyber security                          36.717  74.650  86.233  39.040  86.535  96.549  81.206  80.671  96.549  74.680
Autonomous robots                       96.638  69.932  71.138  16.355  31.707  43.089  49.529  73.061  62.944  98.305
Augmented reality                       68.176  12.599  64.029  24.662  25.650  35.569  39.149  33.019  44.814  60.991
Simulation                              19.574  72.590  66.096  78.195  83.712  64.633  75.987  63.164  70.607  64.633
Additive manufacturing and 3D printers  50.133  9.086   41.602  27.967  10.772  34.200  54.387  60.000  38.259  82.217

Table 3. Weighted normalized decision matrix

Alternative                             (C)     (A)     (QW)    (TI)    (CWS)   (MO)    (S)     (SC)    (E)     (EC)
IoT                                     0.0387  0.0355  0.0365  0.0388  0.0322  0.0404  0.0423  0.0210  0.0437  0.0200
Horizontal and vertical integration     0.0219  0.0308  0.0248  0.0557  0.0435  0.0514  0.0458  0.0289  0.0490  0.0155
Big data                                0.0302  0.0427  0.0351  0.0527  0.0372  0.0406  0.0400  0.0215  0.0444  0.0231
Cloud computing                         0.0302  0.0347  0.0337  0.0482  0.0320  0.0386  0.0439  0.0247  0.0373  0.0194
Cyber security                          0.0206  0.0401  0.0364  0.0318  0.0389  0.0523  0.0439  0.0371  0.0518  0.0231
Autonomous robots                       0.0541  0.0375  0.0300  0.0133  0.0143  0.0234  0.0268  0.0336  0.0338  0.0304
Augmented reality                       0.0382  0.0068  0.0270  0.0201  0.0115  0.0193  0.0212  0.0152  0.0241  0.0188
Simulation                              0.0110  0.0390  0.0279  0.0636  0.0377  0.0350  0.0411  0.0291  0.0379  0.0200
Additive manufacturing and 3D printers  0.0281  0.0049  0.0175  0.0228  0.0048  0.0185  0.0294  0.0276  0.0205  0.0254


In determining the positive ideal solution from the weighted normalized decision matrix, the greatest value in each column was selected for maximization-oriented criteria and the smallest value for minimization-oriented criteria. In determining the negative ideal solution, the smallest value was selected for maximization-oriented criteria and the greatest value for minimization-oriented criteria. In this study, the cost and employment cut criteria were treated as minimization criteria, while the other criteria were treated as maximization criteria (Table 4).

Table 4. Positive and negative ideal solution sets

     (C)     (A)     (QW)    (TI)    (CWS)   (MO)    (S)     (SC)    (E)     (EC)
A+   0.0110  0.0427  0.0365  0.0636  0.0435  0.0523  0.0458  0.0371  0.0518  0.0155
A-   0.0541  0.0049  0.0175  0.0133  0.0048  0.0185  0.0212  0.0152  0.0205  0.0304

Si+ and Si− values were calculated using Eq. (8) and Eq. (9). Afterwards, the relative closeness values to the ideal solution were obtained using the Si+ and Si− values. The Si+, Si−, and Ci* values and the resulting order are shown in Table 5.

Table 5. TOPSIS method solution results

Alternative                             Si+     Si-     Ci*     Order
IoT                                     0.0455  0.0673  0.5968  6
Horizontal and vertical integration     0.0232  0.0892  0.7934  1
Big data                                0.0326  0.0801  0.7109  4
Cloud computing                         0.0372  0.0713  0.6571  5
Cyber security                          0.0346  0.0860  0.7132  3
Autonomous robots                       0.0841  0.0434  0.3402  7
Augmented reality                       0.0895  0.0241  0.2126  9
Simulation                              0.0269  0.0895  0.7691  2
Additive manufacturing and 3D printers  0.0884  0.0318  0.2647  8
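The separation measures and closeness coefficients of Table 5 follow the standard TOPSIS equations; a minimal sketch (NumPy assumed; the function name and toy data are ours, where `V` stands for a weighted normalized matrix like Table 3 and `benefit` flags the maximization-oriented criteria):

```python
import numpy as np

def topsis_closeness(V, benefit):
    # positive / negative ideal solutions, as in Table 4
    a_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    a_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_pos = np.sqrt(((V - a_pos) ** 2).sum(axis=1))  # Si+, Eq. (8)
    s_neg = np.sqrt(((V - a_neg) ** 2).sum(axis=1))  # Si-, Eq. (9)
    return s_neg / (s_pos + s_neg)                   # relative closeness Ci*

# toy example: alternative 0 dominates on both benefit criteria
V = np.array([[0.06, 0.05],
              [0.02, 0.01]])
c = topsis_closeness(V, np.array([True, True]))      # c[0] = 1.0, c[1] = 0.0
```

Alternatives are then ranked by Ci* in descending order, which is how the "Order" column of Table 5 is obtained.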

5 Conclusions

With Industry 4.0, first mentioned at the Hannover Fair in 2011, a new era began in how companies understand production. Since then, companies in various sectors have made an effort to develop themselves and take an important position in world competition by adapting their production systems to Industry 4.0 technologies. One of the sectors attempting this adaptation is the iron and steel sector. Since iron and steel products are used in many areas, the demand for them is quite high; to meet this demand, iron and steel companies will be able to increase production in a more controlled and more efficient way by implementing Industry 4.0 technologies. In this study, the Industry 4.0 technologies that should be applied first in the iron and steel industry were determined by considering 10 evaluation criteria. The SWARA method was used to weight the evaluation criteria. For ranking the alternatives, the TOPSIS method, which is widely applied in the literature and known to be reliable, was used. To obtain the application data, the opinions of three decision makers working in the Turkish iron and steel industry were used for the SWARA and TOPSIS scoring. The SWARA evaluation showed that the most important criterion for the transition to Industry 4.0 technologies is technological infrastructure. Cost ranked only 6th in the order of importance, which suggests that, for changes of this significance, the investment cost is not seen as a primary obstacle. Employment cut ranked last, reflecting the view that new technologies may create employment gains rather than cause employment cuts. The TOPSIS evaluation identified Horizontal and Vertical Integration as the Industry 4.0 technology that should be applied first in the iron and steel sector. Horizontal and vertical integration was judged to fit the management and organizational structure as well as the strategies, and it was thought that implementing it would increase productivity.
The second-priority technology is simulation, which is already used by many iron and steel companies. Simulation ranks second because of its low cost, its compatibility with company strategies, and the sufficiency of the existing technological infrastructure. It is followed by Cyber Security, Big Data, Cloud Computing, the Internet of Things, Autonomous Robots, Additive Manufacturing and 3D Printers, and Augmented Reality. Although autonomous robots are expected to have a great impact on production systems, they fell behind in the priority order because of the lack of infrastructure and because they can be used in only a limited part of the production processes.

Acknowledgment. The authors would like to thank Turkish iron-steel industry employees Hande VURŞAN and Sedanur Selay KASAP, who contributed to obtaining the data for the study by sharing their expert opinions.

References

1. Soylu, A.: Endüstri 4.0 ve Girişimcilikte Yeni Yaklaşımlar. Pamukkale Sosyal Bilimler Dergisi 32, 43–57 (2018)
2. Öztuna, B.: Endüstri 4.0 (Dördüncü Sanayi Devrimi) İle Çalışma Yaşamının Geleceği. Gece Kitaplığı Yayınları, Ankara (2017)


3. Salkın, C., Öner, M., Üstündağ, A., Çevikcan, E.: A Conceptual Framework for Industry 4.0: Managing the Digital Transformation. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-57870-5
4. Hwang, G.: Challenges for innovative HRD in the era of the 4th industrial revolution. Asian J. Innov. Policy 8(2), 288–301 (2019)
5. Ndung, N., Signé, L.: Capturing the Fourth Industrial Revolution, A Regional and National Agenda, pp. 1–14 (2020)
6. Gabaçlı, N., Uzunöz, M.: IV. Sanayi Devrimi: Endüstri 4.0 ve Otomotiv Sektörü. In: 3rd International Congress on Political, Economic and Social Studies (ICPESS 2017), Ankara, pp. 149–174 (2017)
7. Türkel, S., Yeşilkuş, F.: Dijital Dönüşüm Paradigması: Endüstri 4.0. Avrasya Sosyal ve Ekonomi Araştırmaları Dergisi 7(5), 332–346 (2020)
8. Schwab, K.: The Fourth Industrial Revolution. World Economic Forum, Switzerland (2016). ISBN-13: 978-1-944835-01-9
9. Doğru, B.N., Meçik, O.: Türkiye'de Endüstri 4.0'ın İşgücü Piyasasına Etkileri: Firma Beklentileri. Süleyman Demirel Üniversitesi İktisadi ve İdari Bilimler Dergisi 23(Endüstri 4.0 ve Örgütsel Değişim Özel Sayısı), 1581–1606 (2018)
10. Klein, M.: İşletmelerin Dijital Dönüşüm Senaryoları - Kavramsal Bir Model Önerisi. Elektronik Sosyal Bilimler Dergisi 19(74), 997–1019 (2020)
11. Acaralp, M.C.: İnsan Kaynakları Yönetiminde Endüstri 4.0 ve Dijitalleşme Etkisi Yetenek Yönetimi. Proje Çalışması, Akdeniz Karpaz Üniversitesi, Sosyal Bilimler Enstitüsü, Lefkoşe, Kıbrıs (2017)
12. Ataman, A.C.: Savunma Sanayinde Endüstri 4.0 Olgunluk Parametrelerinin Tereddütlü Bulanık AHP Yöntemi İle Önceliklendirilmesi. Yüksek Lisans Tezi, Bahçeşehir Üniversitesi, Fen Bilimleri Enstitüsü, İstanbul, Türkiye (2018)
13. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of Things (IoT): a vision, architectural elements, and future directions. Future Gener. Comput. Syst. 29(7), 1645–1660 (2013)
14. Miorandi, D., Sicari, S., Pellegrini, F.D., Chlamtac, I.: Internet of Things: vision, applications and research challenges. J. Ad Hoc Netw. 10, 1497–1516 (2012)
15. Banger, G.: Endüstri 4.0 Ekstra (1 b.). Dorlion Yayınları, Ankara (2017)
16. Pérez-Lara, M., Saucedo-Martínez, J., Marmolejo-Saucedo, J., Salais-Fierro, T., Vasant, P.: Vertical and horizontal integration systems in industry 4.0. Wireless Netw. 26(7), 4767–4775 (2018). https://doi.org/10.1007/s11276-018-1873-2
17. Şahin, C.: Ülkelerin Endüstri 4.0 Düzeylerinin Copras Yöntemi İle Analizi: G-20 Ülkeleri ve Türkiye. Yüksek Lisans Tezi, Bartın Üniversitesi, Sosyal Bilimler Enstitüsü, Bartın, Türkiye (2019)
18. Uzkurt, C.: Yenilik (İnovasyon) Yönetimi ve Yenilikçi Örgüt Kültürü. Beta Yayınevi, İstanbul (2017)
19. United Nations: Information Economy Report 2017: Digitalization, Trade and Development. United Nations Publication (2017)
20. Gözüküçük, M.F.: Dijital Dönüşüm ve Ekonomik Büyüme. Yüksek Lisans Tezi, İstanbul Ticaret Üniversitesi, Sosyal Bilimler Enstitüsü, İstanbul, Türkiye (2020)
21. Turan, M.: Bulut Bilişim ve Mali Etkileri: Bulutta Vergi. Bilgi Dünyası 15(2), 296–326 (2014)
22. Özsoylu, A.F.: Endüstri 4.0. Çukurova Üniversitesi İİBF Dergisi 21(1), 41–64 (2017)
23. Sağırlıoğlu, Ş., Alkan, M.: Siber Güvenlik ve Savunma: Önem, Tanımlar, Unsurlar ve Önlemler. Grafiker Yayınları, Ankara (2018)
24. Piedrahita, A.F.M., Gaur, V., Giraldo, J., Cardenas, A.A., Rueda, S.J.: Virtual incident response functions in control systems. Comput. Netw. 135, 147–159 (2018)


25. Altınpulluk, H., Kesim, M.: Geçmişten Günümüze Artırılmış Gerçeklik Uygulamalarında Gerçekleşen Paradigma Değişimleri. Akademik Bilişim, Eskişehir: Anadolu Üniversitesi (2015). https://doi.org/10.13140/2.1.3721.2967
26. Rentzos, L., Papanastasiou, S., Papakostas, N., Chryssolouris, G.: Augmented reality for human-based assembly: using product and process semantics. In: 12th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, Las Vegas, vol. 46, pp. 98–101 (2013)
27. Vaidya, S., Ambad, P., Bhosle, S.: Industry 4.0 - a glimpse. Procedia Manuf. 20, 233–238 (2018)
28. Çelen, S.: Sanayi 4.0 ve Simülasyon. Int. J. 3D Printing Technol. Digit. Ind. 1(1), 9–26 (2017)
29. Kayar, A.: İmalat Sektöründeki İşletmelerde Endüstri 4.0'a Geçiş İçin Dijital Olgunluk Seviyesinin Belirlenmesi: Yeni Bir Model Önerisi. Yüksek Lisans Tezi, İstanbul Ticaret Üniversitesi, Fen Bilimleri Enstitüsü, İstanbul, Türkiye (2019)
30. Özsoy, K., Duman, B.: Eklemeli İmalat (3 boyutlu baskı) Teknolojilerinin Eğitimde Kullanılabilirliği. Int. J. 3D Printing Technol. Digit. Ind. 1(1), 36–48 (2017)
31. Rennung, F., Luminosu, C.T., Draghici, A.: Service provision in the framework of industry 4.0. Procedia Soc. Behav. Sci. 221, 372–377 (2016)
32. Kim, H., Lin, Y., Tseng, T.L.B.: A review on quality control in additive manufacturing. Rapid Prototyping J. 24, 645–669 (2018)
33. Keršuliene, V., Zavadskas, E.K., Turskis, Z.: Selection of rational dispute resolution method by applying new stepwise weight assessment ratio analysis (SWARA). J. Bus. Econ. Manag. 11(2), 243–258 (2010)
34. Bircan, H.: Kriter Ağırlıklandırma Yöntemleri. Nobel Akademik Yayıncılık, Ankara (2020)
35. Zolfani, S.H., Banihashemi, S.S.A.: Personnel selection based on a novel model of game theory and MCDM approaches. In: Proceedings of 8th International Scientific Conference Business and Management, Vilnius, Lithuania, pp. 191–198 (2014)
36. Boyacı, A., Tüzemen, M.Ç.: Bütünleşik SWARA-MULTIMOORA Yaklaşımı ile Uçak Gövdesi için Malzeme Seçimi. Gazi Üniversitesi Fen Bilimleri Dergisi 8(4), 768–782 (2020)
37. Aydın, Z.: Kur Riskinin Akademisyenler Tarafından Yönetilmesinin SWARA Yönetimi İle İncelenmesi. Gümrük Ticaret Dergisi 7(20), 51–63 (2020)
38. Ulutaş, A.: SWARA Tabanlı CODAS Yöntemi İle Kargo Şirketi Seçimi. MANAS Sosyal Araştırmalar Dergisi 9(3), 1640–1647 (2020)
39. Katrancı, A., Kundakçı, N.: SWARA Temelli Bulanık COPRAS Yöntemi ile Soğuk Hava Deposu Seçimi. Optimum Ekonomi ve Yönetim Bilimleri Dergisi 7(1), 63–80 (2020)
40. Çakır, E., Bilge, E.: Bütünleşik SWARA - MOORA Yöntemi ile Kurumsal Müşterilerin Banka Tercihlerinin Belirlenmesi: Aydın İlinde Bir Uygulama. Anemon Muş Alparslan Üniversitesi Sosyal Bilimler Dergisi 7(6), 269–289 (2019)
41. Çınaroğlu, E.: Fortune 500 Listesinde Yer Alan Otomotiv Sektörü Firmalarının SWARA Destekli COPRAS Yöntemi ile Değerlendirilmesi. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi 9(2), 593–611 (2019)
42. Yücenur, N., Şenkan, Ç., Kara, G.N., Türker, Ö.: Birleştirilmiş SWARA-COPRAS Yaklaşımını Kullanarak Trüf Mantarı Yetiştirilmesi için Bölge Seçimi. Erzincan Üniversitesi Fen Bilimleri Enstitüsü Dergisi 12(3), 1232–1253 (2019)
43. Adalı, E., Işık, A.: Bir Tedarikçi Seçim Problemi İçin SWARA ve WASPAS Yöntemlerine Dayanan Karar Verme Yaklaşımı. Int. Rev. Econ. Manag. 5(4), 56–77 (2017)
44. Yurdoğlu, H., Kundakçı, N.: SWARA ve WASPAS Yöntemleri ile Sunucu Seçimi. Balıkesir Üniversitesi Sosyal Bilimler Enstitüsü Dergisi 20(38), 253–269 (2017)
45. Dahooie, J.H., Abadi, E.B.J., Vanaki, A.S., Firoozfar, H.R.: Competency-based IT personnel selection using a hybrid SWARA and ARAS-G methodology. Hum. Factors Ergon. Manuf. Serv. Ind. 28(1), 5–16 (2018)


46. Supçiller, A.A., Çapraz, O.: AHP-TOPSIS Yöntemine Dayalı Tedarikçi Seçimi Uygulaması. Ekonometri ve İstatistik 13(12. Uluslararası Ekonometri, Yöneylem Araştırması, İstatistik Sempozyumu Özel Sayısı), 1–22 (2011)
47. Yükçü, S., Atağan, G.: TOPSIS Yöntemine Göre Performans Değerleme. Muhasebe ve Finansman Dergisi 45, 28–35 (2010)
48. Ak Oğuz, M., Köksal, M.: AHP ve TOPSIS Yöntemi ile Tedarikçi Seçimi. İstanbul Ticaret Üniversitesi Fen Bilimleri Dergisi 17(34), 69–89 (2018)
49. Alkan, T., Durduran, S.S.: Konut Seçimi Sürecinin AHP Temelli TOPSIS Yöntemi ile Analizi. Necmettin Erbakan Üniversitesi Fen ve Mühendislik Bilimleri Dergisi 2(2), 12–21 (2020)
50. Organ, A., Kaçaroğlu, M.O.: Entropi Ağırlıklı TOPSIS Yöntemi İle Türkiye'deki Vakıf Üniversiteleri'nin Değerlendirilmesi. Pamukkale İşletme ve Bilişim Yönetimi Dergisi 7(1), 28–45 (2020)
51. Varatharajulu, M., Duraiselvam, M., Kumar, M.B., Jayaprakash, G., Baskar, N.: Multi criteria decision making through TOPSIS and COPRAS on drilling parameters of magnesium AZ91. J. Magnes. Alloys 8(38), 1–18 (2021). https://doi.org/10.1016/j.jma.2021.05.006
52. Khodaei, D., Hamidi-Esfahani, Z., Rahmati, E.: Effect of edible coatings on the shelf-life of fresh strawberries: a comparative study using TOPSIS-Shannon entropy method. NFS J. 23, 17–23 (2021)
53. Hwang, C.L., Yoon, K.: Multiple Attribute Decision Making: Methods and Applications, A State-of-the-Art Survey. Lecture Notes in Economics and Mathematical Systems. Springer, New York (1981). https://doi.org/10.1007/978-3-642-48318-9
54. Atan, M., Altan, Ş.: Örnek Uygulamalarla Çok Kriterli Karar Verme Yöntemleri. Gazi Kitabevi, Ankara, Türkiye (2020)
55. Şahin, Y., Gürkaya, G., Ayaz, A.: Determining the industry 4.0 component to be applied primarily in the retail sector with the analytic hierarchy process. In: International Symposium for Production Research 2017, Vienna, Austria, pp. 631–640 (2017)

Sensor Based Intelligent Measurement and Blockchain in Food Quality Management

Gizem Şen1,2, İhsan Tolga Medeni2, Kamil Öncü Şen1, Numan M. Durakbasa3, and Tunç Durmuş Medeni2

1 The Scientific and Technological Research Council of Turkey (TUBITAK), Ankara, Turkey

{gizem.sen,oncu.sen}@tubitak.gov.tr

2 Ankara Yıldırım Beyazıt University, Ankara, Turkey

{tolgamedeni,tuncmedeni}@ybu.edu.tr
3 University of Technology Vienna, Vienna, Austria
[email protected]

Abstract. Today's information and communication technologies (ICT), which integrate several disciplines and find applications in food manufacturing, are used for production processes and quality management. Food production and quality management evolve together with advances in metrology, sensor-based measuring instruments, and the ICT used for evaluation. Quality management starts with acquiring accurate and reliable data from production, and it continues with transferring and storing the collected data in a secure chain. The data collected from the food production process are transformed into information by processing operations, based on artificial intelligence or statistical prediction, that support decision makers in food production management. This information is shared with experts and quality managers for monitoring the system. The sharing must be done securely to protect the system from external or internal intervention. Blockchain is a secure and distributed database solution that provides decentralized management of transaction data. A blockchain application is built on three structures: the blockchain ledger, the blockchain network, and the stakeholders. With these structures, blockchain applications offer security, privacy, efficiency, performance, usability, data integrity, and scalability. In this study, we aim to further consolidate quality processes by integrating blockchain technology, which has the specified features, with them. Our model encompasses a wireless sensor network (WSN) for collecting and processing sensor-based measurement data on the condition of a food production facility, and for distributing the resulting information using blockchain technology. This approach targets a more securely monitored system, more reliable quality management, and higher efficiency.

Keywords: Digitalization · Blockchain · Quality management · Metrology

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 323–334, 2022. https://doi.org/10.1007/978-3-030-90421-0_27


G. Şen et al.

1 Introduction

Since the beginning of manufacturing history, enterprises have been trying to control the production process using minimum resources to produce the best-quality products at lower costs. Therefore, not only research and development (R&D) companies but also manufacturers focus, first, on real-time data collection from production and environmental areas using digital sensor networks and, second, on storing and analysing data related to production processes and accessing those data independent of time and space [1]. This has shifted the paradigm in industry: digital transformation is gaining importance in the production process and the rules of the game are being rewritten. The digital transformation was triggered by "Industry 4.0", published in 2011 as a study of the German government [2]. It was named the fourth stage of industrialization, after the mechanization, electrification, and information stages. The focus lies on the improvement of current production systems and total quality management (TQM) to increase the awareness of digitalization [3]. New approaches announced with the transition to Industry 4.0, such as digitalization, smart manufacturing, and the cognitive factory, affect not only the efficiency of the production process but also quality management. In addition, including permissioned blockchain technology in the quality management model makes it even more reliable, with data strengthened in security and increased in accuracy. These new connected systems, called cyber-physical systems (CPS) [4], interact with each other using standard internet-based protocols or industry-specific communication protocols and provide data from related processes. The main precursors of cyber-physical systems are the internet of things (IoT), secure communication, cloud computing, the industrial internet, big data, and smart manufacturing [5].

All CPS core technologies work together to provide data on production status for management [6, 7]. CPS-based production management needs accurate and reliable measurement data with sufficient power to evaluate production processes. Because measurements are the main source of information for CPS, they must be acquired according to universal standards to increase interoperability with other systems. Owing to the great progress made through CPS, the control of raw materials, of production outputs at various stages, and of products in storage areas has been completely digitized. The accuracy of this digitally obtained data is very important both for quality control and for the monitoring process; quality management and monitoring require measurement data of the highest accuracy. These measurement data must be obtained with absolute accuracy and must not be manipulated if the desired TQM point is to be reached. Measurements from any part of the manufacturing process should be as accurate as possible, and these data should be securely distributed to the relevant experts. Blockchain technology has been proposed to ensure that the digital data obtained are not manipulated and to secure the data on their way to the relevant experts. In line with Industry 4.0's revolutionary vision, quality management, permissioned blockchain technology, intelligent sensor-based measurement systems, production processes, and information technologies are increasingly interconnected. Blockchain can help address several challenges in building smart manufacturing applications, including trust, security, traceability, reliability, and agreement automation within the manufacturing value chain [8].


In recent years, many studies on IoT, blockchain, and related topics have investigated various technical aspects [9], and several review articles with different scopes have been published in this research area. Comprehensive research has been presented on the integration of blockchain and IoT, categorized by application areas such as the smart city, smart production, and smart energy [10]. Ferrag et al. presented a comprehensive survey of existing blockchain protocols for IoT networks. They provided an overview of the application areas of blockchain technologies in IoT (e.g. the internet of cloud, the internet of energy, edge computing, the internet of vehicles) and classified the threat models handled by blockchain protocols in IoT networks [11]. Nguyen et al. state that a new paradigm of blockchain and cloud-of-things integration, called BCoT, has been adopted, as the cloud of things offers flexibility and scalability functions that increase the efficiency of blockchain operations [9]. Zhang et al. proposed a privacy-preserving and user-controlled data-sharing architecture with fine-grained access control, based on a blockchain model (Hyperledger Fabric) and an attribute-based cryptosystem; the consensus algorithm in this system is a Byzantine fault tolerance mechanism [12]. Stamatellis et al. offered a privacy-preserving Electronic Health Records (EHR) management solution called PREHEALTH, presenting a proof-of-concept implementation using Hyperledger Fabric and an Identity Mixer (Idemix) [13]. Liang et al. designed a mobile device and cloud system: a blockchain-based, user-controlled mobile system for personal health data sharing and collaboration. The mobile application collects health data from personal wearable and medical devices through manual input and shares these data with healthcare providers and health insurance companies. To maintain data integrity, a proof of integrity and validation is permanently retrieved from the cloud database and connected to the blockchain network with each record [14]. Nguyen et al. proposed a combination of blockchain and the decentralized interplanetary file system (IPFS) on a mobile cloud platform. An Ethereum blockchain was installed on the Amazon cloud, where organizations can interact with the EHR sharing system via an advanced mobile Android app; in addition, the peer-to-peer IPFS storage system is integrated with the blockchain to achieve decentralized data storage and data sharing [15]. Benhamouda et al. designed an architecture supporting private data on Hyperledger Fabric using secure on-chain MPC protocols and implemented a demo auction application with it: a smart contract runs the auction on confidential data using a simple secure MPC protocol built with the EMP toolkit library [16]. Pahontu et al. presented a distributed architecture, in the form of an integrated platform for real-time control and monitoring of residential networks, with secure data management based on blockchain technology; a blockchain network based on Hyperledger Fabric was developed to evaluate the possibilities of storing real-time data in a distributed ledger [17]. Chen et al. proposed a distributed ledger system called Flowchain for peer-to-peer networks and real-time data transactions. Flowchain is a dedicated blockchain system for the IoT that can process and record transactions in real time, using a new mechanism called Virtual Blocks [18]. Studies also continue on blockchain integration with IoT technology, one of the basic components of Industry 4.0.


Li et al. proposed a blockchain-based security architecture for distributed cloud storage. The proposed architecture was compared with two traditional architectures in terms of network performance and security, with acceptable network transmission delay. A genetic algorithm was also customized to solve the file-block replica placement problem between multiple users and multiple data centres in the distributed cloud storage environment, and simulations compared the proposed architecture with three different cloud storage architectures [19]. Lee et al. discussed the potential implications of blockchain technology for the development and implementation of real-world Cyber-Physical Production Systems (CPPS). A three-level blockchain architecture was proposed to adapt, develop, and incorporate blockchain technology with manufacturing advances for Industry 4.0 [20]. Mohammed et al. proposed a middleware approach for utilizing blockchain services and capabilities to enable more secure, autonomous, traceable, and reliable smart manufacturing applications. For this purpose, they proposed Man4Ware, a service-oriented middleware that delivers devices, components, and software functions as services to smart manufacturing applications; the blockchain services added to Man4Ware facilitate more traceable, secure, and immutable smart manufacturing applications [8]. Chen et al. proposed an improved P2P file system scheme based on IPFS and blockchain: IPFS is a peer-to-peer, version-controlled filesystem, and they added a blockchain to the original IPFS so that each node's information can be saved to the blockchain [21]. Rathee et al. proposed a secure wireless mechanism using blockchain technology that stores each record in a number of blocks to preserve transparency and secure every activity of smart sensors [22].

Ozyilmaz and Yurdakul described a standardized IoT infrastructure in which data are stored on a distributed storage service that is fault-tolerant and resistant to distributed denial-of-service (DDOS) attacks, and data access is managed by a decentralized, trustless blockchain. The illustrated system used LoRa as the emerging network technology, Swarm as the distributed data storage platform, and Ethereum as the blockchain platform. Using blockchain's decentralized, trustless nature in combination with DDOS-resistant, fault-tolerant data storage, a new type of IoT back end was created [23]. Ghaderi et al. describe Hyperledger Fabric's architecture and important aspects of its implementation. They present a secure remote monitoring platform based on Hyperledger Fabric technology and discuss critical implementation issues for this platform in terms of transaction and block size, latency, scale, number of nodes in the network, and throughput of transactions per second [24]. Like health data, smart manufacturing, logistics, and communication data need information confidentiality and protection from manipulation, and studies on blockchain infrastructure have gained momentum in these areas as well. In a centralized industry design, tasks are performed on a central server; in a decentralized design there is no single point of failure, because the information is recorded in at least two systems in parallel, so the network can avoid total system failure. Most sensor-based schemes today rely on a central client-server structure connected to cloud servers over the Internet, and such schemes are no longer used efficiently. There appears to be a need to propose new

Sensor Based Intelligent Measurement and Blockchain


frameworks such as decentralized designs for wireless sensors and peer-to-peer (P2P) networks. Blockchain technology, a new structure suited to this framework, has become a phenomenon. Every reading that a sensor detects is recorded in the blockchain network to ensure transparency and to let staff and managers at all levels monitor the current situation. No one can modify or manipulate information or IoT/sensor records after they are published on the chain [25]. Blockchain is an open, distributed ledger that can record transactions between different parties efficiently and in a permanent and verifiable way [26]. Blockchain stores transaction data in blocks which are linked together to form a chain; the transaction time and sequence are recorded in the blocks. Blocks are logged into the blockchain by a distributed network administered by rules agreed on by the participants of the network. Each block includes a hash value: it contains the hash of the previous block together with timestamped batches of recent valid transactions, and is thereby linked to the previous block. In this way, a block cannot be inserted between two existing blocks and no block can be altered. Each block thus validates the previous block, which means the entire chain is validated [27].
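The hash linking described above can be sketched in a few lines. This is a generic, minimal hash chain for illustration only, not Hyperledger Fabric's actual block format; the field names are assumptions made for the sketch.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a block whose hash covers its timestamp, payload, and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check that each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a three-block chain, then tamper with the middle block.
chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["tx1", "tx2"], prev_hash=chain[-1]["hash"]))
chain.append(make_block(["tx3"], prev_hash=chain[-1]["hash"]))
assert chain_is_valid(chain)

chain[1]["transactions"] = ["forged"]   # any alteration breaks validation
assert not chain_is_valid(chain)
```

Because each hash covers the previous block's hash, altering any block invalidates every block after it, which is the property the text relies on for tamper evidence.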

Fig. 1. Transaction of blockchain

Bitcoin, whose development began in 2008, was the first example of a blockchain application [27]. When it comes to blockchain, projects such as Bitcoin, Ethereum, and Ripple come to mind first, but the use of blockchains is not limited to cryptocurrencies. One project that has expanded the field is Hyperledger, an open-source blockchain project announced by the Linux Foundation in December 2015. Hyperledger is a global collaboration that includes leaders in the Internet of Things (IoT), manufacturing, supply chains, banking, finance, and technology. Hyperledger Fabric is a distributed operating system for permissioned blockchains with no built-in cryptocurrency. Fabric was the first blockchain platform to support smart contracts written in common programming languages such as Golang, Java, and Node.js. Unlike in a public permissionless blockchain, users on the Fabric platform know each other [24], which provides trust in the network. Fabric supports a diversity of consensus algorithms, an advantage over other blockchains that allows the network to be customized efficiently. Fabric does not need a cryptocurrency, and no mining is required


G. Şen et al.

to implement the consensus mechanism. This reduces both system risk and the cost of running the consensus protocol and minimizes latency problems; the cost of the mechanism is almost equal to that of other distributed mechanisms. For these reasons, Hyperledger Fabric is one of the most useful platforms with respect to latency and transaction processing. Instead of a permissionless system, Hyperledger Fabric offers a secure platform that supports confidential contracts and private transactions. Public permissionless networks serve anonymous and untrusted parties; the Fabric platform, by contrast, is a permissioned network in which the participants are known to each other. Hyperledger Fabric can be integrated into any industry: it is an extensible blockchain platform for running distributed applications, and since various consensus protocols are supported, it can be adapted to different usage situations and trust models. Hyperledger Fabric does not require a native cryptocurrency, with its costly mining, to leverage consensus protocols. Avoiding a cryptocurrency reduces risks, attacks, and costs, so Hyperledger Fabric can be configured to meet the solution requirements of various industry use cases. Many studies on Hyperledger Fabric exist in the literature, with a focus on integrating Hyperledger Fabric into businesses. We propose a secure quality management system that uses an intelligent measurement system to monitor the aging room's environmental conditions by means of edge computing and wireless sensor network (WSN) structures. This structure provides secure monitoring and control to keep the environmental conditions within international quality standards. The designed system can be secured with blockchain technology to keep the system within a quality confidence interval determined by international quality standards.
This quality confidence interval may prevent waste during the storage process. Both keeping the quality of production under control and monitoring the processes are important intersecting issues for manufacturers, and with the offered model we provide a solution for secure quality management, auditing, and monitoring.

2 Method

The integration of Industry 4.0 and IoT solutions into existing production systems contributes to the transformation of production facilities into smart factories. In addition, the digital transformation of infrastructures makes it possible to monitor and control production processes much more effectively. This digitalization provides an opportunity to collect up-to-date status data from production facilities and gives the experts responsible for the processes location-independent access to these data. This location independence not only increases the efficiency of data distribution, TQM, and monitoring processes, but also accelerates the transition from centralized systems to distributed systems (such as the cloud). While creating these distributed structures, it is of great importance that the data be transmitted securely. The data collected for analysis are confidential information of organizations; if this confidential information (product specifications or manufacturing conditions) falls into the hands of competitors or malicious attackers, it can create serious vulnerabilities. Security systems exist to deliver this critical data to the relevant stakeholders or systems (e.g. the cloud). In the model created in this study, we used blockchain technology for the secure distribution of data.



In our previous works, we used digitalization components to detect and predict environmental problems of actual confectionery factories at critical points on the production line and in the storage and packaging units, based on environmental factors. For this purpose, technologies such as IIoT, WSN, statistical failure prediction models, and a cloud-based simulation system were implemented in a real confectionery factory [28, 29]. Within the scope of the study, the focus is on the resting room, which is a very critical process for confectionery production and product quality. The resting room is used to keep the produced dragee sugar under ideal conditions: it is a specially sheltered room with dedicated air conditioning and lighting systems. To keep the sugar dragee untreated (bare), the air conditioning system supplies air at the appropriate temperature and humidity. This room creates the most suitable conditions for the unprocessed sugar dragee to reach the optimum hardness specified in the quality documents. Moisture is one of the most important factors in sugar production: an increase in the amount of water in the sugar dragee shortens the shelf life, while humidity below the predicted value degrades the quality of the dragee sugar. The structure designed to collect all the data used for analysis has been prepared in accordance with international standards. For example, the humidity sensor level has been determined by ISO 22000, an international standard, and all controls have been designed according to this standard. Some of the most prominent quality control systems used in the food industry are given below:

• Safe Quality Food (SQF) 2000 [30]
• Global Food Safety Initiative (GFSI) [31]
• British Retail Consortium (BRC) [32]
• International Food Standard (IFS) [33]
• Hazard Analysis and Critical Control Point (HACCP) [34]
• International Organization for Standardization [35]
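The standard-driven control logic can be sketched as a simple confidence-interval check. The limit values below are illustrative assumptions only; the actual thresholds come from the applicable standard and the factory's quality documents, not from this sketch.

```python
# Hypothetical aging-room limits (assumed for illustration; real values are
# taken from the applicable standard, e.g. ISO 22000-aligned specifications).
LIMITS = {
    "temperature_c": (18.0, 24.0),
    "humidity_pct": (35.0, 55.0),
}

def out_of_spec(reading):
    """Return the names of measurements that fall outside the quality confidence interval."""
    violations = []
    for name, value in reading.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            violations.append(name)
    return violations

print(out_of_spec({"temperature_c": 21.0, "humidity_pct": 60.0}))  # ['humidity_pct']
```

A reading that violates any limit would trigger the control actions (alerting staff, adjusting the air conditioning) described for the monitoring system.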

The environmental condition monitoring and quality management/audit in confectionery production are characterized by the international standards given above. In addition to these quality management systems, various other quality control systems are used in the food sector to achieve standardization. Our main goal in proposing such monitoring and quality control systems for the quality assessment of all areas of the food industry is to assure the protection of the collected data and to facilitate the TQM of food production. Furthermore, the adoption and implementation of secured, digitalized food control systems and TQM provide food production businesses with the trustworthiness and competitive edge they need to survive in the global market. The designed model (the central node where data is collected and transferred to the experts and quality auditors) is supported by blockchain (Hyperledger Fabric) technology and ensures that the data is securely transmitted to the relevant persons or systems (organizational experts, quality auditors, the cloud, etc.). The model also maintains the transparency of the collected data and records (in a distributed structure) all data entries from the sensors. The proposed model is given in Fig. 2. In this model, a sensor-based measurement structure (node) is implemented in the aging room. It can also communicate (sending data to experts and TQM auditors)



Fig. 2. The blockchain-based secure total quality management and control model

with blockchain technology. The measurement node can not only collect the data but also distribute it through the Hyperledger Fabric blockchain structure; this ability is what makes it intelligent. The designed structure provides the scalability and functionality needed to keep product quality within the determined threshold values and quality standards. The designed structure can be reviewed in the following steps:

• Measurement (by sensors)
• Secure communication between experts and quality auditors
• Transparency of the data
• TQM and monitoring of the system

The model was established using Hyperledger Fabric for data storage and sharing in the designed system. In this model, two different blockchain structures were created. The first is designed to add all sensor data generated every 15 min to the chain and to create a block at the end of the day. This period was chosen because the air conditioner used in the system can change the room conditions within 15 min. It was also observed that the data obtained from 5 sensors at 15-min intervals amount to 2 MB, which is the most suitable size for creating a block [24]. The block structure is given in Fig. 3. Whether the room meets the appropriate conditions for the aging process is checked through the blocks shared with experts. By transmitting the data securely to the experts, both the control mechanism and the confidential information of the factory are protected. The second block structure, given in Fig. 4, is formed by taking the daily average of the environmental condition data of the aging chamber for each sensor. The averages are stored as a 2 MB data payload in one block. These blocks are shared with the quality auditor, allowing the system to be audited remotely within the framework of international standards. The block is produced by generating



Fig. 3. Designed blockchain structure

Fig. 4. Designed blockchain structure

average data from the raw data received from the sensors. Because the system is automated, the data are transparent, and no outside interference is possible, the data used in the quality inspection are guaranteed to be accurate and free of manipulation. With such a model, auditors can carry out their audits remotely and securely without visiting the institutions, and the system can be audited daily. Thus the system is automated, and a transparent quality management model, free from manipulation, is created.
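The daily aggregation step (96 readings per sensor per day, collapsed into per-sensor averages for the auditor-facing blocks) can be sketched as follows. Sensor names and readings are synthetic placeholders; only the 15-min cadence and five-sensor setup come from the text.

```python
from statistics import mean

READINGS_PER_DAY = 96  # one reading per sensor every 15 minutes

def daily_average_payload(day, readings_by_sensor):
    """Collapse one day of raw 15-min readings into the per-sensor daily
    averages that form the auditor-facing block payload."""
    averages = {}
    for sensor_id, readings in readings_by_sensor.items():
        if len(readings) != READINGS_PER_DAY:
            raise ValueError(f"{sensor_id}: expected {READINGS_PER_DAY} readings")
        averages[sensor_id] = mean(readings)
    return {"day": day, "averages": averages}

# Five sensors with synthetic humidity readings (names and values illustrative).
raw = {f"sensor_{i}": [45.0 + 0.01 * t for t in range(READINGS_PER_DAY)]
       for i in range(1, 6)}
payload = daily_average_payload("2021-10-07", raw)
print(payload["averages"]["sensor_1"])
```

The resulting payload is what the second blockchain would commit each day, while the first blockchain keeps the full raw readings for the factory's own experts.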

3 Conclusions

Within the scope of this study, a structure providing continuous data flow on a real production line with the Industry 4.0 vision and components was modeled, together with the process of safely transmitting these data to stakeholders and storing them with the help of blockchain technology. The model is based on controlling the environmental conditions of the waiting environment in the process called the "aging room", where the products obtained from production gain consistency. With this model, data collection and data distribution have been automated, and a safe, transparent process has been created that does not allow external intervention or manipulation. Reliable data ensure the continuity of both the manufacturer's system control and the independent quality auditor's audits. The proposed model combines Industry 4.0 components and blockchain (Hyperledger Fabric) technology, offering the advantages of a remote, continuous control and quality audit mechanism: security and transparency, increased efficiency, reduced costs, and real-time information sharing. With this model, small and medium-sized enterprises (SMEs) can produce cost-effectively at international standards and increase their competitiveness.



Acknowledgment. We thank TÜBİTAK (The Scientific and Technological Research Council of Turkey) and Durukan Şekerleme San. Tic. A.Ş. for encouraging us in our study.

References

1. Şen, K.Ö., Durakbasa, M., Baysal, M., Şen, G., Baş, G.: Smart factories: a review of situation, and recommendations to accelerate the evolution process. In: Durakbasa, N.M., Gencyilmaz, M.G. (eds.) ISPR 2018, pp. 464–479. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-92267-6_40
2. Zhou, K., Liu, T., Zhou, L., Liu, T.: Industry 4.0: towards future industrial opportunities and challenges. In: 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2015), pp. 2147–2152 (2016)
3. Chen, F., Gao, B., Selvaggio, M., et al.: A framework of teleoperated and stereo vision guided mobile manipulation for industrial automation, pp. 1641–1648 (2016)
4. Wang, S., Zhang, C., Wan, J.: A smart factory solution to hybrid production of multi-type products with reduced intelligence. In: 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference, pp. 848–853. IEEE (2016)
5. Balogun, O.O., Popplewell, K.: Towards the integration of flexible manufacturing system scheduling. Int. J. Prod. Res. 37(15), 3399–3428 (1999)
6. Priore, P., de la Fuente, D., Puente, J., Parreño, J.: A comparison of machine-learning algorithms for dynamic scheduling of flexible manufacturing systems. Eng. Appl. Artif. Intell. 19(3), 247–255 (2006)
7. Liu, Q., Wan, J., Zhou, K.: Cloud manufacturing service system for industrial-cluster-oriented application. J. Internet Technol. 15(3), 373–380 (2014)
8. Mohamed, N., Al-Jaroodi, J.: Applying blockchain in industry 4.0 applications. In: 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0852–0858. IEEE, January 2019
9. Nguyen, D.C., Pathirana, P.N., Ding, M., Seneviratne, A.: Integration of blockchain and cloud of things: architecture, applications and challenges. IEEE Commun. Surv. Tutor. 22(4), 2521–2549 (2020)
10. Panarello, A., Tapas, N., Merlino, G., Longo, F., Puliafito, A.: Blockchain and IoT integration: a systematic survey. Sensors 18(8), 2575 (2018)
11. Ferrag, M.A., Derdour, M., Mukherjee, M., Derhab, A., Maglaras, L., Janicke, H.: Blockchain technologies for the internet of things: research issues and challenges. IEEE Internet Things J. 6(2), 2188–2204 (2018)
12. Zhang, Y., He, D., Choo, K.K.R.: BaDS: blockchain-based architecture for data sharing with ABS and CP-ABE in IoT. Wireless Commun. Mob. Comput. 2018 (2018)
13. Stamatellis, C., Papadopoulos, P., Pitropakis, N., Katsikas, S., Buchanan, W.J.: A privacy-preserving healthcare framework using hyperledger fabric. Sensors 20(22), 6587 (2020)
14. Liang, X., Zhao, J., Shetty, S., Liu, J., Li, D.: Integrating blockchain for data sharing and collaboration in mobile healthcare applications. In: 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), pp. 1–5. IEEE, October 2017
15. Nguyen, D.C., Pathirana, P.N., Ding, M., Seneviratne, A.: Blockchain for secure EHRs sharing of mobile cloud based e-health systems. IEEE Access 7, 66792–66806 (2019)
16. Benhamouda, F., Halevi, S., Halevi, T.: Supporting private data on hyperledger fabric with secure multiparty computation. IBM J. Res. Dev. 63(2/3), 1–3 (2019)



17. Pahontu, B., Arsene, D., Predescu, A., Mocanu, M.: Application and challenges of blockchain technology for real-time operation in a water distribution system. In: 2020 24th International Conference on System Theory, Control and Computing (ICSTCC), pp. 739–744. IEEE, October 2020
18. Chen, J.: Flowchain: a distributed ledger designed for peer-to-peer IoT networks and real-time data transactions. In: Proceedings of the 2nd International Workshop on Linked Data and Distributed Ledgers (LDDL2), January 2017
19. Li, J., Liu, Z., Chen, L., Chen, P., Wu, J.: Blockchain-based security architecture for distributed cloud storage. In: 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC), pp. 408–411. IEEE, December 2017
20. Lee, J., Azamfar, M., Singh, J.: A blockchain enabled cyber-physical system architecture for industry 4.0 manufacturing systems. Manuf. Lett. 20, 34–39 (2019)
21. Chen, Y., Li, H., Li, K., Zhang, J.: An improved P2P file system scheme based on IPFS and Blockchain. In: 2017 IEEE International Conference on Big Data (Big Data), pp. 2652–2657. IEEE, December 2017
22. Rathee, G., Balasaraswathi, M., Chandran, K.P., Gupta, S.D., Boopathi, C.S.: A secure IoT sensors communication in industry 4.0 using blockchain technology. J. Ambient Intell. Humaniz. Comput. 12(1), 533–545 (2021). https://doi.org/10.1007/s12652-020-02017-8
23. Ozyilmaz, K.R., Yurdakul, A.: Designing a blockchain-based IoT with Ethereum, Swarm, and LoRa: the software solution to create high availability with minimal security risks. IEEE Consum. Electron. Mag. 8(2), 28–34 (2019)
24. Ghaderi, M.R., Asgari, S., Ghahyazi, A.E.: How can hyperledger fabric blockchain platform secure power plants remote monitoring. In: 2020 28th Iranian Conference on Electrical Engineering (ICEE), pp. 1–7. IEEE, August 2020
25. Iansiti, M., Lakhani, K.: The truth about blockchain. Harv. Bus. Rev. 95, 118–127 (2017)
26. Gupta, M.: Blockchain for Dummies, 3rd IBM Limited Edition. IBM (2020). https://www.ibm.com/tr-tr/blockchain/what-is-blockchain
27. Subic, A., Xiang, Y., Pai, S., de La Serve, E.B.: Industry 4.0: Why Blockchain is at the Heart of the Fourth Industrial Revolution and Digital Economy. Capgemini, Paris, France (2017)
28. Şen, K., Durakbasa, M., Baş, G., Şen, G., Akçatepe, O.: An implementation of cloud based simulation in production. In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) ISPR 2019. LNME, pp. 519–524. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31343-2_45
29. Şen, K.Ö., Durakbaşa, M.N., Erdöl, H., Berber, T., Bas, G., Sevik, U.: Implementation of digitalization in food industry. In: DAAAM International Scientific Book 2017, pp. 091–104 (2017). Chapter 08
30. Safe Quality Food: SQF Quality Code, Edition 8 (SQF Standard No. 200) (2017). https://www.sqfi.com/wp-content/uploads/2018/08/SQF-Code-Edition-8-Quality-Guidance-FINAL.pdf
31. Global Food Safety Initiative: Governance Model and Rules of Procedure (2018). https://www.mygfsi.com/images/GFSI_Governance_Model_And_Rules_Of_Procedure/GFSI_Governance_Model_June2018_.pdf
32. British Retail Consortium (BRC): BRC Global Standard for Food Safety, Issue 8 (2018). https://www.brcbookshop.com/bookshop/brc-global-standard-food-safety-issue-8/c24/p-414153
33. International Featured Standards (IFS): IFS Food 6.1 (2018). https://www.ifs-certification.com/index.php/en/download-standards?item=251



34. United States Food and Drug Administration (FDA): Hazard Analysis Critical Control Point (HACCP), Food HACCP and the FDA Food Safety Modernization Act: Guidance for Industry (2017). https://www.fda.gov/downloads/Food/GuidanceRegulation/GuidanceDocumentsRegulatoryInformation/UCM569798.pdf
35. International Organization for Standardization: Occupational health and safety management systems – Requirements with guidance for use (ISO/DIS Standard No. 45001) (2018). http://www.iso.org/iso/catalogue_detail?csnumber=63787

Study of the Formation of Zinc Oxide Nanowires on Brass Surface After Pulse-Periodic Laser Treatment

Serguei P. Murzin1,2(B) and Nikolay L. Kazanskiy1,3

1 Samara National Research University, Samara, Russia

[email protected]

2 TU Wien, Vienna, Austria 3 IPSI RAS - Branch of the FSRC «Crystallography and Photonics» RAS, Samara, Russia

[email protected]

Abstract. The surface morphology of a selectively oxidized copper-zinc metallic material generated by laser treatment was studied using a scanning electron microscope. It was determined that the formation of arrays of zinc oxide nanoobjects is much more intense in the center than at the periphery, since the highest temperature during heating was recorded in the central zone. Typical ZnO nanowires, some of which reached lengths exceeding 3 µm, were formed by laser pulsed-periodic irradiation. The number density of nanowires amounted to ~10⁷ mm⁻². In addition to nanowires, in the region of higher temperatures so-called ZnO nanosheets were present on the surface of the samples. Experimental design techniques were developed. It is planned to use microscopy techniques that provide images of the sample surfaces with high spatial resolution, as well as information on the composition, structure, and some other properties of the surface and subsurface layers. For the analysis of individual nanowires, atomic force microscopy is better suited, since it determines the surface relief with a resolution from tens of angstroms down to the atomic scale. This method ensures direct and detailed analysis of the structure of the subsurface layer, as well as topographic analysis of the sample surface.

Keywords: Formation · Zinc oxide nanowires · Brass surface · Pulse-periodic laser treatment

1 Introduction

The development of opto- and microelectronics, microsystem and sensor equipment leads to increased requirements to boost efficiency and reduce energy consumption, while at the same time decreasing the mass and size of components. Interest in both fundamental research and a wide range of practical applications focuses on nanostructured morphologies of metal oxides (for example, nanowires, nanorods, nanotubes, etc.), which demonstrate unique properties that significantly exceed those of their macroscale counterparts [1]. Due to the growing demand in the areas of sensorics, catalysis, photonics,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 335–343, 2022. https://doi.org/10.1007/978-3-030-90421-0_28


S. P. Murzin and N. L. Kazanskiy

electronics and other high-tech areas [2, 3], the importance of further studies increases significantly. Zinc oxide is one of the semiconductor compounds with practical and promising potential applications [4, 5] thanks to its piezo- and ferroelectric properties. Particular attention is paid to the production of structures based on nanoelements [6, 7], which can be used to create sensor devices significantly superior to currently commercially available sensors [8, 9]. Among possible technical implementations of these structures are metal oxide layered materials, which are of interest as functional electro-contact materials. The application of nanostructured ZnO arrays whose morphology varies smoothly with the coordinates on the plane is promising. In Ref. [10], the possibilities of generating nanomaterials were evaluated and the synthesis of nanoporous and composite nanomaterials based on ZnO was carried out using pulsed-periodic laser irradiation. In Ref. [11], the possibility of forming structures of layered nanomaterials based on zinc oxide by laser pulse-periodic irradiation was presented. Nanostructured arrays with a morphology of ZnO nanowires varying from the center to the periphery were obtained on a conductive base made of a copper-zinc alloy [12]. In Ref. [13], the regularities and features of the formation of arrays of zinc oxide nanoobjects with varying morphology were determined; this became possible through laser treatment of a selectively oxidized copper-zinc alloy. Among the most important characteristics of the obtained nanostructures, which largely determine their properties, are their size and two-dimensional spatial number density, i.e. the number of nanoobjects per unit surface area [7, 14]. Therefore, accurate determination of the size of nanoobjects is of great importance.
Scanning electron microscopy (SEM) and atomic force microscopy (AFM), along with other surface research methods, are widely used to study nanostructures. Important advantages such as three-dimensional surface imaging and in-situ measurements make atomic force microscopy one of the most widely used methods in the study of nanoobjects. In this case, the purpose of processing SEM and AFM images is to obtain reliable information on the size distribution of nanoobjects located on the surface under study. Each company that manufactures scanning electron microscopes or scanning probe microscopes (atomic force microscopes, scanning tunneling microscopes, near-field optical microscopes, etc.) [15–17] supplies them with its own software. Despite their differences, all these programs have common properties. First, their main function is to provide raster scanning, i.e. movement of the probe over the surface of the sample, by monitoring the states of the digital-to-analog and analog-to-digital converters of the digital electronics unit. Second, they must provide recording and real-time display of probe movement information (topography) and additional parameters measured by the probe (such as probe current). With all these programs, it is possible to build maps of these parameters over the scanned field and thereby obtain the required images. All manufacturing companies also provide programs and methods for processing raster images, but not all of the proposed image processing tools are convenient and not all programs contain sufficient sets of tools [18, 19]. Various



processing algorithms are used, the need for which depends on the purpose of the experiment and on the specific situation. The purpose of these investigations was to study a selectively oxidized copper-zinc alloy subjected to pulse-periodic laser treatment in order to form zinc oxide nanowires on the sample surface, as well as to identify further research steps.

2 Results and Discussion

Samples of Cu-Zn brass alloy L62 with dimensions 30 × 20 × 0.05 mm were processed. A CO2 laser with a pulse frequency ranging from 2 Hz to 5 kHz was used for the investigations. The chosen frequency for the pulse-periodic laser irradiation was 100 Hz, while the average laser beam power was 330 W. In this case, a non-stationary stress-strain state was formed in the samples by the sound waves induced by the laser irradiation [12, 13]. The laser pulses generated sound waves that caused non-stationary local deformation, which is considered one of the key conditions enabling intensified solid-phase mass transfer in metallic materials [20]. The structure of the material surface was investigated after pulse-periodic laser treatment carried out in air. During this treatment an oxide coating was formed on the surface of the brass, consisting of elongated needle-shaped crystals with a lemon-yellow color. Further investigations with modified parameters, such as increased treatment time, revealed that the color turned whitish-gray, which is characteristic of zinc oxide. The elemental composition of the whitish-gray film formed on the material surface during laser irradiation was analysed using an electron probe energy-dispersive microanalysis system. It was found that the zinc proportion among the metals amounted to around 98%, indicating that the main component on the surface of the Cu-Zn alloy was zinc oxide. The oxidation of the material surface was intensified by heating the brass foil in air. The higher oxidation rate and higher diffusion of Zn compared with Cu favored the preferential formation of ZnO. A Quanta scanning electron microscope was used to study the surface morphology of the selectively oxidized copper-zinc metallic material generated by laser treatment.
It was found that both the increase in temperature and the uneven distribution of zinc in the alloy, which causes local concentrations, majorly influence the morphology of the ZnO nanostructures. The formation of arrays of zinc oxide nanoobjects was intensified in the centre, since during heating the temperature reaches its highest values in the central zone compared with the periphery. The SEM image of the surface at the periphery of the heat-affected zone is shown in Fig. 1. Typical ZnO nanowires formed during laser pulsed-periodic treatment are presented in Fig. 2. These structures were formed in the central region, where the maximum temperature reached 650 °C. The number density of nanowires amounted to ~10⁷ mm⁻². Nanowires reached lengths exceeding 3 µm (Fig. 3).
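As a worked example of how a number density of this order arises from a micrograph, the arithmetic can be sketched as follows. The count and field-of-view size below are hypothetical values chosen only to illustrate the unit conversion; the real counts come from the SEM images.

```python
# Hypothetical SEM field of view (assumed values, for illustration only).
count = 1000            # nanowires counted in one square field (assumed)
field_um = 10.0         # edge length of the square field, in micrometres (assumed)

area_mm2 = (field_um * 1e-3) ** 2          # 10 um = 0.01 mm, so the area is 1e-4 mm^2
density = count / area_mm2                 # nanoobjects per unit surface area
print(f"{density:.1e} nanowires / mm^2")   # 1.0e+07
```

With these assumed numbers the density comes out at 10⁷ mm⁻², the order of magnitude reported for the central zone.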



Fig. 1. SEM image of the surface at the periphery of the heat-affected zone.

Fig. 2. Nanowires on the surface of the copper-zinc alloy after laser treatment.



Fig. 3. Nanowire exceeding the length of 3 µm.

In the region of higher temperatures, in addition to the described nanostructures, so-called ZnO nanosheets were present on the surface of the samples. The surface structure shown in Fig. 4, consisting of nanowires and nanosheets, was formed by laser irradiation in a region with a maximum temperature slightly above 650 °C. The magnification limit and the resolution achieved in a scanning electron microscope directly depend on the minimum achievable probe size, characterized by the diameter of the electron beam finely focused on the sample surface. Using mathematical methods of image processing, it is possible to improve the quality of the resulting images; however, at certain magnifications the image becomes unclear, as illustrated in Fig. 5. For a more detailed study of the created nanostructures, as well as for the analysis of individual nanowires, the AFM method is better suited. The major advantage of AFM over competing technologies such as optical and electron microscopy is that AFM uses neither lenses nor beam irradiation, so its spatial resolution is not compromised by diffraction and aberration effects. The resolution of AFM is on the order of fractions of a nanometer, three orders of magnitude better than the optical diffraction limit. The information is gathered by "touching" the surface with a probe, and piezoelectric elements enable precise scanning. When using the AFM method, the application of tapping mode, as a rule, increases the resolution of the microscope when investigating objects with reduced mechanical rigidity, and also excludes the lateral and frictional forces that could displace structures on the sample plane.


S. P. Murzin and N. L. Kazanskiy

Fig. 4. Image of ZnO nanowires and nanosheets formed on the surface of a copper-zinc alloy after laser treatment.

Fig. 5. SEM image of ZnO nanowires.


The existing algorithms for constructing size distributions of nanoobjects are based on the so-called “threshold” method [21]. In this method, a threshold value on the z-axis is selected for the 3D image under analysis. The third dimension is the topographic height if the image is obtained with an atomic force microscope, or the intensity of reflected or transmitted light if the image is obtained with an optical or scanning microscope. A visual three-dimensional image of a surface is obtained only after appropriate mathematical processing of the digital information, which takes the form of two-dimensional arrays of integers. Any object whose height exceeds the threshold value is identified as an object under investigation. It should be noted that this method can be used only when the typical size of the nanoobjects is much smaller than the characteristic surface roughness. In this process the boundaries between a nanoobject, or a group of nanoobjects, and the surface on which they are located are distinguished, i.e. segmentation is carried out. Various fluctuation surges appear during scanning and must be smoothed or filtered out. The design of experiments for the laser treatment of the samples will be developed as follows. The initial phase consists of tests employing a simply configured laser-spot shape; the results of the subsequent evaluations will then be used to design experiments with increasingly complex laser-spot geometries and processing conditions. It is planned to use microscopy techniques that provide images of the sample surfaces with high spatial resolution, as well as information on the composition, structure and other properties of the surface and subsurface layers. The resolution of SEM, as well as of scanning probe and X-ray microscopes, is greater than that of an optical microscope and allows the elements of the substructure to be studied in detail.
In the SEM mode of detection of secondary electrons, images displaying the topography of the surfaces will be generated. Back-scattered electron images will provide information about the distribution of different elements in the samples. Further, analysis of X-ray signals will be used to map the distribution of elements in the samples and to estimate their quantities. For the analysis of individual nanowires, the AFM method will be applied.
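As an illustration of the threshold method described above, the following sketch segments a height map and counts connected nanoobjects. The array values, the threshold, and the helper name are illustrative; this is not code from the paper.

```python
import numpy as np
from collections import deque

def count_nanoobjects(height_map, threshold):
    """Threshold segmentation of an AFM/SEM height (or intensity) map.

    Pixels above `threshold` are taken to belong to nanoobjects;
    4-connected pixels are grouped into individual objects by flood fill.
    Returns the object count and the pixel area of each object.
    """
    mask = height_map > threshold
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # breadth-first flood fill of one connected component
                size, queue = 0, deque([(r, c)])
                visited[r, c] = True
                while queue:
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni, nj] and not visited[ni, nj]):
                            visited[ni, nj] = True
                            queue.append((ni, nj))
                sizes.append(size)
    return len(sizes), sizes

# toy "height map": two raised objects on a flat background
z = np.zeros((8, 8))
z[1:3, 1:3] = 5.0   # object of 4 pixels
z[5:7, 4:7] = 7.0   # object of 6 pixels
n, sizes = count_nanoobjects(z, threshold=1.0)
print(n, sorted(sizes))  # → 2 [4, 6]
```

The per-object pixel areas returned here are the raw input for the size distributions mentioned in the text; in practice the map would first be smoothed to suppress the fluctuation surges noted above.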

3 Conclusions

The selectively oxidized copper-zinc metallic material generated by laser treatment was analysed using a Quanta scanning electron microscope. This study allowed us to investigate the surface morphology of the sample. It was determined that two factors mainly influenced the morphology of the ZnO nanostructures, namely the increase in temperature and the local concentration of zinc due to its unequal distribution in the alloy. The intensity of formation of zinc oxide nanoobject arrays at the periphery of the heat-affected zone was much lower than in the centre, since the highest temperature during heating occurred in the central zone. Laser pulsed-periodic irradiation ensured the formation of ZnO nanowires in the region with a maximum temperature of up to 650 °C. The number density of nanowires amounted to ~10⁷ mm⁻². Some of the ZnO nanowires reached lengths exceeding 3 µm. In the region of higher temperatures, not only nanowires but also ZnO nanosheets were formed on the surface of the samples. Techniques for the design of experiments were developed. Microscopy techniques designed to provide images with high spatial resolution and to display information about the structure, composition and other parameters of the surface and subsurface layers of the sample will be used. The resolution of scanning probe and X-ray microscopes is greater than that of optical microscopes and allows a detailed investigation of the elements of the substructure. For the analysis of individual nanowires, the AFM method will be applied. The major advantage of AFM in comparison to technologies such as electron and optical microscopy is the absence of lenses and beam irradiation; therefore, the spatial resolution of the results is not affected by diffraction and aberration effects. This method ensures direct and detailed analysis of the structure of the subsurface layer and of the topography of the sample surface.

Acknowledgment. We would like to express our great appreciation to Professor Numan M. Durakbasa for the provided images of nanoobjects and his useful and constructive recommendations on this article. We further acknowledge the University Service Centre for Transmission Electron Microscopy (USTEM) at TU Wien for their support on scanning electron microscopy.

References

1. Malik, R., Tomer, V.K., Mishra, Y.K., Lin, L.: Functional gas sensing nanomaterials: a panoramic view. Appl. Phys. Rev. 7(2), 1–99 (2020). Article no. 021301
2. Kolahalam, L.A., Kasi Viswanath, I.V., Diwakar, B.S., Govindh, B., Reddy, V., Murthy, Y.L.N.: Review on nanomaterials: synthesis and applications. Mater. Today Proc. 18, 2182–2190 (2019)
3. Diao, F., Wang, Y.: Transition metal oxide nanostructures: premeditated fabrication and applications in electronic and photonic devices. J. Mater. Sci. 53(6), 4334–4359 (2018)
4. Borysiewicz, M.A.: ZnO as a functional material, a review. Crystals 9(10), 1–29 (2019). Article no. 505
5. Theerthagiri, J., et al.: A review on ZnO nanostructured materials: energy, environmental and biological applications. Nanotechnology 30(39), 1–27 (2019). Article no. 392001
6. Aisida, S.O., et al.: Irradiation-induced structural changes in ZnO nanowires. Nucl. Instrum. Methods Phys. Res. Sect. B Beam Interact. Mater. Atoms 458, 61–71 (2019)
7. Campos, A.C., et al.: Growth of long ZnO nanowires with high density on the ZnO surface for gas sensors. ACS Appl. Nano Mater. 3(1), 175–185 (2020)
8. Bhati, V.S., Hojamberdiev, M., Kumar, M.: Enhanced sensing performance of ZnO nanostructures-based gas sensors: a review. Energy Rep. 6, 46–62 (2020)
9. Kang, Y., Yu, F., Zhang, L., Wang, W., Chen, L., Li, Y.: Review of ZnO-based nanomaterials in gas sensors. Solid State Ionics 360, 1–22 (2021). Article no. 115544
10. Murzin, S.P.: Determination of conditions for the laser-induced intensification of mass transfer processes in the solid phase of metallic materials. Comput. Opt. 38(3), 392–396 (2015)
11. Murzin, S.P., Kryuchkov, A.N.: Formation of ZnO/CuO heterostructure caused by laser-induced vibration action. Procedia Eng. 176, 546–551 (2017)
12. Murzin, S.P., Safin, A.I., Blokhin, M.V.: Creation of zinc oxide based nanomaterials by repetitively pulsed laser treatment. J. Phys. Conf. Ser. 1368(2), 1–6 (2019). Article no. 022004
13. Murzin, S.P., Kazanskiy, N.L.: Arrays formation of zinc oxide nano-objects with varying morphology for sensor applications. Sensors 20(19), 1–19 (2020). Article no. 5575


14. Arbiol, J., Xiong, Q.: Semiconductor Nanowires: Materials, Synthesis, Characterization and Applications. Woodhead Publishing Series in Electronic and Optical Materials. Woodhead Publishing, Cambridge (2015)
15. Niessen, F.: CrystalAligner: a computer program to align crystal directions in a scanning electron microscope by global optimization. J. Appl. Crystallogr. 53, 282–293 (2020)
16. Borrajo-Pelaez, R., Hedstrom, P.: Recent developments of crystallographic analysis methods in the scanning electron microscope for applications in metallurgy. Crit. Rev. Solid State Mater. Sci. 43(6), 455–474 (2018)
17. Tian, W., Yang, L.: Principle, characteristic and application of scanning probe microscope series. Recent Patents Mech. Eng. 6(1), 48–57 (2013)
18. Zhao, Z.-Q., Zheng, P., Xu, S.-T., Wu, X.: Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30(11), 3212–3232 (2019). Article no. 8627998
19. Bai, H., Wu, S.: Nanowire detection in AFM images using deep learning. Microsc. Microanal. 27(1), 54–64 (2021)
20. Murzin, S.P.: Laser irradiation for enhancing mass transfer in the solid phase of metallic materials. Metals 11(13), 1–26 (2021). Article no. 1359
21. Kim, N.H., Lee, S.-Y.: Vision-based approach to automated analysis of structure boundaries in scanning electron microscope images. J. Vacuum Sci. Technol. B Nanotechnol. Microelectron. 29(1), 0110331–0110336 (2011)

Lean Production

A New Model Proposal for Occupational Health and Safety

Mesut Ulu¹ and Semra Birgün²

¹ Occupational Health and Safety Department, Bandırma Onyedi Eylül University, Balıkesir, Turkey
[email protected]
² Industrial Engineering Department, Doğuş University, Istanbul, Turkey
[email protected]

Abstract. Occupational health and safety (OHS) aims to protect the life, health and safety of employees by taking measures against occupational accidents, health problems, occupational diseases and the various risks that may arise in the working environment. Even when assurance systems are fully implemented, the success rate is adversely affected wherever the human factor is involved. Although OHS is subject to various laws and procedures, its processes need to be improved and made lean in order to be implemented successfully. To be safe, the workplace environment needs to be orderly, visible and quickly accessible, and applying the lean philosophy is highly effective in creating such an environment. This study presents a “Lean Occupational Health and Safety Model”, which acts as a guide for meeting the occupational health and safety requirements of the environment. The aim of the model is to increase the success of OHS practices by making work environments lean and visual. Successful results were obtained by applying the model step by step in a laboratory: the risks identified were corrected using lean techniques.

Keywords: 5S · Kaizen · Lean · Occupational health and safety · Risk management · Visual management

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 347–356, 2022. https://doi.org/10.1007/978-3-030-90421-0_29

1 Introduction

“Occupational health and safety” is the systematic practice of identifying, in advance, the adverse conditions that may be experienced in the workplace and eliminating them. Occupational health and safety (OHS) significantly affects the employer, the worker and the state. Related laws, regulations and professional organizations require various arrangements to prevent accidents and occupational diseases and to establish healthy and reliable work environments. Such arrangements ensure the protection of workers’ health and, at the same time, enhance the reputation of the enterprise. The state, in turn, saves on expenditures arising from occupational accidents and occupational diseases when favorable conditions are met. Today, work is being carried out by the state to make OHS practice compulsory in the public and private sectors.

The principles of OHS are fulfilled by occupational safety professionals through practices and audits. Although there are efforts to establish and protect OHS through laws and regulations, it is a continuous development process. An OHS culture must be created and sustained in the enterprise. For this purpose, all employees should be given training to create awareness of OHS, and its importance should be understood. The enterprise should be visualized and kept free of unnecessary items. Employees should be able to do their work more easily and without strain, distances should be reduced, and processes and methods should be refined through continuous improvement. The cleaner, more orderly and more ergonomic the workplace layout, the healthier and safer the environment will be. The lean philosophy is of great importance in providing this environment, and it is clear that the desired workplace conditions can be achieved with lean applications. This study presents a lean occupational safety model created with the aim of improving OHS processes. In the following sections, the interaction of OHS and the lean philosophy is briefly explained, the Lean Occupational Health and Safety Model is introduced, and the results of an application of the model are summarized.

2 Lean Philosophy and Occupational Health and Safety

With the Industrial Revolution, fundamental changes in the quantity and form of production led to the emergence of serious problems. Deaths at a young age and crippling injuries increased because of unhealthy and unsafe working conditions: hardships in the working environment, excessively long working hours (up to 18 h) and poor conditions for women and child workers. This created great unrest in society; riots started in many countries and occupational health and safety became a social issue. OHS, which incorporates many branches of science such as medicine, the social sciences, psychology and engineering, encompasses the methodical work intended to protect employees from conditions that could harm their health due to hazards connected with the execution of work, to protect the workplace environment, the enterprise and its affiliates, and thereby to create a more secure business environment. Occupational health refers to a (physically and mentally) healthy workplace in which the hazards that may arise from the materials and equipment used, depending on the environmental conditions of the workplace, are eliminated or their effects minimized [1, 2]. Occupational safety refers to the protection of the employee against technical risks in the working environment; in another formulation, it is the systematic work of ensuring the legal, technical and medical measures required so that the employee is not harmed bodily or mentally by the hazards arising from the work being done [3]. The lean philosophy, on the other hand, which originated at Toyota [4], aims to improve performance and therefore increase the effectiveness of an environment or system by detecting and eliminating or minimizing waste in the system [5].
According to the lean philosophy, waste is any activity, method, workflow or movement that adds no value to the product or service. According to [6], lean production is defined as “the pursuit of concurrent improvement in all measures of efficiency such as manufacturing quality, cost, delivery, safety and morale by the elimination of waste through projects that change the physical organization of work on the shop floor, logistics and production control throughout the supply chain, and the way of human effort in business, production and support tasks” [7]. Lean promotes a close look at an organization’s value streams, removing all non-value-added activities and continually aligning all required activities with external and internal customers [8]. Ohno [9] grouped wastes into seven classes: “unnecessary motion, waiting time, overproduction, processing time, defect production, unnecessary inventory, and transport”. Building on Ohno’s seven wastes, [10] considered “support levers (managerial actions), waste elimination techniques and performance measures” in a three-level pyramid model for eliminating waste. The “support levers” form the lowest level of the pyramid: employment security, reward systems, team organization, training, problem-solving techniques, time dedicated to improvements, education, supply chain involvement, tiered suppliers, communication, budgets and general working conditions. In the middle of the pyramid, the “waste elimination techniques” given are a cross-trained workforce, inventory reduction, JIT purchasing, preventive maintenance, set-up reduction, product simplification, quality at source, leveled and mixed production, layout improvement, housekeeping, pull control, automation and visual control. At the top of the pyramid are the “performance measures”.
“Number of skills, inventory, production batch size, purchase batch size/lead time, proportion of JIT deliveries, downtime, setup time, part count per product, first-time-right proportion, production flow length, visual control audit score, lead time to introduce new products, delivery reliability, labor productivity, product cost, and number of improvements” are given as measures of performance. Thus, Hallihan et al. showed which techniques can be used to achieve leanness by eliminating waste, which support levers underpin them, and how the results can be measured. The lean approach means avoiding waste by using lean techniques and solutions in all types of value production. These techniques and solutions also play an important role in maintaining order, cleanliness, visibility, and the safety and health of employees. For example, inventory reduction achieved by reducing waste, and a completely clean, tidy and visible environment achieved with TPM, 5S and the visual tools, signs and Poka-yoke applications used in visual management, are important for the safety of both processes and employees. Being lean therefore intersects with OHS at the same point: in an environment that is orderly, organized, visible and free from unnecessary items, the dangers and risks that threaten human health and safety can be expected to be minimized. For example, [11] focuses on strategies to deal with variability, emphasizing two typical lean production concepts – autonomation and visual management – which can be used in safety management to detect variability. [12] conducted a survey in 2013 and stated that 88% of the respondents had observed a positive impact of their lean activities on health and safety performance [13]. The authors of [14] evaluated OHS performance by providing empirical evidence on lean techniques; results from 10 case reviews showed that lean practices have a positive impact on OHS performance.
[15] conducted a systematic review of 18 articles about the effect of lean on productivity and OHS in the garment industry. The review indicated positive effects of lean on a variety of outcomes such as injuries, accidents, ergonomics, fatigue, teamwork, workers’ commitment, turnover rate and absenteeism. In another study, [16] designed a near-miss management system for occupational safety in a pull manufacturing system where lean principles are applied. In 2014, [17] applied the 5S method to hospital medical laboratories; compared to previous years, there was a significant improvement of 69.7% in the laboratory non-compliance score. [18] stated that, as a result of the 6S (5S + Safety) method applied to increase productivity and efficiency at an ink manufacturer, waste was reduced, a cleaner and safer work environment was provided, non-value-added activities were reduced, and a visual workplace was achieved. [19] proposed a systematic approach to the design of lean-oriented OHS systems using axiomatic design principles, obtaining a holistic roadmap for applying OHS systems to a manufacturing system; the proposed design was applied to a real shipyard system in the shipbuilding industry and its feasibility was demonstrated, with a focus on both the current and future workforce. The authors of [20] provided a technical and scientific model integrated with lean principles for ergonomic risk analysis. As a result of an action research project carried out in the garment industry, [15] stated that the lean philosophy has positive effects on OHS and creates synergy. [21] developed a construction safety system based on different lean construction tools and, using system dynamics and simulation, indicated that the implementation of lean has a significantly positive effect on improving overall safety systems. With the target of zero accidents, [22] added a safety level to the 5S method, expanding it to 6S, and tested it in an integrated high school laboratory.
Japanese businesses have taken a distinctive approach to OHS, referring to occupational safety as “Anzen” – the securing of safety. In a world industry where digitalization has risen to higher levels, and where safety and security have therefore come to the fore, Japanese enterprises have led all employees to adopt this safety approach with the motto “Anzen Daiichi”, equivalent to the English “Safety First” [23]. In this study, a model involving the application of lean techniques and aiming to establish and maintain an occupational health and safety system is proposed, and the results obtained by testing the model in a chemistry laboratory are summarized.

3 A Lean Occupational Health and Safety Model

The lean approach aims to maximize internal and external customer satisfaction with a more efficient and secure system by ridding any system or process of unnecessary factors, called muda. While the lean techniques applied can reduce costs and increase productivity and quality within the company, they also provide a safe and reliable working environment free from foreseeable risks and hazards. It is therefore undeniable that making work environments lean and visual will play a role in enhancing the success of OHS practices. For the working environment to be safe, it must be orderly, visible, quickly accessible, and free of hazards and risks; OHS requirements and the lean approach are united at this common point, and applying the lean philosophy is highly effective in providing such an environment. This study presents a “Lean Occupational Health and Safety Model”, which acts as a guide for meeting the OHS requirements of the environment. The model is designed as a cyclical methodology consisting of eight steps [24]. These steps aim to protect employee health and safety by minimizing danger and risk and by ensuring standardization and sustainability. Before the model is implemented, it should be ensured that, with the support of senior management, all employees receive training in the lean approach and develop lean awareness. Once a lean culture is established in the enterprise over time, the success achieved will clearly be higher in terms of both business objectives and OHS objectives. The lean OHS model is given in Fig. 1.

Fig. 1. Lean occupational health and safety model

3.1 The Stages of the Model

The stages of the lean occupational safety model can be summarized as follows:

Step 1: Current State Analysis. In this step, the system to be examined must be observed with respect to occupational safety requirements: the existence and use of the system, its location, neighboring systems, the work and projects being carried out, the tools, materials and other resources available in the system, the number of employees, the inventory status, and architectural features such as sections (if any), columns, windows and doors must be determined.


Step 2: Identifying Problems. The system should be examined from an OHS perspective: the storage and preservation of materials; the presence of unnecessary material; the condition and placement of pressurized gas cylinders and fire extinguishers; the storage and/or disposal of wastes; packages with unknown contents; cables; stacking; the ventilation system; chemicals; measurements of dust, noise, lighting, vibration and thermal comfort, and their frequency; ergonomics; the presence and visibility of instructions; protective equipment; the order or disorder of the work area; and the medicine/first-aid cabinet and the availability of its materials must all be determined. At this stage, keeping photographic, video and similar records is advised.

Step 3: Risk Analysis. At this stage, existing or external hazards in the system and the factors leading these hazards to turn into risks are determined, the resulting risks are analyzed and rated, and control measures are decided. Quantitative and qualitative methods can be used for risk assessment [25]. FMEA [26], the Hazard and Operability (HazOp) Study [27], Fault Tree Analysis [28], the Fine Kinney method [24], checklists, the Risk Assessment Decision Matrix, the L-Type Matrix and the Multi-Variable X-Type Matrix Diagram [29] are some of the risk analysis methodologies.

Step 4: Identification and Selection of Risk Groups. At this stage, the importance ranks and numbers of the risks identified in the risk assessment are determined, and the decision is made to eliminate significant and high-value risks.

Step 5: Suggestions for Solutions. As a result of the risk assessment, solutions are developed according to OHS requirements and in line with lean principles in order to correct high and significant risks and reduce the risk magnitude. Kaizen [30–32], 5S [33–36], Kanban [37–39], TPM [40–43], Group Technology [44–48], standardization [49–51], visual factory [52, 53], etc. are the most frequently referenced lean techniques.

Step 6: Suggestions and Implementation of Lean Techniques. The improvements proposed in Step 5 are carried out according to a plan and provide a secure working environment.

Step 7: Current Status and Risk Assessment. Risk factors for the improved system are re-measured and evaluated.

Step 8: Ensuring Standardization and Sustainability. One of the basic principles of the lean approach is to ensure standardization and sustainability. The stages that have been completed are standardized. Risk assessments should be reviewed continuously whenever significant changes are made at the workplace, so that the success of the risk assessment studies is sustained. These steps should be repeated periodically to maintain the continuity of the improvement and the adequacy of the measures taken; a period of one year is recommended, though high-risk sectors may require more frequent repetition.

3.2 Model Validation

The application of the model was carried out in a chemistry laboratory. The laboratory was first examined in terms of chemical products, layout, employees, and working environment and conditions; in the second step, problems were defined from the OHS point of view. In the third step, risk analysis was performed using the Fine Kinney method and 20 different risks were determined (Table 1); these risks were then assigned to risk groups. In the next step, solutions were proposed to correct high and significant risks and to reduce the risk magnitude, and 5S, Kaizen and visual factory applications were implemented to realize them (Table 1). Risk assessment was then repeated for the new situation, and it was determined that 18 of the risk groups no longer posed a risk. Kaizen applications were proposed for the ventilation system and the furnace but could not be implemented due to time and building constraints. Thus, more suitable working conditions were obtained and the environment was made safer.

Table 1. Risk factors and lean solutions. Each risk factor was addressed by Kaizen, 5S and/or visual factory applications; an asterisk (*) marks the two proposals (ventilation system and furnace) that could not be implemented. Risk factors: storage of chemicals; surplus chemicals; pressurized gas cylinder; chemical waste; storing chemicals in a household refrigerator; storing chemicals in different containers; chemicals with unknown content (MSDS); electrical wiring; stacking; ventilation system*; lack of occupational health and safety; ergonomics; instructions; personal protective equipment; fire extinguisher; irregular work environment; inability to perform first aid (medicine cabinet); laboratory doors; furnace*; fume cupboard.
The stage of standardizing the steps followed and ensuring their sustainability is ongoing.
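The Fine Kinney method used in the risk analysis step multiplies ratings for probability, exposure and consequence into a single score. The sketch below uses the commonly cited Fine Kinney band thresholds; the example hazard rating is hypothetical, not the laboratory's actual assessment.

```python
def fine_kinney_score(probability, exposure, consequence):
    """Fine Kinney risk score R = P * E * C.

    probability: likelihood of the hazardous event (typical scale 0.1-10)
    exposure:    frequency of exposure to the hazard (typical scale 0.5-10)
    consequence: severity of the possible outcome (typical scale 1-100)
    """
    return probability * exposure * consequence

def risk_group(score):
    """Assign a risk group using commonly cited Fine Kinney bands."""
    if score > 400:
        return "very high: consider stopping work"
    if score > 200:
        return "high: immediate improvement required"
    if score > 70:
        return "substantial: correction needed"
    if score > 20:
        return "possible: attention needed"
    return "acceptable"

# hypothetical rating (illustrative values only) for one laboratory hazard:
# chemicals stored in a household refrigerator
r = fine_kinney_score(probability=3, exposure=6, consequence=15)
print(r, "->", risk_group(r))  # → 270 -> high: immediate improvement required
```

Scoring every identified hazard this way and sorting by score gives the risk-group ranking used in Step 4 to decide which risks must be eliminated first.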


4 Conclusions

Occupational diseases and occupational accidents remain among the most important problems of modern societies. As a result of existing risks, adverse conditions in the working environment and measures not taken, hundreds of thousands of employees suffer occupational diseases or accidents every year. Today, the occupational health and safety practices applied to prevent them have started to be secured by law. Within the scope of an occupational health and safety management system, risk analyses are carried out, deficiencies are identified and corrected, and plans to prevent existing risks are prepared. In this way, occupational accidents and occupational diseases are kept to a minimum even if they cannot be completely prevented. In this study, a lean occupational health and safety model was proposed for the application of lean techniques within the scope of OHS. The model, consisting of eight steps, was applied in a chemistry laboratory, and 18 of the 20 identified risk factors ceased to be a risk; for the remaining 2, the recommendations could not be applied due to special conditions. It is hoped that the lean occupational safety model proposed in this study will serve as a guideline for creating safe environments in the manufacturing and service sectors, in laboratories, on shop floors and in workplaces.

References

1. Badri, A., Gbodossou, A., Nadeau, S.: Occupational health and safety risks: towards the integration into project management. Saf. Sci. 50(2), 190–198 (2002)
2. Towlson, D.: NEBOSH: International General Certificate in Occupational Safety and Health. RRC Business Training, London (2003)
3. Coenen, P., Gilson, N., Healy, G.N., Dunstan, D.W., Strakera, L.M.: A qualitative review of existing national and international occupational safety and health policies relating to occupational sedentary behaviour. Appl. Ergon. 60, 320–333 (2017)
4. Fawaz, A.A., Jayant, R., Kim LaScola, N.: A classification scheme for the process industry to guide the implementation of lean. Eng. Manage. J. 18(2), 15–25 (2015)
5. Birgün, S., Gülen, K.G., Anol, Y.: Yangın söndürme cihazları üretim süreçlerinin yalınlaştırılması [Leaning the production processes of fire-extinguishing devices]. In: IX. Ulusal Üretim Araştırmaları Sempozyumu Bildiriler Kitabı, Eskişehir, pp. 661–672 (2009)
6. Baudin, M.: Lean Production: The End of Management Whack-a-Mole. Palo Alto (1999)
7. Ertay, T., Birgün Barla, S., Kulak, O.: Mapping the value stream for a product family towards lean manufacturing: a case study. In: CD-ROM of International Conference on Production Research ICPR-16, Prague (2001)
8. Hoppmann, J., Rebentisch, E., Dombrowski, U., Zahn, T.: A framework for organizing lean product development. Eng. Manage. J. 23(1), 3–15 (2011)
9. Ohno, T.: Toyota Production System: Beyond Large-Scale Production. CRC Press, New York (1988)
10. Hallihan, A., Sackett, P., Williams, G.M.: JIT manufacturing: the evolution to an implementation model founded in current practice. Int. J. Prod. Res. 35(4), 901–920 (1997)
11. Saurin, T.A., Formoso, C.T., Cambraia, F.B.: Towards a common language between lean production and safety management. In: Proceedings IGLC-14, Santiago, Chile, July 2006, pp. 483–495 (2006)

A New Model Proposal for Occupational Health and Safety




M. Ulu and S. Birgün


An Integrated Value Stream Mapping and Simulation Approach for a Production Line: A Turkish Automotive Industry Case

Onur Aksar, Duygu Elgun, Tuğçe Beldek, Aziz Kemal Konyalıoğlu, and Hatice Camgöz-Akdağ(B)

Management Engineering Department, Istanbul Technical University, Istanbul, Turkey
{beldek,konyalioglua,camgozakdag}@itu.edu.tr

Abstract. Lean production practices are increasing day by day, especially in manufacturing companies, and in parallel the importance that companies attach to the lean production concept is also increasing. To maintain their competitive strength in today's environment, companies need continuous improvement, need to be innovative, and need to identify problems in the system in a limited time and act on them. In this paper, under the roof of lean thinking and the lean manufacturing concept, a literature review and a simulation study for a manufacturing company have been carried out on improvement studies, especially in the automotive sector, in adaptation to Industry 4.0, and on the methods, frameworks and models used in these studies. The aim of the study is to guide industry in improving its processes by analysing the current situation with the lean concept and making use of the necessary technologies and models. In the research, current- and future-state analyses were made using the lean production approach, the system was simulated with ARENA software and its performance was measured. Accordingly, a financial analysis of the proposed improvement was performed and a decision was made.

Keywords: Lean production · Automotive sector · Lean thinking · Lean manufacturing · Industry 4.0 · Simulation · Value Stream Mapping (VSM)

1 Introduction

Lean thinking as an innovation first originated in the Japanese automobile manufacturer Toyota Motor Corporation in the 1950s under the leadership of Taiichi Ohno. Ohno's primary goal was to make Toyota more competitive against its strong rivals in Japan. He classified processes in manufacturing as value-adding, non-value-adding but still necessary, and wasteful. Defining 7 different wastes that he observed in production processes, he argued that these wastes must be detected and eliminated. From the 1950s to the 1980s, Ohno made Toyota the world leader in automobile manufacturing by applying the methods he developed, such as Just-in-Time and Jidoka. In time, the methods developed by Taiichi Ohno passed into the literature as the Toyota Production System. In their book "The Machine that Changed the World", Womack et al. (1990) compared Henry Ford's mass production with TPS and analyzed

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 357–371, 2022. https://doi.org/10.1007/978-3-030-90421-0_30


these two different production methods. In the 2000s, the Lean Thinking concept that emerged with TPS began to be used in different industries, and new lean tools were developed. These tools were applied in sectors such as automobile production, service, health and transportation, and many processes became lean while costs were reduced. The importance of lean manufacturing has gradually increased with the emergence of Industry 4.0. Buer et al. (2018) argued that lean is essential for a successful Industry 4.0 transformation and showed that lean is an ideal foundation when moving towards Industry 4.0. Process improvements continue uninterruptedly in the automotive sector, where lean manufacturing first emerged. In pursuit of perfection, one of the basic principles of lean thinking, lean applications in this industry are carried out using various lean thinking tools. Although it is difficult to identify problems in manufacturing without proper investigation, in the automotive industry various actions that do not add value to the customer on the production line generally create problems such as labour, material and energy losses for the company.

2 Literature Review

2.1 Lean Thinking

Lean Thinking is an organizational discipline first introduced by Toyota Motor Corporation under the management of Taiichi Ohno through the development of the Japanese production system. Lean Thinking seeks to improve business output and value creation by reducing (or disposing of) wasteful behaviour, where waste is defined as anything that adds no value for the customer (Noto and Cosenz 2020).

Muda is a Japanese word that means "waste", specifically any human activity that consumes resources but creates little value: mistakes that need rectification; creation of products no one needs, so that inventories and leftover goods pile up; production phases that are not actually required; transfer of workers and transportation of goods from one location to another for no reason; groups of workers in a downstream operation standing around waiting because an upstream activity has not delivered on schedule; and goods and services that do not meet the customer's needs. The first 7 forms of Muda were described by Taiichi Ohno (1912–1990), and perhaps many more exist. Whatever forms Muda may take, even the most basic examination of an ordinary day in the average company makes it impossible to deny that Muda is everywhere. Fortunately, Muda has a good cure: lean thinking. It offers a way to define value, to arrange value-creating tasks in the best sequence, to execute these tasks without disruption whenever someone requests them, and to execute them ever more efficiently. In brief, lean thinking is lean because it provides a means of achieving more with less resources, less time and less space, while getting ever closer to giving consumers exactly what they want to buy (Womack and Jones 1996).
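The seven waste categories enumerated above can be captured as a small data structure. The sketch below is purely illustrative, not from the paper: it follows the classic split of transport and motion into separate categories, and the `classify` helper with its keyword lists is a hypothetical toy.

```python
from enum import Enum

class Muda(Enum):
    """Ohno's seven waste categories, as described in the passage above."""
    DEFECTS = "mistakes that need rectification"
    OVERPRODUCTION = "products no one needs, piling up as inventory"
    OVERPROCESSING = "production phases that are not actually required"
    TRANSPORTATION = "goods moved from place to place for no reason"
    WAITING = "downstream workers idle because upstream work is late"
    MOTION = "unnecessary movement of workers"
    NON_VALUE = "goods and services that do not meet the customer's needs"

def classify(observation: str) -> list:
    """Toy keyword tagger: map a free-text shop-floor note to waste categories."""
    keywords = {
        Muda.DEFECTS: ("rework", "defect", "scrap"),
        Muda.OVERPRODUCTION: ("inventory", "overproduce"),
        Muda.WAITING: ("waiting", "idle"),
        Muda.TRANSPORTATION: ("transport", "forklift"),
    }
    text = observation.lower()
    return [m for m, words in keywords.items() if any(w in text for w in words)]
```

In practice, such tagging is done by walking the line, not by text matching; the point is only that the seven wastes form a fixed, enumerable checklist.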


The Toyota Production System is based on a series of (technical) practices that concentrate on enhancing customer satisfaction. Toyota itself emphasizes two important methods: 'just-in-time' ('what is needed, when it is needed, and in the amount needed') and 'jidoka' ('automation with a human touch'). Just-in-time is an approach to waste reduction that makes parts and products available when a consumer needs them, and in the required amount. To achieve just-in-time, parts are produced iteratively in small batches (with an optimum size of one, i.e., one-piece flow) without work-in-process. Where one-piece flow is not feasible, pull replenishment systems that let consumer demand initiate production are used. An item's consumption then generates a need for resupply, which is signalled by what are known as Kanban cards. Jidoka is accomplished by means of equipment that stops automatically after identifying quality issues. This enables one operator to visually track and control multiple machines and processes simultaneously (through visual cues such as andon display boards). Such practices make it possible to recognize and signal problems in real time and lay the groundwork for training and development (Mazzocato et al. 2020).

2.1.1 Principles of Lean Thinking

There are five main principles of lean thinking, based on the book by Womack and Jones.

Specify Value. The core philosophy of this principle is to "re-evaluate value from the customer's point of view" and to disregard current resources and technologies.

Identify the Value Stream. This principle refers to modelling and designing the manufacturing system, covering product creation, order execution and manufacturing itself, in particular in order to sort out inefficient and avoidable practices.

Make Value Flow. This principle, while aiming at the reduction of lead times in general, relates directly to the one-piece flow process rather than a flow composed of lots.
Pull Value from the Producer. In compliance with this principle, instead of the manufacturing system pushing goods, often undesired ones, onto the consumer, the customer pulls the item from the production system as desired.

Pursue Perfection. While delivering a product that comes ever closer to what the consumer really needs, there is no limit to the process of minimizing effort, cost, and errors (Koskela 2004).

2.1.2 Benefits of Lean Thinking

Lean thinking is well known in industrial processes and its advantages are well recognized. Lean production encourages enterprises to run with lower inventories, helping businesses to free working capital; reduces lead times in production, resulting in a shorter turnaround time to consumer demands; produces less waste, i.e. process effluent or rework of out-of-specification material; and increases quality (often referred to as "right first time") (Melton 2004). Figure 1 shows the benefits of lean thinking.
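The pull principle and the kanban replenishment mechanism described above can be sketched as a toy model; the `KanbanBin` class below is a hypothetical illustration (the part name and card count are made up), not a description of any real kanban software.

```python
from collections import deque

class KanbanBin:
    """A supermarket bin: consuming a part releases its kanban card, and only
    a released card authorizes upstream replenishment (pull, never push)."""

    def __init__(self, part: str, num_cards: int):
        self.part = part
        self.stock = num_cards    # each card starts attached to one part
        self.released = deque()   # cards travelling upstream as production orders

    def consume(self) -> bool:
        """Downstream demand takes one part. Returns False on stockout."""
        if self.stock == 0:
            return False          # demand waits; nothing is built ahead of need
        self.stock -= 1
        self.released.append(self.part)  # the freed card travels upstream
        return True

    def replenish(self) -> None:
        """Upstream builds one part only if a released card authorizes it."""
        if self.released:
            self.released.popleft()
            self.stock += 1

bin_ = KanbanBin("machined shaft", num_cards=3)
bin_.consume()
bin_.consume()
# two cards released: upstream is authorized to build exactly two parts, no more
```

The design choice worth noting is that the card count bounds work-in-process: upstream can never produce more than the number of circulating cards, which is exactly how kanban caps inventory.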


Fig. 1. Benefits of lean thinking (Melton 2004)

2.1.3 Tools of Lean Thinking

Lean thinking tools did not evolve in an instant or at a single point in time. Over the years, many researchers have introduced new tools and techniques into lean thinking, and as a result it has become increasingly difficult to separate them. Currently, about 25 tools and techniques are used in lean thinking applications (Kulkarni et al. 2014). In their study, Leita and Vieira (2015) examined lean applications in the service industry and showed that there is no single dominant tool for that industry; rather, mixed tools such as Value Stream Mapping, Kanban, Just-in-Time and 5S are preferred as a lean thinking approach. Villarreal et al. (2016) used VSM with simulation as a lean thinking tool in the transportation industry. James and Haque (2010) explored how lean thinking tools can be used in the production process of a new product and showed that the application of tools such as JIT and VSM to NPI (new product introduction) processes was successful. Jorma et al. (2016) showed that in the Finnish health system it is possible to achieve financial savings and provide better service through management and process improvement efforts using lean tools. Randhawa and Ahuja (2017) utilized 5S to improve management performance in the Indian manufacturing industry and found that its adoption allowed manufacturing companies to achieve considerable advantages such as overall operational change, efficiency, consistency, safety, worker morale, productive use of workspace, and cost optimization. Their findings further illustrate that holistic 5S application reduces severe chronic manufacturing system issues such as shortages, failures, demoralized workers, diminishing income, and unhappy consumers. Saravanan et al. (2018a, b) applied the lean tool Single Minute Exchange of Die (SMED) to decrease production losses and improve efficiency in small-scale industries, thereby decreasing total changeover time by about 67.72%.


2.2 Lean Manufacturing

2.2.1 Birth of Lean Manufacturing

The idea of lean manufacturing appeared in Japan at the Toyota Motor Company during the 1940s, when Japanese industrial organizations faced a shortage of resources after the Second World War, which prompted them to look for a production system that could cope with this deficit. The lean concept emerged as one of the answers to the scarcity of resources. The founders of Toyota were able to examine and identify the strengths and weaknesses of the mass-production system adopted by the American companies of the time, and the core of an alternative to the conventional system emerged, known as the Toyota Production System. Its first plans were initiated by the engineer Ohno, whose objective was to reduce waste in all purchasing and production stages (Mady et al. 2020). According to Alefari et al. (2020), the term 'Lean Manufacturing' was proposed by Krafcik (1988), a long while after the first presentation of the Toyota Production System. The overarching aim of lean is to achieve the same performance (and further improve it) while using less input, for example less time, less space, less human effort, less machinery, less material and less expense.

2.2.2 The Need for Lean Manufacturing

The immediate need to lessen negative corporate environmental effects while strengthening economic resilience and positive societal benefits is drawing company leaders to implement various quality improvement frameworks, for example lean manufacturing, six sigma, sustainable manufacturing, and circular economy concepts, approaches, and technologies. These methodologies are significant, with lean manufacturing (LM) among the leading frameworks when implemented within a proper structure. Use of LM tools already ensures competitive benefits, for example improvements in product quality, profitability, worker health and safety, and consumer loyalty (Yadav et al. 2020a, b).

Applying lean manufacturing in businesses supports competitiveness in global and domestic markets, creates opportunities for new work, accelerates development of the workforce and raises technological capability levels. All things considered, the main objective of TPS was to improve productivity and reduce cost by eliminating waste and non-value-added work. Its benefits in terms of cost, quality, flexibility and fast response across countries and industries clearly demonstrated the positive effect of the tools and principles of lean manufacturing. The purpose of lean is not to maximize the benefit of only a single unit at a time, but rather of the whole system (Goshime et al. 2019).


Lean Manufacturing Practices (LMPs) have been of interest for organizational operational competitiveness for a long time. Key principles of LMPs include the identification and elimination of all non-value-added activities, or waste, and involving workers in efforts toward continuous improvement. Research on lean manufacturing suggests that, while LMPs do not necessarily incorporate environmental responsibility, such practices can help moderate some environmental effects because of their characteristic focus on waste elimination. While LMPs may not have a direct intention of lessening environmental impact, their use has improved energy productivity, diminished waste and emissions, and decreased inventory waste (Bai et al. 2019). Drawing on modern industrial ecology, lean manufacturing nowadays also treats damage to the natural environment and excessive use of natural resources as additional forms of waste. LM seeks to gradually decrease waste by changing structure (for instance, the level of autonomy), adjusting or introducing new processes (for instance, mixed-mode processing, single-minute exchange of die (SMED) or just-in-time (JIT)) and new routines (for instance, six sigma or poka-yoke) (Ghobadian et al. 2020). Goienetxea et al. (2019) conducted a study to define the current approaches and frameworks for combining lean and simulation, while also identifying key aspects and challenges, noting that numerous authors address the benefits of integrating lean and simulation to better support decision-makers in system design and development. As a result of this study, Goienetxea et al. (2019) stated that rising interest in integrating lean and simulation in the context of Industry 4.0, and in integrating them with optimization, Six Sigma and sustainability, are the key trends found, and that the number of publications in these areas is likely to keep rising.
2.2.3 Taxonomy for Lean Manufacturing, VSM and Simulation

As a result of the literature review, 28 articles were classified based on Author(s), Offer, Methodology, Industry and Conclusion. Figure 2 shows the taxonomy of the reviewed studies.


[Fig. 2 (table): the 28 reviewed studies classified by Author(s), Offer, Methodology, Industry and Conclusion. The methodologies surveyed include Value Stream Mapping (alone and combined with simulation), 5S, SMED, lean automation, lean assessment, benchmarking, Gantt charts, the Toyota Production System and multiple-criteria decision making; the industries covered are automotive, manufacturing, healthcare, transportation and mining, plus general studies.]

Fig. 2. Taxonomy for VSM, lean manufacturing and simulation

3 Methodology

The methodology used in this paper is a comprehensive analysis of the value stream mapping implementation process of a company that produces machine parts. With the Value Stream Mapping methodology, wastes are presented in a visualized way, and the seven wastes have been identified by drawing the Current State Map (CSM). The study was carried out with a machine parts company that operates in Turkey and produces body and engine parts for the automotive industry. The company's future vision has a strong emphasis on lean manufacturing techniques. Observations were made in the company and data were recorded in order to create a VSM. The company's product range includes shafts, front axle beams, front axle assemblies, steering knuckles, etc.

3.1 Value Stream Mapping for the Company

Value Stream Mapping is carried out using a step-by-step method. The initial step is to choose a product family as the improvement target and create a "Current State Map" for the chosen value stream (Braglia et al. 2006). It was decided to perform Value Stream Mapping for the most produced product type on the shaft line. The Current State Map should be drawn using a common collection of symbols and based on data obtained directly at the production site (Braglia et al. 2006). Figure 3 shows the Current State Map for the company.

Fig. 3. Current state map
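To make concrete what a current-state map encodes, the sketch below computes the two headline VSM figures, total lead time and the value-added ratio, from a small table of stations. The cycle times and inventory waits are hypothetical placeholders for illustration, not the company's measured data from Fig. 3.

```python
# Hypothetical current-state data: (station, value-adding cycle time in s,
# inventory wait in front of the station in s). Illustrative numbers only.
stations = [
    ("metal straightening", 40, 3600),
    ("centering",           25, 1800),
    ("turning",             90, 7200),
    ("induction hardening", 60, 5400),
]

processing_time = sum(cycle for _, cycle, _ in stations)      # value-adding time
lead_time = sum(cycle + wait for _, cycle, wait in stations)  # door-to-door time
va_ratio = processing_time / lead_time

print(f"processing time : {processing_time} s")
print(f"lead time       : {lead_time} s")
print(f"value-added     : {va_ratio:.2%}")
```

Current-state maps typically show value-added ratios of only a few percent; the future-state map then targets the waiting terms, which dominate lead time.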

3.2 Machine Downtime and Analysis of Machine Downtime Data

Machine downtime refers to the total time a machine on the production line is stopped during a shift for various reasons. Problems begin to occur on production lines where machine downtime is high, and this situation also leads to various wastes such as cost, labor and raw material losses. According to Fox et al. (2008), machine downtime can be planned or unplanned, and it creates costs that are difficult to calculate in manufacturing factories.


Machine Stop Rate is the total time that a machine has stopped during a shift divided by the total shift time; it is thus an important indicator of the efficiency of the production line. There are many reasons for machine downtime on the production line, including changeover time, absence of a worker, cleaning of the desk, and quality-control studies. In order to identify the seven wastes, one of the main principles of LM, machine downtimes, downtime rates, and the reasons for machine downtime should be analyzed correctly. According to Zammori et al. (2006), to analyse and identify the seven wastes, it may be helpful to break down the downtime reported for each production terminal into its key components such as "machine failure", "cleaning time" and "absence of operator". Using the data received from the partner company, machine downtimes were calculated and pie charts were created for each terminal. The main point here is to draw attention to the relative weight of the reasons that make up the stop times of the machines and to reveal the most important ones. The data cover a period of two months, in line with the aim of analyzing broadly. Table 1 shows the stop times of the processes.

Table 1. Stop times of processes

Process name              Stop time (hours)
Metal straightening       17.1
Centering                 21.7
Turning                   121.2
Tournant straightening    4.1
Turning diameter          21.8
Tapering                  72
Channel opening           30.6
Induction hardening       22.3
Sandblasting              0
Crack control             2
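To see which operations dominate the downtime, the figures in Table 1 can be given a simple Pareto ranking. A minimal sketch in Python, with the values transcribed from Table 1:

```python
# Rank process stop times (Table 1) to locate the dominant downtime sources.
stop_hours = {
    "Metal straightening": 17.1, "Centering": 21.7, "Turning": 121.2,
    "Tournant straightening": 4.1, "Turning diameter": 21.8, "Tapering": 72.0,
    "Channel opening": 30.6, "Induction hardening": 22.3,
    "Sandblasting": 0.0, "Crack control": 2.0,
}

total = sum(stop_hours.values())
ranked = sorted(stop_hours.items(), key=lambda kv: kv[1], reverse=True)

# Cumulative share, Pareto-style: the first few processes usually dominate.
cum = 0.0
for name, hours in ranked:
    cum += hours
    print(f"{name:24s} {hours:6.1f} h  {hours/total:6.1%}  cum {cum/total:6.1%}")
```

With these numbers, turning alone accounts for roughly 39% of all stop time, and turning plus tapering for about 62%, consistent with the bottleneck discussion later in the paper.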

Overprocessing wastes were analyzed in both the VSM and the company data. The factory's low control budget and poor machine positioning were identified as the causes of waste on this line.

3.3 Simulation in ARENA Software

Arena software provides a significant technology for process simulation and for optimizing the simulated system. It allows working with large data sets to measure the performance of the system; once performance has been measured, the efficiency of the scenarios created to improve the system can also be evaluated in Arena.


O. Aksar et al.

There are many different data sets that affect the performance of the production line, which consists of 11 processes. The current Value Stream Map was created by taking a snapshot of the production area, where a time study was performed with the stopwatch technique. However, in order to make an improvement suggestion and draw a future state map, it is necessary to consider the physical character of the production process. Since the production process is naturally continuous-valued, simulating continuous data in the Arena software helps to identify and eliminate waste in the system. Arena provides a good approximation by enabling input analysis, visualization of wastes in the production line, output analysis, and multiple replications of the system. In addition, it allows statistically appropriate augmentation of the data set, thus making the results more reliable. Data can be analyzed with the Input Analyzer tool, reliable results can be obtained by running the model repeatedly with the Run tool, performance indicators can be examined in the generated result report, and scenario analysis can be performed with the Process Analyzer tool. Since two different resources cannot be used in one process in Arena, machines and operators are modeled as separate processes. Transfer allocation was used as the processing method for the operators. While modeling the simulation, various variables were added: values such as unit holding cost, unit shortage cost, demand size, and unit intermediate cost were calculated and used in the statistical analysis. Figure 4 shows the Arena module of the simulation based on the current state map given in Fig. 3.

Fig. 4. Arena module of simulation

Various calculations and assumptions are required to get correct outputs from the simulation and to observe the changes accurately. In order to construct the simulation correctly, the following assumptions and calculations have been made:

• At the beginning of the simulation, it was assumed that the raw materials were ready.
• The standard time for operators to transport a product from one process to another is 1 min.
• The sales price, taken from the company, is $240.
• The holding cost rate is calculated as 25% of the sales price, which is $130 per unit.
• Shortage cost is assumed to be $110 per unit.
• Unit intermediate shortage cost is assumed to be $6.
• Customer demand is calculated annually and adapted to one day as 78 units; it is expected to remain stable in the coming years.
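A sketch of how these cost parameters combine into a daily total cost follows. The unit costs are the paper's stated assumptions; the daily inventory and shortage quantities in the example are hypothetical placeholders, since the simulation reports those per replication:

```python
# Illustrative daily cost roll-up from the simulation's cost parameters.
# Unit costs are the paper's assumptions; the quantities passed in below
# are hypothetical placeholders (Arena reports them per replication).
HOLDING_COST = 130.0       # $ per unit held per day
SHORTAGE_COST = 110.0      # $ per unit of unmet customer demand
INTERMEDIATE_COST = 6.0    # $ per unit of intermediate (WIP) shortage

def daily_total_cost(held_units, short_units, wip_short_units):
    """Total daily cost = holding + shortage + intermediate shortage."""
    return (HOLDING_COST * held_units
            + SHORTAGE_COST * short_units
            + INTERMEDIATE_COST * wip_short_units)

# Hypothetical example day: 15 units held, 8 units short, 30 WIP-short.
cost = daily_total_cost(15, 8, 30)  # 130*15 + 110*8 + 6*30 = 3010.0
```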

Thirty pilot replications were performed first. Considering the standard error obtained from these 30 values and taking an epsilon value of 1.5, the required number of replications was calculated as 417; since 30 replications had already been performed, 417 additional replications were run. After these calculations, Fig. 5 shows the future state map for eliminating bottlenecks and waste times.

Fig. 5. Future state map
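The replication-count step above can be sketched with the standard sequential half-width formula commonly used with simulation output. Only the 30 pilot replications and the epsilon value of 1.5 come from the text; the pilot half-width h0 below is a hypothetical value:

```python
import math

def required_replications(n0, h0, target_half_width):
    """Sequential half-width rule often used with simulation output:
    n ~= n0 * (h0 / h_target)^2, rounded up to a whole replication."""
    return math.ceil(n0 * (h0 / target_half_width) ** 2)

# n0 = 30 pilot replications and epsilon = 1.5 are from the paper;
# h0 (half-width observed in the pilot run) is a hypothetical placeholder.
n = required_replications(n0=30, h0=5.6, target_half_width=1.5)
```

The rule simply scales the pilot sample size by the squared ratio of the observed to the desired confidence-interval half-width.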

The Future Value Stream Map, the second part of the VSM method, was organized according to the results of the scenario analysis. The selected scenario, adding a centering machine, is shown in the future VSM. Accordingly, the lead time is calculated as 4762 min, and the value-added time percentage as 3.7799%. Furthermore, since the two bottlenecks that stand out in the output analysis were observed in the centering and tapering processes, it was suggested to increase the number of tapering machines from 1 to 2 along with the number of centering machines. Figure 6 shows the scenario analysis based on processes.

Fig. 6. Scenario analysis based on VSM
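The reported value-added percentage can be cross-checked from the lead time. The lead time (4762 min) and the percentage (3.7799%) are stated in the text; the value-added time of about 180 min is back-calculated from them, not stated in the paper:

```python
# Cross-check of the future-state value-added percentage.
# Lead time and VA% come from the text; the ~180 min value-added time is
# implied by those figures (0.037799 * 4762 ~= 180), not stated explicitly.
lead_time_min = 4762
value_added_min = 180

va_percent = 100 * value_added_min / lead_time_min
print(f"Value-added share: {va_percent:.4f}%")  # prints 3.7799%
```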


The capacity of the tapering and centering processes was doubled. Accordingly, the daily centering queue "number in" decreased from 4.195 to 3.981, and the daily tapering queue "number in" decreased from 0.408 to 0.097. Queues in other processes are expected to be slightly positively affected as well. The system "number out" value increased from 53.26 to 54.11 with the improvement in the queues. A decrease in lead time was also observed with the increased capacity of the centering and tapering machines; as a result, shortage cost and intermediate stock cost decreased, while a small increase was observed in holding cost. When the total cost is considered on a daily basis, a net reduction is observed: Total Cost (Current System) − Total Cost (Scenario 1) = $2826.33 − $2732.66 = $93.67 per day. As stated in the assumptions, it is accepted that this daily result will hold throughout the year. The annual decrease in total cost is then calculated, taking into account an average of 22 working days per month, as $93.67 × 22 × 12 = $24,728.88. This annual decrease of $24,728.88 in total cost is treated as revenue in the financial calculations. Figure 7 shows the financial analysis based on the scenarios of the VSM.

Fig. 7. Scenario analysis based on financial parameters
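The daily-to-annual cost roll-up described above is simple enough to verify directly, using the figures quoted in the text:

```python
# Daily-to-annual saving roll-up from the scenario comparison in the text.
daily_total_current = 2826.33    # $ per day, current system
daily_total_scenario1 = 2732.66  # $ per day, scenario 1
working_days_per_month = 22
months_per_year = 12

daily_saving = daily_total_current - daily_total_scenario1  # 93.67 $/day
annual_saving = daily_saving * working_days_per_month * months_per_year
print(f"Annual saving: ${annual_saving:,.2f}")  # prints $24,728.88
```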

Positive NPV values are observed in both scenarios. When choosing between alternative scenarios, the one with the higher IRR is expected to be preferred; in this case, Scenario 1 is preferable to Scenario 2. In addition, the return on investment was calculated as part of the financial analysis. ROI indicates the percentage profit made under each scenario, and Scenario 1 has a greater ROI value than Scenario 2. A smaller payback period is also more attractive, as the payback period represents the time required to reach the break-even point. Consequently, Scenario 1 is preferred to Scenario 2, considering all financial parameters.
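The financial metrics used in this comparison can be sketched as follows. The paper's actual investment and cash-flow figures are not reproduced here, so the scenario numbers in the example are hypothetical placeholders:

```python
# Sketch of the financial comparison metrics (NPV, ROI, simple payback).
# The investment and cash-flow numbers below are hypothetical placeholders;
# the paper's actual scenario figures are not reproduced here.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the time-0 (negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period_years(investment, annual_cash_flow):
    """Simple (undiscounted) payback: years to recover the investment."""
    return investment / annual_cash_flow

def roi(total_profit, investment):
    """Return on investment as a fraction of the initial outlay."""
    return total_profit / investment

# Hypothetical scenario: $50,000 machine investment, ~$24,729 annual saving.
flows = [-50_000] + [24_728.88] * 5
print(round(npv(0.10, flows), 2))               # positive NPV at 10% discount
print(payback_period_years(50_000, 24_728.88))  # about 2.02 years
```

As in the paper, a positive NPV, a higher ROI, and a shorter payback period would all favor the same scenario.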

4 Conclusions

To sum up, the literature review was carried out considering both the history of lean and today's technological advances. After defining each topic, applications were investigated accordingly. To better evaluate the most efficient approach for each industry, 28 articles were classified by author(s), offer, methodology, industry, and conclusion. Various methods were used, such as Gantt charts, Value Stream Mapping, 5S, Six Sigma tools, Total Quality Management, and Lean Six Sigma, in different industries such as transportation, healthcare, mining, automotive, and general manufacturing. Ten of the applications concerned the automotive industry, and all of them achieved better processes, lower cost, and eliminated waste. In the automotive industry, these goals were achieved with Value Stream Mapping, the Toyota Production System, and the 5S method as lean-thinking tools. Moreover, VSM combined with simulation has been applied in various industries, showing that using these two methods together is highly effective for improving processes and supports better decisions, especially in the automotive industry. Data such as cycle times, downtimes, uptimes, and operator numbers were collected from the company, and the VSM method was applied to the shaft production line as a lean tool. According to the current VSM, the value-added time is calculated as 0.10897%. The data obtained were expanded and simulated with the Arena simulation program. After determining the process to be improved through output and bottleneck analyses, scenarios were created. In scenario 1, to increase the capacity of the centering process, where a bottleneck was observed, the number of machines was increased from 1 to 2. In scenario 2, the number of machines was increased from 1 to 2 for both the centering and tapering processes, taking their bottlenecks into account. Financial analysis was performed by evaluating the data separately for each scenario; Net Present Value, Internal Rate of Return, Return on Investment, and Payback Period values were found, and the two scenarios were compared. In the financial analysis, adding one centering machine was determined to be the most profitable scenario, and the future state map, in which the bottlenecks in the centering and tapering processes were improved as a result of the capacity increase, was drawn. Thus, the value-added percentage, chosen as the key performance parameter, increased from 0.10897% to 3.7799%.

References

Adel, A.A., Badiea, A.M., Albzeirat, M.K.: Techniques and assessment of lean manufacturing implementation: an overview. Int. J. Eng. Artif. Intell. 1(4), 35–43 (2020)
Alefari, M., Almanei, M., Salonitis, K.: Lean manufacturing, leadership and employees: the case of UAE SME manufacturing companies. Prod. Manuf. Res. 8(1), 222–243 (2020)
Andrade, P.F., Pereira, V.G., Del Conte, E.G.: Value stream mapping and lean simulation: a case study in automotive company. Int. J. Adv. Manuf. Technol. 85(1–4), 547–555 (2015). https://doi.org/10.1007/s00170-015-7972-7
Bai, C., Satir, A., Sarkis, J.: Investing in lean manufacturing practices: an environmental and operational perspective. Int. J. Prod. Res. 57(4), 1037–1051 (2019)
Basu, P., Dan, P.K.: A comprehensive study of manifests in lean manufacturing implementation and framing an administering model. Int. J. Lean Six Sigma 11(4), 797–820 (2020). https://doi.org/10.1108/IJLSS-11-2017-0131
Belokar, R.M., Kumar, V., Kharb, S.S.: An application of value stream mapping in automotive industry: a case study. Int. J. Innovative Technol. Exploring Eng. 1(2), 152–157 (2012)
Boonsthonsatit, K., Jungthawan, S.: Lean supply chain management-based value stream mapping in a case of Thailand automotive industry. In: 4th International Conference on Advanced Logistics and Transport (ICALT), Valenciennes, pp. 65–69 (2015). https://doi.org/10.1109/ICAdLT.2015.7136593
Braglia, M., Carmignani, G., Zammori, F.: A new value stream mapping approach for complex production systems. Int. J. Prod. Res. 44(18–19), 3929–3952 (2006). https://doi.org/10.1080/00207540600690545
Buer, S.-V., Strandhagen, J.O., Chan, F.T.S.: The link between Industry 4.0 and lean manufacturing: mapping current research and establishing a research agenda. Int. J. Prod. Res. 56(8), 2924–2940 (2018)
Fox, J., Brammal, J., Yarlagadda, P.: Determination of the financial impact of machine downtime on the Australia Post large letters sorting process. In: Gudimetla, P., Yarlagadda, P. (eds.) Proceedings of the 9th Global Congress on Manufacturing and Management (GCMM2008), pp. 1–7 (2008)
Ghobadian, A., Talavera, I., Bhattacharya, A., Kumar, V., Garza-Reyes, J.A., O'Regan, N.: Examining legitimatisation of additive manufacturing in the interplay between innovation, lean manufacturing and sustainability. Int. J. Prod. Econ. 219, 457–468 (2020). https://doi.org/10.1016/j.ijpe.2018.06.001
Goshime, Y., Kitaw, D., Jilcha, K.: Lean manufacturing as a vehicle for improving productivity and customer satisfaction: a literature review on metals and engineering industries. Int. J. Lean Six Sigma 10(2), 691–714 (2019)
Jorma, T., Tiirinki, H., Bloigu, R., Turkki, L.: Lean thinking in Finnish healthcare. Leadersh. Health Serv. 29(1), 9–36 (2016)
Kamble, S., Gunasekaran, A., Dhone, N.C.: Industry 4.0 and lean manufacturing practices for sustainable organisational performance in Indian manufacturing companies. Int. J. Prod. Res. 1–19 (2019). https://doi.org/10.1080/00207543.2019.1630772
Koskela, L.J.: Moving on - beyond lean thinking. Lean Constr. J. 1(1), 24–37 (2004)
Kumar, N., Mathiyazhagan, K.: Manufacturing excellence in the Indian automobile industry through sustainable lean manufacturing: a case study. Int. J. Serv. Oper. Manage. 34(2), 180–196 (2019)
Lacerda, A.P., Xambre, A.P., Alvelos, H.M.: Applying value stream mapping to eliminate waste: a case study of an original equipment manufacturer for the automotive industry. Int. J. Prod. Res. 54(6), 1708–1720 (2016). https://doi.org/10.1080/00207543.2015.1055349
Mady, S.A., Arqawi, S.M., Al-Shobaki, M.J., Abu-Naser, S.S.: Lean manufacturing dimensions and its relationship in promoting the improvement of production processes in industrial companies. Int. J. Emerg. Technol. 11(3), 881–896 (2020)
Maunzagona, S.A., Telukdarie, A.: The impact of Lean on the mining industry: a simulation evaluation approach. INCOSE Int. Symp. 27(1), 965–981 (2017). https://doi.org/10.1002/j.2334-5837.2017.00406.x
Mazzocato, P., Savage, C., Brommels, M., Aronsson, H., Thor, J.: Lean thinking in healthcare: a realist review of the literature. BMJ Qual. Saf. 19(5), 376–382 (2020)
Melton, T.: To lean or not to lean? (that is the question). Chem. Eng. 759, 34–37 (2004)
Noto, G., Cosenz, F.: Introducing a strategic perspective in lean thinking applications through system dynamics modelling: the dynamic value stream map. Bus. Process. Manage. J. (2020). https://doi.org/10.1108/BPMJ-03-2020-0104
OSD: Küresel Değerlendirme Raporu [Global Assessment Report] (2020). http://www.osd.org.tr/osd-yayinlari/kuresel-otomotiv-sektoru-degerlendirme-raporlari/
Palak, P.S., Vivek, A.D., Hiren, R.K.: Value stream mapping: a case study of automotive industry. Int. J. Res. Eng. Technol. 03(01), 310–314 (2014). https://doi.org/10.15623/ijret.2014.0301055
Randhawa, J.S., Ahuja, I.S.: Evaluating impact of 5S implementation on business performance. Int. J. Product. Perform. Manage. 66(7), 948–978 (2017). https://doi.org/10.1108/IJPPM-08-2016-0154
Sanders, A., Elangeswaran, C., Wulfsberg, J.P.: Industry 4.0 implies lean manufacturing: research activities in Industry 4.0 function as enablers for lean manufacturing. J. Ind. Eng. Manage. (JIEM) 9(3), 811–833 (2016). https://doi.org/10.3926/jiem.1940
Saravanan, V., Nallusamy, S., Balaji, K.: Lead time reduction through execution of lean tool for productivity enhancement in small scale industries. Int. J. Eng. Res. Africa 34, 116–127 (2018). https://doi.org/10.4028/www.scientific.net/jera.34.116
Saravanan, V., Nallusamy, S., George, A.: Efficiency enhancement in a medium scale gearbox manufacturing company through different lean tools - a case study. Int. J. Eng. Res. Africa 34, 128–138 (2018). https://doi.org/10.4028/www.scientific.net/jera.34.128
Shan, H., Yuan, Y., Zhang, Y., Li, L., Wang, C.: Lean, simulation and optimization: the case of steering knuckle arm production line. In: 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) (2018). https://doi.org/10.1109/ieem.2018.8607821
Sundar, R., Balaji, A.N., Kumar, R.M.S.: A review on lean manufacturing implementation techniques. Procedia Eng. 97, 1875–1885 (2014). https://doi.org/10.1016/j.proeng.2014.12.341
Veres, C., Marian, L., Moica, S., Al-Akel, K.: Case study concerning 5S method impact in an automotive company. Procedia Manuf. 22, 900–905 (2018)
Villarreal, B., Garza-Reyes, J.A., Kumar, V.: A lean thinking and simulation-based approach for the improvement of routing operations. Ind. Manag. Data Syst. 116(5), 903–925 (2016). https://doi.org/10.1108/IMDS-09-2015-0385
Womack, J.P., Jones, D.T.: Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster, New York (1996)
Womack, J.P., Jones, D.T., Roos, D.: The Machine that Changed the World: The Story of Lean Production. HarperCollins Publishers, New York (1990)
Yadav, G., Kumar, A., Luthra, S., Garza-Reyes, J.A., Kumar, V., Batista, L.: A framework to achieve sustainability in manufacturing organisations of developing economies using industry 4.0 technologies' enablers. Comput. Ind. 122, 103280 (2020). https://doi.org/10.1016/j.compind.2020.103280
Yadav, G., Luthra, S., Huisingh, D., Mangla, S.K., Narkhede, B.E., Liu, Y.: Development of a lean manufacturing framework to enhance its adoption within manufacturing companies in developing economies. J. Cleaner Prod. 245, 118726 (2020)
Yang, M.G., Hong, P., Modi, S.B.: Impact of lean manufacturing and environmental management on business performance: an empirical study of manufacturing firms. Int. J. Prod. Econ. 129(2), 251–261 (2011). https://doi.org/10.1016/j.ijpe.2010.10.017

Eliminating the Barriers of Green Lean Practices with Thinking Processes

Semra Birgün1(B) and Atik Kulaklı2

1 Industrial Engineering Department, Doğuş University, Istanbul, Turkey
2 College of Business Administration, American University of the Middle East, Kuwait City, Kuwait
[email protected]

Abstract. The Green Lean (GL) approach aims to prevent environmental waste, ensure ecological balance and sustainability, and achieve this economically and effectively. Since the second half of the 20th century, Lean has been a management philosophy that businesses apply to increase their performance, raise customer satisfaction, and ensure sustainability and continuous development. It is known that applying Lean together with the Green approach for a sustainable economy creates synergy in protecting the world's scarce resources and provides greater practical benefit to the environment and humanity. However, GL applications may not always give the desired effect because of the obstacles encountered. Many authors have investigated the factors affecting the success of GL and made recommendations. In this study, a different approach, the Theory of Constraints Thinking Process, was applied to reach the root causes of these obstacles; the undesirable effects they create were examined, and solutions were presented to overcome them. With the proliferation of successful GL applications, it is expected that pollution in the world will decrease, production will become more efficient and economical, scarce resources will be used properly, and thus a sustainable green economy will be achieved.

Keywords: Barriers of green lean · Green Lean · Lean · Sustainability · Theory of Constraints · Thinking process

1 Introduction

With the rapid development of industrialization and technology, the damage to the environment is increasing. While the consumption of scarce resources continues, environmental wastes cause significant damage to nature. Gas, liquid, and solid wastes are everywhere, and some countries are becoming dumps for technological waste. The mobilization that started in Germany in the 1980s to protect nature, ensure environmental sustainability, and improve living and working conditions has now become planned and widespread through regulations, standards, and laws. Applying these sanctions only in production processes is not enough to avoid harming the environment and to ensure sustainability. It is necessary to take environmental protective

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 372–383, 2022. https://doi.org/10.1007/978-3-030-90421-0_31


measures throughout production, starting from the design stage through to the usage of the product [1]. The material and product function chosen in the design phase should not harm the environment, and the production processes should be designed accordingly. Taguchi defines quality in terms of "the least harm caused by a certain product/service after shipment" [2]. Therefore, companies should create and implement product and process designs that cause no harm, or minimal harm, to society and the environment. Since the world we live in is unique, protecting the environment should be understood as sustaining humanity. It is the responsibility of "green management" to ensure that both the product/service and the production systems do not harm the environment and society [2]. Green management tries to ensure environmental sustainability through legal sanctions. It regulates activities such as minimizing environmental waste, reducing environmental impact, green product development, and human-resources awareness training. Green production minimizes pollution, reduces waste, slows the depletion of natural resources, and optimizes the use of raw materials and energy; it is an approach that supports the production of renewable products/services that do not harm the environment. According to [3], green production has become a competitive element for companies owing to the demand for ecolabels. Competition driven by environmental risks, global awareness, and efficiency makes it necessary to be greener and more productive. Some companies have adopted the lean philosophy to reduce waste on an operational basis and improve sustainability. The lean concept arose from Toyota's struggle for survival after the Second World War and has found widespread application as a means of improving systems, reducing costs, and increasing efficiency and customer satisfaction in developed and developing countries [4].
The lean philosophy allows managers to look at the system through the customers' eyes and to improve system productivity and customer satisfaction by removing non-value-adding activities [5]. The goal of lean management is to eliminate waste and coordinate all associated tasks [4]. Taiichi Ohno [6], known as the father of the Toyota Production System, defined "waste" as any human activity that absorbs resources but creates no value; performing a wasteful activity adds no value but incurs cost. Ohno categorized seven types of waste, namely "overproduction", "waiting time", "transport", "processing time", "unnecessary inventory", "unnecessary motion", and "defect production" [7]. In addition to the seven wastes, [8] identified "underutilized people" as the eighth waste. In this philosophy, continuous improvement is the way to perfection. By applying the lean approach, companies can decrease waste, increase the efficiency and quality of their production systems, and maintain their competitive power.

2 Green Lean

While Green and Lean were described as "parallel universes" by some authors [9], [10] defined Green as the dissemination of Lean to the public in terms of waste reduction and pollution prevention [11]. Many authors [11–14] state that lean practices are a catalyst for achieving Green goals and that together they create synergies. The benefit obtained from the mutual interaction of both approaches is expected to be greater than the benefit obtained by applying each independently. Therefore, it is mainly


recommended to apply Green and Lean simultaneously. Through its systematic focus on increasing added value for customers by eliminating waste, Lean leads to a significant improvement in the environmental performance of organizations [10]. [11] identified green manufacturing as a natural extension of lean manufacturing and stated that lean is green in line with lean principles without any extra effort, because Lean creates a naturally sustainable and waste-free environment by using continuous improvement to achieve excellence. [15] developed a Green Lean methodology based on the principle of waste prevention to achieve continuous incremental improvement in seven steps. [11] provided a guideline for greening lean supply chains [16]. A short supply chain is naturally green; as its length and distances increase, greening becomes more difficult. A chain may not be green even if it is lean, and conflicts between the green and lean approaches are possible. Research has shown that the environmental impact is more substantial when Green and Lean are applied together. When companies evaluate their processes, they add green metrics to their excellence criteria using lean techniques and tools. [17] developed a lean and green model; it stabilizes the value stream (VS) with lean techniques and measures the environmental VS by defining environmental aspects and impacts. Metrics include energy, water, metallic and contaminated waste, other waste, oils and chemicals, and effluents. The next step is to improve the environmental VSs and ensure continuous improvement. According to [18], "Green manufacturing drives Lean results, particularly improved cost performance." Green techniques regard the environment as a constraint on developing and manufacturing products and services, whereas Lean approaches see it as a crucial resource. While lean practices reduce waste, they can sometimes involve practices that harm the environment, such as frequent deliveries.
In research with 17 manufacturers, [19] emphasized that lean practices alone cannot be sufficient for all environmental problems; this shows the existence of a potential conflict between lean principles and the objectives of environmentally friendly practices [3]. Lean and Green supply chain strategies coincide in their requirements for external audit and continuous review, their need for efficient systems to reduce the production of unwanted by-products, and their substantial impact on functional processes throughout the supply chain. On the contrary, [20] stated that it is difficult for companies to take advantage of the environmental benefits that come with lean, that the implementation of environmental initiatives can be time-consuming, and that changing technology to make processes and products more environmentally friendly requires a significant upfront investment [11]. For Lean, waste is defined by Ohno's classification, while for Green it is the inefficient use of resources and non-product output such as scrap and emissions. Furthermore, [21] identified a ninth waste, covering unnecessary or excessive resources and harmful substances released into the environment [22]. Although the goals of Green and Lean are different, both aim to eliminate inventory, transportation, by-products, and all other unnecessary outputs, namely waste [3]. Inventory is the most significant source of waste for lean and causes much further waste: it requires indoor and outdoor warehousing, internal and external transportation, energy consumption, and gas emissions, and brings a loss of opportunity (money tied up as capital) and a cost burden. For this reason, reducing unnecessary materials and therefore


inventory creates a beneficial effect both in terms of protecting natural resources and reducing waste. By reducing or eliminating the seven wastes, energy use, emissions, and space requirements are reduced, contributing to green management [2]. Many researchers have described green and lean as providing environmental benefits and contributions together. However, there are also publications [23, 24] that mention many obstacles to implementing green and lean together. As with every new project, the support of top management is the essential factor in the implementation of Green Lean, and its absence is the biggest obstacle; the work of [25] has demonstrated this. Beyond this, many authors have researched the subject and revealed the following barriers to success. [24] identified 15 barriers to Green Lean implementation: lack of environmental awareness; fear of failure; poor quality of human resources; lack of expertise, training, and education; fund constraints; lack of statistical Lean and Green thinking; inappropriate identification of areas and activities to be 'leaned and greened' and an unreliable data-collection system; lack of Kaizen culture; lack of visual and statistical control during Green Lean implementation; lack of government support to integrate green practices; high cost; lack of communication and cooperation between departments [26]; lack of top-management involvement in adopting the Green Lean initiative; resistance to change; and a poor corporate culture separating environmental and continuous-improvement decisions.
[27] surveyed the construction sector and identified the following obstacles to Green Lean success: lack of professionals in the Lean and Green areas, lack of knowledge about the integrated Lean Green approach, misconceptions, limited availability of green materials and products, inadequate knowledge of green technologies, lack of availability and reliability of green suppliers and sub-contractors, lack of top-management commitment, inadequate government commitment, employee resistance to change, unsupportive project organizational structures, the high initial cost of green construction, increased pressure on laborers for quality achievement, and attitudinal barriers. [25] obtained an initial list of 57 barriers from a literature review and grouped them into twelve: lack of environmental knowledge, lack of top-management commitment, resistance/fear of change, financial restrictions, lack of training of employees, insufficient government support, technological constraints, absence of good communication, lack of dedicated suppliers, absence of sound planning systems, lack of awareness about potential benefits, and lack of positive culture. Using the DEMATEL method, the authors identified "resistance to change", "lack of top management commitment", and "lack of training to employees" as the most prominent barriers. Studies show that many factors affect the success of Green Lean applications. At the same time, polluted and destroyed nature is pushing companies more and more toward green applications. Although the laws impose sanctions, the success of companies in this regard will significantly affect environmental sustainability, and the lean approach makes these efforts more effective by creating synergy. Managers therefore have to increase the success of Green Lean applications.
This study aims to reach solutions by using the Theory of Constraints (TOC) Thinking Process (TP) to overcome the obstacles facing Green Lean and thereby benefit managers. The benefits of using both


approaches together are described above. The following section describes the Thinking Process application.

3 Ways to Increase Success in Lean Green: Application of Thinking Processes

This section explains the TOC-TP application, which identifies and eliminates the root causes of the success barriers found in the literature. Eliyahu Goldratt invented the TOC management philosophy, which aims to improve systems by managing constraints. The thinking process is used in performance improvement and is a valuable tool for revealing connections between elements that appear unrelated or separate [28]. Goldratt likens the system to a chain and states that the chain can only be as strong as its weakest link; this weak link is therefore a constraint for the system, and every constraint is an opportunity for the system to evolve [29]. TOC uses the Logical Thinking Process (TP) to find constraints and improve the system. TP includes examining the constraints (the root causes) that limit the system's performance, suggesting solutions, finding the prerequisites for the solutions, and eliminating the difficulties encountered during implementation [30]. TP uses the Current Reality Tree (CRT), Conflict Resolution Diagram (CRD) / Evaporating Cloud (EC), Future Reality Tree (FRT), Prerequisite Tree (PRT), and Transition Tree (TT) [31]. These logic trees form a logic map that shows where something went wrong, how the issue should be handled, and what exactly needs to happen to drive profitable growth [29]. Each logical tree presents the cause-effect relationships within the analyzed problem with diagrams and figures. When the Logical TP is applied, three questions are answered: "What to change?", "What to change to?", and "How to cause the change?". Correct and effective use of the Logical TP provides inescapable access to root causes and practical solutions to root problems [28].

3.1 Current State Analysis

A CRT was arranged (Fig. 1) as the first step to determine the root causes of the factors that lead to failure.
The CRT also answers the “What to change?” question; it is used to determine the system’s current state and to identify the main problems encountered [29]. According to the studies reviewed, lack of top management support, financial constraints, lack of government support, and resistance to change can be considered influential failure factors. While constructing the CRT, the aim was to find the root causes of these factors that affect success, in other words, the undesirable effects; taking the other factors mentioned into account, poor human resources management and operational failure were also examined. As a result of the CRT evaluation, insufficient government support, financial constraints, inadequate HR management, inadequate operational management, an unreliable data collection system, a lack of green suppliers, and limited green products/materials were determined to be the root causes leading to failure. Thus, the “What to change?” question is answered. The purpose of finding root causes is that all or most of the undesirable effects they cause can be eliminated by eliminating these factors. The next step is “What to change to?”
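The cause-effect logic of a CRT can be treated as a directed graph of cause-to-effect links, in which root causes are simply the nodes that never appear as an effect. A minimal sketch in Python (the node names are hypothetical illustrations, not the paper's actual CRT of Fig. 1):

```python
# Illustrative sketch: a Current Reality Tree as (cause, effect) links.
# Node labels below are hypothetical examples for demonstration only.

def find_root_causes(edges):
    """Given (cause, effect) pairs, return causes with no incoming edge."""
    causes = {c for c, _ in edges}
    effects = {e for _, e in edges}
    return sorted(causes - effects)

crt_edges = [
    ("inadequate HR management", "resistance to change"),
    ("financial constraints", "resistance to change"),
    ("insufficient government support", "financial constraints"),
    ("resistance to change", "Green Lean failure"),
]

print(find_root_causes(crt_edges))
# roots here: 'inadequate HR management' and 'insufficient government support'
```

This mirrors the CRT reading order: undesirable effects sit at the top of the tree, and tracing edges backwards until no predecessor remains yields the root causes to be evaporated in the next step.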

Eliminating the Barriers of Green Lean Practices

377

Fig. 1. Current reality tree for GL executions

The EC and FRT are then used to eliminate the undesirable effects and root causes. Based on the observation that chronic organizational problems arise from conflicts [32], ECs were prepared and solutions were sought to eliminate the failure factors.

3.2 Root Cause Solutions with ECs

Regarding the root cause of resistance to change, inadequate HR management (HRM) is the source of all the effects that create this undesirable effect. The limited vision of HRM causes mistakes and deficiencies in both workforce selection and career management; a company culture open to change cannot be created; the GL training provided is insufficient; because Green and Lean awareness cannot be instilled adequately, employees’ fears and concerns outweigh the change; the potential benefits of GL are not recognized; and when financial difficulties are added, resistance to change emerges. This factor can be overcome with GL-conscious and robust HR management. Alternatively, this obstacle can be overcome by educating, raising the awareness of, and motivating employees in GL with professional support from external sources. Figure 2 presents these two alternative solutions for overcoming resistance to change.

Inadequate operational management will have adverse effects on the selection, acquisition, use, and sustainability of both technology and materials and will cause operational difficulties. Production costs will be high because of the low efficiency that arises from not fully establishing Lean and not fully understanding the requirements of Green. There will be problems in communication and cooperation between units due to incompetent management. For success, operational management should be strengthened to overcome these problems; to eliminate them, the conflicts in Fig. 3 should be considered.

378

S. Birgün and A. Kulaklı

Fig. 2. Eliminating resistance to change

Fig. 3. Enhance successful production management

If the existing management team is to be preserved, the personnel should be trained and strengthened in management with the support of consultants. Another solution is to change the staff and continue with experts experienced in GL. It is also recommended that a cost-benefit analysis be carried out on this subject.

In today’s digital world, accessing and using correct data is of great importance for the security and sustainability of the company. Reliable information systems are essential for data security. A single mistake leads to wrong decisions, wrong plans, and wrong practices. Such a situation creates many negative outcomes, extending even to the company’s loss of reputation or


collapse due to unsuitable investments. In order to avoid this situation, either the existing system should be improved by obtaining external support and necessary investments, or a completely reliable new system should be established (Fig. 4).

Fig. 4. Providing sustainable management with real and accurate data

One of the root causes of failure is a lack of government support. Governments have essential duties in spreading green practices around the world, and most countries enforce sanctions through regulations and standards. However, although governments impose sanctions for Green, this is not the case for Lean. For this reason, laws and regulations that encourage simplification and the spread of Green Lean applications should be the government’s responsibility; they are necessary for a sustainable world and for economically viable enterprises. Not only sanctions but also financial incentives from the government would be excellent motivation for GL practices. The government should implement various programs and policies to raise awareness among all stakeholders of the environmental and potential economic benefits of the Green Lean approach. Non-governmental organizations are also expected to fulfill their duties in this regard. Although the solution related to the government is not within the firm’s responsibility, alternative solutions are presented in Fig. 5 as an opinion.

Working with green suppliers is an essential factor, especially for the sustainability of green management. The supplier is also an essential parameter for Lean, which advocates working with fewer suppliers for more successful procurement and lower production costs. Lean considers suppliers a unit of the company itself and supports their technical, financial, time, and quality development so that they work toward its purposes; relationships with suppliers are long-term. From a GL perspective, working with dedicated green suppliers will positively affect success.

Naturally, the scarcity of green materials and the limited availability of green products, since green-managed companies have not yet become widespread, put GL managers in a difficult situation. Scarce resources led to the emergence of production management, and production managers have to use scarce resources and manage their processes


Fig. 5. Providing government support

most effectively in order to provide the products their customers need. Therefore, this is a factor in operational failure and causes GL failure. It can be expected that an increase in state sanctions will spread green production and thus contribute to the solution of this problem.

3.3 Solution Suggestions with FRT

The final stage of this study is to create the FRT [33, 34], a structure for showing how the changes decided on in the current situation will contribute to the desired outcomes, and the cause-effect relationships between the changes to be made in the existing system and the consequences that might occur. The FRT allows the picture of a strategy, vision, or plan for an organization to be seen [28]. The root causes identified by the CRT were evaporated with ECs in the second stage, and the FRT was designed (Fig. 6).

The success of HRM is ensured by strengthening the existing staff with external support. With successful HRM practices, it is possible to work with a qualified workforce by creating educated, GL-conscious, and open-minded employees. HRM training also contributes to improving operational success. Failure in operational management can be resolved through consulting and training. Management by objectives and making the right decisions also ensure working with dedicated green suppliers.

Most countries have government sanctions and government support for green practices. Governments have regulations on the green economy and exert pressure on the implementation of environmental standards. It is expected that government incentives for Green will play an essential role in the deployment of green practices, and that governments will produce policies and strategies to deploy the Lean philosophy, which is green and provides excellent economic benefits throughout the business and supply


Fig. 6. Future reality tree

chains. However, it can be said that the two approaches, which create synergy together, will deliver more significant benefits. Successful and sustainable GL applications will be possible through the realization of all these proposed solutions.

4 Conclusions

Lean philosophy, which has been applied to production systems since the second half of the 20th century, has enabled businesses to work more efficiently with fewer resources, both reducing production costs and increasing customer satisfaction. In lean systems, where everything that does not add value on the path to excellence is considered waste, unnecessary resource consumption is also avoided. Today, lean principles find application in every sector in order to increase company performance and reputation worldwide.

In the last few decades, while governments were producing green policies aimed at protecting natural resources and the environment, businesses started to act with green-environment and sustainability awareness. The green approach deals with outputs and inputs and gives importance to product designs that do not harm the environment. When applied alone, Lean and Green each provide significant savings for companies. By applying both together, however, it is possible to combine the benefits of these two approaches, each successful in its own right; thus, more economical, greener, and more sustainable environments are created. Some difficulties may be encountered in implementing this approach, called Green Lean, and sometimes it may not produce the expected effect.

This study investigated how the factors leading to the failure of enterprises can be eliminated, considering the barriers previously published by researchers. By applying TP, the barriers to success and their effects were examined, and ways to eliminate them were suggested. Notably, essential obstacles such as “resistance to change” and “lack of support from top management” in the literature


arise from the failure of human resources management. With greater government awareness of GL, increased support and incentives for green practices will eliminate many negative factors. In short, the success of GL depends mainly on accurate and professional managers, serious training programs, and government support. Successful GL management, through waste reduction, lower resource consumption, and renewable product designs, will make our world cleaner and more sustainable.

References

1. Kulaklı, A.: Yeni ürün geliştirme sürecinde bilgi paylaşımının önemi ve değer yaratılmasına olan katkıları. V. Üretim Araştırmaları Sempozyumu, İstanbul Ticaret Üniversitesi, İstanbul, Türkiye, 265–271 (2005)
2. Birgün, S., Kulaklı, A.: Yalın Felsefe ve Yeşil Yönetim. In: Mert, G., Tekin, M. (eds.) Yeşil Yönetim. Nobel (2021)
3. Prasad, S., Sharma, S.K.: Lean and green manufacturing: concept and its implementation in operations management. Int. J. Adv. Mech. Eng. 4(5), 2250–3234 (2014)
4. Birgün, S., Kulaklı, A.: Scientific publication analysis on lean management in healthcare sector: the period of 2010–2019. İstanbul Ticaret Üniversitesi Sosyal Bilimler Dergisi, Prof. Dr. Sabri Orman Özel Sayısı, 478–500 (2020)
5. Birgün, S.: A hybrid approach for process improvement. In: Proceedings of the ISMC 2019 15th International Strategic Management Conference, pp. 417–429 (2019). ISBN 978-605-81347-1-3
6. Ohno, T.: Toyota Production System: Beyond Large-Scale Production. CRC Press (1988)
7. Birgün, S., Gülen, K.G.: Key value stream approach for increasing the effectiveness of business processes. AURUM J. Eng. Syst. Architecture 4(2), 201–223 (2020)
8. Keyte, B., Locher, D.: The Complete Lean Enterprise: Value Stream Mapping for Administrative and Office Processes. Productivity Press, New York (2004)
9. Larson, T., Greenwood, R.: Perfect complements: synergies between lean production and eco-sustainability initiatives. Environ. Qual. Manage. 13(4), 27–36 (2004)
10. King, A., Lenox, M.: Lean and green: an empirical examination of the relationship between lean production and environmental performance. Prod. Oper. Manage. 10, 244–256 (2001)
11. Dües, C.M., Tan, K.H., Lim, M.: Green as the new lean: how to use lean practices as a catalyst to greening your supply chain. J. Clean. Prod. (2011). https://doi.org/10.1016/j.jclepro.2011.12.023
12. Hallam, C., Contreras, C.: Integrating lean and green management. Manag. Decision 54(9), 2157–2187 (2016). https://doi.org/10.1108/MD-04-2016-0259
13. Duarte, S., Cruz-Machado, V.: Modeling lean and green: a review from business models. Int. J. Lean Six Sigma 4(3), 228–250 (2013)
14. Galeazzo, A., Furlan, A., Vinelli, A.: Lean and green in action: interdependencies and performance of pollution prevention projects. J. Clean. Prod. 85(1), 191–200 (2014)
15. Posteucă, A.: Green lean methodology: enterprise energy management for industrial companies. Annals of the Academy of Romanian Scientists, Series on Engineering Sciences 5(1), 71–84 (2013). ISSN 2066-8570
16. Venkat, K., Wakeland, W.: Is lean necessarily green? In: Proceedings of the 50th Annual Meeting of the ISSS (2006). http://www.cleanmetrics.com/pages/ISSS06-IsLeanNecessarilyGreen.pdf. Accessed 25 July 2011
17. Pampanelli, A.B., Found, P., Bernardes, A.M.: A lean & green model for a production cell. J. Clean. Prod. 85, 19–35 (2014)


18. Bergmiller, G.G., McCright, P.R.: Are lean and green programs synergistic? In: Industrial Engineering Research Conference, Miami, Florida, USA, May 30–June 3 (2009)
19. Rothenberg, S., Pil, F.K., Maxwell, J.: Lean, green, and the quest for superior environmental performance. Prod. Oper. Manage. 10(3), 228–243 (2001)
20. Mollenkopf, D., Stolze, H., Tate, W.L., Ueltschy, M.: Green, lean, and global supply chains. Int. J. Phys. Distribution Logistics Manage. 40(1–2), 14–41 (2010)
21. Vinodh, S., Arvind, K.R., Somanaathan, M.: Tools and techniques for enabling sustainability through lean initiatives. Clean Technol. Environ. Policy 13(3), 469–479 (2011). https://doi.org/10.1007/s10098-010-0329-x
22. Green, K.W., Inman, R.A., Sower, V.E., Zelbst, P.J.: Impact of JIT, TQM and green supply chain practices on environmental sustainability. J. Manuf. Technol. Manag. 30(1), 26–47 (2019). https://doi.org/10.1108/JMTM-01-2018-0015
23. Kumar, R.B.R., Agarwal, A., Sharma, M.K.: Lean management: a step towards sustainable green supply chain. Competitiveness Rev. 26(3), 311–331 (2016)
24. Cherrafi, A., Elfezazi, S., Garza-Reyes, J.A., Benhida, K., Mokhlis, A.: Barriers in green lean implementation: a combined systematic literature review and interpretive structural modelling approach. Prod. Plan. Control (2017). https://doi.org/10.1080/09537287.2017.1324184
25. Singh, C., Singh, D., Khamba, J.S.: Analyzing barriers of green lean practices in manufacturing industries by DEMATEL approach. J. Manuf. Technol. Manage. 32(1), 176–198 (2021). https://doi.org/10.1108/JMTM-02-2020-0053
26. Chatterjee, A., Kulakli, A.: A study on the impact of communication system on interpersonal conflict. Procedia Soc. Behav. Sci. 210, 320–329 (2015)
27. Pandithawatta, T.P.W.S.I., Zainudeen, N., Perera, C.S.R.: An integrated approach of lean-green construction: Sri Lankan perspective. Built Environ. Project Asset Manage. 10(2), 200–214 (2020). https://doi.org/10.1108/BEPAM-12-2018-0153
28. Birgün, S., Altan, Z.: A managerial perspective for the software development process: achieving software product quality by the theory of constraints. In: Bolat, H.B., Temur, G.T. (eds.) Agile Approaches for Successfully Managing and Executing Projects in the Fourth Industrial Revolution, Advances in Logistics, Operations, and Management Science (ALOMS) Book Series, pp. 243–266. IGI Global (2019). ISSN 2327-350X
29. Yumurtacı, B., Onursal, F.S.: Theory of constraints-thinking processes and analysis of reasons of individual separation from individual pension system. J. Business Res.-Turk 11(4), 3269–3282 (2019)
30. Birgün, S., Öztepe, T., Şimşit, Z.T.: Bir Çağrı Merkezinde Müşteri Şikayetlerinin Düşünce Süreçleri ile Değerlendirilmesi. In: Proceedings of XIth Symposium for Production Research, pp. 265–275 (2011). https://doi.org/10.1016/j.sbspro.2015.11.372
31. Dettmer, W.H.: Goldratt's Theory of Constraints: A Systems Approach to Continuous Improvement. American Society for Quality Press (1997)
32. Gupta, M., Boyd, L., Kuzmits, F.: The evaporating cloud: a tool for resolving workplace conflict. Int. J. Conflict Manage. 22(4), 394–412 (2011). https://doi.org/10.1108/10444061111171387
33. Kim, S., Mabin, V.J., Davies, J.: The theory of constraints thinking processes: retrospect and prospect. Int. J. Oper. Prod. Manage. 28(2), 155–184 (2008)
34. Dalcı, I., Kosan, L.: Theory of constraints thinking-process tools facilitate goal achievement for hotel management: a case study of improving customer satisfaction. J. Hospitality Mark. Manage. 21(5), 541–568 (2012)

Miscellaneous Topics

Competency Gap Identification Through Customized I4.0 Education Scale

Murat Kocamaz1, U. Gökay Çiçekli1, Aydın Koçak1, Haluk Soyuer1, Jorge Martin Bauer2, Gökçen Baş2, Numan M. Durakbasa2, Yunus Kaymaz3, Fatma Demircan Keskin1, İnanç Kabasakal1(B), Erol Güçlü2, and Ece Soyuer2

1 Department of Business Administration, Ege University, Bornova, Turkey

{murat.kocamaz,gokay.cicekli,aydin.kocak,haluk.soyuer,fatma.demircan.keskin,inanc.kabasakal}@ege.edu.tr
2 Institute of Production Engineering and Photonic Technologies, TU Wien, Vienna, Austria
{j.bauer,goekcen.bas,numan.durakbasa,erol.gueclue}@tuwien.ac.at, [email protected]
3 Department of Logistics Management, Iskenderun Technical University, İskenderun, Turkey
[email protected]

Abstract. The key competencies and knowledge of employees in a new era of production systems based on the concept of Industry 4.0 are becoming increasingly important for business and education worldwide. The intensive interaction of intelligent production systems and people in the context of Industry 4.0 is the future of an agile, flexible, environmentally friendly, safe, and efficient working environment. The key to this lies in the education and training of people in such advanced production systems. One of the most important prerequisites for companies to succeed in Industry 4.0 implementation and to sustain their competitiveness and innovation is a skilled workforce aligned with Industry 4.0. To provide this prerequisite, it is necessary to determine which competencies Industry 4.0 requires in businesses, identify the competency gaps, and create custom education plans to close these deficiencies. From this point of view, the CEPI4.0 project, funded under the Erasmus+ Programme, aims to provide a customized training plan to close Industry 4.0 competency gaps in order to support competitiveness and innovation in companies. This paper presents the need analysis that has been carried out based on the data collected so far from participants in Turkey and Austria, with the details of the open-ended questions directed to the participants and the analysis findings. Moreover, the main dimensions and sub-dimensions of the Industry 4.0 competence scale being developed are also presented.

Keywords: Industry 4.0 · Workforce · Competency gap · Vocational education and training · Competence assessment

1 Introduction

In the context of rapidly increasing digitalisation, the concept of “Industry 4.0” has gained a lot of importance in recent years. This concept is attracting more and more

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 387–400, 2022. https://doi.org/10.1007/978-3-030-90421-0_32


attention in all branches of industry. Digital integration forms the basis for all further steps and is implemented on the basis of various aspects such as production process, hardware, software, technology, engineering, and management. Advancing technological developments such as networking, machine-to-machine communication, artificial intelligence (AI), cyber-physical systems (CPS), the Internet of Things (IoT), big data, and digital twins mean a significant increase in the efficiency and optimisation of production systems. In this way, greater individualisation towards customised customer solutions is made possible. Integrated systems in developing high-technology tools and methodology will help achieve safe, efficient, and cost-effective solutions by using advanced technology methods as part of a next-generation factory strategy. With Industry 4.0, conventional quality control shifts to production quality in order to meet the demand for small-lot, multi-product, customized production of higher quality. These new models can be developed on the basis of intelligent production technologies and integrated management systems, as well as the extensive use of IT technologies, parallel-processing computing, and advanced engineering data exchange techniques, to provide flexible and adaptive production processes that assist machines and people in context-aware manners via modern computing technology.

Today, the notion of Industry 4.0, which presents a paradigm shift for countries and geographical regions from a macro perspective, and for industries, businesses, and the workforce from a micro perspective, is realizing a digital transformation in many areas. The digital reflection of the internet in value chains, with real-time cooperation between supply, production, and business processes, is made possible by the concept of Industry 4.0.
The concept of Industry 4.0, which has become crucial for countries and geographical regions, appears under different names in different regions, although it is of German origin. Whatever the name, the concept marks areas where technological transformation and technological breakthroughs can be achieved. For industries and businesses, it brings improvements, productivity increases, and new ways of doing business. It can easily be noted that this digital transformation has brought change not only on the industry side but also in terms of the workforce. In this sense, businesses assess their employees not only on certain technological competencies but also on how well they can adapt to this digital transformation.

The CEPI4.0 Project [1], entitled “A Customized Education Plan Based on Industry 4.0 Competency Gaps”, aims to explore gaps in competencies towards the Industry 4.0 landscape in businesses and to present a custom education plan based on those deficiencies. The project is funded by the EU within the Erasmus+ Programme (KA202, Strategic Partnership for Vocational Education and Training). There are four partners collaborating on the project: Ege University, TU Wien, Universitatea Tehnica Cluj-Napoca, and Slovenska Technicka Univerzita v Bratislave. As one of the first steps of project implementation, the project group recently conducted a need analysis to collect data towards designing a scale that helps to design custom education plans. This study presents the need analysis and its findings, as well as the scale being developed


for the CEPI4.0 Project. The need analysis data examined in this study involve responses from participants in Turkey and Austria.

2 Competence Assessment in an Industry 4.0 Environment

Competence within the scope of Industry 4.0 can be defined as the level of knowledge a person has about Industry 4.0 technologies and the degree to which this person benefits from the advantages offered by these technologies. Determining the competences required for a job, as well as assessing them in employees, are both challenges in this context. A comprehensive approach to competency measurement in manufacturing can involve assessing competencies organised into a taxonomy with multiple categories covering the functional, technical, and cognitive aspects required for tasks [2].

In the literature on Industry 4.0 readiness and maturity, [3] examined the technical skills required for Industry 4.0, from production management to information technology, and tried to draw inferences about the future needs of companies along with the current skill requirements of the manufacturing environment in the context of Industry 4.0. [4] provided a definition of the competencies and qualifications of future engineers, underlining that Industry 4.0 skill development methods should be designed taking into account, and meeting, the needs of new-generation workers and designers. [5] examined which competencies are important in the Industry 4.0 transformation process through literature analysis and focus groups; their findings indicated that the majority of the competency list consisted of behavioral competencies, alongside process management, data analysis, machine learning, and IT-related competencies. [6] indicated that the identification of necessary competences requires a properly assessed workforce, and classified competences into technical, methodological, social, and personal competences.
Furthermore, workforce qualification and competence development continue as a gradually improving cycle; the main difference between competence and qualification is that competence development focuses on determining the necessary requirements and exposing the gaps, while qualification aims to close those gaps. [7] tried to determine the gap between the actual and expected skill sets of master’s students and identified a list of topics such as quality management, maintenance management, data analytics, and logistics and supply chain. Moreover, the underlying technologies of Industry 4.0 promise self-aware and automated systems capable of implementing strategies that utilize predictive maintenance, prognostics, and health management in manufacturing [8]. [9] developed an approach for intelligent metrology applications, which play an important role in assuring quality in smart manufacturing systems; their approach was based on a measurement system that can be remotely controlled, is highly accurate, and is applicable in smart manufacturing systems. [10] underlined the importance of virtual learning environments in the Industry 4.0 context, which is also significant for the distant working spaces of the future workforce. [11] highlighted the necessity of collaboration between universities and industry, which will eventually become an imperative for I4.0 education


and they emphasized the significance of remote operations and presence for education. As the authors indicated, tele-operations will not hinder the effectiveness of the resources while sustaining the environmentally friendly aims of I4.0. [12] tried to determine the job roles in businesses and introduced a four-phase framework for the successful application and determination of workforce skill qualification. Knowledge, skills, abilities, and other attributes can be referred to competency models, which are known to be crucial for jobs that need to be performed effectively [13, 14]. The competency model framework presented by the U.S. Employment and Training Administration [15] has nine levels and depicts the skills, knowledge, and abilities necessary for successful performance. [16] highlighted that, for different I4.0 attributes, different skill sets will be applicable to different workforce segments.

Competence assessment can also be described as a part of organizational change and continuous improvement. The International Organization for Standardization has issued several standards to enhance formal (ISO 21001:2018 [17]) and non-formal (ISO 29993:2017 [18]) education institutions, and has issued the ISO/IEC 40180:2017 standard [19], which presents a reference framework for quality assurance, quality management, and quality improvement. Another reference document relevant to our project is the ISO 30401 standard [20], which sets requirements and guidelines for establishing knowledge management systems in various types of organizations.

As a part of the CEPI4.0 project, our study presents preliminary steps towards competence assessment. These steps involve designing a need analysis form as the data collection tool, collecting qualitative data, and examining the responses to identify the dimensions and subdimensions of the assessment tool. The methodological approach followed in the project is detailed in the next section. Moreover, a quick overview of our analysis was found to be compatible with a prior study [21] conducted primarily with the participation of our project team. The need analysis findings, as well as the dimensions and subdimensions of the scale, are presented in the findings section.

3 Methodology

In this study, several open-ended questions were prepared and directed to participants in the target group via Google Forms. The responses were examined one by one for each question; the relevant keywords mentioned in the responses were marked and grouped by their frequencies. Details on data collection and the research questions are presented in this section.

3.1 Data Collection

The target group for the study includes students in higher education, professionals, graduates, and academic researchers. Accordingly, our approach to identifying the skills required throughout the Industry 4.0 transformation consisted of raising open-ended questions to this target group. Data were collected via Google Forms from participants in Turkey, Austria, Slovakia, and Romania. The following questions were directed to the participants who voluntarily took part in our study:


– In your opinion, which areas should be covered in an Industry 4.0 training?
– How would you prioritize the areas you mentioned? Could you explain the reasons?
– What are the fundamental skills essential for professionals working in the areas you mentioned?
– In your opinion, how should those trainings be organized (synchronous, asynchronous, face-to-face, with applications, etc.)?
– If you are involved in Industry 4.0 applications, what are the main issues that you deal with? Accordingly, could you provide potential solutions?
– Regarding the trainings on Industry 4.0 in your country, do you think they are sufficient? Can you evaluate those trainings and suggest solutions to cover their drawbacks, if any?

As a requirement of the Erasmus+ Project, the forms were prepared in the languages of the countries of the participating universities. Moreover, an English copy of the form was also provided so as not to overlook expats and foreign students who might be interested in the study. The form additionally assessed the respondents’ awareness of the following Industry 4.0 related concepts: Virtual/Augmented Reality, Rapid Prototyping, Additive Manufacturing, Cyber-Physical Systems, Internet of Things, Cloud Technology, Big Data, Artificial Intelligence, and Blockchain.
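The keyword marking and frequency grouping described above can be sketched as a simple tally over free-text responses. The keyword list, synonym mapping, and sample responses below are hypothetical illustrations, not the project's actual coding scheme:

```python
# Sketch of keyword tallying over open-ended responses.
# Keywords, synonyms, and responses are illustrative assumptions.
from collections import Counter

SYNONYMS = {"machine learning": "AI", "artificial intelligence": "AI"}
KEYWORDS = ["artificial intelligence", "machine learning", "iot", "big data"]

def tally_keywords(responses, keywords):
    """Count, per keyword, how many responses mention it (synonyms merged)."""
    counts = Counter()
    for text in responses:
        low = text.lower()
        for kw in keywords:
            if kw in low:
                counts[SYNONYMS.get(kw, kw)] += 1
    return counts

responses = [
    "Training should cover artificial intelligence and IoT.",
    "Machine learning and big data are essential.",
]
counts = tally_keywords(responses, KEYWORDS)
print(counts)
```

Merging synonyms at tally time mirrors the consolidation step applied later in the findings, where mentions of machine learning and artificial intelligence are counted under a single label.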

4 Findings of Need Analysis

4.1 Collected Data

The data collection form first asked for the occupation of the participants. Data from the participating countries were collected in separate forms. The following table summarizes the occupations of the participants; for this paper, the data involve only Austria and Turkey.

Table 1. Descriptive statistics for participants

Occupation                   Austria  Turkey  Total
Professional                 11       1       12
Both student & professional  4        8       12
Student                      8        1       9
Researcher                   1        0       1
Total                        24       10      34

As Table 1 shows, most of the participants (24 out of 34) are employed. The participants who declared their occupation as 'Professional' work in various sectors including IT, education, manufacturing, telecommunication, finance, insurance and automotive.

392

M. Kocamaz et al.

5 Responses Obtained

When the participants were asked which business areas or topics are most relevant to Industry 4.0 trainings, the responses were quite consistent. The majority of respondents underlined that manufacturing is the most relevant business area in this context, followed by IT and Supply Chain Management, mentioned by 8 and 6 participants, respectively. Another popular area, Human Resources, is especially relevant to the scope of this study, since our paper discusses the assessment of gaps and skills for Industry 4.0.

Table 2. Mostly highlighted areas in business management

Business Area   No of responses
Manufacturing   22
IT              8
SCM             6
HR              3
R&D             1
Finance         1
Other Services  1

In their responses to the same question, the participants also highlighted several Industry 4.0 technologies. The most frequently mentioned technology was Artificial Intelligence, which several participants cited along with machine learning; to prevent duplication, both options were consolidated as AI in the results. The frequency of those technologies is summarized in the table below:

Table 3. Mostly highlighted Industry 4.0 technologies

Industry 4.0 Technology    No of responses
Artificial Intelligence    6
Cyber-Physical Systems     4
Internet of Things         4
Cloud Technology           4
Virtual/Augmented Reality  2
Additive Manufacturing     2
Big Data                   1
Rapid Prototyping          1


In the last section of our data collection tool, the respondents were asked to rate how familiar they are with a list of Industry 4.0-related technologies. The results show that most respondents are highly aware of the concept of AI, with an average score of 4.71. Likewise, Additive Manufacturing, Big Data, Virtual Reality, Cloud Technology, Internet of Things, Blockchain, and Augmented Reality stand out as the Industry 4.0 technologies of which the respondents are most aware. Cross-validating both findings shows that the key technologies ranked highly in Table 4 overlap with those mentioned in the answers to the earlier open-ended question reported in Table 3.

Table 4. Participant awareness of Industry 4.0 technologies (average of scores ranging from 5 (highest) to 0 (lowest))

Industry 4.0 Technology               Awareness
Artificial Intelligence               4.71
Additive Manufacturing - 3D Printing  4.62
Big Data                              4.58
Virtual Reality                       4.44
Cloud Technology                      4.33
Internet of Things                    4.29
Blockchain                            4.25
Augmented Reality                     4.09
Cyber-Physical Systems                3.68
Rapid Prototyping                     3.35
Cognitive Computing                   3.00

Another critical question raised in the form concerned the fundamental skills essential for professionals. Despite the variety of skills mentioned in the responses, the most relevant and frequently cited ones are presented in Table 5. The needs analysis findings suggest that programming is the top skill for professionals in the Industry 4.0 landscape; data analysis and other analytical skills were also reported among the most critical skills in this context.

Table 5. Top skills towards Industry 4.0 mentioned in responses

Skill                          No of responses
Programming                    8
Data Analysis                  6
Analytical Skills              4
Creativity/Innovative Mindset  3
Problem Solving                3


In one of the questions, the respondents were asked to list the themes with high priority in an Industry 4.0-related education program. According to the responses, such programs should prioritize Production Management and related topics. The second most frequent theme was 'Human Resources/Training/Education', which highlights the importance of the human factor, along with 'Organizational Change', in the Industry 4.0 transformation. Supply Chain Management and Automation were also prioritized in the responses. Notably, all management areas of business appeared in the responses: Finance, Marketing, Supply Chain Management, Production Management, and Human Resources (Table 6).

Table 6. Themes mentioned according to the responses

Themes mentioned with high priority     No of responses
Production Management                   7
Human Resources/Training/Education      6
SCM                                     5
Automation                              4
AI/Machine Learning                     3
Mechanical Engineering                  3
Human-Machine Interaction               2
Quality                                 1
Data Processing                         1
Organizational Change                   1
Additive Manufacturing                  1
Smart Manufacturing                     1
Predictive Maintenance                  1
VR                                      1
Mechatronics                            1
Electrical and Electronics Engineering  1
Health                                  1
IoT                                     1
Marketing                               1
Finance                                 1
Information Technologies                1
Cloud & Cyber Security                  1
Software                                1


Among the high-priority themes, several respondents also listed digital technologies, including IoT, Artificial Intelligence/Machine Learning, Virtual Reality, Data Processing, Cloud & Cyber Security, Software, and Information Technologies as an umbrella term. Moreover, the responses also underline the priority of Production Management and several related topics, including Automation, Mechanical Engineering and Quality.

6 Scale Factors and Sub-Factors

Industry-oriented Industry 4.0 studies attract the attention of professionals and academia in numerous aspects. This concept, which occupies a significant place on the agenda of many countries, especially within the scope of national industrial and technology strategies, draws attention to talent development and analysis and to the qualifications of the workforce. At any given time, there may be differences between the expectations of businesses and the abilities of their employees. Although businesses try to narrow these differences through various activities, one factor in closing this gap is effectively designed training content and scales. [22] scrutinized this issue, emphasizing the difference between the abilities that are expected and those that are available; [23], on the other hand, stated that the complexity of these processes differentiates the skill sets needed by employees. In this context, an Industry 4.0 competence scale has been created based on the scale of [21]. The proposed scale, given in Table 7, includes five categories: 'Technology Enabled Quality Control', 'Virtual Product Development', 'Production Monitoring with Augmented Reality', 'Total Facility Maintenance and IoT', and 'Manufacturing Execution Systems for Operational Excellence'. There are 56 sub-dimensions in total under these five categories. Considering the participants' answers within the scope of this study, the participants especially focus on certain themes. Consistent with the study presented in [21], participants mostly focus on production, supply chain management, automation, artificial intelligence, human-machine interaction, and so forth. Furthermore, some of the mentioned themes that do not appear in Table 7, such as health, finance, and marketing, can be considered non-technical themes that warrant further analysis.

Table 7. Main dimensions and the sub-dimensions of competence scale (Adapted from [21])

Technology Enabled Quality Control
  Data Management: Predictive quality management; Intelligent systems; Process reliability; Customization; Product design and prototyping; Geometrical product specifications and verification; Predictive maintenance; Real-time monitoring; IT security
  Nano-Technology: Nanomaterials; Nanosensors and nano instruments; Nano fabrication; Nano manufacturing

Virtual Product Development
  Robotics in Inspection: Robotics
  Virtual Reality: Visibility/viewability; Ergonomics; Aesthetic quality
  3-D Printing: Complexity of geometry; 3D printing accuracy; Additive manufacturing

Production Monitoring with Augmented Reality
  IoT (1): Intelligent manufacturing; IoT-enabled manufacturing; Cloud manufacturing
  Augmented Reality: Main components (information system, interaction system, system for guided work); Remote maintenance support; AR-based simulation; Augmented assembly; AR assistance in robot programming and monitoring

Total Facility Maintenance and IoT
  Smart Factory: Smart machines, smart devices, smart manufacturing processes, smart engineering, and smart logistics; Energy management; Environmental management; Integrated management; Sustainability; Condition monitoring systems
  Digital Transformation & CPS: Cyber-physical systems; Digital transformation; Autonomous robots
  IoT (2): Predictive maintenance; Employee productivity
  Data Mining: Main categories (classification, estimation, segmentation, description/prediction); Failure prediction; Employee safety

Manufacturing Execution Systems for Operational Excellence
  Big Data: Main components (Volume, Variety, Velocity, Value); MapReduce; Hadoop; High-Performance Computing Cluster (HPCC)
  Business Intelligence: Structured, semi-structured and unstructured data; Real-time data warehousing; Automated anomaly and exception detection; Data mining; Data visualization; Geographic information systems
  Artificial Intelligence: Expert systems; Artificial neural networks; Fuzzy logic; Genetic algorithms; Hybrid systems


7 Conclusion and Further Studies

This paper discusses the effects that the Industry 4.0 transformation process has begun to have on enterprises' human resources. Today, Industry 4.0 has a fundamental role in the 'strategic road map' of many countries. The importance of education and training in increasing the availability of Industry 4.0 skills has been emphasized by the European Parliament Policy Department, and each year more countries carry out projects in this field. Determining the competencies related to critical technologies in selected leading sectors, and developing innovative training programs to meet these competencies, will be shaped by Industry 4.0 with the cooperation of all stakeholders. This study presents part of the needs analysis and the Industry 4.0 competence scale being developed in the early stages of the ongoing CEPI4.0 Erasmus+ project, which aims to develop a customized education plan to close Industry 4.0 competency gaps. In the needs analysis, participants were asked a series of open-ended questions that will guide the customized Industry 4.0 education plan, such as which areas should be included in an Industry 4.0 training, how they would prioritize these areas, and what fundamental skills professionals should have to work in these areas. In the participants' responses, manufacturing was the most relevant topic for Industry 4.0 training, followed by IT, Supply Chain Management, and Human Resources. In their responses on the topics that should be included in an Industry 4.0 training, the participants mentioned Industry 4.0 technologies as well as business topics. Accordingly, Artificial Intelligence, Cyber-Physical Systems, Internet of Things, and Cloud Technology were among the Industry 4.0 technologies most mentioned by the respondents.

According to the participants, the most prominent fundamental skills for Industry 4.0 were programming, data analysis, analytical skills, creativity/innovative mindset, and problem-solving. In one of the critical questions for the design of the custom Industry 4.0 education plan, participants were asked to list high-priority themes in an Industry 4.0 education program. While all management areas of business were among the high-priority themes in the responses, the top-priority theme was Production Management, followed by Human Resources/Training/Education and Supply Chain Management. The themes on which the participants' responses focus are quite consistent with the previous Industry 4.0 scale presented in the study of [21], carried out by most of the team members of this project. In the later stages of the project, it is planned to assess the competence levels of individuals using this scale and to present customized training plans to individuals according to their levels in the dimensions of the scale.

Acknowledgement. This study partially presents the research conducted for the project 'CEPI4.0: A Customized Education Plan Based on Industry 4.0 Competency Gaps' (Project Identifier: 2019-1-TR01-KA202-077366) funded by the EU Commission within the ERASMUS+ Programme (KA202 - Strategic Partnership for Vocational Education and Training).


References

1. CEPI 4.0 Project: A Customized Education Plan Based on Industry 4.0 Competency Gaps. http://cepi40.ege.edu.tr/. Accessed 01 Sep 2021
2. Werner, T., Weckenmann, A.: Sustainable quality assurance by assuring competence of employees. Measurement 45(6), 1534–1539 (2012)
3. Pinzone, M., Fantini, P., Perini, S., Garavaglia, S., Taisch, M., Miragliotta, G.: Jobs and skills in Industry 4.0: an exploratory research. In: IFIP International Conference on Advances in Production Management Systems, pp. 282–288. Springer, Cham (2017)
4. Richert, A., Shehadeh, M., Plumanns, L., Groß, K., Schuster, K., Jeschke, S.: Educating engineers for Industry 4.0: virtual worlds and human-robot-teams: empirical studies towards a new educational age. In: 2016 IEEE Global Engineering Education Conference (EDUCON), pp. 142–149. IEEE (2016)
5. Prifti, L., Knigge, M., Kienegger, H., Krcmar, H.: A competency model for "Industrie 4.0" employees. In: Leimeister, J.M., Brenner, W. (eds.) Proceedings der 13. Internationalen Tagung Wirtschaftsinformatik (WI 2017), pp. 46–60, St. Gallen (2017)
6. Hecklau, F., Galeitzke, M., Flachs, S., Kohl, H.: Holistic approach for human resource management in Industry 4.0. Procedia CIRP 54, 1–6 (2016)
7. Dumitrescu, A., Lima, R., Chattinnawat, W., Savu, T.: Industry 4.0 competencies' gap analysis. Industry 4.0 4(3), 138–141 (2019)
8. Demircan Keskin, F., Kabasakal, İ.: Future prospects on maintenance through Industry 4.0 transformation. In: Dorczac, R., Arslan, H., Musialik, R. (eds.) Recent Researches on Social Sciences, pp. 159–165. Jagiellonian University Institute of Public Affairs Publishing, Kraków, Poland (2018)
9. Durakbasa, N.M., Bas, G., Riepl, D., Bauer, J.M.: An innovative educational concept of teleworking in the high precision metrology laboratory to develop a model of implementation in the advanced manufacturing industry. In: Proceedings of the 2015 12th International Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 242–248. IEEE (2015). https://doi.org/10.1109/REV.2015.7087288
10. Schuster, K., Groß, K., Vossen, R., Richert, A., Jeschke, S.: Preparing for Industry 4.0 - collaborative virtual learning environments in engineering education. In: Frerich, S., et al. (eds.) Engineering Education 4.0, pp. 477–487. Springer, Cham (2016)
11. Durakbasa, N., Bas, G., Bauer, J.: Implementing education vision in the context of Industry 4.0. In: IOP Conference Series: Materials Science and Engineering, 448(1), p. 012045. IOP Publishing (2018)
12. Benešová, A., Tupa, J.: Requirements for education and qualification of people in Industry 4.0. Procedia Manuf. 11, 2195–2202 (2017)
13. Mavrikios, D., Papakostas, N., Mourtzis, D., Chryssolouris, G.: On industrial learning and training for the factories of the future: a conceptual, cognitive and technology framework. J. Intell. Manuf. 24(3), 473–485 (2013)
14. Campion, M.A., Fink, A.A., Ruggeberg, B.J., Carr, L., Phillips, G.M., Odman, R.B.: Doing competencies well: best practices in competency modeling. Pers. Psychol. 64(1), 225–262 (2011)
15. ETA Industry Competency Initiative, Competency Model Clearinghouse. https://www.careeronestop.org/CompetencyModel/
16. Fitsilis, P., Tsoutsa, P., Gerogiannis, V.: Industry 4.0: required personnel competences. Industry 4.0 3(3), 130–133 (2018)
17. ISO 21001:2018: Educational organizations - Management systems for educational organizations - Requirements with guidance for use
18. ISO 29993:2017: Learning services outside formal education - Service requirements
19. ISO/IEC 40180:2017: Information technology - Quality for learning, education and training - Fundamentals and reference framework
20. ISO 30401:2021: Knowledge management systems - Requirements
21. Kaymaz, Y., Kabasakal, İ., Çiçekli, U.G., Kocamaz, M.: A conceptual framework for developing a customized Industry 4.0 education scale: an exploratory research. In: Proceedings of the International Symposium for Production Research 2019, pp. 203–216. Springer, Cham (2019)
22. Aulbur, W., Bigghe, R.: Skill development for Industry 4.0: BRICS skill development working group. Roland Berger GmbH (2016)
23. Gebhardt, J., Grimm, A., Neugebauer, L.M.: Developments 4.0: prospects on future requirements and impacts on work and vocational education. J. Tech. Educ. 3(2), 117–133 (2015)

Design of a Routing Algorithm for Efficient Order Picking in a Non-traditional Rectangular Warehouse Layout

Edin Sancaklı, İrem Dumlupınar, Ali Osman Akçın, Ezgi Çınar, İpek Geylani, and Zehra Düzgit

Department of Industrial Engineering, Istanbul Bilgi University, Istanbul, Turkey
{edin.sancakli,irem.dumlupinar,osman.akcin,ezgi.cinar,ipek.geylani}@bilgiedu.net, [email protected]

Abstract. Warehouses have gained considerable attention in recent years, especially due to the increasing interest in e-commerce. Order picking is the process of finding and removing products from a warehouse to fulfil customer orders; it is the most time-consuming and costly operation in manual picker-to-parts warehouses. The order picking problem is similar to the Traveling Salesman Problem, in the sense that the order picker corresponds to the salesman whereas the items to be picked correspond to the cities to be visited. Hence, the order picking problem is known to be NP-hard. Its objective is to minimize the total travelled distance of an order picker. Optimal picker routing can be found only for single-block and two-block traditional rectangular warehouses; there is no optimal algorithm for traditional rectangular warehouses with three or more blocks. Some popular routing heuristics, such as the S-Shape, Largest Gap, Aisle-by-Aisle, and Combined/Combined+ heuristics, are applied for single-block and multi-block traditional rectangular warehouses. In this study, we consider the order picking problem of a merchandising company. The objective of the company is to minimize the total travelled distance during order picking, which leads to an increase in throughput (the number of picks per unit time). The warehouse under consideration is non-traditional rectangular due to its block and aisle configuration; therefore, order picking heuristics proposed for traditional rectangular warehouses cannot be directly implemented. Accordingly, we modified popular picker routing heuristics to apply to the non-traditional rectangular warehouse layout. Then, a meta-heuristic algorithm is implemented to obtain the best picking sequence of items and the order picking route. Finally, we examined the impact of different storage assignment policies on total travelled distance.
Keywords: Order picking · Picker routing · Warehouse · Routing heuristic · Meta-heuristic algorithm · Storage assignment

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 401–412, 2022. https://doi.org/10.1007/978-3-030-90421-0_33

1 Introduction

Warehouses are one of the key components of logistics systems that provide high levels of customer service and supply chain efficiency. Besides that, warehouses are places


where goods that will be sold or distributed later are stored, and they have an important place in the supply chain. According to [1], warehouse activities such as receiving, storage, order picking, and shipping are critical to every supply chain. Receiving covers the assignment and scheduling of unloading the items. Storage covers moving the received items and placing them in their assigned storage locations. Order picking covers retrieving the items in a set of customer orders. Shipping covers packing the picked items and assigning the shipping vehicles. Among these operations, [1] shows that order picking is the costliest and most time-consuming warehouse activity. In this study, we focus on the order picking problem of a merchandising company. As proposed by [2], there are five different order picking systems:

i) Picker-to-parts: the order picker walks to collect the orders.
ii) Pick-to-box: the warehouse is split into zones connected by conveyors, and the order picker puts the orders into a box placed on the conveyor.
iii) Pick-and-sort: boxes that contain different orders flow on the conveyor, and the order picker takes the proper item for an order from the pick-list.
iv) Parts-to-picker: the orders are brought to the order pickers by automated vehicles.
v) Fully automated: the orders are collected by fully autonomous vehicles or robots instead of manual picking.

The warehouse under consideration employs a manual picker-to-parts order picking system, where an order picker walks to obtain the items in a pick-list. We are interested in the best picking sequence of items and the best route that will minimize the Total Travelled Distance (TTD) of the order picker.
The order picking problem is similar to the Traveling Salesman Problem (TSP): the order picker visits the item locations whereas the salesman visits the cities. Since the TSP is NP-hard, the order picking problem is also NP-hard. Hence, different solution approaches such as heuristic and meta-heuristic algorithms are employed to solve the order picking problem, and the warehouse layout plays an important role in these methods. We implement some popular picker routing heuristics and a genetic algorithm to solve the order picking problem of the company. In Sect. 2, the problem statement is given. Section 3 presents the solution approaches. Storage assignment policies are explained in Sect. 4. The framework of the experiments is described in Sect. 5, and the experimental results are given in Sect. 6. The conclusion is in the last section.

2 Problem Statement

Order picking is the operation in which a collection of items is retrieved from a storage system. Deciding order picking routes consists of finding a sequence in which the items must be collected from the warehouse. In this study, we deal with a picker routing problem in a merchandising company where a picker-to-parts order picking system is employed. In the current situation, the items in the customer orders cannot be collected in the targeted time due to an inefficient order


picking process. There is no systematic order picker routing policy, which causes a waste of time and inefficient use of labor. Every delay in the warehouse returns to the company as lost money and unsatisfied customers. A traditional rectangular warehouse can be classified as single-block (Fig. 1a) or multi-block (Fig. 1b), according to the number of blocks. An aisle is a walking area between shelves where the order picker travels to pick items. Blocks are groups of shelves divided by cross-aisles. In traditional rectangular multi-block warehouse layouts, cross-aisles are perpendicular to the main aisles; they give the picker the ability to change aisles and increase movement possibilities. The depot is the Pick-up/Deposit (P/D) point of the warehouse.

Fig. 1. a) Single-block traditional rectangular warehouse; b) Multi-block traditional rectangular warehouse

The non-traditional multi-block rectangular warehouse layout of the company is given in Fig. 2. Gray-colored parts are unused areas surrounded by walls. These areas block the entrance of some aisles. This is the challenging part, because most of the picker routing heuristics, which will be described in Sect. 3, are developed for traditional rectangular layouts. An efficient picker routing policy is essential to minimize the total travelled distance of the order picker in the non-traditional rectangular warehouse layout.

3 Solution Approaches for Order Picking Problem

A route is a path that an order picker follows to collect all items in a pick-list. Routing is the strategy of extracting the route through the warehouse for picking the orders. As previously mentioned, the order picking problem is NP-hard due to its resemblance to the TSP. Therefore, various picker routing policies have been developed for minimizing the total travelled distance of the order picking route. Some of the most commonly used picker routing heuristics for multi-block warehouses are:

• S-Shape Heuristic: Starting from the depot point, the order picker enters each aisle containing at least one item, traverses that aisle along its entire length, and collects the items. This heuristic is called S-Shape because the route taken by the order picker resembles the letter S. The S-Shape heuristic was developed for single-block traditional rectangular warehouses and was later extended to multi-block traditional rectangular layouts.


Fig. 2. Non-traditional warehouse layout

• Aisle-by-Aisle Heuristic: The Aisle-by-Aisle heuristic was developed for multi-block traditional rectangular layouts [3]. In this heuristic, each aisle has to be visited once, and the items along the aisles are collected.
• Combined Heuristic: According to [4], the order picker starts picking items from the farthest and leftmost block from the depot. Each sub-aisle containing items is visited only once and all items are collected.

In addition to picker routing heuristics, meta-heuristic algorithms, such as the genetic algorithm, are applied to solve the problem. In this study, we modified the picker routing heuristics (S-Shape, Aisle-by-Aisle, Combined) to adapt them to our non-traditional rectangular multi-block warehouse layout. Additionally, we solved the problem with a genetic algorithm and compared the results. Before implementing each method, the shortest distance between each item pair must be provided as input. To find the shortest distance between each item pair, the Manhattan distance method is employed: the sum of the absolute differences between the Cartesian coordinates of two item locations. The location of each item in a pick-list is addressed by a Cartesian coordinate system. A distance matrix is then formed, which contains the shortest distance between each item pair in the pick-list. These distances are used in the computation of the total travelled distance of the order picker. The implementations of the routing heuristics and the genetic algorithm are explained in the following sub-sections.

3.1 Modified S-Shape Heuristic

The S-Shape heuristic is modified according to the layout of the non-traditional multi-block warehouse as follows. In the modified S-Shape heuristic, the order picker picks the items


taking the original S-Shape heuristic into account. In the original heuristic, after finishing the farthest block containing pick locations, the order picker picks the items in the next block starting from the closest pick location (rightmost/leftmost). As we have an obstacle which prevents passage between the second and third blocks for a few aisles, we modified the heuristic so that picking in the second block always starts from the rightmost pick location, in order to reduce the total travelled distance. For the same reason, after finishing the second block, the order picker starts to pick the items from the leftmost pick location and then returns to the depot point.

3.2 Modified Aisle-by-Aisle Heuristic

In the modified Aisle-by-Aisle heuristic, the order picker picks the items according to the original Aisle-by-Aisle heuristic until facing an obstacle which prevents passing between two blocks. In that situation, the order picker completes the picking process without changing the current block and exits from where she/he entered. Then, the order picker continues the picking process for the rest of the aisles with pick locations according to the original Aisle-by-Aisle heuristic.

3.3 Modified Combined Heuristic

In the modified Combined heuristic, the order picker determines the rightmost aisle that contains at least one pick location in the farthest block from the depot and enters these sub-aisles with pick locations. If the traversing distance of the next sub-aisle with pick locations is greater than the return distance, the order picker picks the items and exits from the same cross-aisle; if not, the order picker picks the items and traverses the sub-aisle entirely. When the order picker has picked all the items in the farthest block, he/she goes to the second block and starts to pick the items from the rightmost sub-aisle with a pick location.
Then, the order picker goes to the last block before the depot point and picks the items starting from the leftmost aisle with a pick location. If there is no sub-aisle with a pick location in the last block, the order picker returns to the depot point.

3.4 Genetic Algorithm

The genetic algorithm is an iterative algorithm, unlike the single-pass heuristics (Modified S-Shape, Modified Aisle-by-Aisle, Modified Combined). This method cannot guarantee the optimal solution, but it can obtain good solutions within a reasonable running time [5]. Before running the genetic algorithm, we create the shortest-distance matrix for a pick-list using the Manhattan distance method. Then, an initial population is generated that consists of random picking sequences; each picking sequence corresponds to a chromosome. The starting and ending gene of each chromosome is always the depot point. TTDs are calculated for each chromosome of the population, and fitness scores are computed by taking the reciprocal of the TTD of each chromosome. Selection probabilities for each chromosome are then calculated, and the Roulette Wheel technique is used to select the chromosomes


for the mating pool. After the selection phase, the PMX crossover operation is applied to the selected parents in the mating pool to generate offspring, based on the crossover probability. Following the crossover phase, the swap mutation operation is applied to the offspring based on the mutation probability. If a pair of parents is not selected for crossover, the number of offspring becomes less than the population size, which is known as the generation gap. To keep the population size constant, we use the elitist approach: chromosomes from the current population with the best fitness scores fill the generation gap. Finally, the offspring generated from the crossover and mutation operations, together with the chromosomes transmitted by the elitist approach (in case of a generation gap), become the next generation. These steps continue until a termination criterion is met, which in this study is the number of iterations (generations). The genetic algorithm is a parametric meta-heuristic, and finding a good combination of its parameters is important. These parameters are the population size, crossover probability, mutation probability, and number of generations. Deciding the population size is the beginning of the selection phase; there is no standard value for it. A low population size causes inefficient results due to poor searching, whereas a very high population size may cause a slow rate of convergence. The crossover probability determines the rate at which parent pairs are combined to form new offspring. The mutation phase maintains the diversification of the search space that the selection and crossover phases do not reach. The mutation probability is the rate at which the offspring chromosomes produced by the crossover process randomly change some selected genes, adding new genetic material to the population.
Finally, the number of iterations (generations) determines how long the algorithm runs before stopping. Parameter tuning must be carried out to find the best combination of these parameters.
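The selection, crossover and mutation operators described above can be sketched in Python. This is a minimal illustration rather than the authors' implementation: `ttd` (the total travelled distance of a picking sequence) is an assumed helper, and the depot genes at both ends of a chromosome are assumed to be stripped before crossover and mutation.

```python
import random

def fitness(population, ttd):
    # Fitness is the reciprocal of total travelled distance (TTD),
    # so shorter picking routes receive higher scores.
    return [1.0 / ttd(c) for c in population]

def roulette_select(population, scores, k):
    # Roulette Wheel: selection probability proportional to fitness.
    return random.choices(population, weights=scores, k=k)

def pmx(p1, p2, a, b):
    # Partially Mapped Crossover (PMX) on permutations: copy the slice
    # [a:b) from parent 1, then place parent 2's conflicting genes by
    # following the mapping between the two slices.
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    for i in range(a, b):
        gene = p2[i]
        if gene not in p1[a:b]:
            pos = i
            while a <= pos < b:          # follow the mapping out of the slice
                pos = p2.index(p1[pos])
            child[pos] = gene
    return [p2[i] if g is None else g for i, g in enumerate(child)]

def swap_mutation(chrom, pm):
    # With probability pm, exchange two randomly chosen genes.
    chrom = chrom[:]
    if random.random() < pm:
        i, j = random.sample(range(len(chrom)), 2)
        chrom[i], chrom[j] = chrom[j], chrom[i]
    return chrom
```

PMX is chosen for permutation chromosomes precisely because, unlike one-point crossover, it always yields a valid permutation, i.e. each pick location still appears exactly once in the offspring.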

4 Storage Assignment Policies

A storage assignment policy is the method used to determine where to put items. Currently, the company implements a random storage policy, under which an item may be placed at any vacant position in the warehouse. Since there is no pre-determined location for an item and its position therefore varies over time, this may increase search time and total travelled distance. In contrast, ABC analysis is a well-known variety of region-based storage that effectively separates items into dedicated areas. It divides the inventory items into three classes: Class A contains the 10% most frequently requested items, Class C contains the 70% least demanded items, and the remaining 20% of items belong to Class B. Traditional ABC analysis considers a single criterion, the turnover rate [6]. We applied ABC analysis using the company's past order quantity data for all items. All items are ranked by decreasing total order quantity. The total order quantity of each item is divided by the overall total order quantity to find the order rate of that item. Then, the cumulative order rates are calculated and the classes are determined accordingly. After the ABC analysis of the items, the A, B and C regions in the warehouse were determined in order to assign the new locations of the items, as shown in Fig. 3.
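The 10/20/70 split described above can be sketched as follows. This is a simplified illustration of the ranking step only, assuming the split is applied to the item count; the company's actual data and the cumulative-order-rate bookkeeping are not reproduced here.

```python
def abc_classify(order_qty):
    """Assign each item to class A, B or C: rank items by decreasing
    total order quantity, then take the top 10% of items as class A,
    the next 20% as class B and the remaining 70% as class C."""
    ranked = sorted(order_qty, key=order_qty.get, reverse=True)
    n = len(ranked)
    classes = {}
    for rank, item in enumerate(ranked):
        share = (rank + 1) / n          # cumulative share of the item count
        if share <= 0.10:
            classes[item] = "A"
        elif share <= 0.30:
            classes[item] = "B"
        else:
            classes[item] = "C"
    return classes
```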

Design of a Routing Algorithm for Efficient Order Picking

407

Fig. 3. The A, B, and C regions in the warehouse

5 Framework for Experimental Study

5.1 Warehouse Layout

In Fig. 2, the non-traditional rectangular 3-block warehouse layout is given, where A is the index of the aisles, A = {1, 2, …, 12}; B is the index of the blocks, B = {1, 2, 3}; C is the side of the shelf, C = {R: Right, L: Left}; S is the index of the shelves, S = {1, 2, …}; L_ABC is the length of block B in aisle A on side C; W_ABC is the width of block B in aisle A on side C; R_ABCS is the length of shelf S in block B in aisle A on side C; U_ABC is the length of the cross aisle between blocks B and B + 1 in aisle A on side C; and E_ABC is the width of the aisle in block B between aisles A and A + 1 on side C. X locations refer to the middle point of every aisle in the horizontal direction and Y locations refer to the middle point of every shelf in the vertical direction. While determining the X and Y coordinates, we assume that the order picker always walks along the middle of the aisles and cross-aisles and can pick an item from the middle point of the corresponding shelf. The depot (P/D) point is our reference point and is located at the front right of the warehouse.

5.2 Locations of Items in Pick-Lists Subject to Random & ABC Class-Based Storage Assignment Policy

We consider 6 different pick-lists with different numbers of items. An example 16-item pick-list (namely Pick-List No:1) is considered here, where the locations of the items are determined by the random storage policy, as given in Fig. 4a. Then, based on the ABC class-based storage assignment policy, the new locations of the items are determined, as shown in Fig. 4b. Previously, all items of the company (more than 30,000 items) were assigned to a class (A, B or C) based on the ABC classification described in Sect. 4. The new locations and classes of the items are determined based on this groundwork. This is done for all items in all pick-lists.
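The shortest-distance matrix mentioned in Sect. 3.4 is built from the (X, Y) pick coordinates using the Manhattan (rectilinear) distance. A plain sketch is shown below; note that it ignores the aisle and cross-aisle walking constraints that the paper's actual matrix respects, so it only illustrates the distance metric itself.

```python
def manhattan(p, q):
    # Rectilinear distance between two (x, y) pick coordinates.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def distance_matrix(locations):
    # Symmetric matrix of pairwise Manhattan distances; entry [i][j]
    # is the distance between pick locations i and j.
    return [[manhattan(p, q) for q in locations] for p in locations]
```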

408

E. Sancaklı et al.

Fig. 4. Locations of Pick-List No:1 based on (a) Random and (b) ABC class-based storage assignment

5.3 Parameter Tuning for Genetic Algorithm

There are several parameters whose values need to be set and that have an impact on the implementation of the genetic algorithm. However, setting these parameters is problem-oriented. The parameters to be tuned are:

• Population Size = {10, 50, 100, 200}
• Crossover Probability = {0.5, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9}
• Mutation Probability = {0.01, 0.05, 0.1}
• Number of Iterations (Generations) = {500, 750, 1000}


There are 4 × 7 × 3 × 3 = 252 combinations in total. In parameter tuning, the algorithm was run ten times for every combination of the parameters and the average TTD was recorded. Multiple replications are necessary because the initial population is generated randomly and uniform random numbers drive the crossover and mutation operations. The ten best average TTD values are tabulated in Table 1. Accordingly, the best (minimum) average TTD is obtained when the population size is 100, the crossover probability is 0.85, the mutation probability is 0.1 and the number of iterations is 1000. These values are used in the computational tests of the genetic algorithm.

Table 1. The best combinations of the parameters of the genetic algorithm in terms of average TTD.
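The full-factorial tuning procedure above can be sketched as a grid search. Here `run_ga` stands in for one run of the genetic algorithm returning a TTD; it is an assumed placeholder, not the authors' code.

```python
import itertools
import statistics

def tune(run_ga, replications=10):
    """Evaluate all 4 x 7 x 3 x 3 = 252 parameter combinations,
    replicating each one and recording the average TTD; return the
    best (minimum) average TTD and its parameter combination."""
    grid = itertools.product(
        [10, 50, 100, 200],                        # population size
        [0.5, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9],     # crossover probability
        [0.01, 0.05, 0.1],                         # mutation probability
        [500, 750, 1000],                          # iterations (generations)
    )
    results = []
    for pop, pc, pm, iters in grid:
        ttds = [run_ga(pop, pc, pm, iters) for _ in range(replications)]
        results.append((statistics.mean(ttds), (pop, pc, pm, iters)))
    return min(results)
```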

6 Experimental Results

We conducted an experimental study considering 6 different pick-lists and 2 different storage assignment policies: random and ABC class-based. We computed the TTD with the Modified S-Shape, Modified Aisle-by-Aisle and Modified Combined picker routing heuristics and with the genetic algorithm, and compared their results. We have also illustrated the picker routes produced by the routing heuristics for a sample pick-list (Pick-List No:1). In Fig. 5a and b, the picker routes are shown according to the Modified S-Shape heuristic for Pick-List No:1 with random and ABC class-based storage, respectively. In Fig. 6a and b, the picker routes are shown according to the Modified Aisle-by-Aisle heuristic, and in Fig. 7a and b according to the Modified Combined heuristic, again with random and ABC class-based storage, respectively. Picker routes are found by implementing the routing heuristics, and the best picker routes are also determined by the genetic algorithm for the 6 pick-lists under the 2 storage assignment policies. Table 2 compares the TTDs for all pick-lists under both storage assignment policies. As can be seen in Table 2, the genetic algorithm is superior to the picker routing heuristics in 9 out of 12 cases. Applying ABC class-based storage assignment significantly decreases the TTDs in the majority of the cases.


Fig. 5. Picker Routes according to the Modified S-Shape Heuristic for Pick List No:1 with (a) Random; (b) ABC class-based storage Assignment

Fig. 6. Picker Routes according to the Modified Aisle-by-Aisle Heuristic for Pick List No:1 with (a) Random; (b) ABC class-based storage Assignment


Fig. 7. Picker Routes according to the Modified Combined Heuristic for Pick List No:1 with (a) Random; (b) ABC class-based storage Assignment

Table 2. Comparison of TTDs of pick-lists before and after the ABC storage assignment.

7 Conclusion

We considered the order picking problem in a non-traditional rectangular 3-block warehouse layout in which a picker-to-parts order picking system with random storage assignment is employed. The objective is to minimize the total travelled distance of the order picker. The order picking problem is similar to the TSP, which is known to be NP-Hard. No exact solution procedure is available for the order picking problem with 3 or more blocks, hence the problem is solved by heuristics or meta-heuristics. Popular picker routing heuristics (S-Shape, Aisle-by-Aisle, Combined) are modified for the non-traditional layout of the warehouse. A genetic algorithm is also applied to the problem. The results of the genetic algorithm and the picker routing heuristics are compared for different pick-list sizes under the random and ABC class-based storage assignment policies. The genetic algorithm is shown to be superior to the single-pass picker routing heuristics, owing to its iterative nature. The ABC class-based storage assignment policy is shown to yield a smaller total travelled distance than the random storage assignment policy.

References

1. Tompkins, J.A., White, J.A., Bozer, Y.A., Tanchoco, J.M.A.: Facilities Planning. John Wiley & Sons, NJ (2010)
2. Dallari, F., Marchet, G., Melacini, M.: Design of order picking system. Int. J. Adv. Manuf. Technol. 42(1), 1–12 (2009)
3. Vaughan, T., Petersen, C.: The effect of warehouse cross aisles on order picking efficiency. Int. J. Prod. Res. 37(4), 881–897 (1999)
4. Roodbergen, K.J., de Koster, R.: Routing methods for warehouses with multiple cross aisles. Int. J. Prod. Res. 39(9), 1865–1883 (2001)
5. Michalewicz, Z., Janikow, C.Z.: Genocop: a genetic algorithm for numerical optimization problems with linear constraints. Commun. ACM 39, 175 (1996)
6. Liu, J., Liao, X., Zhao, W., Yang, N.: A classification approach based on the outranking model for multiple criteria ABC analysis. Omega 61, 19–34 (2016)

Education in Engineering Management for the Environment

Peter Kopacek1 and Mary Doyle-Kent2(B)

1 Institute for Mechanics and Mechatronics (IHRT), TU Wien, Vienna, Austria

[email protected]

2 Department of Engineering Technology, Waterford Institute of Technology, Waterford, Ireland

[email protected]

Abstract. Manufacturing engineering is now moving into the digital era, otherwise known as Industry 4.0. The next paradigm shift, which is fast approaching, is the era of personalisation and customisation, known as Industry 5.0. To facilitate this personalisation of products, humans and technology must work together seamlessly, in a manner where the best of both is leveraged. Human-centred systems are at the heart of this revolution. One of the most important enablers of this change is the education of our technologists: our university degrees and postgraduate programmes must embed this expertise into their core modules. This contribution looks at the educational aspects of the MSc in Engineering Management at the Technical University of Vienna, Austria (TU Wien) as an example of education which has evolved to facilitate emerging trends. An introduction and a short history of this executive postgraduate MSc are presented, as it is a long-established and well-known programme which dates back to 1995.

Keywords: Education · Technology · Human factors · Ethics · Cost-oriented Automation (COA) · Human-Centered Systems · Environment

1 Introduction

Modern manufacturing environments are evolving rapidly as new technologies are developed. In a European Union publication, Breque et al. (2021) describe future developments of Industry 5.0 as "a forward-looking exercise, a way of framing how European industry and emerging societal trends and needs will co-exist. As such, Industry 5.0 complements and extends the hallmark features of Industry 4.0. It emphasises aspects that will be deciding factors in placing industry in future European society; these factors are not just economic or technological in nature, but also have important environmental and social dimensions." [1] They emphasise that, going forward, the technological frameworks of highest importance will be (i) individualised human-machine interaction; (ii) bio-inspired technologies and smart materials; (iii) digital twins and simulation; (iv) data transmission, storage and analysis technologies; (v) artificial intelligence; (vi) technologies for energy efficiency, renewables, storage and autonomy.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 413–420, 2022. https://doi.org/10.1007/978-3-030-90421-0_34

As a result of these changes, the engineering environment has become more challenging than before. With today's increased technical complexity and competitive pressures, engineering managers must confront new, highly technical problems and manage complex tasks. To manage effectively in such a dynamic and often unstructured environment, managers must understand the interaction of technical, organizational and behavioural variables in order to form a productive engineering team [2]. According to Mills and Treagust (2003), "engineering and management practices have to deal with uncertainty, with incomplete and contradicting information from an organization's environment." In addition, continuous technological and organizational changes in the workplace impose challenges on the individual. Yet the prevailing mode of teaching remains similar to the practices of the 1950s, with large classes and single-discipline, lecture-based courses; recent developments show a slow change towards student-centred learning such as problem-based and project-based learning [3].

Fischer (2004) and Kopacek et al. (2013) described the following key dimensions of educational competence:

1. Technical competence: the individual has sufficient subject knowledge and can plan and organise so as to achieve maximum results.
2. Administrative competence: the individual has a range of business knowledge, can follow rules, procedures and guidelines set out by the organisation and can perform to the expected standards set out by the organisation.
3. Ethical competence: the individual has moral standards which guide their decision-making activities in the work environment.
4. Productive competence: the individual is efficient and capable of producing desirable results. Productive competence particularly focuses on the capability of the professional to continuously develop their knowledge and skills.
5. Personal competence: the individual can manage time and possesses the necessary people, communication and conflict-management skills to operate effectively in the working environment [4, 5].

2 Human Centered Systems and Mechatronics

Kopacek (2019) stated that Industry 4.0 combines production methods with state-of-the-art information and communication technology. The driving force behind this development is the rapidly increasing digitization of the economy and society; as a result, manufacturing and the work environment are changing rapidly and irreversibly. In the tradition of the steam engine, the production line, electronics and information technology, smart factories are now determining the fourth industrial revolution. The technological foundation is provided by intelligent, digitally networked systems that will make largely self-managing production processes possible [6].

The focus of Europe is now on the implementation of strategic concepts. With a strong technical foundation, the challenge in Europe is to balance the opportunities of digitalisation in industrial value creation with the needs of a human-centric world of employment. Europe sees Industry 4.0 and 5.0 as a socio-technological challenge. Reclaiming industrial competitiveness is critical in manufacturing, as is the preservation of sustainable careers. Breque et al. state that "there is a consensus on the need to better integrate social and environmental European priorities into technological innovation and to shift the focus from individual technologies to a systemic approach" [1]. A framework of the emerging Industry 5.0, as proposed by Doyle-Kent (2021) [7], can be seen in Fig. 1. The importance of education leading this field cannot be overstated, and an interdisciplinary approach combining the technological, social, ethical, industrial, environmental and management aspects is required going forward to enable graduates not just to survive but to flourish in their future careers.

Fig. 1. Conceptual framework illustrating Industry 5.0 [7]

Additionally, an Industry 5.0 definition has been put forward by Doyle-Kent so that researchers can easily conceptualise this fast-approaching paradigm shift: "Industry 5.0 is the human-centered industrial revolution which consolidates the agile, data driven digital tools of Industry 4.0 and synchronises them with highly trained humans working with collaborative technology resulting in innovative, personalised, customised, high value, environmentally optimized, high quality products with a lot size one." [7]

3 Engineering Management at TU Wien

In the past, the areas of engineering and management were regarded as two very different and unrelated disciplines. Trained technical experts undertook the process and technical aspects of engineering, whilst a different type of person altogether, often with an unrelated background and experience, oversaw the management of an engineering business or its technical processes. Because of the evolution of the manufacturing industry, new skills and new approaches are required of staff, because hi-tech processes and systems need to be operated with maximum efficiency and effectiveness. The need for a new kind of manager has been heightened by the international nature of most businesses. There is an increasing demand from customers to deal with people familiar with the technical aspects of a product who are also experts in business management and customer relationships [8].

The teaching methods and the materials are equally important in modern education, giving the student exposure to real-life problems that can concretise the theory; in Engineering Management, the "what" and the "how" are given equal importance. Internationally distinguished experts are members of this highly acclaimed faculty, through either their sound interdisciplinary scientific knowledge or their extensive practical experience in the field of engineering management. The program is customized for small and medium-sized enterprises as well as for departments of large companies facing a growing and increasingly competitive market. It is designed to prepare graduates of technical and economic universities for leadership roles in technological, corporate and national affairs.

The part-time master's program is characterized by its internationality. Lecturers are affiliated with universities and industrial enterprises in nine countries, including the USA and numerous European countries, and seek to communicate technical, economic and juridical skills in an interdisciplinary way. By cross-linking theory, practice and case studies in a targeted manner, this knowledge can then be implemented directly in the companies and businesses of the participants. The main goal of the above design was to develop a curriculum which would enable graduates to be conversant with business issues, and to appreciate these in the context of the implementation of "new" technologies [9].

MSc Engineering Management students can access the TU Wien Pilotfactory, which was designed to bring students into a real-life learning environment.
This facilitates the technical and social education of the students and is based on the concept of the 'digital twin' in manufacturing [10]. It uses scenario-based learning (SBL), which is rooted in situated learning and cognition theory [11, 12]. Situated learning theory claims that learning is most effective when it takes place in the natural context where the acquired knowledge is going to be used; thus, knowledge can be transformed into competencies of action. Cobb similarly states that learning is effective when it constantly shifts between "thinking" – a process of abstract conceptualization, "feeling" – largely based on experience, "watching" – a process of observation and reflection, and "doing" – an active stage of experimentation [13].

4 MSc in Engineering Management – A Closer Look

4.1 The History

The first idea for a postgraduate, executive Engineering Management MSc program at TU Wien came up in 1992, in cooperation with Oakland University in Rochester (MI). The main goal has been to educate managers for SMEs as well as department heads of large companies from the producing industry [8].


After some discussions and visits, a general cooperation agreement between Oakland University and TU Wien was signed on January 25, 1995 in the Rector's office of TU Wien. The main points were:

• an international faculty;
• a two-week stay at Oakland University with lectures and company visits;
• participants receive an MSc degree in Engineering Management from Oakland University as well as a certificate from TU Wien.

On October 20, 1995 the first program was launched with 11 participants in Austria. The following programs (until 2005) took place in different locations in Lower Austria and Vienna. Since 2007 this program has been running under the framework of the Continuing Education Center (CEC) of TU Wien with TU Wien lecturers, but without an agreement with Oakland University.

4.2 Facts and Figures

The highlights of our program are summarised in Table 1:

Table 1. MSc in Engineering Management TU Wien highlights

• More than 25 years' experience, more than 200 graduates
• Master of Science degree of TU Wien
• Executive program – number of participants limited
• International faculty – universities and industry
• International participants – up until now from more than 41 different countries
• Part time – 13 weekend modules (Friday to Tuesday)
• According to the Bologna Convention
• Teaching is in English only
• Contents are modernized continuously
• Evening lectures delivered by distinguished guest speakers
• Company visits
• Alumni Club with regular club meetings for networking
• Graduate profile: approximately 2/3 are in (high) management positions, 1/3 have founded their own companies


4.3 Modules Taught in 2020–2021

Module A – Production Management: Probability and Statistics; Production Systems; Systems Engineering; Project Management & Logistics; Technology; Company Visits

Module B – Engineering Informatics: IT & Management; IT & Production

Module C – Business Management: Accounting; Financing; Marketing; Operations Management; Management Information Systems; International Law; Human Factors

Module D – Master's Thesis

The Master's Thesis is an important part of the postgraduate program. It consolidates and integrates what has been learned and establishes a vital link between theory and practice. Students are encouraged to choose a specific and practical problem from their occupational activity and to solve it with the acquired knowledge. A supervisor, who has the role of a mentor, advises and supports them throughout the whole process. Parts of finished theses are published in scientific journals and/or presented at scientific events.

5 MSc in Engineering Management and the Environment

One of the main goals of this program is to give the participants a deeper insight into environment-friendly management. Most of these topics are addressed in the lectures on Technology. Examples of areas that are taught and discussed are:

• End-of-Life (EoL) Management
• Resource Efficiency


• Heating, Ventilating and Air Conditioning (HVAC)
• Domotics
• Smart Cities
• Bionics
• Transport of the future

Furthermore, MSc theses are written in these research areas, and some are, or will be, presented at international scientific events and published in scientific journals.

6 Summary and Outlook

As the working environment changes, so must our educational offering to both undergraduate and postgraduate students. The MSc in Engineering Management at TU Wien has a long-standing history of educational excellence aimed at both Austrian and international students over a number of decades. One of our main goals is to adapt the contents of the lectures to the newest developments from program to program. In line with international developments, we will have to add further items related to the environment. This will take priority over the coming years as the program evolves and adapts.

References

1. Breque, M., De Nul, L., Petridis, A.: Industry 5.0. Towards a sustainable, human-centric and resilient European industry. Publications Office of the European Union, Luxembourg (2021). https://op.europa.eu/en/publication-detail/-/publication/468a892a-5097-11eb-b59f-01aa75ed71a1/
2. Thamhain, H.J.: Engineering Management: Managing Effectively in Technology-Based Organisations. J. Wiley and Sons (1992). ISBN-13: 978-0471828013
3. Mills, J., Treagust, D.: Engineering education – is problem-based or project-based learning the answer? Australasian Journal of Engineering Education 3(2), 2–16 (2003)
4. Fischer, G.: Industry and Participant Requirements for Engineering Management. Ph.D. thesis, Vienna University of Technology, Vienna (2004)
5. Kopacek, P., Stapleton, L., Hajrizi, E.: From engineering to mechatronics management. In: Proceedings of the IFAC Workshop "Supplementary Ways to Improving International Stability – SWIIS 2013", University for Business and Technology, Prishtina, Kosovo, pp. 1–4. Elsevier (2013). https://doi.org/10.3182/20130606-3-XK-4037.00053
6. Kopacek, P.: Robo-ethics: a survey of developments in the field and their implications for social effects. IFAC-PapersOnLine 52(25), 131–135 (2019). https://doi.org/10.1016/j.ifacol.2019.12.460
7. Doyle-Kent, M.: Collaborative Robotics in Industry 5.0. Doctoral dissertation, TU Wien (2021). https://doi.org/10.34726/hss.2021.70144
8. Kopacek, P.: Higher education in engineering management with impacts of TECIS. IFAC-PapersOnLine 52(25), 164–167 (2019). https://doi.org/10.1016/j.ifacol.2019.12.466
9. MSc in Engineering Management (2021). https://cec.tuwien.ac.at/index.php?id=12025&L=1&pk_campaign=em_googleads&gclid=CjwKCAjwuIWHBhBDEiwACXQYsRRBjRTSz2tg351wSh7aQeVi598lhTz6Wqvb9AYaBQpli9uXoBxWhhoCEywQAvD_BwE
10. Pilotfactory TU Wien (2021). https://www.pilotfabrik.at/?lang=en


11. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge University Press (1991). ISBN 0-521-42374-0
12. Cobb, P., Bowers, J.: Cognitive and situated learning perspectives in theory and practice. Educational Researcher 28(2), 4–15 (1999). https://doi.org/10.3102/0013189X028002004
13. Schar, M.: Scenario Based Learning – Designing Education Lab, December 2015. https://web.stanford.edu/group/design_education/cgi-bin/mediawiki/index.php/Scenario_Based_Learning

Hybrid Flowshop Scheduling with Setups to Minimize Makespan of Rubber Coating Line for a Cable Manufacturer

Diyar Balcı(B), Burak Yüksel, Eda Taşkıran, Güliz Hande Aslım, Hande Özkorkmaz, and Zehra Düzgit

Department of Industrial Engineering, İstanbul Bilgi University, Istanbul, Turkey
{diyar.balci,burak.yuksel,eda.taskiran,hande.aslim,hande.ozkorkmaz}@bilgiedu.net, [email protected]

Abstract. We consider the hybrid flowshop scheduling problem of a cable manufacturing company. We focus on rubber coating line to minimize the makespan. Industrial cables differ based on the number of cores and core colors. The rubber coating line is composed of three stages with two identical parallel machines in each stage. These stages are i) Isolation; ii) Bunching; iii) Sheathing. The input of the isolation stage is copper wire whereas the output of the isolation stage is cores with different colors. Before the isolation stage, a sequence-dependent setup time is required based on a cross-section of cables and a sequence-independent setup time is required to change the color of cores. The input of the bunching stage is cores with different colors whereas the output of the bunching stage is bunched cores. Before the bunching stage, a sequence-independent setup time is necessary based on the number of cores. The input of the sheathing stage is bunched cores whereas the output of the sheathing stage is rubber-coated cables. Before the sheathing stage, a sequence-dependent setup time is needed based on area. Since the problem is NP-hard, genetic algorithm is employed. With the help of genetic algorithm, we can determine the best job-machine assignments for each stage, the best job sequence on each machine at each stage, start and completion times of jobs on each stage, and the best makespan. Keywords: Scheduling · Hybrid flowshop · Sequence-independent setup times · Sequence-dependent setup times · Makespan · Genetic algorithm

1 Introduction

Production management includes many connected activities, and the scheduling of production systems has always received considerable attention. According to [1], scheduling is the process of determining and allocating limited resources to jobs over specific time intervals with the aim of optimizing one or more objectives. Although all scheduling problems aim to increase the efficiency of the existing system, solution methods may change according to the production environment and its specified characteristics. Therefore, analyzing and understanding the existing production environment and its constraints is the basis of a scheduling study.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 421–435, 2022. https://doi.org/10.1007/978-3-030-90421-0_35


In scheduling theory, Graham's 3-field problem classification, first introduced by [2], is used to represent the scheduling characteristics of the production environment. In the α/β/γ notation, α represents the machine environment, β represents the constraints and details of the processing characteristics, and γ contains the objective function(s) to be optimized.

An environment that has only one machine is a single-machine environment, and each job is processed and completed by only this machine. A single-machine environment is indicated as (1) in the α field. A parallel machine environment consists of more than one machine in parallel performing only one operation. According to [1], there are three different types of parallel machine environments: i) identical parallel machines (Pm), ii) uniform parallel machines (Qm), and iii) unrelated parallel machines (Rm), where m is the number of machines. In an identical parallel machine environment, the speeds of the machines are the same. If the speeds differ for at least one machine but are independent of the job, it is called a uniform parallel machine environment. In an unrelated parallel machine environment, the speed of at least one machine may change according to the job.

A flow shop (Fm) is a multi-stage production process; according to [1], every job must follow the same machine routing. A variant of the regular flow shop, described in [3], is a flow line with more than one machine in parallel in at least one stage, where each job must still be processed by exactly one machine at each stage; such a stage performs as a parallel machine environment. In a job shop (Jm) environment, jobs may have different machine routings. Unlike the three environments mentioned above, jobs may visit some machines more than once or may not visit a machine at all.
In the β field, different kinds of constraints and details of processing characteristics are described such as release dates, setup times, etc. There may be more than one entry in this field. The time when a job is ready for processing can be defined as release time. So, a job cannot be processed before its release time. Setup time can be described as the required time for arranging and getting all sources ready to operate a job. In general, there are two types of setup times which are: i) sequence-dependent and ii) sequence-independent. If the setup time depends only on the following job, it is called sequence-independent setup time (ST). On the other hand, if the setup time changes according to both the following and the preceding job, it is called sequence-dependent setup (SDST). Finally, the objective is represented in the γ field. Most of the scheduling problems aim to minimize performance measures such as maximum lateness, number of tardy jobs, total weighted tardiness etc. One of the most commonly used objectives in production is minimizing the makespan where the makespan is defined as the completion time of the last job in the system. In this study, we focus on the rubber coating line of a cable manufacturing company where industrial cables are coated with rubber. A cable is a bundle of conductive wires and is represented with two specifications which are the number of cores in the cable and the cross-section of cores. For example, a cable with 3 × 2.5 specifications mean that the cable consists of 3 cores with a cross-section of 2.5 mm2 each. There are three different sequential operations to be performed: i) Isolation, ii) Bunching, and iii)

Hybrid Flowshop Scheduling with Setups to Minimize Makespan

423

Sheathing. Each of these three stages consists of two identical parallel machines, so the line is represented as a three-stage hybrid flow shop. In the first stage, there are sequence-dependent setup times depending on the cross-section and sequence-independent setup times based on the color of the cores. In the second stage, sequence-dependent setup times are required, which depend on the number of cores of the cables. In the third stage, there are sequence-dependent setup times depending on the total area of the cable. The objective is to minimize the completion time of the last job in an order list. Hence, the problem under consideration is represented as HFS/SDST&ST/Cmax in terms of Graham's 3-field notation. In Sect. 2, the production flow is described and the problem is defined. In Sect. 3, the mathematical models are explained. The genetic algorithm implementation is described in Sect. 4, and parameter tuning is explained in Sect. 5. Sections 6 and 7 present a small-size and an actual-size problem solved with the genetic algorithm, respectively. The designed user interface is given in Sect. 8, and the conclusion is in Sect. 9.

2 Production Flow & Problem Definition

The rubber coating line is the main focus of this study. During the production of a cable, the steps from copper wire to rubber-coated final form are shown in Fig. 1.

Fig. 1. Process flow of rubber coating line for cable manufacturing

The main component of a cable, called the core, is processed in the isolation stage. The isolation color may change according to the product; there are five main colors: i) brown, ii) black, iii) grey, iv) blue and v) yellow-green. Color combinations of cores are shown in Fig. 2. In the isolation stage, two different kinds of setups are addressed: a color setup and a cross-section setup. If the color of isolation for the next core to be processed is different from the current one, preparing the isolation color for the next core is called the color setup. The color setup does not depend on the job sequence, so the setup time when switching from one color to any of the other four is fixed at 40 min, as shown in Fig. 3. If the cross-section area of the following core is different from the current one, the machines need to be arranged for the following cross-section. The cross-section setup depends on the job sequence, and the setup time varies depending on the current and the following jobs. Table 1 shows the partial matrix of setup times depending on the cross-section.
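The two isolation-stage setup rules described above can be sketched in a few lines. This is our illustration, not the authors' code: the flat 40-minute color setup is sequence-independent, while the cross-section setup is looked up in a sequence-dependent matrix. The matrix entries follow our reading of Table 1 (a partial, one-directional matrix), so treat them as illustrative.

```python
# Sketch of the isolation-stage setup logic: a fixed 40 min color setup
# whenever the color changes, plus a sequence-dependent cross-section setup.
# CROSS_SETUP reflects our reading of the partial matrix in Table 1.
CROSS_SETUP = {
    (0.75, 1.0): 60, (0.75, 1.5): 75, (0.75, 2.5): 80, (0.75, 4.0): 85,
    (1.0, 1.5): 60, (1.0, 2.5): 75, (1.0, 4.0): 80,
    (1.5, 2.5): 60, (1.5, 4.0): 75,
    (2.5, 4.0): 60,
}

def isolation_setup(prev_color, next_color, prev_cs, next_cs):
    """Total setup time (minutes) between two consecutive cores."""
    color = 0 if prev_color == next_color else 40    # sequence-independent
    cross = 0 if prev_cs == next_cs else CROSS_SETUP.get((prev_cs, next_cs), 0)
    return color + cross

print(isolation_setup("blue", "brown", 0.75, 2.5))   # 40 + 80 = 120 min
print(isolation_setup("blue", "blue", 1.0, 1.0))     # 0 min, nothing changes
```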

424

D. Balcı et al.

Fig. 2. Color combination according to number of cores

Fig. 3. Sequence-independent color setup times before isolation stage (in minutes)

Table 1. Sequence-dependent setup times depending on cross-section before isolation stage (in minutes)

From \ To (mm2) | 0.75 | 1.0 | 1.50 | 2.50 | 4.0
0.75            |  –   | 60  | 75   | 80   | 85
1.0             |      |  –  | 60   | 75   | 80
1.50            |      |     |  –   | 60   | 75
2.50            |      |     |      |  –   | 60
4.0             |      |     |      |      |  –

In the bunching process, cores of different colors are bunched onto each other to form a stronger structure with good conductivity. The number of cores may change according to the product. In the bunching stage, machines are allocated according to the number of cores to be processed: if there are fewer than 5 cores, bunching is done on machine 1, while machine 2 is used for more than 5 cores. To be able to start this process, all needed core colors must be completed in stage 1. The setup time here varies depending on the number of cores in the cable, as shown in Table 2. As the last process, the whole core bundle is covered with fluid rubber. In this sheathing stage, the setup time depends on the total areas of the previous and current jobs. In Table 3, setup times for the sheathing process are shown for four different total areas.
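The bunching rules above fit in a few lines (our sketch, not the authors' code). The setup times in Table 2 scale linearly with the number of cores (10 min per core), and the machine eligibility rule follows Constraint (14) of the second model, which only forbids machine 1 for jobs with more than 5 cores — the treatment of exactly 5 cores is our assumption.

```python
# Sketch of the bunching-stage rules. Table 2's setup times are 10 min per
# core; machine 1 is assumed available for jobs with up to 5 cores, per the
# model's Constraint (14) (x_j11s = 0 only when n_j > 5).
def bunching_setup(cores):
    return 10 * cores          # minutes; reproduces every entry of Table 2

def bunching_machine(cores):
    return 1 if cores <= 5 else 2

table2 = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60, 7: 70, 12: 120, 24: 240}
assert all(bunching_setup(c) == t for c, t in table2.items())
print(bunching_machine(3), bunching_machine(12))   # 1 2
```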


Table 2. Sequence-independent setup times depending on number of cores before bunching stage (in minutes)

Number of cores | Setup time (min)
1               | 10
2               | 20
3               | 30
4               | 40
5               | 50
6               | 60
7               | 70
12              | 120
24              | 240

Table 3. Sequence-dependent setup times depending on area before sheathing process (in minutes)

From \ To (mm2) | 2 | 3  | 4   | 4.5
2               | – | 90 | 135 | 135
3               |   | –  | 90  | 135
4               |   |    | –   | 90
4.5             |   |    |     | –

3 Mathematical Models

The 3-stage hybrid flowshop environment of the rubber coating line is shown in Fig. 4. With an efficient scheduling policy, idle times can be reduced and makespan minimization can be achieved. A single job definition is not valid for all stages of the 3-stage hybrid flow shop environment, due to the inputs and outputs of the stages, as shown in Fig. 5 and Fig. 6; a detailed 9-job example is exhibited in Fig. 7. The first stage's input and output are different from the second stage's, so we have two different job definitions. Although the environment is a 3-stage hybrid flowshop, we decomposed it into two environments (Fig. 4) according to the job definitions:

• Stage 1 (Isolation): modeled as identical parallel machines
• Stage 2 (Bunching) & Stage 3 (Sheathing): modeled as a two-stage hybrid flowshop

Accordingly, we used two linked mathematical models to obtain the exact solution for the problem:
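The linkage between the two models — completion times from the isolation model becoming release times for the bunching model — can be sketched as follows. The core and cable names and all numbers are invented for illustration: a cable is released for bunching only once all of its cores have completed isolation.

```python
# Hypothetical sketch of how the two models are linked: the release time R_j
# of a cable in the bunching stage is the latest isolation-stage completion
# time among the cores that make up that cable.
core_completion = {            # c_j from the parallel-machine model (hours)
    "core1": 3.0, "core2": 4.5, "core3": 2.0,
    "core4": 5.0, "core5": 6.5,
}
cable_cores = {                # which cores form each stage-2 job
    "cableA": ["core1", "core2", "core3"],
    "cableB": ["core4", "core5"],
}
release_times = {cable: max(core_completion[c] for c in cores)
                 for cable, cores in cable_cores.items()}
print(release_times)           # {'cableA': 4.5, 'cableB': 6.5}
```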


Fig. 4. 3-stage hybrid flowshop environment

STAGE 1

Fig. 5. 1st (Isolation) Stage’s Input and Output

STAGE 2

Fig. 6. 2nd (Bunching) Stage’s Input and Output

Fig. 7. Example for job transformation between stages

3.1 Mathematical Model for Identical Parallel Machines in Isolation Stage (Adapted from [4])

Sets:
N = {1, 2, …, n}: jobs.
M = {1, 2}: machines in the isolation stage.

Indices:
i, j ∈ N: jobs.
m ∈ M: machines.
k ∈ N: job order (position).

Parameters:
Pj: processing time of job j.
Sij: setup time required when switching from job i to job j.
BM: a large positive number.

Decision Variables:
cmax: makespan of the isolation stage.
cj: completion time of job j.
tj: starting time of job j.
xjkm: 1, if job j is assigned to machine m in the k-th position; 0, otherwise.

Mathematical Model:

min z = cmax

subject to:

cj + BM ∗ (1 − xj1m) ≥ Pj   ∀j, m   (1)
cj − BM ∗ (1 − xj1m) ≤ Pj   ∀j, m   (2)
cj + BM ∗ (2 − xjkm − xi(k−1)m) ≥ ci + Pj + Sij   ∀i, j, m; i ≠ j, k > 1   (3)
cj − BM ∗ (2 − xjkm − xi(k−1)m) ≤ ci + Pj + Sij   ∀i, j, m; i ≠ j, k > 1   (4)
cmax ≥ cj   ∀j   (5)
Σj xjkm ≤ 1   ∀k, m   (6)
Σk Σm xjkm = 1   ∀j   (7)
Σj xjkm − Σi xi(k−1)m ≤ 0   ∀m, k > 1   (8)
tj ≥ ci + Sij − BM ∗ (2 − xjkm − xi(k−1)m)   ∀i, j, m; i ≠ j, k > 1   (9)
tj ≤ ci + Sij + BM ∗ (2 − xjkm − xi(k−1)m)   ∀i, j, m; i ≠ j, k > 1   (10)
tj ≤ BM ∗ (1 − xj1m)   ∀j, m   (11)
tj ≥ −BM ∗ (1 − xj1m)   ∀j, m   (12)
xjkm ∈ {0, 1}   ∀j, k, m   (13)
tj ≥ 0   ∀j ∈ N   (14)
cj ≥ 0   ∀j ∈ N   (15)
cmax ≥ 0   (16)

In the identical parallel machine model, the objective function is to minimize the makespan of the isolation stage. Constraints (1) and (2) calculate the completion times of the jobs assigned to the first position on a machine. Constraints (3) and (4) calculate the completion times of the jobs in later positions. Constraint (5) imposes a lower bound on the makespan. Constraints (6) and (7) specify the job assignments to machines. Constraint (8) is the consecutiveness constraint: it guarantees that if a job is assigned to a position other than the first on a machine, there is always a job assigned to the preceding position on that machine. Constraints (9) and (10) calculate the start times of the jobs in later positions, while Constraints (11) and (12) compute the start times of the jobs in the first position. Constraint (13) is the binary restriction, and Constraints (14)–(16) are non-negativity restrictions on the decision variables.

The outputs of this model (the completion times of the jobs at the end of the isolation stage) are used to calculate the release times of the jobs for the bunching stage and become inputs of the next mathematical model.

3.2 Mathematical Model for 2-Stage Hybrid Flowshop in Bunching & Sheathing Stages (Adapted from [5])

Sets:
L = {1, 2, …, l}: jobs.
M = {1, 2}: machines.
K = {1, 2}: stages (1: bunching stage; 2: sheathing stage).

Indices:
i, j ∈ L: jobs.
m ∈ M: machines.
s ∈ L: job sequence (position).
t ∈ K: stages.


Parameters:
Pjt: processing time of job j at stage t.
Rj: release time of job j (computed from the completion times output by the former mathematical model).
Bij: setup time required when switching from job i to job j.
nj: number of cores of job j.
BM: a large positive number.

Decision Variables:
cmax: makespan of the sheathing stage.
cjt: completion time of job j at stage t.
fj: starting time of job j at the bunching stage.
xjtms: 1, if job j is assigned to machine m in the s-th position at stage t; 0, otherwise.

Mathematical Model:

min z = cmax

subject to:

Σm Σs xjtms = 1   ∀j, t   (1)
Σj xjtms ≤ 1   ∀t, m, s   (2)
cj2 + BM ∗ (1 − xj2m1) ≥ cj1 + Pj2   ∀j, m   (3)
cj1 + BM ∗ (1 − xj1m1) ≥ fj + Pj1   ∀j, m   (4)
cj1 + BM ∗ (2 − xi1m(s−1) − xj1ms) ≥ Pj1 + fj   ∀i, j, m; s > 1, i ≠ j   (5)
cj2 + BM ∗ (2 − xi2m(s−1) − xj2ms) ≥ Pj2 + Bij + ci2   ∀i, j, m; s > 1, i ≠ j   (6)
cjt ≥ cj(t−1) + Pjt   ∀j, t > 1   (7)
Σj xjtms − Σi xitm(s−1) ≤ 0   ∀t, m, s > 1   (8)
cmax ≥ cjt   ∀j, t   (9)
fj ≥ Rj − BM ∗ (2 − xi1m(s−1) − xj1ms)   ∀i, j, m; s > 1   (10)
fj ≥ ci1 − BM ∗ (2 − xi1m(s−1) − xj1ms)   ∀i, j, m; s > 1   (11)
fj ≤ BM ∗ (1 − xj1m1) + Rj   ∀j, m   (12)
fj ≥ −BM ∗ (1 − xj1m1) + Rj   ∀j, m   (13)
xj11s = 0   ∀s; nj > 5   (14)
xjtms ∈ {0, 1}   ∀j, t, m, s   (15)
fj ≥ 0   ∀j ∈ L   (16)
cjt ≥ 0   ∀j ∈ L, ∀t ∈ K   (17)
cmax ≥ 0   (18)

In the hybrid flowshop model, the objective function is to minimize the makespan of the sheathing stage. Constraint (1) guarantees that every job at each stage is assigned to exactly one position on one machine. Constraint (2) prevents one machine from processing more than one job at the same time. Constraints (3) and (4) calculate the completion times of the jobs in the first position at the sheathing and bunching stages, respectively. Constraints (5) and (6) calculate the completion times of the jobs in later positions at the bunching and sheathing stages, respectively. Constraint (7) ensures the consecutiveness of jobs across stages: a job cannot be processed at the sheathing stage if it has not been processed at the bunching stage. Constraint (8) guarantees that the jobs are processed in consecutive positions on the machines. Constraint (9) enforces a lower bound on the makespan. Constraints (10) and (11) calculate the start times in the bunching stage of the jobs in later positions, while Constraints (12) and (13) calculate the start times of the jobs in the first position in the bunching stage. Constraint (14) prevents jobs with more than 5 cores from being processed on machine 1 in the bunching stage. Constraint (15) is the binary restriction, and Constraints (16)–(18) are non-negativity restrictions on the decision variables.

3.3 NP-Hardness of the Problem

We solved the isolation stage's parallel machine problem with different job sizes. Based on the running times tabulated in Table 4, we concluded that even the 1st (isolation) stage of the problem cannot be solved exactly in a reasonable running time, which is consistent with its NP-hardness. Hence, we decided to employ a genetic algorithm to solve the problem within a reasonable running time.
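The growth pattern in Table 4 can be reproduced in spirit with a brute-force exact solver: every assignment of jobs to the two machines and every ordering on each machine must be examined, which grows super-exponentially in the number of jobs. The toy instance below (job times and a flat setup matrix) is invented for illustration and is not the paper's data.

```python
import itertools

# Brute-force exact makespan for two identical parallel machines with
# sequence-dependent setup times (invented toy instance). Enumerating all
# machine assignments and orderings is what makes exact solution impractical
# beyond roughly 9 jobs, as in Table 4.
def makespan(seq, p, s):
    t, prev = 0, None
    for j in seq:
        t += (s[(prev, j)] if prev is not None else 0) + p[j]
        prev = j
    return t

def solve(jobs, p, s):
    best = float("inf")
    for mask in range(2 ** len(jobs)):                    # machine assignment
        g1 = [j for i, j in enumerate(jobs) if mask >> i & 1]
        g2 = [j for i, j in enumerate(jobs) if not mask >> i & 1]
        for o1 in itertools.permutations(g1):             # order on machine 1
            for o2 in itertools.permutations(g2):         # order on machine 2
                best = min(best, max(makespan(o1, p, s), makespan(o2, p, s)))
    return best

p = {0: 4, 1: 3, 2: 5}
s = {(i, j): 1 for i in p for j in p if i != j}           # flat 1-unit setups
print(solve(list(p), p, s))    # 8: job 2 alone on one machine, jobs 0,1 on the other
```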


Table 4. Running times for identical parallel machine mathematical model for 1st (isolation) stage

Number of jobs | Running time
2 jobs         | 0.03 s
3 jobs         | 0.03 s
4 jobs         | 0.13 s
5 jobs         | 0.17 s
6 jobs         | 0.81 s
7 jobs         | 4.06 s
8 jobs         | 30.70 s
9 jobs         | 599.53 s
≥ 10 jobs      | > 8 h

4 Implementation of Genetic Algorithm

The following steps of the genetic algorithm are implemented for the isolation, bunching, and sheathing stages:

• Step 1: Generating the Initial Population: Chromosomes are generated by assigning jobs to machines randomly.
• Step 2: Evaluating the Fitness Score of Chromosomes: The fitness score of each chromosome is calculated as the reciprocal of the objective function, i.e., the maximum completion time in a stage.
• Step 3: Elitism: The elitist approach is implemented by carrying the best 20% of chromosomes directly to the next generation.
• Step 4: Selection: The roulette wheel selection method is applied to the rest of the population after the elitism step. A uniform random number between 0 and 1 is generated for each chromosome, parents are selected accordingly, and a mating pool is formed.
• Step 5: Crossover: For the crossover operation, the partially mapped crossover (PMX) method, with randomly selected cut-off points, is used to generate offspring according to [6]. The crossover probability is set to 1 to avoid a generation gap.
• Step 6: Mutation: A uniform random number between 0 and 1 is generated for each offspring; an offspring whose random number is smaller than the mutation probability is mutated. For the mutation operation, the insertion method is used, based on [7].

Termination criterion: The algorithm stops after completing a pre-specified number of generations (iterations).
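The two genetic operators named in Steps 5 and 6 can be sketched for permutation-encoded chromosomes as follows. This is a minimal illustration (not the authors' implementation), assuming integer job labels; the example parents and cut points are invented.

```python
import random

# Sketch of the PMX crossover (Step 5) and insertion mutation (Step 6) for
# permutation-encoded chromosomes. Not the authors' code.
def pmx(p1, p2, cut1, cut2):
    """Partially mapped crossover: copy p1[cut1:cut2] into the child, then
    place p2's displaced genes via the mapping defined by the segment."""
    n = len(p1)
    child = [None] * n
    child[cut1:cut2] = p1[cut1:cut2]
    for i in range(cut1, cut2):
        gene = p2[i]
        if gene in child[cut1:cut2]:
            continue                      # already placed from p1's segment
        pos = i
        while cut1 <= pos < cut2:         # follow the mapping out of the segment
            pos = p2.index(p1[pos])
        child[pos] = gene
    for i in range(n):                    # remaining genes come from p2
        if child[i] is None:
            child[i] = p2[i]
    return child

def insertion_mutation(chrom):
    """Remove one randomly chosen gene and reinsert it at a random position."""
    c = chrom[:]
    gene = c.pop(random.randrange(len(c)))
    c.insert(random.randrange(len(c) + 1), gene)
    return c

child = pmx([1, 2, 3, 4, 5, 6, 7], [5, 4, 6, 7, 2, 1, 3], 2, 5)
print(child)                   # [2, 7, 3, 4, 5, 1, 6] — still a permutation
print(insertion_mutation(child))
```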


5 Parameter Tuning

The performance of a genetic algorithm heavily depends on its parameters. To implement the algorithm efficiently and obtain the best solution, determining the best parameter set is of vital importance. For the genetic algorithm, there are three parameters to tune:

i) Probability of mutation = {0.005, 0.01, 0.05, 0.09}
ii) Number of chromosomes (population size) = {50, 100, 150, 200}
iii) Number of generations (iterations) = {100, 500}

There are 4 × 4 × 2 = 32 different parameter combinations. Since uniform random numbers are generated in the selection and mutation steps, we ran the algorithm for 10 replications for each parameter combination and took the average of the makespan values. As seen from the pivot chart in Fig. 8, the best (minimum) makespan value is obtained when the probability of mutation = 0.05, the number of chromosomes = 100, and the number of generations = 100.
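The tuning experiment has a simple grid-search structure, sketched below. `run_ga` is a stand-in placeholder for the actual genetic algorithm (which is not reproduced here), so the numbers it returns are meaningless; the point is the 32-combination, 10-replication loop and the average-makespan comparison.

```python
import itertools
import random
import statistics

# Sketch of the tuning experiment: 4 x 4 x 2 = 32 parameter combinations,
# 10 replications each, compared on the average makespan. `run_ga` is a
# placeholder for the real GA.
MUTATION_PROBS = [0.005, 0.01, 0.05, 0.09]
POP_SIZES = [50, 100, 150, 200]
GENERATIONS = [100, 500]

def run_ga(pm, pop, gens, rng):
    return rng.uniform(300, 340)          # placeholder makespan in hours

def tune(replications=10, seed=0):
    rng = random.Random(seed)
    results = {}
    for pm, pop, gens in itertools.product(MUTATION_PROBS, POP_SIZES, GENERATIONS):
        makespans = [run_ga(pm, pop, gens, rng) for _ in range(replications)]
        results[(pm, pop, gens)] = statistics.mean(makespans)
    return min(results, key=results.get)  # combination with smallest mean makespan

print(len(MUTATION_PROBS) * len(POP_SIZES) * len(GENERATIONS))   # 32 combinations
print(tune())
```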

Fig. 8. Parameter tuning

6 Verification of Genetic Algorithm with a Small-Size Problem

To verify the genetic algorithm, we solved a 9-job problem with both the mathematical models and the genetic algorithm. Note that 9 is the maximum number of jobs that can be solved with the mathematical models in a reasonable running time, as shown in Table 4, and that we used the same 9 jobs exhibited in Fig. 7. In this particular example, the 9 jobs in the isolation stage correspond to 3 jobs in the bunching and sheathing stages. Figure 9 shows the Gantt chart formed from the outputs of the mathematical models, whereas Fig. 10 shows the Gantt chart formed from the outputs of the genetic algorithm. The best makespan of the genetic algorithm is 60.9 h, whereas the optimal makespan is 57.02 h according to the outputs of the mathematical models. Hence, there is a 6.8% deviation in terms of the makespan value. Since meta-heuristic methods do not guarantee optimality, this deviation is reasonable.
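The reported optimality gap follows directly from the two makespan values:

```python
# Checking the reported gap for the 9-job verification instance:
# deviation = (GA makespan - optimal makespan) / optimal makespan.
ga_makespan = 60.9        # hours, best GA solution
opt_makespan = 57.02      # hours, from the mathematical models
deviation = (ga_makespan - opt_makespan) / opt_makespan * 100
print(f"{deviation:.1f}%")    # 6.8%
```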


7 Solving the Actual-Size Problem with Genetic Algorithm

The company generally makes 2-week schedules. Hence, we implemented the genetic algorithm for a past 2-week order list containing 46 jobs in the isolation stage and 31 jobs in the bunching and sheathing stages. In this way, the actual makespan of the company can be compared to the makespan value obtained with the genetic algorithm. The makespan value obtained by the genetic algorithm is 308.44 h, equal to 12.85 days. The company stated that this 2-week order list was actually completed in 14 days. Hence, if the genetic algorithm had been employed by the company, the makespan would have been shortened by 1.15 days, an 8.21% reduction. In those 1.15 days, the company could accept more orders and increase its profitability through more sales.
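The reported savings follow from a short unit conversion:

```python
# Reproducing the reported savings for the 2-week order list.
ga_hours = 308.44
ga_days = ga_hours / 24              # about 12.85 days
actual_days = 14.0
saved_days = actual_days - ga_days   # about 1.15 days
reduction = saved_days / actual_days * 100
print(round(ga_days, 2), round(saved_days, 2), round(reduction, 1))
```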

Fig. 9. Gantt chart of the optimal solution for 9 jobs

Fig. 10. Gantt chart of the best solution with genetic algorithm for 9 jobs

8 Design of User Interface

By designing a user interface, we aim to show the company which job is assigned to which machine at which stage, when each job starts and ends, and what the best makespan is. A Gantt chart can be formed from the outputs of the genetic algorithm, as shown in Fig. 11. On the horizontal axis, the timeline is shown (in hours); on the vertical axis, stages are indicated by s and machines by m. There are four separate buttons on the right-hand side: three of them show the job-machine assignments for each stage and the start and completion times of jobs, and the last button gives the best makespan value.


Fig. 11. User interface

9 Conclusion

In this study, the objective is to minimize the makespan of the rubber coating line of a cable manufacturing company. This can be achieved by reducing idle times through systematic job-machine assignment and setup configuration. The production layout is a 3-stage hybrid flowshop, and there are sequence-independent and sequence-dependent setups before the stages. Due to the different inputs and outputs of the first and second stages, the problem is decomposed into two sub-problems: the first (isolation) stage is modeled as two identical parallel machines, and the second (bunching) and third (sheathing) stages are modeled as a two-stage hybrid flowshop. A mathematical model is used for each sub-problem, with the completion times of jobs from the former model used to compute the release times for the latter. The mathematical model for the isolation stage was solved for different job sizes and the running times were analyzed. We observed that even a 10-job problem could not be solved in a reasonable running time in the isolation stage, which supports the NP-hardness of the problem. However, 2-week schedules generally include more than 10 jobs. Hence, we employed a genetic algorithm to solve large-size problems within a reasonable running time, with parameter tuning performed to determine the best parameter values. A 2-week order list was solved with the genetic algorithm, and a smaller makespan was obtained compared to the company's actual makespan. By decreasing the makespan through efficient job-machine assignments and setup configuration, more orders can be accepted, which may lead to increased sales and profit. Finally, a user interface was designed for the employees of the company: using it, they can see the production schedule of an order list as a Gantt chart, with all job-machine assignments for each stage, the start and completion times of all jobs, and the best makespan value.

References

1. Pinedo, M.L.: Scheduling: Theory, Algorithms, and Systems. Springer, New York (2012)
2. Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discrete Math. 5, 287–326 (1979)


3. Kurz, M.E., Askin, R.G.: Scheduling flexible flow lines with sequence-dependent setup times. Eur. J. Oper. Res. 159(1), 66–82 (2004) 4. Akyol, E., Saraç, T.: Paralel Makina Çizelgeleme Problemi için bir Karma Tamsayılı Programlama Modeli: Ortak Kaynak Kullanımı. Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım ve Teknoloji 5(3), 109–126 (2017) 5. Ozdemir, M.S.: A mathematical modelling approach for multiobjective multi-stage hybrid flow shop scheduling problem. In: International Symposium of the Analytic Hierarchy Process (ISAHP) (2016) 6. Kahraman, C., Engin, O., Kaya, I., Yılmaz, M.K.: An application of effective genetic algorithms for solving hybrid flow shop scheduling problems. Int. J. Comput. Intell. Syst. 1(2), 134–147 (2008) 7. Murata, T., Ishibuchi, H., Tanaka, H.: Genetic algorithms for flow shop scheduling problems. Comput. Ind. Eng. 30(4), 1061–1071 (1996)

Improving Surface Quality of Additive Manufactured Metal Parts by Magnetron Sputtering Thin Film Coating

Binnur Sağbaş1(B), Hüseyin Yüce2, and Numan M. Durakbasa3

1 Mechanical Engineering Department, Yildiz Technical University, Istanbul, Turkey

[email protected]

2 Mechatronics Engineering, Marmara University, Istanbul, Turkey

[email protected]

3 Institute of Production Engineering and Photonic Technologies, Vienna University of

Technology (TU Wien), Vienna, Austria [email protected]

Abstract. Laser powder bed fusion is a novel production technology that generates the 3D geometry of a functional part directly from CAD data. It is a kind of additive manufacturing (AM) process that builds up geometries layer by layer. Because of this layer-based nature, the surface quality of manufactured parts does not reach the required level. Post-processing operations, such as sand blasting, shot peening, polishing, heat treatment and coating, are effective choices to increase the surface quality of AM parts. In this study, the aim is to assess the suitability of titanium thin film coating, deposited by DC magnetron sputtering, as a transition layer on metal-based additively manufactured surfaces. Preliminary results of an ongoing study focused on generating multilayered hard coatings on AM-manufactured AlSi10Mg substrates are reported. The surface roughness and friction coefficients of the samples were decreased, while the hardness and wettability were slightly increased by thin film coating of the substrates. Keywords: Additive manufacturing · Post processing · AlSi10Mg · Surface quality · Coating

1 Introduction

Additive Manufacturing (AM) is a novel and rapidly developing production technology that provides the opportunity to generate 3D geometries from CAD data without fixtures, tools or dies. Being compatible with Industry 4.0, AM techniques are seen as the production technology of the future [1]. Laser powder bed fusion (LPBF) is an AM method that uses powdered material and generates 3D geometries by fusing the powder layer by layer [2]. With its light weight, easy processability, accessibility and high corrosion resistance, AlSi10Mg is one of the most widely used materials in LPBF applications, especially in the aerospace and automotive industries [3–6]. However, its low hardness and poor tribological properties are a great concern, especially
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 436–443, 2022. https://doi.org/10.1007/978-3-030-90421-0_36

Improving Surface Quality of Additive Manufactured Metal Parts

437

for AM-manufactured parts with low surface quality. Because of their layer-based nature, AM parts have not yet reached the desired level of surface quality, which has an important effect on the friction properties, wear and corrosion resistance, and service life of the parts. For improving the surface integrity of the parts, post-processing techniques such as shot peening, sand blasting, polishing, heat treatment and coating are generally used [7–11]. Various coating techniques, such as plasma spray, dip coating, sol-gel, chemical vapour deposition and physical vapour deposition (PVD), can be used to improve the surface quality of functional parts. Magnetron sputtering is a PVD method that deposits a coating onto the substrate surface in a gaseous plasma: high-energy ions erode the target surface, and the liberated atoms are deposited as a thin film on the substrate in a vacuum environment. Titanium nitride (TiN) and titanium carbide (TiC) coatings are widely used for improving the surface properties of parts [12, 13]. However, it has been stated in previous studies that nano-structured TiN films deposited by the magnetron sputtering method do not provide sufficient adhesion to metal surfaces, which reduces the coating strength and galvanic corrosion resistance [14]. If a soft substrate surface is covered with a hard layer, the interfaces of the coating will exhibit different deformation properties when the functional part is exposed to load and impact; a high stress then arises from this deformation mismatch and damages both the hard and the soft layers. Therefore, transition layers with relatively lower hardness and load-absorbing ability are needed before hard coatings, and a thin Ti layer is generally deposited as a transition layer on the substrate. Ghasemi et al. [14] investigated the corrosion resistance and load-bearing properties of Ti/TiN coatings deposited by reactive magnetron sputtering on a 7075 aluminum substrate.
The authors stated that nano-structured coatings consisting of TiN, TiOxNy and TiO2 phases significantly increase the corrosion resistance and load-carrying ability of the substrate material. Studies on coating metal AM surfaces are very few, and they generally focus on bioactive coatings for biomedical applications; there is a lack of studies on hard coating of metal AM surfaces, especially for aerospace, defence and automotive industry applications. In this study, the aim is to assess the suitability of titanium thin film coating, deposited by DC magnetron sputtering, as a transition layer on metal-based additively manufactured surfaces. Preliminary results of an ongoing study focused on generating multilayered hard coatings on AM-manufactured AlSi10Mg substrates are reported and discussed.

2 Materials and Methods

In this study, disc-shaped AlSi10Mg alloy samples with a diameter of 30 mm and a thickness of 3 mm were designed with Fusion 360 and manufactured by laser powder bed fusion (LPBF) on an EOS M290, which uses a 400 W Yb-fibre laser. The 3D geometry of the samples was produced directly from CAD data under an argon atmosphere by melting AlSi10Mg powder layer by layer. The powder, produced by the gas atomization method, was provided by EOS; its average particle size is about 30 µm. The chemical composition, as stated by the producer in its data sheets [15], can be seen in Table 1. An SEM image of the powder and a manufactured sample can be seen in Fig. 1.

438

B. Sağbaş et al.

Table 1. Chemical composition of AlSi10Mg

Element | Al      | Si       | Fe   | Cu   | Mn   | Mg       | Ni   | Zn   | Pb   | Sn   | Ti
wt%     | Balance | 9.0–11.0 | 0.55 | 0.05 | 0.45 | 0.2–0.45 | 0.05 | 0.10 | 0.05 | 0.05 | 0.15

Fig. 1. AlSi10Mg powder and manufactured test sample.

Excess powder on the surface of the manufactured samples was cleaned off with pressurized air, and the surface was polished with 800, 1000 and 1200 grit SiC papers. The samples were then cleaned in ethyl alcohol and deionized water, respectively, in an ultrasonic cleaner for 20 min. A DC magnetron sputtering system (Nanovak) was used for coating the AlSi10Mg substrate. A titanium layer is generally used as a buffer layer before a hard coating, such as TiN, is applied to a substrate; a pure titanium target was therefore used for sputtering, to characterize the titanium coating as an initial layer before hard coating of the metal AM surfaces. After the 8 h coating process, the surfaces were inspected by X-ray diffraction (XRD) (PANalytical X'Pert PRO) and SEM-EDS analysis to determine the phase structures and chemical composition of the coatings. 3D areal surface roughness measurements of the substrates were made with a Sensofar S lynx according to ISO 25178-2 [16]. Measurements were taken before and after the coating process to determine the effect of the thin film coating on the surface quality of the substrate. The surface wettability of the samples was determined with a KSV CAM 200 contact angle measurement system. Equilibrium water contact angle measurements were taken 15 s after depositing a 5 µl water drop with a syringe. Contact angles were measured in five distinct areas of each sample surface, and the average values were calculated and reported.
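The areal roughness parameters reported in the next section follow the ISO 25178-2 definitions, which for the two most-cited parameters reduce to simple averages over the height map. The sketch below (not the authors' software, invented toy height map) computes Sa and Sq relative to the mean plane:

```python
# Illustrative sketch of the Sa and Sq areal roughness parameters, computed
# from a small height map z(x, y) relative to its mean plane, following the
# ISO 25178-2 definitions. The height values are invented.
def areal_roughness(heights):
    """heights: 2D list of surface heights in micrometres."""
    values = [z for row in heights for z in row]
    n = len(values)
    mean = sum(values) / n
    sa = sum(abs(z - mean) for z in values) / n             # arithmetical mean height
    sq = (sum((z - mean) ** 2 for z in values) / n) ** 0.5  # root mean square height
    return sa, sq

z = [[0.2, -0.1, 0.3],
     [-0.4, 0.1, -0.2],
     [0.3, -0.3, 0.1]]
sa, sq = areal_roughness(z)
print(round(sa, 4), round(sq, 4))   # 0.2222 0.2449
```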


The friction and wear tests of the deposited coatings were carried out in accordance with the ASTM G99-05 standard [17], with a TRIBOtechnic ball-on-disc wear test device in a dry environment under a 1 N load, using a 6 mm diameter Al2O3 ball. The worn surfaces of the substrates were inspected by Zeiss EVO® LS 10 SEM analysis. The micro Vickers hardness of both coated and uncoated samples was measured with an AOB hardness tester: a 10 g load was applied to the surface for 10 s, 15 measurements were taken from different regions of the substrates, and their arithmetic mean values were reported.

3 Results and Discussion

Different surface roughness parameters were recorded for a comprehensive evaluation of the surface topographies. Besides the arithmetical mean height (Sa), the root mean square height (Sq), maximum pit height (Sv), maximum height (Sz), skewness (Ssk) and kurtosis (Sku) parameters, which have an important effect on the friction, lubrication and wear behaviour of the surface, were also considered. The 3D roughness parameters are reported in Table 2, and 3D topographic images and 2D profiles of the surfaces can be seen in Fig. 2. The surface roughness of the uncoated samples was measured as Sa = 1.2449 µm, while that of the coated samples was 0.1401 µm. It is clear from these results that the surface quality of the samples in terms of roughness was greatly improved by the thin film coating.

Table 2. 3D surface roughness values

         | Sa (µm) | Sq (µm) | Sz (µm) | Sv (µm) | Sku    | Ssk
Uncoated | 1.2449  | 1.4709  | 8.3331  | 4.3559  | 2.1011 | 0.0357
Coated   | 0.1401  | 0.1884  | 3.0730  | 1.0656  | 6.4949 | 0.3980

The micro Vickers hardness of the uncoated sample was recorded as 151.73 HV, while it was 191.87 HV for the coated samples. Very high hardness values are not expected, since a transition layer is intended to be deposited to support the multi-layered hard coating that will be deposited afterwards. Nevertheless, the coating process appears to slightly increase the hardness value, which indicates that the substrate surface was successfully coated with a relatively harder metal, Ti. The average friction coefficients of the uncoated and coated surfaces were recorded as 0.430 and 0.394, respectively, while the initial friction coefficients were measured as 0.146 and 0.044, respectively; the coating layer thus reduced the initial friction coefficient by 69.86%. As the wear test continued, the coating was worn through to the substrate surface and the friction coefficient of the uncoated material began to be recorded, which increased the average friction coefficient. Friction coefficient graphs and the wear track of the coated surface are shown in Fig. 3. SEM images of the coated sample and wear track can be seen in Fig. 4. The coating layer was able to cover the entire surface, including deep grooves and high


Fig. 2. 3D topographies and 2D profiles of the uncoated (a) and coated (b) substrates


Fig. 3. Friction coefficient plots (a) and coated sample after wear test (b)

Fig. 4. SEM images of the coated substrate (a) and wear track of the surface (b)


peaks. The "shadow effect", which occurs especially on rough surfaces and prevents the roughness walls and bottoms from being completely covered, is seen to be extremely low. However, cracks appear on the coating surface in places; this is thought to arise from the coating parameters and is expected to be resolved by parameter optimization and appropriate surface preparation in ongoing studies. As expected, the wear resistance was found to be low, since titanium was deposited as a soft transition layer. Although abrasive wear was seen in places, stratification was generally observed, which may have arisen from low adhesion between the substrate and the coating layer. Better results are expected with proper preparation of the substrate surface, an optimal substrate temperature and optimal coating parameters; experimental studies are continuing in this direction. The SEM-EDS and XRD analyses are shown in Fig. 5. The analysis results also confirm that the substrate surface was mostly covered with Ti. These results show that additively manufactured surfaces with extremely high deviations in terms of surface texture can be successfully coated with thin film coating techniques such as magnetron sputtering to improve their surface properties.

Fig. 5. SEM-EDS analysis and XRD pattern of the coated surface.

Surface contact angles of the uncoated and coated samples were recorded as 134.865° and 117.138°, respectively, as shown in Fig. 6. The surface wettability of the coated substrate therefore increased slightly (a lower contact angle corresponds to higher wettability). Increasing wettability provides better adhesion between coating layers; moreover, higher wettability provides better lubrication, especially for the top surface of the coating. It may thus be possible to decrease the contact angle and increase the surface wettability of metal AM surfaces by thin-film coating. Further tribology tests and analyses under fluid lubrication conditions are required before a definitive conclusion about the lubrication properties of the coating can be drawn.


Fig. 6. Contact angle measurements of uncoated and coated substrate

4 Conclusion

In this study, the feasibility of thin-film coating of metal additive manufactured surfaces by DC magnetron sputtering was investigated. An LPBF-manufactured AlSi10Mg substrate surface was coated using a Ti target. The coated surfaces were analysed in terms of roughness, chemical composition, tribological behaviour, hardness and wettability. The following conclusions can be drawn from the results:
• The surface of a metal AM part with deep grooves and high peaks could be successfully coated by the DC magnetron sputtering method.
• The surface roughness of the substrates was decreased by the thin-film coating.
• Surface hardness and wettability were increased, while friction coefficients were decreased.
• Coating technologies are an effective way to increase the surface quality of functional AM parts.
• Further experiments with different coating parameters and targets are necessary to investigate optimal coatings for functional metal AM parts.

In ongoing studies, the authors are continuing the coating processes and analyses in order to generate hard coatings on AM surfaces.

References

1. Sagbas, B., Durakbasa, M.N.: Industrial computed tomography for nondestructive inspection of additive manufactured parts. In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) ISPR 2019. LNME, pp. 481–490. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31343-2_42
2. Galba, M.J., Reischle, T.: Additive manufacturing of metals using powder-based technology. In: Bandyopadhyay, A., Bose, S. (eds.) Additive Manufacturing, pp. 97–142. CRC Press/Taylor & Francis Group, Boca Raton (2016)
3. Li, W., et al.: Effect of heat treatment on AlSi10Mg alloy fabricated by selective laser melting: microstructure evolution, mechanical properties and fracture mechanism. Mater. Sci. Eng. A 663, 116–125 (2016)
4. Van Cauwenbergh, P., et al.: Unravelling the multi-scale structure–property relationship of laser powder bed fusion processed and heat-treated AlSi10Mg. Sci. Rep. 11, 1–15 (2021)
5. Macías, J.G.S., Douillard, T., Zhao, L., Maire, E., Pyka, G., Simar, A.: Influence on microstructure, strength and ductility of build platform temperature during laser powder bed fusion of AlSi10Mg. Acta Mater. 201, 231–243 (2020)


6. Ashkenazi, D., Inberg, A., Shacham-Diamand, Y., Stern, A.: Gold, silver, and electrum electroless plating on additively manufactured laser powder-bed fusion AlSi10Mg parts: a review. Coatings 11, 422 (2021). https://doi.org/10.3390/coatings11040422
7. Sagbas, B.: Post-processing effects on surface properties of direct metal laser sintered AlSi10Mg parts. Met. Mater. Int. 26(1), 143–153 (2019). https://doi.org/10.1007/s12540-019-00375-3
8. Han, Q., Jiao, Y.: Effect of heat treatment and laser surface remelting on AlSi10Mg alloy fabricated by selective laser melting. Int. J. Adv. Manuf. Technol. 102(9–12), 3315–3324 (2019). https://doi.org/10.1007/s00170-018-03272-y
9. Park, T.H., Baek, M.S., Sohn, Y., Lee, K.A.: Effect of post-heat treatment on the wear properties of AlSi10Mg alloy manufactured by selective laser melting. Arch. Metall. Mater. 65, 1073–1080 (2020)
10. Nahmany, M., Hadad, Y., Aghion, E., Stern, A., Frage, N.: Microstructural assessment and mechanical properties of electron beam welding of AlSi10Mg specimens fabricated by selective laser melting. J. Mater. Process. Technol. 270, 228–240 (2019)
11. Metel, A.S., et al.: Surface quality of metal parts produced by laser powder bed fusion: ion polishing in gas-discharge plasma proposal. Technologies 9(2), 27 (2021). https://doi.org/10.3390/technologies9020027
12. Lu, C., et al.: A novel anti-frictional multiphase layer produced by plasma nitriding of PVD titanium coated ZL205A aluminum alloy. Appl. Surf. Sci. 431, 32–38 (2018)
13. Sanchez-Lopez, J.C., Dominguez-Meister, S., Rojas, T.C., Colasuonno, M., Bazzan, M., Patelli, A.: Tribological properties of TiC/a-C:H nanocomposite coatings prepared via HiPIMS. Appl. Surf. Sci. 440, 458–466 (2018)
14. Ghasemi, S., Shanaghi, A., Chu, P.K.: Corrosion behavior of reactive sputtered Ti/TiN nanostructured coating and effects of intermediate titanium layer on self-healing properties. Surf. Coat. Technol. 326, 156–164 (2017)
15. EOS Aluminium AlSi10Mg data sheet. https://www.eos.info/03_system-related-assets/material-related-contents/metal-materials-and-examples/metal-material-datasheet/aluminium/material_datasheet_eos_aluminium-alsi10mg_en_web.pdf. Accessed 20 Aug 2021
16. ISO 25178-2:2019 Geometrical product specifications (GPS)—Surface texture: Areal—Part 2: Terms, definitions and surface texture parameters
17. ASTM G99:2017, Standard Test Method for Wear Testing with a Pin-on-Disk Apparatus

Labor Productivity as an Important Factor of Efficiency: Ways to Increase and Calculate

Ilham Huseynli(B)

Department of Mathematics and Statistics, Azerbaijan State University of Economics (UNEC), Baku, Azerbaijan

Abstract. The paper examines the impact on labor productivity of the formation of an innovative market that responds more quickly to the growing needs of modern society and of the use of advanced technologies and equipment, and analyzes various aspects of labor productivity growth and its calculation as one of the most important statistical indicators of economic development. It is shown that the increase in labor productivity needed for economic efficiency can be achieved through the transition to an innovative economy based on new technologies and the mobilization of all production resources. An indicator characterizing the relationship between productivity, production costs and production volume is identified.

Keywords: Labor productivity · Economic efficiency · Statistical indicators · Innovations · Human capital

1 Introduction

Representatives of the economic currents and schools of different eras have largely agreed on one issue: the concept of labor productivity and its economic essence. Labor productivity is one of the most important statistical indicators characterizing the labor market. It represents the aggregate of newly created goods and services per unit of working time. This indicator is the result of purposeful human activity in the various sectors of the national economy and is characterized by the amount of material wealth created per unit of time. Labor productivity is considered one of the key factors determining the standard of living in the social sphere and economic efficiency in economic activity. Increasing this indicator is therefore always a priority, since it determines the well-being of people. Its level can be raised either by increasing the quantity of product produced per unit of working time or by reducing the time spent on producing a unit of product [1]. At all times, the resources for increasing labor productivity and the main factors affecting it have been a research priority in the pursuit of economic efficiency. It is therefore important to classify the factors affecting labor productivity on a scientific basis, both for the correct planning of its level and dynamics in enterprises and for the full identification of possible sources of labor savings [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 444–451, 2022. https://doi.org/10.1007/978-3-030-90421-0_37


2 Importance of Increasing Labor Productivity, Influencing Factors and Sources of Reserves

Observations and research show that the task of increasing labor productivity to achieve economic efficiency can be successfully addressed through the transformation of the economic system into a knowledge and information economy and the mobilization of available production resources. For this purpose, along with the acquisition and application of technological innovations, the economical and efficient use of labor and financial resources and the minimization of costs and losses are necessary steps. The resources for increasing productivity in enterprises and corporations are directly related to the study and application of advanced scientific and technological innovations, the correct and appropriate deployment of productive forces, the modernization of existing equipment, the concentration and specialization of production, and the identification and further development of competitive industries. It should also be noted that growth in labor productivity occurs through the influence and participation of specific factors [3]. These factors are material and technical (logistical), organizational, economic and social, and each combines separate components:

2.1 Material and Technical Factors:
– mechanization, automation and robotization of production on the basis of scientific and technological innovations;
– application of new technology and modernization of existing equipment;
– renovation of structures, parts and other parameters;
– development and application of new technological processes;
– improving product quality and competitiveness.

2.2 Organizational Factors:
– level of concentration and specialization of production;
– degree of sustainability, harmony and proportionality of the production process;
– improvement of production management;
– effective organization of workplaces in accordance with production results;
– an optimal number of employees in each relevant category working in production;
– creation of healthy, safe and morally-aesthetically favorable working conditions, etc.

2.3 Economic Factors:
– rewarding the team for the overall result;
– rewarding each employee individually with money and other financial resources, etc.


2.4 Social Factors:
– strong provision of employees’ right to rest;
– further improvement of social and living conditions;
– stimulation and support of education and professional development, increasing modern knowledge and skills.

It should be noted that the resources for increasing labor productivity are inexhaustible: in one form or another they recur in a certain quantitative ratio. This is directly related to the continuous, rational application of scientific and technological innovations. For this reason, the rational application of new techniques and technologies is a decisive factor in the development of production and in raising the level and dynamics of labor productivity. The introduction of new machines and mechanisms, production equipment, advanced materials and new technological production methods and rules provides a serious basis for the effective development of production and a significant reduction in labor [4]. The effective implementation of these measures is impossible, first of all, without raising the qualification, new knowledge and skills of employees. The analysis of the level of use of working time is also important when determining the factors affecting labor productivity and the resources for increasing it. One of the most important sources of productivity within the enterprise is, of course, the reduction and minimization of working-time losses. Working-time losses fall into two groups: full-day losses and intra-shift losses.

1. Full-day losses of working time include:
– absenteeism due to illness and all-day downtime with the permission and instruction of management;
– losses of working time due to unexcused absences, etc.

2. Intra-shift losses of working time include:
– losses related to the organizational and technical conditions of production;
– time losses that depend on the employee himself.

The efficient use of employee time is, as a result, a key factor in ensuring the efficiency of the production process and increasing productivity. As the volume of production increases with the introduction of new innovations, the importance of saving time grows, because in the modern innovation and knowledge economy the time factor is of exceptional importance. If we consider that the economic nature of labor productivity is determined by the volume of production per unit of time, the meaning and logic of this idea become clear. As noted, one of the main factors influencing the formation and growth of productivity is natural resources. This factor, which plays a special role in determining labor productivity, comprises basic elements of many production processes: land, water, raw materials and mineral resources. Differences in the volume of natural resources affect


the living standards of countries. For example, the early economic success of the United States was partly due to the large amount of vacant land suitable for agricultural production at that time. Today there is no doubt that some countries in the Middle East, such as Qatar, Kuwait or Saudi Arabia, are among the richest countries in the world mainly because they have large oil reserves. Although natural resources are one of the key factors in productivity and economic growth, their existence is not always a prerequisite for highly efficient production and social welfare. In oil-rich Nigeria, for example, poverty has worsened compared with the pre-oil era. Japan, by contrast, has become one of the richest and most powerful countries in the world despite the scarcity of natural resources. The main reason for its economic growth is not resource wealth but a sound transition to a knowledge economy, a timely emphasis on human capital development, the stimulation of innovation and, consequently, competitive, technology-intensive production and export potential. One of the important factors in increasing labor productivity in the knowledge economy is the acquisition of technological knowledge and its effective application to the production process and to the service sector. A few hundred years ago, the vast majority of Americans worked on farms, and the agricultural machinery and equipment of that time required a great deal of labor. After the technological revolution, however, a small part of the working-age population of the United States has been able to feed almost the entire country and export a significant portion of its output through the introduction and stimulation of scientific and technological innovations.

Thus, the development of new agricultural technologies and the application of innovations have expanded the production of competitive end products by increasing labor productivity. This, in turn, has improved the provision of quality food and the social welfare of farmers and the population. A number of other factors also affect labor productivity: the improvement of the institutional infrastructure of the enterprise and the introduction of machinery, equipment and technological innovations that meet modern standards. Labor productivity is therefore also used as a key statistical indicator in areas such as innovation, human capital and modern technology. For this reason, calculating the level and dynamics of labor productivity by sector of the economy, as well as at the micro level, and identifying the various factors affecting it is one of the main tasks of modern economics, especially statistics. Labor productivity is also an important statistical indicator of the relationship between production costs and production volume. Productivity is mainly characterized by two groups of indicators: single-factor and multi-factor productivity indicators. While a single-factor productivity indicator is calculated on the basis of only one factor of production, multi-factor productivity is determined on the basis of several main factors of production. Labor productivity indicators in the economy are calculated according to different concepts and approaches; the most common and practical is gross output, and the other is the concept


of value added. Both concepts and the ways they are calculated differ in their economic nature [5]. Table 1 lists the key components of the productivity indicators based on these concepts.

Table 1. Labor productivity indicators

| Input measure | Production volume: product produced | Production volume: value added |
| Labor | Labor productivity based on the product produced | Labor productivity based on value added |
| Capital | Capital productivity based on the product produced | Value-added capital productivity |
| Total labor and capital expenditures | — | Multi-factor productivity based on value added |
| Total labor, capital and intermediate costs | Multi-factor productivity based on the product produced | — |

The first two rows are single-factor indicators; the last two combine several inputs (bilateral, multi-factor indicators).

It should be noted that labor productivity calculated on the basis of value added is considered the more accurate and adequate indicator in international practice. However, capital productivity and multi-factor productivity indicators are also used in economic analysis. The presented productivity indicators serve to analyze and compare the efficiency of various enterprises, large companies and corporations, as well as individual industries, and to compare the cost of production in different countries. However, comparisons based on these indicators are not always justified, first of all because the methodology, rules and methods of calculating costs, as well as the cost components themselves, often differ significantly between countries. In this case it is advisable to use productivity indices, which characterize the change in the level of productivity. The productivity index is defined as the ratio of the physical volume index of output to the physical volume index of costs. In some cases, the labor productivity index for the product produced is also calculated by dividing the index of the physical volume of production by the index of the physical volume of labor costs. The main economic essence of this indicator is to characterize the efficient use of labor resources in production, and it is accepted as one of the important indicators of labor statistics, including labor productivity statistics. Labor costs are determined by the number of hours actually worked or hours paid, with the quality of labor taken into account. However, the labor productivity index does not fully reflect the personal qualities


of employees, their professionalism and mastery of modern knowledge and skills, the impact of productive activity on productivity, or labor intensity, because the volume of production depends on technical and organizational factors as well as on the efficient use of production capacity. When conducting statistical analysis, it is advisable to calculate Gross Domestic Product on the basis of the purchasing power parity of currencies (in US dollars) in order to compare the level of labor productivity across countries. The statistical indicator of purchasing power parity is obtained by comparing the prices of specially selected types of products and services across countries and is expressed in a single currency. In this case, comparable prices by country are calculated not on the basis of the exchange rate but on the basis of purchasing power parity. During the calculations, the amount of labor costs is determined by the amount of compensation for labor in the value added produced.

3 Some Points of Calculating the Level and Dynamics of Labor Productivity

When determining labor productivity during statistical analysis, it is always important to pay attention to which sector of the economy the indicator belongs to. For example, in the agricultural sector labor costs per unit of output are calculated, while in trade the turnover per worker is calculated; other areas use different approaches. The main indicator of labor productivity, which is the same for all sectoral structures of the economy, is the direct indicator

ω = Q / T, (1)

where Q is the volume of product produced per unit of time or by one worker, and T is the labor time expended. The reciprocal of this indicator characterizes the labor consumed per unit of output, i.e. the labor intensity. During economic analysis, labor costs are calculated in different units depending on the nature of the calculation: man-hours worked, man-days, the average number of employees for the accounting period, or the average number of all employees. In some cases it is necessary to calculate labor productivity per chosen unit of labor cost; the main indicators in this case are the average hourly and average daily productivity. Average hourly productivity (ωs) is calculated by dividing the volume of output during the reporting period by the number of man-hours actually worked; it is the average output per hour of actual time worked. Multiplying this indicator by the length of the working day gives the average daily productivity. When calculating the average productivity of an employee in labor market analysis, the following universal relationship is mainly used [6]:

ω1sih = ωs · Tg · Td · di, (2)


where ωs is the average hourly productivity, Tg the average length of the working day, Td the average length of the work period (expressed in days), and di the share of workers in the total number of production staff. The dependence between these indicators can be expressed by the following formula using indices:

Iω1sih = Iωs · ITg · ITd · Idi. (3)

As a rule, statistical analysis uses the natural, labor and value methods when determining changes in the level and dynamics of labor productivity. The natural method is acceptable only when a single type of product is manufactured; for enterprises and corporations that produce a wide range of products it is not advisable, and the labor or value method must be used instead. When assessing changes in the level and dynamics of labor productivity by the labor method, the actual labor intensity must first be determined, and the labor productivity index takes the form

Iωs = (Σ q1 t0 / T1) : (Σ q0 t0 / T0) = Σ q1 t0 / Σ q1 t1, (4)

where t0 is the labor cost per unit of each type of product in the base period, i.e. its labor intensity. When the prices of the product assortment are known, the change in the level of labor productivity is calculated by the value method, with the index formula

Iω = ω1 / ω0 = (Σ q1 p0 / T1) : (Σ q0 p0 / T0). (5)

It should also be noted that the value method is used more often when analyzing changes in the dynamics of labor productivity. This method is used both at the enterprise level and in individual economic sectors, as well as at the level of the economy as a whole.

4 Conclusions and Recommendations

– Increasing labor productivity to achieve economic efficiency can be successfully achieved through the transition to an innovative economy based on new technologies and the mobilization of all production resources.
– To compare the level of labor productivity across countries, it is expedient to calculate Gross Domestic Product on the basis of the purchasing power parity of currencies.
– Productivity is an indicator that characterizes the relationship between production costs and production volume.
– It is expedient to use different concepts when calculating labor productivity indicators in the economy; the most common and practical is gross output, and the other is the concept of value added.


– When calculating the average productivity of an employee in labor market analysis, it is necessary to use the following universal formula, based mainly on labor costs:

ω1sih = ωs · Tg · Td · di.

Acknowledgment. The author would like to thank the editor and anonymous reviewers for constructive and valuable suggestions and comments on the work.

References

1. Armin, F., Ernst, F.: Why labour market experiments? Labour Econ. 10(4), 399–406 (2003)
2. Simon, D., Frank, W.: The law of the labour market: industrialization, employment, legal evolution. Ind. Labor Relations Rev. 60(1), 142–145 (2006)
3. Emanuela, A., Anthony, S., Diane, S., Robert, F.E.: The labour market for nursing: a review of the labour supply literature. Health Econ. 12(6), 465–478 (2003)
4. Grimshaw, D., Fagan, C., Hebson, G., Tavora, I.: Making Work More Equal: A New Labour Market Segmentation Approach, p. 368. Manchester University Press (2017)
5. Joanne, L., Stephen, M.: Spatial changes in labour market inequality. J. Urban Econ. 79, 121–138 (2014)
6. Huseynli, I.: Labor productivity as an important factor of efficiency: ways to increase and calculate. Silk Way (4), 20–28 (2020)

Risk Governance Framework in the Oil and Gas Industry: Application in Iranian Gas Company

Mohsen Aghabegloo, Kamran Rezaie(B), and S. Ali Torabi

School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
{m.aghabegloo,krezaie,satorabi}@ut.ac.ir

Abstract. Various stakeholders with conflicting views have made the risk management process in the oil and gas industry a problematic issue. Moreover, systematic risks such as increased population in industrial areas or a significant reduction in oil and gas prices have questioned the effectiveness of organizational risk management practices. Therefore, a proper framework should be applied in which the knowledge, values, and interests of different stakeholders are integrated into the risk management decision-making process. This study suggests the International Risk Governance Council’s framework for the oil and gas industry. The framework is applied in the biggest Iranian gas company, and its different phases are elaborated for the case study in the oil and gas industry. Then, the elements of the different phases are prioritized using a fuzzy hybrid multi-attribute decision-making method to provide valuable information for resource allocation across the elements.

Keywords: Risk management · Risk governance · Risk assessment · Fuzzy sets theory · Multi-attribute decision making

1 Introduction

The oil and gas industry has experienced severe accidents with catastrophic environmental, safety, and financial consequences. Marsh JLT [1] has been reviewing and analyzing the 100 most significant losses in the hydrocarbon industry since 1977. In most cases, major events have occurred due to the uncontrolled escalation of minor incidents [1], which highlights the importance of risk management measures in the oil and gas industry. However, the uncertainties involved in the business environment and the complicated relationships between various stakeholders in the industry have hindered the effective implementation of the risk management process. Moreover, systematic risks such as increased population density in industrial areas, the impact of oil prices on disruptive incidents in industrial plants, and differing interpretations of risk among stakeholders have reduced the effectiveness of existing enterprise risk management measures. Therefore, applying a proper risk governance framework to support the effectiveness and efficiency of enterprise risk management is inevitable.

The risk governance framework applies governance principles to the risk management process and determines how risk data is collected, analyzed, and communicated

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 452–463, 2022. https://doi.org/10.1007/978-3-030-90421-0_38


among different stakeholders. It also indicates the interdependencies between risk management and management decisions [2]. In this way, the risk management processes are well implemented and maintained, and management decisions are taken with the involvement of different stakeholders [3]. As a result, shareholders are persuaded to invest in production technology advancement, which enhances production effectiveness and efficiency [4]. Moreover, oil and gas production continuity will be ensured while human hazards, environmental degradation, and other devastating effects of uncertainties are well controlled. In this paper, the International Risk Governance Council (IRGC) risk governance framework is suggested as a feasible and comprehensive basis for developing risk governance in the oil and gas industry.

2 Literature Review Risk assessment in the oil and gas industry has been widely studied in the literature [5]. Skogdalen and Vinnem, [6] studied the integration of human and organizational factors through the quantitative risk assessment (QRA) method. Casal and Olsen, [7] investigated the influence of work climate on hydrocarbon leaks as a major risk factor. Besides human factors, accident risks during operations have been widely studied in the literature [5]. Aghabegloo et al., [8] proposed a fuzzy risk management framework to analyze physical asset risks throughout its life-cycle. They applied the model in the oil and gas industry. Darbra et al., [9] investigated different scenarios of domino effects of accidents and analyzed different aspects of these events, including the probability of the event, consequences, and severity. Guo et al., [10] proposed a fuzzy Bayesian network model for risk assessment of accidents for storage tanks. The model predicts the probability of the accidents and the main contributing factors to assist decisionmakers in optimal resource allocation for risk management measures. Bucelli et al., [11] applied risk barometer methodology for risk assessment of oil and gas facilities. They evaluated the probability and severity of oi spill considering safety barriers. Academic works on the oil and gas sector’s risks have primarily addressed operational risk assessment and model development, while risk management approaches are inevitable for integrating risk assessment results into the organizational risk management process [3]. Aven et al., [12] proposed a decision framework for risk management consisting of problem definition, stakeholders, consequence analysis, managerial decision, and review. The framework is applied in an operator of a major source of gas production in Europe. 
Dawotola et al. [13] studied failures and degradations in oil and gas pipelines and applied the analytic hierarchy process (AHP) to prioritize inspection and maintenance activities. A regional risk governance framework was proposed by Haapasaari et al. [14]; the framework was used to analyze maritime safety as a holistic system, and its results highlighted scientific risk assessment and stakeholders' input as the main elements. Neves et al. [15] applied ISO 31000 to oil spill risk management, considering oil pollution preparedness, planning, and risk communication. Different stakeholders have different types of knowledge about risks, leading to different approaches towards risk mitigation options [16]. To overcome such issues and better address stakeholders' concerns in the risk management process, the IRGC proposed a framework to structure risk governance in order to improve decision making and risk management [2, 17–20].

454

M. Aghabegloo et al.

While valuable studies have been conducted on risk assessment and management in the oil and gas industry, the literature still lacks a practical framework for risk governance in this sector. In this paper, the IRGC framework is suggested as a feasible framework for the oil and gas industry and applied in South Pars Gas Company (SPGC). First, the different phases of the framework are elaborated according to the SPGC context. Then, a hybrid fuzzy DEMATEL-ANP approach is applied to assess the effect of each phase of the model on the success of implementing and embedding the risk management process and risk-based decision making in SPGC.

3 IRGC Risk Governance Framework in South Pars Gas Company
The International Risk Governance Council developed the IRGC risk governance framework in 2006. While organizational risk management frameworks such as ISO 31000 and COSO focus solely on decision-making within an organization, the IRGC framework emphasizes a decision-making process involving various stakeholders [1]. In this section, the different interlinked phases of the framework are elaborated, considering its application in SPGC. SPGC is one of the National Iranian Gas Company subsidiaries, established in 1998, and is responsible for the operation of the onshore facilities of multiple phases of the South Pars gas field. The IRGC framework as applied to SPGC is presented in Fig. 1 [2].

Fig. 1. IRGC risk governance framework for SPGC [2]. The framework cycles through the interlinked phases Pre-assessment, Appraisal, Characterisation and Evaluation, and Management, built around the cross-cutting aspects of risk management: communication, documentation, stakeholder engagement, and the context of SPGC.

Risk Governance Framework in the Oil and Gas Industry

455

3.1 Pre-assessment
At this phase, different perceptions of risks, issues to be addressed, and baselines for risk assessment and management are determined. Moreover, changes in insurance coverage and premium limits can be identified according to corrective actions or improvements made since previous risk assessment periods. This step consists of two components: gathering views and strategy determination.

1) Gathering views: this component includes two elements, problem framing and early warning.
• Problem framing: the gas refinery risk areas are identified. These risk areas should be updated based on the refinery's new projects or changes in its external stakeholders, and the effects of such changes on insurance coverage and premium limits should be considered.
• Early warning: an initial assessment is performed to identify hazards and threats. At this stage, the assessors of South Pars Gas Company or third-party assessors (such as insurance assessors) identify significant shortcomings in the risk management system of the refinery facilities.
2) Strategy determination: this component includes two elements, screening possible approaches and approach selection.

3.2 Appraisal
The appraisal phase provides sufficient knowledge to decide on the appropriate risk management strategy. This phase consists of two components: "hazard identification and risk assessment" and "concern assessment".

1) Hazard identification and risk assessment: risk characteristics are assessed in this component. The probability of occurrence or probability distribution of events and the range of risk consequences are identified and described. The assessment is conducted according to the type of hazard, the vulnerability, and the level of exposure of the refinery facilities. This component includes three elements: hazard identification, risk and vulnerability assessment, and risk classification.
• Hazard identification: the current conditions of the gas plants, lessons learned from past accidents, the existing organizational culture, the types of accident scenarios, and effective and efficient hazard identification methods should be considered.
• Risk and vulnerability assessment: covers the probability of occurrence and severity of accident scenarios, potential injuries, risk creation and control processes, physical asset vulnerability, and the reliability, comprehensiveness, accuracy, and degree of uncertainty of the risk assessment.
• Risk classification: the identified risks and hazards are classified as software risks, hardware risks, and emergency risks according to the Marsh risk classification for the hydrocarbon industry [21].


2) Concern assessment: managerial decisions on risk management closely relate to decision-makers' past experiences, how they perceive risk, and their level of attention to socio-economic issues. Concern assessment can provide valuable information about the organization's risk-taking behavior and about its concerns and values regarding the socio-economic impacts of risks. This component includes three elements:
• Risk perception: the day-to-day concerns of senior and middle managers about the potential risks to the gas plant or SPGC, and the differences between the views of SPGC senior managers and the senior and middle managers in the gas plants.
• Social concerns: destructive effects on the safety or health of surrounding human communities and effects on the quality of the living environment.
• Socio-economic impacts: the consequences of failing to fulfil national responsibility towards the oil and gas industry and the economic vulnerability of employees.

3.3 Characterization and Evaluation
Different risks may have different dimensions that affect how they are assessed and managed. During this phase, various information on risks is obtained and used to complete the risk profile and risk classification. This classification helps plan the involvement of different stakeholders in risk governance and the design of risk management strategies. This phase consists of two components: characterization and risk evaluation.

1) Characterization: this component includes three elements: risk profile, judgment of the seriousness of risks, and risk mitigation options.
• Risk profile: risks are divided into three categories in terms of their characteristics: simple, complex, and highly uncertain.
• Judgment of the seriousness of risks: the seriousness of risks is determined according to the performed assessment and the risk profile.
• Risk mitigation options: an initial list of risk mitigation options is prepared.
2) Risk evaluation: this component consists of two elements: judging risk tolerability and acceptability, and the need for risk mitigation measures.
• Judging risk tolerability and acceptability: decisions are made about the tolerability and acceptability of risks. Based on the risk assessment and the organizational behavior towards risk acceptance, each risk falls into one of three levels: acceptable (AC), as low as reasonably possible (ALARP), and unacceptable (UN). In the ALARP region, the suitability of existing deterrent systems and safety barriers to the level of risk must be demonstrated.
• Need for risk mitigation measures: risk mitigation measures are weighed against transferring risks (insurance).


3.4 Management
This phase consists of two components: decision making and implementation.

1) Decision making: the risk management system should be established to facilitate a systematic decision-making process. Feasibility studies should be conducted for the various proposed mitigation options to gather enough information about them, and risk management options should be evaluated against predefined criteria such as effectiveness, efficiency, and sustainability. In this component, different risk management strategies are examined; for this, options should be identified, assessed, and evaluated.
• Options identification: in order to identify risk management options, the knowledge attribute of the risks must be considered. Simple risks are managed with conventional and accepted solutions, such as applying API or IPS standards. Strategies for dealing with complex risks are determined from the expert opinions of internal and external consultants and are identified from the best available information and complex mathematical modeling, for example designing blast-resistant buildings according to impact analysis studies. For risks with high uncertainty, recovery strategies or contingency measures are usually proposed.
• Options assessment: risk management strategies are determined, and the cost and benefit of risk mitigation measures are compared against risk transfer (analyzing the costs of insurable risks and the cost of transferring these risks).
• Evaluating and selecting options: the stakeholders and people involved in the risk management process, the responsibilities in implementing risk management strategies, the criteria for prioritizing and evaluating risk-taking options, the results from implementing risk management strategies, and the prerequisites, requirements, and infrastructure needed for option implementation are determined.
2) Implementation: the selected options are implemented, and their effectiveness is monitored and reviewed. This component includes three elements: realization of selected options, monitoring and control, and feedback from risk management practice.

3.5 Cross-Cutting Aspects of Risk Management
This phase includes aspects that affect all previous steps and should be considered during the risk assessment, risk evaluation, and decision-making process. The components of this phase are communication, documentation, stakeholder involvement, and the context of the gas plants and SPGC.


4 IRGC Risk Governance Framework in South Pars Gas Company
The elements introduced in the previous section may have different effects on the success of risk governance in SPGC. Therefore, the relative importance of these elements should be calculated to help decision-makers allocate resources optimally to each element of the different phases of the framework. For this purpose, a hybrid multi-attribute decision-making (MADM) approach is applied. Due to the interdependencies among the different phases, the analytic network process (ANP) developed by Saaty [22], which is the general form of the analytic hierarchy process (AHP) [23], is used. However, pure ANP has the shortcoming that it does not consider the uncertainty associated with human judgment [24]. Accordingly, fuzzy ANP is applied, in which triangular fuzzy numbers are used to transform linguistic terms into quantitative judgments. The first step in ANP is to construct an unweighted supermatrix through relative comparisons of the factors in the network structure. The network of factors is usually structured through a Delphi process; here, however, the fuzzy Decision Making Trial and Evaluation Laboratory (DEMATEL) method is applied to map the network of factors, considering the direct and indirect influence flows among the elements [25–28]. Moreover, to calculate the weighted supermatrix in fuzzy ANP, the influence degrees of the DEMATEL factors are used rather than the normal averaging method. The steps are elaborated below.

4.1 Calculate the Average Matrix
Each member of the experts' committee is asked to indicate to what extent he/she believes that phase $j$ affects phase $j'$, in terms of an uncertain linguistic term expressed as a triangular fuzzy number (TFN) $\tilde{E}^k_{jj'} = (e^1_{jj'k}, e^2_{jj'k}, e^3_{jj'k})$ according to Table 1. The average matrix $\tilde{N}$, whose $(j, j')$ element is $\tilde{n}_{jj'} = (n^1_{jj'}, n^2_{jj'}, n^3_{jj'})$, is calculated from the different experts' opinions about the relations between phases by

$$n^t_{jj'} = \frac{\sum_k e^t_{jj'k}}{g} \quad \forall j, j', t \qquad (4.1)$$

where $k$ is the index of the expert committee member, $g$ denotes the number of experts, and $n^t_{jj'}$ denotes the $t$-th component of the TFN $\tilde{n}_{jj'}$.

4.2 Calculate the Normalized Direct-Relation Matrix
The normalized direct-relation matrix $\tilde{D}$, whose $(j, j')$ element is $\tilde{d}_{jj'} = (d^1_{jj'}, d^2_{jj'}, d^3_{jj'})$, is calculated by

$$d^t_{jj'} = \frac{n^t_{jj'}}{\max_j \sum_{j'} n^t_{jj'}} \quad \forall t \qquad (4.2)$$

where $d^t_{jj'}$ denotes the $t$-th component of the TFN $\tilde{d}_{jj'}$.


4.3 Acquire the Total Relation Matrix
The total relation matrix $\tilde{H}$, whose $(j, j')$ element is $\tilde{h}_{jj'} = (h^1_{jj'}, h^2_{jj'}, h^3_{jj'})$, is calculated as

$$\tilde{H} = \lim_{n \to \infty} \left(\tilde{D} + \tilde{D}^2 + \cdots + \tilde{D}^n\right) \qquad (4.3)$$

The total relation matrix is defuzzified as

$$h_{jj'} = \frac{h^1_{jj'} + h^2_{jj'} + h^3_{jj'}}{3} \quad \forall j, j' \qquad (4.4)$$
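As an illustration, the core of Eqs. (4.1)–(4.3) can be sketched in Python on crisp (defuzzified) scores; the two expert matrices below are invented example data, not the SPGC judgments.

```python
# Illustrative sketch of the DEMATEL steps in Eqs. (4.1)-(4.3) on crisp
# (defuzzified) scores. The expert matrices are invented example data.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

# Eq. (4.1): average the g experts' direct-influence matrices.
experts = [
    [[0, 3, 1], [1, 0, 5], [5, 3, 0]],
    [[0, 5, 1], [3, 0, 5], [5, 5, 0]],
]
g, n = len(experts), 3
N = [[sum(E[i][j] for E in experts) / g for j in range(n)] for i in range(n)]

# Eq. (4.2): normalize by the largest row sum so the power series converges.
s = max(sum(row) for row in N)
D = [[N[i][j] / s for j in range(n)] for i in range(n)]

# Eq. (4.3): total relation matrix H = D + D^2 + ..., summed to convergence.
H = [row[:] for row in D]
P = [row[:] for row in D]
for _ in range(200):
    P = mat_mul(P, D)
    H = mat_add(H, P)
```

Because $H = \sum_{k \ge 1} D^k$, the result satisfies the fixed-point relation $H = D + D \cdot H$, which provides a quick convergence check.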

4.4 Calculate the Fuzzy Weighted Supermatrix
The fuzzy weighted supermatrix is calculated from the equations below:

$$W^i_w = H \cdot W^i, \quad i = 1, 2, 3 \qquad (4.5)$$

$$\tilde{W}_w = \left[\tilde{w}_{w,ij}\right]_{n \times n} = \left[\left(w^1_{w,ij},\, w^2_{w,ij},\, w^3_{w,ij}\right)\right]_{n \times n} \qquad (4.6)$$

where $W^i$ denotes the $i$-th component matrix of the fuzzy unweighted supermatrix $\tilde{W}$.

4.5 Acquire the Fuzzy ANP Weights
Raise $W_w$ to the limiting power $l$ as in Eq. (4.7) to obtain the fuzzy global priority vectors $\tilde{W}_f$, called the fuzzy ANP weights:

$$W^i_f = \lim_{l \to \infty} \left(W^i_w\right)^l, \quad i = 1, 2, 3 \qquad (4.7)$$

$$\tilde{W}_f = \left(W^1_f, W^2_f, W^3_f\right) \qquad (4.8)$$

Table 1. Linguistic terms and corresponding membership functions

Linguistic term    Membership function
None               (0, 0, 1)
Low                (1, 1, 3)
Moderate           (1, 3, 5)
Considerable       (3, 5, 7)
High               (5, 7, 9)
Extreme            (7, 9, 9)
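A minimal sketch of how the Table 1 scale feeds Eqs. (4.1) and (4.4): linguistic judgments are mapped to TFNs, averaged component-wise, and defuzzified by the mean-of-components rule. The three expert judgments below are invented for illustration.

```python
# Table 1 linguistic scale mapped to triangular fuzzy numbers (TFNs).
SCALE = {
    "None": (0, 0, 1), "Low": (1, 1, 3), "Moderate": (1, 3, 5),
    "Considerable": (3, 5, 7), "High": (5, 7, 9), "Extreme": (7, 9, 9),
}

def average_tfn(terms):
    """Component-wise TFN average over g experts (Eq. 4.1)."""
    tfns = [SCALE[t] for t in terms]
    g = len(tfns)
    return tuple(sum(t[c] for t in tfns) / g for c in range(3))

def defuzzify(tfn):
    """Mean-of-components defuzzification (Eq. 4.4)."""
    return sum(tfn) / 3

judgments = ["Moderate", "High", "Considerable"]  # invented expert inputs
avg = average_tfn(judgments)   # (3.0, 5.0, 7.0)
crisp = defuzzify(avg)         # 5.0
```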

Table 2. Total relation matrix after applying the threshold value

                                  Pre-assessment  Appraisal  Characterisation  Management  Cross-cutting
                                                             and evaluation                aspect
Pre-assessment                    0               0          0                 0           0
Appraisal                         0               0          0.285             0.336       0
Characterisation and evaluation   0               0          0                 0.359       0
Management                        0.371           0.395      0.31              0.277       0.358
Cross-cutting aspect              0.465           0.495      0.461             0.551       0
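The thresholding used to produce Table 2 can be sketched as follows; the 0.272 threshold is the paper's, while the small example matrix is invented.

```python
# Zero out total-relation entries below the threshold, as done to obtain
# Table 2 (threshold 0.272). The example matrix values are invented.
THRESHOLD = 0.272

def apply_threshold(H, theta=THRESHOLD):
    return [[h if h >= theta else 0 for h in row] for row in H]

H_example = [[0.120, 0.336],
             [0.395, 0.260]]
H_cut = apply_threshold(H_example)   # [[0, 0.336], [0.395, 0]]
```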

The fuzzy ANP weights are defuzzified as follows:

$$W_f = \frac{W^1_f + W^2_f + W^3_f}{3} \qquad (4.9)$$

Based on interviews with the experts' committee in SPGC and the National Iranian Gas Company (NIGC), the total relation matrix obtained with a threshold value of 0.272 is given in Table 2. From the pair-wise comparisons constructed in the fuzzy ANP method on the basis of the acquired total relation matrix, the final weights of each element are presented in Table 3.

Table 3. Weights of elements derived from the ANP method

Element                                        Weight
Realization of selected options                0.152872
Options identification                         0.125692
Feedback from risk management practice         0.117194
Stakeholder engagement                         0.115603
Evaluating and selecting options               0.091451
Options assessment                             0.082265
Communication                                  0.075838
Hazard identification                          0.05071
Risk and vulnerability assessment              0.031993
Risk perception                                0.030606
Screening possible approaches                  0.024578
Approach selection                             0.019301
Judging risk tolerability and acceptability    0.017218
Early warning                                  0.013988
Monitoring and control                         0.012494
Risk mitigation options                        0.011271
Context of SPGC                                0.007889
Risk profile                                   0.006927
Socio-economic impacts                         0.006578
Social concerns                                0.005578
Problem framing                                0
Documentation                                  0
Risk classification                            0
Judgment of the seriousness of risks           0

Based on the results shown in Table 3, the realization of selected options is the most influential element in the success of the risk management process in SPGC. Therefore, top management should invest in the selected options to ensure the effectiveness of the risk governance framework. "Realization of selected options" is interrelated with "options identification", as both are in the management phase (see Table 2). Therefore, SPGC should allocate adequate resources (including person-days and financial resources) to these two elements. Moreover, stakeholder engagement is an influential element that should be considered in identifying and realizing risk mitigation options.

5 Conclusions
The complicated relationships between various stakeholders and the complexity of systemic risks have threatened the effectiveness of risk management practices in the oil and gas industry. This paper suggests implementing the IRGC risk governance framework for oil and gas risk management. The following managerial considerations should be taken into account:
• Drivers and benefits of the risk governance framework: these should be studied extensively in the company to capture the support of all interested parties. Integrating management decisions and risk management, directing shareholders' investments, ensuring gas production continuity, mitigating safety and environmental risks, and addressing emerging risks are of paramount importance in this regard.
• SPGC-specific framework: IRGC is a generic framework whose components should be applied considering the organization's context. For this, the different components of the framework are elaborated based on the SPGC context to manage uncertainties in gas plant operation.
• Priorities of IRGC elements for SPGC: to understand the impact of each element on the success of the IRGC framework and to optimize resource allocation, the relative importance of each element should be calculated. A fuzzy DEMATEL-ANP approach


is applied in this paper. "Realization of selected options" and "options identification" are the two most significant elements. Therefore, SPGC should gather all stakeholders' views about the different risk mitigation options, since gas operations are usually affected by the introduced options. In addition, realizing the selected options ensures the practicality of the framework and helps maintain the continuity of gas operations with reduced strategic, operational, safety, and environmental risks. Future research may focus on applying other risk governance frameworks in the oil and gas industry and conducting comparative studies of their results.

References
1. Marsh: 100 Largest Losses in the Hydrocarbon Industry 1974–2019 (2020)
2. IRGC: Risk Governance: Towards an Integrative Approach. IRGC, Geneva (2017)
3. Goerlandt, F., Pelot, R.: An exploratory application of the International Risk Governance Council's risk governance framework to shipping risks in the Canadian Arctic. In: Governance of Arctic Shipping, pp. 15–41. Springer, Cham (2020)
4. Mohammed, H.K., Knapkova, A.: The impact of total risk management on company's performance. In: Procedia-Social and Behavioral Sciences, pp. 271–277 (2016)
5. Yang, X., Haugen, S., Paltrinieri, N.: Clarifying the concept of operational risk assessment in the oil and gas industry. Saf. Sci. 108, 259–268 (2018)
6. Skogdalen, J.E., Vinnem, J.E.: Quantitative risk analysis offshore—human and organizational factors. Reliab. Eng. Syst. Saf. 96(4), 468–479 (2011)
7. Casal, A., Olsen, H.: Operational risks in QRAs. Chem. Eng. Trans. 48, 589–594 (2016)
8. Aghabegloo, M., Rezaie, K., Torabi, S.A.: Physical asset risk management: a case study from an asset-intensive organization. In: The International Symposium for Production Research, pp. 667–678 (2020)
9. Darbra, R.M., Palacios, A., Casal, J.: Domino effect in chemical accidents: main features and accident sequences. J. Hazard. Mater. 183, 565–573 (2010). https://doi.org/10.1016/j.jhazmat.2010.07.061
10. Guo, X., Ji, J., Khan, F., Ding, L.: Fuzzy Bayesian network based on an improved similarity aggregation method for risk assessment of storage tank accident. Process Saf. Environ. Prot. 144, 242–252 (2020). https://doi.org/10.1016/j.psep.2020.07.030
11. Bucelli, M., Paltrinieri, N., Landucci, G.: Integrated risk assessment for oil and gas installations in sensitive areas. Ocean Eng. 150, 377–390 (2018)
12. Aven, T., Vinnem, J.E., Wiencke, H.S.: A decision framework for risk management, with application to the offshore oil and gas industry. Reliab. Eng. Syst. Saf. 92(4), 433–448 (2007)
13. Dawotola, A.W., Van Gelder, P.H.A.J.M., Vrijling, J.K.: Multi criteria decision analysis framework for risk management of oil and gas pipelines. In: Reliability, Risk and Safety, pp. 307–314 (2010)
14. Haapasaari, P., Helle, I., Lehikoinen, A., Lappalainen, J., Kuikka, S.: A proactive approach for maritime safety policy making for the Gulf of Finland: seeking best practices. Mar. Policy 60, 107–118 (2015)
15. Neves, A.A.S., et al.: Towards a common oil spill risk assessment framework—adapting ISO 31000 and addressing uncertainties. J. Environ. Manage. 159, 158–168 (2015)
16. Parviainen, T., Lehikoinen, A., Kuikka, S., Haapasaari, P.: Risk frames and multiple ways of knowing: coping with ambiguity in oil spill risk governance in the Norwegian Barents Sea. Environ. Sci. Policy 98, 95–111 (2019)


17. Renn, O.: Stakeholder and public involvement in risk governance. Int. J. Disaster Risk Sci. 6(1), 8–20 (2015)
18. Renn, O.: Risk Governance: Coping with Uncertainty in a Complex World. Routledge (2017)
19. Renn, O., Klinke, A., Van Asselt, M.: Coping with complexity, uncertainty and ambiguity in risk governance: a synthesis. Ambio 40(2), 231–246 (2011)
20. Klinke, A., Renn, O.: Adaptive and integrative governance on risk and uncertainty. J. Risk Res. 15(3), 273–292 (2012)
21. Marsh: Benchmarking the Middle East Onshore Energy Industry (2014)
22. Saaty, T.L.: Decision Making with Dependence and Feedback: Analytic Network Process. RWS Publications, Pittsburgh (1996)
23. Saaty, T.L.: The Analytic Hierarchy Process. McGraw-Hill, New York (1980)
24. Chou, Y.-C., Sun, C.-C., Yen, H.-Y.: Evaluating the criteria for human resource for science and technology (HRST) based on an integrated fuzzy AHP and fuzzy DEMATEL approach. Appl. Soft Comput. 12(1), 64–71 (2012)
25. Li, C.-W., Tzeng, G.-H.: Identification of a threshold value for the DEMATEL method using the maximum mean de-entropy algorithm to find critical services provided by a semiconductor intellectual property mall. Expert Syst. Appl. 36(6), 9891–9898 (2009)
26. Chen, F.-H., Hsu, T.-S., Tzeng, G.-H.: A balanced scorecard approach to establish a performance evaluation and relationship model for hot spring hotels based on a hybrid MCDM model combining DEMATEL and ANP. Int. J. Hosp. Manag. 30(4), 908–932 (2011)
27. Tzeng, G.-H., Chiang, C.-H., Li, C.-W.: Evaluating intertwined effects in e-learning programs: a novel hybrid MCDM model based on factor analysis and DEMATEL. Expert Syst. Appl. 32(4), 1028–1044 (2007)
28. Liou, J.J., Yen, L., Tzeng, G.-H.: Building an effective safety management system for airlines. J. Air Transp. Manag. 14(1), 20–26 (2008)

Operations Research Applications and Optimization

A Nurse Scheduling Case in a Turkish Hospital

Edanur Yasan, Tuğba Cesur, Tuba Nur Aslan, Rana Ezgi Köse, Aziz Kemal Konyalıoğlu(B), Tuğçe Beldek, and Ferhan Çebi

Management Engineering, Istanbul Technical University, Istanbul, Turkey
{koser19,konyalioglua,beldek,cebife}@itu.edu.tr

Abstract. Today, hospitals provide services around the clock. For this reason, there is a direct relationship between the psychological and physical well-being of the nurses working in a hospital and the quality of the service they provide. In order to provide the best service, nurses' shifts should be balanced and fair. In this study, a mathematical model is developed to assign the nurses working in a private hospital in Yalova to two predetermined shifts as equitably as possible. The model is solved with the mixed integer linear programming method using the GAMS optimization system.

Keywords: Nurse scheduling · Shift scheduling · Mixed integer linear programming

1 Introduction
In the service and production sectors, personnel needs must be planned systematically in order to ensure that work is carried out in a regular order and to ensure customer satisfaction. Hence, while the personnel needs are met, the workforce in the service and production sectors is scheduled. Personnel scheduling is generally performed to meet the workforce need of the sector in the best possible way. The working plans of personnel in the health sector are more critical than in other services, as their working environments and conditions involve serious psychological and physical difficulties. For this reason, most studies on personnel scheduling are carried out in the field of health.

One of the problems that is especially emphasized in the health sector is the nurse scheduling problem. Nurses work under very intense and difficult conditions in the care and treatment of patients, which are needed 24 h a day, 7 days a week in hospitals. For example, the inadequate number of nurses, the high number of patients per nurse, the social problems experienced by nurses, and negative working conditions make it difficult for nurses to fulfil their duties. Therefore, shift hours should be arranged so that nurses can work in a healthy and efficient manner.

In this study, it is aimed to determine and schedule the number of nurses needed in each shift. Day and night shifts are evaluated. In addition, a mathematical model created by considering the shifts and working days requested by some nurses is solved. While constructing the model, the mixed integer programming (MIP) method is used to assign nurses to two shift types. The remainder of this study is organized as follows: Sect. 2 discusses nurse scheduling, Sect. 3 reviews the literature, Sect. 4 introduces mixed integer programming, Sect. 5 presents the application of the developed mathematical model, and Sect. 6 gives the results of the obtained work plan.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 467–475, 2022. https://doi.org/10.1007/978-3-030-90421-0_39

2 A Nurse Scheduling Case in a Turkish Hospital
Hospitals provide uninterrupted service day and night. For this reason, negative situations may sometimes arise among hospital personnel, and especially nurses, due to the difficulty of the working hours. In addition, due to the inadequate number of nurses in many hospitals and the lack of nurses with sufficient experience, the necessary care and health service provided to patients may decline. For this reason, nurse scheduling studies are more important than those in other service sectors and feature more prominently in academic studies.

Nurses work in many different areas of health institutions, in the services of various polyclinics. The working conditions, duties, and responsibilities of nurses are determined by the relevant laws and regulations, but the work pattern differs from hospital to hospital: some hospitals work in 3 shifts while others work in 2 shifts. Similarly, nurses in one hospital may work both night and day shifts, while another hospital may have nurses who work only night or only day shifts. There are also full-time nurses. Because of an insufficient number of nurses or other reasons, nurses may work overtime. In this case, a nurse who leaves a sixteen-hour night shift returns to work without fully resting. Similarly, a nurse may have to work the night shift constantly, despite her reluctance. In order to minimize the negative impact of these working conditions, a better working schedule needs to be established.

Nurse schedules are usually prepared every week or every month. In many health institutions in Turkey, nurse scheduling is carried out manually by the specialist nurses or the responsible nurses of the department. In this case, it becomes difficult to create a schedule that will meet the need for nurses. As a result, it becomes difficult to create a quality nurse schedule, and accordingly the quality of service suffers. It is necessary to benefit from mathematical models in nurse scheduling problems in order to create a schedule that both saves time and provides quality service.

3 Literature Review
When studies from the past to the present are examined, many works on scheduling can be found, and the majority of them are carried out in the field of health. Studies using many different objectives and constraints have been examined in the literature. Some of the methods used to solve these scheduling problems are mathematical models, artificial intelligence, and heuristics.

A Nurse Scheduling Case in a Turkish Hospital

469

Miller et al. [17] built nurse schedules that balance staffing coverage and nurses' timing preferences, provided that the schedules meet certain feasibility constraints. Rosenbloom and Goertzen [21] presented an algorithm for scheduling hospital nurses that produces cyclical, optimal schedules; its advantage is that it can be easily implemented on a microcomputer. Chen and Yeung [8] developed a hybrid expert system called 'Nurse Help' to provide nurses with flexible and efficient scheduling. Inoue et al. [14] applied a genetic algorithm to the nurse scheduling problem, presenting a nurse scheduling support system that uses an interactive evolutionary algorithm. Dowsland and Thompson [10] developed a two-stage model for the nurse scheduling problem: in the first stage, they created the model by taking into account both the working and vacation days of the nurses during the two-week planning period; in the second stage, they made assignments to the determined shifts using heuristic algorithms. Rogers et al. [20] discussed the working hours of hospital nursing staff and aimed to minimize them. Aickelin and White [4] modeled a complex nurse scheduling problem with integer programming and an evolutionary algorithm. Trilling et al. [24] developed a mathematical model to solve the scheduling problem of anesthesiology nurses working in a French public hospital. Gutjahr and Rauner [12] applied the ant colony optimization approach; in addition to the working days, hours, working styles, and qualifications of the nurses, they created a mathematical model in line with the preferences of both the nurses and the hospital. Özdağoğlu et al. [18] handled and simulated patient data in the emergency room and organized the restructuring of the doctor and nurse task plans of emergency departments. Burke et al. [7] developed a multifunctional hybrid model using variable neighborhood search and integer programming methods together and applied it to nurses working in a hospital. Karaatlı and Güngör [15] proposed a solution to the nurse scheduling problem, developing a fuzzy multi-objective model to resolve the confusion in nurses' work schedules; the model determined the number of nurses to work in each shift and provided an optimal result. Bağ et al. [5] examined the nurse scheduling problem, using the 0–1 goal programming method to solve it and the analytic network process to determine the weights of the goals. Öztürkoğlu and Çalışkan [19] enabled nurses to create weekly schedules according to their own preferences. Karpuz [16] formulated the nurse scheduling problem as a two-stage stochastic integer program. Ünal [25] examined the personnel scheduling problem in a government institution in the service sector by using goal programming and the analytic hierarchy process together, taking into consideration the personnel requests as well as the legal and institutional objectives. Thongsanit et al. [23] made assignments to meet the required workdays by balancing nurse shifts and classifying nurses according to their experience. Agyei et al. [2] proposed a mathematical model for nurses working in a hospital in Ghana. Elomri et al. [11] determined the number of intern doctors who should work in the oncology and hematology departments of a hospital and created their work plans accordingly. Sulak and Bayhan [22] solved the nurse scheduling problem with the goal programming method. Cirit and Kaya [9] arranged the nurses' shifts at Özel Sakarya Hospital in Eskişehir. Varlı and Eren [26] carried out the shift assignments of nurses


E. Yasan et al.

working in the intensive care, emergency and operating room departments of a hospital in Kırıkkale. In this study, in addition to the problems discussed in the literature review, a mathematical model was created that gives the nurses in intensive care the right to choose both the shifts and the working days they want. Mixed integer programming (MIP) was used to solve the model.

4 Mixed Integer Programming

Problems in which some or all of the linear or nonlinear variables are restricted to discrete values are called "discrete optimization" or "integer optimization" problems. A model in which only some of the variables are required to be integer is called a mixed integer program (MIP). The solution method for mixed integer programming resembles that of integer programming in many ways: the optimal solution is first obtained by the simplex method with the integer requirement relaxed, and a Gomory cut is then formulated from the variable with the largest fractional value in that optimal solution. The general form of a mixed integer linear program is:

Z(max/min) = Σ_j c_j x_j

subject to

Σ_j a_ij x_j ≤ b_i,  i = 1, 2, ..., n

x_j ≥ 0, with x_j integer for a specified subset of the variables.
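As a concrete illustration of the general form above, the sketch below (our example, not from the paper; the data are invented) solves a tiny MIP with one integer and one continuous variable by enumerating the integer variable and optimizing the continuous part in closed form. Real MIP solvers use the simplex method with branching or Gomory cuts instead, but the toy instance keeps the integer/continuous split visible.

```python
# Toy MIP (invented data):
#   maximize  Z = 3*x1 + 2*x2
#   s.t.      2*x1 + x2 <= 10
#             x1 + 3*x2 <= 15
#   x1 integer, 0 <= x1 <= 5;  x2 >= 0 continuous
# For a fixed integer x1, Z grows with x2, so the best continuous
# choice is the largest x2 allowed by both constraints.

best = None
for x1 in range(6):                       # enumerate the integer variable
    x2 = min(10 - 2 * x1, (15 - x1) / 3)  # tightest upper bound on x2
    if x2 < 0:
        continue                          # infeasible for this x1
    z = 3 * x1 + 2 * x2
    if best is None or z > best[0]:
        best = (z, x1, x2)

z, x1, x2 = best
print(f"optimum: Z = {z} at x1 = {x1}, x2 = {x2}")
```

The optimum here is Z = 17 at x1 = 3, x2 = 4; enumeration is only viable because a single integer variable with a small range is involved.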

5 Problem Definition

In a private hospital in Yalova, the intensive care department consists of general, surgical, coronary and isolation intensive care services. In our study, the general intensive care service was examined. This section describes the general structure of the department and the constraints of the problem. In the hospital, nurses work in two shifts: nurses on the day shift work between 08:00 and 18:00, while those on the night shift work between 18:00 and 08:00. The scheduling of the nurses in the department is currently carried out by the nurse supervisor. However, with this manual scheduling, the nurses' requests and demands cannot be adequately accommodated, which causes dissatisfaction; in addition, the nurse supervisor spends a long time on the task. The rules to be applied in the shift schedule, determined in discussion with the nurse supervisor and the nurses, are as follows:

• Nurses cannot work two night shifts in a row,

A Nurse Scheduling Case in a Turkish Hospital

• Day and night shifts cannot be combined,
• At least 4 nurses must work on each day and night shift,
• Nurses must have at least 8 days off during the month,
• Working more than 5 consecutive days is not allowed.
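The scheduling rules above can be checked mechanically. The sketch below (our helper, not the paper's code) validates a candidate monthly schedule against them; the limits are exposed as parameters so the illustrative example can use smaller numbers than the hospital's 4-nurse/8-day/5-day rules.

```python
def violations(schedule, min_per_shift=4, min_days_off=8, max_consecutive=5):
    """Check a month's schedule against the unit's rules.

    schedule: {nurse: [s_1, ..., s_D]} with 0 = off, 1 = day, 2 = night.
    One value per day, so the rule that day and night shifts cannot be
    combined is implicit in the encoding. Returns a list of violations
    (empty means feasible). Illustrative helper, not the paper's code.
    """
    problems = []
    days = len(next(iter(schedule.values())))
    for nurse, shifts in schedule.items():
        for j in range(days - 1):           # no two night shifts in a row
            if shifts[j] == 2 and shifts[j + 1] == 2:
                problems.append(f"nurse {nurse}: nights on days {j + 1} and {j + 2}")
        if shifts.count(0) < min_days_off:  # enough days off in the month
            problems.append(f"nurse {nurse}: fewer than {min_days_off} days off")
        run = 0                             # limit consecutive working days
        for s in shifts:
            run = run + 1 if s != 0 else 0
            if run > max_consecutive:
                problems.append(f"nurse {nurse}: over {max_consecutive} consecutive days")
                break
    for j in range(days):                   # coverage on every shift
        for shift, name in ((1, "day"), (2, "night")):
            if sum(sch[j] == shift for sch in schedule.values()) < min_per_shift:
                problems.append(f"day {j + 1}: under {min_per_shift} nurses on {name} shift")
    return problems

# Two nurses over six days with relaxed limits (invented data)
ok = {"A": [1, 1, 1, 0, 2, 0], "B": [2, 0, 2, 1, 1, 0]}
print(violations(ok, min_per_shift=0, min_days_off=2, max_consecutive=3))  # []
```

A checker like this is useful both for validating a supervisor's manual roster and for verifying any schedule returned by an optimization model.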

In this study, a scheduling model is developed that takes into account the nurses' special requests regarding their working arrangements. For this purpose, the nurses in the team were interviewed to obtain the days on which they wanted to be off during the month and the shift in which they preferred to work. In the reviewed articles, nurses' day-off requests are considered, but shift requests are not taken into consideration. In addition to the rules above, the model is established with attention to the seniority of the nurses. The aim of this study is to prepare the schedule according to the demands of the nurses in a short time.

6 Methodology

In this section, the MIP (Mixed Integer Programming) model that we propose for the scheduling problem is introduced. MIP models are among the basic mathematical methods used in operations research.

Notation:

Indices
• i index for nurses, i = {1, 2, ..., 12}.
• j index for days, j = {1, 2, ..., 30}.
• t index for shift types, t = {1: day shift, 2: night shift}.

Variables

x_ijt = 1, if nurse i is assigned shift type t on day j; 0, otherwise.

Parameters
• c_i: penalty points in case of non-compliance with nurse i's demands, depending on her seniority.
• u_ij: table containing the days the nurses want to be off.
• g_it: table containing the shift types the nurses do not want to work.

There is only one decision variable in the study; it shows which nurse works which shift on which day. The tables containing the nurses' requested days off and unwanted shifts are essential for the continuity of the scheduling: each month, the requests are collected through these tables and the model is re-run, so the schedule can be prepared in a short time. The objective is to minimize the penalty points incurred by scheduling nurses on requested days off and on unwanted shifts:

Z_min = Σ_i Σ_j Σ_t x_ijt · u_ij · c_i + Σ_i Σ_j Σ_t x_ijt · g_it


Constraints:

x_ij2 + x_i(j+1)2 ≤ 1   ∀i, ∀j   (1)

x_ij1 + x_ij2 ≤ 1   ∀i, ∀j   (2)

x_ij2 + x_i(j+1)1 ≤ 1   ∀i, ∀j   (3)

Σ_i x_ij1 ≥ 4   ∀j   (4)

Σ_i x_ij2 ≥ 4   ∀j   (5)

Σ_j Σ_t x_ijt ≤ 22   ∀i   (6)

Σ_t (x_ijt + x_i(j+1)t + x_i(j+2)t + x_i(j+3)t + x_i(j+4)t + x_i(j+5)t) ≤ 5   ∀i, ∀j   (7)

x_ijt ∈ {0, 1}   ∀i, ∀j, ∀t   (8)

The intensive care unit of the hospital operates in two shifts, night and day. Constraint (1) ensures that night shifts are not assigned consecutively; in line with the information collected from the nurses, this restriction was added to the hospital management rules. It is also undesirable for a nurse to combine day and night shifts, since this lowers motivation and increases the margin of error: Constraint (2) prevents a nurse from taking the night shift after the day shift on the same day, and Constraint (3) prevents a nurse from taking the day shift immediately after a night shift. Unlike the other departments of the hospital, the intensive care department has the same workload during the day and at night, and the hospital management requires a certain number of nurses to be present during both shifts. Constraint (4) ensures that there are at least 4 nurses on the day shift, and Constraint (5) ensures at least 4 nurses on the night shift. Constraint (6) limits each nurse to a maximum of 22 working days per month, so that at least 8 days of regular leave remain for rest and private life. These leave dates are also requested to be distributed regularly over the month for the morale and motivation of the nurses; therefore, Constraint (7) ensures that no nurse is required to work more than 5 consecutive days. The aim of this model is the satisfaction of the nurses as well as scheduling the workforce effectively and optimally. Thus, each nurse was asked about the days and shifts on which they did not want to work. Since hospital management wants to give priority to nurses with a high seniority level, a penalty point is incurred for assigning the days and shifts that the nurses did not want, weighted by the rank of the personnel. The objective function minimizes this penalty score and thereby promotes nurse satisfaction.
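Before solving the full 12-nurse, 30-day model in GAMS, its structure can be sanity-checked on a scaled-down instance. The sketch below (ours, with invented preference data) enumerates all feasible schedules of a 3-nurse, 3-day version of the model: coverage is relaxed to at least 1 nurse per shift, the days-off minimum to 1 day, and the 5-consecutive-day rule is vacuous over 3 days and omitted.

```python
from itertools import product

# Scaled-down instance of the model above: 3 nurses, 3 days, 2 shift types,
# solved by exhaustive search instead of GAMS. All data are invented.
DAYS, OFF, DAY, NIGHT = 3, 0, 1, 2
c = [3, 2, 1]                          # seniority penalty weight per nurse
u = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # u[i][j] = 1: nurse i wants day j off
g = [[0, 1], [0, 0], [1, 0]]           # g[i][t-1] = 1: nurse i dislikes shift t

def pattern_ok(p):
    """Per-nurse rules: no night->night, no night->day, >= 1 day off."""
    for a, b in zip(p, p[1:]):
        if a == NIGHT and b in (NIGHT, DAY):
            return False
    return p.count(OFF) >= 1

def penalty(i, p):
    """Nurse i's contribution to the objective for working pattern p."""
    return sum(u[i][j] * c[i] + g[i][p[j] - 1]
               for j in range(DAYS) if p[j] != OFF)

patterns = [p for p in product((OFF, DAY, NIGHT), repeat=DAYS) if pattern_ok(p)]
best = min(
    (sum(penalty(i, ps[i]) for i in range(3)), ps)
    for ps in product(patterns, repeat=3)
    if all(any(ps[i][j] == s for i in range(3))        # relaxed coverage:
           for j in range(DAYS) for s in (DAY, NIGHT)) # 1 per shift per day
)
print("minimum total penalty:", best[0])
```

For this data the minimum penalty is 1: nurse 3 (the most junior) cannot take two non-consecutive night shifts on her two working days, so one disliked day shift is unavoidable. Such enumeration only works at toy scale; the real instance has 12 × 30 × 2 binary variables and needs a MIP solver.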


7 Results

The results obtained by running the model are shown in Table 1. After the mathematical model was created, the scheduling problem was solved by transferring the model to GAMS. The objective function of the model minimizes the penalty score for not meeting the nurses' demands; running the model yields a score of 173.

Table 1. Results of the scheduling model

Nurse | Day shifts | Night shifts
1 | 3, 4, 5, 6, 9, 10, 11, 12, 15, 16, 17, 18, 21, 22, 23, 24, 27, 28, 29, 30 | 1, 7, 13, 19, 25
2 | 1, 2, 3, 6, 7, 8, 9, 12, 13, 16, 17, 18, 19, 22, 23, 26, 27, 28, 29, 30 | 4, 10, 14, 20, 24
3 | 19 | 1, 3, 5, 7, 9, 11, 13, 15, 17, 20, 22, 24, 26, 28, 30
4 | 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30 | —
5 | 1, 4, 5, 8, 9, 12, 13, 14, 15, 22, 27, 28 | 2, 6, 10, 16, 18, 20, 23, 25, 29
6 | 1, 2, 7, 10, 11, 14, 17, 20, 21, 22, 23 | 3, 5, 8, 12, 15, 18, 24, 26, 28, 30
7 | 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29 | —
8 | 2, 3, 4, 5, 8, 9, 10, 13, 14, 15, 16, 19, 20, 23, 24, 25, 26, 29, 30 | 6, 11, 17, 21, 27
9 | 6, 7, 10, 15, 20, 21, 24, 25, 28 | 1, 3, 4, 8, 11, 13, 16, 18, 22, 26, 29
10 | 1, 2, 5, 6, 7, 8, 11, 12, 13, 16, 17, 18, 21, 24, 25, 26, 29, 30 | 3, 9, 14, 19, 22, 27
11 | 4, 11, 18, 19, 20, 25, 26, 27 | 2, 5, 7, 9, 12, 14, 16, 21, 23, 28, 30
12 | 14 | 2, 4, 6, 8, 10, 12, 15, 17, 19, 21, 23, 25, 27, 29

8 Conclusions

Nurse scheduling is a problem that hospitals encounter frequently in daily operations. Nurses work under very intense and difficult conditions in the care and treatment of patients. Usually, nurse rosters are prepared manually every week or every month by the specialist nurses or the responsible nurses of the department. In order to manage time effectively, minimize the effort required for scheduling, and limit the negative impact of these working conditions, a better working schedule needs to be established.


For this reason, considerable attention has been devoted to the problem, and various approaches have been developed in the literature that take the priorities of nurses into account while positioning the workforce correctly and effectively. In this study, nurses working in a private hospital in Yalova were assigned to shifts by evenly distributing the necessary workforce, while their shift and working-day requests were taken into consideration. With these results, it is aimed to increase the satisfaction of the nurses and to obtain better efficiency on the days they work. The work was carried out by taking into account the two shifts determined per day and the employee requests. The model was solved in GAMS. Based on the results, the penalty score was minimized and a schedule was generated. With the help of this model, the nurses' requests were met and manual work was eliminated. In summary, this study supports nurse scheduling that takes both hospital needs and nurses' priorities into consideration, and contributes to a literature containing many different approaches.

References

1. Neumann, J., Morgenstern, O.: Theory of Games and Economic Behaviour. Princeton University Press, Princeton (1943)
2. Agyei, W., Denteh, W.O., Andaam, E.A.: Modeling nurse scheduling problem using 0–1 goal programming: a case study of Tafo government hospital, Kumasi-Ghana. Int. J. Sci. Technol. Res. 3, 5–10 (2015)
3. Aickelin, U., Dowsland, K.A.: An indirect genetic algorithm for a nurse-scheduling problem. Comput. Oper. Res. 31(5), 761–778 (2004)
4. Aickelin, U., White, P.: Building better nurse scheduling algorithms. Ann. Oper. Res. 128(1–4), 159–177 (2004)
5. Bağ, N., Özdemir, N.M., Eren, T.: 0–1 Hedef Programlama ve ANP Yöntemi ile Hemşire Çizelgeleme Problemi Çözümü. Int. J. Eng. Res. Dev. 4(1), 2–6 (2012)
6. Bakır, M.A., Altunkaynak, B.: Tamsayılı Programlama: Teori, Modeller ve Algoritmalar. Nobel Yayın (2003)
7. Burke, E.K., Li, J.P., Qu, R.: A hybrid model of integer programming and variable neighbourhood search for highly-constrained nurse rostering problems. Eur. J. Oper. Res. 203, 484–493 (2010)
8. Chen, J.G., Yeung, T.W.: Hybrid expert-system approach to nurse scheduling. Computers (1992)
9. Cirit, E., Kaya, K.: Eskişehir Özel Sakarya Hastanesi'nde Hemşire Vardiya Çizelgeleme Problemi Çözümü. Yüksek Lisans Tezi, Eskişehir (2016)
10. Dowsland, K.A., Thompson, J.M.: Solving a nurse scheduling problem with knapsacks, networks and tabu search. J. Oper. Res. Soc. 51(7), 825–833 (2000)
11. Elomri, A., Elthlatiny, S., Mohamed, Z.S.: A goal programming model for fairly scheduling medicine residents. Int. J. Supply Chain Manag. 4 (2015)
12. Gutjahr, W.J., Rauner, M.S.: An ACO algorithm for a dynamic regional nurse-scheduling problem in Austria. Comput. Oper. Res. 34(3), 642–666 (2007)
13. Halaç, O.: Kantitatif Karar Verme Teknikleri (Yöneylem Araştırması). Evrim Dağıtım, İstanbul (1991)
14. Inoue, T., Furuhashi, T., Fuji, M., Maeda, H.: Development of nurse scheduling support system using interactive EA. In: Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC'99), vol. 5, p. 533 (1999)


15. Karaatlı, M., Güngör, İ.: Hemşire Çizelgeleme Sorununa Bir Çözüm Önerisi ve Bir Uygulama. Alanya İşletme Fakültesi Dergisi 2(1), 22–52 (2010)
16. Karpuz, E.: Nurse Scheduling and Rescheduling Problem Under Uncertainty. Yüksek Lisans Tezi, Middle East Technical University, Ankara (2015)
17. Miller, H.E., Pierskalla, W.P., Rath, G.J.: Nurse scheduling using mathematical programming. Oper. Res. 24(5), 857–870 (1976)
18. Özdağoğlu, A., Yalçınkaya, Ö., Özdağoğlu, G.: Ege Bölgesi'ndeki Bir Araştırma ve Uygulama Hastanesinin Acil Hasta Verilerinin Simüle Edilerek Analizi. İstanbul Ticaret Üniversitesi (2009)
19. Öztürkoğlu, Y., Çalışkan, F.: Hemşire Çizelgelemesinde Esnek Vardiya Planlaması ve Hastane Uygulaması. Dokuz Eylül Üniversitesi Sosyal Bilimler Enstitüsü Dergisi 16(1), 115–133 (2014)
20. Rogers, A.E., Hwang, W.T., Scott, L.D., Aiken, L.: The working hours of hospital staff nurses and patient safety. Health Aff. 23(4), 202–212 (2004)
21. Rosenbloom, E.S., Goertzen, N.F.: Cyclic nurse scheduling. Eur. J. Oper. Res. 31(1), 19–23 (1987)
22. Sulak, H., Bayhan, M.: A model suggestion and an application for nurse scheduling problem. J. Res. Bus. Econ. Manage. 5(5), 755–760 (2016)
23. Thongsanit, K., Kantangkul, K., Nithimethirot, T.: Nurse's shift balancing in nurse scheduling problem. Silpakorn U. Sci. Tech. J. 10, 43–48 (2015)
24. Trilling, L., Guinet, A., Le Magny, D.: Nurse scheduling using integer linear programming and constraint programming. IFAC Proceedings Volumes 39(3), 671–676 (2006)
25. Ünal, F.M.: Analitik Hiyerarşi Prosesi ve Hedef Programlama ile Nöbet Çizelgeleme Probleminin Çözümü. Yüksek Lisans Tezi, Kırıkkale Üniversitesi Fen Bilimleri Enstitüsü, Kırıkkale (2015)
26. Varlı, E., Eren, T.: Hemşire çizelgeleme problemi ve bir hastanede uygulama. APJES 5(1), 34–40 (2017)

An Analytical Approach to Machine Layout Design at a High-Pressure Die Casting Manufacturer

Oğuz Emir and Tülin Aktin

Department of Industrial Engineering, İstanbul Kültür University, İstanbul, Turkey
{o.emir,t.aktin}@iku.edu.tr

Abstract. Şahin Metal was founded in 1975 on a 7500 m² area in İstanbul to supply high-pressure aluminum casting parts to a variety of industries, especially automotive manufacturers. 64 different products are produced with various routings through 14 workstations in the plant. Being a Tier 2 company, these products are then sent to Tier 1 firms and finally reach the leading vehicle manufacturers. Nowadays, increasing competition, changing customer demands, and quality targets bring the necessity of restructuring internal processes. In this regard, Şahin Metal plans to rearrange the existing machine layout to minimize the distance traveled between departments by taking the material flow into account. This study aims to determine an efficient machine layout design by implementing analytical approaches. The study started with visits to the production facility and meetings with the company's engineers to determine the project roadmap. Following that, the data collection process was initiated, and an ABC analysis was performed to define product classes. After this identification, two approaches are utilized simultaneously: the Hollier method is employed to find a logical machine arrangement, and a mathematical model based on the Quadratic Assignment Problem (QAP) is developed to obtain the optimum machine layout. The developed integer nonlinear model is solved by CONOPT using GAMS software under various scenarios. Finally, these results are compared with the existing system, and a convenient layout design is proposed to the company.

Keywords: Facilities design and planning · Machine layout design · Nonlinear programming · Quadratic assignment problem · Hollier method

1 Introduction

Manufacturing systems that convert raw materials into finished products have always been a cornerstone of the global economy. Therefore, the manufacturing system of a company is expected to be both time- and cost-efficient amid intensifying competition. Companies are also increasingly required to ensure high levels of operational efficiency, flexibility, and responsiveness in today's market environment. These challenges can be addressed to some extent by designing appropriate facility layouts [17]. Modern manufacturing plants focus on determining machine order and arrangement on the facility floor [5]. The machine layout problem involves arranging a given number of machines at a given number of sites. In general, these problems aim to minimize the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 476–488, 2022. https://doi.org/10.1007/978-3-030-90421-0_40


total distance traveled and backtracking movements by designing a feasible machine layout for the production line; such decisions prevent companies from incurring extra costs [1]. Layout decisions are important in establishing how materials, operations, and processes flow through the production system [17]. Reference [9] emphasizes that material handling and layout-related costs correspond to 20 to 50% of total operating expenses in manufacturing. Hence, developing a practical layout is a significant effort that enhances throughput, reduces costs, and saves time [5]. Facility layout problems share similarities with the machine layout problem: many algorithms are designed for one of these specific problems, and some can be applied to both. For facility layout problems, several models have been developed by researchers. Reference [2] classifies these models as quadratic assignment problems (QAP), quadratic set covering problems, integer programming models, mixed-integer programming models (MIP), and graph-theoretic models. In this paper, the QAP, introduced by Koopmans and Beckmann in 1957, is used to model the layout problem [15]. Traditionally, this model is one of the most frequently used formulations and is known to be NP-complete [11]. The model's objective function is a second-degree polynomial function of the variables, and the constraints are identical to those of the assignment problem [9]. Since the first appearance of the QAP, several researchers have used the model, and some have attempted to linearize the QAP using other approaches [4, 6]. Heuristic and meta-heuristic algorithms have also been developed to solve larger QAP instances arising in machine layout problems, owing to the problem's combinatorial nature. For instance, reference [7] uses several heuristic algorithms derived from the cutting plane procedure to reach the exact solution and reduce the computational effort required when solving the QAP.
Reference [8] proposes an ant colony algorithm to solve the inter-cell layout problem, which is formulated as a quadratic assignment model; the developed algorithm shows better performance than other facility layout algorithms. Reference [14] presents a novel heuristic method, validated in a discrete event simulation model, to minimize the total distance that products travel between the 140 machines of a furniture factory. Reference [3] develops a flexible and user-friendly software tool which uses two heuristic approaches, namely simulated annealing and a genetic algorithm, to help layout planners. Şahin Metal, a family-owned business, was founded in 1975 in İstanbul and recently moved to its 7500 m² modern facility to supply high-pressure aluminum casting parts to various industries, especially automotive manufacturers. The company has grown its business volume and staff over the years and aims to become one of the sought-after companies in the sector with its technology investments and business plans. The company is gradually specializing in supplying automotive sub-industry needs, delivering to the leading automotive manufacturers in Europe. The plant's current casting production capacity is 3000 tons/year, and 64 different products are produced following various routings through the 14 workstations of the plant. Being a Tier 2 company, the majority of its production is exported to Tier 1 firms. Production at Şahin Metal starts by melting the raw materials in melting pots and then transferring them to the die casting benches. Following that, the other internal die casting processes are carried out respectively. The plant comprises various workstations: casting, trim, calibration, boring, vibration, sandblasting, quality control, CNC processing, industrial or ultrasonic washing, dyeing, sealing control, and final control. The company is capable of manufacturing and maintaining all the molds and apparatus needed in its fully equipped molding area.
In addition, the firm can provide a wide range of services with its advanced machining and assembly lines in accordance


with customer demands. Şahin Metal is aware that competition is becoming more challenging worldwide. In this regard, they plan to rearrange the existing machine layout to reduce layout-related costs. This study aims to find the machine layout that ensures the best placement for Şahin Metal by implementing analytical approaches under some restrictions. Minimizing the percentage of backtracking movements and minimizing the total flow distance are the most frequently used objectives in traditional machine layout problems [16]. Thus, an analytical layout design procedure that combines the Hollier method and the QAP model is proposed for A-class products to find the optimal machine sequencing and layout. The remainder of the paper is structured as follows: Sect. 2 presents the study's methodology by clarifying the methods that are performed; in this section, the ABC analysis, the Hollier method, and the developed integer nonlinear mathematical models are described. In Sect. 3, the results obtained from each method are illustrated, and the proposed layout improvements are compared. At the end of the paper, concluding remarks and improvement suggestions are shared with researchers.

2 Methodology

The methodology flow followed in this study is illustrated below in Fig. 1. Initially, data collection is carried out by visiting the production facility regularly and organizing short meetings with the project team. Then, an ABC analysis is performed using the products' selling prices and annual demand values to define product classes. Products with the highest sales revenues, labeled as A-class, are used for further analysis. After the product classification, the Hollier method is implemented to find a logical machine arrangement, and a mathematical model is developed simultaneously. Results obtained from alternative scenarios are compared with the current machine layout, and a suitable design is proposed to the company.

2.1 Product Classification Using ABC Analysis

ABC analysis is an inventory categorization technique that is widely preferred and traditionally used to support decision-making in the production environment, operations management, and stock management. This technique classifies items as A, B, and C classes based on their perceived importance and highlights the items with the highest sales value so that resources can be allocated accordingly [13]. Literature shows that researchers follow diverse calculation steps, but the following steps are pursued in this study:

1. List the sales prices and annual demand quantities of all existing products.
2. Calculate the annual sales value of each item by multiplying its sales price and annual demand quantity.
3. Sort the calculated sales values of the items from largest to smallest.
4. Calculate the cumulative sales value by summing up the sales values of all items, and determine the percentage value of each item relative to the total sales value.
5. Determine the percentage of the number of inventory items.
6. Define the ABC classes of items based on their percentage in total sales value and in the number of inventory items, as shown in Table 1.

An Analytical Approach to Machine Layout Design

Fig. 1. Methodology flow of the study

Table 1. ABC classification

Class | % of number of inventory items (% of parts) | % in total sales value (% of monetary value)
A | 10–25 | 70–80
B | 25–50 | 15–20
C | 50–60 | 5–10
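The six-step procedure above can be sketched in a few lines. The following is our illustrative implementation, not the authors' code: the function name is ours, the data are invented, and since Table 1 gives class boundaries as ranges rather than exact values, the 80%/95% cumulative cut-offs below are one reasonable instantiation.

```python
def abc_classify(items, a_cut=0.80, b_cut=0.95):
    """items: {name: (unit_price, annual_demand)} -> {name: 'A'|'B'|'C'}.

    Rank products by annual sales value; items within the first a_cut of
    cumulative value are class A, within b_cut class B, the rest class C.
    """
    value = {name: price * demand for name, (price, demand) in items.items()}
    total = sum(value.values())
    ranked = sorted(value, key=value.get, reverse=True)  # largest value first
    classes, cum = {}, 0.0
    for name in ranked:
        cum += value[name] / total                       # cumulative share
        classes[name] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes

# Invented example data: (unit price, annual demand)
demo = {"P1": (75, 100), "P2": (12, 100), "P3": (9, 100), "P4": (4, 100)}
print(abc_classify(demo))
# {'P1': 'A', 'P2': 'B', 'P3': 'C', 'P4': 'C'}
```

In the paper's case, only the products classified as A by such a procedure are carried into the Hollier and QAP analyses.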


2.2 Machine Sequencing Using Hollier Method

In 1963, Hollier proposed two heuristic machine sequencing algorithms (Hollier 1 and 2) to minimize backtracking flows and maximize in-sequence flows in a production line [10]. Hollier method 1 is used in this paper, and the steps of this approach are summarized below:

1. The "From-To" chart is formed using the part routing data. The data in the chart display the total number of part movements between machines or workstations in the cell.
2. After developing the "From-To" chart, "From" and "To" sums are calculated for each machine.
3. Machines are arranged based on their minimum "From" or "To" sums. If the minimum number is found in the "To" sums row, the machine is placed at the beginning of the sequence. If the minimum number is found in the "From" sums column, the machine is placed at the end of the sequence.
4. After each machine is selected, the "From-To" chart is redesigned by removing the row and column corresponding to the selected machine, and the "From" and "To" sums are recalculated.
5. Steps 3 and 4 are repeated until all machines are sequenced.

This method also involves some tie-breaking rules, which are shared in Fig. 2.

Fig. 2. Tie breaking rules of Hollier method 1
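The five steps above can be sketched as follows. This is our illustrative implementation, not the authors' code, and the tie-breaking is simplified to "first machine found" rather than the full rule set of Fig. 2; the example data are invented.

```python
def hollier1(chart, machines):
    """Hollier method 1: order machines from a from-to chart.

    chart[i][j] = moves from machine i to machine j (missing entries = 0).
    Machines with a small "To" sum go to the front of the sequence,
    machines with a small "From" sum go to the back; sums are recomputed
    over the remaining machines after each placement.
    """
    remaining = list(machines)
    front, back = [], []
    while remaining:
        froms = {m: sum(chart.get(m, {}).get(n, 0) for n in remaining if n != m)
                 for m in remaining}
        tos = {m: sum(chart.get(n, {}).get(m, 0) for n in remaining if n != m)
               for m in remaining}
        m_from = min(remaining, key=froms.get)
        m_to = min(remaining, key=tos.get)
        if tos[m_to] <= froms[m_from]:
            front.append(m_to)       # small "To" sum: place at the front
            remaining.remove(m_to)
        else:
            back.insert(0, m_from)   # small "From" sum: place at the back
            remaining.remove(m_from)
    return front + back

# Small invented example: flow mostly 1 -> 2 -> 3 -> 4 with one by-pass
chart = {1: {2: 10, 3: 2}, 2: {3: 8}, 3: {4: 6}}
print(hollier1(chart, [1, 2, 3, 4]))  # [1, 2, 3, 4]
```

Removing a placed machine's row and column (step 4) is handled implicitly here by restricting the sums to the machines still in `remaining`.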

2.3 Machine Layout Optimization via Mathematical Model

The literature shows that the Quadratic Assignment Problem (QAP) model is commonly used for layout problems [9, 11, 12]. Therefore, an integer nonlinear mathematical model is developed based on the QAP to minimize the overall weighted distance traveled between departments. Flow quantities between departments are included in the objective function as weights. The indices, parameters, and decision variables of the proposed model are defined below:


Indices:
i, k = 1, ..., 14 (departments)
j, l = 1, ..., 14 (locations)

Parameters:
rs_i = required area for department i (square meters)
as_j = available area at location j (square meters)
DNEW_ik = new distance between departments i and k (meters)
SUMD = total new distance (meters)
d_jl = distance matrix between locations (meters)
f_ik = flow matrix between departments (units)

Decision variable:
X_ij = 1, if department i is assigned to location j; 0, otherwise

Base Model with Area Restrictions (SMW_0):

min Z = (1/2) Σ_{i=1..14} Σ_{j=1..14} Σ_{k≠i} Σ_{l≠j} f_ik · d_jl · X_ij · X_kl   (1)

Subject to:

Σ_{j=1..14} X_ij = 1,  ∀i   (2)

Σ_{i=1..14} X_ij = 1,  ∀j   (3)

rs_i · X_ij ≤ as_j,  ∀i, j   (4)

X_ij ∈ {0, 1},  ∀i, j   (5)

The objective function (1) of the developed model minimizes the total weighted distance. Constraint (2) states that each department must be assigned to exactly one location, and constraint (3) guarantees that each location holds only one department. Constraint (4) ensures that the available area at each location is not exceeded during the assignment. Finally, constraint (5) defines the X_ij as binary variables.
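For intuition, the model can be solved exactly on a tiny instance by trying every department-to-location assignment. The sketch below is ours (invented 3-department data), not the paper's GAMS code, and factorial enumeration is only usable at toy scale; the 14-department case requires an NLP solver such as CONOPT.

```python
from itertools import permutations

def qap_bruteforce(f, d, rs, avail):
    """Exhaustive search over assignments for a small QAP instance.

    f[i][k]: flow between departments i and k; d[j][l]: distance between
    locations j and l; rs[i]: area required by department i; avail[j]:
    area available at location j. Returns (objective, assignment) where
    assignment[i] is the location of department i.
    """
    n = len(f)
    best = (float("inf"), None)
    for perm in permutations(range(n)):
        if any(rs[i] > avail[perm[i]] for i in range(n)):
            continue  # violates the area restriction, constraint (4)
        z = 0.5 * sum(f[i][k] * d[perm[i]][perm[k]]
                      for i in range(n) for k in range(n) if i != k)
        best = min(best, (z, perm))
    return best

# Invented 3-department example: heavy flow between departments 0 and 1,
# locations laid out on a line (distances 1 between neighbors)
f = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
d = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(qap_bruteforce(f, d, rs=[1, 1, 1], avail=[1, 1, 1]))
# (12.0, (0, 1, 2))  -- the heavy 0-1 pair lands on adjacent locations
```

Tightening the area data (e.g. rs = [2, 1, 1] with avail = [1, 1, 2]) forces department 0 onto the only location large enough, illustrating how constraint (4) prunes assignments.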

3 Implementation and Results

This section covers the implementation of the developed methodology. The project was launched to reduce the distance traveled between departments by restructuring the


existing machine layout of the plant. For this purpose, regular factory visits and data collection steps were realized initially. During the observations, it is important to understand the system as a whole, considering the material and process flows in the plant layout. Figure 3 shows the current machine layout and the sequence of each department in the production facility; the machine sequence is numbered to the right of each department name in the figure.

Fig. 3. Current machine layout of the plant (sequence: 1 Raw material, 2 Casting, 3 Trim, 4 Calibration, 5 Boring, 6 Vibration, 7 Sandblasting, 8 Quality control, 9 CNC processing, 10 Ultrasonic washing, 11 Industrial washing, 12 Dyeing, 13 Sealing control, 14 Final control)

Basically, it is wise to concentrate on the most important products when designing an efficient machine layout. Therefore, an ABC analysis, a simple inventory categorization technique, is performed to identify the most crucial stock keeping units (SKU). To this end, the product portfolio, annual demand quantities, and sales price of each product were requested from Şahin Metal. The data interval was specified as 2020, and all necessary data were gathered accordingly to obtain consistent results throughout the project. The results of the ABC analysis indicate that A-class products correspond to 75% of total sales and 25% of items. There are 16 products in this class, with a total sales value of approximately 80 million Euros. Table 2 demonstrates the ABC classification results, containing the number of products and the sales value for each class. Due to their critical importance for the company, only A-class products are considered in the further analysis.

Table 2. ABC classification results

Class | Number of products | % of number of inventory items (% of parts) | Total sales value (€) | % in total sales value (% of €)
A | 16 | 25 | 79,294,559 | 75
B | 16 | 25 | 21,264,589 | 20
C | 32 | 50 | 5,135,647 | 5


After the product classification, a simple yet effective method developed by Hollier is employed to arrange a logical machine sequence that maximizes the proportion of in-sequence moves and minimizes backtracking moves. This method uses the "From" and "To" sums of flow for each machine in the cell; the "From-To" chart developed for this study can be observed in Fig. 4. The data contained in the chart show the number of part movements between machines in the cell.

Fig. 4. From-to chart. The nonzero flows (number of part movements, from → to) are: 1→2: 5,396; 2→3: 5,396; 3→4: 999; 3→5: 411; 3→6: 3,369; 3→7: 617; 4→5: 591; 4→6: 408; 5→6: 1,002; 6→8: 432; 6→9: 777; 6→10: 275; 6→11: 1,884; 6→14: 1,410; 7→14: 617; 8→9: 216; 8→10: 216; 9→10: 216; 9→11: 633; 9→13: 145; 10→12: 491; 10→14: 216; 11→13: 296; 11→14: 2,221; 12→14: 491; 13→14: 440. All other entries are zero; the grand total is 29,165 movements.

This method intends to place the machines in a logical order based on From/To ratios. Also, the developed algorithm has its own tie-breaking rules and consists of several steps. In line with the steps mentioned in the methodology part, a logical machine sequence is obtained. Figure 5 indicates the flow diagram of the Hollier 1 method. The material flow between machines can be categorized into three groups. The first group is called “in-sequence moves” and occurs when material flow is between two adjacent machines. The second group is called “backtracking moves” and occurs when material movement is in the reverse direction. The last group is called “by-passing moves” and occurs when machines are not adjacent. After determining the material flows, the percentage of movements is computed for each group respectively. For instance, the percentage of in-sequence movement is computed by summing all flows representing in-sequence moves, divided by the total number of flows.

Fig. 5. Flow diagram of Hollier 1

484

O. Emir and T. Aktin

Table 3 displays the machine sequence and the percentages of in-sequence and backtracking moves. The distribution of flows is as follows: 48.88% in-sequence moves and 0% backtracking moves (the remaining flows are by-passing moves). These percentages indicate that the machine order obtained with the Hollier method is suitable. Based on this result, it is decided to maintain the existing machine sequence in the plant.

Table 3. Percentage of in-sequence and backtracking moves (Hollier method)

Machine sequence                                In-sequence %   Backtracking %
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14   48.88           0
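The ratio-based ordering itself can be sketched as follows. This is a generic textbook variant of Hollier's procedure (order machines by descending From/To ratio, breaking ties by the larger From sum); the paper's exact tie-breaking rules are not reproduced here:

```python
import math

def hollier_sequence(flow):
    """Order machines by descending From/To ratio (textbook Hollier variant).

    flow[i][j] = flow from machine i to machine j (0-indexed).  Machines
    with a zero "To" sum (pure sources) get an infinite ratio and are
    placed first; ties are broken by the larger "From" sum.
    """
    n = len(flow)
    frm = [sum(flow[i]) for i in range(n)]                       # flow leaving
    to = [sum(flow[i][j] for i in range(n)) for j in range(n)]   # flow entering
    def key(m):
        ratio = math.inf if to[m] == 0 else frm[m] / to[m]
        return (-ratio, -frm[m])
    return sorted(range(n), key=key)
```

For a purely serial flow (1 feeds 2, 2 feeds 3), this returns the natural order, matching the validated sequence in Table 3.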

In parallel with the Hollier method, a QAP model presented in the methodology is developed to obtain an efficient machine layout. Due to the integer nonlinear structure of this model, a suitable solver is required. GAMS has many different solvers that employ nonlinear programming (NLP) algorithms, and CONOPT is one of the solvers designed for large-scale NLP problems. In this study, the developed model is solved by CONOPT under various scenarios. GAMS 35.1.0 with its solver CONOPT3 (version 3.17K) has provided solutions in less than one second. For the parameters, the technical drawing of the facility layout plan is acquired as an AutoCAD file from Şahin Metal. With the technical drawing provided, the total available area at the locations (in square meters), the distances between the stations (in meters), the required area for each department (in square meters) and, finally, the distance matrix are measured. For the flow matrix, production records of 2020 are considered. Since machine layout design is a costly procedure, human expertise is also required to approve the solution provided by the mathematical model in obtaining an implementable machine layout [12]. Thus, expert opinion is taken at every stage of the study. Taking the expert opinions into consideration, it is decided to fix the locations of some departments. In this direction, four different scenarios are generated to evaluate the alternative layout plans. The total weighted distance of the current layout is measured as 392,915.5 m. The first scenario is created by fixing the raw material and casting departments, which are the first two processes in the production line. When the proposed model is run for scenario 1, the objective function value is found as 359,577 m, an 8% reduction in the total weighted distance. The second scenario is constructed by fixing the trim and boring departments, which are located side by side in the current layout.
The total weighted distance of the second scenario is determined as 231,645 m, a 41% reduction. Similarly, the third scenario is designed by fixing the casting and calibration departments, which are adjacent in the current layout. The total weighted distance of the third scenario is computed as 245,136 m, a 38% reduction. The last scenario is created by fixing the casting and trim departments in line with the experts' opinions. The total weighted distance of the fourth scenario is found as 229,630 m, a 42% reduction. The detailed comparison results are summarized in Table 4, including the model codes, characteristics, and the obtained total weighted distance results of each scenario.
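The total weighted distance that these scenarios compare is the QAP objective, the sum over department pairs of flow times the distance between their assigned locations. Below is a minimal sketch, with a brute-force search that mimics fixing selected departments; it is illustrative only (the actual model also handles area restrictions and is solved with GAMS/CONOPT):

```python
from itertools import permutations

def total_weighted_distance(flow, dist, assign):
    """QAP objective: flow between departments i and j, weighted by the
    distance between their assigned locations assign[i] and assign[j]."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def best_assignment(flow, dist, fixed=None):
    """Exhaustively evaluate assignments (tiny instances only), honouring
    fixed department -> location pairs, as in the fixed-department scenarios."""
    fixed = dict(fixed or {})
    n = len(flow)
    free_deps = [d for d in range(n) if d not in fixed]
    free_locs = [l for l in range(n) if l not in fixed.values()]
    best_cost, best_assign = None, None
    for perm in permutations(free_locs):
        assign = list(range(n))
        for d, l in fixed.items():
            assign[d] = l
        for d, l in zip(free_deps, perm):
            assign[d] = l
        cost = total_weighted_distance(flow, dist, assign)
        if best_cost is None or cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign
```

Enumeration is only viable for a handful of departments; the QAP is NP-hard, which is why a solver (or heuristics) is needed at realistic sizes.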

An Analytical Approach to Machine Layout Design

485

Table 4. Comparison of current layout and scenario results

Model name       Model code   Characteristic                                                Objective function                 Total weighted distance (m)
Current Layout   -            Model with area restrictions                                  Minimize total weighted distance   392,915.5
Scenario 1       ESMW_1       Expanded model, fixing raw material and casting departments   Minimize total weighted distance   359,577
Scenario 2       ESMW_2       Expanded model, fixing trim and boring departments            Minimize total weighted distance   231,645
Scenario 3       ESMW_3       Expanded model, fixing casting and calibration departments    Minimize total weighted distance   245,136
Scenario 4       ESMW_4       Expanded model, fixing casting and trim departments           Minimize total weighted distance   229,630

Based on the comparison table above, scenario 4 provides the best result for Şahin Metal, with a minimum total weighted distance of 229,630 m. Figure 6 shows the resulting machine layout of this scenario.

(Block layout diagram of scenario 4, ESMW_4, showing the locations of RAW_MATERIAL_1, CASTING_2, TRIM_3, CALIBRATION_4, BORING_5, VIBRATION_6, SANDBLASTING_7, QUALITY_CONTROL_8, CNC_PROCESSING_9, ULTRASONIC_WASHING_10, INDUSTRIAL_WASHING_11, DYEING_12, SEALING_CONTROL_13 and FINAL_CONTROL_14.)

Fig. 6. Layout of scenario 4


As shown in Table 4, scenario 2 gives the second-best result, with a total weighted distance of 231,645 m. These results are shared with the company's engineers for further discussion and analysis. The evaluations reveal that the layout obtained in scenario 2 can be realized at a lower cost than that of scenario 4: while seven departments need to be relocated in scenario 4, only four departments will undergo relocation in scenario 2, and these four departments simply exchange locations with each other. Since the total weighted distance values of the second and fourth scenarios are very close, this difference is judged negligible when the relocation cost of the machines is considered. Consequently, all project team members have agreed on rearranging the machine layout as suggested by scenario 2, which provides a 41% reduction in the total weighted distance compared to the current layout. Figure 7 shows scenario 2, which is selected for layout implementation.

(Block layout diagram of scenario 2, ESMW_2, showing the locations of RAW_MATERIAL_1, CASTING_2, TRIM_3, CALIBRATION_4, BORING_5, VIBRATION_6, SANDBLASTING_7, QUALITY_CONTROL_8, CNC_PROCESSING_9, ULTRASONIC_WASHING_10, INDUSTRIAL_WASHING_11, DYEING_12, SEALING_CONTROL_13 and FINAL_CONTROL_14.)

Fig. 7. Layout of scenario 2

4 Conclusions

Nowadays, increasing competition, changing customer demands and quality targets force companies' manufacturing systems to become more flexible and responsive. For this reason, enterprises increasingly strive to use production resources more efficiently, reduce manufacturing costs and pursue continuous improvement to gain a place in the market. The arrangement of the machines in the plant area is widely addressed as a machine layout problem, and it is generally accepted that layout decisions have a significant impact on manufacturing costs. A well-designed plant layout enhances throughput, contributes to the efficiency of operations, saves time, and reduces operating expenses. This study handles a machine layout problem in a company operating in İstanbul, Turkey. Şahin Metal, a family-owned business, supplies high-pressure aluminum die casting parts to a variety of industries. As a Tier 2 company, it specializes in the automotive sector, manufacturing sub-industry parts for Tier 1 companies and leading vehicle manufacturers in Europe. The manufacturer plans to restructure its internal processes to become one of the preferred companies in the sector


where the competition is getting heated. In this regard, it considers rearranging the existing machine layout to minimize the distance traveled between departments. The main objective of this study is to determine an efficient machine layout design by implementing analytical approaches under certain restrictions. The study begins by observing material and process flows and organizing short meetings to determine the project's scope. Then, the data collection process is initiated to apply analytical approaches in the subsequent phases. ABC analysis is implemented as the first approach to identify the most significant SKUs in the plant. As a result of this analysis, 16 products, corresponding to 75% of total sales and 25% of the items, are labeled as A-class, with a total sales value of approximately 80 million Euros. After defining the A-class items for Şahin Metal, two approaches are employed simultaneously. The Hollier method results validate the existing machine sequence, with the distribution of flows computed as 48.88% in-sequence moves and 0% backtracking moves. In parallel with the Hollier method, a mathematical model based on the QAP is developed. The model is solved by GAMS software with the CONOPT solver under alternative scenarios. After evaluating these scenarios with the project team, and considering the relocation costs that would emerge, the second-best scenario, which reduces the total weighted distance of the existing system by 41%, is proposed as the new machine layout. For future studies, alternative objective functions can be implemented to observe their effects on the layout design. Note that the proposed model does not take costs into account explicitly. Hence, the objective function of the model can be changed so as to minimize the total relocation cost. Another option is to use material handling costs instead of flow values as coefficients in the objective function.
Furthermore, the mathematical model can be enhanced and also supported by heuristic algorithms.

Acknowledgment. This study was carried out as a part of the graduation project of undergraduate industrial engineering students. The researchers express their gratitude to Simge Nur Taşkın, Kübra Özen, İsmet Dönmez and Muratcan Özçelik for their support. The researchers would also like to thank Şahin Metal managers and engineers for providing the necessary data, and for their continuous collaboration throughout the course of the project.

References 1. Diponegoro, A., Sarker, B.R.: Machine assignment in a nonlinear multi-product flowline. J. Oper. Res. Soc. 54, 472–489 (2003) 2. Kusiak, A., Heragu, S.S.: The facility layout problem. Eur. J. Oper. Res. 29, 229–251 (1987) 3. Balakrishnan, J., Cheng, C., Wong, K.: FACOPT: a user friendly facility layout optimization system. Comput. Oper. Res. 30, 1625–1641 (2003) 4. Kaufman, L., Broeckx, F.: An algorithm for the quadratic assignment problem using Benders’ decomposition. Eur. J. Oper. Res. 2(3), 207–211 (1978) 5. Hassan, M.: Machine layout problem in modern manufacturing facilities. Int. J. Prod. Res. 32, 2559–2584 (1994) 6. Bazaraa, M.S., Sherali, H.D.: Benders’ partitioning scheme applied to a new formulation of the quadratic assignment problem. Naval Res. Logistics 27, 29–41 (1980) 7. Bazaraa, M.S., Sherali, H.D.: On the use of exact and heuristic cutting plane methods for the quadratic assignment problem. J. Oper. Res. Soc. 33, 991–1003 (1982)


8. Solimanpur, M., Vrat, P., Shankar, R.: Ant colony optimization algorithm to the inter-cell layout problem in cellular manufacturing. Eur. J. Oper. Res. 157, 592–606 (2003) 9. Kouvelis, P., Chiang, W., Fitzsimmons, J.A.: Simulated annealing for machine layout problems in the presence of zoning constraints. Eur. J. Oper. Res. 57(2), 203–223 (1992) 10. Hollier, R.H.: The layout of multi-product lines. Int. J. Prod. Res. 2(1), 47–57 (1963) 11. Sahni, S., Gonzalez, T.: P-complete approximation problem. JACM 23(3), 555–565 (1976) 12. Heragu, S.S., Kusiak, A.: Machine layout: an optimization and knowledge-based approach. Int. J. Prod. Res. 28(4), 615–635 (1990) 13. Kampen, T.J., Akkerman, R., Donk, D.P.: SKU classification: a literature review and conceptual framework. Int. J. Oper. Prod. Manage. 32, 850–876 (2012) 14. Kanduc, T., Rodic, B.: Optimisation of machine layout using a force generated graph algorithm and simulated annealing. Int. J. Simul. Modelling 15, 275–287 (2016) 15. Koopmans, T., Beckmann, M.: Assignment problems and the location of economic activities. Econometrica 25, 53–76 (1957) 16. Ho, Y., Moodie, C.L.: Machine layout with a linear single-row flow path in an automated manufacturing system. J. Manuf. Syst. 17, 1–22 (1998) 17. İç, Y.T., et al.: Application of cellular manufacturing and simulation approaches for performance improvement in an aerospace company's manufacturing activities. In: International Conference on Applied Human Factors and Ergonomics (AHFE 2018), vol. 793, Springer, Cham (2019). doi: https://doi.org/10.1007/978-3-319-94196-7_47

Hybrid Approaches to Vehicle Routing Problem in Daily Newspaper Distribution Planning: A Real Case Study

Gizem Deniz Cömert, Uğur Yıldız, Tuncay Özcan, and Hatice Camgöz Akdağ

Management Engineering Department, Istanbul Technical University, Istanbul, Turkey
{denizgi,yildizugu,tozcan,camgozakdag}@itu.edu.tr

Abstract. This paper presents a case study that addresses a vehicle routing problem to improve the distribution services of a newspaper distributor operating in Istanbul, Turkey. For newspaper distribution companies, whose newspaper distribution problem (NDP) has become even more costly with the rise of electronic newspaper readership, the priorities are to improve the distribution activity, reduce the number of personnel, reduce the cost incurred in the distribution process, and provide service with the fewest possible vehicles. The purpose of this study is to examine the capacitated vehicle routing problem (VRP) with heterogeneous vehicles and to evaluate the results through case analysis. K-means was used for clustering the customers, and simulated annealing (SA) and particle swarm optimization (PSO) based metaheuristic approaches were proposed to solve this real-world problem. Finally, the performance of the proposed approaches was evaluated. The numerical results showed that the proposed SA-based approach with k-means performed better than PSO and can find suitable solutions within reasonable computation time for large-scale problems. Keywords: Newspaper distribution problem (NDP) · Vehicle routing problem · K-means algorithm · Particle swarm optimization · Simulated annealing

1 Introduction

Nowadays, companies need to optimize their supply chains, and supply chain management is therefore very important for their sustainability. In particular, a problem in the supply chain of perishable goods such as newspapers, which have a one-day life, will cause the newspaper not to reach the reader and its value to turn into scrap [1]. For this reason, an effective distribution system is one of the most critical factors for newspaper companies. Logistics is one of the most important areas of supply chain management, and the cost of delivery often accounts for a large part of total logistics costs; besides cost, customer satisfaction is also an important part of logistics activities. Against this background, Vehicle Routing Problems (VRP) were first introduced by Dantzig and Ramser in 1959

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 489–499, 2022. https://doi.org/10.1007/978-3-030-90421-0_41

490

G. D. Cömert et al.

[2]. The VRP is an NP-hard optimization problem that basically aims at finding the set of routes minimizing the total cost or total travel time. Although many different VRP variants have been derived over the years, the best known are the Capacitated Vehicle Routing Problem (CVRP), the Vehicle Routing Problem with Time Windows (VRPTW), the Multiple Depot Vehicle Routing Problem (MDVRP), the Vehicle Routing Problem with Pick-up and Delivery (VRPPD) and the Vehicle Routing Problem with Backhauls (VRPB) [3]. In this study, we consider the daily newspaper distribution problem of a newspaper distribution company in Turkey. The firm distributes newspapers to 413 newsagents on a daily basis. Due to the large size and high cost of the distribution network used, it is difficult to achieve the target service level. Our aim in this article is to provide a perspective on problems with a large number of customers and high distribution costs through the solution approaches we propose. Routes are examined to determine which would minimize the number of vehicles used in the distribution system and, simultaneously, the total distance travelled from the depot to the newsagents. A classic integer programming model is proposed to determine the routes of the vehicles in the distribution process; however, due to the large number of customers, k-means clustering was first used to group customers that are close to each other, and in the second phase each cluster was treated as an independent travelling salesman problem (TSP) and solved with metaheuristic methods. The remainder of this paper is as follows. In Sect. 2, previous research on related studies is reviewed. The problem structure and the mathematical model are constructed in Sect. 3. Computational results obtained from the model are discussed in Sect. 4, and concluding remarks are made in the last section.

2 Literature Review

The history of the Vehicle Routing Problem (VRP) starts when Dantzig and Ramser set out an algorithmic approach to the problem of gasoline delivery to service stations in the late 1950s [4]. After their benchmark work, interest in the VRP increased significantly, and today the problem attracts researchers and practitioners from many different disciplines. The goal of the VRP is to find the optimal routes for the vehicles serving the customers while minimizing the overall transportation expenses. The classical VRP route always starts and ends at the warehouse, and all customers must be served. The cost can be reduced either by reducing the number of vehicles required to complete the routes or by reducing the total transportation distance [5]. After Dantzig and Ramser published their first work on the VRP in 1959, many researchers came up with different variations of the VRP and with algorithms to meet the necessities of real-world problems. The newspaper delivery problem is described in the literature as the "newspaper distribution problem (NDP)", and the majority of NDP formulations build on the Vehicle Routing Problem (VRP) [6]. In this section, only the literature related to the NDP is presented within the scope of the study. The NDP deals with the newspaper delivery service between the distribution warehouse and dealers. The general aim of the problem is to find the most suitable routes for each vehicle to provide newspaper delivery service to all customers in line with capacity, time

Hybrid Approaches to Vehicle Routing Problem in Daily NDP

491

and demand constraints. Although the objective in the majority of the studies is either to minimize the total travel/delivery time [6, 8, 9] or the distance [1], there are studies in the literature with different objective functions, such as minimizing the number of trucks/staff used [8, 10], including a recycling policy [11], or minimizing cost and delays [12]. In the literature, the newspaper delivery problem has been solved with different methods, such as exact methods and heuristic and metaheuristic algorithms, depending on the size and complexity of the problem. Integer programming [1] and dynamic programming [7] are examples of exact methods, which are useful when the problem size is small. Heuristic algorithms used for solving the NDP in the literature are the savings algorithm [9] and the sweep algorithm [10]; metaheuristics include simulated annealing [8, 12], tabu search [13, 14], genetic algorithms [6, 15] and the firefly algorithm [11].

3 Methodology In this section, the methodology (Fig. 1) and suggested algorithms followed in solving the problem are detailed.

Fig. 1. Methodology of the newspaper delivering problem.


3.1 Data Collection and Preparation

At this step, the addresses of 413 customers were acquired from the company's database. The obtained addresses were converted into GPS coordinates using the Google Maps application, and the depot coordinates were added to the data pool. The converted latitude and longitude coordinates of the mentioned locations are shown in Fig. 2.

Fig. 2. Coordinates of the customer locations and depot.

In this study, the newspaper company had 8 trucks with heterogeneous capacities for the distribution of newspapers. In addition, customers' demands are known and heterogeneous.

3.2 K-Means Clustering Process

In 1967, MacQueen introduced the k-means algorithm to the literature [16]. Since then, k-means has become the most commonly used clustering algorithm because of its simplicity [17], and it is known as one of the simplest unsupervised learning algorithms for clustering purposes. The algorithm easily partitions a data set into several clusters and aims to define a centroid for each of the k clusters. The centroids should be placed carefully, since different placements can give different results; placing the centroids far away from each other tends to give better results. The first step is to take the data points and assign each of them to the closest centroid, which gives an early draft. After finishing the assignment step, k new centroids are re-calculated as the centers of the clusters resulting from the previous step, and the data points are re-assigned to these new centroids. This process continues until the k centroids stop


moving. In the end, the algorithm minimizes an objective function, in this case a squared-error function:

\[
W(S, C) = \sum_{k=1}^{K} \sum_{i \in S_k} \left\| y_i - c_k \right\|^2 \tag{1}
\]

where S is a K-cluster partition of the entity set represented by vectors y_i (i ∈ I) in the M-dimensional feature space, consisting of non-empty, non-overlapping clusters S_k, each with a centroid c_k (k = 1, 2, …, K) [18]. Kaufman and Rousseeuw [19] introduced the silhouette width, a coefficient with good experimental results. This concept uses the difference between within-cluster tightness and separation from the other clusters. A value close to zero means the entity could be assigned to another cluster, while width values close to 1 mean the set is clustered correctly. The largest average silhouette width, over different K, indicates the best number of clusters [18].

Table 1. Silhouette index values

Number of groups   Feasibility    Silhouette index
3                  Not feasible   0.575
4                  Not feasible   0.524
5                  Feasible       0.449
6                  Feasible       0.439
7                  Feasible       0.386
8                  Feasible       0.392

In this study, the k-means silhouette index was used on the data set to identify the 5 clusters shown in Fig. 3, which is also the optimum number of routes and vehicles. According to Table 1, the silhouette index gave better results for 3 and 4 clusters, but those clusters exceeded the vehicle capacities. Therefore, five out of the eight vehicles were used on the 5 clustered routes, illustrated in Fig. 3. While the routes created have total demands that do not exceed the vehicle capacities, the number of customers on the routes is unbalanced. As a result of the k-means algorithm, 81, 90, 79, 103 and 60 customers were assigned to the 5 clusters respectively, and the total loads of the routes were calculated as 1069.5, 1020, 755.5, 1178 and 553 kg.
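A minimal sketch of this clustering step: Lloyd's k-means plus the average silhouette width used to compare values of k. This is an illustrative NumPy implementation, not the tool used in the study, and the random initialization scheme is an assumption:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid, then
    move each centroid to the mean of its cluster, until stable."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

def mean_silhouette(X, labels):
    """Average silhouette width s(i) = (b - a) / max(a, b), where a is the
    mean within-cluster distance and b the mean distance to the nearest
    other cluster."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same].sum() / max(same.sum() - 1, 1)   # excludes D[i, i] = 0
        b = min(D[i, labels == c].mean()
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Running `kmeans` for each candidate k and comparing `mean_silhouette` values reproduces the selection logic behind Table 1, with the capacity check applied afterwards.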


Fig. 3. K-means clustering algorithm customer’s groups

3.3 Proposed Model and Assumptions

In this specific case, the mathematical formulation of the Vehicle Routing Problem for the newspaper delivery problem can be summarized as follows. Let G = (V, A) be a network in which V = {0, 1, …, n} is the vertex set and A ⊆ V × V is the arc set. Vertex 0 denotes the depot, and V \ {0} the locations other than the depot in the route network. The parameter c_ij represents a cost (travel distance in this problem) between vertices i and j. The parameters K, C_k and d_i are the number of vehicles, the capacity of vehicle k and the demand of customer i, respectively. The binary variable x_ijk indicates whether vehicle k travels from vertex i to vertex j: it takes the value 1 if vehicle k visits customer j directly after customer i, and 0 otherwise. In addition, the binary variable y_ik takes the value 1 if vertex i is visited by vehicle k in the optimal solution, and 0 otherwise. The formulation is as follows:

\[
\min \sum_{(i,j) \in A} c_{ij} \sum_{k=1}^{K} x_{ijk} \tag{2}
\]

subject to

\[
\sum_{k=1}^{K} y_{ik} = 1 \quad \forall i \in V \setminus \{0\} \tag{3}
\]
\[
\sum_{j \in V \setminus \{i\}} x_{ijk} = \sum_{j \in V \setminus \{i\}} x_{jik} = y_{ik} \quad \forall i \in V,\ k = 1, \dots, K \tag{4}
\]
\[
\sum_{k=1}^{K} y_{0k} = K \tag{5}
\]
\[
u_{ik} - u_{jk} + C_k x_{ijk} \le C_k - d_j \quad \forall i, j \in V \setminus \{0\},\ i \ne j,\ k = 1, \dots, K \tag{6}
\]
\[
d_i \le u_{ik} \le C_k \quad \forall i \in V \setminus \{0\},\ k = 1, \dots, K
\]
\[
x_{ijk} \in \{0, 1\} \quad \forall (i,j) \in A,\ k = 1, \dots, K
\]
\[
y_{ik} \in \{0, 1\} \quad \forall i \in V,\ k = 1, \dots, K
\]

Equation 2 represents the objective function of the problem, which minimizes the total travel distance of the trucks when visiting the customers. Constraint 3 ensures that each customer is visited exactly once. Constraints 4–5 ensure that the same vehicle enters and leaves a given customer vertex, and that K vehicles leave the depot. Constraint 6 is the sub-tour elimination constraint, which also incorporates the capacity restrictions, as proposed by Miller, Tucker and Zemlin (MTZ).

3.4 Particle Swarm Optimization

PSO is a population-based search method proposed by Kennedy and Eberhart (1995), inspired by the behaviour of natural group organisms (e.g., swarms of bees, fish and birds) [20]. The underlying logic of PSO is that a randomly generated swarm of particles is taken as the initial solution space. After calculating the fitness value of each particle, the best value for the swarm and the best value for each particle are recorded. In the first iteration, each particle's initial position is its best solution. The velocities and positions of the particles are then updated using Eqs. 7 and 8. The fitness values are calculated by substituting the new positions into the cost function, and the best position for each particle and the best position for the swarm are updated and recorded.

\[
V_{i+1} = w_i V_i + C_1 r_1 (pbest - x_i) + C_2 r_2 (gbest - x_i) \tag{7}
\]
\[
x_{i+1} = x_i + V_{i+1} \tag{8}
\]

In Eq. 7, C1 and C2 are constants, while r1 and r2 are random numbers in the range [0, 1], and the velocity vector Vi varies in the range [−vmax, +vmax]. The variable pbest stores the best position of the particle, and gbest stores the best position of the swarm. The parameter values of the particle swarm optimization algorithm used in this study are given in Table 2.

Table 2. The parameter values of the particle swarm optimization algorithm

Parameter                      Value
Maximum number of iterations   3000
Population size                300
C1, C2                         2
W                              0.99
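A minimal continuous-PSO sketch following the update rules of Eqs. 7 and 8, with the parameter values of Table 2 as defaults. The velocity clamp (vmax), the search bounds and the demo objective are assumptions for illustration; the study applies PSO to routing permutations, which requires an encoding not shown here:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.99, c1=2.0, c2=2.0,
        bounds=(-5.0, 5.0), vmax=1.0, seed=0):
    """Minimize f over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (7)
        v = np.clip(v, -vmax, vmax)   # velocity clamping (an assumption here)
        x = x + v                                                  # Eq. (8)
        cost = np.array([f(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]     # update pbest
        gbest = pbest[pcost.argmin()].copy()                       # update gbest
    return gbest, float(pcost.min())
```

With the high inertia weight of Table 2, the clamp on the velocity is what keeps the swarm from diverging; smaller inertia values would converge without it.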


3.5 Simulated Annealing Algorithm

The simulated annealing algorithm was first introduced by Kirkpatrick et al. in 1983 for solving difficult combinatorial optimization problems. The main idea of simulated annealing is to sometimes accept solutions that do not improve the objective, in order not to get stuck in local minima; it is an algorithm with a probabilistic approach [21]. While the temperature is high, different and new answers are explored in the solution space; as the algorithm cools, it tries to improve the answers already discovered. The probability of accepting a worse solution is given by the Boltzmann function, e^(−δ/T), where δ is the difference in fitness values between the new solution and the existing solution, and T is a control parameter called the current temperature. When the temperature is high, the probability of accepting such solutions is high; when the temperature is low, the probability is low, but not zero. The basic steps of a standard simulated annealing algorithm are as follows.

Step 1. Randomly generate an initial solution and assign it as the best solution S; set the iteration index t to 0.
Step 2. Set an initial temperature value TB and assign the current temperature T0 = TB.
Step 3. From the current solution, randomly generate a neighbouring solution S1 ∈ N(S).
Step 4. Calculate the difference between the objective function values of the generated solution S1 and the solution S: δ = C(S1) − C(S).
Step 5. If S1 is better than S (δ < 0), assign solution S1 to solution S. If S1 is worse than S, use the Boltzmann function: if e^(−δ/T) > p at the current temperature Tt (p is a randomly generated number between 0 and 1), S1 becomes the new solution; otherwise, keep S as the current solution.
Step 6. Change the temperature according to T = β · T (β is the cooling factor).
Step 7. If the stopping criterion is met, stop the search; otherwise increase the iteration index t by one and go to Step 3.

The parameter values of the SA algorithm used in this study are given in Table 3.

Table 3. The parameter values of the simulated annealing algorithm

Parameter                          Value
Maximum number of iterations       3000
Maximum number of sub-iterations   150
Initial temperature (T)            250
Temperature cooling factor (β)     0.99
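The steps above can be sketched for one cluster treated as a TSP, using the parameter values of Table 3 as defaults. The segment-reversal (2-opt style) neighbourhood and the depot-at-index-0 convention are assumptions for illustration, not the authors' implementation:

```python
import math
import random

def route_length(route, dist):
    tour = [0] + route + [0]               # depot is node 0
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def simulated_annealing(dist, t0=250.0, beta=0.99, iters=3000, sub=150, seed=0):
    rng = random.Random(seed)
    route = list(range(1, len(dist)))      # customers 1..n-1
    rng.shuffle(route)                     # Step 1: random initial solution
    cost = route_length(route, dist)
    best, best_cost = route[:], cost
    t = t0                                 # Step 2: initial temperature
    for _ in range(iters):
        for _ in range(sub):
            # Step 3: neighbour by reversing a random segment
            i, j = sorted(rng.sample(range(len(route)), 2))
            cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
            delta = route_length(cand, dist) - cost        # Step 4
            # Step 5: accept improvements, or worse moves with Boltzmann prob.
            if delta < 0 or math.exp(-delta / t) > rng.random():
                route, cost = cand, cost + delta
                if cost < best_cost:
                    best, best_cost = route[:], cost
        t *= beta                          # Step 6: cooling
    return best, best_cost                 # Step 7: stop after iters
```

In the study, each of the five k-means clusters is solved as an independent TSP in this fashion.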


4 Computational Results

In this section, we present the results obtained after applying the particle swarm optimization and simulated annealing algorithms, and compare them with the current system. The suggested algorithms were run on a laptop with a Microsoft Windows 10 operating system, an Intel Core i7-10750H processor and 8 GB of RAM. The instance consists of 413 nodes with demand values ranging from 2 to 144 kg, and 8 vehicles with heterogeneous capacities. The k-means algorithm clustered the data set into five routes, which is why five out of the eight vehicles were used. After clustering, each route was modelled as a travelling salesman problem instead of a vehicle routing problem and solved with the metaheuristic approaches. Initially, the model was run with IBM CPLEX, but the run had to be stopped after a certain period of time because the problem size was large, the solution took too long, and valid solutions could not be reached. By relaxing the binary variables to continuous ones (LP relaxation), the lower bound for 5 vehicles was calculated as 330 km. Then, the model was run with two different metaheuristic algorithms, simulated annealing (SA) and particle swarm optimization (PSO). Each route was solved 10 times with both algorithms, and the fitness values and elapsed times, as well as the mean, min, max and standard deviation of the fitness values, were recorded and compared in Table 4 below.

Table 4. Statistical results of the calculations

             Particle swarm optimization              Simulated annealing
             R1      R2      R3      R4      R5       R1      R2      R3      R4      R5
Mean         110.83  247.66  187.33  216.46  102.92   71.38   177.75  128.97  149.67  77.53
Min          95.42   230.49  171.74  205.93  93.32    67.84   167.04  123.36  139.06  72.50
Max          117.88  265.55  196.59  236.36  110.43   74.01   190.13  135.93  164.31  85.56
Std. Dev.    6.42    10.37   8.45    10.14   5.74     1.76    7.14    3.85    8.66    4.05
Calc. Time   199.27  194.10  196.47  207.44  191.47   207.35  195.83  204.19  203.07  202.10

According to the results, both algorithms found good solutions and improved on the existing situation. However, simulated annealing gives a much better solution, reducing the total distance even further. With both algorithms, three vehicles can be saved. When the SA was run with different parameter values, the results improved further, with the fitness value reaching down to 473 km for the five routes in total.


5 Conclusion

This paper presents a real-world case study of the Capacitated Vehicle Routing Problem and two different solution approaches to it. We used different algorithms to solve the exact same problem and compared their results to conclude which method serves this problem's needs better and finds a better solution. As an exact approach, branch and bound was selected and run with IBM CPLEX, but since the problem size was too large and complex for this algorithm type, the program was not able to find a valid solution. As a second approach, we examined metaheuristic solutions and solved the same problem with a Simulated Annealing algorithm and a Particle Swarm Optimization algorithm coded in MATLAB R2020b. As a result of the calculations, the SA algorithm was determined to give much faster and better results than the PSO algorithm. Even when the parameters of the PSO algorithm were changed and the number of iterations increased to 10,000, the improved PSO result was still found to be 30% costlier than the SA solution. In conclusion, we can say that the k-means based Simulated Annealing approach gave better results for this CVRP. Our research did not include time windows; as future research, the Capacitated Vehicle Routing Problem with Time Windows can be examined for the current problem.

Acknowledgment. The authors would like to express their sincere gratitude to Prof. Dr. Şule Itır Satoğlu, who gave valuable ideas that inspired this paper.

Appendix-I

Hybrid Approaches to Vehicle Routing Problem in Daily NDP

499


Monthly Logistics Planning for a Trading Company
İlayda Ülkü(B), Huthayfah Kullab, Talha Al Mulqi, Ali Almsadi, and Raed Abousaleh
Industrial Engineering Department, Istanbul Kultur University, Istanbul, Turkey
[email protected], {1600005207,1600004420,1600005108,1700000847}@stu.iku.edu.tr

Abstract. In this study, a transportation problem is considered in order to minimize the total traveling distance between nodes, where the nodes represent hospitals and the warehouse. The data were obtained from Salem Trading Company (STC), which operates in various medical sectors including urology, gynecology, endoscopy, surgery, and other operations. STC has two types of vehicles: a big vehicle used for machines, which carries up to 4 machines, and a small vehicle used for delivering masks and gloves, with a capacity of up to 200 boxes. To serve the whole market, the company currently follows a traditional logistics plan to ship products from the main warehouse to hospitals located in Qatar. Two stages are proposed to solve the transportation problem. In the first stage, an assignment problem is solved to determine the unused capacity of each vehicle. The output of the first stage is then used as input to the second stage, in which a Vehicle Routing Problem (VRP) yields the optimal routes at minimum cost. The required data were obtained from the company: the number of hospitals, the demand of each hospital, the vehicle capacities, the distances between the hospitals and the warehouse, vehicle loading and unloading times, and product volumes. This study focuses on finding a low-cost weekly logistics plan for hospitals in Qatar. Two mathematical models are developed and solved with GAMS software. As a result, using the assignment and VRP models, the unused capacity is reduced from 38 to 18 pallets and the cost from 2470 TL to 1170 TL; since fuel is inexpensive in Qatar, the time restriction is also emphasized. Keywords: Mixed integer programming · Cutting stock problem · Assignment problem · Vehicle Routing Problem · Logistic planning

1 Introduction For organizations that want to compete in today's competitive environment, product distribution is one of the essential decision areas in logistics. In this context, companies' distribution networks need

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 500–509, 2022. https://doi.org/10.1007/978-3-030-90421-0_42


to be designed in the most optimal way. One of the leading route optimization problems is the vehicle routing problem. Basically, the VRP can be used efficiently for companies' logistics problems and schedules. One article discusses how vehicles can be refueled at a low price even if they must visit a distant station [1]. Another interesting topic is optimizing electric vehicle routing problems with mixed backhauls and recharging strategies in a multi-dimensional representation network, which is affected by three factors: time-dependent pickup and delivery requests, limited recharging station capacity, and the remaining battery capacity of the electric vehicles; the main aim of that article is to minimize the total travel cost of the shipments [2]. Another VRP article presents a new mathematical model for a multi-depot, multi-compartment problem that first minimizes the number of vehicles and then the total traversed distance [3]. For the routing problem with occasional drivers, a multi-start heuristic has been designed and implemented that produces solutions with small errors compared with optimal solutions obtained by solving an integer programming formulation with a commercial solver [4]. With the growing use of the VRP, the environmental concern related to economic activity has also been transferred to the field of transport and logistics in recent decades; environmental targets must therefore be added to economic targets in decision making to find the right balance between the two dimensions. In real life, many situations are recognized as multi-objective problems, containing multiple criteria that must be met or considered [5]. One of the most effective ways for logistics companies to increase their core competitiveness is to reduce time, and this is where the VRP comes in.
One article provides an integer programming model with the lowest overall cost for the multi-depot vehicle routing problem under a time-varying road network [6]. The vehicle routing problem can be seen as a generalization of the Traveling Salesman Problem: it is the problem of finding the lowest-cost delivery routes from a depot to a set of geographically dispersed clients. Many studies based on the VRP, however, do not present very realistic applications [7]. This study concerns the distribution and logistics plans of Salem Trading Company, which operates in various medical sectors including urology, gynecology, endoscopy, surgery, and other operations, and serves 10 different locations, all in Qatar. The study focuses on the hospitals and the warehouse located in Qatar, and all of STC's logistics lines in Qatar are included. Nowadays, STC struggles with an order delivery problem; therefore, in this study the delivery problem is solved with the help of mathematical programming. An assignment problem and a Vehicle Routing Problem (VRP) are used to determine the optimal weekly plan, the unused capacity, and the optimal routes for the STC hospitals in Qatar. The assignment problem is a useful type of linear programming model that allocates several resources to activities on a one-to-one basis, taking time, profit, and cost into account. The VRP is an integer programming model that finds the optimal set of routes for capacitated vehicles delivering products to the hospitals: a set of optimal routes for identical fixed-capacity vehicles from the main warehouse to the hospitals that satisfies the demands.


2 Problem Definition Salem Trading Company was established in 2007 by Mohammed Zouher Almsadi. STC operates in various medical sectors including urology, gynecology, endoscopy, surgery, and other operations. The company has one warehouse, located in Doha, and 10 different hospitals to serve; this study examines these 10 hospitals, all located in Qatar. The STC warehouse is responsible for deliveries that satisfy the hospitals' demands. Delivery time is between 8.00 a.m. and 1.00 p.m., and the delivery days are determined according to demand. The warehouse satisfies the weekly demand of each hospital, because daily delivery would increase the delivery cost. STC uses 4 cars for delivery, and all the cars return to the warehouse when they finish their deliveries. The capacity of each car is 4 pallets. In this study, using data on weekly demand, vehicle capacity, and the distances between the central warehouse and the hospitals, the optimal route for the weekly logistics plan is obtained. To find the optimal route, an integer programming model is implemented and solved with GAMS. Table 1 below shows the company's current weekly logistics plan.

Table 1. Current weekly logistics plan of STC — hospitals visited on each day, Monday through Sunday: Turkish Hospital, Al Emadi Hospital, Doha Clinics, Hamad General Hospital, Ambulatory Care Hospital, Al Ahli Hospital, Al Wakrah Hospital, Hazm Mubaireek Hospital, Al Khor Hospital, and Cuban Hospital.

3 Methodology The problem is solved in two stages, as shown in Fig. 1. In the first stage, an assignment problem model is used to obtain the vehicle assignments by minimizing


unused capacity. The results of the assignment problem tell us how many vehicles will be used each day for delivery and where these vehicles will go. In the second stage, a VRP solution approach is applied to minimize cost and time. The outputs of the assignment problem are used as inputs of the VRP, whose solution gives the optimal vehicle routes. Through these two stages, the weekly delivery schedule is determined for the STC hospitals in Doha.

Fig. 1. Modelling of the methodology
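The first stage described above — assigning every node to exactly one vehicle while minimizing total unused capacity — can be sketched for a toy instance by exhaustive search. The hospital names and pallet demands below are hypothetical, and a brute-force search stands in for the GAMS solve used in the paper.

```python
from itertools import product

def assign_min_unused(demand, n_vehicles, vcap):
    """Try every node-to-vehicle assignment, discard those that exceed vehicle
    capacity, and keep the one with the least total unused capacity."""
    nodes = list(demand)
    best_unused, best_assign = None, None
    for choice in product(range(n_vehicles), repeat=len(nodes)):
        load = [0] * n_vehicles
        for node, v in zip(nodes, choice):
            load[v] += demand[node]
        if any(l > vcap for l in load):  # capacity violated
            continue
        # unused capacity counts only vehicles that are actually used
        unused = sum(vcap - l for l in load if l > 0)
        if best_unused is None or unused < best_unused:
            best_unused, best_assign = unused, dict(zip(nodes, choice))
    return best_unused, best_assign

# Hypothetical one-day demands in pallets for four hospitals, two 4-pallet vehicles.
demand = {"Hamad": 3, "AlAhli": 2, "Cuban": 1, "AlKhor": 2}
unused, assign = assign_min_unused(demand, n_vehicles=2, vcap=4)
print(unused, assign)
```

For this toy data the eight pallets pack perfectly into two 4-pallet vehicles, so the minimum unused capacity is zero; real instances, as in the paper, usually leave a remainder.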

4 Data Collection In this section, the data collection is introduced. First, equipment information was obtained from STC: pallets, vehicles, the warehouse, product types, and the hospital and depot locations. STC uses pallets to package mixed products; the pallet capacity is 1.63 m3, and each of STC's delivery vehicles carries 4 pallets. Table 3 shows the average weekly demand of the STC hospitals; since a vehicle's capacity is 4 pallets, each delivery is 4 pallets or fewer. The company has one central depot with a total warehouse capacity of approximately 3000 m2, where products are kept and carried according to their features. The hospitals are spread across Qatar: some are in Doha and the rest are outside the city. The hospitals order from the warehouse more than once a week, depending on their patient rates. Monthly demands were obtained from the company and converted to weekly demands. Because the demands came in different formats (individual orders), they were converted into pallet form to reach better values, as shown in Fig. 2; the volume of each demand was measured to express it in pallets.
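The box-to-pallet conversion described above can be sketched as follows, using the 1.63 m3 pallet capacity and the 4-pallet vehicle capacity stated in the text; the box count and per-box volume in the example are hypothetical.

```python
import math

PALLET_VOLUME_M3 = 1.63   # pallet capacity given by the company
VEHICLE_PALLETS = 4       # each delivery vehicle carries up to 4 pallets

def boxes_to_pallets(box_count, box_volume_m3):
    """Convert a box order into whole pallets (a partial pallet still occupies one)."""
    return math.ceil(box_count * box_volume_m3 / PALLET_VOLUME_M3)

def vehicles_needed(pallets):
    """Whole vehicles needed to carry a pallet count."""
    return math.ceil(pallets / VEHICLE_PALLETS)

# Hypothetical order: 120 boxes of 0.04 m3 each, i.e. 4.8 m3 of goods.
pallets = boxes_to_pallets(120, 0.04)
print(pallets, vehicles_needed(pallets))  # 3 pallets fit in 1 vehicle
```

Rounding up at both steps mirrors the practical rule that a part-filled pallet or vehicle still has to travel, which is exactly the unused capacity the assignment model then tries to minimize.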


Fig. 2. Converting the demand from boxes to pallets form

4.1 Assignment Problem This section builds the schedule of the weekly logistics plan of STC in Qatar. The aim is to create a schedule with minimum unused capacity, to be used for finding the optimal routes: the output of this schedule becomes the input of the next step, the VRP. Model indices and parameters are as follows:

i, j: Index of all nodes (1_HamadGeneralHospital, 2_AmbulatoryCareHospital, …, 10_CubanHospital)
k: Demand type (pallet)
tot_jk: The amount of type-k demand to be delivered at node j (units)
vcap_k: Vehicle capacity for the k type of demand delivery (units)

Model variables are as follows:

obj: Total unused capacity
y_i: Takes value 1 if vehicle i is used, 0 otherwise
X_ij: Takes value 1 if vehicle i visits node j, 0 otherwise
uc_ik: Unused capacity of vehicle i for product type k

With the help of the assignment model, the orders are assigned to the corresponding vehicles by considering the vehicle capacities.

min z = Σ_i Σ_k uc_ik   (0)

The objective function (0) seeks to minimize the total unused capacity.

Σ_i X_ij = 1   ∀j   (1)

Constraint (1) ensures that each node j is visited exactly once by a vehicle i.

Σ_j X_ij · tot_jk + uc_ik = vcap_k · y_i   ∀i, k   (2)

Constraint (2) balances the vehicle capacity with the orders assigned to each vehicle i for each product type k.

Σ_j X_ij ≤ 10 · y_i   ∀i   (3)


Constraint (3) provides that a vehicle i can visit nodes (at most 10 of them) only if it is used.

4.2 Vehicle Routing Problem The output of the assignment model is used as the input of the vehicle routing problem. With the help of the VRP, the set of optimal routes is determined for identical fixed-capacity vehicles; each route starts at the main warehouse and serves the hospitals to satisfy their demands. In this study, the vehicle routing model covers the 10 STC hospitals located in Qatar, and the objective of the VRP is to reach the optimal routes. The integer model is developed as follows. Model indices and parameters for the VRP are:

i, j: Index of the nodes obtained from the assignment model for each day and vehicle
k: Demand type (pallet)
tcap_k: Daily time limit (350 min)
fc: Fuel cost per km (0.282 TL/km)
av: Average velocity of the vehicles (0.5 km/min)
BigM: Maximum amount of order in the system
deliverytime_ik: Time spent delivering the type-k orders at node i (min)
tot_ik: Number of type-k orders to be delivered at node i (units)
dis_ij: The distance between nodes i and j (km)
vcap_k: Vehicle capacity for the k type of demand delivery (units)
time_ij: The traveling time between nodes i and j (min)
TotDemand_k: Total demand capacity of the daily delivery (pallets)

Model variables are as follows:

X_ij: Takes value 1 if node j is visited after node i, 0 otherwise
F_ijk: The amount of product k shipped from node i to node j
A_i: The cumulative time spent up to node i (min)
z: Total travelling cost in the system

The solution of the VRP is a set of optimal routes for identical fixed-capacity vehicles from the main warehouse to the hospitals that satisfies the demands. The integer linear programming model of the VRP is shown below.

min z = Σ_{i,j: j≠i} fc · dis_ij · X_ij   (4)

The objective function (4) seeks to minimize the total transportation cost.

Σ_{i: i≠j} X_ji ≥ 1   ∀j   (5)

Constraint (5) ensures that at least one arc enters each demand node.

Σ_{j: j≠i} X_ij ≥ 1   ∀i   (6)

Constraint (6) ensures that at least one arc exits each demand node.

Σ_j X_ji = Σ_j X_ij   ∀i   (7)

Constraint (7) ensures the balance between incoming and outgoing arcs at any given demand node.

A_{1_Depot} = 0   (8)

Constraint (8) sets the starting time at the depot node to 0.

A_j ≥ A_i + deliverytime_ik + time_ij − (1 − X_ij) · BigM   (9)

Constraint (9) determines the cumulative arrival time at each node.

A_i + deliverytime_ik + time_{i,1_Depot} − (1 − X_{i,1_Depot}) · BigM ≤ tcap_k   (10)

Constraint (10) enforces the daily time limit, including the arrival time and the travel time back to the depot.
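The route-cost objective of the VRP can be illustrated with a tiny exact search: for a single vehicle and a handful of assigned nodes, enumerating all visiting orders and pricing each depot-to-depot route at the fuel cost per kilometer finds the optimal tour directly. Only the 0.282 TL/km fuel cost is taken from the parameter list above; the distance matrix below is hypothetical.

```python
from itertools import permutations

FC = 0.282  # fuel cost per km, as in the model parameters

def best_route(depot, stops, dist):
    """Enumerate all stop orders and return the cheapest depot->stops->depot
    route, i.e. the fuel-cost objective minimized exactly for a small instance."""
    best_cost, best_order = float("inf"), None
    for order in permutations(stops):
        path = [depot, *order, depot]
        km = sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))
        cost = FC * km
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_cost, best_order

# Hypothetical distances (km) between the depot (0) and three hospitals (1-3).
dist = [
    [0, 10, 20, 15],
    [10, 0, 12, 25],
    [20, 12, 0, 8],
    [15, 25, 8, 0],
]
cost, order = best_route(0, [1, 2, 3], dist)
print(round(cost, 2), order)
```

Exhaustive enumeration is only feasible for a few stops per vehicle; for the full problem, an integer programming model with subtour and time constraints, as formulated above, is solved with GAMS/CPLEX instead.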

5 Implementation and Results The developed models are solved using the GAMS 23.6.5/CPLEX 12.2.0.2 solver on a computer with 16.00 GB RAM and a 2.90 GHz processor. Both the first-stage and the second-stage models are solved within seconds. The numerical results of the assignment problem for STC can be found in the following sub-sections. 5.1 Current Situation and Proposed Solution In the current situation of STC, covering both the assignment and VRP stages, the total cost is 2470 TL and the total unused capacity is 38 pallets, as shown in Table 2. The proposed assignments and the total unused capacity of each daily route of the proposed model are presented in Table 3, which also shows the number of vehicles used each day, which hospitals should be visited, the daily unused capacities, and the unused vehicles of each day. As a result, with the assignment model we are able to decrease the unused capacity from 38 to 18 pallets and the cost from 2470 TL to 1170 TL.
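As a quick arithmetic check of the reported figures, both the cost (2470 to 1170 TL) and the unused capacity (38 to 18 pallets) drop by roughly 53%:

```python
def pct_reduction(before, after):
    """Percentage reduction from a 'before' value to an 'after' value."""
    return 100.0 * (before - after) / before

cost_saving = pct_reduction(2470, 1170)   # TL, from the reported results
capacity_saving = pct_reduction(38, 18)   # unused pallets, from the reported results
print(f"cost cut by {cost_saving:.1f}%, unused capacity cut by {capacity_saving:.1f}%")
```

Both reductions come out at about 52.6%, which is consistent with the cost being roughly proportional to the wasted vehicle capacity in this instance.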


Table 2. Unused capacity for each day of the week

Days        Unused capacity (pallets)
Monday      6
Tuesday     6
Wednesday   6
Thursday    6
Friday      4
Saturday    3
Sunday      7
Total       38

Table 3. Assignment model results

Days      | Vehicle 1               | Vehicle 2                            | Vehicle 3               | Vehicle 4  | Unused capacity (units)
Monday    | 5_4_HazmM~              | 10_3_Cuba~ & 8_4_AlAhl~              | 4_1_Alkho~              | 3_4_Alwak~ | 3
Tuesday   | 1_2_Hamad~ & 9_2_AlEma~ | 1_4_Hamad~ & 10_2_Cuba~              | 7_1_DohaC~              | 3_1_Alwak~ | 1
Wednesday | 10_1_Cuba~ & 7_3_DohaC~ | 8_1_AlAhl~                           | 7_2_DohaC~              | 3_2_Alwak~ | 4
Thursday  | 1_5_Hamad~ & 8_3_AlAhl~ | 8_2_AlAhl~                           | 1_1_Hamad~              | 9_3_AlEma~ | 2
Friday    | 5_1_HazmM~ & 2_2_Ambul~ | 6_3_Turki~                           | 6_1_Turki~              | 5_2_HazmM~ | 4
Saturday  | 1_3_Hamad~              | 2_4_Ambul~ & 5_3_HazmM~ & 6_2_Turki~ | 6_4_Turki~ & 9_1_AlEma~ | 4_3_Alkho~ | 3
Sunday    | 4_2_Alkho~              | 2_1_Ambul~                           | 2_3_Ambul~ & 3_3_Alwak~ |            | 1
Total unused capacity (units): 18


After applying the VRP method, Fig. 3 shows an example route with two nodes to be visited, and the corresponding map is shown in Fig. 4. For each output presented in Table 3 there is an optimal route for each vehicle and each working day.

Fig. 3. Wednesday optimal route with vehicle 1

Fig. 4. For Wednesday google maps route with vehicle 1

6 Conclusion In this study, the unused capacity of the STC logistics vehicles and the weekly logistics plan of STC are obtained. The optimal routes and their costs are determined using data such as the hospital-to-warehouse and hospital-to-hospital distances. The capacity of the vehicles played an important role in finding the optimal routes. The study consists of two related stages: an Assignment Problem stage and a VRP stage. The first stage produces the Assignment Problem outputs, which give the unused-capacity schedule; the Assignment Problem simplifies the problem and yields a more efficient and useful solution for determining unused capacities. The outputs of the first stage are then used as inputs for the second stage, the VRP. By using the outputs of the first stage as the inputs of the second stage, the optimal


route for the STC logistics plan is obtained as the output of the second stage. The results of these stages are shown in the tables and figures. One of the main motivations for this study is that companies in Qatar rarely address their cost and time problems, which leads to higher costs, wasted time, and even bankruptcy. Acknowledgment. We would like to express our gratitude to all those who made it possible to complete this study. We thank Assist. Prof. İlayda ÜLKÜ, Assist. Prof. Duygun Fatih DEMİREL and Assoc. Prof. Fadime ÜNEY YÜKSEKTEPE for providing us with a once-in-a-lifetime opportunity to work on this fantastic project, which also led us to do a great deal of research and learn many new things. We also thank STC General Manager Mohammed Zouher ALMSADI, Warehouse Facility Manager Ahmed ALMSADI and Purchase Manager Mohanad ALHAKAWATI for sharing their data and for their interest.

References
1. Neves-Moreira, F., Amorim-Lopes, M., Amorim, P.: The multi-period vehicle routing problem with refueling decisions: traveling further to decrease fuel cost? Transp. Res. Part E Logist. Transp. Rev. 133, 101817 (2020)
2. Yang, S., Ning, L., Tong, L.C., Shang, P.: Optimizing electric vehicle routing problems with mixed backhauls and recharging strategies in multi-dimensional representation network. Expert Syst. Appl. 176, 114804 (2021)
3. Alinaghian, M., Shokouhi, N.: Multi-depot multi-compartment vehicle routing problem, solved by a hybrid adaptive large neighborhood search. Omega 76, 85–99 (2018)
4. Archetti, C., Guerriero, F., Macrina, G.: The online vehicle routing problem with occasional drivers. Comput. Oper. Res. 127, 105144 (2021)
5. Molina, J.C., Eguia, L., Racero, J., Guerrero, F.: Multi-objective vehicle routing problem with cost and emission functions. Procedia Soc. Behav. Sci. 160, 254–263 (2014)
6. Fan, H., Zhang, Y., Tian, P., Lv, Y., Fan, H.: Time-dependent multi-depot green vehicle routing problem with time windows considering temporal-spatial distance. Comput. Oper. Res. 129, 105211 (2021)
7. Fachini, R.F., Armentano, V.A.: Logic-based Benders decomposition for the heterogeneous fixed fleet vehicle routing problem with time windows. Comput. Ind. Eng. 148, 106641 (2020)

Solving the Cutting Stock Problem for a Steel Industry
İlayda Ülkü(B), Uğur Tekeoğlu, Müge Özler, Nida Erdal, and Yağız Tolonay
Industrial Engineering Department, Istanbul Kultur University, Istanbul, Turkey
[email protected], {1401046022,1600002214,1700003587,1700002241}@stu.iku.edu.tr

Abstract. The cutting stock problem occupies an important place in the steel industry, as it does in many sectors. Every manufacturing plant wants to keep waste to a minimum: reaching the minimum level of waste reduces the costs of factories and companies, strengthens their ability to make a profit, and ensures that resources are used correctly in our developing world. In this study, the Cutting Stock Problem is addressed in order to minimize the amount of waste. First, according to the customer's request, the size of the steel to be used is determined, the steel plates are taken from stock, and the dies for the parts to be cut are prepared by the production planning department with the program used in the factory. The dies are then sent to the workshop, where the cutting takes place. Before creating the model, data from the dies used in previous cuts were collected; to build a correct mathematical model, simultaneous measurements were taken at the cutting table. The Cutting Stock Problem is used to formulate the prepared dies, and the amount of waste is calculated. With the determined parameters and constraints, a mathematical model is created using the GAMS MIP (Mixed Integer Programming) method and then analyzed. In conclusion, we investigated 3 different scenarios; according to those experiments, the proposed scenario can achieve a better production line, especially in terms of time and waste amount. These improvements can represent important gains for a real production line. Keywords: Cutting Stock Problem · Mathematical modelling · Mixed integer programming · Production efficiency · Waste minimization

1 Introduction The cutting stock problem plays a large part in the metal and paper industries and in a variety of other production and construction industries. Generally, two-dimensional cutting stock problems are used for obtaining refined parts from raw material with cutting sequences. To obtain the needed parts, one or multiple cutting sequences can be used, and different refined parts can be obtained from the same raw material. After the cutting sequences, some waste can appear. If
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 510–518, 2022. https://doi.org/10.1007/978-3-030-90421-0_43


the quality of the waste parts is suitable, the producer can reuse those parts in a new cutting sequence. Therefore, to get the best profit and efficiency from raw materials, companies and engineers try to find and develop the most suitable cutting patterns. These patterns are shaped according to customer orders: shape, amount, and size. Cutting patterns can provide advantages such as efficiency, less waste, and more parts fitted onto the same raw material (such as metal sheets or paper blocks). The cutting patterns are shown in Fig. 1.

Fig. 1. Cutting patterns

In this study, YILGENCİ's laser cutting patterns for rollers, trailer parts and chassis parts are used, with metal sheets as the raw material. With that material and those cutting patterns, the producer tries to obtain the demanded roller parts. The optimization of these metal cutting patterns is studied in order to produce less waste and satisfy customer needs. The most optimal patterns and the least waste can be calculated with different methods according to customer needs and constraints. Nowadays, companies and producers have a variety of solution methods for different types of materials and needs: they can work with vertical and horizontal stripes or T-shaped lines. On the other hand, to obtain non-geometric wrought parts, heuristic models can be more suitable. These solutions involve large and hard mathematical models, which is why such problems are categorized as NP-hard. In this study, to find the most optimal cutting pattern, a mathematical model is developed and solved with the help of GAMS software. In addition, some production constraints are included in the model, such as the size and amount of the needed finished parts and the size of the sheets. After implementing the first cutting sequence, further iterations obtain new parts from the leftovers. With these


constraints and several iterations, the objective of the study is to minimize waste while meeting customer needs.

2 Problem Definition In engineering, the cutting stock problem concerns obtaining machined parts with specific dimensions from raw material, which can be a block of wood, a metal sheet, or a paper roll. The main aim is to get the maximum number of machined parts from the raw material while minimizing waste. Mathematically, the cutting stock problem is classified as an optimization problem, and such problems can involve very complex computations; these complex problems are categorized as NP-hard. To solve them, many engineers and mathematicians apply different methods and try to find the most optimal solution. Generally, these methods evolve into heuristics. The basis of a heuristic method is self-discovery: some solutions take shape from past experience. These strategies rely on the use of easily accessible, if loosely applicable, information to guide problem-solving for people, machines, and abstract topics. In the literature, the cutting stock problem is mainly used to solve problems in the metal industry and the woodworking sector. Among heuristic approaches, the most common method is column generation, a technique for dealing with linear programs with an exponential number of variables; it is the dual of cut generation, which handles linear programs with an exponential number of constraints. Tieng et al. [1] and Albayrak [2] investigated heuristic methods. Similarly, Dammak [3] and Malaguti [8] included the two-dimensional cutting stock problem in their studies, while Demircan [4], Tanır [5], Struckmeier [6], Gbemileke et al. [7], and Kasımbeyli [9] discussed the one-dimensional cutting stock problem in their research.
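The pattern-based view behind column generation can be sketched for the one-dimensional case: each candidate "column" is a maximal cutting pattern, i.e. a combination of piece counts that fits in one stock length and leaves no room for a further piece. The sketch below enumerates such patterns for a hypothetical instance (the stock length and piece sizes are made up, not the company's data).

```python
def maximal_patterns(stock_len, piece_lens):
    """Enumerate all maximal 1-D cutting patterns: piece counts that fit in one
    stock length such that no further piece can be added. These are the candidate
    'columns' a column-generation scheme would price out."""
    patterns = []

    def extend(idx, counts, used):
        if idx == len(piece_lens):
            if all(used + p > stock_len for p in piece_lens):  # maximal: nothing fits
                patterns.append((tuple(counts), stock_len - used))  # (counts, waste)
            return
        max_n = (stock_len - used) // piece_lens[idx]
        for n in range(max_n + 1):
            extend(idx + 1, counts + [n], used + n * piece_lens[idx])

    extend(0, [], 0)
    return patterns

# Hypothetical instance: 100 cm stock bars, pieces of 45, 36 and 31 cm.
for counts, waste in maximal_patterns(100, [45, 36, 31]):
    print(counts, "waste:", waste)
```

Even this tiny instance yields six maximal patterns with waste ranging from 2 to 28 cm, which illustrates why enumerating all patterns explodes for real data and why pricing them on demand, as column generation does, is preferred.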

3 Methodology An MIP (Mixed Integer Programming) model is developed and solved with the help of GAMS in order to transfer the cutting drawings prepared for the stock cutting problem to the laser cutting benches, so that minimum waste is obtained from the steel plates to be cut. Fig. 2 represents the important steps of the study. In this section, the data used as parameters, the variables, and the model indices are explained in detail. The model parameters are as follows, for a product that consists of 3 parts:

scrap_i: The scrap amount of pattern i
first_i: The number of 1st parts in pattern i
second_i: The number of 2nd parts in pattern i
third_i: The number of 3rd parts in pattern i
ct_i: The cutting time of pattern i
p: The price of the product


Fig. 2. Important steps of the study

c: The waste cost per square centimeter
h1: The holding cost of one 1st part
h2: The holding cost of one 2nd part
h3: The holding cost of one 3rd part
oq: The order quantity per week
ch: The cutting hour capacity per week
cm: The cutting minute capacity per week

Model variables are as follows:

x_i: the number of times pattern i is cut
y: the number of products produced
z: the weekly profit of the firm

3.1 Mathematical Model

The objective function maximizes total profit. In Eq. (1), there are holding costs for each part in the patterns and the total scrap cost. The difference between the price multiplied by the number of products produced and the total cost is the quantity to be maximized:

z = p·y − c·Σ_i scrap_i·x_i − h1·Σ_i first_i·x_i − h2·Σ_i second_i·x_i − h3·Σ_i third_i·x_i   (1)

Equation (2) restricts the cutting time: the total time of the patterns cut must not exceed the cutting minute capacity per week:

Σ_i ct_i·x_i ≤ cm   (2)

514

˙I. Ülkü et al.

Equation (3) limits the number of patterns cut: the total number of patterns cut must not exceed the order quantity per week:

Σ_i x_i ≤ oq   (3)

In Eq. (4), due to the structure of the product (depending on the product tree) there is 1 Part1 in 1 Trailer Hitch; therefore the number of Trailer Hitches produced cannot be greater than the total number of Part1 in the cut patterns:

Σ_i first_i·x_i ≥ y   (4)

In Eq. (5), due to the structure of the product (depending on the product tree) there is 1 Part2 in 1 Trailer Hitch; therefore the number of Trailer Hitches produced cannot be greater than the total number of Part2 in the cut patterns:

Σ_i second_i·x_i ≥ y   (5)

Equation (6) shows that, due to the structure of the product (depending on the product tree), there is 1 Part3 in 1 Trailer Hitch; therefore the number of Trailer Hitches produced cannot be greater than the total number of Part3 in the cut patterns:

Σ_i third_i·x_i ≥ y   (6)

In Eq. (7), y, the number of products produced, is a non-negative integer:

y ≥ 0, y integer   (7)

In Eq. (8), x_i, the number of times pattern i is cut, is a non-negative integer:

x_i ≥ 0, x_i integer   (8)
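For prototyping outside GAMS, the model above can be checked by brute force on a small instance. The sketch below is ours: the two patterns are illustrative stand-ins (not the company's full data), while the price and holding costs follow Table 1. Since p > 0, the optimal y always equals the smallest of the three part totals, so y can be computed directly instead of being a free variable:

```python
from itertools import product

# Illustrative toy data: two cutting patterns of a plate
scrap  = [10.5, 10.8]      # waste per pattern cut (cm^2)
first  = [9, 5]            # Part1 count per pattern
second = [3, 8]            # Part2 count per pattern
third  = [6, 7]            # Part3 count per pattern
ct     = [25.59, 25.29]    # cutting time per pattern (min)
p, c = 315.894, 0.5        # product price, waste cost per cm^2
h1, h2, h3 = 2.0, 0.5, 0.3 # holding costs (Table 1)
cm, oq = 400, 12           # minute capacity / order quantity per week

best = (float("-inf"), None)
for x in product(range(oq + 1), repeat=2):       # candidate pattern counts
    if sum(x) > oq or sum(ct[i] * x[i] for i in range(2)) > cm:
        continue                                 # Eqs. (2)-(3) infeasible
    # Eqs. (4)-(6): products limited by the scarcest part type
    y = min(sum(first[i] * x[i] for i in range(2)),
            sum(second[i] * x[i] for i in range(2)),
            sum(third[i] * x[i] for i in range(2)))
    z = (p * y - c * sum(scrap[i] * x[i] for i in range(2))
         - h1 * sum(first[i] * x[i] for i in range(2))
         - h2 * sum(second[i] * x[i] for i in range(2))
         - h3 * sum(third[i] * x[i] for i in range(2)))    # Eq. (1)
    best = max(best, (z, x))
# For this toy data the optimum cuts pattern 1 four times and pattern 2
# eight times, balancing the three part counts at y = 76.
```

Exhaustive enumeration is only viable for tiny instances; the real problem is solved with a MIP solver as described above.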

With those equations, the integer variables are defined.

3.2 Data Collected

The Yılgenci company shared with us the patterns, product quantity, waste amount, cutting time, price and holding costs. According to Tables 1 and 2, the current and proposed solutions use several patterns. The patterns differ in their No. 1, No. 2 and No. 3 sub product quantities, waste amount and cutting time. While the proposed solution has 6 patterns, the current solution has just 5. Holding costs remain the same for both solutions.


Table 1. Proposed GAMS input

| | Pattern 1 | Pattern 2 | Pattern 3 | Pattern 4 | Pattern 5 | Pattern 6 |
|---|---|---|---|---|---|---|
| No. 1 sub product quantity | 4 | 9 | 6 | 5 | 7 | 7 |
| No. 2 sub product quantity | 8 | 3 | 6 | 8 | 7 | 6 |
| No. 3 sub product quantity | 10 | 6 | 8 | 7 | 3 | 4 |
| Waste amount (cm²) | 10.35 | 10.5 | 10.9 | 10.8 | 11.25 | 11.47 |
| Cutting time (min) | 26.28 | 25.59 | 25.3 | 25.29 | 24.02 | 23.38 |

Final production amount: 38; Price: 315.894; Cost: 0.5; Holding cost 1: 2; Holding cost 2: 0.5; Holding cost 3: 0.3

Table 2. Current GAMS input

| | Pattern 1 | Pattern 2 | Pattern 3 | Pattern 4 | Pattern 5 |
|---|---|---|---|---|---|
| No. 1 sub product quantity | 19 | 19 | 0 | 0 | 0 |
| No. 2 sub product quantity | 0 | 0 | 25 | 13 | 0 |
| No. 3 sub product quantity | 1 | 1 | 0 | 20 | 16 |
| Waste amount (cm²) | 6.07 | 6.07 | 7.65 | 6.75 | 15.2 |
| Cutting time (min) | 34.1 | 34.1 | 32.41 | 35.13 | 15.2 |

Final production amount: 38; Price: 315.894; Cost: 0.5; Holding cost 1: 2; Holding cost 2: 0.5; Holding cost 3: 0.3

3.3 Product Requirements and Their Weights

S700 MC high-strength steel plates are preferred in the automotive industry. S700 MC is highly resistant to impact, useful in cold forming, and highly resistant in the load-bearing area. The S700 MC plates covered here are used in the steel trailer connection piece. S700 MC is produced in many sizes; the sizes used in the project are shown in Table 3.

Table 3. Product requirements

| | Raw material type | Raw material proportions |
|---|---|---|
| 1 | S700-MC | 1500 mm × 1500 mm |
| 2 | S700-MC | 1500 mm × 1500 mm |
| 3 | S700-MC | 1500 mm × 1500 mm |
| 4 | S700-MC | 1500 mm × 1500 mm |
| 5 | S700-MC | 1500 mm × 1500 mm |
| 6 | S700-MC | 1500 mm × 1500 mm |

3.4 Output Data

According to Table 4, the proposed solution uses patterns 2, 4 and 5. The absolute and relative gaps are only slightly different from 0. The total waste over all patterns is 407.7 cm² and the total cutting time is 962.08 min. To find the total cutting time, the number of times each pattern is used is multiplied by its cutting time, and the products are summed (the total waste is computed in the same way). The employees work 8 h per day, so dividing the total cutting time in hours by 8 gives the total cutting time in days.

Table 4. Proposed GAMS output

| | Pattern 2 | Pattern 4 | Pattern 5 |
|---|---|---|---|
| Waste amount | 10.5 | 10.8 | 11.25 |
| Cutting time | 25.59 | 25.29 | 24.02 |
| Number of pattern used | 12 | 24 | 2 |
| Total waste for each pattern | 126 | 259.2 | 22.5 |
| Total time for each pattern | 307.08 | 606.96 | 48.04 |

Total waste amount: 407.7 cm²; Total time: 962.08 min
Total cutting time: 962.08 min = 16.034 h = 2.004 days
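The roll-up described above can be reproduced in a few lines; the figures below are taken directly from Table 4:

```python
# Pattern: (waste per cut in cm^2, cutting time per cut in min, times used)
patterns = {2: (10.5, 25.59, 12), 4: (10.8, 25.29, 24), 5: (11.25, 24.02, 2)}

total_waste = sum(w * n for w, t, n in patterns.values())  # cm^2
total_time  = sum(t * n for w, t, n in patterns.values())  # minutes
days = total_time / 60 / 8                                 # 8-hour workdays

print(round(total_waste, 2), round(total_time, 2), round(days, 3))
# → 407.7 962.08 2.004
```

The printed values match the table's totals, confirming the per-pattern multiplications.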

According to Table 5, patterns 2, 3 and 4 are used in the current solution. The absolute gap and relative gap remain as expected. The total waste over all patterns is 249.44 cm² and the total cutting time is 1303.83 min, i.e. 2.716 days. The difference between the proposed and current solutions clearly shows that the proposed solution gives a better result.

Table 5. Current GAMS output

| | Pattern 2 | Pattern 3 | Pattern 4 |
|---|---|---|---|
| Waste amount | 6.07 | 7.65 | 6.75 |
| Cutting time | 34.1 | 32.41 | 35.13 |
| Number of pattern used | 17 | 5 | 16 |
| Total waste for each pattern | 103.19 | 38.25 | 108 |
| Total time for each pattern | 579.7 | 162.05 | 562.08 |

Total waste amount: 249.44 cm²; Total time: 1303.83 min
Total cutting time: 1303.83 min = 21.73 h = 2.716 days

4 Conclusion

In this study, a laser cutting machine of the YILGENCİ company and its real production data are used for a cutting stock problem in the metal industry. The main aim of the cutting stock problem is to obtain the maximum number of machined parts from the raw material while minimizing waste. The cutting stock problem is classified as an optimization problem; such optimization problems can involve very complex computations and are known as NP-hard problems. In this way, YILGENCİ's cutting stock patterns and cutting sequence are optimized. The objective of this study is to minimize the waste amount and satisfy the needed parts by developing better cutting patterns. Programming and analysis are done via the GAMS software. In the solution and optimization phase, different scenarios were formed with different types and amounts of cutting patterns. The formed cutting patterns were used to create different cutting scenarios, which include different types and numbers of cutting planes. After the implementation phase, a proposed model was created, and several experiments were done with it to make comparisons with the current situation. The GAMS outputs of the proposed model satisfy the final product amount at the same cost as the current situation. However, with the proposed scenario, better waste amounts are achieved (1.342 for the proposed scenario versus 1.568 for the current situation). The proposed scenario also has a shorter total cutting time; the cutting time difference is 338 min. At the same time, there is a suitable gap in the GAMS outputs: the absolute gap is 0.900000 and the relative gap is 0.000012.


To sum up, according to the study, the proposed scenario can achieve a better production line, especially in terms of time and waste amount. These improvements can be important gains for YILGENCİ's production line.

Acknowledgement. In a world where our resources dwindle every day, the concept of optimization is spreading rapidly, as companies try to use resources more efficiently and take effective steps to reduce costs. The optimum usage of raw materials in the steel industry, as in many other sectors, is a concern that cannot be ignored; the cutting stock problem was handled and the study progressed in that direction. We want to thank first of all Dr. İlayda Ülkü, who supported us throughout this study, and all our professors who provided information and the necessary data during the study. We want to give special thanks to YILGENCİ A.Ş. Chairman of the Board of Directors Orhan Yılgenci, who ensured easy access to all data during the study period. For helping to develop this study, we thank Factory Manager Serkan Kayaalp, who provided us with all the necessary information, and Production Planning Engineer Erhan Kor, who provided feedback on all aspects of the study. Finally, we would like to extend our thanks to the YILGENCİ family for their warm welcome, and to our families, who were our main supporters during the study period.

References

1. Tieng, K., Sumetthapiwat, S., Dumrongsiri, A., Jeenanunta, C.: Heuristic for two dimensional rectangular guillotine cutting stock. Thailand Stat. 14(2), 147–164 (2016)
2. Albayrak, E.: İki boyutlu dikdörtgen şekilli stok kesme problemleri için sezgisel-metasezgisel algoritma ve yazılım geliştirme (Heuristic-metaheuristic algorithm and software development for two-dimensional rectangular stock cutting problems). Unpublished master's thesis, Balikesir University Institute of Science and Technology (2013)
3. Mellouli, A., Dammak, A.: An algorithm for two dimensional cutting stock problem based on a pattern generation procedure. Inf. Manag. Sci. 19(2), 201–218 (2008)
4. Tanir, D., Ugurlu, O., Guler, A., Nuriyev, U.: One-dimensional cutting stock problem with divisible items. https://arxiv.org/ftp/arxiv/papers/1606/1606.01419.pdf
5. Tanir, D., Ugurlu, O., Guler, A., Nuriyev, U.: One-dimensional cutting stock problem with divisible items: a case study in steel industry. TWMS J. App. Eng. Math. 9(3), 473–484 (2019)
6. Struckmeier, F., León, F.P.: Nesting in the sheet metal industry: dealing with constraints of flatbed laser-cutting machines. Procedia Manuf. 29, 575–582 (2019)
7. Ogunranti, G.A., Oluleye, A.E.: Minimizing waste (off-cuts) using cutting stock model: the case of one dimensional cutting stock problem in wood working industry. J. Ind. Eng. Manag. 9(3), 834–859 (2016)
8. Furini, F., Malaguti, E.: Models for the two-dimensional two-stage cutting stock problem with multiple stock size. Comput. Oper. Res. 40(8), 1953–1962 (2013)
9. Kasımbeyli, N., Demirci, D.: Bir boyutlu kesme problemi için üç amaçlı bir matematiksel model ve çözüm algoritması (A mathematical model with three objectives and the solution algorithm for one dimensional cutting stock problem). Endüstri Mühendisliği Dergisi 29(3–4), 42–50 (2019)

Quality Management

Comparison of Optical Scanner and Computed Tomography Scan Accuracy Michaela Kritikos1(B) , Jan Urminsky2 , and Ivan Buransky1 1 Department of Machining and Computer-Aided Technologies, Slovak University of

Technology in Bratislava, Bratislava, Slovakia {michaela.kritikos,ivan.buransky}@stuba.sk 2 Department of Welding and Joining of Materials, Faculty of Materials Science and Technology, Slovak University of Technology in Bratislava, Bratislava, Slovakia [email protected]

Abstract. Nowadays, component quality control is part of the production process in every area of the engineering industry. There are many possibilities for product quality assessment - contact and non-contact methods. They differ in the accuracy of the data we are able to obtain from different instruments. In recent years, the application of non-contact computed tomography has seen a great rise. The most common non-contact method used in industry is optical scanning. These two techniques of component accuracy evaluation differ in many ways, and both have advantages and disadvantages. One of the biggest positives of computed tomography is the possibility of internal structure control (defects, inclusions, voids). But there is still the question of scan accuracy. This paper deals with the comparison of scans obtained from the 3D optical scanner GOM ATOS Triple Scan II and the CT device METROTOM 1500 from the Zeiss company. The reference values were obtained by a contact method using the coordinate measuring machine CenterMAX from the Zeiss company. The geometrical product specifications were evaluated. Keywords: Coordinate measuring machine · Optical scanner · Computed tomography · Scan accuracy · Geometrical product specifications

1 Introduction

In modern industry, there is a constant need to use more efficient measurement methods that provide both high speed and high accuracy [1]. Acquiring 3D point data from physical objects is increasingly being adopted in a variety of product development processes, such as quality control and inspection, reverse engineering and many other industrial fields. A variety of sensor technologies, such as the tactile method and optical techniques, have been developed to meet the requirement of surface digitization [2]. Non-contact 3D digitization techniques are constantly evolving, and scanners based on triangulation or working with structured light projection are becoming more accurate, flexible and affordable, which allows their use in industry.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 521–530, 2022. https://doi.org/10.1007/978-3-030-90421-0_44

Although they still


do not have the accuracy of coordinate measuring machines (CMMs), structured light scanners are very fast and accurate and achieve fine detail under optimal conditions [3]. Structured Light (SL) systems enable robust, high-quality capture of 3D geometry and are actively used in several fields. These systems can be constructed using commercial off-the-shelf (COTS) hardware, making them accessible and affordable. The obtainable accuracy and precision of such systems vary considerably and are mainly functions of several design parameters [4]. In contrast to coordinate measuring machines, optical scanners are, for instance, less suitable for wide-ranging and highly precise metrological roles. On the other hand, there are applications where it is more useful to use optical non-contact measuring systems instead of contact methods. When measuring small parts, optical measuring systems are very effective [5]. Optical 3D scanners can be used in a wide range of applications, in checking dimensions and deviations of machined products but also welded joints [6]. Computed tomography (CT) can be considered a third revolutionary development in coordinate metrology, following the tactile coordinate measuring machines and the optical 3D scanners. CT is still a new technology, introduced as a dimensional metrology tool in 2005 (the first CT machine was exhibited at the Control Fair in Germany) [7, 9]. CT has recently made it possible to extend the measuring possibilities. This method is based on X-raying the objects. CT scanners have been known in medicine for a long time, but for technical imaging of 3D objects these devices have been used for only about 10 years. The image from a CT scanner allows for estimation of the geometry of manufactured products as well as internal closed surfaces. It also allows for analysis of pores in the material interior or estimation of subassembly distortion during joining [8].
CMMs or optical measuring instruments like an optical scanner can measure the external surface of a part, but not the internal or other structures inaccessible to tactile or vision-based inspection. Despite these limitations, measurements can still be achieved with conventional CMMs or optical instruments by disassembling the object (in the case of inaccessible locations) and through extrapolation methods that can determine the contact location at zero force. CT metrology tools, which eliminate the above difficulties, are increasingly being used by manufacturing companies spanning a variety of industries such as automotive, electronics, aerospace, medical devices, cast materials, injection moulding, plastics, rubber, and 3D printing or additive manufacturing. Their increasing relevance is evident in the recent growth of surveys in the field of dimensional metrology referring to CT as a tool for non-destructive dimensional quality control, i.e., for traceable measurement and geometrical tolerance verification of industrial components [9]. 3D printing, also called additive manufacturing, is a way of manufacturing parts layer upon layer [10]. According to VDI 3405, different additive manufacturing techniques exist [11]. In this article, PolyJet (photopolymer jetting) technology was used. PolyJet technology uses UV energy to cure layers of photopolymer. The amount of energy that reaches each layer is related to several aspects of the manufacturing procedure, such as the jetting head displacement strategy or the UV irradiation pattern [10]. This 3D printing technology is used because it has many benefits, for example, creating smooth, detailed prototypes that convey final-product aesthetics, producing accurate jigs, fixtures and other manufacturing tools, achieving complex shapes, intricate details and delicate features, and incorporating the widest variety of colours and materials into a single model


for unbeatable efficiency [12]. Typically, PolyJet technology is used for manufacturing mould tooling [13]. The accuracy of 3D printing technology depends on the 3D printing method, machine and environment [14–16]. In the last years the computed tomography method is finding its application in the production process in many companies. The result of the CT scanning can be generating as STL model which is comparable to a model acquired by optical 3D scanning. But the price of the CT model is much more expensive than optical scans. There is a presumption that the CT scan is more accurate. This paper experiment leads to verification if there is possibility to substitute CT scans by optical scans with lower price at the same accuracy.

2 Experimental Investigation

The aim of this experimental investigation is the comparison of the accuracy of scans obtained by an optical 3D scanner and by computed tomography. The reference values used for the comparison of the data achieved from the scans were obtained by the CMM CenterMAX with a VAST XTR gold measuring head. The maximum permissible error MPE_E of this measuring device is 1.5 + L/250 µm at 26 °C. Computed tomography and optical scanners both have some limits which had to be taken into consideration when deciding on the kind of investigated part. The biggest limit of the CT application was part density and size, and the biggest limit of the optical 3D scanner was a shiny surface. The CAD model was designed in the Solidworks software and was built from different features in order to be able to evaluate different geometrical product specifications (Fig. 1).
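The stated CMM error bound grows with the measured length L. A small helper (the function name is ours, for illustration only) makes the relation concrete:

```python
def mpe_e_um(length_mm):
    """Maximum permissible error of the reference CMM in micrometres,
    per the stated specification MPE_E = 1.5 + L/250 (L in mm)."""
    return 1.5 + length_mm / 250.0

# e.g. a feature of roughly 110 mm, the longest distance evaluated later
print(round(mpe_e_um(110), 2))  # → 1.94
```

At the part sizes used here, the CMM's error bound is roughly two micrometres, well below the scanner deviations discussed later, which justifies using the CMM values as references.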

Fig. 1. Part’s CAD model

Afterwards, PolyJet Additive Manufacturing (AM) was used to create the part. In AM, parts are manufactured layer upon layer. The Objet30 PRO 3D printer was used. PolyJet


is a powerful 3D printing technology that produces smooth, accurate parts, prototypes and tooling. With microscopic layer resolution and accuracy down to 0.014 mm, it can produce thin walls and complex geometries using the widest range of materials available with any technology. The parameters used for manufacturing are shown in Table 1.

Table 1. Parameters of 3D printing [17]

| Build resolution (XYZ) | 600 × 600 × 900 DPI |
| Build size | 294.0 × 192.0 × 148.6 mm |
| Layer thickness | horizontal build layer range between 16 µm–36 µm |
| Build modes | High speed: 28 µm resolution |
| Model materials | Neutral: VeroPureWhite™ |
| Print time | 16 h 59 min |
| Total material | 1.408 g |
The contact measurement for the achievement of reference values was performed using the Calypso software. The strategies of measurement and the method of evaluation were set, and the same measuring program was used for the evaluation of the STL models from the optical scanner and from the CT device. A circle path was used for the measurement of rotational elements and a polyline for the measurement of planes. The scanning parameters on the CMM were set according to the producer's recommendations. Geometrical product specifications (concentricity, flatness, angularity, perpendicularity, parallelism, position, roundness, straightness) and dimensions (diameters and distance) were evaluated. The scanning process was carried out with the GOM ATOS Triple Scan II optical 3D scanner, on which the chosen measuring volume was installed with the scanning parameters shown in Table 2. The scanner has the SO (small object) configuration with one projector and two cameras.

Table 2. Parameters of 3D scanning by GOM ATOS Triple Scan II [18]

| Sensor | ATOS II Rev. 02 |
| Camera position | SO |
| Name of volume | MV 170 |
| Measuring volume (L × W × H) | 170 × 130 × 130 mm |
| Measuring point distance | 71 µm |
| Recommended reference points (∅) | 0.8 mm |
| Measuring distance | 490 mm |
| Camera angle | 28° |
| Focal length camera lenses | 23 mm |
| Focal length projector lens | 26 mm |


For the selected measuring volume, reference points with a diameter of Ø0.8 mm are recommended; these were stuck on the scanned object. These points allow tracking of the scanned object in the measuring space of the scanner, correct orientation of the object, and joining of each 2D image into the resulting 3D form. The object to be scanned has to be able to reflect the blue-light fringe projection back to the camera system; a shiny object reflects too much light because of the scattering of the incident light. For this reason, a surface coating of titanium powder was used, which allows correct reflection of light and contrast of the object surface. The ATOS scanner runs with the GOM ATOS Professional V7.5 software, in which the parameters of digitization were set. The camera focus and the projector focus, as well as the polarization filter of the camera, were adjusted for the best contrast of the sample surface. Full resolution, normal exposure time and high scan quality were set, as were the position, angle and number of rotations of the sample situated on the rotary table in relation to the measuring space. Subsequently, digitizing was performed. Next, other steps were carried out, such as deleting redundant scanned features and objects (e.g. the surface of the rotary table), followed by standard mesh polygonization and post-processing with more details. As a result of these steps, the digital model of the 3D printed sample was exported in the STL format and can be used for further evaluation. The scanned model obtained from the 3D optical scanner has visible defects in holes. Scanning of deeper holes is problematic due to the dimensions of the holes and the poor quality of reflection of the blue-light fringe projection back to the camera system (Fig. 2).

Fig. 2. Scan achieved by optical scanner


Another step was CT scanning with the METROTOM 1500 from the Zeiss company. This device is an industrial computed tomograph designed primarily for scanning products made of light metals (aluminium alloys) and plastics. The basic system consists of an X-ray tube using a tungsten target to emit radiation. The X-ray beam has a conical shape; it is emitted from the X-ray source, passes through the scanned object and falls onto the detector. The parameters for data acquisition were as follows:

– voltage: 160 kV
– current: 700 µA
– detector resolution: 1024 × 1024 px
– number of projections: 1500
– voxel size: 198.6 µm

The scan obtained from CT shows visible layers, which are the result of the 3D printing method (Fig. 3).

Fig. 3. CT scan

The achieved results confirmed the presumption that the results obtained from the CT scan evaluation are closer to reality, as they are closer to the reference values (Table 3). The scans were evaluated using the same program in the Calypso software to rule out errors which could be caused by the application of different software and evaluation methods.


Table 3. Comparison of achieved results

| Characteristics | CMM reference | Atos scan | CT scan | Deviation Atos (actual − nominal) | Deviation CT (actual − nominal) |
|---|---|---|---|---|---|
| Concentricity (mm) | 0.078 | 0.0595 | 0.0802 | −0.0185 | 0.0022 |
| Flatness (mm) | 0.0789 | 0.0832 | 0.0762 | 0.0043 | −0.0027 |
| Angle 75° (°) | 74.6667 | 74.6469 | 74.6309 | −0.0198 | −0.0358 |
| Angularity_60° (mm) | 0.262 | 0.296 | 0.306 | 0.034 | 0.044 |
| Diameter 10–1 (mm) | 10.2012 | 10.2351 | 10.1837 | 0.0339 | −0.0175 |
| Diameter 10–2 (mm) | 10.1693 | 10.1876 | 10.1778 | 0.0183 | 0.0085 |
| Parallelism cylinders (mm) | 0.0061 | 0.0292 | 0.0123 | 0.0231 | 0.0062 |
| Angle cone (°) | 20.0745 | 20.1602 | 20.0931 | 0.0857 | 0.0186 |
| Position ball (mm) | 0.2036 | 0.2429 | 0.1712 | 0.0393 | −0.0324 |
| Perpendicularity (mm) | 0.154 | 0.217 | 0.212 | 0.063 | 0.058 |
| Roundness (mm) | 0.1003 | 0.1268 | 0.1256 | 0.0365 | 0.0253 |
| Straightness (mm) | 0.0613 | 0.0784 | 0.0747 | 0.0171 | 0.0134 |
| Parallelism planes (mm) | 0.2096 | 0.2773 | 0.1759 | 0.0677 | −0.0337 |
| Distance (mm) | 110.2491 | 110.323 | 110.245 | 0.0739 | −0.0041 |

The maximum optical scan deviation was 0.0857°, achieved in the evaluation of the cone angle, and the minimum deviation was 0.0043 mm, achieved in the flatness evaluation. The maximum CT scan deviation was 0.044 mm, achieved in the angularity evaluation of the plane, and the minimum deviation was 0.0022 mm, in the concentricity evaluation. From the results, it can be estimated that the scan accuracy is at least 1/5 of the voxel size (198.6 µm).
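The deviation columns of Table 3 are simply the scan value minus the CMM reference; the extremes quoted above can be recomputed from a few of the table's rows:

```python
# (feature, CMM reference, ATOS scan, CT scan) — a subset of Table 3
rows = [
    ("Concentricity (mm)", 0.078, 0.0595, 0.0802),
    ("Flatness (mm)", 0.0789, 0.0832, 0.0762),
    ("Angle cone (deg)", 20.0745, 20.1602, 20.0931),
    ("Angularity_60 (mm)", 0.262, 0.296, 0.306),
]

# Deviation = measured scan value minus the CMM reference value
atos_dev = {name: round(atos - ref, 4) for name, ref, atos, ct in rows}
ct_dev = {name: round(ct - ref, 4) for name, ref, atos, ct in rows}
# atos_dev["Angle cone (deg)"] gives the 0.0857 maximum quoted above;
# ct_dev["Concentricity (mm)"] gives the 0.0022 minimum.
```

Running this over all fourteen rows would reproduce both deviation columns of Table 3.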

3 Discussion and Conclusion

The effectiveness of quality control is very important to take into consideration. The most used methods of part quality evaluation in practice are CMMs and non-contact scanners (optical and computed tomography). These methods of dimensional and angular evaluation of parts are often used in many industrial sectors. The price of


applying the different methods varies. Therefore, it is very important to have information about the accuracy of the different methods of acquiring data and to choose the best way (economically and technically) for specific tasks. The experimental investigation in this paper deals with the comparison of the optical and the CT scan. Both of these methods have advantages and disadvantages, and this has to be taken into account when choosing the right method. Optical scanners have a disadvantage in the measurement of holes, where it is very difficult for the cameras to see to a great depth. The scanning process of such elements is not precise due to the configuration and location of the optical scanner's cameras and projector. Optical scanning has limitations because the device works on the principle of stereovision, which allows scanning of free-form surfaces that do not contain elements exceeding the dimensional and geometric limits of the principle of obtaining points from the scanned sample surface by triangulation. Computed tomography devices have a very big disadvantage in the measurement of materials with high density, which is not a problem for optical scanners. Another big limitation of CT devices is the size of the part - the measuring space is a cylinder with a height of 350 mm and a diameter of 350 mm. The comparison of the accuracy of the scans obtained with these two methods showed that the optical scan is less accurate than the CT scan. The deviations from the nominal values in the evaluation of the optical scan started from 0.0043 mm, in the evaluation of flatness. Roundness and straightness deviations were at the level of a hundredth of a millimeter. The highest deviation, 0.0857°, was achieved in the evaluation of the cone angle. All the deviations were less than 0.1 mm. Here the limit for the application of optical scanners in complex part measurement can be found.
The deviations could also be influenced by the measuring point distance, which was in this case 0.071 mm. The step width in the Calypso software was set to 0.05 mm. The CT scan deviations ranged from 0.0022 mm (the value closest to nominal - concentricity evaluation) to 0.044 mm (angularity evaluation). The highest deviations were also achieved in angular evaluation. The results obtained from the CT scan evaluation confirmed the presumption that CT scan accuracy can be estimated as one-fifth of the voxel size.

Acknowledgment. This research was supported by the research project KEGA No. 022STU-4/2019. The authors express their sincere thanks for the financial contributions.

References 1. Sładek, J., Błaszczyk, P.M., Kupiec, M., Sitnik, R.: The hybrid contact–optical coordinate measuring system. Measurement 44(3), 503–510 (2011). ISSN 02632241. https://doi.org/10.1016/j.measurement.2010.11.013. (https://www.sciencedirect.com/ science/article/pii/S0263224110003039) 2. Li, F., Stoddart, D., Zwierzak, I.: A performance test for a fringe projection scanner in various ambient light conditions. Procedia CIRP 62, 400–404 (2017). ISSN 2212-8271. https://doi. org/10.1016/j.procir.2016.06.080. https://www.sciencedirect.com/science/article/pii/S22128 27116306989



Researches Regarding the Development of a Virtual Reality Android Application Explaining Orientation Tolerances According to the Latest GPS Standards Using 3D Models

Grigore Marian Pop, Radu Comes, Calin Neamtu, and Liviu Adrian Crisan

Department of Design Engineering and Robotics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
{grigore.pop,radu.comes,calin.neamtu,liviu.crisan}@muri.utcluj.ro

Abstract. This paper presents a virtual reality application developed for engineering students, aiming to help them develop the skills to correctly interpret the geometrical tolerances for orientation, such as parallelism, perpendicularity and angularity, according to the latest Geometrical Product Specification (GPS) standards. Previous studies on the impact of mobile learning on student achievement suggest that it is a promising educational method. Educational applications used on mobile devices create a friendly environment and have a positive effect on learning. Spatial skills are essential for engineering students, who need to imagine different shapes and orientations. Based on their experience in teaching courses such as computer-aided design and tolerances and dimensional control, the authors have found that many students have difficulty understanding 3D spatial geometry and the transitions between 2D drawings and 3D parts. These skills can be developed through specific training with appropriate tools, such as the virtual reality application presented in this paper. Building on extensive research, the authors developed a training application that offers the viewer a 360-degree view of different mechanical parts presenting 3D geometrical tolerance annotations according to the latest GPS standards. The developed application also presents 3D models of the tolerance meaning based on the skin model, the tolerance zones, the condition of conformity, as well as the datum features. Keywords: Mobile learning · Geometrical Dimensioning and Tolerancing (GD&T) · Orientation tolerances · Geometrical Product Specification (GPS) · Virtual reality · Unity 3D

1 Introduction

The COVID-19 pandemic closed universities all across the world; as a result, the teaching process changed dramatically, and with the rise of e-learning, teaching was undertaken remotely on digital platforms. With the sudden shift away from classrooms and laboratories, teachers had to develop new, attractive ways of conveying information to students. This paper aims to design and implement a VR-learning application to improve the quality of the teaching process of the Tolerances and Dimensional Control and 3D Modelling courses for engineering students, covering the following lines of study: design, robotics, industrial and mechanical engineering. The developed application provides information on the indication and interpretation of orientation tolerances according to the latest ISO GPS standards, using interactive 3D models designed to highlight the tolerance zones. The application will be developed in three languages: English, German and Romanian. The awareness that plus/minus tolerancing is not sufficient to uniquely define part geometry arose in industry in the middle of the 20th century. Finally, in 1969, the recommendation ISO/R 1101-1: Tolerances of form and of position – Part 1: Generalities, symbols, indications on drawings was published. It was replaced in 1983 by the first edition of the standard ISO 1101 Technical drawings – Geometrical tolerancing – Tolerancing of form, orientation, location and run-out – Generalities, definitions, symbols, indications on drawings. Similar needs in the USA led to the publication of the standard USASI Y14.5-1966 Geometric dimensioning and tolerancing, which was preceded by three editions of the military standard MIL-STD-8. Currently, the 4th edition of the International Standard ISO 1101:2017 and the 6th edition of the American Standard ASME Y14.5-2018 are valid [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 531–538, 2022. https://doi.org/10.1007/978-3-030-90421-0_45

Fig. 1. Skin model

Manufactured parts always differ from the original CAD model due to the imperfections and variations of the manufacturing process. The shape of the real workpiece surface (the skin model) conforms to the nominal surface only approximately in the geometrical sense. For example, the surface of a cylindrical shaft may have the shape of a cone or a barrel, and a section in the plane perpendicular to the axis can look like an ellipse. These differences from the nominal shape are called form deviations (see Fig. 1). The aims of the developed VR-learning application are that the students will be able to:

• understand what a geometrical feature is,
• understand what a skin model is,
• be aware of the relationships between geometrical features (ideal and non-ideal),
• have a general knowledge of geometrical tolerances,
• interpret the ISO orientation geometrical tolerances,
• recognize the symbols of geometrical tolerances,
• know the rules for the indication of geometrical tolerances.
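The form deviations discussed above (e.g. a shaft section that measures as an ellipse rather than a circle) can be quantified from inspection points. The sketch below is a minimal illustration with invented data, assuming a least-squares circle (Kasa fit) as the reference element; it is not part of the presented VR application.

```python
import numpy as np

def circularity_deviation(xy):
    """Least-squares circle fit (Kasa method); returns (form deviation, fitted radius)."""
    x, y = xy[:, 0], xy[:, 1]
    # Circle (x-cx)^2 + (y-cy)^2 = r^2 rewritten as x^2 + y^2 = 2cx*x + 2cy*y + c,
    # with c = r^2 - cx^2 - cy^2, which is linear in (cx, cy, c).
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    radii = np.hypot(x - cx, y - cy)
    return radii.max() - radii.min(), r      # out-of-roundness, fitted radius

# Hypothetical measured section of a nominally cylindrical shaft:
# slightly elliptical profile (form deviation) plus probe noise.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
rng = np.random.default_rng(3)
rho = 10.0 + 0.02 * np.cos(2 * t) + rng.normal(0, 0.001, t.size)
section = np.column_stack([rho * np.cos(t), rho * np.sin(t)])

dev, r_fit = circularity_deviation(section)
print(f"fitted radius = {r_fit:.3f}, out-of-roundness = {dev:.3f}")
```

On a perfect circle the deviation is zero; for the elliptical profile above it is roughly twice the ellipse amplitude.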

2 Literature Review

Learning in engineering, design, tolerancing and geometrical product specification has been shown to rely heavily on students' ability to visualize and manipulate spatial information and to understand both two-dimensional (2D) and three-dimensional (3D) visual representations [2]. When studying GD&T, students need to be able to imagine mechanical parts in different orientations, manipulate 3D models, and mentally reconstruct drawings from 2D to 3D and vice versa, on paper or in a computer-aided design environment. Virtual reality technologies have started to be adopted in almost every domain. The main advantage of this technology is its ease of use, allowing users to interact with virtual elements using specific controllers or, in some cases, natural gestures tracked in real time by various sensors and cameras. Virtual reality applications are designed to simulate real parts within a virtual environment. Specific VR applications have been used in a wide variety of areas, such as medicine (cataract surgery training [3]), automotive mechanics [4], factory planning [4], and aviation inspection and maintenance [4]. The real parts are transferred to the virtual reality environment either directly, using existing 3D CAD models of specific parts [4], or by using various 3D scanning techniques and equipment capable of capturing both the geometry (at a 1:1 scale) and the visual appearance (texture) of real-life parts and components [5]. Based on a systematic analysis of knowledge representation methods in product design and tolerancing, the authors found that VR can simulate scenarios of manufacturing, assembly and exploitation, which helps the designer acquire important information from different stages of the product lifecycle and apply it to design verification and testing.
VR has proved its value for maintainability design in many industrial fields; therefore, the authors decided to use this method for teaching engineering students geometrical tolerances and design [7]. Geometric Dimensioning and Tolerancing (GD&T) is used to establish and communicate engineering tolerances within manufacturing. It is a language of symbols that communicates the permissible variation. Understanding the concepts of geometry and their 3D design space is still considered a difficult subject area for some students. Therefore, a learning innovation is required to overcome the problems faced in understanding the tolerance indication in 2D and 3D, as well as the tolerance zone and the condition of conformity. The developed application presents all these explanations in two languages (Fig. 2, Romanian; Fig. 3, German) in a VR environment.


Fig. 2. Parallelism indication and tolerance zone representation [6]

Fig. 3. Parallelism indication and tolerance zone representation [6]

Depending on the characteristic, the tolerance zone is one of the following, according to EN ISO 1101:2017:
• the space within a circle,
• the space between two concentric circles,
• the space between two parallel circles on a conical surface,
• the space between two parallel circles of the same diameter,
• the space between two parallel straight lines,
• the space between two non-equidistant complex lines,
• the space within a cylinder,
• the space between two coaxial cylinders,
• the space within a cone,
• the space within a single complex surface,
• the space between two parallel planes (Figs. 2 and 3),
• the space within a sphere,
• the space between two non-equidistant complex surfaces.
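As a concrete instance of one of these zones, a parallelism requirement defines a zone between two parallel planes oriented by the datum. The sketch below is a hypothetical numerical check (invented point set, tolerance value and datum; not part of the presented VR application) of measured surface points against such a zone.

```python
import numpy as np

def parallelism_deviation(points, datum_normal):
    """Width of the narrowest zone between two planes parallel to the
    datum plane that contains all measured points of the toleranced surface."""
    n = np.asarray(datum_normal, dtype=float)
    n /= np.linalg.norm(n)                   # unit normal of the datum plane
    d = np.asarray(points, dtype=float) @ n  # signed distances along that normal
    return d.max() - d.min()                 # zone width = spread of distances

# Hypothetical measured surface: nominally parallel to the XY datum plane,
# with a slight tilt and some probe noise.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 50, (2, 200))
z = 10.0 + 0.0004 * x + rng.normal(0, 0.002, 200)
pts = np.column_stack([x, y, z])

dev = parallelism_deviation(pts, datum_normal=[0, 0, 1])
print(f"parallelism deviation: {dev:.4f} mm, within 0.05 tolerance: {dev <= 0.05}")
```

The surface is conforming when the computed zone width does not exceed the tolerance value stated in the tolerance frame.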

3 Results

For the virtual reality application aimed at helping students better understand orientation tolerances, existing 3D CAD models of real machine parts and components have been transferred to the virtual reality environment. Three of the case study parts are presented in Fig. 4.

Fig. 4. Case study parts

The virtual reality application has been developed in Unity 2021.1.16; the 3D CAD models have been exported from SolidWorks 2021 as STEP files and imported into 3ds Max 2021, where each model was exported as an OBJ file to facilitate its integration into Unity. The 3D models within the virtual reality environment have custom mesh colliders matched to their individual shapes to facilitate manipulation. Currently the application integrates three examples of the four orientation tolerances. Each orientation tolerance has two interactivity elements. The first element is based on the 3D CAD model of a real part with the associated tolerance annotation attached. The tolerance annotations have been defined using 3D elements, so the user can rotate the part within the virtual environment and the tooltip annotation remains active. The parts are positioned on individual tables, and behind each table a wallpaper presents the corresponding orientation tolerance annotation in both 2D and 3D,


and there is also a 3D representation of the definition of the tolerance zone. A screen capture from Unity 3D highlighting the visual appearance of the application is presented in Fig. 5; Fig. 6 shows the manipulation of the 3D parts.

Fig. 5. Screen capture from Unity 3D

Fig. 6. Manipulating the 3D parts within the orientation tolerance learning VR application

The second element of interaction is focused on the dynamic visualization of the definition of the tolerance zone. The user can walk around the visual representation and scale it using the VR controllers, thereby gaining a better understanding of this aspect (Fig. 7).


Fig. 7. Tolerance zone in the VR application

Currently, the VR application developed to facilitate the learning of orientation tolerances supports two languages (German and Romanian). The authors intend to extend its functionality by adding more case study parts for each orientation tolerance and by creating an English version.

4 Conclusions

The authors developed a training application that offers students a 360-degree view of different mechanical parts presenting 3D geometrical tolerances, 3D models of the tolerance zones, the condition of conformity, as well as the datum features. A screen capture video of the VR application can be accessed at the following link: https://youtu.be/2I28Ad53t7w. The virtual reality training application represents a step forward that enables students to better understand orientation tolerances by interacting with 3D parts that have various tolerances attached. The most important aspect is the possibility to visualize the tolerance zone. Virtual reality environments can be used to improve the teaching-learning process, and a wide variety of 3D parts can be added to the application with ease. Within our proposed orientation tolerance learning VR application, we used 3D models of existing components, so that the virtual environment experience is closely linked to real life. From our experiments with the students, we concluded that most users enjoyed manipulating and analysing the case study parts and easily adapted to the Oculus Quest 2 virtual reality application. The main advantage of the proposed virtual reality learning system is that it runs on a light and compact standalone Android-based virtual reality headset. The proposed virtual reality application can easily be integrated into practical laboratory classes, allowing students to better understand various aspects of tolerances.


Further features will be added to the presented application:
• representation and explanation of all the geometrical tolerances according to ISO 1101, with explanations in three languages (English, German and Romanian),
• explanations regarding the 3D measurement of the presented geometrical tolerances using PowerInspect from Autodesk,
• indications regarding the use of GD&T in the design of different machine parts.

References
1. Humienny, Z.: New ISO geometrical product specification standards as a response to industry 4.0 needs. In: Wang, L., Majstorovic, V.D., Mourtzis, D., Carpanzano, E., Moroni, G., Galantucci, L.M. (eds.) Proceedings of 5th International Conference on the Industry 4.0 Model for Advanced Manufacturing. LNME, pp. 306–312. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-46212-3_23
2. Carbonell-Carrera, C., Jaeger, A.J., Saorín, J.L., Melián, D., de la Torre-Cantero, J.: Minecraft as a block building approach for developing spatial skills. Entertainment Comput. 38, 100427 (2021)
3. Thomsen, A.S.S., et al.: Operating room performance improves after proficiency-based virtual reality cataract surgery training. Ophthalmology 124(4), 524–531 (2017). https://doi.org/10.1016/j.ophtha.2016.11.015
4. Quevedo, W.X., et al.: Virtual reality system for training in automotive mechanics. In: De Paolis, L.T., Bourdot, P., Mongelli, A. (eds.) AVR 2017. LNCS, vol. 10324, pp. 185–198. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60922-5_14
5. Comes, R., Neamțu, C., Grajdeanu, A., Bodi, S.: Virtual reality training system based on 3D scanned automotive parts. Acta Technica Napocensis – Ser. Appl. Math. Mech. Eng. 64(1), 55–62 (2021)
6. Pop, G.M., Crișan, L.A., Tripa, M.: Toleranzen und Passungen. U.T. PRESS (2019). ISBN 978-606-737-398-1. http://www.utcluj.ro/editura/
7. Guo, Z., et al.: A hybrid method for evaluation of maintainability towards a design process using virtual reality. Comput. Ind. Eng. 140, 106227 (2020)

Simulation and Modelling

Comparing the Times Between Successive Failures by Using the Weibull Distribution with Time-Varying Scale Parameter

Nuri Çelik¹ and Aziz Kemal Konyalıoğlu²

¹ Department of Mathematics, Gebze Technical University, 41400 Kocaeli, Turkey
[email protected]
² Department of Management Engineering, Istanbul Technical University, Istanbul, Turkey
[email protected]

Abstract. In this study, we propose a method for comparing three or more means of the time between failures (TBF) for different systems or processes. Traditionally, one-way ANOVA is used to compare the means of three or more groups; however, this method is based on the assumption of normality. For this reason, we propose a one-way ANOVA model whose error distribution is the Weibull distribution with time-varying scale parameter, since in reliability analysis the TBFs are generally not independently and identically distributed, and the Weibull distribution with time-varying scale parameter has been proposed for analysing TBFs. We obtain estimates of the unknown parameters with the maximum likelihood methodology. We also propose a new test statistic for testing the difference in mean times between failures. A Monte Carlo simulation study compares the power of the proposed test statistic with the traditional ones. Simulation results show that the proposed methodologies are preferable. Keywords: Failure process · Weibull distribution · Time-varying parameter · One-way ANOVA

1 Introduction

The Weibull distribution has been widely used in various fields of applied statistics, such as reliability, engineering, finance and geography, for many years. It is commonly used for modelling reliability data and has diverse application areas, including vacuum tubes, capacitors, ball bearings, relays and material strengths. The Weibull distribution can model skewed (right or left) or symmetric data, which makes it flexible, and it can model a decreasing, increasing or constant hazard function, which allows it to portray any kind of lifetime. Let T be a random variable with a Weibull distribution; then the probability density function (pdf) of T is

$$ f(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1} e^{-(t/\eta)^{\beta}}, \qquad \beta > 0,\; \eta > 0,\; t > 0. \tag{1} $$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 541–549, 2022. https://doi.org/10.1007/978-3-030-90421-0_46


The shape parameter β represents the aging characteristics, and the scale parameter η is the characteristic life, proportional to various life measures (Rinne 2008). The cumulative distribution function (cdf) and the reliability function of the Weibull distribution can be written as

$$ F(t) = 1 - e^{-(t/\eta)^{\beta}} \tag{2} $$

and

$$ R(t) = e^{-(t/\eta)^{\beta}}, \tag{3} $$

respectively. The mean time to failure (MTTF) describes the average lifetime and is given by

$$ \mathrm{MTTF} = \eta\,\Gamma\!\left(1 + \frac{1}{\beta}\right), \tag{4} $$

and the dispersion of the lifetime is given by

$$ \sigma^2 = \eta^2\left[\Gamma\!\left(1 + \frac{2}{\beta}\right) - \Gamma^2\!\left(1 + \frac{1}{\beta}\right)\right]. \tag{5} $$

Modified versions of the Weibull distribution have been widely used for modelling various types of reliability data, such as the reflected Weibull distribution introduced by Cohen (1973), the discrete Weibull distribution introduced by Nakagawa and Osaki (1975), the complementary Weibull distribution (Drapella 1993), the reciprocal Weibull distribution (Mudholkar and Kollia 1994), the reverse Weibull distribution (Murthy et al. 2004a, b) and the exponentiated Weibull distribution introduced by Mudholkar and Srivastava (1993). Additionally, there are plenty of works using the Weibull distribution in reliability analysis; see, for example, Crow (1982), Smith (1991), Xie and Lai (1996), Murthy et al. (2004a, b), Nelson (1985) and Baraldi et al. (2020). Detailed information on the characteristics of the Weibull distribution can be found in the book by Rinne (2008).

When modelling failure time data, there are two main choices: one-time-use and multiple-time-use (repairable) systems. Repairable system models allow for reliability growth or decay of a system. These systems have two basic random variables: the time to the ith failure, Tᵢ, and the time between the (i − 1)th and ith failures, Xᵢ = Tᵢ − Tᵢ₋₁. A lifetime distribution model is usually inappropriate for modelling the TBF of a repairable system, since the TBFs are generally not independent and identically distributed (Jiang 2015). For this reason, the Weibull distribution with time-varying scale parameter

$$ \eta_i = a\left(1 - e^{-i/b}\right), \qquad i = 1, 2, 3, \ldots \tag{6} $$

is proposed for the Xᵢ's.


In this paper, we propose a one-way ANOVA model whose error distribution is the Weibull distribution with time-varying scale parameter, since traditional ANOVA, which is based on the normal distribution, is not suitable for comparing the mean TBFs of three or more systems. To the best of our knowledge, there is no previous work assuming the Weibull distribution with time-varying scale parameter as an error distribution in the context of ANOVA. Therefore, this paper can be very useful for scientists who wish to compare TBFs for different systems or processes. The rest of the paper is organized as follows. In Sect. 2, the one-way ANOVA model with Weibull-distributed, time-varying scale parameter error terms is studied; the maximum likelihood (ML) estimators are derived and a test statistic based on them is proposed. In Sect. 3, a Monte Carlo simulation study compares the performance of the proposed estimators and test statistic with the traditional ones. A real-life example is analyzed in Sect. 4 to present the application of the proposed model.

2 One-Way ANOVA with Time-Varying Parameter Weibull Distribution

Consider the one-way ANOVA model

$$ y_{ij} = \mu + \alpha_i + \varepsilon_{ij}, \qquad i = 1, 2, \ldots, a;\; j = 1, 2, \ldots, n, \tag{7} $$

where $y_{ij}$ is the response of the $j$th observation in the $i$th treatment, $\mu$ is the overall mean, $\alpha_i$ is the effect of the $i$th treatment, and the $\varepsilon_{ij}$ are independently and identically distributed (iid) error terms. It is generally assumed that the distribution of the responses in each group is normal with mean zero and constant variance $\sigma^2$. The unknown model parameters are estimated by the ML methodology, and it is well known that under normality the ML estimators satisfy all the desirable statistical properties. However, non-normal distributions are more prevalent than the normal distribution, especially in lifetime models; studies dealing with non-normal error terms in one-way ANOVA models include Senoglu and Tiku (2001) and Celik et al. (2015). As mentioned before, a lifetime distribution model is usually not appropriate for modelling TBFs, so there is no need to assume that the error terms follow a normal or any other lifetime distribution when comparing TBFs. For this reason, we propose a one-way ANOVA model with the time-varying parameter Weibull distribution. Consider model (7) and assume that the distribution of $\varepsilon_{ij}$ ($i = 1, 2, \ldots, a$; $j = 1, 2, \ldots, n$) is the Weibull distribution with time-varying scale parameter, $W(\beta, \eta_{ij})$:

$$ f(\varepsilon_{ij}) = \frac{\beta}{\eta_{ij}} \left(\frac{\varepsilon_{ij}}{\eta_{ij}}\right)^{\beta-1} e^{-(\varepsilon_{ij}/\eta_{ij})^{\beta}}. \tag{8} $$

To obtain the ML estimators of the unknown parameters in model (8), the log-likelihood function

$$ \ln L = N \ln \beta - \beta \sum_{i=1}^{a} \sum_{j=1}^{n} \ln \eta_{ij} + (\beta - 1) \sum_{i=1}^{a} \sum_{j=1}^{n} \ln z_{ij} - \sum_{i=1}^{a} \sum_{j=1}^{n} \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta} \tag{9} $$

is maximized with respect to the unknown parameters $\mu$, $\alpha_i$, $\beta$, $a_i$ and $b_i$; here $z_{ij} = y_{ij} - \mu - \alpha_i$. Differentiating the log-likelihood with respect to the unknown parameters and equating the derivatives to zero gives the likelihood equations

$$ \frac{\partial \ln L}{\partial \mu} = -(\beta - 1) \sum_{i=1}^{a} \sum_{j=1}^{n} \frac{1}{z_{ij}} + \beta \sum_{i=1}^{a} \sum_{j=1}^{n} \frac{1}{\eta_{ij}} \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta-1} = 0 $$

$$ \frac{\partial \ln L}{\partial \alpha_i} = -(\beta - 1) \sum_{j=1}^{n} \frac{1}{z_{ij}} + \beta \sum_{j=1}^{n} \frac{1}{\eta_{ij}} \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta-1} = 0 $$

$$ \frac{\partial \ln L}{\partial \beta} = \frac{N}{\beta} + \sum_{i=1}^{a} \sum_{j=1}^{n} \ln\frac{z_{ij}}{\eta_{ij}} - \sum_{i=1}^{a} \sum_{j=1}^{n} \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta} \ln\frac{z_{ij}}{\eta_{ij}} = 0 $$

$$ \frac{\partial \ln L}{\partial a_i} = \frac{\beta}{a_i} \sum_{j=1}^{n} \left[\left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta} - 1\right] = 0 $$

$$ \frac{\partial \ln L}{\partial b_i} = \sum_{j=1}^{n} \frac{\beta\, a_i\, j\, e^{-j/b_i}}{b_i^2\, \eta_{ij}} \left[1 - \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta}\right] = 0 \tag{10} $$

Solutions of these equations are the ML estimators. The equations have no explicit solutions; therefore, we resort to iterative methods such as Newton–Raphson, modified maximum likelihood or iteratively reweighting algorithms. To apply them, the likelihood equations in (10) can be rearranged as

$$ \hat{\mu} = \hat{\mu}_{..} + \frac{1-\beta}{\beta}\, \bar{g}_{..}, \qquad \hat{\alpha}_i = \hat{\mu}_{i.} - \hat{\mu}_{..} - \frac{1-\beta}{\beta} \left(\bar{g}_{i.} - \bar{g}_{..}\right), \qquad \hat{\beta} = \frac{an}{\sum_{i=1}^{a} \sum_{j=1}^{n} t_{ij}}, \tag{11} $$

where

$$ \hat{\mu}_{..} = \frac{\sum_{i}\sum_{j} w_{ij} y_{ij}}{\sum_{i}\sum_{j} w_{ij}}, \quad \hat{\mu}_{i.} = \frac{\sum_{j} w_{ij} y_{ij}}{\sum_{j} w_{ij}}, \quad \bar{g}_{..} = \frac{\sum_{i}\sum_{j} g_{ij}}{\sum_{i}\sum_{j} w_{ij}}, \quad \bar{g}_{i.} = \frac{\sum_{j} g_{ij}}{\sum_{j} w_{ij}}, $$

$$ g_{ij} = \frac{1}{z_{ij}}, \qquad w_{ij} = \frac{1}{\eta_{ij}^2} \left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta-2}, \qquad t_{ij} = \left[\left(\frac{z_{ij}}{\eta_{ij}}\right)^{\beta} - 1\right] \ln\frac{z_{ij}}{\eta_{ij}}. \tag{12} $$

After performing the iterations for $\mu$, $\alpha_i$ and $\beta$, the likelihood equations for $a_i$ and $b_i$ can be solved. In one-way ANOVA, our aim is to compare the equality of treatment effects, in other words, to test the null hypothesis

$$ H_0: \alpha_i = 0, \qquad i = 1, 2, \ldots, a. \tag{13} $$

For testing this hypothesis, the traditional test statistic is

$$ F = \frac{n \sum_{i=1}^{a} \left(\bar{y}_{i.} - \bar{y}_{..}\right)^2 / (a-1)}{\sum_{i=1}^{a} \sum_{j=1}^{n} \left(y_{ij} - \bar{y}_{i.}\right)^2 / (N-a)}. \tag{14} $$

This test statistic is based on the least squares (LS) estimators. ANOVA models are reasonably robust against certain types of departures from the model, and the Type-I error of the $F$ statistic is not much different from that under a normal distribution; see Senoglu and Tiku (2001), Kutner et al. (2005) and Celik et al. (2015). However, our model with time-varying scale parameters breaks the assumption of equal variances among the groups (heteroskedasticity), which causes problems for the $F$ statistic. Under heteroskedasticity, Welch's $F$ test is the most commonly used method; its main idea is to use weights $w_i$ to decrease the effect of the unequal variances, with

$$ w_i = \frac{n_i}{S_i^2}, \tag{15} $$

where $n_i$ is the sample size of the $i$th group and $S_i^2$ is the observed variance of that group. In this paper, we use the variance of the Weibull distribution with time-varying scale parameter $\eta_i$; see Celik (2020) for further details of the modified Welch test.
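As an illustration of how the likelihood (9) can be maximized numerically, the following sketch (not the authors' Matlab code; simulated data, with scipy's general-purpose optimizer assumed) fits (β, a, b) to a single simulated failure sequence whose scale follows η_i = a(1 − e^(−i/b)).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
true_beta, true_a, true_b = 2.0, 10.0, 4.0
n = 300
i = np.arange(1, n + 1)
eta_true = true_a * (1.0 - np.exp(-i / true_b))     # Eq. (6)
x = eta_true * rng.weibull(true_beta, size=n)       # simulated TBFs

def neg_loglik(theta):
    beta, a, b = theta
    eta = a * (1.0 - np.exp(-i / b))
    z = x / eta
    # negative of Eq. (9) for a single failure sequence
    return -np.sum(np.log(beta) - np.log(eta) + (beta - 1) * np.log(z) - z ** beta)

res = minimize(neg_loglik, x0=[1.0, 5.0, 2.0], method="L-BFGS-B",
               bounds=[(0.1, 10), (0.1, 100), (0.1, 100)])
beta_hat, a_hat, b_hat = res.x
print(res.success, beta_hat, a_hat, b_hat)
```

With a few hundred observations, the recovered (β, a, b) are usually close to the generating values; in the full model, the same maximization runs over all treatments with z_ij = y_ij − μ − α_i.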

3 Simulation Study

In this section we compare the powers of the test statistics for testing (13). We use the traditional F statistic based on normality and constant variances, Welch's F statistic (FWelch) used for unequal variances, FWbl obtained by using the parameters of the Weibull distribution, and the proposed F* based on the Welch method but assuming the Weibull distribution with time-varying scale parameter. For the sake of brevity, we take 3 treatments in the one-way ANOVA. We then simulate the error terms from the Weibull distribution with time-varying scale parameter for different values of aᵢ, bᵢ and β. The simulation study is run [|100,000/n|] times via Matlab programming, where [|.|] denotes rounding to the nearest integer. We choose the following setting in our simulation: μᵢ (= μ + αᵢ) = 0, (i = 1, 2, …, a). Table 1 shows the results. In Table 1, we use β = 0.5, 1.0 and 2.0 and choose different values of aᵢ and bᵢ. Power values are similar for other aᵢ and bᵢ values; therefore, we did not reproduce them for the sake of brevity. As can be noticed from Table 1, the power values of the test statistic based on the Weibull distribution with time-varying scale parameter are higher than those of the corresponding traditional test statistics for all β, aᵢ and bᵢ values. Additionally, the power advantage is much larger when the aᵢ and bᵢ values are not close to each other.
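The Welch baseline used in the comparison can be sketched with the standard Welch one-way ANOVA formulas (the textbook version with the weights of Eq. (15), not the paper's modified statistic based on the time-varying Weibull variance); the data below are invented.

```python
import numpy as np
from scipy.stats import f as f_dist

def welch_anova(groups):
    """Welch's F for k groups with unequal variances (weights w_i = n_i / s_i^2)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])
    w = n / s2                                    # Eq. (15)
    mw = np.sum(w * m) / np.sum(w)                # weighted grand mean
    num = np.sum(w * (m - mw) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) * h / (k ** 2 - 1)
    F = num / den
    df2 = (k ** 2 - 1) / (3 * h)                  # Welch's approximate denominator df
    p = f_dist.sf(F, k - 1, df2)
    return F, p

rng = np.random.default_rng(1)
g1 = rng.weibull(2.0, 30) * 5
g2 = rng.weibull(2.0, 30) * 5
g3 = rng.weibull(2.0, 30) * 5 + 4.0               # shifted mean for the third group
F, p = welch_anova([g1, g2, g3])
print(F, p)
```

The shifted third group produces a large F and a small p-value, while identical groups give F = 0.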

Table 1. Values of the power for F, FWelch, FWbl and F*; α = 0.050

Setting: a1 = 1.0, a2 = 1.5, a3 = 2.0; b1 = 0.9, b2 = 1.0, b3 = 1.1

        β = 0.5                           β = 1.0                           β = 2.0
d       F      FWelch FWbl   F*           F      FWelch FWbl   F*           F      FWelch FWbl   F*
0.0     0.039  0.041  0.044  0.051        0.035  0.043  0.048  0.052        0.031  0.041  0.044  0.053
0.1     0.06   0.08   0.08   0.09         0.07   0.08   0.08   0.09         0.07   0.09   0.10   0.10
0.2     0.16   0.21   0.21   0.25         0.15   0.20   0.18   0.21         0.18   0.18   0.16   0.29
0.3     0.27   0.33   0.34   0.39         0.25   0.35   0.35   0.38         0.25   0.29   0.32   0.40
0.4     0.40   0.45   0.47   0.51         0.40   0.49   0.49   0.61         0.39   0.47   0.49   0.63
0.5     0.61   0.66   0.67   0.70         0.59   0.68   0.70   0.78         0.58   0.68   0.68   0.72
0.6     0.75   0.79   0.81   0.85         0.74   0.80   0.82   0.92         0.70   0.82   0.80   0.84
0.7     0.89   0.90   0.91   0.95         0.87   0.91   0.92   0.99         0.81   0.90   0.92   0.95

Setting: a1 = 1.5, a2 = 2.5, a3 = 3.0; b1 = 0.9, b2 = 1.5, b3 = 1.8

        β = 0.5                           β = 1.0                           β = 2.0
d       F      FWelch FWbl   F*           F      FWelch FWbl   F*           F      FWelch FWbl   F*
0.0     0.036  0.043  0.043  0.053        0.033  0.045  0.048  0.053        0.031  0.042  0.047  0.055
0.1     0.06   0.09   0.08   0.10         0.06   0.08   0.07   0.10         0.07   0.09   0.10   0.11
0.2     0.15   0.21   0.20   0.26         0.14   0.22   0.18   0.21         0.17   0.19   0.18   0.28
0.3     0.27   0.35   0.34   0.39         0.25   0.36   0.36   0.39         0.25   0.29   0.33   0.43
0.4     0.40   0.46   0.46   0.52         0.39   0.49   0.48   0.61         0.39   0.46   0.51   0.64
0.5     0.62   0.69   0.66   0.71         0.59   0.69   0.70   0.79         0.58   0.68   0.67   0.72
0.6     0.74   0.81   0.82   0.85         0.73   0.81   0.83   0.92         0.69   0.81   0.81   0.85
0.7     0.87   0.91   0.92   0.95         0.85   0.92   0.92   0.99         0.82   0.89   0.92   0.97

Setting: a1 = 1.5, a2 = 4.0, a3 = 7.5; b1 = 1.0, b2 = 2.0, b3 = 2.5

        β = 0.5                           β = 1.0                           β = 2.0
d       F      FWelch FWbl   F*           F      FWelch FWbl   F*           F      FWelch FWbl   F*
0.0     0.029  0.044  0.046  0.054        0.030  0.044  0.046  0.056        0.026  0.039  0.041  0.057
0.1     0.05   0.09   0.08   0.11         0.06   0.08   0.09   0.12         0.05   0.09   0.10   0.11
0.2     0.14   0.21   0.20   0.27         0.16   0.19   0.18   0.23         0.13   0.17   0.16   0.32
0.3     0.29   0.35   0.35   0.42         0.24   0.36   0.36   0.41         0.21   0.26   0.31   0.48
0.4     0.38   0.49   0.48   0.55         0.41   0.49   0.48   0.65         0.36   0.45   0.47   0.69
0.5     0.60   0.68   0.67   0.78         0.55   0.67   0.69   0.79         0.49   0.63   0.65   0.80
0.6     0.72   0.81   0.82   0.91         0.71   0.81   0.80   0.93         0.67   0.79   0.79   0.92
0.7     0.83   0.91   0.90   0.98         0.81   0.92   0.91   0.99         0.77   0.88   0.87   0.99

4 Application

In this section, we consider the data given by Prentice and Breslow (1978) and studied by Kalbfleisch and Prentice (1980). The data concern cancer patients, as collected by the Veterans Administration Lung Cancer Study Group. The patients received a certain type of therapy for four broad tumour groups (1, squamous; 2, small; 3, adeno; 4, large). The data are multidimensional, but we use only one dimension, the months from diagnosis. Table 2 shows the data.


Table 2. Veterans Administration lung cancer data (months from diagnosis), by tumour broad group

Squamous: 7 5 3 9 11 5 10 29 18 6 4 58 1 9 11
Small: 3 9 2 4 4 3 5 14 2 3 4 12 4 12 2 15 2 5 3 2 25 4 1 28 8 1 7 11 4 23
Adeno: 19 10 6 2 5 4 5 3 4
Large: 16 5 15 2 12 12 5 12 2 2 8 11 5 8 13
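For comparison, the classical tests in Table 4 can be approximately reproduced from the Table 2 data with scipy's generic implementations (these are the standard F and Kruskal–Wallis tests, not the proposed F*, so small numerical differences from the reported values are to be expected).

```python
from scipy.stats import f_oneway, kruskal

# Months from diagnosis, by tumour broad group (Table 2)
squamous = [7, 5, 3, 9, 11, 5, 10, 29, 18, 6, 4, 58, 1, 9, 11]
small    = [3, 9, 2, 4, 4, 3, 5, 14, 2, 3, 4, 12, 4, 12, 2,
            15, 2, 5, 3, 2, 25, 4, 1, 28, 8, 1, 7, 11, 4, 23]
adeno    = [19, 10, 6, 2, 5, 4, 5, 3, 4]
large    = [16, 5, 15, 2, 12, 12, 5, 12, 2, 2, 8, 11, 5, 8, 13]

F, p_f = f_oneway(squamous, small, adeno, large)     # classical one-way F
H, p_k = kruskal(squamous, small, adeno, large)      # Kruskal-Wallis chi-square
print(f"F = {F:.3f} (p = {p_f:.3f}), Kruskal-Wallis H = {H:.3f} (p = {p_k:.3f})")
```

Neither classical test rejects H0 at the 5% level, in line with the first rows of Table 4.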

To identify the distribution of the error terms, we use the Q-Q plot technique. The Q-Q plots for the normal and the Weibull distributions are shown in Fig. 1.

Fig. 1. QQ plots (normal and Weibull distribution)

The parameter estimates for the normal, the Weibull, and the Weibull with time-varying scale parameter models are given in Table 3. The estimates are obtained iteratively for both the Weibull and the Weibull with time-varying scale parameter.

Table 3. Parameter estimation

Normal distribution: μ = 8.61, α1 = 3.79, α2 = −1.21, α3 = −2.17, α4 = −0.41, σ = 8.79
Weibull distribution: μ = 8.68, α1 = 3.98, α2 = −1.61, α3 = −2.01, α4 = −0.36, β = 1.17, η = 9.16
Weibull with time-varying scale parameter: μ = 8.65, α1 = 3.85, α2 = −1.37, α3 = −1.85, α4 = −0.63, β = 1.39, σ = 9.16, a1 = 15.79, a2 = 9.36, a3 = 8.87, a4 = 10.25, b1 = 1.21, b2 = 1.13, b3 = 1.05, b4 = 1.19

The test statistics and p-values are given in Table 4. As can be noticed from Table 4, the F statistic, FWelch, the Kruskal–Wallis χ² and FWbl agree in not rejecting H0. On the other hand, the proposed test statistic (F*) based on the Weibull distribution with time-varying scale parameter rejects H0.

N. Çelik and A. K. Konyalıoğlu

Table 4. Test statistics and p-values

Model     Test statistic   p-value
F         1.438            0.240
FWelch    0.871            0.468
χ2        5.281            0.152
FWbl      3.752            0.061
F*        4.132            0.032
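The classical statistics in Table 4 can be reproduced with standard tools. The sketch below (our illustration) computes F and the Kruskal-Wallis χ2 with SciPy and implements Welch's F directly; the Weibull-based statistics FWbl and F* require the iterative ML fits described earlier and are not reproduced here.

```python
import numpy as np
from scipy import stats

# Times (months from diagnosis) for the four tumour groups, from Table 2.
squamous = [7, 5, 3, 9, 11, 5, 10, 29, 18, 6, 4, 58, 1, 9, 11]
small = [3, 9, 2, 4, 4, 3, 5, 14, 2, 3, 4, 12, 4, 12, 2, 15,
         2, 5, 3, 2, 25, 4, 1, 28, 8, 1, 7, 11, 4, 23]
adeno = [19, 10, 6, 2, 5, 4, 5, 3, 4]
large = [16, 5, 15, 2, 12, 12, 5, 12, 2, 2, 8, 11, 5, 8, 13]
groups = [squamous, small, adeno, large]

# Classical one-way ANOVA F and Kruskal-Wallis chi-square.
F, p_F = stats.f_oneway(*groups)
H, p_H = stats.kruskal(*groups)

def welch_anova(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA F statistic."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                           # precision weights
    mw = np.sum(w * m) / np.sum(w)      # weighted grand mean
    num = np.sum(w * (m - mw) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k**2 - 1) * lam
    df2 = (k**2 - 1) / (3 * lam)        # approximate denominator df
    return num / den, k - 1, df2

Fw, df1, df2 = welch_anova(groups)
p_Fw = stats.f.sf(Fw, df1, df2)
print(f"F = {F:.3f} (p = {p_F:.3f}), Welch F = {Fw:.3f} (p = {p_Fw:.3f}), "
      f"Kruskal-Wallis chi2 = {H:.3f} (p = {p_H:.3f})")
```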

5 Conclusions

In reliability analysis, the Weibull distribution is the most popular distribution for modelling lifetime or failure data. However, lifetime distributions are not suitable for modelling the times between successive failures (TBF) of a repairable system, since the TBFs are not independently and identically distributed. Therefore, in this article we propose a one-way ANOVA method for comparing the times between successive failures based on the Weibull distribution with time-varying scale parameter. The estimates of the unknown model parameters are obtained using ML methodology. The simulation study shows that the power values of the test statistic based on the Weibull distribution with time-varying scale parameter are higher than those of the corresponding test statistics based on traditional methods for all β, ai and bi values. This difference grows as β increases and when the ai and bi values are far from each other. Therefore, the proposed method is a good alternative for comparing mean times between failures for three or more groups.

References

Baraldi, P., Bani, I., Zio, E., McDonnell, D.: Industrial equipment reliability estimation: a Bayesian Weibull regression model with covariate selection. Reliab. Eng. Syst. Saf. 200, 106891 (2020)
Celik, N., Senoglu, B., Arslan, O.: Estimation and testing in one-way ANOVA when the errors are skew-normal. Revista Colombiana de Estadística 38(1), 75–91 (2015)
Celik, N.: Welch's ANOVA: heteroskedastic skew-t error terms. Commun. Stat. Theory Methods (2020)
Cohen, A.C.: The reflected Weibull distribution. Technometrics 15, 867–873 (1973)
Crow, L.H.: Confidence interval procedures for the Weibull process with applications to reliability growth. Technometrics 24, 67–72 (1982)
Drapella, A.: Complementary Weibull distribution: unknown or just forgotten. Qual. Reliab. Eng. Int. 9, 383–385 (1993)
Jiang, R.: Introduction to Quality and Reliability Engineering. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-47215-6
Kalbfleisch, J.D., Prentice, R.L.: The Statistical Analysis of Failure Time Data. Wiley, New York (1980)
Kutner, M.H., Nachtsheim, C.J., Neter, J., Li, W.: Applied Linear Statistical Models, 5th edn. McGraw-Hill Irwin, New York (2005)


Mudholkar, G.S., Kollia, G.D.: Generalized Weibull family: a structural analysis. Commun. Stat. Theory Methods 23, 1149–1171 (1994)
Mudholkar, G.S., Srivastava, D.K.: Exponentiated Weibull family for analyzing bathtub failure rate data. IEEE Trans. Reliab. 42, 299–302 (1993)
Murthy, P., Bulmer, M., Eccleston, J.A.: Weibull model selection for reliability modelling. Reliab. Eng. Syst. Saf. 86(3), 257–267 (2004)
Murthy, D.N.P., Xie, M., Jiang, R.: Weibull Models. Wiley, New York (2004)
Nakagawa, T., Osaki, S.: The discrete Weibull distribution. IEEE Trans. Reliab. 24, 300–301 (1975)
Nelson, W.: Weibull analysis of reliability data with few or no failures. J. Qual. Technol. 17, 140–146 (1985)
Prentice, R.L., Breslow, N.E.: Retrospective studies and failure time models. Biometrika 65(1), 153–158 (1978)
Rinne, H.: The Weibull Distribution Handbook. Taylor and Francis Group, New York (2008)
Senoglu, B., Tiku, M.L.: Analysis of variance in experimental design with nonnormal error distributions. Commun. Stat. Theory Methods 30(7), 1335–1352 (2001)
Smith, R.L.: Weibull regression models for reliability data. Reliab. Eng. Syst. Saf. 34(1), 55–76 (1991)
Xie, M., Lai, C.D.: Reliability analysis using an additive Weibull model with bathtub-shaped failure rate function. Reliab. Eng. Syst. Saf. 52(1), 87–93 (1996)

Modelling of a Microwave Rectangular Waveguide with a Dielectric Layer and Longitudinal Slots

I. J. Islamov, E. Z. Hunbataliyev, R. Sh. Abdullayev, N. M. Shukurov, and Kh. Kh. Hashimov

Department of Radioengineering and Telecommunication, Azerbaijan Technical University, Baku, Azerbaijan

Abstract. The characteristics of a microwave rectangular waveguide with a dielectric layer and longitudinal slots are investigated. It is shown that, by choosing the dielectric constant and the thickness of the dielectric layer, one can significantly reduce the transverse dimensions of a linear waveguide-slot grating without degrading its electrodynamic characteristics. It is also shown that a waveguide-slot grating based on a rectangular waveguide with a dielectric layer has a wider operating band than a waveguide-slot grating of the same electrical length based on a hollow rectangular waveguide.

Keywords: Rectangular waveguide · Dielectric constant · Directional diagram · Waveguide-slot grating

1 Introduction

Slotted waveguide arrays (SWA) are widely used in ground and airborne radar, radio relay and radio navigation systems [1–3]. When the main maximum of the directional pattern (DP) is oriented along the normal to the SWA plane, or when it deviates from the normal by a certain angle, there is a danger that interference maxima of higher orders will appear. To avoid these maxima, the distance between the slots is reduced in various ways. One of them is the use of waveguides of complex shape [1–3]; another is slowing the wave in the waveguide by introducing a dielectric insert into it. In both cases the critical wavelength of the fundamental mode increases, which makes it possible to reduce the transverse size of the waveguide and, as a consequence, the distance between the radiators in adjacent linear arrays. Slowing the wave in the waveguide also allows the emitters within one linear array to be brought closer together.

Symmetrical filling of the waveguide with a dielectric layer placed parallel to its narrow walls expands the operating frequency band of the waveguide, changes the structure of the field in it, and shifts the resonance frequencies and energy characteristics of the slots [4]. This can be used to control the SWA parameters. In a SWA with longitudinal slots cut in a checkerboard pattern in the wide wall of a waveguide with a dielectric layer, at sufficiently large wave slowdowns a situation arises when the regions of the location of adjacent slots in the longitudinal direction partially overlap. As is known [5–8], the accuracy of modelling the characteristics of a SWA depends significantly on accounting for the interaction between the slots in the inner and outer spaces. The aim of this work is to study the characteristics of a linear SWA taking into account the interaction of longitudinal slots in the case when the regions of the location of adjacent slots along the waveguide axis partially overlap.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 550–558, 2022. https://doi.org/10.1007/978-3-030-90421-0_47

2 Statement and Solution of the Problem

A system of N longitudinal radiating slots in a rectangular waveguide of cross section a × b with lossless dielectric filling is considered. The general view of the system is shown in Fig. 1. The filling is assumed to be three-layer, with layer widths a1, a2, a3 and permittivities ε1, ε2, ε3; the layers are located parallel to the narrow walls of the waveguide. The conductivity of the waveguide walls is assumed to be ideal. Narrow radiating longitudinal slits of length Lν and width dν (dν/Lν ≤ 0.1) are cut in a checkerboard pattern in the upper wide wall of the waveguide. The waveguide is excited in the section z = 0 by the fundamental LE10 wave of unit amplitude. The regions where adjacent slits are located along the z axis, in the interval zν+1 ≤ z ≤ zν + Lν, can intersect. It is required to determine the excitation amplitudes Vν of the slits of the given linear SWA.

The electric field vector in each of the slits was approximated by a set of vector coordinate functions e_q. The magnetic field vector H_i excited by each slit in the waveguide, both in the region of the slit itself (zν ≤ z ≤ zν + Lν) and beyond it (z < zν, z > zν + Lν), was determined by the method of expansion in eigenwaves [9]. In [10], this method was modified in order to obtain a solution more suitable for calculations. In the region of the slot location, the scattered field was represented as a series expansion in the LE and LM eigenwaves and the potential functions P of a waveguide with a three-layer dielectric filling [10]. Taking into account the wall thickness of the waveguide, the condition of continuity of the tangential components of the magnetic fields on the inner and outer surfaces of each slot was described by two equations. The field at each slit was approximated by one half-period of the sinusoidal function.

Fig. 1. General view of the slit system

For the N slots, the Galerkin method was used to obtain a system of linear algebraic equations (SLAE) of order 2N. The idea of obtaining such a SLAE was outlined in [11], where this method was called the method of induced magnetomotive forces (MDF).


In our case, the SLAE looks like this:

$$\sum_{\nu=1}^{N} V_\nu \left( Y^{i}_{\mu,\nu} + Y^{v}_{\mu,\nu}\,\delta_{\mu,\nu} \right) + V_{\mu'}\,Y^{v}_{\mu,\mu'} = F_\mu, \qquad \mu = 1,\dots,N, \quad (1)$$

$$\sum_{\nu'=1}^{N} V_{\nu'} \left( Y^{e}_{\mu',\nu'} + Y^{v}_{\mu',\nu'}\,\delta_{\mu',\nu'} \right) + V_{\mu}\,Y^{v}_{\mu',\mu} = 0, \qquad \mu' = 1,\dots,N. \quad (2)$$

Here the indices μ, ν number the inner surfaces of the slots, and the indices μ′, ν′ the outer surfaces; $Y^{i}_{\mu,\nu}$ are the internal self ($\mu = \nu$) and mutual ($\mu \ne \nu$) conductances of the longitudinal slots in the three-layer waveguide; $Y^{v}_{\mu,\nu}$ are the self and mutual conductances of the slits in the resonators formed by the slit cavities in the waveguide wall [12]; $Y^{e}_{\mu',\nu'}$ are the external self ($\mu' = \nu'$) and mutual ($\mu' \ne \nu'$) conductances of the slits [12]; $F_\mu$ is the magnetomotive force at the longitudinal slot. In the MDF method, the interaction between the slots inside the waveguide is taken into account through the mutual internal conductance

$$Y^{i}_{\mu,\nu} = \sum_{n=1}^{N}\sum_{r=0}^{R} Y^{iLE}_{\mu,\nu,rn} + \sum_{n=0}^{N}\sum_{r=1}^{R} Y^{iLM}_{\mu,\nu,rn} + \sum_{n=0}^{N}\sum_{r=0}^{R} Y^{iP}_{\mu,\nu,rn}. \quad (3)$$

Here r is the number of the root of the corresponding dispersion equation and n is the number of half-waves in the waveguide along the y axis. The functions $Y^{iLE}_{\mu,\nu,rn}$, $Y^{iLM}_{\mu,\nu,rn}$ and $Y^{iP}_{\mu,\nu,rn}$ are

$$Y^{iLE}_{\mu,\nu,rn} = \frac{\gamma^{LE}_{rn}\,\alpha^{LE}_{1,r}\,\alpha^{LE}_{3,r}\,I^{LE}(z)\,C^{LE}_{rn}}{N^{LE}_{rn}\left[(\pi/L_\nu)^2 - (\gamma^{LE}_{rn})^2\right]}\, f^{LE}_{\mu}\!\left(\alpha^{LE}_{1,r}\right) f^{LE}_{\nu}\!\left(\alpha^{LE}_{3,r}\right), \quad (4)$$

$$Y^{iLM}_{\mu,\nu,rn} = \frac{-(\omega k_n)^2\,\varepsilon_1\varepsilon_3\,I^{LM}(z)\,C^{LM}_{rn}}{N^{LM}_{rn}\left[(\pi/L_\nu)^2 - (\gamma^{LM}_{rn})^2\right]}\, f^{LM}_{\mu}\!\left(\alpha^{LM}_{1,r}\right) f^{LM}_{\nu}\!\left(\alpha^{LM}_{3,r}\right), \quad (5)$$

$$Y^{iP}_{\mu,\nu,rn} = \frac{-(1-\delta_{0n})\,F\,C^{P}}{i\omega\mu_0\,b\,P}\, f^{P}_{\mu}\!\left(\alpha^{P}_{1,r}\right) f^{P}_{\nu}\!\left(\alpha^{P}_{3,r}\right), \quad (6)$$

where

$$I^{T}(z) = 2i\gamma^{T}_{rn}F + \frac{(\pi/L_\mu)\left[B^{T} e^{-i\gamma^{T}_{rn} d_z} + 2i\gamma^{T}_{rn}\sin\!\big(\pi(L_\nu - d_z)/L_\mu\big)\right]}{L_\nu\left[(\pi/L_\mu)^2 - (\gamma^{T}_{rn})^2\right]}, \quad (7)$$

$$F = 0.5\left[(L_\nu - d_z)\cos(\pi d_z/L_\nu) + \frac{L_\nu}{\pi}\sin(\pi d_z/L_\nu)\right], \quad (8)$$

$$B^{T} = 1 + e^{-i\gamma^{T}_{rn} L_\nu} e^{2i\gamma^{T}_{rn} d} + e^{-i\gamma^{T}_{rn} L_\mu} + e^{-i\gamma^{T}_{rn}(L_\nu - L_\mu)}, \quad (9)$$

$$f^{T}_{\eta}\!\left(\alpha^{T}_{j,r}\right) = \cos\!\left(\alpha^{T}_{j,r} x_{0\eta}\right)\,\frac{\sin\!\left(0.5\,\alpha^{T}_{j,r} d_\eta\right)}{0.5\,\alpha^{T}_{j,r} d_\eta}. \quad (10)$$

Here $\gamma^{T}_{rn}$ are the longitudinal wavenumbers determined from the dispersion equations for the LE and LM waves and the potential functions (the index T takes the values LE, LM or P); $\alpha^{T}_{j,r} = \sqrt{k^2\varepsilon_j - (\gamma^{T}_{rn})^2 - k_n^2}$ is the transverse wavenumber along the x axis in the dielectric layer with number j = 1, 2, 3; $k_n = n\pi/b$; $k = 2\pi/\lambda$, where λ is the wavelength in free space; ω is the circular frequency; $\mu_0 = 4\pi\cdot10^{-7}$ H/m is the magnetic constant; $\delta_{0n}$ is the Kronecker symbol; $N^{LE}_{rn}$, $N^{LM}_{rn}$ are normalizing factors appearing in the orthogonality condition for the LE and LM waves; P is the norm of the potential function; $d_z = z_\nu - z_\mu$ is the distance between the beginnings of the slits along the z axis; η = μ, ν are the numbers of the interacting slits; $x_{0\eta}$ is the displacement of the slits along the x coordinate (Fig. 1); $C^{T}$ are factors that appear in the derivation of the dispersion equations, whose specific form can be found in [10].

It should be noted that for any arrangement of the slits along the z axis the expression for the mutual internal conductance $Y^{i}_{\mu,\nu}$ (μ ≠ ν) includes the terms $Y^{iLE}_{\mu,\nu,rn}$ and $Y^{iLM}_{\mu,\nu,rn}$ due to the interaction of two slits through the LE and LM waves. If the regions where the slits are located along the z axis overlap at least partially, the third term must also be taken into account in the expression for $Y^{i}_{\mu,\nu}$; it describes the interaction between adjacent slits through the potential field in the section $z_{\nu+1} \le z \le z_\nu + L_\nu$. In the case μ = ν the above expressions describe the internal self-conductances of longitudinal slots in a three-layer waveguide. If the regions of the location of the slits along the z axis do not overlap, it is better to solve the problem by the method of successive approximations [12], in which only the self-conductances and magnetomotive forces of the slits are used.
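Once the conductance matrices entering (1)-(3) have been computed, the 2N slot voltages follow from a single linear solve. The NumPy sketch below only illustrates the assumed block structure of the SLAE; the conductance values here are dummy random numbers, not the result of Eqs. (4)-(10).

```python
import numpy as np

# Schematic solve of the 2N x 2N SLAE (1)-(2). Yi (internal) and Ye
# (external) are N x N conductance matrices; Yv holds the cavity
# (wall-thickness) conductances coupling the two faces of each slot.
# All values below are DUMMY placeholders for illustration only.
rng = np.random.default_rng(0)
N = 4
Yi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Ye = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Yv = np.diag(rng.normal(size=N) + 1j * rng.normal(size=N))
F = np.ones(N, dtype=complex)   # magnetomotive forces

# Unknowns: inner-face voltages V and outer-face voltages V'.
# Eq. (1): (Yi + Yv) V + Yv V' = F;  Eq. (2): Yv V + (Ye + Yv) V' = 0.
A = np.block([[Yi + Yv, Yv],
              [Yv, Ye + Yv]])
b = np.concatenate([F, np.zeros(N, dtype=complex)])
sol = np.linalg.solve(A, b)
V_inner, V_outer = sol[:N], sol[N:]
```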

3 Research Results

On the basis of the obtained solution, a program was developed for calculating the characteristics of a nonresonant linear SWA. The studies were carried out for linear gratings with identical slots. Such arrays are of practical interest in view of their simplicity of manufacture, and also because, by switching the antenna excitation, the scanning sector can be doubled. In the calculations, the distribution of the electric field along the slots was approximated by one half-period of the sinusoidal function. It was assumed that the dielectric constant of the layers adjacent to the narrow walls of the waveguide is ε1 = ε3 = 1, and that the middle dielectric layer with ε > 1 is located symmetrically. In order to reduce the level of the far side lobes of the antenna pattern [4], the partial dielectric filling of the waveguide was chosen so that the slowdown of the fundamental LE10 wave in the frequency range corresponding to single-mode operation of the waveguide satisfies the condition λ/λg ≥ 1 (λg is the wavelength in the waveguide). In this case the regions of the location of adjacent slits, each approximately half a wavelength long, partially overlap along the z axis. It was important to find out how correctly the developed mathematical model describes real physical processes. For this purpose, an experimental verification of the calculated results was carried out.


A linear SWA based on a 20 × 40 mm waveguide was investigated. The dielectric constant of the layer was ε = 5 and its thickness a2 = 1.8 mm. With this filling, the deceleration of the fundamental wave varied from 1 to 1.25 in the frequency band 7.6 ≤ f ≤ 11.55 GHz. The distance dz between adjacent slots of the same length l = 16.5 mm was chosen equal to 0.5·λg0 = 10 mm. The wavelength λg0 = 20 mm in the waveguide with the indicated layer corresponds to the frequency f0 = 11.53 GHz, at which the slits are excited in phase. The frequency range in the computational and experimental studies was below 11.53 GHz, at which the main maximum of the DP is directed along the normal to the grating. Note that the frequency f0 is near the critical frequency fc2 = 11.55 GHz of the LE11 wave. The length of the slots was chosen so that, when the slots were displaced from the narrow walls of the waveguide by x0 = 6 mm, their resonant frequency fp = 9.1 GHz lay approximately in the middle of the considered frequency range.

The DPs in the H-plane of a linear array of 15 slots at frequencies of 8.0, 9.2 and 10.4 GHz are shown in Fig. 2. The frequency dependence of the deviation angle θm of the DP maximum from the normal to the axis of the linear grating is shown in Fig. 3, where negative angles θm correspond to rotation of the DP beam towards the generator (z = 0). From the dependences shown in Figs. 2 and 3 it is seen that the calculated DP and θm are in good agreement with the experimental data. The difference between the calculated and experimental DP in terms of the angle of rotation is within the grating alignment accuracy, which was approximately ±2°. The scattering parameters of this grating were also investigated: the voltage standing wave ratio (VSWR) at the input and the attenuation coefficient (power transmission coefficient |Q12|²) at the output.

The maximum VSWR values in the selected wavelength range were found to be 1.28 (experiment) and 1.22 (calculation). Comparison of the experimental and calculated values of the transmission coefficient presented in Fig. 4 confirms the good mutual correspondence of these frequency dependences. Thus, the results of the studies confirm the possibility of approximating the field distribution along identical longitudinal slots whose regions along the z axis partially overlap by a half-period of the sine function. This approximation provides reliable calculated characteristics of a SWA based on a waveguide with a dielectric layer in a band of ±13% relative to the resonant frequency fp of the slot. It may be expected that for other geometrical and electrical parameters of the system the developed model will also provide a correct description of the physical processes in SWA of the type under consideration.

Of practical interest is the study of the grating gain Gm. It simultaneously takes into account the character of the antenna pattern, its directivity Dm and the power emissivity W:

Gm = Dm W.    (11)

Here Dm is the directivity (coefficient of directional action) at the maximum of the DP of the SWA. Developers are, as a rule, interested in the maximum possible antenna gain at minimum antenna dimensions, as well as in uniformity of the gain over the scanning frequency range. This means that the antenna emissivity should tend to unity, while the directivity should remain high. These are conflicting requirements. On the one hand, the


Fig. 2. Directional patterns

Fig. 3. Frequency dependence of the angle of rotation of the main maximum of the pattern


Fig. 4. Frequency dependences of the transmission coefficients of linear SWA with a different number of slots

slots in the SWA should be sufficiently radiating to ensure the efficiency of the antenna radiation as a whole. On the other hand, the more the slots radiate, the more rapidly the distribution of the field amplitudes along the antenna decays, which reduces the effective antenna length and hence its directivity. A compromise is therefore needed. The behaviour of the power emissivity W, the directivity Dm and the gain Gm of a linear grating over the frequency range for different numbers of slots is illustrated by the data presented in Fig. 5a, b, c, respectively. It can be seen that for SWA with 7 and 11 slots, even at the resonant frequency fp = 9.1 GHz, the power emissivity does not reach W = 1, whereas for N = 15, 20, 30 the emissivity is already practically equal to 1. However, the directivity at this frequency decreases significantly. The degree of decrease in the directivity of a linear SWA can be estimated by comparing it with the directivity Dm0 of a continuous linear system of emitters of the same electrical length with a constant amplitude distribution. This is shown for the SWA with N = 30 in Fig. 5. In the vicinity of the resonant frequency the amplitude distribution of the grating becomes rapidly decaying, which leads to broadening of the main lobe of the antenna pattern, the disappearance of the zeros between the near side lobes and, as a consequence, a decrease in the directivity. This is confirmed, for example, by the DP of the SWA with N = 15 (f = 9.2 GHz) shown in Fig. 2. Moving away from the resonant frequency, the directivity increases, most significantly at N = 30. The shorter the antenna, the less the amplitude distribution, and accordingly the directivity, changes over the frequency range (Fig. 5b, N = 11, 7).

As a result, for N = 15–20 the gain behaves most uniformly in the considered frequency range, approaching in level the directivity of a radiating system with a constant amplitude distribution. Of greatest interest is the comparison of the characteristics of scanning SWA based on a hollow waveguide and on a waveguide with a slow-wave structure inside in the form of a dielectric layer. The character of the frequency dependence of Gm is determined, among other things, by the frequency at which single slits in the grating resonate. Figure 6a, b shows the results of calculating the energy characteristics W and Gm for two variants of gratings of the same length: based on a waveguide with a dielectric layer (ε2 = 10, a2 = 1 mm) and on a hollow waveguide.
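As an illustrative cross-check of the deceleration values quoted above, the LE10 propagation constant of a waveguide with a centred dielectric slab can be estimated from a transverse-resonance dispersion relation. The sketch below is our simplified model (symmetric slab, no slots, assumed wide-wall size a = 40 mm), not the authors' computation.

```python
import numpy as np
from scipy.optimize import brentq

C0 = 299_792_458.0  # speed of light, m/s

def le10_beta(f, a, a2, eps):
    """Propagation constant of the fundamental LE10 mode of a rectangular
    waveguide with a centred dielectric slab (width a2, permittivity eps)
    parallel to the narrow walls. Transverse resonance, even mode:
        k_xd * tan(k_xd * a2 / 2) = k_xa * cot(k_xa * g),  g = (a - a2) / 2,
    with k_xd^2 = eps*k0^2 - beta^2 and k_xa^2 = k0^2 - beta^2."""
    k0 = 2 * np.pi * f / C0
    g = (a - a2) / 2

    def resid(beta):
        kxd = np.emath.sqrt(eps * k0**2 - beta**2)
        kxa = np.emath.sqrt(k0**2 - beta**2)
        # Imaginary parts cancel even when kxa is imaginary (beta > k0).
        return (kxd * np.tan(kxd * a2 / 2) - kxa / np.tan(kxa * g)).real

    # Bracket sign changes on a grid, refine with brentq, discard cot poles.
    grid = np.linspace(1.0, 0.999 * k0 * np.sqrt(eps), 4000)
    vals = np.array([resid(b) for b in grid])
    roots = []
    for lo, hi, vlo, vhi in zip(grid, grid[1:], vals, vals[1:]):
        if np.isfinite(vlo) and np.isfinite(vhi) and vlo * vhi < 0:
            r = brentq(resid, lo, hi)
            if abs(resid(r)) < 1e-3 * k0:   # a genuine zero, not a pole
                roots.append(r)
    return max(roots)   # largest beta corresponds to the fundamental mode

# Parameters quoted above: eps = 5 layer of thickness 1.8 mm, f = 11.53 GHz.
beta = le10_beta(11.53e9, 0.040, 0.0018, 5.0)
print(f"guide wavelength ~ {2 * np.pi / beta * 1e3:.1f} mm")
```

With the ε = 1 limit this relation collapses to the hollow-waveguide result kx = π/a, which is a convenient sanity check of the model.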


The grating based on the waveguide with a dielectric layer contained 36 slots, and the grating based on the hollow waveguide 21. The studies were carried out for two slot lengths: L = 16.4 mm and L = 14.6 mm. The slits of length L = 14.6 mm resonate at fp = 10 GHz, when the maximum of the pattern is directed along the normal, and the slits of length L = 16.4 mm at fp = 9 GHz. It can be seen that the behaviour of W and Gm in the considered frequency range depends substantially on the choice of the slot length. The energy parameters of the grating based on the waveguide with a dielectric behave more uniformly for L = 16.4 mm than for L = 14.6 mm. This means that a clear improvement in the performance of the scanning antenna can be achieved by a special choice of the slit length. From a comparison of the curves in Fig. 6 it follows that an antenna based on a waveguide with a dielectric has clear advantages over an antenna based on a hollow waveguide.

Fig. 5. Influence of the number of slots on the energy and directional characteristics of a linear SWA based on a waveguide with a dielectric layer with parameters ε2 = 5, a2 = 1.8 mm

As a result of the investigations it was found that for a SWA with N = 30, 40 the gain can be made to change by no more than 1 dB in the considered frequency band when the DP maximum deviates within the angle sector from −2° to −45° (Fig. 7). In this case the gain differs from the directivity of a linear system with a constant amplitude distribution by no more than 1.5 dB. For this, the dielectric layer must be chosen so that the slowdown of the fundamental wave varies within 1.6 ≥ λ/λg ≥ 1.3 (the regions where adjacent slots are located then strongly overlap). In addition, by choosing the parameters of the partial dielectric filling, the frequency f0, the length of the slots and their displacements x0ν, the frequency scanning band can be transferred to the operating range of a waveguide with a larger transverse size. For example, as seen in Fig. 7, a SWA based on a waveguide with a cross section of 40 × 20 mm with a dielectric layer (ε2 = 10, a2 = 2 mm) and slits of length L = 21 mm operates in the frequency range of a waveguide with a cross section of 35 × 15 mm.


Fig. 6. Frequency dependence of the emissivity (a) and gain (b) of linear SWA of the same electrical length


Fig. 7. Frequency dependence of the gain of the linear SWA for N = 40 (ε2 = 10, x0 = 6 mm)

4 Conclusions

The problem of exciting a system of longitudinal slots in a rectangular waveguide with a dielectric layer parallel to its narrow walls has been solved. The solution takes into account the interaction between the slots in the interior and exterior space; the internal interaction also covers the case when the regions of the location of adjacent slits along the waveguide axis partially overlap. The calculations were carried out in the approximation of the electric field distribution along the slots by a half-period of a sinusoidal function. It has been experimentally confirmed that such an approximation of the field on identical longitudinal slots provides reliable calculated characteristics of a SWA based on a waveguide with a dielectric layer in a band of ±13% relative to the resonant frequency of the slot.

A SWA based on a waveguide with a dielectric layer has a wider operating band than a SWA based on a hollow waveguide. For a SWA with N = 30, 40 slots (an electrical grating length of the order of 10–12 wavelengths), when the slowdown of the fundamental wave varies within 1.6 ≥ λ/λg ≥ 1.3, it is possible to ensure that during frequency scanning of the DP maximum in the sector of angles from −2° to −45° the antenna gain changes by no more than 1 dB. It is shown that, by choosing the dielectric constant and the thickness of the dielectric layer partially filling the waveguide, the transverse dimensions of the linear SWA can be significantly reduced without degrading the energy characteristics. Such linear SWA can be used to create two-dimensional gratings forming beams in a given direction, as well as SWA capable of scanning in two planes over a wide sector of angles without the appearance of interference maxima of higher orders.

Acknowledgment. The authors would like to thank the editor and anonymous reviewers for constructive, valuable suggestions and comments on the work.

References

1. Shestopalov, Y.V., Kuzmina, E.A.: On a rigorous proof of the existence of complex waves in a dielectric waveguide of circular cross section. Prog. Electromagn. Res. 82, 137–164 (2018)
2. Mazur, M., Mazur, J.: Operation of the phase shifter using complex waves of the circular waveguide with periodical ferrite-dielectric filling. J. Electromagn. Waves Appl. 25(7), 935–947 (2011)
3. Calignano, F., et al.: High-performance microwave waveguide devices produced by laser powder bed fusion process. Proc. CIRP 79, 85–88 (2019)
4. Nair, D., Webb, J.P.: Optimization of microwave devices using 3-D finite elements and the design sensitivity of the frequency response. IEEE Trans. Magn. 39(3), 1325–1328 (2003)
5. Belenguer, A., Esteban, H., Boria, V.E.: Novel empty substrate integrated waveguide for high-performance microwave integrated circuits. IEEE Trans. Microw. Theory Tech. 62(4), 832–839 (2014)
6. Islamov, I.J., Ismibayli, E.G., Gaziyev, Y.G., Ahmadova, S.R., Abdullayev, R.: Modeling of the electromagnetic field of a rectangular waveguide with side holes. Prog. Electromagn. Res. 81, 127–132 (2019)
7. Islamov, I.J., Shukurov, N.M., Abdullayev, R.Sh., Hashimov, Kh.Kh., Khalilov, A.I.: Diffraction of electromagnetic waves of rectangular waveguides with a longitudinal. In: IEEE Conferences 2020 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF), INSPEC Accession Number: 19806145 (2020)
8. Khalilov, A.I., Islamov, I.J., Hunbataliyev, E.Z., Shukurov, N.M., Abdullayev, R.Sh.: Modeling microwave signals transmitted through a rectangular waveguide. In: IEEE Conferences 2020 Wave Electronics and its Application in Information and Telecommunication Systems (WECONF), INSPEC Accession Number: 19806152 (2020)
9. Islamov, I.J., Ismibayli, E.G.: Experimental study of characteristics of microwave devices transition from rectangular waveguide to the megaphone. IFAC-PapersOnLine 51(30), 477–479 (2018)
10. Ismibayli, E.G., Islamov, I.J.: New approach to definition of potential of the electric field created by set distribution in space of electric charges. IFAC-PapersOnLine 51(30), 410–414 (2018)
11. Islamov, I.J., Ismibayli, E.G., Hasanov, M.H., Gaziyev, Y.G., Abdullayev, R.: Electrodynamics characteristics of the no resonant system of transverse slits located in the wide wall of a rectangular waveguide. Prog. Electromagn. Res. Lett. 80, 23–29 (2018)
12. Islamov, I.J., Ismibayli, E.G., Hasanov, M.H., Gaziyev, Y.G., Ahmadova, S.R., Abdullayev, R.: Calculation of the electromagnetic field of a rectangular waveguide with chiral medium. Prog. Electromagn. Res. 84, 97–114 (2019)

Supply Chain Management and Sustainability

A Literature Analysis of the Main European Environmental Strategies Impacting the Production Sector

Denisa-Adela Szabo, Mihai Dragomir, Virgil Ispas, and Diana-Alina Blagu

Universitatea Tehnică din Cluj-Napoca, Cluj-Napoca, Romania
{denisa.szabo,mihai.dragomir,virgil.ispas,diana.blagu}@muri.utcluj.ro

Abstract. The paper presents the first step in a doctoral research project oriented toward improving the policy framework in Romania and Europe related to the environmental transformation of the manufacturing sector. A policy scan is performed, identifying the key provisions of the main strategic documents (for the circular economy, the bioeconomy and decarbonization). The studied documents are then analysed using a matrix diagram approach against four criteria relevant to their foreseen implementation in the field. The driving motive for this analysis is the need to tailor the generic strategies in a way that reflects the current challenges of industrial companies. Although there is a wide diversity of situations at European level, the aspects related to environmental policies tend to be similar in different countries due to the influence of European Commission directives. However, since environmental performance must be an integral part of competitiveness, these policies must allow room for improvement. The conclusions discuss the current gaps in the framework and the possibilities to reduce them.

Keywords: Environmental policy · Circular economy · Bioeconomy · Decarbonization

1 Introduction

This paper aims to analyse three important strategies centred on improving the environmental results of European economies faced with critical challenges such as climate change and resource depletion. The objective of the paper is to identify as accurately as possible those provisions of the previously specified documents that significantly affect the manufacturing sector and to study them based on customized analysis models. We will investigate the measures proposed by each strategy and determine the degree of impact they have on companies, people and stakeholders involved in industries oriented toward delivering physical goods on the B2B and B2C markets.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 561–567, 2022. https://doi.org/10.1007/978-3-030-90421-0_48


The European Green Deal, created by the European Commission, aims to establish a modern and functional climate and environmental framework for the entire European Union, and is one of the most ambitious strategies in the world in this field. It supports economic and social development from the perspective of resource efficiency and net zero emissions, with a more competitive economy that will foster the wellbeing of citizens and the protection of the environment [1, p. 2]. The strategy integrates existing policies and aims to be applied in as many Member States as possible, which can cooperate to meet the proposed objectives [1, p. 4]. The main goal is to reduce greenhouse gas (GHG) emissions to net zero by 2050, a goal proposed by the United Nations Framework Convention on Climate Change (UNFCCC), and it will include the first "climate" law to help achieve climate neutrality in the EU [1, p. 4].

The bioeconomy encompasses all economic activities that make use of biological components within their production, consumption and recycling processes [2, p. 4]. The EU has been promoting an ambitious strategy in this field since 2012, with an update in 2018. This strategy pursues five major goals for a sustainable and impactful bioeconomy and has three main directions for supporting them: strengthening and expanding biology-based economic sectors, supporting the creation of local bioeconomies, and researching the ecological capacity the environment can make available for economic activities [2, pp. 9−10].

As a consequence of the mismanagement of resources, GHG emissions are on the rise, which is why the European Commission has created a strategy for a climate-neutral economy that is more competitive and better structured in terms of the resources used: the Circular Economy Action Plan [3, p. 4]. The aims are to reduce CO2 output and to reach a 100% increase in the rate of re-use of materials in the next decade, while supporting European companies in developing more sustainable products and production processes [3, p. 4]. Circularity can also bring considerable material savings in all supplier-customer relationships and beyond [3, p. 9].

2 Literature Review

In order to study the impact of environmental strategies on production companies, our first step was to consult some of the latest publications on this topic. Since the European Union has a complex approach to the subject, with many national variations, it is difficult to ascertain a single point of view; as can be seen below, however, the consensus seems to favor a positive impact of the current policies of the continental block (see Table 1).

A Literature Analysis of the Main European Environmental Strategies

Table 1. The latest publications analysing EU environmental policy

1. This paper examines the implications at company level of integrating environmental approaches into the innovation strategy of firms. At policy level, the authors conclude that policy-generating bodies should complement each other in order to provide adequate support for companies in both the short and the long term. — Source: (Greco, Germani, Grimaldi, & Radicic, 2020) [4]

2. This paper discusses and analyses the EU circular economy policies as reflected in the communications of the European Commission in this direction. The main finding is that policies tend to be ambitious but general and vague, while calls for action are too detailed, and thus somewhat disconnected from the strategic intent. — Source: (Friant, Vermeulen, & Salomone, 2021) [5]

3. This article also deals with the challenges of achieving a successful policy mix, this time in the case of forests, combining the circular economy approach with the bioeconomy approach. As a conclusion, the authors recommend integrating climate-related measures with sustainable management practices and supporting third-party mechanisms. — Source: (Ladu, Imbert, Quitzow, & Morone, 2020) [6]

4. In a complex study of ca. 90 policies that affect the biomass sector of the European bioeconomy, the researchers conclude that there is a lack of complementarity and a lack of alignment with the EU strategy, and propose concrete operational measures along the value chain to overcome these deficiencies. — Source: (Singh, Christensen, & Panoutsou, 2021) [7]

3 Methodology and Analysis

In the second part of the study, the methodology implementation, we selected four analysis criteria for the strategies mentioned above, in order to determine their applicability in the production sector, assigning grades from 1 to 5 in a matrix correlation diagram and assuming all of the strategies carry the same importance weight (33.33%). Finally, we computed the total for each strategy and compared the strategies among themselves, as well as the analysis criteria.

The European Green Deal supports the transition of all sectors of the national economies to a net zero GHG state [1, p. 4], which is a highly complex undertaking. For this reason, we evaluate the complexity of this document at the maximum grade, 5 points. The ambition is also very high, as stated in the document itself [1, pp. 4−5], and obtains the same score. New industrial policies will be launched and implemented in tandem with economic, social, and educational measures [1, p. 4]. This strategy aims at radically transforming the EU over the next three decades, which will require considerable efforts on the part of the authorities, companies, and the people. The applicability of the proposed measures to production is low: this is not a document specific to the industrial side, and companies are not prepared to support such a major goal, namely reducing GHG emissions, in such a short timeframe. If production is outsourced to countries outside the EU without the necessary improvements, there is a real risk of the deal failing [1, p. 5]. The score obtained for this criterion is 2 points. The level of comprehension is generally very low; this pact needs to be much better promoted and operationalized to achieve the targeted objectives. Policies need to be known at both national and European level, they must be clear (which is not the case at the moment), and proper institutions and funding need to be set up. In conclusion, for this criterion we awarded 1 point.

The bioeconomy strategy relies on its adoption and implementation by companies, financial institutions, and knowledge providers [2, p. 105]. Because of this, the bioeconomy requires the transformation of almost all industrial and economic processes, involving a very high level of complexity, similar in scope to the decarbonization proposed by the European Green Deal. As a consequence, the rank we allocated for this criterion is high (4). The ambitions of the bioeconomy strategy are great: implementing this concept would influence employment, climate actions, industrial development, and biodiversity [2, pp. 5−7]. Our evaluation situates this level at 5. In terms of applicability to production, bioeconomy approaches have a significant impact on the primary sector (generation of raw materials) as well as on the secondary one (industrial processing) [2, p. 6]. However, since the EU has an economy based on the tertiary sector (services, innovation, creative industries, etc.), we can deduce that not all economic activity would be transformed by this strategy. The score we allocated for this criterion is thus 3. To achieve the objectives proposed by the strategy, some EU programs will include dedicated provisions (Life, Horizon, Interreg, etc.) [2, p. 64]. Also, in 2018, almost half of the EU Member States (13 countries) were developing or already implementing their own national bioeconomy strategy [2, pp. 82−83]. Based on these data, we conclude that this strategy enjoys a medium level of awareness.


The Circular Economy Action Plan is a concrete strategic document that promotes sustainability throughout the European economy by supporting all actors to minimize the amount of resources they consume and to implement measures to reuse the generated waste as new materials for new product life cycles [3, p. 5]. These results cannot be achieved without complex changes in the product design and product realization phases, or without the involvement of most European consumers and organizations. We see these transformations as a complex metamorphosis of all continental, national, and corporate relationships, with a very long time horizon. The score for complexity is therefore the same as for the European Green Deal. The plan's ambition is high, as highlighted by the fact that it seeks to extend the Eco-design Directive beyond energy-related products to a wider and more diverse range of products, so that they are ecologically engineered throughout their life cycle, in the direction of material circularity [3, p. 6]. Applicability to production has a high impact, with each Member State, company, or consumer wanting to generate as little waste as possible, manage their resources, and remain competitive in today's market. Its implementation also supports the reduction of carbon dioxide emissions. Large-scale cooperation makes the transition to the circular economy a profound one in the EU and beyond [3, p. 24]. We can say that the degree of comprehension at the moment is average.

Table 2. Matrix diagram for evaluating the effectiveness of European public policies

Strategy | Complexity of the strategy | Ambition of the strategy | Applicability to production | Degree of comprehension | Total
European Green Deal | 5 | 5 | 2 | 1 | 13
A sustainable bioeconomy for Europe | 4 | 5 | 3 | 3 | 15
New circular economy action plan | 5 | 5 | 4 | 3 | 17
Total | 14 | 15 | 9 | 7 |

From the matrix diagram we constructed for evaluating the effectiveness of the three European public policies (Table 2), the highest score is obtained by the Circular Economy Action Plan. This highlights the fact that this plan is well known and understood, and is already producing some concrete results. The scientific principles it uses are accessible to large areas of the economy, and the modifications it requires often have an economic justification. For the bioeconomy strategy, based on the score obtained, we can conclude that it is being developed and implemented in some countries and sectors, but does not yet have complete mainstream appeal. This might also come from the fact that the knowledge involved is rather specific and not applicable to some important industries (e.g., automotive, oil and gas, healthcare, etc.). The lowest score belongs to the European Green Deal, the newest plan, which establishes an integrated vision for 2050. It is probable that in the next few years it will become better known and more impactful, and it might even incorporate the other strategies.

Comparing the three European policies on each criterion, we can say that all have a high degree of complexity, each with a unified vision and compound objectives affecting many areas of the European economies. In terms of ambition, all three have the maximum score: they are strongly oriented towards achieving the proposed transformations and are trying to form long-term alliances with all stakeholders from all Member States and beyond. The next criterion, applicability to production, scored low, as these policies have a general scope covering all domains. There is a need to create specific projects for companies, which would bring the policies into a more intimate relationship with the manufacturing sector. The level of comprehension obtained the lowest score, which means concrete measures must be taken for these European policies to be widely implemented. To solve this problem, there is a need for extensive awareness campaigns for these policies, the introduction of dedicated courses or workshops into the education system, the creation of promotional sites and platforms, etc. To enable a better comparison, we present below a spider diagram showing the relative scores obtained with the matrix diagram (see Fig. 1 below).

[Figure: spider diagram with axes "Complexity of the strategy", "Ambition of the strategy", "Applicability to production", and "Degree of comprehension" (scale 0–5), plotted for the European Green Deal, A sustainable bioeconomy for Europe, and the New circular economy action plan.]

Fig. 1. Comparison of EU environmental strategies
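The totals in Table 2 and the relative scores plotted in Fig. 1 can be reproduced with a short script (a minimal sketch in Python; the criterion names and grades are those reported in Table 2):

```python
# Scores from Table 2 (each criterion graded 1-5, strategies weighted
# equally). Row sums give the total per strategy, column sums the total
# per criterion.
criteria = ["complexity", "ambition", "applicability to production",
            "degree of comprehension"]
scores = {
    "European Green Deal":                 [5, 5, 2, 1],
    "A sustainable bioeconomy for Europe": [4, 5, 3, 3],
    "New circular economy action plan":    [5, 5, 4, 3],
}

strategy_totals = {name: sum(row) for name, row in scores.items()}
criterion_totals = [sum(col) for col in zip(*scores.values())]

print(strategy_totals)
print(criterion_totals)  # [14, 15, 9, 7]
```

Sorting `strategy_totals` immediately ranks the three strategies as discussed in the text, with the circular economy plan first.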

4 Conclusions

The study performed in this paper offers a bird's-eye comparison of three of the most important European strategies dedicated to environmental protection. It is evident that these documents will affect the industrial domain, but it is not yet clear to what degree or in what manner. All of them combine a very high level of ambition with a complex structure and approach, which, in our experience, is hard for companies, especially small ones, to embrace enthusiastically. At the same time, we estimate the comprehension level in the production sector to be average, imposing the need for more effective communication on these topics, especially for the two planning documents that could have the greatest influence on manufacturing practices and processes (i.e., the bioeconomy and the circularity strategies). In many situations, the companies and people involved in production are not aware of these tendencies and do not have the proper support or funding to ensure they are applied in each firm. We consider it is up to researchers to develop instruments that will simplify the work of companies and engineers in bringing the provisions of these societal policies to life.

Another important lesson is that the strategies are not fully complementary: there are many redundancies as well as important conflicts among their requirements (e.g., the use of bioresources as raw materials, supported by the bioeconomy strategy, might come into conflict with the net zero emissions goal proposed by the European Green Deal). Analyses such as the one above, and the identification of policy gaps and policy conflicts, should become a common exercise for all stakeholders, to improve these documents over time and support their implementation. The authors of this paper will use the results of the current analysis in conjunction with other policy-related research to propose better ways of improving the strategic documents related to environmental conditions that impact the manufacturing sector, including methods to map policy evolution, to involve relevant stakeholders with technical expertise, and to support awareness raising on a larger scale.

References

1. European Commission: The European Green Deal. Brussels (2019)
2. European Commission: A sustainable bioeconomy for Europe: strengthening the connection between economy, society and the environment. Brussels (2018)
3. European Commission: Circular Economy Action Plan (2020)
4. Greco, M., Germani, F., Grimaldi, M., Radicic, D.: Policy mix or policy mess? Effects of cross-instrumental policy mix on eco-innovation in German firms. Technovation, 102194 (2020)
5. Friant, M.C., Vermeulen, W.J., Salomone, R.: Analysing European Union circular economy policies: words versus actions. Sustain. Prod. Consumption 27, 337–353 (2021)
6. Ladu, L., Imbert, E., Quitzow, R., Morone, P.: The role of the policy mix in the transition toward a circular forest bioeconomy. Forest Policy Econ. 110, 101937 (2020)
7. Singh, A., Christensen, T., Panoutsou, C.: Policy review for biomass value chains in the European bioeconomy. Glob. Trans. 3, 13–42 (2021)

A Raw Milk Production Facility Design Study in Aydın Region, Turkey

Hasan Erdemir, Mehmet Yılmaz, Aziz Kemal Konyalıoğlu(B), Tuğçe Beldek, and Ferhan Çebi

Management Engineering Department, Istanbul Technical University, Istanbul, Turkey
{erdemirh15,yilmazmehmet127,konyalioglua,beldek,cebife}@itu.edu.tr

Abstract. Food has been one of the most important and indispensable needs for sustaining human life from the first days of humanity to the present day. Even though some human food habits seem to have changed since the hunter-gatherer era, many common points remain, one of which is the consumption of milk and dairy products. Milk, a secretion produced by mammals to feed their offspring, has become one of the most important consumer goods, having long been used as food by human beings. The importance of milk, which in early periods was consumed without knowledge of its content and benefits, increased with new scientific and technological findings, and this was reflected in consumption demand. It is consumed especially for its protein and calcium content, and it is also an important source of vitamin B2 (riboflavin), vitamin B12, vitamin A, thiamine, niacin, phosphorus, and magnesium. The world population grew sharply with the industrial revolution, and increasing migration from villages to cities concentrated the majority of the population in urban areas. Before the industrial revolution, people living in villages had very easy access to dairy products, but afterwards access became difficult due to inadequate transportation facilities and the rapid spoilage of milk. With the invention of pasteurization, bacterial growth and spoilage were delayed somewhat by briefly heating the milk to high temperatures, and with the development of transportation and logistics facilities, milk produced in the countryside was delivered to the cities via the cold chain. Later, because the durability of pasteurized milk was insufficient, UHT (Ultra High Temperature) technology was developed, allowing milk treated at even higher temperatures to be stored for longer periods without a cold chain, which was seen as a new solution to the increasing demand; today these two methods are the ones commonly used. The decrease in nutritional value of heat-treated milk, especially UHT milk, has drawn criticism from many scientists, but consumption has continued because alternative solutions were not available. Especially with the influence of social media, the recent pandemic, and the worldwide healthy-lifestyle trend, the demand for natural and additive-free products has been increasing steadily. At this point, states are starting to make the consumption of raw milk legal by updating their legislation in light of developing technology. Against this background, this study aims to deliver raw milk to the consumer in a certain region by maintaining the cold chain in a sterile environment, without any treatment after milking. In this context, the literature was first searched: the livestock sector was reviewed, the general characteristics and types of milk were examined, raw milk and similar business models were studied, and research was carried out on the logistics of raw milk. The aim of this study is to design a raw milk production facility in a pilot region, Aydın, Turkey, and to establish a distribution business model as a decision support proposal.

Keywords: Raw milk · Facility design · Production · Distribution business model · Logistics

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 568–577, 2022. https://doi.org/10.1007/978-3-030-90421-0_49

1 Introduction

After the industrial revolution, food supply to the cities became difficult due to the growing world population and the migration from rural to urban areas that accompanied industrialization. To solve this problem, methods that increase productivity, extend shelf life, and add artificial color and flavor to food have been used. Production efficiency has increased, but these methods have negative effects on human health in both the medium and long term. Today, with increasing awareness, people want to consume natural products that do not contain additives. This trend has begun to appear in milk and dairy products as in all food products. The demand for raw milk has therefore increased over time, helped by regulatory changes, and has brought new opportunities to this sector. However, very few companies meet this increasing demand with a reliable milk supply, and most consumers cannot find a business selling reliable raw milk in their area. The aim of this study is to design a raw milk production facility in a pilot region and to establish a distribution business model.

2 Literature Review

Animal husbandry has become an important issue worldwide and in Turkey, as people have become more conscious about nutrition and food, and as adequate and balanced nutrition of animal origin is of vital importance. Especially with the restrictions that came with the pandemic and the resulting stagnant trade networks, states and individuals have become seriously aware of food access and food supply issues. The dairy industry is one of the oldest industries in the world, and many organizations publish reports based on industry-specific research. One of the most important of these, the International Dairy Federation (IDF), publishes a report every year based on the data it collects from different parts of the world. Figure 1 shows world milk production between 2005 and 2012.


Fig. 1. 2005–2012 World milk production [1]

The livestock sector is one of the sectors with the highest potential in Turkey, and it is the main source of income for a significant part of the population. It has serious advantages, especially thanks to the existence of fertile agricultural lands and suitable climatic conditions. The National Dairy Council, which conducts research and publishes reports specific to the Turkish dairy sector, is one of the most important organizations that scientifically produce policies for the development of the sector, lead their implementation, and undertake the task of market regulation. Another important parameter when evaluating the sector is milk production, which shows the milk yield relative to the number of animals. Table 1 shows raw milk production by years and types.

Table 1. Raw milk production by years and types [2]

Year | Cattle (tonnes, %) | Buffalo (tonnes, %) | Sheep (tonnes, %) | Goat (tonnes, %) | Total (tonnes)
2014 | 16.998.850 (91.2) | 54.803 (0.3) | 1.113.937 (6.0) | 463.270 (2.5) | 18.630.859
2015 | 16.933.520 (90.8) | 62.761 (0.3) | 1.177.228 (6.3) | 481.174 (2.6) | 18.654.682
2016 | 16.786.263 (90.8) | 63.085 (0.3) | 1.160.413 (6.3) | 479.401 (2.6) | 18.498.161
2017 | 18.762.319 (90.6) | 69.401 (0.3) | 1.344.779 (6.5) | 523.395 (2.5) | 20.699.894
2018 | 20.036.716 (90.6) | 75.742 (0.3) | 1.446.271 (6.5) | 561.826 (2.5) | 22.120.716
2019 | 20.782.374 (90.5) | 79.341 (0.3) | 1.521.455 (6.6) | 577.209 (2.5) | 22.960.379

The SWOT analysis of the livestock sector was made by comparing the Turkish livestock sector with the sector worldwide. Among its strong points, one of the strongest is the abundance of arable and productive agricultural areas. The wide variety of products produced also allows production in many categories. The fact that labor is cheap compared to America and Europe is also important in reducing input costs. Government incentives and the increasing mechanization rate are further strengths of the sector. Based on the SWOT, the strengths and other factors have been determined for decision makers working in the raw milk sector.


Considering the weak points of the sector, the high input costs in feed prices, which have increased in recent years, are among the weakest points. High tractor and equipment costs are among the important problems burdening farmers. The fact that the majority of enterprises are small and medium-sized, weak R&D activity, and the low level of knowledge and education in the sector can be listed as other weak points.

Looking at the opportunities of the sector, the population is on an increasing trend and the young population is large. The worldwide trend toward healthy living can also be stated as an important opportunity for Turkey at this point. The opportunity to export to the Middle East market, where the dairy industry is underdeveloped, thanks to Turkey's geographical location, and the domestic and national production drive, which was especially emphasized after the pandemic, are further opportunities for the sector.

Considering the threats to the sector, the most important are the continuous depreciation of the Turkish lira against foreign currencies and the fluctuations of recent years. The rising exchange rate, especially in an economy dependent on imported goods, is one of the biggest threats to competitiveness in the sector. The young population's lack of interest in animal husbandry and the gradual increase in migration from village to city are other important threats, endangering the supply of the qualified workforce that is critical to the continuity of the sector. The growing effects of global warming and climate change, and the lack of price stability due to deficiencies in government regulations, are additional threats.

3 Methodology

In this study, it was decided to carry out eight steps: definition of the problem, facility location selection, capacity planning, establishment of the distribution network, marketing plan, human resources planning, cost analysis, and financial analysis.

Facility location selection is one of the most important stages when establishing a business model. The facility location cannot be easily changed, due to both the land investment and the facility establishment costs. Therefore, before the facility is established, candidate locations should be researched intensively and the most suitable region selected in line with the company's goals. Many methods, such as ANP, TOPSIS, ELECTRE, and PROMETHEE, can be used for this evaluation [3].

As stated in our sector research, one of the most important decisions in the livestock sector is determining the scale of the enterprise. Since most enterprises in Turkey are very small in scale, they are not sustainable and their quality is low. If the scale is larger than necessary, production will exceed demand and cause a loss of resources for the enterprise, given the short shelf life of raw milk. Therefore, before establishing the plant, it is necessary to estimate market demand and determine the scale of the plant in line with this demand and the available resources. For scale planning, methods such as break-even analysis and integer programming are used. In a study by Kara and Eroğlu in the literature, an integer programming model was established for a dairy farm in the Ereğli district of Konya [4].
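As an illustration of the break-even analysis mentioned above for scale planning, the break-even volume is the fixed cost divided by the contribution margin per litre (a minimal sketch with hypothetical figures, not the paper's data):

```python
# Break-even volume for scale planning: fixed costs divided by the
# contribution margin (price minus variable cost) per litre.
def break_even_litres(fixed_costs: float, price_per_l: float,
                      variable_cost_per_l: float) -> float:
    return fixed_costs / (price_per_l - variable_cost_per_l)

# Hypothetical example: 500,000 TL/year fixed costs, milk sold at
# 10 TL/L with 6 TL/L variable cost.
print(break_even_litres(500_000, 10.0, 6.0))  # 125000.0 L/year
```

Any planned scale below this volume would operate at a loss, which is why the text ties plant scale to the demand estimate.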


Planning of distribution channels is one of the most important decisions in our study. According to the survey results mentioned above, the majority of consumers do not consume raw milk because they do not have access to it. Therefore, to meet this demand, end consumers should be reached by establishing a wide network. Another requirement is to keep the milk at the required temperature during the procurement process and while the product awaits sale; for this, the temperature of the milk is monitored instantly using the necessary technologies. At the point of sale, milk will be delivered to the end consumer through electronic vending machines.

A brand name will be chosen to create a perception of trust in milk and to express that there is a new approach outside the existing milk tradition, and that brand will be made visible to the consumer. Together with the brand's logo and motto, a perception of simplicity and trust in milk will be created. As for the marketing channels: first of all, a website will be prepared in line with our vision, where consumers can find answers to every question. The business will be registered with "Google My Business" and "Yelp", ensuring visibility on Google and Google Maps, which are very popular search engines. At the same time, content on healthy living and accurate information for the consumer will be produced through e-mail marketing channels. From the information gathered in the surveys, we saw that the biggest obstacle to raw milk consumption, cited by 80% of respondents, is reliability. According to the findings of Lou and Yuan (2019), the credibility, attractiveness, and perceived similarity (to their followers) of influencers positively affect their followers' trust in their branded posts. Since influencers often develop trustworthy and attractive online personalities, it is not surprising that the perceived credibility and attractiveness of influencers can influence their followers' trust in their sponsored content. In addition, followers tend to follow influencers they identify with; therefore, followers' perceived similarity with influencers positively affects their trust in branded posts created by influencers. The brand will be made to express itself correctly to the target audience and to reflect its values through influencers who are effective in our target region. In social media marketing, which has become an important channel in recent years, targeted advertisements and content will be produced especially through Facebook, Instagram, Twitter, and YouTube. At the same time, search engine optimization will be carried out on the website so that it rises to the top in searches. A further marketing channel is television advertising; since its effectiveness is lower than the others, it will proceed with a smaller budget and share.

Human resources planning is one of the most important issues in this business model, because it is critical that people with different educational backgrounds work in a team spirit. Establishing a corporate culture is challenging, as there are different workplaces such as the plant, the distribution network, and the offices. According to Obeidat et al., organizational culture increases employees' commitment to the company and the effort they spend in line with its goals [5]. Therefore, in this study, it is aimed to increase the company culture and the competency levels of


the employees by using training and development programs while managing human resources in the enterprise.

After these studies, a cost calculation is made to obtain an estimated cost, covering everything from the cost of the land on which the facility will be established to the salaries of the employees. After this calculation, the sustainability of the company is evaluated through financial analyses, and final decisions are taken regarding the establishment of the company.

4 Facility Location Selection

The selection of the facility location is critical because installing the facility is costly and the operation will be planned around this location. In this study, it was decided to use the Analytic Hierarchy Process (AHP) and the factor rating method for facility location selection. It was decided that the facility would be located in the Aegean Region, for reasons such as low competition relative to the size of the market, suitability for animal husbandry, and familiarity with the region. The fact that the Aegean Region includes high-population markets such as İzmir, Aydın, Denizli, and Mersin is an important factor for the facility location. Five alternatives were selected for the facility location: Nazilli (Aydın), Sarayköy (Denizli), Banaz (Uşak), Çay (Afyon), and Başmakçı (Afyon). While selecting these regions, the locations of enterprises currently engaged in livestock activities, expert opinions, logistics opportunities, proximity to target markets, and suitability for livestock were taken into consideration.

Six criteria were chosen for use in the AHP and the factor rating method: region market size, proximity to forage crops, land cost, operational costs, distance of the facility from the city center, and competition in the market. While selecting these criteria, similar feasibility reports, academic studies, and expert opinions were used. According to the results of the AHP method, shown in Table 2, the Nazilli region was the most advantageous.

Table 2. AHP results

Alternative | Final score | Rank
Sarayköy | 0.442 | 2
Nazilli | 0.223 | 1
Çay | 0.218 | 4
Banaz | 0.133 | 3
Başmakçı | 0.000 | 5

According to the result of the factor rating method, it was seen in Table 3 that the Nazilli region would be more advantageous.

H. Erdemir et al.

Table 3. Factor rating method results

Alternative   Final score   Rank
Sarayköy      4.680         2
Nazilli       4.831         1
Çay           4.360         3
Banaz         4.341         4
Başmakçı      4.189         5

Considering the results of the Factor Rating Method and the AHP method, it was decided to conduct the study in the Nazilli region.
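The paper reports only the final AHP scores and a consistency check, not the underlying pairwise comparison matrices. The sketch below therefore uses a hypothetical comparison matrix for the six criteria to illustrate how AHP priority weights and the consistency ratio (which must stay below 0.1) are typically computed; the matrix values, the geometric-mean approximation and the random index RI = 1.24 for n = 6 are assumptions, not data from the study.

```python
import math

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for the six
# criteria: market size, forage proximity, land cost, operational costs,
# distance to city center, competition. NOT the matrix used in the paper.
A = [
    [1,   3,   5,   3,   7,   5],
    [1/3, 1,   3,   1,   5,   3],
    [1/5, 1/3, 1,   1/3, 3,   1],
    [1/3, 1,   3,   1,   5,   3],
    [1/7, 1/5, 1/3, 1/5, 1,   1/3],
    [1/5, 1/3, 1,   1/3, 3,   1],
]
n = len(A)

# Priority weights via the geometric-mean (row) approximation.
w = [math.prod(row) ** (1 / n) for row in A]
total = sum(w)
w = [x / total for x in w]

# Consistency ratio CR = CI / RI, with RI = 1.24 for n = 6.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n  # principal eigenvalue estimate
CI = (lam - n) / (n - 1)
CR = CI / 1.24

print([round(x, 3) for x in w], round(CR, 3))
```

With these illustrative judgements, market size receives the largest weight and the consistency ratio is comfortably below the 0.1 acceptance threshold. The same weights can then feed a factor rating, i.e. a weighted sum of each alternative's criterion scores.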

5 Capacity Planning

For the demand forecasts, the districts within a 200 km radius of the Nazilli facility location were determined and forecasts were made for these regions. Since this is a new market and raw milk sales are largely unregistered, there is no direct data on raw milk consumption. Therefore, using the latest per-capita annual drinking milk consumption data in the 2018 milk report of the Chamber of Agricultural Engineers and TUIK 2020 population data, a demand forecast for 2021 was made and a forecast model was developed using regression. In this model, the number of potential customers was found by multiplying the population of each district by 0.5, the raw milk consumption rate obtained in our survey. An approximate market size was then obtained by multiplying by the per-capita annual drinking milk consumption figure of the Chamber of Agricultural Engineers. Since a 10% share was targeted for the first entry into the market, the targeted demand in each district was calculated accordingly. The consistency ratio was determined to be less than 0.1 (0.08). Dividing the daily demand by the vending machine capacity of 200 L gives the number of vending machines needed to meet it. Forecasting was carried out using regression analysis considering three factors. The results are shown in Table 4.

Table 4. Forecasting results

City     District       Population   Demand forecast   Automat amount   Rounded   Distance from facility
Aydın    Efeler         292716       1664.07 L         8.3              9.0       44.4 km
Aydın    Nazilli        160877       914.57 L          4.6              5.0       0 km
Aydın    Söke           121940       693.22 L          3.5              4.0       83 km
Aydın    Kuşadası       121493       690.68 L          3.5              4.0       77.7 km
Denizli  Pamukkale      342608       1947.70 L         9.7              10.0      154 km
Denizli  Merkezefendi   321546       1827.97 L         9.1              10.0      167 km
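The per-district calculation described above (population × 0.5 raw milk consumers × per-capita annual consumption × 10% target share, spread over 365 days and then divided by the 200 L machine capacity) can be sketched as follows. The per-capita figure of 41.5 L/year is back-solved from the Table 4 values and is an assumption, since the exact number taken from the Chamber of Agricultural Engineers report is not quoted in this excerpt.

```python
from math import ceil

PER_CAPITA_L_PER_YEAR = 41.5  # back-solved from Table 4; an assumption
RAW_MILK_SHARE = 0.5          # share of population consuming raw milk (survey)
TARGET_MARKET_SHARE = 0.10    # targeted share for the first market entry
AUTOMAT_CAPACITY_L = 200      # daily capacity of one vending machine

def plan_district(population: int) -> tuple:
    """Daily demand (L) and number of vending machines for one district."""
    daily_demand = (population * RAW_MILK_SHARE * PER_CAPITA_L_PER_YEAR
                    * TARGET_MARKET_SHARE / 365)
    return round(daily_demand, 2), ceil(daily_demand / AUTOMAT_CAPACITY_L)

print(plan_district(292716))  # Efeler -> (1664.07, 9), matching Table 4
```

Applying the same helper to Nazilli (160877) or Merkezefendi (321546) reproduces the remaining rows of Table 4.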

A Raw Milk Production Facility Design Study


6 Distribution Routes

Following the demand estimations, distances from the production point were calculated so as not to exceed the 200 km distribution limit, and the six districts covered in this study were organized into two separate routes. On the Aydın route, starting from the production facility in Nazilli, the vehicle passes through Atça, Köşk, Aydın Merkez and Davutlar and reaches Kuşadası. The targeted daily shipment on this route is 4000 L, and the total distance traveled is 106 km. On the Denizli route, the vehicle starts from Nazilli, passes through Kuyucak, Buharkent and Sarayköy, and reaches the center of Denizli. The targeted daily shipment on this route is 3400 L, and the total distance traveled is 92 km. In addition, a daily shipment of 1000 L is planned within Nazilli itself, the district of the plant location. The distribution network consists of these two main routes. In determining the points to be served in each province along the routes, the populations of the districts were taken as a basis. The numbers of vending machines were calculated as Aydın-Efeler 9, Aydın-Nazilli 5, Aydın-Söke 4, Aydın-Kuşadası 4, Denizli-Pamukkale 10 and Denizli-Merkezefendi 10. The locations of the vending machines within these districts were determined by ordering them from the highest population to the lowest.

7 Financial Analysis

The financial analyses cover the period 2022–2026. In the calculations, the 2020 inflation rate of the Central Bank of the Republic of Turkey, 14.6%, was taken as the basis. Income and expenses by year are shown in Table 5.

Table 5. Income and expenses between 2022–2026

                                2022         2023         2024         2025         2026
Total sales (unit)              2737500      2737500      2737500      2737500      2737500
Wholesale (unit)                1368750      958125       547500       136875       136875
Automat sales (unit)            1368750      1779375      2190000      2600625      2600625
Automat unit sale price (TL)    7.00         7.98         9.10         10.37        11.82
Wholesale unit price (TL)       3.00         3.42         3.90         4.44         5.07
Sales income (TL)               13687500.00  17476200.00  22057461.00  27578941.56  31439993.38
Sales commission (TL)           937046.25    1388702.54   1948456.49   2637722.97   3007004.19
Operational expenses (TL)       8439700.33   9621258.38   10968234.55  12503787.39  14254317.62
  Feed expenses                 4390950      5005683      5706479      6505386      7416140
  Renting expenses              38400        43776        49905        56891        64856
  Employee expenses             3128459.84   3566444      4065746      4634951      5283844
  Marketing expenses            120000       136800       155952       177785       202675
  Animal care expenses          240000       273600       311904       355571       405350
  Invoice expenses              120000       136800       155952       177785       202675
  Other expenses                401890       458155       522297       595418       678777
Profit before taxes (TL)        4310753      6466239      9140770      12437431     14178672
Tax expenses (TL)               775936       1163923      1645339      2238738      2552161
Net profit (TL)                 3534817.80   5302316.04   7495431.37   10198693.58  11626510.68
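The expense rows of Table 5 grow by a constant factor each year. The tabulated values are reproduced with a 14% annual escalation rate (e.g. marketing: 120,000 × 1.14 = 136,800), which is slightly below the quoted 14.6% inflation figure; the 14% rate used below is read off the table, not stated in the text.

```python
def escalate(base: float, rate: float, years: int) -> list:
    """Project an annual expense forward at a constant escalation rate."""
    return [round(base * (1 + rate) ** t) for t in range(years)]

# Marketing and renting expense rows of Table 5, reproduced at 14% per year:
print(escalate(120_000, 0.14, 5))  # [120000, 136800, 155952, 177785, 202675]
print(escalate(38_400, 0.14, 5))   # [38400, 43776, 49905, 56891, 64856]
```

The other expense rows in Table 5 follow the same pattern from their 2022 base values.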

In calculating the payback period of the investment, the widely used net present value (NPV) method was applied. According to this method, the investment reaches the break-even point after 2.14 years and becomes profitable beyond that point (Table 6).

Table 6. Net present value

Year              1          2          3          4           5
Discount factor   1          0.877      0.769      0.675       0.592
Cash flow         3534817.8  5302316.0  7495431.4  10198693.6  11626510.7
Present value     3534817.8  4651154.4  5767491.0  6883827.7   6883827.7
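Table 6 applies the discount factors 1, 0.877, 0.769, …, i.e. 1/(1 + r)^(t−1) with r = 14%, leaving the first year undiscounted. A sketch of the discounted payback computation is given below; since the initial investment amount is not stated in this excerpt, an assumed outlay of 9.0 million TL is used, which happens to reproduce the reported 2.14-year figure.

```python
def discounted_payback(investment: float, cash_flows, rate: float):
    """Years until cumulative discounted cash flow covers the investment.

    Mirrors Table 6: year 1 is undiscounted, year t uses 1/(1+rate)**(t-1).
    """
    cumulative = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        pv = cf / (1 + rate) ** (year - 1)
        if cumulative + pv >= investment:
            return year - 1 + (investment - cumulative) / pv
        cumulative += pv
    return None  # not recovered within the planning horizon

net_profits = [3534817.8, 5302316.0, 7495431.4, 10198693.6, 11626510.7]
INVESTMENT = 9_000_000  # hypothetical; the actual outlay is not given here

print(round(discounted_payback(INVESTMENT, net_profits, 0.14), 2))  # 2.14
```

The discounted values computed this way match the "Present value" row of Table 6.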


In recent years, despite the continuous increase in feed and fuel input prices in our country, the comparatively small increase in producers' milk sale prices has greatly reduced the profitability of dairy farming. Considering the structure of the sector and its current situation, the profitability found in this feasibility study is well above industry standards. The problems noted at the outset of this work, such as the small scale of operations in the sector, inadequate quality and failure to benefit from technology, were addressed through methods such as correct scale planning, milk vending technologies and correct demand planning. As a result, the financial analysis shows that this business model is profitable and recovers the investment in a short period of 2.14 years.

8 Conclusions

In conclusion, raw milk production and its supply chain gain importance as demand increases. Raw milk is difficult to obtain through typical markets, and only a few facilities produce and sell it, so logistics and new marketing channels are very important. This study identifies a pilot region for designing a new facility and a new business model. In future studies, the design may be extended to other regions of the country, and other selling options may be evaluated with more reliable consumer data. The study proposes a factor prioritization and designs a new raw milk facility by considering the SWOT analysis and AHP results considered in the study.

References

1. International Dairy Federation Annual Report 2019–2020. https://fil-idf.org/about-us/annualreport/. Accessed 10 May 2021
2. Ulusal Süt Konseyi: Süt Raporu 2019 (2019). https://ulusalsutkonseyi.org.tr/wp-content/uploads/Ulusal-Sut-Konseyi-Sut-Raporu-2019.pdf
3. Rahman, M.S., Ali, M.I., Hossain, U., Mondal, T.K.: Facility location selection for plastic manufacturing industry in Bangladesh by using AHP method. Int. J. Res. Ind. Eng. 7(3), 307–319 (2018)
4. Kara, H., Eroğlu, A.: Tam Sayılı Doğrusal Programlama Metodu İle Entansif Hayvancılık İşletmesinin Kapasite Planlaması: Konya (Ereğli) Örneği. Çukurova Tarım ve Gıda Bilimleri Dergisi 33(2), 31–46 (2018)
5. Obeidat, A.M., Abualoush, S.H., Irtaimeh, H.J., Khaddam, A.A., Bataineh, K.A.: The role of organisational culture in enhancing the human capital applied study on the social security corporation. Int. J. Learn. Intellect. Cap. 15(3), 258–276 (2018)

Recent Developments in Supply Chain Compliance in Europe and Its Global Impacts on Businesses

Çiçek Ersoy and Hatice Camgoz Akdag

Management Engineering Department, Istanbul Technical University, Istanbul, Turkey
{cicek.ersoy,camgozakdag}@itu.edu.tr

Abstract. Recently, new supply chain compliance rules in the EU and Germany came into force. Consequently, companies will implement due diligence and a related risk management system for their supply chain processes in order to establish the integrity of their business with regard to decent work and environmental protection. Companies will carry out risk analyses and implement risk-mitigating measures not only inside their own organisations but also with regard to their direct and indirect suppliers. These recent regulations in the EU and Germany will therefore have a global impact, reaching countries such as Turkey, China and Brazil. This paper presents the framework of supply chain compliance and explains the impact of the new regulations on businesses.

Keywords: Supply chain · Global value chains · Compliance · ESG · Decent work · Third party due diligence

1 Introduction

When it comes to compliance, the areas that first come to mind are operations carrying high commercial and legal risks for companies, such as competition law, protection of personal data, and the fight against bribery, corruption and money laundering. However, in the coming years this classical definition of compliance will gradually expand to include different fields and concepts, and one of the new risk areas for companies will be 'supply chain compliance'. This expansion is a result of the transformation of shareholder capitalism into stakeholder capitalism, which defines the mission of a corporation as serving not only shareholders but also customers, suppliers, workers, communities and the environment. According to research, firms with a good environmental, social, and governance (ESG) compliance structure achieve higher performance and credit ratings through five factors: top-line growth, lower costs, fewer legal and regulatory interventions, higher productivity, and optimized investment and asset utilization.1 A comprehensive and holistic compliance organization and practice serves the interests of all stakeholders and therefore creates an effective instrument for achieving the strategic objectives of the corporation.

Supply chain compliance means assuring certain legal standards at all stages of the supply chain, not only in the company's own operations but also in the processes and operations of its direct or indirect suppliers. From the compliance point of view, the supply chain is defined as all steps required to manufacture products and provide services, from the extraction of raw materials to delivery to the end customer. The product's value chain, whether intra-firm or inter-firm, is designed mostly by transnational legal transactions. In other words, 'the time of single multinational corporations on their own creating, manufacturing, and selling a given product is long gone'2 and production processes are mostly shaped by suppliers from third-world countries. As a result of 'irresponsible' international trade practices, the risk of human rights violations and environmental destruction has increased. Mitigating the legal and commercial risks arising from global supply chain processes will save companies from negative legal consequences such as administrative fines or compensation claims. Considering the rise of ESG (environmental, social and governance) and the related recent legal developments in the EU and Germany, it may be argued that the most important legal risks regarding the supply chain will be the violation of human rights in these processes (e.g., forced labour and child labour)3 and/or the use of products or services that harm the environment and increase carbon emissions as inputs. Taking the recent developments into account, regular supply chain due diligence will in the future be considered a prerequisite for ensuring supply chain compliance in companies. Risk minimization will only be possible with the coordination of the company's legal and compliance, purchasing, human resources and, if any, sustainability departments.

Many companies of different scales in non-EU countries such as Turkey, Russia, Brazil or China are direct or indirect suppliers (sub-suppliers of the main supplier) of EU-based and German companies. The companies in the supplier position are also the driving force of exports from these non-EU countries to the EU. Although the supply chain legislation will apply to companies located in the EU and Germany and impose various obligations on them, the law in the EU, and especially in Germany, foresees holistic harmonization across the entire supply chain. For this reason, the cross-border effects of the law will be inevitable, and EU and German companies will require their non-EU suppliers to comply with these regulations and to carry out, or have carried out, the necessary audits in this direction. These regulations will undoubtedly also apply to the subsidiaries of companies residing in Germany that operate in non-EU countries, and these companies will likewise be subject to the due diligence obligations stipulated by the law.

1 The Case For Stakeholder Capitalism, McKinsey Report, https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-case-for-stakeholder-capitalism.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 578–584, 2022. https://doi.org/10.1007/978-3-030-90421-0_50

2 Kevin Sobel-Read, Global Value Chains: A Framework for Analysis, Transnational Legal Theory 5, 2014, (364–407) 371.
3 For a detailed economic analysis of forced labour: ILO Report, Profits and Poverty: The Economics of Forced Labour, 2014, https://www.ilo.org/wcmsp5/groups/public/---ed_norm/---declaration/documents/publication/wcms_243391.pdf.

2 The New Supply Chain Due Diligence and Compliance Rules in Europe

Ensuring decent work has played an essential role for the International Labour Organization (ILO) since 2000.4 To achieve this goal, the ILO has developed and implemented the 'decent work agenda' in line with the UN Sustainable Development Goals.5 In recent years, fighting against forced labour and promoting decent work has also become a fundamental policy in Europe.6 Nowadays, supply chain compliance, a crucial instrument for fostering decent work and strengthening human rights, has been at the heart of the European Union's policy agenda. On 18 February 2021 the European Commission announced a 'Communication on Trade Policy Review',7 which aims to rebuild the EU's trade policy towards the objectives of the green and digital transformation of the EU economy. For building a sustainable trade policy, the communication provided a framework for European companies to manage the risk of forced labour in their supply chain operations in conformity with international due diligence principles. On July 12, 2021, the European Commission and the European External Action Service (EEAS) announced the 'guidance on supply chain due diligence for the European Union' to meet the commitment explained in the communication.8 The guidance aims to address the risk of forced labour in supply chain operations and intends to create a framework for compliance with the general principles combatting forced labour. The guidance requires 'supply chain due diligence' from the perspective of the protection of human rights, which aims to prevent detrimental effects on the employees of direct and indirect suppliers. The guidance gives a non-binding recommendation to companies in the EU.9 Therefore this policy document does not create any legal obligations for businesses; it only provides practical advice on the effective use of existing international due diligence standards and principles.

The guidance can be regarded as an indication that the proposed law (the mandatory human rights due diligence law) will in the future also be applicable to non-EU domiciled companies and will bring serious sanctions for non-compliance.10 After the announcement of the European Union's Green Deal11 in December 2019, which outlines the EU's roadmap for achieving its climate change targets over the next 30 years

4 https://www.ilo.org/global/topics/decent-work/lang--en/index.htm.
5 https://sdgs.un.org/goals.
6 European Commission Communication, Promoting decent work for all – The EU contribution to the implementation of the decent work agenda in the world, COM/2006/0249 final; Council Conclusions on Human Rights and Decent Work in Global Supply Chains, December 2020.
7 European Commission Communication, Trade Policy Review – An Open, Sustainable and Assertive Trade Policy, COM (2021) 66 final; https://trade.ec.europa.eu/doclib/docs/2021/february/tradoc_159438.pdf.
8 Guidance on Due Diligence for EU Businesses to Address the Risk of Forced Labour in Their Operations and Supply Chains; https://trade.ec.europa.eu/doclib/docs/2021/july/tradoc_159709.pdf.
9 Guidance, p. 2, footnote 7.
10 https://www.mayerbrown.com/en/perspectives-events/blogs/2021/02/the-eus-proposed-mandatory-human-rights-due-diligence-law--what-you-need-to-know.
11 https://ec.europa.eu/info/strategy/priorities-2019-2024/european-green-deal_en.


to prevent climate change and to leave sufficient resources for future generations, the transformation of supply chain compliance came into consideration. The green transition will impact European industry by creating markets for clean technologies and products. As a result, the entire value chain in different sectors, from food to logistics, tourism to construction, industry to finance, will be redefined according to the new policies and regulations. As a result of these legislative and policy developments in the fields of both decent work and the green transformation, companies will in the future have to redesign their supply chain management systems and processes in line with global compliance standards. To integrate these new compliance rules into supply chain processes, it is crucial to build effective collaboration between supply chain, legal and compliance, and sustainability departments.

3 The New Supply Chain Due Diligence and Compliance Rules in Germany

On 11 June 2021, the German parliament enacted the Federal Act on Corporate Due Diligence Obligations in Supply Chains12 (German Supply Chain Due Diligence Act). The new law aims to protect the rights of employees who work for suppliers producing goods for the German market and to ensure environmental sustainability along the global value chain.

3.1 Aim of Legislation

With the enactment of the new German supply chain law, the protection of basic human rights and the environment along the entire supply chain has become a duty and obligation of undertakings. To achieve this goal, detailed due diligence obligations for companies have been created. It is important to underline that the implementation of German social standards is not required by the law; ensuring basic human rights will be sufficient. In the memorandum of the governmental draft of the law, the aim of the new legislation is summarized as follows: "Through this Act, companies having a business seat in the Federal Republic of Germany above a certain size will be obliged to better fulfill their responsibility in the supply chain with regard to respect for internationally recognised human rights by implementing the core elements of human rights due diligence. This is intended, on the one hand, to strengthen the rights of people affected by corporate activities in supply chains and, on the other hand, to take into account the legitimate interests of companies in legal certainty and fair competitive conditions."13

12 Gesetz über die unternehmerischen Sorgfaltspflichten zur Vermeidung von Menschenrechtsverletzungen in Lieferketten – Lieferkettensorgfaltspflichtengesetz – LkSG.
13 Translation: https://www.fleschadvogados.com.br/publicacoes/german-due-diligence-in-supply-chains-act-and-the-operation-in-brazil/.

3.2 Scope of Legislation

The new law will enter into force on 1 January 2023. From 2023 onwards, companies with more than 3,000 employees (over 600 companies in Germany) and, from 2024 onwards, companies with more than 1,000 employees (2,900 companies) will be obliged to fulfil their due diligence obligations. After 2025 the area of application will be evaluated.

3.3 Obligations of Companies

The new supply chain law in Germany sets out requirements (including due diligence systems) for companies that are tiered according to the different stages within the supply chain: the company's own business operations, the company's direct suppliers, and the company's indirect suppliers. The requirements differ according to the kind and extent of the business activity, the degree of influence the company has on the party committing the violation, and the typically expected severity of the violation. Companies are responsible for the whole of their supply chain operations. In other words, companies must meet the due diligence obligations throughout the entire supply chain, from raw materials to the completed sales product. Some companies, depending on the sector they operate in, may already be implementing some of the requirements brought by the new supply chain law, for example those operating in compliance with the EU Conflict Minerals Regulation and/or the EU CSR Directive. Therefore a 'tailor-made' compliance and due diligence mechanism should be applied for every firm, taking the existing measures into consideration. To comply with the new German supply chain law, undertakings must fulfil the following obligations:14

• Draft and adopt a policy statement respecting human rights.
• Carry out a risk analysis by implementing procedures for identifying disadvantageous impacts on human rights.
• Build a risk management system (incl. remedial measures) to prevent potential adverse impacts on human rights.
• Establish a complaint mechanism.
• Implement transparent public reporting.

In the case of a breach of the law, the company is obliged to take immediate steps to avoid or remedy the situation in its own area of business. Furthermore, the company is obliged to take the necessary measures to end or prevent a violation by a direct or indirect supplier. Here, companies must define and carry out concrete plans and strategies for suppliers to avoid or minimize the related risks. Within the scope of a supplier audit (due diligence), the risk factors are determined first. For example, the legal situation in the country in which the supplier operates (being a party to and implementing ILO agreements, protection of trade union rights,

14 https://www.bmz.de/resource/blob/74292/3054478dd245fb7b4de70889ed46b715/supply-chain-law-faqs.pdf.


legal regulations on the employment of convicts in prisons, restrictions on working life according to ethnic and religious affiliation, etc.), the supplier's policy on employing migrant workers, the kind of subcontracting model applied, and the supplier's policy on female employees will play a role in the supplier audit. With regard to decent work standards, the new law does not regulate any minimum wages; however, it makes reference to ILO standards and recommendations on decent wages.

3.4 Sanctions and Enforcement

BAFA (the German Federal Office for Economic Affairs and Export Control)15 is designated as the executive authority for the application of the law. BAFA has the power to monitor companies externally, carry out investigations and impose fines. This means that people may report their complaints regarding supply chain related human rights violations, besides the German courts, directly to BAFA. The most important consequence of non-compliance with the law is the administrative fines imposed by BAFA. Furthermore, infringements of the law may result in the exclusion of companies from public procurement processes for up to 3 years. There are no civil sanctions under the new law; the civil remedies under the general laws (German Civil Code, BGB) continue to apply. The law does not set out any criminal sanctions. Non-governmental organizations may go to court on behalf of the affected persons (so-called representative action), even if those persons are not resident in Germany. This means that the new law designs a better protection mechanism for injured parties, especially those whose paramount legal positions have been violated. Failure to comply with the obligations may result in administrative fines against the company starting from EUR 100,000 for each violation. Regarding the required due diligence measures and standards, the Act imposes fines of up to EUR 8,000,000 or up to 2% of annual turnover for companies with more than €400 million ($486 million) in turnover.
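The fine ceiling described above (EUR 8,000,000, or 2% of annual turnover once turnover exceeds EUR 400 million) can be expressed as a small helper. This is an illustration of the rule as summarized in the text, not a full restatement of the Act's differentiated fine schedule.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper fine bound under the LkSG as summarized in the text above."""
    if annual_turnover_eur > 400_000_000:
        return 0.02 * annual_turnover_eur  # turnover-based ceiling
    return 8_000_000.0                     # fixed ceiling otherwise

print(max_fine_eur(1_000_000_000))  # 20000000.0
```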

4 Conclusion

As a result of the paradigm shift from shareholder primacy to stakeholder capitalism, responsible business and the related business strategies of companies are gaining momentum. In the future, companies will be expected not only to create profits but also to contribute to society and the environment, as corporations will serve not only the interests of their shareholders but also those of all stakeholders who are outside the company yet interact directly with it, such as customers, suppliers and non-shareholder investors. As a result of this shift to stakeholder capitalism, demonstrating corporate compliance and responsible business conduct, and especially supply chain compliance, is of utmost importance. During this transformation of business, companies that resist complying with the new system will find themselves at a competitive disadvantage. The new regulations regarding supply chain compliance in Europe establish a strict liability regime for corporate supply chains, preventing the exploitation of employees of direct and indirect suppliers who are in a weak position. Moreover, the new legal regime for supply chains creates an instrument for the implementation of the Green Deal and sustainability goals and supports the transformation of the EU into a resource-efficient and competitive economy.

15 Bundesamt für Wirtschaft und Ausfuhrkontrolle, https://www.bafa.de/DE/Home/home_node.html.

References

1. ILO Report: Profits and Poverty, The Economics of Forced Labour (2014). https://www.ilo.org/wcmsp5/groups/public/---ed_norm/---declaration/documents/publication/wcms_243391.pdf
2. Sobel-Read, K.: Global value chains: a framework for analysis. Transnational Legal Theory 5, 364–407 (2014)
3. United Nations Sustainable Development Goals. https://sdgs.un.org/goals
4. European Commission Communication, Promoting decent work for all – The EU contribution to the implementation of the decent work agenda in the world, COM/2006/0249 final; Council Conclusions on Human Rights and Decent Work in Global Supply Chains, December 2020
5. The Case For Stakeholder Capitalism: McKinsey Report (2020). https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-case-for-stakeholder-capitalism
6. European Commission Communication, Trade Policy Review – An Open, Sustainable and Assertive Trade Policy, COM (2021) 66 final. https://trade.ec.europa.eu/doclib/docs/2021/february/tradoc_159438.pdf
7. Guidance on Due Diligence for EU Businesses to Address the Risk of Forced Labour in Their Operations and Supply Chains. https://trade.ec.europa.eu/doclib/docs/2021/july/tradoc_159709.pdf
8. Eastwood, S., Ford, J., Reynolds, L.: Newsletter, The EU's Proposed Mandatory Human Rights Due Diligence Law – What You Need to Know. https://www.mayerbrown.com/en/perspectives-events/blogs/2021/02/the-eus-proposed-mandatory-human-rights-due-diligence-law--what-you-need-to-know
9. Federal Ministry for Economic Cooperation and Development, Supply Chain Law, FAQs. https://www.bmz.de/resource/blob/74292/3054478dd245fb7b4de70889ed46b715/supply-chain-law-faqs.pdf

Sustainable Factors for Supply Chain Network Design Under Uncertainty: A Literature Review

Simge Yozgat1 and Serpil Erol2

1 Industrial Engineering, Çankaya University, Ankara, Turkey
[email protected]
2 Industrial Engineering, Gazi University, Ankara, Turkey
[email protected]

Abstract. The concept of sustainability, which rests on the three pillars of economic, environmental, and social factors, has become an effective means of increasing competitiveness for institutions. Being sustainable in the supply chain enables enterprises to respond to increasing customer needs in the most appropriate way. Today, traditional supply chains are being replaced by sustainable logistics network designs due to environmental and social requirements. In this study, considering uncertainty, studies on closed-loop supply chains, which are formed by the integration of forward and reverse logistics, as well as on forward and reverse logistics by themselves, are examined on the basis of sustainability factors. Sustainability sub-factors are also included in this study. As a result of the research, brief explanations are given of sustainable supply chain network design under uncertainty covering all three sustainability factors, and gaps in the literature are clarified as future research opportunities.

Keywords: Sustainability · Three pillars · Supply chain · Network design · Uncertainty

1 Introduction

Supply chain network design and management is a frequently discussed research subject today, and its importance is increasing day by day. In line with increasing consumer awareness and the green logistics concept, companies are trying to develop strategic plans and environmentally sensitive systems. These systems cover all phases of a product, from raw material procurement to consumption. In addition, the fact that such systems are very difficult to imitate, owing to their long-term plans, enables companies to make a difference by providing a competitive advantage in the market. Businesses are looking for ways to respond to increasing consumer needs, maintain their profitability and gain competitive advantage, rather than relying on differentiation in product and sales policies. In this context, efficiency in production, distribution and recycling processes should be ensured, and logistics channels should work more systematically and in coordination. This is possible by managing the supply chain effectively. However, effective management of the supply chain is quite difficult and complex in terms of the number and structure of its existing channels and the integration of new technologies.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 585–597, 2022. https://doi.org/10.1007/978-3-030-90421-0_51


In its simplest definition, supply chain; It is the whole of logistics systems that covers the process of transforming raw materials into final products and/or services and delivering them to the end user. The control of material and information flow between suppliers, manufacturers, distributors, customers, and all other intermediary channels along the chain has created the term of supply chain management. Companies should analyse the market and competition conditions well and configure their supply chains to adapt to changing environmental conditions. Effective management of the supply chain is among the critical factors of sustainable success for companies in difficult economic and competitive conditions. Being sustainable in the supply chain will increase the success of companies in all activities as it will include many sub-processes from raw material supply to delivering the final product to demander. Sustainable supply chain management: In addition to providing economic competitive advantage to businesses, it is expressed as an approach that tries to add value to stakeholders at all stages of the chain in environmental and social dimensions [1]. With this approach, businesses try to increase all activities that will add value to the product or service throughout its movement in the chain. To achieve such developments, businesses need to lay out, design, work up and manage their whole supply chain with a sustainability manner by taking into consideration does not compromise the sustainability of involved players. This can only be possible by considering the sustainable supply chain factors together. The primary aim of most businesses is to try to minimize total chain cost or maximize total profit. In addition, businesses are trying to use their resources by developing environmentally sensitive systems with the effect of increasing consumer awareness and globalizing world. 
For a sustainable supply chain, it is not enough to consider only economic or environmental factors. Although the environmental dimension of sustainability is more prominent, more sustainable systems are obtained in supply chains where social factors are also addressed. Therefore, it is necessary to consider sustainability as a whole of economic, environmental and social factors, also known as the 'three pillars' of sustainability. As Fig. 1 shows, when the factors are considered together, conflicting objectives can be defined under the same concept.

Fig. 1. Three pillars of sustainability [2]

Sustainable Factors for Supply Chain Network Design


When the factors are considered together, the complexity of the problem increases further if uncertainties such as demand and product return quantities in the supply chain come into question. However, today's optimization-based support systems and developments in information technologies make sustainable supply chain network designs possible.
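Handling conflicting objectives in one model ultimately means trading them off explicitly. As a minimal illustration (all design data, weights, and the scale factor below are hypothetical, not drawn from any cited study), a weighted-sum scalarization of the three pillars can be sketched as:

```python
# Hypothetical scores for two candidate network designs on the three pillars
# (cost and CO2 are minimized; the social score, in [0, 1], is maximized).
designs = {
    "design_a": {"cost": 100.0, "co2": 50.0, "social": 0.8},
    "design_b": {"cost": 120.0, "co2": 30.0, "social": 0.9},
}
weights = {"cost": 0.5, "co2": 0.3, "social": 0.2}
SOCIAL_SCALE = 100.0   # crude normalization so the social term is comparable

def scalarized(d):
    """Weighted sum to minimize; the social benefit enters with a minus sign."""
    return (weights["cost"] * d["cost"]
            + weights["co2"] * d["co2"]
            - weights["social"] * SOCIAL_SCALE * d["social"])

best_design = min(designs, key=lambda name: scalarized(designs[name]))
```

Weighted sums are the simplest scalarization; constraint-based methods such as the epsilon constraint method, which recur throughout the studies reviewed below, are common alternatives when individual points of the trade-off curve must be generated.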

2 Literature Review

The economic dimension of sustainability can be observed in almost all supply chain studies, past and present. In the objective functions of these studies, models that minimize the total cost of the supply chain in question or maximize the total profit of the enterprise are frequently encountered. The mathematical models are often constructed as deterministic models for ease of solution, while the stochastic approach dominates in supply chain problems with uncertain quantities in parameters such as demand, return and waste quantities. Uncertainty has also been addressed with fuzzy or robust modelling logic. The number of objective functions varies with the number of sustainability factors studied: in general, studies that deal with a single dimension of sustainability, such as economics, are single-objective, while studies that also consider environmental or social factors are multi-objective. The primary goal in supply chain problems is to optimize components such as delivery speed, costs, stock levels and production quantities. In terms of solution methods, optimization-based approaches are used most often. Studies using exact optimization techniques reach a solution only for small problem sizes; as complexity increases, optimization-based support systems become insufficient, especially for problem types in the NP-hard class. For this reason, heuristic solution methods have been developed, and some studies combined more than one metaheuristic to produce more effective solutions [3–6]. Apart from these, simulation-based solution methods are also used to solve the proposed models.
An intelligent hybrid algorithm combining genetic algorithm and simulation solution methods has been designed [7]. In addition to optimization solution methods, simulation techniques have been used to display results [8]. Moreover, a simulation-based system dynamics optimization approach has been used under alternative scenarios in a closed-loop supply chain problem with remanufacturing [9]. Traditional supply chain problems mostly involve forward logistics activities. For these settings, the widely known standard mixed integer linear programming (MILP) approach was first developed by Mirchandani and Francis [10]. At that time, a standard set of models for reverse logistics had not yet been established; a MILP model for recycling industrial by-products was later developed by Spengler et al. in 1997 [11]. Afterwards, integrated studies including reverse flow activities, in addition to the forward logistics where production and distribution operations are carried out, were started. These systems, first proposed in 2002 [12] and formed by integrating forward and reverse logistics, are also called closed-loop supply chains. That study showed that the most successful companies in reverse logistics are those that manage closed-loop supply chains tied tightly to forward supply chains. Closed-loop supply chain studies covering the economic dimension of sustainability are frequently encountered
in the literature. Rising environmental awareness has also increased the importance given to the concept of green supply chain management. Striking a balance between cost reduction and preservation of the environment has become an effective initiative for businesses to increase their sustainable competitiveness. In this context, the studies in which the economic and environmental factors of sustainability in closed-loop supply chains are handled together are shown in Table 1.

Table 1. Closed loop supply chains including environmental and economic factors

Paper | Model | Objective function | Methodology | Uncertainty
13 | Sto. | Multiple | Opt. | *
14 | Sto. | Multiple | Opt. | *
15 | Robust | | Opt. | *
16 | Sto. | Multiple | Opt., Heu. | *
17 | Det. | Multiple | Heu. |
18 | Det. | Multiple | Opt. |
19 | Fuzzy | Multiple | Opt. | *
20 | Det. | Single | Opt. |
21 | Det. | Multiple | Opt. |
22 | Fuzzy, Sto. | Multiple | Opt. | *
23 | Sto. | | Opt. | *
24 | Robust, Sto. | | Opt., Heu. | *

Det.: Deterministic, Sto.: Stochastic, Opt.: Optimization, Heu.: Heuristic, Sim.: Simulation
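Many of the MILP network design models surveyed here are built around facility location decisions: binary open/close variables plus an assignment of customers to open facilities. The sketch below uses hypothetical data and brute-force enumeration in place of the MILP solver such models are normally handed to; it only illustrates the structure of the decision on an uncapacitated instance.

```python
def solve_uflp(fixed_cost, ship_cost, demand):
    """Enumerate every nonempty facility subset; serve each customer from
    its cheapest open facility (the assignment part is trivial here)."""
    n = len(fixed_cost)
    best_cost, best_open = float("inf"), []
    for mask in range(1, 1 << n):
        opened = [i for i in range(n) if mask >> i & 1]
        cost = sum(fixed_cost[i] for i in opened)
        for j, d in enumerate(demand):
            cost += d * min(ship_cost[i][j] for i in opened)
        if cost < best_cost:
            best_cost, best_open = cost, opened
    return best_cost, best_open

# Hypothetical instance: 2 candidate facilities, 3 customers.
fixed_cost = [100, 120]              # cost of opening each facility
ship_cost = [[4, 6, 9], [5, 4, 7]]   # unit cost, facility i -> customer j
demand = [30, 40, 20]
```

On this instance opening only the second facility is optimal; real network design models add capacities, multiple echelons, and the reverse flows discussed above, which is what makes them NP-hard in general.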

Judging by the studies in Table 1, an integration perspective is presented to develop a green and sustainable closed-loop supply chain network under uncertain demand [13]. In the stochastic programming model, two objectives are proposed, for CO2 emissions and total operating cost. A scenario-based method is suggested to represent the uncertain demand, and a Lagrangian relaxation method was used to solve the model. A closed-loop supply chain network that considers sustainability dimensions and quantity discounts under uncertainty was designed in [14]. For this purpose, a multi-objective stochastic optimization model is formulated for the combined supplier selection and location-allocation-routing problem. The first and second objectives of the model seek to minimize total costs and environmental emission impacts, respectively; the third is to maximize the responsiveness of the proposed network. The epsilon constraint method is used as the solution method. To produce robust closed-loop supply chain designs that reduce uncertainty and the greenhouse gas emission burden, data-driven approaches using 'Big Data' have been proposed [15]. More specifically, a distributionally robust optimization (DRO) model and an adaptive robust optimization (ARO) model for the design of transport and the locations of waste-disposal facilities of closed-loop supply chains
to address multiple uncertainties (customers' expectations, demands, and uncertainties in the recovery phase) were developed. Historical data on the uncertain parameters of previous periods is used in both models to make robust decisions in later stages, and the Kullback-Leibler (K-L) divergence was applied to the uncertain parameters to evaluate the value of the data. A two-stage multi-objective stochastic model for a closed-loop supply chain has been developed by considering risk, economic and environmental considerations simultaneously [16]. To solve the model, a series of memetic metaheuristics is considered; the epsilon constraint approach was also applied at small sizes to validate the metaheuristic results. Virus Colony Search (VCS) and the Keshtel Algorithm (KA) are adapted to the multi-objective model, and the Pareto optimal solutions were compared with four assessment metrics under different criteria. The purchase quantity, production costs, costs of allocating customers to distribution centers, demand and return rates are uncertain; in addition, the rate of return depends on customer demand, i.e., the number of products returned to recovery centers is assumed to be a fraction of what customers demand. A green urban closed-loop logistics distribution network model that minimizes greenhouse gas emissions and total operating cost has been proposed [17], and multi-objective Pareto optimization results were obtained. These results show that a combination of transport vehicles, including electric and fuel types, can effectively provide a win-win strategy for distribution costs and carbon emissions that would suit businesses. In addition, this study contributes to the vehicle routing literature by introducing a closed-loop cooperative logistics distribution mode to promote green logistics development and optimize existing logistics distribution issues.
Heuristic methods were used to solve the single-objective version of the model; tabu search, ant colony and adaptive quantum ant colony algorithms were designed to compare results. [18] suggested a multi-stage, multi-product and multi-objective mixed integer linear programming model for designing and planning a forward and reverse green logistics network. The primary goal is to minimize processing, operating, transportation and fixed plant costs. Minimizing the amount of CO2 emissions, measured in grams, is the second objective, and the third is to optimize the number of machines on the production line. The proposed model was validated by applying it to the white goods industry on a few test problems. On the solution side, the epsilon constraint method is developed to obtain a set of Pareto solutions. For a multi-product circular closed-loop supply chain, Fuzzy Analytic Network Process (FANP), Fuzzy Decision-Making Trial and Evaluation Laboratory (FDEMATEL) and multi-objective mixed integer linear programming models have been established [19]. For order allocation and circular supplier selection, a hybrid approach has been developed for the multi-depot, capacitated green vehicle routing problem with heterogeneous vehicles. A fuzzy solution approach is proposed to convert the multi-objective model containing demand uncertainty into a single-objective one, and the model has been applied to an automobile timing belt manufacturer to demonstrate its applicability in the real world. The balance between environmental and operational performance measures for shipping products in a closed-loop supply chain network problem was examined in [20]. To formulate the model, a linear programming method is used that strikes the balance between several costs, including
transportation of goods and emissions in the chain. The solution results are presented for several scenarios using a realistic network example. A multi-objective mixed integer linear programming model is presented for sustainable supply chain design that takes into consideration the principles of Life Cycle Assessment (LCA) and the traditional material balance constraints at each node of the supply chain [21]. Here, gas emissions from various production processes and transport systems and solid and liquid wastes are treated separately. Numerical scenarios with randomly derived parameters and constraints show the practical solvability of the model. A fuzzy-stochastic modelling approach is used in [22] to deal with different types of uncertainty. Considering network flexibility, a balance between economic and environmental performance is sought, with carbon emissions used to measure environmental performance; Pareto optimal solutions are obtained with a sample-average-approximation-based weighting method. [23], which examines demand uncertainty, deals with retail market configurations. A two-stage stochastic program is used as the modelling approach and the L-shaped algorithm as the solution methodology; carbon emissions are considered as the environmental impact of the sustainable closed-loop supply chain. Sensitivity analyses under different policies provide some managerial implications. A recent study [24] examined reverse logistics in a closed-loop supply chain by developing a robust stochastic optimization model, using the chance constraint method to deal with the stochastic nature of the problem. The results are compared with a proposed heuristic algorithm, and a real-life case study in the automobile industry is carried out considering a carbon tax policy as the environmental impact of the supply chain.
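Several of the studies above (e.g. [14, 18]) obtain Pareto fronts with the epsilon constraint method: one objective is minimized while the other is bounded by a sweep of epsilon values. A minimal sketch on a hypothetical two-lane mode-choice instance, with enumeration standing in for the underlying solver:

```python
from itertools import product

# Hypothetical bi-objective instance: choose one transport mode per lane;
# each mode has (cost, CO2) and the objectives are sums over the lanes.
modes = [
    [(10, 8), (14, 4)],   # lane 0: e.g. truck vs rail
    [(6, 6), (9, 2)],     # lane 1
]

def outcomes():
    """All attainable (total_cost, total_co2) pairs over mode combinations."""
    for choice in product(*modes):
        yield (sum(c for c, _ in choice), sum(e for _, e in choice))

def epsilon_constraint(epsilons):
    """Minimize cost subject to CO2 <= eps, once per eps in the sweep."""
    front = set()
    for eps in epsilons:
        feasible = [o for o in outcomes() if o[1] <= eps]
        if feasible:
            front.add(min(feasible))   # cheapest feasible solution
    return sorted(front)
```

Tightening the CO2 bound step by step traces the cost/emissions trade-off curve, which is exactly how the cited models generate their Pareto-optimal network designs.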
In summary, as can be understood from Table 1, where uncertainty is ignored, the deterministic modelling approach dominates. The environmental and economic factors of sustainability have been examined as two separate objectives as well as combined into a single objective, and optimization methods are frequently used as the solution technique. Uncertainties in customer demand, product return quantities and therefore in the model parameters are handled with robust, fuzzy or stochastic modelling approaches. Some studies in the literature take a more specific approach than the closed-loop supply chain, with logistics network designs that are only forward or only reverse. However, due to the stochastic reverse flow of products, the inconsistent quality of used products, and the variation in prices of remanufactured and recycled products, designing a reverse logistics system is more complex than designing a forward logistics network; accordingly, studies on reverse logistics network design have increased tremendously in recent years. Among these, uncertainty in reverse logistics network designs covering the economic and environmental factors of sustainability has been investigated with multi-objective stochastic modelling approaches [25, 26]; optimization techniques such as the sample average approximation procedure and the epsilon constraint method were used as solution methods, and a scenario-based approach is used to handle the stochastic parameters. In reverse logistics network design, environmental and economic factors under uncertainty [27] are also examined with a single-objective deterministic model and, in [28], with a stochastic approach. Apart from these, a multi-objective deterministic model that includes uncertainty in parameters such as distance, demand and cost
and advanced supply chain planning [7] used simulation and heuristic techniques in the solution method. Looking at forward logistics network designs that consider economic and environmental factors, [29] proposed a multi-objective sustainable mathematical model for designing a bioenergy supply chain network; in addition to the economic and environmental factors, water and energy are also regarded as sustainability sources in this study, and a Geographic Information System (GIS) technique and goal programming are used to solve the proposed model. To obtain a green and resilient supply chain, [30] proposed a robust multi-objective optimization model with heuristic techniques in the solution methodology; economic and environmental factors are considered in a supply chain consisting of suppliers, manufacturers and warehouses. A multi-objective forward logistics network model using a robust mathematical modelling approach to address the economic and environmental factors of sustainability under uncertainty is proposed in [31], with the epsilon constraint method used to obtain Pareto optimal solutions. None of the studies mentioned above considered social factors; however, when it comes to sustainability, economic, environmental and social factors should be examined simultaneously. The studies in which the three pillars of sustainability are examined together are shown in Table 2.

Table 2. Supply chains including environmental, economic and social factors

Paper | Model | Objective function | Methodology | Logistics | Uncertainty
5 | Det. | Mult. | Heu. | Forw., Reverse |
6 | Det. | Mult. | Heu. | Forw., Reverse |
32 | Det. | Mult. | Opt. | Forw., Reverse |
33 | Det. | Mult. | Opt. | Forw., Reverse |
34 | Sto. | Mult. | Opt. | Reverse | *
35 | Sto. | Mult. | Opt. | Reverse | *
36 | Fuzzy | Mult. | Opt. | Reverse | *
37 | Fuzzy | Mult. | Heu. | Reverse | *
38 | Robust | Mult. | Opt. | Forw. | *
39 | Robust | Mult. | Opt. | Forw. | *
40 | Robust | Mult. | Opt. | Forw. | *
41 | Fuzzy, Sto. | Mult. | Opt. | Forw. | *
42 | Sto. | Mult. | Opt. | Forw., Reverse | *
44 | Det. | Mult. | Heu. | Forw. |

Det.: Deterministic, Sto.: Stochastic, Mult.: Multiple, Opt.: Optimization, Heu.: Heuristic, Sim.: Simulation, Forw.: Forward

Table 2 includes studies whose logistics are only forward, only reverse, or both integrated. Here, more emphasis is placed on robust, fuzzy and stochastic models, since the three pillars of sustainability are examined together
and systems closer to real life are in question. The number of deterministic models is relatively small compared to Table 1, and all the models have a multi-objective function structure. Optimization techniques are mostly used as solution methods. Among the closed-loop supply chain studies, [5, 6], which adopted the deterministic modelling approach because there was no uncertainty, used heuristic techniques, while [26, 27] used optimization techniques. [34, 35] are stochastic models, while [36, 37] used fuzzy modelling techniques, all in reverse logistics network designs with uncertainty; in terms of solution method, [37], unlike the other reverse logistics network designs, used heuristic methods. Robust modelling techniques have been used for forward logistics network designs with uncertainty [38–40]. [41], which developed an interactive approach based on two-phase stochastic programming and fuzzy probabilistic multi-objective programming, used optimization techniques to overcome problems related to uncertainties in demand, cost and capacity. As can be seen, uncertainty appears in studies involving only forward or only reverse logistics network design; however, among the studies in which forward and reverse logistics are integrated, uncertainty was examined in only one [42]. This study presents a decision support tool called ToBLoOM (Triple Bottom Line Optimization Modelling) for the design of a sustainable supply chain. A general multi-objective mixed integer linear programming model is established that combines interdependent strategic and tactical decisions. The strategic decisions are facility location and allocation, capacity decisions, supplier and technology selection, and transportation networks including both single-mode and intermodal alternatives; the tactical decisions are procurement planning, purchasing levels, product recovery and remanufacturing.
Demand uncertainty is also analysed with a stochastic approach in this study. The three pillars of sustainability are addressed with multi-objective programming: the economic dimension is measured by Net Present Value (NPV); environmental impacts from establishing the factory, production, remanufacturing and transportation are assessed with the LCA methodology ReCiPe [43]; and the social dimension is measured by a socio-economic indicator adopted by the European Union in its Sustainable Development Strategy, Gross Domestic Product (GDP). The deterministic solution is defined as the assumed worst-case scenario in terms of economic performance. For more realistic solutions, stochastic objective functions are obtained by adding a stochastic approach to some decision variables of the deterministic model through scenario analysis. An application study was conducted for a European-based company with markets in Europe and South America. This study makes a major contribution to the literature by addressing several research gaps with an integrated method that allows simultaneous evaluation of interacting decisions in the closed-loop supply chain. [44] proposed a multi-objective mathematical model that examines forward logistics between suppliers, manufacturers, distribution centers and customers; heuristic techniques are used in its solution, with an artificial bee colony algorithm modified and adapted to the problem.
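The scenario analyses used in such stochastic models reduce, at their core, to minimizing expected cost over demand scenarios: commit to a first-stage decision, then pay a recourse cost once demand is revealed. A hedged sketch with made-up numbers (a real two-stage stochastic program would optimize the recourse exactly, e.g. via the L-shaped method mentioned for [23]):

```python
# Hypothetical two-stage instance: choose plant capacity now (first stage),
# pay shortage or idle-capacity recourse once demand is revealed (second stage).
scenarios = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]   # (probability, demand)
CAPACITY_COST, SHORTAGE_COST, HOLDING_COST = 2.0, 5.0, 1.0

def expected_cost(capacity):
    """First-stage cost plus probability-weighted recourse over scenarios."""
    cost = CAPACITY_COST * capacity
    for p, d in scenarios:
        cost += p * (SHORTAGE_COST * max(d - capacity, 0.0)
                     + HOLDING_COST * max(capacity - d, 0.0))
    return cost

# A grid search stands in for the exact optimization here.
best_capacity = min(range(0, 101, 10), key=expected_cost)
```

The deterministic model corresponds to replacing the scenario set with a single assumed demand; comparing its solution with the expected-cost optimum is what the scenario analyses in the cited studies quantify.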


3 Sub-factors of Sustainability

When sustainable supply chains are managed with the right strategic decisions, they bring various benefits to all members of the chain from beginning to end, including customers. With correct planning, lead times are shortened, waste and costs are reduced, and employee satisfaction increases through better working conditions. In addition, improved business reputation provides a competitive advantage to the members of the chain. Apart from their benefits to businesses and chain members, sustainable supply chains are also of great importance for their sensitivity to the environment: in today's conditions, much more attention should be paid to protecting natural resources and ensuring their conscious consumption. The studies of recent years that focus on environmental factors alongside the economic dimension of sustainability prove this. With increasing awareness, customers have become more sensitive to environmental and social factors. Considering all this, environmental, economic and social factors should be handled together to increase the benefits of the sustainable supply chain. The studies of Table 2, which deal with these factors simultaneously, are broken down by sustainability sub-factors in Table 3. As can be understood from Table 3, LCA methods were used to evaluate the environmental effects of sustainability in [5]. The ReCipe method, used to convert long life cycle inventory results into a limited number of indicator points, is adopted in [33, 38, 42]; Eco-Indicator 99, the damage-oriented LCA method, is used to analyse environmental effects in [33, 34, 37]. Unconscious industrialization, the reduction of green areas, fossil fuel consumption and uncontrolled population growth are among the causes of rising CO2 emissions.
Therefore, [35, 39, 41] used CO2 emissions when analysing environmental factors. [44] based environmental impacts on unit and fixed costs, while [6] discussed the environmental benefits and harms resulting from the use of end-of-life products, and [32] tried to minimize the total amount of fuel consumed. The economic factors in the studies are addressed through cost, NPV and profit analyses. The social factors, on the other hand, differ with the type of problem studied; among them are customer service levels, GDP, job opportunities and losses, working conditions, and social life cycle analysis. Furthermore, the seven core subjects of ISO 26000 are covered as the social responsibility dimension in [40]. Making use of multi-criteria solution methods, [36] examined environmental, economic and social factors as criteria.

Table 3. Sustainability sub-factors

Paper | Environmental | Economic | Social | Uncertainty
5 | LCA | Cost | Job opportunities, losses |
6 | Environmental benefits, harms | Cost | Job opportunities, worker safety |
32 | Energy consumption | Profit | Customer service level |
33 | LCA/ReCipe, Eco-Indicator 99, CML 2002 | Cost | Social benefit indicator |
34 | LCA/Eco-Indicator 99 | Cost | Customer service level | Demand, waste
35 | CO2 emissions | Expected profit | Job opportunities, losses | Parameter
36 | Environmental criteria | Economic criteria | Social criteria | Qualitative inputs
37 | LCA/Eco-Indicator 99 | NPV | Job opportunities, losses | Parameter
38 | LCA/ReCipe | Cost | SLCA | Parameter
39 | CO2 emissions | Cost | Unemployment, migration, traffic congestion | Demand, cost
40 | Water footprint, CO2 footprint | Profit | ISO 26000 | Parameter
41 | CO2 emissions | Cost | Working conditions, social commitments | Demand, cost, capacity
42 | LCA/ReCipe | NPV | GDP | Demand
44 | Environmental effects | Cost | Customer service level |

SLCA: Social Life Cycle Analysis
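Single-score LCA methods such as ReCiPe and Eco-Indicator 99 condense inventory results in the same structural way: each impact category is normalized against a reference value and the normalized results are combined with weights into one indicator score. The numbers below are invented placeholders; the real normalization and weighting factor sets are published with each method.

```python
# Invented placeholder data, not actual ReCiPe / Eco-Indicator 99 factors.
impacts = {"climate_change": 1200.0, "acidification": 3.5}   # inventory results
norm    = {"climate_change": 8000.0, "acidification": 50.0}  # reference values
weight  = {"climate_change": 0.6,    "acidification": 0.4}   # category weights

def single_score(impacts, norm, weight):
    """Normalize each impact category, then weight and sum into one score."""
    return sum(weight[k] * impacts[k] / norm[k] for k in impacts)
```

It is this single score that lets a network design model treat the whole environmental dimension as one objective alongside cost, as in [33, 38, 42].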

4 Conclusions and Future Research Directions

In this study, peer-reviewed articles published in SCI-indexed journals in recent decades on supply chain network design including sustainability factors under uncertainty are examined. In this context, closed-loop supply chain models that include the economic and environmental factors of sustainability, and selected forward, reverse or closed-loop supply chain models that include the three pillars of sustainability, are reviewed. The authors classify the studies on sustainable supply chain network design in the literature into six categories: sustainability factors, types of the
model, objective functions, solution methodologies, direction of the logistics, and uncertainty issues. Sustainability factors are classified as economic, environmental and social; model types are considered as deterministic, robust, fuzzy and stochastic approaches. Studies that formulate the objective function as single or multiple are included, as are studies that develop optimization-, heuristic- or simulation-based solution approaches. In addition, the logistics aspect of the studies is examined as forward, reverse or closed-loop, and whether each study includes uncertainty is recorded. Papers published in international journals were searched in electronic bibliographic sources such as Scopus, Web of Science and Science Direct using different keywords. Across the literature, the uncertainty factors examined vary with the structure of the model established in each study. It is seen that uncertainties that by their nature change in the real world are examined through parameters such as demand, and therefore product returns, waste amounts, some costs, and system constraints. However, the fact that research dealing with sustainability factors and uncertainty at the same time remains problem-specific, with no general study in the field yet, indicates a gap in the literature.

Acknowledgments. The authors sincerely thank the Organizing Committee and all reviewers for their kind attention and comments.

References

1. Seuring, S., Müller, M.: From a literature review to a conceptual framework for sustainable supply chain management. J. Clean. Prod. 16, 1699–1710 (2008)
2. Carter, C.R., Rogers, D.S.: A framework of sustainable supply chain management: moving toward new theory. Int. J. Phys. Distrib. Logist. Manag. 38, 360–387 (2008)
3. Wang, Y., et al.: Two-echelon logistics delivery and pickup network optimization based on integrated cooperation and transportation fleet sharing. Exp. Syst. Appl. 113, 44–65 (2018)
4. Wang, Y., Zhang, S., Guan, X., Peng, S., Wang, H., Liu, Y., Maozeng, X.: Collaborative multi-depot logistics network design with time window assignment. Exp. Syst. Appl. 140, 112910 (2020)
5. Sahebjamnia, N., Fathollahi-Fard, A.M., Hajiaghaei-Keshteli, M.: Sustainable tire closed-loop supply chain network design: hybrid metaheuristic algorithms for large-scale networks. J. Clean. Prod. 196, 273–296 (2018)
6. Devika, K., Jafarian, A., Nourbakhsh, V.: Designing a sustainable closed-loop supply chain network based on triple bottom line approach: a comparison of metaheuristics hybridization techniques. Eur. J. Oper. Res. 235, 594–615 (2014)
7. Zhang, B., Li, H., Li, S., Peng, J.: Sustainable multi-depot emergency facilities location-routing problem with uncertain information. Appl. Math. Comput. 333, 506–520 (2018)
8. Kim, J., Chung, B.D., Kang, Y., Jeong, B.: Robust optimization model for closed-loop supply chain planning under reverse logistics flow and demand uncertainty. J. Clean. Prod. 196, 1314–1328 (2018)
9. Georgiadis, P., Athanasiou, E.: Flexible long-term capacity planning in closed-loop supply chains with remanufacturing. Eur. J. Oper. Res. 225, 44–58 (2013)

10. Mirchandani, P.B., Francis, R.L.: Discrete Location Theory. Wiley, New York (1990)
11. Spengler, T., Püchert, H., Penkuhn, T., Rentz, O.: Environmental integrated production and recycling management. Eur. J. Oper. Res. 97, 308–326 (1997)
12. Guide, V.D.R., Jr., Van Wassenhove, L.N.: Closed-loop supply chains. In: Klose, A., Speranza, M.G., Van Wassenhove, L.N. (eds.) Quantitative Approaches to Distribution Logistics and Supply Chain Management, pp. 47–60. Springer, Heidelberg (2002). https://doi.org/10.1007/978-3-642-56183-2_4
13. Zhen, L., Huang, L., Wang, W.: Green and sustainable closed-loop supply chain network design under uncertainty. J. Clean. Prod. 227, 1195–1209 (2019)
14. Ebrahimi, S.B.: A stochastic multi-objective location-allocation-routing problem for tire supply chain considering sustainability aspects and quantity discounts. J. Clean. Prod. 198, 704–720 (2018)
15. Jiao, Z., Ran, L., Zhang, Y., Li, Z., Zhang, W.: Data-driven approaches to integrated closed-loop sustainable supply chain design under multi-uncertainties. J. Clean. Prod. 185, 105–127 (2018)
16. Fathollahi-Fard, A.M., Hajiaghaei-Keshteli, M.: A stochastic multi-objective model for a closed-loop supply chain with environmental considerations. Appl. Soft Comput. 69, 232–249 (2018)
17. Wang, J., Lim, M.K., Tseng, M.L., Yang, Y.: Promoting low carbon agenda in the urban logistics network distribution system. J. Clean. Prod. 211, 146–160 (2019)
18. Zarbakhshnia, N., Soleimani, H., Goh, M., Razavi, S.S.: A novel multi-objective model for green forward and reverse logistics network design. J. Clean. Prod. 208, 1304–1316 (2019)
19. Govindan, K., Mina, H., Esmaeili, A., Gholami-Zanjani, S.M.: An integrated hybrid approach for circular supplier selection and closed loop supply chain network design under uncertainty. J. Clean. Prod. 242, 118317 (2020)
20. Paksoy, T., Bektaş, T., Özceylan, E.: Operational and environmental performance measures in a multi-product closed-loop supply chain. Transp. Res. Part E Logist. Transp. Rev. 47, 532–546 (2011)
21. Chaabane, A., Ramudhin, A., Paquet, M.: Design of sustainable supply chains under the emission trading scheme. Int. J. Prod. Econ. 135, 37–49 (2012)
22. Yu, H., Solvang, W.D.: A fuzzy-stochastic multi-objective model for sustainable planning of a closed-loop supply chain considering mixed uncertainty and network flexibility. J. Clean. Prod. 266, 121702 (2020)
23. Tao, Y., Wu, J., Lai, X., Wang, F.: Network planning and operation of sustainable closed-loop supply chains in emerging markets: retail market configurations and carbon policies. Transp. Res. Part E Logist. Transp. Rev. 144, 102131 (2020)
24. Shahparvari, S., Soleimani, H., Govindan, K., Bodaghi, B., Fard, M.T., Jafari, H.: Closing the loop: redesigning sustainable reverse logistics network in uncertain supply chains. Comput. Ind. Eng. 157, 107093 (2021)
25. Yu, H., Solvang, W.D.: Incorporating flexible capacity in the planning of a multi-product multi-echelon sustainable reverse logistics network under uncertainty. J. Clean. Prod. 198, 285–303 (2018)
26. Trochu, J., Chaabane, A., Ouhimmou, M.: A carbon-constrained stochastic model for eco-efficient reverse logistics network design under environmental regulations in the CRD industry. J. Clean. Prod. 245, 118818 (2020)
27. Sadrnia, A., Langarudi, N.R., Sani, A.P.: Logistics network design to reuse second-hand household appliances for charities. J. Clean. Prod. 244, 118717 (2020)
28. Yu, H., Solvang, W.D.: A carbon-constrained stochastic optimization model with augmented multi-criteria scenario-based risk-averse solution for reverse logistics network design under uncertainty. J. Clean. Prod. 164, 1248–1267 (2017)

Sustainable Factors for Supply Chain Network Design

597

29. Mahjoub, N., Sahebi, H.: The water-energy nexus at the hybrid bioenergy supply chain: a sustainable network design model. Ecol. Ind. 119, 106799 (2020) 30. Hasani, A., Mokhtari, H., Fattahi, M.: A multi-objective optimization approach for green and resilient supply chain network design: a real-life case study. J. Clean. Prod. 278, 123199 (2021) 31. Dehghani, E., Jabalameli, M.S., Naderi, M.J., Safari, A.: An environmentally conscious photovoltaic supply chain network design under correlated uncertainty: a case study in Iran. J. Clean. Prod. 262, 121434 (2020) 32. Soleimani, H.: A new sustainable closed-loop supply chain model for mining industry considering fixed-charged transportation: a case study in a travertine quarry. Resour. Policy, 101230 (2018) 33. Mota, B., Gomes, M.I., Carvalho, A., Barbosa-Povoa, A.P.: Towards supply chain sustainability: economic, environmental and social design and planning. J. Clean. Prod. 105, 14–27 (2015) 34. Feitó-Cespón, M., Sarache, W., Piedra-Jimenez, F., Cespón-Castro, R.: Redesign of a sustainable reverse supply chain under uncertainty: a case study. J. Clean. Prod. 151, 206–217 (2017) 35. Rahimi, M., Ghezavati, V.: Sustainable multi-period reverse logistics network design and planning under uncertainty utilizing conditional value at risk (CVaR) for recycling construction and demolition waste. J. Clean. Prod. 172, 1567–1581 (2018) 36. Zarbakhshnia, N., Wu, Y., Govindan, K., Soleimani, H.: A novel hybrid multiple attribute decision-making approach for outsourcing sustainable reverse logistics. J. Clean. Prod. 242, 118461 (2020) 37. Govindan, K., Paam, P., Abtahi, A.R.: A fuzzy multi-objective optimization model for sustainable reverse logistics network design. Ecol. Ind. 67, 753–768 (2016) 38. Ghaderi, H., Moini, A., Pishvaee, M.S.: A multi-objective robust possibilistic programming approach to sustainable switchgrass-based bioethanol supply chain network design. J. Clean. Prod. 179, 368–406 (2018) 39. 
Tsao, Y.C., Thanh, V.V.: A multi-objective mixed robust possibilistic flexible programming approach for sustainable seaport -dry port network design under an uncertain environment. Transp. Res. Part E: Logist. Transp. Rev. 124, 13–39 (2019) 40. Sherafati, M., Bashiri, M., Tavakkoli-Moghaddam, R., Pishvaee, M.S.: Supply chain network design considering sustainable development paradigm: A case study in cable industry. J. Clean. Prod. 234, 366–380 (2019) 41. Tsao, Y.C., Thanh, V.V., Lu, J.C., Yu, V.: Designing sustainable supply chain networks under uncertain environments: fuzzy multi-objective programming. J. Clean. Prod. 174, 1550–1565 (2018) 42. Mota, B., Gomes, M.I., Carvalho, A., Barbosa-Povoa, A.P.: Sustainable supply chains: an integrated modeling approach under uncertainty. Omega 77, 32–57 (2018) 43. Goedkoop, M., Heijungs, R., Huijbregts, M., De Schryver, A., Struijs, J., Van Zelm, R.: ReCiPe 2008: A Life Cycle Impact Assessment Method which Comprises Harmonised Category Indicators at the Midpoint and the Endpoint Level, vol. 1, pp. 1–126 (January 2009) 44. Zhang, S., Lee, C.K.M., Wu, K., Choy, K.L.: Multi-objective optimization for sustainable supply chain network design considering multiple distribution channels. Exp. Syst. Appl. 65, 87–99 (2016)

Capstone Projects

A Decision Support System for Supplier Selection in the Spare Parts Industry

Ecem Gizem Babadağ, İdil Bıçkı, Halil Çağın Çağlar, Yeşim Deniz Özkan-Özen(B), and Yücel Öztürkoğlu

Department of Logistics Management, University of Yaşar, Izmir, Turkey
{yesim.ozen,yucel.ozturkoglu}@yasar.edu.tr

Abstract. Nowadays, maximizing production by reducing risks in the supply chain and increasing the quality of the final product by choosing the best suppliers is one of the biggest problems faced by companies. Because both quantitative and qualitative criteria are involved in supplier selection, the complexity of the decision problem increases. Decision support systems have been developed to assist in solving such problems. This paper describes an Analytical Hierarchy Process-based decision support system that facilitates supplier selection and finds the best supplier within the specified criteria.

Keywords: Supply Chain Management · Decision Support System · Supplier Selection · Analytical Hierarchy Process · Supplier Selection Criteria

1 Introduction

The management of the flow of goods and services, including all processes that transform raw materials into final products, is called supply chain management (SCM) [1]. SCM involves actively streamlining a business's supply-side activities in order to maximize customer value and gain a competitive advantage in the marketplace. Through SCM, suppliers strive to develop and implement supply chains that are as efficient and economical as possible. Because suppliers play a decisive role in determining the success of a company, a wrong supplier selection may cost companies time, money, and customers, while damaging brand value and reducing competitive advantage. If the quality of the raw materials provided by a supplier is poor, the quality of the produced product suffers as well, which in turn reduces productivity [2–8]. Cost efficiency is therefore strongly affected by the accuracy of supplier selection [2, 6]. The right supply of raw materials guarantees a high-quality product. The increase in process efficiency resulting from a good supply of raw materials allows the company to reduce operating costs and expand marketing activities, leading to higher profits [9]. Nowadays, maximizing production by reducing the associated risks in the supply chain and enhancing final product quality by selecting the best suppliers is among the most fundamental challenges encountered worldwide [10]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 601–615, 2022. https://doi.org/10.1007/978-3-030-90421-0_52


Supplier selection is a multiple-criteria decision-making (MCDM) problem that involves both quantitative and qualitative indicators [11]. Supplier selection is described as the most important factor in the effective management of a modern supply chain network, as it directly determines product quality and customer satisfaction. Effective supplier selection requires robust analytical models and decision support tools capable of balancing multiple subjective and objective criteria [12]. The supplier selection problem consists of two components: the identification of criteria and the integration of experts' assessments [13]. To find the best supplier, the criteria must first be identified, after which a systematic methodology is required to integrate the experts' assessments [14]. Some of these sub-problems are sufficiently structured to be solved by computers (calculation of criteria, calculation of efficiency, etc.), while others require a decision from the manager (selection of evaluation methods, selection of criteria, inputting values, etc.); supplier selection is therefore classified as a semi-structured problem [15]. A computer system that interacts with decision makers can complete the process of selecting suppliers under various criteria. Because it involves multiple criteria, supplier selection is a complex problem, so conducting it through a Decision Support System (DSS) is of great importance [16]. A DSS is an essential component of proper decision making in a complex environment. DSSs are generally defined as interactive computer-based systems that employ data, models, documents, knowledge, and communication technologies to support people solving complicated problems.
Through adaptive learning and evolution, such DSSs can accommodate changes in dynamic and uncertain environments, both present and anticipated [17]. This subject has been studied extensively in the literature, and researchers have determined many supplier selection and evaluation criteria. However, both strategy and customer expectations differ considerably from one company to another. Therefore, the generally accepted criteria lists in the literature can often lead to wrong choices in different industries and sectors, so there is a large gap between the literature and real industrial applications. To fill this gap, this study focuses on identifying potential supplier selection criteria in the spare parts industry. The case company has a large market share in the spare parts sector in the Aegean Region of Turkey. This study therefore has two main objectives: first, to prepare a company-specific process, in line with the strategies of the case company, that refines the general supplier selection criteria obtained from a detailed literature review; and second, to develop a decision support system so that the company can continue to use this selection process. The paper is structured as follows: after the introduction, the literature review is presented in Sect. 2, and details of the methodology are given in Sect. 3. Sections 4 and 5 contain the implementation and the discussion and conclusions, respectively.


2 Literature Review and Modelling Perspective

To address the problem studied here, a detailed literature review was conducted covering both supplier selection criteria and the decision support systems used in supplier selection. To begin with, many articles deal with supplier selection criteria. [18] suggested a decision support model for the supplier selection process based on the analytic hierarchy process (AHP), using a case from the automotive industry in Pakistan, a developing country, and performed a sensitivity analysis to check the robustness of the resulting supplier selection decision. In addition, [13] underlined that integrating experts' assessments after identifying supplier selection criteria is of great importance; that paper proposes an integrated method for supplier evaluation and selection problems by combining the AHP with an uncertainty theory, and suggests a framework for reducing the purchasing risks associated with suppliers by using the AHP to select the most advantageous supplier. Furthermore, [14] studied the Combinative Distance-based Assessment (CODAS) method. This method, developed by [12], handles multi-criteria decision-making problems and covers a number of features that have not been considered in other MCDM methods. Its core idea is to compute the Euclidean distance and the Taxicab distance of each alternative to determine its desirability. The method was applied in a real-world case study to rank the candidate suppliers. The AHP is a common multi-criteria decision making (MCDM) method, developed to provide a flexible and easily understood way of analysing complex problems.
Through this method, a complex problem can be broken into a hierarchy of levels, and comparisons between all possible pairs in a matrix yield a weight for each factor together with a consistency ratio. According to [19], the AHP is used more often than any other MCDM method. The AHP methodology, however, focuses on weighing the relative importance of criteria while neglecting dependencies among them. Besides, [20] used a hierarchical FPP-FTOPSIS approach to rank and select the best suppliers based on 25 effective traits, and aimed to propose a predictive model of suppliers' efficiency scores by integrating DEA with Artificial Neural Networks (ANN). In that model, the efficiency score of each alternative was evaluated using DEA after a data set had been collected; a Multi-Layer Perceptron (MLP) ANN-based model was then provided to predict supplier efficiency. Within supplier selection, the AHP has been used both for benchmark weight determination and for performance evaluation. Furthermore, [21] first identified the supplier selection criteria in Iran and then ranked the suppliers of ABZARSAZI Co. using a combined fuzzy AHP and COPRAS approach. This decision-making process assists in setting priorities through quantitative and qualitative aspects when decision makers are unable to assign exact numerical values. In this way the AHP, which enables users to deal with vagueness and uncertainty in the decision-making process, plays a key role in solving such issues [22]. The COPRAS


method, which also ranks alternatives, assumes a direct and proportional dependence of the significance and priority of the investigated alternatives within the system of criteria. In [5], the ANP is used to weight the problem criteria and sub-criteria, as it is capable of considering interdependencies between problem elements, while TOPSIS is used to rank the available suppliers. The research in [23] additionally noted that the multi-criteria DSS techniques known as CDMA are the leading techniques for procurement decisions; these CDMA techniques (LGP, MAUT and AHP) were reviewed and compared against the procurement rules and criteria of the Iraqi public sector. That research attempts to integrate multiple information sources to improve procurement decision making by combining theoretical and empirical results that describe the relations between information fusion and decision-making management. Moreover, [24] proposes a decision support system that aids the optimal selection of supplier companies as a business initiative in the automotive manufacturing environment. That study describes four main factors (quality, cost, flexibility, and reliability) and approximately 13 sub-factors for vendor selection; its main objective is to rate vendors with respect to various criteria, and the Analytical Hierarchy Process (AHP) is used to evaluate and rank them. Furthermore, [25] uses a fuzzy AHP-VIKOR based approach to help fill this gap by proposing a comprehensive model of global supplier selection methods, extended towards sustainability risks arising from multi-tier suppliers. The criteria are rated with fuzzy AHP, while the suppliers are ranked with fuzzy VIKOR.
In [25], the uncertain, imprecise judgments of experts, expressed through linguistic variables or fuzzy numbers, are addressed by the fuzzy AHP approach, while the fuzzy VIKOR technique incorporates fuzzy assessments of criteria and alternatives. The strength of fuzzy AHP lies in its ability to handle uncertainty by performing pairwise comparisons that ensure consistent rankings from the decision makers, whereas fuzzy VIKOR can handle a large number of alternatives, generating rankings based on proximity to an ideal solution. In addition, [26] worked on a new hybrid methodology based on data envelopment analysis models to evaluate potential suppliers and select the best supplier (single sourcing) in a certain environment for a single period by reducing the number of potential suppliers. The work of [31] aims to evaluate and prioritize the key supplier selection indicators and to establish the relationship between the available alternatives and the selected indicators, using step-wise weight assessment ratio analysis (SWARA) and weighted aggregated sum product assessment (WASPAS); the combined SWARA-WASPAS method is used to evaluate and rank the supplier selection criteria. Based on these studies, the criteria for supplier selection are summarized in Table 1.


Table 1. Supplier selection criteria

Criteria | Aim of method(s) / Features of criteria | Author(s)
Quality; Price; Delivery; Financial Measure; Technical Collaboration | The purpose of this survey is only to enumerate the critical success factors that will form the basis to identify the specific criteria and sub-criteria to formulate the AHP model | Dweiri et al. (2015)
Technology; Chemical Constraints; Capacity; Unit Price | Stakeholders' requirements for the decision-making process | Scott et al. (2015)
Quality; Technology Capability; Pollution Control; Green Environmental Management | Criteria for evaluating suppliers according to today's standards | Um (2016)
Quality; Direct Cost; Lead Time; Logistics Services | All these criteria are defined as benefit criteria, except cost, which is defined as a cost criterion | Badi et al. (2017)
Cost; Quality; Delivery; Service; Flexibility | The importance of the criteria has been determined through a survey to determine the sustainable supplier | Fallahpour et al. (2017)
Quality; Delivery; Service; Technical Capability; Rejection Rate | All of the criteria were selected as a result of a survey; according to the surveys, these criteria were the most critical in supplier selection | Ajalli et al. (2017)
Cost; Quality; Flexibility; Technology Capability | ANP is used to weight problem criteria and sub-criteria because of its capability to consider interdependencies between problem elements, and TOPSIS is used to rank available suppliers | Rouyendegh, B. D. (2017)
Cost; Payment; Quality; Communication; Service | The selected criteria, as a result of negotiations with the company, were determined to be the most profitable for the company | Juliana et al. (2017)
Specifications; Price; Time; Size | Optimising the multiple attributes, objectives, and goals is involved in team multiple criteria decision making | Hussein and Abboodi (2018)
Quality; Cost; Flexibility; Reliability | Considering the attributes or sub-attributes, purchasing experts decided that these criteria were more important | Jayant (2018)
Economic; Quality; Environment; Social; Global Risk | Economic criteria demonstrated the greatest weight and global risk displayed the least weight | Awasthi et al. (2018)
Quality; Cost | Appropriate supplier selection significantly affects the finished cost of a product and helps companies improve their competitiveness | Eydi and Fazli (2019)
Information Sharing; Joint Actions; Human Resources; Leadership; Policy and Strategy | The identified criteria are important from the supplier quality view for supplier selection | Singh et al. (2020)
Packaging Management; Reverse Logistics; Service Quality; Price; Delivery | Criteria used to meet the company's focus | Sarabi et al. (2020)
Management Ability; Price; Environmental Awareness; Shipping Performance; Service; Flexibility | Criteria are an indicator of supplier selection; supplier scores are based on an assessment of each criterion | Utomo et al. (2020)

Furthermore, many articles in the literature consider decision support systems. [27] presents a decision support system for integrated supplier selection and order allocation. The system uses the AHP-QFD method to translate the importance of different stakeholder groups and the needs of these stakeholders into a weighted list of evaluation criteria against which any potential supplier can be assessed. Its optimization algorithm takes into account the multiple selection criteria, the material procurement capacity of each supplier and the supplier score in order to make stochastic quality measurements of the supplied materials.


In addition, [28] calls for a system that solves the problems currently faced in supplier selection with the most appropriate decision support method: the Analytical Hierarchy Process (AHP), which consistently delivers accurate, fast and high-quality results. According to [29], the proposed research evaluates a Decision Support System (DSS) for selecting the most appropriate provider among three existing service provider companies; the study describes a DSS model for selecting a proper service provider in the machine manufacturing industry using MULTIMOORA and FBWM under variability. In this context, [30] evaluated criteria for choosing the best possible supplier using the fuzzy AHP method, while applying the fuzzy MOORA method to determine specific criteria; that research concludes that BWM requires less comparative data than other decision-making methods while providing more reliable solutions. Finally, [16] described how the process of selecting suppliers under various criteria can be completed by a computer system that interacts with decision makers. Such a system is defined as a decision support system (DSS) for supplier selection [15]. DSS Online is a decision support system that runs as a web-based software program: users access the website and enter their company data to have it analysed, and results are immediately accessible and updated interactively through the web. The DSS is a measuring tool for selecting new suppliers, categorized by both technical and cultural criteria; entering data-based scores on the technical criteria is sufficient to make the tool work.
The score data is processed with the Fuzzy Analytical Hierarchy Process (FAHP) method in order to rank the suppliers. In the era of Industry 4.0, a DSS used to select suppliers over the internet has to be accessible via web or Android browsers; the main idea of that article is therefore highly relevant to Industry 4.0 implementations. Details of the methodology are presented in the following section.

3 Methodology: Analytical Hierarchy Process

The Analytical Hierarchy Process was first considered by Myers and Alpert in 1968. In 1977 it was developed into a model by Professor Thomas Lorie Saaty and made usable for solving decision-making problems. The AHP emerged from the inability of abstract modelling approaches to provide the expected results for decision problems and from the need for an easily understood and applied method; its mathematical simplicity makes it suitable for solving complex decision problems. The AHP provides a comprehensive framework for incorporating rational and irrational preferences and intuitions into the decision-making process [31]. The main difficulty in multi-criteria decision-making problems is determining the weight, importance or superiority needed to choose among various alternatives when more than one criterion is considered. The Analytical Hierarchy Method is used to evaluate factors that are independent of each other at each level of the hierarchical structure [32]. In simple terms, the AHP is a method that evaluates the alternatives according to more than one criterion and determines the most important


alternative in a decision problem. In general, the AHP rests on three main principles: the hierarchy framework, priority analysis and consistency verification [33, 34].

Step 1: Define the problem. The definition of the decision-making problem consists of two stages. In the first stage, the decision points are determined; in other words, the question of how many alternatives the decision will be evaluated over is answered. In the second stage, the criteria that affect the decision points are determined. The correct determination of the number of criteria that will affect the result, and a detailed definition of each criterion, are particularly important for consistent and logical pairwise comparisons.

Step 2: Develop a hierarchical framework. In this step, a hierarchy model is built for the selection of the best supplier using the AHP. The top of the hierarchy is the overall goal, "to prepare a decision system to select the best supplier". The next level contains the criteria considered most suitable for the system.

Step 3: Construct the pairwise comparison matrix. An accurate comparison of the criteria is important for the AHP, so a pairwise comparison matrix of size n × n is created. Pairwise comparisons of the criteria yield a relative ranking matrix. The values below the diagonal are calculated with the formula \(a_{ji} = 1/a_{ij}\). The number of elements at each level determines the size of the matrix.

Step 4: Perform judgements for the pairwise comparisons. Pairwise comparisons are created by comparing two selected criteria; each criterion must be compared with every other one. In this research, seven criteria were used. For example, when making pairwise comparisons as shown in the importance scale table, if product quality is very strongly more important than the packaging method, the entry is set to a = 7.

Step 5: Synthesize the pairwise comparisons. The Average Normalized Column mean (ANC) method is used to calculate the priority vectors.
In the Average Normalized Column method, the entries of each column are divided by the column sum, and the entries in each resulting row are then added. The result of this operation is divided by the number of criteria n:

\[
w_i = \frac{1}{n}\sum_{j=1}^{n}\frac{a_{ij}}{\sum_{i=1}^{n} a_{ij}}, \qquad i, j = 1, 2, \ldots, n
\]
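For illustration, the ANC calculation can be sketched in plain Python; the 3 × 3 judgement matrix below is a hypothetical example (quality vs. cost vs. delivery), not data from the case study.

```python
def anc_priority_vector(A):
    """Average Normalized Column mean: divide each entry by its column
    sum, then average the normalized entries of each row."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical pairwise judgements; entries below the diagonal are 1/a_ij.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]

w = anc_priority_vector(A)
print(w)  # priority weights; every normalized column sums to 1, so the weights sum to 1
```

Because each normalized column sums to 1, the resulting weights always sum to 1 regardless of the judgements entered.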

Step 6: Perform consistency verification. Because the criteria reflect personal preferences, inconsistencies may arise in some cases. Consistency is determined by the consistency ratio (CR): if the consistency ratio exceeds 0.1, the comparison is not consistent. The consistency ratio is the ratio of the consistency index (CI) to the random index (RI) for a matrix of the same order.

(1) Calculate the eigenvalue. To calculate the maximum eigenvalue \(\lambda_{max}\), multiply the judgement matrix by the priority vector (eigenvector), obtaining a new vector.

(2) Calculate the consistency index (CI):

\[
CI = \frac{\lambda_{max} - n}{n - 1}
\]

(3) Calculate the consistency ratio (CR):

\[
CR = \frac{CI}{RI}
\]
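A hedged sketch of this consistency check, assuming the commonly tabulated Saaty random-index values; the perfectly consistent example matrix is hypothetical, not taken from the case study.

```python
# Commonly tabulated random index (RI) values for matrix orders 1-7.
RI_TABLE = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A, w):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # A * w
    lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n  # eigenvalue estimate
    ci = (lambda_max - n) / (n - 1)
    return ci / RI_TABLE[n]

# A perfectly consistent matrix (a_ij = w_i / w_j) yields a CR near 0.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = [4 / 7, 2 / 7, 1 / 7]
print(consistency_ratio(A, w))  # close to zero
```

A comparison whose CR comes out above 0.1 would be returned to the decision-makers for revision.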

Step 7: Repeat Steps 3–6 for all levels in the hierarchy model. The third through sixth steps are applied for each criterion.

Step 8: Develop the overall priority ranking. The overall priority vector is obtained by multiplying the priority vector of the supplier alternatives by the priority vector of the criteria.

Step 9: Select the most suitable supplier. A priority was determined for each criterion based on its strength relative to the other criteria; the priorities of the criteria indicate their relative importance in reaching the goal.
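Steps 8 and 9 can be sketched as a weighted aggregation; the numbers below reuse the rounded criterion weights and supplier priorities reported in Table 3 purely as an illustration.

```python
# Criterion weights (rounded priority vector from Table 3).
criteria_weights = [0.350, 0.237, 0.159, 0.105, 0.070, 0.048, 0.032]

# supplier_priorities[s][c]: priority of supplier s under criterion c
# (Table 3 reports the same value for every criterion of a given supplier).
supplier_priorities = [
    [0.466] * 7,  # Supplier 1
    [0.277] * 7,  # Supplier 2
    [0.161] * 7,  # Supplier 3
    [0.096] * 7,  # Supplier 4
]

# Step 8: overall priority = sum over criteria of (weight * priority).
overall = [sum(w * p for w, p in zip(criteria_weights, row))
           for row in supplier_priorities]

# Step 9: rank suppliers by overall priority, best first.
ranking = sorted(range(len(overall)), key=lambda s: overall[s], reverse=True)
print(overall, ranking)  # Supplier 1 (index 0) ranks first
```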

4 Implementation of the Study

The study was implemented with the participation of 3 decision-makers who work in different positions in the company. The decision-makers are asked to make


pair-wise comparisons of the criteria, and also to evaluate each supplier based on those criteria. The linguistic scale used in this study is presented in Table 2. As a result of an extensive literature review, and taking the company's core values into account, 7 criteria were determined in total: product quality, cost, communication systems, reliability, management approach, working relations/relations with employees, and packaging method; a pairwise comparison matrix was also created.

Table 2. Importance scale table with linguistic variables

Intensity of importance | Definition | Explanation
1 | Equal importance | Two activities contribute equally to the objective
3 | Moderate importance | Experience and judgement slightly favour one activity over another
5 | Essential importance | Experience and judgement strongly favour one activity over another
7 | Very strong importance | An activity is favoured very strongly over another; its dominance is demonstrated in practice
9 | Extreme importance | The evidence favouring one activity over another is of the highest possible order of affirmation
2, 4, 6, 8 | Intermediate values | When compromise between the two adjacent judgements is needed

By applying the steps of the AHP method, the results of the supplier selection problem are obtained as presented in Table 3.

Table 3. Summary of the results

Priority vectors of the criteria: Quality of product 0.350 | Cost 0.237 | Communication system 0.159 | Reliability 0.105 | Management approach 0.070 | Labor relations/relations with employees 0.048 | Packaging 0.032

Supplier | Priority under each criterion | Supplier score | Ranking
Supplier 1 | 0.466 (for all seven criteria) | 0.466 | 1
Supplier 2 | 0.277 (for all seven criteria) | 0.277 | 2
Supplier 3 | 0.161 (for all seven criteria) | 0.161 | 3
Supplier 4 | 0.096 (for all seven criteria) | 0.096 | 4

The results show that Quality of Product and Cost are the most important criteria, and that Supplier 1 has the best performance; it is therefore suggested that the company select


it. The decision-making model is structured based on the weights calculated in the AHP implementation. The proposed model has two parts: an interface form and a supplier evaluation form. The interface form (Fig. 1) facilitates the entry of data by a user from the company. Data entry is only allowed in the fields that need to be filled in, and its purpose is to determine the superiority of the criteria over each other. For example, the screenshot in Fig. 1 shows how the pairwise comparisons of the criteria are made. At this moment, the fourth criterion, "Communication System", is being compared with the other criteria. Since it has already been compared with the first three criteria, the first 3 data fields are inactive and cannot be edited, while the next 4 data fields accept data entry. As shown in the figure, the user judges that the "Communication System" is 2 times more important than the criterion "Reliability" and 5 times more important than the criterion "Packaging".

Fig. 1. Interface form

A supplier evaluation form was created in order to evaluate the success rate of the suppliers against the 7 specified criteria (Fig. 2). Within this form, each criterion carries a different score according to its importance. The score, which determines the importance level, was obtained from the priority vector of the criteria comparison matrix and multiplied by 100, so that each supplier is evaluated out of a total score of 100. For example, a company that is satisfied only with the product quality checks only the product quality box, and the supplier is considered successful at a rate of 34.963%. A company that is satisfied only with the packaging marks only the packaging box, giving a success rate of 3.175%. Ticking all boxes yields a success rate of 100%. Supplier scores can be sorted by the company and filtered by address.
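The checkbox scoring of the evaluation form can be sketched as follows; the weights here are the rounded Table 3 priority vector scaled by 100, so the exact percentages quoted in the text (e.g. 34.963%) differ slightly from this illustration.

```python
# Criterion weights scaled to percentages (rounded Table 3 values).
weights_pct = {
    "product quality": 35.0,
    "cost": 23.7,
    "communication system": 15.9,
    "reliability": 10.5,
    "management approach": 7.0,
    "labor relations": 4.8,
    "packaging": 3.2,
}

def success_rate(ticked):
    """Sum the weights of every criterion marked as satisfactory."""
    return sum(weights_pct[c] for c in ticked)

print(success_rate({"product quality"}))  # quality alone: about 35%
print(success_rate(set(weights_pct)))     # all boxes ticked: about 100%
```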


E. G. Babadağ et al.

Fig. 2. Supplier evaluation form

5 Discussions and Conclusion

A correct business partnership, achieved by choosing and evaluating the right supplier, is a very important factor for companies in creating a competitive advantage over other companies and increasing income and customer satisfaction. Supplier evaluation and selection affect almost every decision made in supply chain management [1], and they are especially important in the spare parts industry. Companies are obligated to deliver the right product to their customers at the right time, to the desired place, at the right quality and in the ordered quantity. Identifying suitable suppliers, which are consistently and cost-efficiently able to fulfil the requirements of the buying company, is the main aim of selection processes [35]. The purpose of this study was first to determine the supplier selection criteria specific to the company and then to prepare an AHP-based DSS that facilitates the selection by using these criteria. The decision support system applies different weights for every order, as criteria such as price and lead time may differ from order to order.

The Analytical Hierarchy Process is one of the most successful decision analysis methods adapted to real life. For this reason, it is possible to come across many applications of AHP in the literature: AHP has been the subject of many successful studies in the fields of marketing, finance, education, health, public policy, economy, energy, production, investment, location, sports and quality control. A decision support system supports the user in complex problems with many decision-making criteria. DSSs are used to help users at different levels, including operational, tactical and strategic, to make decisions (TBD KamuBİB, 2010). In today's world, every business makes routine decisions.
The importance of the decisions made in enterprises is increasing day by day, and these decisions affect the competitive power of the company. There are many methods for decision-making; one of them is the AHP method, which is used in every part of businesses today [36]. The Analytical Hierarchy Process has a huge range of uses, from complex management modeling problems to total quality management, from accounting and finance to manufacturing, from customer selection to personnel evaluation, from software evaluation to project selection, and from strategy determination to investment decisions. On the other hand, AHP can be used in

A Decision Support System for Supplier Selection


many studies together with operations research techniques such as integer programming, goal programming, dynamic programming and benefit/cost analysis, and with methods such as fuzzy logic and data envelopment analysis [37]. Businesses are faced with many suppliers that provide a variety of raw materials and services, and choosing and evaluating the best suppliers is as complex as it is important. Companies may have to pay more attention to environmental management due to environmental as well as sociological and political problems. For example, managers can continue to collaborate with suppliers that care about environmental issues, such as global warming, if they want to protect and enhance the company's image; in that case they prefer the supplier that best suits the environmental goals of the company, not the supplier that offers the best price. The proposed system has flexible features covering quantitative and qualitative criteria and can hence also be used in other decision-making situations (location selection, phone selection, vehicle selection, etc.). This study can be expanded in the future to be used in decision-making processes involving wide-ranging alternatives and criteria.

References

1. Ghadimi, P., Toosi, F.G., Heavey, C.: A multi-agent systems approach for sustainable supplier selection and order allocation in a partnership supply chain. Eur. J. Oper. Res. 269(1), 286–301 (2017)
2. Gold, S., Awasthi, A.: Sustainable global supplier selection extended towards sustainability risks from (1+n)th tier suppliers using fuzzy AHP based approach. IFAC-PapersOnLine 48(3), 966–971 (2015)
3. Aksoy, A., Sucky, E., Öztürk, N.: Dynamic strategic supplier selection system with fuzzy logic. Procedia Soc. Behav. Sci. 109, 1059–1063 (2014)
4. Ghadimi, P., Heavey, C.: Sustainable supplier selection in medical device industry: toward sustainable manufacturing. Procedia CIRP 15, 165–170 (2014)
5. Rouyendegh, B.D., Saputro, T.E.: Supplier selection using integrated fuzzy TOPSIS and MCGP: a case study. Procedia Soc. Behav. Sci. 116, 3957–3970 (2014)
6. Grondys, K.: Economic and technical conditions of selection of spare parts suppliers of technical equipment. Procedia Econ. Finan. 27, 85–92 (2015)
7. Darabi, S., Heydari, J.: An interval-valued hesitant fuzzy ranking method based on group decision analysis for green supplier selection. IFAC-PapersOnLine 49(2), 12–17 (2016)
8. Dargı, A., Anjomshoae, A., Galankashi, M.R., Memari, A., Tap, M.B.: Supplier selection: a fuzzy-ANP approach. Procedia Comput. Sci. 31, 691–700 (2014)
9. Kotula, M., Ho, W., Dey, P.K., Lee, C.K.M.: Strategic sourcing supplier selection misalignment with critical success factors: findings from multiple case studies in Germany and the United Kingdom. Int. J. Prod. Econ. 166, 238–247 (2015)
10. Gholipour, A., Safaei, A., Paydar, M.M.: A decision support system for supplier selection and order allocation in a multi-criteria, multi-profit and contingency environment. In: 1st International Conference on System Optimization and Business Management, Noshirvani Industrial University. Iranian Association for Research in Operations (2017)
11. Jaehn, F.: Sustainable operations. Eur. J. Oper. Res. 253(2), 243–264 (2016)
12. Moghadam, M.N., Hosseinpour, F.: Identification and evaluation of logistic factors for evaluating green suppliers using multi-criteria decision-making approach. In: National Conference on Modern Research in Engineering and Technology, Institute of Ideal Environmental Biosciences, Ardabil, Iran (2016)


13. Um, S.W.: Supplier evaluation and selection using AHP method and uncertainty theory, pp. 210–702. Department of Industrial and Management Engineering, Gangneung-Wonju National University, Gangneung-si (2016). https://doi.org/10.11159/icmie16.101
14. Badi, I., Shetwan, G.A., Abdulshaded, M.A.: Supplier selection using combinative distance-based assessment (CODAS) method for multi-criteria decision-making. In: Proceedings of the 1st International Conference on Management, Engineering and Environment, pp. 395–407 (2018)
15. Avila, P., Mota, A., Pires, A., Bastos, J., Putnik, G., Teixeira, J.: Supplier's selection model based on an empirical study. Procedia Technol. 5, 625–634 (2012)
16. Utomo, T.D.: Preliminary study of web based decision support system to select manufacturing industry suppliers in Industry 4.0 era in Indonesia. J. Southwest Jiaotong Univ. 54(6) (2019). https://www.researchgate.net/publication/343280312. Accessed Nov 2020
17. Kasie, F.M., Bright, G., Walker, A.: Decision support systems in manufacturing: a survey and future trends. J. Model. Manage. 12(3), 432–454 (2017)
18. Dweiri, F., Kumar, S., Khan, S., Jain, V.: Designing an integrated AHP based decision support system for supplier selection in automotive industry. Exp. Syst. Appl. 62, 273–283 (2016). https://doi.org/10.1016/j.eswa.2016.06.030
19. Goffin, K., Szwejczewski, M., New, C.: Managing suppliers: when fewer can mean more. Int. J. Phys. Distrib. Logist. Manag. 27, 422–436 (1997)
20. Fallahpour, A., Olugu, E., Musa, S., Wong, K., Noori, S.: A decision support model for sustainable supplier selection in sustainable supply chain management. Comput. Ind. Eng. 105, 391–410 (2017)
21. Ajalli, M., Azimi, H., Balani, A.M., Rezaei, M.: Application of fuzzy AHP and COPRAS to solve the supplier selection problems. Int. J. Supply Chain Manage. 6(3), 2051–3771 (2017)
22. Hafezalkotob, A., Hafezalkotob, A.: A novel approach for combination of individual and group decisions based on fuzzy best-worst method. Appl. Soft Comput. 59, 316–325 (2017)
23. Hussein, Z.J., Abboodi, C.H.: Selecting decision support system technique to choose best supplier in procurement of Iraqi public sector. Int. J. Civ. Eng. Technol. 9(5), 144–154 (2018)
24. Jayant, A.: An analytical hierarchy process (AHP) based approach for supplier selection: an automotive industry case study. Int. J. Latest Technol. Eng. Manage. Appl. Sci. (IJLTEMAS) 7(1), 36–45 (2018)
25. Awasthi, A., Govindan, K., Gold, S.: Multi-tier sustainable global supplier selection using a fuzzy AHP-VIKOR based approach. Int. J. Prod. Econ. 195, 106–117 (2018)
26. Eydi, A., Fazli, L.: A decision support system for single-period single sourcing problem in supply chain management. Soft. Comput. 23(24), 13215–13233 (2019). https://doi.org/10.1007/s00500-019-03864-0
27. Scott, J., Ho, W., Dey, P.K., Talluri, S.: A decision support system for supplier selection and order allocation in stochastic, multi-stakeholder and multi-criteria environments. Int. J. Prod. Econ. 166, 226–237 (2015). https://doi.org/10.1016/j.ijpe.2014.11.008
28. Juliana, Jasmir, Jusia, P.A.: Decision support system for supplier selection using analytical hierarchy process (AHP) method. Sci. J. Inf. 4(2), 158–168 (2017)
29. Sarabi, P.E., Darestani, A.S.: Developing a decision support system for logistics service provider selection employing fuzzy MULTIMOORA & BWM in mining equipment manufacturing. Appl. Soft Comput. J. 98, 106849 (2020)
30. Qian, L.: Market-based supplier selection with price, delivery time, and service level dependent demand. Int. J. Prod. Econ. 147(C), 697–706 (2014)
31. Saaty, T.L., Vargas, L.G.: Uncertainty and rank order in the analytic hierarchy process. Eur. J. Oper. Res. 32(1), 107–117 (1987)
32. Min, H.: Location analysis of international consolidation terminal using the AHP. J. Bus. Logist. 15(2), 25–44 (1994)


33. Saaty, T.L.: The Analytic Hierarchy Process. McGraw Hill, New York (1980)
34. Vaidya, O.S., Kumar, S.: Analytic hierarchy process: an overview of applications. Eur. J. Oper. Res. 169, 1–29 (2006)
35. Kahraman, C., Cebeci, U., Ulukan, Z.: Multi-criteria supplier selection using fuzzy AHP. Logist. Inf. Manag. 16(6), 382–394 (2003)
36. Tumincin, F.: Analitik Hiyerarşik Proses (AHP) ile bir karar destek sistemi oluşturulması: Bir Üretim İşletmesinde Uygulama. T.C. Bartın Üniversitesi Sosyal Bilimler Enstitüsü İşletme Anabilim Dalı (2016)
37. Aydın, G.: Analitik Hiyerarşi Prosesi ve Bir Sanayi İşletmesinde Uygulanması. T.C. Kocaeli Üniversitesi Sosyal Bilimler Enstitüsü (2008)

A Discrete-Time Resource Allocated Project Scheduling Model

Berkay Çataltuğ, Helin Su Çorapcı, Levent Kandiller, Fatih Kağan Keremit, Giray Bingöl Kırbaş, Özge Ötleş, Atakan Özerkan, Hazal Tucu, and Damla Yüksel(B)

Department of Industrial Engineering, Yaşar University, Izmir, Turkey
[email protected], [email protected]

Abstract. Project management involves the implementation of knowledge and skills to meet project requirements with beneficial tools and techniques, and it provides improvement in the three pillars of time, cost and quality. The Discrete-Time Resource Allocated Project Scheduling Model (DTRAPS) is developed to minimize the time, maximize the quality and minimize the cost of a project together in one tool. By means of this tool, it is possible to allocate the resources over the project tasks in a way that is optimized with respect to each pillar of the triangle. Moreover, sensitivity analysis on the model is performed with the number of activities, the number of resources and the available time window parameters; the results indicate that the model is robust. Since the model requires long CPU times on large problems, heuristics such as Greedy, Smallest Requirements First (SRF), Largest Requirements First (LRF) and Randomized are developed together with a Swap improvement algorithm, and the heuristics are compared and analyzed. Finally, a dynamic and user-friendly decision support system is developed in Excel VBA for solving the model via the CPLEX solver and the heuristics.

Keywords: Project scheduling problem · Time · Quality · Cost · Discrete-time resource allocation · DTRAPS · Heuristics · Decision support system

1 Introduction

During the life cycle of a project, project management should be applied to obtain better outcomes on the pillars of time, quality and cost depending on the available resources. Simply put, project management consists of getting the job done at a certain quality, at a given time and within the allocated budget. More concretely, with the relevant knowledge and skills, projects are planned and carried out with an optimized resource allocation. Resource allocation is a crucial topic with which project managers are highly concerned. A project is conducted with a cluster of resources, each of which has its own specifications. Resource allocation is the scheduling of the resources required for the project's activities with respect to the resource availabilities and the desired project time. These resources are the main factors that affect the project's qualitative and quantitative features. Thus, a project manager is responsible for managing the deployment of the resources.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 616–629, 2022. https://doi.org/10.1007/978-3-030-90421-0_53


Chen et al. (2020) worked on a competence-time-quality scheduling model for multi-skilled staff, maximizing the total skill efficiency and the quality of the developed product while minimizing the development cycle. They used the Pareto ant colony optimization algorithm to conduct their study. Mahmoudi and Javed (2020) worked on incorporating potential quality loss cost into the time-cost trade-off problem. They revised the Kim, Khang and Hwang (I–II) methods, shared their objective, and strived to minimize the cost of crashing and the potential quality loss. Salmasnia et al. (2011) worked on the time-cost-quality trade-off problem; they aimed to minimize the total expected completion time and the total expected resources by integrating the TCQTP with a robust solution method. Banihashemi Sayyid and Khalilzadeh (2020) focused on multi-mode resource-constrained project scheduling problems. They used the data envelopment analysis method to minimize time, costs and environmental impacts while also maximizing quality. De Reyck and Herroelen (1998) worked on the resource-constrained project scheduling problem with generalized precedence relations, minimizing the makespan. They developed a branch-and-bound algorithm and heuristics for the solution; the algorithm branches depth-first over the original project network, resolving conflicts among overlapping sets of nodes along the search path while respecting the precedence relations. Gonçalves et al. (2008) presented a genetic algorithm (GA) for the resource-constrained multi-project scheduling problem (RCMPSP), aiming at the optimal utilization of resources according to priorities and delay times. Their GA-Slack-Mod algorithm generates improved results close to the optimal. Kannimuthu et al.
(2019) worked on the multi-mode resource-constrained project scheduling problem to minimize the time and cost and to maximize the quality. They conducted their study by developing a new mathematical model and using the Relaxed-Restricted Pareto (RR-PARETO3) algorithm. Karabati et al. (1995) had the purpose of maximizing the efficiency of a flow line with respect to certain performance criteria by finding an allocation of the resources for the discrete resource allocation problem. They made use of branch-and-bound algorithms and heuristics; by means of a surrogate relaxation of the formulation, they came up with an effective solution procedure in which the optimal objective value is bounded. Krüger and Scholl (2009)'s work is about the RCPSP (Resource-Constrained Project Scheduling Problem); as a result of various experiments, they stated that future research should aim at improving the results by developing new and improved solution procedures such as metaheuristics. Liu and Wang (2007) studied resource assignment problems of linear construction projects. Their study presents a flexible model for resolving linear scheduling problems involving different objectives and resource assignment tasks, using constraint programming as the searching algorithm for model formulation. Lombardi and Milano (2012) prepared a survey on resource allocation and scheduling, a cross-disciplinary issue, covering hybrid algorithms that combine Constraint Programming (CP) and Operations Research (OR) as well as heuristic solution methods. Nie and Gao (2015) focused on the resource-constrained project scheduling problem and tried to minimize the resource cost and the total project duration. They used the Strength Pareto Evolutionary Algorithm 2


and evolutionary multi-objective optimization for this purpose. They considered the cost in two parts: resource usage cost and resource fluctuation (i.e., rehiring and releasing) cost. The last study belongs to Nudtasomboon and Randhawa (1997). They built a zero-one integer programming model for solving the resource-constrained multiple-objective project scheduling problem; the model's aim is basically the minimization of time and cost together with resource leveling. The zero-one model has a unique combination of characteristics: splittable and non-splittable activities, renewable and non-renewable resources, variation in resource usage, time-cost trade-offs, and multiple objectives. This literature review revealed the importance of considering criteria other than time in project management, and it was examined whether a similar study existed. It was found that none of the reviewed works combine quality, time and resource-cost allocation in a single tool as is done in this study. Finally, studies based on discrete-time resource allocation were dwelled on in order to include resource allocation in this study.

This paper consists of five sections. In Sect. 2, the modeling of the Discrete-Time Resource Allocated Project Scheduling Model (DTRAPS) is provided; the model formulation is verified and validated, and sensitivity analysis on DTRAPS is presented at the end of the section. Later on, the Greedy, Smallest Requirements First, Largest Requirements First and Randomized heuristics are generated in Sect. 3 together with the Swap improvement algorithm. These heuristics are also verified and validated, and sensitivity analysis is performed on them with a research project; the heuristics and the DTRAPS model results are then compared on a project instance set. The details of the decision support system are explained in Sect. 4; finally, the conclusions are presented in Sect. 5.

2 Modelling and Solution Methodology

DTRAPS has three main goals: minimizing the total duration of the project, maximizing the overall quality, and minimizing the total resource cost. As mentioned earlier, it is essential for project management to address every corner of the time-quality-cost triangle; DTRAPS achieves this, and makes the decision-making process more suitable for real life, by including the cost in its objective function. The model is based on the following main assumptions. Each resource may be idle in a certain time period, and a resource can be used for only one activity within a certain time window composed of consecutive time periods. When a resource is allocated to an activity, the resource's time rating is deducted from the resource requirement of the activity. If more than one resource is allocated to an activity, their parallel contributions to the activity in terms of resource requirement are additive; in this case, the activity duration is the sum of the durations over which the resources perform that activity in discrete time windows. On the other hand, if more than one resource is allocated to an activity, the overall quality level of the activity is determined by the multiplication of the individual quality levels of the allocated resources, independent of their allocation time windows; this multiplication is done with the maximum- and minimum-quality resources that perform that specific activity. Lastly, the project's cost is the sum of the costs of the resources that take part in the project.
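The quality assumption above can be made concrete with a short sketch. This is an illustration of the stated bookkeeping, not the paper's implementation; the numeric quality ratings are assumed. The -ln transform turns the product of qualities into the sum LQ_i = Mxk_i + Mnk_i, which is what keeps the formulation linear.

```python
import math

def activity_quality(resource_qualities):
    """Quality level of an activity: the product of the best and worst quality
    ratings among the resources allocated to it, per the model's assumption."""
    return max(resource_qualities) * min(resource_qualities)

def neg_ln(q):
    """-ln transform used by the model; products of qualities become sums."""
    return -math.log(q)

# Example with assumed quality ratings for three resources on one activity:
q = activity_quality([0.97, 0.99, 0.96])   # 0.99 * 0.96
lq = neg_ln(0.99) + neg_ln(0.96)           # LQ_i in -ln form, same information
```

Because only the best and worst allocated resources enter the product, adding a mid-quality resource to an activity does not change that activity's quality level.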


Sets:
A = set of activities.
P = set of time periods.
R = set of nonidentical resources.

Indices:
i, j = activity index; i, j = 1, ..., n.
t = time period index; t = 1, ..., TT.
k = resource index; k = 1, ..., r.

Parameters:
SF_i = placement array of activity i ∈ A: if SF_i = 1, then activity i is a starting activity; if SF_i = 2, then activity i is a finishing activity; otherwise activity i is an intermediary activity.
P(i,j) = predecessor pairs matrix of activities i, j ∈ A: if P(i,j) = 1, then activity i is an immediate predecessor of activity j.
D_i = resource requirement of activity i (person-time units).
tr_k = time rating of resource k.
qr_k = quality rating of resource k.
c_k = unit cost of resource k per period.
α_t = perturbation parameter for time-related variables.
α_q = perturbation parameter for quality-related variables.
α_xn = perturbation parameter for calculating the minimum and maximum quality levels of activities.

Decision Variables:
x^t_{k,i} = 1 if resource k is allocated to activity i at time period t; 0 otherwise.
y_k = 1 if resource k is allocated to any activity at any time period; 0 otherwise.
t_i = duration of activity i (time units).
tt^t_i = 1 if activity i is done at time period t; 0 otherwise.
Mxk_i = -ln form of the maximum quality level of the resources allocated to task i.
Mnk_i = -ln form of the minimum quality level of the resources allocated to task i.
LQ_i = -ln form of the quality level of activity i (LQ_i = Mxk_i + Mnk_i).
EBT_i = earliest beginning time of activity i.
LBT_i = latest beginning time of activity i, where LBT_i = LFT_i − t_i.
LFT_i = latest finishing time of activity i.
T = total duration of the project.
TS_i = time slack value of activity i, where TS_i = LBT_i − EBT_i.
BT_i = beginning time of activity i.
FT_i = finishing time of activity i.
LNHBQ_i = −ln(HBQ_i), where HBQ_i = highest beginning quality of activity i.
LNLFQ_i = −ln(LFQ_i), where LFQ_i = lowest finishing quality of activity i.


LNQ = −ln(Q), where Q = overall finishing quality of the project.

Mathematical Model:

Min  Σ_k (c_k·y_k)/r
   + α_t·[ (n+1)·T + Σ_i t_i + Σ_i (EBT_i − LFT_i) + r·(n+1)·TT·Σ_i FT_i ]
   + α_q·[ r·(n+1)·LNQ + Σ_i LNHBQ_i − Σ_i LNLFQ_i ]
   + α_xn·[ n·Σ_i Mnk_i − n·r·Σ_i Mxk_i ]        (2.1)

Subject to:

Σ_{i∈A} x^t_{k,i} ≤ 1,  ∀k ∈ R, ∀t ∈ P        (2.2)

Σ_{s=t+2..TT} x^s_{k,i} ≤ TT·(1 + x^{t+1}_{k,i} − x^t_{k,i}),  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.3)

Σ_{t∈P} Σ_{k∈R} tr_k·x^t_{k,i} ≥ D_i,  ∀i ∈ A        (2.4)

tt^t_i ≤ Σ_{k∈R} x^t_{k,i},  ∀i ∈ A, ∀t ∈ P        (2.5)

Σ_{k∈R} x^t_{k,i} ≤ r·tt^t_i,  ∀i ∈ A, ∀t ∈ P        (2.6)

t_i = Σ_{t∈P} tt^t_i,  ∀i ∈ A        (2.7)

Σ_{i∈A} Σ_{t∈P} x^t_{k,i} ≤ r·TT·y_k,  ∀k ∈ R        (2.8)

T ≤ TT        (2.9)

FT_i ≥ t·tt^t_i − 99999·(1 − tt^t_i),  ∀t ∈ P, ∀i ∈ A        (2.10)

FT_i ≤ FT_j − t_j,  ∀i, j : P(i,j) = 1        (2.11)

BT_i = FT_i − t_i,  ∀i ∈ A        (2.12)

BT_j ≥ FT_i,  ∀i, j : P(i,j) = 1        (2.13)

TT·(1 − x^t_{k,i}) + t·x^t_{k,i} ≥ BT_i,  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.14)

−TT·(1 − x^t_{k,i}) + t·x^t_{k,i} ≤ FT_i,  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.15)

Mnk_i ≥ −ln(qr_k) − 99999·(1 − x^t_{k,i}),  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.16)

Mxk_i ≤ −ln(qr_k) + 99999·(1 − x^t_{k,i}),  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.17)

LQ_i = Mxk_i + Mnk_i,  ∀i ∈ A        (2.18)

EBT_i = 0,  ∀i : SF_i = 1        (2.19)

EBT_j ≥ EBT_i + t_i,  ∀i, j : P(i,j) = 1        (2.20)

T ≥ BT_i + t_i,  ∀i : SF_i = 2        (2.21)

LFT_i ≤ LFT_j − t_j,  ∀i, j : P(i,j) = 1        (2.22)

LFT_i ≤ T,  ∀i : SF_i = 2        (2.23)

LBT_i = LFT_i − t_i,  ∀i ∈ A        (2.24)

TS_i = LBT_i − EBT_i,  ∀i ∈ A        (2.25)

LNHBQ_i = 0,  ∀i : SF_i = 1        (2.26)

LNHBQ_j ≥ LNHBQ_i + LQ_i,  ∀i, j : P(i,j) = 1        (2.27)

LNQ ≥ LNHBQ_i + LQ_i,  ∀i : SF_i = 2        (2.28)

LNLFQ_i ≤ LNLFQ_j − LQ_j,  ∀i, j : P(i,j) = 1        (2.29)

LNLFQ_i ≤ LNQ,  ∀i : SF_i = 2        (2.30)

BT_i, FT_i, EBT_i, LBT_i, LFT_i, TS_i, LNHBQ_i, LNLFQ_i, T, LNQ ≥ 0,  ∀i ∈ A        (2.31.1)

q_i, t_i, Mnk_i, Mxk_i ≥ 0,  ∀i ∈ A        (2.31.2)

x^t_{k,i}, tt^t_i, y_k ∈ {0, 1},  ∀k ∈ R, ∀t ∈ P, ∀i ∈ A        (2.31.3)
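To make the two central allocation constraints concrete, the following brute-force sketch enumerates all binary allocations x^t_{k,i} for a tiny hypothetical instance (2 resources, 2 activities, 3 periods). It is an illustration only, not the paper's CPLEX implementation: the continuity constraint (2.3) and the precedence and quality constraints are omitted for brevity, and all numeric data are assumed.

```python
from itertools import product

tr = [2, 3]          # time ratings of the two resources (assumed)
D = [4, 3]           # resource requirements of the two activities (assumed)
R, A, P = range(2), range(2), range(3)

best = None
# Enumerate every binary allocation x[k][i][t] (12 bits -> 4096 candidates).
for bits in product([0, 1], repeat=len(tr) * len(D) * len(P)):
    x = lambda k, i, t: bits[(k * len(D) + i) * len(P) + t]
    # (2.2): each resource works on at most one activity in each period.
    if any(sum(x(k, i, t) for i in A) > 1 for k in R for t in P):
        continue
    # (2.4): the contributed time ratings must cover each activity's requirement.
    if any(sum(tr[k] * x(k, i, t) for k in R for t in P) < D[i] for i in A):
        continue
    makespan = max((t + 1 for k in R for i in A for t in P if x(k, i, t)),
                   default=0)
    if best is None or makespan < best:
        best = makespan
```

With these data one period is infeasible (covering D[0] = 4 in a single period requires both resources, leaving nothing for the second activity), so the search settles on a two-period schedule; the full MILP replaces this enumeration with CPLEX's branch-and-cut.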

The objective function of this mathematical model (2.1) has multiple purposes, as mentioned above. Besides minimizing the total duration and maximizing the overall quality level of the project, the minimization of the cost of allocated resources and the keeping of the quality levels and finishing times of activities within logical boundaries are considered in the objective function. Since the model has multiple goals, the number of resources (r), the number of activities (n) and the available time window parameter (TT) are used as weight


parameters to balance each part of the objective function. These weight parameters are also taken into consideration in the sensitivity analysis part. Constraint (2.2) ensures that a resource can only be assigned to one job at a time. Constraint (2.3) keeps the resource utilization continuous instead of intermittent. Constraint (2.4) provides the necessary resource allocation to finish a task. Constraints (2.5) and (2.6) guarantee that there is no redundant resource allocation for a task and provide the decision variable tt^t_i, which indicates whether activity i is performed in time period t. Constraint (2.7) uses this decision variable to calculate the duration of activities by summing over the distinct time periods. Since costs for resources are project-based, constraint (2.8) provides the decision variable y_k to record which resources took part in the project. Constraint (2.9) ensures that, if there are enough resources, the project finishes within the available time window. Constraint (2.10) dynamically calculates the finishing times of activities with respect to the time periods in which the activities are performed. Constraint (2.11) guarantees that, for each predecessor activity pair, the predecessor activity finishes before its successor. Constraint (2.12) calculates the activities' beginning times according to their finishing times and durations. Constraint (2.13) makes sure that, for each predecessor activity pair, the successor activity starts after its predecessor is finished. While constraint (2.14) prevents resource allocation to an activity before its beginning, constraint (2.15) prevents resource allocation after the finish of an activity. Whilst constraint (2.16) captures (in -ln form) the minimum quality level amongst the resources included in a task, constraint (2.17) captures the maximum quality level between them.
In line with the model's assumption, constraint (2.18) computes each activity's quality level by multiplying the highest and lowest quality levels of the resources included in that activity, which are found in constraints (2.16) and (2.17). Constraint (2.19) initializes the beginning time of the starting activities to 0, since the project's starting time is always zero. Constraint (2.20) ensures that, for predecessor activity pairs, the successor activity's earliest beginning time is greater than or equal to the sum of the predecessor activity's earliest beginning time and duration. Constraint (2.21) sets the total duration of the project to be greater than or equal to the finishing activities' finishing times. For predecessor activity pairs, constraint (2.22) forces the predecessor activity's latest finish time to be less than or equal to the difference between the successor activity's latest finish time and its duration (i.e., it ensures that predecessor activities always finish earlier than the successor activities). Constraint (2.23) bounds the finishing activities' latest finish times by the overall duration of the project. Constraint (2.24) implies that each activity's latest beginning time is found by subtracting the duration of the activity from the activity's latest finishing time. Constraint (2.25) calculates the time slack values for each activity. Since all projects start with 100% quality, constraint (2.26) initializes the beginning quality of the starting activities to 100% by setting the -ln form of their highest beginning quality to zero. Constraint (2.27) ensures that, for each predecessor activity pair, the successor activity's -ln form of highest beginning quality is at least the sum of the predecessor activity's -ln form of highest beginning quality and its -ln quality level (i.e., in the best case, with a 100% quality level, the successor activity can only have the same level of quality as the predecessor activity's starting quality).
Constraint (2.28) sets the -ln form of the project quality to be at least the finishing activities' -ln form of highest beginning quality increased by their quality levels. Constraint (2.29) forces the predecessor activity's lowest finishing quality to be greater than or equal to


the sum of successor activity’s -ln form of lowest finishing quality and -ln form of quality level. Constraint (2.30) sets the overall quality of the project to the finishing activity’s lowest finishing quality. Finally, constraint (2.31.1 and 2.31.2) is for sign restrictions t , tt t and y variables are binary variables. All and constraint (2.31.3) states that all the xki k i quality-based variables’ values are calculated via MS Excel since CPLEX does not let calculations with natural logarithms of decision variables. Table 1. The example problem Task No

Task Name

Definion of the Task

Predecessors of the Task

Resource Requirement of the Task (In person me unit)

Resource no

1 2 3 4 5 6 7 8

A B C D E F G H

Planning the research Recruitment of pollsters Designing surveys Training of pollsters Household sampling Prinng of surveys Conducng the researc Examinaon of the results

A A B,C C C D,E,F G

16 17 20 14 16 19 18 17

Resource 1: Resource 2: Resource 3: Resource 4: Resource 5: Resource 6:

Quality Time Rang (trk ) Rang (qrk) 2 3 2 3 4 2

0,97 0,99 0,98 0,96 0,96 0,96

Cost (ck ) 800 1100 880 970 1150 750

To verify DTRAPS, the example research project given in Table 1 is considered as a toy problem with six resources. Each activity, resource and time period is defined accordingly in the set of activities A = {A_1, A_2, ..., A_n}, the set of resources R = {R_1, R_2, ..., R_r} and the set of time periods P = {P_1, P_2, ..., P_TT}; the activities of the project are defined in Table 1. There are 8 activities and 6 resources in this project. Each activity's resource requirement in person-time units is also defined in Table 1. Each resource has its own time rating tr_k, quality rating qr_k and cost c_k; this resource-related information is provided in Table 1 as well. The available time window parameter TT was set to 13 days. The objective of the model is to minimize the total duration of the project while maintaining the best quality at the lowest cost. With the given data, the model was run on IBM ILOG CPLEX Optimization Studio version 20.1.0, and the resulting resource allocation matrix for the activities within each day is provided in Table 2, where the rows represent the resources, the columns represent the days, and the cell values represent the activities done in that time period by the corresponding resource. The total duration, total quality and cost of the project can be found in Table 3 as Scenario 14 (the original scenario). From Table 2, it can be seen that the predecessor relationships were taken into consideration properly for each activity pair, and the activity durations are 1, 2, 2, 1, 1, 2, 1 and 2 days, respectively. All of the resources took part in the project. With respect to Table 3, the project took 10 days to finish, which is within the desired time window parameter TT. The project was completed with an 82% quality level at a cost of 5650 dollars. Since solving large problems takes many hours, CPLEX was run with a 1-h time limit for each project.
Since the model combines multiple objectives in a single objective function, the number of resources r, the number of activities n and the available time window parameter TT are used to balance the parts of the objective function. A full-factorial sensitivity analysis over all three parameters was performed on the model; the results are given in Table 3.
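The 27 sensitivity scenarios in Table 3 are the full-factorial combinations of n, r and TT, each varied by ±1 around the original instance (n = 8, r = 6, TT = 13). A minimal sketch of how such a scenario grid can be generated (variable names are illustrative, not from the paper):

```python
import itertools

# Original instance from Table 1: 8 activities, 6 resources, TT = 13 days
n, r, TT = 8, 6, 13
deltas = (-1, 0, +1)

# Full-factorial grid: every combination of n±1, r±1, TT±1 (and unchanged)
scenarios = [(n + dn, r + dr, TT + dt)
             for dn, dr, dt in itertools.product(deltas, deltas, deltas)]

# 3 * 3 * 3 = 27 scenarios, matching Scenario 1 (minimum) .. Scenario 27 (maximum)
```

Here scenarios[0] is (7, 5, 12), i.e., Scenario 1 (minimum), and scenarios[-1] is (9, 7, 14), Scenario 27 (maximum).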

B. Çataltuğ et al.

Table 2. Project resource allocation matrix (rows: Resources 1-6; columns: Days 1-10; cells: the activity performed by that resource on that day; e.g., all six resources work on Activity 1 on Day 1, and Activity 8 is completed over Days 9-10).

Table 3. CPLEX sensitivity table (T = total duration, Q = total quality)

Scenario                Change                  n  r  TT  T   Q        Cost  Objective value  CPLEX run time
Scenario 1 (minimum)    (n-1), (r-1), (TT-1)    7  5  12  10  0.80764  5650   8333.908       00:05:57
Scenario 2              (n-1), (r-1), (TT)      7  5  13  10  0.82412  5650   8932.392       00:07:05
Scenario 3              (n-1), (r-1), (TT+1)    7  5  14  10  0.81605  5650   9530.667       00:11:09
Scenario 4              (n-1), (r), (TT-1)      7  6  12  10  0.80773  5650   9582.238       00:05:24
Scenario 5              (n-1), (r), (TT)        7  6  13  10  0.81605  5650  10300.262       00:07:00
Scenario 6              (n-1), (r), (TT+1)      7  6  14  10  0.81588  5650  11018.521       00:10:24
Scenario 7              (n-1), (r+1), (TT-1)    7  7  12  10  0.80764  5650  10884.368       00:05:50
Scenario 8              (n-1), (r+1), (TT)      7  7  13  10  0.81605  5650  11722.091       00:14:10
Scenario 9              (n-1), (r+1), (TT+1)    7  7  14  10  0.7994   5650  12559.931       00:07:41
Scenario 10             (n), (r-1), (TT-1)      8  5  12  10  0.80773  5650   9235.149       00:05:37
Scenario 11             (n), (r-1), (TT)        8  5  13  10  0.80764  5650   9908.345       00:07:29
Scenario 12             (n), (r-1), (TT+1)      8  5  14  10  0.80773  5650  10581.549       00:08:28
Scenario 13             (n), (r), (TT-1)        8  6  12  10  0.81588  5650  10663.006       00:05:34
Scenario 14 (original)  (n), (r), (TT)          8  6  13  10  0.81605  5650  11470.826       00:12:05
Scenario 15             (n), (r), (TT+1)        8  6  14  10  0.82438  5650  12278.813       00:08:26
Scenario 16             (n), (r+1), (TT-1)      8  7  12  10  0.81588  5650  12144.702       00:05:35
Scenario 17             (n), (r+1), (TT)        8  7  13  10  0.81605  5650  13087.157       00:11:09
Scenario 18             (n), (r+1), (TT+1)      8  7  14  10  0.81605  5650  14029.661       00:07:32
Scenario 19             (n+1), (r-1), (TT-1)    9  5  12  10  0.81588  5650  10136.521       00:05:41
Scenario 20             (n+1), (r-1), (TT)      9  5  13  10  0.80773  5650  10884.385       00:06:17
Scenario 21             (n+1), (r-1), (TT+1)    9  5  14  10  0.81588  5650  11632.525       00:08:37
Scenario 22             (n+1), (r), (TT-1)      9  6  12  10  0.81588  5650  11744.150       00:05:04
Scenario 23             (n+1), (r), (TT)        9  6  13  10  0.80773  5650  12641.425       00:06:24
Scenario 24             (n+1), (r), (TT+1)      9  6  14  10  0.81605  5650  13538.989       00:07:56
Scenario 25             (n+1), (r+1), (TT-1)    9  7  12  10  0.7994   5650  13405.132       00:05:56
Scenario 26             (n+1), (r+1), (TT)      9  7  13  10  0.81605  5650  14452.272       00:11:01
Scenario 27 (maximum)   (n+1), (r+1), (TT+1)    9  7  14  10  0.81588  5650  15499.472       00:15:26

3 Heuristic Solution Methods

The DTRAPS mathematical model has multiple objectives and, as stated above, takes too long to solve for large problems. Therefore, the following heuristic methods and improvement algorithms were also developed to find reasonable solutions in shorter times. The heuristics work either with a given total number of resources or with a specific selection of the resources to be used in the project. When only the total number of resources is given, the evaluation must be done according to the resources' qualifications: time rating (trk), quality rating (qrk) and cost (ck). When the resources are selected specifically, the cost of the resources is not included in the score calculation, since all selected resources will be used and their costs are project-based. When they are not selected, the cost is included in the score calculation in order to choose the resources that will take part in the project. Therefore, a ranking system must be constructed amongst

A Discrete-Time Resource Allocated Project Scheduling Model


the resources. Each resource is assigned a total score based on its qualifications. Since the resource qualifications are independent of each other, data normalization is applied to the parameters of each resource k in order to compute the total score and adjust the weight of each parameter in it. The resource's normalized sub-scores are then calculated for each parameter set. If the resources to be used in the project are given, a general score is again calculated, this time without the cost term, as stated above. Since a resource's time and quality ratings should be high and its cost low, the time and quality sub-scores enter the total score positively and the cost sub-score negatively.

The assumptions of DTRAPS also hold for the heuristics: resources are only allocated to available activities, i.e., activities whose predecessors finished at least one day earlier and whose remaining requirement is larger than zero. In the heuristic methods, the quality and duration of the project and of each activity are calculated exactly as in the DTRAPS mathematical model. The heuristics additionally compute a combined time, cost and quality score of the resulting resource allocation scenario.

Moreover, an improvement algorithm named Swap is applied to each heuristic except the Randomized one. The resulting scenario of the heuristic is taken as the main scenario in the Swap algorithm, and initially also as the best scenario, for later comparison. Starting from the first day, a swap operation is performed for two resources that are conducting distinct activities. This swapped resource allocation is then fixed, the selected heuristic is run again, and the result is considered a new scenario.
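The score calculation described above can be sketched as follows. The paper does not give the exact normalization formula or weights, so min-max normalization and equal weights are assumptions here, and the function names are illustrative:

```python
def normalize(values):
    """Min-max normalization to [0, 1]; a constant column maps to 1.0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

def resource_scores(time_ratings, quality_ratings, costs, include_cost=True):
    """Total score per resource: time and quality sub-scores enter positively,
    the cost sub-score negatively. Cost is skipped when the resources are
    already selected (their cost is project-based)."""
    nt = normalize(time_ratings)
    nq = normalize(quality_ratings)
    nc = normalize(costs)
    return [t + q - (c if include_cost else 0.0)
            for t, q, c in zip(nt, nq, nc)]

# Resource data from Table 1 (trk, qrk, ck for Resources 1-6)
trk = [2, 3, 2, 3, 4, 2]
qrk = [0.97, 0.99, 0.98, 0.96, 0.96, 0.96]
ck = [800, 1100, 880, 970, 1150, 750]
scores = resource_scores(trk, qrk, ck)
```

Under these assumptions Resource 2 ranks highest (high time and quality ratings despite its cost); a different weighting would shift the ranking.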
The best scenario is updated with a new scenario whenever the new scenario's score is higher (i.e., a gain is achieved). All possible swap operations are applied to the main scenario, and if any new scenario achieves a gain, the main scenario is updated with the best of them. This procedure repeats until no swap yields a gain over the main scenario; in this manner, the heuristic results are improved.

Greedy Construction Heuristic: The greedy heuristic first prepares a resource list from the selected resources or the given resource number, sorted by score in descending order. The resource with the greatest score is selected, and an available activity is picked sequentially by activity number. If the selected resource's time rating is more than enough to cover the activity's initial or remaining resource requirement, the other resources are checked for the same condition, and amongst those satisfying it, the resource with the lowest time rating is allocated to the activity to prevent excessive resource utilization. If no other resource satisfies the condition, the initially selected resource is allocated to the activity. If the allocation finishes the activity, the activity is set to unavailable and its successors become available the next day. If any available activity remains that day, the allocation procedure continues with the remaining resources; once all resources are used within the day, allocation restarts from the first resource on the next day. If no available activity is left during the day, each activity is checked to see whether the project is finished. If not, the algorithm continues until no activity with a remaining resource requirement is left.
When it finishes, the resource allocation matrix is acquired and the project duration is calculated from the resource allocation. After



that, the quality and duration of the activities are calculated, as well as the project duration and the overall quality. The cost of the project is calculated by summing the costs of the used resources. Hence, the project results are acquired.

Smallest Requirements First (SRF) Improvement Heuristic: This method is based on the idea of getting the small jobs done first amongst the available activities. It differs from the greedy heuristic in that the available activities are sorted each day by their resource requirements in ascending order, and resources are allocated starting with the least resource-demanding available activity until the project ends.

Largest Requirements First (LRF) Improvement Heuristic: This method works much like the SRF heuristic, but gets the large jobs done first: available activities are sorted by their resource requirements in descending order, and resources are allocated starting with the most resource-demanding available activity until the project ends.

Randomized Heuristic: In the randomized heuristic, all allocations are done randomly rather than by the rules above, while keeping the control mechanisms for activity availability and project completion. Since a single random allocation rarely yields a good allocation, the randomized heuristic is run for 200 scenarios on the given project. The best scenario is updated in each cycle in which the new scenario's score achieves a gain, and the scenario with the best score overall is selected as the result of the randomized heuristic. This scenario's quality, duration and cost are taken as the project results.
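The Swap improvement described earlier is, in essence, a best-improvement local search: generate every swapped scenario of the main scenario, adopt the best one if it scores higher, and stop when no swap gains. A simplified, self-contained sketch; the toy sequence representation, durations and score function here are illustrative, not the actual DTRAPS scenario encoding:

```python
def pair_swaps(seq):
    """All scenarios obtained by swapping two positions of the sequence."""
    out = []
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            s = list(seq)
            s[i], s[j] = s[j], s[i]
            out.append(tuple(s))
    return out

def swap_improve(main, score, neighbors=pair_swaps):
    """Best-improvement swap loop: evaluate every swapped scenario of the
    main scenario; if the best of them scores higher, it becomes the new
    main scenario; stop when no swap yields a gain."""
    best = main
    while True:
        challenger = max(neighbors(best), key=score)
        if score(challenger) > score(best):
            best = challenger
        else:
            return best

# Toy example: order three activities; score = -(total flow time),
# so higher is better. Durations are illustrative.
durations = {"A1": 2, "A2": 1, "A3": 3}

def score(order):
    t, total = 0, 0
    for a in order:
        t += durations[a]
        total += t
    return -total

improved = swap_improve(("A3", "A1", "A2"), score)
```

Starting from ("A3", "A1", "A2") (score -14), one swap reaches the shortest-processing-time order ("A2", "A1", "A3") (score -10), after which no swap improves and the loop stops.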
For a comparative analysis of the developed heuristics, the example research project was again considered. The project was run on each heuristic under 7 different scenarios with different time ratings, quality ratings and costs of the resources. In this way the performances of the algorithms were compared and, in a sense, sensitivity and parametric analyses were performed; all of the developed techniques were also validated this way. The greedy heuristic results are given in Table 4 and Fig. 1.

Table 4. Scenario results of greedy heuristic sensitivity analysis

Scenario    Time  Cost
Scenario-4  8     5650
Scenario-1  11    5650
Scenario-2  11    5650
Scenario-3  11    5650
Scenario-6  11    6250
Scenario-7  11    5050
Scenario-5  15    5650

Scenario    Quality  Cost
Scenario-3  0.70     5650
Scenario-1  0.78     5650
Scenario-4  0.78     5650
Scenario-5  0.78     5650
Scenario-6  0.78     6250
Scenario-7  0.78     5050
Scenario-2  0.86     5650

Scenario    Time  Quality
Scenario-4  8     0.78
Scenario-1  11    0.78
Scenario-2  11    0.86
Scenario-3  11    0.70
Scenario-6  11    0.78
Scenario-7  11    0.78
Scenario-5  15    0.78



Fig. 1. Scenario results graphs of greedy heuristic sensitivity analysis

After analyzing the heuristics, a project instance set was created with 4 small, 4 medium and 4 large project instances. Each instance was run both on CPLEX and on the heuristics, with the randomized heuristic run for 50, 200 and 400 cycles. A 1-hour time limit was imposed in CPLEX, since large instances take too long. For small and medium-size instances, CPLEX found the optimum and mostly performed better than the heuristics, although the heuristics came very close to the optimal solution in many instances. The randomized heuristic with 400 cycles generally performed better than with 200 and 50 cycles, as expected, since more allocation possibilities are tried in 400 cycles. For large-size instances, CPLEX ran for the full hour while the heuristics found solutions in 1-2 min on average.

4 Decision Support System

The decision support system (DSS), intended to provide its users great benefit in project management, allows working and planning on the desired project with more than one solution method; the results are displayed and can be saved as a PDF report. The data of the desired project (number of resources, number of tasks, resource and task properties) can be loaded from a prepared information file or entered manually by the user, and the program allows editing tasks and resources during manual input. After the data is given to the program manually or by loading, the information page opens and manually entered data can be saved as a file. The problem is then solved with the solution method preferred by the user, the results are displayed on the output screen, and different scenarios can be tried on it. Alternatively, the user can directly load a previously saved scenario and work on it. The DSS includes two main solution approaches, CPLEX and the heuristics, and supports all four heuristic methods: Greedy, Randomized, SRF and LRF. With the selected solution method, resources are assigned to the activities optimally or as close to optimal as possible, and the results appear on the output screen, where a Gantt chart with the resource assignments can also be drawn and the resulting project scenario saved. Another feature is that the user can manually assign resources to the project with the help of the program and then continue working on the solution he or she has created, for example by changing the number of resources or by forcing particular resources in or out of the project. Thus, the user can design different scenarios for the same project



with the desired ways and methods and generate reports with the program. The reports and solutions created can be downloaded to the user's computer in PDF format.

5 Conclusion

In this study, a Discrete-Time Resource Allocated Project Scheduling (DTRAPS) model was built, with the addition of a cost component to bring the integration of time and quality one step closer to real life. The sensitivity analysis showed the model to be robust, and the model was validated in that it provides logical solutions, violating no constraints and reaching an optimal resource allocation. Since the model takes a long time to solve large problems, heuristics were then developed to reach fast and consistent solutions. First, a greedy heuristic that allocates the best resource to the next available activity was developed; SRF and LRF heuristics, which rank the activities by their resource requirements, were then introduced. These heuristics were further improved with a Swap improvement algorithm, and a randomized heuristic was added. Sensitivity analyses of all the heuristics showed that they give consistent solutions, and the comparison between the heuristics and the proposed model indicated that the heuristics work properly and produce good resource allocation solutions. Finally, a dynamic and user-friendly decision support system that includes both the proposed model and the heuristics was created in Excel-VBA. The DSS embeds the mathematical model via CPLEX as well as the heuristic algorithms, allows further work on a solution by creating different scenarios, and supports the preparation of reports from these scenarios.


An Optimization Model for Vehicle Scheduling and Routing Problem

Tunay Tokmak, Mehmet Serdar Erdogan, and Yiğit Kazançoğlu

International Logistics Management, Yaşar University, Izmir, Turkey
{mehmet.erdogan,yigit.kazancoglu}@yasar.edu.tr

Abstract. Vehicle scheduling has a significant impact on the logistics operations of businesses, and effective scheduling can increase customer satisfaction. In this context, a vehicle scheduling model is developed to enhance the distribution operations of a company. The aim of the developed mixed-integer linear programming model is to minimize the number of vehicles departing in a day, in order to decrease the extreme density the company experiences on certain days. Due date constraints are taken into consideration in the mathematical model; to propose better solutions, the due dates are then extended by one, two and three days, and the model is solved for each case. As the due dates are extended, the number of vehicles departing in a day decreases significantly. The model is solved using IBM ILOG CPLEX (OPL) software for seven-day and fifteen-day periods, based on one month of data acquired from the company's database. As longer periods are optimized, the model generates better results; however, the solution time increases.

Keywords: Vehicle scheduling · Vehicle routing · Optimization · MILP · Due date

1 Introduction

Vehicle scheduling can be described as the process of devising a timetable for vehicles to distribute goods to, or collect goods from, customers. It has great importance in logistics planning because it is a form of decision-making about allocating available capacity or resources to activities, tasks or customers over time. Scheduling plays a crucial role in any industry; for example, a manufacturing company has to schedule its production process considering diverse elements such as production capacity, available workforce, inventory capacity and volatile demand. The scheduling process is quite complex, however, and usually requires comprehensive algorithms to reach an optimal schedule that takes all affected mechanisms into consideration. In logistics operations, scheduling algorithms usually focus on minimizing inventory and transportation costs as well as satisfying customer expectations. Specifically, this study proposes a vehicle scheduling algorithm that sets on-time delivery as its objective.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 630–638, 2022. https://doi.org/10.1007/978-3-030-90421-0_54



The vehicle routing problem (VRP) is an integer programming problem in logistics whose goal is to determine an optimal set of routes to deliver orders to a given set of customers; it is a generalization of the traveling salesman problem (TSP). While generating a solution to a vehicle routing problem, the balance between cost reduction and customer satisfaction is crucial, as is the scheduling of the vehicles used to deliver the goods. Vehicle routing problems fall into two main categories, single depot and multi depot. In this study, the distribution operations of a company that experiences congestion in finding appropriate vehicles for delivery were enhanced by developing an integer linear mathematical model to improve the company's vehicle scheduling and routing. The company rents vehicles of a homogeneous fleet and delivers products each day, so the problem may be titled a homogeneous-fleet multi-period scheduling problem. The mathematical model is implemented in IBM ILOG CPLEX software, and its objective is to minimize the maximum number of vehicles departing in a day. The rest of the paper is organized as follows: Sect. 2 reviews the relevant literature, Sect. 3 defines the problem, Sect. 4 gives the results of the study, and Sect. 5 gives the concluding remarks.

2 Literature Review

Che et al. (2011) [1] proposed a memetic algorithm with giant-tour representation to solve a multi-period vehicle routing problem; the study demonstrated that memetic algorithms have great potential for solving large-scale problems. Phiboonbanakit et al. (2018) [2] aimed at creating a mathematical model that combines machine learning with well-known optimization heuristics to solve vehicle routing problems. Rui et al. (2018) [3] developed an algorithm for open-pit mine car scheduling problems by analyzing the classical particle swarm optimization algorithm and the traditional immune particle swarm optimization algorithm. Huisman et al. (2001) [4] developed a robust solution approach to dynamic vehicle scheduling; the study demonstrated that late trip starts and delay costs are reduced by using dynamic scheduling instead of traditional scheduling. Bouderar et al. (2019) [5] developed a skyline algorithm for vehicle routing problems. Wen et al. (2010) [6] solved a dynamic multi-period vehicle routing problem (DMPVRP), whose aim is to minimize routing distance and customer waiting time, thus decreasing the workload. Zäpfel and Bögl (2008) [7] developed multi-period vehicle routing and crew scheduling with the alternative of outsourcing external drivers. Moin et al. (2011) [8] formulated a hybrid genetic algorithm for a multi-product multi-period problem, examining multi-supplier products and a homogeneous vehicle fleet. Archetti et al. (2015) [9] solved multi-period vehicle routing problems with due dates using a variable MIP neighborhood descent algorithm, developing routes for each day so that the costs of shipping, inventory and penalties for deferred service are minimized. Yang and Xuan (2019) [10] developed a model for multistage heterogeneous fleet scheduling with a fleet sizing criterion.
The study on the heterogeneous fleet with


T. Tokmak et al.

fleet sizing criterion is seen as an opportunity for the improvement of freight transport scheduling systems. Soleimaniamiri and Li (2019) [11] developed a scheduling model for heterogeneous connected automated vehicles at a general conflict area. Setiawan et al. (2019) [12] developed a mathematical model for the heterogeneous vehicle routing problem with multi-trips and multi-products (HVRPMTMP), with fixed and variable vehicle costs, using a four-index vehicle flow formulation. Zhang et al. (2020) [13] modeled a vehicle scheduling problem aimed at scheduling quarantine vehicles and proposed a hybrid water wave optimization and neighborhood search algorithm to tackle it. Heuvel et al. (2008) [14] focused on the integration of timetabling and vehicle scheduling: models based on time-space networks were extended and a local search algorithm was derived to achieve the integration. Moreno et al. (2019) [15] developed a hybrid algorithm, integrating a set partitioning (SP) model, to solve the vehicle scheduling problem with multiple depots; the methodology is also used to solve real-life problems arising in Colombia. Faiz et al. (1998) [16] aimed at optimal planning of vehicle scheduling during a crisis, using two different integer linear programming models: an arc-based and a path-based formulation. Ceder (2011) [17] focused on timetable development and scheduling with different vehicle types, using deficit-function theory to create timetables. Laurent et al. (2009) [18] present a multi-period vehicle scheduling problem where vehicles deliver a single type of product from multiple depots; the objective is to propose the least costly schedule, minimizing transportation costs as well as inventory costs at the retailers. An exact algorithm is developed to solve this problem.
All feasible schedules are generated from each depot to each retailer, and a set of vehicle schedules is selected optimally by solving a shortest path problem. Löbel (1999) [19] discusses multi-depot heterogeneous vehicle scheduling problems, where the general problem consists of allocating vehicles to deliveries while minimizing the number of vehicles departing and the operational costs; a new formulation based on list-graph coloring is used and an iterative tabu search is devised for vehicle minimization. Kung et al. (2005) [20] investigated the solution of the multi-commodity flow formulation, which can be described as an integer LP formulation based on an arc-oriented assignment problem with additional path-oriented flow conservation constraints.

3 Problem Definition

The company rents one type of truck to meet the demands of its customers, and loading is done every day of the year. The problem is multi-period, since more than one day is scheduled, and this is what makes it a vehicle scheduling problem: the vehicles are planned by deciding which vehicles depart on which day. It is also a vehicle routing problem, because the routing of the vehicles departing each day must be done as well. The problem is therefore a combination of vehicle routing and scheduling. In our particular data, each customer's demand always equals a full vehicle load; in other words, one vehicle exactly covers one customer's demand. For this reason, there is no point in minimizing cost in the objective function. So the following question arises here: why is routing done if a single vehicle serves



a single customer? We will set up the model in such a way that it remains functional when the demands change; set up this way, it can produce solutions for all situations in which a vehicle serves one customer or more than one. The company experiences extreme intensity on certain days and has difficulty finding a vehicle; in addition, there is a maximum capacity it can produce in a day. To prevent this intensity, the maximum number of vehicles released in one day is minimized in the objective function. The mathematical model is given below.

Sets:
D  Depot
C  Customers
L  Locations: D ∪ C
T  Days

Parameters:
d_i      Due date of customer i ∈ C
dist_ij  Distance between location i ∈ L and location j ∈ L

Decision variables:
X_ijt  1 if a vehicle travels from location i ∈ L to location j ∈ L on day t ∈ T, 0 otherwise
Y_it   1 if a vehicle visits customer i ∈ C on day t ∈ T, 0 otherwise
MaxV   Maximum number of vehicles used in a day

Objective function:

Minimize  MaxV + 0.01 · Σ_{i∈L} Σ_{j∈L} Σ_{t∈T} dist_ij · X_ijt

Subject to:

Σ_{t∈T} Y_it = 1                                  ∀i ∈ C          (3.1)
Σ_{t∈T: t>d_i} Y_it = 0                           ∀i ∈ C          (3.2)
Σ_{j∈L: j≠i} X_ijt = Y_it                         ∀i ∈ C, t ∈ T   (3.3)
Σ_{j∈L: j≠i} X_jit = Y_it                         ∀i ∈ C, t ∈ T   (3.4)
Σ_{j∈L: j≠i} X_jit − Σ_{j∈L: j≠i} X_ijt = 0       ∀i ∈ C, t ∈ T   (3.5)
Σ_{i∈C} Y_it ≤ 1000 · Σ_{i∈D} Σ_{j∈C} X_ijt       ∀t ∈ T          (3.6)
Σ_{i∈C} Y_it ≤ MaxV                               ∀t ∈ T          (3.7)

Constraint (3.1) ensures that each customer is visited exactly once. Constraint (3.2) guarantees that customers are not visited after their due date. Constraint (3.3) enforces departing from each visited customer, and constraint (3.4) enforces entering each visited customer. The flow balance constraints are given in (3.5). Constraint (3.6) links the Y variables to the X variables leaving the depot. Constraint (3.7) determines MaxV.
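If the routing part (constraints 3.3-3.5) is set aside and only the due-date and counting structure is kept, the smallest achievable MaxV can be computed directly: for every day t, all customers whose (extended) due date is at most t must be served within the first t days. A hedged sketch of this relaxation; the function names and due dates are illustrative, not from the paper:

```python
import math
from collections import Counter

def min_max_vehicles(due_dates, gap=0):
    """Smallest feasible MaxV when customer i may be served on any day
    1..due_dates[i] + gap: on every day t, the customers whose effective
    due date is <= t must fit into t days of capacity MaxV."""
    eff = [d + gap for d in due_dates]
    horizon = max(eff)
    return max(math.ceil(sum(1 for d in eff if d <= t) / t)
               for t in range(1, horizon + 1))

def edd_schedule(due_dates, gap, cap):
    """Earliest-due-date assignment under a per-day capacity; returns the
    chosen day per customer, or None if infeasible."""
    order = sorted(range(len(due_dates)), key=lambda i: due_dates[i])
    load, days = Counter(), {}
    for i in order:
        t = next((t for t in range(1, due_dates[i] + gap + 1)
                  if load[t] < cap), None)
        if t is None:
            return None
        load[t] += 1
        days[i] = t
    return days

due = [1, 1, 2, 2, 2, 3]  # illustrative due dates for six customers
```

With these due dates, min_max_vehicles(due) is 3 (day 2 already owes five deliveries), while a due-date gap of 1 lowers it to 2, mirroring the effect that extending due dates has on MaxV in Sect. 4.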

4 Results

The above-mentioned company experiences congestion in finding appropriate vehicles for delivery. Therefore, a mixed-integer linear programming model was developed using IBM ILOG CPLEX software, with the objective of finding the minimum number of vehicles that must depart in a day to deliver goods on time. The model also considers the routing of a vehicle in case its capacity exceeds the demand of a customer. Additionally, a sensitivity analysis was done by extending due dates, to offer alternative and better solutions to the company; as the due date gap increased, both the maximum number of vehicles required and the solution time for each period decreased. The model was run on a computer with an Intel® Core™ i5-8265U CPU @ 1.60 GHz and 16.0 GB RAM.

Table 1 represents the model aimed at minimizing the number of vehicles departing within a specific period of time. The data required to build the model was obtained by examining the total demand of the customers and the number of vehicles departing in specific periods to fulfill those demands. The number of vehicles departing in a day before the optimization model was applied is recorded in the table as the maximum number of vehicles before optimization. As a first approach, the one-month horizon was divided into five one-week periods and the model was solved individually for each; the last period consists of only three days because August was used as the sample month. These periods appear in the table as 7-1, 7-2, 7-3, 7-4 and 7-5. As a second approach, the month was divided into two periods, represented as 15-1 and 16-2 in the table.
The first period indicates fifteen days and the second period indicates 16 days. The model has been solved individually for these periods. Due date gaps indicate the number of days added to original due dates for delivery of the products.

An Optimization Model for Vehicle Scheduling and Routing Problem


Table 1. Maximum number of vehicles departing in a day (Max V = maximum number of vehicles; Time = solution time in seconds)

| Period | Max V (gap 0) | Time (s) | Max V (gap 1) | Time (s) | Max V (gap 2) | Time (s) | Max V (gap 3) | Time (s) | Max V before optimization |
|---|---|---|---|---|---|---|---|---|---|
| 7-1 | 28 | 4.19 | 22 | 2.99 | 22 | 2.71 | 22 | 2.48 | 28 |
| 7-2 | 26 | 0.34 | 16 | 0.34 | 12 | 0.29 | 10 | 0.26 | 26 |
| 7-3 | 30 | 1.82 | 19 | 1.37 | 19 | 1.32 | 19 | 1.63 | 30 |
| 7-4 | 29 | 3.89 | 22 | 5.05 | 22 | 4.79 | 22 | 4.28 | 29 |
| 7-5 | 42 | 2.63 | 32 | 0.33 | 32 | 0.32 | 32 | 0.32 | 42 |
| 15-1 | 28 | 144.96 | 20 | 103.90 | 18 | 79.90 | 17 | 113.39 | 28 |
| 16-2 | 42 | 371.16 | 26 | 324.99 | 26 | 265.74 | 26 | 248.34 | 42 |

According to Table 1, as due date gaps increase, the number of vehicles departing in a day decreases continuously, and the solution time becomes smaller in most cases. In other words, by extending due dates by a few days, the model produces better results in terms of both the number of vehicles departing in a day and the solution time. However, due date gaps have no effect on the results after a breakpoint. An important inference is that optimizing longer periods yields better results: for the fifteen-day period the model produced 28, 20, 18 and 17 vehicles, whereas for a seven-day period it produced 28, 22, 22 and 22. However, the model could not be solved for a full month with IBM ILOG CPLEX (OPL) within reasonable time. Comparing the results with the numbers before the optimization model was applied: for the first seven days, the number of vehicles departing remained stable when the due date gap was 0 and decreased from 28 to 22 for due date gaps 1, 2 and 3. In the second period, the maximum vehicle number stayed the same at gap 0, but as the gaps increased it decreased from 26 to 16, 12 and 10, respectively. In the third period, the maximum number of vehicles stayed stable at gap 0; as the gaps increased, it decreased substantially to 19. For the fourth seven-day period, the maximum number of vehicles did not change at gap 0 but decreased from 29 to 22 as the gaps increased. For the last seven days, the maximum number of vehicles remained the same at gap 0 and decreased from 42 to 32 for all other gaps. For the fifteen-day and sixteen-day optimizations, when the due date gap was 0, the maximum number of vehicles remained 28 and 42, respectively; as the gaps increased, the numbers decreased to 20, 18 and 17 for the fifteen-day period and to 26 for the sixteen-day period. Lastly, Table 2 presents the average number of vehicles and the average solution time for each scenario.
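The monotone pattern described above can be checked directly against the Table 1 figures; a minimal sketch in Python, with the Max V values transcribed from Table 1:

```python
# Max V per period for due-date gaps 0, 1, 2, 3 (transcribed from Table 1).
max_v = {
    "7-1":  [28, 22, 22, 22],
    "7-2":  [26, 16, 12, 10],
    "7-3":  [30, 19, 19, 19],
    "7-4":  [29, 22, 22, 22],
    "7-5":  [42, 32, 32, 32],
    "15-1": [28, 20, 18, 17],
    "16-2": [42, 26, 26, 26],
}

def non_increasing(seq):
    """True if every value is less than or equal to its predecessor."""
    return all(b <= a for a, b in zip(seq, seq[1:]))

# Relaxing due dates never requires more vehicles in any period...
assert all(non_increasing(v) for v in max_v.values())

# ...and for most weekly periods the benefit saturates: the due-date gap at
# which the minimum fleet size is first reached (the "breakpoint") is gap 1.
first_min_gap = {p: v.index(min(v)) for p, v in max_v.items()}
assert all(first_min_gap[p] == 1 for p in ["7-1", "7-3", "7-4", "7-5"])
```

Periods 7-2 and 15-1 keep improving through gap 3, which is consistent with the breakpoint remark above: the saturation point differs per period.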

T. Tokmak et al.

Table 2. Average solution time and number of vehicles

| Period | Average solution time (s) | Average number of vehicles |
|---|---|---|
| 7-1 | 3.0925 | 23.50 |
| 7-2 | 0.3075 | 16.00 |
| 7-3 | 1.5350 | 22.00 |
| 7-4 | 4.5025 | 23.75 |
| 7-5 | 0.9000 | 34.50 |
| 15-1 | 110.5375 | 20.75 |
| 16-2 | 302.5575 | 30.00 |

When the average maximum number of vehicles produced by the model in each period is compared with the maximum number of vehicles before optimization, the number decreased for all periods: from 28, 26, 30, 29, 42, 28 and 42 to 23.5, 16, 22, 23.75, 34.5, 20.75 and 30, respectively. As expected, solution times increase as the scenarios become larger.
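The figures in Table 2 are arithmetic means over the four due-date-gap scenarios of Table 1; a quick Python check for two of the periods:

```python
# (Max V, solution time) per due-date gap 0..3, transcribed from Table 1.
table1 = {
    "7-1":  [(28, 4.19), (22, 2.99), (22, 2.71), (22, 2.48)],
    "16-2": [(42, 371.16), (26, 324.99), (26, 265.74), (26, 248.34)],
}

def averages(rows):
    """Mean number of vehicles and mean solution time over the scenarios."""
    n = len(rows)
    return (sum(v for v, _ in rows) / n, sum(t for _, t in rows) / n)

avg_v, avg_t = averages(table1["7-1"])
assert avg_v == 23.50 and round(avg_t, 4) == 3.0925    # Table 2, period 7-1

avg_v, avg_t = averages(table1["16-2"])
assert avg_v == 30.00 and round(avg_t, 4) == 302.5575  # Table 2, period 16-2
```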

5 Conclusion

In this paper, a vehicle scheduling problem for a homogeneous fleet is considered. Since vehicles are loaded every day, the problem can be called a vehicle scheduling problem. However, the model is also formulated as a vehicle routing problem for situations where the demands of customers do not equal the capacities of vehicles: when a single vehicle does not serve a single customer exclusively, vehicles must be routed, because fluctuating demands may mean that the demand of a single customer is less than a vehicle's capacity. Delivery deadlines must also be taken into consideration, because customers expect their goods within a time interval. The main objective was to minimize the total number of vehicles released in one day; in this manner, the company will be able to find vehicles for deliveries without struggle and satisfy its customers. Although the model includes routing, no routing occurred in our data because demand always corresponded to vehicle capacity; routing would appear in cases where demand is less than capacity. Finally, to summarize the results: in the first approach, the one-month period was divided into five one-week periods and the model was solved individually for each; in the second approach, the month was divided into periods of fifteen and sixteen days and the model was again solved individually. As the due date gaps increase, the number of vehicles departing in a day decreases continuously, and the solution time decreases in most cases. In other words, by extending due dates by a few days, the model produces better results in terms of both the number of vehicles departing in a day and solution times. However, due date gaps have no effect on results after a


breakpoint. Comparing the average maximum number of vehicles produced by the model in each period with the maximum number of vehicles before optimization shows that the number decreased for all periods.



Applying Available-to-Promise (ATP) Concept in Multi-Model Assembly Line Planning Problems in a Make-to-Order (MTO) Environment

Mert Yüksel, Yaşar Karakaya, Okan Özgü, Ant Kahyaoğlu, Dicle Dicleli, Elif Onaran, Zeynep Akkent, Mahmut Ali Gökçe, and Sinem Özkan

Department of Industrial Engineering, Yaşar University, Izmir, Turkey
{ali.gokce,sinem.ozkan}@yasar.edu.tr

Abstract. We consider a multi-model assembly line production planning problem. We assume an environment in which orders for several different model types, in varying quantities, are received by contract manufacturers in a Make-to-Order (MTO) setting. The models are similar enough that they share some common critical raw materials/parts and are produced on an already balanced multi-model assembly line. Due to MTO and the contracts, there are significant costs associated with earliness and tardiness in addition to inventory and production costs, capacity, and other operational constraints. The challenge is to make quick and accurate decisions on whether or not to accept an order and to provide a due date, along with a raw material procurement and production plan that can be followed. This problem is closely related to Available-to-Promise (ATP) systems; better management of ATP systems is expected to increase revenue and profitability by reducing missed market opportunities and improving operational efficiency. This study aims to develop an effective solution method for this problem that minimizes earliness, tardiness, lost sales, inventory holding, FGI, subcontracting, overtime, and raw material costs. We present the results of a detailed review of the related literature. This study fills the gap in the literature on assembly line planning problems covering Available-to-Promise by considering shipping decisions on critical raw materials required for production in a make-to-order environment. After identifying this gap, a novel mathematical model has been developed to solve the problem at hand. While the developed mathematical model offers acceptable calculation times for problems in which the production time required to meet total demand does not exceed the total in-house production time, it does not offer a fast solution when the total in-house production time is insufficient to meet total demand. For this reason, a heuristic algorithm has been developed that provides faster results with near-optimal solutions for this type of problem. We present the results of our experimentation with both approaches.

Keywords: Multi-model assembly line · Make-to-order · Available-to-promise · Production planning · Heuristic

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 639–652, 2022. https://doi.org/10.1007/978-3-030-90421-0_55



1 Introduction

Assembly lines are an important part of manufacturing systems. Classified by the number of model types they produce, assembly lines fall into three categories [1]: single-model assembly lines, mixed-model assembly lines, and multi-model assembly lines. Single-model assembly lines are designed to produce exactly one model type, whereas multiple model types can be manufactured on mixed-model assembly lines (MMAL) and multi-model assembly lines (MuMAL). Different products on an MMAL can be manufactured continuously, while a MuMAL produces different models in batches with a setup time/cost between them; reducing these setups is therefore a concern on MuMALs, and different models are manufactured in separate batches to minimize the setup impact. A mixed-model assembly line is used when the models are similar and no setup is necessary for process adjustment [2]. A generic illustration of a MuMAL is shown in Fig. 1. Producing multiple types of products can help businesses grow rapidly by acquiring different market segments at low cost. Contract manufacturing companies can satisfy the demand of different customer segments by utilizing a multi-model assembly line. The products to be produced/assembled have similar product characteristics and can be produced on the same specially designed assembly lines. Such contract manufacturers work in a Make-to-Order environment, where customers submit a request for quotation (RfQ) for specific quantities of specific models in a single order with a delivery date. This problem is closely related to Available-to-Promise (ATP) systems: an important concept in supply chain planning, ATP involves quoting a quantity and delivery date depending on various conditions. In most cases, survival for such companies depends on their ability to make quick and accurate production planning decisions on whether or not to accept an order and to provide a realistic delivery date, taking into account raw material/part requirements, capacity, and various operational constraints. Such problems arise especially in, but are not limited to, electronic equipment manufacturing (TV sets, phones, etc.) and are referred to as Electronics Contract Manufacturing (ECM).

Fig. 1. Multi-model assembly line framework

In many cases, an RfQ comes with several models in different quantities. If the order is accepted, the manufacturer can only deliver the order when all the models in the order are completed.
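Since an order ships only when its last model is finished, its completion period is the maximum of the finish periods of its models; earliness and tardiness are then measured against the order's delivery window. A minimal sketch (the finish periods and the due-date window below are hypothetical):

```python
# Hypothetical finish periods for the models of one accepted order.
model_finish_period = {"M1": 4, "M2": 7, "M3": 5}

# The order is deliverable only once all of its models are complete.
order_completion = max(model_finish_period.values())
assert order_completion == 7

# Earliness/tardiness are measured against the order's due-date window.
due_min, due_max = 6, 8                        # hypothetical window
tardiness = max(0, order_completion - due_max)
earliness = max(0, due_min - order_completion)
assert (tardiness, earliness) == (0, 0)        # finished inside the window
```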


It is crucial to manage capacity in this setting. Most such companies utilize overtime, and even subcontract work themselves, if the need arises and it makes financial sense. The model types that a contract manufacturer can produce share many common critical raw materials; in the case of TV manufacturing, these can be common PCBs, plastics, or other electronic components. Managing the inventory and procurement of these common critical raw materials is required to support the assembly plan. In a real-life setting, there are multiple delivery methods for these raw materials/parts, with different lead times and associated costs. Such a contract manufacturer regularly decides whether or not to accept an order, how to plan production on the MuMAL, how to plan the procurement of common critical raw materials, and how to best utilize capacity under operational constraints. Answering an RfQ requires making this analysis over and over again, and success or failure in the market depends heavily on how well these decisions are made. We therefore believe there is a need for models that make these decisions quickly and accurately; our literature review in Sect. 2 shows this gap. The rest of the paper is organized as follows. Section 2 provides a concise literature review. The proposed MILP model formulation is presented in Sect. 3.A, an experimental study in Sect. 3.B, and the proposed heuristic method with comparisons in Sect. 3.C. Section 4 summarizes the study along with a discussion of future research directions.

2 Literature Review

In this section, we present a review of assembly line planning problems that touch on Available-to-Promise, summarized chronologically in Table 1. Many researchers deal with mixed-model assembly line sequencing problems, adding the option to accept or reject orders in the MTO production environment. Bard et al. [3] consider a mixed-model assembly line sequencing problem and combine a heuristic with branch and bound; their objective is to minimize the total length of the line, and they can only verify optimality for 20 units due to time limitations. Xiaobo and Ohno [4] formulate a mixed-model assembly line sequencing problem and propose simulated annealing (SA) and branch-and-bound methods for the objective of total conveyor stoppage time; they find optimal solutions for small instances with branch and bound, while SA is used for large instances. Akkan [5] applies three heuristic approaches to the order scheduling problem: (1) a compaction heuristic, (2) a what-if heuristic, and (3) a min-frag-cost heuristic; the objective is to minimize the present value of the cost of rejecting orders plus the inventory holding cost. McMullen [6] considers a mixed-model assembly line sequencing problem to minimize the weighted sum of the material usage rate and the total number of setups, solved with tabu search. Oğuz et al. [7] solve the joint order acceptance and order scheduling problem with mixed-integer linear programming to maximize total revenue; since the problem is NP-hard, they develop three heuristic methods for industrial-sized instances to avoid high CPU times. Manavizadeh et al. [8]


study bi-criteria sequencing using particle swarm optimization (PSO) and simulated annealing (SA) for a mixed-model assembly line problem, minimizing total utility work, idle time, setup cost, production rate variation, and total tardiness costs over all orders; because the problem is NP-hard, small instances are solved optimally and compared against the PSO and SA metaheuristics. Sadjadi et al. [9] present MINLP and genetic algorithm models for the mixed-model assembly line sequencing problem, considering customer priorities and priority-dependent penalty costs, with three objectives: (1) total utility minimization, (2) product rate variation minimization, and (3) total earliness and tardiness cost minimization; the setting is mixed-model U-lines such as those used in automobile and truck assembly. Rabbani et al. [10] propose a U-shaped mixed-model assembly line sequencing problem and develop a hybrid GA-beam search algorithm, considering downstream help and storage of kits, for the objective of minimizing total conveyor stoppage time and total tardiness. Nazar and Pillai [11] present a mathematical model and a bit-wise mutation algorithm to find sequences that minimize the variations of the production rate and the part usage rate; because the mathematical model cannot solve large instances in reasonable computation time, they propose a modified bit-wise mutation algorithm for large sequencing problems. The MMAL sequencing problem is addressed by Tanhaie et al. [12] using the TOPSIS method and a MOPSO algorithm to minimize the number of utility workers, total setup cost, and total tardiness and earliness costs. Tanhaie [13] presents a Lagrangian relaxation method for the mixed-model assembly line sequencing problem by dividing the model into two stages; the objective is to maximize profit while minimizing the costs of utility workers, operator idle time, and total tardiness and earliness, subject to constraints on the number of parts used in producing customer orders. A few research articles also consider MMAL sequencing problems in hybrid MTO and MTS environments. Robinson and Carlson [14] develop a MIP model for real-time resource allocation and scheduling to minimize inventory holding and production costs in a hybrid MTS/MTO system. Kalantari et al. [15] present a decision support system for order acceptance/rejection in a hybrid MTS/MTO environment to minimize total cost (i.e., overtime, subcontracting, tardiness/earliness, and penalty costs); they use the TOPSIS method to rank customers. Overall, as the review shows, different mathematical models have been developed to solve assembly line sequencing or scheduling problems in MTO/MTS production environments. However, the lack of contributions on the ATP concept for MuMAL problems in MTO mode is clear. We believe we fill this gap in the literature by considering shipping decisions on the critical raw materials required for production in the problem environment described above.


Table 1. Literature review

| Author (date) | Problem definition | Production environment | Order acceptance/rejection | Objective | Solution method |
|---|---|---|---|---|---|
| Bard et al. (1994) | Mixed-model assembly line seq. problem | MTO | No | Minimize the total length of the line | Mixed-integer nonlinear program with heuristic + branch and bound |
| Xiaobo and Ohno (1997) | Mixed-model assembly line seq. problem | MTO | No | Minimize the total conveyor stoppage time | Simulated annealing & branch and bound |
| Akkan (1997) | Order scheduling problem | MTO | Yes | Minimize the present value of the cost of rejecting orders and inventory holding cost | Compaction, what-if & min-frag-cost heuristics |
| McMullen (1998) | Mixed-model assembly line seq. problem | MTO | No | Minimize the number of setups and the material usage rate | Tabu search |
| Oğuz et al. (2010) | Order acceptance & order scheduling decisions | MTO | Yes | Maximize total revenue | MILP |
| Manavizadeh et al. (2013) | Mixed-model assembly line seq. problem | MTO | No | Minimize total utility work cost, total setup cost and total tardiness costs | Particle swarm optimization - simulated annealing |
| Sadjadi et al. (2016) | Mixed-model assembly line seq. problem | MTO | No | Minimize total utility, product rate variation, total earliness/tardiness cost | MINLP and genetic algorithm |
| Manavizadeh et al. (2017) | U-shaped mixed-model assembly line seq. problem | MTO | No | Minimize the total conveyor stoppage time and total tardiness | GA-beam search algorithm |
| Nazar and Pillai (2018) | Mixed-model assembly line seq. problem | MTO | No | Minimize part usage rate variation and production rate variation (PRV) | Bit-wise mutation algorithm |
| Tanhaie et al. (2020) | Mixed-model assembly line seq. problem | MTO | No | Minimize the number of utility workers, total setup cost, total tardiness and earliness costs | TOPSIS method and MOPSO algorithm |
| Tanhaie et al. (2020) | Mixed-model assembly line seq. problem | MTO | Yes | Maximize profit; minimize costs of utility workers, idle time, total tardiness and earliness | Lagrangian relaxation |

(continued)

Table 1. (continued)

| Author (date) | Problem definition | Production environment | Order acceptance/rejection | Objective | Solution method |
|---|---|---|---|---|---|
| Robinson and Carlson (2007) | Real-time resource allocation & scheduling | MTS-MTO | Yes | Minimize inventory holding & production costs | MILP |
| Kalantari et al. (2011) | Order acceptance/rejection problem | MTS-MTO | Yes | Minimize regular time, overtime and outsourcing costs and lateness/earliness penalty | MILP |
| This study (2021) | Multi-model assembly line planning problem | MTO | Yes | Minimize the total tardiness/earliness, raw material, inventory holding & production costs | MILP & heuristic |

3 Problem Formulation and Solution Methodology

In this section, we present the details of the two methods developed for the problem at hand. We consider a contract manufacturer in an MTO environment, regularly receiving RfQs for varying quantities of multiple models in a single order. Production takes place on an already balanced MuMAL. The models share several common critical raw materials, each with multiple possible lead times at varying costs (faster delivery, higher cost). The company can reject or accept orders as a whole, but not partially. Once an order is accepted, decisions are needed on the production plan for the MuMAL (including setups for models) and the procurement of common critical raw materials. There are earliness and tardiness costs associated with each order, and an order is not completed until all models in that order are ready. When regular capacity is insufficient to meet orders, capacity can be increased by overtime or subcontracting. Below, we first explain the proposed mathematical model for this problem and then the heuristic method.

A. Mathematical Model
A mixed-integer linear programming model is developed to solve the MuMAL production planning problem. The sets, indices, parameters, and decision variables of the proposed MILP model are defined in Table 2. In the rest of this section, the objective function and constraints are presented with detailed explanations.

Objective Function

$$\min \sum_{o=1}^{|O|} \left( C^{ear}_{o} E_{o} + C^{tar}_{o} T_{o} + C^{lost}_{o} x_{o} \right) + \sum_{t=1}^{|T|} \sum_{l=1}^{|L|} C^{inv}_{l} Inv_{lt} + \sum_{t=1}^{|T|} \sum_{m=1}^{|M|} C^{FGI}_{m} FGI_{tm} + \sum_{t=1}^{|T|} \sum_{m=1}^{|M|} \sum_{o=1}^{|O|} C^{sub}_{m} SUB_{tom} + \sum_{t=1}^{|T|} C^{over} O_{t} + \sum_{t=1}^{|T|} \sum_{k=1}^{|K|} \sum_{l=1}^{|L|} C^{rm}_{lk}\, rm_{ltk}$$


Table 2. Sets, indices, parameters, and decision variables

Constraints |T | 

(INPtom + SUBtom ) + xo dom = dom ∀o ∈ O, ∀m ∈ M

(1)

t=1 |O| 

INPtom ≤ Mjtm ∀t ∈ T , ∀m ∈ M

(2)

o=1

jtm ≤

|O| 

INPtom ∀t ∈ T , ∀m ∈ M

(3)

o=1 |M | 

jtm (sm + pm − γm ) +

m=1 |M |  m=1

|M | |O|  

(INPtom )γm ≤ RTCt + OTCt ∀t ∈ T

(4)

o=1 m=1

jtm (sm + pm − γm ) +

|M | |O|   o=1 m=1

(INPtom )γm − RTCt ≤ Ot ∀t ∈ T

(5)

646

M. Yüksel et al. |M |  m=1



dom −

|M |  t 

(INPtom + SUBtom ) ≤ M (1 − ut  o ) ∀t  ∈ T , ∀o ∈ O

(6)

m=1 t=1

uto ≤

|M | 

(INPtom + SUBtom ) ∀t ∈ T , ∀o ∈ O

(7)

m=1 |T | 

  uto = 1 − xo ∀o ∈ O

(8)

t=1 |O|  (SUBtom ) ≤ Mytm ∀t ∈ T , ∀m ∈ M

(9)

o=1 SUB ytm LBm ≤

|O|  (SUBtom ) ∀t ∈ T , ∀m ∈ M

(10)

o=1 |T | 

uto t − To − Mxo ≤ ddomax ∀o ∈ O

(11)

uto t + Eo + Mxo ≥ ddomin ∀o ∈ O

(12)

t=1 |T |  t=1

Invl1 = Initl −

|M | |O|  

(INP1om qrmlm ) +

o=1 m=1 |M | |O|  

|K| 

rml1k ∀l ∈ L

(13)

k=1 |K| 

rmltk ∀t = 2, . . . , |T |, ∀l ∈ L

(14)

⎛ ⎞ |O|  |O| t t    ⎝dom uto ⎠ ≤ FGIt  m ∀t  ∈ T , ∀m ∈ M (INPtom + SUBtom ) −

(15)

Invlt = Invlt−1 −

(INPtom qrmlm ) +

o=1 m=1

o=1 t=1

k=1

o=1 |O| 

t=1

SUB (SUBtom ) ≤ UBtm ∀t ∈ T , ∀m ∈ M

(16)

o=1 cap

rmltk ≤ RMltk

∀t ∈ T , ∀k ∈ K, ∀l ∈ L

(17)

INPtom , SUBtom , rmltk , Invlt , Ot , Eo , To ≥ 0 ∀t ∈ T , ∀o ∈ O, ∀m ∈ M , ∀k ∈ K (18) xo , uto , ytm ∈ {0 or 1} ∀t ∈ T , ∀o ∈ O, ∀m ∈ M

(19)
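The role of the big-M pair in constraints (2) and (3) is to force the setup indicator j_tm to one exactly when model m is produced in-house in period t. A toy feasibility check in Python (the production quantities are hypothetical, and M is any bound larger than the total possible production):

```python
M = 10_000  # big-M: any value exceeding the largest possible production amount

def setup_link_ok(inp_per_order, j_tm):
    """Check constraints (2) and (3) for one (t, m) pair:
    sum_o INP_tom <= M * j_tm   and   j_tm <= sum_o INP_tom."""
    total = sum(inp_per_order)
    return total <= M * j_tm and j_tm <= total

# Production in period t for model m across three orders, setup performed:
assert setup_link_ok([12, 0, 5], j_tm=1)
# No production and no setup is also feasible:
assert setup_link_ok([0, 0, 0], j_tm=0)
# Inconsistent: production without a setup violates (2).
assert not setup_link_ok([12, 0, 5], j_tm=0)
```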


The objective function minimizes earliness, tardiness, and lost sales costs, as well as inventory holding, FGI, subcontracting, overtime, and raw material costs. Constraint (1) states that the demand of each model of order o is either produced in-house, subcontracted, or rejected; if an order is rejected, all demands of that order must be rejected. Constraints (2-5) define the total work time (regular time + overtime) and the amount of overtime used in each period. Constraints (6-8) determine the period in which each order is completed; if an order is rejected or unmet, the order cannot be completed in any period (u_to = 0). Constraints (9) and (10) define a lower limit for subcontracted models. Constraints (11) and (12) calculate the tardiness and earliness of orders that are not rejected; driven by the objective function, both are zero when an order is rejected. Constraints (13) and (14) calculate the raw material inventory level in each period, considering the availability (and capacity) of the different shipping types in each period. Constraint (15) states the finished goods inventory of each period. Constraint (16) imposes the maximum subcontracting capacity for each model in each period. Constraint (17) limits the availability of raw material suppliers based on their lead times. Constraints (18) and (19) are the nonnegativity and binary constraints.

B. Experimental Design
In this section, we first aim to measure the effect of parameter changes on the objective function and run time. The changeable parameters are the demand quantity of each model/order, the number of orders, the time horizon, the number of models, and the number of critical common raw materials. The possible values of the changeable parameters and the values of the constant parameters are presented in Tables 3 and 4, respectively.

Table 3. Changeable parameters

| Parameter | Low | Medium | High |
|---|---|---|---|
| Demand for each model/order | Uniform(10, 30) | Uniform(30, 60) | Uniform(60, 100) |
| Number of orders | - | 10 | 20 |
| Time horizon | - | 15 | 30 |
| Number of models | 3 | 5 | - |
| Number of critical common raw materials | - | 2 | 3 |

Total number of tests to be held: 48
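The 48 test cases follow from the full factorial combination of the parameter levels in Table 3 (3 demand distributions x 2 order counts x 2 horizons x 2 model counts x 2 raw material counts); a one-liner check:

```python
from itertools import product

# Parameter levels from Table 3 (demand levels named for readability).
levels = {
    "demand": ["U(10,30)", "U(30,60)", "U(60,100)"],
    "orders": [10, 20],
    "horizon": [15, 30],
    "models": [3, 5],
    "raw_materials": [2, 3],
}

cases = list(product(*levels.values()))
assert len(cases) == 48  # 3 * 2 * 2 * 2 * 2
```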

In order to evaluate the performance of the proposed mathematical model, we perform a comprehensive numerical experiment over 48 cases formed by combinations of the parameters in Table 3. After examining the computational results in detail, we draw some implications. If the time required to produce the total demand of an order does not exceed the regular time capacity, the problems are solved in an acceptable time. However, when the regular time capacity started to be

Table 4. Constant parameters

Constant values for all cases: regular time capacity = 480 min; overtime capacity = 240 min; overtime cost per minute (in $) = (production cost × 1.5) / cycle time.

| Parameter | M1 | M2 | M3 | M4 | M5 |
|---|---|---|---|---|---|
| Minimum subcontract limit | 5 | 5 | 5 | 5 | 5 |
| Maximum subcontract limit | 9999 | 9999 | 9999 | 9999 | 9999 |
| Quantity of critical common raw materials per model (RM1-RM2-RM3) | 3-1-2 | 2-3-1 | 2-2-1 | 1-2-3 | 1-1-2 |
| Setup time (min) | 19 | 18 | 10 | 8 | 6 |
| Production time (min) | 52 | 52 | 52 | 52 | 52 |
| Cycle time (min) | 13 | 13 | 13 | 13 | 13 |
| Holding cost of finished goods per model ($) | 0.6 | 0.96 | 0.72 | 0.88 | 0.94 |

| Parameter | RM1 | RM2 | RM3 |
|---|---|---|---|
| Raw material cost ($) | 0.8 | 1.2 | 1.2 |
| Inventory holding sum (%) | 15 | 20 | 15 |
| Raw material holding cost ($) = raw material cost × inventory holding sum | 0.12 | 0.24 | 0.18 |
| Initial inventory level of each critical common raw material | 0 | 0 | 0 |
| Airplane shipping cost per raw material ($) | 1.32 | 1.98 | 1.65 |
| Freighter shipping cost per raw material ($) | 0.8 | 1.2 | 1 |

exceeded, the problems could not be solved in an acceptable time. The reason is that the developed mathematical model evaluates the overtime and outsourced production options to meet the total demand at minimum cost, or rejects the order if rejection is the more profitable option. For this reason, a CPU time limit of 3600 s (1 h) was set for the CPLEX model to avoid excessively long run times. In some cases, the optimal solution was reached in less than 1 h, but in most cases the model ran for more than 1 h. The results show that the developed mathematical model quickly computes optimal results for small problems, while the run time for large problems is not within an acceptable range. As seen in Table 5, the optimality gaps obtained within the 1-h time limit are not satisfactory.

Applying ATP Concept in Multi-Model Assembly Line Planning Problems

649

For this reason, a heuristic solution method that offers near-optimal values faster has been developed for larger problems.

Table 5. Experimental case results with run times

Case number | Objective function | Run time (sec) | Optimality gap (%)
9  | 47114.58 | >3600 | 9.78
10 | 45549.40 | >3600 | 13.30
11 | 28033.95 | >3600 | 14.53
12 | 26957.10 | >3600 | 19.12
17 | 54243.97 | >3600 | 12.99
27 | 11768.42 | >3600 | 9.20
28 | 9788.69  | >3600 | 15.11
33 | 19992.88 | >3600 | 8.02
34 | 19553.60 | >3600 | 24.93
35 | 9933.88  | >3600 | 14.96
36 | 8247.17  | >3600 | 14.45
41 | 7962.46  | >3600 | 20.42
42 | 6128.92  | >3600 | 29.62
43 | 3679.02  | >3600 | 5.59
44 | 3034.97  | >3600 | 9.98

C. Heuristic Algorithm The heuristic algorithm has two parts. The constructive heuristic produces an initial solution, and the improvement heuristic aims to further improve the solution from the constructive phase. Details of the heuristic methods are given below. Constructive Heuristic: The constructive heuristic runs in three phases. Phase 1: Order Acceptance and Rejection: The aim of Phase 1 is to decide whether arrived orders will be accepted or rejected. If the order ratio for order o, i.e. C_o^lost / Σ_{m=1}^{|M|} d_om · C_m^sub, is less than one and there is no time to produce in-house, then the order is more profitable to reject. Therefore, after calculating the total time necessary to produce all demand, Phase 1 rejects the orders with the minimum ratios below one until either no orders with a ratio below one remain or the time necessary to produce all orders is less than the total capacity of the time horizon.
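As an illustrative sketch (not the authors' implementation), Phase 1 can be expressed in Python as below; the field names lost_cost, sub_costs, demand, and time are hypothetical stand-ins for C_o^lost, C_m^sub, d_om, and the order's required production time.

```python
def phase1_accept_reject(orders, total_capacity):
    """Reject orders whose lost-sale cost is below their subcontract cost
    (order ratio < 1) while in-house capacity is insufficient.

    orders: dict order_id -> {"lost_cost": float,       # C_o^lost
                              "sub_costs": list[float], # C_m^sub per model
                              "demand": list[float],    # d_om per model
                              "time": float}            # production time needed
    total_capacity: total in-house production time over the horizon.
    Returns the set of accepted order ids.
    """
    def ratio(o):
        sub_total = sum(d * c for d, c in zip(o["demand"], o["sub_costs"]))
        return o["lost_cost"] / sub_total

    accepted = dict(orders)
    needed = sum(o["time"] for o in accepted.values())
    # Reject lowest-ratio orders (ratio < 1) until capacity suffices.
    while needed > total_capacity:
        candidates = [oid for oid, o in accepted.items() if ratio(o) < 1]
        if not candidates:
            break  # no remaining order is profitable to reject
        worst = min(candidates, key=lambda oid: ratio(accepted[oid]))
        needed -= accepted[worst]["time"]
        del accepted[worst]
    return set(accepted)
```

Orders with a ratio of at least one are never rejected, matching the rule that rejection is only chosen when it is the more profitable option.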

650

M. Yüksel et al.

Phase 2: Planning the Production Order of the Orders After Phase 1 is completed, in the second stage, the production sequence is decided among the accepted orders. Production is planned backwards, starting from the last period of the order with the latest due date. Among the accepted orders, the one with the largest Maximum Due Date (MDD) is selected first. If there are multiple orders with equal MDDs, the one with the largest Earliest Due Date (EDD) is selected among them. If there are multiple orders with equal EDDs, the one with the highest earliness cost is selected and then scheduled. If there are multiple orders with equal earliness costs, a random selection is made between them. This cycle continues until no unscheduled orders remain. After the orders are sequenced, Phase 3 starts. Phase 3: Planning the Production and Raw Material Purchase In the last phase of the constructive algorithm, the goal is to plan the orders in descending order of latest due date. For each order listed in Phase 2, Phase 3 detects the maximum number of models that is ordered. From the maximum to the minimum number of models demanded by the given order, Phase 3 first determines the nearest period available to produce under the regular time, overtime, and subcontract options. For each option, the algorithm calculates the number of models that can be produced, and it chooses the option with the minimum cost. Phase 3 continues until all demand of the given order is satisfied; it then runs again for the next order in the list produced by Phase 2. Improvement Heuristic: The main goal of the improvement heuristic is to decrease the total subcontract cost by detecting and using unused regular capacity in the periods. For the problem at hand, the most expensive option to satisfy the demand is subcontracting.
Therefore, the improvement heuristic creates combinations of orders with as little lateness cost as possible to reduce the subcontract cost. First, the algorithm finds the maximum number of a model type that is subcontracted in a period. For that amount, if there is an available period later than the period in which the chosen subcontracted batch is produced, the algorithm calculates the minimum cost of producing the order tardily. If the overall cost of producing the order tardily is cheaper than the initial total cost calculated by the constructive heuristic, the algorithm saves the new total cost as the incumbent and updates the plan. The algorithm then continues to run until one of the following conditions is observed: (1) there is no available period to produce the subcontracted models; (2) there are no subcontracted models left; (3) subcontracting is a cheaper option than accepting a tardy order. To test the effectiveness of the constructive and improvement heuristics, five cases for which optimal or near-optimal solutions were found in the experimental design are considered in Table 6. For these cases, the optimality gaps are always below 8%.
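The Phase 2 priority rule (largest MDD first, ties broken by largest EDD, then by highest earliness cost, then randomly) can be sketched as a single sort; this is an illustrative sketch with hypothetical field names, not the authors' code.

```python
import random

def phase2_sequence(orders):
    """Sequence accepted orders for backward planning.

    orders: list of dicts with keys "mdd" (maximum due date),
    "edd" (earliest due date) and "earliness_cost"; a random jitter
    breaks any remaining ties.
    Returns the orders sorted so the first element is scheduled first
    (largest MDD, then largest EDD, then highest earliness cost).
    """
    return sorted(
        orders,
        key=lambda o: (o["mdd"], o["edd"], o["earliness_cost"], random.random()),
        reverse=True,
    )
```

Encoding the tie-breaking hierarchy as a lexicographic key keeps the rule declarative: each later key only matters when all earlier keys are equal.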


Table 6. Comparison of the heuristic method with the optimal solution and their run times

Case number | Objective function (Optimal) | Objective function (Heuristic) | Run time, sec (Optimal) | Run time, sec (Heuristic) | Optimality gap (%)
5  | 99740.58 | 103968.99 | 57.78 | 780 | 4.07
6  | 97295.62 | 105672.26 | 43.26 | 308 | 7.93
13 | 46729.07 | 48553.50  | >3600 | 138 | 3.76
14 | 44561.84 | 47665.91  | >3600 | 318 | 6.51
15 | 25702.23 | 26947.42  | >3600 | 145 | 4.62

4 Conclusions We study a MuMAL production planning problem by considering different models, orders, raw materials, and shipping types. Customer orders are satisfied based on the ATP system in an MTO environment. Due to MTO and contracts, there are significant costs associated with earliness, tardiness, inventory, and production, as well as capacity and other operational constraints. We first review the literature on the ATP concept for MuMAL problems in the MTO environment. To fill the gap in the related literature, we consider shipping decisions on the critical raw materials required for production in our production environment. Then, we develop a novel mathematical model to solve our problem. For the cases where the production time required to meet the total demand does not exceed the total in-house production time, the proposed mathematical model offers a satisfactory run time. However, for the cases where the total in-house production time is inadequate to meet the total demand, it does not. Therefore, we develop a heuristic algorithm that provides faster results with near-optimal solutions for these problems. For the cases with optimal or near-optimal values, the optimality gaps do not exceed 8% in our numerical study. As future research, the proposed heuristic algorithm will be tested on a larger experimental set and improved to decrease the optimality gaps. Furthermore, meta-heuristics can be developed and compared with the proposed heuristic to find the algorithm that best solves large-size problems with reasonable computation times and smaller optimality gaps.

References
1. Asl, A.J., Solimanpur, M., Shankar, R.: Multi-objective multi-model assembly line balancing problem: a quantitative study in engine manufacturing industry. Opsearch 56(3), 603–627 (2019)
2. Reginato, G., Anzanello, M.J., Kahmann, A., Schmidt, L.: Mixed assembly line balancing method in scenarios with different mix of products. Gestão & Produção 23(2), 294–307 (2016)
3. Bard, J.F., Shtub, A., Joshi, S.B.: Sequencing mixed-model assembly lines to level parts usage and minimize line length. Int. J. Prod. Res. 32(10), 2431–2454 (1994)
4. Xiaobo, Z., Ohno, K.: Algorithms for sequencing mixed models on an assembly line in a JIT production system. Comput. Ind. Eng. 32(1), 47–56 (1997)


5. Akkan, C.: Finite-capacity scheduling-based planning for revenue-based capacity management. Eur. J. Oper. Res. 100(1), 170–179 (1997)
6. McMullen, P.R.: JIT sequencing for mixed-model assembly lines with setups using tabu search. Prod. Plan. Control 9(5), 504–510 (1998)
7. Oğuz, C., Salman, F.S., Yalçın, Z.B.: Order acceptance and scheduling decisions in make-to-order systems. Int. J. Prod. Econ. 125(1), 200–211 (2010)
8. Manavizadeh, N., Tavakoli, L., Rabbani, M., Jolai, F.: A multi-objective mixed-model assembly line sequencing problem in order to minimize total costs in a Make-To-Order environment, considering order priority. J. Manuf. Syst. 32(1), 124–137 (2013)
9. Sadjadi, S.J., Makui, A., Dehghani, E., Pourmohammad, M.: Applying queuing approach for a stochastic location-inventory problem with two different mean inventory considerations. Appl. Math. Model. 40(1), 578–596 (2016)
10. Rabbani, M., Manavizadeh, N., Shabanpour, N.: Sequencing of mixed models on U-shaped assembly lines by considering effective help policies in make-to-order environment. Scientia Iranica Trans. E: Ind. Eng. 24(3), 1493–1504 (2017)
11. Nazar, K.A., Pillai, V.M.: Mixed-model sequencing problem under capacity and machine idle time constraints in JIT production systems. Comput. Ind. Eng. 118, 226–236 (2018)
12. Tanhaie, F., Rabbani, M., Manavizadeh, N.: Sequencing mixed-model assembly lines with demand management: problem development and efficient multi-objective algorithms. Eng. Optim. 1–18 (2020)
13. Tanhaie, F., Rabbani, M., Manavizadeh, N.: Applying available-to-promise (ATP) concept in mixed-model assembly line sequencing problems in a Make-To-Order (MTO) environment: problem extension, model formulation and Lagrangian relaxation algorithm. Opsearch 57(2), 320–346 (2020)
14. Robinson, A.G., Carlson, R.C.: Dynamic order promising: real-time ATP. Int. J. Integr. Supply Manage. 3(3), 283–301 (2007)
15. Kalantari, M., Rabbani, M., Ebadian, M.: A decision support system for order acceptance/rejection in hybrid MTS/MTO production systems. Appl. Math. Model. 35(3), 1363–1377 (2011)

Capacitated Vehicle Routing Problem with Time Windows Aleyna Tanel, Begüm Kınay, Deniz Karakul, Efecan Özyörük, Elif İskifoğlu, Ezgi Özoğul, Meryem Ustaoğlu, Damla Yüksel(B), and Mustafa Arslan Örnek Department of Industrial Engineering, Yasar University, İzmir, Turkey [email protected], [email protected]

Abstract. Since distribution activities have great importance for firms, supply management is a widely studied concept in many sectors. This study demonstrates an application of a Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). The problem takes the form of a fixed-destination, multi-depot, multi-travelling-salesman problem, and the distance travel time matrix is assumed to be asymmetric. The objective of the problem is to minimize the longest route time among the vehicles. This is achieved by developing a mixed-integer linear programming (MILP) model for the problem. Additionally, since the problem is NP-hard, a general heuristic method is developed to solve larger instances in negligible computational times. Results show that a balance between the individual route times of the vehicles is achieved and that the time window limit is satisfied. The paper also discusses the results and presents concluding remarks. Keywords: Multi depot · Multi-travelling salesman problem · MDMTSP · Min-Max mTSP · Mixed-integer linear programming · Capacitated vehicle routing problem · Time windows · Heuristics · 2-opt algorithm

1 Introduction In the transportation industry, since the transshipment of goods and products is one of the fundamental necessities of society, carrying and delivering them is a widely studied concept in the logistics and supply chain management sector. To develop more powerful approaches and procedures to distribute goods, computerized methods and optimization are used. Many real-world applications have widely shown that the use of mathematical models for distribution process planning produces significant savings in transportation costs [1]. The Vehicle Routing Problem is designed to obtain optimal routes for vehicles that distribute or serve goods to customers. In this research, the Capacitated Vehicle Routing Problem with Time Windows is considered. The objective of the problem is to assign vehicles to delivery nodes with regard to the vehicle capacities and time windows. With this motivation, the MILP model is formulated to minimize the maximum route time. Since the problem is NP-hard, as the number of instances increases, it becomes harder to obtain results via the MILP method. Therefore, a heuristic method has been developed to © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 653–664, 2022. https://doi.org/10.1007/978-3-030-90421-0_56


approach the problem empirically. This algorithm, which serves the same objective, constantly checks the capacity and time limitations and creates feasible routes for the vehicles in a short time. The paper is organized as follows: In Sect. 2 the problem definition is provided. Then, in Sect. 3, the literature review is given. The developed CVRPTW model formulation and the heuristic approach are presented in Sect. 4. In Sect. 5, the verification and validation of the solution methodology are given, and the sensitivity analysis is demonstrated. Then, the computational results and the developed decision support system are explained. Lastly, concluding remarks are provided in Sect. 6.

2 Problem Definition This paper studies a version of the Capacitated Vehicle Routing Problem with Time Windows. The objective of the problem is to determine the routes for the vehicles while minimizing the total travelling time spent, considering the capacity limitations and assuring balanced route times. Another consideration is to use a minimum number of vehicles. As the demand increases, the time spent by each vehicle rises, so either shift hours must be increased or extra vehicles must be used. Thus, the decision on the minimum number of vehicles plays an important role when optimizing the routes in a limited time window. Based on the literature review, the problem is specified as a Capacitated Vehicle Routing Problem with Time Windows. It differs from other classical Vehicle Routing Problems with its multi-restrictive structure. The general assumptions are as follows.
• A single type of product is delivered to the delivery nodes.
• Service times are deterministic.
• Parallel delivery operations are conducted with a specific number of vehicles at the same time.
• Depots are assigned specifically to each vehicle (as starting and ending nodes).
• All demands must be satisfied.
• The capacity of the vehicles is constant: 420 packages of beverages for each vehicle.
• The working time is 8 h.
Problem specifications are as follows.
• All deliveries must be operated within the time limit of 8 h (480 min).
• The route times should be balanced.
• All delivery nodes must be visited exactly once and only by one vehicle.
• Sub-tours are not allowed.

To minimize the travel times of the vehicles and to have a balance between them, the formulation of the problem is structured to minimize the maximum cluster time. Also, the delivery process at each location is limited to a single visit. Each delivery node has a specific service time, and the corresponding service operations must be completed under these time


limitations. Individual starting and ending nodes are defined for each vehicle to serve as its depots. The vehicles must start from these specific depots and finish the tour by returning to the same depots at the end of the working day. The sequence of visits to the delivery nodes will be determined by constructing routes that satisfy all the demands with no sub-tours. Hence, the motivation of the study is to reduce the loss of time and maintain balanced route times for the vehicles.
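As an illustration only (not the authors' code), the feasibility conditions above, namely the 480-min time window, the 420-package vehicle capacity, and the single-visit requirement, can be verified for a set of candidate routes with a short Python sketch:

```python
def routes_feasible(routes, demand, travel, service,
                    capacity=420, time_limit=480):
    """Check candidate routes against the stated problem rules.

    routes: list of routes, each a list of node ids starting and ending
            at that vehicle's depot, e.g. [depot, n1, n2, depot].
    demand: dict node -> packages demanded (depots carry no demand).
    travel: dict (i, j) -> travel time in minutes.
    service: dict node -> service time in minutes.
    """
    visited = [n for r in routes for n in r[1:-1]]
    if len(visited) != len(set(visited)):          # each node visited once
        return False
    if set(visited) != set(demand):                # all demands served
        return False
    for r in routes:
        if r[0] != r[-1]:                          # tour returns to its depot
            return False
        load = sum(demand[n] for n in r[1:-1])
        time = sum(travel[i, j] for i, j in zip(r, r[1:]))
        time += sum(service[n] for n in r[1:-1])
        if load > capacity or time > time_limit:   # capacity / time window
            return False
    return True
```

A checker like this is useful for validating the outputs of both the exact and the heuristic solvers against the same rule set.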

3 Literature Review A travelling salesman must visit each city only once and return to the home city after visiting the last city [2]. The TSP aims to find the shortest path while travelling between the cities. The goal of the salesman is to find a Hamiltonian cycle that minimizes the distance (cost) and returns to where he/she started. Some of the variants of the TSP are the Symmetric/Asymmetric Travelling Salesman Problem and the Multiple Travelling Salesman Problem (mTSP) [3]. Kara and Bektas [4] state that the mTSP is a base model for vehicle routing problems (VRPs); the VRP can be considered a variation of the mTSP. The problem is handled by dividing it into two classes: single-depot mTSP and multi-depot mTSP. In the single-depot mTSP, the salesmen start from a single node (depot) and end their tours at the same spot. In the multi-depot approach, several depots are specified: if the starting node is the same as the ending node, the problem is denoted as the fixed-destination multi-depot mTSP (MmTSP); otherwise, with different starting and ending points, it is stated as the non-fixed-destination multi-depot mTSP (MmTSP). Also, GuoXing [5] transforms the multi-depot multi-salesmen problem into the standard travelling salesman problem (MmTSP into an asymmetric TSP). Yuan et al. [11] demonstrate that the travelling salesman problem with time windows (TSPTW) is specialized in that each node in the set must be visited in a specified time window [a_i, b_i], where waiting times and service times are allowed. The TSPTW seeks to minimize the travel costs by visiting each node in the given time window. In that study, the time windows are formulated with two types of variables: a binary decision variable and a variable for the departure times. Sawik [12] points out different sub-tour elimination constraints in his article. One of the approaches is taken from the paper by Dantzig et al. [13] and states that the number of arcs cannot exceed the number of nodes that are to be visited.
Also, a two-node version is reported in which the tour includes only one of the two arcs between a pair of nodes to eliminate recursions. Reference [14] is also suggested. The literature review shows that the problem is a fixed-destination mTSP with predetermined depots, assuming that travel times (costs) are asymmetric. Each vehicle starts its tour from its starting node (depot) and finishes at the same spot. Hence, the aim is to create optimal routes for each salesman. Both Min-Sum and Min-Max objectives are considered for the M-mTSP. In the Min-Sum approach, the objective is to determine routes that minimize the total distance travelled by the salesmen. In the Min-Max approach, the aim is to determine the tours that minimize the longest route while assuring a balance between the tours. Bertazzi et al. [15] apply a worst-case analysis of Min-Max vs. Min-Sum objectives to several variants of vehicle routing problems. Since Min-Max is found to be a better objective for the problem, the paper mainly focuses on the Min-Max


approach. Accordingly, a MILP model and a heuristic algorithm are created. The paper contributes to the literature through its multiple restrictions. As a summary, the articles related to this study are reported in Table 1 with respect to their problem definition, objective function, and solution methodology. Furthermore, the contribution of this study is depicted in the last row of Table 1.

Table 1. Literature review

Author-date | Problem definition | Objective function | Solution method
Benavent, E., & Martínez, A. (2013) | Multi-Depot Multiple Traveling Salesman Problem (MDMTSP) | Minimization of the total cost | Linear programming formulations (ILPF)
Kara, I., & Bektas, T. (2006) | Extended versions of multiple travelling salesman problems (mTSP) | Minimization of the total cost | Linear programming formulations (ILPFs)
Laporte, G., Nobert, Y., & Desrochers, M. (1985) | Vehicle Routing Problem | Minimization of the total cost | Integer linear programming (ILP) in conjunction with constraint relaxation
Laporte, G., Nobert, Y., & Taillefer, S. (1988) | Asymmetrical multi-depot vehicle routing problems and location-routing problems (LRPs) | Minimization of the total cost | Branch-and-bound algorithm
Malik, W., Rathinam, S., & Darbha, S. (2007) | Generalized Multiple Depot, Multiple Travelling Salesman Problem (GMTSP) | Minimization of the total cost | Degree-constrained minimum spanning tree via a Lagrangian relaxation
Yadlapalli, S., Malik, W.A., Darbha, S., & Pachter, M. (2009) | Multiple Depot, Multiple Traveling Salesmen Problem (MDMTSP) | Minimization of the total cost | Held–Karp's method generalized and the Lagrangian heuristics modified
This study | Capacitated Vehicle Routing Problem with Time Windows (CVRPTW) | Minimization of the maximum route time | Mixed integer linear programming (MILP) and weighted shortest processing time (WSPT) heuristic ameliorated by a two-opt method and a unique balance algorithm


4 Modeling and Solution Methodology The proposed MILP model for the studied CVRP with Time Windows is presented in this section. The sets, parameters, and decision variables of the proposed model are listed below.

A. Mathematical Model

Sets and Indices
i, k : node indices (delivery nodes 1, …, n; depot nodes n+1, …, n+2m)
v : vehicle index, v = 1, …, m

Parameters
u_i : service time of delivery node i
D_{i,k} : distance travel time between delivery nodes i and k
t : working time of the vehicles
r_i : demand of delivery node i
Cap_v : capacity of vehicle v

Decision Variables
x_{i,v} : 1 if vehicle v visits delivery node i, 0 otherwise
δ_{i,k,v} : 1 if a vehicle travels from delivery node i directly to delivery node k without visiting others (so that x_{i,v} = 1 and x_{k,v} = 1), 0 otherwise
Z_v : total route time (in minutes) of vehicle v
Z_new : optimal route time of the problem
Y_{i,v} : sequence position of the visited location i on the route of vehicle v

min Z_new    (1)
Σ_v x_{i,v} = 1    ∀i    (2)
Σ_{k≠i} Σ_v δ_{i,k,v} = 1    ∀i | i < n+1 & i > n+m    (3)
Σ_{i≠k} Σ_v δ_{k,i,v} = 1    ∀k | k < n+m    (4)
2·δ_{i,k,v} ≤ x_{i,v} + x_{k,v}    ∀i, k, v | i ≠ k    (5)
Z_v = Σ_i x_{i,v}·u_i + Σ_i Σ_k δ_{i,k,v}·D_{k,i}    ∀v    (6)
Y_{i,v} − Y_{k,v} + n·δ_{i,k,v} ≤ n − 1    ∀i, k, v    (7)
Σ_k Σ_v δ_{i,k,v} = 0    ∀i | 2m+n+1 > i > n, i ≠ k    (8)
Z_v ≤ t    ∀v    (9)
Z_new − Z_v ≥ 0    ∀v    (10)
Σ_i Σ_k δ_{i,k,v}·r_i ≤ Cap_v    ∀v    (11)
Y_{i,v} = 1    ∀i | n+m ≥ i ≥ n+1    (12)
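The sub-tour elimination logic of the Miller–Tucker–Zemlin constraint (7) can be illustrated with a small standalone Python sketch; this brute-force checker is for exposition only and is not part of the authors' model:

```python
from itertools import permutations

def mtz_satisfiable(arcs, n):
    """Return True if some assignment of sequence positions Y (drawn
    from 1..n) satisfies the MTZ inequality Y_i - Y_k + n*1 <= n - 1
    for every used arc (i, k), i.e. Y_i < Y_k along every arc.
    Brute force over position assignments; for tiny instances only."""
    nodes = sorted({i for arc in arcs for i in arc})
    for perm in permutations(range(1, n + 1), len(nodes)):
        y = dict(zip(nodes, perm))
        if all(y[i] - y[k] + n <= n - 1 for i, k in arcs):
            return True
    return False

# A path 1 -> 2 -> 3 admits valid positions, while the closed
# sub-tour 1 -> 2 -> 1 requires Y_1 < Y_2 and Y_2 < Y_1 at once,
# so no assignment exists and the sub-tour is excluded.
```

Because each used arc forces a strictly increasing sequence position, any cycle among the delivery nodes becomes unsatisfiable, which is exactly how (7) rules out sub-tours.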

In this model, the objective function (1) is the minimization of the maximum time spent completing a cluster, min(max(Z_v)), that is, the minimization of the longest route time. Constraint (2) states that each location must be visited by exactly one vehicle. Constraints (3) and (4) act as a pair: exactly one arc leaves each location and exactly one arc enters it. Constraint (5) is the linearization of the decision variables: an arc is on the vehicle's route only if the vehicle visits both delivery nodes i and k. Constraint (6) calculates the route time for completing a cluster with vehicle v. Constraint (7) eliminates any sub-tours in the cluster, based on the Miller–Tucker–Zemlin formulation. Constraint (8) ensures that after visiting the last node (depot), the vehicle cannot visit any delivery node or depot. Constraint (9) guarantees that the cluster time cannot exceed the total working time. Constraint (10) sets the maximum route time value to the highest cluster time. Constraint (11) makes sure that the vehicle capacity is sufficient to satisfy the customer demands; here r_i represents the demand of delivery node i and Cap_v is the capacity of vehicle v. Recall that x_{i,v} = 1 if node i is visited by vehicle v, and 0 otherwise; so each cluster's total demanded amount must be lower than or equal to the capacity of the delivery vehicle. Constraint (12) states that a vehicle starts its tour from its assigned depot. B. Heuristic Approach A unique heuristic approach is initiated for the CVRP by considering the special time limits. In parallel with the mathematical model's objective, the main purpose of the heuristic algorithm is to obtain feasible routes for the vehicles. In this manner, route times are minimized and a balance between the individual routes is achieved. An object-oriented Python algorithm is developed, where each salesman is stated as an object defined under a specific class.
Therefore, the algorithm tends to find an upper bound for the number of vehicles. The attributes are unique for each vehicle: the name, depot, route time, and route. The weighted shortest processing time (WSPT) rule plays a core role in the heuristic solution approach, serving the performance measures. Briefly, each vehicle departs from its warehouse and finds the location with the maximum Demand/Distance Time ratio. For each cluster, the time window is 480 min. When adding a new location, the algorithm checks for a time violation, including the return time to the depot. If there is still time to visit an unvisited location, the vehicle adds it to its route. This process continues for a


time until either the capacity or the time limit is exceeded by a vehicle. When a vehicle fills its working time and there are still unvisited nodes in the set, a new cluster, i.e., a new vehicle with an empty route, is created. The algorithm creates vehicle objects until no unvisited delivery node remains. Subsequently, when the necessary number of vehicles is determined, the improvement functions begin to operate. Certain binary exchange operations are conducted on the individual routes to improve the route times, where the 2-opt algorithm is used. Next, the balance function operates: the algorithm focuses on the clusters with the longest and shortest route times. The delivery nodes of the longest route are sorted in decreasing order of their distance travel times. Starting from the greatest distance travel time, the algorithm considers the maximum value and seeks to transfer that node to the cluster with the shortest route time, subject to the availability of vehicle capacity. The function checks whether the node can be transferred to the minimum-route-time cluster: when the node with the maximum distance travel time moves to the shortest route, the time limit must not be exceeded. If it is, the node is not transferred, and the balance algorithm proceeds to the second maximum value and applies the same method sequentially. This balance function operates until the desired maximum differential of 10% between the routes is acquired.
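The 2-opt exchange used in the improvement step can be sketched as follows; this is a generic single-route 2-opt for a travel-time matrix, not the authors' implementation:

```python
def route_time(route, travel):
    """Total travel time of a closed route given a (possibly
    asymmetric) travel-time matrix travel[i][j]."""
    return sum(travel[i][j] for i, j in zip(route, route[1:]))

def two_opt(route, travel):
    """Repeatedly reverse route segments while doing so shortens the
    route. The route starts and ends at the depot, and both endpoints
    are left unchanged."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for a in range(1, len(best) - 2):
            for b in range(a + 1, len(best) - 1):
                cand = best[:a] + best[a:b + 1][::-1] + best[b + 1:]
                if route_time(cand, travel) < route_time(best, travel):
                    best, improved = cand, True
    return best
```

Each accepted move replaces two arcs of the tour with two shorter ones, so the loop terminates at a local optimum with respect to segment reversals.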

5 Computational Results and Decision Support System A. Verification First, a narrow version of the problem is constructed with a small number of delivery nodes and vehicles to show how the proposed solution procedure works. Service times for each delivery node are deterministic, based on the quantity demanded. 15 delivery nodes are identified to be visited, and a depot is defined for each vehicle. The optimal output is obtained with 2 vehicles, and 2 depots are assigned to each of them. As a solution approach, the depots are assigned manually with 4 dummy nodes, and the data is manipulated to assure that each salesman starts and ends its tour at the designated nodes. Hence, the problem is structured as a fixed-destination problem with identical starting and ending nodes. To verify the developed model, IBM ILOG CPLEX Optimization Studio Version 12.10 is used to solve the problem. First, the single-depot mTSP version of the model is solved. Then, the model is adapted to the multi-depot mTSP methodology. Consequently, the model runs for 15 s on the data with 15 instances when the time limit is 480 min and the vehicle capacities are identical; the optimal route assignments are provided in Figs. 1 and 2. Using the same data in the heuristic model, the delivery nodes are added to the routes of the vehicles in decreasing order of their Demand/Distance Time ratios, if and only if all considerations are satisfied. Also, since the problem has multiple considerations, the algorithm checks the capacity and time limitations continuously. At that point, the initial routes are introduced for the individual vehicles and are ready for the next operation. The 2-opt algorithm is used to improve the solution by considering the time window, and a unique function operates to assure a balance between the individual route times. The algorithm runs immediately and provides a solution for 15 instances, as represented in Figs. 3 and 4 below.


Fig. 1. Route representation of the toy problem solution

Fig. 3. Route representations by the heuristic approach

Fig. 2. Toy problem solution

Fig. 4. Toy problem solutions by the heuristic approach

To sum up, the solutions provided satisfy all the constraints of the model and are feasible. All operations are conducted within the time window of 480 min and without exceeding the vehicle capacity of 420 packages. All delivery nodes are visited, no sub-tours are observed among the delivery nodes, and there are no trips between the depot nodes. Each delivery node's demand is satisfied by a single visit of a vehicle. B. Validation & Improvements To obtain the optimal number of vehicles, the output of the heuristic algorithm is used, and a route and an objective value are obtained. Then, the data is utilized for a better application in real life. In doing so, the insufficiency of the vehicles to visit all sets of nodes within a specified time limit is observed. Hence, by changing the vehicle number, a feasible solution is obtained after running the model several times. In the case of the MILP approach, solutions cannot be obtained beyond 40 delivery nodes. At most, a solution with 30 delivery nodes is obtained by limiting the run time to 1 h. The data is expanded in a controlled manner to obtain solutions for larger instances of 15 to 100 delivery nodes with the heuristic algorithm. The Python code runs with data of


100 instances, and the algorithm provides feasible solutions in 20 s. The resulting solutions are given in Fig. 5.

Fig. 5. Heuristic approach solutions for a set of 100 delivery nodes

Hence, even for a solution with 100 delivery nodes, only 16 vehicles are required, which is a reasonable output for a real-life application. C. Sensitivity Analysis The objective of the problem is changed to the minimization of the total route time for the case of 15 delivery nodes. The objective values are Z1 = 354 and Z2 = 366; compared to the solution obtained with the objective of minimizing the maximum route time, and as expected, the total route time decreases. Moreover, the model is run for different scenarios of the mentioned objective functions, and also with different parameters, to observe the effect of the changes; the results are shown in Table 2. Additionally, in the heuristic model, the effect of the demand on the number of vehicles is observed. For this purpose, normal distribution values are generated with a constant standard deviation and increasing mean values. As can be seen in Fig. 6, when the demand increases, the algorithm tends to assign more vehicles. Besides, another consideration is to emphasize the impact of the vehicle capacity on the number of vehicles required. With this objective, the capacity is increased incrementally from 100 to 720 boxes. As shown in Fig. 7, the need for additional vehicles declines as the capacity of the vehicles increases. D. Decision Support System A dynamic decision support system is designed for the problem. An Excel VBA user interface is used to collect parameters and specific limitations from the user, and both the CPLEX and Python solution algorithms are offered to the user via the designed DSS. The routes are reported to the user via the Microsoft Power BI program. With the help of the buttons

662

A. Tanel et al. Table 2. Computational results with test data # of Delivery Nodes

# of Vehicles

Time Limit (minutes)

Capacity of Vehicles (#of packages)

Objective Values

minsum 10

2

480

420

10

3

240

420

10

3

240

185

10

3

480

185

2

480

420

10

2

480

420

10

3

240

420

10

3

240

185

10

3

480

185

15

2

480

420

15

=182 =300 =223 =190 =69 =159 =164 =180 =159 =164 =180 =354 =366

minmax =273 =269 =172 =173 =176 =159 =179 =173 =168 =179 =170 =363 =363

Fig. 6. Number of vehicles by average demand

With the help of the buttons created on the VBA user interface, the DSS redirects the user to the desired representation of either solution method. Additional information, such as the objective values and KPIs of the provided solution, is also shared with the user on the Power BI report page (Fig. 8).
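The demand and capacity experiments summarized above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the node count, standard deviation, seed, and the simple fleet-size bound are assumptions.

```python
import math
import random

def vehicles_needed(mean_demand, n_nodes=15, sigma=10.0, capacity=420, seed=42):
    """Generate normally distributed demands (constant sigma, varying mean)
    and return a simple lower bound on the fleet size:
    ceil(total demand / vehicle capacity)."""
    rng = random.Random(seed)
    demands = [max(0.0, rng.gauss(mean_demand, sigma)) for _ in range(n_nodes)]
    return math.ceil(sum(demands) / capacity)

# Fleet size grows with the mean demand (sigma fixed) and
# shrinks as the vehicle capacity is raised from 420 to 720.
for mu in (50, 100, 200):
    print(mu, vehicles_needed(mu))
print(vehicles_needed(100, capacity=720))
```

With a fixed seed the noise terms are identical across runs, so the bound is monotone in the mean demand, mirroring the trends in Figs. 6 and 7.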

Capacitated Vehicle Routing Problem with Time Windows


Fig. 7. Number of vehicles by vehicle capacity

Fig. 8. Microsoft Power BI report page with node representation and KPIs

6 Conclusion

In this paper, the Capacitated Vehicle Routing Problem with Time Windows is studied. The problem is a variant of the Vehicle Routing Problem and is formulated as a fixed-destination mTSP with predetermined depots, under the assumption that travel times (costs) are asymmetric. The objective is to minimize the maximum route time. A MILP model is formulated and solved for small instances using IBM ILOG CPLEX Optimization Studio Version 12.10. As a result, routes are created for the vehicles, and demands are met without exceeding the vehicle capacities within the specified period. Since the studied problem is NP-hard, a heuristic algorithm is also developed to obtain feasible solutions for larger instances. The heuristic algorithm is based on the weighted shortest processing time (WSPT) rule and the 2-opt algorithm. These methods are adapted to the requirements of the problem and used to find a feasible number of salesmen and cluster times for the vehicles; the algorithm also balances the route times. Finally, a dynamic decision support system is developed using Microsoft Excel VBA and Power BI, so the solutions and performance metrics are presented to the end users. The proposed solution methods are also applicable to other transportation environments with similar requirements. The paper contributes to the literature through its combination of multiple restrictions. In future studies, the complexity can be reduced by different approaches, and the heuristic method can be improved accordingly.
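The 2-opt step mentioned above can be sketched as a generic tour-improvement routine; this is a textbook 2-opt on a single closed route, not the authors' implementation, and the distance matrix below is invented for illustration.

```python
def route_length(route, dist):
    """Total length of a closed route; route[0] is the depot."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

def two_opt(route, dist):
    """Repeatedly reverse route segments while doing so shortens the tour."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist):
                    best, improved = cand, True
    return best

# Tiny symmetric instance whose optimal tour is the ring 0-1-2-3-0.
dist = [
    [0, 1, 4, 1],
    [1, 0, 1, 4],
    [4, 1, 0, 1],
    [1, 4, 1, 0],
]
print(two_opt([0, 2, 1, 3], dist))  # → [0, 1, 2, 3]
```

In the paper's heuristic, such an improvement step would be applied to each vehicle's route after the WSPT-based assignment.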

References

1. Kumar, S.N., Panneerselvam, R.: A survey on the vehicle routing problem and its variants. Intell. Inf. Manage. 4(3), 66–74 (2012). https://doi.org/10.4236/iim.2012.43010
2. Matai, R., Singh, S., Lal, M.: Traveling salesman problem: an overview of applications, formulations, and solution approaches. In: Traveling Salesman Problem, Theory and Applications (2010). https://doi.org/10.5772/12909
3. Rao, A.: Literature survey on travelling salesman problem using genetic algorithms. Int. J. Adv. Res. Educ. Technol. 2(1), 4 (2015)
4. Kara, I., Bektas, T.: Integer linear programming formulations of multiple salesman problems and its variations. Eur. J. Oper. Res. 174(3), 1449–1458 (2006). https://doi.org/10.1016/j.ejor.2005.03.008
5. Guoxing, Y.: Transformation of multidepot multisalesmen problem to the standard travelling salesman problem. Eur. J. Oper. Res. 81, 557–560 (1995)
6. Benavent, E., Martínez, A.: Multi-depot multiple TSP: a polyhedral study and computational results. Ann. Oper. Res. 207(1), 7–25 (2013). https://doi.org/10.1007/s10479-011-1024-y
7. Malik, W., Rathinam, S., Darbha, S.: An approximation algorithm for a symmetric generalized multiple depot, multiple travelling salesman problem. Oper. Res. Lett. 35(6), 747–753 (2007). https://doi.org/10.1016/j.orl.2007.02.001
8. Yadlapalli, S., Malik, W.A., Darbha, S., Pachter, M.: A Lagrangian-based algorithm for a multiple depot, multiple traveling salesmen problem. Nonlinear Anal. Real World Appl. 10(4), 1990–1999 (2009). https://doi.org/10.1016/j.nonrwa.2008.03.014
9. Laporte, G., Nobert, Y., Taillefer, S.: Solving a family of multi-depot vehicle routing and location-routing problems. Transp. Sci. 22(3), 161–172 (1988)
10. Laporte, G., Nobert, Y., Desrochers, M.: Optimal routing under capacity and distance restrictions. Oper. Res. 33(5), 1050–1073 (1985). https://doi.org/10.1287/opre.33.5.1050
11. Yuan, Y., Cattaruzza, D., Ogier, M., Semet, F.: A note on the lifted Miller–Tucker–Zemlin subtour elimination constraints for routing problems with time windows. Oper. Res. Lett. 48(2), 167–169 (2020). https://doi.org/10.1016/j.orl.2020.01.008
12. Sawik, T.: A note on the Miller–Tucker–Zemlin model for the asymmetric traveling salesman problem. Bull. Polish Acad. Sci. Tech. Sci. 64(3), 517–520 (2016). https://doi.org/10.1515/bpasts-2016-0057
13. Dantzig, G., Fulkerson, R., Johnson, S.: Solution of a large-scale traveling-salesman problem. J. Oper. Res. Soc. Am. 2(4), 393–410 (1954)
14. Vansteenwegen, P., Gunawan, A.: Orienteering Problems: Models and Algorithms for Vehicle Routing Problems with Profits. EURO Advanced Tutorials on Operational Research. Springer (2019)
15. Bertazzi, L., Golden, B., Wang, X.: Min-max vs. min-sum vehicle routing: a worst-case analysis. Eur. J. Oper. Res. 240(2), 372–381 (2015). https://doi.org/10.1016/j.ejor.2014.07.025

Designing a Railway Network in Cesme, Izmir with Bi-objective Ring Star Problem

Oya Merve Püskül, Dilara Aslan, Ceren Onay, Mehmet Serdar Erdogan(B), and Mehmet Fatih Taşgetiren

International Logistics Management, Yaşar University, İzmir, Turkey
{mehmet.erdogan,fatih.tasgetiren}@yasar.edu.tr

Abstract. Transportation is a significant subject in today's world, especially in terms of the environment and the needs of the community. High rates of urbanization and population growth result in correspondingly high demand for public transportation. This project aims to design an optimal railway network to meet the region's public transportation needs and to reduce its pollution, given the high seasonal population density of the Çeşme district. The objective functions of the project are the minimization of the assignment cost and the routing cost. The assignment cost denotes the total cost for people of reaching the tram, while the routing cost is defined as the total construction cost of the selected nodes of the tram line. The problem is solved by the epsilon-constraint method as a multi-objective optimization problem. Consequently, it has been determined that the two main costs do not decrease at the same time: they are inversely related, so that as one decreases, the other increases. This is the first study that applies a multi-objective ring star problem to a real-life case study.

Keywords: Railway · Optimization · Epsilon-constraint · Multi-objective · Ring star problem

1 Introduction

The bi-objective Ring Star Problem (RSP) is an extension of the location-allocation problem, which was introduced back in the 1960s. The ring star problem seeks a ring formed by a subset of the nodes of the network while minimizing two determining expenses: the cost of the ring itself, associated with its total length, and the cost of assigning the non-visited nodes to the closest visited ones. The problem has numerous applications in a variety of industries, including telecommunication network design, school bus routing, and medical service routing. This paper discusses an application of the ring star problem in the Çeşme district of Izmir.

Ring star problems are related to cycle problems, which form a cycle using a subset of a graph; they can also be seen as location routing problems. As expected, such problems carry constraints on cycle length, on distances between vertices on and off the cycle, on penalties for non-visited vertices, and on profits for visited vertices. The best-known related problems are variants of the Traveling Salesman Problem (TSP), including the TSP with non-visiting penalties, the selective TSP, the prize-collecting TSP, and the Covering Tour Problem.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 665–674, 2022. https://doi.org/10.1007/978-3-030-90421-0_57

In this study, a theoretical tram line in Çeşme, Izmir province, is modelled. The problem is built on a multi-objective foundation to obtain an efficient solution set. Google Maps is used to generate distances between neighbourhoods, and both the physical and demographic characteristics of the 25 neighbourhoods in Çeşme were analysed. We created a Mixed Integer Linear Programming (MILP) model, coded it in the IBM OPL CPLEX software, and solved it using the epsilon-constraint method.

The rest of the paper is organized as follows: Sect. 2 reviews the relevant literature. Section 3 defines the problem. Section 4 describes the methodology. Section 5 gives the results of the study. Finally, Sect. 6 gives the concluding remarks.

2 Literature Review

Serra and Marianov (1998) [1] addressed the p-median formulation in a changing network, where travel time, distance and demand are uncertain; several scenarios should be produced over certain points, and the best option for the future should be determined to avoid problems when locating new facilities. Agra et al. (2017) [2] developed a decomposition approach for the p-median problem on disconnected graphs; the p-median problem seeks to place facilities on the vertices (customers) of a graph in order to minimize transportation costs while meeting the customers' needs. Labbé et al. (2005) [3] studied the ring star problem as a discrete optimization problem; the aim of that article is to provide a model for two versions of the Median Cycle Problem and a branch-and-cut algorithm. Soltanpour et al. (2020) [4] developed an inverse 1-median location problem, which has many similarities with the ring star problem: the target is again to minimize total cost and distance, but the main visible difference is that in the inverse 1-median location problem the structure is a tree rather than a ring or cycle. Labbé et al. (2004) [5] proposed a branch-and-cut algorithm, an exact algorithm for the RSP, together with a mixed integer linear programming formulation that includes several classes of valid inequalities. Hoshino and de Souza (2009) [6] formulated a branch-and-cut-and-price algorithm for the Capacitated m-Ring-Star Problem (CmRSP). Baldacci et al. (2007) [7] investigated the CmRSP, an RSP variant in which the capacity of each ring is taken into account and multiple rings are created. Baldacci and Dell'Amico (2010) [8] showed that constructive heuristics can offer good solutions for the CmRSP. Mauttone et al. (2008) [9] presented a meta-heuristic approach to solve the CmRSP: a combined GRASP and Tabu Search algorithm, which can find accurate results at medium execution times. Hoshino and de Souza (2008) [10] proposed an integer programming formulation for the CmRSP based on a set covering model and developed an exact branch-and-price algorithm for it; the CmRSP can be interpreted as a generalization of the classical Capacitated Vehicle Routing Problem (CVRP). Bayá et al. (2016) [11] addressed the capacitated m-ring-star problem under diameter-constrained reliability (CmRSP-DCR). The aim is to find a minimum-cost layout consisting of rings that meet both capacity constraints and reliability constraints, an important issue in the fiber-optic and telecommunications fields; the number of nodes in a ring should not exceed the depot capacity, and the cost of nodes outside the rings differs from the cost of links within the rings. Zhang et al. (2014) [12] proposed a memetic algorithm for the CmRSP that includes a crossover operation, a mutation operation, a local search involving three neighbourhood operators, and a population selection strategy that maintains population diversity. Calvete et al. (2013) [13] addressed the bi-objective capacitated m-ring-star problem, in which the ring cost and the allocation cost are considered individually instead of jointly. Hill and Voß (2016) [14] investigated an equi-model matheuristic for the multi-depot ring star problem (MDRSP), for which a branch-and-cut algorithm has been developed. Sundar and Rathinam (2016) [15] developed an exact algorithm for the MDRSP. Calvete et al. (2016) [16] created a multi-objective evolutionary algorithm with local search for the bi-objective ring star problem. Franco et al. (2016) [17] modelled a variable neighbourhood search approach for the CmRSP. Mukherjee et al. (2020) [18] formulated a new combinatorial optimization problem that can be called the ring star problem with secondary sub-depots (RSPSSD). The RSPSSD has a fixed master depot, and its network has three types of nodes, which makes it larger than the common RSP: primary sub-depots, secondary sub-depots, and excluded nodes. The goal is to select primary and secondary sub-depots so as to minimize the overall routing cost. Zang et al. (2020) [19] developed a hybrid ant colony system algorithm for solving the ring star problem. Kedad-Sidhoum and Nguyen (2010) [20] developed an exact algorithm for the ring star problem.

3 Problem Definition

The research has been carried out in the Çeşme district of Izmir. The population of Çeşme increases sharply, especially in the summer season, while public transportation services are very limited; this pushes local citizens and tourists to use cars for their own transportation. The high growth rate in the number of vehicles is well above the city's capacity, and this increased number of vehicles brings a greater danger: pollution. Pollution is the main motivation for this project; the poisoning of the city with exhaust gases must be stopped, and a tramway is the beginning of the solution. Our aim is to create an alternative for society while helping nature. This project focuses on building a railway network for public transportation in the Çeşme district, which has 25 neighbourhoods. The project's first goal is to shorten the railway, which reduces all installation and operating costs such as rail, concrete, labor, power, and fuel. The second goal concerns the cost of assigning non-visited nodes to visited nodes. Thus, the first objective of the problem is the minimization of the routing cost, while the second objective is the minimization of the assignment cost, i.e., the cost of assigning non-visited nodes to the closest visited nodes. Basically, the problem selects a subset of nodes, routes them, and assigns the non-visited nodes to the visited ones. This is a multi-objective problem since the routing and assignment objectives clearly conflict: when fewer nodes are visited, the routing cost decreases but the assignment cost increases, since more nodes must be assigned to visited nodes. A mathematical model is given below.

Sets:
N: customer nodes
D: depot
AN: all nodes, AN = D ∪ N

Parameters:
costR_ij: cost of routing from node i ∈ AN to node j ∈ AN
costA_ij: cost of assigning node i ∈ AN to node j ∈ AN
M: a sufficiently large number

Decision variables:
X_ij = 1 if edge (i, j) appears on the cycle, 0 otherwise
Y_ij = 1 if node i is assigned to node j, 0 otherwise
T_i ≥ 0: auxiliary variable for subtour elimination

Objective function 1 (routing cost):

min ∑_{i ∈ AN} ∑_{j ∈ AN} costR_ij · X_ij

Objective function 2 (assignment cost):

min ∑_{i ∈ AN} ∑_{j ∈ AN} costA_ij · Y_ij

Subject to:

∑_{j ∈ AN: j ≠ i} X_ij = Y_ii   ∀i ∈ AN   (1)

∑_{j ∈ AN} Y_ij = 1   ∀i ∈ N   (2)

Y_ii = 1   ∀i ∈ D   (3)

Y_ij = 0   ∀i ∈ D, j ∈ N   (4)

∑_{j ∈ AN: j ≠ i} X_ij - ∑_{j ∈ AN: j ≠ i} X_ji = 0   ∀i ∈ N   (5)

Y_ij ≤ ∑_{k ∈ AN: k ≠ j} X_kj   ∀i, j ∈ AN: i ≠ j   (6)

T_i - T_j + M · X_ij ≤ M - 1   ∀i, j ∈ N: i ≠ j   (7)

X_ij, Y_ij ∈ {0, 1},  T_i ≥ 0   (8)

Constraints (1) and (6) link the X and Y variables. Constraint (2) guarantees that each node is assigned to exactly one node; note that a node assigned to itself is in the ring. Constraint (3) ensures that the depot is in the ring. Constraint (4) precludes a customer from being assigned to the depot. Constraint (5) is the flow balance constraint. Constraint (7) is the subtour elimination constraint. Constraint (8) defines the variable domains.
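To make the two objectives concrete, the sketch below evaluates candidate rings on a toy 4-node instance (node 0 is the depot). The cost matrices are invented for illustration and are not data from the study.

```python
# Toy bi-objective ring star evaluation; node 0 is the depot and
# `ring` lists the in-ring nodes in visiting order, starting at the depot.
costR = [[0, 3, 5, 4], [3, 0, 2, 6], [5, 2, 0, 3], [4, 6, 3, 0]]  # routing costs
costA = [[0, 2, 4, 3], [2, 0, 1, 5], [4, 1, 0, 2], [3, 5, 2, 0]]  # assignment costs

def evaluate(ring, costR, costA):
    """Return (routing cost of the closed ring, assignment cost of the rest)."""
    routing = sum(costR[ring[i]][ring[(i + 1) % len(ring)]] for i in range(len(ring)))
    out = [v for v in range(len(costR)) if v not in ring]
    # Each non-visited node goes to its cheapest visited node, as in constraint (2).
    assignment = sum(min(costA[v][u] for u in ring) for v in out)
    return routing, assignment

print(evaluate([0, 1, 2], costR, costA))     # → (10, 2): node 3 assigned to node 2
print(evaluate([0, 1, 2, 3], costR, costA))  # → (12, 0): full ring, nothing assigned
```

The two returned values are exactly the conflicting objectives: enlarging the ring raises the routing cost and lowers the assignment cost.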

4 Methodology

Multi-objective optimization is a vital part of optimization practice, as almost all real-world optimization problems can be modelled with several conflicting objectives. The classical approach to such problems was to scalarize the many objectives into a single one, whereas the evolutionary approach tackles the multi-objective optimization problem as is. The primary goal is to maximize or minimize a set of functions subject to constraints. In multi-objective optimization, many methodologies are used to provide a set of efficient solutions from which the decision-maker can choose. Applications of multi-objective optimization are seen in many areas, such as supply chain management, inventory control, facility location, supplier selection, vehicle routing, and distribution network design.

Multi-objective optimization problems are cases where more than one objective is optimized simultaneously. The objective functions frequently capture different properties of the desired result and are in conflict, so no single solution optimizes all functions at the same time; instead, there is an optimal set of solutions. Under the notion of Pareto optimality, this set is referred to as the Pareto-optimal set. Because there is no single solution to multi-objective optimization problems, a collection of solutions based on the objective functions must be found.

The epsilon-constraint method is used to handle problems with non-convex Pareto fronts. It works by optimizing one objective function while adding the others as constraints. Consider the multi-objective mathematical programming (MOMP) problem below:

min (f1(x), f2(x), …, fp(x))
s.t. x ∈ S,   (9)

where x is the vector of decision variables, f1, …, fp are the objective functions, and S is the feasible region. In this method, one objective function is optimized while the remaining objectives are turned into constraints. At each iteration, the right-hand side of each constrained objective function is changed by a small epsilon value, and new solutions are thereby produced:

min f1(x)
s.t. f2(x) ≤ e2,
f3(x) ≤ e3,
…
fp(x) ≤ ep,
x ∈ S,   (10)

where e2, e3, …, ep are the right-hand sides of the constrained objective functions. The efficient solutions of the problem are obtained by parametric variation of the constrained objective functions.

Fig. 1. Pareto-optimal solutions
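For the bi-objective case (p = 2), the sweep over the right-hand side e2 can be sketched as follows; a brute-force minimizer over a small finite set stands in for the MILP solver, purely for illustration.

```python
def epsilon_constraint(solutions, f1, f2, step=1.0):
    """Minimize f1 subject to f2(x) <= e2, sweeping e2 downward from
    max f2 and collecting the non-dominated objective pairs."""
    e2 = max(f2(x) for x in solutions)
    lo = min(f2(x) for x in solutions)
    pareto = set()
    while e2 >= lo:
        feasible = [x for x in solutions if f2(x) <= e2]
        best = min(feasible, key=f1)
        pareto.add((f1(best), f2(best)))
        e2 = f2(best) - step  # tighten the constraint below the last optimum
    # Drop any dominated points that a coarse step size may let through.
    return sorted(p for p in pareto
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pareto))

# Toy finite design space with conflicting objectives f1 = x and f2 = 10 - x.
xs = [0, 2, 5, 8, 10]
print(epsilon_constraint(xs, lambda x: x, lambda x: 10 - x))
# → [(0, 10), (2, 8), (5, 5), (8, 2), (10, 0)]
```

In the study, each pass of such a loop would correspond to one CPLEX solve of the MILP with one objective moved into a constraint.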

5 Results

In this study, IBM OPL CPLEX was used to solve the bi-objective ring star problem, modeled as a Mixed Integer Linear Program. To obtain Pareto-optimal solutions of the multi-objective problem, the epsilon-constraint method is used. The model was run on a computer with an Intel Core i5-8265U 1.60 GHz processor and 16 GB RAM. Figure 1 shows the Pareto-optimal solutions. According to the figure, the assignment cost decreases when the routing cost increases; accordingly, assignment cost and routing cost are inversely proportional to each other. The following maps are designed by considering the optimization results. As indicated on the maps, thick lines mark nodes in the cycle and thin lines mark nodes out of the cycle.


Fig. 2. Map 1

Fig. 3. Map 2

Figure 2 indicates a tram line including 16 in-cycle nodes and 9 out-of-cycle nodes. The assignment cost is 53.5 and the routing cost is 65.2.


Figure 3 shows a tram line consisting of 6 in-cycle nodes and 19 out-of-cycle nodes. The assignment cost is 104.6 and the routing cost is 30.05.

Fig. 4. Map 3

Figure 4 represents the optimum route of a tram line which includes all neighbourhoods. Routing cost is 124 and assignment cost is 0.

Fig. 5. Map 4


In Fig. 5, 2 nodes are in the cycle and 23 nodes are out of the cycle. The assignment cost is 245.2 and the routing cost is 6.3. Note that the solutions of Figs. 4 and 5 are the two extreme solutions of the problem, where either all nodes are in the ring or the minimum number of nodes is in the ring.

6 Conclusion

In this research, the bi-objective RSP was implemented. Reductions in both the assignment cost and the routing cost were the two main targets. Based on the scatter plot, the correlation between assignment cost and routing cost was analysed, and it was observed that the two main costs are inversely linked. The aim of the study is to model an optimal railway infrastructure in the Çeşme district of Izmir. The assignment cost refers to the cost of people coming to the railway stops by any transportation vehicle (minibus, bus, etc.), and the routing cost is the total construction expense over all selected nodes; in other words, the routing cost is the construction cost of the railway line. This study is novel in that it is the first to apply the bi-objective RSP to a real-life case study. The problem was solved with the epsilon-constraint method as a multi-objective problem, and Pareto-optimal solutions were obtained for the 25 neighborhoods of Çeşme. Consequently, it has been determined that the two main costs do not decrease at the same time; as one decreases, the other increases. Therefore, the two main targets of our study are inversely related according to the real data input of the region. Future studies should consider developing a metaheuristic algorithm that can solve large instances within reasonable solution times.

References

1. Serra, D., Marianov, V.: The p-median problem in a changing network: the case of Barcelona. Location Sci. 6(1–4), 383–394 (1998). https://doi.org/10.1016/S0966-8349(98)00049-7
2. Agra, A., Cerdeira, J.O., Requejo, C.: A decomposition approach for the p-median problem on disconnected graphs. Comput. Oper. Res. 86, 79–85 (2017). https://doi.org/10.1016/j.cor.2017.05.006
3. Labbé, M., Laporte, G., Martín, I.R., González, J.J.S.: Locating median cycles in networks. Eur. J. Oper. Res. 160(2), 457–470 (2005)
4. Soltanpour, A., Baroughi, F., Alizadeh, B.: The inverse 1-median location problem on uncertain tree networks with tail value at risk criterion. Inf. Sci. 506, 383–394 (2020). https://doi.org/10.1016/j.ins.2019.08.018
5. Labbé, M., Laporte, G., Martín, I.R., González, J.J.S.: The ring star problem: polyhedral analysis and exact algorithm. Netw. Int. J. 43(3), 177–189 (2004)
6. Hoshino, E.A., de Souza, C.C.: A branch-and-cut-and-price approach for the capacitated m-ring-star problem. Electron. Notes Discrete Math. 35, 103–108 (2009)
7. Baldacci, R., Dell'Amico, M., Salazar González, J.J.: The capacitated m-ring-star problem. Oper. Res. 55(6), 1147–1162 (2007). https://doi.org/10.1287/opre.1070.0432
8. Baldacci, R., Dell'Amico, M.: Heuristic algorithms for the multi-depot ring-star problem. Eur. J. Oper. Res. 203(1), 270–281 (2010)
9. Mauttone, A., Nesmachnow, S., Olivera, A., Amoza, F.R.: Solving a ring star problem generalization. In: 2008 International Conference on Computational Intelligence for Modelling Control & Automation, pp. 981–986. IEEE (2008)
10. Hoshino, E.A., de Souza, C.C.: Column generation algorithms for the capacitated m-ring-star problem. In: International Computing and Combinatorics Conference, pp. 631–641. Springer, Berlin, Heidelberg (2008). https://doi.org/10.1007/978-3-540-69733-6_62
11. Bayá, G., Mauttone, A., Robledo, F., Romero, P., Rubino, G.: Capacitated m ring star problem under diameter constrained reliability. Electron. Notes Discrete Math. 51, 23–30 (2016)
12. Zhang, Z., Qin, H., Lim, A.: A memetic algorithm for the capacitated m-ring-star problem. Appl. Intell. 40(2), 305–321 (2014)
13. Calvete, H.I., Galé, C., Iranzo, J.A.: An efficient evolutionary algorithm for the ring star problem. Eur. J. Oper. Res. 231(1), 22–33 (2013). https://doi.org/10.1016/j.ejor.2013.05.013
14. Hill, A., Voß, S.: An equi-model matheuristic for the multi-depot ring star problem. Networks 67(3), 222–237 (2016)
15. Sundar, K., Rathinam, S.: Multiple depot ring star problem: a polyhedral study and an exact algorithm. J. Global Optim. 67(3), 527–551 (2016). https://doi.org/10.1007/s10898-016-0431-7
16. Calvete, H.I., Galé, C., Iranzo, J.A.: MEALS: a multiobjective evolutionary algorithm with local search for solving the bi-objective ring star problem. Eur. J. Oper. Res. 250(2), 377–388 (2016)
17. Franco, C., López-Santana, E., Mendez-Giraldo, G.: A variable neighborhood search approach for the capacitated m-ring-star problem. In: International Conference on Intelligent Computing, pp. 3–11. Springer, Cham (2016)
18. Mukherjee, A., Barma, P.S., Dutta, J., Panigrahi, G., Kar, S., Maiti, M.: A modified discrete antlion optimizer for the ring star problem with secondary sub-depots. Neural Comput. Appl. 32(12), 8143–8156 (2020). https://doi.org/10.1007/s00521-019-04292-9
19. Zang, X., Jiang, L., Ding, B., Fang, X.: A hybrid ant colony system algorithm for solving the ring star problem. Appl. Intell. 51(6), 3789–3800 (2020). https://doi.org/10.1007/s10489-020-02072-w
20. Kedad-Sidhoum, S., Nguyen, V.H.: An exact algorithm for solving the ring star problem. Optimization 59(1), 125–140 (2010)

Distribution Planning of LPG to Gas Stations in the Aegean Region

Berfin Alkan, Berfin Dilsan Kikizade, Buse Karadan, Çağatay Duysak, Elif Hande Küpeli, Emre Yağız Turan, Tuğçe Dilber, Erdinç Öner(B), and Nazlı Karataş Aygün

Department of Industrial Engineering, Yasar University, Izmir, Turkey
{erdinc.oner,nazli.aygun}@yasar.edu.tr

Abstract. This study considers the distribution of LPG to gas stations in the Aegean region of Turkey. A distribution center in Aliağa collects the orders and distributes LPG to the gas stations. The problem is to determine the shortest routes with the minimum number of trucks at minimum cost. The problem was modeled as a vehicle routing problem and then extended to a capacitated vehicle routing problem with time windows (CVRPTW) to increase the service quality and adapt the problem to real life. Small instances were solved with CPLEX Optimization Studio; it was not possible to obtain optimal solutions for instances with 41 or more gas stations, since the respective problems are NP-hard. Therefore, several heuristics are applied to solve the problem, namely the Clarke and Wright, Nearest Neighborhood, and The Best Decision (TBD) algorithms. The Clarke and Wright and Nearest Neighborhood heuristics are already defined in the literature, whereas the TBD heuristic is a novel approach proposed in this study. Computational results show that the Clarke and Wright algorithm gives the best results for the CVRPTW problem. A user-friendly decision support system is developed to implement the heuristics and solve the problem.

Keywords: LPG distribution · Capacitated vehicle routing problem with time window · Nearest neighborhood · Clarke and Wright algorithm · CPLEX solver · Decision support system

1 Introduction

In today's competitive business environment, firms need to optimize their logistics, defined as the process of strategically managing the acquisition, movement, and storage of materials, parts, and finished inventory (and the related information flows) through the organization and its marketing channel in such a way that current and future profitability is maximized through the cost-effective fulfilment of orders [1]. An essential constituent of any logistics system is the allocation and routing of vehicles to meet customer demands, and effective transportation management can save a company a considerable portion of its total distribution costs [2]. For this reason, companies are trying to

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 675–688, 2022. https://doi.org/10.1007/978-3-030-90421-0_58

achieve sustainability in the competitive environment by adapting innovations in logistics and optimizing their distribution networks. Problems relating to the distribution of goods between warehouses and customers are known as Vehicle Routing Problems (VRP). The VRP determines a distinct route for each vehicle, starting and ending at one or more warehouses, to deliver goods and services to various consumers; its objective is to serve customers with known demands at minimum cost or minimum distance traveled while satisfying the constraints [3]. In this paper, the distribution of Liquefied Petroleum Gas (LPG), a widely used hazardous material, from the warehouse to the gas stations is considered. The LPG distribution problem in this study can be considered a capacitated vehicle routing problem with time windows (CVRPTW), a generalization of the VRP in which each customer is served within a given time interval and the vehicles in the fleet have limited capacities. The VRP is well known to be NP-hard, and so are most of its variants [4]. Its solution methods include several heuristic and metaheuristic approaches, as well as some exact methods [4]. Since NP-hard problems are difficult to solve exactly, heuristic approaches are applied [5]. In this paper, we apply the Clarke and Wright heuristic, which is widely used for solving the CVRPTW due to its simplicity of implementation and efficient calculation speed [6], and the Nearest Neighbor heuristic. Moreover, The Best Decision (TBD) heuristic method was developed for this study by the research team; in this method, which groups stations by scores, the Nearest Neighbor logic of the King algorithm is used. The results obtained with the heuristic methods were compared to identify the best solution method.

The rest of the paper is organized as follows. In Sect. 2, the problem definition is provided. A concise literature review is provided in Sect. 3. The proposed CVRPTW model formulation is presented in Sect. 4. The verification of the developed solution methodology is explained in Sect. 5, together with the comparison of methods. In Sect. 6, the developed decision support system is briefly explained. Finally, the concluding remarks are made in Sect. 7.
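The savings idea behind the Clarke and Wright algorithm mentioned above can be sketched as follows; this is a simplified illustrative version (merging only at route ends, with toy distances and demands), not the implementation developed in the study.

```python
from itertools import combinations

def clarke_wright(dist, demand, capacity):
    """Simplified Clarke and Wright savings heuristic (depot = node 0):
    start with one route per station, then merge routes at their ends in
    order of decreasing savings s_ij = d(0,i) + d(0,j) - d(i,j),
    as long as the merged load fits the vehicle capacity."""
    n = len(dist)
    routes = [[i] for i in range(1, n)]
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i, j in combinations(range(1, n), 2)), reverse=True)
    for s, i, j in savings:
        if s <= 0:
            continue
        ri = next(r for r in routes if i in r)
        rj = next(r for r in routes if j in r)
        if ri is rj:
            continue
        load = sum(demand[k] for k in ri) + sum(demand[k] for k in rj)
        if ri[-1] == i and rj[0] == j and load <= capacity:  # end-point merge
            routes.remove(rj)
            ri.extend(rj)
    return routes

# Toy instance: 4 stations around a depot (node 0), vehicle capacity 10.
dist = [
    [0, 4, 5, 6, 5],
    [4, 0, 2, 9, 8],
    [5, 2, 0, 8, 9],
    [6, 9, 8, 0, 2],
    [5, 8, 9, 2, 0],
]
demand = [0, 4, 4, 4, 4]
print(clarke_wright(dist, demand, capacity=10))  # → [[1, 2], [3, 4]]
```

The large savings for the pairs (1, 2) and (3, 4) pull those stations onto shared routes, while the capacity limit prevents a single route from serving all four.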

2 Problem Definition The problem is a Capacitated Vehicle Routing with Time Windows Problem that deals with the transportation of LPG from the warehouse to the gas stations in the Aegean Region The problem is solved by keeping the working time intervals of the stations and ensuring that the total distance of the routes and the number of trucks used are at minimum levels. There is only one warehouse in the problem, which is in Alia˘ga, and the LPG to be supplied to the stations is provided from here. In the CVRPTW, the capacities of the trucks and the time intervals of the stations differ. Each station has its own time to get LPG service. The routing drawn for the stations is made by considering each station’s opening and closing time. Demands from stations come to the company’s system between 08:00 and 17:00. The trucks must leave the warehouse and return to the warehouse between the entrance and exit hours of the warehouse. Each station that makes a request can only be visited once a day by a truck. There are two types of trucks with a capacity of 17000 L and 30000 L. The total demand

Distribution Planning of LPG to Gas Stations


of the stations to be supplied by a truck should not exceed the truck capacity. The number of trucks of each type and the amount of LPG in the warehouse are assumed to be unlimited. The demand of each station is estimated by assigning rates to all stations according to the number of licensed drivers in Turkey, so demand is known. In the problem, the maximum working time of a driver must not exceed 9 h. It is assumed that a second driver is used when the route length exceeds 9 h, and an additional driver cost of 150 TL is added.

3 Literature Review

This literature review examines the Vehicle Routing Problem (VRP), which can be expressed as an optimization problem arising from product distribution. The approach taken to obtain the best solution is to create constraints appropriate to the type of problem [7]. Dantzig and Ramser first introduced the VRP based on a real gasoline delivery problem [8]. Since their work, constraints have been added to the vehicle routing problem to bring it closer to real life. The VRP is easy to define but hard to solve in the real world, especially when combined with other practical constraints, and there are numerous variants with specific constraints [5]. In light of this research, the problem type was determined to be a capacitated vehicle routing problem with time windows. According to Kallehauge, Larsen, Madsen, and Solomon, time windows are called soft when they can be considered non-binding for a penalty cost and hard when they cannot be violated [9]. Li also stated that very tight time window constraints complicate the problem and make it quite hard to find a feasible solution [5]. In the Capacitated Vehicle Routing Problem (CVRP), the objective is to find the optimal routes with minimum transportation cost and maximum customer satisfaction. Özkök and Kurul affirmed that in the capacitated vehicle routing problem, customer demands are deterministic, undivided, and known in advance [10]. The Capacitated Vehicle Routing Problem with Time Windows (CVRPTW) involves two objectives: minimizing the number of trucks and minimizing the total travel time. The goal of these problems is to minimize the number of vehicles and the total travel and waiting time needed to serve all customers while respecting their time constraints [9]. Solving the CVRPTW with an exact algorithm is not an easy task because of the complicated constraints.
Since the VRP is an NP-hard problem, it is typically difficult to solve, especially in the real world. The algorithms for this type of problem can be classified into three groups: exact algorithms, classic heuristic algorithms, and metaheuristic algorithms. Bruno claimed that exact methods for the VRPTW can be classified into three categories, Lagrangian relaxation-based methods, column generation, and dynamic programming, and reported that exact methods often perform poorly [11]. Therefore, most studies have relied on heuristic algorithms. A heuristic is a technique that seeks good solutions at a reasonable computational cost without being able to guarantee optimality; heuristics are often problem-specific, so a method that works for one problem cannot be used to solve a different one [11].


B. Alkan et al.

There have been several studies investigating solution methods for the VRP. The Clarke and Wright (CW) approach is used in this paper because CW is a savings algorithm that can generate an optimal or near-optimal route rapidly for the VRP. The Clarke and Wright algorithm is greedy: at each iteration it selects the route merger that yields the largest saving [11]. Keskintürk, Topuk, and Özyeşil described the CW algorithm as the most preferred among the heuristic algorithms [12]. Besides CW, the Nearest Neighbor is a sequential heuristic used in this paper as a second solution approach. Finally, the project team developed the TBD algorithm, inspired by the work of Erboy, who used the steps of the King algorithm [7]. The papers reviewed in this section are summarized in Table 1.

Table 1. Literature review on the capacitated vehicle routing problem with time windows.

Kallehauge, Larsen, Madsen & Solomon (2006)
  Problem definition: Developments of the optimal column generation approach to the vehicle routing problem with time windows are highlighted.
  Objective function: Minimizing the total travel cost.
  Method: Branch-and-bound method.

Xiaoyan Li (2015)
  Problem definition: Truck routing problems faced by St. Mary's food bank distribution center.
  Objective function: Minimizing the number of trucks used and the total travel time of all routes.
  Method: Clarke and Wright (savings) algorithm and Record-to-Record algorithm.

Timur Keskintürk, Nihan Topuk, Okan Özyeşil (2015)
  Problem definition: Implementation of two heuristics for a capacity-constrained vehicle routing problem and an analysis of solution techniques.
  Objective function: Minimizing the number of vehicles, the total cost of the distance traveled, and the number of customers whose products were not collected.
  Method: Clarke and Wright (savings) algorithm.

Leonardo Bruno (2019)
  Problem definition: Solving a food delivery problem with a vehicle routing problem approach in a real use case implemented at the company Sotral.
  Objective function: Minimizing the sum of all the distances traveled.
  Method: Clarke & Wright savings and time-windowed nearest neighbour techniques.

Beyza Ahlatcıoğlu Özkök & Feyyaz Celalettin Kurul (2014)
  Problem definition: An integer linear programming model for the vehicle routing problem and an application in the food industry.
  Objective function: Establishing the most appropriate routes for vehicles in the distribution of products to customers.
  Method: Integer linear programming model.

Timur Keskintürk, Barış Kiremitçi, Serap Kiremitçi (2016)
  Problem definition: A 2-opt heuristic algorithm for the travelling salesman problem and the effect of the initial solutions.
  Objective function: Comparing 2-opt results obtained with randomly determined initial solutions against tour-generation initial solutions.
  Method: 2-opt heuristic algorithm.

Nedret Erboy (2010)
  Problem definition: Modeling make-to-order manufacturing cells and an application in the metal sector.
  Objective function: Minimizing the deviation between groups.
  Method: Genetic algorithm & King algorithm.

4 Modeling and Solution Methodology

The model describes the CVRPTW, which aims to reduce cost by minimizing the distance traveled and the number of trucks used while respecting the stations' time intervals. In the problem, the requests of a total of 41 stations in the Aegean Region are met from the warehouse in Aliağa. The distance matrix was created using Bing Maps, and the demand matrix was estimated from the share of Turkey's population holding a driving license, calculated accordingly for all stations. Service time was taken as 0.1667 h (10 min) per 1000 L of demand; travel time between stations was calculated by dividing the distance between stations by the truck speed. To avoid model confusion, the model distinguishes two truck types, large and small. The rental cost is taken as 1000 TL for large trucks and 800 TL for small trucks.

A. Mathematical Modeling

The mathematical model of the problem is as follows:


SETS
M: set of stations
N: set of stations including the depot Aliağa, represented by 0
K: set of trucks (by type)

INDICES
i, j, v: station indices
k: truck index

PARAMETERS
Z: number of trucks available
d_ij: distance from station i to station j
t_ij: travel time from station i to station j
D_i: demand of station i
e_i: earliest departure time from station i
l_i: latest departure time from station i
s_i: service time at station i
c_k: per-km cost of truck k
r_k: rental cost of truck k
C_k: capacity of truck k
A_k: driver cost

DECISION VARIABLES
y_ijk = 1 if truck k travels from station i to station j, 0 otherwise
x_ik: arrival time of truck k at station i
w_k = 1 if the k-th truck has a second driver, 0 otherwise

Objective function:

Min Σ_{i∈N} Σ_{j∈N} Σ_{k∈K} d_ij · c_k · y_ijk + Σ_{j∈N} Σ_{k∈K} y_0jk · r_k + Σ_{j∈N} Σ_{k∈K} y_0jk · A_k + Σ_{k∈K} A_k · w_k,  (i ≠ j)   (1)

Constraints:

Σ_{i∈N} y_0ik ≤ Z   ∀k ∈ K   (2)
Σ_{i∈N} y_i0k ≤ Z   ∀k ∈ K   (3)
Σ_{i∈N} Σ_{k∈K} y_ijk = 1   ∀j ∈ M, i ≠ j   (4)
Σ_{j∈N} Σ_{k∈K} y_ijk = 1   ∀i ∈ M, i ≠ j   (5)
Σ_{i∈N} y_ivk − Σ_{j∈N} y_vjk = 0   ∀k ∈ K, v ∈ N   (6)
Σ_{i∈N} Σ_{j∈M} D_j · y_ijk ≤ C_k   ∀k ∈ K   (7)
e_i ≤ x_ik   ∀i ∈ N, k ∈ K   (8)
x_ik ≤ l_i   ∀i ∈ N, k ∈ K   (9)
x_ik + t_ij + s_j − x_jk ≤ (1 − y_ijk) · B   ∀i ∈ N, ∀j ∈ M, ∀k ∈ K   (10)
x_ik − 9 ≤ B · w_k   ∀i ∈ N, ∀k ∈ K   (11)
y_ijk, w_k ∈ {0, 1}   ∀i, j ∈ N, ∀k ∈ K   (12)
x_ik ≥ 0   ∀i ∈ M, ∀k ∈ K   (13)

where B denotes a sufficiently large constant.
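As a quick sanity check of the cost structure in objective (1), the sketch below evaluates one truck's route cost (distance cost plus rental plus driver costs, with the optional second driver of constraint (11)). All distances and cost parameters here are illustrative, not the case-study data.

```python
# Sketch: evaluate the cost terms of objective (1) for one fixed route.
# All numbers are illustrative, not the paper's data.

def route_cost(route, d, c_k, r_k, A_k, second_driver):
    """Cost of one truck's route: distance cost + rental + driver(s).
    route is a list of station indices starting and ending at depot 0."""
    dist = sum(d[route[i]][route[i + 1]] for i in range(len(route) - 1))
    cost = dist * c_k + r_k + A_k          # per-km cost + rent + first driver
    if second_driver:                       # w_k = 1 in constraint (11)
        cost += A_k                         # extra 150 TL in the case study
    return cost

# toy symmetric distance matrix (km), depot = 0
d = [[0, 10, 20],
     [10, 0, 15],
     [20, 15, 0]]

total = route_cost([0, 1, 2, 0], d, c_k=2.0, r_k=800, A_k=150, second_driver=False)
print(total)  # (10 + 15 + 20) * 2 + 800 + 150 = 1040
```

Summing this over all trucks, plus the second-driver terms, gives the value of objective (1) for a candidate solution.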


Table 2. Results obtained with the Clarke & Wright algorithm (7 stations).

With time window:
  Route 1: demand 15000, cost 2242.58, duration 16.40, truck S, route A-K1-AF1-A
  Route 2: demand 22912, cost 2594.25, duration 16.74, truck B, route A-I7-M1-B2-A
  Route 3: demand 20000, cost 2530.79, duration 15.62, truck B, route A-D5-U2-A

Without time window:
  Route 1: demand 25000, cost 2426.18, truck B, route A-AF1-K1-U2-A
  Route 2: demand 25281, cost 2541.41, truck B, route A-B2-M1-D5-A
  Route 3: demand 7631, cost 1032.90, truck S, route A-I7-A

The objective function (1) minimizes the total transportation cost: the total distance of the routes, the number of trucks used, and the cost of the drivers. Constraints (2) and (3) limit the number of trucks leaving and arriving at Aliağa to the specified fleet size. Constraints (4) and (5) ensure that each station except Aliağa is visited exactly once. Constraint (6) preserves route continuity: every truck that enters an intermediate station (v) also leaves it. Constraint (7) ensures that the sum of the demands a truck carries does not exceed its capacity. Constraints (8), (9), and (10) ensure that stations are served within their time windows. Constraint (11) activates a second driver when the route duration exceeds 9 h. Constraint (12) defines y and w as binary decision variables, and constraint (13) defines x_ik as a nonnegative decision variable.

B. Heuristic Solution Approaches

The mathematical model given above can solve instances of at most 12 stations. Therefore, a literature review was conducted to determine the most appropriate heuristic methods for solving the problem with real data. Among the heuristic methods used for the CVRPTW, Clarke and Wright, the Nearest Neighbor Algorithm (NNH), and the TBD algorithm were selected. Clarke and Wright is a savings algorithm: it builds the route that provides the greatest savings for the CVRPTW. In this study, the savings are computed from the distances between stations (d_ij), and the selected stations are exchanged at each step when this improves the savings [10]. The saving amount s_ij is found by the following equation:

s_ij = d_0i + d_0j − d_ij   (14)

Stations are selected by listing the savings in descending order. Each selected station is checked against its time window, and if the total demand does not exceed the truck capacity, it is added to the route. The algorithm ends after all stations have been assigned. Table 2 shows the results obtained for 7 stations.
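The savings computation of Eq. (14) can be sketched as follows, with the depot (Aliağa) at index 0 and an illustrative distance matrix; pairs are ordered with the largest saving first, as in the standard Clarke and Wright procedure.

```python
# Sketch of Eq. (14): s_ij = d_0i + d_0j - d_ij, for all station pairs.
# The distance matrix is illustrative; index 0 is the depot.

def savings_list(d):
    """Return station pairs ordered by saving, largest first
    (the standard Clarke & Wright merge order)."""
    n = len(d)
    pairs = []
    for i in range(1, n):
        for j in range(i + 1, n):
            s = d[0][i] + d[0][j] - d[i][j]
            pairs.append((s, i, j))
    pairs.sort(reverse=True)
    return pairs

d = [[0, 12, 18, 9],
     [12, 0, 7, 20],
     [18, 7, 0, 25],
     [9, 20, 25, 0]]

print(savings_list(d))  # [(23, 1, 2), (2, 2, 3), (1, 1, 3)]
```

The route construction then walks this list, merging a pair into a route only if the time-window and capacity checks described above succeed.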


The Nearest Neighbor Algorithm, developed by Bellmore and Nemhauser (1966), is the second heuristic method chosen for the CVRPTW. In the first step, the station closest to the starting point is selected; in each subsequent step, the station closest to the previously visited station is selected. As in the Clarke and Wright algorithm, each station is added to the route only if it complies with the time and capacity constraints. After all stations have been visited, the Nearest Neighbor Algorithm ends. Solutions for 7 stations are shown in Table 3.

Table 3. Results obtained with the Nearest Neighbor algorithm (7 stations).

With time window:
  Route 1: demand 15000, cost 2072.70, duration 14.33, truck S, route A-U2-AF1-A
  Route 2: demand 15281, cost 2243.95, duration 16.46, truck S, route A-D5-M1-A
  Route 3: demand 17631, cost 2375.48, duration 13.67, truck B, route A-B2-I7-A
  Route 4: demand 10000, cost 2134.35, duration 14.25, truck S, route A-K1-A

Without time window:
  Route 1: demand 27631, cost 2262.52, truck B, route A-I7-D5-U2-A
  Route 2: demand 20281, cost 2962.25, truck B, route A-B2-M1-AF1-A
  Route 3: demand 10000, cost 1834.35, truck S, route A-K1-A
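The Nearest Neighbor construction described above can be sketched as follows; for brevity only the capacity check is shown (the time-window check is omitted), and the distance matrix and demands are illustrative.

```python
# Sketch of the Nearest Neighbor route construction with a capacity check.
# Time windows are omitted for brevity; all data is illustrative.

def nearest_neighbor_route(d, demand, capacity):
    """Build one route: repeatedly visit the closest unvisited station
    that still fits in the truck; return to depot 0 at the end."""
    unvisited = set(range(1, len(d)))
    route, load, current = [0], 0, 0
    while unvisited:
        feasible = [s for s in unvisited if load + demand[s] <= capacity]
        if not feasible:
            break                       # a new route would start here
        nxt = min(feasible, key=lambda s: d[current][s])
        route.append(nxt)
        load += demand[nxt]
        unvisited.remove(nxt)
        current = nxt
    route.append(0)
    return route, load

d = [[0, 5, 9, 4],
     [5, 0, 3, 7],
     [9, 3, 0, 6],
     [4, 7, 6, 0]]
demand = [0, 6000, 8000, 5000]           # litres, depot has no demand
print(nearest_neighbor_route(d, demand, capacity=17000))
# ([0, 3, 2, 0], 13000) -- station 1 would exceed 17000 L, so it is left
# for the next route
```

In the full heuristic this loop is repeated, opening a new truck each time the remaining stations no longer fit.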

After the literature review for the CVRPTW, the TBD heuristic was created for this project. The TBD heuristic aims to provide a new solution method as an alternative to those in the literature. It combines the King algorithm and the Nearest Neighbor algorithm: the King algorithm's steps of ordering the row scores from the highest value to the lowest and grouping them are used here [13]. Like the other algorithms, its purpose is to minimize cost. Table 4 shows the results obtained for 7 stations. The steps of the method are as follows:

Step 1: For each row of the distance matrix except Aliağa, the row average is computed (from left to right).
Step 2: A new matrix of the same size is created. Each cell in every row except Aliağa is set to 1 if its value is less than or equal to the row average, and to 0 otherwise.
Step 3: The 1s are summed for each row to obtain the row scores, and the rows are ordered by decreasing score.
Step 4: Rows with the same score are grouped. (During routing, one group must be finished before passing to the next.)
Step 5: The route starts with the station that has the highest score and is closest to Aliağa.
Step 6: From the previous station, the next station is one whose cell value is 1 in the created matrix. If more than one station in the same group has a value of 1, the closest one is selected.


Step 7: For each candidate station, the earliest and latest times are checked. If the time window is satisfied, the algorithm proceeds to the next step; otherwise, it returns to Step 6.
Step 8: If adding the selected station keeps the load below the truck capacity, it is added to the route; otherwise, the algorithm returns to Step 6.
Step 9: If the total demand is 17000 L or below, a small truck is assigned to the route; otherwise, a big truck is assigned.
Step 10: The algorithm ends when all stations have been visited; otherwise, it goes back to Step 6.

Table 4. Results obtained with the TBD algorithm (7 stations).

With time window:
  Route 1: demand 22911.75, cost 2595.92, duration 16.75, truck B, route A-B2-M1-I7-A
  Route 2: demand 15000, cost 2072.70, duration 14.33, truck S, route A-U2-AF1-A
  Route 3: demand 20000, cost 2866.46, duration 18.97, truck B, route A-D5-K1-A

Without time window:
  Route 1: demand 25280.75, cost 2489.64, truck B, route A-B2-M1-D5-A
  Route 2: demand 22631, cost 2287.57, truck B, route A-I7-U2-AF1-A
  Route 3: demand 10000, cost 1834.35, truck S, route A-K1-A
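The scoring part of the TBD heuristic (Steps 1–3) can be sketched as below. This is one reading of the steps, in which the depot row/column and the diagonal are skipped; the 5×5 matrix is illustrative, not station data.

```python
# Sketch of TBD Steps 1-3: row averages of the distance matrix, a 0/1
# matrix against each row average, and the resulting row scores, sorted
# by decreasing score. Row/column 0 (Aliağa) and the diagonal are skipped.

def tbd_row_scores(d):
    scores = {}
    for i in range(1, len(d)):
        row = [d[i][j] for j in range(1, len(d)) if j != i]
        avg = sum(row) / len(row)
        # Step 2: 1 if the cell is <= the row average, else 0; Step 3: sum
        scores[i] = sum(1 for v in row if v <= avg)
    # order stations by decreasing score (ties keep station order)
    return sorted(scores.items(), key=lambda kv: -kv[1])

d = [[0, 4, 8, 6, 3],
     [4, 0, 2, 9, 7],
     [8, 2, 0, 3, 5],
     [6, 9, 3, 0, 1],
     [3, 7, 5, 1, 0]]
print(tbd_row_scores(d))  # [(2, 2), (3, 2), (1, 1), (4, 1)]
```

Stations with equal scores form the groups of Step 4; routing then proceeds group by group with the nearest-neighbor rule of Steps 5–6.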

5 Numerical Study and Discussion of Results

A. Data Generation

The data used in the solution of the problem were generated with the approaches described below. First, time constraints were added to control the entry to and exit from the stations. Users can set these constraints by manually entering the earliest entry time and the latest exit time in the DSS. LPG trucks distributed from Aliağa cannot reach a station before its earliest time and cannot enter it after its latest time; the optimal route is created under these time and capacity constraints. Second, Turkey's population between the ages of 20 and 60 was used to estimate the qualified (licensed) population for the stations' demand amounts [14]; this group constitutes 55.60% of Turkey's population. Applying this ratio to the population of each station's location gives the qualified driving population. It is known that the Man1 station orders 6000 L of LPG per day, so the LPG order per person is calculated by dividing the daily demand of the Man1 station by its licensed population. The licensed population of each of the forty-one stations is then multiplied by the order quantity per person. Finally, since the tanks of the Man1 station are known to hold between 5000 and 10000 L, the lower and upper


limits of the order quantities were set to 5000 and 10000 L. Demand data were generated in line with these values; the user can also enter daily demands dynamically in the relevant fields of the DSS. The current locations of the forty-one stations in the Aegean Region were entered into the database using Bing Maps. When the user selects the stations to be served that day, a distance matrix is created from these locations with the help of Bing Maps connected to the DSS; the data sets used for the solution are obtained in this way. Third, the travel time matrix (t_ij) between stations was created by dividing each element of the distance matrix by the truck speed, taken as an average of 60 km/h. Fourth, the service times of the stations were calculated assuming that 10 min are spent for every thousand liters of demand, converted to hours.

B. Verification

The purpose of this section is to verify that the CPLEX solutions and the code work correctly. The demands, earliest and latest hours, and service times of 7 stations were prepared for a toy problem. The total demand of each route must equal the total demand of its stations. At the same time, the total duration of each route is checked by summing the travel times (t_ij) and the service times. If the cycle time of a route exceeds 9 h, an additional driver is used and a salary of 150 Turkish Liras is paid for each extra driver. Vehicles cannot enter a station before its earliest time or leave after its latest time. All solutions obtained with CPLEX and the heuristic methods were checked manually, and no route violating the assumptions or constraints was found.

C. Comparison of Results

The results were obtained with CPLEX [15], Python, and Excel VBA code. The CPLEX run time for 12 stations exceeded twelve hours; as the number of stations increased, CPLEX ran longer and finding a solution became difficult. Therefore, heuristic methods were investigated, and the problem was solved with three of them: the Nearest Neighbor, Clarke and Wright, and TBD heuristics. The time constraints and the truck drivers' working-time constraint were added to both the heuristic methods and the mathematical model, and the costs of the routes with and without time constraints were calculated. Table 5 shows the total transportation costs for 7, 10, and 41 stations. As can be seen from the table, using time constraints leads to higher transportation costs than not using them. This may be because adding earliest- and latest-time restrictions forces new routes to be drawn, or because drivers are added when the route length exceeds 9 h.
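The demand and travel-time generation above amounts to simple arithmetic, sketched below. The populations are made up; the 55.60% ratio, the 6000 L Man1 order, the 5000–10000 L tank limits, and the 60 km/h speed follow the text.

```python
# Sketch of the data generation: per-person LPG order calibrated on the
# Man1 station, station demands clipped to tank limits, travel times from
# distances. Population figures are illustrative.

QUALIFIED_RATIO = 0.5560            # share of population aged 20-60
MAN1_POP, MAN1_ORDER = 90000, 6000.0  # MAN1_POP is a hypothetical figure

per_person = MAN1_ORDER / (MAN1_POP * QUALIFIED_RATIO)  # litres per driver

def station_demand(population):
    raw = population * QUALIFIED_RATIO * per_person
    return min(max(raw, 5000.0), 10000.0)  # clip to 5000-10000 L tank limits

def travel_time_h(distance_km, speed_kmh=60.0):
    return distance_km / speed_kmh

print(round(station_demand(90000)))  # Man1 itself -> 6000
print(travel_time_h(150))            # 2.5 h
```

The same pattern extends to service times: 10 minutes per 1000 L of demand, expressed in hours.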


The salary paid to the second truck driver and changes in the arrival station also change the transportation cost. With the time constraint, the TBD heuristic yields a lower transportation cost than the Nearest Neighbor heuristic. Both with and without time constraints, the Clarke and Wright heuristic yields a lower transportation cost than the other heuristic methods.

Table 5. Total transportation cost for 7, 10, and 41 stations, with and without time windows.

Algorithm            7 stations (with TW / without TW)   10 stations (with TW / without TW)   41 stations (with TW / without TW)
CPLEX                6931.23 / 5522.03                   7945.58 / 6989.39                    No solution / No solution
Clarke and Wright    7367.62 / 6000.49                   9569.91 / 7362.04                    26398.68 / 21184.72
TBD algorithm        7535.08 / 6611.56                   9597.46 / 8208.73                    27515.57 / 23998.28
Nearest Neighbor     8826.48 / 7059.12                   11124.91 / 7272.69                   27687.14 / 23148.10

Figure 1 shows each method's cost as a percentage of the CPLEX solution for 7 and 10 stations and of the Clarke & Wright solution for 41 stations; the reference solution is taken as one hundred percent, and the solution methods are compared with these percentage values. As seen in the figure, the Clarke and Wright heuristic gives the value closest to the optimal solution, and with time constraints the TBD heuristic gives a lower percentage than the Nearest Neighbor heuristic.

D. Discussion of Results

Time constraints were used in the problem even though they lead to higher transportation costs, because they bring the problem closer to real life: trucks do not exceed the 60 km/h speed limit, truck drivers have limited daily driving hours, and each station has a specific earliest and latest time for receiving LPG service. The mathematical model and the heuristic methods were arranged according to these assumptions, and the results were obtained accordingly. In light of the analyzed results, the solutions obtained with CPLEX gave the lowest transportation cost. Comparing the solutions for the same stations, Clarke and


Fig. 1. The scores of the solutions shown as percentages (cost changes of the solution methods for 7, 10, and 41 stations, with and without time windows; the reference solution is taken as 100%)

Wright gave the lowest transportation cost among the heuristic methods. Nevertheless, all three heuristic methods provide solutions close to the optimal solution. With time constraints and the same station information, the TBD heuristic provides lower transportation costs than the NNH algorithm. With the Decision Support System, the user can obtain the results of all four methods using the same station information and compare the solution methods.
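The percentage comparison underlying Fig. 1 is simply each method's Table 5 cost divided by the reference solution (CPLEX for 7 and 10 stations, Clarke & Wright for 41 stations). A minimal sketch using the 41-station, time-windowed column of Table 5:

```python
# Sketch of the Fig. 1 normalization: each total cost as a percentage of
# the reference solution. Costs are from Table 5 (41 stations, with TW).

def as_percentages(costs, reference):
    return {name: round(100.0 * c / reference, 2) for name, c in costs.items()}

costs_41_tw = {"Clarke & Wright": 26398.68,
               "TBD": 27515.57,
               "Nearest Neighbor": 27687.14}
pct = as_percentages(costs_41_tw, reference=26398.68)
print(pct["Clarke & Wright"], pct["TBD"])  # 100.0 104.23
```

The same function applied to the other Table 5 columns, with the CPLEX cost as the reference, reproduces the 7- and 10-station bars of the figure.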

6 Decision Support System

A DSS is a computer-based system that helps individuals in a company choose among alternatives and facilitates their decision-making processes. The DSS was created with the Excel VBA interface, as it is user-friendly and understandable software. The aim of the DSS developed for this project is to enable the user to solve a CVRPTW over the forty-one stations. A dynamic structure was therefore created so that the user can select the stations to be supplied that day, add new stations, or delete stations. The DSS offers the user two ways to enter a new station when the company signs an agreement with it: either the station names are entered by the user and the station locations are obtained from the addresses via the Bing Maps service to which the DSS is connected, or the same process is done by selecting the addresses from an external Excel file. There is also an option to delete a station from the database when the company terminates its agreement. While distances between stations are calculated in the background using Bing Maps, users can enter daily demand quantities and time constraints on the dynamically created demand page. In addition, buttons on the relevant pages let the user view the entered values and change the truck and station information. In the solution phase, IBM ILOG CPLEX Optimization Studio 20.1 [15] and Python are connected to the interface, which offers the user four alternatives to compare. Two methods were coded in these programs, with the results transferred to the interface, while the other two methods were coded in Excel VBA. The results are reported to the user in detail on the relevant pages. The project group developed one of the three heuristic methods used, and the page in Fig. 2 was created to compare the results with the optimal solution. On this


page, the costs are visualized using a dynamic chart to make them more understandable. There is also a print button on each page to print out the solutions.

Fig. 2. Comparison of the results

7 Conclusion

The classical vehicle routing literature often focuses on the simple routing problem, but studies involving more realistic restrictions have been increasing in recent years. In this work, constraints were developed by adapting the problem to real life, and solutions were obtained for the capacitated vehicle routing problem with time windows. As the number of stations increased, the mathematical model took a long time to produce results, so heuristic methods that give results in a short time were used. In addition to the CPLEX solution, solutions were obtained with three different heuristic methods. New constraints, namely the time windows and the drivers' daily driving hours, were added to all four methods. The results obtained with CPLEX were verified and compared with the other heuristic methods, whose solutions were close to the optimal solution. A Decision Support System was built with Excel VBA code. The user enters the station time intervals and demands through the DSS, and the routes are then drawn within these time intervals. The total duration of the routes is also set by the user in the DSS, and routes are created so as not to exceed this limit. The DSS solves the problem with the information received from the user using four different methods. It also helps users print the results, compare solutions, and view the cost graphs in the detailed reports of the methods. While using the DSS, users can observe the demands of the routes, the types of vehicles used, the route lengths, and the route costs. In this way, the user can choose the solution methodology.


The CPLEX runs took a long time to respond, and no solution was found for 41 stations, whereas the heuristic methods were able to solve the 41-station data of the Aegean Region. Among the heuristic results, the Clarke and Wright algorithm gave the lowest cost of 26398.68 TL, followed by the TBD algorithm with a 4.23% difference and the Nearest Neighbor algorithm with a 4.68% difference. In the future, the aim is to solve the problem for much larger data sets beyond the Aegean Region. By determining more than one warehouse area, trucks could return from the last station to the nearest warehouse, which may reduce transportation costs. In addition, purchasing trucks instead of renting them may be preferable.

References

1. Gattorna, J., Day, A., Hargreaves, J.: Effective logistics management. Logistics Inf. Manage. 4(2), 2–87 (1991)
2. El Sherbeny, N.A.: Vehicle routing with time windows: an overview of exact, heuristic and metaheuristic methods. J. King Saud Univ. 22(3), 123–131 (2010)
3. Aggarwal, D., Chahar, V., Girdhar, A.: Lagrangian relaxation for the vehicle routing problem with time windows. In: International Conference on Intelligent Computing (2017)
4. Macedo, R., Alves, C., Carvalho, J.M., Clautiaux, F.: Solving the vehicle routing problem with time windows and multiple routes exactly using a pseudo-polynomial model. Eur. J. Oper. Res. 214, 535–545 (2011)
5. Li, X.: Capacitated Vehicle Routing Problem with Time Windows: A Case Study on Pickup of Dietary Products in Nonprofit Organization. Arizona State University (2015)
6. Pichpibul, T., Kawtummachai, R.: An improved Clarke and Wright savings algorithm for the capacitated vehicle routing problem. Sci. Asia 38(3), 307–318 (2012)
7. Keskintürk, T., Topuk, N., Özyeşil, O.: Araç Rotalama Probleminin Tasarruf Algoritması ile Çözüm Yöntemlerinin Sınıflandırılması ve Bir Uygulama [Classification of savings-algorithm solution methods for the vehicle routing problem and an application]. İşletme Bilimi Dergisi 3(2), 77–107 (2015)
8. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manage. Sci. 6, 80–91 (1959)
9. Kallehauge, B., Larsen, J., Madsen, O.B.G., Solomon, M.M.: Vehicle routing problem with time windows. Springer (2005). https://doi.org/10.1007/0-387-25486-2_3
10. Ahlatcıoğlu Özkök, B., Kurul, F.C.: Araç rotalama problemine tam sayılı lineer programlama modeli ve gıda sektöründe bir uygulama [An integer linear programming model for the vehicle routing problem and an application in the food industry]. Istanbul Univ. J. School Bus. 43(2), 251–260 (2014)
11. Bruno, L.: Solving a food delivery problem with a vehicle routing problem-based approach. Politecnico di Torino (2019)
12. Sörensen, K., Arnold, F., Cuervo, D.P.: A critical analysis of the improved Clarke and Wright savings algorithm. In: International Transactions in Operational Research (2017)
13. Erboy, N.: Siparişe Dayalı Üretim Hücrelerinin Modellenmesi ve Metal Sektöründe Bir Uygulama [Modeling make-to-order manufacturing cells and an application in the metal sector]. Dokuz Eylül Üniversitesi Sosyal Bilimler Enstitüsü (2010)
14. Türkiye Nüfusu Yaş Gruplarına Göre Dağılımı [Distribution of Turkey's population by age group] (2020). Nüfusu.com: https://www.nufusu.com/turkiye-nufusu-yas-gruplari
15. IBM Corporation: ILOG CPLEX Optimization Studio (2017). Language User's Manual: https://www.ibm.com/docs/en/icos/12.8.0.0?topic=opl-languageusers-manual

Drought Modelling Using Artificial Intelligence Algorithms in Izmir District

Zeynep İrem Özen1, Berk Sadettin Tengerlek1, Damla Yüksel1, Efthymia Staiou1(B), and Mir Jafar Sadegh Safari2

1 Department of Industrial Engineering, Yaşar University, İzmir, Turkey
[email protected], [email protected]
2 Department of Civil Engineering, Yaşar University, İzmir, Turkey
[email protected]

Abstract. The world's water resources are decreasing day by day due to factors such as climate change, drought, inefficient pricing policies implemented by governments, population growth, uncontrolled water consumption, technological developments, and industrialization. A decrease in water resources causes water scarcity in the long term. This study analyzes meteorological drought in the Izmir district, Turkey. Inspired by the real-life problem, drought estimation models are developed through artificial neural network-based artificial intelligence techniques incorporated into a decision support system. The Z-Score Index (ZSI) values are computed using precipitation data collected from five meteorological stations in the Küçük Menderes basin, and the developed models are compared according to a variety of statistical performance metrics.

Keywords: Drought · Artificial neural networks · Feed forward backpropagation · Generalized regression · Radial basis function · Z-Score Index
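The Z-Score Index named in the abstract standardizes each precipitation value by the mean and standard deviation of the series. A minimal sketch, with illustrative monthly values rather than station data:

```python
# Sketch of the Z-Score Index (ZSI): each precipitation value standardized
# by the series mean and (sample) standard deviation. Values illustrative.

from statistics import mean, stdev

def zsi(precip):
    m, s = mean(precip), stdev(precip)
    return [(p - m) / s for p in precip]

monthly_mm = [80.0, 45.0, 60.0, 20.0, 95.0]
print([round(z, 2) for z in zsi(monthly_mm)])
# [0.68, -0.51, 0.0, -1.36, 1.19]  -- negative values indicate drier-than-
# average months
```

In the paper these index values, computed per station, serve as the target variable for the neural network models.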

1 Introduction

Nowadays, population growth and economic development have increased water demand and the pressure on water resources. The demand for water supplies has risen seven-fold, while the world population has tripled in the last century [6]. Over the years, the increase in water use brings with it several threats, such as water scarcity due to the limited availability of water resources. In 2012, the number of countries suffering from water scarcity reached 43 and the affected population reached 700 million, while forecasts for 2050 show that 65 countries will suffer famine and 7 billion people will be affected [7]. The average amount of water per capita in Turkey and the EU member states is projected to decrease gradually until 2050 [3]. Climate change is the most important factor causing water scarcity: temperatures rise above the seasonal normal while precipitation falls below it. Drought, a natural event caused by below-normal precipitation over a certain period, has the widest impact among natural disasters of meteorological character. Droughts begin with a lack of precipitation over

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 689–701, 2022. https://doi.org/10.1007/978-3-030-90421-0_59


Z. ˙I. Özen et al.

time or space, then progress to a more complex hydrometeorological state. Droughts can be classified into five categories: (1) meteorological, (2) agricultural, (3) hydrological, (4) socioeconomic, and (5) ecological. These represent, respectively, a decrease in precipitation, a lack of moisture in the soil, a decrease in streamflows, a scarcity of water needed to fulfill human requirements, and a lack of water in biota (Yihdego et al. 2017 [18]; Yihdego et al. 2019 [19]; Vaheddoost and Safari 2021 [14]). For efficient water resources management, medium- and long-term forecasts are critical. Various drought forecasting modelling approaches have been examined in the literature. For example, Wambua et al. (2014) [15] explained different drought indexes and examined different types of artificial neural network methods for drought forecasting. ANN models were divided into three groups: (i) feedforward, (ii) recurrent, and (iii) hybrid networks. In feedforward networks, information propagates in a single direction, from the input layer to the output layer. The feedforward ANN structure was further subcategorized into the Generalized Regression Neural Network (GRNN), Recursive Multi-Step Neural Network (RMSNN), Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Direct Multi-Step Neural Network (DMSNN). In recurrent networks, information can propagate in both forward and backward directions through feedback loops. Data Intensive (DaI), Techniques Intensive (TI), and Model Intensive (MI) networks are subcategories of hybrid networks. Mishra and Desai (2006) [9] compared linear stochastic models (ARIMA/SARIMA), the recursive multi-step neural network (RMSNN), and the direct multi-step neural network (DMSNN) for the Kansabati basin in India. The Standardized Precipitation Index (SPI) values were used for drought forecasting, and the backpropagation training algorithm was also presented in that article.
The R2 values of RMSNN ranged between 6% and 85%, and those of DMSNN between 30% and 85%, depending on lead time and SPI series. Additionally, Barua et al. (2012) [2] conducted a study evaluating the effectiveness of an ANN-based model for estimating the nonlinear aggregated drought index (NADI). Two ANN prediction models were developed: the direct multi-step neural network (DMSNN) and the recursive multi-step neural network (RMSNN). The predictions of these two models were compared with ARIMA. The R2 values of the RMSNN model estimates for all six-month estimations are between 23% and 56%, while those of the DMSNN model are between 24% and 56%. As a result, both the RMSNN and DMSNN models were shown to outperform the ARIMA model with respect to R2, RMSE, and MAE. On the other hand, Cigizoglu (2008) [4] investigated artificial neural networks in water resources: feed-forward backpropagation (FFBP), Generalized Regression Neural Networks (GRNN), and Radial Basis Function neural networks (RBF) were applied to hydrological streamflow data. Safari et al. (2016) [12] also used FFBP, GRNN, and RBF techniques; that article was therefore examined to understand these techniques. Machine learning techniques, which are essentially data-driven processes, use mathematical algorithms to approximate nonlinear relationships between input and output data without any prior information about the system. Artificial neural network techniques such as feed-forward backpropagation, generalized regression, and radial basis function neural networks are known to

Drought Modelling Using Artificial Intelligence Algorithms


be powerful machine learning approaches for solving engineering problems (Safari et al. 2016 [12]). This study aims to evaluate historical data in the Izmir district in order to analyze drought and support sustainable water management in the face of water scarcity. To this end, three different artificial intelligence techniques, feed-forward backpropagation (FFBP), generalized regression neural networks (GRNN), and radial basis function neural networks (RBF), are applied for drought modelling.

2 Materials and Methods

A. Study Area and Data
This study uses daily total precipitation data obtained from the Turkish State Meteorology Service for the years 2005–2020 at five stations of the Küçük Menderes basin: Çeşme, İzmir, Kuşadası, Selçuk, and Seferihisar (Fig. 1). Initially, the missing daily data were reconstructed using historical properties of the time series, including seasonal averages and the spatiotemporal properties of the precipitation series at the other stations. The time series were then converted into average monthly and total monthly data. Afterward, the consistency, randomness, and trend in the data were examined using the double mass curve, the run test, and the linear trend test, respectively. The precipitation time series are given in Fig. 2.

Fig. 1. Study area and stations

Fig. 2. Precipitation time series for five stations (Kuşadası, Selçuk, Çeşme, Seferihisar, İzmir; monthly P in mm)

B. Applied Algorithms
1) Feedforward Back Propagation (FFBP)
There are many variants of ANNs owing to differences in network architecture. In a typical feedforward network, information propagates in a single direction; in Feedforward Back Propagation (FFBP) networks, by contrast, information propagates in both forward and backward directions. Backpropagation is a widely used algorithm for training feedforward neural networks. The structure of the FFBP network is shown in Fig. 3 [10]. The FFBP network comprises three layers: input, hidden, and output. The source


nodes in the input layer receive the elements of the activation pattern that form the input signals applied to the neurons in the second layer. The inputs of the third layer are provided by the output signals of the second layer. The number of neurons in the hidden layer influences model performance. The activation function is a continuous, bounded, nonlinear transfer function; for that reason, the sigmoid and hyperbolic tangent functions are selected. However, the FFBP algorithm has some disadvantages [1]. It is sensitive to the choice of initial weight values and can yield considerably different performance across applications [4]. The gradient descent algorithm is used for backpropagation in most ANN applications for water resources. In the learning process, the weights are adjusted for a given input to achieve the required output using an error convergence technique: the error at the output layer is propagated back through the hidden layer to the input layer, and the gradient descent method computes and adjusts the network weights to minimize the output error. However, this algorithm needs a long training time to reach the required error level. The Levenberg-Marquardt (LM) algorithm is an alternative that decreases training time considerably. The LM algorithm searches for the minimum of a nonlinear function by carrying out a curve fit on the data and is designed to approach second-order training speed; in general, second-order nonlinear optimization techniques are quicker and more efficient than other optimization approaches. Thus, the LM algorithm is used in this study.
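The gradient-descent backpropagation cycle described above can be sketched for a single-hidden-layer network as follows (pure NumPy; the layer sizes, learning rate, and toy data are illustrative assumptions, not the study's actual configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_ffbp(X, y, n_hidden=4, lr=0.5, epochs=5000, seed=0):
    """Train a one-hidden-layer feedforward net with gradient-descent backprop."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))  # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))           # hidden -> output weights
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass: sigmoid hidden layer, linear output
        h = sigmoid(X @ W1 + b1)
        yhat = (h @ W2 + b2).ravel()
        # backward pass: propagate the output error toward the input layer
        err = (yhat - y)[:, None] / len(y)
        dW2, db2 = h.T @ err, err.sum(axis=0)
        dh = (err @ W2.T) * h * (1.0 - h)              # sigmoid derivative
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        # gradient-descent update of the weights
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda Xn: (sigmoid(Xn @ W1 + b1) @ W2 + b2).ravel()

# toy usage on normalized inputs
X = np.random.default_rng(1).uniform(0.0, 1.0, (200, 2))
y = X[:, 0] - X[:, 1]
model = train_ffbp(X, y)
mae = float(np.abs(model(X) - y).mean())  # mean absolute training error
```

The LM variant used in the study replaces the plain gradient step with a damped second-order update; the simple loop above only illustrates the gradient-descent case described in the text.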
2) Generalized Regression Neural Network (GRNN)
The Generalized Regression Neural Network (GRNN) converges to any function between input and output by estimating the function directly from the training data, without an iterative procedure; thus, backpropagation is not required. The GRNN contains input, pattern, summation, and output layers; its structure is shown in Fig. 4 [8]. The number of neurons in the input layer equals the number of input variables. The input layer is connected to the pattern layer, which has one neuron for each training pattern; each neuron's output is a measure of the distance of the input from the stored pattern. Each neuron in the pattern layer is connected to the neurons in the summation layer [4]. There are two summation neurons, S and D: the S neuron computes the sum of the weighted outputs of the pattern layer, whereas the D neuron computes the sum of the unweighted outputs. Once the weights are set, the GRNN computes outputs directly [5]. Because a Gaussian function is used, the spread factor affects the GRNN; it is selected experimentally based on the minimum-error criterion of the developed model. When the spread factor is large, the function approximation is smoother and, in the limit, becomes a multivariate Gaussian [4].
3) Radial Basis Function (RBF)
The RBF network requires less computation time for training. It has three layers: input, hidden, and output. The nodes within each layer are fully connected to the preceding layer. The input variables are allocated to the nodes in the


Fig. 3. Structure of feedforward back propagation neural network

Fig. 4. Structure of generalized regression neural network
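The GRNN mapping sketched in Fig. 4 amounts to Gaussian kernel-weighted regression over the stored training patterns; a minimal sketch, with illustrative data and spread factor:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, spread=0.5):
    """GRNN prediction: no iterative training, just kernel-weighted averaging.

    Pattern layer: one neuron per training pattern; its output is a Gaussian
    of the distance between the input and that stored pattern.
    Summation layer: S = sum of target-weighted kernels, D = sum of kernels.
    Output layer: S / D.
    """
    preds = []
    for x in np.atleast_2d(X_new):
        d2 = ((X_train - x) ** 2).sum(axis=1)        # squared distances
        k = np.exp(-d2 / (2.0 * spread ** 2))        # pattern-layer outputs
        preds.append(float((k * y_train).sum() / k.sum()))
    return np.array(preds)

# toy usage: interpolate a smooth signal; a larger spread would oversmooth it
Xtr = np.linspace(0.0, 1.0, 50)[:, None]
ytr = np.sin(2.0 * np.pi * Xtr.ravel())
yhat = grnn_predict(Xtr, ytr, np.array([[0.25]]), spread=0.05)
```

This also makes the role of the spread factor visible: it is the width of the Gaussian kernel, so small values track the training data closely while large values smooth it out.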


Fig. 5. Structure of radial basis function neural network

input layer and, without weights, pass directly to the hidden layer. The transition from the input layer to the hidden layer is nonlinear, while that between the hidden and output layers is linear [4]. Various functions can be used for the transition, such as the Cauchy, Gaussian, or multiquadric function; the Gaussian is the most common. Figure 5 shows the RBF network described above, the input variables, and the relationship between the three layers [17]. Different numbers of hidden-layer neurons and spread factors should be investigated; in this study, the spread constant and the number of hidden neurons were determined by trial and error before the RBF simulation began. Although both the GRNN and the RBF network use the Gaussian function in their constructions, their optimum spread parameters differ: the two networks use different output functions, so the best spread varies from case to case [11].
C. Performance Criteria
The performance of different models can be evaluated in terms of the agreement between forecasted and observed values. The coefficient of determination (R2) measures the degree of correlation between observed and forecasted values and ranges between 0 and 1. The root mean square error (RMSE) evaluates the variance of the errors independently of the sample size and shows the inconsistency between forecasted and observed values. The models with the lowest RMSE and the highest R2 are the best performing; the optimized ANN model is the one with the best performance on the validation data set.
D. Z-Score Index (ZSI)
The Z-Score Index (ZSI), an important parameter for drought severity, is calculated by dividing the difference between precipitation and mean precipitation by the standard deviation of the sample time series.
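In symbols, ZSI = (P − P̄)/σ for each station's series. The index and the two performance criteria can be sketched in a few lines of pure Python (the helper names are illustrative):

```python
import math

def zsi(precip):
    """Z-Score Index: (P - mean P) / standard deviation of the series."""
    n = len(precip)
    mean = sum(precip) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in precip) / n)
    return [(p - mean) / std for p in precip]

def rmse(obs, fcst):
    """Root mean square error between observed and forecasted values."""
    return math.sqrt(sum((o - f) ** 2 for o, f in zip(obs, fcst)) / len(obs))

def r_squared(obs, fcst):
    """Coefficient of determination as the squared Pearson correlation."""
    n = len(obs)
    mo, mf = sum(obs) / n, sum(fcst) / n
    cov = sum((o - mo) * (f - mf) for o, f in zip(obs, fcst))
    vo = sum((o - mo) ** 2 for o in obs)
    vf = sum((f - mf) ** 2 for f in fcst)
    return cov * cov / (vo * vf)
```

A population standard deviation is used here; whether the study used the population or sample form is not stated in the text.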
The drought condition corresponding to each ZSI value is shown in Table 1 [16]. The ZSI was calculated for the five meteorological stations, and the graphs for the five stations are shown


in Fig. 6. When the ZSI values are examined, it is observed that drought has been noticeable over the last 15 years [13]. This is a sign that there may be a danger of water scarcity in the future; drought forecasting is therefore very important for observing whether such a risk exists.

Table 1. Drought conditions corresponding to ZSI values [16]

ZSI value               Drought condition
ZSI ≥ 0.25              No drought
0.25 > ZSI ≥ −0.25      Weak drought
−0.25 > ZSI ≥ −0.52     Slight drought
−0.52 > ZSI ≥ −0.84     Moderate drought
−0.84 > ZSI ≥ −1.25     Severe drought
ZSI < −1.25             Extreme drought
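The classification in Table 1 can be expressed as a simple lookup (bounds taken from the table, assuming contiguous class limits):

```python
def classify_zsi(z):
    """Map a Z-Score Index value to its drought condition (Table 1 bounds)."""
    if z >= 0.25:
        return "No drought"
    if z >= -0.25:
        return "Weak drought"
    if z >= -0.52:
        return "Slight drought"
    if z >= -0.84:
        return "Moderate drought"
    if z >= -1.25:
        return "Severe drought"
    return "Extreme drought"
```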

Fig. 6. ZSI values for five stations (Seferihisar, İzmir, Çeşme, Selçuk, Kuşadası; monthly ZSI)

3 Computational Results and Decision Support System for Drought Modeling

A. Data Preparation
Rescaling the data is necessary so that the model treats all inputs equally. The maximum of each input was found, every value of that input was divided by this maximum, and the data were thereby normalized. A training set is used to build the model, while a test (validation) set is used to validate it; the models built are meant to predict unknown results. The dataset was divided using the 80–20% rule: 80% of the data formed the training set and 20% the testing set. Accordingly, 134 months of data created the training set and 34 months created the testing set. The ANN models were trained and tested with different combinations of input variables and numbers of hidden-layer neurons, and compared using statistical measures of goodness of fit. The MATLAB code that takes a training set of normalized data was run. In the training process, inputs and outputs were considered together; this is the stage of learning from inputs and outputs. In the testing process, only the inputs were given and MATLAB was expected to forecast the output. The number of hidden neurons for FFBP and the spread parameter for GRNN and RBF were varied in the MATLAB code. The code that takes a testing set of normalized input data was run and the results were transferred to Excel. Since the data were normalized, denormalization was performed, and the denormalized forecasted values were compared with the observed values using statistical performance measures such as R2 and RMSE.
B. Drought Models
To validate the drought forecasting part, the Z-Score Index (ZSI) values of the five meteorological stations between 2005 and 2020 were examined [13]: Selçuk, Kuşadası, İzmir, Çeşme, and Seferihisar. To determine the inputs for MATLAB, time lags from t−1 to t−24 were prepared from these 15 years of values. The correlation between time t and the lagged series was examined using the correlation function, and the six lags with the highest correlation coefficients were taken as the inputs of the developed models. The resulting set of 6 inputs and 1 output (t) was normalized, and the data were prepared as described in the previous section. Results of the three algorithms, FFBP, GRNN, and RBF, for the five stations are shown in Figs. 7, 8 and 9, respectively. To find the best FFBP, GRNN, and RBF drought models, different numbers of hidden neurons (FFBP) and different spreads (GRNN, RBF) were examined.
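The input-selection and data-splitting steps described above can be sketched as follows (pure Python; `series` is a hypothetical monthly drought-index series, and the helper names are illustrative):

```python
import math

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def select_lags(series, max_lag=24, n_inputs=6):
    """Rank lags t-1 .. t-max_lag by |correlation| with time t; keep the best."""
    scores = []
    for lag in range(1, max_lag + 1):
        scores.append((abs(corr(series[:-lag], series[lag:])), lag))
    return [lag for _, lag in sorted(scores, reverse=True)[:n_inputs]]

def train_test_split(rows, train_frac=0.8):
    """80-20 split in time order: first 80% for training, the rest for testing."""
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

# hypothetical monthly index with a 12-month cycle (168 months, as in the text)
series = [math.sin(2.0 * math.pi * k / 12.0) + 0.05 for k in range(168)]
best_lags = select_lags(series)
train_rows, test_rows = train_test_split(list(enumerate(series)))
```

With a 12-month seasonal cycle, the seasonal lags dominate the correlation ranking, which is why lag selection of this kind tends to pick up the annual periodicity of precipitation-based indexes.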

1

Cesme

0.5 ZSI

Observed FFBP

ZSI

0.5

-0.5

-0.5

0

-1

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month 1

1 0.5 ZSI

Seferihisar

ZSI

Observed FFBP

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

-0.5

-0.5

0

İzmir

-1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

1

Observed FFBP

0

-1

Observed FFBP

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Selçuk

ZSI

0.5

Kusadasi

0

-1

0.5

Observed FFBP

0

-0.5 -1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Fig. 7. Performance of the best FFBP drought model for five stations

696

1

1

Cesme

0.5 ZSI

Observed GRNN

ZSI

0.5

Z. ˙I. Özen et al.

-0.5

-0.5

0

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month 1

-1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month 1 0.5 ZSI

Seferihisar

ZSI

Observed GRNN

-0.5

-0.5

0

İzmir

-1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Observed GRNN

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Selçuk

ZSI

0.5

Observed GRNN

0

-1 1

Kusadasi

0

-1

0.5

Observed GRNN

0

-0.5 -1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Fig. 8. Performance of the best GRNN drought model for five stations 1

1

Cesme

0.5 ZSI

Observed RBF

ZSI

0.5

-0.5

-0.5 -1

-1

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Seferihisar

0.5 ZSI

Observed RBF

1

ZSI

0.5

-0.5

-0.5

0

İzmir

-1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Observed RBF

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Selçuk

ZSI

0.5

Observed RBF

0

-1

1

Kusadasi

0

0

1

Observed RBF

0

-0.5 -1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 Month

Fig. 9. Performance of the best RBF drought model for five stations

As an example of finding the best model, the results of the Çeşme station are presented below. Figure 10 shows the comparison of the actual data with the forecasted data. The


data were forecasted with different numbers of hidden neurons, from 1 to 5. Likewise, different spread parameters from 1 to 5 were tried, and Fig. 10 also shows the comparison graphs of the forecasted and observed data for the GRNN and RBF methods. In Fig. 11, the R2 values for the Çeşme station are presented for the different hidden-neuron/spread parameters of all methods. As seen in Fig. 10, there are some peaks in the FFBP results; these deviations were observed especially when the number of hidden neurons was 3, between data points 21 and 23 (November 2017 to January 2018) and at data point 33 (November 2018). The forecasts with 1 and 4 hidden neurons detected the peak points best, but the R2 value with 4 hidden neurons is not the best for this method; the best R2 was observed with 1 hidden neuron. Overestimated values were observed for hidden neurons 1, 2, and 4 between data points 2 and 9 (April 2016 to November 2016) and between data points 16 and 19 (June 2017 to September 2017). With 5 hidden neurons, underestimated values were observed between data points 10 and 12 (December 2016 to February 2017) and between 22 and 24 (December 2017 to February 2018). According to the GRNN results shown in Fig. 10, peak points were detected better when the spread parameter was 1; however, FFBP with 1 hidden neuron detected them better than GRNN. In GRNN, overestimated values were observed for all spread parameters between data points 2 and 8 (April 2016 to October 2016), 14 and 20 (April 2017 to October 2017), and 26 and 32 (April 2018 to October 2018); elsewhere, underestimated values occurred. On the other hand, the R2 values of GRNN were the best for this station compared with the other methods. According to the RBF results shown in Fig. 10, the forecasts were similar for all spread parameters; when the spread parameter was 1, the peaks were detected slightly better. In the data forecasted by this method, overestimated values were observed for all spread parameters between data points 2 and 8 (April 2016 to October 2016), 16 and 19 (June 2017 to September 2017), and 29 and 33 (July 2018 to November 2018). In particular, underestimations occurred at data points 11 (January 2017) and 24 (February 2018), where the peak points are located.

Fig. 10. Examination of FFBP, GRNN and RBF models for ÇEŞME station


In summary, these graphs show that the FFBP method needs well-organized data to perform well; otherwise it may generate outliers. GRNN gives better R2 results and is therefore a more reliable method for this station, although FFBP detected the peak values better than GRNN. Some models produce a very good R2 but fail to detect the peak points, while others have a lower R2 but detect the peaks better. Higher R2 values do not always indicate the best model; the choice depends on the aim and scope of the forecaster.

Fig. 11. R-square values for different methods in ÇEŞME

C. Decision Support System
A user-friendly decision support system (DSS) has been developed by integrating the proposed forecasting models into an Excel-VBA interface (Fig. 12). A dashboard was designed for the DSS to increase the ease of use and understanding of the developed work. The data the DSS takes as input are drought indexes. When the user opens the DSS, the instructions page appears, and clicking the start button begins the process. By default, the data of the five stations are shown on the dashboard. To add a new station, the user must first enter the station's name and then select the data file containing that station's drought indexes. After the data file is selected, the files with the MATLAB codes prepared for the three methods (FFBP, GRNN, and RBF, respectively) should be selected. Once these files are selected, comparison charts of the values calculated by the three methods and the observed values are displayed on the dashboard. The dashboard has three sections of graphics: "Data Comparison Graphs", "R Square Comparison Graphs", and "The Best R Square". The comparison of observed and forecasted results for the selected method and station appears in the "Data Comparison Graphs"; in the selected graph, the user can see the results for all hidden neurons/spread parameters, and can select any specific number of hidden neurons from 1 to 5 for the FFBP method, or a spread parameter from 1 to 5 for the GRNN and RBF methods, by clicking the corresponding box. In the "R Square Comparison Graphs" section, all R-square values for all methods and all hidden neurons or spread parameters are shown in a bar chart for the selected station. Finally, under "The Best R Square", a donut chart displays the best R2 value for the selected station and method. The DSS is highly dynamic, as the graphs vary according to the selected information. If the user has a problem while running the DSS, clicking the "Help" button on the left gives access to the step-by-step instructions
If the user has a problem while running the DSS, by clicking the “Help” button on the left, she/he can access the step-by-step instructions


Fig. 12. Decision support system of drought forecasting

of the operations manual. For information about the forecasting methods, the hidden neurons, and the spread factor, she/he can press the "Info" buttons.

4 Conclusion

This study proposed solutions intended to continuously improve the existing system and eliminate the deficiencies of the problem as given. The problem definition process started with macro and micro analyses. The study aims to analyse drought in the Izmir district and to maintain water management in the face of water scarcity. A detailed literature review of drought forecasting by artificial intelligence was then carried out. For the drought forecasting model, the FFBP, RBF, and GRNN techniques of the ANN family were introduced and tested using generated toy data. The ZSI values of the last 15 years for the five meteorological stations were then examined and analysed, and forecasts were made with MATLAB. The R2 values of the three models for these stations are between 45.44% and 73.57%, which places our results in the acceptable range compared with other studies in the literature. In addition, a user-friendly decision support system (DSS) was developed by integrating the proposed forecasting models into an Excel-VBA interface. A dashboard was designed for the DSS, aimed at increasing the ease of use and understanding of the developed work. The DSS provides detailed graphs, so the user can compare the AI techniques by station. The proposed solution methods are also applicable to other drought indexes. In future work, other drought indexes can be studied, and the hidden neuron numbers and spread parameters of the related models will be extended. After the best model is selected, the forecasts should be developed with that method.

Acknowledgment. This publication is supported as part of Project No. BAP095, entitled "Drought Assessment in Izmir District, Turkey", approved by the Yasar University Project Evaluation Commission (PEC). Special thanks to the Turkish Meteorology General Directorate (MGM) for providing the database used in this study. We would like to thank Dr.
Babak Vaheddoost for his support and contributions throughout the project. We would also like to thank our colleagues Ceyda Sepetçi, Ülkü Simay Mandil, Ata Çakar, Ege Paker, and Emre Karaoğlan for their help and contributions to the study.


References

1. Adamowski, J., Chan, H.F., Prasher, S.O., Ozga-Zielinski, B., Sliusarieva, A.: Comparison of multiple linear and nonlinear regression, autoregressive integrated moving average, artificial neural network, and wavelet artificial neural network methods for urban water demand forecasting in Montreal, Canada. Water Resour. Res. 48(1) (2012)
2. Barua, S., Ng, A.W.M., Perera, B.J.C.: Artificial neural network-based drought forecasting using a nonlinear aggregated drought index. J. Hydrol. Eng. 17(12), 1408–1413 (2012)
3. Bilen, Ö.: Türkiye'nin su gündemi: su yönetimi ve AB su politikaları. DSİ İdari ve Mali İşler Dairesi Başkanlığı (2009)
4. Cigizoglu, H.K.: Artificial neural networks in water resources. In: Integration of Information for Environmental Security, pp. 115–148. Springer, Dordrecht (2008). https://doi.org/10.1007/978-1-4020-6575-0_8
5. Hannan, S.A., Manza, R.R., Ramteke, R.J.: Generalized regression neural network and radial basis function for heart disease diagnosis. Int. J. Comput. Appl. 7(13), 7–13 (2010)
6. Islam, S.M.F., Karim, Z.: World's demand for food and water: the consequences of climate change. In: Desalination - Challenges and Opportunities (2019)
7. Kuraklık ve Susuzluk - Su ve Çevre Teknolojileri (2021). http://www.suvecevre.com/?p=12&sayi=585&baslik=kuraklik-vesusuzluk&haberID=17269#.YANuw-gzZPa
8. Ladlani, I., Houichi, L., Djemili, L., Heddam, S., Belouz, K.: Modeling daily reference evapotranspiration (ET0) in the north of Algeria using generalized regression neural networks (GRNN) and radial basis function neural networks (RBFNN): a comparative study. Meteorol. Atmos. Phys. 118(3), 163–178 (2012)
9. Mishra, A.K., Desai, V.R.: Drought forecasting using feed-forward recursive neural network. Ecol. Modell. 198(1–2), 127–138 (2006)
10. Naderpour, H., Mirrashid, M.: An innovative approach for compressive strength estimation of mortars having calcium inosilicate minerals. J. Build. Eng. 19, 205–215 (2018)
11. Ozbek, F.S., Fidan, H.: Estimation of pesticides usage in the agricultural sector in Turkey using artificial neural network (ANN). J. Animal Plant Sci. 4(3), 373–378 (2009)
12. Safari, M.J.S., Aksoy, H., Mohammadi, M.: Artificial neural network and regression models for flow velocity at sediment incipient deposition. J. Hydrol. 541, 1420–1429 (2016)
13. Stations. Meteoroloji Genel Müdürlüğü. https://mgm.gov.tr/kurumsal/istasyonlarimiz.aspx
14. Vaheddoost, B., Safari, M.J.S.: Application of signal processing in tracking meteorological drought in a mountainous region. Pure Appl. Geophys. 178(5), 1943–1957 (2021). https://doi.org/10.1007/s00024-021-02737-8
15. Wambua, R.M., Mutua, B.M., Raude, J.M.: Drought forecasting using indices and artificial neural networks for upper Tana River basin, Kenya - a review concept. J. Civil Environ. Eng. 4(4), 1 (2014)
16. World Meteorological Organization (WMO): Proceedings of an Expert Meeting, Murcia, Spain, 2–4 June 2010 (2011)


17. Yan, Y., He, H., Chen, T., Cheng, P.: Tree height estimation of forest plantation in mountainous terrain from bare-earth points using a DoG-coupled radial basis function neural network. Remote Sens. 11(11) (2019)
18. Yihdego, Y., Webb, J., Vaheddoost, B.: Highlighting the role of groundwater in lake–aquifer interaction to reduce vulnerability and enhance resilience to climate change. Hydrology 4(1), 10 (2017)
19. Yihdego, Y., Vaheddoost, B., Al-Weshah, R.A.: Drought indices and indicators revisited. Arabian J. Geosci. 12(3), 69 (2019). https://doi.org/10.1007/s12517-019-4237-z

Electricity Consumption Forecasting in Turkey

Buğsem Acar(B), Selin Yiğit, Berkay Tuzuner, Burcu Özgirgin, İpek Ekiz, Melisa Özbiltekin-Pala, and Esra Ekinci

Logistics Management Department, Yasar University, İzmir, Turkey
{melisa.ozbiltekin,esra.ekinci}@yasar.edu.tr

Abstract. Electrical energy is a type of energy that needs to be transmitted quickly and with high quality. With the development of industry and technology, the need for electrical energy increases every day. Accurate forecasting of electricity consumption is crucial, since electricity cannot be stored. In this study, electricity consumption in Turkey is forecasted for the long run using the Grey model and for the short run using the seasonal ARIMA model. Short-term production planning is important to produce the right amount at the right time to meet exact demand, and the short-term seasonal fluctuations in electricity will guide companies in planning production and capacity. Long-term forecasting will indicate investment requirements in facilities. Keywords: Electricity consumption · Forecasting · Seasonal ARIMA · Grey model

1 Introduction

The provision of sufficient electrical energy is a vital requirement for individuals in society, as it directly or indirectly affects their lives. There is a relationship between electricity consumption and economic development [1]. Generally, a country's continuous growth indicates that demand is constantly increasing, so estimating electricity demand is vital for the future. Electricity is a product consumed at the same time as it is produced, and there is a constant need for investment in electricity. Since electrical energy cannot easily be stored in large quantities, it must always be produced to meet the need [2]. In some manufacturing systems with stocks, forward plans are made and hourly demand is not considered. Although electric load estimation is a well-established, decades-old field of engineering research, new modeling problems continue to arise as technological and legal transformations affect the energy industry [3]. The form of the forecast depends on the type of planning and the accuracy required. An inaccurate demand forecast raises the operating cost of a utility company, especially in a market environment where accuracy means money [4]. A good demand forecast not only targets low-cost investments in the capacity expansion planning of an energy system but also plays an effective role in monitoring environmental problems and in determining relevant plans for tariffs and demand [5]. Accurate estimation of electricity demand is also valuable to electricity generators, as it allows them to program

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 702–714, 2022. https://doi.org/10.1007/978-3-030-90421-0_60

Electricity Consumption Forecasting in Turkey

703

their operations [6]. Factors such as the month of the year, the day of the week, and the season (temperature and humidity effects) significantly influence electricity demand. Electricity supply planning requires the efficient management of existing power systems and the optimization of decisions regarding additional capacity. For these reasons, understanding long-run trends will assist electricity-producing companies in planning their investments, while short-term forecasts are important for planning production quantities. This study provides two models for long- and short-term electricity consumption forecasting, using the Grey model and the seasonal ARIMA model, respectively.

2 Literature Review

There is a vast literature on electricity consumption; the most recent articles are summarized below. Some recent articles consider the impact of the COVID-19 pandemic on electricity consumption. Reference [7] examined the impact of COVID-19 and the global pandemic on energy sector dynamics. Electricity demand varied between the first and second half of the week: normally, demand increased towards the weekends, with the highest demand on weekends, but during the pandemic demand increased in the first half of the week and started to decrease in the second half. Hourly demands were examined, and greenhouse gas (CO2) emissions decreased.

There are different studies on electricity demand. Hydroelectricity provides largely flexible, low-carbon electricity; however, climate change will affect the hydroelectric system by changing hydrological regimes, and it will also affect the heating and cooling electricity demands that hydroelectric sources serve. Reference [8] examines the impact of climate change on hydropower and electricity demand using an empirical electricity demand model.

There are also studies on electricity demand forecasting. Reference [9] first shows that the response to changes in consumer prices and income is very limited, and therefore concludes that there is a need for economic regulation of electricity markets in Turkey; second, it finds that current official electricity demand calculations overestimate demand, which could jeopardize the development of both a coherent energy policy in general and a healthy electricity market in particular. SVR (Support Vector Regression), MARS (Multivariate Adaptive Regression Splines), and ARIMA (Autoregressive Integrated Moving Average) models were used. Reference [10] examines the relationship between Turkey's industrial electricity consumption and industrial value added, with the goal of predicting the future of Turkey's industrial electricity demand. To achieve this, an industrial electricity demand function for Turkey over the 1960–2008 period is estimated by structural time-series techniques applied to annual data, using linear and nonlinear Structural Time Series Models (STSM). Reference [1] developed a model of the effects of seasonality and trend: four different ANN models were developed and the superior one selected. In addition, the seasonal ANN model was compared with SARIMA, and the monthly electricity demand of Turkey was forecasted for 2015–2018 via the seasonal ANN model.

The issue of hourly electricity demand has also been handled in different studies. In the study conducted by [11], the stability of the Holt-Winters

704

B. Acar et al.

models with respect to the smoothing parameters was examined for the hourly electricity demand of Spain. In this study, variables were determined according to seasonality, seasonal cycle lengths, and climatic conditions. In [12], a method is proposed for short-term demand forecasting: the total electricity consumption of Turkey in 2008 and 2009 and meteorological data for Istanbul province were used. To test the accuracy of the method, short-term hourly demand forecasts were made and the results were evaluated (Table 1).

3 Methodology

A. Grey Model

The grey forecasting model was proposed by Ju-Long Deng in 1982 for constrained information and uncertain systems [13]. The grey prediction method is an estimation method that produces satisfactory results when there is insufficient experience of system behavior and only limited data are available [14]. The grey prediction model is practical for uncertain and complex systems and small sample sizes [15]. Grey system theory deals with partially known and partially unknown information. Grey prediction can achieve high forecast accuracy from few data elements, and the model has been successfully used in many disciplines; it supports managers strategically in areas such as electricity demand, material processing, and consumption forecasting [16]. GM(1,1) is one of the most widely used grey prediction models; it is a time series prediction model based on a differential equation with a time-varying structure. It is not necessary to use all of the original data to build the GM(1,1) model, but the size of the sample must be at least four, and all data should be taken at equal intervals and in sequential order with no data missing [17]. GM(1,1) uses the variation within the system to discover relationships between sequential data and then constructs the prediction model. The last four samples suffice for making predictions using the GM(1,1) model, and no precise data distribution is required, which makes the model practical for limited data [14, 15]. Therefore, the following steps are applied to the data.

Step 1: The original dataset is expressed as

$x^{(0)} = \left(x^{(0)}_1, x^{(0)}_2, \ldots, x^{(0)}_n\right)$  (1)

where $x^{(0)}$ is a non-negative sequence and n is the sample size of the dataset.
This sequence is subjected to the Accumulating Generation Operation (AGO), the cumulative sum of the $x^{(0)}$ series, which yields the monotonically increasing sequence $x^{(1)}$ obtained in the next step [20]:

$x^{(0)}_k \ge 0,\quad k = 1, 2, \ldots, n$  (2)


Table 1. Literature about electricity consumption forecasting

| Ref. | Author(s)                            | Method                                               | Focused area                                      | Short-/Long-term | Country/Region                          |
|------|--------------------------------------|------------------------------------------------------|---------------------------------------------------|------------------|-----------------------------------------|
| [7]  | Abu-Rayash & Dincer (2020)           | Mathematical and statistical models for a smart city | Pandemic-driven increases and decreases in demand | Short-term       | Province of Ontario                     |
| [8]  | Qin et al. (2020)                    | Empirical electricity demand model                   | Climate change, hydropower, electricity demand    | Long-term        | Upper Yangtze River Basin (UYRB), China |
| [9]  | Al-Musaylh et al. (2018)             | SVR, MARS, ARIMA                                     | Electricity demand forecasting                    | Short-term       | Queensland, Australia                   |
| [10] | Dilaver and Hunt (2011)              | Structural Time Series Model (STSM)                  | Turkish industrial electricity demand; energy demand modelling and forecasting | Long-term | Turkey |
| [1]  | Hamzaçebi et al. (2017)              | Seasonal artificial neural network (ANN)             | Monthly electricity demand forecasting            | Short-term       | Turkey                                  |
| [11] | Trull et al. (2020)                  | Exponential smoothing, Holt-Winters method           | Hourly electricity demand                         | Short-term       | Spain                                   |
| [12] | Toker & Korkmaz (2010)               | ARIMA, ANN, Fast Fourier Transform (FFT)             | Hourly and daily electricity demand forecasting   | Short-term       | Turkey                                  |

Step 2: In this step, by applying the AGO, the $x^{(0)}$ series is converted to the monotonically increasing $x^{(1)}$ series:

$x^{(1)}_k = \sum_{i=1}^{k} x^{(0)}_i$  (3)

After applying the AGO formula, the $x^{(1)}$ series is obtained as

$x^{(1)} = \left(x^{(1)}_1, x^{(1)}_2, \ldots, x^{(1)}_n\right)$  (4)

Step 3: After finding the series

$x^{(1)} = \left(x^{(1)}_1, x^{(1)}_2, \ldots, x^{(1)}_n\right),\quad k = 1, 2, \ldots, n$  (5)

the generated mean sequence $z^{(1)}_k$ of consecutive terms of $x^{(1)}$ is defined as

$z^{(1)}_k = 0.5\,x^{(1)}_k + 0.5\,x^{(1)}_{k-1},\quad k = 2, 3, \ldots, n$  (6)

which gives the sequence

$z^{(1)} = \left(z^{(1)}_2, z^{(1)}_3, \ldots, z^{(1)}_n\right)$  (7)
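Steps 1–3 above can be sketched in a few lines of Python. This is an illustrative reconstruction of the formulas, not code from the paper; the series values in the example are the Turkish consumption figures given later in the text.

```python
def ago(x0):
    """Accumulating Generation Operation (Eq. 3): cumulative sum of x^(0)."""
    x1, total = [], 0.0
    for v in x0:
        total += v
        x1.append(total)
    return x1

def mean_sequence(x1):
    """Mean sequence z^(1) of consecutive terms of x^(1) (Eq. 6), k = 2..n."""
    return [0.5 * x1[k] + 0.5 * x1[k - 1] for k in range(1, len(x1))]

x0 = [274864969, 289975177, 292171352, 290446924]  # Turkey 2016-2019, from the text
x1 = ago(x0)            # [274864969, 564840146, 857011498, 1147458422]
z1 = mean_sequence(x1)  # [419852557.5, 710925822.0, 1002234960.0]
```

Note that $z^{(1)}_2$ is 419852557.5 before rounding; the paper reports the rounded value 419852558.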

Step 4: After building the required grey model, the analytical solution of the corresponding grey differential equation can be found from the values of the parameters a and b [18]. The evolution coefficient a and the driving coefficient b are estimated by Eq. (10) or Eq. (12). According to [19], there are two ways to find the parameters: the least squares method and the parameter method.

Least squares method: It is based on finding the coefficients a and b from the grey differential equation relating each $x^{(0)}_k$ to $z^{(1)}_k$:

$x^{(0)}_k + a\,z^{(1)}_k = b,\quad k = 2, 3, \ldots, n$  (8)

To find the a and b values, the x and z series are arranged into the matrices Y and B:

$Y = \begin{bmatrix} x^{(0)}_2 \\ x^{(0)}_3 \\ \vdots \\ x^{(0)}_n \end{bmatrix},\qquad B = \begin{bmatrix} -z^{(1)}_2 & 1 \\ -z^{(1)}_3 & 1 \\ \vdots & \vdots \\ -z^{(1)}_n & 1 \end{bmatrix}$  (9)

The parameters are then obtained by ordinary least squares:

$\alpha = [a, b]^T = \left(B^T B\right)^{-1} B^T Y$  (10)

Parameter method: It is based on finding the parameters from $x^{(0)}_k$ and $z^{(1)}_k$ through the intermediate quantities C, D, E and F, where n is the number of data points:

$C = \sum_{k=2}^{n} z^{(1)}_k,\quad D = \sum_{k=2}^{n} x^{(0)}_k,\quad E = \sum_{k=2}^{n} z^{(1)}_k x^{(0)}_k,\quad F = \sum_{k=2}^{n} \left(z^{(1)}_k\right)^2$  (11)

After calculating C, D, E and F, the a and b parameters are found as

$a = \frac{CD - (n-1)E}{(n-1)F - C^2},\qquad b = \frac{DF - CE}{(n-1)F - C^2}$  (12)
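As a sketch of the parameter method, Eqs. (11)–(12) can be computed directly from a raw series; this is an illustrative implementation, not the authors' code.

```python
def grey_parameters(x0):
    """Estimate the GM(1,1) coefficients a, b from a raw series x^(0)
    via the parameter method (Eqs. 11-12)."""
    n = len(x0)
    # AGO (Eq. 3) and mean sequence (Eq. 6)
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    x0_tail = x0[1:]  # x^(0)_k for k = 2..n
    C = sum(z1)
    D = sum(x0_tail)
    E = sum(z * x for z, x in zip(z1, x0_tail))
    F = sum(z * z for z in z1)
    denom = (n - 1) * F - C * C
    a = (C * D - (n - 1) * E) / denom  # evolution coefficient
    b = (D * F - C * E) / denom        # driving coefficient
    return a, b
```

For a toy series [4, 5, 6, 7] this gives a = −36/216.5 ≈ −0.1663 and b = 855/216.5 ≈ 3.9492, which can be verified by hand from Eq. (12). The parameter method gives exactly the same a and b as the least squares method of Eq. (10).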

Step 5: To find the predicted value of the accumulated series at time k + 1, the solution of the grey differential equation is applied:

$\hat{x}^{(1)}_{k+1} = \left(x^{(0)}_1 - \frac{b}{a}\right) e^{-ak} + \frac{b}{a}$  (13)

With this step, future values of the accumulated data are obtained; to return them to the original scale, the Inverse Accumulating Generation Operation (IAGO) is applied to the result of Eq. (13) [20].

Step 6: This is the last step, giving the predicted values of the initial data set:

$\hat{x}^{(0)}_{k+1} = \hat{x}^{(1)}_{k+1} - \hat{x}^{(1)}_{k},\quad k = 1, 2, \ldots, n$  (14)


1) Error Analysis

Further tests are needed to determine the error between the predicted and actual values in order to assess the accuracy of the suggested model. The average percentage error of the GM(1,1) model is computed over k = l, l + 1, ..., n − 1 [21], where $x^{(0)}_k$ represents the true (initial) value and $\hat{x}^{(0)}_k$ the predicted value of the dataset [19].

2) Implementation of the GM(1,1) Model

Turkey is an emerging country with uncertainty in its electricity consumption data. This study aims to predict the electricity consumption for 2020–2028 using the Grey Method GM(1,1), taking Turkey's electricity consumption quantities for 2016–2019 as input. The actual non-negative data series $X^{(0)}$ is

$X^{(0)} = (274864969,\ 289975177,\ 292171352,\ 290446924)$

After determining $X^{(0)}$, the new $X^{(1)}$ series is calculated as the cumulative sum (AGO) of $X^{(0)}$:

$X^{(1)} = (274864969,\ 564840146,\ 857011498,\ 1147458422)$

The mean sequence $Z^{(1)}_k$ of $X^{(1)}_k$ is

$Z^{(1)} = (419852558,\ 710925822,\ 1002234960)$

Using the least squares method (Eqs. (8)–(10)), the Y and B matrices are

$Y = \begin{bmatrix} 289975177 \\ 292171352 \\ 290446924 \end{bmatrix},\qquad B = \begin{bmatrix} -419852558 & 1 \\ -710925822 & 1 \\ -1002234960 & 1 \end{bmatrix}$

Before applying Eq. (10), $\left(B^T B\right)^{-1}$ is calculated:

$B^T B = \begin{bmatrix} 1.68617 \times 10^{18} & -2133013340 \\ -2133013340 & 3 \end{bmatrix},\qquad \left(B^T B\right)^{-1} = \begin{bmatrix} 5.89676 \times 10^{-18} & 4.19262 \times 10^{-9} \\ 4.19262 \times 10^{-9} & 3.314306859 \end{bmatrix}$

Applying Eq. (10), $\alpha = [a, b]^T$ is obtained:

$a = -0.000809121,\qquad b = 290289196.1$

(with e = 2.71828183 used in Eq. (13)).
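The computation can be reproduced end to end with a short script. The sketch below is an independent pure-Python re-implementation of Eqs. (10), (13) and (14) for cross-checking the reported a and b values, not the authors' own code.

```python
from math import exp

def gm11_fit_forecast(x0, horizon):
    """Fit GM(1,1) by least squares (Eq. 10 via the normal equations)
    and forecast `horizon` steps beyond the sample (Eqs. 13-14)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                    # AGO, Eq. 3
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]       # Eq. 6
    # Closed-form solution of the 2x2 normal equations (B^T B) alpha = B^T Y
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    m = n - 1
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (sy * szz - sz * szy) / det
    def x1_hat(k):  # Eq. 13: predicted accumulated value at time k+1
        return (x0[0] - b / a) * exp(-a * k) + b / a
    # Eq. 14 (IAGO): forecasts on the original scale, starting at period n+1
    forecasts = [x1_hat(k + 1) - x1_hat(k) for k in range(n - 1, n - 1 + horizon)]
    return a, b, forecasts

x0 = [274864969, 289975177, 292171352, 290446924]   # Turkey, 2016-2019
a, b, fc = gm11_fit_forecast(x0, horizon=9)          # forecasts for 2020-2028
```

Running this recovers a ≈ −0.000809 and b ≈ 2.9029 × 10^8, matching the values above, and yields a slowly growing forecast path, consistent with Fig. 1.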


According to the calculations made, the error rate is about 22%. The GM(1,1) model provides a formula for calculating the error rate at the end of the implementation. In the literature, an error rate of around 20% is considered acceptable, as it is in this study. The results obtained are shown in Fig. 1 below.

Fig. 1. Electricity consumption prediction in the long term

To sum up, the electricity demand forecast for 2020–2028 was obtained with the Grey Method GM(1,1), using electricity consumption quantities in Turkey between 2016 and 2019. When the graph is examined, it is seen that electricity consumption gradually increases on an annual basis.

B. ARIMA Model

One of the common ways to model stationary time series is the autoregressive integrated moving average method, ARIMA for short. In ARIMA models, AR(p) refers to an autoregressive model of order p and MA(q) to a moving average model of order q; if there is a trend, the I(d) part incorporates differencing of order d [22]. ARIMA models can also handle a wide range of seasonal data by including additional seasonal terms, written as [23]

ARIMA(p, d, q)(P, D, Q)m

where (p, d, q) is the non-seasonal part of the model and (P, D, Q)m is the seasonal part, with m the number of periods per season.

In this study, a seasonal ARIMA model is employed to forecast electricity consumption in Turkey on a monthly basis. Monthly electricity consumption in TWh (terawatt-hours) between 2006 and 2019 is used as input data to identify the appropriate ARIMA model. In Fig. 2, monthly electricity consumption over the years is provided, and it can be seen that the data are not stationary. Firstly, there is an increasing trend in


electricity consumption. Secondly, there is increasing variance over time. Thirdly, even though it is not visible from the plot, it is known that electricity consumption is seasonal, with increased usage in summer and winter.

Fig. 2. Time series plot of electricity consumption (TWh), 2006–2019

To achieve a stationary series, the trend and seasonality must be removed. To reduce the magnitude of the values and the rising variance in the series, a logarithmic transformation has been applied. As can be seen in Fig. 3, by applying the natural logarithm, the increasing variance of the data is eliminated and the data become homoscedastic.

Fig. 3. Time series plot of electricity consumption after applying the natural logarithm

It can be clearly seen that the data series still has a trend; to eliminate it, a single level of differencing is applied first. After taking the difference, there are significant spikes at lags 12, 24 and 36 of the ACF graph, which is a sign of seasonality in the data: every year, electricity consumption follows the same pattern. Therefore, to eliminate the seasonality, a second difference is taken by differencing at lag 12. Both types of differencing appear to be needed to stabilize the series and account for the gross pattern of seasonality. Applying the double difference to the monthly time series Y with seasonal period T = 12 computes (Y_t − Y_{t−1}) − (Y_{t−12} − Y_{t−13}), or equivalently (Y_t − Y_{t−12}) − (Y_{t−1} − Y_{t−13}). This is the amount by which the change from the previous period to the current period differs from the change observed exactly one year earlier. After the double differencing, the time series plot and the ACF and PACF graphs can be seen in Figs. 4 and 5. In the ACF graph, there are significant spikes at lag 1 and lag 12, indicating that the model should contain both non-seasonal and seasonal moving average components. In the PACF graph, there are spikes in the first 3 lags; therefore, seasonal ARIMA models with different degrees of the moving average component were fitted and their mean squared errors compared. Among the ARIMA models that yield the smallest mean squared errors, only the ARIMA(0,1,1)(0,1,1)12 model passed the statistical tests.
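The double-differencing step can be illustrated with a small script. This is a sketch on a synthetic monthly series (not the paper's data), showing that the two orderings of the differences agree and that a deterministic trend-plus-seasonal series is reduced to (numerically) zero:

```python
def double_diff(y, m=12):
    """(Y_t - Y_{t-1}) - (Y_{t-m} - Y_{t-m-1}):
    first difference followed by a seasonal difference at lag m."""
    return [(y[t] - y[t - 1]) - (y[t - m] - y[t - m - 1])
            for t in range(m + 1, len(y))]

def double_diff_alt(y, m=12):
    """Equivalent form: (Y_t - Y_{t-m}) - (Y_{t-1} - Y_{t-m-1})."""
    return [(y[t] - y[t - m]) - (y[t - 1] - y[t - m - 1])
            for t in range(m + 1, len(y))]

# Synthetic series: linear trend plus a fixed seasonal pattern of period 12
season = [0.3, 0.1, -0.2, -0.4, 0.0, 0.5, 0.8, 0.7, 0.2, -0.1, -0.3, -0.2]
y = [10 + 0.05 * t + season[t % 12] for t in range(60)]
dd = double_diff(y)  # all entries are ~0: trend and seasonality removed
```

The two forms differ only in the grouping of the four terms, which is why they give the same transformed series (up to floating-point rounding).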

Fig. 4. Time series plot of the double difference of the logarithmically transformed electricity consumption

Fig. 5. ACF and PACF of data series after double difference

The Minitab output for the ARIMA(0,1,1)(0,1,1)12 model can be seen below. The model shows that, on the logarithmically transformed data, both first and seasonal differences should


be applied (since d = 1 and D = 1), and that there should be non-seasonal and seasonal MA components of first order (q = 1 and Q = 1). The seasonal and non-seasonal MA coefficients of the model have p-values close to zero, so the coefficients are statistically significant. Since the Ljung-Box test p-value at lag 12 is larger than 0.05 (the significance level), the null hypothesis cannot be rejected and the residuals are assumed to be independent. Residual plots are provided in Fig. 6.

Final Estimates of Parameters
Type          Coef        SE Coef       T       P
MA   1        0.6228      0.0639        9.75    0.000
SMA 12        0.8974      0.0594       15.10    0.000
Constant  -0.0003153   0.0001616      -1.95    0.053

Differencing: 1 regular, 1 seasonal of order 12
Number of observations: Original series 168, after differencing 155
Residuals: SS = 0.127031 (backforecasts excluded)
           MS = 0.000836  DF = 152

Modified Box-Pierce (Ljung-Box) Chi-Square statistic
Lag         12      24      36      48
Chi-Square  9.6     42.7    88.6    98.0
DF          9       21      33      45
P-Value     0.384   0.003   0.000   0.000

Fig. 6. Residual plots for the logarithmically transformed electricity consumption

According to the fitted ARIMA(0,1,1)(0,1,1)12 model in Fig. 7, the seasonal pattern and increasing trend of the electricity consumption forecast can be visually observed. The mean squared error of the fitted model is 0.000836, which is close to zero.
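For readers without Minitab, the fitted equation can be unrolled by hand. The sketch below assumes Minitab's sign convention for MA terms (w_t = c + e_t − θ1·e_{t−1} − Θ1·e_{t−12} + θ1·Θ1·e_{t−13}, where w_t is the doubly differenced log series); it is an illustration with the reported coefficients, not the authors' code.

```python
def sarima_011_011_12_forecast(y, theta=0.6228, Theta=0.8974, c=-0.0003153, m=12):
    """One-step-ahead forecast of a series y (e.g. log consumption) from an
    ARIMA(0,1,1)(0,1,1)m model with known coefficients.
    w_t = (1-B)(1-B^m) y_t = c + e_t - theta*e_{t-1} - Theta*e_{t-m} + theta*Theta*e_{t-m-1}
    """
    n = len(y)
    e = [0.0] * n  # innovations, initialized to zero for the pre-sample period
    for t in range(m + 1, n):
        w = y[t] - y[t - 1] - y[t - m] + y[t - m - 1]
        w_hat = c - theta * e[t - 1] - Theta * e[t - m] + theta * Theta * e[t - m - 1]
        e[t] = w - w_hat
    # Forecast the next doubly differenced value, then undo both differences
    w_next = c - theta * e[n - 1] - Theta * e[n - m] + theta * Theta * e[n - m - 1]
    return w_next + y[n - 1] + y[n - m] - y[n - m - 1]
```

A useful sanity check: with theta = Theta = c = 0 the model collapses to a pure double random walk, and the forecast reduces to y[n−1] + y[n−12] − y[n−13], i.e. the last value plus last year's seasonal change.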


Fig. 7. Time series plot of the electricity consumption forecast with lower and upper bounds, 2020–2023

4 Conclusion

This paper addressed the problem of forecasting electricity generation while keeping supply-demand satisfaction at the maximum. Electricity is a kind of energy that must be consumed at the same time it is produced. Grey and ARIMA prediction models were applied to the defined problem using previously collected data, and Turkey's electricity consumption was forecasted in the long run and the short run. In the long term, the amount of electricity consumption gradually increases over the years; in the short term, electricity consumption shows seasonal behavior in monthly periods.

References

1. Hamzaçebi, C., Es, H.A., Çakmak, R.: Forecasting of Turkey's monthly electricity demand by seasonal artificial neural network. Neural Comput. Appl. 31(7), 2217–2231 (2017). https://doi.org/10.1007/s00521-017-3183-5
2. Haliloğlu, E.Y., Tutu, B.E.: Forecasting daily electricity demand for Turkey. Turkish J. Energy Policy 3(7), 40–49
3. Fiot, J.B., Dinuzzo, F.: Electricity demand forecasting by multi-task learning. IEEE Trans. Smart Grid 9(2), 544–551 (2016)
4. Keyno, H.S., Ghaderi, F., Azade, A., Razmi, J.: Forecasting electricity consumption by clustering data in order to decline the periodic variable's affects and simplification the pattern. Energy Convers. Manage. 50(3), 829–836 (2009)
5. Akay, D., Atak, M.: Grey prediction with rolling mechanism for electricity demand forecasting of Turkey. Energy 32(9), 1670–1675 (2007)
6. Wang, Y., Wang, J., Zhao, G., Dong, Y.: Application of residual modification approach in seasonal ARIMA for electricity demand forecasting: a case study of China. Energy Policy 48, 284–294 (2012)
7. Abu-Rayash, A., Dincer, I.: Analysis of the electricity demand trends amidst the COVID-19 coronavirus pandemic. Energy Res. Soc. Sci. 68, 101682 (2020)
8. Qin, P., Xu, H., Liu, M., Xiao, C., Forrest, K.E., Samuelsen, S., Tarroja, B.: Assessing concurrent effects of climate change on hydropower supply, electricity demand, and greenhouse gas emissions in the Upper Yangtze River Basin of China. Appl. Energy 279, 115694 (2020)
9. Al-Musaylh, M.S., Deo, R.C., Adamowski, J.F., Li, Y.: Short-term electricity demand forecasting with MARS, SVR and ARIMA models using aggregated demand data in Queensland, Australia. Adv. Eng. Inform. 35, 1–16 (2018)
10. Dilaver, Z., Hunt, L.C.: Industrial electricity demand for Turkey: a structural time series analysis. Energy Econ. 33(3), 426–436 (2011)
11. Trull, Ó., García-Díaz, J.C., Troncoso, A.: Stability of multiple seasonal Holt-Winters models applied to hourly electricity demand in Spain. Appl. Sci. 10(7), 2630 (2020)
12. Toker, A.C., Korkmaz, O.: Türkiye kısa süreli elektrik talebinin saatlik olarak tahmin edilmesi [Hourly forecasting of Turkey's short-term electricity demand]. In: Proceedings of the 17th International Energy and Environment Conference, pp. 32–35 (2010)
13. Hsu, L.C.: Using improved grey forecasting models to forecast the output of opto-electronics industry. Expert Syst. Appl. 38(11), 13879–13885 (2011)
14. Kose, E., Tasci, L.: Prediction of the vertical displacement on the crest of Keban Dam. J. Grey Syst. 27(1), 12 (2015)
15. Hu, Y.C.: Grey prediction with residual modification using functional-link net and its application to energy demand forecasting. Kybernetes (2017)
16. Lim, D., Anthony, P., Mun, H.C., Wai, N.K.: Assessing the accuracy of grey system theory against artificial neural network in predicting online auction closing price. In: Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 1, March 2008
17. Hsu, C.C., Chen, C.Y.: Applications of improved grey prediction model for power demand forecasting. Energy Convers. Manage. 44(14), 2241–2249 (2003)
18. Chen, H.W., Chang, N.B.: Prediction analysis of solid waste generation based on grey fuzzy dynamic modeling. Resour. Conserv. Recycl. 29(1–2), 1–18 (2000)
19. Wen, K.L.: Grey Systems: Modeling and Prediction. Yang's Scientific Research Institute (2004)
20. Kayacan, E., Ulutas, B., Kaynak, O.: Grey system theory-based models in time series prediction. Expert Syst. Appl. 37(2), 1784–1789 (2010)
21. Yilmaz, H., Yilmaz, M.: Forecasting CO2 emissions for Turkey by using the grey prediction method. Sigma 31, 141–148 (2013)
22. Nahmias, S., Olsen, T.L.: Production and Operations Analysis. Waveland Press (2015)
23. Dimri, T., Ahmad, S., Sharif, M.: Time series analysis of climate variables using seasonal ARIMA approach. J. Earth Syst. Sci. 129(1), 1–16 (2020). https://doi.org/10.1007/s12040-020-01408-x

Forecasting Damaged Containers with Machine Learning Methods

Mihra Güler, Onur Adak, Mehmet Serdar Erdogan, and Ozgur Kabadurmus
International Logistics Management, Yaşar University, İzmir, Turkey
{mehmet.erdogan,ozgur.kabadurmus}@yasar.edu.tr

Abstract. Forecasting the number of damaged containers is crucial for a maritime company to effectively plan future port operations. The purpose of this study is to forecast damaged container entries and exits. In this paper, we worked with a global logistics company in Turkey. Comparisons between ports were made using the company's internal port operations data and externally available data. The external data used are Turkey's GDP, exchange rates (USD/EUR), import and export data, the TEU of Mersin port, and the total TEU of Turkey's ports (2015–2020). Our aim is to forecast the number of damaged containers at a specific port (Mersin, Turkey) using different machine learning methods and to find the best method. We used Linear Regression, Boosted Decision Tree Regression, Decision Forest Regression, and Artificial Neural Network Regression algorithms. The performances of these methods were evaluated according to various metrics, such as R², MAE, RMSE, RAE and RSE. According to our results, machine learning methods can forecast container demand effectively, and the best performing method is Boosted Decision Tree Regression.

Keywords: Logistics · Forecasting · Container demand · Machine learning · Regression

1 Introduction

Today, production can meet consumer demand all around the world. Challenges faced by supply chain managers include profitability, cost control, meeting customer needs, and supply chain flexibility. Businesses need to think globally to survive and grow in a competitive market, and global business activities are necessary for international trade. However, globalization poses a difficulty for logistics companies. Thanks to the growth of the global economy, consumers today expect quick access to goods and services from all over the world. On the other hand, in an increasingly commoditized world, a 3PL company's challenge is to provide exceptional and easy-to-understand service. Good management of the supply chain provides an advantage, as it reduces many costs. Supply chain management has a broader scope than the supply of the product alone: it covers implementation, planning, and control until the product reaches the customer, including transportation. The supply chain's aim is minimum cost and maximum

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 715–724, 2022. https://doi.org/10.1007/978-3-030-90421-0_61

716

M. Güler et al.

profit. Profitability, cost management, meeting customer expectations, and supply chain flexibility are among the most important concerns facing supply chain executives around the world.

Increasing containerization and competition among seaport container terminals have become quite considerable in worldwide international trade, and effective terminal operations are essential to managing containerization. The flow and storage of various types of packages in and around a terminal or port are controlled by a terminal operating system, which is a vital component of the supply chain. These systems also allow for more efficient utilization of assets, employees, and equipment, as well as the planning of tasks and the receipt of up-to-date information. Other technologies, such as the internet, wireless LANs, mobile computers, electronic data interchange (EDI) processing, and radio-frequency identification (RFID), are frequently used by terminal operating systems for efficient monitoring of the flow of packages connected to the terminal. Data are synchronized in batches via a central database or transmitted in real time via wireless transmission. The databases can create reports on the condition of products, locations, and machinery in the terminal.

Failure to properly predict future port container demand can result in significant losses in port construction or facility development investments. To meet future demand, planners need to assess the existing infrastructure inside and outside the port, which is crucial to the port's construction, improvement, and day-to-day operations management. Estimating container output at ports helps to meet demand, but overestimation can cause excess stock to accumulate on hand. Operational and tactical (medium- and short-term) decisions, such as material planning and port operating plans, rely on accurate forecasts.

In this study, our aim is to analyze a global logistics company's Mersin warehouse filling demands, maintenance activities, and demand coverage rates, and to prepare a forecasting study for the upcoming periods. We developed a forecasting model using various machine learning algorithms to forecast the company's number of damaged containers, and we evaluated the performance of the algorithms using various metrics.

The rest of this paper is structured as follows: the relevant literature is reviewed in Sect. 2; the problem is described in Sect. 3; the data utilized in the study are explained in Sect. 4; the methodology is presented in Sect. 5; finally, Sect. 6 gives the concluding remarks.
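The evaluation metrics named in the abstract (MAE, RMSE, RAE, RSE, R²) have standard definitions. As an illustrative sketch (not the study's code), they can be computed as:

```python
def regression_metrics(actual, predicted):
    """MAE, RMSE, RAE, RSE and R^2 for a set of predictions (standard definitions)."""
    n = len(actual)
    mean_a = sum(actual) / n
    abs_err = [abs(a - p) for a, p in zip(actual, predicted)]
    sq_err = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    abs_dev = [abs(a - mean_a) for a in actual]
    sq_dev = [(a - mean_a) ** 2 for a in actual]
    mae = sum(abs_err) / n
    rmse = (sum(sq_err) / n) ** 0.5
    rae = sum(abs_err) / sum(abs_dev)   # relative absolute error
    rse = sum(sq_err) / sum(sq_dev)     # relative squared error
    r2 = 1.0 - rse                      # coefficient of determination
    return {"MAE": mae, "RMSE": rmse, "RAE": rae, "RSE": rse, "R2": r2}
```

For example, predicting the mean of the actual values for every observation gives RAE = RSE = 1 and R² = 0, which is the baseline against which the regression algorithms are compared.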

2 Literature Review

This section summarizes forecasting methods and models used in the container demand forecasting literature. Gosasang et al. (2018) [1] estimated inbound and outbound container throughput for Bangkok Port until 2041 using a vector error correction model; the results were used to guide future port container terminal management and planning. Du et al. (2019) [2] developed a hybrid model for container throughput forecasting based on decomposition-and-ensemble and error-correction strategies; variational mode decomposition, an extreme learning machine, and the butterfly optimization algorithm were used to construct the hybrid prediction model. Niu et al. (2018) [3] created a novel hybrid learning model for container throughput forecasting that incorporates effective decomposition techniques together with machine learning, variational mode decomposition, error correction strategies, and optimization algorithms. Chen et al. (2016) [4] explained grey system theory and the grey Markov model. Dragan et al. (2020) [5] developed a prototype forecasting module based on a general Dynamic Factor Analysis-ARIMAX method for predicting various types of cargo throughput, tested at the North Adriatic Port of Koper; an innovative four-stage heuristic technique was established for the modeling process. Peng and Chu (2009) [6] compared univariate techniques for estimating container throughput volumes: the container throughput volumes of Taiwan's three major ports are forecasted using six univariate forecasting models. Syafi'i et al. (2005) [7] forecasted container throughput demand in Indonesia using a multivariate autoregressive model. Huang et al. (2020) [8] compared different univariate forecasting methods and provided a more accurate short-term forecasting model for container throughput as a reference for relevant authorities. Gao et al. (2016) [9] compared the theoretical foundations and empirical results of six model selection criteria and six model averaging criteria applied to a structural change vector autoregression model. Chen and Chen (2010) [10] used genetic programming, a decomposition approach, and seasonal autoregressive integrated moving average (SARIMA) models to create an optimal forecast model of container throughput volumes at ports. Farhan and Ong (2016) [11] studied the performance of SARIMA models in estimating container throughput at major international container ports, taking seasonal fluctuations into consideration. Pani et al. (2014) [12] researched a data mining approach for forecasting late arrivals in a transshipment container terminal, using the Classification and Regression Trees method. Gosasang et al. (2010) [13] analyzed container efficiency at Bangkok Port by applying neural networks. Epstein et al. (2012) [14] researched strategic empty container logistics optimization in a large shipping company. Guo et al. (2005) [15] applied the grey Verhulst model to correct errors in port efficiency estimates and to complete missing information. Wu and Liu (2015) [16] forecasted container shipping volume at Ningbo Port with a combination of forecasting models, using port data between 2009 and 2014 to evaluate volume estimates according to three different forecast models: the Grey Model, a Radial Basis Function Neural Network model, and a combined Grey-Radial Basis Function Neural Network forecasting model. Lee et al. (2017) [17] proposed new methods to estimate the potential container cargo capacity that can be triggered by a port development project in a container transportation network, combining port selection and autoregressive integrated moving average models. Huang et al. (2015) [18] used a hybrid forecasting method, an outlier-processed projection pursuit regression-genetic programming model, to estimate container throughput at Qingdao Port.

718

M. Güler et al.

Table 1 shows the data used in the articles examined above. GDP rate, export rate, import rate, exchange rate, interest rate, inflation rate, industrial production index, and purchasing power parity are the most commonly used inputs. These data are of great importance in managing efficient terminal operations and containerization.

Table 1. Data used in forecasting models in the literature (indicator columns in the original table: GDP rate, export rate, import rate, exchange rate, interest rate, inflation rate, industrial production index, purchasing power parity)

Author | Data period
Gosasang and Chandraprakaikul (2018) | 2006–2016
Du et al. (2019) | 1989–2001
Chen et al. (2013) | 2003–2010
Dragan et al. (2020) | 2014–2015
Syafi’i et al. (2005) | 2001–2019
Gao et al. (2016) | 2015
Gosasang et al. (2010) | 1999–2009
Wu and Liu (2015) | 2009–2014
Lee et al. (2017) | 2001–2012

3 Problem Definition

The company's terminal operations process starts with a stock entry, the essential step for a container to enter the stock of the relevant warehouse. The inventory entry phase begins when a door entry request for storage is created; for a container to create stock data in the relevant warehouse, the door entry process must be completed. With the control made at the door entry stage, it is determined whether the received container is damaged. If it is damaged, it is taken to the damaged stack; if not, it is taken to the solid stack. With the estimate report (damage detail report) prepared for the damaged containers, the maintenance process is run by obtaining cost approval from the container owner agency. After that, an exit request must be created: a warehouse exit request is required to release a container from the warehouse stock. Depending on the type of tour information requested for the inventoried containers, the exit process is performed either electronically with the agencies' data or through exit requests opened manually by the warehouse users. With this process, the exit request and the exit movement are matched, and the container stock of the relevant warehouse is reduced. The process flow of the company's container operations is shown in Fig. 1.
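The door-entry routing rule described above can be sketched as a small decision function. This is a hypothetical illustration; the class and field names are invented and do not reflect the company's actual system.

```python
# Hypothetical sketch of the door-entry routing rule: a damaged container
# goes to the damaged stack (triggering the damage report / maintenance
# flow), an undamaged one goes to the solid stack.
from dataclasses import dataclass


@dataclass
class Container:
    container_id: str   # illustrative identifier
    damaged: bool       # outcome of the door-entry damage check


def door_entry(container: Container) -> str:
    """Return the stack a container is routed to at door entry."""
    if container.damaged:
        # Damage detail report -> cost approval -> maintenance process.
        return "damaged_stack"
    return "solid_stack"


print(door_entry(Container("MSKU1234567", damaged=True)))
```

The exit-request matching and stock reduction steps would follow the same pattern but require the warehouse inventory state, which is not modeled here.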

Forecasting Damaged Containers with Machine Learning Methods

719

Fig. 1. Container exit request process

In this study, our aim is to forecast the number of damaged containers in future periods using machine learning methods, based on terminal service data such as container owner, period, warehouse, container type, and number of damaged containers.

4 Data

The exchange rate is defined as the price of one currency in terms of another, and it can be either fixed or variable. Fixed exchange rates are set by central banks, whereas variable exchange rates are determined by market demand and supply. Turkey's exchange rate data were taken from the Central Bank of Turkey (2021). We used the EUR/TRY and USD/TRY exchange rates and obtained these data monthly, because the remaining data we have are also monthly. Gross Domestic Product (GDP) measures the total production, income, and expenditure in the economy; it is the economic worth of the final goods and services produced within the borders of a country in a given year. In this study, we collected data for the years 2015–2021. Turkey's GDP data were taken from the Organization for Economic Co-operation and Development (2021). The purchase of a product produced abroad by buyers in a country is called import; exporting is the sale of a good or service to foreign countries in exchange for foreign currency. A low import value is beneficial for a country's trade balance. Based on this, we examined Turkey's monthly import and export values between 2015 and 2021. Turkey's import and export data were taken from the Turkish Statistical Institute (2021). We also obtained a monthly data set of the number of damaged containers (in TEU) from the company, covering 2015 to 2021 at Mersin port, Turkey. Using these internal


and external data sets, we developed various forecasting models based on well-known machine learning methods.
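As a rough sketch, the monthly internal and external series described above can be merged into a single modeling table with pandas. All column names and values here are illustrative placeholders, not the company's actual data.

```python
# Hypothetical sketch: joining the company's monthly damaged-container
# counts (internal data) with monthly macroeconomic indicators such as
# the USD/TRY rate and export volume (external data) on the month key.
import pandas as pd

# Internal data: monthly damaged-container counts in TEU (toy values).
internal = pd.DataFrame({
    "month": pd.period_range("2015-01", periods=6, freq="M").astype(str),
    "damaged_teu": [120, 95, 130, 110, 140, 125],
})

# External data: monthly indicators for the same months (toy values).
external = pd.DataFrame({
    "month": pd.period_range("2015-01", periods=6, freq="M").astype(str),
    "usd_try": [2.33, 2.42, 2.56, 2.67, 2.60, 2.69],
    "export_musd": [12300, 12100, 12900, 13100, 12500, 12800],
})

# Inner join on month yields one row per period: target + features.
data = internal.merge(external, on="month", how="inner")
print(data.shape)
```

Each row of the merged table then serves as one training sample for the forecasting models described in the next section.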

5 Methodology

In this study, we forecasted the number of damaged containers using various machine learning methods.

A. Machine Learning (ML)

ML is a class of algorithms that improves the accuracy of software programs in predicting outcomes without explicit programming. Machine learning builds on newly available data, updating the output produced from the input data it receives, and creates algorithms that can use statistical analysis to predict an outcome. A machine learning algorithm is a finite, specific set of instructions that a machine can follow to accomplish a particular goal. The aim of a machine learning model is to construct or discover patterns that can be used to make predictions or organize the data. Machine learning algorithms are often separated into three classes: supervised learning, unsupervised learning, and reinforcement learning:

• Supervised learning algorithms model the relationships and dependencies between the target prediction outcome and the input features, so that output values can be predicted for new data using the relationships learned from previous data sets. During training, input data are fed into the model and the weights are adjusted until a suitable model is fitted.
• Unsupervised learning algorithms are used for pattern discovery and descriptive modeling, aiming to extract useful insights from data whose points carry no labels. By grouping the data or describing its structure, the algorithm organizes the data points itself. This strategy is useful when the final outcome is not known in advance.
• Reinforcement learning automatically evaluates the best behavior in a given situation to optimize the performance of machines and software agents. It utilizes algorithms that learn from results and decide on the best option.
The reinforcement learning approach aims to maximize the reward or minimize the risk using observations of the interaction with the environment. The algorithm receives feedback after each action, which helps it decide whether the action was good, neutral, or bad. There are two types of parameters in a machine learning model. Model parameters are calculated from the training data set; these parameters are fitted during training. Hyperparameters are configurable parameters that must be tuned in order to get the best performance from a model. Splitting the data into training data and test data is a method for measuring the accuracy of the model. In this method, the data set must be split into two parts: a training set and


a testing set. Typically, the training set receives 80% of the data and the test set the remaining 20%. Training the model means building it on the training set, and testing the model means measuring its accuracy on the test set.

B. Algorithms

1) Linear Regression
Linear regression algorithms show or forecast the interaction between two variables or factors by fitting a continuous straight line to the outputs.

2) Artificial Neural Networks
Artificial neural networks are computing systems that mimic the human brain's information processing abilities. A collection of artificial neurons is connected to each other; each neuron has input and output units and can transmit information to other neurons.

3) Random Forest
The random forest algorithm can be used for both classification and regression problems. It relies on ensemble learning, a methodology for solving a complicated problem and improving the model's performance by combining multiple learners. Random forest averages the outcomes of several decision trees built on diverse subsets of a data set to improve predictive accuracy. Rather than relying on a single decision tree, the random forest takes forecasts from each tree and predicts the final output based on the majority vote (or average) of those forecasts.

4) Decision Tree
In the decision tree method, the data set is separated into two or more homogeneous clusters. It discriminates data using if-then rules based on the variable that creates the most significant difference between data points.

C. Metrics

To evaluate the performance of the algorithms, we use various metrics: R2, Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), Relative Squared Error (RSE), and Relative Absolute Error (RAE). The R2 value indicates how closely the data fit the regression line; in other words, R2 is a measure of accuracy for linear regression models.
MAE is defined as the mean difference between predicted and actual values: the absolute value of each difference between the actual and predicted value is averaged across the whole data set. A lower value indicates better accuracy. MSE is the average squared error per sample over the entire data set: the squared errors of all samples are summed and divided by the number of samples. RMSE is a measure of how far the data points lie from the regression line; in other words, it indicates how densely the data cluster around the best-fitting line.


RSE normalizes the model's error against a simple predictor based on the average of the actual values: the relative squared error is obtained by dividing the total squared error by the total squared error of this simple predictor. It can therefore be compared between models with different units of measurement. RAE is a metric for evaluating a predictive model's performance, measured as a ratio between the model's errors and the errors of a simple or naive model: the total absolute error is normalized by dividing it by the total absolute error of the simple predictor.
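Written out explicitly, the five metrics look like this. R2, MAE, and MSE come from scikit-learn; RAE and RSE are computed by hand against a naive predictor that always outputs the mean of the actual values. The numbers are toy values for illustration; note that under these definitions R2 = 1 − RSE.

```python
# Worked example of the evaluation metrics described above (toy values).
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

y_true = np.array([3.0, 5.0, 2.5, 7.0, 4.5])
y_pred = np.array([2.8, 5.4, 2.9, 6.6, 4.1])

mae = mean_absolute_error(y_true, y_pred)     # mean of |actual - predicted|
mse = mean_squared_error(y_true, y_pred)      # mean squared error
rmse = np.sqrt(mse)                           # root of MSE
r2 = r2_score(y_true, y_pred)                 # coefficient of determination

naive = np.full_like(y_true, y_true.mean())   # simple mean predictor
rae = np.abs(y_true - y_pred).sum() / np.abs(y_true - naive).sum()
rse = ((y_true - y_pred) ** 2).sum() / ((y_true - naive) ** 2).sum()

print(round(mae, 4), round(rmse, 4), round(r2, 4), round(rae, 4), round(rse, 4))
```

Because RAE and RSE divide out the naive predictor's error, they stay comparable across targets measured in different units.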

6 Results

Table 2 shows the results of the created model, which considers both the internal and external data from 2015 to 2021 for a single port (Mersin, Turkey). Ideally, the R2 value should be close to 1, while the remaining metrics should be close to 0. In our model, Decision Forest Regression and Boosted Decision Tree Regression give the best results across all metrics. For R2, Boosted Decision Tree Regression gives the best result (0.6652) while Linear Regression gives the worst (0.4580). For MAE, Decision Forest Regression shows the best value (0.0309) whereas Linear Regression gives the worst (0.0421). Furthermore, for RMSE, the best result is obtained by Boosted Decision Tree Regression (0.0712) and the worst by Linear Regression (0.0888). Moreover, for RAE, Decision Forest Regression gives the best value (0.4367). Finally, for RSE, Boosted Decision Tree Regression gives the best result (0.3348).

Table 2. Results of the created model

Method | R2 | MAE | RMSE | RAE | RSE
Artificial neural network regression | 0.6495 | 0.0353 | 0.0726 | 0.4949 | 0.3505
Boosted decision tree regression | 0.6652 | 0.0314 | 0.0712 | 0.4390 | 0.3348
Decision forest regression | 0.6590 | 0.0309 | 0.0714 | 0.4367 | 0.3410
Linear regression | 0.4580 | 0.0421 | 0.0888 | 0.5901 | 0.5421
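The overall protocol behind these results (an 80/20 split, four regressor families, held-out evaluation) can be sketched with scikit-learn. The paper does not name its toolchain, so `GradientBoostingRegressor` and `RandomForestRegressor` are stand-ins for Boosted Decision Tree Regression and Decision Forest Regression, and the data are random placeholders rather than the study's port data.

```python
# Sketch of the comparison protocol with scikit-learn stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))            # e.g. 120 months x 4 indicators
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(scale=0.1, size=120)

# 80% train / 20% test, as described in the methodology section.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "linear_regression": LinearRegression(),
    "decision_forest": RandomForestRegressor(n_estimators=100, random_state=42),
    "boosted_tree": GradientBoostingRegressor(random_state=42),
    "ann": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=42),
}
# R^2 on the held-out 20% for each model family.
r2_scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(r2_scores)
```

The study's additional metrics (MAE, RMSE, RAE, RSE) would be computed on the same held-out predictions, as shown in the metrics example in Sect. 5.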

7 Conclusion

Managing terminal operations effectively is crucial for a logistics company to decrease operational costs. The flow and storage of various types of containers around ports are controlled by terminal operations, a vital component of the supply chain. To manage terminal operations effectively, accurately forecasting the number of damaged containers is essential. In this study, we considered the port and terminal operations of a global logistics company from Turkey. The container flow of the company starts with a warehouse entry request and stock entry. Then the container is checked for damage.


After the checks are made, the container is taken to the solid stack if it is not damaged. For damaged containers, the maintenance process is started with the damage detail report. After the damage controls are completed, an exit request must be created. Our goal is to accurately predict the number of damaged containers in future periods and to compare it with the current number of containers. We obtained and used external data such as GDP and Turkey's export and import figures; in particular, we used the loading and unloading data of Mersin port. We also included the internal container data provided by the company. We used machine learning methods, i.e., Random Forest, Boosted Decision Tree Regression, and Artificial Neural Networks, to predict the company's number of damaged containers. According to our results, machine learning methods can forecast container demand effectively, and in our case study the best-performing method is Boosted Decision Tree Regression. As future work, different machine learning methods will be compared with the models developed in this study.

References

1. Gosasang, V., Yip, T.L., Chandraprakaikul, W.: Long-term container throughput forecast and equipment planning: the case of Bangkok Port. Maritime Bus. Rev. 3(1), 53–69 (2018)
2. Du, P., Wang, J., Yang, W., Niu, T.: Container throughput forecasting using a novel hybrid learning method with error correction strategy. Knowl. Based Syst. 182, 104853 (2019)
3. Niu, M., Hu, Y., Sun, S., Liu, Y.: A novel hybrid decomposition-ensemble model based on VMD and HGWO for container throughput forecasting. Appl. Math. Model. 57, 163–178 (2018)
4. Chen, C.P., Liu, Q.J., Zheng, P.: Application of Grey-Markov Model in predicting container throughput of Fujian province. Adv. Mater. Res. 779, 720–723 (2013)
5. Dragan, D., Keshavarzsaleh, A., Intihar, M., Popović, V., Kramberger, T.: Throughput forecasting of different types of cargo in the Adriatic seaport Koper. Maritime Policy Manage. 48(1), 19–45 (2020)
6. Peng, W.-Y., Chu, C.-W.: A comparison of univariate methods for forecasting container throughput volumes. Math. Comput. Modell. 50(7–8), 1045–1057 (2009)
7. Syafi’i, Kuroda, K., Takebayashi, M.: Forecasting the demand of container throughput in Indonesia. Memoirs Constr. Eng. Res. Inst. 47, 1–10 (2005)
8. Huang, J., Chu, C.-W., Tsai, Y.-C.: Container throughput forecasting for international ports in Taiwan. J. Mar. Sci. Technol. 28(5), 456–469 (2020)
9. Gao, Y., Luo, M., Zou, G.: Forecasting with model selection or model averaging: a case study for monthly container port throughput. Transportmetrica A: Trans. Sci. 12(4), 366–384 (2016)
10. Chen, S.H., Chen, J.N.: Forecasting container throughputs at ports using genetic programming. Expert Syst. Appl. 37(3), 2054–2058 (2010)
11. Farhan, J., Ong, G.P.: Forecasting seasonal container throughput at international ports using SARIMA models. Maritime Econ. Logist. 20(1), 131–148 (2018)
12. Pani, C., Fadda, P., Fancello, G., Frigau, L., Mola, F.: A data mining approach to forecast late arrivals in a transhipment container terminal. Transport 29(2), 175–184 (2014)
13. Gosasang, V., Chandraprakaikul, W., Kiattisin, S.: An application of neural networks for forecasting container throughput at Bangkok port. In: Proceedings of the World Congress on Engineering, vol. 1 (2010)
14. Epstein, R., et al.: A strategic empty container logistics optimization in a major shipping company. Interfaces 42(1), 5–16 (2012)


15. Guo, Z., Song, X., Ye, J.: A Verhulst model on time series error corrected for port throughput forecasting. J. Eastern Asia Soc. Transp. Stud. 6, 881–891 (2005)
16. Wu, H., Liu, G.: Container sea-rail transport volume forecasting of Ningbo port based on combination forecasting model. In: International Conference on Advances in Energy, Environment and Chemical Engineering, pp. 449–454. Atlantis Press (2015)
17. Lee, S.Y., Lim, H., Kim, H.J.: Forecasting container port volume: implications for dredging. Maritime Econ. Logist. 19(2), 296–314 (2017)
18. Huang, A., Lai, K., Li, Y., Wang, S.: Forecasting container throughput of Qingdao port with a hybrid model. J. Syst. Sci. Complexity 28(1), 105–121 (2015). https://doi.org/10.1007/s11424-014-3188-4
19. Central Bank of Turkey Republic (2021). [Online]. Available: https://tcmb.gov.tr/
20. Organization for Economic Co-Operation and Development (2021). [Online]. Available: https://stats.oecd.org/index.aspx?queryid=60702
21. Turkish Statistical Institute (2021). [Online]. Available: https://data.tuik.gov.tr/Bulten/Index?p=Dis-Ticaret-Istatistikleri-Ocak-2021-37413

Logistics Service Quality of Online Shopping Websites During Covid-19 Pandemic

İlayda Gezer, Hasancan Erduran, Alper Kayıhan, Burak Çetiner, and Pervin Ersoy

Logistics Management, Yaşar University, Izmir, Turkey
{burak.cetiner,pervin.ersoy}@yasar.edu.tr

Abstract. Since the day the first case was reported in Wuhan, the Covid-19 pandemic has been greatly transforming our lives as well as our economic activities. Due to the lockdowns, many businesses closed their doors forever, while others experienced unpredictable growth. Online retail is among the areas that have seen the greatest growth. Understanding the elements that contribute to the success and survival of companies during the pandemic era, and utilizing these elements to improve service quality, is crucial for any business. It is no secret that logistics operations also played, and still play, a big role in mitigating the damage dealt by the pandemic. Therefore, logistics service quality is as important as the overall service quality. In this study, the service quality elements that play a role in success are discussed. Then the online retailers operating in Turkey are evaluated on the basis of these service quality elements. For this study a survey was conducted with 160 participants, and the data gathered were analyzed with regression analysis. The results indicate that logistics service quality (LSQ), physical distribution service quality (PDSQ), and response to customer (RTC) elements play a significant role in ensuring the survival and even success of companies during the pandemic.

Keywords: Logistics service quality · Physical distribution service quality · Response to customers · Customer satisfaction · Customer loyalty

1 Introduction

Since late 2019, the Covid-19 pandemic has been severely affecting and changing our lifestyles as well as transforming businesses' operations. It is crucial for companies to make the necessary adjustments to their operations and workflow in accordance with the environmental conditions brought by the pandemic in order to protect their position within the market. In overcoming the challenges brought by Covid-19, the transportation and distribution of cargoes, including food, medical supplies, and other vital goods, play a pivotal role. The conditions our world faces due to Covid-19 are unprecedented. Air transportation was banned by many countries to slow down or stop the spread of the virus across the globe. Operators that do not have their own vessels faced more detention and demurrage fees, interchange fees, and repositioning costs, as they had to leave their containers and equipment idling longer than necessary. Transportation hubs in their entirety are also affected by the pandemic [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
N. M. Durakbasa and M. G. Gençyılmaz (Eds.): Digitizing Production Systems, LNME, pp. 725–734, 2022. https://doi.org/10.1007/978-3-030-90421-0_62

726

İ. Gezer et al.

Emergency managers find themselves in a position where they have to tackle a large variety of uncertainties, ranging from the size and scope to the nature of potential hazards [2, 3]. This forces them to take measures to mitigate risks among different audiences. To ensure risk mitigation, crisis communication needs to be multifaceted [4, 5]. Conveying information to large audiences is a challenge on its own, due to the difficulty of communicating the critical information that communication participants need in order to make decisions; failure to communicate might result in suboptimal outcomes [6, 7]. As social media platforms have the potential to reach large audiences, they are often utilized as communication tools to provide a rapid flow of information [8, 9]. Developing effective communication strategies that utilize social media's full potential is a key challenge [10]. Service quality also plays a crucial role in mitigating the effects of Covid-19. Understanding what customers expect from a company at times like these can help managers focus on the aspects that will have a bigger impact on customers' perception and future buying decisions. Providing high-quality logistics service performance is considered a way to ensure that customers experience a satisfactory service and, in return, give their loyalty to the organization. This study aims to determine how logistics service quality affects customer satisfaction and loyalty.

2 Literature Review

The overall assessment derived from the comparison between customer expectations and the output provided by the organization can be defined as service quality [11]. According to reference [11], service quality is a measurement tool that helps determine how effective a business is at meeting customer expectations through delivery performance. According to reference [12], providing a high level of service quality boosts corporate image and results in excellence in customer encounters. References [13]–[15] propose that the service quality perceived by customers has a positive impact on customer satisfaction. Reference [16] holds that service quality plays a crucial role in ensuring customer loyalty and customer retention. Reference [17] identifies service quality as an important element of customers' evaluation of their service providers as well as their service provider choice. Businesses consider service quality a tool to differentiate themselves from their competitors and consequently gain a competitive advantage, leading to more customer attraction and customer loyalty [18]. In summary, it can be said that customers base their evaluations of companies on the difference between their perception of the received service and their expectations. Logistics service quality is an important aspect in creating customer satisfaction [19], and through customer satisfaction, customer loyalty is greatly enhanced [20]. However, researchers often overlook the importance of logistics service quality as a contributor to overall service quality, and thus the studies on this topic are limited [21, 22]. According to references [23]–[25], logistics service quality is one of the sources that give firms a competitive edge, ensuring further customer satisfaction and consequently more customer loyalty.
References [26] and [27] find that the level of logistics service quality has a significant positive correlation with market influence, and that high logistics service quality also means better customer satisfaction. Reference [28] lists quantitative elements such as lead time and on-time delivery as important factors in determining overall logistics service quality.

Logistics Service Quality of Online Shopping Websites

727

In the case of online retailers, service quality again appears to be crucial in attracting customers. Reference [29] argues that the quality of service provided by online retailers, especially personalized services, online transaction security, and accessibility of the websites, largely affects general internet users' quality perception and, as a result, their willingness to make online purchases. Reference [30] states that services can be considered the outcomes that customers expect to obtain through interactions between the service providers and the customers. Another key indicator of an online retailer's service quality, and consequently its success in entering a new market, is physical distribution service quality: customers' perception of a retailer's capability to deliver to their homes and their evaluation of the online shopping experience are important for the retailer's success and survival. Logistics service quality is widely acknowledged as the foundation of logistics companies; the degree to which these companies can successfully execute their logistics operations determines the level of customer satisfaction they can establish. Logistics service quality is also important for gaining a competitive edge over competitors. References [31]–[34] state that a company can utilize its logistics service capabilities as a tool to build close relationships with its customers and ensure customer loyalty. For these reasons, it is important for companies to focus on improving their logistics service quality to make sure that their customers leave satisfied after being serviced. Companies also need to communicate continuously with their customers to stay up to date on the ways they can satisfy them [35].
In the literature, several dimensions are listed by different studies as indicators to measure service and logistics service quality. References [17] and [36] propose that physical distribution service quality has three dimensions: condition, availability, and timeliness. Reference [37] states that nine dimensions make up logistics service quality: order condition, ordering procedures, order accuracy, timeliness, order release quantities, personal contact quality, order quality, information quality, and order discrepancy handling. According to reference [27], empathy and reliability are the elements that define logistics service quality. Reference [38] also proposes a three-dimension scale to measure logistics service quality that focuses on punctuality and order form, information and order quality, and personnel quality. Reference [28] mentions eight dimensions to measure the performance of logistics service: regularity, lead time, completeness, reliability, productivity, correctness, flexibility, and harmfulness. In a later study, reference [39] reduces these dimensions to three: information actions, ways of fulfillment, and tangible components.

3 Methodology and Data Analysis

This study was conducted using the survey method. We reached 160 people who shop online or use online shopping websites. Since the study aims to determine whether customers were satisfied with online retailers' performance during the Covid-19 pandemic, the sample was formed from customers who shopped online during the pandemic period. Due to the difficulty of reaching people in a quarantine environment as


a result of health concerns and limitations brought by the Covid-19 pandemic, and time constraints related to the study, convenience sampling was chosen over random sampling. A 7-point Likert scale was preferred to increase the reliability of the questionnaire [40]. For the analysis of the data, regression analysis was conducted to show the extent to which the independent variables explain the dependent variable. Regression analysis is a method that shows the relationship between model variables and how much of that relationship is explained. The main aims of the regression analysis are as follows:

• Determining whether the linear relationship between two variables is statistically significant,
• Determining the degree to which an independent variable can affect a dependent variable,
• Understanding the direction and the magnitude of any relationship.

While conducting the regression analysis, our assumption was that the independent variables (Logistics Service Quality, Physical Distribution Service Quality, Delivery Service Quality, and Response to Customers) would have a linear relationship with the dependent variables (Customer Satisfaction and Loyalty). As explained later in the paper, our assumption for the relationship between the independent variables and Customer Satisfaction is met, whereas the relationship between the independent variables and Loyalty shows some violations.
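A minimal sketch of this regression step can be written with scikit-learn, using synthetic Likert-style data rather than the actual survey responses (which are not reproduced here). The construct names and the coefficients used to generate the data are invented for illustration.

```python
# Illustrative sketch: regressing customer satisfaction (CS) on the LSQ,
# PDSQ, and DSQ construct means via ordinary least squares.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 160                                  # same size as the survey sample

# Synthetic 7-point-Likert-style construct means (placeholder data).
lsq = rng.uniform(1, 7, n)
pdsq = rng.uniform(1, 7, n)
dsq = rng.uniform(1, 7, n)
# Hypothetical linear relationship driving CS, plus noise.
cs = 0.5 * lsq + 0.3 * pdsq + 0.1 * dsq + rng.normal(0.0, 0.5, n)

X = np.column_stack([lsq, pdsq, dsq])
ols = LinearRegression().fit(X, cs)
print(ols.coef_, ols.score(X, cs))       # recovered slopes and R^2
```

In the actual study, the significance of each slope (hypotheses H1–H3 below) would additionally be judged from p-values, e.g. from an OLS summary in a statistics package.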

4 Results

First, the survey was sent to 160 people. While 151 potential customers out of 160 declared that they shopped online, 9 did not respond. The overall gender ratio of respondents is 49.3% female and 50.7% male; the online shopping rates of both genders during the Covid-19 period are thus almost the same. Regarding the choice of retailer, 106 participants selected Trendyol as their online shopping site preference, making Trendyol the most preferred online retailer with a rate of 72.6%. Amazon was the second most preferred online retailer with a rate of 26% (38 people), while Morhipo was the third most preferred during the pandemic period, with 17 respondents (11.6%). 8.3% of the respondents chose other online retailers: Vatan, Kidega, Crafist, Ece's Boutique, and Bhc Sağlamlam, the least preferred online shopping sites during the pandemic period, each had the same number of preferences with a rate of 0.7%. 43.8% of the customers who completed the survey preferred online shopping for their clothing needs during Covid-19. Following clothing, the other main categories were shoes (28.7%) and electronic products (24.1%), with accessories, cosmetics, and home and life products at lower percentages. During the first year of the pandemic, 30.8% of customers shopped online 11 to 20 times, 27.4% shopped online fewer than 10 times, 19.2% shopped 21 to 30 times, and 14.4% shopped online more than 41 times. In our survey examining online shopping during the Covid-19 pandemic, 61.6% of the participants were 18–25 years old, 17.1% were 25–35 years old, 8.9%


were 35–45 years old, 11% were 45–55 years old, and the last 2% were over 66. From these results, younger generations tend to shop online more than older generations. The spending range of 251–500 TL was the most preferred, with a rate of 24%. With 21.2%, the 501–1000 TL range was the second most frequent, followed by the 1001–1500 TL range. The 3001–5000 TL and 5001 TL and above ranges were the least preferred, with a ratio of 5.15%.

A. Regression Analysis Results for the Effects of Logistics Service Quality (LSQ), Physical Distribution Service Quality (PDSQ), and Delivery Service Quality (DSQ) on Customer Satisfaction (CS)

Independent variables: LSQ, PDSQ, DSQ
Dependent variable: CS

H1: LSQ positively affects CS
H2: PDSQ positively affects CS
H3: DSQ positively affects CS

Table 1. Regression analysis results for effects of LSQ, PDSQ, and DSQ on customer satisfaction

Pearson Correlation | CS_MEAN | LSQ_MEAN
CS_MEAN | 1.000 | .802
LSQ_MEAN | .802 | 1.000
PDSQ_MEAN | .798 | .837
DSQ_MEAN | .559 | .546