Lecture Notes in Mechanical Engineering
Francisco J. G. Silva António B. Pereira Raul D. S. G. Campilho Editors
Flexible Automation and Intelligent Manufacturing: Establishing Bridges for More Sustainable Manufacturing Systems Proceedings of FAIM 2023, June 18–22, 2023, Porto, Portugal, Volume 1: Modern Manufacturing
Lecture Notes in Mechanical Engineering

Series Editors
Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia
Francesco Gherardini, Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy
Vitalii Ivanov, Department of Manufacturing Engineering, Machines and Tools, Sumy State University, Sumy, Ukraine
Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia

Editorial Board Members
Francisco Cavas-Martínez, Departamento de Estructuras, Construcción y Expresión Gráfica, Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain
Francesca di Mare, Institute of Energy Technology, Ruhr-Universität Bochum, Bochum, Nordrhein-Westfalen, Germany
Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA
Justyna Trojanowska, Poznan University of Technology, Poznan, Poland
Jinyang Xu, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering.

To submit a proposal or request further information, please contact the Springer Editor of your location:
Europe, USA, Africa: Leontina Di Cecco at [email protected]
China: Ella Zhang at [email protected]
India: Priya Vyas at [email protected]
Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]

Topics in the series include:
• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology
Indexed by SCOPUS, EI Compendex, and INSPEC.
All books published in the series are evaluated by Web of Science for the Conference Proceedings Citation Index (CPCI).
To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at https://link.springer.com/bookseries/11693
Editors Francisco J. G. Silva Department of Mechanical Engineering ISEP – School of Engineering, Polytechnic of Porto Porto, Portugal
António B. Pereira TEMA - Centre for Mechanical Technology and Automation, Department of Mechanical Engineering University of Aveiro Aveiro, Portugal
Raul D. S. G. Campilho Department of Mechanical Engineering ISEP – School of Engineering, Polytechnic of Porto Porto, Portugal
ISSN 2195-4356 ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-031-38240-6 ISBN 978-3-031-38241-3 (eBook)
https://doi.org/10.1007/978-3-031-38241-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
This volume of Lecture Notes in Mechanical Engineering (LNME) is one of two volumes including papers selected from the 32nd International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2023), held in Porto, Portugal, from June 18 to 22, 2023. The FAIM 2023 conference was organized by the School of Engineering, Polytechnic of Porto, located in Porto, Portugal. Flexible Automation and Intelligent Manufacturing (FAIM) is a renowned international forum for academia and industry to disseminate novel research, theories, and practices relevant to automation and manufacturing. For over 30 years, the FAIM conference has provided a strong and continuous presence in the international manufacturing scene, addressing both technology and management aspects via scientific conference sessions, workshops, tutorials, and industry tours. Since 1991, FAIM has been hosted in prestigious universities on both sides of the Atlantic and, in recent years, in Asia. The conference attracts hundreds of global leaders in automation and manufacturing research who attend program sessions where rigorously peer-reviewed papers are presented during the multiple-day conference. The conference links researchers and industry practitioners in a continuous effort to bridge the gap between research and implementation. FAIM 2023 received more than 400 contributions from over 40 countries and over 220 institutions around the world. After a two-stage double-blind review, the technical program committee accepted 263 papers. From these, 242 papers have been included in two LNME volumes, and 21 extended papers are published as fast-track articles in Robotics and Computer-Integrated Manufacturing and The International Journal of Advanced Manufacturing Technology. 
The authors of a selection of these LNME articles will be invited to submit substantially extended versions to special issues of ten international indexed journals: the International Journal of Computer Integrated Manufacturing, the Journal of Mechanical Engineering Science, the Journal of Testing and Evaluation, Sustainability, Machines, Metals, Actuators, Systems, FME Transactions, and Technological Sustainability. We are grateful to the authors for their contributions and would like to acknowledge the FAIM steering committee, the advisory board, the honorary chairs, the scientific committee members, and the manuscript reviewers for their significant efforts, continuous support, and shared expertise. Reviewers from around the world performed 1339 reviews in total; thanks to this effort and rigor, the high standards of the papers included in the FAIM program have been maintained. Special thanks to FAIM 2023’s invited speakers: Jiju Antony (Professor, Khalifa University, Abu Dhabi, UAE), Daryl Powell (Chief Scientist at SINTEF, Norway), Marcello Pellicciari (Professor, Director of the PhD School “E4E-Engineering for Economics, Economics for Engineering”, University of Modena and Reggio Emilia, Modena, Italy), Rodrigo Martins (Professor, Nova School of Science and Technology, Lisboa, Portugal), Alcibíades Guedes (Professor at the Faculty of Engineering, University of Porto, President
of the Board & CEO at INEGI, Porto, Portugal), and Pedro Carreira (CEO at Continental Mabor, Lousado, Portugal). The book Flexible Automation and Intelligent Manufacturing: Establishing Bridges for More Sustainable Manufacturing Systems—Proceedings of FAIM 2023 is organized in two LNME volumes. Volume 1 is mainly devoted to Mechanical Engineering, Robotics, Automation, Manufacturing Processes, and Materials Processing, while Volume 2 focuses mainly on Industrial Management, Digital Transformation, Industry 4.0/5.0, Human Factors, Logistics, Sustainability, and related matters. In both volumes, the papers have been organized by topic, providing the reader with an excellent set of papers on each subject. We appreciate the partnership with Springer, ConfTool, and our sponsors for their fantastic support during the preparation of FAIM 2023. Many thanks to the FAIM 2023 organizing team, whose hard work was crucial to the success of the conference. May 2023
Francisco J. G. Silva António Bastos Pereira Raul D. S. G. Campilho
FAIM 2023 Organization
Organizing Committee

General Chair
Francisco J. G. Silva (ISEP—Polytechnic of Porto, Porto, Portugal)

Scientific Chairs
Francisco J. G. Silva (ISEP—Polytechnic of Porto, Porto, Portugal)
Raul D. S. G. Campilho (ISEP—Polytechnic of Porto, Porto, Portugal)

Program Chairs
Francisco J. G. Silva (ISEP—Polytechnic of Porto, Porto, Portugal)
Arnaldo G. Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
Raul D. S. G. Campilho (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Filipe Coelho (ISEP—Polytechnic of Porto, Porto, Portugal)
André Filipe Varandas Pedroso (ISEP—Polytechnic of Porto, Porto, Portugal)

Communication and Media Chairs
Gustavo Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Pinto Ferreira (ISEP—Polytechnic of Porto, Porto, Portugal)
Andresa Baptista (ISEP—Polytechnic of Porto, Porto, Portugal)
Susana Nicola (ISEP—Polytechnic of Porto, Porto, Portugal)

Industrial Chairs
António Bastos Pereira (University of Aveiro, Portugal)
Luísa Morgado (ISEP—Polytechnic of Porto, Porto, Portugal)
Carla Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
Conference Managers
Francisco J. G. Silva (ISEP—Polytechnic of Porto, Porto, Portugal)
Raul D. S. G. Campilho (ISEP—Polytechnic of Porto, Porto, Portugal)
Arnaldo G. Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
José Carlos Sá (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Pinto Ferreira (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Filipe Coelho (ISEP—Polytechnic of Porto, Porto, Portugal)
Rita Sales-Contini (ISEP—Polytechnic of Porto, Portugal/Faculty of Technology, S. José dos Campos, Brazil)
Filipe Fernandes (ISEP—Polytechnic of Porto, Porto, Portugal)

Assistant Conference Managers
André Filipe Varandas Pedroso (ISEP—Polytechnic of Porto, Porto, Portugal)
Vitor F. C. Sousa (ISEP—Polytechnic of Porto, Porto, Portugal)
Naiara Sebbe (ISEP—Polytechnic of Porto, Porto, Portugal)
Rúben Costa (ISEP—Polytechnic of Porto, Porto, Portugal)
Marta Barbosa (ISEP—Polytechnic of Porto, Porto, Portugal)
Steering Committee—FAIM
Frank Chen (The University of Texas at San Antonio, USA)
Munir Ahmad (Khwaja Fareed University of Engineering and Information Technology, Pakistan/Teesside University, Middlesbrough, UK)
George-Christopher Vosniakos (National Technical University of Athens, Greece)
Kyoung-Yun Kim (Wayne State University, USA)
Advisory Board—Program Committee
Esther Alvarez de los Mozos (Universidad de Deusto, Spain)
Americo Azevedo (INESC Porto, Portugal)
F. Frank Chen (The University of Texas at San Antonio, USA)
Paul Eric Dossou (ICAM Paris-Senart, France)
Farnaz Ganjeizadeh (California State University, East Bay, USA)
Dong-Won Kim (Chonbuk National University, South Korea)
Stephen T. Newman (University of Bath, UK)
Chike F. Oduoza (University of Wolverhampton, UK)
Margherita Peruzzini (University of Modena and Reggio Emilia, Italy)
Alan Ryan (University of Limerick, Ireland)
Francisco Silva (ISEP—Polytechnic of Porto, Porto, Portugal)
Dusan Sormaz (Ohio University, USA)
Leo De Vin (Karlstad University, Sweden)
George-Christopher Vosniakos (National Technical University of Athens, Greece)
Lihui Wang (KTH Royal Institute of Technology, Sweden)
Yi-Chi Wang (Feng Chia University, Taiwan)
Honorary Chairs
Munir Ahmad, FAIM Founding Chair (Khwaja Fareed University of Engineering and Information Technology, Pakistan/Teesside University, Middlesbrough, UK)
F. Frank Chen (The University of Texas at San Antonio, USA)
William G. Sullivan, FAIM Founding Chair (Virginia Polytechnic Institute and State University, USA)
Scientific Committee
Abílio de Jesus (FEUP—Faculty of Engineering, University of Porto, Porto, Portugal)
Ana Luísa Ramos (University of Aveiro, Portugal)
Anabela Carvalho Alves (University of Minho, Portugal)
André Serra e Santos (ISEP—Polytechnic of Porto, Porto, Portugal)
André Filipe Varandas Pedroso (ISEP—Polytechnic of Porto, Porto, Portugal)
Andrea Trianni (School of Mechanical and Mechatronic Engineering, Sydney, Australia)
Andresa Baptista (ISEP—Polytechnic of Porto, Porto, Portugal)
António Bastos Pereira (University of Aveiro, Aveiro, Portugal)
Arnaldo Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
Benny Tjahjono (Coventry University, Coventry, UK)
Bosko Rasuo (University of Belgrade, Belgrade, Serbia)
Carina Pimentel (University of Aveiro, Aveiro, Portugal)
Carla Geraldes (Institute Polytechnic of Bragança, Bragança, Portugal)
Carla Pinto (Polytechnic of Porto, Porto, Portugal)
Chike F. Oduoza (University of Wolverhampton, UK)
Cristina Lopes (Polytechnic of Porto, Porto, Portugal)
Dimitris M. Mourtzis (University of Patras, Greece)
Dong-Won Kim (Chonbuk National University, South Korea)
Dusan Sormaz (Ohio University, OH, USA)
Esther Alvarez de los Mozos (University of Deusto, Spain)
Eusébio Nunes (University of Minho, Braga, Portugal)
F. Frank Chen (University of Texas at San Antonio, USA)
Fernanda Amélia Ferreira (Polytechnic of Porto, Porto, Portugal)
Filipe Fernandes (ISEP—Polytechnic of Porto, Porto, Portugal)
Francisco J. G. Silva (INEGI/ISEP—Polytechnic of Porto, Porto, Portugal)
Geandra Queiroz (Universidade do Estado de Minas Gerais, Minas Gerais, Brazil)
George Vosniakos (National Technical University of Athens, Greece)
Georgina Miranda (University of Aveiro, Aveiro, Portugal)
Gustavo Pinto (ISEP—Polytechnic of Porto, Porto, Portugal)
Isabel Lopes (University of Minho, Braga, Portugal)
Isabel Pinto (Polytechnic of Porto, Porto, Portugal)
Isotília Costa Melo (Universidad Católica del Norte, Chile)
João Emílio Almeida (ISTEC, Porto, Portugal)
João José Pinto Ferreira (FEUP, Porto, Portugal)
João Matias (University of Aveiro, Aveiro, Portugal)
Jorge Lino Alves (FEUP—Faculty of Engineering, University of Porto, Porto, Portugal)
José Carlos Sá (ISEP—Polytechnic of Porto, Porto, Portugal)
José Dinis-Carvalho (Universidade do Minho, Braga, Portugal)
José Fernando Oliveira (FEUP—Faculty of Engineering, University of Porto, Porto, Portugal)
José Ferreira (Universidade de Aveiro, Aveiro, Portugal)
José Machado (University of Minho, Guimarães, Portugal)
Kyoung-yun “Joseph” Kim (Wayne State University, Detroit, MI, USA)
Leo de Vin (Karlstad University, Sweden)
Lihui Wang (KTH Royal Institute of Technology, Sweden)
Lucas da Silva (FEUP—Faculty of Engineering, University of Porto, Porto, Portugal)
Luís Coelho (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Filipe Malheiros (Faculty of Engineering, University of Porto, Porto, Portugal)
Luís Miguel Fonseca (ISEP—Polytechnic of Porto, Porto, Portugal)
Luís Pinto Ferreira (ISEP—Polytechnic of Porto, Porto, Portugal)
Luísa Gaspar Morgado (ISEP—Polytechnic of Porto, Porto, Portugal)
Manuel Pereira Lopes (ISEP—Polytechnic of Porto, Porto, Portugal)
Manuel Silva (INESC/Polytechnic of Porto, Porto, Portugal)
Marcello Pellicciari (University of Modena and Reggio Emilia, Modena, Italy)
Margherita Peruzzini (University of Modena and Reggio Emilia, Modena, Italy)
Marisa Oliveira (ISEP—Polytechnic of Porto, Porto, Portugal)
Marta Barbosa (ISEP—Polytechnic of Porto/Faculty of Engineering, University of Porto, Porto, Portugal)
Matthias Thürer (Jinan University, Jinan, China)
Michele Calì (University of Catania, Italy)
Mónica Oliveira (University of Aveiro, Portugal)
Nuno Octávio Fernandes (Institute Polytechnic of Castelo Branco, Castelo Branco, Portugal)
Paul Eric Dossou (ICAM Paris-Senart, France)
Raul D. S. G. Campilho (INEGI/ISEP—Polytechnic of Porto, Porto, Portugal)
Radu Godina (Nova University, Lisbon, Portugal)
Rita C. M. Sales-Contini (Polytechnic of Porto, Portugal/Faculty of Technology, S. José dos Campos, Brazil)
Rúben Costa (ISEP—Polytechnic of Porto/Faculty of Engineering, University of Porto, Porto, Portugal)
Sérgio Dinis Sousa (Universidade do Minho, Braga, Portugal)
Stephen T. Newman (University of Bath, UK)
Susana Nicola (ISEP—Polytechnic of Porto, Porto, Portugal)
Teresa Pereira (INEGI/Polytechnic of Porto, Porto, Portugal)
Vanda Lima (ESTGF—Polytechnic of Porto, Felgueiras, Portugal)
Vitor F. C. Sousa (Polytechnic of Porto/Faculty of Engineering, University of Porto, Porto, Portugal)
Xichun Luo (University of Strathclyde, Glasgow, UK)
Yi-Chi Wang (Feng Chia University, Taiwan)
Contents

Automation and Robotics

VR Driven Unsupervised Classification for Context Aware Human Robot Collaboration . . . 3
Ali Kamali Mohammadzadeh, Carlton Leroy Allen, and Sara Masoud

Environment for the Design and Automation of New Cable-Driven Parallel Robot Architectures . . . 12
Josué Rivera, Julio Garrido, Enrique Riveiro, and Diego Silva

Robot-Based Inspection of Freeform Components: Process Analysis and Challenges in Using a Lateral Scanning WLI . . . 21
Jessica Ehrbar, Daniel Schoepflin, and Thorsten Schüppstuhl

Development of a Robot-Based Handling System for a High Precision Manufacturing Cell . . . 29
George Papazetis, Evangelos Tzimas, Panorios Benardos, and George-Christopher Vosniakos

Transparent Object Classification and Location Using MmWave Radar Technology for Robotic Picking . . . 37
Ricardo N. C. Rodrigues, João Borges, and António H. J. Moreira

AI-Based Supervising System for Improved Safety in Shared Robotic Areas . . . 46
Ana Almeida and António H. J. Moreira

Vision Robotics for the Automatic Assessment of the Diabetic Foot . . . 54
Rui Mesquita, Tatiana Costa, Luis Coelho, and Manuel F. Silva

A Conceptual Framework for the Improvement of Robotic System Reliability Through Industry 4.0 . . . 62
Dimitris Mourtzis, Sofia Tsoubou, and John Angelopoulos

Human-Robot Collaboration, Sustainable Manufacturing Perspective . . . 71
Robert Ojstersek, Borut Buchmeister, and Aljaz Javernik

Towards a Robotic Intervention for On-Land Archaeological Fieldwork in Prehistoric Sites . . . 79
L’hermite Tom, Cherlonneix Cyprien, Paul-Eric Dossou, and Laouenan Gaspard

UWB-Based Indoor Navigation in a Flexible Manufacturing System Using a Custom Quadrotor UAV . . . 91
Petros Savvakis, George-Christopher Vosniakos, Emmanuel Stathatos, Axel Debar-Monclair, Marek Chodnicki, and Panorios Benardos

Real-Time Defect and Object Detection in Assembly Line: A Case for In-Line Quality Inspection . . . 99
Milad Ashourpour, Ghazaleh Azizpour, and Kerstin Johansen

Using Computer Vision to Improve SME Performance . . . 107
Kokou C. Lissassi, Paul-Eric Dossou, and Christophe Sabourin

Cyber-Physical Systems Based Smart Manufacturing of Disinfectants: A Need, and Solution Driven by COVID-19 Pandemic . . . 117
Faiz Iqbal, Tushar Semwal, and Adam A. Stokes

CPPS-3D: A Methodology to Support Cyber Physical Production Systems Design, Development and Deployment . . . 125
Pedro F. Cunha, Dário Pelixo, and Rui Madeira

Reinforcement Learning-Based Model for Optimization of Cloud Manufacturing-Based Multi Objective Resource Scheduling: A Review . . . 133
Rasoul Rashidifar, F. Frank Chen, Mohammad Shahin, Ali Hosseinzadeh, Hamed Bouzary, and Awni Shahin

An Overview of Explainable Artificial Intelligence in the Industry 4.0 Context . . . 141
Pedro Teixeira, Eurico Vasco Amorim, Jöerg Nagel, and Vitor Filipe

Analyzing the Effects of Different 3D-Model Acquisition Methods for Synthetic AI Training Data Generation and the Domain Gap . . . 149
Özge Beyza Albayrak, Daniel Schoepflin, Dirk Holst, Lars Möller, and Thorsten Schüppstuhl

Improving Regulations for Automated Design Checking Through Decision Analysis Good Practices: A Conceptual Application to the Construction Sector . . . 160
Ricardo J. G. Mateus, Francisco Silva Pinto, Judith Fauth, Miguel Azenha, José Granja, Ricardo Veludo, Bruno Muniz, João Reis, and Pedro Marques

Preliminary Design of an Automatic Palletizing System During the Pre-sales Stage . . . 170
Enrico Guidetti, Pietro Bilancia, Roberto Raffaeli, and Marcello Pellicciari
A New Equipment for Automatic Calibration of the Semmes-Weinstein Monofilament . . . 179
Pedro Castro-Martins and Luís Pinto-Coelho

Measuring the Moment-Curvature Relationship of a Steerable Catheter Using a Load Cell and Stereovision System . . . 187
Jajun Ryu, Jaeseong Choi, Taeyoung Kim, and Hwa Young Kim

Manufacturing Processes and Automation

Study of the Best Operational Parameters in Laser Marking of Plastic Parts . . . 199
João Costa, Francisco J. G. Silva, Arnaldo G. Pinto, Isabel Mendes Pinto, and Vitor F. C. Sousa

Laser Marking on White-Coloured Polyoxymethylene (POM) Polymer Substrate: Challenges and Perspectives . . . 208
Stanley Udochukwu Ofoegbu, Paulo J. A. Rosa, Fábio A. O. Fernandes, António B. Pereira, and Pedro Fonseca

Influence of Laser Beam Intensity Distribution on Keyhole Geometry and Process Stability Using Green Laser Radiation . . . 216
Florian Kaufmann, Andreas Maier, Julian Schrauder, Stephan Roth, and Michael Schmidt

Application of Ontology Reasoning in Machining Process Planning – Case Study . . . 228
Peter Adjei, Felix Asare, Dušan Šormaz, Riad Al Hasan Abir, Mandvi Fuloria, David Koonce, and Saruda Seeharit

Experimental Research on the Dimensional and Geometrical Deviations of Features-of-Size Produced by Material Extrusion Processes . . . 238
Christos Vakouftsis, Georgios Kaisarlis, Vasilios Spitas, and Christopher G. Provatidis

Advanced Characterization Techniques of Multi-material Machining Tool Coatings . . . 248
R. D. F. S. Costa, A. M. P. Jesus, S. L. S. Simões, and M. L. S. Barbosa

Optimisation of CNC Machining Part Programs Exemplified for Rough-Milling of Pockets . . . 257
A. Iliopoulos and George-Christopher Vosniakos
NAM-CAM: Neural-Additive Models for Semi-analytic Descriptions of CAM Simulations . . . 265
Konstantin Ditschuneit, Adem Frenk, Markus Frings, Viktor Rudel, Stefan Dietzel, and Johannes S. Otterbach

Adaptive Toolpath Planning for Hybrid Manufacturing Based on Raw 3D Scanning Data . . . 273
Panagiotis Stavropoulos, Lydia Athanasopoulou, Thanassis Souflas, and Konstantinos Tzimanis

Machining of Individualized Milled Parts in a Skill-Based Production Environment . . . 283
Andreas Wagner, Magnus Volkmann, Jesko Hermann, and Martin Ruskowski

Tool Path Length Optimization in Drilling Operations: A Comparative Study . . . 293
Alaeddine Zouari, Dhouib Souhail, Fatma Lehyani, and José Carlos Sá

Feed Rate Optimization Using NC Cutting Load Maps . . . 302
N. H. Yoo, S. G. Kim, T. H. Kim, E. Y. Heo, and D. W. Kim

Some Challenges and Opportunities in Additive Manufacturing Industrialization Process . . . 311
Zahra Isania, Maria Pia Fanti, and Giuseppe Casalino

Design Parameters to Develop Porous Structures: Case Study Applied to DLP 3D Printing . . . 319
R. Rodrigues, P. Lopes, Luis Oliveira, L. Santana, and J. Lino Alves

Deep Learning Based Automatic Porosity Detection of Laser Powder Bed Fusion Additive Manufacturing . . . 328
Syed Ibn Mohsin, Behzad Farhang, Peng Wang, Yiran Yang, Narges Shayesteh, and Fazleena Badurdeen

Influence of Process Parameters on Compression Properties of 3D Printed Polyether-Ether-Ketone by Fused Filament Fabrication . . . 336
Erika Lannunziata, Alberto Giubilini, Abdollah Saboori, and Paolo Minetola

3D Printing of Polycaprolactone Scaffolds with Heterogenous Pore Size for Living Tissue Regeneration . . . 345
N. Manou, George-Christopher Vosniakos, and P. Kostazos
An Innovative Platform for Designing and Rapid Virtual Prototyping of Garments: The Case of i-Mannequin . . . 354
Evridiki Papachristou, Despoina Kalaitzi, and Michael Kaseris

Analysis of the Possibilities of Applying 3D Print Methods for the Needs of Ship-Building Industry . . . 363
Krzysztof Jasiński, Marek Chodnicki, Krzysztof Bobrowski, Krzysztof Lipiński, Marcin Kluczyk, and Adam Szeleziński

Development of a Large-Scale Artifact for a Wine Visitor Center Using Additive Manufacturing Technologies . . . 371
Luis Torres and Rui Mendonça

A Non-retracted Path Generation Algorithm for Material Extrusion Type of Additive Manufacturing . . . 380
Melih Ozcan, Yigit Hergul, and Ulas Yaman

Movement Tracking-Based In-Situ Monitoring System for Additive Manufacturing . . . 388
Gokula Vasantha, Ayse Aslan, Paul Lapok, Alistair Lawson, and Stuart Thomas

On the Development of a Compliant Mechanism for Displacement Amplification Produced by Selective Laser Sintering . . . 399
Alessandro Bove, Flaviana Calignano, Matteo Perrone, and Luca Iuliano

Analysis on the Effect of Energy Density on Mechanical Properties of Selective Laser Sintering Processed Polyamide-12 . . . 407
P. Karmiris-Obratański, E. L. Papazoglou, N. E. Karkalos, and A. P. Markopoulos

Empirical Characterization of Track Dimensions for CMT-Based WAAM Processes . . . 415
Jacopo Lettori, Roberto Raffaeli, Pietro Bilancia, Milton Borsato, Margherita Peruzzini, and Marcello Pellicciari

Surface Roughness Measurement in Laser Powder Bed Fusion Manufacturing Process . . . 425
Vincenza Mercurio, Flaviana Calignano, Giovanni Marchiandi, and Luca Iuliano

Tuning Process Parameters to Control the Porosity of Parts Produced with Directed Energy Deposition . . . 434
Gabriele Piscopo, Eleonora Atzeni, Luca Iuliano, and Alessandro Salmi
Vision Inspection Design for Systematic Production of Needle Beds: An Industrial Application . . . 444
Luis Freitas, Teresa Malheiro, A. Manuela Gonçalves, José Vicente, Filipe Pereira, Francisco Morais, João Bessa, and José Machado

Thermal Profile Prediction for Ball Grid Array Solder Joints Using Physic-Informed Artificial Neural Network . . . 453
Zhenxuan Zhang, Yuanyuan Li, Sang Won Yoon, Seungbae Park, and Daehan Won

Design and Validation of Fixation Points of Polymeric Components for the Automotive Industry . . . 461
R. J. O. Simões, Raul D. S. G. Campilho, Francisco J. G. Silva, and C. Prakash

Towards an Automatic Strategy for Conformal Cooling Design . . . 470
Sofia B. Rocha, Gonçalo Martins, Victor Neto, and Mónica S. A. Oliveira

Process Improvement in Zamak Injection Machines for Automotive Component Fabrication . . . 479
J. L. T. A. Pereira, Raul D. S. G. Campilho, Francisco J. G. Silva, I. J. Sánchez-Arce, and C. Prakash

EXPLAINS: Explainable Anomaly Prediction for SMT Solder Joints Using SPI Data . . . 487
Nieqing Cao, Daehan Won, and Sang Won Yoon

Leakage Inspection for the Scale-Up of Hydrogen Electrolyzers: A Case Study and Comparative Analysis of Technologies . . . 496
Christian Masuhr, Lukas Büsch, and Thorsten Schüppstuhl

Process Mining and TOPSIS Analysis for Identifying the Most Complex Combination Vehicle Model and Paint Color – A Case Study . . . 509
André Luiz Micosky, Cleiton Ferreira dos Santos, Alef Berg de Oliveira, Eduardo de Freitas Rocha Loures, and Eduardo Alves Portela Santos

Simulation Case Study for Improving Painting Tires Process Using the Fanuc Roboguide Software . . . 517
Adriano A. Santos, Jakub Haladus, Filipe Pereira, Carlos Felgueiras, and Rui Fazenda
Contents
xix
Integral Quality Assurance Method for a CFRP Aircraft Fuselage Skin: Gap and Overlap Measurement for Thermoplastic AFP . . . . . . . . . . . . . . . . . . . . . 525 Monika Mayer, Alfons Schuster, Lars Brandt, Dominik Deden, and Frederic Fischer Conceptual Framework for Development of Intelligent Control Systems for Thermoplastics Injection Molding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 Olga Ogorodnyk Automation of Fitting Pipe Manufacturing in Shipbuilding . . . . . . . . . . . . . . . . . . 544 Klara Peji´c, Konstantin von Haugwitz, Martin-Christoph Wanner, and Wilko Flügge Development of a Recovery Process for Sanitary Ware Using Laser Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 R. D. F. S. Costa, L. M. P. Durão, Arnaldo G. Pinto, and J. R. Ferreira Statistical-Based Pick-and-Place Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559 Jaewoo Kim, Daehan Won, and Sang Won Yoon The Development of a Robotic Digital Twin for the Life Science Sector . . . . . . . 567 E. P. Hinchy, N. Cunningham, A. Doohan, M. Hassanpour, E. Nwanji, D. O’Malley, A. Ryan, and M. Zeinali Evaluating a Grey-Box System Identification Module for a Digital Twin . . . . . . . 575 Jonathan Lesage and Robert W. Brennan A Model-Based Digital Twin for Adaptive Trajectory Planning of a Robot for Mixed Packaging Process and Active Collision Avoidance . . . . . . . . . . . . . . . . 583 Alexios Chaloulos, Nikolaos Nikolakis, and Kosmas Alexopoulos Data-Driven Discovery of Manufacturing Processes and Performance from Worker Localisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592 Ayse Aslan, Hanane El-Raoui, Jack Hanson, Gokula Vasantha, John Quigley, and Jonathan Corney Software-supported Hazards Identification for Plug & Produce Systems . . . . . . . 
603 Waddah Mosa, Bassam Massouh, Mahmood Khabbazi, Mikael Eriksson, and Fredrik Danielsson Iterative Planning as a Holistic Framework for Production System-Wide Optimization Control Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 David Dietrich, Michael Neubauer, Armin Lechler, and Alexander Verl
Towards Model-Based Assembly System Configuration Supported by SysML and AutomationML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622 Jan-Erik Rath, Julian Koch, and Thorsten Schüppstuhl Deep Reinforcement Learning-Based Approach to Dynamically Balance Multi-manned Assembly Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633 Romão Santos, Catarina Marques, César Toscano, Hugo M. Ferreira, and Joel Ribeiro A New Genetic Algorithm Approach to Optimize the Workload Balance in a Case Study of a Footwear Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641 Lísia Peroza Ruiz and Adelano Esposito A Hybrid Model to Support Decision Making in Manufacturing . . . . . . . . . . . . . . 651 Alef Berg de Oliveira, André Luiz Micosky, Cleiton Ferreira dos Santos, Eduardo de Freitas Rocha Loures, and Eduardo Alves Portela Santos Wireless Safety in Industrial 5G Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 Simon Lamoth, Julian Goetz, Tatjana Legler, and Martin Ruskowski Development of a Digital Thread for Orchestrating Data Along the Product Lifecycle for Large-Part and High-Precision Manufacturing . . . . . . . . . . . . . . . . . 668 Félix Vidal, Lucía Alonso, Patrick de Luca, Yann Duplessis-Kergomard, and Roberto Castillo Inspection of Part Placement Within Containers Using Point Cloud Overlap Analysis for an Automotive Production Line . . . . . . . . . . . . . . . . . . . . . . . 677 Carlos M. Costa, Joana Dias, Rui Nascimento, Cláudia Rocha, Germano Veiga, Armando Sousa, Ulrike Thomas, and Luís Rocha Analysis and Assessment of Multi-Agent Systems for Production Planning and Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
687 Julia Lena Huckert, Aleksandr Sidorenko, and Achim Wagner A Conceptual Framework for Localization of Active Sound Sources in Manufacturing Environment Based on Artificial Intelligence . . . . . . . . . . . . . . . 699 Reza Jalayer, Masoud Jalayer, Carlotta Orsenigo, and Carlo Vercellis Project-Based Collaborative Research and Training Roadmap for Manufacturing Based on Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708 Marek Chodnicki, Mariusz Deja, George-Christopher Vosniakos, Panorios Benardos, Lihui Wang, Xi Vincent Wang, Thomas Braun, and Robert Reimann
Application of Ensemble Learning for Improving Failure Prediction in Lithium-Ion Batteries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716 Joelton Deonei Gotz, Gabriel Carrico Guerrero, Gustavo Onofre Andreão, and Milton Borsato Machines and Mechanical Design An Image Processing Approach for Morphology Characterization of Serration in Ti-6Al-7Nb Chip Sliding Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . 727 Ana Horovistiz, Sílvia Carvalho, and J. P. Davim Degradation-Based Design for Disassembly Assessment Using Network Centrality Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736 Header M. Alrufaifi, Joao Paulo Jacomini Prioli, and Jeremy L. Rickli CAD/CAM/CAE Rule-Based System for Assembly Planning Using Features . . . . . . . . . . . . . . . . . . 747 Dušan Šormaz and Anibal Careaga Campos Generation of Conformal Cooling Channels on Generic Geometries: An Assisted Automated Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755 David Oliveira, Pedro Ribeiro, Gustavo Carreira, and Miguel Belbut Gaspar On the Effects of Process Optimization for Ti-6Al-2Sn-4Zr-2Mo Lattice Structures Produced by Electron Beam Powder Bed Fusion . . . . . . . . . . . . . . . . . . 763 Manuela Galati, Massimo Giordano, Giovanni Rizza, Abdollah Saboori, Paolo Antonioni, and Luca Iuliano Advanced Materials Processing and Characterization A Review of INCONEL® Alloy’s Non-conventional Machining Processes . . . . . 773 A. F. V. Pedroso, Vitor F. C. Sousa, N. P. V. Sebbe, Francisco J. G. Silva, Raul D. S. G. Campilho, R. C. M. Sales-Contini, and F. R. Nogueira Wear Behavior Analysis of TiN/TiAlN Coated Tools in Milling of Inconel 718 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784 N. P. V. Sebbe, F. Fernandes, Francisco J. G. 
Silva, Vitor F. C. Sousa, R. C. M. Sales-Contini, Raul D. S. G. Campilho, and A. F. V. Pedroso
A Brief Review of Injection-Mould Materials Hybrid Manufacturing Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796 F. R. Nogueira, A. F. V. Pedroso, Vitor F. C. Sousa, N. P. V. Sebbe, R. C. M. Sales-Contini, and M. L. S. Barbosa Studying the Machining Performance of DSS Steel Using Single and Multilayered TiAlSiN Coated Tools Deposited by HiPIMS . . . . . . . . . . . . . . 807 Andresa Baptista, Gustavo F. Pinto, Vitor F. C. Sousa, Raul D. S. G. Campilho, and Filipe Fernandes Characterization of Micro-nanostructured Thin Layers Used to Increase the Lifetime of Hip Prosthesis Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819 Liliana-Laura Badita-Voicu, Aurel Zapciu, Dorin Angelescu, and Adrian-Catalin Vociu Optimization of Surface Roughness in Laser Cutting Process of Mild Steel Using RSM and GA Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827 Majid Shamlooei, Gabriele Zanon, Marco Brugnolli, Mattia Vanin, and Oreste S. Bursi Non-destructive Crack Detection Methodologies in Green Compacts: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836 Sameen Mustafa, Angelika Peer, and Franco Concli The Importance of Patent Registration in the Way of Generating Wealth for Nations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848 Gilberto Santos, Sandro Carvalho, José Carlos Sá, and Luis P. Ferreira Innovative Maintenance Digital Transformation of Electrical Engineering Companies in the Czech Republic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859 Andrea Benešová, František Steiner, and Jiˇrí Tupa A Value-Oriented Framework for Return Evaluation of Industry 4.0 Projects . . . 
871 Alexander Dutra Tostes and Américo Azevedo Digital Factory for Product Customization: A Proposal for a Decentralized Production System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879 Hélio Castro, Fernando Câmara, Eduardo Câmara, and Paulo Ávila
Management Online Hazard Detection in Reconfigurable Plug & Produce Systems . . . . . . . . . 889 Bassam Massouh, Fredrik Danielsson, Sudha Ramasamy, Mahmood Khabbazi, and Xiaoxiao Zhang Sustainable Transport: A Systematic Literature Review . . . . . . . . . . . . . . . . . . . . . 898 João Reis, Joana Costa, Pedro Marques, Francisco Silva Pinto, and Ricardo J. G. Mateus Framework Proposition for the Implementation of Task Shifting Practice: A Case Study in the Healthcare Sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909 Federica Costa, Najla Alemsan, Alberto Portioli Staudacher, and Guilherme Luz Tortorella Lean, Kaizen, Quality and Productivity A Review of Value Stream Mapping (VSM) for Viable Business Process Management Among Agro-Allied Companies During the 4IR . . . . . . . . . . . . . . . . 919 Makinde Oluwafemi Ajayi and Opeyeolu Timothy Laseinde Outsourcing Optimization in Footwear Industry: The Case of a Portuguese Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929 Eliana Costa e Silva, Raquel Francisco, Sandra P. Sousa, and Vítor Braga Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
Automation and Robotics
VR Driven Unsupervised Classification for Context Aware Human Robot Collaboration

Ali Kamali Mohammadzadeh1, Carlton Leroy Allen2, and Sara Masoud1(B)

1 Wayne State University, Detroit, MI 48201, USA
[email protected]
2 Bowling Green State University, Bowling Green, OH 43403, USA
Abstract. Human behavior, despite its complexity, follows structured principles that, if understood, will result in more reliable and effective collaborative automation environments. Characterizing human behavior in collaborative automation systems based on understanding the underlying context allows for novel advances in robotic human behavior sensing, processing, and predicting. Here, virtual reality, through integration of HTC Vive Arena Pro Eye Bundle, Leap Motion, and Unity 3D game engine, is used for safe and secure data collection on humans’ movements and body language in human robot collaborative environments. This paper proposes an unsupervised classification framework through integration of dynamic time warping and k-means clustering algorithm to enable robotics agents to understand humans’ intentions based on their body movements. Results display that the proposed framework is capable of identifying underlying intentions with an average accuracy, recall, and precision of 85%, 73%, and 75%, respectively. Keywords: Human Robot Interaction · Virtual Reality · Unsupervised Classification
1 Introduction

Human-robot interaction (HRI) is a diverse field of research with huge economic impact. The collaborative robots market is estimated to reach over $1.43 billion by 2027, having a significant impact on the Gross Domestic Product (GDP) of various economies [1]. HRI is already used in manufacturing environments, space applications, rescue robotics, and more [2–4]. Human–robot interaction is defined as "the process of conveying human intentions and interpreting task descriptions into a sequence of robot motions complying with robot capabilities and working requirements" [5]. In manufacturing, industrial robots, equipped with different sensors, can be adapted to perform many different industrial tasks [6]. HRI in manufacturing faces challenges such as ensuring safety and efficiency. To address these challenges, it is critical for robots to understand humans' intentions and the underlying context [7]. As demonstrated in Fig. 1, HRI can be classified into the following three classes: First, Human–Robot Coexistence, which is defined as the capability of sharing the workspace between humans and robots without requiring mutual contact or coordination of actions © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 3–11, 2024. https://doi.org/10.1007/978-3-031-38241-3_1
and intentions [8]. Second, Human–Robot Cooperation, in which humans and robots work on the same goal and occupy the same space simultaneously. The cooperation requires advanced technologies and techniques for collision detection and collision avoidance [9]. Third, Human–Robot Collaboration, in which complex tasks are performed with direct human interaction and explicit contact between human and robot [6].
Fig. 1. The three forms of human-robot interaction [7].
Humans and robots collaborating on a common objective form a team with complementary skills, in which an agreed-upon strategy is necessary for effective collaboration among all parties. Humans and robots also need to be aware of the intents of the other team members. Based on these intents, a robot can design its own behaviors that will ultimately result in the achievement of the shared objective, through perceiving and understanding the surroundings as well as making decisions and planning ahead [10]. The goal of this study is to create a virtual platform that enables safe yet realistic context-aware human-robot collaboration. To this end, this study develops a Virtual Reality (VR) platform for safe data collection, while proposing an unsupervised classification framework through the integration of dynamic time warping (DTW) and the k-means algorithm for intent prediction. This enables robotic agents to predict and understand human intents from their physical cues. In the remainder of the paper, Sect. 2 reviews the literature of the topic. Section 3 provides the methodology of the research. Results and discussion are provided in Sect. 4, and Sect. 5 concludes the paper.
2 Literature Review

The literature of human intention recognition (classification) is diverse and has caught the attention of researchers from both industry and academia [11]. Intention recognition is also known as activity recognition, plan recognition, goal recognition, or behavior recognition. In this work, we define intention recognition as the process of determining what an observed agent intends to do in the immediate future [12]. Recognizing the goal of the observed agent is an important aspect of human-robot interaction, as this understanding enables effective interaction as well as proactive behavior modification in order to prevent accidents or resolve issues that may arise.
Human intention/activity recognition based on images or videos has received plenty of attention recently in the field of computer vision. The occlusion problem, however, reduces the accuracy of visual-based recognition [13]. Wearable devices, on the other hand, immediately sense human body movements, providing real-time information on the body status. Additionally, a variety of low-cost wearable gadgets are available on the market and are frequently employed in intention recognition [14]. Among the studies that addressed the problem of human intention recognition, Stiefmeier et al. [15] utilized ultrasonic sensors for worker activity recognition using a Hidden Markov Model. In their following study, the authors proposed a string-matching based classification approach using multiple sensors for recognizing worker activities in a manufacturing setting [16]. Koskimaki et al. classified five activities for industrial assembly lines using a wrist-worn Inertial Measurement Unit (IMU) sensor and a K-Nearest Neighbor model [17]. Using inputs from a smartwatch combined with an IMU sensor, Maekawa et al. [18] proposed an unsupervised approach for lead time estimation of manufacturing activities. Zhu et al. [19] addressed human intention recognition by designing a hidden Markov based recognition algorithm to classify hand gestures using an inertial sensor worn on the finger of the subject. Zhu et al. tackled motion intention recognition using an inertial sensor and a deep convolutional neural network (CNN) to extract discriminant features from the temporal gait period [20]. Sun et al. used muscle electrical signals and joint angle signals as motion data and utilized the K-Nearest Neighbor algorithm to identify four gait motion modes including walking naturally, climbing stairs, descending stairs, and crossing obstacles [21]. Wen and Wang [22] proposed an intention recognition algorithm based on multimodal spatiotemporal feature fusion using the data collected by multimodal sensors. 
Masoud et al. [23] proposed a task recognition framework to identify ongoing tasks in pseudo real-time using a pair of data gloves for grafting operations. This study offers a distinct contribution to the existing body of knowledge and provides a fresh and unique approach to the study of human-robot interaction by integrating the advantages of a virtual immersive platform and an unsupervised classification for context aware HRI.
3 Methodology

As displayed in Fig. 2, our proposed framework for intention classification consists of two main phases: first, creating an immersive virtual platform, which is used for safe data collection, and second, unsupervised classification using K-means and dynamic time warping. In the data acquisition and processing phase, we develop an interactive physics-based model via the Unity 3D game engine. The developed model is integrated with the HTC Vive Pro Eye arena bundle and the Leap Motion controller to create the immersive virtual platform for data collection. The data collected through this immersive platform (Unity) are then cleaned and processed using open-source Python libraries. In the training and intention classification phase, the processed data are used to train our unsupervised intention classification model.
Fig. 2. Our proposed intention classification framework.
3.1 Immersive Virtual Reality Platform

To build the immersive platform, the HTC Vive Pro Eye arena bundle is used to model the immersive environment, while the Leap Motion controller replaces the typical controllers to enable users to interact with the virtual environment using their hands. The Leap Motion controller captures hand gestures and movements with high accuracy using optical hand-tracking sensors. It can be mounted on the HTC VIVE head-mounted display and communicates with the system via a USB cable. HTC VIVE also connects to the computation unit (Dell XPS 15 7590) via HDMI cables. HTC VIVE connects to Unity through the VIVE port API package, while Leap Motion relies on its own package, the UltraLeap plugin. The Unity platform allows us to create a safe, realistic, physics-based manufacturing shop floor, where users can interact with equipment (Fig. 3).
Fig. 3. (a) The VR model, developed via the Unity 3D game engine; (b) the experimental setup for data collection.
Our dataset of activities is established based on the literature of the topic and the available datasets (e.g., UCI, WISDM, GAMI), and includes activities such as standing idle (ST), walking (WA), bending (BE), and sitting (SI). Then, subjects are recruited and asked to stand in the middle of a designated zone, wear the immersive technology, and perform the assigned activity in a natural way. The data are collected while the subjects perform the tasks, at a sampling interval of 0.2 s. Next, the collected observations are imputed and normalized, and outliers are dropped.
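The chapter does not specify how the imputation, normalization, and outlier steps are implemented; the sketch below shows one common recipe (forward-fill imputation, z-score outlier rejection, min-max scaling). The helper names and the threshold are illustrative assumptions, not the authors' code:

```python
import statistics

def forward_fill(values):
    """Impute missing samples (None) with the last observed value."""
    out, last = [], None
    for v in values:
        last = v if v is not None else last
        out.append(last)
    return [v for v in out if v is not None]  # drop any leading gap

def drop_outliers(values, z=3.0):
    """Discard samples whose z-score magnitude exceeds the threshold."""
    mu, sd = statistics.fmean(values), statistics.pstdev(values)
    if sd == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sd <= z]

def normalize(values):
    """Min-max normalization of one feature channel to [0, 1]."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]
```

In practice these steps would be applied per feature channel (e.g., each headset position axis) before the series are passed to the clustering stage.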
3.2 Unsupervised Classification

The proposed unsupervised classification integrates k-means clustering and dynamic time warping. The K-means algorithm is selected due to its scalability, guarantee of convergence, and ease of generalization. To the best of our knowledge, this is the first time this integration is used in the intention recognition literature.

K-means Algorithm
The K-means algorithm addresses the problem of clustering m elements (with n features) into k groups by minimizing a cost function (J), shown in (1), which is usually the sum of each element's n-dimensional Euclidean distance to its closest group centroid. The number of constituents of the jth group is m^(j). The n-dimensional feature values of the elements inside a group determine the centroids. The K-means algorithm recalculates the group centroids, updates the cost function, and assigns all other elements to the closest group. K-means repeatedly performs these tasks until achieving the lowest cost function [24].

$$J(C) = \sum_{j=1}^{k} \sum_{i=1}^{m^{(j)}} \left\| x_i^{(j)} - C_j \right\|^2 \tag{1}$$
where X = {x_1, x_2, …, x_m} and C = {c_1, c_2, …, c_k} are the sets of elements and centroids. Typically, Euclidean distance is the foundation of classification (or clustering). However, such a simplistic measure is not applicable to observations that vary in size. As Euclidean distance is susceptible to even the slightest deviations in the size of the compared entities (the time axis in the case of time series), DTW replaces Euclidean distance in this study.

Dynamic Time Warping
Introduced by Berndt and Clifford [25], DTW provides a one-to-many match rather than being restricted to one-to-one matches. Given two members of lengths f and d, u_i (i = 1, 2, 3, …, f) and v_j (j = 1, 2, 3, …, d), a matrix S_{i,j} is calculated as follows [25]:

$$S_{i,0} = S_{0,j} = 0 \tag{2}$$

$$S_{1,1} = (u_1 - v_1)^2 \tag{3}$$

$$S_{i,j} = \min\left(S_{i-1,j},\; S_{i,j-1},\; S_{i-1,j-1}\right) + (u_i - v_j)^2 \tag{4}$$
where the DTW distance is the minimum value of the sums of (u_i − v_j)², calculated along several paths. The path that minimizes this sum is typically a warped curve. Dynamic time warping is a well-established measure of difference between sequences that vary in length. The observations collected in this study also have different lengths depending on the activities and how they are performed by the subjects. Here, we start by collecting historical data and labeling the historical time series with the gestures generating them. Then, for any incoming time series read by the sensors, DTW takes place to collect time series of similar shapes. Finally, cluster centroids will
be computed (or recomputed) with respect to the reported DTW distances. Here, each cluster centroid averages the subset of time series assigned to it within the measured DTW space. As a result, instead of a point, each centroid is a time series taking the average shape of the time series assigned to the cluster. The proposed framework labels new time series by minimizing the DTW distance between the incoming time series and the cluster centroids.
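As an illustrative sketch (not the authors' implementation), the DTW recurrence of Eqs. (2)–(4) and the nearest-centroid labeling step can be written as follows. One assumption to flag: the zero borders of Eq. (2) are replaced here by the common convention of infinite borders with S(0,0) = 0, which forces both sequences to be matched end to end:

```python
from math import inf

def dtw_distance(u, v):
    """DTW per the recurrence of Eq. (4), with the squared pointwise cost of
    Eq. (3). Borders are initialized to infinity (except S[0][0] = 0) so the
    two sequences are aligned end to end."""
    f, d = len(u), len(v)
    S = [[inf] * (d + 1) for _ in range(f + 1)]
    S[0][0] = 0.0
    for i in range(1, f + 1):
        for j in range(1, d + 1):
            S[i][j] = (u[i - 1] - v[j - 1]) ** 2 + min(
                S[i - 1][j], S[i][j - 1], S[i - 1][j - 1])
    return S[f][d]

def label_series(series, centroids):
    """Assign a new time series to the cluster whose centroid minimizes DTW."""
    return min(range(len(centroids)),
               key=lambda k: dtw_distance(series, centroids[k]))
```

A full clustering loop would alternate this labeling step with recomputing each centroid as the average shape of its members in DTW space (e.g., via DTW barycenter averaging), which is omitted here for brevity.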
4 Results and Discussion

In this study, 5 participants performed the assigned activities multiple times and 72,216 data points were collected. During data pre-processing, 27,966 data points were discarded. The remaining data points formed 125 observations, where each observation contains 16 features and a variable number of time stamps (e.g., varying from 39 to 354). The features include headset forward vector (x, y, z), headset position (x, y, z), headset rotation (x, y, z), headset velocity (x, y, z), headset angular velocity (x, y, z), and timestamp. The processed observations are divided into train and test sets according to an 80/20 ratio. Four commonly used metrics, namely accuracy, precision, recall, and F-1 score, (5) to (8), are selected to evaluate the performance of our proposed unsupervised classification algorithm.

$$\text{Accuracy} = \frac{\text{TruePositive} + \text{TrueNegative}}{\text{TruePositive} + \text{TrueNegative} + \text{FalsePositive} + \text{FalseNegative}} \tag{5}$$

$$\text{Precision} = \frac{\text{TruePositive}}{\text{TruePositive} + \text{FalsePositive}} \tag{6}$$

$$\text{Recall} = \frac{\text{TruePositive}}{\text{TruePositive} + \text{FalseNegative}} \tag{7}$$

$$\text{F-1 Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{8}$$
Although accuracy quantifies the ratio of correctly identified classes (intentions here), it cannot handle imbalanced datasets. As a result, metrics such as precision, recall, and F-1 score are calculated to report the performance of our proposed model given our imbalanced dataset. While recall sheds light on the ratio of the relevant observations correctly classified by our trained models, precision represents the ratio of classified observations that are actually relevant. F-1 score, as the harmonic mean of recall and precision, combines these performance metrics into a single one. Relying on these metrics, we compared the performance of our proposed unsupervised classifier against the traditional k-means, as displayed in Fig. 4. As displayed in Fig. 4, our proposed model outperforms traditional k-means in accuracy, precision, recall, and F-1 score by 39% (i.e., 85%–46%), 42% (i.e., 75%–33%), 27% (i.e., 73%–46%), and 36% (i.e., 74%–38%) over the test set, respectively. The performance gap between the proposed method and traditional k-means can be attributed to the superiority of DTW in comparing time series and to the weakness of traditional k-means in handling high-dimensional data.
Fig. 4. Accuracy, precision, recall, and F-1 score of our proposed unsupervised classifier and the traditional k-means model.
The superiority of DTW in comparing time series is due to its one-to-many matching [27]. While Euclidean distance is vulnerable to even the smallest of distortions in time, DTW optimizes the fit by stretching and/or compressing the time axis over different intervals [25, 26, 27]. The second cause contributing to the poor performance of the traditional k-means is its inability to take advantage of a high range of features. While our proposed approach relies on the full information available in all 16 features, the traditional k-means can only process one-dimensional time series. For reporting the baseline model's performance, we trained 16 different traditional k-means models (corresponding to the 16 available features) and report the best performance (i.e., the model trained on the head rotation time series over the y axis) in Fig. 4.
5 Conclusion

As a fast-growing field, HRI has the potential of revolutionizing manufacturing and production systems by providing safe and efficient human-robot teams. To achieve that, the first step is developing platforms that are capable of making robots context aware and enabling them to learn about humans' intentions. In this work, we propose an unsupervised classification framework for intention recognition based on humans' body gestures and motions. The proposed framework is built upon the integration of the K-means and dynamic time warping algorithms. To train the proposed framework, an immersive VR platform is developed for safe and realistic data collection. Our proposed framework outperformed the traditional k-means and achieved an average accuracy, recall, precision, and F-1 score of 85%, 73%, 75%, and 74%, respectively. Our future research will focus on developing more intuitive and seamless interaction methods that enhance the overall user experience and foster a sense of trust and reliability between the human and robot through addressing safety issues. Acknowledgements. This work has been funded by the National Science Foundation under Grant No. 1950192.
References 1. GlobeNewswire, https://www.globenewswire.com/en/news-release/2020/08/25/2083218/0/ en/Collaborative-Robots-Market-Worth-1-43-Billion-by-2027-Growing-at-a-CAGR-of-226-from-2020-Pre-and-Post-COVID-19-Market-Opportunity-Analysis-and-Industry-Foreca sts-by-Meticulous-Re.html, last accessed 02 February 2023 2. Mourtzis, D., Angelopoulos, J., Panopoulos, N.: Closed-loop robotic arm manipulation based on mixed reality. Appl. Sci. 12(6), 2972 (2022) 3. Masoud, S., Zhu, M., Rickli, J., Djuric, A.: Challenges and future directions for extended reality-enabled robotics laboratories during COVID-19. Technol. Inter. Int. J. 23(1), 1–22 (2022) 4. Mourtzis, D.: Simulation in the design and operation of manufacturing systems: state of the art and new trends. Int. J. Prod. Res. 58(7), 1927–1949 (2020) 5. Fang, H.C., Ong, S.K., Nee, A.Y.C.: A novel augmented reality-based interface for robot path planning. Int. J. Intera. Desi. Manuf. (IJIDeM) 8(1), 33–42 (2013). https://doi.org/10.1007/ s12008-013-0191-2 6. Hentout, A., Aouache, M., Maoudj, A., Akli, I.: Human–robot interaction in industrial collaborative robotics: a literature review of the decade 2008–2017. Adv. Robot. 33(15–16), 764–799 (2019) 7. Jahanmahin, R., Masoud, S., Rickli, J., Djuric, A.: Human-robot interactions in manufacturing: A survey of human behavior modeling. Robot. Comput. Integr. Manuf. 78, 102404 (2022) 8. Andrisano, A.O., Leali, F., Pellicciari, M., Pini, F., Vergnano, A.: Hybrid Reconfigurable System design and optimization through virtual prototyping and digital manufacturing tools. Int. J. Interact. Des. Manuf. 6(1), 17–27 (2012) 9. Wang, N., Zeng, Y., Geng, J.: A brief review on safety strategies of physical human-robot interaction. In: ITM Web of Conferences, vol. 25, p. 1015 (2019) 10. Bauer, A., Wollherr, D., Buss, M.: Human–robot collaboration: a survey. Int. J. Humanoid Robot. 5(01), 47–66 (2008) 11. 
Turaga, P., Chellappa, R., Subrahmanian, V.S., Udrea, O.: Machine recognition of human activities: A survey. IEEE Trans. Circuits Syst. Video Technol. 18(11), 1473–1488 (2008) 12. Vellenga, K., Steinhauer, H.J., Karlsson, A., Falkman, G., Rhodin, A., Koppisetty, A.C.: Driver intention recognition: state-of-the-art review. IEEE Open J. Intell. Transp. Syst. (2022) 13. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308 (2017) 14. Tao, W., Leu, M.C., Yin, Z.: Multi-modal recognition of worker activity for human-centered intelligent manufacturing. Eng. Appl. Artif. Intell. 95, 103868 (2020) 15. Stiefmeier, T., Ogris, G., Junker, H., Lukowicz, P., Troster, G.: Combining motion sensors and ultrasonic hands tracking for continuous activity recognition in a maintenance scenario. In: 2006 10th IEEE international symposium on wearable computers, pp. 97–104 (2006) 16. Stiefmeier, T., Roggen, D., Ogris, G., Lukowicz, P., Tröster, G.: Wearable activity tracking in car manufacturing. IEEE Pervasive Comput. 7(2), 42–50 (2008) 17. Koskimaki, H., Huikari, V., Siirtola, P., Laurinen, P., Roning, J.: Activity recognition using a wrist-worn inertial measurement unit: A case study for industrial assembly lines. In: 2009 17th mediterranean conference on control and automation, pp. 401–405 (2009) 18. Maekawa, T., Nakai, D., Ohara, K., Namioka, Y.: Toward practical factory activity recognition: unsupervised understanding of repetitive assembly work in a factory. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 1088– 1099 (2016)
19. Zhu, C., Sun, W., Sheng, W.: Wearable sensors based human intention recognition in smart assisted living systems. In: 2008 International Conference on Information and Automation, pp. 954–959 (2008) 20. Zhu, L., et al.: A novel motion intention recognition approach for soft exoskeleton via IMU. Electronics 9(12), 2176 (2020) 21. Sun, B., Cheng, G., Dai, Q., Chen, T., Liu, W., Xu, X.: Human motion intention recognition based on EMG signal and angle signal. Cogn. Comput. Syst. 3(1), 37–47 (2021) 22. Wen, M., Wang, Y.: Multimodal sensor motion intention recognition based on threedimensional convolutional neural network algorithm. Comput. Intell. Neurosci. 2021 (2021) 23. Masoud, S., Chowdhury, B., Son, Y.J., Kubota, C., Tronstad, R.: A dynamic modelling framework for human hand gesture task recognition. arXiv preprint arXiv:1911.03923 (2019) 24. Chen, C.H., Lin, W.Y., Lee, M.Y.: The Applications of K-means Clustering and Dynamic Time Warping Average in Seismocardiography Template Generation. In: 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1000–1007 (2020) 25. Berndt, D.J., Clifford, J.: Using dynamic time warping to find patterns in time series. KDD workshop 10(16), 359–370 (1994) 26. Ida, Y., Fujita, E., Hirose, T.: Classification of volcano-seismic events using waveforms in the method of k-means clustering and dynamic time warping. J. Volcanol. Geotherm. Res. 429, 107616 (2022) 27. Masoud, S., Mariscal, N., Huang, Y., Zhu, M.: A sensor-based data driven framework to investigate PM 2.5 in the greater detroit area. IEEE Sens. J. 21(14), 16192–16200 (2021)
Environment for the Design and Automation of New Cable-Driven Parallel Robot Architectures

Josué Rivera, Julio Garrido(B), Enrique Riveiro, and Diego Silva
Automation and System Engineering Department, University of Vigo, Vigo, Spain [email protected]
Abstract. Environments for the simulation and testing of automation play an important role in developing new robot and machine architectures. This paper presents a design and automation environment to study trajectory control for new Cable-Driven Parallel Robot (CDPR) architectures, for instance, CDPRs with an unusual number of cables or different motor locations in the robot frame. To test the environment's capabilities, an architecture of a planar under-constrained CDPR was designed, simulated, and implemented using standard industrial hardware. Both the simulated model and the industrial prototype ran the same trajectories to determine the time delay and the position error between them. The tests demonstrated that the simulated model of the CDPR reproduces the trajectories of the equivalent industrial prototype with a maximum deviation of 0.35% under loading and different speed conditions, despite the time delays produced by data transmission and the non-deterministic communication protocols used to connect the industrial automation controller with the simulated model. The results show that the environment is suitable for trajectory control and workspace analysis of new CDPR architectures under different dynamic conditions.

Keywords: CDPR · CDPR architecture · Design · Automation
1 Introduction

Cable-driven parallel robots (CDPRs) are characterized by employing cables to control the position and orientation of the end-effector. Since their proposal by Landsberger and Sheridan in the mid-1980s [1], the evolution of CDPRs has been continuous, from early developments such as the SkyCam, RoboCrane and FAST to more recent ones such as CoGiRo, IPAnema and FASTKIT [2]. Unlike conventional parallel manipulators, which can exert both tensile and compressive forces on loads, CDPRs can only exert tensile forces due to the use of cables. However, cables provide CDPRs with several advantages compared to traditional configurations, such as lower inertia, higher payload-to-weight ratio, higher dynamic performance, larger workspace, and higher modularity and reconfigurability [3].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 12–20, 2024. https://doi.org/10.1007/978-3-031-38241-3_2
The literature classifies CDPRs according to several criteria, including the workspace and the ratio between the number of cables m and the degrees of freedom (DoF) n. According to the workspace, CDPRs are classified as either planar or spatial, depending on whether they move in the plane or in space. According to the relationship between m and n, CDPRs are classified as fully constrained or under-constrained. A CDPR is considered fully constrained when it is impossible to change its position and orientation without changing the length of its cables. In general, this architecture requires at least one more cable than degrees of freedom (m ≥ n + 1) [4]. Any architecture that is not fully constrained is considered under-constrained.

The workspace of a CDPR is limited by many constraints beyond the dimensions of the robot's structure [5]: a controllable end-effector, positive cable tension between a minimum and a maximum value to prevent the cables from breaking or sagging, and cable collision avoidance.

Despite the advantages of such robots in many different scenarios [6], their wider use has been limited, especially in industry, due to obstacles such as the lack of commercialized systems by robot manufacturers and the need to meet safety and operational requirements involving standard industry controllers. Their use would benefit from environments that allow flexible and safe start-up, as industrial controller manufacturers do not provide universal tools for these configurations and all their variants. Tools to test the kinematics and dynamics of new robots without the need to build a physical model to validate the concept are useful for the design and start-up phases. Several environments have contributed to the modelling, analysis, control and simulation of CDPRs, such as WireX, CASPR and CDPR Studio [7].
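The m-versus-n rule stated above can be sketched as a small helper. This is an illustrative heuristic only; as the text notes, the full definition of a fully constrained CDPR also depends on geometry and tension feasibility, not only on the cable count.

```python
def classify_cdpr(m: int, n: int) -> str:
    """Heuristic CDPR classification from the cable/DoF ratio.

    m: number of cables, n: degrees of freedom.
    Applies the general rule m >= n + 1 for fully constrained
    architectures; anything else is treated as under-constrained.
    """
    if m >= n + 1:
        return "fully constrained"
    return "under-constrained"

# The planar robot studied in this paper has 2 cables and 2 DoF:
kind = classify_cdpr(2, 2)  # under-constrained
```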
However, none of these tools allows the user to govern the simulation from the industrial controller that will be used in the real prototype, testing different control algorithms and path planners against it. To that end, this article presents an environment to design and automate new CDPR architectures (for instance, CDPRs with an unusual number of cables or different motor locations in the robot frame). To validate the environment's capability to replicate the motion trajectories of a CDPR considering the effect of gravity and tool weight, an under-constrained 2-DoF CDPR is designed and simulated within the environment and its behavior is compared with a prototype implemented with industry-standard hardware.

The paper is organized as follows. Section 2 presents the kinematic model and the static equilibrium workspace analysis used to design the under-constrained 2-DoF CDPR. Section 3 describes the validation environment for the control of new CDPR architectures: simulation software, industrial controller application and the hardware for the validation. Section 4 presents the results obtained by comparing the behavior of the simulated and experimental models. Finally, Sect. 5 outlines the conclusions.
2 Kinematic Modelling and Workspace Study of the CDPR

To obtain the 2-DoF CDPR kinematic model (Fig. 1), the cables were considered mass-less and non-elastic. In the model, O represents the absolute frame of the system, p is the position vector of the mobile frame P, located in the center of the end effector, with respect to the absolute frame, bi is the position vector of the attachment
point Bi with respect to the mobile frame, ai is the position vector of the coiling system output point Ai of cable i, and li is the length of the cable, as shown in Fig. 2. Thus, the general expression that describes the vector closed loop for the CDPR is:

ai + li = p + R·bi    (1)
where R is the rotation matrix with respect to the absolute frame. The CDPR in this study has two cables and two DoF (each motor controls one cable), so it is considered under-constrained.
Fig. 1. 2-DoF CDPR schematic view.
Fig. 2. Kinematics model of CDPR.
A planar under-constrained CDPR can only control its position, not its orientation; therefore, R becomes the identity matrix. Thus, the lengths of the cables (backward kinematics) are:

l1 = √((px − Bx/2 − A1x)² + (py + By/2 − A1y)²)    (2)

l2 = √((px + Bx/2 − A2x)² + (py + By/2 − A2y)²)    (3)

where Bx and By are the width and height of the end effector, and A1x, A1y, A2x and A2y are the position coordinates of the coiling system outputs (note that A1y and A2y are equal). The forward kinematics is a system with as many equations as cables and as many unknowns as degrees of freedom. For the 2-DoF CDPR, the forward kinematic solution is obtained by solving Eq. 2 and Eq. 3 for px and py:

px = (A1x² − A2x² + Bx·A1x + Bx·A2x − l1² + l2²) / (2(A1x − A2x + Bx))    (4)

py = A1y − √(l1² − (px − Bx/2 − A1x)²) − By/2    (5)
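Equations 2–5 can be checked numerically with a short sketch. The formulas follow the text directly; the anchor coordinates A1, A2 and the test position are illustrative values chosen here (anchors at the top corners of a 1500 mm frame), not taken from the authors' setup.

```python
import math

def backward_kinematics(px, py, Bx, By, A1, A2):
    """Cable lengths l1, l2 from the end-effector position (Eqs. 2-3)."""
    A1x, A1y = A1
    A2x, A2y = A2
    l1 = math.hypot(px - Bx / 2 - A1x, py + By / 2 - A1y)
    l2 = math.hypot(px + Bx / 2 - A2x, py + By / 2 - A2y)
    return l1, l2

def forward_kinematics(l1, l2, Bx, By, A1, A2):
    """End-effector position px, py from the cable lengths (Eqs. 4-5)."""
    A1x, A1y = A1
    A2x, A2y = A2
    px = (A1x**2 - A2x**2 + Bx * A1x + Bx * A2x - l1**2 + l2**2) / (
        2 * (A1x - A2x + Bx))
    py = A1y - math.sqrt(l1**2 - (px - Bx / 2 - A1x) ** 2) - By / 2
    return px, py

# Illustrative anchors at the top corners of a 1500 mm frame,
# 120 mm square end effector:
A1, A2 = (0.0, 1500.0), (1500.0, 1500.0)
Bx = By = 120.0
l1, l2 = backward_kinematics(700.0, 600.0, Bx, By, A1, A2)
px, py = forward_kinematics(l1, l2, Bx, By, A1, A2)  # recovers (700, 600)
```

The backward/forward round trip is a quick consistency check that Eq. 4 and Eq. 5 really invert Eqs. 2 and 3.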
The static equilibrium workspace analysis returns the set of positions and orientations that the end effector can reach statically (only considering the gravity effects)
[8]. According to the Newton-Euler law, the equation that relates the wrenches on the end-effector (forces and moments) and the cable tensions can be written as:

A·T = B    (6)
where A is the transpose of the Jacobian matrix, −J^T, T is the cable tension matrix, and B is the matrix of wrenches generated by the cable actuation:

B = [fx fy fz Mx My Mz]^T    (7)

T = [T1 T2 … Tn]^T    (8)

A = −J^T = [ u1 u2 … un ; c1×u1 c2×u2 … cn×un ]    (9)
where ui is the unit vector along the cable direction, and ci is the position vector of the cable attachment point relative to the end effector center. The tension in each cable can be computed from these relations and is then compared to the lower and upper tension limits imposed by either the maximum tension supported by the cable or the maximum torque available in the motors.

The base frame of the 2-DoF CDPR used is a square of 1500 mm per side; the end effector is a square of 120 mm per side and weighs 1 kg. The tension limits are 0 ≤ T ≤ 20 N due to the maximum torque of the motors. The tension map in the cables and the static equilibrium workspace of the CDPR are shown in Fig. 3a and Fig. 3b, respectively.
Fig. 3. a) Tension map in the cables. b) Static equilibrium workspace of 2-DoF CDPR.
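For the planar two-cable case, Eq. 6 reduces to a 2×2 linear system in the cable tensions. The sketch below is a simplified static-equilibrium check, assuming a pure force balance (point-mass treatment of the 1 kg end effector, gravity only, no moments) and illustrative anchor coordinates; it is not the authors' workspace-analysis code.

```python
import math

def cable_tensions(p, Bx, By, A1, A2, mass=1.0, g=9.81):
    """Solve A·T = B for a planar 2-cable CDPR in static equilibrium.

    Columns of A are the unit vectors u_i from the end-effector
    attachment points toward the coiling-system outputs; B is the
    weight the cables must support (forces only, no moments)."""
    px, py = p
    # Cable attachment points at the top corners of the end effector.
    B1 = (px - Bx / 2, py + By / 2)
    B2 = (px + Bx / 2, py + By / 2)
    u = []
    for Ai, Bi in ((A1, B1), (A2, B2)):
        dx, dy = Ai[0] - Bi[0], Ai[1] - Bi[1]
        norm = math.hypot(dx, dy)
        u.append((dx / norm, dy / norm))
    (u1x, u1y), (u2x, u2y) = u
    # Cramer's rule on [u1 u2]·[T1 T2]^T = [0, m·g]^T.
    det = u1x * u2y - u2x * u1y
    T1 = (0.0 * u2y - u2x * mass * g) / det
    T2 = (u1x * mass * g - 0.0 * u1y) / det
    return T1, T2

A1, A2 = (0.0, 1500.0), (1500.0, 1500.0)  # illustrative anchors [mm]
T1, T2 = cable_tensions((750.0, 600.0), 120.0, 120.0, A1, A2)
# Check the pose against the 0..20 N tension limits from this section:
feasible = all(0.0 <= T <= 20.0 for T in (T1, T2))
```

Sweeping such a feasibility check over a grid of positions yields a tension map and a static equilibrium workspace of the kind shown in Fig. 3.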
3 Design and Automation Environment for Trajectory Control of New CDPR Architectures

Figure 4 shows the diagram of the design and automation environment. It allows the user to test trajectories by establishing a control loop between the controller and either the simulated model (loop A) or the industrial prototype (loop B).
Fig. 4. Diagram of the design environment for trajectory control of new CDPR architectures.
In the framework of this paper, the industrial automation software TwinCAT 3 from Beckhoff (TwinCAT onwards) is used to control the new CDPR architectures. This software implements the trajectory control strategy used to test the motion of both the simulation model and the industrial prototype, employing the kinematic modelling developed in Sect. 2. The simulation model was developed in CoppeliaSim, a robotic simulation environment that allows the user to control the behavior of each model object through scripts from inside the simulator or externally through an application programming interface (API) [9].

The API connects the simulated model with the industrial controller using Python as a gateway. It implements a communication channel between CoppeliaSim and TwinCAT using the WebSocket protocol on top of TCP and the ADS (Automation Device Specification) protocol on top of TCP/IP [10], respectively. The process starts by creating a port for remote API communication in CoppeliaSim and establishing the communication between CoppeliaSim and TwinCAT through a Python script. Once the connection is established, Python acquires the CoppeliaSim object handles for control from TwinCAT and executes the motion task.

For the 2-DoF CDPR, a sequence of rigid bodies and joints was used, without modelling the coiling system or cable sag, as shown in Fig. 5a. The pulleys were modeled using passive rotational joints and rigid bodies (A in Fig. 5a). Each cable was modeled as a sequence of a prismatic joint, a rigid body, a prismatic joint, and a rigid body. The first prismatic joint changes the cable length (B in Fig. 5a), operating as an active joint and giving the mechanism one DoF per cable. The second prismatic joint avoids compressive stresses on the end effector (C in Fig. 5a), acting as a free joint in one direction (compression) and locking in the other (traction), as in reference [11].
To attach the end of the cable to the end-effector, a construction similar to the pulley is used (D in Fig. 5a). Finally, Fig. 5b shows the prototype, which uses a Beckhoff CX5130 industrial PC with distributed periphery connected by an EtherCAT fieldbus to compact servo motor terminals (EL7211) and two compact servo motors with holding brakes (AM8112). The CDPR was programmed in the Structured Text language according to IEC 61131-3.
Fig. 5. Under-constrained 2-DoF CDPR a) Simulation model in CoppeliaSim. b) Prototype.
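The Python gateway between TwinCAT and CoppeliaSim described in this section can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ADS symbol names, the AMS Net ID, the WebSocket port, and the JSON message consumed by the CoppeliaSim-side script are all hypothetical placeholders, and the `pyads` and `websocket-client` packages are assumed to be installed.

```python
import json

def build_joint_command(axis_positions):
    """Pack axis setpoints read over ADS into a message for the
    CoppeliaSim-side script (hypothetical message format)."""
    return json.dumps({"cmd": "setJointTargets",
                       "targets": list(axis_positions)})

def run_gateway(ams_net_id="5.12.34.56.1.1", ws_url="ws://localhost:23050"):
    """Control loop TwinCAT -> Python -> CoppeliaSim. Not invoked here:
    it needs a live PLC and simulator. AMS Net ID, WebSocket URL and
    symbol names are placeholders for a concrete TwinCAT project."""
    import pyads                              # ADS on top of TCP/IP
    from websocket import create_connection   # websocket-client package
    plc = pyads.Connection(ams_net_id, 851)   # 851 = TwinCAT 3 PLC port
    plc.open()
    ws = create_connection(ws_url)
    try:
        while True:
            # Read the active-joint setpoints computed by the controller.
            q1 = plc.read_by_name("MAIN.fAxis1SetPos", pyads.PLCTYPE_LREAL)
            q2 = plc.read_by_name("MAIN.fAxis2SetPos", pyads.PLCTYPE_LREAL)
            ws.send(build_joint_command([q1, q2]))
    finally:
        ws.close()
        plc.close()

# The pure packing helper can be exercised without any hardware:
msg = build_joint_command([100.0, 250.0])
```

Because both ADS and WebSocket links are non-deterministic, a loop of this shape inevitably introduces the time lags discussed in Sect. 4.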
4 Results

Two tests were performed to check the behavior of the designed model with the end effector in loaded condition at two different velocities, 100 mm/s and 1000 mm/s. The first test compared the simulated model and industrial prototype axis positions against the target axis positions to calculate the time delay in the execution of trajectories, as visualized in Fig. 6, which plots the axis 1 trajectory of the model and the prototype at high speed. Table 1 shows the time delay results from this test. Figure 6a indicates that the simulated axes reproduce the target trajectory and that their motion corresponds to that of the real prototype axes (Fig. 6b), despite the relative time delay between the systems. The 20 ms delay in the industrial prototype (Table 1) is due to the synchronous-deterministic communication of the EtherCAT fieldbus plus the processing time cycle (with a typical controller cycle time of 10 ms). The significantly larger delay for the axes of the simulated environment is caused by the use of non-deterministic communications: ADS communication between the industrial controller and the Python
Fig. 6. Time delay between the position of the simulated and prototype axes with respect to the target position at high speed. a) Simulated axis 1. b) Prototype axis 1.
script, and WebSocket communication between the Python script and CoppeliaSim. Moreover, as these are non-synchronous, they cause an additional time lag between the two axes.

Table 1. Time delay of the simulated model and industrial prototype axis trajectories.

Velocity [mm/s] | Simulated axis 1 [ms] | Simulated axis 2 [ms] | Prototype axis 1 [ms] | Prototype axis 2 [ms]
100             | 150                   | 120                   | 20                    | 20
1000            | 130                   | 110                   | 20                    | 20
The second test evaluated the simulated model and industrial prototype end-effector positions against the target position while executing a square trajectory, to observe differences in the model and prototype dynamics due to the end-effector weight. The position error results with respect to the target position at high speed are given in Fig. 7. Table 2 shows the average target positions and the average positions of the simulated model and industrial prototype on both the upper and lower horizontal trajectories at different velocities. These results show that the two systems have a position error difference of less than 0.06% at low speed and 0.35% at high speed. These position errors indicate the capability of the environment to reproduce the dynamics of the 2-DoF CDPR with a tolerance of less than 5.52 mm in the robot workspace presented in Sect. 2. Therefore, the results show that the environment is suitable for developing trajectory control strategies and workspace analysis under different dynamic conditions.
Fig. 7. Position error of the simulated and prototype axes with respect to the target position at high speed. a) Simulated axes. b) Prototype axes.
Table 2. Position error of the simulated model and industrial prototype on the horizontal trajectories.

Velocity [mm/s] | Target [mm] | Simulated [mm] | Error [%] | Target [mm] | Prototype [mm] | Error [%] | Error Diff. [%]
100 (upper)     | 947.613     | 946.016        | 0.168     | 947.613     | 948.670        | 0.112     | 0.057
100 (lower)     | 749.578     | 750.183        | 0.081     | 749.582     | 748.844        | 0.098     | -0.017
1000 (upper)    | 948.046     | 945.437        | 0.275     | 948.035     | 945.879        | 0.227     | 0.048
1000 (lower)    | 749.594     | 750.510        | 0.122     | 749.600     | 746.090        | 0.468     | -0.346
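The error columns of Table 2 can be reproduced with a one-line definition. The sketch below assumes error [%] = |measured − target| / target × 100, which matches the tabulated values.

```python
def position_error_pct(target, measured):
    """Relative position error in percent, per Table 2's convention."""
    return abs(measured - target) / target * 100.0

# Upper horizontal trajectory at 1000 mm/s (values from Table 2):
sim_err = position_error_pct(948.046, 945.437)    # ~0.275 %
proto_err = position_error_pct(948.035, 945.879)  # ~0.227 %
error_diff = sim_err - proto_err                  # ~0.048 %
```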
5 Conclusions

The main contribution of this paper is a simulation environment for testing different control algorithms on new CDPR architectures. Its main feature is that the kinematics and dynamics are simulated, so no physical prototype is needed, while the control being evaluated is executed on the final industrial physical controllers. An under-constrained 2-DoF CDPR was designed, simulated, and implemented using industry-standard hardware to test the designed CDPR and the trajectory control performance by comparing the simulated model and the industrial prototype. The tests have shown that the simulated CDPR reproduces the end-effector movements of the prototype with a maximum deviation of 0.35% under loading conditions and at different speeds, despite the time delay between the two systems. These features make the environment an appropriate tool for motion and workspace studies of new CDPR architectures under different dynamic conditions. In addition, it allows such architectures to be validated against an industrial hardware robot by translating the design inputs (workspace, movements, and load limits) into the robot's structure, motor and sensor, and end-effector manufacturing specifications.

The current version of the environment simulates CDPRs with mass-less and non-elastic cables. Future research aims to expand the environment's capabilities and improve the cable modelling to remove these simplifications and to consider cable sagging, in order to cover larger workspaces. Additionally, improvements in the communication layer will be sought to reduce the time delay between the prototype and the simulated model.

Acknowledgments. The work of Diego Silva has been supported by the 2023 predoctoral grant of the University of Vigo (00VI 131H 6410211).
References

1. Zhang, Z., et al.: State-of-the-art on theories and applications of cable-driven parallel robots. Front. Mech. Eng. 17, 37 (2022)
2. Hussein, H., Santos, J.C., Izard, J.-B., Gouttefarde, M.: Smallest maximum cable tension determination for cable-driven parallel robots. IEEE Trans. Robot. 37, 1186–1205 (2021)
3. Qian, S., Zi, B., Shang, W.-W., Xu, Q.-S.: A review on cable-driven parallel robots. Chin. J. Mech. Eng. 31(1), 1–11 (2018). https://doi.org/10.1186/s10033-018-0267-9
4. Gouttefarde, M., Bruckmann, T.: Cable-driven parallel robots. In: Ang, M.H., Khatib, O., Siciliano, B. (eds.) Encyclopedia of Robotics, pp. 1–14. Springer, Berlin (2022)
5. Tho, T.P., Thinh, N.T.: An overview of cable-driven parallel robots: workspace, tension distribution, and cable sagging. Math. Probl. Eng. 2022, 1–15 (2022)
6. Zarebidoki, M., Dhupia, J.S., Xu, W.: A review of cable-driven parallel robots: typical configurations, analysis techniques, and control methods. IEEE Robot. Autom. Mag. 29, 89–106 (2022)
7. McDonald, E., Beites, S., Arsenault, M.: CDPR Studio: a parametric design tool for simulating cable-suspended parallel robots. In: Gerber, D., Pantazis, E., Bogosian, B., Nahmad, A., Miltiadis, C. (eds.) CAAD Futures 2021. CCIS, vol. 1465, pp. 344–359. Springer, Singapore (2022). https://doi.org/10.1007/978-981-19-1280-1_22
8. Kumar, A.A., Antoine, J.-F., Zattarin, P., Abba, G.: Workspace analysis of a 4 cable-driven spatial parallel robot. In: Arakelian, V., Wenger, P. (eds.) ROMANSY 22 – Robot Design, Dynamics and Control, vol. 584, pp. 204–212. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-78963-7_27
9. Zhang, T., Shi, Y., Cheng, Y., Zeng, Y., Zhang, X., Liang, S.: The design and implementation of distributed architecture in the CMOR motion control system. Fusion Eng. Des. 186, 113357 (2023)
10. Beckhoff Automation GmbH & Co. KG: ADS-Communication. https://infosys.beckhoff.com/english.php?content=../content/1033/cx8190_hw/5091854987.html&id=
11. Zake, Z., Caro, S., Roos, A.S., Chaumette, F., Pedemonte, N.: Stability analysis of pose-based visual servoing control of cable-driven parallel robots. In: Pott, A., Bruckmann, T. (eds.) Cable-Driven Parallel Robots, pp. 73–84. Springer, Cham (2019)
Robot-Based Inspection of Freeform Components: Process Analysis and Challenges in Using a Lateral Scanning WLI

Jessica Ehrbar(B), Daniel Schoepflin, and Thorsten Schüppstuhl

Hamburg University of Technology, Institute of Aircraft Production Technology, Denickestraße 17, 21073 Hamburg, Germany [email protected]
Abstract. Vertical scanning white light interferometry is nowadays successfully used for automated inspection of combustion chambers in aircraft engines. Although they provide high-accuracy data, white light interferometers have the disadvantage of being comparatively slow sensors due to their small field of view and vibrational sensitivity. Using a lateral scanning white light interferometer instead of a vertical scanning one offers a reduction in inspection times by scanning larger areas at once. At the same time using the lateral scanning setup creates new challenges for the inspection process. In this paper, considering a fan blade of an aircraft engine as an example, a possible inspection setup is shown for reference and the challenges regarding the inspection process are investigated. Existing solutions for similar problems are analysed for their applicability. The biggest remaining challenge is seen in the path planning for the inspection, since generating a minimal number of viewpoints is critical for efficiently using a lateral scanning white light interferometer.
Keywords: robot guided · path planning · inspection system

1 Introduction and Related Work
Visual inspections are a key process in quality assurance and maintenance of long-service products. The necessary accuracy of the inspection process depends on the requirements for these products. Especially for safety-critical components (e.g. on aircraft engines), the requirements for the corresponding inspection processes are very strict. A promising nondestructive sensor for the automated inspection of such goods, with defects in the micrometer range, is the white light interferometer (WLI) [1].

The working principle of a WLI is based on constructive and destructive interference. To locate the maximum interference, and thus the actual distance of a surface to the WLI, the interference must be measured at different distances

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 21–28, 2024. https://doi.org/10.1007/978-3-031-38241-3_3
from the surface. This can also be considered as moving the surface through the coherence plane of the sensor. High-precision linear axes are usually used to realize this movement. The method generates a point cloud as an image of the surface. In the conventional vertical scanning WLI (VSWLI) setup, (see left side Fig. 1) the direction of the axis is aligned with the optical axis of the sensor.
Fig. 1. Schematic of a vertical scanning WLI (left) and a lateral scanning WLI (right), a: WLI, b: coherence plane, c: optical axis, d: measurement volume, e: object surface.
Since the development of commercially available VSWLIs that can be used in an industrial environment, WLI-based inspection systems have been developed. For example, in [2], Biro et al. use the WLI to inspect freeform components like boat propellers, while in [3] the WLI is utilized to inspect combustion chambers of aircraft turbines. In both setups, industrial robots handle the VSWLI for flexibility reasons. However, WLIs are highly sensitive to vibration [3]. Thus, using industrial robots to handle these sensors creates the necessity of awaiting the settling of kinematically induced vibrations after each repositioning of the robot; otherwise, measurements are generally noisy and error-prone. This procedure increases the inspection time drastically if large numbers of measurements are required, e.g. for the inspection of large components.

Another major disadvantage of VSWLIs is their field of view (FOV). Typical sensors have FOVs smaller than 1 cm², e.g. the heliInspect™ H8 with a FOV of 6.5 by 6.1 mm² to achieve an optical resolution of 12 μm [4]. Using optics for higher resolutions generally results in even smaller FOVs. Scanning large surfaces like combustion chambers or boat impellers with these measuring devices results in a high number of single measurements. According to [5], over 50,000 single measurements are necessary to inspect an entire aircraft engine combustion chamber. The number of repositioning movements through the robot, and
thus standby time because of vibration, is proportional to the number of necessary measurements. This leads to the conclusion that maximizing the measured area per robot configuration is the most relevant factor for reducing inspection times.

To reduce the effect of these VSWLI-related disadvantages and improve measurement times, WLIs can also be used in a lateral scanning setup. The use of a lateral scanning WLI (LSWLI) was first proposed by Olszak in [6]. In this specific setup, the direction of the optical axis is tilted by an angle α with respect to the surface normal (see right side of Fig. 1). The tilted WLI is moved parallel to the object's surface by the linear axis. Through this movement, surface points are measured through different areas of the WLI's FOV at changing distances. Thus, different interferences are measured for a single surface point, and these can be used to compute the actual distance to the WLI. A detailed explanation, comparison and discussion of both WLI setups is presented in [6].

Using the lateral scanning setup for automated inspection yields the opportunity to decrease the inspection time by measuring larger areas at once while maintaining a high accuracy [6]. Additionally, less stitching is required because of the larger measurement volume. This reduces the post-processing time and the errors in the data that arise through point cloud registration. One drawback of the lateral scanning mode is its smaller vertical scanning range [6]. In the vertical scanning mode, this range is limited by the movement parallel to the optical axis, while in the lateral scanning case it depends on the width of the FOV and the tilt angle with respect to the optical axis.

Some research on using LSWLIs for inspection tasks has already been done.
In [7], Munteanu developed a method to calibrate the tilt angle of the WLI's optical axis to the surface normal using the fringe patterns resulting from destructive and constructive interference. This reduces the error caused by changes in surface orientation or movement direction, but has so far only been developed for plane surfaces. Behrends et al. extended Munteanu's self-calibration method in [8] to cylindrical surfaces. Bahr et al. proposed using an LSWLI for the inspection of rotationally symmetrical components in [5] and performed first experiments to validate the applicability of the lateral scanning setup for these components.

However, none of the literature found covers the challenges an LSWLI imposes on a handling system for automated inspection, or the problems that arise in a typical inspection process. Such problems are expected, since changes in properties, like the severe reduction of the vertical scanning range and a variable FOV length, influence e.g. path planning approaches.

In this paper, challenges that arise in an inspection process from using an LSWLI instead of a VSWLI are identified and discussed. The use case of aircraft fan blade inspection is used to illustrate those discussion points. These components have high accuracy requirements on a micrometer scale, since their correct maintenance is crucial for an aircraft's flight safety and efficiency. Furthermore, fan blades are freeform components, which makes the inspection of their surfaces challenging for an automated inspection system and the corresponding process.
The next chapter describes the concept for a possible handling system, followed by Sect. 3, which discusses a corresponding inspection process focusing on the challenges that arise from using an LSWLI instead of a VSWLI. In the last chapter, a short conclusion is given and possibilities for future work are discussed.
2 Concept for the LSWLI Based Inspection System
We propose a system concept for the inspection of fan blades that consists of a robot and a measuring unit, which itself is composed of the WLI with the additional linear axis for the lateral scanning motion. A mock-up of a possible system is shown in Fig. 2. The measuring unit is fixed in space and the robot carries the device under test (DUT). A 6-DOF robot arm is preferred here due to its flexibility. Using the robot for handling the DUT facilitates inspecting almost all areas of a fan blade without the necessity of manual intervention.
Fig. 2. Mock-up of the proposed Inspection system concept.
For the repositioning of the DUT in front of the sensor, the absolute accuracy of the robot plays an important role. Industrial robot arms usually have good repeatability; nevertheless, their absolute accuracy is generally worse. Considering the small, stripe-like FOV of the LSWLI, errors in positioning can result in missed surface areas. Therefore, high-accuracy versions of robots are preferred. These do not solve the positioning problem entirely, and the robot's accuracy needs to be considered in the overlapping of measurements. Hence, it influences the path planning and the number of measurements necessary. Another solution is to use a highly accurate external reference system, like a laser tracker, to control the robot pose and reduce the positioning error to a minimum.
Process Analysis and Challenges in Using a Lateral Scanning WLI
25
Different from the use of a VSWLI, for an LSWLI a choice of the tilt angle α has to be made. Combined with the movement range of the linear axis, the tilt angle determines the measurement volume. Furthermore, the choice of the tilt angle is crucial for the measurement performance, as can be seen in the experimental results shown in [7] and [8]. Extending these results, a parameter study for finding a feasible range of tilt angles for inspecting fan blades is planned for future research.
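The geometric dependence of the vertical scanning range on the tilt angle can be sketched numerically. The relation below (range ≈ FOV extent × sin α) is our simplified reading of the setup in Fig. 1, not a formula from the cited works, and the 6.5 mm FOV dimension is taken from the heliInspect H8 figures quoted in the introduction.

```python
import math

def vertical_range(fov_mm, tilt_deg):
    """Approximate LSWLI vertical scanning range (assumed model):
    with the coherence plane tilted by alpha, surface heights
    spanning roughly fov_mm * sin(alpha) cross the plane during
    the lateral scan."""
    return fov_mm * math.sin(math.radians(tilt_deg))

# 6.5 mm FOV at a few candidate tilt angles:
ranges = {alpha: vertical_range(6.5, alpha) for alpha in (2, 5, 10)}
for alpha, r in ranges.items():
    print(f"alpha = {alpha:2d} deg -> vertical range ~ {r:.2f} mm")
```

Under this model, small tilt angles keep the vertical range well below a millimetre, which illustrates why the calibration-target and viewpoint-planning choices discussed next are so sensitive to the limited vertical range.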
3 Inspection Process Analysis and Identification of Challenges
In this chapter, the process challenges that arise from using an LSWLI instead of a VSWLI are investigated. The overall process is similar to the process for a VSWLI inspection, but several sub-processes need to be adapted. A typical process chain, similar to the one presented in [3], is visualized in Fig. 3.

The scanning step represents the actual scanning of the component, including the online calibration of the surface normals to the WLI's optical axis. As a fan blade has freeform surfaces, neither Munteanu's [7] self-calibration for planar surfaces nor Behrends et al.'s [8] calibration for cylindrical surfaces is applicable. Both strategies work by making use of the interference fringe frequencies and the specific definitions of the surface's curvature. Since the curvature of a freeform surface is not constant, as in the other two cases, the functions for the self-calibration require adaptations. As the robot is not moving while scanning, the synchronization of the robot position and the sensor data is limited to reporting the robot position to a computer and saving it together with the sensor data. Under the assumption that a working LSWLI is available, including such calibration formulations and the processing of the raw interference data into point clouds, the scanning step is not further investigated below.

In the following, the offline calibration, path planning and defect detection steps of the process are described briefly. Problems that arise from the use of an LSWLI are explained and possible solutions are discussed.
Fig. 3. Schematic of a typical inspection process.
3.1 Calibration

The first step of the inspection process is the calibration of the WLI. The term calibration is used here for the calculation of the transformation between
frames. For the WLI to the robot, this can also be described as tool center point calibration with a static tool and a movable workpiece. For a VSWLI, known calibration approaches are applicable, like the two-step approach described in [3]. Such a two-step approach using a spherical calibration target is generally also applicable for the calibration of an LSWLI to the robot. Special attention has to be given to the choice of the dimensions of the calibration target, since the vertical scanning range of an LSWLI is very limited.

Additionally, the transformation from the flange to the DUT has to be determined. Even if the DUT is mounted on the robot in nearly the same place each time, the small FOV of the WLI means that every additional uncertainty needs to be minimized. To determine the transformation to the DUT, some edges of the DUT are measured with the WLI and a CAD model of the DUT is fitted into these measurements. This process is known for other sensors and can be adopted.

3.2 Path Planning
The path planning for scanning the surface of the whole DUT with the LSWLI is challenging. Teaching methods are not applicable to WLI inspection systems, because already covered areas are not visible and full coverage is hard to reach. Other online path planning methods, which calculate the next pose from data measured in a previous pose, are also not applicable. The reason for this is the measuring distance of a WLI, which is too small for these methods. Offline path planning methods for inspection can be split into two steps: the viewpoint generation and the path planning between these viewpoints. The latter is also called the travelling salesman problem (TSP). The TSP for the LSWLI inspection process is no different from the TSP for the VSWLI inspection process and can be solved like the TSP for any other sensor or inspection task. Unlike the TSP, the viewpoint generation problem is affected by the different scanning modes, since the FOV needs to be modelled differently. Not only is the FOV of the LSWLI much larger, depending on the range of the linear axis used, but the length of the FOV is also variable. Furthermore, the length-to-width ratio of the FOV is much higher. Hence, more attention has to be given to the orientation of the sensor for scanning. To fully exploit the potential of the LSWLI and reduce the inspection time to its minimum, the viewpoint planning has to generate a minimum number of viewpoints. Any excess viewpoints lead to unnecessary repositionings of the robot, which in turn make it necessary to wait for vibrations to settle before the next area can be scanned. The viewpoint planning approach using modified measurability matrices, developed by Scott in [10] and adapted by Domaschke in [9] for VSWLI, is generally a valid option for the viewpoint generation task. The problem with this solution is its feasibility.
In Scott's approach, the optimality of the viewpoint set that can be found depends on the initial choice of candidate viewpoints. This initial choice depends on a surface discretization or prior measurements of the corresponding surface and is in general not optimized for finding a minimal set of viewpoints.
Process Analysis and Challenges in Using a Lateral Scanning WLI
27
Furthermore, the approach has a deficit regarding the optimization of the orientation of the sensor around the optical axis and does not consider variable FOV lengths. Viewpoint planning for minimal viewpoint numbers is still an active research area. In [11], Glorieux et al. propose an iterative approach. In each iteration, for a given set of surface points, a viewpoint is optimized. The respective measurable surface points are then eliminated from the surface point set for the next iteration. These steps repeat until no surface points are left. Because such an iterative approach does not lead to a minimum number of viewpoints, the process is carried out several times, and from the resulting redundant set of viewpoints a subset with a minimum number of viewpoints is selected. Using this approach for an LSWLI will generate a set of viewpoints with which the considered surface can be inspected. Nevertheless, the number of viewpoints generated will not be minimal. Combining different sets of viewpoints for FOVs with a high length-to-width ratio is less promising than doing the same for a nearly quadratic FOV. Additionally, the approach does not take FOVs of variable length into account, which is important for optimal viewpoint placement for LSWLIs. For the reduction of cycle time, huge potential is seen in the development of innovative viewpoint planning approaches. It is planned to use continuous optimization formulations to find more suitable viewpoint distributions. Thereby, the most crucial part is to develop appropriate approximation functions for the naturally discrete problem (e.g. an area of the surface is either in or out of the FOV).
3.3
Defect Detection
As the last steps of the proposed inspection process, the LSWLI measurements have to be registered and afterwards analysed for possible defects. In the end, an LSWLI generates the same type of data as a VSWLI; hence, the same processing and defect detection methods are applicable. Since defect detection in 3D point clouds is still not very well researched, Domaschke et al. propose in [3] to transform the registered point clouds into a 2D image. Through this transformation, well-known 2D defect detection approaches can be used. After the detection, the results are transformed back into 3D space. Potential for improvement is seen in novel approaches for direct defect detection in 3D data through AI methods. Such improvements would not be limited to LSWLI applications but would benefit all 3D sensors.
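The 3D-to-2D detour of [3] can be illustrated with a small sketch: rasterize the registered cloud into a height image, run any 2D detector on it, and use a stored index map to carry detections back to 3D. Grid resolution and the choice of the x-y projection plane are assumptions of this example, not the published method:

```python
import numpy as np

def cloud_to_height_image(points, resolution=1.0):
    """Rasterize a registered point cloud (N x 3) into a 2D height image.

    x/y become pixel coordinates and z the pixel value; the index map
    records which 3D point produced each pixel, so 2D detections can be
    transformed back into 3D space.
    """
    pts = np.asarray(points, dtype=float)
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / resolution).astype(int)
    h, w = ij.max(axis=0) + 1
    image = np.full((h, w), np.nan)           # NaN marks unmeasured pixels
    index_map = np.full((h, w), -1, dtype=int)
    for k, (i, j) in enumerate(ij):
        if index_map[i, j] < 0 or pts[k, 2] > image[i, j]:
            image[i, j] = pts[k, 2]           # keep the highest point per cell
            index_map[i, j] = k
    return image, index_map
```

Any 2D defect detector can then be applied to `image`, and a flagged pixel (i, j) maps back to the 3D point `points[index_map[i, j]]`.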
4
Conclusion and Future Work
In this paper, a review of the advantages and disadvantages of LSWLI for use in inspection processes is given. A concept for an inspection system for aircraft engine fan blades is described, comprising an industrial robot for handling the DUT and a stationary LSWLI. A general inspection process for the system is proposed. New methods for the different steps of the inspection
process still need to be developed. In particular, the viewpoint planning for surface inspection is of interest. Due to the properties of the LSWLI, state-of-the-art path planning methods are not feasible, if applicable at all. A more thorough discussion of the non-optimality of existing approaches and of the optimization potential of innovative viewpoint generation approaches is the subject of future research.
Acknowledgments. This research was funded by the German Federal Ministry for Economic Affairs and Climate Action under the LuFo VI-1 program, project AcDiCrackInspect.
References
1. Kramb, V., Shell, E.B., Hoying, J., Simon, L.B., Meyendorf, N.: Applicability of white light scanning interferometry for high resolution characterization of surface defects. In: Proceedings of SPIE, Nondestructive Evaluation of Materials and Composites V, vol. 4336, pp. 135–145 (2001)
2. Biro, I., Turner, J., Hodgson, J., Lohse, N., Kinnell, P.: Integration of a scanning interferometer into a robotic inspection system for factory deployment. In: Proceedings of the 2020 IEEE/SICE International Symposium on System Integration, pp. 1371–1375 (2020)
3. Domaschke, T., Schueppstuhl, T., Otto, M.: Robot guided white light interferometry for crack inspection on airplane engine components. In: ISR/Robotik 2014, 41st International Symposium on Robotics, pp. 1–7 (2014)
4. Heliotis, A.G.: H8 Data Sheet: Swiss precision in three dimensions - 3D Inspection. https://www.heliotis.com/sensoren/serie-4/#. Accessed 29 Sept 2022
5. Bahr, S., Otto, M., Domaschke, T., Schüppstuhl, T.: Continuous digitalization of rotationally symmetrical components with a lateral scanning white light interferometer. In: Tagungsband des 2. Kongresses Montage Handhabung Industrieroboter, pp. 135–143 (2017)
6. Olszak, A.: Lateral scanning white-light interferometer. Appl. Optics 39(22), 3906–3913 (2000)
7. Munteanu, F.: Self-calibrating lateral scanning white-light interferometer. In: Proceedings of SPIE 7790, Interferometry XV: Techniques and Analysis (2010)
8. Behrends, G., Stöbener, D., Fischer, A.: Lateral scanning white-light interferometry on rotating objects. In: Surface Topography: Metrology and Properties, vol. 8 (2020)
9. Domaschke, T.: Automatisierung der Weißlichtinterferometrie zur Inspektion rotationssymmetrischer Triebwerksbauteile. PhD Thesis, Hamburg University of Technology (2017)
10. Scott, W.R.: Model-based view planning. Mach. Vision Appl. 20, 47–69 (2009)
11. Glorieux, E., Franciosa, P., Ceglarek, D.: Coverage path planning with targetted viewpoint sampling for robotic free-form surface inspection. In: Robotics and Computer-Integrated Manufacturing, vol. 61 (2020)
Development of a Robot-Based Handling System for a High Precision Manufacturing Cell George Papazetis , Evangelos Tzimas , Panorios Benardos , and George-Christopher Vosniakos(B) School of Mechanical Engineering, National Technical University of Athens, Heroon Polytechniou 9, 15772 Athens, Greece [email protected]
Abstract. Automated production of micro-fluidic devices (bio-MEMS) is performed in a dedicated manufacturing cell, comprising six precision manufacturing workstations and a handling system. The desired flexibility of the manufacturing cell is highly dependent on the handling system, whose design and implementation are reported in this paper. The system comprises specially designed part carriers that are hosted on a spigot both at each workstation and in a central storage cabinet, an industrial robot that loads and unloads the carriers by means of a custom-designed gripper, and a central controller commanding the movements and transfers in single-part or batch mode. Besides accuracy and reliability of the handling system, special emphasis in its design is given to safety. Keywords: Flexible cell · Robot · Gripper design · Bio-MEMS manufacture
1 Introduction
Manufacturing cells, in order to be effective, are typically dedicated to the production of a family of similar parts [1]. Integration of an appropriate material handling system aims to increase productivity and flexibility, but also adds complexity to the system [2]. Design parameters to be considered pertain to the type of the handling system, changes to the workstation layout [3], the design of handling operations [4], human safety, and communication and synchronization among the cooperating modules [5, 6]. Structured and repetitive tasks, such as material handling [7], machine tending [8] and packaging [9], are assigned to robots. Current efforts focus on designing handling systems able to perform more advanced scenarios. Efficient scheduling of production tasks [10] and autonomous robotic manufacturing via machine vision systems [11, 12] are indicative examples of introducing intelligence to handling systems. In addition, recent studies indicate the capability of robots to perform biomedical tasks, such as microfluidic injection [13], that require increased repeatability and delicate handling in sterile environments. This paper focuses on the development of an automated handling system serving a high precision manufacturing cell producing micro-fluidic devices (bio-MEMS) for medical applications. The general cell description is given in Sect. 2.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 29–36, 2024. https://doi.org/10.1007/978-3-031-38241-3_4
Mechanical design
of the handling system is presented in Sect. 3. Section 4 focuses on robot programming and operation in relation to the cell's controller. Section 5 outlines the main conclusions drawn and future extensions envisaged.
2 The Manufacturing Cell
The described manufacturing cell is tailored to the automated production of Lab-on-Chip (LoC) bio-MEMS. As a common first step in all pertinent variants, polymer substrates are fabricated on a material extrusion-based 3D printer. The next step involves milling of micro-channels and holes that enable injection of special fluids (bio-inks) into the substrate. To facilitate smooth fluid flow, the required texture and surface roughness of the channels are produced in two separate laser-based modules for ablation and polishing operations, respectively. Next, an inkjet workstation fills the substrate channels with bio-ink. In the final step, the devices are inspected in an X-ray scanning module where manufacturing and assembly defects can be detected. LoC devices are manufactured on so-called 'carriers', which are transferred across the workstations as required by the process plan, while the storage cabinet provides the starting and ending positions for each carrier participating in the production process. The layout of the cell is shown in Fig. 1(a).
Fig. 1. (a) Virtual representation of the LoC cell including (1) robot, (2) storage cabinet, (3) laser polishing, (4) inkjet station, (5) X-ray station, (6) laser ablation station, (7) 3D printer, (8) µ-milling; (b) actual LoC installation on site.
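The carrier flow just described, from the storage cabinet through the processing workstations and back, can be represented as a simple ordered station list. The sketch below is purely illustrative; the station identifiers are invented, not taken from the cell's controller:

```python
# Illustrative process plan ("recipe"): an ordered list of workstation
# visits, with the storage cabinet as the start and end position.
STATIONS = {"cabinet", "3d_printer", "micro_milling", "laser_ablation",
            "laser_polishing", "inkjet", "xray"}

def make_recipe(steps):
    """Validate the production steps and wrap them with the cabinet."""
    unknown = set(steps) - STATIONS
    if unknown:
        raise ValueError(f"unknown stations: {sorted(unknown)}")
    return ["cabinet", *steps, "cabinet"]
```

A full LoC sequence would then be `make_recipe(["3d_printer", "micro_milling", "laser_ablation", "laser_polishing", "inkjet", "xray"])`.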
The footprint of specific LoC products ranges from 5 × 5 mm to 100 × 100 mm, with dimensional tolerances of ±50 µm. The minimum feature size on LoC devices is 80 µm, with respective dimensional tolerances of ±2 µm. Positional accuracy on individual machines is dealt with by dedicated alignment modules. Thus, the main requirements of the handling system concern production flexibility, i.e. minimal changeover and/or setup downtime. This is ideally fulfilled by an industrial robot. A Kawasaki BA006L industrial robot arm with 6 degrees of freedom (DOFs), 6 kg payload, 2036 mm maximum reach and 80 µm repeatability was deemed adequate for this pick-and-place application. The robot slides on a linear rail acting as a 7th DOF, thereby expanding its reach, as shown in Fig. 1(b).
3 Mechanical Design of the Handling System
The handling system must ensure the correct placement of the carriers transferring the LoC devices to the various stations. Therefore, special consideration was given to the design of the carrier and the robotic gripper to (i) allow their smooth cooperation and (ii) meet the positional and orientation requirements of the application. In addition, the carrier has to be mechanically interfaced to all workstations in a uniform way, which was achieved by using a spigot. Both carrier and spigot are made from Al alloy 5083. The carrier is ring-shaped with a 3 mm-thick glass on its top, weighing 999 g in total, Fig. 2(a). It rests on the cylindrical spigot, Fig. 2(b), with the glass contacting the spigot's top surface. A radial clearance of 0.5 mm between carrier and spigot allows jam-free robotic picking and placing of the carrier on the spigot.
Fig. 2. (a) Carrier top view (b) spigot detail highlighting the locking pin (c) Carrier bottom view with locking mechanism components (1: spring-loaded ball, 2: groove, 3: planar face).
This loose fit does not restrict relative rotation of the carrier around the vertical direction. In fact, the positioning tolerance of the carrier is 0.1 mm. Even if workstations have an auto-alignment capability compensating for placement error, accumulation of the latter needs to be restricted. Furthermore, it was noticed that forces arising during 3D printing tend to unintentionally move and rotate the carrier, inducing positional inaccuracy that is critical for product quality. Carrier position error was tackled first by fitting two spring-loaded bearing balls in a 90° arrangement on the inner carrier circumference, Fig. 2(c). The springs are compressed when placing the carrier, hence stabilizing its position. In addition, a locking mechanism was devised, involving a 4 mm diameter pin bolted into the cylindrical surface of the spigot, Fig. 2(b). Accordingly, a similarly sized vertical groove was milled into the carrier, with a relief at its entrance for centering and guiding the carrier onto the pin, Fig. 2(c). This did not impact carrier placement at the handover positions of workstations other than the 3D printer. The carrier is handled by the robot through an electric gripper (manufacturer: SMC, model: LEHZ40, controller: LECP1), which is equipped with a pair of custom-designed fingers. The grasping force FG should be 10–20 times greater than the payload [14]. The gripper is mounted on the robot's end effector with a custom-designed steel bracket, Fig. 3. The mounting orientation should ensure that the robot is able to reach all workstation handover positions. Furthermore, it should consider the presence of robot singularity points along the relevant trajectories, thereby avoiding trajectory inaccuracy or discontinuity and inappropriate carrier handling at radically high angular velocity [15]. Thus,
the initial design of the bracket was replaced by the final one, with the 6th axis of the robot being parallel to the ground in the former case and normal to it in the latter, see Fig. 3.
Fig. 3. (a) Initial and (b) final designs of mounting bracket for the gripper mechanism.
The proposed mounting design successfully tackled the aforementioned issues, with the trade-off of increased oscillations during carrier transfer due to the protruding gripper. However, given that processing times were at least an order of magnitude greater than carrier transfer times, the robot was not required to operate at high speeds, and therefore these oscillations were not significant enough to negatively affect the quality of carrier transportation under actual operating conditions. Finger geometry depends on the application and the objects handled [16]. In this case, a pair of aluminum fingers of dimensions 195 × 92 × 40 mm is attached to the gripper, Fig. 4. The designed rib helps the structure withstand the reaction force once the carrier is grasped, with a tolerable 40 µm deflection, as proved by static finite element analysis in Solidworks™, Fig. 4(a). Since the selected gripper stroke length is 30 mm, the fingers' L-shaped design essentially enlarges the opening between them appropriately to fit the carrier's underside. Two straight cuts milled on the underside of the carrier, Fig. 2(c), create two planar faces for firm grasping.
Fig. 4. (a) Robot gripper design and analysis (b) Operation for different spigot arrangements
Initially, the fingers were fitted with butyl rubber pads, but these would occasionally not properly release the carrier when placing it on the spigots, due to a vacuum effect associated with their smooth texture; hence they had to be removed. The total payload due to gripper, bracket, fingers and carrier was calculated at an acceptable 4025 g. The grasping force FG depends on the mass of the carrier, the length, and the overhang distance between the grasping point at the carrier's centre of gravity (COG) and the base of the gripper, Fig. 4(a). Following [14], the gripper can apply FG = 125 N at 150 mm length with a maximum overhang of 25 mm.
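The rule of thumb quoted earlier (grasping force 10–20 times the payload [14]) can be checked against the numbers in this section, interpreting "payload" as the carrier weight. A small sketch of this sanity check:

```python
G = 9.81  # gravitational acceleration, m/s^2

def grasp_force_band(mass_kg, factors=(10, 20)):
    """Recommended grasping-force range: F_G = 10..20 x payload weight."""
    return tuple(f * mass_kg * G for f in factors)

def grasp_force_ok(mass_kg, rated_force_n):
    lo, hi = grasp_force_band(mass_kg)
    return lo <= rated_force_n <= hi

# Carrier mass (0.999 kg) and rated gripper force (125 N) from the text:
lo, hi = grasp_force_band(0.999)   # roughly 98 N to 196 N
ok = grasp_force_ok(0.999, 125.0)  # the rated 125 N lies inside the band
```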
4 Handling System Layout, Safety and Programming
The designed mechanical parts and all mechanisms must be accommodated in the access openings of each workstation to reach the respective handover positions, which are denoted by the spigot(s) of each workstation, Fig. 4(b). Maneuvering of the robot within the relatively restricted cell space proved quite challenging and required some fine tuning of the pertinent station positions with the help of kinematic simulation in the off-line robot programming software K-Roset™ [17], which typically enables virtual representation of the workspace and simulation of robot trajectories with collision detection functionality. This was exploited for tweaking the positions of workstations, keeping in mind the relatively restricted accuracy of the representation. It was also exploited in assessing the bracket design, Fig. 3. For instance, collisions between robot links or the end-effector and the workstations were easily discovered and preemptively avoided by re-positioning workstations or re-designing the bracket, Fig. 5.
Fig. 5. Collision detection using off-line programming environment for (a) Tweaking workstation positioning (b) Checking bracket design in combination with workstation positioning.
Production of LoC devices is performed in a dedicated area that is separated from adjacent rooms where human operator tasks take place. The storage cabinet is located in the assembly room, but its front side is open and faces the production room in order to be accessible by the robot, Fig. 1. The operator is responsible for manually placing/picking carriers to/from the cabinet, relying on visual signs on the cabinet spigots for rough alignment of the carrier. Then, contact of the gripper fingers, Fig. 4(a), with the planar faces of the carrier, Fig. 2(c), fine-rotates the carrier as necessary.
Since the operator interferes with the robot's workspace, safety automation measures were considered. The cabinet door is equipped with trigger switches connected to the robot controller as an interlock. Regarding human entry into the cell's room, all doors are equipped with interlocks that are also connected to the robot's controller, locking the room when the robot is moving. In addition, the robot cannot be activated while the room doors are open. Collision detection software by the robot manufacturer continuously monitors the electric current of all robot axes. Current thresholds to stop the motion are set for normal (collision-free) program execution. They can be modified even for parts of the trajectory to adjust the sensitivity of collision sensing. Path planning and programming were reliably done off-line by motion simulation for those paths that lie outside the workstations. By contrast, for handover operations where accuracy was important, paths had to be defined by the on-line teaching method [18]. On-line teaching begins with recording the workstation handover positions. Since all handover operations follow similar "pick and place" motion commands, these positions serve as reference points enabling robot movement in relative coordinates when approaching and departing from them. This accelerated programming: approximately 30 min were needed to compile and test a motion routine. However, this task became challenging in cases where the spigot position was not easily accessible, e.g. for the X-ray station, or where spigots were located close to sensitive machine elements, e.g. in the 3D printer and laser ablation stations, resulting in prolonged programming and testing time. In total, 51 independent robot motion routines for each workstation and cabinet position were developed in AS language and stored in the robot controller memory, including the trajectory, the robot speed and the gripper operations.
With emphasis on supporting flexibility, extendibility and facilitating code maintenance, each motion subroutine handles a single pick or place action. A common safe position serves as the starting and ending point of each motion subroutine, allowing collision-free composition of multiple motions. A System Controller Software (SCS) was developed from scratch, details being reported in [19]. The software is executed on the central control PC and allows the operator to input multiple orders simultaneously for the production of multiple LoC devices according to customized process plans ('recipes'). Parallel production is achieved through the SCS's ability to monitor the status of each workstation and assign manufacturing tasks to them as soon as they become available, thus adding the desired flexibility to the system and also reducing production time. Respective jobs are dispatched and coordinated, and their progress is monitored and reported. A characteristic snapshot of the user interface of the monitoring subsystem is shown in Fig. 6. In practice, the operator loads carriers into the cabinet and assigns process plans (production steps or 'recipes') to them using the SCS. According to the system's state, the robot receives commands from the SCS to transfer a carrier from one location to another in an automated manner. A server application has been developed that is executed in the robot's memory, providing an interface for communication with the SCS. The application uses the TCP/IP protocol and, upon receiving a command, handles the composition of the appropriate motion routines (pick and place combination), their sequential execution, and the response to the client application.
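The command-to-motion composition performed by the robot-side server can be sketched schematically. The real implementation runs in the robot controller in AS language and communicates over TCP/IP; the message format and routine names below are invented for illustration:

```python
def handle_transfer_command(command, routines):
    """Compose and execute the motion routines for one transfer command.

    command:  e.g. {"carrier": 3, "from": "polishing_1", "to": "cabinet_5"}
    routines: {routine_name: callable} stand-ins for the stored motion
              subroutines; each starts and ends at the common safe position.
    """
    sequence = [f"pick_{command['from']}", f"place_{command['to']}"]
    for name in sequence:
        if name not in routines:
            return {"status": "error", "missing": name}
        routines[name]()   # sequential execution of pick, then place
    return {"status": "done", "executed": sequence}
```

The returned dictionary stands in for the response sent back to the SCS client over the socket.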
Fig. 6. Monitoring system snapshot capturing carrier transfer from polishing to storage station.
5 Concluding Remarks
In systems of high complexity, it is almost impossible to foresee every single potential issue before actual production execution in the long run. Therefore, multiple tests of production scenarios were conducted, evaluating the repeatability of the process under realistic operating conditions. Unforeseen issues were rectified to ensure reliability, e.g. stabilising the carrier on the spigot and removing the robot gripper rubber pads. Since the processing workstations were developed simultaneously with the handling system, simulation-based off-line robot programming was a valuable tool that supported decisions regarding workstation dimensions, positioning and layout. Fixing workstation positions was critical because even slight displacements negatively affected handover operations or, even worse, caused collisions in some cases. In this light, the robot's collision detection functionality proved important. The carrier-spigot interchangeability manufactured into all workstations supports the SCS capability for simultaneous production of multiple micro-fluidic devices, therefore adding flexibility to the manufacturing cell. The robot's kinematics and reachability proved sufficient for the size of the system and its components. All handover operations were smooth, given the assembly requirements of the micro-fluidic devices. The robot's auxiliary hardware functioned successfully. In the future, a lighter carrier could be designed to minimize the observed vibrations due to the protruding gripper, along with a detailed analysis of the robot's kinematic behaviour under different settings of speed and acceleration. In addition, for stations with multiple spigots, handover programming could benefit from a local coordinate system with a single reference point. Acknowledgments. This work has been funded by the Horizon 2020 program of the European Commission, project 'Additive Manufacturing of 3D Microfluidic MEMS for Lab-on-a-Chip applications' (M3DLoC), contract no. 76066
References
1. Hyer, N.L., Brown, K.A.: The discipline of real cells. J. Oper. Manag. 17(5), 557–574 (1999)
2. Barosz, P., Gołda, G., Kampa, A.: Efficiency analysis of manufacturing line with industrial robots and human operators. Applied Sciences (Switzerland) 10(8) (2020)
3. Banas, W., Sekala, A., Gwiazda, A., et al.: Determination of the robot location in a workcell of a flexible production line. IOP Conference Series: Materials Science and Engineering 95(1) (2015)
4. Fantoni, G., Capiferri, S., Tilli, J.: Method for supporting the selection of robot grippers. Procedia CIRP 21, 330–335 (2014)
5. Ferrolho, A., Crisóstomo, M.: Intelligent control and integration software for flexible manufacturing cells. IEEE Trans. Industr. Inf. 3(1), 3–11 (2007)
6. Gultekin, H., Akturk, M.S., Ekin Karasan, O.: Scheduling in a three-machine robotic flexible manufacturing cell. Comput. Oper. Res. 34(8), 2463–2477 (2007)
7. Ferreras-Higuero, E., Leal-Muñoz, E., García de Jalón, J., et al.: Robot-process precision modelling for the improvement of productivity in flexible manufacturing cells. Robotics and Computer-Integrated Manufacturing 65, 101966 (2020)
8. Gürel, S., Gultekin, H., Akhlaghi, V.E.: Energy conscious scheduling of a material handling robot in a manufacturing cell. Robotics and Computer-Integrated Manufacturing 58, 97–108 (2019)
9. Liu, C., Cao, G.-H., Qu, Y.-Y., Cheng, Y.-M.: An improved PSO algorithm for time-optimal trajectory planning of Delta robot in intelligent packaging. The Int. J. Adv. Manuf. Technol. 107(3–4), 1091–1099 (2019). https://doi.org/10.1007/s00170-019-04421-7
10. Do, H.M., Choi, T.-Y., Kyung, J.H.: Automation of cell production system for cellular phones using dual-arm robots. The Int. J. Adv. Manuf. Technol. 83(5–8), 1349–1360 (2015). https://doi.org/10.1007/s00170-015-7585-1
11. Li, X., Su, X., Liu, Y.H.: Vision-based robotic manipulation of flexible PCBs. IEEE/ASME Trans. Mechatron. 23(6), 2739–2749 (2018)
12. Stavropoulos, P., Papacharalampopoulos, A., Athanasopoulou, L., et al.: Designing a digitalized cell for remanufacturing of automotive frames. Procedia CIRP 109, 513–519 (2022)
13. Feng, L., Zhou, Q., Song, B., et al.: Cell injection millirobot development and evaluation in microfluidic chip. Micromachines 9(11), 590 (2018)
14. SMC®: Electric Grippers - LEH Series
15. Rebouças Filho, P.P., da Silva, S.P., Praxedes, V.N., et al.: Control of singularity trajectory tracking for robotic manipulator by genetic algorithms. J. Computat. Sci. 30, 55–64 (2019)
16. Tai, K., El-Sayed, A.-R., Shahriari, M., et al.: State of the art robotic grippers and applications. Robotics 5(2), 11 (2016)
17. Pan, Z., Polden, J., Larkin, N., et al.: Recent progress on programming methods for industrial robots. Robot. Comput.-Integr. Manuf. 28(2), 87–94 (2012)
18. Du, G., Chen, M., Liu, C., et al.: Online robot teaching with natural human-robot interaction. IEEE Trans. Industr. Electron. 65(12), 9571–9581 (2018)
19. Tzimas, E., Papazetis, G., Charitidis, C., Fantanas, D., Benardos, P., Vosniakos, G.C.: Machine system monitoring and control software, deliverable D4.1 on M3DLOC "Additive Manufacturing of 3D Microfluidic MEMS for Lab-on-a-Chip applications" project, Grant Agreement ID 760662
Transparent Object Classification and Location Using MmWave Radar Technology for Robotic Picking
Ricardo N. C. Rodrigues(B), João Borges, and António H. J. Moreira
2Ai – School of Technology, IPCA, Barcelos, Portugal
[email protected]
Abstract. Detection of transparent objects for classification or localization is among the most challenging tasks in flexible automation for highly personalized manufacturing. Additional issues arise when also applying the concepts of lights-out manufacturing. The need to make processes more flexible and efficient is one of the key objectives of Industry 4.0, with automation going as far as removing all personnel from factory floors. In this sense, this paper presents a prototype tool proposing the use of mmWave (millimeter wave) radars to detect, classify and locate objects, especially transparent ones, by evaluating two versions of radars produced by Texas Instruments. The best approaches to acquire data and to classify and locate the objects with mmWave radars are explored using a combined solution of a robotic system and Deep Neural Networks (DNN) to process the point clouds. Of a total of 12 scanning routines, 6 showed more than 80% detection accuracy, 2 near 40%, 1 above 70%, and 3 could not be executed due to physical limitations. The classification of stationary objects showed limited results. Variations in object position during motion decreased accuracy to around 40%. Velocity changes were also assessed, revealing that at slower velocities (0.03 m/s) the accuracy increases above 80%. The final system evaluation was executed with two approaches, raw data directly from the sensor and data normalized around the axis coordinate, showing similar and promising results in both cases. The localization did not show the best results, although improvements to the methodology are suggested. Keywords: mmWave · Industry 4.0 · Lights-out Manufacturing · Transparent object classification
1 Introduction
An evolution in industry, denominated Industry 4.0, is currently observable. This step consists of multiple paradigm changes with the objective of improving performance through the use of state-of-the-art technology, machine learning and alternative methodologies such as reduced artificial lighting in factories (lights-out manufacturing) [1, 2].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 37–45, 2024. https://doi.org/10.1007/978-3-031-38241-3_5
To reduce waste and increase performance, companies started to decrease the size of production batches towards a more personalized offer, cutting planning costs and avoiding overproduction. To enable this concept, production lines need to adapt multiple times during working hours, creating the necessity to automate processes with the objective of reducing time between batches and increasing versatility, agility and performance [3]. Accompanying this new ideology, the idea of digitalizing the physical system and managing all its aspects drives the concept of a Digital Twin, which intertwines most of the Industry 4.0 views [3]. Robotics is one of the already established technologies that can be used in new environments due to its great flexibility, precision and control method openness. Formerly used in repetitive and constrained operations, robots' capability of recognizing their surroundings was overlooked; however, this conception is changing, and the necessity of tools that address this flaw is evident [5]. The transformation of industry is still occurring; however, some companies have additional difficulty in adopting these new paradigms because of the properties of their main products, namely transparent or translucent items, which pose special difficulties to the most common detection solutions [5]. Normally these companies produce multiple types of objects consisting of the same material, differing only in shape and dimension. The necessity of technological solutions that help conventional industries evolve into the new century is clear, and with transparent objects one must fulfill several requirements: a) identify and locate transparent objects by shape and dimension, b) detect objects in luminosity-demanding environments, c) interface with multiple robotic systems. In this sense, this article is structured as follows: Sensor Review, Data Acquisition, Discussion and Conclusions, and Acknowledgements, ending with References.
1.1 State of the Art
To detect and identify multiple objects by shape there are two main methods: cameras and laser scanners. Each has limitations when used with transparent objects: with a camera, the background can be seen and identified through the object, and laser beams can be refracted at incidence, interfering with the reception of the signal, as exemplified in Fig. 1.
Fig. 1. Example of challenges using cameras and lasers with transparent objects (A- Background appears on target, B- Refraction, the signal never reaches the receptor)
Transparent Object Classification and Location Using
To surpass these challenges, multiple systems and implementations have been studied to detect the presence of transparent objects at the industrial level.
1. Some systems use the lack of depth information from RGB-D cameras (Red, Green, Blue and Depth), in contrast to the nearby surroundings, to select the region of interest, complementing it with further analysis of the RGB data [6]. Other systems rely only on RGB cameras (no depth), as exemplified by Miyazaki et al., who use inverse polarized rays to represent transparent objects [7].
2. The other main method employs lasers to detect the borders. This method has difficulty detecting transparent objects because of the refraction of the emitted beams, creating the need to adjust the emitter or receptor position to obtain a viable detection; this creates additional problems when the shape is not flat or the position of the object varies [8].

1.2 System Proposal
To address these problems, and after analysis of the current solutions, a system was proposed that uses a Millimeter Wave (mmWave) radar as the sensing element. This type of sensor offers high resolution, lighting immunity, and can detect a wide range of substances independently of their color. The radar can be combined with a robotic system that, after identifying and localizing objects via AI, proceeds with picking or manipulation operations. MmWave technology presents characteristics that satisfy some of the requirements unmet by other technologies, as observable in Table 1. MmWave technologies are being explored in multiple applications, such as 3D (three-dimensional) reconstruction, with special attention to security applications such as hidden-object detection, harvesting the innate capacity to detect objects through some substances like clothing [9, 10].
These radars are also being explored in self-driving vehicles and UAS (Unmanned Aircraft Systems) due to their high robustness against environmental conditions and luminosity variations [11, 12].

Table 1. Comparing multiple sensors; Radar refers to mmWave Radar [13].

|                                         | Radar          | Camera       | Ultrasonic | Lidar        |
|-----------------------------------------|----------------|--------------|------------|--------------|
| Range of Detection                      | Short/Med/Long | Short        | Short      | Short/Medium |
| Detection Accuracy                      | High           | Medium       | High       | High         |
| Detection Resolution                    | Medium         | High         | Medium     | High         |
| Speed Measurements                      | Good           | No           | No         | No           |
| Robustness vs. Environmental Conditions | Good           | Poor         | Medium     | Medium       |
| Dark/Light Independent                  | Good           | Poor         | Good       | Good         |
| Size                                    | Small          | Small to Med | Small      | Large        |
| Cost                                    | Low            | Low/Medium   | Low        | High         |
2 Sensor Review
To assess the representation capability and limitations of mmWave technology for 3D object detection, two models of mmWave radars were tested: the IWR6843ISK-ODS ES1 (from now on called ISK) and the IWR6843AOP ES2 (from now on called AOP), from the Texas Instruments industrial sensor range. Both sensors show similar specifications and differ in generation (ES1 and ES2); AOP is the abbreviation of Antenna on Package, meaning the antennas are contained inside the chip, whereas the ISK needs external antennas [14, 15]. To assess these two sensors, a setup was implemented where the sensor is stationary and, in front of it, an object was placed perpendicularly over a linear axis with a rotary platform attached, to obtain data in multiple conditions, as seen in Fig. 2. Then two objects (a water bottle and an acrylic sheet), plus an empty setup, were evaluated in five tests to analyze the characteristics of this technology: a) stationary objects on position 1 (right of sensor); b) stationary objects on position 2 (left of sensor); c) rotating objects on position 1; d) rotating objects on position 2; e) linear movement between positions 1 and 2.
Fig. 2. Representation of the test setup, A- Setup positioning, B- Setup and objects pictures (red -sensor, blue- platform)
2.1 Preliminary Results
This subchapter contains the data acquired during the tests to investigate the differences between the two sensors. To visualize the data, the software CloudCompare was used. Table 2 shows the ISK data, in red, and the AOP data, in white.
Table 2. Benchmarking mmWave radars, scale in meters (Red- ISK, White- AOP).
2.2 Conclusions
The tests showed that both sensors have low representation capability when presented with stationary objects, producing point clouds of fewer than one hundred points; however, when the objects are in motion (rotational or translational), both sensors deliver a more significant amount of data. The ISK presents a more erratic response, mirroring points in the y-axis and adding noise to the acquired data. Additionally, it is unable to represent objects directly in front of it, creating a blind spot. None of these flaws are manifested in the AOP data, presented in Table 2.
3 Data Acquisition
Since movement must exist to obtain reliable data, the AOP mmWave sensor was attached to a 6-DoF robot arm (Eva V4 from Automata Tech. Ltd.), enabling motion studies of different routines/paths during the acquisition process. Twelve routines were idealized and evaluated (see Fig. 3), and a DNN (Deep Neural Network) was implemented to classify the objects from the data acquired by the sensor during every routine. The DNN used was PointNet [16], and sixty samples (fifty for training and ten for evaluation) of five different objects were acquired (Water Bottle, Acrylic Sheet, Yogurt Flask, Shaker Bottle and Empty Base). After selecting the best acquisition routine (perpendicular horizontal routine – A), the range of objects was increased with the addition of a Wine Glass, a plastic box and a smartphone screen protector. In this new phase, the objects used in the former testing changed their location by 20 mm in the Y axis to provide additional data. After this procedure, the velocity of movement during the acquisition was assessed, with values of 0.1 m/s and 0.03 m/s selected to avoid adding external noise or vibrations.
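For reference, the Euclidean distance used later as the localization metric can be sketched in plain Python (function names and sample coordinates here are ours, purely illustrative, not from the authors' code):

```python
import math

def euclidean_distance(pred, target):
    """3D Euclidean distance between a predicted and a true handling point (metres)."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)))

def mean_localization_error(preds, targets):
    """Average Euclidean error over a set of predictions."""
    errors = [euclidean_distance(p, t) for p, t in zip(preds, targets)]
    return sum(errors) / len(errors)

# Illustrative values only: two predicted handling points vs. ground truth, in metres
preds = [(0.10, 0.20, 0.05), (0.00, 0.00, 0.00)]
targets = [(0.10, 0.20, 0.05), (0.03, 0.04, 0.00)]
print(mean_localization_error(preds, targets))  # ≈ 0.025
```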
Fig. 3. Tested routines example – blue represents the direction of motion applied to the mmWave sensor at the robot end-effector.
One of the objectives of this project was to explore the ability to locate objects before identifying them; in this sense, an additional DNN was implemented for object detection. The first attempt consisted of a modified PointNet using the Euclidean distance as metric, although it proved ineffective, neither converging nor minimizing the error. The second attempt involved a simpler network with five convolutional (3×3 kernel) layers, one max-pooling and two fully connected layers. Trained with a learning rate of 0.001, Mean Squared Error as loss and Euclidean distance as metric, this solution presented more promising data. The data acquired by the latter DNN is shown in Table 3. To evaluate this solution, two different approaches were used: a) with raw data from the mmWave sensor and b) with data normalized around the zero-coordinate axis of the system. Both cases are used for classification and location. To maximize the ability to identify which object is present and to find its handling point (the point where the robot tool can secure the object for manipulation), a test zone consisting of a 100 × 240 mm area where objects could be positioned was used to add variability. Here, three positions were defined: one at the center of the test area and two at its extremities. The results can be observed in Table 4 and Table 5.
3.1 Results
The confusion matrices obtained in the classification processes are presented with raw and with normalized data. Subsequently, the tables with errors obtained in the location processes follow the same structure as the classification: with raw data and with normalized data.
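Approach b) centers the acquired data on the origin of the coordinate system. The paper does not detail the exact procedure, so the sketch below assumes a simple centroid subtraction:

```python
def normalize_cloud(points):
    """Center a 3D point cloud on the origin by subtracting its centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in points]

cloud = [(1.0, 2.0, 0.0), (3.0, 4.0, 0.0)]
print(normalize_cloud(cloud))  # → [(-1.0, -1.0, 0.0), (1.0, 1.0, 0.0)]
```

After this step, the network sees object shape independently of where the object sat in the test zone.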
Table 3. Velocity effect in classification and handling point calculation.

Object Classification Accuracy (%):

| Velocity | Assay 1 | Assay 2 | Assay 3 | Avg (μ) | SD (σ) |
|----------|---------|---------|---------|---------|--------|
| 0.1 m/s  | 42      | 43      | 54      | 46.33   | 5.44   |
| 0.03 m/s | 75      | 87.5    | 87.5    | 83.33   | 5.89   |

Distance Between Network Output and Object Handling Point (m):

| Velocity | Assay 1 | Assay 2 | Assay 3 | Avg (μ) | SD (σ) |
|----------|---------|---------|---------|---------|--------|
| 0.1 m/s  | 0.78    | 0.30    | 0.25    | 0.44    | 0.24   |
| 0.03 m/s | 0.08    | 0.06    | 0.06    | 0.07    | 0.01   |
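As a sanity check, the Avg (μ) and SD (σ) columns can be reproduced from the three assays; the reported SD values match the population standard deviation (a verification sketch, not the authors' code):

```python
from statistics import fmean, pstdev

# Classification accuracy assays from Table 3
accuracies_fast = [42, 43, 54]        # 0.1 m/s
accuracies_slow = [75, 87.5, 87.5]    # 0.03 m/s

print(round(fmean(accuracies_fast), 2), round(pstdev(accuracies_fast), 2))  # → 46.33 5.44
print(round(fmean(accuracies_slow), 2), round(pstdev(accuracies_slow), 2))  # → 83.33 5.89
```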
Table 4. Classification confusion matrices, in percentage.

Raw data (%):

| True \ Predicted | Emp. | Shak. | Sheet | Yog. | Bot. | Glass | Box | Prot. |
|------------------|------|-------|-------|------|------|-------|-----|-------|
| Empty            | 1    | 0     | 0     | 0    | 0    | 0     | 0   | 0     |
| Shaker           | 0.2  | 0.2   | 0     | 0.1  | 0.4  | 0     | 0   | 0.1   |
| Sheet            | 0    | 0.1   | 0.7   | 0    | 0    | 0     | 0   | 0.2   |
| Yogurt           | 0.1  | 0     | 0     | 0.4  | 0.2  | 0.1   | 0   | 0.3   |
| Bottle           | 0    | 0     | 0     | 0    | 0.9  | 0     | 0.1 | 0     |
| Glass            | 0    | 0.1   | 0     | 0    | 0.3  | 0.5   | 0.1 | 0     |
| Box              | 0    | 0.3   | 0     | 0    | 0.3  | 0     | 0.3 | 0.1   |
| Screen           | 0    | 0     | 0     | 0.3  | 0    | 0     | 0   | 0.7   |

Normalized data (%):

| True \ Predicted | Emp. | Shak. | Sheet | Yog. | Bot. | Glass | Box | Prot. |
|------------------|------|-------|-------|------|------|-------|-----|-------|
| Empty            | 1    | 0     | 0     | 0    | 0    | 0     | 0   | 0     |
| Shaker           | 0.6  | 0     | 0     | 0    | 0    | 0.3   | 0   | 0.1   |
| Sheet            | 0    | 0     | 1     | 0    | 0    | 0     | 0   | 0     |
| Yogurt           | 0.1  | 0     | 0     | 0.5  | 0    | 0     | 0   | 0.5   |
| Bottle           | 0    | 0     | 0.1   | 0.1  | 0.1  | 0.1   | 0   | 0.5   |
| Glass            | 0    | 0.4   | 0.1   | 0    | 0.2  | 0.3   | 0   | 0     |
| Box              | 0    | 0.7   | 0.3   | 0.1  | 0    | 0.1   | 0.5 | 0     |
| Screen           | 0    | 0     | 0.2   | 0.1  | 0    | 0     | 0   | 0.7   |
Table 5. Picking point and location average relative error (%), due to different axis sizes of the test zone.

Raw data (%):

| Object | Error Avg X | Error Avg Y | Error Avg Z | Error STD X | Error STD Y | Error STD Z |
|--------|-------------|-------------|-------------|-------------|-------------|-------------|
| Shaker | 4.6         | -14.2       | -14.6       | 3.7         | 25.3        | 4.9         |
| Sheet  | 22.6        | -39.3       | -10.3       | 9.9         | 29.1        | 3.9         |
| Yogurt | 9.4         | -12.8       | 9.3         | 7.7         | 29.3        | 3.4         |
| Bottle | 15.7        | 4.5         | -16.6       | 12.3        | 9.2         | 3.9         |
| Glass  | 3.3         | -11.2       | 1.1         | 4.3         | 25.5        | 3.0         |
| Box    | 23.0        | 5.9         | -8.2        | 11.9        | 12.1        | 2.9         |
| Screen | 20.6        | -6.6        | 10.3        | 2.1         | 30.6        | 3.4         |

Normalized data (%):

| Object | Error Avg X | Error Avg Y | Error Avg Z | Error STD X | Error STD Y | Error STD Z |
|--------|-------------|-------------|-------------|-------------|-------------|-------------|
| Shaker | 17.2        | -14.9       | -17.3       | 3.7         | 4.1         | 2.3         |
| Sheet  | 10.3        | -33.6       | -16.9       | 6.4         | 19.6        | 4.7         |
| Yogurt | 14.2        | -4.9        | 3.2         | 6.6         | 25.6        | 0.6         |
| Bottle | 13.4        | -17.6       | -19.7       | 5.3         | 27.2        | 13.6        |
| Glass  | 17.8        | -19.3       | -1.9        | 2.2         | 26.8        | 3.1         |
| Box    | 14.5        | -15.2       | -10.1       | 0.2         | 30.0        | 4.1         |
| Screen | -18.5       | -9.9        | -14.9       | 4.0         | 27.3        | 3.8         |
4 Conclusions
The main contribution of this work is the proof of concept that mmWave technology allows classifying/detecting transparent objects by their shape. Using a mmWave radar for transparent object recognition presents several challenges, and with multiple versions of the technology it was mandatory to assess their differences and limitations before defining a valid approach. This showed the necessity of movement between the sensor and the object to obtain more consistent data from the AOP version. Due to the need for movement, the sensor velocity was also assessed, resulting in an increase in accuracy at slower movements (0.03 m/s), but not static ones, both in object recognition and in handling point calculation. When evaluating the proposed system with several objects and multiple possible positions, the system provided promising results in object recognition even with a small amount of data, but failed to deliver in handling point calculation, showing errors up to 23% (55 mm) in the X axis, 39% (39 mm) in the Y axis and 14% (28 mm) in the Z axis. The current limitations in computing the correct handling point could be related to the reduced dataset on which the network was trained, to external interference caused by nearby objects, or to other setup miscalculations. Overall, the obtained results show promising findings, especially in object recognition, although, considering the current results, more work should be considered, expanding the datasets to increase precision and robustness. In the future, after addressing the small dataset size, the location and classification DNNs should be fused into a single one, promoting a sensing solution that can be integrated in a standalone system to enhance flexibility and compatibility with more robotic systems, and allowing further sensor configurations to obtain better data acquisition rates. Acknowledgments.
This paper was partially funded by national funds (PIDDAC), through the Portuguese Foundation for Science and Technology – FCT and FCT/MCTES under the scope of the projects UIDB/05549/2020 and UIDP/05549/2020 and under the scope of the project LASILA/P/0104/2020. It was also funded with National funding by FCT, through the individual research grant UI/BD/151296/2021.
References
1. Schwab, K.: The Fourth Industrial Revolution. Routledge (2020)
2. Lights-Out Manufacturing: Factory & Machining Automation, https://redshift.autodesk.com/lights-out-manufacturing/, last accessed 21 October 2020
3. Fuller, A., et al.: Digital twin: enabling technologies, challenges and open research. IEEE Access 8, 108952–108971 (2020)
4. Arents, J., Greitans, M.: Smart Industrial Robot Control Trends, Challenges and Opportunities Within Manufacturing (2022)
5. Groover, M.P.: Automation, Production Systems, and Computer-Integrated Manufacturing, 4th edn., Global Edition
6. Sajjan, S.S., Song, S., et al.: ClearGrasp: 3D Shape Estimation of Transparent Objects for Manipulation
7. Miyazaki, D., Ikeuchi, K.: Shape estimation of transparent objects by using inverse polarization ray tracing. IEEE Trans. Pattern Anal. Mach. Intell. 29, 2018–2029 (2007)
8. Adrian, N., Pham, Q.-C.: Locating Transparent Objects to Millimetre Accuracy (2019)
9. Gao, J., et al.: A novel method for 3-D millimeter-wave holographic reconstruction based on frequency interferometry techniques. IEEE Trans. Microw. Theory Tech. 66, 1579–1596 (2018)
10. Yanik, M.E., Torlak, M.: Near-field 2-D SAR imaging by millimeter-wave radar for concealed item detection. In: 2019 IEEE Radio and Wireless Symposium (RWS), pp. 1–4 (2019)
11. Ezuma, M., et al.: Micro-UAV detection with a low-grazing angle millimeter wave radar. In: 2019 IEEE Radio and Wireless Symposium (RWS), pp. 1–4 (2019)
12. Zöchmann, E., et al.: Geometric tracking of vehicular mmWave channels to enable machine learning of onboard sensors. Christian Doppler Laboratory for Dependable Wireless Connectivity for the Society in Motion, pp. 1–6 (2018)
13. Patented NoraSens mm-Wave radar sensor technology | Novelic, https://www.novelic.com/mm-wave-radar-sensor-technology/, last accessed 05 March 2020
14. IWR6843ISK-ODS Evaluation board | TI.com, https://www.ti.com/tool/IWR6843ISK-ODS, last accessed 05 November 2020
15. IWR6843AOPEVM IWR6843 intelligent mmWave sensor antenna-on-package (AoP) evaluation module | TI.com, http://www.ti.com/tool/IWR6843AOPEVM, last accessed 05 March 2020
16. Qi, C.R., et al.: PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 77–85 (2017)
AI-Based Supervising System for Improved Safety in Shared Robotic Areas
Ana Almeida1(B) and António H. J. Moreira2
1 Polytechnic Institute of Cávado and Ave, Barcelos, Portugal
[email protected] 2 2Ai – School of Technology, IPCA, Barcelos, Portugal
Abstract. Robots were introduced into industry to improve quality and productivity. However, questions began to arise regarding the safety of employees. A new generation of robots, called cobots, has emerged and started to gain prominence due to their characteristics, although the tools and grippers attached to them offer limited safety for inclusion in collaborative work cells. In this sense, we consider that the creation of an adaptive safety system for work cells is necessary to improve safety with minimum production impact. Our goal, with the creation of an intelligent vision system, is to detect humans/robots in the work cell, detect their joints using neural networks, simultaneously acquire the current position of each robot axis, and determine the physical distance between the human and the robot by creating a virtual environment developed in Unity. Depending on the safety level determined by the system, a different action/speed is communicated to the robot: low speeds if the safety level is low, and higher speeds if there is no danger to the human. We only intend to make changes to the robot's speed, avoiding sudden stops or emergency stops that would eventually deteriorate the correct operation and fluidity of the robot's movements. In this sense, we present some validation results of the system's operation, performing several tests with different danger and safety distances and verifying whether it can accurately identify the safety level of the human within the robotic collaborative work cell. Keywords: Vision System · Collaborative Work Cells · Neural Networks · Collaborative Robots
1 Introduction Industrial automation can handle mass production with high efficiency and repeatability. However, it lacks the flexibility to handle constant product changes. Humans in such situations can easily handle these product customizations and variations, however, they are restricted by their physical capabilities in terms of strength, endurance, speed, and repeatability. Thus, an operator and a cobot can easily collaborate on a wide variety of industrial tasks, creating a collaborative work cell. Human-robot collaboration arises from the need for humans and automated systems to simultaneously share the same space. Collaborative robots have been cataloged as safe © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 46–53, 2024. https://doi.org/10.1007/978-3-031-38241-3_6
equipment capable of interacting with humans without causing them harm. However, despite the robot being provided with a safety license, all the pick-and-place tools, bolting tools, grippers and other tools attached to the robot do not present any level of safety for the operator. Our main idea is the development of a system that avoids contact between people and a robotic arm, as well as the tools attached to its gripper. 1.1 Goals and Attributes Ensuring the physical well-being of a human is essential to avoid production breakdowns and fatigue. The integration of collaborative robots has addressed human limitations but has added some risks associated with the physical safety of the operator. To overcome this problem, numerous extra safety techniques have been developed to protect the operator. The main motivation for this work lies in the possibility of developing an external, intelligent and highly efficient system that allows operators to interact with a robot in any kind of assembly task without any accidental physical damage, while maintaining operations in other cases (i.e., mobile robot interaction, human crossing at a distance).
2 Literature Review
A collaborative robot is a robot that can work alongside humans in complete safety [1]. These are the ideal solution for repetitive operations in small spaces and for increasing the productivity of activities that need to be performed with humans. In industry, the most common way to ensure the safety of robots is to surround them with mechanical guards, because restricting humans' access to danger zones is the simplest way to achieve the highest level of safety. 2.1 Existing Solutions and Projects The integration of collaborative robots in industry still presents many unknowns in terms of operator safety. There are several systems for controlling operator-robot interaction, which should be certified according to IEC rules such as IEC 61496 [2], which states special requirements for electro-sensitive non-contact protective equipment using vision devices, but we verified that none of them offers total, 100% safe control. The SafetyEYE system, developed by Pilz, uses 3 images to make a three-dimensional reconstruction, detecting the presence of any human or object within the defined safety zones [3]. The Team@work system uses the same principle as the first, with the particularity that it is prepared to detect the presence of human skin in the captured images, see Fig. 1 [4]. The HRI PLATFORM is also a system that follows the same principle as the first system mentioned, adding the possibility of face detection to indicate to the robot that the worker is ready for a new task; as pointed out, this face detection only works properly if the worker looks directly at the camera or within an angle of ±50 degrees [3].
Fig. 1. Team@Work Operation [4]. Left – standard setup; middle – upper image sample with operators; right – hand/skin detection and tracking.
The VEO ROBOTICS [5] system is a safety system that seems to be the most complete, because it allows the labeling of all objects that enter the safety zones; all the systems presented before are unable to distinguish between humans and objects when their vision systems detect that something has violated the safety zones. Figure 2 represents another 3D vision-based monitoring system, which is very similar to the system we want to implement. However, this system can't differentiate objects (robots) from humans and, as such, triggers its alert system whenever something is detected in the danger or forbidden zone [6].
Fig. 2. Example of how the 3D vision-based monitoring and protection system works.
We verified that in all the identified systems, whenever the operator violates the safety zones, the robot immediately goes into the emergency state; few systems allow controlling the position and speed of the robot in situations like this. In situations where the operator must invade the robot's work zone several times, and the robot's emergency system is activated each time, we are heading toward the degradation of the robot in terms of joint maintenance.
3 Ai SaferCobot 4.0 System Definition
Using a camera capable of collecting an image around the robot and two neural networks, YOLOR [7] and PoseNet [8], the goal is to create a machine vision system for collaborative robots, giving the robot perception of its surroundings. The robot will slow down, or eventually stop, only when the intelligence algorithm detects or predicts a collision between the operator and the robotic arm. We will need to integrate YOLOR to do the first screening of the environment of the work cell and PoseNet to estimate the pose of humans when the first screening tells us
that there are humans inside the work cell. We will also use the same model to estimate the positioning of human hands in situations where the work cell involves collaborative tasks. To establish communication with the UR3 robot we use the URX library [9], which allows the exchange of data between the application and the robotic arm. Our system is composed of three modules: a vision system, a robot system, and a virtual environment, see Fig. 3. The vision system only captures images and checks whether there are humans in the work cell, with the camera at 1.5 m from the robot and a field of view of almost 2 m wide. The robot system is responsible for communicating with the robot arm. The virtual environment was created to make it easier to estimate distances between the human and the robot, applying the Euclidean distance calculation. The communication between modules was implemented using the messaging library ZeroMQ [10] to ensure fast and efficient communication.
Fig. 3. Architecture of Ai SaferCobot 4.0.
3.1 Collision Avoidance Strategy
In the system, we defined three zones/areas: safe, dangerous and forbidden, corresponding to levels 5, 4–3 and 2–1, respectively (see Table 1). After calculating the distances between the human joints and the robot joints, which are represented in a virtual environment with GameObjects [11], we check whether any distance is below or equal to the danger distance; in this case, we catalog it as level 1, and Unity communicates an order to perform a significant speed reduction or even stop the robotic arm. If danger distance ≤ distance ≤ safety distance, the order to reduce the robot's speed is given. If all distances are above the safety distance, the human is still in a safe area. We then cataloged the work area (red - prohibited area, yellow - dangerous area, green - safe area), see Fig. 4. In Unity, we can parameterize the values of the safety and danger distances. This way, by observing the interface, we are able to perceive which joints are preventing the normal functioning of the robotic arm, see Fig. 4, middle and right images. Still referring to the numbering of the security types, we considered two security levels for the dangerous area (4–3) and the forbidden area (2–1). We decided to implement
Table 1. Safety levels defined for the system and the respective speeds, in percent, communicated to the robot.

| Security Level | 1 | 2 | 3 | 4 | 5 |
|----------------|---|---|---|---|---|
| Area | Forbidden | Forbidden | Dangerous | Dangerous | Safe |
| Speed (%) | 15 | 25 | 50 | 80 | 100 |
| Description | Prohibited area with human approaching the robotic arm | Prohibited area with human walking away from the robotic arm | Dangerous area with human approaching the robotic arm | Dangerous area with human walking away from the robotic arm | Safe area |
Fig. 4. Applied methodology for calculating distances between human joints and robot axes; left – distances from robot model to human articulations; middle – safety areas; right – color changing joints after safety evaluation.
this system to get a faster reaction time when the human moves toward the robotic arm. Additionally, human motion prediction is performed relative to the robot's position. In each frame, a hand is detected; we obtain the distance from the hand's central position to the robot's position and subtract it from the last distance calculated. If the difference is positive, the distance decreased, meaning the hand is moving towards the robot, and we can act faster by further reducing the robot's speed level. When the difference is negative, the human is moving away from the robot.
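A minimal sketch of this decision logic, combining the zone thresholds described above with the approach/away test (function and variable names are ours, not from the Ai SaferCobot implementation; the distances are illustrative):

```python
SPEED_BY_LEVEL = {1: 15, 2: 25, 3: 50, 4: 80, 5: 100}  # % of nominal speed (Table 1)

def safety_level(distances, danger_dist, safety_dist, is_approaching):
    """Classify the human's zone from the joint-to-axis distances (metres).

    Levels 1-2: forbidden area, 3-4: dangerous area, 5: safe area.
    The lower level of each pair applies when the human approaches the robot.
    """
    d_min = min(distances)
    if d_min <= danger_dist:          # forbidden area
        return 1 if is_approaching else 2
    if d_min <= safety_dist:          # dangerous area
        return 3 if is_approaching else 4
    return 5                          # safe area

def approaching(last_dist, current_dist):
    """True when the hand-to-robot distance decreased since the last frame."""
    return (last_dist - current_dist) > 0

move = approaching(0.80, 0.65)            # hand got closer between frames
level = safety_level([0.9, 0.45, 1.2], danger_dist=0.2, safety_dist=0.6, is_approaching=move)
print(level, SPEED_BY_LEVEL[level])       # → 3 50
```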
4 Validation Results of the Proposed Solution
4.1 Processing Time for Ai SaferCobot 4.0
To frame the next results, it is necessary to specify the hardware used, since the monitoring time is directly associated with the processing capacity. All the tests were performed using a desktop with an Intel(R) Core(TM) i5-7300HQ CPU @ 2.50 GHz, 8 GB of RAM and an NVIDIA GeForce GTX 1050 2 GB.
As mentioned before, the processing capacity is directly associated with the execution time of the application cycle; therefore, it is possible to improve the results obtained. Our target was for the system to have a reaction time lower than a human's reaction time, i.e., less than 450 ms. Table 2 compiles some processing and response times that were presented previously and that we consider relevant for estimating the total processing time of the intelligent system. By summing up the average times of each module, the system has a response time of around 0.66 s with a standard deviation of 0.18 s.

Table 2. Processing and response times (average and standard deviation) of the various modules that make up Ai SaferCobot 4.0.

| Module | Average (s) | STD (s) |
|--------|-------------|---------|
| Frame capture time | 0.05 | 0.02 |
| Vision system processing time | 0.32 | 0.05 |
| Robot response time to speed changes | 0.09 | 0.01 |
| Response time of the interface in determining the security level | 0.20 | 0.10 |
| Total | 0.66 | 0.18 |
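The total is the plain sum of the per-module averages; the reported total standard deviation also matches a plain sum, which is a conservative, worst-case (fully correlated) combination:

```python
# Per-module (average, standard deviation) times in seconds, from Table 2
modules = {
    "frame capture": (0.05, 0.02),
    "vision processing": (0.32, 0.05),
    "robot speed change": (0.09, 0.01),
    "interface safety level": (0.20, 0.10),
}
total_avg = sum(avg for avg, _ in modules.values())
total_std = sum(std for _, std in modules.values())  # worst-case assumption
print(round(total_avg, 2), round(total_std, 2))  # → 0.66 0.18
```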
Unfortunately, the data presented isn't very satisfactory: the theoretically obtained response time approaches 1 s, while the human reaction time to a stimulus varies between 200 and 450 ms depending on age [12]. 4.2 Assessment of Joints and Distance Between Human and Robot We will now present more results, this time focusing on the ability to detect human joints in different danger zones. The test involved placing a person at known distances from the robot and verifying whether the system was able to correctly calculate the distances between human and robot and then determine the person's safety level. One of the ways we found to validate the correct detection of joints and the determination of the human's safety level in the collaborative work cell was to set known values of dangerous and safe distances and analyze the system response. According to the data in Fig. 5, we wanted to ensure that the real distance corresponds to the distance in the virtual environment. The use cases exposed in this chapter show that the system can correctly differentiate the 3 safety areas but presents some limitations when the danger distance is less than 200 mm. This vulnerability is due to the way distances between the joints and the robot are calculated, using the objects' centers. If we observe the robotic arm, some of the axes have a size equivalent to 150 mm; if we define a danger distance of less than 200 mm, the human may already be touching the robotic arm without the system detecting it, because the distance from the central point of the joint in question to the central point of the robot axis may be greater than the defined danger distance. This is one of the points that could be improved.
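To make the limitation concrete: with center-to-center measurements, the clearance to the axis surface is roughly the measured distance minus half the axis width, so a joint can be closer to the surface than the configured danger distance while still being reported as safe (the numbers below are illustrative, not from the paper):

```python
def surface_clearance(center_distance_mm, axis_width_mm):
    """Approximate clearance from a joint center to the axis surface, in mm."""
    return center_distance_mm - axis_width_mm / 2

danger_threshold = 180   # a danger distance below 200 mm
measured = 190           # center-to-center distance reported by the system
clearance = surface_clearance(measured, axis_width_mm=150)
print(measured > danger_threshold, clearance)  # → True 115.0 (flagged safe, yet only ~115 mm away)
```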
Fig. 5. Results obtained for different safety distances: humans at different distances from the robotic arm, projection of the human's joints in the virtual environment, and calculation of the distances between the human's joints and the robotic arm axes.
5 Conclusion
During the test and data collection phase regarding the performance of the system, we verified that we need a much faster system, reducing its processing time from 660 ms to below 250 ms. To optimize time, we could make improvements and optimizations at the algorithm level, dividing the system into more processing modules to make them more asynchronous and independent from each other, and also use neural network accelerators. The YOLOR neural network gave us good results, but it may be necessary to perform some optimization at the network level, such as using compiled networks or neural accelerators. We verified with the results presented above that the methodology we found for calculating distances between humans and robots seems not to have been the best approach. It would be interesting to think of an approach that could easily detect articulations at distances below 200 mm; this improvement would involve exploring some features of Unity in which we could create a virtual zone around each axis, wider or narrower according to the parameterized danger distance, and when the articulations collided with this virtual zone, the system would know that the human was in a forbidden zone.
By reducing the speed of the robot, we were able to protect humans and continue the process that was taking place. Once the human is in a safe position, the robot immediately returns to its normal speed set for the trajectory.
References
1. Vysocky, A., Novak, P.: Human-robot collaboration in industry. MM Science Journal 2016(02), 903–906 (2016). https://doi.org/10.17973/mmsj.2016_06_201611
2. EN IEC 61496-1:2020 - Safety of machinery - Electro-sensitive protective equipment - Part 1: General. https://standards.iteh.ai/catalog/standards/clc/b77ec184-b687-47c0-b4fd-874dc25d41b8/en-iec-61496-1-2020, accessed 21 Nov. 2022
3. Gopinath, V., Ore, F., Grahn, S., Johansen, K.: Safety-focussed design of collaborative assembly station with large industrial robots. Procedia Manuf. 25, 503–510 (2018). https://doi.org/10.1016/j.promfg.2018.06.124
4. Krüger, J., Nickolay, B., Heyer, P., Seliger, G.: Image based 3D surveillance for flexible man-robot-cooperation. CIRP Ann. Manuf. Technol. 54(1), 19–22 (2005). https://doi.org/10.1016/S0007-8506(07)60040-7
5. Robo-Vision! Turning Robots into Cobots. https://www.machinedesign.com/motion-control/robo-vision-turning-robots-cobots, accessed 23 Jun. 2019
6. Márcio, A., Santos, F., Orientador, M., Fernando, D., Lopes, J.P.: Sistemas de Monitorização e Proteção baseados em Visão 3D - Desenvolvimento de uma Aplicação de Segurança e Proteção Industrial utilizando Sensores RGB-D [3D Vision-Based Monitoring and Protection Systems - Development of an Industrial Safety and Protection Application using RGB-D Sensors]
7. Wang, C.-Y., Yeh, I.-H., Liao, H.-Y.M.: You Only Learn One Representation: Unified Network for Multiple Tasks (May 2021). https://doi.org/10.48550/arxiv.2105.04206
8. Kendall, A., Grimes, M., Cipolla, R.: PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization (May 2015). http://arxiv.org/abs/1505.07427, accessed 09 Feb. 2023
9. python-urx/urrobot.py at master, SintefManufacturing/python-urx, GitHub. https://github.com/SintefManufacturing/python-urx/blob/master/urx/urrobot.py, accessed 01 Sep. 2022
10. ZeroMQ. https://zeromq.org/, accessed 09 Feb. 2023
11. Unity - Manual: GameObjects. https://docs.unity3d.com/Manual/GameObjects.html, accessed 08 Feb. 2023
12. Jain, A., Bansal, R., Kumar, A., Singh, K.: A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students. Int. J. Appl. Basic Med. Res. 5(2), 124 (2015). https://doi.org/10.4103/2229-516X.157168
Vision Robotics for the Automatic Assessment of the Diabetic Foot

Rui Mesquita1, Tatiana Costa1, Luis Coelho1,2(B), and Manuel F. Silva1,2

1 Polytechnic of Porto - School of Engineering, Porto, Portugal
{1171050,1161257,lfc,mms}@isep.ipp.pt
2 INESC-TEC, Porto, Portugal
Abstract. Diabetes, a chronic condition affecting millions of people, requires ongoing medical care and treatment, which can place a significant financial burden on society, both directly and indirectly. In this paper we propose a vision-robotics system for the automatic assessment of the diabetic foot, one of the exams used for managing the disease. We present and discuss various computer vision techniques that can support the core operation of the system. U-Net and SegNet, two popular convolutional network architectures for image segmentation, are applied to the current case. Hard-coded and machine learning pipelines are explained and compared using different metrics and scenarios. The obtained results show the advantages of the machine learning approach but also point to the importance of hard-coded rules, especially when well-known areas, such as the human foot, are the system's target. Overall, the system achieved very good results, paving the way to a fully automated clinical system.
Keywords: Vision Robotics · Image segmentation · Diabetes

1 Introduction
Vision robotics is a subfield of robotics that focuses on the use of computer vision techniques to enable robots to perceive and understand their environment. This can involve the use of cameras or other sensors to capture images or other data about the environment, as well as the use of algorithms to process and interpret this data in order to enable the robot to make decisions and take actions. The wide range of applications covers object recognition and tracking, navigation, manipulation, and surveillance, among others. It is a rapidly developing field that is expected to play a significant role in the future of robotics and automation. In the last few years, some of the key technologies used in vision robotics include computer vision algorithms, machine learning (ML) algorithms, and sensor hardware such as depth cameras and Light Detection and Ranging (LiDAR). These technologies enable robots to better perceive and understand their environment, and to make decisions and take actions based on that understanding.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 54–61, 2024. https://doi.org/10.1007/978-3-031-38241-3_7

Fig. 1. Plantar sensitivity assessment. Comparison of the human-based procedure with the automated vision-robotics approach.

In the healthcare area, the introduction of robotic systems has been growing steadily [2], and vision robotics has had a crucial role in this development while offering a wide range of possibilities. For example, vision robotics systems can be used to assist surgeons in minimally invasive procedures, such as laparoscopic surgery, by providing high-resolution, real-time images of the surgical field [8]. Auscultation, a major source of information in clinical decisions, can also be performed by robotic systems assisted by vision [10]. Additionally, these systems can be used to support tasks such as wound care and rehabilitation, or to monitor patients for signs of deterioration [12], especially in chronic disease care, where repetitive tasks are often involved and can be automated. Observation and assessment of the diabetic foot is an example of such a task. It consists of a procedure in which the practitioner, using a specific device, repeatedly applies small touches to the patient's feet to evaluate their plantar sensitivity (PS), as shown on the right side of Fig. 1. Peripheral neuropathy is the main cause of loss of PS in diabetic patients and can lead to serious conditions or even limb amputation. The number of patients with diabetes is increasing, and foot assessment is now performed on a mass scale. The procedure has minimal complexity and requires around fifteen minutes, including preparation time [11]. Vision-robotics can be beneficial in this case, freeing up medical staff to focus on more complex tasks. Overall, vision robotics has the potential to improve the efficiency and accuracy of medical care, as well as to reduce the workload of medical staff while improving patient outcomes. In this paper we present a comparison of vision-robotics technologies for the assessment of PS in the context of diabetic foot management. In the next section we present an overview of the proposed system and explain the role of the artificial vision component. Then, a special focus is given to ML approaches that can be used to tackle vision-robotics problems in healthcare applications.
Following development, a comparison of technologies is performed, covering different scenarios. The obtained results are then presented and discussed, before the main conclusions are drawn.
2 Methodology

2.1 Plantar Sensitivity Assessment
PS examination, also known as the 10 gf Semmes-Weinstein monofilament (SWM) test, is a procedure often used by healthcare professionals to assess the effects of peripheral neuropathy in diabetic patients [3]. A nylon thread (the monofilament), used as the testing device, is calibrated by the manufacturer to buckle under a force of 10 gf. With the patient lying down with feet exposed, the practitioner brings the monofilament close to the plantar area and then touches a predefined point. The patient's sensitivity feedback to this touch, positive or absent, is registered, and the procedure is repeated for a total of nine points.

2.2 System Overview
Vision-robotics can be a feasible solution to automate the PS examination. Of the several system modules involved, the computer-vision module presents the major challenges, since feet have variable morphology and exhibit diverse color patterns, despite the common underlying skeletal structure. As shown in Fig. 2, two approaches are possible. The first is based on a pipeline of image processing techniques [4], whose main stages are background segmentation, orientation correction, segmentation of toes and plantar test sites and, finally, a conversion of image coordinates to real-world physical coordinates. During the development phase, it is necessary to select image characteristics for further processing, and during implementation, it is essential to ensure their existence and extract them. In contrast, ML algorithms have emerged as an efficient and effective approach for image segmentation. They often provide end-to-end solutions, eliminate the need for feature selection and robustly handle image variability. These are interesting advantages when image acquisition is performed in an uncontrolled context and under uncontrolled lighting conditions, such as in daily clinical practice. In this paper we compare three techniques for a vision robotics solution: one based on image processing operations, previously described in [4], and two novel techniques based on ML.

2.3 Dataset
For development, we used a previously developed database [4] from which we extracted a subset of 46 images, selected for quality reasons. The database contains photographic plantar images of Caucasian subjects, encompassing both genders and a wide range of ages. The location of the testing sites (TS) is provided as metadata for each image in the form of an independent image mask, with a total of eighteen circles (nine for the left foot and nine for the right foot). Our subset was then subjected to image augmentation techniques, allowing us to increase the number of tuples for the ML training stage. Using five different transformations, as shown in the examples of Fig. 3, from the initial subset we were able to reach a total of 276 images.
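For segmentation data, each geometric transform must be applied identically to the image and its mask so that the TS annotations stay aligned. The sketch below is a minimal numpy illustration of this idea; the function names are ours, and the paper's arbitrary-angle rotation, hue variation and optical distortion are omitted for brevity.

```python
import numpy as np

def augment_pair(image, mask, mode):
    """Apply the same geometric transform to an image and its TS mask.

    mode: 'hflip', 'vflip' or 'rot90' (a 90-degree rotation stands in
    for the arbitrary-angle rotation used in the paper).
    """
    if mode == "hflip":
        return image[:, ::-1], mask[:, ::-1]
    if mode == "vflip":
        return image[::-1, :], mask[::-1, :]
    if mode == "rot90":
        return np.rot90(image), np.rot90(mask)
    raise ValueError(f"unknown mode: {mode}")

def augment_dataset(pairs, modes=("hflip", "vflip", "rot90")):
    """Return the original (image, mask) pairs plus one copy per mode."""
    out = list(pairs)
    for img, msk in pairs:
        for mode in modes:
            out.append(augment_pair(img, msk, mode))
    return out
```

With the paper's five transformations, each of the 46 originals yields five extra tuples, giving the reported 46 × 6 = 276 images; the three modes above would give 46 × 4 = 184.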
Fig. 2. Computer-vision based plantar sensitivity assessment pipelines. Comparison of the hard-coded feature based approach with an end-to-end ML based approach.
Fig. 3. For data augmentation purposes, the (a) original image was transformed using: (b) horizontal flip, (c) vertical flip, (d) rotation, (e) hue variations and (f) optical distortion. (Original resolution and aspect ratio have been adjusted to match the machine learning input requirements.)
Fig. 4. From left to right, input image (here represented with annotations), and segmentation results (for a 9-point segmentation task) from the HCFB, U-Net and SegNet approaches. Estimated masks are represented as white dots over a black background.
2.4 Image Segmentation
The ML landscape for image segmentation is wide, offering many options: U-Net [14], SegNet [1], Feature Pyramid Networks (FPN) [9], Mask R-CNN [6], ResUNet [5], MultiResUNet [7], among many others [16]. The use of attention masks and adversarial mechanisms has also become popular in recent years [13]. From these options we have selected U-Net and SegNet because they have been previously used with success in healthcare-related problems [15] and have a strong supporting community. U-Net is a convolutional neural network (CNN) architecture that is commonly used for image segmentation tasks. The architecture is called "U-Net" because it has a U-shaped network structure, which consists of an encoder network, to capture context, and an expanding network, which leads to precise localization. The first consists of convolutional and max-pooling layers, while the second uses upsampling to reach a per-pixel classification using the feature maps from the first. SegNet is another CNN architecture with similar objectives, also composed of an encoder and a decoder network. One of the key differences is that a set of pooling indices is used to upsample the feature maps in the decoder network, rather than interpolation. This allows the input image to be reconstructed more accurately while reducing the number of parameters of the encoder section from 134M to 14.7M, which can improve performance when using smaller datasets. For training and testing, similar pipelines were created for U-Net and SegNet, using a 10-fold validation strategy. The loss function for model training was defined as the binary cross-entropy (BCE) combined with the Dice-Sørensen coefficient (DSC), widely used in image comparison.
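The combined training loss can be written down concisely. The sketch below is a numpy illustration of BCE plus Dice loss for a single mask pair; the equal weighting of the two terms is an assumption, as the paper does not state the combination weights.

```python
import numpy as np

def bce_dice_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy plus Dice loss for a pair of masks.

    y_true: binary ground-truth mask; y_pred: predicted probabilities.
    Returns BCE + (1 - DSC), so a perfect prediction scores near 0.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    intersection = np.sum(y_true * y_pred)
    dice = (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)
    return bce + (1.0 - dice)
```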
3 Results and Discussion
In Fig. 4, an example of the obtained results is presented, showing the output segmentation masks for each methodology. The expected results should follow the pattern of the mask superimposed on the input image (leftmost), with three TS on the big, middle and small toes (loc. 1, 2, 3), three TS on the metatarsus (loc. 4, 5, 6), located below the previous ones, two TS on the central plantar area (loc. 7, 8), and one on the heel (loc. 9). The hard-coded feature-based approach
(HCFB) (second from the left) can provide precise TS locations but often fails to detect toes. The error propagates, since the first points serve as reference for the metatarsus locations. The U-Net approach (third square) conveys a mask with all expected test sites, but they are broadly defined. Additionally, the central points are sometimes wrongly positioned, possibly due to confusion between the left and right foot. SegNet (rightmost) exhibits a more refined result, with well-defined test locations, despite the absence of points in the central plantar area. To evaluate the systems' performance, Accuracy, Precision, Jaccard score, F1 score and Recall were calculated. These are popular metrics for ML segmentation problems. The masks with annotations performed by experts were the ground truth for a one-on-one pixel comparison. The obtained values, in Table 1, were calculated for two different scenarios: SCN1) one considering only the toes, encompassing three testing sites per foot, for a total of six points, and SCN2) another considering all TS, nine per foot, for a total of eighteen points. We can observe that the best results are obtained using U-Net for SCN2. We can also observe that Recall is higher than Precision, indicating that many of the positively predicted pixels are incorrect. This is compatible with the more broadly defined areas provided by U-Net. For SCN1, which is more challenging, U-Net also achieves better results, with HCFB providing better Precision than SegNet. In both scenarios, overall, the HCFB approach is surpassed by the ML approaches. However, for HCFB the test site locations are drawn as circles with a predefined diameter, which can lead to a smaller overlap with the ground-truth mask and to worse performance scores. The broadly defined areas of U-Net can be converted into specific locations by calculating the center point of each blob.

Table 1. Performance metrics for the evaluation of testing points segmentation methodologies, comparing U-Net, SegNet and HCFB approaches. The highest value for each metric is shown in bold-face.

Metric    | SCN1 (Toes, 3L+3R)          | SCN2 (Full foot, 9L+9R)
          | HCFB   | U-Net  | SegNet    | HCFB   | U-Net  | SegNet
Jaccard   | 0.1729 | 0.1700 | 0.1822    | 0.2828 | 0.3178 | 0.3012
F1        | 0.2873 | 0.2898 | 0.2949    | 0.3973 | 0.4799 | 0.4661
Recall    | 0.1174 | 0.1800 | 0.1763    | 0.4399 | 0.4876 | 0.3373
Precision | 0.7338 | 0.7455 | 0.6622    | 0.4533 | 0.4753 | 0.4480
Accuracy  | 0.8223 | 0.9982 | 0.9387    | 0.8939 | 0.9979 | 0.9654
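The pixel-based scores in Table 1 follow directly from a confusion-matrix count over the two masks. A short numpy sketch of the one-on-one pixel comparison:

```python
import numpy as np

def pixel_metrics(y_true, y_pred):
    """Pixel-wise metrics for binary segmentation masks (0/1 arrays)."""
    y_true = np.asarray(y_true).astype(bool).ravel()
    y_pred = np.asarray(y_pred).astype(bool).ravel()
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    accuracy = (tp + tn) / y_true.size
    return {"Jaccard": jaccard, "F1": f1, "Recall": recall,
            "Precision": precision, "Accuracy": accuracy}
```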
In addition to pixel-based metrics, we also evaluated task-oriented metrics. We defined different target regions on the feet and counted the number of times that the expected test locations were correctly estimated within a radius of one centimeter from the expected location, regardless of whether or not they appeared superimposed on the reference mask. From the obtained results, represented in Fig. 5, we can observe that U-Net provides better estimates for all regions, closely followed by SegNet. Due to its deterministic algorithm,
Fig. 5. Number of testing points detected, by foot region, comparing distinct segmentation approaches. The number of expected points is shown in parentheses in the horizontal axis labels.
HCFB has the smallest variance and never exceeds the expected number of points, which is not the case for the ML approaches. U-Net shows the highest variance on the central foot region, but its average is aligned with the expected value. Furthermore, it is interesting to observe that the ML algorithms deal better with the segmentation of the left foot than of the right foot.
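The task-oriented count described above can be sketched as a greedy nearest-point matching between expected and estimated test sites. The image scale (px_per_cm) and the policy of consuming each predicted point at most once are our assumptions, not details given in the paper.

```python
import numpy as np

def count_detected(expected, predicted, radius_cm=1.0, px_per_cm=10.0):
    """Count expected test sites with a predicted point within radius_cm.

    expected: (N, 2) pixel coordinates of the annotated sites;
    predicted: (M, 2) estimated site centers (e.g., blob centroids).
    """
    radius_px = radius_cm * px_per_cm
    remaining = [np.asarray(p, dtype=float) for p in predicted]
    hits = 0
    for site in expected:
        site = np.asarray(site, dtype=float)
        dists = [np.linalg.norm(site - p) for p in remaining]
        if dists and min(dists) <= radius_px:
            hits += 1
            remaining.pop(int(np.argmin(dists)))  # consume the matched point
    return hits
```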
4 Conclusion and Future Work
Diabetes, affecting millions of people, can lead to serious health complications and have a significant impact on society. In this paper we have presented a comparison of algorithms for supporting the operation of a vision-robotics system for the automatic assessment of the diabetic foot. SegNet provided more precise results, achieving better pixel matching. However, U-Net outperforms it in terms of location identification and in challenging locations/patterns. U-Net achieved the best overall results, but HCFB led to lower variability in the results. In the future, a hybrid approach, using ML for an initial estimate and anatomical mesh constraints for refinement, could lead to optimal results. Since foot anatomy is well known, skeletal landmarks connected by orientation- and dimension-constrained links can be adjusted to the ML-estimated points. Location-specific algorithms can integrate and improve the results towards an optimized human-oriented solution. Developments featuring the aforementioned characteristics are anticipated for the future. (This work is financed through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project LA/P/0063/2020.)
References

1. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017)
2. Bramhe, S., Pathak, S.S.: Robotic surgery: a narrative review. Cureus 14(9), e29179 (2022)
3. Bus, S.A., et al.: Guidelines on the prevention of foot ulcers in persons with diabetes (IWGDF 2019 update). Diab./Metab. Res. Rev. 36(Suppl 1), e3269 (2020)
4. Costa, T., Coelho, L., Silva, M.F.: Automatic segmentation of monofilament testing sites in plantar images for diabetic foot management. Bioengineering 9(3), 86 (2022)
5. Diakogiannis, F.I., Waldner, F., Caccetta, P., Wu, C.: ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogram. Remote Sens. 162, 94–114 (2020)
6. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. (2018)
7. Ibtehaz, N., Rahman, M.S.: MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 121, 74–87 (2020)
8. Klodmann, J., et al.: An introduction to robotically assisted surgical systems: current developments and focus areas of research. Curr. Rob. Rep. 2(3), 321–332 (2021)
9. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 936–944 (2017)
10. Lopes, D., Coelho, L., Silva, M.F.: Development of a collaborative robotic platform for autonomous auscultation. Appl. Sci. 13(3), 1604 (2023)
11. Martins, P., Coelho, L.: Evaluation of the Semmes-Weinstein monofilament on the diabetic foot assessment. In: Belinha, J., et al. (eds.) Advances and Current Trends in Biomechanics, pp. 121–125. CRC Press, Porto (2021)
12. Nieto Agraz, C., Pfingsthorn, M., Gliesche, P., Eichelberg, M., Hein, A.: A survey of robotic systems for nursing care. Front. Rob. AI 9, 832248 (2022)
13. Oktay, O., et al.: Attention U-Net: learning where to look for the pancreas. Technical report (2018). arXiv:1804.03999 [cs]
14. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
15. Saood, A., Hatem, I.: COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. BMC Med. Imaging 21(1), 19 (2021)
16. Wang, R., Lei, T., Cui, R., Zhang, B., Meng, H., Nandi, A.K.: Medical image segmentation using deep learning: a survey. IET Image Process. 16(5), 1243–1267 (2022)
A Conceptual Framework for the Improvement of Robotic System Reliability Through Industry 4.0

Dimitris Mourtzis(B), Sofia Tsoubou, and John Angelopoulos

Laboratory for Manufacturing Systems and Automation, Department of Mechanical Engineering and Aeronautics, University of Patras, 26504 Rio Patras, Greece
[email protected]
Abstract. In recent years, the traditional manufacturing industry has been affected by the advent of Industry 4.0. Robotic systems have become a standard tool in modern manufacturing due to their unique characteristics, such as repeatability, precision, speed, and high payload. However, robotic manipulators suffer from low reliability. Low reliability increases the probability of disruption in manufacturing processes, thereby increasing downtime and maintenance costs. Consequently, for complex systems, such as robots, which consist of many interdependent components with various failure modes, it is critical to develop effective reliability assessment techniques to ensure high performance. Several reliability assessment methods exist, but they are mostly time-consuming and expert-knowledge-intensive processes. This research develops a model-based approach for the reliability calculation of a robotic cell, pinpointing the simplifications that are taken into account. Furthermore, a data-driven approach is proposed with the aim of enhancing the results of the model-based approach and improving the reliability of the robotic system along the new directions that Industry 4.0 technologies can offer, by predicting the Remaining Useful Life (RUL) of critical components using real-time data, thanks to the Digital Twin (DT).

Keywords: Reliability assessment · Robots · Digital Twin · Industry 4.0 · RUL
1 Introduction and State of the Art

Due to the rapid evolution of technology, industrial requirements are constantly changing, creating the need for more reliable manufacturing systems [1]. Robotic systems have become fundamental parts of the manufacturing industry because of their versatility when performing manufacturing tasks in environments with increasingly rising demands [2]. For many years, the reliability of manufacturing systems has been of the highest concern. This fact has not changed over the years. On the contrary, reliability's significance as a performance indicator of manufacturing systems has increased as a result of the increasing failures of systems, which arise because of their complexity. Due to Industry 4.0 technologies, the data that can be used to analyze the reliability of systems are now even more accessible. As a result, a significant amount of expert knowledge that was previously guiding and supporting reliability assessment now has the chance to be improved and validated [3].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 62–70, 2024. https://doi.org/10.1007/978-3-031-38241-3_8

Companies are now trying to be smarter and, instead of preventing failures, are moving towards predicting them [4]. Predictive Maintenance (PdM) is gaining more interest as it deals with predicting faults before they occur [5]. RUL prediction has become increasingly significant in machine health monitoring [6]. By predicting the RUL of a machine, the estimation of possible equipment malfunctions is enabled and, by extension, more efficient scheduling of Maintenance and Repair Operations (MRO) can be achieved. Ultimately, the goal is to enhance the reliability of the company's equipment. The contribution of this paper is to develop a model-based approach to calculate the reliability of a robotic cell and to propose a data-driven approach for the robot's reliability improvement based on the DT under the framework of PdM. Assumptions and simplifications in the construction of the RBD gave breeding ground to the data-driven approaches. The remainder of this manuscript is organized as follows. In Sect. 2, the proposed methodology and the system architecture are illustrated; in Sect. 3, the case study is presented; and in Sect. 4, the research is concluded. Over time, several robotics versions emerged as the industry witnessed various revolutions. In the era of Robotics 1.0, robots were dangerous, and they were bounded by fences. They were developed to relieve humans from tiring and repetitive tasks. During the era of Robotics 2.0, vision was added to robots by using sensors and cameras. Collaborative robots were also developed, but they were slower when interacting with humans. Now we are experiencing the digital era of robots, Robotics 3.0. In this version, mobile robots are introduced, and robots are leveraging Industry 4.0 technologies.
The Robotics 4.0 era starts from the 2020s, with the integration of the Artificial Intelligence of Things (AIoT) to produce more cognitive and perceptive robots [7, 8]. In Fig. 1, the four robotic revolutions are illustrated.
Fig. 1. The four Robotic Revolutions
One way to improve the reliability of a robot is to create a virtual counterpart of it. In the DT concept, the virtual and physical systems are synchronized by means of connected smart devices. The DT is one of the main technologies of Industry 4.0 and can be used as a tool to forecast failures of a system, through real-time data exchange enabled by connected
sensors [9, 10]. Due to the development of the Internet of Things (IoT), huge amounts of data can now be gathered [11]. Machine Learning (ML) can act as an effective tool in the PdM field for predicting equipment breakdowns due to its ability to analyze high-dimensional and multivariate data and to extract hidden patterns [5, 6]. Reliability is defined as the ability of a system or a component to serve its required functions over a given period and under certain conditions [12]. Reliability improvement offers, among other things, longer product lifecycles and reduced maintenance costs [11]. Reliability assessment is a systematic implementation through the entire lifecycle of a product or system [13]. Until now, reliability modeling methods have been developed for estimating parameters such as Mean Time To Failure (MTTF) and Mean Time Between Failures (MTBF). These model-based methods are based on domain knowledge of the system and on domain experts' opinions, creating a bottleneck, especially in complex production systems. These days, data-driven approaches are receiving a lot of interest in modern manufacturing systems due to the development of the IoT [11]. In references [3, 6, 11], the integration of Industry 4.0 tools with the aim of enhancing the reliability of manufacturing systems is discussed. In today's industries, reactive and preventive maintenance are considered ineffective. Being able to predict when a machine fails, i.e., predicting its RUL, will increase its reliability and will enable engineers to schedule maintenance [4]. In Fig. 2, the evolution of reliability assessment is presented. The conventional reliability assessment processes can now be improved and enhanced by leveraging Industry 4.0 technologies.
Fig. 2. The Evolution of Reliability Assessment in the context of Industry 4.0
Failure Mode & Effect Analysis (FMEA) is a strategy for identifying potential failures, assessing their causes and effects, and determining what should be done to reduce the risk of failure. The traditional method of risk assessment in FMEA is to compute a Risk Priority Number (RPN). Every failure is evaluated by three main characteristics: severity (S), occurrence (O), and detection (D), and their product is the RPN. Failures can be ordered based on their RPN scores, and appropriate measures are conducted first on the high-risk failures [14, 15]. Fault Tree Analysis (FTA) is a technique which assesses system failures one at a time by detecting causal linkages, and it can integrate numerous causes of failure. FTA is used as the basis for numerical analysis. Combinations of faults are represented at each level of the tree using logical operators such as AND, OR, and EVENT. FTA can be used to fully understand the root cause of a failure. Markov Analysis (MA) is a time-dependent reliability technique
which assesses the likelihood of the system being in a given functional state, or the likelihood of certain events occurring at specific times or intervals [14]. Petri Net (PN) is a method that is utilized in the modeling and analysis of complex manufacturing systems due to its ability to dynamically model a system. It can also examine unplanned failures, as well as their sequence [16]. The Reliability Block Diagram (RBD) is a representation method for evaluating a system's reliability, depending on its logical structure. An RBD comprises series configurations, parallel configurations, or a combination of them. A parallel connection is used to demonstrate redundancy [14]. In Table 1, conventional reliability assessment techniques are summarized.

Table 1. Review of the most common conventional reliability assessment techniques

Method | Type | Capabilities | Limitations | Ref
FMEA | Bottom-up analysis | Critical components identification; failure prioritization; cause-causality linkage of defects | Different combinations of S, O and D may produce the same RPN number; difficult to assess risk factors precisely | [14, 15]
FTA | Top-down graphical analysis | Identification of root causes; integration of several failure causes; detection of causal linkages | Difficult analysis of highly complex systems; does not analyze intricate maintenance tactics; multiple procedural steps | [14, 16]
MA | Time dependent | Detects working and non-working conditions of the system with random variables | Assumes that state transition rates are constant; high complexity for large numbers of system states | [14]
PN | Graphical method | Minimal cut and path sets are determined in fewer procedural steps | Very complex graphical model; difficult to understand the structure | [16–18]
RBD | Graphical method | System modeling; examines the relationships of the system's components (series/parallel) | Cause and effect paths cannot be given; does not examine complicated maintenance strategies | [14]
2 Proposed Methodology and System Architecture

According to the literature, traditional reliability assessment techniques are mainly based on probabilistic assumptions and experts' opinions, making reliability assessment less accurate. In addition, it is challenging to comprehend and create detailed mathematical models of modern complex systems. Therefore, a conceptual data-driven framework is proposed for improving reliability assessment. In Fig. 3, the methodology is presented, illustrating how the real-time data exchange via the DT can be used for RUL prediction with the aim of enhancing the reliability of the system. A robotic system is composed of several components, and each of those affects the probability that the system will operate correctly [16]. Thus, a breakdown of the robotic cell into its modules is needed. An FTA model can be constructed to identify root causes and interdependencies between the various components' faults that could cause a system failure. Based on expert opinions, publicly available reliability databases, maintenance manuals and the FTA, the critical component is selected. A way to improve the reliability of a system is to make a DT of it. A data acquisition device is used to gather data from the real robot. The data are stored in a cloud datastore through IoT devices. A PdM algorithm retrieves data that has been stored in the cloud database and tries to identify patterns in the processed data in order to identify unanticipated machine malfunctions [4]. Data preprocessing is essential in order to extract Condition Indicators (CI) as well as to reduce the size of the datasets to the bare minimum required. The most suitable CIs will lead to more accurate ML models. Training data are used first to train the ML model, after which it is validated with test data. The result of the RUL estimation will support the decision-making department in performing maintenance strategies at the optimum time, optimizing in this way the reliability of the system.
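As a deliberately simplified stand-in for the ML-based RUL estimator (whose development is left for future work), the idea of extrapolating a condition indicator to a failure threshold can be sketched as follows; the linear degradation trend and the threshold value are our assumptions.

```python
import numpy as np

def estimate_rul(times, ci_values, failure_threshold):
    """Naive RUL estimate: fit a linear trend to a condition indicator
    and extrapolate it to the failure threshold.

    times: sample timestamps (e.g., hours); ci_values: CI samples,
    assumed to grow as the component degrades. Returns the remaining
    time, or None if no upward degradation trend is detected.
    """
    slope, intercept = np.polyfit(times, ci_values, 1)
    if slope <= 0:
        return None
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])
```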
Fig. 3. The conceptual framework for Reliability Improvement of a Robot in Industry 4.0
A Conceptual Framework for the Improvement
67
3 Case Study

In this chapter, the RBD is constructed for a real spot-welding robotic cell, which consists of two identical Comau SMART NJ-370-3.0 robots, along with the PdM approach. The first robot operates as a welder and the second handles the metal sheets. The main modules of the robotic cell are depicted in Fig. 4, where the RBD of the robotic cell is constructed. A constant failure rate is applied for this modelling approach, and the exponential distribution function is used due to its simplicity in dealing with constant failure rates. For the assessment of the robotic cell's reliability, the following generalized equations are implemented for in-series and in-parallel systems, respectively:

$$R(t) = \prod_{i=1}^{n} R_i \tag{1}$$

$$R_{total,1}(t) = R_{total,2}(t) = 1 - \prod_{i=1}^{n} (1 - R_i) \tag{2}$$

$$R_{total}(t) = R_A \times R_D = R_0 \times \left[ 1 - \left( 1 - \prod_{\iota=1}^{n} R_{1\iota} \right) \left( 1 - \prod_{\iota=1}^{n} R_{2\iota} \right) \right] \tag{3}$$

where $R_i$ is the reliability of each individual component and $R_{total}(t)$ is the total reliability of the system, $\forall i \in \mathbb{Z}^+$.
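Equations (1)–(3) translate directly into code. The sketch below assumes the exponential reliability function implied by the constant-failure-rate assumption, R(t) = e^(−λt); the module failure rates used in the example are hypothetical, not values from the case study:

```python
import math

def r_series(rs):
    """Eq. (1): series structure -- the product of component reliabilities."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def r_parallel(rs):
    """Eq. (2): parallel structure -- 1 minus the product of unreliabilities."""
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

def r_total(r0, branch1, branch2):
    """Eq. (3): a common module R0 in series with two parallel branches,
    each branch itself a series chain of components."""
    return r0 * r_parallel([r_series(branch1), r_series(branch2)])

def r_exp(lam, t):
    """Constant failure rate -> exponential reliability R(t) = exp(-lam*t)."""
    return math.exp(-lam * t)

# Hypothetical failure rates [1/h] for a shared module and two identical robots
t = 1000.0
r0 = r_exp(1e-5, t)
branch = [r_exp(2e-5, t), r_exp(5e-6, t)]
total = r_total(r0, branch, branch)      # approx. 0.989
```

The two-identical-branch call mirrors the cell's structure of two identical robots behind common modules.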
Fig. 4. The network reduction method of robotic cell’s RBD
To assess the reliability of the robotic cell, a digital counterpart of it should be made. For this reason, we build a simulated model of the robot. In [19], the Solidworks assembly of the SMART NJ-370-3.0 is provided. A free educational version of Solidworks
68
D. Mourtzis et al.
is used for this research. The Solidworks assembly is then exported to MATLAB Simulink. In Fig. 5, the communication between the most important modules of the robotic cell, as well as the PdM approach, is illustrated.
Fig. 5. The spot-welding process and the PdM approach into the robotic cell
4 Outlook and Future Work

In this research work, the design of a conceptual model-based framework for improving the reliability assessment of a robotic cell has been presented. Furthermore, a data-driven approach is proposed for enhancing RBD results. Considering the preliminary results, the assumptions and simplifications used for the RBD modeling lead to less accurate reliability calculations. A constant failure rate was applied, since most components exhibit a constant failure rate over their useful life, and the exponential distribution was used due to its efficiency in dealing with it. RBD can be considered an initial and important method for assessing a manufacturing system's reliability, as it gives knowledge regarding the structure of the system. To enhance this assessment, attention should be focused on the huge amount of data generated by the proliferation of sensors. This research work will be further elaborated in the future, towards completing the robotic cell model within the Simulink modelling environment and fully connecting it to the physical cell, aiming at a real-time connection between the physical and digital counterparts. Our aim is to develop a predictive model to estimate the RUL of the robotic cell's critical component in order to enhance its reliability and, as a result, the overall reliability of the system. In order to design and develop an efficient PdM strategy, each step should be completed correctly. The selection of CIs as well as the selection of ML models are challenging tasks; non-distinctive features may harm the model. With regard to data management, due to the vast amount of data produced daily, edge computing will be integrated in order to minimize the computational load on the cloud layer and to fully utilize the inherent intelligence of the embedded systems at the shop-floor level. To conclude, this research work focused on the reliability of hardware components.
However, software reliability and the impact of humans on reliability are additional factors that should be taken into account when assessing the overall reliability of manufacturing systems.
References

1. Mourtzis, D.: Simulation in the design and operation of manufacturing systems: state of the art and new trends. Int. J. Prod. Res. 58(7), 1927–1949 (2020)
2. Mourtzis, D., Synodinos, G., Angelopoulos, J., Panopoulos, N.: An augmented reality application for robotic cell customization. In: Brissaud, D., Zwolinski, P., Paris, H., Riel, A. (eds.) 27th CIRP Life Cycle Engineering Conference (LCE2020) – Advancing Life Cycle Engineering: from technological eco-efficiency to technology that supports a world that meets the development goals and the absolute sustainability. Procedia CIRP, vol. 90, pp. 654–659 (2020)
3. Lazarova-Molnar, S., Mohamed, N.: Reliability assessment in the context of industry 4.0: data as a game changer. In: Elhadi, S. (ed.) The 10th International Conference on Ambient Systems, Networks and Technologies (ANT 2019) / The 2nd International Conference on Emerging Data and Industry 4.0 (EDI40 2019) / Affiliated Workshops. Procedia Computer Science, vol. 151, pp. 691–698 (2019)
4. Mourtzis, D., Angelopoulos, J., Panopoulos, N.: Intelligent predictive maintenance and remote monitoring framework for industrial equipment based on mixed reality. Front. Mech. Eng. 6, 578379 (2020)
5. Carvalho, T.P., Soares, F.A., Vita, R., Francisco, R.D.P., Basto, J.P., Alcalá, S.G.: A systematic literature review of machine learning methods applied to predictive maintenance. Comput. Ind. Eng. 137, 106024 (2019)
6. Wang, Y., Zhao, Y., Addepalli, S.: Remaining useful life prediction using deep learning approaches: a review. In: Gao, R.X., Yan, R. (eds.) Proceedings of the 8th International Conference on Through-Life Engineering Services – TESConf 2019. Procedia Manufacturing, vol. 49, pp. 81–88 (2020)
7. OpiFlex Homepage, https://www.opiflex.se/en/publicity/four-robot-revolutions-flexible-robots/, last accessed 14 October 2022
8.
Ghodsian, N., Benfriha, K., Olabi, A., Gopinath, V., Arnou, A., Charrier, Q.: Toward designing an integration architecture for a mobile manipulator in production systems: Industry 4.0. In: Nabil, A. (ed.) 32nd CIRP Design Conference (CIRP Design 2022) – Design in a Changing World. Procedia CIRP, vol. 109, pp. 443–448 (2022)
9. Negri, E., Fumagalli, L., Macchi, M.: A review of the roles of digital twin in CPS-based production systems. In: Pellicciari, M., Peruzzini, M. (eds.) 27th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM2017), 27–30 June 2017, Modena, Italy. Procedia Manufacturing, vol. 11, pp. 939–948 (2017)
10. Phanden, R.K., Sharma, P., Dubey, A.: A review on simulation in digital twin for aerospace, manufacturing and robotics. In: Tyagi, R.K., Avasthi, D.K., Garg, A. (eds.) 2nd International Conference on Future Learning Aspects of Mechanical Engineering. Materials Today: Proceedings, vol. 38, pp. 174–178 (2021)
11. Friederich, J., Lazarova-Molnar, S.: Towards data-driven reliability modeling for cyber-physical production systems. In: Shakshuki, E., Yasar, A. (eds.) The 12th International Conference on Ambient Systems, Networks and Technologies (ANT) / The 4th International Conference on Emerging Data and Industry 4.0 (EDI40) / Affiliated Workshops. Procedia Computer Science, vol. 184, pp. 589–596 (2021)
12. Chryssolouris, G.: Manufacturing Systems: Theory and Practice, 2nd edn. Springer, New York (2006)
13. Fazlollahtabar, H., Niaki, S.T.A.: Reliability Models of Complex Systems for Robots and Automation. CRC Press, Taylor & Francis Group, London/New York (2017)
14. Kostina, M.: Reliability management of manufacturing processes in machinery enterprises. Theses of Tallinn University of Technology, ISSN 1406-4766, 71 (2012)
15. Liu, H.C., Liu, L., Liu, N.: Risk evaluation approaches in failure mode and effects analysis: a literature review. Expert Syst. Appl. 40(2), 828–838 (2013)
16. Sharma, S.P., Kumar, D., Kumar, A.: Reliability analysis of complex multi-robotic system using GA and fuzzy methodology. Appl. Soft Comput. 12(1), 405–415 (2012)
17. Knezevic, J., Odoom, E.R.: Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology. Reliab. Eng. Syst. Saf. 73(1), 1–17 (2001)
18. Signoret, J.P., Dutuit, Y., Cacheux, P.J., Folleau, C., Collas, S., Thomas, P.: Make your Petri nets understandable: reliability block diagrams driven Petri nets. Reliab. Eng. Syst. Saf. 113, 61–75 (2013)
19. Comau Homepage, https://www.comau.com/en/competencies/robotics-automation/robotteam/nj-370-3-0/, last accessed 9 November 2022
Human-Robot Collaboration, Sustainable Manufacturing Perspective

Robert Ojstersek(B), Borut Buchmeister, and Aljaz Javernik
Faculty of Mechanical Engineering, University of Maribor, Smetanova 17, 2000 Maribor, Slovenia [email protected]
Abstract. This paper presents the use of Siemens' Tecnomatix simulation modelling tool to evaluate the importance of the sustainable manufacturing perspective for human-robot collaboration. The research focuses on evaluating key parameters (utilization, scrap rate, cost, time, and quantities) of sustainable manufacturing from the social, environmental, and economic perspectives. The simulation model was used to obtain time- and quantity-based parameters of the collaborative workplace that can be applied directly to the real-world environment. The results show the high suitability of simulation modelling methods for evaluating the collaborative workplace from the perspective of sustainable manufacturing. The obtained results answer the original question of how multidisciplinary research can evaluate the impact of collaborative robots on humans and on the sustainability of the manufacturing system. The results show that human-robot collaboration, when studied at an advanced stage, can address labor shortages in developed countries and ensure the global competitiveness of companies through a highly efficient and sustainable manufacturing system.

Keywords: Sustainable Manufacturing · Human-robot collaboration · Workplace · Simulation
1 Introduction

In the era of Industry 4.0 and the transition to new technologies, new opportunities arise to achieve more efficient manufacturing systems [1]. Aware of the importance of sustainable manufacturing, companies are looking for new solutions to increase the efficiency of manufacturing systems, while in the developed world they often face labor, environmental and economic limitations [2]. The technology that could make this possible is the use of collaborative machines (mostly robots) in production systems [3], although their use does not necessarily meet all the parameters of sustainable manufacturing. Collaborative robots enable safe and efficient teamwork with humans, but we do not know their impact on the well-being of the worker or, more broadly, on the efficiency of the manufacturing system [4]. Previous studies of industrial practice show that existing tools for planning the impact of collaborative workplaces cannot assess their suitability in the detail known for industrial robots [5]. The disadvantage of collaborative workplaces is often the rather low performance of the applications compared to traditional
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 71–78, 2024. https://doi.org/10.1007/978-3-031-38241-3_9
72
R. Ojstersek et al.
robot applications without human interaction [6]. The multidisciplinary optimization problem [7] requires that we simultaneously increase the safety of the worker and the efficiency of the workplace, thus seeking an economic justification [8]. The use of simulation modeling methods [9] ensures the reliability and stability of production control in a dynamic production environment in which the company maintains its global competitiveness. Using the applied simulation modeling methods, the authors were already able to transfer the results to a real-world environment, where they can predict the impact of collaborative workplaces on manufacturing system efficiency [10]. At the same time, they proved that the optimal design of collaborative workplaces is important for improving the performance of robotic manipulators and, in particular, of workers [11]. With the proper implementation of collaborative workplaces, we can reduce the risks of worker health problems, as collaborative robots perform tasks that are difficult and repetitive for workers [12]. Proper planning and placement of the workplace can increase the overall efficiency of the manufacturing process [13]. The effectiveness of collaborative robots has been demonstrated in various application areas [14, 15], but researchers still question their effectiveness and benefits, and where their limitations lie from a sustainable manufacturing perspective. Sustainable manufacturing is and will be an increasingly important aspect of holistically efficient manufacturing systems [16]. With the introduction of collaborative workplaces, industry is seeking an open, inclusive, and neutral set of indicators to measure the sustainability of manufactured products and manufacturing processes [17]. Despite the economically justified and well-researched adoption of collaborative workplaces [18], the social and environmental justification aspects of such workplaces remain unexplored [19].
Based on previous research, we find that researchers focus on evaluating and optimizing individual parameters of sustainable manufacturing [20], linked to specific industrial cases [21], and that a unified approach to evaluating collaborative workplaces is not apparent. In this research, we present a simulation modelling approach to obtain data on a collaborative workplace, which can be used to verify the parameters of sustainable manufacturing. With the proposed approach, we aim to reduce in advance the risk of sustainably unjustified collaborative workplaces, enabling companies to gain global competitiveness.
2 Problem Description

Collaborative workplaces, where a human interacts directly with a robot, are becoming increasingly common in industry. With the rapid development of collaborative robots in the industrial environment, companies have often opted for collaborative robots due to their advantages related to financial justification, the occasional lack of legislation on the associated safety features (even at the national level), and the ease of direct implementation in a real-world environment. With the trend of increasing awareness of the importance of sustainable manufacturing, collaborative workplaces represent a key technology on which all three main aspects of sustainable manufacturing can be based. In Fig. 1, the proposed block diagram represents the economic, social, and environmental perspectives of sustainable manufacturing, with collaborative workplaces providing a highly efficient, environmentally justified, and socially balanced manufacturing system. Discussing sustainable manufacturing, a growing number of companies state it as a key objective in
Human-Robot Collaboration, Sustainable Manufacturing Perspective
73
their strategy to drive growth and global competitiveness. Sustainable manufacturing is the production of products using a process that minimizes negative impacts on the environment, conserves energy and natural resources, and is safe for employees, communities, and consumers. The overall goal of sustainable manufacturing is to consider the entire product cycle and optimize the life cycle of manufacturing systems, products, and services.
Fig. 1. Problem description block diagram.
Sustainable manufacturing not only produces more sustainable products, but also makes manufacturing processes more sustainable, which increases a company's overall benefit to society and the environment. When introducing a collaborative workplace into a manufacturing system, the question is whether the collaborative robot can raise the company's sustainability orientation. Also questionable for manufacturing companies is the initial investment cost, which is generally lower than that of an industrial robot, but still higher than the cost of human labor in developing or less developed countries. In developed countries with labor shortages, collaborative robots can fill the gap left by the lack of personnel for monotonous, dangerous, or toxic work. However, investment costs are decreasing every year as new technologies are developed and collaborative robots become more widespread. The presented research considers the evaluation of the sustainability of a collaborative workplace in which one worker operates with two collaborative robots. The goal of the company is to evaluate the sustainable justification of introducing such a workplace, where we want to verify the economic justification with the parameters of operating and idle costs, the environmental justification with the parameter of process waste, and the social inclusion of the worker with a uniformly high utilization of his work.
3 Simulation Modelling

The presented research uses the simulation modelling method to evaluate the suitability of a collaborative workplace from the sustainable manufacturing perspective. The Siemens Tecnomatix software environment was used, in which a model of the collaborative workplace was built, as shown in Fig. 2. The evaluated case study considers real-world data from which the dynamics of the collaborative workplace are assumed. The simulation model runs with the following parameters: the shift time is 8 h, where worker 1 is replaced by worker 2 when needed (statutory breaks). The simulation model runs in
three shifts per day, with an assumed simulation time of 1440 min. According to the three-shift schedule, the simulation model uses a four-hour warm-up period. The optimal target of the collaborative workplace for the finished-products-per-day parameter is 5760 finished parts, assuming that a new part arrives every 15 s. Transfer time is included in the operating time of each assembly operation. Semi-finished products are always available.
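The target of 5760 parts per day follows directly from the stated arrival interval; a quick arithmetic check (constant names are ours, not from the paper):

```python
# Shift structure of the simulation run
SHIFT_MIN = 8 * 60                          # one 8-hour shift in minutes
SHIFTS_PER_DAY = 3
SIM_TIME_MIN = SHIFTS_PER_DAY * SHIFT_MIN   # 1440 min per simulated day
WARM_UP_MIN = 4 * 60                        # four-hour warm-up period

ARRIVAL_INTERVAL_S = 15                     # a new part arrives every 15 s
target_parts_per_day = SIM_TIME_MIN * 60 // ARRIVAL_INTERVAL_S   # 5760
```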
Fig. 2. Proposed block diagram and evaluated case study simulation model.
Table 1 contains the input data describing the data-driven simulation model. The collaborative workstation consists of the first collaborative robot (CR1) and the second collaborative robot (CR2); for each operation, the processing time, operating cost and idle cost are determined.

Table 1. Simulation model input parameters.

Operation            | Placing      | Inserting-1     | Screwing     | Inserting-2       | Packing
Operator             | CR1          | W               | CR2          | W                 | CR1
Processing time [s]  | 5.9          | T(6.8, 8, 9.2)  | 15           | T(7.5, 8.8, 10.1) | 7.2
Operating cost [€/h] | 37           | 23              | 33           | 23                | 37
Idle cost [€/h]      | 13           | 23              | 11           | 23                | 13
Scrap rate [%]       | T(1, 1.5, 2) | U(4, 7)         | T(1, 1.5, 2) | U(4, 7)           | T(1, 1.5, 2)
The letter T denotes a triangular distribution and the letter U a uniform distribution. The collaborative robot CR1 performs the placing and packing operations, while the collaborative robot CR2 performs the screwing operation. A worker (W) performs the two component-inserting operations. The operating cost and the idle cost are also determined for the worker. Processing times for the collaborative robots and the worker are defined in the Siemens Tecnomatix software environment using the sequence modelling approach (Sequence Editor), where we can
precisely define exact time and sequence parameters, which can be transferred directly into a real-world environment. The simulation model uses the methods, calculations and assumptions presented in [19] to determine the operating and idle costs and the process scrap rate parameter. The assumptions regarding operating and idle costs include, when a collaborative machine is introduced, its initial investment cost with a depreciation period of five years. The costs describing the worker refer to the worker's gross hourly rate.
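The distributed entries of Table 1 can be sampled with Python's standard library (note that `random.triangular` takes its arguments in the order low, high, mode). This is only an illustrative sketch of the input model, with our own operation names, not the Tecnomatix implementation:

```python
import random

random.seed(42)   # reproducibility of the sketch only

def processing_time(operation):
    """Sample a processing time [s] for one operation, following Table 1."""
    if operation == "inserting_1":               # worker, T(6.8, 8, 9.2)
        return random.triangular(6.8, 9.2, 8.0)  # args: low, high, mode
    if operation == "inserting_2":               # worker, T(7.5, 8.8, 10.1)
        return random.triangular(7.5, 10.1, 8.8)
    constants = {"placing": 5.9, "screwing": 15.0, "packing": 7.2}
    return constants[operation]

def scrap_rate(operator):
    """Sample a scrap rate [%]: T(1, 1.5, 2) for robots, U(4, 7) for the worker."""
    if operator == "W":
        return random.uniform(4.0, 7.0)
    return random.triangular(1.0, 2.0, 1.5)

# Expected values: triangular mean = (low + mode + high) / 3
mean_insert_1 = (6.8 + 8.0 + 9.2) / 3    # 8.0 s
mean_scrap_w = (4.0 + 7.0) / 2           # 5.5 %
```

The expected values show why the worker's operations dominate both the cycle-time variability and the scrap rate in the results that follow.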
4 Results

The results in Table 2 show the evaluated parameters describing sustainable manufacturing from the social, economic and environmental points of view. It is difficult to classify certain parameters into a single category of sustainable manufacturing aspects, because they describe a multidimensional parameter for both the collaborative robot and the worker. From the social aspect, an appropriate level of employment (workload/utilization) of the worker represents long-term effective, health-friendly work. According to the results of the studies, the utilization parameter only partially describes the actual state, so a more detailed investigation would be useful. Certainly, the integration of workers in collaborative workplaces, in cooperation with a robot, increases the degree of integration in automated and, at the same time, flexible production systems. From the perspective of economic justification, the results are represented by the parameters of operating and idle costs and by the quantity of produced products. The environmental aspect, the importance of which is becoming increasingly apparent in view of limited supply chains and rising prices for basic energy resources and materials, is represented by the process scrap rate parameter. As for the process waste parameter, it is important to emphasize that each discarded piece in a single manufacturing phase represents not only discarded material, but also the time and capacity of the machine or worker invested in that product.

Table 2. Collaborative workplace evaluation results.

Parameter                     | Sustainable manufacturing aspect | CR1    | CR2    | W
Utilization [%]               | Social/Economic                  | 83.6   | 92.6   | 95.03
Operating cost [€]            | Economic                         | 742.64 | 733.01 | 524.62
Idle cost [€]                 | Economic                         | 363.1  | 19.66  | 282.9
Scrap [pcs]                   | Environmental                    | 88     | 92     | 265
Time processing [h]           | Social/Economic                  | 20.06  | 22.21  | 22.8
Total products produced [pcs] | Economic                         | 5265 (workplace total)
The results in Fig. 3a show that the average utilization of the collaborative robots and the worker is 90.4%, with the highest utilization for the worker, at a total utilization
of 95.03%, which could present a bottleneck if additional parts arrive. Since the simulation duration was 1440 min, it would be useful to extend this time, especially to evaluate the utilization of the worker over a longer period. The highest processing cost, shown in Fig. 3b, occurs with the collaborative robot CR1, followed by the collaborative robot CR2, while the lowest labor cost is caused by the worker. These results confirm that the use of labor in the less developed world, where the hourly rate of a production worker is low, is still an important factor in the adoption of automated and robotic systems. Of course, the shortage of labor in developed countries and the return of industry (backshoring) from less developed countries significantly increase the hourly rate of the worker, and in this case collaborative robots can be one of the main pillars of serial and flexible production. Looking at the total cost of the operators, we find that the collaborative robot CR1 is the most expensive at 1105.7 EUR per three shifts, the collaborative robot CR2 is cheaper by 10.8%, and the worker is cheaper by as much as 37.8%, which further supports the above claims. As for the process scrap rate parameter, we note (Fig. 3c) that for the collaborative robots it averages 90 pieces in three shifts, while for the worker it increases by a factor of 2.94 due to assembly errors and amounts to 265 pieces. From the process design point of view, the stands involved ensure higher robustness and reliability of the work performed by collaborative robots, with lower material, resource, and time losses. The time processing parameter in Fig. 3d proves the correct operation of the simulation model, as the data exactly match the operators' utilization parameter.
According to the company's optimal target of 5760 pieces in three shifts, the results show that the collaborative workplace produces an average of 5265 pieces, which is 8.6% less than the required quantity.
Fig. 3. Parameters results: a) Utilization, b) Costs, c) Scrap and d) Time processing.
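Several of the derived figures quoted above can be reproduced directly from the Table 2 values; a quick consistency check in Python (variable names are ours):

```python
# Values copied from Table 2
util = {"CR1": 83.6, "CR2": 92.6, "W": 95.03}            # [%]
operating = {"CR1": 742.64, "CR2": 733.01, "W": 524.62}  # [EUR]
idle = {"CR1": 363.1, "CR2": 19.66, "W": 282.9}          # [EUR]
scrap = {"CR1": 88, "CR2": 92, "W": 265}                 # [pcs]

avg_util = sum(util.values()) / 3                    # ~90.4 %
cr1_total_cost = operating["CR1"] + idle["CR1"]      # ~1105.7 EUR per three shifts
robot_avg_scrap = (scrap["CR1"] + scrap["CR2"]) / 2  # 90 pcs
worker_scrap_factor = scrap["W"] / robot_avg_scrap   # ~2.94
cr1_hours = util["CR1"] / 100 * 24                   # ~20.06 h over three shifts
shortfall = (5760 - 5265) / 5760                     # ~8.6 % below target
```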
5 Discussion and Conclusions

The main research question considered the importance of studying the sustainable manufacturing aspect of collaborative workplaces. Using the evaluated parameters of social, environmental, and economic justification, the presented results answer the research question with data that prove the importance of appropriately integrating modern technologies into existing or newly proposed manufacturing systems. Regarding the social aspect, we note that the adoption of collaborative workplaces depends on the price and availability of labor. With high probability, we can claim that in highly developed countries collaborative workplaces represent an opportunity for a higher level of economic competitiveness. In the present research work, the parameters of the worker's activity and his processing time were evaluated from the social point of view. These two parameters can only answer the question of the worker's continuous employment; there remain limitations from the worker's well-being point of view, raising the question of his psycho-physical state. When evaluating the ecological justification of the collaborative workplace, we presented the importance of the process scrap rate parameter, where we can conclude that the worker incurs greater losses, not only in the form of process waste, but also in the form of time spent producing scrap products and, as a result, a lower final efficiency. Despite the importance of the social and environmental perspectives of sustainable manufacturing, economic justification still plays a key role in the adoption of new technologies. The results show that the economic perspective depends on the initial investment costs of collaborative robots, which can be lower than those of fully automated robotic cells. Meanwhile, there is the question of the price of labor, which can differ considerably even within developed countries (for example, between EU countries).
Therefore, we still see companies looking for markets where labor is cheaper. In contrast to previous research [2], this paper focuses on the study of an individual collaborative workplace using the Siemens Tecnomatix simulation environment. The results enable the evaluation of collaborative workplaces from a sustainable manufacturing point of view before investing in such a workplace. Using the presented approach, companies can determine the advisability of introducing collaborative workplaces into their existing or newly planned manufacturing systems. In future research, we will focus on the study of social aspects related to the impact of the collaborative robot on the worker's psycho-physical state, using an ergonomically appropriate workplace design that will allow the long-term health sustainability of the worker during repetitive work with a collaborative robot. In examining the sustainable justification of collaborative workplaces, it would be useful to extend the study of their impact to the entire production system.

Acknowledgement. The authors gratefully acknowledge the support of the Slovenian Research Agency (ARRS), Research Core Funding No. P2-0190.
References

1. Rosin, F., Forget, P., Lamouri, S., Pellerin, R.: Impact of Industry 4.0 on decision-making in an operational context. Adv. Produc. Eng. Manag. 16(4), 500–514 (2021)
2. Ojstersek, R., Javernik, A., Buchmeister, B.: Importance of sustainable collaborative workplaces – simulation modelling approach. Int. J. Simul. Model. 21(4), 627–638 (2022)
3. Gualtieri, L., Palomba, I., Merati, F.A., Rauch, E., Vidoni, R.: Design of human-centered collaborative assembly workstations for the improvement of operators' physical ergonomics and production efficiency: a case study. Sustainability 12(9), 3606 (2020)
4. Pascual, A.I., Högberg, D., Lämkull, D., Luque, E.P., Syberfeldt, A., Hanson, L.: Optimization of productivity and worker well-being by using a multi-objective optimization framework. IISE Trans. Occup. Ergon. Human Factors 9(3–4), 143–153 (2021)
5. Gihleb, R., Giuntella, O., Stella, L., Wang, T.: Industrial robots, workers' safety, and health. Labour Econ. 78, 102205 (2022)
6. Himmelsbach, U.B., Wendt, T.M., Hangst, N., Gawron, P., Stiglmeier, L.: Human-machine differentiation in speed and separation monitoring for improved efficiency in human-robot collaboration. Sensors 21, 7144 (2021)
7. Mirzapour Al-e-Hashem, S.M.J., Baboli, A., Sadjadi, S.J., Aryanezhad, M.B.: A multi-objective stochastic production-distribution planning problem in an uncertain environment considering risk and workers productivity. Math. Prob. Eng. 2011, 406398 (2011)
8. Kanazawa, A., Kinugawa, J., Kosuge, K.: Adaptive motion planning for a collaborative robot based on prediction uncertainty to enhance human safety and work efficiency. IEEE Trans. Rob. 35(4), 817–832 (2019)
9. Chen, W., Hao, Y.F.: A combined service optimization and production control simulation system. Int. J. Simul. Model. 21(4), 684–695 (2022)
10. Ojstersek, R., Javernik, A., Buchmeister, B.: The impact of the collaborative workplace on the production system capacity: simulation modelling vs. real-world application approach. Adv. Produc. Eng. Manag. 16(4), 431–442 (2021)
11. Hu, M., Wang, H., Pan, X.: Multi-objective global optimum design of collaborative robots. Struct. Multi. Optim. 62(3), 1547–1561 (2020)
12. Realyvásquez-Vargas, A., Cecilia Arredondo-Soto, K., Luis García-Alcaraz, J., Yail Márquez-Lobato, B., Cruz-García, J.: Introduction and configuration of a collaborative robot in an assembly task as a means to decrease occupational risks and increase efficiency in a manufacturing company. Robot. Comput.-Integr. Manuf. 57, 315–328 (2019)
13. Meng, J.L.: Demand prediction and allocation optimization of manufacturing resources. Int. J. Simul. Model. 20(4), 790–801 (2021)
14. El Zaatari, S., Marei, M., Li, W., Usman, Z.: Cobot programming for collaborative industrial tasks: an overview. Robot. Auton. Syst. 116, 162–180 (2019)
15. Fager, P., Calzavara, M., Sgarbossa, F.: Modelling time efficiency of cobot-supported kit preparation. Int. J. Adv. Manuf. Technol. 106(5–6), 2227–2241 (2019). https://doi.org/10.1007/s00170-019-04679-x
16. Garetti, M., Taisch, M.: Sustainable manufacturing: trends and research challenges. Product. Plann. Control 23(2–3), 83–104 (2012). https://doi.org/10.1080/09537287.2011.591619
17. Joung, C.B., Carrell, J., Sarkar, P., Feng, S.C.: Categorization of indicators for sustainable manufacturing. Ecol. Ind. 24, 148–157 (2013)
18. Machado, C.G., Winroth, M.P., Ribeiro da Silva, E.H.D.: Sustainable manufacturing in Industry 4.0: an emerging research agenda. Int. J. Prod. Res. 58(5), 1462–1484 (2019)
19. Ojstersek, R., Buchmeister, B.: Simulation modeling approach for collaborative workplaces' assessment in sustainable manufacturing. Sustainability 12(10), 4103 (2020)
20. Wang, L., Mohammed, A., Wang, X.V., Schmidt, B.: Energy-efficient robot applications towards sustainable manufacturing. Int. J. Comput. Integr. Manuf. 31(8), 692–700 (2017)
21. Liu, Q., Liu, Z., Xu, W., Tang, Q., Zhou, Z., Pham, D.: Human-robot collaboration in disassembly for sustainable manufacturing. Int. J. Prod. Res. 57(12), 4027–4044 (2019)
Towards a Robotic Intervention for On-Land Archaeological Fieldwork in Prehistoric Sites

L'hermite Tom1, Cherlonneix Cyprien1, Paul-Eric Dossou1,2(B), and Laouenan Gaspard1,3
1 Icam, Site of Grand Paris Sud, 77127 Lieusaint, France
[email protected]
2 SPLOTT/AME/University of Gustave Eiffel, 77420 Champs-Sur-Marne, France 3 IRA2/IBISC Laboratory, University Paris-Saclay, Université d’Evry, 77447 Evry, France
Abstract. Archaeological activities lead to the discovery of ancient artifacts and vestiges. Some excavation operations are both difficult and repetitive: Industry 4.0 concepts such as artificial intelligence (AI) and advanced robotics, already exploited in manufacturing processes to increase performance, could help automate some stages of excavation. This paper deals with the integration of these concepts into the archaeological domain to address specific and tedious tasks. Indeed, archaeological sites are mostly difficult-to-access places such as open sites and caves, making excavation even more challenging, hence the need for robots. The Archaeological Cobotic Explorer (A.C.E.) presented in this paper is a robot that could work alongside humans during archaeological surveys. It would be a precise and untiring workforce, capable of locating and retrieving artifacts underground. This project aims to describe the appropriate Industry 4.0 concepts that could be exploited in this particular domain and to create a machine (A.C.E.), built with commercially available materials or replaceable 3D-printed pieces, capable of automating specific stages of the excavation process. Advanced computer-aided designs and functional prototypes were built and are presented in this paper.

Keywords: Artificial intelligence · Visual recognition · Advanced Robotics · Archaeological excavation · Mechanical engineering
1 Introduction

Archaeology is confronted with various complexities, such as the geographical location of vestiges, the random nature of discoveries, temperature or humidity levels in a region, the problem of sediment evacuation, and the traceability of discovered items. Despite tedious and time-consuming tasks, archaeology is mainly conducted by humans: ergonomic concepts could aid archaeologists in their work. Industry 4.0 concepts and sustainability aspects contribute to a company’s digital transformation by introducing new technologies into its manufacturing processes to eliminate non-value-added activities and optimize value-added ones. In an industrial context, these concepts have proven their efficiency when it comes to flexibilizing the capacities of a production line. For instance, © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 79–90, 2024. https://doi.org/10.1007/978-3-031-38241-3_10
collaborative robots (“cobots”) are designed to assist humans and work with them [1]. They are equipped with detection mechanisms, pressure sensors and cameras for security purposes, and they automate strenuous or non-value-added tasks. To sum up, while traditional robotics’ reliability lies in redundancy, repetitiveness and predictability [2], new technologies already used in industrial contexts can help take a step further and automate more complex tasks that require higher flexibility in the peculiar fieldwork of archaeologists. This paper focuses on the exploitation of new technologies such as artificial intelligence, advanced robotics, big data analytics, the Internet of Things and 3D printing to elaborate the concepts needed to create a specific robot designed to assist archaeologists in their repetitive tasks. It aims to address the operational difficulties archaeologists encounter by prototyping ways of bypassing them with automated systems, especially by assisting surveys in difficult-to-access areas and during excavations. The autonomous nature of this robot would make it capable of operating at night in the case of long excavations, thus making it an additional workforce that could potentially work twice as long as a human. Moreover, it would not be prone to human error: it should record every action with great precision, a crucial asset since archaeology is a destructive activity. Following a literature review, the main concepts and designs of the robot are presented. Discussions on future developments are then outlined, along with the artificial intelligence tools that could be implemented in the robot to fulfill the various tasks.
2 Literature Review

Artificial intelligence, machine learning and remotely controlled autonomous rovers have an increasing impact on archaeological research. Current applications focus on issues in landscape archaeology, as well as aerial and underwater vehicles equipped for on-land and underwater remote sensing, or 3D and Lidar-based scanning of monuments and settlement sites. Some can also analyze sunken settlement structures and shipwrecks. Yet, no archaeological research project has automated the excavation of prehistoric sites so far. Although archaeological robots for on-land surveys do not exist yet, similar machines have already been built for other environments and purposes and have inspired the concept presented in this paper: ROVINA [3] is capable of investigating hidden chambers, passages and sanctuaries of monuments such as the Egyptian pyramids or the catacombs of Rome. Excavating machines have also been built to operate on construction sites [4] and adapt to any work condition. What makes the Archaeological Cobotic Explorer (A.C.E.) unique are its possible deployment locations and the use archaeologists will have for it.

2.1 Explorers and Excavators

Backhoes [5] are the most common excavators. They can even be used for underwater excavation. Underwater archaeological sites cannot be easily accessed by humans due to the pressure that increases with depth. Instead, specific machines are designed to dive and fetch the items with robotic arms. Whereas the ARROWS project [6] consisted of creating a subaquatic vehicle meant to reduce the cost of archaeological research, Remora 2000, a small submarine capable of carrying
two passengers and diving 610 m below the surface, is used to explore deep underwater grounds for up to ten hours. Europe prepared the ExoMars mission in 2022 to investigate possible past life forms on Mars, and built its own rover, Rosalind Franklin, to carry it out [7]. The National Aeronautics and Space Administration has already designed machines capable of exploring foreign planets: the Curiosity rover [8] and its more recent cousin, Perseverance. Since (A.C.E.) does not have to move on horizontal surfaces, the very same mechanism would be useless. However, since it needs to dive into trenches, similar mechanisms can be implemented when it comes to controlling its position within the trench. Bio-inspiration is the process of developing machines by analyzing animal species such as insects and bugs [9]. Spider-like movement could meet the requirements of this robot: six to eight legs would enable precise movements and specific contortions, for instance. However, simpler solutions exist to control positioning, not to mention that this kind of mechanism would hardly allow fully retractable leg designs. The French company Aspirloc, specialized in civil engineering, has designed a vacuum-excavator that fits into pipes and evacuates rubble and other debris with its vacuum. Movement within the pipes is made easier by its crawlers. Similar maintenance robots also exist for other applications [10]. (A.C.E.) will exploit the advantages of these mechanisms.

2.2 New Technologies

Artificial intelligence will be implemented in the machine to automate object recognition during fieldwork. In industrial contexts, applications of computer vision are common and now benefit from a substantial scientific background that can be transposed to archaeology: the use of deep learning, and especially convolutional neural networks (CNNs), has become a standard method [11].
Since on-field discoveries may not match known data, a database can be created to initiate the supervised training of a deep learning model and exploited to compare new finds with already encountered situations, which will improve precision and reliability [12]: such a database will be implemented in (A.C.E.). During the training process, active learning [13] with archaeologists can help reach high classification performance more quickly. Additionally, semantic segmentation with attention maps [14, 15] can both improve the model’s performance and make it more understandable by highlighting the specific areas of the images the model focuses on to make predictions: since artificial intelligence is hardly within an archaeologist’s area of expertise, interpretable models [16] can be essential to ensure usability and the smooth training of the system. The database should mainly include photos of common artifacts, which will be provided by archaeologists. For instance, Artificial Neural Networks (ANNs) [17] could be used to classify artifacts according to their chemical composition. During excavation fieldwork, chemical tests are hardly easy to run. Thus, a deep learning method based on a sample of labeled images could generate a simple classification prior to further analysis. Unsupervised Data Augmentation (UDA) [18] can help improve performance with limited training samples by artificially filling up the database. Similarly, self-supervised learning (SSL) techniques such as contrastive learning [19] can improve a model’s performance despite having few labeled data points. Collaborative robotics (cobotics) is usually implemented in industrial contexts, for it allows the automation of repetitive tasks while still being able to benefit from human
adaptability, with the use of the Robot Operating System (ROS) for programming [20]. Thus, a production process can be flexibilized, making it easier to adapt to continuously varying demands. A lean automation approach [21] shows that applying Roozenburg’s engineering design cycle [22] to cobotic applications is possible. In the context of archaeological research, since what the fieldwork produces cannot be accurately predicted, human adaptability becomes a strong asset in the process: designing a machine capable of operating alongside archaeologists in critical parts of the process makes it possible to handle the variability of the fieldwork results.

2.3 Organizational Concepts

This section discusses the methods and tools that could be used to manage the project and ensure its success through an efficient process. Lean manufacturing is a methodology intended to reduce waste in manufacturing processes [23]. It focuses on value-added activities and reduces non-value-added ones [24]. It is effective at improving company performance, and its concepts can be transposed to project management. Lean thinking has been developed to apply the same approach in other areas [25] such as product development. Design thinking is an innovative, human-centered approach used to develop new designs, products or services, and a toolbox meant to assist product development [26]. An approach based on five steps (Empathize, Define, Ideate, Prototype and Test) has been developed by the Stanford Design School [27]. As it is an iterative process that constantly focuses on user expectations, this methodology is well suited to (A.C.E.)’s development and would guide its mechanical design. However, as previously mentioned, the development and exploitation of this robot require new technologies and the development of adapted software to manage both human and robot information in this collaborative system.
Other useful tools are the agile methods [28]: they promise to deliver consistent business value by adapting and improving both the product and the work process incrementally and empirically. They are used in various areas such as agile business models, enterprise agility, organizational agility, agile manufacturing, agile supply chains, and agile software development [29]. One of the most important agile methods is Scrum, a framework meant to address complex adaptive problems while productively delivering creative products of the highest possible value [30]. Product development is described as an iterative, cyclic process with continuous validation [31], and requirements are continuously integrated. Indeed, as explained in reference [32], the use of agile methods such as Scrum in the development of physical products is advantageous: it improves communication, responsiveness, flexibility, transparency, and commitment/motivation. These methods and tools are combined to define the methodology used to develop (A.C.E.) and manage the project.
3 Concepts and Methods

3.1 Global Approach

The methodology used in the project is presented below. It results from a combination of lean design, design thinking, and the agile Scrum method (Fig. 1).
Fig. 1. Agile Lean design thinking Methodology
Starting the project from scratch meant innovative solutions had to be devised, for it is a one-of-a-kind machine with specific requirements. Given the variety of tasks it has to achieve, it seemed much simpler to build two separate contraptions: the first, referred to as the Explorer, is exclusively designed to go down trenches, dig and collect the items; the other, referred to as the processing unit owing to its functions, is meant to clean, bag and label the found objects with QR codes, thus preparing them for further laboratory analysis. The main reason for this dual design is practicality: it reduces the weight of the Explorer by having it perform fewer tasks, and cutting down the number of moving parts is crucial for a functioning, reliable design. This section describes the process that has been applied to devise solutions for the overall mechanism: first, emphasis is placed on the Explorer and the ways of handling its movements within the trench with maximum precision, considering environmental constraints, followed by a description of the processing unit, especially the sieving mechanism and how it manipulates archaeological artifacts.

3.2 The Excavation Robot Concepts

The excavation robot has been defined as a human-aided system integrating all the concepts necessary for the creation, design, and elaboration phases. The design includes an Explorer, the conceptualization of a processing unit, the use of new technologies such as artificial intelligence or 3D printing, and the elaboration of torque and gear systems. (A.C.E.)’s AI is handled by a Raspberry Pi device, for it offers the necessary features for this robot. This intelligent system uses deep learning to detect and recognize finds. A module meant to contain all the data required to increase the quality of the detection system is being designed. The Explorer handles motors, cameras and images by sending the collected data to a specific file.
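The format of that data file is not specified in the paper; as a hedged sketch, each detection could be appended as one JSON line, so that every action of the Explorer remains traceable afterwards. All field names below are illustrative assumptions, not the actual record format.

```python
# Hypothetical sketch of the Explorer's data logging: each detection
# (camera frame id, predicted class, confidence, digging depth) is
# appended as one JSON line to a log file for later traceability.
import json

def log_find(path, frame_id, label, confidence, depth_mm):
    """Append one detection record to the log file (JSON Lines format)."""
    record = {
        "frame": frame_id,
        "label": label,
        "confidence": confidence,
        "depth_mm": depth_mm,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def read_finds(path):
    """Read all detection records back as a list of dictionaries."""
    with open(path) as f:
        return [json.loads(line) for line in f]
```

An append-only text format of this kind has the advantage that a partially written file (e.g. after a power cut in the field) still yields all earlier records intact.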
Several versions of the machine have been imagined, inspired by existing machines such as the Gargantua robot moving along the Z axis [33]. Due to its rather small size (approximately forty centimeters high and thirty centimeters wide), the Explorer can go down trenches thanks to a hoist that remains at ground level. As for lateral movements, it is divided into two separate rotating parts: the upper one contains four telescopic arms designed to handle its position within the trench, and the lower part contains both digging mechanisms. The first would gradually dig the trench whereas the second would retrieve the items with a specific clamp that has not been designed yet. Once collected, the items would be taken to the surface by the hoist and passed on to the processing unit. The hoist itself, meant to remain at the surface and guide the Explorer at all times, could be designed based on 3D-printer mechanisms to ensure precision [34]. As the main concern was space, one of the very first ideas was to add telescopic arms to the Explorer [35]. Pressure against the walls would have been applied using one spring per arm, each fixed within the arm itself. However, asperities on the walls had to be taken into account to ensure smooth movement: four wheels with suspensions [36] would be created by placing springs between the wheels and the upper plate. The solution actually being developed is described as follows (see Fig. 2).
Fig. 2. Explorer overview
The one issue with bone-like arms was the vertical movement of the wheels as the arms deploy and retract. This can easily be overcome by attaching the bone structure to a slide at the very end of the arms: the arms could deploy naturally without causing the wheels to move vertically at all, for they would be fixed to the slide. Also, fully deployed arms could possibly be impossible to retract: an ascender could be fixed to the threaded rod, and the stepper motors programmed to stop at that precise point to prevent both impossible movements and overheating. A circuit mainly composed of logic gates and limit sensors, such as the one described further below, could be wired to the motors to kill power when the arms are fully deployed. The processing unit (Fig. 3) is a one-meter-long and fifty-centimeter-wide contraption divided into three separate modules, each designed to fulfill a specific task: sieving [37], cleaning, and bagging/stamping the artifacts one at
a time. Special attention has to be paid to handling the items without damaging them at all [38]. The processing unit will be powered by a rechargeable battery: voltage will be reduced through voltage dividers.
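The voltage-reduction step mentioned above follows the standard two-resistor divider relation Vout = Vin · R2 / (R1 + R2); the component values below are illustrative assumptions, not the prototype's actual sizing.

```python
# Standard resistive voltage divider: Vout = Vin * R2 / (R1 + R2),
# with R2 the resistor across the output. Values are illustrative.

def divider_vout(vin, r1, r2):
    """Output voltage of a two-resistor divider."""
    return vin * r2 / (r1 + r2)

# e.g. deriving a 5 V reference from a 12 V battery rail
# with (hypothetical) R1 = 7 kOhm and R2 = 5 kOhm:
print(divider_vout(12.0, 7000.0, 5000.0))  # 5.0
```

Note that a plain divider only suits low-current sensing or reference voltages; supply rails for motors or the Raspberry Pi would normally use a regulator instead.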
Fig. 3. Processing unit overview
Cleaning the items involves separating earth from precious materials: this is accomplished by the sieving module and its grid. The design of the bagging system resembles a plastic knife-welder combined with a folding mechanism and rolls that stretch the plastic before folding it. 3D printing, as an Industry 4.0 concept, has been used to develop the prototype. It offers many solutions [39] to the various problems encountered during the design process: innovative mechanisms can be built and implemented in the machine. So far, PLA plastic filament has been used, for it is a cheap material, which makes it suitable for repeated prototyping and testing: although building such a machine is expensive, additive fabrication can help cut down costs over time. Furthermore, most of these pieces can be robust enough to withstand the forces applied to them if printed with sufficient infill. Also, the motors used for this robot have so far been sized to hold a heavier load than that which is actually applied to them. The artifact manipulation system implemented in the device consists of two robotic arms, located on each side of the processing unit. These arms are able to deploy and retract according to the size of the items, which are protected by both the foam [40] fixed to the arms and the force sensor located on the plates, since these are the first “hard” parts in contact with the items. The sensors evaluate the force applied to the items, thus ensuring it does not exceed a certain value. The foam also adds mechanical friction, which helps prevent the objects from gliding and falling.
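The force-limited grasp described above can be sketched as a simple closed loop: the arms close in small steps and stop as soon as the plate sensor reports the limit. The 2.0 N threshold, step size and sensor model below are assumptions for illustration, not measured values from the prototype.

```python
# Illustrative force-limited grip loop: advance the arms until the force
# sensor reaches the safety threshold or full travel is covered.

def close_gripper(read_force_n, max_force_n=2.0, step_mm=0.5, travel_mm=40.0):
    """Return the closure (mm) at which the arms stopped.
    `read_force_n` maps the current closure (mm) to a sensor reading (N)."""
    pos = 0.0
    while pos < travel_mm:
        if read_force_n(pos) >= max_force_n:
            break  # contact is firm enough: stop to protect the artifact
        pos += step_mm
    return pos

# Fake sensor: no contact for the first 10 mm, then force rises linearly.
fake_sensor = lambda pos: max(0.0, (pos - 10.0) * 0.4)
print(close_gripper(fake_sensor))  # stops shortly after contact, at 15.0 mm
```

Sampling the sensor between small position increments, rather than commanding a full closure, is what keeps the peak force bounded regardless of the item's size.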
4 Experimental Results

Theoretical results obtained through calculation and computer simulation validate the concepts and formalisms developed above, as well as the design analysis [41]. So far, the focus has been placed on the sieving module and the foam arms along with their mobility mechanism: functional prototypes have been built, and although the final version of the robot should include metal pieces where possible to increase resistance and durability, 3D printing brought surprisingly good results. The sieving mechanism has been developed with computer simulations to predict the linear speed of the grid, actuated by a slider-crank mechanism (Fig. 5, a) to generate translation. A rack-and-pinion mechanism (Fig. 5, b) lifts it so that the arms can grab the artifact. Their movements have to be slow enough not to damage the artifacts: this could be handled with a rotary encoder, a visual sensor [42], or a simple mechanism using a photoresistor and a laser beam on either side of the rack, pierced with three or four holes. To get the items to turn and change their angular position, they first have to be lifted above ground level. Many systems could have been used to move the arms vertically, a belt or a chain for instance, yet a threaded rod attached to a stepper motor (Figs. 4 and 5 c, d and e) seemed to be the most appropriate solution, for precision reasons. The overall mechanism must be approximately one meter long to make sure the arms can seize the items on the three different modules.
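The grid's linear motion predicted by simulation follows standard slider-crank kinematics: for crank radius r, rod length l and crank angle theta, the slider position is x(theta) = r·cos(theta) + sqrt(l² − r²·sin²(theta)). A minimal sketch, with crank and rod dimensions chosen purely for illustration (they are not the prototype's):

```python
# Slider-crank kinematics sketch: slider position and linear speed of the
# sieving grid for a given crank angle. r and l are illustrative values.
import math

def slider_position(theta, r=0.02, l=0.08):
    """Distance of the slider from the crank axis (metres)."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def slider_speed(theta, omega, r=0.02, l=0.08, h=1e-6):
    """Linear speed (m/s) for crank angular speed omega (rad/s),
    via central-difference differentiation of x(theta)."""
    dxdtheta = (slider_position(theta + h, r, l)
                - slider_position(theta - h, r, l)) / (2 * h)
    return dxdtheta * omega

# The stroke equals twice the crank radius:
stroke = slider_position(0.0) - slider_position(math.pi)
print(round(stroke, 6))  # 0.04
```

This is the same relation a simulation tool evaluates internally; the grid's peak speed is then simply the maximum of |slider_speed| over one crank revolution, which constrains the motor speed needed to keep the sieving motion gentle.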
Fig. 4. Grabbing archaeological items: foam arms movement mechanism
Since the items (A.C.E.) will manipulate are, for the most part, extremely fragile, an overheating protection system is being implemented to protect both the motors and the artifacts by shutting power down according to the force applied. An optocoupler isolates the mechanical part of the design from the circuit, thus ensuring protection should mechanical problems occur. The signal is processed through logic gates, enabling or disabling the motor through transistors according to switch states and Raspberry Pi inputs. These transistors activate the relays that control the motor’s rotation direction. The force sensor is handled by the Raspberry Pi and only alters its inputs.
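The gate network described above can be written as a truth function: the motor is powered only while the Raspberry Pi asserts its enable line, no limit switch has tripped, and the measured force stays below the safety threshold. Signal names and the 2.0 N threshold are illustrative assumptions, not taken from the actual circuit.

```python
# Boolean sketch of the protection logic: an AND of the enable input with
# the negated fault conditions, as a logic-gate network would compute it.
FORCE_LIMIT_N = 2.0  # assumed safety threshold

def motor_power(pi_enable, limit_switch_tripped, force_n):
    """True while the motor may be powered (all safety conditions hold)."""
    return pi_enable and not limit_switch_tripped and force_n < FORCE_LIMIT_N

def relay_direction(pi_direction_bit):
    """Relay selection for rotation direction, driven by one Pi output."""
    return "forward" if pi_direction_bit else "reverse"

print(motor_power(True, False, 0.5))  # True: arm moving normally
print(motor_power(True, True, 0.5))   # False: fully deployed, power cut
print(motor_power(True, False, 3.0))  # False: force too high, artifact protected
```

Expressing the circuit as a truth function like this also makes it easy to exhaustively test all input combinations before committing to wired logic.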
Fig. 5. Processing unit: sieving module (a: slider-crank mechanism. b: rack-and-pinion mechanism. c: overview) and foam arm mechanism (d, e) prototypes
5 Conclusion and Outlook

This project aims to automate several stages of archaeological excavations, including reconnaissance, digging and cleaning, in order to relieve archaeologists in their daily activities. The Archaeological Cobotic Explorer (A.C.E.) ensures secure movements in an archaeological environment, from excavation to preparation for laboratory analysis. Its Explorer scans trenches, digs and retrieves items buried underground, before taking them back to the surface to the processing unit, which prepares them for laboratory analysis. The device is also capable of sorting artifacts by referring to its own database, which will be updated over time. Since radars capable of detecting items underground are hardly affordable, (A.C.E.) will dig about five millimeters at a time to avoid damaging the artifacts and will use computer vision to classify them. So far, prototypes of the processing unit, especially its robotic grippers and the sieving module, have been built. The arms are designed to securely manipulate fragile pieces, using foam, at low speed, and reasonably close to ground level. Security measures have also been designed to avoid motor overheating. Even though the main ideas for each part of the design have been found and chosen, the exact systems for digging, retrieving items, moving the Explorer both vertically and horizontally, transferring the artifacts between the Explorer and the processing unit, cleaning, bagging and QR code labeling have not yet been fully designed and prototyped. Future work should focus on improving the existing design of the arms, designing the digging mechanism and prototyping the rest before testing. Further development will implement a deep learning algorithm to address the issue of classifying artifacts in a supervised learning approach with limited labeled data samples.
In particular, data augmentation methods together with active learning should be used to ensure acceptable generalization performance. Additionally, the use of convolutional neural networks with attention maps should initiate the design of an explainable artificial intelligence
to address usability issues for the archaeologists. (A.C.E.)’s AI will be implemented along with its databases, as well as the QR code generation system, the human-machine interfaces, and potential 3D imagery for recording the positions of the artifacts. Handling the agents in the information system should be done with open-source tools such as the Robot Operating System (ROS). Acknowledgement. This project is supported by Icam, a French engineering school, in collaboration with Ateneo, a university located in the Philippines. Special thanks to Riczar B. Fuentes and Alfred F. Pawlik, archaeologists who have contributed to the specification.
References

1. Lefranc, G., Lopez-Juarez, I., Osorio-Comparán, R., Peña-Cabrera, M.: Impact of Cobots on automation. Procedia Comput. Sci. 214, 71–78 (2022) 2. Zbytniewska-Mégret, M., et al.: Reliability, validity and clinical usability of a robotic assessment of finger proprioception in persons with multiple sclerosis. Multiple Sclerosis Related Disord. 70, 104521 (2023) 3. Di Stefano, M., Salonia, P., Ventura, C.: Mapping and digitizing heritage sites: ROVINA project for programmed conservation. Procedia – Soc. Behav. Sci. 223, 944–951 (2016) 4. Gharbia, M., Chang-Richards, A., Lu, Y., Zhong, R.Y., Li, H.: Robotic technologies for on-site building construction: a systematic review. J. Build. Eng. 32, 101584 (2020) 5. Yin, G., Fuying, H., Li, Z., Ling, J.: Workspace description and simulation of a backhoe device for hydraulic excavators. Autom. Constr. 119, 103325 (2022) 6. Allotta, B.: The ARROWS project: adapting and developing robotics technologies for underwater archaeology. IFAC-PapersOnLine 48, 194–199 (2015) 7. Brossier, J., et al.: Ma_MISS team, constraining the spectral behavior of the clay-bearing outcrops in Oxia Planum, the landing site for ExoMars “Rosalind Franklin” rover. Icarus 386, 115114 (2022) 8. Rampe, E.B.: The MSL Science Team, Mineralogy and geochemistry of sedimentary rocks and eolian sediments in Gale crater, Mars: a review after six Earth years of exploration with Curiosity. Geochemistry 80, 125605 (2020) 9. Western, A., Haghshenas-Jaryani, M., Hassanalian, M.: Golden wheel spider-inspired rolling robots for planetary exploration. Acta Astronaut. 204, 34–48 (2023) 10. Acar, Ö., Yaşar, Ç.F.: Autonomous climbing robot for tank inspection. Procedia Comput. Sci. 158, 376–381 (2019) 11. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016) 12. Aivaliotis, P., Zampetis, A., Michalos, G., Makris, S.: A machine learning approach for visual recognition of complex parts in robotic manipulation. Procedia Manuf.
11, 423–430 (2017) 13. Wu, J., et al.: Multi-label active learning algorithms for image classification: overview and future promise. ACM Comput. Surv. 53, 1–35 (2020) 14. Pu, T., Sun, M., Wu, H., Chen, T., Tian, L., Lin, L.: Semantic representation and dependency learning for multi-label image recognition. Neurocomputing 526, 121–130 (2023) 15. Mankodiya, H., Jadav, D., Gupta, R., Tanwar, S., Hong, W.-C., Sharma, R.: OD-XAI: explainable AI-based semantic object detection for autonomous vehicles. Appl. Sci. 12, 5310 (2022) 16. Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., Yang, G.-Z.: XAI: explainable artificial intelligence. Sci. Robot. 4(37), eaay7120 (2019)
17. Barone, G., Mazzoleni, P., Spagnolo, G.V., Raneri, S.: Artificial neural network for the provenance study of archaeological ceramics using clay sediment database. J. Cult. Heritage 38, 147–157 (2019) 18. Xie, Q., Dai, Z., Hovy, E.: Unsupervised data augmentation for consistency training. Adv. Neural Inform. Process. Syst. 33, 6256–6268 (2020) 19. Chen, T., Kornblith, S., Norouzi, M.: A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning, pp. 1597–1607 (2020) 20. Quigley, M., Conley, K., Gerkey, B.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software, p. 5 (2009) 21. Malik, A.A., Bilberg, A.: A framework to implement collaborative robots in manual assembly: a lean automation approach. In: Katalinic, B. (ed.) Proceedings of the 28th DAAAM International Symposium. DAAAM International, Vienna, Austria (2017). ISBN 978-3-90273411-2, ISSN 1726-9679 22. Roozenburg, N.F.M., Cross, N.G.: Models of the design process: integrating across the disciplines. Design Stud. 12(4), 215–220 (1991) 23. Ohno, T.: Toyota Production System: Beyond Large-Scale Production. Productivity Press, New York (1988) 24. Kluge, S., Rau, A., Westkämper, E.: Type Toyota management systems (MSTT) of small and medium-sized enterprises in mechanical and electrical industry. In: Vallespir, B., Alix, T. (eds.) APMS 2009. IAICT, vol. 338, pp. 97–104. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-16358-6_13 25. Marodin, G., Frank, A.G., Tortorella, G.L., Netland, T.: Lean product development and lean manufacturing: testing moderation effects. Int. J. Prod. Econ. 203, 301–310 (2018) 26. Micheli, P., Wilner, S.J., Bhatti, S.H., Mura, M., Beverland, M.B.: Doing design thinking: conceptual review, synthesis, and research agenda. J. Product Innovation Manag. 36, 124–148 (2018) 27. Plattner, H.: Bootcamp Bootleg. Design School Stanford, Palo Alto (2010) 28.
Abbas, N., Gravell, A.M., Wills, G.B.: Historical roots of agile methods: where did ‘agile thinking’ come from? In: Agile Processes in Software Engineering and Extreme Programming, pp. 94–103. Springer, Berlin, Heidelberg (2008) 29. Kettunen, P.: Adopting key lessons from agile manufacturing to agile software development – a comparative study. Technovation 29(6–7), 408–422 (2009) 30. Schwaber, K.: Agile Project Management with Scrum. Microsoft Press, Redmond, WA, USA (2004) 31. Takeuchi, H., Nonaka, I.: The new product development game: stop running the relay race and take up rugby. In: Harvard Business Review, pp. 137–146 (1986) 32. Gabriel, S., Niewoehner, N., Asmar, L., Kuhn, A., Dumitrescu, R.: Integration of agile practices in the product development process of intelligent technical systems. Procedia CIRP 100, 427–432 (2021) 33. Tang, H.B., Han, Y., Fu, H., Xu, B.G.: Mathematical modeling of linearly-elastic non-prestrained cables based on a local reference frame. Appl. Math. Modell. 91, 695–708 (2021) 34. Hussain, G., et al.: Design and development of a lightweight SLS 3D printer with a controlled heating mechanism: Part A. Int. J. Lightweight Mater. Manuf. 2(4), 373–378 (2019) 35. Mayorova, V.I., Shcheglov, G.A., Stognii, M.V.: Analysis of the space debris objects nozzle capture dynamic processed by a telescopic robotic arm. Acta Astronautica 187, 259–270 (2021) 36. Binorkar, V.A., Dorlikar, P.V.: Synthesis of new suspension mechanisms for two-wheeler vehicles. Mater. Today: Proc. 77, 711–716 (2023) 37. Kurnia, G., Yulianto, B., Jamari, J., Bayuseno, A.P.: Evaluation in conceptual design of human powered sand sieving machine. E3S Web Conf. 125, 03001 (2019)
38. Li, Z., Hsu, P., Sastry, S.: Grasping and coordinated manipulation by a multifingered robot hand. Int. J. Robot. Res. 8(4), 33–50 (1989). https://doi.org/10.1177/027836498900800402 39. Ellery, A.: Notes on extraterrestrial applications of 3D-printing with regard to self-replicating machines. In: 2015 IEEE International Conference on Automation Science and Engineering (CASE) (2015) 40. Badiche, X., et al.: Mechanical properties and non-homogeneous deformation of open-cell nickel foams: application of the mechanics of cellular solids and of porous materials. Mater. Sci. Eng.: A 289(1–2), 276–288 (2000) 41. Wang, W., Xiong, Y., Zi, B., Qian, S., Wang, Z., Zhu, W.: Design, analysis and experiment of a passively adaptive underactuated robotic hand with linkage-slider and rack-pinion mechanisms. Mech. Mach. Theory 155, 104092 (2021) 42. Ko, D.-K., Lee, K.-W., Lee, D.H., Lim, S.-C.: Vision-based interaction force estimation for robot grip motion without tactile/force sensor. Expert Syst. Appl. 211, 118441 (2023)
UWB-Based Indoor Navigation in a Flexible Manufacturing System Using a Custom Quadrotor UAV
Petros Savvakis1 , George-Christopher Vosniakos1(B) , Emmanuel Stathatos1 , Axel Debar-Monclair3 , Marek Chodnicki2 , and Panorios Benardos1
1 National Technical University of Athens, Heroon Politehniou 9, 15773 Athens, Greece
[email protected]
2 Gdansk University of Technology, Narutowicza Str. 11/12, 80-233 Gdansk, Poland 3 ENSMM, 26, rue de l’épitaphe, 25030 Besancon, France
Abstract. A novel solution for indoor navigation of a transportation drone in flexible manufacturing is presented in this paper. To address the challenges of accurate and robust drone navigation in occluded environments, an ultra-wideband (UWB) navigation system has been integrated with a commercially available open-source control platform. The system offers high accuracy (±20 mm), low power consumption, resistance to electronic interference, and support for automatic navigation. UWB technology has not been applied to drone navigation in flexible manufacturing before. Acceptable navigation accuracy was demonstrated in preliminary testing, which is expected to have significant implications for the efficiency and safety of manufacturing operations. Keywords: Indoor Navigation · Unmanned Aerial Vehicle · Ultra-Wideband
1 Introduction

Unmanned Aerial Vehicles (UAVs) are an attractive choice for monitoring tasks in production, inspection and inventory [1], the payload being the weight of a mini camera. By contrast, UAVs have scarcely been proposed as transporters of parts and tools in factories due to the associated high payload. In any case, indoor navigation in factories is a major burden, since the commonly adopted global positioning system (GPS) for UAV tracking cannot function satisfactorily inside a factory. Onboard LiDARs (Light Detection and Ranging) were proposed as an alternative [2], but they are expensive and increase power consumption [3], affecting the already limited UAV flight time imposed by restricted battery capacity [4]. In recent years, UAV indoor navigation systems have been developed based on Inertial Measurement Units (IMUs), ultrasonic sensors, cameras with or without markers [5], ultra-wideband (UWB) sensors [6], or a tether [7]. This hardware is used by localization algorithms exploiting triangulation, computer vision, SLAM (Simultaneous Localization and Mapping), etc. [8]. UWB technology, in particular, has received increased attention for possible use in indoor robot navigation due to its high robustness and accuracy, low cost and low © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 91–98, 2024. https://doi.org/10.1007/978-3-031-38241-3_11
92
P. Savvakis et al.
power consumption [3]. UWB technology is also known for its practical immunity to most types of noise due to its short-range transmissions and the use of time-domain filtering techniques. Early applications with reduced accuracy have been reported [9], as has peer-to-peer relative localization of aerial vehicles [10]. A comparison of UWB technology with alternatives such as Wi-Fi, GPS and Bluetooth is provided in [11], along with insights into recent advancements in UWB positioning. Information on performance testing of UWB systems can be found in [12]. This paper reports on the suitability of UWB for automatic navigation of a custom quadrotor UAV that transports parts and tools between stations in a Flexible Manufacturing System (FMS). Section 2 summarizes the UAV design. Section 3 outlines considerations for integrating UWB with the UAV and Sect. 4 presents a proof of concept of navigating the UAV based on a commercially available UWB platform. Section 5 outlines conclusions and further developments that are under way.
2 UAV System Design The UAV developed is shown in Fig. 1. It consists of an Al7075-T6 frame, four motors with propellers, four ESCs (Electronic Speed Controllers), a battery, a charger, a flight controller (Pixhawk™ PX4) and a camera. Details are given in [13, 14]. The UAV measures 700 × 700 × 175 mm, weighs 5.9 kg and carries up to 2 kg. Parts and tools are transported in special ABS 3D-printed baskets (e.g. measuring 200 × 80 × 50 mm, 70 × 30 × 30 mm etc.), see Fig. 2(a).
Fig. 1. UAV general aspect
The basket is attached to the UAV frame by an electromagnetically operated latch that has been custom designed and manufactured from Al alloy (casing) and steel (ram), see Fig. 2(d). Loading and unloading operations take place at docking stations. Each docking station comprises a flat surface on which four male pads of trapezoidal cross-section are fixed in order to guide the corresponding female pads that constitute the feet of the UAV, see Fig. 2(c). The UAV charging receptacle makes contact with two charging plates on the docking surface, see Fig. 2(b).
UWB-Based Indoor Navigation
93
Fig. 2. UAV specifics (a) transportation basket (b) charging slider (c) docking station surface (d) gripper on UAV (e) UWB sensor on the UAV underside
3 UWB Localization Hardware Integration The UAV system’s block diagram is shown in Fig. 3 including the UWB subsystem, which was implemented by the Marvelmind™ Starter Set Super-MP-3D kit.
Fig. 3. UWB system hardware block diagram created by the authors
Ultra-Wideband (UWB) technology is based on the accurate measurement of the Time-of-Flight (ToF) of a signal between the receiver, which is typically a UWB sensor attached on the UAV, and the transmitters, which are ‘anchors’ located at a higher
elevation than the maximum permitted height of the UAV. Precise measurement of the ToF enables the calculation of the relative distance between the receiver and transmitter, thereby enabling the determination of the location of the UAV via one of the well-established triangulation algorithms [15]. Note that the UAV must have constant line of sight with at least three ‘anchor’ sensors in order to achieve triangulation. To determine the position of the UAV, multiple ultrasound signal pulses are sent from four different anchor sensors with 30 m coverage. The system can be upgraded to support up to 250 anchors, multiple UAVs and extended area coverage with automatic sub-mapping. Nominal precision is claimed to be ±20 mm. The sensors are wireless and are powered by 3.7 V rechargeable batteries with a capacity of 1000 mAh. The receiver sensor is mounted on the underside of the UAV, see Fig. 2(e). The system controller is housed in the same enclosure as a router/modem. Distances between sensors are typically calculated at 25 Hz. The router communicates with the computer where the User Interface is installed, enabling monitoring of the UAV path in real time. A tracking quality threshold feature adjusts the system to particular circumstances, e.g. to different noise levels in the factory, to distances between anchor sensors etc.
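To make the localization step concrete, the sketch below shows a generic least-squares multilateration solver under the assumptions stated above (known anchor positions and ToF-derived ranges to at least four anchors). It is an illustrative textbook formulation, not the vendor's proprietary algorithm:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position estimate from anchor positions and measured
    ranges. Subtracting the first sphere equation from the others removes
    the quadratic terms in the unknown position p:
        |p - a_i|^2 - |p - a_0|^2 = d_i^2 - d_0^2
    which is linear in p and solved below."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (a[0] - a[1:])
    b = (d[1:] ** 2 - d[0] ** 2
         + np.sum(a[0] ** 2) - np.sum(a[1:] ** 2, axis=1))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

Note that the anchors must not be coplanar in the dimension to be resolved; in practice this is why anchors are mounted at slightly different heights when the UAV's altitude must also be estimated.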
4 Navigation Using UWB: Proof of Concept Navigation provides reference inputs in terms of 3D points, path segments connecting them and motion commands along these segments, following which the UAV is controlled. Several controllers may be used in an open system such as Pixhawk™. However, the Pixhawk™ PX4 native controller was kept, as it is well established. The Pixhawk™ controller was configured according to standard procedures to achieve system identification and calibration. Manual navigation of the UAV was performed first in guided mode, with additional use of a GPS to double-check its ability to fly. In automatic navigation, contrary to human-guided navigation, the path to be followed has to be defined beforehand. This was done in Mission Planner™ within the ArduPilot™ open platform, which supports the following commands: (a) Waypoint, defining a straight-line path between successive waypoints (b) Spline waypoint, defining a spline path (c) Take-off, denoting vertical upwards flight (d) Land, denoting vertical downwards flight (e) Loiter time, i.e. hovering at the same point for a specific duration (f) Loiter turns, i.e. hovering in a circular path about a specific point (g) Return to launch. A rudimentary path is shown in Fig. 4. The waypoints are defined by reference to a digital earth map on which a plan view (image or drawing) of the factory layout served by the drone is overlaid. To do so, a standard procedure called georeferencing is used, defining in effect a transformation that achieves common scaling and alignment of the image and the map. Figure 4 presents such an overlay of the satellite image of the factory building and a CAD plan view of the machine layout on the ground floor of that building, measuring 52.5 × 21.5 m. Georeferencing was done in the QGIS™ open system. Plan-view coordinates of waypoints are defined by simply clicking on the map. The height of each waypoint needs to be entered manually, as read off photos or measured in the real environment.
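To illustrate the mission-file step, the sketch below writes a minimal take-off/waypoint/land sequence in the plain-text ‘QGC WPL 110’ format that Mission Planner™ reads and writes. The coordinate values reuse station data from Table 1, but the helper itself is illustrative and not part of the authors' toolchain:

```python
# MAV_CMD ids used in ArduPilot mission files (assumed subset).
MAV_CMD = {"WAYPOINT": 16, "LAND": 21, "TAKEOFF": 22}

def write_mission(path, home, steps):
    """Write a mission in 'QGC WPL 110' format.

    home:  (lat, lon, alt) of the home location (item 0, absolute frame 0).
    steps: list of (command, lat, lon, alt); altitudes relative (frame 3).
    Columns: index, current, frame, command, p1..p4, lat, lon, alt, autocontinue.
    """
    lines = ["QGC WPL 110",
             "0\t1\t0\t16\t0\t0\t0\t0\t%.8f\t%.8f\t%.2f\t1" % home]
    for i, (cmd, lat, lon, alt) in enumerate(steps, start=1):
        lines.append("%d\t0\t3\t%d\t0\t0\t0\t0\t%.8f\t%.8f\t%.2f\t1"
                     % (i, MAV_CMD[cmd], lat, lon, alt))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example: take off at station 1, fly to station 2, land (values from Table 1).
write_mission("mission.waypoints", (37.9795316, 23.7847868, 0.0), [
    ("TAKEOFF",  37.9795316, 23.7847868, 4.0),
    ("WAYPOINT", 37.9795279, 23.7848767, 4.0),
    ("LAND",     37.9795279, 23.7848767, 0.0),
])
```

Eight decimal places are kept for latitude and longitude, matching the resolution argument discussed in the text.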
Fig. 4. UAV path planning example on world map with overlaid factory layout
Table 1. Waypoint coordinates (ED1950 UTM system) for the path of Fig. 4 (F: flight, G: ground). Ground (nG) and flight (nF) points of each station share the same plan coordinates; Zrelative is 0.000 m for ground points and 4.000 m for flight points.

Point | Latitude (deg) | Longitude (deg) | Xabsolute (m) | Yabsolute (m) | Xrelative (m) | Yrelative (m)
1     | 37.9795316     | 23.7847868      | 744650.912    | 4207384.321   | 0.000         | 0.000
2     | 37.9795279     | 23.7848767      | 744658.821    | 4207384.147   | 7.909         | −0.174
3     | 37.9794365     | 23.7848727      | 744658.774    | 4207373.992   | 7.862         | −10.329
4     | 37.9795353     | 23.78511        | 744679.292    | 4207385.582   | 28.380        | 1.261
5     | 37.9795369     | 23.7849887      | 744668.630    | 4207385.440   | 17.718        | 1.119
Table 1 contains the waypoint coordinates used to define the path 1G-1F-2F-2G-2F-3F-3G-3F-4F-4G-4F-5F-5G shown in Fig. 4, where segment patterns nF-(n + 1)F denote straight-line flying, patterns nG-nF denote take-offs and patterns nF-nG denote landings (n integer from 1 to 5). The path and movements defined in this way are saved in a special file, which is fed to the UAV controller. The mission is then executed by the drone in autopiloting mode. During mission execution the User Interface of the UWB system shows the current position of the UAV as well as the path that has been followed so far, along with the anchor sensor positions. Note that all coordinates in Mission Planner™ are given in Latitude-Longitude-Altitude format (GPS coordinates). Latitude and longitude are given in degrees with a maximum of 8 decimal places. The corresponding resolution is λR = R·(π/180)·10⁻⁸, where R is the radius of the parallel at the given latitude; at the equator R = 6.371 × 10⁹ mm, thus λR ≈ 1.11 mm, which is acceptable for defining waypoints. In addition, waypoints, especially those corresponding to landing or take-off positions, can be checked and corrected by positioning the UAV at the real locations in the factory and comparing with the nominal ones shown in the UWB interface. In any case,
transformation between the cartesian frame associated with GPS coordinates and the frame established by the UWB system facilitates localization calculations. In order to test the accuracy of the UWB system, three docking positions were marked on the factory floor and the UWB receiver of the UAV was positioned successively on the respective marks. The coordinates of these positions were independently calculated by photogrammetry using the IMetric™ system, see Fig. 5(a) and Table 2. Note that readings vary due to noise etc. even for a static measurement, hence a standard deviation based on 100 readings was calculated. Additionally, the coordinates were read off the UWB system's user interface, see Fig. 5(b).
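The frame transformation and the coordinate-resolution estimate discussed above can be approximated, for factory-scale distances, with a simple equirectangular projection. This is a hedged sketch, not the authors' QGIS™ georeferencing procedure:

```python
import math

R_EARTH_MM = 6.371e9  # mean Earth radius in mm, as used in the text

def latlon_to_local(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Planar (x, y) position in metres of a GPS waypoint relative to a
    reference point, via an equirectangular approximation adequate over
    factory-scale distances (tens of metres)."""
    r_m = R_EARTH_MM / 1000.0
    x = r_m * math.radians(lon_deg - lon0_deg) * math.cos(math.radians(lat0_deg))
    y = r_m * math.radians(lat_deg - lat0_deg)
    return x, y

# Consistent with the text: one unit in the 8th decimal place of a degree
# corresponds to roughly 1.11 mm at the equator.
```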
Fig. 5. Static positioning accuracy test (a) photogrammetry snapshot (b) UWB GUI snapshot
Table 2. Docking position localization test results

Point pair | Real distance (mm) | UWB distance (mm): Average | UWB distance (mm): Std deviation | Difference %
1-2        | 2424               | 2429                       | 0.007                            | 0.20%
2-3        | 3957               | 3967                       | 0.011                            | 0.25%
3-1        | 3167               | 3176                       | 0.011                            | 0.29%
Subsequently, a circular path (center: (X,Y,Z), radius: 1600 mm) was programmed on a 6-axis Motoman HP20 robot and applied to the UWB receiver, which was firmly attached to the robot effector, see Fig. 6(a). The robot's accuracy and repeatability were better than 100 µm according to the manufacturer, thus the programmed circular path could safely be used as a reference for comparison with the UWB coordinates, see Fig. 6(b). This experiment was repeated 3 times and the results are shown in Table 3. The radius measured by the UWB system is about 8 mm shorter than the radius commanded to the robot. In addition, the quality of the circle fit to the measured data is good, as shown by the small residual error. Furthermore, the repeatability of the UWB system is excellent, as shown by the close values of the 3 radius measurements.
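The circle fitting used to evaluate the measured points can be reproduced with a standard algebraic (Kåsa) least-squares fit. The paper does not state which fitting method was actually used, so this is only an illustrative sketch:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: the circle
    (x - a)^2 + (y - b)^2 = r^2 is rewritten as x^2 + y^2 = 2ax + 2by + c
    with c = r^2 - a^2 - b^2, which is linear in (a, b, c)."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    # RMS radial residual, analogous to the residual error reported in Table 3.
    residual = np.sqrt(np.mean((np.hypot(x - a, y - b) - r) ** 2))
    return (a, b), r, residual
```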
Fig. 6. Continuous flying accuracy test (a) robotic layout (b) UWB GUI snapshot
Table 3. Circular path accuracy test results

                    | Robot   | UWB (mean)      | UWB-1   | UWB-2   | UWB-3
Radius (mm)         | 1652.49 | 1644.47 ± 0.43  | 1644.34 | 1644.95 | 1644.12
Residual error (mm) |         |                 | 0.012   | 0.013   | 0.013
5 Conclusions and Future Work The UWB system's advantages in indoor localization have been verified regarding accuracy, repeatability and ease of deployment. Limitations pertain to the need for constant line of sight. Integration of UWB with an open controller exploits standard interfaces that are usually already embedded in a reference architecture, thereby allowing interoperability of the various systems involved in flying a UAV. Testing performed on sample navigation points and a circular path validated the claimed positioning accuracy of ±20 mm, which is suitable for the transportation of parts and tools. Overall, this study contributes to the field of robotics and UAV technology by presenting a novel and effective solution to the challenge of indoor localization and navigation. As for future work, the focus may be on improving the automatic navigation system by developing standard paths consisting of standard segments and their respective navigation commands. This will involve creating fine-motion routines and collision avoidance routines, which can be obtained from existing robotics libraries that support proximity sensors or image-based object recognition. These improvements will be crucial for achieving accurate docking and avoiding obstacles. Acknowledgment. This work is partly funded by the European Commission, framework HORIZON-WIDERA-2021-ACCESS-03, project 101079398 ‘New Approach to Innovative Technologies in Manufacturing (NEPTUN)’. Financial support for the third author from the Gdańsk University of Technology by the DEC13/2021/IDUB/ll.1/AMERICIUM grant under the AMERICIUM – ‘Excellence Initiative – Research University’ program is gratefully acknowledged.
References
1. Hassanalian, M., Abdelkefi, A.: Classifications, applications, and design challenges of drones: a review. Prog. Aerosp. Sci. 91, 99–131 (2017)
2. Maghazei, O., Netland, T.: Drones in manufacturing: exploring opportunities for research and practice. J. Manuf. Technol. Manag. 31, 1237–1259 (2020)
3. Kabiri, M., Cimarelli, C., Bavle, H., Sanchez-Lopez, J.L., Voos, H.: A review of radio frequency based localisation for aerial and ground robots with 5G future perspectives. Sensors 23(1), 188 (2023)
4. Deja, M., Siemiątkowski, M.S., Vosniakos, G.C., Maltezos, G.: Opportunities and challenges for exploiting drones in agile manufacturing systems. Procedia Manuf. 51, 527–534 (2020)
5. Ekici, M., Seçkin, A.Ç., Özek, A., Karpuz, C.: Warehouse drone: indoor positioning and product counter with virtual fiducial markers. Drones 7, 3 (2022)
6. Macoir, N., et al.: UWB localization with battery-powered wireless backbone for drone-based inventory management. Sensors 19(3), 467 (2019)
7. Xiao, X., Fan, Y., Dufek, J., Murphy, R.: Indoor UAV localization using a tether. In: 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR 2018), pp. 1–6. IEEE (2018)
8. Pérez, M.C., Gualda, D., de Vicente, J., Villadangos, J.M., Ureña, J.: Review of UAV positioning in indoor environments and new proposal based on US measurements. In: CEUR Workshop Proceedings, pp. 267–274 (2019)
9. Kempke, B., Pannuto, P., Dutta, P.: Polypoint: guiding indoor quadrotors with ultra-wideband localization. In: HotWireless 2015 – Proceedings of the 2nd International Workshop on Hot Topics in Wireless, co-located with MobiCom 2015, pp. 16–20. ACM (2015)
10. Guler, S., Abdelkader, M., Shamma, J.S.: Peer-to-peer relative localization of aerial robots with ultrawideband sensors. IEEE Trans. Control Syst. Technol. 29, 1981–1996 (2021)
11. Shule, W., Almansa, C.M., Queralta, J.P., Zou, Z., Westerlund, T.: UWB-based localization for multi-UAV systems and collaborative heterogeneous multi-robot systems. Procedia Comput. Sci. 175, 357–364 (2020)
12. Volpi, A., Tebaldi, L., Matrella, G., Montanari, R., Bottani, E.: Low-cost UWB based real-time locating system: development, lab test, industrial implementation and economic assessment. Sensors 23, 1124 (2023)
13. Vosniakos, G.C., Maltezos, G.: A feasibility study on Unmanned Aerial Vehicle navigation and docking for materials transportation in manufacturing systems. IFAC-PapersOnLine 55, 970–975 (2022)
14. Vosniakos, G.C., Lekai, E., Maltezos, G.: On the mechanical design of a customized Unmanned Aerial Vehicle transporter for Flexible Manufacturing Systems. IFAC-PapersOnLine 55, 989–994 (2022)
15. Win, M.Z., Scholtz, R.A.: Ultra-wide bandwidth time-hopping spread-spectrum impulse radio for wireless multiple-access communications. IEEE Trans. Commun. 48, 679–689 (2000)
Real-Time Defect and Object Detection in Assembly Line: A Case for In-Line Quality Inspection
Milad Ashourpour1(B), Ghazaleh Azizpour2, and Kerstin Johansen1
1 Jönköping School of Engineering, 55111 Jönköping, Sweden
{Milad.Ashourpour,Kerstin.Johansen}@ju.se 2 Husqvarna AB, 56182 Huskvarna, Sweden [email protected]
Abstract. Identification of flawed assemblies and defective parts or products as early as possible is a daily struggle for manufacturing companies. With the ever-increasing complexity of assembly operations and manufacturing processes, alongside the need for shorter cycle times and higher production flexibility, companies cannot afford to check for quality issues only at the end of the line. In-line quality inspection needs to be considered a vital part of the process. This paper explores the use of a real-time automated solution for detection of assembly defects through the YOLOv8 (You Only Look Once) deep learning algorithm, which belongs to the class of convolutional neural networks (CNNs). The use cases of the algorithm extend to detection of multiple objects within a single image, accounting not only for defects and missing parts in an assembly operation but also for quality assurance of the process in both manual and automatic cells. An analysis of the YOLOv8 algorithm on an industrial object detection case study shows that the mean average precision (mAP) of the model on the test dataset, and consequently its overall performance, is extremely high. An implementation of this model would facilitate in-line quality inspection and streamline quality control tasks in complex assembly operations. Keywords: In-line Quality Inspection · Computer Vision · Deep Learning · YOLOv8 · Assembly Line
1 Introduction With the advancement of production technologies and the increasing expansion of automation into manufacturing and complex assembly operations, overseeing the quality aspects of production becomes a more critical task. Complying with customer requirements and possibly exceeding quality standards could arguably be one of the key determinants of industrial rivalry in today's competitive market. Staying on this course would, among other things, require companies to improve the final quality of their products, minimize their defective parts and/or products, and improve the efficiency of their production processes. In a typical assembly plant where hundreds of components are processed and assembled © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 99–106, 2024. https://doi.org/10.1007/978-3-031-38241-3_12
100
M. Ashourpour et al.
throughout consecutive working shifts, inspection of products, both during and after production, forms essential checkpoints that directly affect the performance of the plant. Performing quality control during production, instead of holding it off until the end of the line, makes recognition of faults and defects proactive rather than reactive. Different inspection tools produce varying amounts of data that can be used in a quality inspection process. The tool choice defines the accuracy, repeatability, and rate of quality inspection, which is probably one of the reasons why a majority of quality inspection studies opt for machine vision systems [1]. Indeed, using computer vision in inspection and quality control has been the most researched stage of manufacturing in the entire product lifecycle, with production processes and assembly operations trailing closely [2]. Machine vision plays a crucial role in industrial automation, with a wide range of applications in areas such as visual inspection, defect detection, part positioning and measurement, and product sorting and tracking [3].
2 Background The study conducted by [4] uses the YOLOv5 algorithm to train a model for real-time detection of defective boxes in a packaging application. By proposing an architecture based on the original YOLO algorithm and implementing it on an experimental case taken from the Kaggle platform, the authors obtain relatively high precision (81.8%). Another similar application can be found in [5], where the authors use YOLOv4 as the object detection model for validation of assembly operations and test it on two case studies in the automotive and electronics sectors. The results show that the model's performance could reach 83% accuracy in object detection. Integration of the model with an asset administration shell has also been explored in this study, which could be useful from an implementation point of view. In another study, the authors use YOLOv3 and YOLOv4 for visual inspection and defect detection [6], applying the algorithm to a rim manufacturing process. A total of 270 images containing four types of defects are initially compiled, while further augmentation through a generative adversarial network (GAN) and deep convolutional generative adversarial networks (DCGAN) on the Keras platform is implemented to expand the dataset for training the YOLO algorithm. The study results show YOLOv4 outperforming YOLOv3, with the augmentations further improving the performance of the model. In almost all these studies, the confusion matrix is obtained, which is itself made up of four principal elements: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Using these elements, the performance of the model can be calculated to obtain key metrics, i.e. Accuracy, Recall, and Precision [6]. Application of the YOLO algorithm, however, is not limited to the quality inspection of assembly operations. A different application of YOLO can be seen in [7], where the authors use YOLOv3 to detect four types of defect in a surface welding process.
The usage of 4850 images with a 9:1 ratio between training and test sets results in 75.5% accuracy, with slightly lower performance in the workshop, possibly due to lighting conditions. Another similar implementation of the algorithm for inspection of metal surfaces over a custom surface detection dataset shows a performance of 71% (mAP) [8]. The implementation can of course be extended into other fields, e.g. electronics, where quality inspection
depends heavily on skilled operators/engineers, and relying merely on automatic inspection machines (in contrast to machine vision powered by DL algorithms) could result in false positives. The work done by [9] is an example of such studies, where YOLOv2 is used to detect defects in printed circuit boards (PCBs), achieving 98% accuracy in detection through the utilization of almost 11000 images. A demonstration of YOLO in the same field, but for object detection, and more specifically detection and localization of capacitors in PCBs, is discussed in [10], where the results show an average detection time of 0.3 s with 11000 epochs. The current study applies the YOLO algorithm in its core analysis. It uses actual images taken from the assembly plant and trains on them for real-time object and defect detection. The main purpose here is to demonstrate the capabilities of an in-line quality inspection system trained by a DL algorithm in a real assembly environment. A performance evaluation of such models could potentially facilitate automation of quality inspection tasks throughout assembly in a range of operations:
• Positioning and alignment of parts/tools for precise assembly and processing [11]
• Identification and data capture from bar codes, QR codes, labels, etc. [12]
• Verification of correct assembly and labelling [13]
• Measurement of dimensions, shapes, angles, etc. [14]
• Detection of flawed or defective parts (cracks, stains, foreign objects, etc.) [15]
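The confusion-matrix metrics cited throughout Sect. 2 can be computed as in the following generic sketch (an illustration of the standard definitions, not tied to any particular study above):

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, Precision and Recall derived from confusion-matrix counts:
    True Positives, True Negatives, False Positives, False Negatives."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# e.g. 8 true detections, 2 false alarms, no misses, 90 true negatives:
print(detection_metrics(8, 90, 2, 0))  # (0.98, 0.8, 1.0)
```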
3 Method and Implementation YOLOv8, with a new backbone network, is the latest version of the YOLO [16] object detection and image segmentation model developed by Ultralytics, and it can be used for real-time detection of multiple objects. YOLO can be utilized in a range of applications including autonomous vehicles, security and surveillance, medical imaging, etc. [17]. The algorithm detects objects in the entire image in one forward pass through the network, an improvement over algorithms that divide the image into multiple regions. The model can be used for detecting objects in both images and videos by deploying a package of tools and libraries that includes pre-built models for its different modes, i.e. Train, Validate, and Predict. Prediction tasks performed on images (called inferences) can include detection, instance segmentation, and classification. The framework followed in this study is shown in Fig. 1.
Fig. 1. Framework followed in the paper to implement YOLOv8 for object detection.
Having installed and imported YOLOv8 into the Google Colab environment, the custom dataset for training the model is created by following sequential steps i.e., preparation and organization, labeling/annotation, and version generation. This can be done
on a computer vision platform, e.g. VoTT, Roboflow, Supervisely, or labelImg. It must be noted that to increase performance and decrease training time, various image transformations such as orientation changes and resizing are performed in the pre-processing step. To increase the number of images and account for variations among them, another set of transformations can be considered. This increases the accuracy of the model across the most common use cases for the part/product at hand. After custom training of the data, depending on the size of the data and the required accuracy, the epochs hyperparameter is input to the execution runtime. It must be noted that higher epoch counts with larger dataset sizes significantly increase the execution time of the model. The model is then ready to be validated and tested for performance measurement. The images not used for training form the validation batch, which can be used later to study the model's prediction performance. In validation mode, the model's performance can be examined on the test dataset (the batch not used so far) to obtain metrics such as mean average precision (mAP). Lastly, inferences on other images, recorded videos, or a live video stream are realized by deploying the model to process them.
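For illustration, the labeling/annotation step produces, for each image, one text line per object in the normalized format that YOLO-family training pipelines expect. A hypothetical converter from a pixel bounding box to such a line could look like this (the function name and example values are assumptions, not taken from the paper):

```python
def yolo_label_line(cls_id, box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) into one line of a
    YOLO annotation file: 'class x_center y_center width height', with all
    coordinates normalized to [0, 1] by the image size."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2.0 / img_w
    yc = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return "%d %.6f %.6f %.6f %.6f" % (cls_id, xc, yc, w, h)

# A hypothetical box for class id 2 in a 640x480 image:
print(yolo_label_line(2, (100, 40, 300, 140), 640, 480))
# -> 2 0.312500 0.187500 0.312500 0.208333
```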
4 Case Study The case study relates to a Husqvarna AB industrial plant performing assembly of various components in chainsaw production. The station under study belongs to filter holder assembly, in which an operator is tasked with multiple operations including insertion of the filter holder on the chainsaw, connecting wires from the ignition module to the filter holder, inserting screws, assembly of side screws, and verification of the correct installation of the candle, all within a cycle time of 44 s. Considering the constraints of a short cycle time for executing all the necessary tasks, it is noteworthy that the absence of a vision system on a robot stationed downstream necessitates the screws already being inserted on the filter holder. In this context, a recurring issue at the station is the possibility of operators inadvertently missing one or more screw insertions, resulting in quality deficiencies. Moreover, detecting black screws on a black base can be a particularly challenging task for vision systems in industry. This difficulty arises from the fact that most vision systems use contrast settings as a means of object identification; when two objects have similar color or luminance, the contrast between them becomes weak, which can cause the system to fail to detect them accurately. Furthermore, the plant management's decision to automate this station in the near future further highlights the need for a high-performing vision system. As per the requirements of the model, a custom dataset needs to be prepared as the first step. A total of 225 images were taken from the station, including images without the required data (marked Null). Having labelled the images for the objects of interest, which add up to three classes (Candle, Hole, and Screw), the images are grouped into three sets: Training set: 148 images (70%), Validation set: 38 images (20%), and Testing set: 22 images (10%).
To increase recall (decreasing false negatives), a ratio of 50% of null-annotated images is maintained. As the assembly is done on the shop floor with no controlled (artificial) lighting, the model should take lighting and camera setting changes into account. A range of {−50%, +50%} can be regarded as suitable for brightness variations. Another factor that could apply to this case is the blurring
Real-Time Defect and Object Detection in Assembly Line
103
effect, especially because the part under study has several edges to which the model could be overfitting in the process. To mitigate camera focus changes, a 1.75 px random Gaussian blur is added to the transformations list. Cropping (20%) and horizontal flipping are also included. The final number of images in the dataset is approximately 500 (444 training, 38 validation, and 22 testing). The dataset is then imported into the YOLO model, and training on the imported dataset is executed with 50 epochs.
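The augmentations described above (brightness in {−50%, +50%} and Gaussian blur up to 1.75 px) can be sketched with Pillow as below; the actual platform-generated pipeline is not reproduced here, so this is only an illustrative approximation:

```python
import random
from PIL import Image, ImageEnhance, ImageFilter

def augment(img, seed=None):
    """Apply the kinds of transforms described above: random brightness in
    [-50%, +50%] and a random Gaussian blur of up to 1.75 px radius."""
    rng = random.Random(seed)
    img = ImageEnhance.Brightness(img).enhance(1.0 + rng.uniform(-0.5, 0.5))
    return img.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0.0, 1.75)))
```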
5 Analysis and Discussion The confusion matrix from the model demonstrates that the model is highly accurate in detecting all the labeled objects, with two classes (Candle and Hole) being detected incorrectly as background in 3% and 4% of cases, respectively. The complete matrix can be seen in Fig. 2.
Fig. 2. (A) Confusion matrix showing ratios between True and Predicted detections in the model, (B) Model’s prediction on the subset validation batch showing different classes.
The model shows an overall convergence, which demonstrates that training has been adequate, although longer training could potentially yield better results. The curves in Fig. 3 plot training and validation loss over time during the model training process. The difference measured between predicted class probabilities and true class labels for each object in the training set (train/cls_loss) and validation set (val/cls_loss) means the model is penalized for incorrect predictions on the object classes. On the other hand, box loss measures differences between predicted and true bounding box coordinates for each object in both the validation and training sets. The model improves its object detection performance by minimizing the total loss function during training, which is a weighted sum of the class loss and box loss. Validating the model with the trained weights on the test dataset demonstrates that the model is highly accurate in its predictions. This can be seen in columns mAP50 and mAP50–95 of Table 1, which respectively show the model's object detection performance at an Intersection over Union (IoU) threshold of 0.5 and over the range of thresholds 0.5–0.95. This high accuracy is potentially attributable to the precise tolerances and optimal design of the products for use in an automated assembly line.

Fig. 3. Training and validation loss curves over time during model training process.

Table 1. Validation of the model on the test dataset using the new weights

Class  | Images | Instances | mAP50 | mAP50–95
All    | 38     | 185       | 0.995 | 0.689
Candle | 38     | 37        | 0.995 | 0.787
Hole   | 38     | 25        | 0.995 | 0.641
Screw  | 38     | 123       | 0.995 | 0.638
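The IoU thresholds underlying mAP50 and mAP50–95 are based on the standard Intersection over Union measure, which a minimal sketch can compute as follows:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max).
    A detection counts toward mAP50 if its IoU with a ground-truth box is >= 0.5."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0
```

mAP50–95 averages the mean average precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is consistently lower than mAP50 in Table 1.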
The model can then be used for inference on images or videos. Results of the inference on an example image from the test subset and a live video of the cell (snapshot) can be seen in Fig. 4.
Fig. 4. Running inferences on: (A) an example image from test subset, (B) a frame from live video from the station under study.
6 Conclusions and Future Work A machine vision solution for detection of defects and objects within assembly operations through the general deep learning YOLOv8 algorithm is discussed in this paper. An industrial case is used to train and test the algorithm. The model exhibits the ability to detect screws, which is commonly regarded as a difficult task due to the screws' black color and their placement on a black base, posing a significant challenge for vision systems used in industrial settings. Furthermore, the results demonstrate that YOLOv8 performs with high accuracy on both static images and dynamic pictures (real-time video). While this study does not aim to compare the performance of YOLOv8 with that of previous YOLO versions, the results obtained here improve markedly on those of the studies reviewed in Sect. 2 of this paper, indicating that the latest version of YOLO has been significantly improved. One limitation of this method lies in the difficulty of labelling raw images and creating classes, which is generally done manually; self-supervised methods could be a solution to such problems [2, 18]. Future studies in this field need to compare the performance of this general algorithm with that of industrial solutions commonly deployed in manufacturing companies. In particular, comparing the cost components of both methods would provide useful insights about return on investment. Another potential future work is investigating integration of this solution with robots and other automation equipment within manufacturing and assembly stations.
References
1. Azamfirei, V., Psarommatis, F., Lagrosen, Y.: Application of automation for in-line quality inspection, a zero-defect manufacturing approach (2023)
2. Zhou, L., Zhang, L., Konz, N.: Computer vision techniques in manufacturing. IEEE Trans. Syst. Man Cybern. Syst. 53, 105–117 (2023). https://doi.org/10.1109/TSMC.2022.3166397
3. What Is Machine Vision? https://www.intel.com/content/www/us/en/manufacturing/what-is-machine-vision.html
4. Vu, T.-T.-H., Pham, D.-L., Chang, T.-W.: A YOLO-based real-time packaging defect detection system. Procedia Comput. Sci. 217, 886–894 (2023). https://doi.org/10.1016/J.PROCS.2022.12.285
5. Basamakis, F.P., Bavelos, A.C., Dimosthenopoulos, D., Papavasileiou, A., Makris, S.: Deep object detection framework for automated quality inspection in assembly operations. Procedia CIRP 115, 166–171 (2022). https://doi.org/10.1016/J.PROCIR.2022.10.068
6. Mao, W.L., et al.: Integration of deep learning network and robot arm system for rim defect inspection application. Sensors 22, 3927 (2022). https://doi.org/10.3390/S22103927
7. Zuo, Y., Wang, J., Song, J.: Application of YOLO object detection network in weld surface defect detection. In: 2021 IEEE 11th Annual International Conference on CYBER Technology in Automation, Control and Intelligent Systems (CYBER), pp. 704–710 (2021). https://doi.org/10.1109/CYBER53097.2021.9588269
8. Aein, S.L., Thu, T.T., Htun, P.P., Paing, A., Htet, H.T.M.: YOLO based deep learning network for metal surface inspection system. In: Lecture Notes in Electrical Engineering, vol. 829, pp. 923–929 (2022). https://doi.org/10.1007/978-981-16-8129-5_141
9. Adibhatla, V.A., Chih, H.C., Hsu, C.C., Cheng, J., Abbod, M.F., Shieh, J.S.: Defect detection in printed circuit boards using you-only-look-once convolutional neural networks. Electronics 9, 1547 (2020). https://doi.org/10.3390/electronics9091547
106
M. Ashourpour et al.
10. Lin, Y.L., Chiang, Y.M., Hsu, H.C.: Capacitor detection in PCB using YOLO algorithm. In: 2018 International Conference on System Science and Engineering (ICSSE 2018) (2018). https://doi.org/10.1109/ICSSE.2018.8520170
11. Sun, W.-H., Yeh, S.-S.: Using the machine vision method to develop an on-machine insert condition monitoring system for computer numerical control turning machine tools. Materials 11(10), 1977 (2018). https://doi.org/10.3390/ma11101977
12. Ramshankar, Y., Deivanathan, R.: Development of machine vision system for automatic inspection of vehicle identification number. Int. J. Eng. Manuf. 8(2), 21–32 (2018). https://doi.org/10.5815/ijem.2018.02.03
13. Gargiulo, F., Duellmann, D., Arpaia, P., Schiano Lo Moriello, R.: Predicting hard disk failure by means of automatized labeling and machine learning approach. Appl. Sci. 11, 8293 (2021). https://doi.org/10.3390/APP11188293
14. Li, B.: Research on geometric dimension measurement system of shaft parts based on machine vision. EURASIP J. Image Video Process. 2018, 1–9 (2018). https://doi.org/10.1186/s13640-018-0339-x
15. Misiak, P., Szempruch, D.: Automated quality inspection of high voltage equipment supported by machine learning and computer vision. In: Lecture Notes in Computer Science, vol. 13652 LNAI, pp. 211–222 (2022). https://doi.org/10.1007/978-3-031-21441-7_15
16. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
17. Ultralytics YOLOv8 Docs. https://docs.ultralytics.com/
18. Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: 37th International Conference on Machine Learning (ICML 2020), pp. 1575–1585 (2020). https://doi.org/10.48550/arxiv.2002.05709
Using Computer Vision to Improve SME Performance

Kokou C. Lissassi1,3, Paul-Eric Dossou1,2(B), and Christophe Sabourin3

1 ICAM Site of Grand Paris Sud, 77127 Lieusaint, France
[email protected]
2 University of Gustave Eiffel, 77420 Champs-Sur-Marne, France
3 Univ Paris Est Créteil, LISSI, 77567 Lieusaint, France
Abstract. Volatile customer demand, combined with globalization, compels companies to focus on improving their global performance. Large companies use Industry 4.0 concepts to increase their performance; despite the success of these concepts, SMEs remain reluctant to exploit them for their digital transformation. Nevertheless, human-machine collaboration that combines sustainability aspects with these new technologies offers a path to performance improvement. This paper focuses on the use of computer vision and mobile robots to optimize production through the exploitation of the lean manufacturing methodology; in a production line, the transportation of products or raw materials is a form of waste. A literature review covering organizational methods, Industry 4.0, and vision, including artificial intelligence tools, identifies concepts that contribute to developing the solution for SMEs. This paper proposes a sustainable methodology, supported by an intelligent tool, to reduce this motion waste in SMEs and increase operators' well-being at work. An illustration based on an electronics-card SME is presented to validate the concepts and tool that have been elaborated.
Keywords: Performance optimization · Computer Vision · Artificial intelligence · Robotics · Lean manufacturing
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 107–116, 2024. https://doi.org/10.1007/978-3-031-38241-3_13

1 Introduction
In this new era of mass consumption, companies are subjected, as never before, to tough competition, and productivity has become one of their biggest challenges in staying in the race, as it reflects the internal state of the company. Simply put [1], productivity "expresses the relationship between the quantity of goods and services produced (output) and the quantity of labor, capital, land, energy, and other resources utilized to produce these goods and services". Industry 4.0 concepts are used as a tool to digitally transform companies [2], and related assessments have been developed to measure the performance of large companies [3]. Despite the success of these concepts, their adoption in SMEs remains limited. A sustainable methodology has been developed to increase the exploitation of new technologies in SMEs [4]. This method combines new-technology tools with sustainability to exploit organizational methods such as lean
manufacturing [5] for the SME optimization. Each production line of the SME needs to be optimized: for a given amount of available resources, the question is how well they are utilized to obtain the best outcome. This ratio-oriented view is also developed through various case studies [6]. High productivity is thus a key factor for a company to extend its influence on the market, as it may lead to higher output, lower per-unit cost, higher profit and, finally, higher wages for workers. Many companies and researchers have expressed interest in the problem and are eager to identify both effective techniques to boost productivity and harmful factors to avoid. Waste is one of those negative elements. In his famous book Toyota Production System: Beyond Large-Scale Production [7], Taiichi Ohno identified seven types of waste: overproduction, waiting, transportation, over-processing, stock on hand (inventory), unnecessary movement, and making defective products. Waste is defined [7] as any action that does not add value to the finished product, such as walking to collect parts or opening a package of outside-ordered products. As a result, numerous approaches have been developed over time to eliminate it, with lean manufacturing being by far the most prevalent; lean manufacturing aims to increase productivity while eliminating waste in industrial systems. After a literature review describing Industry 4.0 concepts and organizational methods such as lean manufacturing, vision, and artificial intelligence, this paper explains the concepts and methodology developed for increasing company performance through human-mobile robot collaboration. Then, the architecture of the intelligent system developed for improving the company's manufacturing performance is presented. An illustrative example is given to validate the concepts and tool.
2 Literature Review
This section first presents some of the Industry 4.0 concepts and organizational methodologies developed in the literature to increase company performance; the second part focuses on artificial intelligence tools and vision.
2.1 Organizational Methods and Industry 4.0
The integration of Industry 4.0 concepts into company manufacturing systems has been a success in large companies but needs wider implementation in SMEs [8]. The main goal of this philosophy is the digital transformation of the company [9] through the use of new technologies such as cyber-physical systems, the internet of things, cloud computing, robotics, digital twins, information systems, big data, and artificial intelligence. For instance, big data technology helps manage the structured and unstructured data [10] of the company's informational and physical systems. Cloud computing makes data available on devices with computing capabilities [11] to managers and operators. Robots are defined as advanced, computer-programmable machines able to perform complex actions automatically [12]. Together, these new technologies make it possible to integrate, automate, and optimize manufacturing production flows [13]. An Industry 4.0 framework using vertical and horizontal integration [2] has been developed for the company's digital transformation.
This framework focuses on end-to-end engineering through the value chain. Organizational methods such as lean manufacturing, DMAIC, and Design of Experiments (DOE) contribute to company performance improvement. Lean manufacturing comprises tools such as Value Stream Mapping (VSM), Kanban, and SMED that can be used to increase the performance of manufacturing processes. The objective is to eliminate waste, such as transport, in the production processes and to focus on added value [14]. Six Sigma DMAIC is a methodology used to improve quality [15] and decrease waste in processes. DOE is an experimental and statistical methodology for finding solutions to technical challenges such as production problems or complex mechanisms [16]. In this paper, a combination of the lean manufacturing methodology and DMAIC will be used to increase company performance.
2.2 Vision
Combining these organizational methods and Industry 4.0 concepts to optimize manufacturing processes requires solving technical problems in order to eliminate waste. Vision has always been a crucial component of industry, largely utilized for monitoring structures, inspecting product quality, and tracking flaws or damage. Computer-based vision offers an attractive trade-off between cost and consistency, precision, and working time, unlike human perception, which can easily be tricked. A non-exhaustive list of computer vision's industrial uses has been provided [17], including automated visual inspection of food, automated manufacturing, printed circuit boards, steel, and wood. For instance, in the food industry, it can be used to check the quality of baked products based on their appearance. Another application, in Printing Industry 4.0, automated the visual inspection process using computer vision [18].
In contrast to these earlier, product-oriented uses, a method has been developed to lower the risk of COVID-19 transmission by verifying correct mask use and distance-keeping measures [19]. Thanks to recent breakthroughs in sensor technologies, cameras, and image processing and computer vision algorithms, the scope of vision has expanded to applications requiring high-level processing, such as human gesture recognition. Gesture is a generic term referring to body, hand and arm, or head and facial gestures. Because of its many uses, notably in the robotics sector for robot control, gesture recognition has recently attracted considerable interest. A typical recognition task includes three stages [20]: (i) gesture representation (for static gestures), (ii) gesture tracking (for dynamic gestures), and (iii) gesture classification. In the representation stage, skin color is a common visual cue for distinguishing people from the background. However, because it is sensitive to lighting, it is frequently combined with additional features such as motion and depth. A skin-color map with depth information has been used to detect pointing gestures [21], with the result fed into Hidden Markov Models (HMMs). Given the complexity of manual feature extraction, recent research centers on learning strategies to autonomously extract features relevant to the recognition task. K-means clustering has been applied to an Artificial Neural Network (ANN) trained on the H, S, and V components of the HSV color space to detect American Sign Language [22]. These formalisms focusing on the combination of vision tools, artificial
intelligence, and other new technologies can be exploited to optimize manufacturing processes. The formalisms, concepts, and methodologies presented in this section are combined in the following section to elaborate the concepts and tool for increasing company performance through human-machine collaboration.
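As a toy illustration of the skin-color cue discussed in the review, the check below converts a pixel to HSV with Python's standard `colorsys` module and applies fixed thresholds. The threshold values are illustrative assumptions only; the cited works learn such parameters or fuse color with depth and motion cues rather than hard-coding them:

```python
import colorsys

def is_skin(r, g, b):
    """Very rough skin test in HSV space on 0-255 RGB values.
    Thresholds are illustrative, not taken from the cited works."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # low hue (reddish/yellowish), moderate saturation, not too dark
    return h <= 50 / 360.0 and 0.15 <= s <= 0.68 and v >= 0.35

def skin_mask(pixels):
    """pixels: iterable of (r, g, b) tuples -> list of booleans."""
    return [is_skin(*p) for p in pixels]
```

In practice such a mask would be computed per frame and post-processed (morphology, connected components) before the tracking and classification stages.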
3 Concept and Methodology
This section describes the strategy adopted to put the solution into practice.
3.1 The Organizational Methodology
A sustainable Industry 4.0 methodology has been developed in [8], putting sustainability at the center of the company's digital transformation. Three transformation axes have been defined: physical, decisional, and informational. The decisional transformation structures the strategic, tactical, and operational decisions of the company in order to increase its performance. The informational transformation integrates information systems and new technologies such as big data, cloud computing, and applications to facilitate manufacturing and process management. The physical axis contributes to optimizing the manufacturing processes through better organization. Lean manufacturing and the DMAIC methodology are exploited to realize these changes. The SME digital transformation corresponds to the definition of steps to attain the company objectives in line with its financial capacity. Improving the manufacturing processes requires physical new technologies such as mobile robots, IoT devices, and cobots, combined with informational tools. In line with lean manufacturing, in a sustainable digital transformation of manufacturing processes humans should focus on value-added tasks, while non-value-added tasks are managed by new technologies (see Fig. 1).
Fig. 1. Manufacturing processes sustainable digital transformation
3.2 The Computational Structure
This section presents the concepts used to elaborate the computational structure supporting the previous methodology. The computational structure associated with human-mobile robot collaboration is defined as follows. Each workstation is equipped with a camera that captures the gestures of the worker. Employees are trained in the various gestures used to request a specific component. Each time a gesture is recognized by the camera, an order is sent to the central unit, the Central Intelligent System (CIS). Based on heuristic information, in this case the robot closest to the first waypoint on the navigation path, the CIS is in charge of assigning a free robot to carry out the mission. A Graphical User Interface (GUI) is also provided to help the manager monitor the overall state of the system by displaying, in real time, the current mission being carried out and the corresponding robot. Communication between these blocks is ensured through ROS (Robot Operating System).
Robot Operating System
The Robot Operating System (ROS), begun as a personal project by Eric Berger and Keenan Wyrobek in 2007, is a powerful working environment with outstanding tools that facilitate the development of robotic applications. ROS is well suited to a variety of applications since it is particularly effective at building heterogeneous systems and is compatible with several programming languages, such as C++, Python, and Java. A basic ROS system is made up of a master (roscore), which enables communication between the other parts, the nodes. A node is the smallest ROS-compatible processing unit. ROS implements three communication channels: services for one-time messages, topics for asynchronous and continuous communication, and actions when feedback is necessary.
Person Detection System
The vision system is implemented based on the jetson-inference project.
The jetson-inference project is a GitHub repository [23] developed and maintained by NVIDIA developers. It includes a variety of deep learning projects that use pretrained neural networks to carry out typical computer vision tasks such as pose estimation, object detection, semantic segmentation, and image classification. By using the pose estimation project, a training phase was avoided for this application. Using a pretrained residual network, the model computes the coordinates of eighteen key points on the human body, including the wrists, shoulders, and elbows. The angle formed at the elbow (see Fig. 4b) is computed from the shoulder, elbow, and wrist coordinates, and when it reaches a certain value, a command is delivered.
Robot Navigation System
Making a map, a metrical representation of the production line, is the first stage in the navigation process. Both the operator and the robot need it: the former to set a destination point, and the latter to calculate the distance to that place.
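The elbow-angle trigger described above can be sketched as follows. The threshold value and the 2-D keypoint format are assumptions for illustration; the paper does not specify them:

```python
import math

ANGLE_THRESHOLD_DEG = 150.0  # assumed trigger value, not given in the paper

def elbow_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow, between the elbow->shoulder
    and elbow->wrist vectors, from 2-D keypoints (x, y)."""
    v1 = (shoulder[0] - elbow[0], shoulder[1] - elbow[1])
    v2 = (wrist[0] - elbow[0], wrist[1] - elbow[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos_a = max(-1.0, min(1.0, dot / norm))  # clamp for float safety
    return math.degrees(math.acos(cos_a))

def should_send_command(shoulder, elbow, wrist):
    """True when the arm is extended enough to count as a request gesture."""
    return elbow_angle(shoulder, elbow, wrist) >= ANGLE_THRESHOLD_DEG
```

A bent arm (right angle at the elbow) stays below the threshold, while a straight arm (angle near 180°) fires the command.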
ROS offers a variety of packages to implement standard robotic tasks such as map creation and navigation. Navigation is implemented using the navigation stack, while the map is built using gmapping. The navigation stack is made up of the robot's pose estimation, map localization, and path planning nodes.
Graphical User Interface (GUI)
The GUI has two functions. First, it provides the means by which the employee sets the planning list. A mission may be a supply mission, in which the robot provides the operator with raw materials, or a storage mission, in which the robot removes the finished product from the workspace at the operator's request. The second function of the GUI is monitoring: it shows, in real time, the mission being carried out as well as the associated robot. These concepts and tools are implemented in the computational structure, and the following section presents the architecture of the system developed to improve manufacturing performance.
4 Architecture
The architecture of the intelligent system developed for managing the human-mobile robot interaction is composed of the following modules (see Fig. 2).
Fig. 2. Overview of the intelligent system.
Four major blocks make up the solution. The vision system is the primary point of interaction with the system: the camera tracks the operator in real time, and the vision module performs pattern recognition on the input images. When a match is found, a request is forwarded to the Artificial Intelligence Module, which is responsible for allocating the appropriate mobile robot to perform the mission. The Human/Machine Interface allows the operator to track the proper functioning of the system by displaying the current mission and the corresponding robot, and provides a means by which the operator can set the hyperparameters (thresholds, minimum and maximum times, etc.) of the system. A database module manages all the information of the system. It stores all relevant system data, including the number of robots, their names, all missions, and the navigation path (series of waypoints) for each mission, along with the missions'
priority. Figure 3 shows the internal flowchart of the system. Since there is only one intelligent module for the entire system, each request must first go through a queue management module before the intelligent module can begin processing it. The queue manager handles input requests in First-In-First-Out (FIFO) order. The intelligent module first searches the database for a list of available robots. If one is found, it queries for the mission with the highest priority and assigns the nearest robot to carry it out. Otherwise, when no robot is available, the request is added to the pending list and is automatically executed as soon as a robot becomes available.
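The dispatch logic described above can be sketched in a few lines of Python. All names are illustrative; squared Euclidean distance to the mission's first waypoint stands in for the path-based distance the real system would use, and the database-driven priority lookup is omitted for brevity:

```python
from collections import deque

class CentralIntelligentSystem:
    """Sketch of the CIS loop: FIFO request queue, nearest free robot,
    pending list when no robot is available."""

    def __init__(self, robots):
        self.positions = dict(robots)   # robot name -> (x, y)
        self.free = set(robots)
        self.queue = deque()            # incoming requests (FIFO)
        self.pending = deque()          # requests waiting for a free robot

    def request(self, mission):
        """Enqueue a mission; returns (robot, mission) if one is dispatched."""
        self.queue.append(mission)
        return self._dispatch()

    def _dispatch(self):
        while self.queue:
            mission = self.queue.popleft()
            if not self.free:
                self.pending.append(mission)   # no robot available yet
                continue
            wx, wy = mission["first_waypoint"]
            nearest = min(
                self.free,
                key=lambda r: (self.positions[r][0] - wx) ** 2
                            + (self.positions[r][1] - wy) ** 2,
            )
            self.free.discard(nearest)
            return nearest, mission
        return None

    def robot_done(self, name, position):
        """Free a robot and immediately retry the oldest pending request."""
        self.positions[name] = position
        self.free.add(name)
        if self.pending:
            self.queue.appendleft(self.pending.popleft())
            return self._dispatch()
```

With two robots at (0, 0) and (5, 5), a mission whose first waypoint is (1, 1) would be assigned to the first robot; a third simultaneous request would sit in the pending list until a robot reports done.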
Fig. 3. Internal flowchart of the system
5 Illustration
This section presents the use case realized for an SME in the microelectronics industry. A demonstrator corresponding to a situation at this SME has been elaborated to validate the concepts and tool developed in this paper.
5.1 Experimental Environment
The production line has been simulated by a rectangular 3668 mm × 2503 mm surface with plywood on the four edges (see Fig. 4a). It is designed to hold four workstations, two along one length and one on each width; the stacker crane is held on the final length. A Raspberry Pi camera v2 has been used in conjunction with the NVIDIA Jetson Nano 2 GB developer kit to implement the vision system. Two TurtleBot3 Burger robots carry out the missions. The master node runs on Ubuntu Desktop 20.04, and the entire system is powered through ROS Noetic. The scenario being tested is described as follows; two mission categories have been defined:
• Storage missions: the robot first goes to the operator workstation to retrieve the finished product, then moves to the storage area to store it.
• Supply missions: the robot first goes from its current position to the storage area to gather the raw materials, then moves to the operator workstation.
Fig. 4. Experimental environment
5.2 Results
Figure 5 depicts the execution of a typical supply operation. As shown, upon receipt of the order, the CIS chooses the closest robot to complete the task, in this case robot 1. In the GUI, the user can also see that robot 1 is in charge of the mission with the highest priority. When the second order is received, robot 2 is the only robot still available, so the mission is assigned to it; in this case, task 16 on the planned list, which has the second-highest priority, is carried out. The human-machine collaboration has been implemented, and the optimization of the company's manufacturing process has been validated by this demonstrator.
Fig. 5. Execution of a typical supply mission
5.3 Discussion
In the experiment, the operator in the demonstrator takes 3 min (corresponding to the SME's wasted time) to retrieve raw material from an external storage space. These operator movements are taken over by the mobile robots through the use of an intermediate storage area that is filled each day in full compliance with production
forecasts. This allows an operator who makes 20 such trips per day to save an hour of work, since the supplies are delivered in hidden time. Multiplied by the number of operators, this significantly boosts the performance of the business through a decrease in costs due to the operators' increased productivity, a reduction in production time (improving customer delivery times), and better quality due to the operators' focus on their primary task. From a sustainability perspective, ergonomic aspects have been taken into account through fewer daily movements and less operator fatigue. However, there remains a concern regarding the loss of the employee movements that contributed to the company's positive social atmosphere. It will be necessary to compensate for this by instituting shared activities such as team breakfasts or debriefing snacks, which at first glance seem like a waste of time but are actually a source of well-being and help prevent absenteeism and work-related illness. By creating these moments, the business succeeds.
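The time saving claimed above follows directly from the stated figures (3 min per trip, 20 trips per day). The operator count and number of working days below are purely illustrative assumptions, added only to show how the saving scales:

```python
minutes_per_trip = 3    # wasted walking time per retrieval (from the case study)
trips_per_day = 20      # retrievals per operator per day (from the case study)

daily_saving_min = minutes_per_trip * trips_per_day  # minutes saved per operator per day

# Hypothetical scaling factors (not from the paper):
operators = 10
working_days_per_year = 220

annual_saving_hours = daily_saving_min / 60 * operators * working_days_per_year
```

Under these assumed figures, the one hour saved per operator per day would amount to 2200 operator-hours per year across the site.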
6 Conclusion
The lean manufacturing philosophy considers transport and operator motion as waste (muda). Introducing Industry 4.0 concepts into SME manufacturing processes makes it possible to optimize company performance as part of a sustainable digital transformation. This paper focuses on the development of a sustainable methodology, supported by an intelligent tool, to increase operator performance and, ultimately, company productivity. The proper functioning of this solution depends, however, on an effective means of communication between human and robot. Vision tools, artificial intelligence techniques, new technologies such as advanced robotics, and organizational methods have been exploited to achieve these goals. A demonstrator for testing use cases of an electronics-card SME has been presented. In this typical use-case scenario, the operators perform a gesture whenever raw materials or final-product delivery is required, and this gesture is interpreted by the mobile robot to carry out the procurement or dispatching operation. This demonstrator validates the concepts and tool elaborated to address the problem of transport and operator motion as waste. The results now need to be implemented in the company to measure their real impact in terms of productivity, sustainability, and digital transformation. Despite the clarity and precision that gestures bring to interaction, daily human-machine interactions are mostly voice-based; the use of artificial intelligence techniques such as deep learning will improve the intelligent tool and extend its utilization to other industrial areas.
References
1. Ibrahim Egdair, S.L.: Analysis of factors which impact on productivity of manufacturing companies (2016)
2. Stock, T., Seliger, G.: Opportunities of sustainable manufacturing in Industry 4.0. Procedia CIRP 40, 536–541 (2016)
3. Bai, C., Dallasega, P., Orzes, G., Sarkis, J.: Industry 4.0 technologies assessment: a sustainability perspective. Int. J. Prod. Econ. 229, 107776 (2020)
4. Dossou, P.E.: Development of a new framework for implementing industry 4.0 in companies. Procedia Manuf. 38, 573–580 (2019)
5. Sony, M.: Industry 4.0 and lean management: a proposed integration model and research propositions. Product. Manuf. Res. 6, 416–432 (2018)
6. Sreekumar, M.D., Meghna Chhabra, D.R.Y.: Productivity in manufacturing industries. Int. J. Innov. Sci. Res. Tech. 3(10), 634–639 (2018)
7. Ohno, T.: Toyota Production System. Diamond, Inc. (1978)
8. Koumas, M., Dossou, P.-E., Didier, J.-Y.: Digital transformation of small and medium sized enterprises production manufacturing. J. Softw. Eng. Appl. 14, 607–630 (2021)
9. Li, L.: China's manufacturing locus in 2025: with a comparison of "Made-in-China 2025" and "Industry 4.0." Technol. Forecast. Soc. Change 135, 66–74 (2018)
10. Banger, G.: Endüstri 4.0 Extra. Dorlion Yayınları, Baski, Ankara (2017)
11. Basl, J.: The pilot survey of the Industry 4.0 principles penetration in the selected Czech and Polish companies. J. Syst. Integr. 7(4), 3–8 (2016)
12. Duman, M.C., Akdemir, B.: A study to determine the effects of industry 4.0 technology components on organizational performance. Technol. Forecast. Soc. Change 167, 120615 (2021). https://doi.org/10.1016/j.techfore.2021.120615
13. Benesova, A., Hirman, M., Steiner, F., Tupa, J.: Determination of changes in process management within Industry 4.0. Procedia Manuf. 38, 1691–1696 (2019)
14. Al-Tahat, M.D., Jalham, I.S.: A structural equation model and a statistical investigation of lean-based quality and productivity improvement. J. Intell. Manuf. 26(3), 571–583 (2013)
15. Arcidiacono, G., Pieroni, A.: The revolution lean six sigma 4.0. Int. J. Adv. Sci. Eng. Inf. Technol. 8(1), 141–149 (2018)
16. McLean, K.A.P., McAuley, K.B.: Mathematical modelling of chemical processes-obtaining the best model predictions and parameter estimates using identifiability and estimability procedures. Can. J. Chem. Eng. 90(2), 351–366 (2012). https://doi.org/10.1002/cjce.20660
17. Ciora, R.A., Simion, C.M.: Industrial applications of image processing. ACTA Universitatis Cibiniensis 64(1), 17–21 (2014)
18. Villalba-Diez, J., Schmidt, D., Gevers, R., Ordieres-Meré, J., Buchwitz, M., Wellbrock, W.: Deep learning for industrial computer vision quality control in the printing industry 4.0. Sensors 19(18), 3987 (2019)
19. Khandelwal, P., Khandelwal, A., Agarwal, S., Thomas, D., Xavier, N., Raghuraman, A.: Using computer vision to enhance safety of workforce in manufacturing in a post covid world. Comput. Vis. Pattern Recogn. 1–7 (2020)
20. Liu, H., Wang, L.: Gesture recognition for human-robot collaboration: a review. Int. J. Ergon. 68, 355–367 (2018)
21. Nickel, K., Stiefelhagen, R.: Real-time person tracking and pointing gesture recognition for human-robot interaction. In: Sebe, N., Lew, M., Huang, T.S. (eds.) CVHCI 2004. LNCS, vol. 3058, pp. 28–38. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24837-8_4
22. Shomi Khan, M.E.A., Das, S.S.: Real time hand gesture recognition by skin color detection for American sign language. In: 4th International Conference on Electrical Information and Communication Technology (EICT), pp. 1–6. Khulna, Bangladesh (2019)
23. Franklin, D.: Jetson inference. https://github.com/dusty-nv/jetson-inference
Cyber-Physical Systems Based Smart Manufacturing of Disinfectants: A Need, and Solution Driven by COVID-19 Pandemic

Faiz Iqbal1(B), Tushar Semwal2, and Adam A. Stokes2

1 School of Engineering, University of Lincoln, Lincoln LN6 7TS, UK
[email protected]
2 School of Engineering, Institute for Integrated Micro and Nano Systems, The University of Edinburgh, The King's Buildings, Edinburgh EH9 3FF, UK
Abstract. Cyber-physical systems and Industry 4.0 have made smart manufacturing possible. Multinational companies were able to adapt quickly to changing times, but micro small-scale enterprises (m-SMEs) were at a crossroads in adopting new technology due to the extensive costs involved. This gap, the absence of existing solutions for micro-SMEs, is the outcome of the literature survey for this work. The COVID-19 pandemic then made it extremely difficult for micro-SMEs to survive. This paper implements a smart manufacturing solution for the production facility of one such micro-SME. Pre-pandemic, the facility had enough capacity to satisfy demand, but the pandemic overwhelmed production with a rapid rise in demand. The main goal of this study is to help the micro-SME cope with this unprecedented rise in demand. The production facility was upgraded and smart features were incorporated, based on a cyber-physical production systems framework, to predict demand and transform manual manufacturing into smart manufacturing. With more than one product being produced, optimum production was achieved, avoiding over- and underproduction of the different products given the limited storage capacity available to micro-SMEs.
Keywords: Smart Manufacturing · Cyber-Physical Systems · Small scale enterprises · AI · COVID-19
1 Introduction

Smart manufacturing has spread rapidly across the manufacturing industry over the last decade and a half. Smart manufacturing is defined by Lu et al. [1] as a fully integrated and collaborative manufacturing system that responds in real time to meet the changing demands and conditions in the factory, supply network, and customer needs. While MNEs (multinational enterprises) can cope with this disruptive industrial revolution, many micro small-scale enterprises (m-SMEs) have faced difficulties keeping up with the pace at which it has advanced. This means that m-SMEs have already faced, or will soon face, problems as competitors who do adopt the new technology start © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 117–124, 2024. https://doi.org/10.1007/978-3-031-38241-3_14
gaining an advantage over them. The competition will come not only from similar-scale enterprises: the new industrial revolution also enables large-scale enterprises to capture small-scale markets. The current literature offers a plethora of work on smart manufacturing and related topics such as CPS and I4.0. Most of these works have focused on roadmaps, maturity models, architectures and frameworks of smart manufacturing, addressing the needs of MNEs only. The unprecedented COVID-19 situation brought about the need for one existing enterprise to undergo the smart manufacturing transformation: the pandemic made it impossible for the enterprise to continue its usual production practice and forced it to think outside the box to keep up with a whopping rise in demand for its product. Smart manufacturing has gained importance all over the world, and an enormous amount of work is being done to bring it into reality. The latest advances and new and improved products, services and software have all been made available at an early stage for the adoption of I4.0. The new industrial revolution has led to possibilities of mass customization [2] and mass personalization [3]. Termed I4.0, this revolution encompasses many significant terms, one of which is smart manufacturing [4, 5]. A survey on the transition of mechatronic systems into CPS [6] listed designs, models and simulations for CPS. A cyber-physical manufacturing cloud (CPMC), mainly focusing on visualization of the CPS [7], was presented as a monitoring system and testbed for CPS. A recent work on CPS-based transformation of existing legacy machines is available in the literature [8]. Malhotra et al. [9] gave an architecture of CPS for smart manufacturing. This background check concludes that the literature has largely provided conceptual frameworks and models of smart manufacturing and CPS.
A lack of implemented, CPS-enabled smart manufacturing is evident, and no existing research addresses enabling SMEs (especially micro businesses) to adopt smart manufacturing; this gap motivates this work and underpins its novelty. The paper is structured as follows: Sect. 1 introduces the work, including the literature review, the micro-SME and the current state of the art. Section 2 describes the proposed solution, Sect. 3 presents the results with relevant discussion, and Sect. 4 concludes the paper, followed by the acknowledgements and references.

1.1 The Micro-SME

This work focuses on a micro-SME that produces disinfectants without any Industry 4.0 technologies; its demand-to-supply balance had been manageable until the COVID-19 pandemic. Demand then rose drastically and production ran permanently at 100% capacity, making it difficult to balance demand and supply across different products and multiple customers at the same time. The same holds for any facility that makes one or more products and has multiple customers placing orders for one or many products in different quantities as per their needs. In this unprecedented situation, customer orders became more frequent and demand very high, leading to delays.

1.2 State of the Art

Figure 1 schematically shows the state in which the production facility was operating before the pandemic. An operator is tasked with manually mixing the ingredients in the final mixing
tank. Being an m-SME, only a limited storage facility is available. With this storage constraint and the manual operation, the company usually fills all the storage tanks with one type of product, leaving little room for flexibility if there is demand for a different product type. Product types A and B are taken as examples to illustrate the flexibility issues of the manual production setup.
Fig. 1. Pre-COVID state of m-SME production facility.
The COVID-19 pandemic rendered the existing practices outdated: demand was so high that the existing production rate could not possibly meet it. A smart facility, however, would have a better chance of coping with this sudden rise in demand and at least narrowing the gap between demand and supply; a facility that was not smart would be left in complete disarray. This is not to say that the human decision makers were not smart enough to manage production as best they could, but whether that was enough is the question. This led to the current work, which focuses on providing a solution that enables the company to better manage its production according to the specific demands of individual customers. A CPPS framework [10, 11], which can guide industrial organisations in creating new manufacturing facilities or upgrading existing ones to be compatible with Industry 4.0, was used in this work to transform the micro-SME facility into an automated one, onto which smart features were then incorporated.
2 The Proposed Solution

For this work, the existing manual setup at the facility was revamped with an industrial PLC-based automated system that replaced the manual tasks, scaling up the production volume and providing compatibility with the smart features of a cyber-physical system (CPS) to enable
smart manufacturing functionalities. Figure 2 shows a schematic representation of the revamped system, with the automated system replacing the manual operator.
Fig. 2. Automated production system
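The batch logic that the PLC-based system automates can be sketched as follows. This is a hedged illustration only: the recipe names, ingredient ratios and the `dose_batch` helper are invented for this sketch and are not taken from the paper, which does not disclose the actual formulations.

```python
# Illustrative dosing logic for one batch: the automated system meters
# each ingredient by recipe ratio into the final mixing tank, replacing
# the manual operator. Products and ratios are hypothetical.
RECIPES = {
    "A": {"ingredient_1": 0.70, "ingredient_2": 0.30},
    "B": {"ingredient_1": 0.55, "ingredient_2": 0.45},
}

def dose_batch(product, volume_litres):
    """Return litres of each ingredient to meter for one batch.
    In the real system each entry would become valve/pump commands."""
    recipe = RECIPES[product]
    return {name: round(volume_litres * ratio, 2)
            for name, ratio in recipe.items()}

doses = dose_batch("A", 100.0)
# -> {'ingredient_1': 70.0, 'ingredient_2': 30.0}
```

Expressing the recipe as ratios rather than absolute volumes lets the same logic serve any batch size the scheduler requests.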
The current work proposes a smart manufacturing solution for m-SMEs; to keep the solution generic and applicable to other m-SMEs, some assumptions and pseudo-scenarios have been made. It is assumed that the company makes two products, Product-A (PA) and Product-B (PB). Demand for these products can come from various places such as hospitals, malls and public transport stations. For this work only hospitals are assumed as customers, since during the COVID-19 pandemic these generated most of the demand; four customers are assumed: Hospital-1 (H1), Hospital-2 (H2), Hospital-3 (H3) and Hospital-4 (H4). Consumption at the customer end is related to confirmed COVID-19 cases on hospital floors, which maps to Product-A, and to ambulance visits, which map to Product-B. More positive cases during a day meant more movement of COVID-19-positive patients on the floors and hence more cleaning. The COVID-19 data was publicly available on a government website; we developed algorithms as Python scripts to access the daily COVID-19 data and extracted the daily change in COVID-19-positive patients and ambulance attendances in four customer areas, mapping one each to H1, H2, H3 and H4. This enables us to define the product consumption and consequently the demand for the two products from each customer.
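The mapping from public daily figures to product demand can be sketched in Python as below. The litres-per-event factors, the sample figures and the function names are assumptions for illustration; the paper's actual scripts pulled real figures from the government dashboard.

```python
# Sketch of the demand-mapping step: day-on-day change in confirmed
# cases drives Product-A (floor cleaning) and daily ambulance visits
# drive Product-B. Conversion factors are invented.
LITRES_A_PER_CASE = 2.0       # assumed consumption per new positive case
LITRES_B_PER_AMBULANCE = 1.5  # assumed consumption per ambulance visit

def daily_change(cumulative):
    """Cumulative counts -> day-on-day increments."""
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

def demand(cases_cum, ambulances_per_day):
    new_cases = daily_change(cases_cum)
    return {
        "product_A": [n * LITRES_A_PER_CASE for n in new_cases],
        "product_B": [n * LITRES_B_PER_AMBULANCE for n in ambulances_per_day],
    }

# Illustrative figures for one hospital region
h1 = demand(cases_cum=[100, 130, 170, 240], ambulances_per_day=[12, 9, 15])
# product_A: [60.0, 80.0, 140.0]; product_B: [18.0, 13.5, 22.5]
```

Running this once per region yields a per-customer daily demand series for each product, which the scheduler then consumes.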
3 Results and Discussion

Figure 3a is a plot of daily COVID-19 cases across the four hospitals and Fig. 3b is a plot of daily ambulance visits attending COVID-19 calls. The production statistics of the pre-COVID facility are provided in Fig. 4.
Fig. 3. The daily data across the four hospital regions: (a) +ve cases, (b) ambulance visits.
Figure 5 shows the same production facility operating amidst the COVID-19 pandemic and how the production statistics were affected by the unprecedented situation. Production remained as in the pre-COVID era, but demand rose sharply with the rise in COVID-19 cases (Fig. 3). This unprecedented demand far exceeded the amount of each product the m-SME facility could produce and hence resulted in negative inventory, as seen in Fig. 5. With the automated system, the capacity for producing the two products increased considerably; the revamped production statistics during COVID-19 are shown in Fig. 6.
Fig. 4. Production stats for products A and B before COVID-19.
This mismatch between product A having positive inventory (i.e., overproduction) and product B having negative inventory (i.e., underproduction) can be managed, but with manual decision-making it is difficult to avoid. After developing the automated production system, and recognising the need for a smart manufacturing solution to better predict demand, a production scheduling approach was developed.
A linear regression model was employed to study the trends of COVID-19 cases and the related product consumption. The model was then used to predict COVID-19 cases, and thus product demand, for the coming week.
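A minimal stand-in for this forecasting step is a least-squares fit of daily counts against the day index, extrapolated forward. The training figures below are illustrative, not the hospitals' real data, and the implementation is a generic sketch rather than the authors' actual model.

```python
# Linear-regression forecast sketch: fit y against x = 0..n-1 and
# extrapolate the next few days.
def fit_line(y):
    """Ordinary least squares; returns (slope, intercept)."""
    n = len(y)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(y) / n
    slope = sum((x - mean_x) * (v - mean_y) for x, v in zip(xs, y)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def forecast(y, horizon=3):
    slope, intercept = fit_line(y)
    n = len(y)
    return [slope * (n + k) + intercept for k in range(horizon)]

cases = [20, 24, 27, 33, 35, 41, 44]  # one illustrative week of daily cases
next3 = forecast(cases, horizon=3)    # extrapolates the rising trend
```

With more training weeks accumulated, the fit window can simply be extended, which is consistent with the paper's observation that prediction error shrinks as more data arrives.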
Fig. 5. Production stats for products A and B during COVID-19 with existing facility.
Fig. 6. Production stats for products A and B during COVID-19 with automated system.
Figures 7a and 7b show the model fit on the training data, the predicted data and the actual future data for COVID-19 cases and ambulance visits. The predicted data differs slightly from the actual future data for H1, but for the other three hospitals the two are quite similar. The misfit for H1 arises because this region is very large, with many factors affecting its COVID-19 cases. Importantly, with more training data in the coming weeks even this error can be reduced further. Production is then scheduled based on the predicted data. Trials of smart manufacturing production were conducted in a demo setup, a scaled-down version of the main production facility directly proportional in volume to the full-scale production system. A small percentage of safety production is added to keep the production-demand gap as small as possible. Figure 8 is a graphical representation of the day-by-day production trial statistics based on the smart manufacturing predicted schedule.
Fig. 7. Proposed model predictions of daily COVID-19 data for (a) product A, (b) product B for the next 3 days across the four hospital regions
Fig. 8. Production stats based on scheduled smart manufacturing.
As seen in the figure, the predicted volumes of products A and B are well within the storage capacity available at the m-SME, and the actual demand volume falls just inside the predicted volume. The predicted volume is derived from the predicted COVID-19 cases and ambulance visits; to absorb any prediction error, production includes a 10% factor of safety to ensure there is no underproduction. The minor overproduction of both products is due to this factor of safety and is negligible. The solution as a whole addresses the gap in the literature, which offered no smart manufacturing solution for micro-SMEs.
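The scheduling rule described here, predicted demand plus a 10% safety factor, capped by storage, can be sketched as below. All volumes and the `schedule` helper are illustrative assumptions, not the m-SME's actual figures.

```python
# Day-ahead scheduling sketch: produce predicted demand plus a 10%
# safety factor, net of current inventory, never exceeding storage.
SAFETY = 1.10  # 10% factor of safety against underproduction

def schedule(predicted_demand, storage_capacity, on_hand):
    """Litres to produce per product so that inventory covers the
    safety-adjusted demand without exceeding storage capacity."""
    plan = {}
    for product, demand in predicted_demand.items():
        target = demand * SAFETY
        needed = max(0.0, target - on_hand.get(product, 0.0))
        room = storage_capacity[product] - on_hand.get(product, 0.0)
        plan[product] = round(min(needed, room), 1)
    return plan

plan = schedule(
    predicted_demand={"A": 400.0, "B": 250.0},
    storage_capacity={"A": 600.0, "B": 300.0},
    on_hand={"A": 120.0, "B": 0.0},
)
# A: 400*1.1 - 120 = 320.0; B: min(275.0, 300.0) = 275.0
```

Capping at the storage headroom is what keeps the predicted volume inside the available tanks, while the safety margin guards against underproduction.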
4 Conclusion

The following points conclude the work:
• This work presents a smart manufacturing solution for micro-SMEs that was absent from the archival literature.
• The theoretical contribution lies in the simple techniques used to predict demand and schedule production at a micro-SME facility for optimum production.
• The work still has limitations: it is applicable to a limited number of customers and only two products at a time.
• Future work would strengthen this solution to include more customers and products.

Acknowledgements. This project was funded by the EPSRC IAA scheme. The authors would like to acknowledge the support received from EPSRC, the University of Edinburgh, Edinburgh Innovations, and the micro-SME Aqualution Systems Ltd.
References
1. Lu, Y., Morris, K.C., Frechette, S.: Current standards landscape for smart manufacturing systems. National Institute of Standards and Technology, NISTIR 8107, 39 (2016)
2. Fogliatto, F.S., Da Silveira, G.J., Borenstein, D.: The mass customization decade: an updated review of the literature. Int. J. Prod. Econ. 138(1), 14–25 (2012)
3. Tseng, M.M., Jiao, R.J., Wang, C.: Design for mass personalization. CIRP Ann. 59(1), 175–178 (2010)
4. Kusiak, A.: Smart manufacturing. Int. J. Prod. Res. 56(1–2), 508–517 (2018)
5. Tiwari, S., Bahuguna, P.C., Srivastava, R.: Smart manufacturing and sustainability: a bibliometric analysis. Benchmarking: An International Journal (2022)
6. Hehenberger, P., Vogel-Heuser, B., Bradley, D., Eynard, B., Tomiyama, T., Achiche, S.: Design, modelling, simulation and integration of cyber physical systems: methods and applications. Comput. Ind. 82, 273–289 (2016)
7. Liu, X.F., Shahriar, M.R., Al Sunny, S.N., Leu, M.C., Hu, L.: Cyber-physical manufacturing cloud: architecture, virtualization, communication, and testbed. J. Manuf. Syst. 43, 352–364 (2017)
8. Iqbal, F., Malhotra, J., Jha, S., Semwal, T.: Introduction to cyber-physical systems and challenges faced due to the COVID-19 pandemic. In: Semwal, T., Iqbal, F. (eds.) Cyber-Physical Systems: Solutions to Pandemic Challenges, pp. 1–23. CRC Press, Boca Raton (2021). https://doi.org/10.1201/9781003186380-1
9. Malhotra, J., Iqbal, F., Sahu, A.K., Jha, S.: A cyber-physical system architecture for smart manufacturing. In: Shunmugam, M.S., Kanthababu, M. (eds.) Advances in Forming, Machining and Automation. LNMIE, pp. 637–647. Springer, Singapore (2019). https://doi.org/10.1007/978-981-32-9417-2_53
10. Semwal, T., Iqbal, F. (eds.): Cyber-Physical Systems: Solutions to Pandemic Challenges. CRC Press, Boca Raton, FL (2022)
11. Nguyen, T.H., Bundas, M., Son, T.C., Balduccini, M., Garwood, K.C., Griffor, E.R.: Specifying and reasoning about CPS through the lens of the NIST CPS framework. Theory Pract. Logic Program. 1–41 (2022)
CPPS-3D: A Methodology to Support Cyber Physical Production Systems Design, Development and Deployment Pedro F. Cunha(B)
, Dário Pelixo, and Rui Madeira
Inst. Politécnico de Setúbal - Escola Sup. de Tecnologia de Setúbal, Setúbal, Portugal [email protected]
Abstract. The Cyber-Physical Production Systems Design, Development and Deployment (CPPS-3D) methodology is a systematic approach to assessing necessities, identifying gaps, and then designing, developing and deploying solutions to fill those gaps. It aims to support an enterprise's evolution into the Industry 4.0 paradigms and technologies, promoting process improvements and enterprise competitiveness. The methodology considers the real context of the enterprise in terms of its level of organization, competencies and technology. It is a two-phase, sequentially stepped process that enables discussion, reflection or reasoning, decision-making and action-taking towards evolution. The first phase assesses an enterprise across its organizational, technological and human dimensions. The second phase establishes a sequence of tasks to successfully deploy solutions that fill the previously identified gaps. The methodology was applied at a Portuguese enterprise with the development of a new visual management system on the shopfloor, capable of remote communications. This solution promoted faster decision-making, improved data availability and fostered a more dynamic workplace with enhanced reactivity to unexpected problems. Keywords: Production systems · Cyber-physical system · Problem-solving
1 Introduction

At its full technological plenitude, Industry 4.0 (I4.0) will stand as an integrated, self-adaptable and self-configurable production process powered by state-of-the-art technologies and by big-data handling and analysis algorithms [1–3]. However, many research gaps remain towards that goal and several challenges must yet be addressed. The agile and dynamic environment envisaged with the I4.0 implementation will only be possible by improving and enhancing the capabilities of cyber-physical systems (CPS) [4] and through their correct integration or connection with other systems. By interlinking all CPS in the production system, a Cyber-Physical Production System (CPPS) is realized [5]. The realization of a CPPS advocates the need to look past the Technological dimension and into the Organizational and Human dimensions of I4.0, as enterprises seem to lack the proper skills and knowledge for the new age of digitalization. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 125–132, 2024. https://doi.org/10.1007/978-3-031-38241-3_15
The methodology proposed herein was conceived to support cyber-physical production systems design, development and deployment (CPPS-3D). Its fundamental premise is that implementing changes in the Technological dimension has to be accompanied by changes in the Human and Organizational dimensions. This paper presents the background knowledge that supports the design and proposal of the CPPS-3D methodology. It also presents an industrial application of this methodology, realized within the PRODUTECH SIF "Soluções para a Indústria de Futuro" Project (no. 24541) – "Base Technologies for Cyber-Physical Production Systems (CPPS) SmartObjects", partly funded by PORTUGAL 2020.
2 Background Knowledge

The enterprise's evolution into the Industry 4.0 paradigms and technologies means digitalization and integration of assets into the production system. Effective communication networks will enable better data retrieval and information availability. Processing this data through analytics and algorithms will generate meaningful information able to provide a deeper understanding of operating conditions, faults and failures, and useful insights for management. The challenges are security breaches, data leaks, data privacy, the need for investment in new technology, and the retraining of workers into new skills, most notably those with repetitive and routine work [6]. Therefore, the enterprise's evolution has to consider three dimensions simultaneously, as presented below.

2.1 Technological Dimension

I4.0 is still in its early stages and, since it is mostly discussed at the digital level of production, its fundamental production systems can be defined as in [7]. Examining the current technological status of fundamental production systems, the gap between current production systems and the plenitude of I4.0 is estimated in [4] based on 10 general characteristics (digitalization, communication, standardization, flexibility, customization, real-time responsibility, productive maintenance, early awareness, self-optimization and self-configuration). CPPS are believed to be the key to closing these gaps and unlocking the full potential benefits of the I4.0 paradigm [1]. A CPPS comprises smart machines, storage systems and production facilities able to autonomously exchange information, trigger actions and control each other independently. CPPS facilitate improvements to the industrial processes involved in production, engineering, supply chain management, material usage and life cycle management [8].
To fully integrate physical and cyber systems, the traditional automation pyramid needs to evolve into a new form of industrial architecture that realizes a new decentralized paradigm [3]. CPS-based automation keeps the typical control and field levels of the pyramid as the base of the decentralized paradigm. Programmable Logic Controllers (PLCs) control the critical field processes, ensuring the highest performance for real-time control loops. This lower level is responsible for advanced connectivity, ensuring real-time data acquisition from the physical world and information feedback from the cyber space. At higher levels of the hierarchy, the pyramid ceases to exist, transforming into
a decentralized way of functioning, a defining characteristic of CPPS. These incorporate intelligent data management, analytics and decentralized computational capabilities, building up the cyber space [3]. To evolve into CPPS, individual CPS must be developed first. CPS dissemination will greatly contribute to the ongoing evolution of I4.0 by promoting the proliferation of advanced technologies as well as innovative information and communication systems.

2.2 Human Dimension

As CPPS evolve and substitute humans in standard and routine decision situations, humans will be reintegrated into the CPPS to oversee the tasks of understanding, interpreting, evaluating, verifying and deciding on the validity of all generated information. For this, the retraining and requalification of human workers at all levels is critical for taking on new roles. Based on the 5C architecture, [9] proposes the complementary implementation of a unified architecture between human, product, process and production environment. This unified vision aims for deeper control and visibility of manual processes through all generated data and resulting information, allowing a deeper understanding of the dynamics of humans in the production loop. The activities deemed difficult to implement in the cyber world are the very ones humans excel at, and vice versa. Human variability, human privacy, ethical data usage and storage, and acceptance of the emotional requirements of instrumenting a human are sensitive issues that must be considered along the entire length of the pyramid.
Building on this by means of academic and corporate literature reviews, [10] identified the framework consisting of eighteen management challenges aggregated in six interrelated clusters of managerial challenges of Industry 4.0: (i) Analysis and strategy, (ii) Planning and implementation, (iii) Cooperation and networks, (iv) Business models, (v) Human resources and (vi) Change and leadership. Deriving from this it is clear that enterprises need to begin to establish strategic transformation paths to better pave their ways into I4.0. They not only need to seize the advantages of new technological advancements but also develop the organizational structure to support such advancements. This implies establishing means to build knowledge, to seek inter-organizational collaboration opportunities and to redefine their business model if necessary. For all this to happen, leaders must promote these changes, creating acceptance for change and counteracting organizational inertia, while designing the workplace of the future, through employee qualification and motivation. A requirement for changes to occur effectively is the need to design and develop good solutions. In this sense, at an organizational level, it is desirable the enterprise to have a methodology that supports the design and development of new products/solutions. The Axiomatic Design (AD) is a methodology that helps to structure and understand design problems, guiding the synthesis and analysis of requirements, solutions and adequate design processes. AD provides a framework from which metrics of design alternatives can be quantified. Throughout this process constraints and project requirements are
always present and impose limitations on all possible solutions. To deal with this and decompose the general requirements, a back-and-forth approach between all of these is required [11]. Leadership is undoubtedly an important factor in driving the transformation of enterprises, and for this reason the existence of a Lean culture can be a competitive advantage.
3 Methodology

The Cyber-Physical Production Systems Design, Development and Deployment (CPPS-3D) methodology consists of two phases, Assessment and Project Development. These are two distinct work phases with different objectives; by virtue of this, each phase can be carried out by a different team, which translates into better human resources management. Each phase is built upon sequential steps: Assessment contains the Status, Analysis and Vision steps, while Project Development includes the Design and Deployment steps, as presented in Fig. 1.
Fig. 1. CPPS-3D methodology
For each step there are Focus Points (FP) that target specific topics and act as guidelines for directing efforts. For each FP there are bullet points named Elements of Interest (ELI) that act as enablers for the discussion, reflection/reasoning, decision-making and action-taking needed to address the relevant issues and progress further in the methodology. The sequence of phases is inspired by circular models that promote continuous improvement, such as PDCA and VDI/VDE 3695, which promote a continuous sequence of phases, each with its own particular purpose.

3.1 Assessment Phase

CPPS-3D starts with the Assessment phase, represented in Fig. 2. This phase targets several key issues in the enterprise so as to determine its current status and, upon analysis, its future goals.
Fig. 2. CPPS-3D Assessment phase
Additionally, at this phase CPPS-3D proposes that six key areas in the enterprise be specifically examined, taking into account the three dimensions of I4.0 and defining their relationships as per Fig. 3. These areas promote discussion between the relevant parties in order to evaluate them: each area is rated from 1 to 5, with the rating depending on its operational and integration capabilities. The ratings are adaptations of all dimensions to the technological concepts of the 5C architecture. This enables a radar plot to serve as a visual aid in determining the current status of the enterprise across the discussed dimensions, as well as in perceiving how much an improvement can accomplish.
Fig. 3. CPPS-3D dimensions and relationships.
Organizational Dimension - Related to the functions, responsibilities and principles, as well as the methods and tools, implemented to run and manage an enterprise. It concerns how production is structurally organized and supported by specialized management software, such as Enterprise Resource Planning (ERP), customer involvement in production decisions and effective visual management of processes. Proposed key areas to look at are "Organization in production and Support Functions", "Customer focus and employee awareness" and "Workplace organization and visual management". Sustainability - As the enterprise evolves, the organizational changes must accompany the technological changes. New technologies bring forth changes, and these must be evaluated and controlled to ensure that they continue to add productivity, quality and safety benefits long after their implementation. Proposed key areas to look at are "Audit and Control Processes" and "Quality and Safety".
Technological Dimension - This determines how technologically advanced the enterprise is and pertains to how technologies improve worker productivity, monitor processes and promote production flexibility, as well as support the system's vertical and horizontal integration throughout the enterprise. Proposed key areas to look at are "Standard Work", "Reliability and Robustness", "Processes and effective resources" and "Integration and automation". Continuous Improvement - As automation levels rise to deal with repetitive tasks, the need to rely on human knowledge and creativity is further reinforced, so humans are vital to the way processes can be improved through their ingenuity. Proposed key areas to look at are "Problem-solving", "Performance improvement" and "Management of improvement ideas". Human Dimension - Relating directly to the need to redefine human roles in the I4.0 environment, and based on the concepts of Cyber-Human Systems, this dimension deals with the integration of both workers and managers into the enterprise. Proposed key areas to look at are "Versatility/Backup Capacity" and "Role of leadership". Communication - For this topic it is important to have the means to reliably and accurately evaluate how processes perform. In addition, it is vital to understand the processes of information availability and sharing, including with suppliers and customers, in order to promote better operations management. Proposed key areas to look at are "Performance evaluation" and "Communication for the management of operations".
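The rating step of the Assessment phase can be sketched as a simple gap ranking over the six areas. The scores below are invented for illustration; in practice they would come from the structured discussions the methodology prescribes, and the radar plot would be drawn from the same dictionary.

```python
# Sketch of the Assessment-phase rating: each of the six key areas is
# scored 1-5 (5C-inspired maturity levels) and gaps to the target level
# are ranked to set improvement priorities. Scores are illustrative.
TARGET = 5

def rank_gaps(scores):
    """Return (area, gap) pairs ordered by largest gap to the target."""
    gaps = {area: TARGET - level for area, level in scores.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

assessment = {
    "Organizational dimension": 3,
    "Sustainability": 4,
    "Technological dimension": 2,
    "Continuous improvement": 3,
    "Human dimension": 4,
    "Communication": 2,
}
priorities = rank_gaps(assessment)
# Largest gaps here: Technological dimension and Communication (gap 3)
```

Ranking the gaps rather than the raw scores mirrors how the methodology turns the radar plot into a prioritised list of projects for the Project Development phase.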
3.2 Project Development Phase

The previous phase produces a clear set of decisions about which aspects to improve and their priorities. Concrete projects can thus be started and carried through to deployment, hence the second phase of the CPPS-3D methodology, presented in Fig. 4. As in the Assessment phase, Focus Points (FP) and Elements of Interest (ELI) are defined for each step of this phase.
Fig. 4. CPPS-3D Project development phase.
3.3 Case Study An application of the CPPS-3D methodology was explored by means of a case study at a leading Portuguese manufacturer of metal-based solutions and products. In one
of its robotic welding sections, production data went to the ERP while very little information was available in real time on the shopfloor. Through application of the CPPS-3D methodology a new visual management system was designed, developed and implemented; the proposed solution is presented in Fig. 5.
Fig. 5. Proposed system configuration.
The proposed final solution is based on three key components: (i) SmartObject: a microprocessor-based physical interface with its own processing and communication capabilities, designed for data acquisition from the current machines and communication with the ERP, with expansion possibilities; (ii) HMImodel [12]: a software framework designed to build an HMI layer to be used as an application on mobile devices; (iii) Beacon: a Bluetooth Low Energy (BLE) device that broadcasts its identifier to nearby portable devices, allowing those devices to act when near the beacon. The beacon was designed separately from the SmartObject for better project team management, and BLE was preferred as it consumes less energy than standard Bluetooth. With this configuration in mind, through product and process analysis each identified Functional Requirement (FR) was mapped to the necessary Design Parameters (DP). With satisfactory results from initial testing, the chosen architecture was detailed through the development of the SmartObject and the HMImodel. The customer's final definition of users, access levels and KPIs, together with making its APIs available, allowed the work to be finalized. The application of the methodology to this concrete case was successful, and once the new visual management system was fully operational the expected benefits were observed, e.g. faster decision-making, better production management, more accurate and reliable data availability, as well as the promotion of a more dynamic workplace with improved reactivity to problems.
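The beacon-to-HMI interaction described above can be sketched as follows. The beacon identifiers, cell names, RSSI threshold and the `view_for_beacon` helper are all hypothetical; the real HMImodel application would map actual BLE advertisement identifiers to its own views.

```python
# Hypothetical sketch of the beacon logic: when the mobile HMI app
# detects a known BLE beacon identifier with sufficient signal strength,
# it opens the view for the corresponding welding cell.
BEACON_MAP = {
    "beacon-001": "robotic-welding-cell-1",
    "beacon-002": "robotic-welding-cell-2",
}

def view_for_beacon(beacon_id, rssi, threshold=-70):
    """Return the HMI view to open, or None if too far away or unknown.
    RSSI is in dBm; weaker (more negative) means the operator is farther."""
    if rssi < threshold:
        return None
    return BEACON_MAP.get(beacon_id)

view = view_for_beacon("beacon-001", rssi=-55)
# -> "robotic-welding-cell-1"
```

Filtering on signal strength is what makes the app act only "near the beacon", so an operator walking between cells sees the context of the cell they are actually standing at.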
4 Discussion and Conclusion

New technological and computational capabilities enable CPPS. The integration of physical processes into communication and computational networks allows data collection, sharing and analysis to become an indispensable support to decision-making. Furthermore, standard and routine tasks are relegated to automation, and humans are empowered where they can excel: problem-solving through their innate creativity. Nevertheless, the transition into I4.0 is not an easy path. It involves looking simultaneously at the three dimensions to perceive its benefits and requisites.
P. F. Cunha et al.
The CPPS-3D methodology was developed and has proven to be a relevant tool to guide enterprises into I4.0. It aims to help enterprises and their leaders evaluate their capabilities, find gaps, develop improvement actions and implement solutions. CPPS-3D is a framework that facilitates the design, development and deployment of key I4.0 improvements. The Assessment phase of the CPPS-3D methodology yields an exhaustive report of the enterprise and a general understanding of how it functions. It can be very time-consuming if the enterprise is not internally well organized in terms of processes and information-sharing. This Assessment phase is complemented visually by outcomes such as an evaluation table and a radar plot across six key areas. The evaluation adopts 5C architecture concepts to rate each category, and in most cases these concepts are open to interpretation. The Project development phase enables designing, developing and deploying concrete solutions. It guides project teams through the different stages, helping to define and organize tasks and responsibilities. The application of CPPS-3D in a Portuguese enterprise was important to validate the methodology, being successful in the assessment of the production section, its analysis and the identification of gaps to drive the design and development of a solution.
References

1. Rikalovic, A., Suzic, N., Bajic, B., Piuri, V.: Industry 4.0 implementation challenges and opportunities: a technological perspective. IEEE Syst. J. 16(2), 2797–2810 (2022)
2. Lu, Y.: Industry 4.0: a survey on technologies, applications and open research issues. J. Ind. Inf. Integr. 6, 1–10 (2017)
3. Monostori, L., et al.: Cyber-physical systems in manufacturing. CIRP Ann. 65(2), 621–641 (2016)
4. Qin, J., Liu, Y., Grosvenor, R.: A categorical framework of manufacturing for industry 4.0 and beyond. Procedia CIRP 52, 173–178 (2016)
5. Thoben, K.D., Wiesner, S.A., Wuest, T.: "Industrie 4.0" and smart manufacturing – a review of research issues and application examples. Int. J. Autom. Technol. 11(1), 4–16 (2017)
6. Sung, T.K.: Industry 4.0: a Korea perspective. Technol. Forecast. Soc. Change 132, 40–45 (2018)
7. Groover, M.P.: Automation, Production Systems, and Computer-Integrated Manufacturing, 4th edn. Pearson, pp. 353–574 (2015)
8. Kagermann, H.: Industrie 4.0 – what can the UK learn from Germany's manufacturing strategy? R. Acad. Eng. https://www.raeng.org.uk/RAE/media/Events/Programmes/20140204-industrie4-Henning-Kagermann.pdf. Accessed 14 Jan 2020
9. Krugh, M., Mears, L.: A complementary cyber-human systems framework for industry 4.0 cyber-physical systems. Manuf. Lett. 15, 89–92 (2018)
10. Schneider, P.: Managerial challenges of industry 4.0: an empirically backed research agenda for a nascent field. Rev. Manag. Sci. 12(3) (2018). https://doi.org/10.1007/s11846-018-0283-2
11. Delaram, J., Valilai, O.F.: An architectural view to computer integrated manufacturing systems based on axiomatic design theory. Comput. Ind. 100, 96–114 (2018)
12. Leal, P., Madeira, R.N., Romão, T.: Model-driven framework for human machine interaction design in industry 4.0. In: Lamas, D., Loizides, F., Nacke, L., Petrie, H., Winckler, M., Zaphiris, P. (eds.) INTERACT 2019. LNCS, vol. 11749, pp. 644–648. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29390-1_54
Reinforcement Learning-Based Model for Optimization of Cloud Manufacturing-Based Multi-Objective Resource Scheduling: A Review

Rasoul Rashidifar1, F. Frank Chen1(B), Mohammad Shahin1, Ali Hosseinzadeh1, Hamed Bouzary1, and Awni Shahin2
1 The University of Texas at San Antonio, San Antonio, TX 78249, USA
[email protected] 2 Mu’tah University, Karak, Jordan
Abstract. The cloud-based resource scheduling problem is a typical combinatorial optimization problem: the task allocation problem concerns how to use resources most efficiently to complete a limited set of tasks in a cloud manufacturing (CMfg) system. Due to its dynamic nature and real-time requirements, CMfg faces great challenges in optimizing resource scheduling. Since the development of machine learning, a variety of decision-making problems have been solved using Reinforcement Learning (RL). This review paper aims to discuss aspects of RL-based algorithms for the optimization of resource scheduling in CMfg by investigating the literature to date. To this end, first, multi-objective resource scheduling is defined and elaborated. Subsequently, the aspects of RL algorithms are presented through their fundamental elements to optimize the scheduling model. Finally, the findings of the review are discussed and some suggestions for potential future research to further consolidate this field are enumerated.

Keywords: Resource Scheduling · Reinforcement Learning · Cloud Manufacturing · Artificial Intelligence · Machine Learning
1 Introduction

Cloud manufacturing is a service-oriented paradigm that provides manufacturing resources and capabilities on demand and on a pay-as-you-go basis over the Internet [1]. In a central cloud management system, a mechanism is required to handle various tasks such as decomposition, discovery, matching, composition, billing, and resource scheduling [2]. Resource scheduling has become the most popular among them because it addresses the fundamental issue at the center of this paradigm, namely how to share manufacturing resources and capabilities across geographically dispersed edges [2]. Two perspectives, that of the cloud client and that of the cloud provider, play a significant role in studies of scheduling: Quality of Service (QoS) and low cost are requested on the cloud client side, while using optimal resources and making profit
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 133–140, 2024. https://doi.org/10.1007/978-3-031-38241-3_16
are brought under one roof by cloud providers [3]. As data becomes more readily available, simulation-based methods [5] and optimization algorithms [6] are able to schedule manufacturing jobs more efficiently by incorporating historical data and empirical rules [7]. Simulation methods, however, have limitations in addressing the uncertainties inherent in dynamic scheduling in real time [8]. Since cloud-based resource scheduling is an NP-hard problem, various metaheuristic algorithms have attracted the most attention; however, these algorithms are ineffective for three main reasons [9]. Firstly, scheduling problems in cloud manufacturing are complex due to the large solution space and the dynamic effect of QoS attributes on the final composition, which depends on the execution path of complex tasks. Secondly, most well-established and recently developed algorithms are only suitable for continuous optimization problems, requiring modifications to adapt to combinatorial problems like scheduling. Lastly, some evolutionary and swarm-based algorithms such as GA and PSO can easily become trapped in local optima due to their ineffective evolutionary processes [9]. Artificial Intelligence (AI) is increasingly applied to dynamic manufacturing scheduling problems [8], and Machine Learning (ML) techniques, as a subfield of AI, are considered the main driver behind today's smart manufacturing paradigms [10, 11]. This review paper explores the use of Reinforcement Learning (RL) for the optimization of a multi-objective scheduling model. A systematic literature review is provided in [4], in which 67 papers published between 2011 and 2021 were chosen. These papers focus on the optimization of resource scheduling in the cloud manufacturing environment. In this article, the existing literature is analyzed to identify the different aspects of the RL algorithm in a resource scheduling problem. The article is organized into the following sections.
Section 2 provides the multi-objective resource scheduling model in cloud manufacturing systems, indicating the mathematical model and its objective functions. In Sect. 3, machine learning approaches to resource scheduling models are described, addressing the existing literature on the optimization of resource scheduling problems. In this section, the different aspects of the RL-based resource scheduling model are presented. In Sect. 4 the findings of the review are discussed. Section 5 concludes the review and suggests some future research lines.
2 Multi-objective Resource Scheduling Model

A cloud manufacturing system consists of M enterprises that can provide certain operations to accomplish the tasks and subtasks [12]. Task requirements are submitted on the cloud manufacturing platform by the clients, and each one is then decomposed into a certain number of subtasks. The subtasks may need processes that could belong to one or more enterprises [13]. In cloud manufacturing environments, resource scheduling is a decision-making process for allocating resources to tasks over a given period of time; it aims to optimize one or more objectives [4]. In resource scheduling, a mathematical model with objective functions and constraints is provided. The scheduling provides the required service resources to perform some processes on the subtasks [12]. The aim is to define and optimize objective functions and constraints in scheduling problems.

$$\min \; (a_1 T + a_2 C - a_3 R)\, x_{ij} \quad (1)$$

$$\text{s.t.} \quad a_1, a_2, a_3 \ge 0, \quad T \le T_{max}, \quad C \le C_{max}, \quad R \ge R_{min}$$

$$x_{ij}(t_x) = \begin{cases} 1, & \text{if task } i \text{ is completed by service resource } j \text{ at time } t_x \\ 0, & \text{otherwise} \end{cases}$$
In this study three main objective functions, namely Time (T), Cost (C) and Reliability (R), are considered [13, 14] and presented in Eq. 1, in which a1, a2, a3 are the weight values of the objective functions and xij is a decision variable. In QoS-aware service optimization, the optimal path to complete a task is determined under customer requirements that bound the QoS (such as time, cost, reliability, etc.) [14]. Therefore, total completion time, total cost and reliability ratio are used as metrics to evaluate system performance under scheduling models. Optimization of resources is the process of selecting, assigning, and managing resources for workloads or tasks based on their performance, compliance, and cost in real time against the best-fit infrastructure [4]. ML algorithms are proposed to achieve this goal.
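To make Eq. 1 concrete, the following sketch (with illustrative weights, QoS bounds, and candidate values, not taken from any cited study) evaluates the weighted objective a1·T + a2·C − a3·R over feasible task-to-resource assignments and picks the minimizer:

```python
# Illustrative evaluation of the weighted objective in Eq. 1.
# Weights, bounds, and candidate (T, C, R) values are made up for demonstration.

A1, A2, A3 = 0.5, 0.3, 0.2               # objective weights (all >= 0)
T_MAX, C_MAX, R_MIN = 10.0, 100.0, 0.90  # QoS bounds from the constraints

# Candidate assignments of one task to service resources j, each with
# estimated completion time T, cost C, and reliability R.
candidates = {
    "resource-1": {"T": 8.0, "C": 90.0, "R": 0.95},
    "resource-2": {"T": 6.0, "C": 120.0, "R": 0.99},  # violates C <= C_MAX
    "resource-3": {"T": 9.0, "C": 70.0, "R": 0.92},
}

def feasible(q):
    return q["T"] <= T_MAX and q["C"] <= C_MAX and q["R"] >= R_MIN

def objective(q):
    return A1 * q["T"] + A2 * q["C"] - A3 * q["R"]

best = min((j for j in candidates if feasible(candidates[j])),
           key=lambda j: objective(candidates[j]))
print(best)  # -> resource-3
```

Here resource-2 is excluded by the cost constraint, and resource-3 wins with an objective value of about 25.32 versus about 30.81 for resource-1; reliability enters with a negative sign because higher reliability is desirable.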
3 Machine Learning Approach to Resource Scheduling

Currently, scholars have been applying machine learning algorithms to solve scheduling problems in cloud manufacturing environments. Chen et al. [15] define a mathematical model for resource allocation in real-time scheduling in the cloud manufacturing environment, in which the goal is to minimize the cost and the makespan and to maximize service satisfaction. In that paper an Artificial Neural Network (ANN) is built to predict the task completion status. Morariu et al. [16] put forward an integrated control solution in which big data and machine learning algorithms are deployed to predict operational schedules, resource allocation and maintenance in a large-scale manufacturing system, using Long Short-Term Memory neural networks and deep learning in real time. According to the literature review, RL is the most important algorithm for resource scheduling optimization among machine learning methods.

3.1 RL-Based Algorithm for Resource Scheduling Problem

Supervised and unsupervised machine learning are not suitable for resource scheduling models: supervised learning learns from a training set of labeled examples provided by a knowledgeable external supervisor, while unsupervised learning is typically about finding structure hidden in collections of unlabeled data [17]. Therefore, RL is the type of ML most commonly applied to resource scheduling problems. This algorithm is usually combined with Deep Learning (DL) methods to improve the optimization algorithms applied to the scheduling of cloud manufacturing. RL, as a dynamic learning method, uses trial and error to make decisions, while DL utilizes existing data sets to make predictions on unseen data. Chen et al. [18] investigated the challenges of scheduling multiple projects in cloud manufacturing systems.
To address the issue of unevenly distributed services and uncertain project arrivals, they introduced an assigning policy based on reinforcement learning
(RLAP). RLAP was implemented using two approaches, NSGA-II and Q-learning, and the researchers compared the outcomes of both methods to obtain a non-dominated set of solutions. A novel double-faced algorithm is introduced by Fang et al. [19] that utilizes both Q-learning and an enhanced Gale-Shapley algorithm within reinforcement learning to achieve improved results. The algorithm works by first applying the Gale-Shapley algorithm to establish a stable pairing between manufacturers and distributors; the Q-learning algorithm is then employed on this initial set of results to further optimize the outcome. The existence of multiple resources in the cloud manufacturing environment significantly increases the complexity of the system. To address this problem, Zhu et al. [20] put forward a deep reinforcement learning (DRL) approach in which state transitions and rewards based on the Markov property are assumed; this algorithm transforms the scheduling problem with multiple resources into a learning target so as to learn faster and find better solutions. Dong et al. [21] proposed a dynamic task scheduling algorithm based on deep reinforcement learning (DRL) to minimize task execution time. This algorithm applies the Deep Q-Network (DQN) method to the complexity of the scheduling problem in the cloud manufacturing system. A new DRL approach based on DQN is proposed in [22], in which a set of known rules is considered for each machine's queue and, based on these rules, the best machine is chosen automatically. The novelty of this approach lies in its reward function, state, and action space. Yu et al. [23] consider a multi-agent task scheduling problem in the Human-Robot Collaboration (HRC) environment. A DQN-based multi-agent reinforcement learning (MARL) algorithm is proposed in which a Markov game model is considered.
In the Markov game model, the task structure and the agent status are considered as the state input, and the completion time is considered as the reward.

RL-Based Resource Scheduling Model. Reinforcement learning (RL) is a form of machine learning that focuses on determining how to act in various situations to optimize a numeric reward signal [17]. RL-based algorithms are characterized by two essential features, trial-and-error search and delayed reward, and typically involve five key elements: the agent, the environment, the reward, the policy, and the action [24]. In a typical RL-based model of a manufacturing system, data collection and state estimation depend on sensors. AI schedulers interact with the environment to learn and make decisions during resource and task transitions [25]. Figure 1 shows the interaction between a scheduler and the manufacturing system in the RL model. Sensors monitor the system's changes and feed data to analytical modules for estimation of the new condition, as well as rewards to the scheduler once an action has been taken. AI schedulers continue to interact with the system to learn and improve their decision-making abilities for production scheduling [8]. The algorithm used by the agent to decide its actions is the policy, which can be model-based or model-free [26]. A model-based policy seeks to understand the environment by building a model of it from its interactions, and always tries to perform the action that will yield the maximum reward [26]. Model-free policies, on the other hand, use algorithms such as Policy Gradient, Q-Learning, etc. to learn the consequences of their actions from experience. In consequence, the two classes can be used for different purposes: for example, when a static environment is present and we are concerned with getting the job done efficiently, a model-based approach would
Fig. 1. The interaction between a scheduler and manufacturing system in RL model [8]
be ideal, like the robotic arm on an assembly line. The model-free approach, however, is best suited for real-world applications and dynamic systems, such as cloud-based scheduling [24]. Table 1 shows the relationship between a resource scheduling model and the main elements of the RL model.

Table 1. Relationship between the RL model and scheduling

RL Model      Resource Scheduling
Agent         Scheduler
Environment   A manufacturing system with resources and tasks
State         Dynamic attributes of resources and tasks
Action        Scheduling
Rewards       Optimization objectives
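The mapping in Table 1 can be expressed as a minimal environment interface for an RL scheduler. The following is an illustrative sketch with made-up dynamics, not a model from the reviewed papers: the state exposes dynamic attributes of resources and tasks, an action assigns a task to a resource, and the reward reflects the optimization objective.

```python
class SchedulingEnv:
    """Toy manufacturing environment: assign each arriving task to a resource.

    State  - dynamic attributes of resources and tasks (queue backlogs here).
    Action - index of the resource the next task is scheduled on.
    Reward - negative completion time, so maximizing reward minimizes time.
    """

    def __init__(self, n_resources=3, speeds=(1.0, 0.8, 0.5)):
        self.n_resources = n_resources
        self.speeds = speeds               # processing speed of each resource
        self.queues = [0.0] * n_resources  # pending work (finish time) per resource

    def state(self):
        return tuple(self.queues)

    def step(self, action, task_size=1.0):
        # Completion time = waiting time in the queue + processing time.
        finish = self.queues[action] + task_size / self.speeds[action]
        self.queues[action] = finish
        return self.state(), -finish       # reward: shorter completion is better

env = SchedulingEnv()
# Greedy scheduler: always pick the resource with the earliest finish time.
for _ in range(5):
    a = min(range(env.n_resources),
            key=lambda j: env.queues[j] + 1.0 / env.speeds[j])
    s, r = env.step(a)
```

A learning agent would replace the greedy rule with a policy updated from the observed rewards; the interface (state, action, reward) is what Table 1 prescribes.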
Model-Free RL Algorithm for Resource Scheduling. As mentioned above, the RL-based resource scheduling problem is addressed with model-free RL algorithms, which can be further divided into value-based RL and policy-based RL [24]. In value-based RL, the optimal strategy is built up by selecting the action with the maximum state-action value. The representative value-based RL algorithms for production scheduling optimization include State-Action-Reward-State-Action (SARSA), Q-learning, and Deep Q-Network (DQN) [8]. SARSA is an on-policy Temporal Difference (TD) algorithm. As part of the iterative learning process, the agent uses the ε-greedy method to select an action in the state $s_t$ and obtain a reward $r_{t+1}$. Different from SARSA, Q-learning is an off-policy TD algorithm. The value function $q(s_t, a_t)$ is updated for SARSA and Q-learning by Eqs. 2 and 3, respectively, where α is the learning rate and γ is the discount factor [24]:

$$q(s_t, a_t) = q(s_t, a_t) + \alpha \times \big(r_{t+1} + \gamma \times q(s_{t+1}, a_{t+1}) - q(s_t, a_t)\big) \quad (2)$$

$$q(s_t, a_t) = q(s_t, a_t) + \alpha \times \big(r_{t+1} + \gamma \times \max_{a} q(s_{t+1}, a) - q(s_t, a_t)\big) \quad (3)$$
Both SARSA and Q-learning adopt a table to record the state-action value, but the table is no longer applicable when the scale of the state space or the action space is
too large. Therefore, the deep Q-learning network has been proposed, integrating Q-learning with a deep neural network to approximate the value function [27]. Different from value-based RL, policy-based RL does not consider the value function and directly searches for the best policy. Moreover, policy-based RL usually adopts a neural network to fit the policy function. Typical algorithms include REINFORCE, PPO, and Trust Region Policy Optimization (TRPO).

Reward in RL-Based Algorithm. Resource scheduling focuses on improving the performance of a smart factory from various aspects, such as efficiency (e.g., makespan, tardiness), cost (e.g., energy consumption, work order urgency levels), and other metrics (e.g., workload balance, customer satisfaction). A multi-objective formulation can not only meet customer satisfaction and save energy for higher profits and cost reduction, but also balance the workload to reduce machine failures.
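Equations 2 and 3 differ only in the bootstrap term, which a tabular implementation makes explicit. The following is a generic sketch on a toy state space, not tied to any reviewed scheduling model:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor
q = defaultdict(float)    # state-action value table q[(s, a)], default 0.0
ACTIONS = [0, 1]          # e.g., two candidate resources for the next task

def sarsa_update(s, a, r, s_next, a_next):
    # Eq. 2 (on-policy): bootstrap with the action actually selected next.
    q[(s, a)] += ALPHA * (r + GAMMA * q[(s_next, a_next)] - q[(s, a)])

def q_learning_update(s, a, r, s_next):
    # Eq. 3 (off-policy): bootstrap with the greedy (maximum) next value.
    best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

# One transition: in state "s0", action 0 yields reward -1.0 and state "s1".
q_learning_update("s0", 0, -1.0, "s1")
print(q[("s0", 0)])  # -> -0.1 (all next-state values are still zero)
```

A composite, multi-objective reward (e.g., a weighted sum of negative time, negative cost, and reliability, as in Eq. 1) can be plugged in as `r` without changing the update rules.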
4 Discussion

In cloud manufacturing environments, optimization algorithms are used to produce the most efficient schedules, since they affect process efficiency and optimization accuracy. As mentioned in Sect. 1, Rashidifar et al. [4] reviewed 67 papers published between 2011 and 2022 in this field. The statistical analysis of the literature in that study shows that more than 80% of the papers consider a multi-objective (time, cost, reliability, etc.) scheduling model [4]. The second point that stands out in the reviewed papers is the high rate (45%) of use of metaheuristic algorithms for solving scheduling problems in CMfg. Only 16% of the reviewed articles used machine learning algorithms [4]. Figure 2 displays an overview of each optimization algorithm's frequency of use.
Fig. 2. Distribution of optimization algorithms in reviewed articles [4]: metaheuristic algorithms 45%, machine learning algorithms 16%, game theory-based algorithms 12%, other methods 27%.
The development of artificial intelligence has led to breakthroughs in many combinatorial optimization problems, as well as new ways to optimize resource scheduling [24]. Currently, model-free algorithms are mostly used to optimize scheduling problems, while policy-based RL algorithms, rarely used for production scheduling, can search for an optimal policy and generate a schedule end-to-end. The learning performance of the rewards is evaluated with different distributions based on the defined objectives, such as time, cost, reliability, etc.
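Policy-based RL updates action preferences directly rather than through a value table. The following minimal REINFORCE sketch (a generic illustration with a toy two-resource reward, not drawn from the reviewed papers) nudges a softmax policy toward the resource that yields higher returns:

```python
import math
import random

# Minimal REINFORCE sketch: a softmax policy over two resources,
# updated from sampled returns. Purely illustrative toy problem.

ALPHA = 0.1          # learning rate
theta = [0.0, 0.0]   # one preference per action (resource)

def policy():
    z = [math.exp(t) for t in theta]
    total = sum(z)
    return [p / total for p in z]

def reinforce_step(reward_fn):
    probs = policy()
    a = random.choices([0, 1], weights=probs)[0]  # sample an action
    g = reward_fn(a)                              # return of this episode
    # gradient of log pi(a): indicator(i == a) - probs[i]
    for i in (0, 1):
        theta[i] += ALPHA * g * ((1.0 if i == a else 0.0) - probs[i])

random.seed(0)
# Resource 1 always yields the higher return, so the policy learns to prefer it.
for _ in range(500):
    reinforce_step(lambda a: 1.0 if a == 1 else -1.0)
```

After training, the policy assigns most of its probability mass to resource 1; in a scheduling setting the sampled actions themselves form the schedule, which is what "generating a schedule end-to-end" refers to.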
5 Conclusion and Future Research Line

Resource scheduling is the core of the cloud manufacturing system and has attracted much attention. Cloud-based resource scheduling has been a challenging aspect of this paradigm due to the real-time, large-scale and complex nature of cloud-based systems. In this paper, Reinforcement Learning (RL) for resource scheduling is reviewed to develop a framework and fundamental concepts for RL-based resource scheduling and to investigate different aspects of the RL optimization algorithm. The findings from this study support the idea that scheduling in the cloud environment must be tackled from a multi-objective perspective, considering both the customers' and the providers' viewpoints. Moreover, only 16% of the reviewed research highlights machine learning algorithms as effective tools for gaining insight and making scheduling predictions. This shows that further work is needed on using ML algorithms to optimize scheduling problems. Optimization of resource scheduling problems is mostly carried out by model-free RL algorithms. Thus, it is of practical significance to explore RL algorithms for optimizing multi-objective problems. While learning in an RL-based model, the AI schedulers interact with more and more tasks and subtasks, increasing the rewards per work order. The RL-based scheduling model's performance is evaluated considering the different distributions of rewards, such as time, cost, reliability, etc. To develop a full picture of resource scheduling, additional studies will be needed. As mentioned above, ML algorithms are considered in only 16% of the reviewed papers. Therefore, there is abundant room for further work on using ML and RL algorithms to achieve optimization of scheduling problems.
Furthermore, future investigations might address data imbalance problems when solving resource scheduling in big-data CMfg systems, given the immense increase in the scale of the cloud manufacturing platform.
References

1. Yang, C., Peng, T., Lan, S., Shen, W., Wang, L.: Towards IoT-enabled dynamic service optimal selection in multiple manufacturing clouds. J. Manuf. Syst. 56, 213–226 (2020)
2. Rashidifar, R., Chen, F.F., Bouzary, H., Shahin, M.: A mathematical model for cloud-based scheduling using heavy traffic limit theorem in queuing process. In: Kim, K.Y., Monplaisir, L., Rickli, J. (eds.) FAIM 2022. LNME, pp. 197–206. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-18326-3_20
3. Bittencourt, L.F., Goldman, A., Madeira, E.R., da Fonseca, N.L., Sakellariou, R.: Scheduling in distributed systems: a cloud computing perspective. Comput. Sci. Rev. 30, 31–54 (2018)
4. Rashidifar, R., Bouzary, H., Chen, F.F.: Resource scheduling in cloud-based manufacturing system: a comprehensive survey. Int. J. Adv. Manuf. Technol. 122(11), 4201–4219 (2022)
5. Ebrahimi, N.: Modeling, simulation and control of a robotic arm, pp. 1–7 (2019)
6. Sangaiah, A.K., Zhiyong, Z., Sheng, M.: Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications. Academic Press (2018)
7. Rashidifar, R., Chen, F.F., Tran, T.: Simulation and analysis of production scheduling in eyeglasses industry for productivity improvement. In: IIE Annual Conference Proceedings, pp. 1–6 (2022)
8. Zhou, T., Tang, D., Zhu, H., Wang, L.: Reinforcement learning with composite rewards for production scheduling in a smart factory. IEEE Access 9, 752–766 (2020)
9. Bouzary, H., Frank Chen, F.: A hybrid grey wolf optimizer algorithm with evolutionary operators for optimal QoS-aware service composition and optimal selection in cloud manufacturing. Int. J. Adv. Manuf. Technol. 101(9–12), 2771–2784 (2018). https://doi.org/10.1007/s00170-018-3028-0
10. Shahin, M., Chen, F., Bouzary, H., Hosseinzadeh, A., Rashidifar, R.: Classification and detection of malicious attacks in industrial IoT devices via machine learning. In: Kim, K.Y., Monplaisir, L., Rickli, J. (eds.) FAIM 2022. LNME, pp. 99–106. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-18326-3_10
11. Halty, A., Sánchez, R., Vázquez, V., Viana, V., Piñeyro, P., Rossit, D.A.: Scheduling in cloud manufacturing systems: recent systematic literature review. Math. Biosci. Eng. 17(6), 7378–7397 (2020)
12. Delaram, J., Valilai, O.F.: A mathematical model for task scheduling in cloud manufacturing systems focusing on global logistics. Procedia Manuf. 17, 387–394 (2018)
13. Yuan, M., Cai, X., Zhou, Z., Sun, C., Gu, W., Huang, J.: Dynamic service resources scheduling method in cloud manufacturing environment. Int. J. Prod. Res. 59(2), 542–559 (2021)
14. Liu, Y., Xu, X., Zhang, L., Wang, L., Zhong, R.Y.: Workload-based multi-task scheduling in cloud manufacturing. Robot. Comput. Integr. Manuf. 45, 3–20 (2017)
15. Chen, S., Fang, S., Tang, R.: An ANN-based approach for real-time scheduling in cloud manufacturing. Appl. Sci. 10(7), 2491 (2020)
16. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive scheduling and resource allocation in large scale manufacturing systems. Comput. Ind. 120, 103244 (2020)
17. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018)
18. Chen, S., Fang, S., Tang, R.: A reinforcement learning based approach for multi-projects scheduling in cloud manufacturing. Int. J. Prod. Res. 57(10), 3080–3098 (2019)
19. Fang, Z., Hu, Q., Sun, H., Chen, G., Qi, J.: Research on intelligent cloud manufacturing resource adaptation methodology based on reinforcement learning. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds.) ICAIS 2021. LNCS, vol. 12736, Part I, pp. 155–166. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78609-0_14
20. Zhu, H., Li, M., Tang, Y., Sun, Y.: A deep-reinforcement-learning-based optimization approach for real-time scheduling in cloud manufacturing. IEEE Access 8, 9987–9997 (2020)
21. Dong, T., Xue, F., Xiao, C., Li, J.: Task scheduling based on deep reinforcement learning in a cloud manufacturing environment. Concurr. Comput. Pract. Exp. 32(11), e5654 (2020)
22. Marchesano, M.G., Guizzi, G., Santillo, L.C., Vespoli, S.: Dynamic scheduling in a flow shop using deep reinforcement learning. In: Dolgui, A., Bernard, A., Lemoine, D., von Cieminski, G., Romero, D. (eds.) APMS 2021. IAICT, vol. 630, Part I, pp. 152–160. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85874-2_16
23. Yu, T., Huang, J., Chang, Q.: Optimizing task scheduling in human-robot collaboration with deep multi-agent reinforcement learning. J. Manuf. Syst. 60, 487–499 (2021)
24. Wang, L., Pan, Z., Wang, J.: A review of reinforcement learning based intelligent optimization for manufacturing scheduling. Complex Syst. Model. Simul. 1(4), 257–270 (2021)
25. Belgacem, A., Mahmoudi, S., Kihl, M.: Intelligent multi-agent reinforcement learning model for resources allocation in cloud computing. J. King Saud Univ. Comput. Inf. Sci. 34(6), 2391–2404 (2022)
26. Tu, S., Recht, B.: The gap between model-based and model-free methods on the linear quadratic regulator: an asymptotic viewpoint. In: Conference on Learning Theory, pp. 3036–3083 (2019)
27. Shahin, M., Chen, F.F., Hosseinzadeh, A., Bouzary, H., Rashidifar, R.: A deep hybrid learning model for detection of cyber attacks in industrial IoT devices. Int. J. Adv. Manuf. Technol., 1–11 (2022)
An Overview of Explainable Artificial Intelligence in the Industry 4.0 Context

Pedro Teixeira1,2, Eurico Vasco Amorim1,3, Jörg Nagel4, and Vitor Filipe1,3(B)

1 School of Science and Technology, Universidade de Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal {eamorim,vfilipe}@utad.pt
2 Neoception, Incubadora de Empresas da Universidade de Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal
3 INESC TEC - INESC Tecnologia e Ciência, 4200-465 Porto, Portugal
4 Neoception GmbH, Mallaustraße 50-56, 68219 Mannheim, Germany [email protected]

Abstract. Artificial intelligence (AI) has evolved significantly in recent years and, if properly harnessed, may meet or exceed expectations in a wide range of application fields. However, because Machine Learning (ML) models have a black-box structure, end users frequently seek explanations for the predictions made by these learning models. Through tools, approaches, and algorithms, Explainable Artificial Intelligence (XAI) provides descriptions of black-box models to better understand the models' behaviour and underlying decision-making mechanisms. AI development enables companies to participate in Industry 4.0. The need to provide users with transparent algorithms has given rise to the research field of XAI. This paper provides a brief overview of and introduction to the subject of XAI, while highlighting why this topic is generating more and more attention in many sectors, such as industry.
Keywords: Explainable Artificial Intelligence · Machine Learning · Deep Learning · Industry 4.0

1 Introduction
Industry 4.0 has been interpreted as "the new direction of automation and digital data transfer in manufacturing and similar technologies, including Internet of Things (IoT), cyber-physical systems, cloud computing, systems integration, and big-data analytics, which serve in establishing the smart industries and factories" [1]. Through communication and technology, Industry 4.0 refers to an intelligent network of devices, machines, and systems for industries. The growth of artificial intelligence, machine learning, and deep learning-based technologies in industries allows them to participate in Industry 4.0 [2]. Although these technologies are designed to aid users in their regular duties, there are still problems with acceptability. Users frequently need clarification regarding the suggestions made. In worse circumstances, users disagree
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 141–148, 2024. https://doi.org/10.1007/978-3-031-38241-3_17
with the AI/ML model’s conclusion since these systems’ inference techniques are frequently ambiguous, counterintuitive, and incomprehensible to people [3]. AIbased methods are becoming more accurate, requiring additional expert explanations or information on how judgments and instructions are made. As a result, for the seamless deployment of AI systems and their acceptance or adoption by professionals, decisions must be intelligible and explainable. Explainability is a term that refers to the ability of a model to increase user trust in it [1]. Particularly in high-stakes areas, the explainability of models is even more significant than their performance. The study area of explainable Artificial Intelligence has emerged in response to the necessity to explain untransparent machine learning algorithms to users. To better understand how black-box algorithms predict outcomes, XAI intends to aid end users and experts in the related field. Outlining how black-box models make decisions also assists ML researchers with the model-creation process [4]. The paper’s main goal was to briefly overview what XAI is and its future importance in Industry 4.0. The structure of this paper is as follows: Sect. 2 begins with a brief definition of XAI, and the types of XAI approaches described in the literature are presented, along with a contextualization of each. Section 3 focuses on why XAI can be very important in Industry 4.0. The main conclusions of this study are presented in Sect. 4.
2 Explainable Artificial Intelligence
Explainable Artificial Intelligence seeks to give end-users understandable AI outcomes [4]. XAI approaches aim to build machine learning methods that can provide reliable, clear, and comprehensible justifications for judgments made by black-box models [5]. Based on applications in specific domains, use case scenarios, and researchers' knowledge, the concept of XAI has undergone significant improvement and modification. According to the Defense Advanced Research Projects Agency (DARPA) technical report [5], XAI is defined as "a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners". However, there is a trend to acknowledge that model "explainability" does not always imply that the model is well described and understood by humans [6]. Figure 1 illustrates this concept.

2.1 Categories of XAI Methods
To describe the inner workings of black-box models and their judgments, a sizable number of XAI techniques have been created. Based on explanation level, implementation level, and model dependency, XAI methods may be categorized into three major types [4].
An Overview of Explainable Artificial Intelligence
143
Fig. 1. XAI Concept. Adapted from [5].
2.2 Explanation Level
An XAI technique's explanation level specifies whether it concentrates on the entire model or just one particular instance [7,8]. These can be the Global level, which emphasizes the model's overall explainability, operation, and decision-making processes, and the Local level, which describes how a model makes decisions for a specific instance or subpopulation. While some XAI techniques, such as Bayesian Rule Lists (BRL) [9] and the Distillation technique [10], provide global-level explanations for an entire model and its decision-making mechanism, other techniques, such as Local Interpretable Model-Agnostic Explanations (LIME) [11], Shapley Additive Explanations (SHAP) [12], and Deep Learning Important FeaTures (DeepLIFT) [13], provide local explanations for instance data.

2.3 Implementation Level
Intrinsic and post hoc explanations are the two basic subcategories of the implementation level [7,8]. Intrinsic explanations describe how a prediction has been reached through the model parameters, decision trees, and/or rules themselves, via approaches such as Bayesian Rule Lists [9]. Post hoc explanations, in contrast, make the internal operations and decision-making processes of black-box models clear after the fact. Post hoc explanations can be applied both to pre-trained models and to models that have finished their training process. Many post hoc XAI approaches have been developed, including Layer-wise Relevance Propagation (LRP) [14], LIME [11], and Integrated Gradients [15], since post hoc explainers transform black-box models into interpretable ones.

2.4 Model Dependency
Model dependency consists of model-specific and model-agnostic explainers [4,16]. Model-specific - This category of explainers considers a machine learning model’s internal workings as well as its inputs and outputs. This is especially
helpful for model developers, as it may aid in diagnosing a model's underlying structure and improving it depending on how inputs and outputs interact with respect to a particular architecture. Model-agnostic - These explainers, in contrast to model-specific explainers, disregard the model's internal workings and treat it as a black box. The mapping between input and output is explained purely at the level of the data. This is helpful both for new model users and for experienced model users who are more interested in applying the model to their data and activities than in the precise model architecture. The most well-known examples of model-agnostic post hoc explainers are ANCHORS [17], LIME [11], LRP [14], and SHAP [12].
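To make the local, model-agnostic idea concrete, the following minimal Python sketch attributes a single prediction to its input features by perturbing one feature at a time toward a baseline and recording the change in the model's output. This is a simplified illustration in the spirit of local explainers such as LIME (which additionally fits an interpretable surrogate model around the instance); the black_box function and the zero baseline are illustrative assumptions, not a method taken from the cited works.

```python
def local_attribution(predict, instance, baseline):
    """Perturbation-based local attribution: replace one feature at a
    time with its baseline value and record the change in the model's
    output for this single instance (local, model-agnostic)."""
    reference = predict(instance)
    contributions = {}
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]  # "remove" feature i
        contributions[i] = reference - predict(perturbed)
    return contributions

# Hypothetical black-box scoring model used only for illustration.
def black_box(x):
    return 3.0 * x[0] + 1.0 * x[1] - 2.0 * x[2]

attr = local_attribution(black_box, [1.0, 2.0, 0.5], [0.0, 0.0, 0.0])
# attr == {0: 3.0, 1: 2.0, 2: -1.0}
```

Because the explainer only queries `predict`, it works for any model with the same input/output interface, which is exactly the model-agnostic property described above.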
2.5 Explanation Types
Explanation methods naturally differ depending on the kind of input data being learned, such as images, numbers, or words. The five types of explanations that are most frequently used are numeric, rule-based, textual, visual, and mixed [7,8]. These explanation types are briefly described below and illustrated in Fig. 2.
Fig. 2. Overview of the different categories of XAI methods. Adapted from [3,4].
– Numeric explanations: Models often produce numerical explanations by calculating how much the input variables contributed to the final result. However, because they are tied to the characteristics of the features, numerical explanations call for a high skill level in the relevant fields.
– Rule-based explanations: Rule-based explanations depict the decision-making process of a model as a tree or a list of rules. Compared with numerical explanations, they are significantly more straightforward, which enables this form of explanation in recommendation systems created for broad audiences in sectors such as entertainment and finance.
– Textual explanations: Due to their higher computational complexity, which necessitates natural language processing, textual explanations are found to be the least often used explanation type. Most textual explanations are produced at the local level, i.e., for a single judgment, and are typically connected to scholarly study, legal systems, etc.
– Visual explanations: Visual explanations were found to be the most popular type of explanation [3]. Most research investigations employing them were either domain-agnostic or from the healthcare domain. Visual explanations at the local and global scopes were developed using post hoc techniques.
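As a concrete illustration of the numeric explanations described above, the sketch below computes exact Shapley values (the quantity that SHAP [12] approximates) for a toy model by enumerating all feature coalitions. This brute-force form is only feasible for a handful of features; the linear model and zero baseline are illustrative assumptions, not part of the surveyed methods.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley values via enumeration of all feature coalitions.
    Features outside a coalition are set to their baseline value.
    Cost grows as 2^n, so this is only for small n."""
    n = len(instance)

    def value(coalition):
        x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(x)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                # Weight of this coalition in the Shapley formula.
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi += w * (value(s | {i}) - value(s))
        phis.append(phi)
    return phis

# Illustrative linear model: for linear models the Shapley value of
# feature i reduces to w_i * (x_i - baseline_i).
phi = shapley_values(lambda x: 3.0 * x[0] + 1.0 * x[1] - 2.0 * x[2],
                     [1.0, 2.0, 0.5], [0.0, 0.0, 0.0])
# phi is approximately [3.0, 2.0, -1.0]
```

The resulting numbers are exactly the kind of per-feature contribution that makes numeric explanations demanding to read for non-experts, as noted in the list above.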
3 XAI in Industry 4.0
In the fourth industrial revolution, XAI technology has the potential to improve AI system reliability in several ways [1,18]. Several instances include:
– Increased explainability: XAI approaches can offer concise and intelligible justifications of how an AI system makes decisions, which can help to increase users' trust and confidence in the technology. This can help the system be deployed and used effectively, as well as lower the risks involved with depending on black-box AI systems.
– Enhanced transparency: XAI approaches can reveal information about the variables influencing an AI system's judgments, which can be used to spot biases or mistakes in the system. As a result, it may be possible to design AI systems that are more accurate and dependable and to lessen the risks of applying AI to situations involving essential decisions.
– Better interpretability: XAI approaches can help discover and fix possible flaws in the system by making the behaviour of AI systems easier to comprehend and interpret. This can guarantee that the system is functioning as planned and can promote its efficient use and maintenance.
– Optimization: XAI technology can be used to optimize AI systems, increasing their effectiveness and efficiency. By evaluating data and offering insights into the system's performance, XAI can assist in identifying areas for improvement, resulting in higher performance and reliability.
Therefore, the application of XAI technology may contribute to improving the reliability of AI systems in the fourth industrial revolution by enhancing the system's transparency, explainability, and interpretability, and by supporting its optimization. This can facilitate the system's effective deployment and use and help lower the risks involved with using AI in situations requiring critical decision-making. Multisensing is an emerging trend in manufacturing.
Researchers are trying to design single multifunctional sensor units able to provide multiple measurements across several industrial processes. Multisensoring systems are a crucial illustration of how this XAI technology can be applied and integrated.
This is a concept from Industry 4.0 that refers to the use of several sensors to collect data from numerous sources, including machines, equipment, or processes. These sensors may be cameras, microphones, pressure and temperature sensors, and other IoT devices. Manufacturers can optimize their processes and increase productivity by using multisensoring systems to monitor and control industrial processes. Multisensoring systems can offer a more comprehensive picture of the production process by gathering data from several sources, empowering producers to make better decisions. They can also be used to maintain inventory levels, keep an eye on the quality of raw materials, and make sure that regulations are being followed. Generally, multisensoring systems are essential to Industry 4.0 because they give manufacturers real-time data on their operations, allowing them to make better decisions and increase efficiency and dependability [19,20]. Multisensoring systems in manufacturing can benefit from the application of XAI in several ways:
– Data integration: Large amounts of data produced by multisensor systems can be challenging to integrate and evaluate. By providing models that can combine and evaluate data from various sources, including sensors, cameras, and other IoT devices, XAI can assist in improving data integration.
– Anomaly detection: XAI can help multisensoring systems detect anomalies more accurately. By offering explainable models, XAI can help identify abnormalities more precisely, lowering the possibility of false positives or false negatives.
– Predictive maintenance: XAI can improve predictive maintenance in multisensor systems. Through the analysis of data from many sources, XAI can identify when a machine is most likely to malfunction and explain the underlying causes. As a result, when manufacturers arrange maintenance in advance, there is a lower chance of unplanned downtime and a higher level of reliability.
– Quality control: Multisensoring systems' quality control can be enhanced with XAI. By evaluating data from several sensors, XAI may assist in the identification of product flaws and the explanation of their root causes. Understanding the underlying causes of problems allows manufacturers to take action to stop them from happening again, thereby raising the overall quality and dependability of their products.
In general, XAI can enhance multisensoring systems in production by bringing accountability and transparency to the decision-making process. In order to increase efficiency and dependability, XAI can help with anomaly detection, predictive maintenance, quality control, and data integration by evaluating data from many sources [21,22].
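As a toy illustration of the explainable anomaly detection use case above, the sketch below flags a multisensor reading as anomalous and reports which sensors are responsible via per-sensor z-scores. The sensor names, statistics, and the 3-sigma threshold are illustrative assumptions, not taken from the cited works.

```python
def explainable_anomaly(sample, means, stds, threshold=3.0):
    """Return (is_anomaly, explanation): the explanation names every
    sensor whose reading deviates from its normal mean by more than
    `threshold` standard deviations."""
    z_scores = {s: abs(sample[s] - means[s]) / stds[s] for s in sample}
    offenders = {s: round(z, 1) for s, z in z_scores.items() if z > threshold}
    return bool(offenders), offenders

# Hypothetical per-sensor statistics learned from normal operation.
means = {"temperature": 70.0, "pressure": 1.2, "vibration": 0.05}
stds = {"temperature": 2.0, "pressure": 0.1, "vibration": 0.01}

is_anomaly, cause = explainable_anomaly(
    {"temperature": 71.0, "pressure": 1.9, "vibration": 0.06}, means, stds)
# is_anomaly is True; cause attributes the alarm to the pressure sensor.
```

Returning the offending sensors alongside the binary decision is the essential difference from a pure black-box detector: an operator sees not only that a reading is anomalous but which source caused it.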
4 Conclusions
XAI research has been expanding quickly to increase trust in the judgments of black-box models, and numerous XAI techniques have been proposed to
provide people with intelligible AI results. Industry 4.0 technologies combined with advanced AI- and XAI-based approaches generate more accuracy and quality in various applications. Explainable architectures can attain a high level of abstraction by leveraging many data samples, garnering a lot of interest in all areas of Industry 4.0. XAI can help make intelligent systems in Industry 4.0 more transparent and interpretable by providing information about their behaviour and enabling more informed decision-making about their development and use. This can help build trust in these systems and identify and address potential issues or biases. Therefore, XAI has the potential to be extremely important to Industry 4.0's effective adoption of intelligent systems.

Acknowledgments. This work was supported by the R&D Project "Continental Factory of Future (CONTINENTAL FoF)/POCI-01-0247-FEDER-047512", financed by the European Regional Development Fund (ERDF) through the Program "Programa Operacional Competitividade e Internacionalização (POCI)/PORTUGAL 2020", under the management of AICEP Portugal Global - Trade & Investment Agency.
References

1. Ahmed, I., Jeon, G., Piccialli, F.: From artificial intelligence to explainable artificial intelligence in Industry 4.0: a survey on what, how, and where. IEEE Trans. Indust. Inform. 18, 5031–5042 (2022)
2. Peres, R.S., Jia, X., Lee, J., Sun, K., Colombo, A.W., Barata, J.: Industrial artificial intelligence in Industry 4.0 - systematic review, challenges and outlook. IEEE Access (2020)
3. Islam, M.R., Ahmed, M.U., Barua, S., Begum, S.: A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 12, 1353 (2022)
4. Alicioglu, G., Sun, B.: A survey of visual analytics for explainable artificial intelligence methods. Comput. Graph. (Pergamon) 102, 502–520 (2022)
5. Gunning, D., Aha, D.W.: DARPA's explainable artificial intelligence (XAI) program. AI Mag. 40, 44–58 (2019)
6. Andrienko, N., Andrienko, G., Adilova, L., Wrobel, S.: Visual analytics for human-centered machine learning. IEEE Comput. Graphics Appl. 42, 123–133 (2022)
7. Vilone, G., Longo, L.: Explainable artificial intelligence: a systematic review. arXiv (2020)
8. Vilone, G., Longo, L.: Classification of explainable artificial intelligence methods through their output formats. Mach. Learn. Knowl. Extract. 3, 615–661 (2021)
9. Letham, B., Rudin, C., McCormick, T.H., Madigan, D.: Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model. Ann. Appl. Stat. 9, 1350–1371 (2015). https://doi.org/10.1214/15-AOAS848
10. Tan, S., Caruana, R., Hooker, G., Lou, Y.: Distill-and-compare: auditing black-box models using transparent model distillation. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES '18), pp. 303–310. Association for Computing Machinery, New York, NY, USA (2018)
11. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
12. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, pp. 4766–4775 (2017)
13. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: 34th International Conference on Machine Learning (ICML 2017), pp. 4844–4866 (2017)
14. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10, e0130140 (2015)
15. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: 34th International Conference on Machine Learning (ICML 2017), pp. 5109–5118 (2017)
16. Das, A., Rad, P.: Opportunities and challenges in explainable artificial intelligence (XAI): a survey. arXiv (2020)
17. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. Proc. AAAI Conf. Artif. Intell. 32, 1527–1535 (2018)
18. Islam, M., Ahmed, M., Barua, S., Begum, S.: A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 12, 1353 (2022)
19. Kong, L., Peng, X., Chen, Y., Wang, P., Xu, M.: Multi-sensor measurement and data fusion technology for manufacturing process monitoring: a literature review. Int. J. Extreme Manufact. 2(2), 022001 (2020)
20. Tsanousa, A., et al.: A review of multisensor data fusion solutions in smart manufacturing: systems and trends. Sensors 22(5) (2022)
21. Ha, D.T., Hoang, N.X., Hoang, N.V., Du, N.H., Huong, T.T., Tran, K.P.: Explainable anomaly detection for industrial control system cybersecurity. IFAC-PapersOnLine 55(10), 1183–1188 (2022). 10th IFAC Conference on Manufacturing Modelling, Management and Control (MIM 2022)
22. Cheng, X., et al.: Systematic literature review on visual analytics of predictive maintenance in the manufacturing industry. Sensors 22(17) (2022)
Analyzing the Effects of Different 3D-Model Acquisition Methods for Synthetic AI Training Data Generation and the Domain Gap

Özge Beyza Albayrak, Daniel Schoepflin(B), Dirk Holst, Lars Möller, and Thorsten Schüppstuhl

Hamburg University of Technology, Hamburg 21073, Germany
{oezge.albayrak,daniel.schoepflin}@tuhh.de
https://www.tuhh.de/ifpt/institut/ueberblick.html
Abstract. Synthetic data generation to enable industrial visual AI applications is a promising alternative to manual data acquisition. Such methods rely on the availability of 3D models to generate virtual worlds and derive or render 2D image data. Since CAD models can be created at different detail levels but require extensive effort, questions arise about what effect different detail levels of 3D models have on synthetic data generation. The effect of different CAD model details is investigated and compared to different 3D scanning and photogrammetry approaches. Different synthetic datasets are created for each 3D model acquisition type, and a test-benchmark set is used to evaluate the performance of each dataset. Based on the results, the suitability of each acquisition method is derived and the effects of bridging the domain gap from the synthetic training domain to the real-world application domain are discussed. The findings indicate that 3D scans are as feasible for synthetic data generation as feature-rich high-level CAD data. Feature-poor CAD data, as might originate from manufacturing data, performs significantly worse.

Keywords: Synthetic Data Generation · Domain Gap · Data Acquisition · 3D Scanning · Photogrammetry

1 Introduction and Related Work
Obtaining training data for AI object detectors plays a significant role in the successful deployment of automated AI-based identification systems in manufacturing and intralogistics transportation systems. Thus, the availability of sufficient training data for such use cases is considered a main challenge [14,16,18]. While datasets for common objects are widely available [9], data for specific industrial applications (e.g., detection of company-specific components) are significantly less available. In such cases, the dataset has to be created from scratch, where obtaining and manually labeling images of a wide variety of components is a
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 149–159, 2024. https://doi.org/10.1007/978-3-031-38241-3_18
150
Ö. B. Albayrak et al.
laborious, costly process and is considered not applicable for multiple industrial use cases. Thus, synthetic data generation as a scalable and automated alternative is increasing in popularity [2,6,15,17]. In this process, data is generated programmatically from virtual scenes. Labels are therefore inherently available and do not have to be manually crafted. The process of synthetic data generation starts with acquiring 3D models of real-world objects. Once the 3D models are obtained, a graphics engine is used to create different scenes using various lighting, positions, and orientations of the 3D model. Afterward, the scene is rendered and 2D images, as well as annotations, are generated. Special attention in synthetic data is given to the transferability of the synthetically trained AI toward a real-world application domain [5,12,17]. This domain gap is considered one of the main challenges for synthetic data. With synthetic data being reliant on 3D models, this work investigates different 3D-model acquisition approaches and their effect on bridging this domain gap. In most industrial cases, 3D objects exist and are modeled with CAD software at some point in the product's life cycle [2,5,11]. CAD modeling, however, can result in different detail levels of a product. Models might only include geometric information or, in other cases, highly detailed textural information. Products assembled from multiple components might not be accurately represented in a model, with missing features such as buttons, stickers, or standard components. Such features might be of great importance when training a visual AI application, and creating a dataset without them can increase the domain gap and render an application useless. In addition, CAD models might not be shared throughout a supply chain, and a potential Original Equipment Manufacturer (OEM) might not be able to obtain detailed models from its suppliers.
A promising approach to overcome such a drawback is to use 3D scanners instead of 3D CAD models [3,13,18]. The 3D scanning approach offers the benefit of capturing the geometry of the object along with its color and textures. However, possible technologies and various methods to create a 3D scanned model must be explored, since this approach can also cause inaccuracies in the resulting 3D models due to textures and lighting effects. Moreover, different types of 3D scanning technologies and 3D scanners should be matched to their proper area of application to increase the accuracy of the modeling result [18]. This work focuses on investigating the influence of different techniques for creating 3D models, using CAD software or 3D scanning technologies and 3D scanners, with respect to the resulting synthetic training data. Different CAD levels of detail of industrial objects are obtained, and different scanning methods are utilized to obtain additional 3D models. With those different types of 3D models, a previously reported toolbox [14] is used to generate a synthetic dataset for each 3D model type. Different AI-based object identification applications are trained, and a benchmark evaluation dataset is used to test and evaluate the trained applications. Based on the results for each model type and dataset, the effects on the domain gap and the transfer to a real-world application domain are discussed.
2 Methodology
This section covers the methodology of the presented work. First, the considered use case is introduced and the AI-based identification setting is presented, for which the synthetic data is to be generated. Afterward, the 3D model acquisition techniques are presented and the features of the six generated datasets are addressed. Lastly, the covered AI models and frameworks are described.

2.1 Considered Use Case
A production-supplying logistics scenario is considered for this work. A load-carrier-based identification of components is to be enabled through a visual system. Four different components are to be identified: (1) Transformator, (2) Emulator, (3) Motor Controller, and (4) Switch.

2.2 3D Model Acquisition
This section introduces the different 3D model acquisition methods used to create feasible 3D models of target objects in terms of quality, effort, and efficiency, and the features of the resulting 3D models. In this work, two categories of 3D scanning technologies are considered: structured light and photogrammetry-based 3D scanning [4,7]. With regard to CAD modeling, low- and high-detail-level 3D CAD models are investigated.
Structured Light 3D Scanning. Structured light scanning technology utilizes two cameras and a light projector as a source of light, as illustrated in Fig. 1. The system casts a series of patterns onto the object using the light source. The pattern deforms as it hits the surface of the object, and the cameras placed on each side of the light source capture images of these patterns, distorted by the object's shape. In this study, an Artec 3D scanner is used to acquire scanned models using structured light scanning technology with a 3D resolution of up to 0.5 mm and a 3D point accuracy of up to 0.1 mm.
Photogrammetry. The photogrammetry technique captures 2D images of a 3D surface as input and reconstructs the surfaces primarily using texture cues on the surfaces, computed through Structure from Motion (SfM) algorithms. Photographs are taken from different viewpoints (overlapping perspectives of the same object captured from different angles) without moving the object to be scanned, as shown in the photogrammetry scanning principle in Fig. 2. The downside is its sensitivity to the resolution of the input photographs, which makes the quality of the camera an important factor in acquiring accurate 3D models. Here, the open-source photogrammetry software Meshroom is used to transform the images into a 3D model. Realistic and accurate surface texture is achieved through an overlay of the original photos onto the 3D mesh. The images are taken using different quality camera lenses and imported into the Meshroom software. The
Fig. 1. Structured light scanning principle and Artec EVA 3D scanner
3D models are obtained using two different cameras (a DSLR and a handheld iPhone camera) to investigate the effect of the camera type on model accuracy.
Fig. 2. Photogrammetry principle
CAD Low and High Detail Level. Modeling objects using CAD software yields multiple modeling options in terms of model detail, such as considering only the shape and geometry of the object without any further details, or producing a more realistic design including color, electronic details (depending on the object being modeled), 3D texturing, etc. In the course of this work, 3D CAD models of the desired objects are modeled with different levels of detail. The low-level design contains only the 3D shape and geometry of the object, while the high-level design contains more details (e.g., electronic details, color). By considering these aspects, the effect of the detail level on synthetic data generation is investigated.
It is suspected that a high similarity between the modeled 3D object and the real-world object has a positive effect on the performance of the model trained using the synthetically generated images with regard to object identification purposes.

2.3 Dataset Creation
With the 3D models acquired using the above-mentioned techniques, different datasets are created. Below, the data generation toolbox used and the resulting datasets are introduced.
Synthetic Data Generation Toolbox. The open-source software suite Blender is used to render images from the 3D models acquired using the various modeling techniques, generating six synthetic image sets with different features and specifications. The process of synthetic image generation is performed as introduced in [14], where the images are generated using a virtually created 3D scene. The 3D scene varies, and certain parameter variations are included, such as lighting, camera position, and object positions. The toolbox aims to automate annotated image generation with minimal manual input. As described in [14], the scene data (e.g., object positions, orientations, background, lighting, camera settings, label data) is generated after specifying the necessary parameters, with physics simulation, and the images are created by rendering various scenes with different lighting and camera settings.
Introduction of Datasets. Six datasets are created following the acquisition methods shown above. Visual representations of the 3D models for each dataset are shown in Fig. 3, and dataset-specific parameters are listed in Table 1. The first dataset, Dataset 1, is generated from the CAD models acquired by considering only the geometry of the object, without including any electronic details, material assignment, or realistic coloring (low-level detail). Dataset 2 is acquired by using the 3D scanned models obtained with a multi-image photogrammetry approach. As mentioned earlier, in photogrammetry, the quality of the camera lens affects the quality of the generated 3D model, since the only input for the scanning software to build the scanned model is the set of 2D images of the desired object.
Thus, we created different datasets to analyze the effect of the camera quality on the scanning results of the photogrammetry approach by using different cameras to take images. For Dataset 2, a professional DSLR camera is used to take the images and generate the scanned models. Dataset 3 is acquired using the same technique as Dataset 2; however, the camera used to take the 2D images was different. For Dataset 3, an iPhone camera (smartphone) is used instead of a professional camera, which also yields high-quality images for generating the 3D scanned objects.
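The scene randomization performed by the synthetic data generation toolbox (varying lighting, camera position, and object poses per rendered image) can be sketched as a simple parameter sampler. The parameter names and value ranges below are illustrative assumptions, not the actual interface of the toolbox in [14] or of Blender's Python API.

```python
import random

def sample_scene_parameters(num_objects, seed=None):
    """Randomly sample the scene parameters that are varied per render:
    lighting, camera position, and per-object poses. A fixed seed makes
    the sampled scene reproducible."""
    rng = random.Random(seed)
    return {
        "light_energy": rng.uniform(200.0, 1500.0),   # lamp strength (assumed range)
        "camera_position": [rng.uniform(-0.5, 0.5),
                            rng.uniform(-0.5, 0.5),
                            rng.uniform(0.8, 1.5)],   # metres above the load carrier
        "objects": [{
            "position": [rng.uniform(-0.2, 0.2), rng.uniform(-0.2, 0.2), 0.1],
            "rotation_z": rng.uniform(0.0, 360.0),    # degrees
        } for _ in range(num_objects)],
    }

params = sample_scene_parameters(num_objects=4, seed=42)
```

In the actual pipeline, each sampled parameter set would be handed to the rendering engine together with a 3D model to produce one annotated image; sampling thousands of such sets yields the dataset sizes reported below.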
Table 1. Features and differences of the different datasets

Type                               | Color | 3D Texture | Completeness | Electronic Details | Surface Quality
Dataset 1 CAD Low-level detail     | ✖     | ✖          | ✓            | ✖                  | Uniform
Dataset 2 Photogrammetry w/ DSLR   | ✓     | ✓          | ✖            | Partially          | Rough
Dataset 3 Photogrammetry w/ iPhone | ✓     | ✓          | ✖            | Partially          | Rough
Dataset 4 Photogrammetry w/ iPhone | ✓     | ✓          | ✓            | Partially          | Rough
Dataset 5 CAD High-level detail    | ✓     | ✖          | ✓            | ✓                  | Uniform
Dataset 6 3D Scanning              | ✓     | ✓          | ✓            | Partially          | Partially Uniform
Dataset 4 is generated using the 3D scanned models with the same 3D modeling approach as in Dataset 2 and Dataset 3. However, even though multi-image photogrammetry is a fast and accurate approach, it has a drawback regarding the possibility of obtaining a complete 3D model with all surfaces scanned (i.e., the bottom surface in case the object is placed on a surface). To overcome this problem, the object should be hung so that the 2D images can be taken from all angles to cover the bottom surface as well, which is not possible when the object is placed on a table or any other flat surface. Therefore, the main difference between Dataset 2 and Dataset 4 is the completeness of the scanned objects, where the bottom surfaces of the scans in Dataset 4 are more complete compared to the ones used to generate Dataset 2. Dataset 5 is generated from the CAD models including the electronic details, material assignment, and coloring (high level of detail). Dataset 6 is generated using the 3D scanned models acquired with the Artec Eva 3D scanner, where the obtained 3D scans are generated by structured light scanning technology. When the two scanning technologies are compared, these models are more accurate than the 3D scans acquired by the photogrammetry scanning method in terms of completeness and surface quality. However, one must consider that photogrammetry is a much faster and cheaper approach compared to structured light scanning.

2.4 Application Training
Once the synthetic image sets are generated and converted into the desired data format, in the final stage, network training is performed on each synthetic dataset to evaluate the object identification performance of the trained model on real images. A custom object detector is trained using the TensorFlow Object Detection API [1]. A Single Shot Detector (SSD) is trained in this application, which is a method for detecting objects in images using a single (one-stage) deep neural network [10]. The SSD has two components: a backbone model and an SSD head. The backbone of the SSD-MobileNet V2 network is MobileNet [8], a pre-trained image classification network used as a feature extractor. Pre-training was performed on the COCO 2017 dataset. The SSD head consists of one or more convolutional layers added to the backbone, and the output of the model is bounding boxes and classes of objects. Six different object detectors are trained, one for each dataset as introduced above. The total dataset size is ~4,200 images per dataset, and an 80:20 train/validation split was used.

Fig. 3. Dataset overview: different types of 3D models created
3 Experiments and Results
To investigate the effect of the data acquisition methods on real-world applications, a benchmark dataset is created using real-world images. To increase
the variety of the images, the lighting, the position of the objects, background, packaging material, and the type (e.g., color, shape, size) of the transportation box are changed. The test set consists of 35 images per class totaling 140 images. Images are captured by an IntelRealsense L515 with a Resolution of 1920 × 1080 px. Afterward, the test dataset is manually labeled, and the labels are exported in COCO format. Examples of the created test set and predictions from the network with bounding boxes and object classes are shown in Fig. 4.
Fig. 4. Bounding Box Detection on Test Dataset
As an evaluation metric, classification and detection results are combined. Based on the detection, an Intersection over Union (IoU) threshold is introduced: only if a sufficiently accurate bounding box is drawn (IoU > 0.5) and the classification label is correct is the prediction counted as a True Positive. With True and False Positives, the precision metric is used to measure how accurate the predictions are. This combination results in a more rigorous and informative evaluation. The precision results for each dataset are shown in Fig. 5. According to the bounding box detection accuracy, Datasets 5 and 6 perform the best, being the 3D models containing the most detail and feature information. Dataset 1, generated from the CAD model with a low level of detail, performs the worst, while the photogrammetric Datasets 2–4 perform in between. There is no noticeable difference between the results obtained using Datasets 3 and 4, the latter being the dataset with full surface completeness of the models. The DSLR camera used for Dataset 2 outperforms the iPhone camera used for Datasets 3 and 4.
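The evaluation rule above can be sketched in a few lines (a simplified illustration, not the authors' actual evaluation code; boxes are assumed to be in (x1, y1, x2, y2) format):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def precision(predictions, ground_truth, iou_thr=0.5):
    """Count a prediction as True Positive only if its box overlaps some
    ground-truth box with IoU > iou_thr AND the class label matches;
    every other prediction is a False Positive."""
    tp = 0
    for pred_box, pred_label in predictions:
        if any(label == pred_label and iou(pred_box, gt_box) > iou_thr
               for gt_box, label in ground_truth):
            tp += 1
    return tp / len(predictions) if predictions else 0.0
```

For example, a prediction with IoU 0.81 against a matching-class ground-truth box counts as a True Positive, while a correctly labeled box placed elsewhere in the image counts as a False Positive.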
4 Discussion
With Dataset 1 containing the fewest details, multiple features present on the real-world objects are missing. The object detection network trained on this dataset struggles to place bounding boxes accurately over the objects, resulting in significantly poorer IoU values and, in turn, poorer classification precision compared to the other networks.

Fig. 5. Representation of the evaluation results of each dataset

This result is consistent with findings from the literature cited above on synthetic datasets. Increasing the level of detail, and thereby narrowing the domain gap between the real-world application and the synthetic training domain, is expected to increase the precision metric. This expectation is met by the networks trained with higher detail levels, reaffirming the presumption that higher-resolution images result in more detailed photogrammetric models. The narrow difference between Datasets 3 and 4 (photogrammetric datasets without and with surface completeness) indicates that completeness of all model surfaces does not significantly influence the results. The high-level details represented in Datasets 5 and 6 result in better performance. Although the features represented in Dataset 6 (scanning) should be comparable with those in Datasets 3 and 4 (photogrammetry), the network performs considerably better. This behavior most likely originates from the more uniform surface quality of the 3D scan. The similar suitability of 3D scanning and high-level CAD models as 3D-model acquisition methods indicates the feasibility of substituting 3D models with 3D scans whenever 3D models are not available or are of low quality.
5 Conclusion and Outlook
As previously described, CAD models of simple assemblies for common industrial use cases are mostly available; however, complex assemblies for specific applications are difficult to obtain. CAD modeling is a laborious, time-consuming process that can easily miss important features such as object texture and color if they are not modeled in detail. This difference creates a gap between the modeled and the real-world object, which leads to a biased training set generated from the 3D modeled object. Thus, we propose that using 3D scanned models instead of CAD models solves the problem of data availability, reduces the amount of time spent on modeling with CAD software, since scanned models can be obtained rather fast with high quality, and solves the problem of missing features by accurately capturing the geometry and the texture of the real-world object. Therefore, in this work, we focused on training networks using different training sets generated from various 3D model acquisition techniques to demonstrate the superior features of 3D scanning for model performance. According to our results, the quality of the 3D model used as a source of synthetically generated images does affect the trained model's performance on the test set generated from real-world images. The models that perform best in the object identification task are trained on the image sets generated from the scanned models acquired by the Artec Eva 3D scanner and from the high-detail-level CAD model. The 3D scans contain important information that reflects the real-world objects better than the other 3D models. The multi-image photogrammetry approach outperforms simple CAD models and may be a viable alternative for generating scanned models; it is relatively cheap compared to structured light scanning, since it requires no hardware beyond a camera. Further investigation of the above results may include explainable AI methods to reveal the exact features an AI needs to identify components. With such precise recommendations, a suitable 3D acquisition method could be derived for each object.

Acknowledgements. Work was funded by IFB Hamburg, Germany under grant number 51161730.
References
1. Abadi, M., Agarwal, A., Barham, P., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems (2016). https://www.tensorflow.org/. Software available from tensorflow.org
2. Alexopoulos, K., Nikolakis, N., Chryssolouris, G.: Digital twin-driven supervised machine learning for the development of artificial intelligence applications in manufacturing. Int. J. Comput. Integr. Manuf. 33(5), 429–439 (2020)
3. Börold, A., Teucke, M., Rust, J., Freitag, M.: Recognition of car parts in automotive supply chains by combining synthetically generated training data with classical and deep learning based image processing. Procedia CIRP 93, 377–382 (2020)
4. Conway, B.: 3D scanning and photogrammetry explained (2018). https://www.vntana.com/blog/3d-scanning-and-photogrammetry-explained/
5. Dahmen, T., et al.: Digital reality: a model-based approach to supervised learning from synthetic data. AI Perspect. 1(1), 1–12 (2019)
6. Georgakis, G., Mousavian, A., Berg, A.C., Kosecka, J.: Synthesizing training data for object detection in indoor scenes (2017). arXiv preprint arXiv:1702.07836
7. Hamidi, H.: Structured light 3D scanner (2019). https://www.opensourceimaging.org/project/structured-light-3d-scanner/
8. Howard, A.G., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications (2017). arXiv preprint arXiv:1704.04861
9. Lin, T., et al.: Microsoft COCO: common objects in context. CoRR abs/1405.0312 (2014). http://arxiv.org/abs/1405.0312
10. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) European Conference on Computer Vision. ECCV 2016, pp. 21–37. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-319-46448-0_2
11. Manettas, C., Nikolakis, N., Alexopoulos, K.: Synthetic datasets for deep learning in computer-vision assisted tasks in manufacturing. Procedia CIRP 103, 237–242 (2021)
12. Nowruzi, F.E., Kapoor, P., Kolhatkar, D., Hassanat, F.A., Laganiere, R., Rebut, J.: How much real data do we actually need: analyzing object detection performance using synthetic and real data (2019). arXiv preprint arXiv:1907.07061
13. Sarkar, K., Pagani, A., Stricker, D.: Feature-augmented trained models for 6DOF object recognition and camera calibration. In: VISIGRAPP (4: VISAPP), pp. 632–640 (2016)
14. Schoepflin, D., Holst, D., Gomse, M., Schüppstuhl, T.: Synthetic training data generation for visual object identification on load carriers. Procedia CIRP 104, 1257–1262 (2021)
15. Schoepflin, D., Iyer, K., Gomse, M., Schüppstuhl, T.: Towards synthetic AI training data for image classification in intralogistic settings. In: Schüppstuhl, T., Tracht, K., Raatz, A. (eds.) Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021, pp. 325–336. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-74032-0_27
16. Su, H., Qi, C.R., Li, Y., Guibas, L.J.: Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3D model views. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2686–2694 (2015)
17. Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., Abbeel, P.: Domain randomization for transferring deep neural networks from simulation to the real world. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23–30. IEEE (2017)
18. Wong, M.Z., Kunii, K., Baylis, M., Ong, W.H., Kroupa, P., Koller, S.: Synthetic dataset generation for object-to-model deep learning in industrial applications. PeerJ Comput. Sci. 5, e222 (2019)
Improving Regulations for Automated Design Checking Through Decision Analysis Good Practices: A Conceptual Application to the Construction Sector

Ricardo J. G. Mateus1,2(B), Francisco Silva Pinto1,6, Judith Fauth3, Miguel Azenha4, José Granja4, Ricardo Veludo5, Bruno Muniz4, João Reis1, and Pedro Marques1
1 RCM2+ Research Centre for Asset Management and Systems Engineering, Lusófona University, Lisbon, Portugal
[email protected]
2 CEG-IST, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
3 RIB Software GmbH, Stuttgart, Germany
4 ARISE, ISISE, Department of Civil Engineering, University of Minho, Guimarães, Portugal
5 Ministério da Coesão Territorial, Lisbon, Portugal
6 CERIS, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
Abstract. Automating regulations is difficult because they were not drafted for computers. This article suggests decision analysis good practices to structure, analyze and support regulation drafting, namely: structuring rules using a means-ends objectives network, identifying the level at which rules should be set, developing performance measures for each fundamental objective, setting compliance thresholds based on the respective upper-level objectives and trade-offs, and monitoring and adapting rules. The approach is illustrated and validated through its theoretical application to Portuguese rights to light regulation. It proposes specific performance-based metrics on three fundamental objectives from this regulation: direct sunlight; natural daylight; and solar energy. Climate-based daylight simulation methods coupled with Building Information Modeling (BIM) provide the breakthrough to develop better performance-based metrics, rules, and building design optimization. Keywords: Automation · Building Permit · Regulation · Legal Drafting · Decision Analysis
1 Introduction

Checking designs against rules laid down in laws and regulations is pervasive across industries. It typically demands manual, time-consuming, paper-based work, which is often opaque, inefficient, inconsistent, and prone to errors [1]. This is the case with building permits, where public authorities often consume a lot of time and money to make decisions. Yet it is possible and desirable to make this decision process more effective, efficient, consistent, and transparent, namely by setting objective compliance metrics that can be encoded into a machine-interpretable format for automatic computation and checking.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 160–169, 2024. https://doi.org/10.1007/978-3-031-38241-3_19

Encoding regulations is a major hurdle to overcome for automating compliance checking. Rules were written for human interpretation in an analogue world. They are often complex, intertwined, ambiguous, imprecise, redundant, opaque, and inconsistent, making them difficult to interpret and encode [2, 3]. Although some provisions can be directly and objectively encoded [4], many cannot, requiring a thorough review of the underlying regulations. This article presents preliminary results for the following research question: how can rules be improved for better interpretation and digital transformation by following good practices recommended in the decision analysis literature? Decision analysis methods prescribe how people should make decisions from a rational perspective. They comprise sound qualitative and quantitative procedures, methods, and tools for representing any decision problem in a requisite manner. For the sake of conciseness, but without loss of generality, the proposed methodology is illustrated and validated through its conceptual application to building regulations related to the rights to light in Portugal, an easement that guarantees the right to receive light in a building through defined apertures in other buildings [5, 6]. We chose this subject because all building codes regulate it, and most people rate natural light as the most important aspect of a home [7]. Section 2 reviews the state of the art on regulation drafting for automated checking. Section 3 illustrates how rights to light are typically regulated in current Portuguese building regulations.
It then presents decision analysis solutions to common regulation problems, followed by a methodology to improve regulation analysis and drafting based on the principles of decision analysis. Section 4 illustrates its conceptual application to rights to light rules. Section 5 concludes and presents potential future research avenues.
2 Regulation Drafting for Automated Checking

Regulations are drafted by design for human interpretation. If rules are to be used for automated regulatory compliance checking, they must be encoded so that computers can interpret them in an accurate, scalable, and maintainable process [8]. Two complementary approaches have been pursued toward this goal: a priori approaches, which focus on improving regulation drafting before its enactment; and a posteriori approaches, which aim to formalize existing rules into a machine-interpretable format. The approach presented in this article falls into the former category. Legislative drafting [9] is the traditional discipline that studies a priori approaches, more recently joined by “Rules as Code” [10]. Existing a posteriori approaches include [8, 11, 12]: a) computer hard-coded rules, usually coupled with parametric tables [13]; b) manual and automatic extraction of rules from regulations [14]; c) formal logic methods [15]; and d) semantic web technologies [16]. Xanthaki states that legislative drafting aims at efficacy: the best regulation is the one most capable of achieving the regulatory results required by policy makers [9]. Hence, the author stresses the importance of understanding what goal is served by each
rule and suggests using explanatory materials, including diagrams, to improve analysis, clarity, and understanding of rules. The present article is a contribution toward those aims. “Rules as Code” is a recent interdisciplinary approach that advocates co-drafting rules from the outset both in natural language and in code [10]. Notably, benefits come primarily from improving the quality of regulation drafting rather than from encoding the rules into a machine-interpretable format. Additionally, the consequences of encoded rules can be automatically tested and compared against expected outcomes. The present article suggests a similar interdisciplinary co-creation based on the principles and methods of decision analysis, which is, to the best of our knowledge, an innovative proposition in the literature.
3 Improving Regulations Through Decision Analysis Methods

3.1 Portuguese Building Regulations on Rights to Light

This subsection outlines Portuguese building regulations on rights to light to illustrate typical regulation problems and to set the ground for validating, in Sect. 4, the decision analysis solutions and the methodology proposed in this section on a concrete case. Construction works in European Union countries require that developers first submit the respective architectural (and sometimes engineering) designs to public authorities for compliance checking against urban planning (zoning) and building regulations [17]. Provisions in building regulations are classified in the literature as [18]: functional, if they are set based on a qualitative description of the intended objectives only; performance-based, if they are set using specific performance metrics and their compliance thresholds; or prescriptive, if they are set by specifying exhaustive or illustrative alternative construction designs or solutions (e.g., materials, dimensions). Portuguese building regulations include a mix of these three categories [18]. They include functional provisions stated through vague and ambiguous objectives that designs should achieve, namely that spaces shall “maximize solar gains”, “have suitable and sufficient light”, “promote thermal comfort” or “ensure prolonged exposure to the direct action of the sun’s rays”. Terms like “suitable”, “sufficient” or “prolonged” are naturally prone to different interpretations. Furthermore, without specifying how the objectives of maximizing solar gains or thermal comfort must be measured, as well as the threshold levels that must be achieved to comply, it is not possible to decide objectively whether a given design solution complies with any provision of this type.
Portuguese regulations also include performance-based provisions, but typically without a clear record of the reasons that led to their definition. For instance, Portuguese regulations demand that building façades stand back from each other at a distance greater than the respective building heights (a.k.a. the 45° rule) [5]. Furthermore, each indoor space must have at least one opening (e.g., a window) with an area greater than 1.08 m² and at least one tenth of the space's floor area. It can be guessed that these rules aim to guarantee natural light access, but it is not possible to infer why the respective compliance thresholds were set at precisely those levels.
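As an illustration of how such a performance-based provision could be encoded for automated checking, in the spirit of "Rules as Code", the opening-size rule above can be sketched as follows (a hedged sketch only; the function name, data model, and reading of the provision as "at least one opening exceeding both thresholds" are our own illustration, not an official encoding of Portuguese regulations):

```python
def window_rule_complies(opening_areas_m2, floor_area_m2):
    """Check the opening-size provision cited above: at least one opening
    must exceed both 1.08 m² and one tenth of the space's floor area."""
    threshold = max(1.08, floor_area_m2 / 10)
    return any(area > threshold for area in opening_areas_m2)
```

For a 12 m² room, the binding threshold is 1.2 m² (one tenth of the floor area), so a 1.5 m² window complies while a 1.0 m² window does not. Encoding the rule this way makes its interpretation explicit and testable, which is precisely the benefit claimed for co-drafting rules in code.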
Finally, Portuguese regulations also include prescriptive provisions. For instance, “claddings and openings must […] ensure shading […], favoring the use of trees for solar protection, especially autochthonous and deciduous species” or “to maximize energy efficiency […] centralized systems should be favored, namely using urban heating and cooling networks or cogeneration systems, among others”. Note though that this type of provisions hinders creativity and flexibility in design submissions as compared to the previous ones. 3.2 Decision Analysis Recommendations for Common Regulation Problems Rules must be interpreted based on the letter of the law mostly. However, if rules are not objective, precise, transparent, unambiguous, and consistent, their actual meaning can only be guessed, including why they were written as they were. Distinct stakeholders might reach different conclusions about their interpretation, often biased by their own interests, mood, and other cognitive biases. Uncertainty on the right interpretation of a rule can create arbitrariness, litigation, and prejudice. Furthermore, sometimes indeterminacy comes not from the application of the rule, but rather on the conditions for its application (e.g., “use of renewable energy systems is required in new buildings, except in duly justified situations”). Table 1 summarizes common regulation problems, as illustrated in Subsect. 3.1, and suggests specific decision analysis recommendations to tackle them. Section 4 will then illustrate its conceptual application to Portuguese rights to light regulation. Table 1. Common regulation problems and suggested decision analysis good practices. Problems
Recommendations
Complexity, inconsistency, and opaqueness
Means-ends objectives network
Vagueness (imprecision) and ambiguity
Formal logic; Performance-based rules; Rules as Code [10]
Limited design flexibility and creativity
Rules set closer to fundamental objectives; rules as multicriteria models; dynamic rules
Discrimination, subjectivity
Performance-based rules with thresholds
Limited monitoring, accountability, and adaptation
Regulatory impact analysis; structural and functional relationships between objectives
3.3 Methodology

The application of decision analysis good practices to improving the analysis or drafting of regulations benefits from following a structured procedure, as outlined next:

1. Make clear the policy intent of the regulation;
2. Disentangle objectives, from potential design options up to the policy intent(s), and structure them into a means-ends objectives network;
3. Identify the fundamental objectives (the level at which the rule should be checked);
4. Develop a performance metric for each fundamental objective (FO):
   a) If not possible, disaggregate the FO into sub-objectives;
      a1) If relevant, build a multicriteria performance model for measuring the FO;
   b) If still not possible, develop a proxy performance metric from the means-objectives of the FO;
5. Infer functional relationships between causes (means-objectives) and effects (ends-objectives);
6. Set rule compliance threshold levels based on the upper objectives and factual trade-offs:
   a) If relevant, set dynamic thresholds;
7. Monitor expected impacts across time and adapt rules as necessary.

Section 4 elaborates on this methodology, presenting how it can be applied in practice to the improvement of Portuguese building regulations on rights to light.
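The means-ends objectives network of step 2 can be represented as a simple directed graph; the sketch below (an illustration only, populated with a few nodes taken from the rights to light example, not an artifact of the paper) flags which objectives are fundamental and lets one trace the "Why Is This Important" chain from any design option:

```python
# Each key is a design option or objective; values are the ends it serves.
MEANS_TO_ENDS = {
    "windows": ["direct sunlight", "natural daylight", "solar energy"],
    "direct sunlight": ["thermal comfort", "material deterioration"],
    "natural daylight": ["visual comfort", "energy consumption"],
    "solar energy": ["energy consumption"],
}
# The three fundamental objectives proposed in Sect. 4.
FUNDAMENTAL = {"direct sunlight", "natural daylight", "solar energy"}

def ends_served(node, graph=MEANS_TO_ENDS):
    """All upper-level objectives reachable from a node, i.e., the answers
    to repeatedly asking 'Why Is This Important?' along the network."""
    reached, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in reached:
                reached.add(nxt)
                stack.append(nxt)
    return reached
```

Such a machine-readable network also supports step 5 (functional relationships) and step 7 (monitoring), since each rule's expected upper-level consequences are explicit.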
4 A Conceptual Application of the Methodology to Portuguese Rights to Light Regulation

Regulation analysis or drafting should start by making the respective policy intent clear. This knowledge is essential to guide the rationale and interpretation of rules [19]. Policy intent can be obtained directly from policy makers or legislative drafters, or indirectly from the preamble, principles, or functional provisions of a regulation. Overall, building regulations aim to guarantee and promote that buildings satisfy social functions (e.g., security, accessibility, communication, safety, health, comfort, aesthetics, energy efficiency) for their users as well as society in general. These social outcomes are incentivized by forbidding or inducing behaviors of Architecture, Engineering and Construction stakeholders. The policy intent can be converted into a coherent set of legally enforceable normative rules by asking, for each policy intent, “How could it be achieved?”. In the opposite direction, asking “Why Is This Important?” [20, 21] for each existing or potential rule should make clear the intermediate and ultimate purposes (objectives) that each provision aims to guarantee or promote. Both top-down and bottom-up approaches should be applied concurrently until a structural network of causes (means-objectives) and effects (ends-objectives) emerges, as illustrated in Fig. 1 for rights to light regulations. This visual representation provides an artifact for thought and communication on the relationships and rationale of the rules set in a regulation, including how they promote the achievement of the policy intent.
For instance, potential alternative architectural design options, like the building, obstructions (e.g., trees, other buildings), indoor spaces, openings (e.g., windows, doors, balconies), and other building systems, along with their location, orientation, shapes, dimensions, materials, and other properties, are important because they influence intermediate objectives (e.g., solar energy, natural lighting, direct sunlight), which in turn are important to achieve broader strategic objectives (e.g., health, comfort, energy efficiency, greenhouse gas reduction). Furthermore, the network often makes it possible to identify unforeseen, conflicting, unintended negative consequences that must be traded off (e.g., material deterioration, noise reduction, privacy) when choosing a design solution.
Fig. 1. Illustrative network of the means- and ends-objectives involved in regulating rights to light provisions. Design options are represented within circles, while fundamental objectives are in bold.
Fundamental objectives correspond to the objectives at which the rule should ideally be applied [20]. For instance, the objectives related to health (e.g., skin damage, vitamin D synthesis, circadian rhythm regulation) are too broad (strategic) for defining an actionable rule, because their control is well beyond the influence of the available design options (e.g., they depend mostly on the occupants' behavior). On the other hand, design options like windows are not fundamental objectives either, but rather means-objectives for attaining other, more fundamental objectives. This research therefore proposes that rights to light rules should be set based on three fundamental objectives: direct sunlight; natural daylight; and solar energy (in bold in Fig. 1). Besides being controllable, this set of fundamental objectives possesses additional desirable properties, namely measurability, completeness, conciseness, and clearness [22]. Stating only the objectives that each rule is intended to promote, as is common in functional provisions, is useless for design submissions, since objectives allow neither designers nor public officials to objectively check and decide whether a given design fulfills the rule. Rules based only on objectives then reduce to mere guidelines, which cannot legally be used to justify a decision, or else they require arbitrary subjective judgements from the officials responsible for the decision. In sum, functional provisions should be replaced by performance-based ones, stating the performance metric along with the threshold compliance level and the measurement method. Choosing an adequate performance metric is not trivial. Decision analysis good practices advocate that each metric should ideally measure the direct (natural) effects on the respective fundamental objective [22]. However, this is not always possible, namely because it might be impossible or too costly to directly measure fundamental objectives in practice (e.g., how to measure daylighting?). In these cases, a possible strategy is to break down the fundamental objective into measurable sub-objectives (e.g., availability, distribution, glare; and/or by space function) and then aggregate them into a single composite multicriteria indicator. If that is not viable either, indirect (proxy) metrics could be used as a last resort, built from the respective means-objectives (e.g., building orientation, window size, distance between buildings). Performance-based metrics used in many European building codes for the objective of guaranteeing indoor daylighting are of the following types: window size, often as a function of room area [7]; daily sunlight duration [23]; or indoor illuminance levels, namely using the Average Daylight Factor (ADF) in specific periods of the year [24]. ADF computes the ratio (as a %) of the average daylight illuminance (under standard overcast conditions) in a space to the outside illuminance [25]. Recently, climate-based daylight computer simulation methods make it possible to predict quantitative metrics (e.g., irradiance, illuminance, radiance, and luminance) of sun and shading patterns falling on and around buildings [26]. Performance-based rules based on these dynamic metrics can then be built by computing cumulative quantities over a period (e.g., total or average annual illuminance) or instantaneous quantities (e.g., minimum illuminance) [27, 28]. Performance-based rules should be defined as close as possible to the fundamental objectives that they are intended to measure, since this promotes internal validity (less biased measurements) as well as higher flexibility and creativity in complying with a rule.
Hence, it is preferable to regulate rights to light using climate-based metrics rather than ADF, since the latter depends only on the shape and composition of the buildings, while the former also take into account their location, orientation, and the seasonal variation of the sun across the day and year. This is currently achievable using building information modelling (BIM), a technology that is replacing traditional 2D CAD drawings for designing constructions. BIM represents physical and functional objects in construction designs as database models containing 3D objects (e.g., plot, spaces, walls, doors, windows) and associated information (e.g., geometry, spatial relationships, materials, time, costs, sustainability). Semantically rich BIM models coupled with simulation-based analyses thus provide the breakthrough for automating design measurements and regulatory checking support in building permitting [1, 29]. Suitable climate-based performance metrics include [23, 26, 30]: a) Useful Daylight Illuminance (the fraction of time in a year when the indoor horizontal illuminance falls within a given range, e.g., between 100 and 2000 lx) for measuring daylight availability (illuminance) on the objective “natural daylight”; b) Discomfort Glare Probability (the probability that a person is disturbed by glare) for measuring glare (luminance) on the objective “natural daylight”; c) Annual Sunlight Exposure (the percentage of space area where direct sunlight is above a certain threshold) for measuring the objective “direct sunlight”; and d) Annual Solar Insolation (the total amount of energy that a space area receives over a year) for measuring the objective “solar energy”. Beyond a structural representation of the relationships between design options and means- and ends-objectives (see Fig. 1), the respective functional (cause-effect) dependencies should ideally also be estimated, namely from causal inference empirical studies, theory, or expert panels. Quantitative functions make it possible to quantify how distinct window sizes influence indoor direct sunlight and daylight (e.g., lux), or how the latter influence visual comfort, energy consumption, or thermal comfort (e.g., overheating days). The benefits of estimating these functional relationships are twofold. They allow policymakers to understand the expected upper-level consequences of fixing distinct approval threshold levels for each performance-based rule, as well as the expected factual trade-offs between those consequences (e.g., promoting direct sunlight might promote user health, but deteriorate materials and increase energy consumption in the summer). Specifying the dependencies between objectives, that is, the upper-level consequences that each rule is expected to achieve, is also relevant for ex ante evaluations, as required in regulatory impact analysis studies, as well as for monitoring and ex post evaluations of its application. If the expected consequences are not achieved, rules should be changed or adapted accordingly. More recently, legal dynamism proposes rules that adapt in response to the actual performance of real-world systems [31]. For instance, a dynamic rule could regulate varying minimum window sizes in response to the actual evolution of the number of buildings and/or the noise level in an area. Window size limits could increase as the number of nearby buildings increases and/or decrease as the surrounding noise increases.
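The climate-based metrics discussed above are straightforward to compute from hourly simulation output. For example, Useful Daylight Illuminance, the fraction of time when indoor horizontal illuminance falls within a given range, can be sketched as follows (an illustration only; the 100–2000 lx band is the example range quoted above, and the function is not a standard implementation):

```python
def useful_daylight_illuminance(hourly_lux, low=100.0, high=2000.0):
    """Fraction of simulated hours whose indoor horizontal illuminance
    falls within [low, high] lx, per the UDI definition given above."""
    if not hourly_lux:
        raise ValueError("no illuminance samples")
    in_band = sum(1 for lx in hourly_lux if low <= lx <= high)
    return in_band / len(hourly_lux)
```

A performance-based rule would then compare this fraction against a compliance threshold set, as argued above, from the upper-level objectives and their trade-offs.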
5 Discussion, Concluding Remarks and Future Work

A set of good practices from decision analysis is suggested to structure, analyze, and support regulation drafting for automated design checking. A structured methodology is presented and validated through its conceptual application to Portuguese rights to light regulations, suggesting in particular that performance-based metrics should be developed for three fundamental objectives: Annual Sunlight Exposure for measuring the objective “direct sunlight”; Useful Daylight Illuminance and Discomfort Glare Probability for measuring the subobjectives “availability” and “glare” of the objective “natural daylight”; and Annual Solar Insolation for measuring the objective “solar energy”. Additionally, climate-based daylight computer simulation methods coupled with BIM models provide the necessary breakthrough to develop performance-based metrics that can lead to better and more efficient rules on rights to light, since they can be automated and allow building design optimization. The methodology and decision analysis good practices suggested in this article are applicable to improving the analysis, drafting, understanding, and coherence, as well as the ex-ante and ex-post evaluation, of regulations on many other topics. They provide sound analytical and visual tools to better draft and structure regulations, bring clarity and rationality to the objectives and consequences of each rule, support the definition of the appropriate fundamental objectives at which performance-based rules should be set along with their compliance threshold levels, and allow monitoring the estimated policy consequences of the rules against their actual outcomes in practice. This research is planned to be tested and implemented within the framework of the CHEK (“Change toolkit for digital building permit”) R&D project, which aims to develop methods and tools to automate compliance checking of designs for building permitting in European municipalities.
168
R. J. G. Mateus et al.

It further aims to propose changes and statutory interpretations following the principles of ‘Rules as Code’. By imposing that rules be encoded into a formal logic language (e.g., if fact situation F falls under category C, then verify rule R), the quality of legal drafting is improved and its intended policy effects can be tested. The expectation is that the costs associated with this shift will be compensated by foreseeable reductions in regulatory and compliance costs.

Acknowledgments. The authors would like to thank three anonymous reviewers for their comments on the draft version of this paper. This work was partly financed by FCT/MCTES through national funds (PIDDAC) under the R&D Unit Institute for Sustainability and Innovation in Structural Engineering (ISISE), under reference UIDB/04029/2020, and under the Associate Laboratory Advanced Production and Intelligent Systems ARISE under reference LA/P/0112/2020.
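The ‘Rules as Code’ pattern described above — if fact situation F falls under category C, then verify rule R — can be sketched as executable code. The fragment below is a hypothetical illustration: the rule id, the 0.5 Useful Daylight Illuminance threshold, and the field names are invented, not drawn from any statute.

```python
# Each rule pairs an applicability predicate (category C) with a check (rule R).
RULES = [
    {
        "id": "udi-minimum",  # hypothetical rule identifier
        "category": lambda design: design["space_type"] == "habitable",
        "check": lambda design: design["udi"] >= 0.5,  # illustrative threshold
    },
]

def check_design(design: dict) -> list:
    """Return the ids of applicable rules that the design violates."""
    return [rule["id"] for rule in RULES
            if rule["category"](design) and not rule["check"](design)]
```

With this encoding, `check_design({"space_type": "habitable", "udi": 0.3})` flags the daylight rule, while a space outside the rule's category is simply not checked — which is exactly what makes the drafting testable.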
References
1. Malsane, S., Matthews, J., Lockley, S., Love, P.E.D., Greenwood, D.: Development of an object model for automated compliance checking. Autom. Constr. 49(A), 51–58 (2015)
2. Nawari, N.O.: A generalized adaptive framework (GAF) for automating code compliance checking. Buildings 9(4), 86 (2019)
3. Beach, T.H., Hippolyte, J.-L., Rezgui, Y.: Towards the adoption of automated regulatory compliance checking in the built environment. Autom. Constr. 118, 103285 (2020)
4. İlal, S.B., Günaydın, H.M.: Computer representation of building codes for automated compliance checking. Autom. Constr. 82, 43–58 (2017)
5. Portugal: General Regulation of Urban Buildings, Decree-Law 38382, Portuguese Official Gazette, 7 August (1951)
6. Lisbon: Lisbon’s Municipal Regulation of Urban Buildings, Public Notice 16520/2021, Portuguese Official Gazette H169, 280–402, 31 August (2021)
7. Kunkel, S., Kontonasiou, E., Arcipowska, A., Mariottini, F., Atanasiu, B.: Indoor air quality, thermal comfort and daylight: analysis of residential building regulations in eight EU member states. Buildings Performance Institute Europe (2015)
8. Beach, T.H., Rezgui, Y., Li, H., Kasim, T.: A rule-based semantic approach for automated regulatory compliance in the construction sector. Expert Syst. Appl. 42, 5219–5231 (2015)
9. Xanthaki, H.: Drafting Legislation: Art and Technology of Rules for Regulation. Hart Publishing (2014)
10. Waddington, M.: Research note: rules as code. Law Context 37(1), 179–186 (2020)
11. Eastman, C., Lee, J.M., Jeong, Y.-S., Lee, J.-K.: Automatic rule-based checking of building designs. Autom. Constr. 18, 1011–1033 (2009)
12. Salama, D.M., El-Gohary, N.M.: Semantic modeling for automated compliance checking. In: ASCE International Workshop on Computing in Civil Engineering, Miami, USA (2011)
13. Solibri Model Checker. https://www.solibri.com. Accessed 30 Jan 2023
14. Hjelseth, E., Nisbet, N.: Exploring semantic based model checking. In: Proceedings of the 27th CIB W78 International Conference, p. 54 (2010)
15. Giblin, C., Liu, A.Y., Müller, S., Pfitzmann, B., Zhou, X.: Regulations expressed as logical models (REALM). In: Proceedings of the 18th JURIX Conference, IOS Press (2005)
16. Pauwels, P., Zhang, S.: Semantic rule-checking for regulation compliance checking: an overview of strategies and approaches. In: 32nd CIB W78 Conference, Eindhoven (2015)
17. Pedro, J.B., Meijer, F.M., Visscher, H.J.: Comparison of building permit procedures in European Union countries. In: Proceedings of RICS Construction and Property Conference, COBRA 2011. RICS & University of Salford (2011)
18. Pedro, J.B., Meijer, F.M., Visscher, H.J.: Technical building regulations in EU countries: a comparison of their organization and formulation. In: Proceedings of CIB World Congress 2010, Building a Better World, Salford, UK (2010)
19. Alpa, G.: General principles of law. Ann. Surv. Int. Comp. Law 1(1), Article no. 2 (1994)
20. Spradlin, D.: Are you solving the right problem? Harv. Bus. Rev., 84–93 (2012)
21. Keeney, R.L.: Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press, Cambridge (1992)
22. Keeney, R.L.: Developing objectives and attributes. In: Edwards, W., Miles, R.F., von Winterfeldt, D. (eds.) Advances in Decision Analysis: From Foundations to Applications, pp. 104–128. Cambridge University Press, Cambridge (2007)
23. Darula, S., Christoffersen, J., Malikova, M.: Sunlight and insolation of building interiors. Energy Procedia 78, 1245–1250 (2015)
24. Bournas, I., Dubois, M.-C.: Daylight regulation compliance of existing multi-family apartment blocks in Sweden. Build. Environ. 150, 254–265 (2019)
25. Littlefair, P.J.: Site Layout Planning for Daylight and Sunlight: A Guide to Good Practice. Construction Research Communications, London (1995)
26. Brembilla, E., Mardaljevic, J.: Climate-based daylight modelling for compliance verification: benchmarking multiple state-of-the-art methods. Build. Environ. 158, 151–164 (2019)
27. BSI: Code of practice for daylighting. BS 8206-2:2008. British Standards Institution, London (2008)
28. Lu, M., Du, J.: Dynamic evaluation of daylight availability in a highly-dense Chinese residential area with a cold climate. Energy Build. 193, 139–159 (2019)
29. Noardo, F., et al.: Unveiling the actual progress of digital building permit: getting awareness through a critical state of the art review. Build. Environ. 213, 108854 (2022)
30. Chow, A., Fung, A.S., Li, S.: GIS modeling of solar neighborhood potential at a fine spatiotemporal resolution. Buildings 4(2), 195–206 (2014)
31. Pentland, S., Mahari, R.: Legal Dynamism from the Network Law Review. https://www.networklawreview.org/computational-one. Accessed 14 Oct 2022
Preliminary Design of an Automatic Palletizing System During the Pre-sales Stage

Enrico Guidetti, Pietro Bilancia, Roberto Raffaeli, and Marcello Pellicciari

Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Via Amendola 2, 42122 Reggio Emilia, Italy
[email protected]
Abstract. The study of an automated system for intralogistics requires a significant investment of time and resources, from the input data analysis up to the definition of the technical solution. While many commercial tools are available for testing and optimizing plant performance during the advanced design stages, little work has been done concerning the workflow to be followed during the pre-sales design phase. In this context, the present paper focuses on the definition of best practices for the correct preliminary definition of a robotic cell for palletization. To simplify and speed up the pre-sales feasibility study and estimate the performance of the proposed robotic system, an engineering approach based on a simplified theoretical model is reported and integrated within a dynamic calculation table. As its main output, the proposed tool calculates the robot saturation, which is a key index for the preliminary definition of the plant.

Keywords: Industry 4.0 · Palletizing Robotic System · Pre-Sales Design · Performance Definition · Design Tool
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 170–178, 2024. https://doi.org/10.1007/978-3-031-38241-3_20

1 Introduction

The Industry 4.0 paradigm is leading industrial production towards a strong technological change, in which cyber-physical systems allow communication between machinery and equipment, generating real-time data and information on logistical and production processes [1, 2]. In parallel, the ever-increasing implementation of robotic platforms is changing operations in the industrial world and, particularly, in the logistics sector [3]. As discussed by Alias et al. [4], serial palletizing arms are among the most adopted robot solutions in logistics. These are commonly equipped with a pneumatic or a motorized gripper (mounted on the end-effector) and are mainly used for pick-and-place operations, namely for lifting products from one location and placing them at other pre-defined locations to create a transportable unit load on pallets. Palletizing robots have become an integral part of many logistic systems worldwide as they allow more efficient production (increased production rates and quality) at lower costs, with improved working conditions and employee safety. As a result, today several companies specialize in the supply of robotic solutions for intralogistics. In fact, although robot manufacturers
have released manipulators specifically developed for palletization, the accurate design and programming of robotic cells integrating multi-brand hardware solutions is still a complex task characterized by multiple phases and variables. In particular, as discussed in the work of Zhang et al. [5], six design steps can be identified during the pre-sale phase: i) process study (underlining the principal tasks), ii) task breakdown, iii) task relation and flow diagram (sequence definition), iv) space requirement (analysis of the 2D layout), v) evaluation of alternative layouts, and vi) layout optimization. Naturally, constant collaboration between proposal, mechanical, electrical and software engineers is required throughout the design process. Several simulation systems available on the market can be used during this preliminary phase [6]. However, as clearly highlighted by Gavrila et al. [7], these software packages (e.g., Process Simulate, Visual Components, Delmia Robotics) require skilled users with solid knowledge of robotics and programming. Also, an acceptable preliminary design solution (to be further analyzed and reviewed in the subsequent phases) may be reached only after several iterations, with significant modeling and overall simulation times, inevitably introducing delays and thus impacting overall business competitiveness. Based on these considerations, this paper proposes an easy-to-use calculation tool able to quickly return an estimation of the robotic cell performance. The tool is based on a simplified yet efficient analytical model which can be applied, during the pre-sales phase, to different robotic cell configurations for palletization. After a detailed description of the adopted design method, the paper reports an industrial case study for validation purposes.
In the following paragraphs, the main steps necessary to preliminarily define a robotic cell applicable in the logistic industry will be illustrated, analyzing the required input data to complete the feasibility study.
2 Overview of the Robotic Cell in the Pre-sales Design Stage

To carry out the feasibility analysis of a robotic system, all the design requirements must first be provided to the company developing the intralogistics solution. Usually, the information analyzed by the technical pre-sales team is of a purely technical-analytical nature. In particular, customers are asked to provide the 2D layout drawings of the available area where the system will be installed, as well as all the production and logistic data related to the product to be handled. With reference to Fig. 1, in the case of a robotic palletizing cell designed for handling products inside cases, the required production data to start the preliminary study of the solution are:
– Reference speed (RS, expressed in cases/min or boxes/min): usually, this value refers to the theoretical speed of the production lines, without considering machine downtime such as, for example, the downtime for feeding raw materials.
– Line data: the number of production lines and their characteristics, such as the Stock Keeping Unit (SKU) of each product. In the case of multiple production lines working in parallel and a single palletizing station, the SKUs are analyzed to identify the worst case, that is, when all the lines are producing the SKU having the highest throughput rate (cases/min) or the one with physical geometries such that product manipulation becomes problematic.
172
E. Guidetti et al.
– Cases data: the geometrical and physical characteristics (dimensions and mass) of the manipulated items. There may be situations where an automatic packaging machine, responsible for placing multiple individual cases within a single box, is installed ahead of the robotic palletizing cell. In this case, the product to be manipulated is no longer the single case but the box itself, with overall higher dimensions and weights. Hence, robots with increased payloads as well as larger grippers may be needed.
As for the logistic data, customers must specify the characteristics of the final unit load that will subsequently be managed in the warehouse and during shipping (see Fig. 1). In the case of a robotic palletizing cell, the required logistic data can be summarized as follows:
– Number of cases (or boxes) for each level of the final unit load: as discussed by McDonald [8], this parameter must be determined according to the resulting impact on the supply chain costs.
– Number of levels of the final unit load: this value defines the total height of the final unit load.
– Number of slip-sheets for each level of the final unit load: slip-sheets are used when the stability of the final unit load is compromised. This value may affect the cell configuration in the case of palletization with high production rates. Indeed, a secondary robot dedicated to the positioning of the slip-sheets may become necessary to satisfy the production rates requested by the customer.
– Pallet pattern configuration: depending on the physical characteristics of the products, different pattern configurations (i.e., product positioning on the pallet) can be obtained [9]. The reference market can also affect the selection of a specific configuration. For example, if the end customer decides to exhibit the pallet at his points-of-sale on the shelves or along the aisles, the facing of the products and their position on the pallet are strategic for marketing.
– Type of pallet: Today, two main types of transport pallets are used in industry: the first one is made from wood whereas the second from primary or recycled plastic [10]. The choice of the correct support depends on the stability of the unit load and on the company supply chain strategies. After processing the input data, the technical pre-sales team will be able to create a 2D layout draft of the automated cell by taking into consideration the solutions available in the portfolio of the company’s suppliers, as well as the critical aspects detected during the preliminary analysis. In the specific case of a robotic palletizing cell, the product physical characteristics (primarily weight, dimensions and shape) assume a certain importance as they guide the design/selection of the hardware components (gripper, conveyors, mechanical devices). Common issues arise in case of high production throughput rates or insufficient available spaces for installing the automated solution. Overall, the large amount of involved design variables and constraints, together with the required high level of customization, make the design problem quite intricate.
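The production and logistic inputs listed above can be captured in a small data model. The sketch below is one possible structure for the data the pre-sales team collects; the field names are our own, chosen to mirror the text and the Fig. 1 example, not part of any existing tool.

```python
from dataclasses import dataclass

@dataclass
class ProductionData:
    reference_speed_cases_min: float  # RS, theoretical line speed
    case_length_mm: float
    case_width_mm: float
    case_height_mm: float
    case_mass_kg: float

@dataclass
class LogisticData:
    cases_per_layer: int
    layers_per_pallet: int
    slip_sheets_per_pallet: int
    pallet_pattern: str  # e.g. a pattern identifier
    pallet_type: str     # e.g. "EURO 800x1200"

    @property
    def cases_per_pallet(self) -> int:
        return self.cases_per_layer * self.layers_per_pallet
```

With the example data of Fig. 1 (8 cases/layer, 7 layers/pallet), `cases_per_pallet` evaluates to 56, matching the figure.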
Production data (Fig. 1 example): Product ID AA111; case dimensions: height 180 mm, length 435 mm, width 300 mm; mass 8 kg; reference speed 8 cases/min.
Logistic data (Fig. 1 example): 8 cases/layer; 7 layers/pallet; 56 cases/pallet; 4 interlayers/pallet; leading side 435 mm; pallet type EURO 800x1200; pallet pattern (layer and full pallet) shown graphically.

Fig. 1. Example of “production” and “logistic” data in a palletizing system
[Figure 2 is a flowchart spanning three lanes: customer, technical pre-sales team, and technical department. The customer provides the input data (layout, production and logistic data); the pre-sales team analyzes the flows to be automated, verifies in CAD whether the overall dimensions are satisfied (redefining the project if modifications are needed), confirms the solution, performs an on-site visit/inspection and defines the solution performance; if the solution is feasible, the quotation proceeds, otherwise the technical department provides support through an internal test phase in the workshop and, if the outcomes are not reliable, R&D deep dives.]

Fig. 2. General workflow for robotic cells definition in the pre-sales design phase
In this context, the engineering approach presented in Sect. 3 will assist during the solution performance definition phase, i.e. the steps highlighted in yellow within the workflow shown in Fig. 2. Once the necessary input data have been defined, the mathematical model will rapidly return the saturation values of the robotic cell. In this way, the technical pre-sales team will be able to verify the technical correctness of the proposed draft solution and eventually proceed with the required modifications. In practice, in case the obtained results exceed the limits of the proposed system, the technical department will go through an in-depth revision of the individual steps to identify the bottleneck(s) and the required actions. Once a feasible solution has been defined, the customer plant is inspected and the proposed layout is further reviewed and, eventually, updated. For example, it may happen that some dimensional parameters in the real factory differ from the ones previously
shared by the customer. After confirmation of the final solution, the economic quantification is carried out and the commercial offer is formulated. The pre-sales design phase is the fulcrum of the entire sales process, as it guides the quotation phase and the subsequent negotiation of the proposed system with the final customer.
3 Engineering Approach and Calculation Tool

This section outlines the main variables and mathematical passages of the adopted engineering approach for the evaluation of the robot saturation. Its practical implementation into a ready-to-use calculation tool is then discussed.

3.1 Variables of the Mathematical Model

In the case of a robotic palletizing system, the principal variables of the preliminary design problem may be listed as follows:
– Robot Productivity (RP, expressed in cycles/min): intrinsically related to the robot operational speed, although strongly influenced by the applied payload (i.e., the attached gripper and load unit). Several types of serial palletizing arms are available on the market, each with different productivity parameters. These are usually provided in the vendor datasheet, as in Table 1.
– Cycle time for empty pallet changing (tep, expressed in seconds): the handling of empty pallets inside the robotic palletizing cell is an important aspect which can be managed with different approaches (and times). For example, the operator can manually infeed the cell with stacks of empty pallets and the palletizing robot can directly de-stack each single pallet starting from the upper one. Alternatively, a dedicated mechanical device responsible for de-stacking each empty pallet may be installed inside/outside the cell. Then, the single pallet reaches the pick-up point (or, possibly, the palletizing station) by means of motorized conveyors. In case of limited available space, a compact solution consisting of a two-level overlapped motorized conveyor is considered. Here, the empty pallets are introduced on the lower conveyor line and the finished pallets are evacuated on the upper one. The two lines are connected by a vertical mover (e.g., a hydraulic platform or the palletizing robot itself) to transfer the empty pallet to the upper level. Each of the described configurations has a specific cycle time, as visible in Table 2.
– Cycle time for managing one slip-sheet (tss, expressed in seconds): this parameter is to be considered for unstable patterns and/or product geometries with high risks of falling. In such cases, a cardboard or plastic sheet (thickness of about 2 mm) may be required between the layers in order to increase the support surface for each layer, thus enhancing the overall stability of the unit load. The slip-sheets can be managed automatically in different ways. For example, they can be picked up from a dedicated container via a vacuum gripper attached to the robot. The container can also be equipped with a hydraulic system capable of holding the slip-sheet at its top level, thus removing the need for the gripper to move inside the container at very low speed. Alternatively, an automatic dispenser can be installed inside the robotic cell. Starting from a reel of paper or cardboard, such a device automatically cuts the slip-sheets to the desired length and width. Using this system, the robot will always have a single slip-sheet available in a fixed pick-up position at the end of the automatic dispenser. For each of the above configurations a different cycle time must be considered, as visible in Table 3.
– Number of grips per layer: this parameter is strictly related to the pallet pattern configuration specified by the customer. In particular, multi-product grips can possibly be performed based on the type of product to be handled and on the size of the installed gripper. Basically, the more individual grips are performed, the worse the overall saturation of the robotic system becomes.
Table 1. General description of standard robot productivities.
Robot Type | Payload [kg] | Cycles/min | Budget Investment
A | 100 | 13 | Low
B | 140 | 12 | Low
C | 315 | 8 | Medium
D | 500 | 8 | Medium
E | 700 | 7 | High

Table 2. Reference values for empty pallet cycle times (measured on real plants).
Configuration type | Cycle time [s] | Budget Investment
De-stacking done by robot | 22 | Low
Palletizing cell with de-stacker device | 15 | Medium
Two-level conveyor | 26 | High

Table 3. Reference values for slip-sheet cycle times (measured on real plants).
Configuration type | Cycle time [s] | Budget Investment
Slip-sheet container (traditional) | 15 | Low
Slip-sheet container with hydraulic platform | 13 | Medium
Automatic dispenser | 7 | High
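In a spreadsheet-style tool, Tables 2 and 3 become simple lookup tables. The sketch below shows how the cycle times tep and tss can be selected from the chosen hardware configuration; the key strings are our own labels for the configurations, not names used by the authors.

```python
# Cycle times in seconds, transcribed from Tables 2 and 3.
EMPTY_PALLET_CYCLE_S = {
    "robot_destacking": 22,    # de-stacking done by the robot
    "destacker_device": 15,    # dedicated de-stacker device
    "two_level_conveyor": 26,  # two-level overlapped conveyor
}
SLIP_SHEET_CYCLE_S = {
    "container": 15,            # traditional slip-sheet container
    "container_hydraulic": 13,  # container with hydraulic platform
    "automatic_dispenser": 7,   # reel-fed automatic dispenser
}

def cycle_times(empty_pallet_cfg: str, slip_sheet_cfg: str):
    """Return (tep, tss) in seconds for the selected configurations."""
    return (EMPTY_PALLET_CYCLE_S[empty_pallet_cfg],
            SLIP_SHEET_CYCLE_S[slip_sheet_cfg])
```

A richer version could also carry the "budget investment" column, so the tool can trade cycle time against cost when proposing alternatives.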
3.2 Calculation Tool

The graphical interface of the calculation tool is structured as a dynamic table, where the input data and the model variables, respectively described in Sects. 2 and 3.1, can be edited. Based on the implemented sub-processes (e.g., slip-sheet and empty pallet positioning systems), hardware components (robot model) and input data, the tool first evaluates the sum of the secondary cycle times (called waste time, i.e., excluding the palletization cycle) and then returns as an output the cycles per minute performed by the robot during palletization. In the following, the main steps for the evaluation of the robotic arm saturation are reported. The total waste times for slip-sheets (TWss) and for empty pallet changing (TWep) are calculated as:

TWss = Rph · nss · tss    (1)

TWep = Rph · tep    (2)

where nss is the number of slip-sheets in one pallet and Rph represents the number of pallets in one hour, defined as the ratio between the reference speed of the production line (RS, i.e., the product cadence or number of items to be handled by the robot in a time interval, here expressed in cases/hour) and the number of cases per pallet. The sum of the times obtained from Eqs. 1 and 2 gives the total waste time. By subtracting this sum from the production time frame (i.e., 3600 s) and converting to minutes, it is possible to obtain the total operative time (TO) spent on palletizing:

TO = (3600 − (TWss + TWep))/60    (3)

Now, by considering RS (converted to cases/min), the number of cases per layer and the number of grips per layer, one can easily determine the number of cycles that the robot must perform in one hour of production (nrc). Then, the effective robot productivity (ERP) is given by:

ERP = nrc/TO    (4)

Finally, the robot saturation is calculated as:

S = ERP/RP    (5)
For simplicity, S is usually expressed as a percentage, and its value must be lower than 100% for the draft layout to be considered valid. In that case, the configured palletizing solution can meet the requested working conditions (i.e., satisfy a certain production rate and type). S is also used to verify the impact of the secondary functionalities (such as the empty pallet positioning performed by the robot) and of the selected devices (e.g., the ones listed in Tables 2 and 3) on the cell performance. If the value of S becomes equal to or higher than 100%, system optimizations should be addressed to reduce the cycle times. The reported engineering tool makes it possible to rapidly evaluate design variants (see Fig. 3) and thus supports the solution of the multi-parameter problems normally encountered at the pre-sales stage.
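Equations (1)–(5) can be combined into a single routine. The sketch below reproduces the calculation chain; deriving the hourly cycle count as layers per hour times grips per layer is our reading of the text, and the example values come from Fig. 1 with an assumed robot of RP = 12 cycles/min, single-case grips, a de-stacker device (15 s) and an automatic dispenser (7 s).

```python
def robot_saturation(rs_cases_min, cases_per_pallet, cases_per_layer,
                     grips_per_layer, slip_sheets_per_pallet,
                     t_slip_sheet_s, t_empty_pallet_s, rp_cycles_min):
    """Return (ERP [cycles/min], saturation S) following Eqs. (1)-(5)."""
    cases_per_hour = rs_cases_min * 60.0
    r_ph = cases_per_hour / cases_per_pallet                   # pallets/hour
    tw_ss = r_ph * slip_sheets_per_pallet * t_slip_sheet_s     # Eq. (1), s
    tw_ep = r_ph * t_empty_pallet_s                            # Eq. (2), s
    to_min = (3600.0 - (tw_ss + tw_ep)) / 60.0                 # Eq. (3), min
    n_rc = (cases_per_hour / cases_per_layer) * grips_per_layer  # cycles/hour
    erp = n_rc / to_min                                        # Eq. (4)
    return erp, erp / rp_cycles_min                            # Eq. (5)

# Fig. 1 data: 8 cases/min, 56 cases/pallet, 8 cases/layer, 4 interlayers.
erp, s = robot_saturation(8, 56, 8, 8, 4, 7, 15, 12)
```

With these inputs, S evaluates to roughly 74%, so this hypothetical draft layout would be considered valid (S < 100%).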
Fig. 3. Evaluation of design alternatives for palletization: a) first cell layout (S > 100%) adopting a slip-sheet container with hydraulic platform vs. b) improved solution (S = 98%) utilizing an automatic dispenser (A) and a de-stacker device (B)
4 Conclusions

In this paper, an engineering method for addressing the feasibility study of an industrial robotic cell during the pre-sales design phase has been presented. After a preliminary analysis of the design problem workflow, a simple theoretical model aimed at evaluating the general performance of a palletizing robotic system has been reported and its main parameters discussed. The reported formulas are integrated within dynamic tables to obtain a fast and effective calculation tool aimed at supporting system designers during the pre-sales phase. The tool provides the robot saturation, which is a basic piece of information necessary to understand whether the draft solution is feasible, and allows the quick evaluation of many design alternatives.
References
1. Barreto, L., Amaral, A., Pereira, T.: Industry 4.0 implications in logistics: an overview. Procedia Manuf. 13, 1245–1252 (2017)
2. Korczak, J., Kijewska, K.: Smart logistics in the development of smart cities. Transp. Res. Procedia 39, 201–211 (2019)
3. Kadir, B.A., Broberg, O., Souza da Conceição, C.: Designing human-robot collaborations in industry 4.0: explorative case studies. In: DS 92: Proceedings of the DESIGN 2018 15th International Design Conference, pp. 601–610 (2018)
4. Alias, C., Nikolaev, I., Magallanes, E.G.C., Noche, B.: An overview of warehousing applications based on cable robot technology in logistics. In: 2018 IEEE International Conference on Service Operations and Logistics, and Informatics, pp. 232–239 (2018)
5. Zhang, J., Fang, X.: Challenges and key technologies in robotic cell layout design and optimization. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 231(15), 2912–2924 (2017)
6. Carvalho de Souza, J.P., Castro, A.L., Rocha, L.F., Silva, M.F.: AdaptPack studio translator: translating offline programming to real palletizing robots. Ind. Robot Int. J. Robot. Res. Appl. 47(5), 713–721 (2020)
7. Gavrila, S., Ionita, V.: Modeling, simulation and optimization of a robotic flexible manufacturing packaging-palletizing cell. Int. J. Innov. Res. Inf. Secur. 2(10), 1–12 (2015)
8. McDonald, C.M.: Integrating packaging and supply chain decisions: selection of economic handling unit quantities. Int. J. Prod. Econ. 180, 208–221 (2016)
9. Balasubramanian, R.: The pallet loading problem: a survey. Int. J. Prod. Econ. 28(2), 217–225 (1992)
10. Roy, D., Carrano, A.L., Pazour, J.A., Gupta, A.: Cost-effective pallet management strategies. Transp. Res. Part E Logist. Transp. Rev. 93, 358–371 (2016)
A New Equipment for Automatic Calibration of the Semmes-Weinstein Monofilament

Pedro Castro-Martins (1,2) and Luís Pinto-Coelho (1,3)

1 Center for Innovation in Engineering and Industrial Technology, Polytechnic of Porto - School of Engineering, Porto, Portugal {pmdcm,lfc}@isep.ipp.pt
2 Faculty of Engineering, University of Porto, Porto, Portugal
3 INESC-TEC - CRIIS, Porto, Portugal

Abstract. Diabetic foot is a complication that carries a considerable risk in diabetic patients. The consequent loss of protective sensitivity in the lower limbs requires an early diagnosis due to the imminent possibility of ulceration or amputation of the affected limb. To assess the loss of protective sensitivity, the 10 gf Semmes-Weinstein (SW) monofilament is the most used first-line procedure. However, the used device is most often non-calibrated and its feedback can lead to decision errors. In this paper we present an equipment that is able to automatically conduct a metrological verification and evaluation of the 10 gf SW monofilament in the assessment of the loss of protective sensitivity. Additionally, the proposed equipment is able to simulate the practitioner's procedure, or can be used for training purposes, providing force-feedback information. After calibration, displacement vs. buckling force contours were plotted for three distinct monofilaments, confirming the ability of the equipment to provide fast, detailed and precise information.

Keywords: Diabetic Foot · Monofilament · Metrological Verification · Buckling Force

1 Introduction
Diabetes is an incurable chronic disease that is growing exponentially all over the world, especially in most developed countries. It presents several complications, but when associated with other comorbidities, serious problems arise for the patient, as is the case of the diabetic foot. This pathology results in complications in the lower limbs, namely in the plantar region, whose evolution can have devastating effects and with a high probability of compromising part of the affected limb [1]. As a preventive measure, tests are periodically carried out to assess the loss of skin sensitivity to pressure on the foot, with the objective of identifying situations of risk of imminent injury. The 10 gf Semmes-Weinstein monofilament (SWM) test is the most used first-line instrument in the assessment of loss of protective sensitivity in the diabetic foot (see Fig. 1), with international recommendations for its widespread use [1,2,8]. This assessment is most often c The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 179–186, 2024. https://doi.org/10.1007/978-3-031-38241-3_21
180
P. Castro-Martins and L. Pinto-Coelho
Fig. 1. Plantar sensitivity assessment using the monofilament, showing the approaching stage (a) and the contact stage, with visible buckling (b). The related evaluation points, for the left foot, are represented in (c).
performed manually, by an experienced health professional, but it can also be performed automatically, using a collaborative robot [4,5]. The SWM only provides a binary qualitative assessment, as the health professional cannot obtain a numerical value as a result of the test. This prevents an objective comparison between successive screening tests; in the case of non-sensitivity feedback, the patient is simply classified as at risk [2,9]. Another factor to be taken into account is the unknown measurement error that may be associated with the use of the SWM as a decision-making support device, even though the literature indicates the need to carry out quantitative assessments of the degree of loss of sensitivity in the diabetic foot. Thus, some authors have presented evidence indicating a clear distrust regarding the real buckling force that a SWM imposes on a patient's skin during a sensitivity screening test [3,6,7,10]. SWM performance remains largely unexplored despite its impact on medical decisions. In this paper we present equipment that is able to perform an automatic assessment of the SWM, whose result can be used for calibration. Additionally, the equipment can be used for training healthcare professionals on how to perform the SWM test and to increase their awareness regarding the force-feedback mechanism during the procedure.
2 Equipment Development
The proposed equipment is composed of two elements that can be integrated or work independently. One is a force sensing system that is able to measure and display the applied force. The other is a precision linear displacement system for simulating different SWM application scenarios. The different components that constitute the developed equipment, as well as their interactions, are illustrated in the diagram shown in Fig. 2. The equipment should be interfaced with a computer to allow better operation of the equipment, the observation of evaluation charts, data persistence and statistics, among other functionalities. For development, priority was given, whenever possible, to standard components,
Fig. 2. Main equipment components and their interaction.
already tested and readily available on the market. When this was not possible, specific parts were developed from scratch, being designed and simulated using 3D modeling software to ensure that all project requirements and desired characteristics could be met. The prototypes were then 3D-printed in PLA (an organic and biodegradable thermoplastic polymer), tested and refined when necessary. These parts proved to have good mechanical resistance and operational reliability under the intended operating conditions, giving them excellent performance for their purpose.

2.1 Force Sensing System
This is a portable element that can be operated alone to carry out measurements with manual application of the monofilament, thereby sensing the value of the applied force. However, it can also be coupled with the precision linear displacement system (PLDS), described next, and work together with it as a single mechanism in automatic mode. The force sensing system (FSS) (see Fig. 3) is equipped with an OLED screen, a force transducer, a button to carry out the tare measurement, a connector for power input and a USB Mini-B input for communication with a computer. Finally, an electronic controller (composed of a microprocessor, memory and analog-to-digital converters), based on the ATmega328 IC, is responsible for carrying out all the programmed operations and executing the commands sent through a computer application. In automatic mode, i.e. when the FSS is coupled with the PLDS, the computer fully controls the equipment. The fundamental element is a force measurement platform, shaped similarly to the anatomy of the plantar region of a human foot, which, in turn, is coupled to the force transducer. The surface of the platform has an ethylene-vinyl acetate coating with a texture and hardness chosen to be similar to those observed in real biological tissues. It is on this platform that the monofilaments under evaluation apply their force, in a movement similar to the technique used in the sensitivity evaluation of the diabetic foot. Through this measurement process it is possible to obtain the corresponding buckling force applied by the SWM.
Fig. 3. Force sensing system and its measurement platform shaped similar to the plantar region of the foot: external view of the module for manual measurement (left) and inner hardware components (right).
2.2 Precision Linear Displacement System
This element comprises a stepper motor, a V-shaped sliding guide, a trolley with bearings and a fixation support for the SWM. This set of components carries out the movements to apply the force of the monofilaments, simulating the technique used by health professionals in diabetic foot assessment. In Fig. 4 the full equipment can be observed in detail. The PLDS, depicted in a lower position, has visible slots for the rods connecting the visible tray with the motorized mechanism, hidden below. The top clamp, mounted on the tray, is used to robustly fixate the SWM, as shown. It allows the pitch and yaw incidence angles to be adjusted, creating the possibility of different testing conditions. Finally, we can observe in the rightmost area the FSS platform, already described, which in this case operates integrated with the PLDS. The hidden stepper motor is capable of performing precise linear movements. It basically produces a movement of the visible tray, which carries the attached SWM and simulates the technique of applying the monofilament to the patient's foot. The monofilament is moved until it comes into contact with the measurement platform, where the applied force is measured, as this platform is coupled to the force transducer. These displacement movements of the monofilament fixation support are provided by the gantry and the V-shaped sliding guide. The motion is supported by a transmission belt engaged in a pulley coupled to the stepper motor shaft. The movements can be performed linearly and continuously, forwards and backwards, controlled either by end-of-course switches or by the number of steps taken.
Fig. 4. Proposed equipment, with a detailed view of the precision linear displacement system (bottom, horiz.) integrated with the force sensing system platform (right, vert.).
2.3 Equipment Technical Specifications
Since the proposed equipment is intended to serve as a measuring system, it is important to consider the inherent accuracy and precision characteristics of force transducers, the components that convert physical quantities into corresponding electrical signals. To ensure that the equipment conveys quality estimates of the applied forces, it was subjected to a calibration process using calibrated masses whose values were confirmed with a high-precision weighing scale (0.01 gf). This stage allowed adjustments to the measurement algorithm to ensure precise and accurate measurements in a range of 0.1 gf to 500 gf. Although the developed equipment has the capacity to carry out accurate measurements over a wider range, for safety reasons the force transducer was limited to a maximum admissible load of 500 gf, which is within the expected SWM test values. This limit ensures the integrity of the equipment and makes it possible to adapt it to other assessments with monofilaments of a caliber greater than 10 gf (there are calibers of up to 300 gf on the market, mostly used to assess deep sensitivity). If 500 gf is exceeded, all system operation is aborted and an error message appears on the screen, instructing the user to remove the load and reset the equipment. It should be noted that, although the measuring equipment is capable of operating with a higher resolution, the measured values are rounded to a single decimal place. This procedure guarantees balanced rounding and speeds up the comparison of the measured values with the values declared by the monofilament manufacturers. The same approach can be used to evaluate other monofilament calibers.
Table 1. Detailed equipment specifications.

Specification            Value/Description
Range                    0.1 gf to 500 gf
Uncertainty              ±0.05 gf
Displacement resolution  0.08 mm
Modes                    Linear by limit switch; Linear by number of steps
Computer interface       USB Mini-B connector; Specific Windows application
Power supply             9 V DC
From a metrological perspective, an assessment of the quality of a particular measurement result must be ensured by the measurement uncertainty, a non-negative quantity that quantifies the spread of values associated with the measurand. The uncertainty must therefore be included in the expression of the measurement result. The measurement uncertainty for this equipment was defined as half the amplitude of a measurement interval. The measurements taken therefore have a measurement uncertainty of ±0.05 gf, which is reflected in all the presented results. The PLDS is intended to perform high-precision movements. For this, a stepper motor, powered by a ULN2003A-based electronic driver, and a toothed-belt drive are used. This setup is capable of making minimum increments of 0.08 mm in the displacement steps during a given measurement. Still regarding the characteristics of the linear displacement component, a total travel of approximately 140 mm can be covered, enough to manipulate the various models of monofilaments available on the market. Table 1 shows the most relevant technical specifications of the proposed equipment. When the equipment is operated in automatic mode, it is necessary to connect it to a computer through a USB Mini-B communication cable. This connection allows the equipment to be operated, through a specific computer application developed for this purpose, and all the information relating to each monofilament evaluation to be received. The equipment is powered by an external 9 V DC source.
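The reporting rules above (single-decimal rounding, the 500 gf safety abort, and the 0.08 mm step resolution) can be sketched in a few lines. This is an illustrative host-side sketch, not the equipment's actual firmware; all function and constant names are hypothetical:

```python
MAX_LOAD_GF = 500.0      # safety limit of the force transducer (gf)
STEP_MM = 0.08           # minimum displacement increment of the PLDS (mm)
UNCERTAINTY_GF = 0.05    # half the amplitude of the measurement interval (gf)

def process_reading(raw_gf):
    """Apply the overload guard and single-decimal rounding described in the text."""
    if raw_gf > MAX_LOAD_GF:
        # the real equipment aborts and asks the user to remove the load and reset
        raise OverflowError("Load exceeds 500 gf: remove load and reset equipment")
    return round(raw_gf, 1)  # values reported to one decimal place

def steps_to_mm(steps):
    """Convert a stepper step count to linear displacement of the tray."""
    return steps * STEP_MM

print(process_reading(10.04))          # 10.0
print(f"{steps_to_mm(125):.1f} mm")    # 10.0 mm
```

A reported value would then be expressed as, e.g., (10.0 ± 0.05) gf, with the uncertainty taken from `UNCERTAINTY_GF`.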
3 Results and Discussion
After development, the automatic evaluation of the performance of three brand-new 10 gf monofilaments from different manufacturers was carried out. For the evaluation scenario we set the following common conditions for each: perpendicular incidence angle, a speed of 8 mm/s and an advance step of 0.08 mm. To achieve the necessary curvature, as recommended by the international monofilament test standards, the tray travels approximately 10 mm. In Fig. 5 we can observe the obtained results. Initially, the force readings are zero, since the motorized tray is positioned before the monofilament contacts the force sensing
Fig. 5. Example results for the buckling behaviour of three distinct monofilaments (Mx), obtained at an 8 mm/s displacement velocity and 20 °C ambient temperature; buckling force (gf) vs. displacement (mm).
platform. Then, a very rapid increase in force readings is observed for all monofilaments, up to approximately 1.5 mm of displacement, the moment at which the force reaches its maximum peak. From 2 mm of displacement onwards, the nylon filaments have already started to flex and maintain a more or less constant force, albeit with some oscillations of reduced amplitude. Due to the high precision and stability of the motorized system, these oscillations are solely the result of constant changes in the viscoelastic properties of the nylon material which, due to compression and temperature, create memory and changes in its deformation. Finally, as displacement grows, the force measurements tend to stabilize. The expected final value of around 10 gf, as indicated by the SWM manufacturers, was only obtained for the M2 monofilament, although with some error. Monofilaments M1 and M2 were shown to apply an early higher force that slowly decays, while monofilament M3 has a more consistent behaviour, exhibiting fast convergence to its final value. The final values were again checked with a precision weighing scale, without significant errors.
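As a quick sanity check on the test conditions (approximately 10 mm of travel at 8 mm/s with a 0.08 mm advance step), the number of sampled points and the sweep duration follow directly; the arithmetic below is ours, not stated in the paper:

```python
TRAVEL_MM = 10.0     # approximate tray travel to reach the recommended curvature
STEP_MM = 0.08       # advance step per force sample
SPEED_MM_S = 8.0     # displacement speed

n_samples = round(TRAVEL_MM / STEP_MM)   # force readings per sweep
duration_s = TRAVEL_MM / SPEED_MM_S      # time for one sweep

print(n_samples, duration_s)  # 125 1.25
```

So each contour in Fig. 5 is built from roughly 125 force readings collected in about 1.25 s.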
4 Conclusion and Future Work
This article presents new equipment for the automatic metrological evaluation of SW monofilaments. Additionally, the system can be used for training health professionals on the SWM test procedure. The results show that the proposed solution was able to automatically evaluate a set of monofilaments, allowing the displacement vs. buckling force contours that characterize the behaviour of each monofilament to be obtained with detail and precision. It was also
observed that the forces applied by the monofilaments may present values different from the manufacturers' specifications. The performed tests revealed important information about the force imposed by the SWM at each moment of its compression path, until its flexion presents the curvature indicated and recommended by the conventional technique performed by the health professional. This can bring additional information to the clinical decision process. In the future, it is planned to perform a broader evaluation of other monofilaments. Acknowledgements. This work was partially supported by grant FCT-UIDB/04730/2020.
References 1. Bus, S.A., et al.: Guidelines on offloading foot ulcers in persons with diabetes (IWGDF 2019 update). Diabetes/Metabolism Res. Rev. 36(S1:e3274), 1–18 (2020) 2. Bus, S.A., et al.: Guidelines on the prevention of foot ulcers in persons with diabetes (IWGDF 2019 update). Diabetes/Metabolism Res. Rev. 36(S1:e3269), 1–18 (2020) 3. Chikai, M., Ino, S.: Buckling force variability of Semmes-Weinstein monofilaments in successive use determined by manual and automated operation. Sensors 19(4) (2019) 4. Costa, T., Coelho, L., Silva, M.F.: Automatic segmentation of monofilament testing sites in plantar images for diabetic foot management. Bioengineering 9(3), 86 (2022) 5. Costa, T., Coelho, L., Silva, M.F.: Integrating computer vision, robotics, and artificial intelligence for healthcare: an application case for diabetic foot management. In: Coelho, L., Queiros, R. (eds.) Exploring the Convergence of Computer and Medical Science Through Cloud Healthcare, p. 29. IGI Global (2023) 6. Lavery, L.A., Lavery, D.E., Lavery, D.C., Lafontaine, J., Bharara, M., Najafi, B.: Accuracy and durability of Semmes-Weinstein monofilaments: what is the useful service life? Diabetes Res. Clin. Pract. 97, 399–404 (2012) 7. Martins, P., Coelho, L.: Evaluation of the Semmes-Weinstein monofilament on the diabetic foot assessment. In: Advances and Current Trends in Biomechanics. CRC Press, Porto (2021) 8. Olaiya, M.T.: Use of graded Semmes-Weinstein monofilament testing for ascertaining peripheral neuropathy in people with and without diabetes. Diabetes Res. Clin. Pract. 151, 1–10 (2019) 9. Wang, F., et al.: Diagnostic accuracy of monofilament tests for detecting diabetic peripheral neuropathy: a systematic review and meta-analysis. J. Diabetes Res. 2017(8787261) (2017) 10. Young, M.: A perfect 10? Why the accuracy of your monofilament matters. Diabetes Primary Care 11, 40–43 (2009)
Measuring the Moment-Curvature Relationship of a Steerable Catheter Using a Load Cell and Stereovision System Jajun Ryu, Jaeseong Choi, Taeyoung Kim, and Hwa Young Kim(B) Pusan National University, Busan, South Korea [email protected]
Abstract. Accurately positioning the tip of steerable catheters is crucial for effective catheter-based operations. This requires an understanding of the moment-curvature relationship of a catheter. However, determining this relationship using traditional methods is challenging due to catheters' multi-material and segmented structure. This study introduces a method for determining the moment-curvature relationship of each segment of a steerable catheter using a load cell and stereovision system. The load cell and stereo camera are utilized to collect moment and curvature data, respectively, over time. An algorithm to detect the catheter in images is developed, and the procedure for obtaining the curvature of each segment is demonstrated. The moment-curvature relationship is plotted as the moment changes with time. The reliability of the stereo camera is verified, showing an error of less than 4 mm, and the algorithm successfully detects the curvature of the catheter by segments. This study offers a tool to measure steerable catheters' material properties, and ultimately enhances the precision of catheter-based interventions. Keywords: Steerable Catheter · Moment-Curvature Relationship · Stereo Vision
1 Introduction A steerable catheter is a tube that bends using the pull wires contained inside, and precise tip positioning is crucial for its performance. However, in most cases, operators rely on their senses to control the tip of the catheter, leading to trial and error and financial losses. An accurate model of the catheter's bending behavior and material properties is necessary to enhance precision. A steerable catheter includes various materials and consists of multiple segments with different stiffness, so measuring the stiffness is challenging. Traditional tensile tests require a single-material specimen, and only small strains are measured. A large-deformation tensile tester could damage the catheter. Directly measuring the flexural rigidity, EI, is preferable to measuring the elastic modulus, E, as elongation deformation is small compared to bending deformation, and measuring only E could result in serious errors. Ultimately, performing the test on a finished product, without damaging the catheter, to obtain the flexural rigidity is desired. This requires measuring the stiffness of a catheter © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 187–195, 2024. https://doi.org/10.1007/978-3-031-38241-3_22
188
J. Ryu et al.
by bending it. Several different ways to measure the bending of the catheter are presented in studies [1–4], such as measuring the vertical displacement, using graph paper, or using magnetic sensors. However, the use of magnetic sensors can alter the catheter's stiffness, and the other methods may be inaccurate in cases of large deformation. Therefore, stereovision could be an effective solution, as presented in studies [5–9]. However, no studies have measured the stiffness of a catheter by segments or presented the moment-curvature relationship of the catheter with respect to time. This paper addresses these gaps and presents a novel measurement method for determining the moment-curvature relationship of a catheter by segments. The bending of catheters exhibits the characteristics of a viscoelastic material, meaning that it shows significant time-dependent properties. Nonetheless, studies on the viscoelastic behavior of steerable catheters rarely exist. Therefore, determining the moment-curvature relationship over time is vital for modeling the catheter's bending. Ultimately, this study presents a measurement method for the bending behavior of a continuum robot that exhibits viscoelastic behavior.
2 A Load Cell and Stereovision System 2.1 Structure of a Steerable Catheter Figure 1 shows the structure of a steerable catheter. The tube has multiple segments, with the proximal and distal shafts being the primary segments where the bending occurs. The proximal shaft is stiffer than the distal shaft, and the distal end is a tip that does not bend.
Fig. 1. Structure of a Steerable Catheter [10]
Catheter bending occurs when the tension on the pull wire creates the bending moment, M, which is calculated by multiplying the tension, T, by the distance between the center of the tube and the pull wires, r. The bending moment remains constant throughout the length of the entire tube at a fixed time, and therefore, the curvature of a single segment remains constant along its length. Only the difference in stiffness causes the curvature to change.
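The moment relation described above (M = T·r, constant along the tube at a fixed time) is a one-liner; the numbers below are illustrative, not values from this study:

```python
def bending_moment(tension_n, wire_offset_m):
    """M = T * r: moment produced by pull-wire tension T acting at distance r
    from the center of the tube."""
    return tension_n * wire_offset_m

# illustrative values: 2 N of wire tension, pull wire 1.5 mm off-center
M = bending_moment(2.0, 1.5e-3)
print(M)  # 0.003 N*m
```

Because this moment is uniform along the tube, a stiffer segment simply bends to a smaller curvature under the same M.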
2.2 Load Cell and Stereovision System Setup Figure 2 shows the load cell and stereovision system setup. A 650 mm catheter is fixed, and tension is exerted on the pull wire by controlling the linear stage. Moment data is collected over time by converting the load cell measurement. The two 8-megapixel cameras used to measure catheter bending are arranged 300 mm apart in parallel. The two cameras are calibrated using the Matlab Stereo Camera Calibrator App. Six red points are marked on the catheter to capture the catheter's curvature. A photo is taken every two seconds to detect the curvature change over time. A DAQ board collects the load cell data, and a micro-controller controls the linear stage and stereovision system.
Fig. 2. Stereo Camera Measurement Setup for Catheter Deformation
The initial coordinate system obtained from the stereo measurements has its origin in camera 1. A coordinate transformation is then performed according to the four alignment points indicated in Fig. 3. The final coordinate transformation is performed such that the reference point matches the coordinate (−35, 420) mm, and the starting point of the catheter becomes the origin. The bottom 200 mm of the catheter is not captured in the pictures, but this is acceptable since the curvature of the proximal shaft is constant. The validity of the stereo measurement is tested by comparing the measured coordinates of the alignment points to the actual ones, as shown in Table 1. The maximum distance error is less than 4 mm, confirming the accuracy of the stereo system.
Fig. 3. A Stereo Image of a Bent Catheter: (a) Left; (b) Right
Table 1. Comparing the Measured Alignment Points to the Actual Coordinates

Point  Measured (mm)            Actual (mm)         Distance Error (mm)
       xm      ym      zm       xa     ya     za    ((xm − xa)^2 + (ym − ya)^2 + (zm − za)^2)^0.5
P1     −55.1   123.7   624.9    −55    125    625   1.79
P2     195.3   124.9   624.9    195    125    625   0.11
P3     −55.1   125.8   325.8    −55    125    325   1.39
P4     195.9   124.6   326.8    195    125    325   3.96
2.3 Catheter Detection Algorithm An algorithm that automatically detects the catheter in an image has been developed, as shown in Fig. 4. First, a reference image without the catheter is taken, as shown in Fig. 4(a). Then, the image data containing the catheter is subtracted from the reference image data, as shown in Fig. 4(b). Edge detection is performed on the subtracted image, and the biggest edge is selected. The centerline and the direction of the catheter can be determined from this image, as shown in Fig. 4(c). The marked points can be detected by recovering the color of the pixels within the selected biggest edge. By distinguishing the color red in the recovered pixels, the marked points can be detected and numbered according to the direction of the catheter, as shown in Fig. 4(d). The average pixel data for each marked point from both stereo images is used to generate 3D coordinates. Since the catheter bends only in 2D, the 2D data of the bent catheter is shown in Fig. 5.
Fig. 4. Catheter Detecting Process: (a) Reference Image; (b) Subtracted Image; (c) Biggest Edge with Centerline; (d) Marked Points Numbered
Fig. 5. Measured Data of the Bent Catheter
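The pipeline of Fig. 4 (background subtraction, keeping the largest connected region, then recovering red marker pixels inside it) can be sketched with NumPy. This is an illustrative reimplementation under our own assumptions, not the authors' code; the thresholds (30 grey levels for the difference, 50 for red dominance) are arbitrary:

```python
import numpy as np
from collections import deque

def largest_region(mask):
    """Keep only the largest 4-connected True region (simple BFS labeling;
    stands in for the paper's 'biggest edge' selection)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    best_size = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        region = [(sy, sx)]
        seen[sy, sx] = True
        q = deque(region)
        while q:  # flood-fill one connected component
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    region.append((ny, nx))
                    q.append((ny, nx))
        if len(region) > best_size:
            best_size = len(region)
            best[:] = False
            ys, xs = zip(*region)
            best[list(ys), list(xs)] = True
    return best

def detect_catheter(image, reference, diff_thresh=30, red_thresh=50):
    """Subtract the reference image, threshold the difference, keep the largest
    region (the catheter), then flag red marker pixels inside it."""
    diff = np.abs(image.astype(int) - reference.astype(int)).max(axis=2)
    catheter = largest_region(diff > diff_thresh)
    r = image[..., 0].astype(int)
    gb = image[..., 1:].astype(int).max(axis=2)
    markers = catheter & (r - gb > red_thresh)
    return catheter, markers
```

Running this once per stereo image, then averaging the marker pixel positions between the left and right views, would yield the per-marker image coordinates used for triangulation.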
3 Determination of the Curvature of Each Segment Figure 6 shows the two coordinate systems assigned to the beginning points of each segment of the catheter. Frame 1, located at the beginning of the proximal shaft, is the global frame. The points (xp, yp) and (xd, yd) are the end points of the proximal shaft and the distal shaft, respectively, with respect to frame 1. The tip point, (xt, yt), is unimportant because the tip section does not bend. The point (x2d, y2d) is the end point of the distal shaft with respect to frame 2. A single segment of a catheter can be seen as an arc of a circle with radius R, with its center located at (0, R) on an xy-plane. The curvature of this circle represents the
Fig. 6. Coordinate Systems Assigned on a Bent Catheter: (a) Frame 1; (b) Frame 2
curvature of the segment. The formula for this circle can be expressed as Eq. (1). By rearranging the equation, the radius of the circle can be written as Eq. (2). Equation (2) implies that the radius of the circle can be determined if the coordinates of any single point on the circle are known. Since the end points of each segment are measured, the curvature of the catheter can be calculated.

x^2 + (y − R)^2 = R^2    (1)

R = (x^2 + y^2) / (2y)    (2)
The radius of the proximal shaft can be directly obtained using Eq. (2) and the point (xp, yp). The point (x2d, y2d) from frame 2, which is unavailable from direct measurement, is required to calculate the radius of the distal shaft. The angle of frame 2 with respect to frame 1, denoted as θ, is calculated as Lp/Rp, where Lp and Rp are the length and the radius of the proximal shaft, respectively. The transformation between frame 1 and frame 2 can be expressed as Eq. (3). Rearranging the equation gives Eq. (4), from which the point (x2d, y2d) is calculated. The radius of the distal shaft can then be calculated. The curvature of each segment is obtained by taking the inverse of the radius.

[xd; yd] = [xp; yp] + [cos(θ) −sin(θ); sin(θ) cos(θ)] [x2d; y2d]    (3)

[x2d; y2d] = [cos(θ) −sin(θ); sin(θ) cos(θ)]^(−1) ([xd; yd] − [xp; yp])    (4)
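Equations (2)-(4) translate directly into code. The sketch below is our own illustration (function names are not from the paper); it recovers the curvature of each segment from the measured endpoints:

```python
import math

def radius_from_point(x, y):
    """Eq. (2): radius of a circle through the origin with its center at (0, R)."""
    return (x * x + y * y) / (2.0 * y)

def segment_curvatures(xp, yp, xd, yd, Lp):
    """Curvatures of the proximal and distal shafts from the measured end
    points (xp, yp), (xd, yd) and the proximal shaft length Lp (Eqs. 2-4)."""
    Rp = radius_from_point(xp, yp)
    theta = Lp / Rp                       # rotation of frame 2 w.r.t. frame 1
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = xd - xp, yd - yp
    # Eq. (4): rotate the offset back into frame 2
    x2d = c * dx + s * dy
    y2d = -s * dx + c * dy
    Rd = radius_from_point(x2d, y2d)
    return 1.0 / Rp, 1.0 / Rd             # curvature = 1/R for each segment
```

A quick round-trip check: an arc of radius R and length L, starting at the origin and tangent to the x-axis, ends at (R sin(L/R), R(1 − cos(L/R))); generating endpoints from known radii and feeding them back recovers the original curvatures.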
4 Results and Discussion The catheter is subjected to three levels of bending moment, as shown in Fig. 7. The curvature data, available every 2 s for each segment, is presented in Fig. 8. The moment-curvature relationship for each segment can then be plotted, as shown in Fig. 9. Notably, the distal shaft undergoes significantly greater bending than the proximal shaft. The distal shaft exhibits hysteresis, while the proximal shaft shows a more linear response.
Fig. 7. Three Levels of Applied Moment

Fig. 8. Three Levels of Curvature: (a) Distal Shaft; (b) Proximal Shaft
Fig. 9. Measured Moment-Curvature Relationship: (a) Distal Shaft; (b) Proximal Shaft
5 Conclusion The moment-curvature relationship of a multi-segmented steerable catheter is measured using a load cell and stereovision system. The stereovision measurement demonstrated good reliability, with an error of less than 4 mm across the designated range. Additionally, the algorithm developed in this study successfully detects the moment-curvature relationship of each segment, indicating successful system development. Catheters show viscoelastic tendencies. The measurement method established in this paper will be used in future work to study the theory behind the catheter's bending and to identify viscoelastic material parameters. Ultimately, the aim is to contribute to improving surgical procedures by accurately predicting the tip position. Acknowledgements. This work was supported by the National Research Foundation (NRF), Korea, under project BK21 FOUR.
References 1. Stenqvist, O., Curelaru, I., Linder, L.E., Gustavsson, B.: Stiffness of central venous catheters. Acta Anaesthesiol. Scand. 27(2), 153–157 (1983) 2. Brandt-Wunderlich, C., Grabow, N., Schmitz, K.P., Siewert, S., Schmidt, W.: Cardiovascular catheter stiffness – a static measurement approach. Curr. Dir. Biomed. Eng. 7(2), 721–723 (2021) 3. Hu, X., Luo, Y., Chen, A., Zhang, E., Zhang, W.J.: A novel methodology for comprehensive modeling of the kinetic behavior of steerable catheter. IEEE/ASME Trans. Mechatron. 24(4), 1785–1797 (2019) 4. Ganji, Y., Janabi-Sharifi, F.: Catheter kinematics for intracardiac navigation. IEEE Trans. Biomed. Eng. 56(3), 621–632 (2009) 5. Jayender, J., Azizian, M., Patel, R.V.: Autonomous image-guided robot-assisted active catheter insertion. IEEE Trans. Rob. 24(4), 858–871 (2008) 6. Khoshnam, M., Azizian, M., Patel, R.V.: Modeling of a steerable catheter based on beam theory. In: International Conference on Robotics and Automation, Saint Paul, Minnesota, USA, pp. 4681–4686. IEEE (2012)
7. Nakagaki, H., Kitagaki, K., Ogasawara, T., Tuskune, H.: Study of insertion task of a flexible wire into a hole by using visual tracking observed by stereo vision. In: International Conference on Robotics and Automation, Minneapolis, Minnesota, pp. 3209–3214. IEEE (1996) 8. Dalvand, M.M., Nahavandi, S., Howe, R.D.: Fast vision-based catheter 3D reconstruction. Phys. Med. Biol. 61, 5128–5148 (2016) 9. Ji, Y.F., Chang, C.C.: Nontarget stereo vision technique for spatiotemporal response measurement of line-like structures. J. Eng. Mech. 134(6), 466–474 (2008) 10. Ryu, J., Ahn, H.Y., Kim, H.Y., Ahn, J.H.: Analysis and simulation of large deflection of a multi-segmented catheter tube under wire tension. J. Mech. Sci. Technol. 33(3), 1305–1310 (2019)
Manufacturing Processes and Automation
Study of the Best Operational Parameters in Laser Marking of Plastic Parts João Costa1, Francisco J. G. Silva1,2(B), Arnaldo G. Pinto1, Isabel Mendes Pinto1,3, and Vitor F. C. Sousa1,2
[email protected]
2 INEGI - Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial,
Porto, Portugal 3 Mathematical Engineering Laboratory, LEMA, Porto, Portugal
Abstract. Laser technology has been used more and more intensively in different areas of application, from cutting materials to medicine. More recently, this technique has also been used for labeling. However, scientific works on this topic are still very scarce. This work aimed to study the best operating parameters in the Laser Marking of Data Matrix Codes (DMCs) in plastic cases used for motorcycle instrument panels. The results showed that the best quality marking was performed using 19 W of laser power, 30 kHz of pulse frequency, 1000 mm/s of displacement speed and 30% of overlap, among a set of parameters studied around the manufacturer’s initial recommendations. The quality analysis was performed using Scanning Electron Microscopy technology, seeking to obtain the best possible definition on the PBT-GF30 FR composite material. Keywords: Laser technology · Marking · Laser marking · Composites · Quality
1 Introduction The current international situation in which industrial companies operate is characterized by intense competition, where new technologies assume an increasingly predominant role [1]. In this context, continuous improvement is assumed as a strategy that allows companies to maintain and improve competitiveness, making use of knowledge through the involvement of employees [2]. The components industry for the automotive and motorcycle sector needs to guarantee quality while preserving a rigorous traceability system [3]. This traceability involves the need to mark Data Matrix Codes (DMCs) on components manufactured in the most diverse materials [4]. Direct Part Marking (DPM) is a symbol-marking technology that has generated interest in recent years in the automotive, aerospace, medicine and decoration industries, for permanent and direct marking on parts, with reliability [4, 5]. DPM can be performed by several processes, such as laser marking, dot peening, and chemical etching. Among these methods, Laser Marking (LM) is chosen for writing symbols on small parts, as it offers a set of advantages that make this marking technology one of the most advanced. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 199–207, 2024. https://doi.org/10.1007/978-3-031-38241-3_23
J. Costa et al.
Some of its advantages are that it is a contactless process capable of being automated, characterized by high speed, repeatability and flexibility, and presenting reduced chemical and thermal impact on parts [5, 6]. According to Qiu, Bao and Lu [7], LM is a thermal process employing a concentrated, high-intensity beam of light to create a contrasting mark on a surface. This mark can be obtained either by removing material or by changing the color of the surface through localized heating [8]. The content to be marked can vary from a simple alphanumeric code or a DMC indicating technical data, serial numbers, date of manufacture, expiration date, etc., to functional symbols, certifications and company logos [9]. This enables application in several areas, such as the automotive industry (matrix code marking for registration of part numbers and logos on instrument panels, headlights and bumpers, and marking of cylinder bore sizes in engine blocks [7]), the electronics industry (marking on switches, coils, capacitors, silicon chips and printed circuit boards [10]), and medicine (marking of artificial joints [7]). However, in marking or cutting, the optimization of operational parameters is crucial [11]. Qi et al. [12], in an LM study carried out on stainless steel, found that increasing the pulse frequency increases the marking depth, which reaches its maximum at a pulse frequency of around 3 kHz. On the other hand, the marking depth decreases with increasing pulse frequency above 3 kHz, a phenomenon the authors explain by the change in peak power and average laser power with pulse frequency: peak power decreases and average power increases as the pulse frequency rises. That is, due to the increase in average power with increasing pulse frequency, the volume of evaporated material increases and so does the marking depth.
However, when the pulse frequency is high, the material does not completely evaporate and part of it remains in the molten state, because the peak power is no longer high enough to make it evaporate, causing the marking depth to decrease. These authors also verified that the marking contrast improves with increasing pulse frequency, being maximum at 8 kHz, since the increase translates into less material evaporation and more significant oxidation. Patel et al. [13] concluded that, with increasing pulse frequency, the average roughness (Ra) is lower, being minimum for a pulse frequency of 80 kHz and maximum for 20 kHz. In the same study, the average roughness (Ra) is maximum for a marking speed of 100 mm/s and decreases sharply with increasing marking speed up to 300 mm/s, then remains stable up to 500 mm/s. Also in this study, the average roughness (Ra) is maximum for 10 passes, decreases sharply up to 15 passes, then increases slightly again up to 20 passes. The experiments conducted by Leone et al. [14] on stainless steel were performed at room temperature, with a current intensity between 35 and 45 A, marking speeds of 50, 100 and 200 mm/s, and pulse frequencies in the range of 1 to 30 kHz. In each experiment, a 50 mm long straight line was marked in a single pass. It was found that surface roughness and oxidation increase with pulse frequency, resulting in improved contrast, up to a characteristic value (between 4 and 6 kHz), then decrease. For these reasons, the authors state that, although the visibility of the marking on stainless steel depends on the laser system used, relatively low pulse frequencies (between 4 and 6 kHz) and average powers should be used if the objective is good marking visibility. No studies on LM using polymer-based composites as substrate have been found.
Study of the Best Operational Parameters in Laser Marking
2 Materials and Methods
The composite material used as substrate to be marked was PBT-GF30 FR(17), supplied under the commercial name Ultradur® B 4406 G6. Its main properties are a specific weight of 1.65 g/cm3, a melting temperature of 223 °C, a thermal conductivity of 0.32 W/(m·K) at 23 °C, a specific heat of 900 J/(kg·K) at 23 °C, an ultimate tensile strength of 145 MPa, a Young's modulus of 11.3 GPa, and a Charpy impact strength of 55 kJ/m2 at −30 °C. The marking process was performed using Rofin® Coherent equipment, provided with a pulsed fiber laser with a maximum power of 20 W, a wavelength of 1064 nm, a spot diameter of 20 µm and a pulse frequency in the range of 20–80 kHz. The marking speed ranges from 0–20000 mm/s, the pulse duration from 4–200 ns, and the overlap from 0–100%. The quality of the marked DMCs with respect to the ISO/IEC 29158:2020 standard was analysed using REA® Vericube equipment. Due to the nonconductive nature of the samples, a Quorum® Q150R ES PLUS was used to deposit a thin layer of gold by the PVD technique. A deeper analysis of the marks produced was made using a HITACHI® FlexSEM 1000 Scanning Electron Microscope equipped with a Bruker® Quantax 80 EDS system. Four parameters were studied, each at three levels: laser power (15, 17, 19 W), pulse frequency (15, 25, 30 kHz), marking speed (1000, 2000, 3000 mm/s) and overlap rate (0, 15, 30%). A full factorial set of experiments was implemented, repeating each set of conditions 3 times to detect possible outliers. The operational conditions recommended by the laser marking equipment manufacturer are 17 W laser power, 25 kHz pulse frequency, 2000 mm/s marking speed and 15% overlap rate. To assess the quality of the DMCs, the following characteristics were analyzed: i) Decodability; ii) Cell Contrast (CC); iii) Cell Modulation (CM); iv) Fixed Pattern Damage (FPD); v) Axial Non-Uniformity (ANU); and vi) Grid Non-Uniformity (GNU).
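The full factorial plan just described (four parameters, three levels each, three repetitions per combination) can be enumerated programmatically. A minimal sketch in Python; the field names are illustrative, and the levels are taken from the text:

```python
from itertools import product

# Levels studied for each laser parameter (from the text)
levels = {
    "power_W": [15, 17, 19],
    "frequency_kHz": [15, 25, 30],
    "speed_mm_s": [1000, 2000, 3000],
    "overlap_pct": [0, 15, 30],
}
REPETITIONS = 3  # each combination is repeated to detect outliers

# Full factorial design: every combination of every level
runs = [
    dict(zip(levels, combo), repetition=rep)
    for combo in product(*levels.values())
    for rep in range(1, REPETITIONS + 1)
]

print(len(runs))  # 3^4 combinations x 3 repetitions = 243 runs
```
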
Reflectance is also a relevant feature, with a minimum required value of 5%. Decodability can take only two classes: class A, meaning that the code is easily readable, and class F, meaning that it is unreadable. The other features are each evaluated with a class, which can be A (4.0), B (3.0), C (2.0), D (1.0) or F (0.0), according to the criteria described in the ISO/IEC 29158:2020 standard, A being the best classification and F the worst. The global quality class assigned to the code corresponds to the minimum classification among all the characteristics evaluated. Some samples were also analyzed by SEM (Scanning Electron Microscopy) to verify the effect of the laser on the marked surfaces.
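The grading rule above (global class equal to the minimum over all evaluated features, with an automatic F for undecodable codes or reflectance below the 5% minimum) can be sketched as follows; the function and argument names are illustrative, not taken from the standard:

```python
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
POINTS_GRADE = {v: k for k, v in GRADE_POINTS.items()}

def global_grade(feature_grades, reflectance_pct, decodable):
    """Global DMC class = minimum over all evaluated features.

    feature_grades: letter grades for CC, CM, FPD, ANU, GNU, ...
    A code that cannot be decoded, or whose reflectance is below
    the 5% minimum, is graded F regardless of the other features.
    """
    if not decodable or reflectance_pct < 5.0:
        return "F"
    worst = min(GRADE_POINTS[g] for g in feature_grades)
    return POINTS_GRADE[worst]

# Example mirroring Fig. 2: CM = C, FPD = B, others A, reflectance 4%
print(global_grade(["A", "C", "B", "A", "A"], reflectance_pct=4.0, decodable=True))  # F
```

With these inputs the function reproduces the Fig. 2 outcome: the 4% reflectance alone drags the global class to F even though every graded feature is C or better.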
3 Results and Discussion
The results of the marking process analysis are presented here, showing the micrographs of the best and worst results obtained.
3.1 Analysis of Code Quality
The analysis of the code quality was carried out using a dedicated verifier. It should be noted that all marked codes, even those with global quality F, could be decoded, so a positive Decodability result was obtained for all DMCs.
Figures 1 and 2 show two examples of DMC analyses with global quality A and F, respectively, obtained through the code verifier. Figure 1 presents a DMC with quality A (4.0) in all its features, so its global quality classification is A (4.0). Figure 2, on the other hand, presents a DMC with an overall quality rating of F (0.0): it shows a reflectance value of 4% against a minimum requirement of 5% and is therefore rated F (0.0) in this feature. In this second example, although Cell Modulation (CM) is rated 2.0 (C), Fixed Pattern Damage (FPD) 3.0 (B) and the remaining features 4.0 (A), the global quality class assigned to the code is F (0.0), as it is limited to the lowest rating of all the features evaluated. After all codes had been analyzed with the verifier, the results were exported to MS Excel®, then treated and correlated using Minitab® software.
Fig. 1. Example of an A-quality DMC.
Fig. 2. Example of an F-quality DMC.
3.2 Statistical Analysis and Analysis of Variance
With the data collected by the code verifier, a statistical analysis and an analysis of variance (ANOVA) were performed using Minitab® software. These analyses make it possible to determine, for each feature, the importance of each laser parameter and the set of parameters that yields the best classification. Figure 3 shows an example for Cell Contrast (CC). From the Pareto chart it can be verified, with a confidence level of 95%, that all the laser parameters studied affect the result, with pulse frequency and marking speed having the greatest impact on CC. The graph on the right shows the parameter levels that maximize CC: laser power 19 W; pulse frequency 30 kHz; marking speed 1000 mm/s; overlap rate 30%.
Fig. 3. Pareto’s chart of standardized effects (left) and chart of main effects (right) for CC.
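The main-effects reading used in Fig. 3 (average the response over all runs sharing a given parameter level, then pick the level with the highest mean) can be sketched in Python. The result rows below are hypothetical, for illustration only:

```python
from collections import defaultdict

# Hypothetical (run parameters, CC score) records, for illustration only
results = [
    ({"power_W": 19, "frequency_kHz": 30, "speed_mm_s": 1000, "overlap_pct": 30}, 4.0),
    ({"power_W": 17, "frequency_kHz": 25, "speed_mm_s": 2000, "overlap_pct": 15}, 3.0),
    ({"power_W": 15, "frequency_kHz": 15, "speed_mm_s": 3000, "overlap_pct": 0}, 1.0),
    ({"power_W": 19, "frequency_kHz": 30, "speed_mm_s": 1000, "overlap_pct": 15}, 3.5),
]

def best_levels(results):
    """For each parameter, return the level with the highest mean response."""
    sums = defaultdict(lambda: [0.0, 0])  # (parameter, level) -> [total, count]
    for params, score in results:
        for name, level in params.items():
            acc = sums[(name, level)]
            acc[0] += score
            acc[1] += 1
    best = {}
    for (name, level), (total, n) in sums.items():
        mean = total / n
        if name not in best or mean > best[name][1]:
            best[name] = (level, mean)
    return {name: level for name, (level, _) in best.items()}

print(best_levels(results))
```

With these illustrative numbers the function returns the same optimum reported in the text (19 W, 30 kHz, 1000 mm/s, 30% overlap); with the real verifier data it would reproduce the main-effects chart readings.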
The same analysis carried out for CC was applied to the other features used to assess DMC quality. The Pareto chart of standardized effects in Fig. 4 shows the most significant parameters for the overall quality (Grade) of the DMC, with a confidence level of 95%. In decreasing order of importance, these are marking speed, pulse frequency and laser power; the overlap rate also contributes, although not significantly. The main effects graph in Fig. 4 (right) indicates the parameter levels that yield the best DMC quality. The best set of laser marking parameters to maximize quality is shown in Table 1.
Fig. 4. Pareto's chart of standardized effects (left) and main effects chart (right) for the DMC Grade.
Using Minitab® software, it was also possible to obtain an analysis of variance of the results. This analysis made it possible to determine the importance of each parameter for
the results of the various DMC features, and to verify their dispersion. The following tables present the best laser marking conditions for the DMCs, resulting both from the full-factorial DoE (using all the possible combinations of parameter variation) and from the ANOVA (Table 2).

Table 1. Best combination of laser marking parameters, by full-factorial DoE

Laser power | Pulse frequency | Marking speed | Overlap rate
19 W        | 30 kHz          | 1000 mm/s     | 30%
Table 2. Most significant laser marking parameters, by analysis of variance (ANOVA)

Results for CC           | Pulse frequency, marking speed and overlap rate
Results for CM, FPD, GNU | Pulse frequency and marking speed
Figure 5(a) shows a DMC marked with the laser parameters initially used in production, and Fig. 5(b) a DMC marked with the parameters defined by the DoE and presented in Table 1. There is a significant improvement in the definition and contrast of the marking: the codes appear more filled in and without gaps.
Fig. 5. Comparison of codes marked before (a) and after (b) the DoE.
3.3 SEM Analysis of the Marked Surface
After the tests, the surfaces of several laser-marked samples were analyzed in detail by SEM. With this analysis it was possible to determine the effect of the various laser marking parameters on the sample surface. Figure 6 shows the surface of an unmarked sample, and Figs. 7 and 8 show the surfaces of the samples with the worst and best quality, as determined by the code verifier.
Fig. 6. Unmarked surface.
Fig. 7. Surface with the worst quality marking, produced with a laser power of 15 W, a pulse frequency of 15 kHz and a marking speed of 3000 mm/s, with no overlap.
Fig. 8. Surface with the best quality marking, produced with a laser power of 19 W, a pulse frequency of 30 kHz, a marking speed of 1000 mm/s and an overlap rate of 30%.
From the tests carried out, it was found that increasing the laser power and pulse frequency resulted in a more evident surface marking; the same was observed when decreasing the marking speed. With these parameter changes, the interaction of the laser with the surface is greater. Increasing the overlap rate also makes the marking more evident, but it is the least significant parameter.
4 Conclusion
From the analyses carried out with the code verifier, with Minitab® software and by SEM, the following was verified:
• Increasing the laser power and pulse frequency makes the marking more evident;
• Decreasing the marking speed makes the marking more evident;
• The overlap rate does not have a significant impact on the overall rating given to code quality;
• Pulse frequency and marking speed are, relative to the other laser marking parameters, the most important for all DMC characteristics;
• The best set of laser marking parameters, for the material under study and the laser system used, was: laser power 19 W, pulse frequency 30 kHz, marking speed 1000 mm/s and overlap rate 30%.
The definition of the laser parameters that provide the best DMC marking quality allowed an improvement of the laser marking process on the assembly line, with a consequent reduction of the rejection rate, waste, number of complaints, and interventions by maintenance technicians.
Acknowledgements. The authors thank INEGI/LAETA for the support given to perform this work.
References
1. Garcia, J.A., Pardo, M., Bonavía, T.: Longitudinal study of the results of continuous improvement in an industrial company. Team Perform. Manag. An Int. J. 14(1–2), 56–69 (2008)
2. Lorenzo, A., Prado, J.: Employee participation systems in Spain: past, present and future. Total Qual. Manag. Bus. Excell. 14(1), 15–24 (2003)
3. Costa, M.J.R., Gouveia, R.M., Silva, F.J.G., Campilho, R.D.S.G.: How to solve quality problems by advanced fully automated manufacturing systems. Int. J. Adv. Manuf. Technol. 94, 3041–3063 (2018)
4. Li, J., et al.: Experimental investigation and mathematical modeling of laser marking two-dimensional barcodes on surfaces of aluminum alloy. J. Manuf. Process. 21(1), 141–152 (2016)
5. Li, C., Lu, C., Li, J.: Research on the quality of laser marked data matrix symbols. Key Eng. Mater. 764(1), 219–224 (2018)
6. Velotti, C., Astarita, A., Leone, C., Genna, S., Minutolo, F.M.C., Squillace, A.: Laser marking of titanium coating for aerospace applications. Procedia CIRP 41(1), 975–980 (2016). https://doi.org/10.1016/j.procir.2016.01.006
7. Qiu, H., Bao, W., Lu, C.: Investigation of laser parameters influence of direct-part marking data matrix symbols on aluminum alloy. Appl. Mech. Mater. 141(1), 328–333 (2012)
8. Jangsombatsiri, W., Porter, J.D.: Artificial neural network approach to data matrix laser direct part marking. J. Intell. Manuf. 17(1), 133–147 (2006)
9. Santo, L., Trovalusci, F., Davim, J.: Laser applications in the field of plastics. In: Comprehensive Materials Processing, pp. 243–260. Elsevier, Amsterdam, Netherlands. ISBN: 978-0080965321 (2014)
10. Wimmer, C., Moser, S., Neel, T., Zhen, L.: Readability of directly-marked traceability symbols on PCBs. Omron Microscan Industrial Automation Solutions. https://usermanual.wiki/Microscan/ReadabilityOfDirectlyMarkedTraceabilitySymbolsOnPcbs.865582092.pdf. Accessed 16 Feb. 2022
11.
Amaral, I., Silva, F.J.G., Pinto, G.F.L., Campilho, R.D.S.G., Gouveia, R.M.: Improving the cut surface quality by optimizing parameters in the fibre laser cutting process. Procedia Manuf. 38, 1111–1120 (2019) 12. Qi, J., Wang, K., Zhu, Y.: A study on the laser marking process of stainless steel. J. Mater. Process. Technol. 139(1), 273–276 (2003)
13. Patel, D.M., Dharmesh, K.: Analysis the effect of laser engraving process for surface roughness measurement on stainless steel (304). Int. J. Adv. Sci. Tech. Res. 3(4), 725–730 (2014)
14. Leone, C., Genna, S., Caprino, G., de Iorio, I.: AISI 304 stainless steel marking by a Q-switched diode pumped Nd:YAG laser. J. Mater. Process. Technol. 210(10), 1297–1303 (2010)
Laser Marking on White-Coloured Polyoxymethylene (POM) Polymer Substrate: Challenges and Perspectives Stanley Udochukwu Ofoegbu1,2(B) , Paulo J. A. Rosa1,2 , Fábio A. O. Fernandes1,2 , António B. Pereira1,2 , and Pedro Fonseca3 1 Centre for Mechanical Technology and Automation (TEMA), Department of Mechanical
Engineering, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal [email protected] 2 LASI—Intelligent Systems Associate Laboratory, Aveiro, Portugal 3 Institute of Electronics and Informatics Engineering of Aveiro/Department of Electronics, Telecommunications, and Informatics (IEETA/DETI), University of Aveiro, 3810-193 Aveiro, Portugal
Abstract. Polyoxymethylene (POM) is a semi-crystalline engineering thermoplastic employed in the production of precision parts for applications that demand good dimensional stability and frictional properties. Consistent with modern manufacturing practices, laser marking is a desirable method for marking products for branding and coding, in order to enhance traceability and accountability, due to its lower costs and amenability to automation. However, laser marking of transparent and white-coloured polymers is a technological challenge, and marking of polyoxymethylene substrates can be particularly difficult due to their low laser absorption. In this work, white-coloured POM was laser-marked using a wide range of marking parameters, and a promising range of parameters for laser marking of white POM substrates was identified. Results indicate that the most promising marking quality is obtained with a laser power of 30 W (100% of the equipment laser output) and frequencies in the range of 30 to 80 kHz. Perspectives on improving the quality of the laser marking outputs are presented, based on the understanding of the degradation behaviour of POM under the laser marking conditions employed. A new method for improving the laser markability of polymers, without the need to incorporate marking aids in polymer compositions, is demonstrated. Keywords: Laser absorption · Contrast · Carbonization · Foaming
1 Introduction
Laser marking is an important component of today's industrial manufacturing practice. A wide variety of materials can be marked using lasers; however, the particular materials that a given laser source can mark depend on various factors. Laser marking of polymeric materials is achieved by heat-induced localized thermal degradation in the area of polymer exposed to laser irradiation, due to a localized increase in temperature above the decomposition temperature of the polymer.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 208–215, 2024. https://doi.org/10.1007/978-3-031-38241-3_24
The appearance of the marking depends on the energy absorbed and, consequently, on the local rise in temperature. Laser marking of polymers occurs via two main competing mechanisms: degradation-induced carbonization [1, 2] and foaming [3]. A third mechanism is laser-induced thermal chemical reactions in pigmented polymers, resulting in colour changes and hence generating contrast [4]; this mechanism can occur in pigmented polymers and in polymer composites loaded with appropriate laser-sensitive additive(s). When the polymer substrate is raised to the temperature(s) at which it carbonizes, the marking appears dark. When, instead, the local temperature rise is sufficient to degrade the polymer to gases, the polymer expands, forming bubbles, and the markings appear light-coloured. Since both laser marking mechanisms compete, laser parameters must be optimized to obtain a dark or a light marking. Laser marking of transparent and white-coloured polymers is a technological challenge [5]. Various solutions have been employed to overcome this challenge and successfully laser mark these classes of polymeric materials. These approaches mainly include doping the target substrate or matrix material (throughout its bulk or up to a certain depth from the surface) with light-absorbing species [5–7]. Other approaches employ laser-sensitive coatings on the marking area, or laser-induced dye diffusion, in which the dye absorbs the laser irradiation, melts and diffuses into the polymer substrate only in the area exposed to the laser beam, which then exhibits spectral signatures distinct from those of the solid dye and the polymer substrate [8]. Zelenska et al.
[5] employed light-absorbing microparticles (a highly viscous polystyrene suspension of light-absorbing microparticles) to laser mark transparent polymers with pulsed radiation from a Q-switched YAG:Nd3+ laser (wavelength λ = 1064 nm, pulse duration τ = 20 ns) and reported thermal decomposition (pyrolysis) of the polymer in the vicinity of overheated particles, which absorb energy from the laser and attain temperatures equal to or higher than the polymer's decomposition temperature. Cao et al. [6] added molybdenum sulfide (MoS2) in concentrations ranging from 0.005 to 0.2 wt.% as laser-sensitive particles to poly(propylene) (PP) as the matrix resin, using the melt-blending method, and reported the best laser marking performance at 0.02 wt.% MoS2 content in the PP/MoS2 composite at a laser marking current intensity of 11 A and a marking speed of 800 mm/s. The clear, darker, high-contrast marking pattern was attributed to PP pyrolysis and carbonization, forming amorphous carbonized material on the surface of the PP/MoS2 composite due to MoS2-enhanced laser absorption and subsequent conversion to heat. Zhong et al. [9] reported successful high-contrast laser markings on transparent thermoplastic polyurethane (PU) by adding bismuth oxide (Bi2O3, in the range 0.1–1.0%), with optimum results at 0.3% Bi2O3 content, using an Nd:YAG laser with a wavelength of 1064 nm. The authors attributed the high marking contrast to the synergistic effects of PU carbonization and Bi2O3 reduction to black bismuth metal under laser irradiation. Polyoxymethylene (POM) is an engineering thermoplastic employed in applications requiring high stiffness, low friction and excellent dimensional stability, and has hitherto been considered unmarkable [10]. POM is difficult to laser mark due principally to its low laser absorption.
Moreover, some of the compounds [11] used as marking aids and incorporated in polymer composition might present potential health risks [12]. This
could affect their application in polymers for medical and food packaging applications. To overcome these challenges, new strategies are needed for marking polymeric materials that do not require prior manipulation of polymer compositions or the incorporation of marking aids. In response to this technological need, this work presents results from attempts at marking the difficult-to-mark POM without manipulation of the polymer composition by the addition of laser marking aids. The strategy consists of employing two different "inks" as masking agents to achieve better laser-marking outcomes.
2 Materials and Methods
Marking Without Masking (Laser Absorption Aid): White polyoxymethylene slabs of 5 mm thickness, sourced from Polylanema (Portugal), were marked using a Videojet 7440 fibre laser marker (a pulsed ytterbium fibre laser) with a laser wavelength (λ) of 1040–1060 nm, using a variety of marking parameters. This laser marker has a nominal pulse energy of 1 mJ, a nominal laser power of 30 W, a peak frequency variable between 1 and 400 kHz, a pulse duration variable from 160 to 200 ns, and a marking speed variable from 1 to 30,000 mm/s.

Table 1. Key variables employed in laser marking

Marking speed (mm/s): 15, 50, 100, 150, 400, 500
Laser power (%): 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
Frequency, lower range (kHz): 10, 20, 30, 40, 50, 60, 70, 80
Frequency, higher range (kHz): 100, 150, 200, 250, 300, 350, 400
Number of passes: 1, 2
Rotation: 90°
The polyoxymethylene substrates were marked using a variety of marking parameters (Table 1) (laser power, frequency, marking speed, number of marking runs, and changes in orientation), in the as-received condition without any masking, and also after masking.
Marking with Masking (Laser Absorption Aid): The POM substrates were next masked by applying two layers of cleanable marker ink (blue BIC Velleda, France) and permanent marker ink (blue PILOT permanent marker 400, Japan), respectively, before
marking, in an attempt to improve laser absorption (Fig. 1). After laser marking, the ink was cleaned off with alcohol in order to observe and evaluate the markings.
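The marking matrices evaluated in the next section are grids with one test cell per power/frequency combination. A minimal sketch of how such a grid of marking jobs can be enumerated, with an assumed cell spacing (the 5 mm value and the field names are illustrative, not from the text):

```python
# Generate (power %, frequency kHz) test cells laid out as a grid,
# one cell per parameter combination, as in the marking matrices.
powers_pct = range(10, 101, 10)      # 10 ... 100 % of the 30 W maximum
frequencies_khz = range(10, 81, 10)  # 10 ... 80 kHz (lower frequency range)
CELL_MM = 5.0  # assumed spacing between test cells on the slab

cells = [
    {"power_pct": p, "freq_khz": f,
     "x_mm": col * CELL_MM, "y_mm": row * CELL_MM}
    for row, p in enumerate(powers_pct)
    for col, f in enumerate(frequencies_khz)
]

print(len(cells))  # 10 power levels x 8 frequencies = 80 cells
```
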
Fig. 1. Scheme employed in laser marking on POM substrates
3 Results and Discussion
Attempts at marking the as-received POM substrates without masking were made using a slow marking speed of 15 mm/s, without success. Hence, laser marking on unmasked POM surfaces was discontinued. As described in the schematics of Fig. 1, laser marking involved the prior application of two layers of cleanable marker ink (blue BIC Velleda whiteboard marker, France), which was cleaned off with alcohol after marking to observe the markings, resulting in the matrices of laser markings presented in Figs. 2, 3, 4 and 5. Laser marking after the application of permanent marker ink layers (blue PILOT permanent marker 400, Japan) did not result in successful laser marking of POM. Figure 2 presents the matrix generated by single-run laser marking at a speed of 15 mm/s, with the laser frequency varied from 10–80 kHz and the laser power varied from 10 to 100% (maximum laser power 30 W). At this slow marking speed, which permits more interaction of the laser with the polymer, few good marking outputs were obtained, owing to extensive polymer degradation and the production of fumes, which forced further marking at this speed to be discontinued. At this marking speed (15 mm/s), the best marking was obtained at a laser power of 80% and a frequency of 20 kHz (red square in Fig. 2), while fairly good markings were obtained at a laser power of 60% and frequencies of 60 and 70 kHz. At some combinations of laser power and frequency (annotated with red circles in Fig. 2), significant foaming and melting of the polymer through its thickness (5 mm) occurred, which completely or markedly obscured the markings. Foaming and melting were most pronounced at 90% laser power and a frequency of 70 kHz. Since foaming is due to the
thermal degradation of the polymer under laser irradiation to gases, it can be inferred that at these combinations of laser power and frequency (annotated with red circles in Fig. 2) foaming predominates over carbonization as the thermal degradation mechanism.
Fig. 2. Marking matrix generated by laser marking at a speed of 15 mm/s with laser frequency varied from 10–80 kHz and laser power ranged from 10 to 100% (maximum laser power is 30 W).
Figure 3 presents the backside of the POM sample marked at 15 mm/s (frontside presented in Fig. 2). Examination of Fig. 3 reveals the magnitude of the through-thickness (5 mm) degradation on the POM sample at different combinations of laser frequency and laser power. This degradation manifested on the side opposite the marked surface as polymer ablation and burn-out (Fig. 3).
Fig. 3. Damage to the backside of polyoxymethylene surface after masking and marking on the opposite surface at 15 mm/s (The surface marked is presented in Fig. 2).
The laser marking speed was then increased to 50 mm/s, with the laser power varied from 10–100% and the laser frequency from 10–80 kHz. The resulting laser marking matrix is presented in Fig. 4, from which the range of possible (lemon-green square in Fig. 4) and most feasible (red square in Fig. 4) marking parameters can be observed. Parameters in the area denoted by the purple square in Fig. 4 were not explored (laser marked), as the experiment was discontinued due to significant fume release. The results presented in Fig. 4 indicate that the most promising marking quality is obtained with a laser power of 100%.
Fig. 4. Matrix generated by single-pass laser marking of POM at a speed of 50 mm/s and laser power varying from 10–100% and laser frequency ranging from 10–80 kHz.
Leveraging the insight from Fig. 4 that 100% laser power yielded laser markings of good quality, further investigations were carried out at 100% laser power. Since, from the point of view of automation to meet production volumes, the fastest marking speed that delivers consistently acceptable laser markings is preferred, experimental marking runs were carried out at much higher marking speeds (than in Fig. 4) and laser frequencies (100 to 400 kHz). The matrices generated for marking speeds of 100 mm/s and 500 mm/s, at 100% laser power and laser frequencies ranging from 100 kHz to 400 kHz, are presented in Figs. 5a and b, respectively. It should be noted that, besides the effect of higher marking speeds and higher laser frequencies, the effect of repeated marking runs (two laser passes) on marking quality/legibility was studied by changing
the orientation of the second pass by 90° in relation to the first laser pass. Figure 5 shows that the best laser marking on POM was obtained by a single pass marking at 100% laser power, a frequency of 100 kHz, and a marking speed of 100 mm/s (red square in Fig. 5).
Fig. 5. Matrix generated by single and double-pass laser marking at 100% laser power and marking speeds of (a) 100 mm/s and (b) 500 mm/s with laser frequency varied from 100–400 kHz.
4 Conclusions
As many polymeric materials are not good absorbers of laser light at wavelengths around 1064 nm, which are often employed in laser marking, it is necessary to introduce absorption enhancers in the polymer substrates during the manufacturing stage, or to use special powders or masks that improve laser absorption. The use of masks is essential because, in some applications, it might not be feasible to change the polymer composition to integrate laser additives into the blend. Results from this work demonstrate that:
• the use of laser-absorbing wipeable masking agents can be a feasible approach to laser marking of difficult-to-mark polymeric materials;
• marking quality is improved with the application of a double coat of the mask (wipeable blue whiteboard marker).
• Insights from this work indicate that the feasible ranges of parameters for successful laser marking on POM are: single-pass marking at a marking speed of 50–100 mm/s, a laser frequency of 30–80 kHz, and a laser power in the range of 70–100% of the maximum laser power output of 30 W.
Acknowledgments. The authors acknowledge support from the projects AdaptMark - Intelligent, autonomous, and flexible robotic component marking system (POCI-01-0247-FEDER-046982), UIDP/00481/2020, and CENTRO-01-0145-FEDER-022083 - Centro Portugal Regional Operational Programme (Centro 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund.
Influence of Laser Beam Intensity Distribution on Keyhole Geometry and Process Stability Using Green Laser Radiation

Florian Kaufmann1(B), Andreas Maier1, Julian Schrauder1, Stephan Roth1,2, and Michael Schmidt1,2,3
1 Bayerisches Laserzentrum GmbH (blz), Konrad-Zuse-Street 2–6, 91052 Erlangen, Germany
[email protected]
2 Erlangen Graduate School in Advanced Optical Technologies (SAOT), Paul-Gordan-Street 6,
91052 Erlangen, Germany 3 Institute of Photonic Technologies (LPT), Friedrich-Alexander Universität
Erlangen-Nürnberg, Konrad-Zuse-Street 3–5, 91052 Erlangen, Germany
Abstract. Laser beam welding is increasingly employed to join copper materials. The use of green laser radiation offers significantly higher absorptivity for these metals compared to near-infrared radiation. Therefore, a change in process stability and defect formation is expected. In addition, the effect of modifying the intensity distribution on the formation of weld seam defects and the geometric properties of the seam in deep penetration mode is largely unexplored. Thus, the aim of this work is the characterization of process dynamics and defect formation in correlation to the focal position and the intensity distribution by means of high-speed imaging and metallographic analysis. A significant reduction of seam imperfections is observed for a Gaussian beam profile compared to a Top Hat intensity distribution. An advantageous seam shape and the earlier onset of the deep penetration welding process favor the application of this intensity distribution, while medium to high processing speeds further improve the processing quality.

Keywords: Laser beam welding · green laser radiation · intensity distribution · electromobility · process observation · quality improvement
1 Introduction

The continuously increasing demands on electrical equipment require the use of the most suitable material and processing method. With respect to current-carrying connections in automotive applications, laser beam welding of copper is a key technology that reveals new opportunities through the use of visible laser radiation. In general, the physical properties of copper, like its high thermal conductivity and low melt viscosity, impede the welding process compared to steel. Beam sources with wavelengths outside the conventional near-infrared (NIR) range, for example green laser radiation (λ = 515 nm), show a changed energy coupling into the material. This fact has particular impact on

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 216–227, 2024. https://doi.org/10.1007/978-3-031-38241-3_25
the process performance in terms of weld seam morphology and the occurrence of seam imperfections like spatters and porosity. Approaches presented in the literature for modifying the energy input at the capillary front wall, and their reducing effect on the formation of process imperfections, indicate the possibility of using laser beam shaping to manipulate process imperfections in a targeted manner. Therefore, the aim of this work is to analyze the influence of the laser beam intensity distribution on the seam morphology and process stability, to improve the understanding of the laser welding process of copper using a visible processing wavelength.
2 State of the Art

In general, the laser welding process can be divided into two regimes, namely heat conduction welding (HCW) and deep penetration welding (DPW) mode. The latter is characterized by the presence of a vapor channel (also called “keyhole”), which guides the laser beam into the material through multiple reflections on the capillary wall, resulting in weld seams with a high aspect ratio [1]. As this phenomenon leads to an efficient use of the irradiated energy and facilitates the processing of metals in the millimeter range using a laser spot of a few hundred microns in diameter, DPW is frequently applied for laser welding of copper materials and will therefore be the focus of this paper. The long-term goal is to make the laser welding process of copper as stable as possible. This is of special interest for e-mobility components (hairpin, battery tab or busbar welding), since laser welding is often applied in later stages of the value chain and rejects or rework cause high additional costs for the manufacturer. The quality requirements for these processes are therefore particularly high. A stable welding process is defined by consistent welding results, i.e. low fluctuations in the seam geometry with regard to penetration depth and seam width, as well as a low occurrence of defects, especially concerning porosity and spatter formation [2]. For copper materials, it is particularly difficult to generate a stable welding process at moderate feed rates (v ≤ 6 m/min) with conventional NIR lasers, which are well researched for steel joining [3]. This fact is attributed to the specific properties of copper. Its absorptivity for wavelengths of λ ≥ 1 μm is in the lower single-digit percentage range at room temperature and increases erratically at the transition from the solid to the liquid state.
Furthermore, a sudden decrease of the thermal conductivity in the molten state is unfavorable for the welding process, as it leads to heat accumulation in the interaction zone and thus to increased spatter formation and an irregular seam appearance [4]. For green laser radiation (λ = 515 nm), higher absorptivity values around A = 40–50% are reported at room temperature, and no change with temperature of the order of magnitude seen for NIR radiation is present. Due to the improved energy coupling at the first laser-matter interaction, less energy is required to reach the melting and evaporation temperature of copper, and the process window for DPW is observed to be enlarged [5]. The most common defects in copper welding, regardless of the processing wavelength, are pores, spatter formation, melt ejection with subsequent hole formation, and seam root collapse [6]. An overview of these seam imperfections is presented in Fig. 1. Pores are classified as spherical gas-containing inclusions, distinguishing between gas and process pores [7]. Gas porosity results from the precipitation of dissolved gases
218
F. Kaufmann et al.
Fig. 1. Weld seam defects in copper welding – exemplary top view and cross-sections – a) pores, b) spatter formation, c) melt ejection, d) seam root collapse.
during solidification, while the latter are inclusions of inert gas and metal vapor [8]. According to [9], small pores (

triplet. The result of the second stage, i.e. the tool diameter, toolpath strategy and overlap, is taken for granted, yielding a constant toolpath length L on each cutting plane. The determined value of the depth of cut was challenged by examining it in the continuous domain, especially since it may violate the cutting stability constraint. The objective function is the machining time TC, expressed as:

TC = roundup(btot/b) · π · D · L / (z · 1000 · fz · Vc)   (2)
where btot is the depth of the pocket, z is the number of teeth, and D the tool diameter. This is an approximation of the actual machining time calculated by the CAM system, but it is convenient as a closed-form function directly insertable into the GA code. In fact, Eq. (2) for nominal conditions (btot = 20 mm, b = 3 mm, z = 6, f = 170 mm/min (fz = 0.052 mm/tooth), n = 550 rpm (Vc = 108.85 m/min), L = 4567.3 mm, D = 63 mm) gives TC = 186.32 min, which is close enough to the CAM value of 192.65 min, especially if the rapid feed moves, amounting to 2 min, are added. Several technological constraints were taken into account in the GA, as follows. The ranges of the tool insert parameters are: fz (mm/tooth) [0.052, 0.12], Vc (m/min) [70, 150], b (mm) [2, 5]. Note that the high values are constraints set by the tool manufacturers, whilst the low values are set by experienced machinists. The surface roughness of the pocket wall is not of particular concern in rough machining. However, a limit was set at 3 times the surface roughness specification for the finished pocket wall (2.5 μm), to avoid surface anomalies owing to extreme cutting conditions: Ra = 64.2 fz²/D ≤ 7.5 μm. Given that the tool diameter D is fixed, this constraint acts only on the feed per tooth fz. The power consumed by the machining operation PC is calculated from Eq. (3) and has to be lower than the maximum power of the machine spindle (22 kW):

PC = (b · B / (60 · 10^6 · η)) · Kc · (z · fz · Vc · 1000 / (π · D))   (3)

where B is the width of cut, η is the machine efficiency, in the range 0.95–1 and in this case 0.97 to match the specific machine's age and condition, and Kc is the specific power.
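Eq. (2) is directly executable; a minimal sketch reproducing the nominal-conditions check above:

```python
import math

def machining_time_min(btot, b, z, fz, Vc, L, D):
    """Eq. (2): number of axial passes (roundup) times toolpath time per pass, in minutes."""
    passes = math.ceil(btot / b)                       # roundup(btot / b)
    return passes * math.pi * D * L / (z * 1000 * fz * Vc)

# Nominal conditions from the text: btot = 20 mm, b = 3 mm, z = 6,
# fz = 0.052 mm/tooth, Vc = 108.85 m/min, L = 4567.3 mm, D = 63 mm
Tc = machining_time_min(20, 3, 6, 0.052, 108.85, 4567.3, 63)
# Tc ≈ 186.3 min, close to the CAM-reported 192.65 min
```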
262
A. Iliopoulos and G.-C. Vosniakos
This is tabulated by the tool manufacturer for the specific material as a function of fz, enabling a 3rd-degree polynomial fit (R² = 0.9955) to yield:

Kc = −8003.1 · fz³ + 10127 · fz² − 5119.2 · fz + 2953.2   (4)
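Eqs. (3) and (4) combine into a simple spindle-power feasibility check. A sketch for the nominal conditions, using the mean width of cut B = 29.47 mm derived below and reading the result of Eq. (3) in kW, as the 22 kW comparison implies:

```python
import math

def kc(fz):
    """Eq. (4): specific power, fitted as a cubic in the feed per tooth fz."""
    return -8003.1 * fz**3 + 10127 * fz**2 - 5119.2 * fz + 2953.2

def spindle_power(b, B, fz, Vc, z=6, D=63.0, eta=0.97):
    """Eq. (3): cutting power consumed by the operation."""
    feed = z * fz * Vc * 1000 / (math.pi * D)   # table feed, mm/min
    return (b * B / (60e6 * eta)) * kc(fz) * feed

# Nominal conditions: b = 3 mm, B = 29.47 mm, fz = 0.052 mm/tooth, Vc = 108.85 m/min
P = spindle_power(3.0, 29.47, 0.052, 108.85)
assert P < 22.0  # comfortably below the spindle power limit
```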
The width of cut varies continuously in upcut milling, which is difficult to express by a continuous function; thus it was opted to calculate a mean width of cut B based on the material removal rate (MRR), expressed as total machined volume over total machining time, obtained by the CAM software as 2,895,872 mm³ and 192.65 min respectively, yielding MRR = 15,032 mm³/min. Since MRR = b · B · f, where nominally the feed rate f and depth of cut b are known, B = 29.47 mm. The stability lobe diagram is crucial for ensuring that chatter and its detrimental effects do not occur. It is generally difficult to obtain stability lobe diagrams for a machine and material type since they are cutting-tool specific. Thus, experimental determination is necessary despite its cost. A pertinent diagram was obtained for the particular machine with a cutting tool very similar to the one defined in Sect. 3, see Fig. 4. This was used in proving the GA optimisation concept in realistic terms.
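The mean width of cut follows from the CAM-reported totals by a one-line calculation (all values from the text):

```python
total_volume = 2_895_872            # total machined volume, mm^3 (from CAM)
total_time = 192.65                 # total machining time, min (from CAM)
mrr = total_volume / total_time     # material removal rate, mm^3/min
b, f = 3.0, 170.0                   # nominal depth of cut (mm) and table feed (mm/min)
B = mrr / (b * f)                   # mean width of cut, mm
# mrr ≈ 15,032 mm^3/min and B ≈ 29.47 mm, matching the values quoted above
```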
Fig. 4. Stability lobe diagram applicable
The diagram was discretised using PlotDigitizer, defining 230 points that were entered into a coordinate table in Matlab™ and used in stability constraint checking by the GA. In particular, for each value of spindle speed examined by the GA, the corresponding limiting depth of cut is calculated by linear interpolation and compared with the depth of cut currently examined, in order to retain the lower of the two. GA results are shown in Table 1, with and without consideration of stability lobes. GA execution without consideration of stability lobes resulted in a substantially higher depth of cut, by about 33%; its correction increases machining time. However, the machining time with cutting conditions suggested by machinists is at least 5 times higher. This might be attributed to the subjectivity of human experts and the tendency to always be on the safe side. Table 2 presents the GA hyperparameters used in the Matlab™ optimisation platform.
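The interpolation-based stability check can be sketched as follows; the digitised lobe points below are hypothetical placeholders, not the 230 points extracted from Fig. 4:

```python
import numpy as np

# Hypothetical digitised stability-lobe points:
# spindle speed (rpm) vs. limiting depth of cut (mm)
lobe_n = np.array([400.0, 480.0, 550.0, 620.0, 700.0])
lobe_b_lim = np.array([5.0, 3.1, 4.8, 2.9, 4.4])

def stable_depth(n_rpm, b_candidate):
    """Clamp a candidate depth of cut to the linearly interpolated stability limit."""
    b_lim = np.interp(n_rpm, lobe_n, lobe_b_lim)
    return min(b_candidate, b_lim)
```

For each spindle speed examined by the GA, np.interp returns the limiting depth between the two nearest digitised points, and the lower of the candidate and the limit is retained; the correction from b = 4.56 mm to 3.54 mm in Table 1 reflects this clamping.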
Optimisation of CNC Machining Part Programs
263
Table 1. GA (third stage) results compared with first and second stage results.

                               Experience   Taguchi   GA-no lobes   GA-lobes
Cutting speed Vc (m/min)       109          109       147           147
Feed per tooth fz (mm/tooth)   0.052        0.052     0.118         0.118
Depth of cut b (mm)            2.00         3.00      4.56          3.54
Machining time (min)           271.37       197.92    43.50         52.22
Table 2. GA setup parameters

Population: Type = Double Vector; Size = 50 (for ≤ 5 variables); Creation Function = Constraint Dependent
Fitness: Scaling = Proportional
Selection: Function = Tournament; Tournament Size = 4
Reproduction: Elite Count = 0.05 · Population Size; Crossover Fraction = 0.6
Mutation: Function = Adaptive Feasible
Crossover: Function = Constraint Dependent
Migration: Direction = Forward; Fraction = 0.2
Constraint parameters: Nonlinear constraint algorithm = Penalty
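The setup above maps onto a deliberately simplified real-coded GA; this is a sketch, not the Matlab™ ga configuration of the paper: the arithmetic crossover, Gaussian mutation, penalty weight and generation count are illustrative assumptions, while the objective and constraints follow Eqs. (2)–(4):

```python
import math
import random

random.seed(1)

# Decision variables and bounds from the paper: fz (mm/tooth), Vc (m/min), b (mm)
BOUNDS = [(0.052, 0.12), (70.0, 150.0), (2.0, 5.0)]
B_MEAN, Z, D, L_PATH, BTOT = 29.47, 6, 63.0, 4567.3, 20.0

def kc(fz):
    return -8003.1 * fz**3 + 10127 * fz**2 - 5119.2 * fz + 2953.2  # Eq. (4)

def machining_time(x):
    fz, Vc, b = x
    return math.ceil(BTOT / b) * math.pi * D * L_PATH / (Z * 1000 * fz * Vc)  # Eq. (2)

def cost(x):
    fz, Vc, b = x
    ra = 64.2 * fz**2 / D                                                     # <= 7.5 um
    power = (b * B_MEAN / (60e6 * 0.97)) * kc(fz) * (Z * fz * Vc * 1000 / (math.pi * D))  # Eq. (3), <= 22 kW
    penalty = sum(1e4 * v for v in (ra - 7.5, power - 22.0) if v > 0)         # illustrative penalty weight
    return machining_time(x) + penalty

def tournament(pop, k=4):                 # tournament size 4, as in Table 2
    return min(random.sample(pop, k), key=cost)

def crossover(p, q):                      # arithmetic crossover (illustrative)
    a = random.random()
    return [a * u + (1 - a) * v for u, v in zip(p, q)]

def mutate(x, rate=0.2):                  # bounded Gaussian mutation (illustrative)
    return [min(hi, max(lo, g + random.gauss(0.0, 0.05 * (hi - lo)))) if random.random() < rate else g
            for g, (lo, hi) in zip(x, BOUNDS)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(50)]
for _ in range(40):
    pop.sort(key=cost)
    pop = pop[:2] + [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(48)]  # elitism
best = min(pop, key=cost)
```

With the fixed mean width of cut, neither the roughness nor the power constraint binds here, so the GA tends to push fz and Vc toward their upper bounds, consistent in direction with the third-stage results in Table 1.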
5 Conclusions

This work proved that even when relatively simple rough machining operations are concerned, such as 2.5D pocket milling, there is ample scope for optimising toolpath and machining parameters on CAM systems when striving to reduce machining time. The novelty of the approach is that a large problem is split into three subproblems tackled sequentially by the right tools. In view of the large number of parameters concerned, it was shown that a three-stage process is beneficial, starting with the establishment of a basic toolpath with relatively conservative cutting conditions, followed by two more stages. In the second stage, improvement of the toolpath strategy (machining time reduction by 27% in this case) is achieved by Taguchi DoE, which considers only discrete values set by the experienced user. In the third stage, continuous-domain optimisation of cutting conditions is achieved through a GA (machining time reduction by a further 73% in this case) under a multitude of crucial constraints. A major benefit is that it is straightforward to extend the methodology to other domains in machining by modifying objective function criteria, constraints, etc. Pareto-style multi-objective GAs are a further possibility for the future.

Acknowledgments. Part of this work is funded by the European Commission in framework HORIZON-WIDERA-2021-ACCESS-03, project 101079398 ‘New Approach to Innovative Technologies in Manufacturing (NEPTUN)’.
NAM-CAM: Neural-Additive Models for Semi-analytic Descriptions of CAM Simulations Konstantin Ditschuneit1 , Adem Frenk1 , Markus Frings2 , Viktor Rudel3 , Stefan Dietzel1 , and Johannes S. Otterbach1(B) 1 Merantix Momentum, Berlin, Germany {konstantin.ditschuneit,johannes.otterbach}@merantix.com 2 ModuleWorks, Aachen, Germany 3 Fraunhofer IPT, Aachen, Germany
Abstract. Computer-Aided Manufacturing (CAM) is an iterative, time- and resource-intensive process involving high computational costs and domain expertise. The exponentially large CAM configuration space is a major hurdle in speeding up the CAM iteration process. Existing methods fail to capture the complex dependency on CAM parameters. We address this challenge by proposing a new element for the engineer's design workflow based on an explainable artificial intelligence method. Using Neural-Additive Models (NAMs), we create a semi-analytic model that improves guided search through the configuration space and reduces convergence time to an optimal CAM parameter set. NAMs allow us to visualize individual parameter contributions and trivially compute their sensitivity. We demonstrate the integration of this new element into the CAM design process of a blade-integrated disk (blisk). By visualizing the learned parameter contributions, we successfully leverage NAMs to model the dependency on CAM parameters.

Keywords: Computer-Aided Manufacturing · blisk · interpretable machine learning · XAI · Neural-Additive Models

1 Introduction
With the advance of modern manufacturing technologies, designing new technical components becomes increasingly more complex. While planning the machining process of these components, engineers make extensive use of Computer-Aided Manufacturing (CAM) systems. Typical CAM-based simulation workflows consist of multiple subsequent steps, such as (1) tool path calculation; (2) tool engagement simulation; and (3) cutting force simulation. CAM designers visually inspect the tool path w.r.t. tool accelerations, trajectory and tool orientation smoothness, and general machinability of the part, and adapt the CAM parameters accordingly. However, a typical CAM parameter space consists of around 50 parameters. When testing just 3 independent settings for each parameter, the total number of configurations to calculate is 3^50 ≈ 7.2 × 10^23, rendering an exhaustive search of this

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 265–272, 2024. https://doi.org/10.1007/978-3-031-38241-3_30
266
K. Ditschuneit et al.
configuration space infeasible. A major limiting factor is the simulation of relevant process variables, such as the cutting force, using CAM-integrated technology models. Especially, the dexel-based tool engagement simulation makes the workflow expensive, as it requires high tool path and part resolutions. Consequently, human intuition plays a significant role in guiding the exploration. However, the individual steps in the CAM workflow commonly run for multiple hours on modern computers, and engineers often spend long hours optimizing CAM parameters by hand and abort once a satisfactory result is achieved.
Fig. 1. (Left) Learned parameter importance αd (see Eq. 1) for targets toolpath length (red) and force proxy (blue). The plots reveal that some parameters have negligible contributions overall, e.g., MaxSideTilt, while BladeOffset and DistBetweenLayers are influential for both targets. (Right) 3D sketch of a blisk with the toolpath in blue and the machining tool head in yellow.
In recent years, the success of advanced Machine Learning (ML) techniques has extended from traditional applications, e.g., computer vision and natural language processing, to new domains such as physical systems [3,8,23] or engineering [17,19,26]. We add to this research by introducing a new design step that can be integrated into the engineer's workflow, based on NAMs [1,15], an explainable artificial intelligence technique. Studying the challenging machine planning process of blade-integrated disks, or blisks (see Fig. 1, right), we show that NAMs can assist in the guided search of the CAM parameter space. NAMs constitute a semi-analytic model of the CAM simulation, as they leverage modern auto-differentiation frameworks. The architecture of NAMs makes it easy to visualize the parameter contributions to the quantity of interest and facilitates further analyses, such as sensitivity analysis, intuition checks, and insights into the simulation dynamics. The interpretability features enable engineers to assess the quality of the final model and guide them in their experimentation and search for optimal CAM parameter settings. Finally, NAMs allow for further automation and multi-target optimization when turning the quantities of interest into a corresponding optimization metric.
NAM-CAM: Semi-analytic CAM Modeling
267

2 Related Work
Artificial Intelligence (AI) in Manufacturing. The study of AI-based methods to optimize manufacturing processes is not new [22]. Most works are based on a variety of zero-order optimization routines [14,25,27], while artificial neural networks have received considerably less attention and are focused on path optimization problems [29]. Interpretable Models for Tabular Data. The parameter dependency of the CAM simulation can be expressed as tabular data. In practice, tabular data are often modeled using linear models, such as regressions or discriminant analysis [13]. Linear models require hand-engineered features to achieve good performance, making them time-consuming to build. To increase the expressivity of such models, Generalized Additive Models (GAMs) [9] have been introduced to model non-linear but still univariate regressions. The non-linear dependencies are often modeled with tree-based models, such as xgboost [16]. While these tree-based GAMs are powerful, they are not differentiable, making any analysis cumbersome. In contrast, NAMs [1] cover the full expressivity of GAMs while also being fully differentiable. This allows for different types of analyses and optimizations. Moreover, NAMs, being neural network-based algorithms, benefit from modern accelerators, such as Graphics Processing Units (GPUs), due to their matrix-multiplication parallelism.
3 Neural-Additive Models
NAMs are instances of GAMs [9]. This model class uses non-linear univariate functions to approximate the quantity of interest. Let D = {(x_i, y_i)}_{i=1,...,N} be a dataset of N samples, with x_i ∈ R^D denoting the independent variables and y_i ∈ R^K the target variable. For a simple regression, K = 1, but we also investigate multi-target regressions with K > 1, which enables us to re-use parameters in a multi-target regression. We express the relationship between dependent and independent variables as

ỹ_i = α_0 + Σ_{d=1}^{D} α_d φ_d((x_{i,d} − μ_d) / σ_d),   (1)
where φ_d is a non-linear function, the shape function, of the d-th component of vector x_i, and the α_d are parameters that are determined numerically. Note that we normalize the input data using a component-wise shift μ_d and scale σ_d. The tilde indicates that this is the approximated value and not the ground truth value. In the NAM setting, we replace the functions φ_d with neural networks, i.e., φ_d(·) = NN[θ_d](·), where θ_d are the trainable parameters of the d-th component. To train the NAM, we make use of the mean squared error (MSE), defined as L_MSE = E_{x_i ∈ D}[(ỹ_i − (y_i − μ_y)/σ_y)²], where we also scale the target variable for numerical stability. Minimizing L_MSE w.r.t. the family of parameters
{θ_d}_{d=1,...,D} and {α_d}_{d=0,...,D} results in shape functions that capture the influence of the input variables on the target quantity. A generalization of Eq. (1) to multivariate shape functions for higher-order interactions is straightforward [15]. This increases the expressivity of the model but reduces the interpretability due to higher-dimensional shape functions.

Sensitivity Analysis. Using neural networks as shape functions φ_d makes the model in Eq. (1) fully differentiable. This allows us to compute and visualize local sensitivity measures defined via the first derivative

s_d(x_0) = ∂y/∂x_d |_{x=x_0}.   (2)

We can define a global sensitivity measure by computing, for instance, the maximum local sensitivity over a bounded value range. This assumption is acceptable, as most input parameters in a CAM system have a finite range.

Uncertainty Assessment for the Shape Functions. The interpretability properties of univariate NAMs offer the added benefit of visualizing the uncertainty of the shape functions. For small- to medium-sized datasets, fitting a NAM is fast. This allows for several strategies to create uncertainty estimates, e.g., bootstrapping [13], ensemble learning [13], Bayesian dropout [11], or randomized NAM initialization. Independent of the specific choice, any of these methods results in varying shape functions to assess uncertainty. Using human assessment, the engineer can determine whether additional experiments are needed.
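A toy additive model makes Eqs. (1) and (2) concrete; the two shape functions below are fixed analytic stand-ins for the trained networks NN[θ_d], and the derivative in Eq. (2) is approximated by central differences instead of autodiff:

```python
import math

# Toy shape functions phi_d standing in for the trained networks NN[theta_d]
shapes = [lambda t: t**2, lambda t: math.sin(t)]
alpha0, alpha = 0.5, [1.0, 2.0]
mu, sigma = [0.0, 0.0], [1.0, 1.0]

def nam_predict(x):
    """Eq. (1): y = alpha_0 + sum_d alpha_d * phi_d((x_d - mu_d) / sigma_d)."""
    return alpha0 + sum(a * phi((xd - m) / s)
                        for a, phi, xd, m, s in zip(alpha, shapes, x, mu, sigma))

def local_sensitivity(f, x0, d, eps=1e-6):
    """Eq. (2): s_d(x0) = dy/dx_d at x0, here via central differences."""
    xp, xm = list(x0), list(x0)
    xp[d] += eps
    xm[d] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

y = nam_predict([1.0, 0.0])                           # 0.5 + 1*1^2 + 2*sin(0) = 1.5
s0 = local_sensitivity(nam_predict, [1.0, 0.0], 0)    # d/dx_0 of x_0^2 at 1 -> 2
s1 = local_sensitivity(nam_predict, [1.0, 0.0], 1)    # 2*cos(0) -> 2
```

Because the model is a sum of univariate terms, each sensitivity depends only on the corresponding shape function, which is what makes per-parameter plots like Fig. 2 possible.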
4 Experiments

4.1 CAM Toolpath Calculation and Force Predictions
For the CAM workflow, we use the ModuleWorks SDK. The toolpath is calculated with the MultiBlade component, while the engagement simulation utilizes the CutSim component. Simulations can be done using mesh-based [12] or dexel-based approaches, such as the tri-dexel models [4] used here. Compared to mesh-based methods, memory consumption grows moderately with growing complexity of the mechanical part. However, due to its discrete representation, the dexel method misses features smaller than the dexel distance and requires an increased dexel resolution. To save computational costs, a buffered-cuts simulation approach is used: the tool movement over several steps is summed up before updating the in-process workpiece (IPW) by intersecting the swept volume of the tool with the old IPW. To calculate forces on the tool, the cutter-workpiece engagement (CWE) is required in combination with a mechanistic cutting force model [2,6,7]. We follow a strategy based on a detailed 3D tool shape model [6]. However, accurate force estimates are at odds with the buffering, as fine-grained geometry updates are needed. This makes the computation slow.
Fig. 2. Shape functions of the learned NAMs for both targets (top: force proxy, bottom: toolpath length). Shapes for the force proxy show almost linear correspondence, while the shapes for the toolpath length are non-linear. NAMs are clearly capable of simultaneously learning both simple and complex correlations. The uncertainty bands are derived from ensemble learning and correspond to 1σ standard deviation for shape functions and sensitivity. Please note the differences in scale for the different curves. The black stripes mark individual feature values in the dataset to indicate the density of data values.
4.2 Dataset Generation and NAM Training
To generate the dataset, we use Docker [21] to containerize the ModuleWorks SDK. Alternatively, the CAM simulator can also be accessed via an internet-based API endpoint. Either setup allows abstracting the underlying operating system and running jobs on any hardware. We use Kubernetes [20] to orchestrate the workflow and automate it using Flyte [10] in conjunction with a Hydra entrypoint [28]. With this setup, we simulate the blisk toolpath using a set of initial parameters. We parallelize the dataset simulation by horizontally scaling the environment. For each simulation, we record the complete trajectory of the path, including the tool immersion angles, slices, and so forth. We are interested in minimizing the average absolute force between the tool and the workpiece during the cutting process over all points on the toolpath. Since we do not have access to the exact force calculation, we use an empirically tested proxy metric based on the sum of the cut-intersection surface areas a_{t,k} between the component and the three cutting edges of the tool t: F̃_t = Σ_{k=1}^{3} a_{t,k}.

4.3 Experiment Design
Dataset. We ran the CAM workflow for 128 different parameter configurations sampled uniformly at random. We collect the parameters and the corresponding target metrics as rows in our tabular dataset. The data consists of 8 independent
parameters and 2 target metrics. We randomly split the dataset into a training (80%), a validation (10%), and a test (10%) dataset. To fit the NAM, we normalize the target metrics to be mean-centered with unit standard deviation, and the input variables to lie within the [0, 1] range. To estimate the shift and scale values, we use only the training set, to avoid leakage of the test set.

Models and Training. The NAMs are implemented with PyTorch [24] and use multi-target regression with K = 2 to model both metrics simultaneously. The networks are trained using mini-batch gradient descent with the AdamW optimizer [18]. Furthermore, we perform hyperparameter optimization using a Tree-structured Parzen Estimator [5] to maximize the R² scores [13] of the model. We train 256 different models for both targets, force proxy and toolpath length, and rank models based on their R² scores. The best ones predict the normalized targets with a mean absolute error (MAE) [13] of 0.24 and 0.13, respectively.

4.4 Discussion
We demonstrate the usefulness of the NAMs in Fig. 1, where we show the importance of each CAM parameter on the different target quantities. As can be seen clearly, different parameters influence the quantities of interest differently. For instance, the tool diameter has a more significant impact on the force due to increased surface area but has a vanishing impact on the path length due to a constant layer distance. Examples of the shape functions underlying the models are depicted in Fig. 2. We show the mean of the shape in blue with the 1σ standard deviation depicted by the shaded area and computed over the best ten runs. The sensitivity in red is computed per individual run, then averaged. Here the shaded region shows 1σ standard deviation as the error propagation makes the signal noisy. We can see the different influences of the parameters on the final target metrics due to the different shape functions. At the same time, the sensitivity dependence is similar in shape but not magnitude.
5 Conclusion
The application of interpretable AI models to manufacturing processes enables new insights during the design process. We showed that by using NAMs, we can understand detailed parameter influences of a CAM design system, which subsequently can be used for guided design exploration, uncertainty assessments, and sensitivity analyses. We demonstrated the power of NAMs using a blisk design and showed how we can predict toolpath length and tool force simultaneously while also extracting the sensitivities of the metrics w.r.t. these parameters using the differentiable nature of neural networks. We also highlight that the method is use-case agnostic and can be applied to all industrial applications that leverage CAM to improve the manufacturing process. We hypothesize that it will speed up the design process by enabling power users and users with limited domain knowledge to better and faster understand the simulation tools and their input settings.
NAM-CAM: Semi-analytic CAM Modeling
271
Future work will extend the NAM framework to include higher-order interactions and investigate the use of uncertainty metrics for automated design optimization using Bayesian learning techniques. Moreover, we are interested in the automation and integration of such tools into the engineering design process. Acknowledgment. We kindly acknowledge funding by the German Federal Ministry of Education and Research (BMBF) within the project “CAM2030: Entwicklung einer innovativen Lösung für das Advanced Systems Engineering der computergestützten Prozessplanung der Zukunft” (#02J19B080, #02J19B082, #02J19B084). Contributions. KD and JSO designed the experiments and NAMs and performed the analysis. AF and MF built the experimentation framework and simulation environment. VR provided the blisk setup. SD and JSO coordinated and oversaw the project. All authors contributed to the manuscript.
References
1. Agarwal, R., Frosst, N., Zhang, X., Caruana, R., Hinton, G.E.: Neural additive models: interpretable machine learning with neural nets. arXiv (2020)
2. Altintas, Y., Kersting, P., Biermann, D., Budak, E., Denkena, B., Lazoglu, I.: Virtual process systems for part machining operations. CIRP Ann. 63(2), 585–605 (2014)
3. Belbute-Peres, F., Economon, T.D., Kolter, J.Z.: Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. arXiv (2020)
4. Benouamer, M.O., Michelucci, D.: Bridging the gap between CSG and BREP via a triple ray representation. In: Proceedings of the Fourth ACM Symposium on Solid Modeling and Applications, pp. 68–79 (1997)
5. Bergstra, J., Bardenet, R., Bengio, Y., Kégl, B.: Algorithms for hyper-parameter optimization. In: Shawe-Taylor, J., Zemel, R., Bartlett, P., Pereira, F., Weinberger, K. (eds.) Advances in Neural Information Processing Systems, vol. 24. Curran Associates, Inc. (2011). https://proceedings.neurips.cc/paper/2011/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf
6. Boess, V., Ammermann, C., Niederwestberg, D., Denkena, B.: Contact zone analysis based on multidexel workpiece model and detailed tool geometry representation. Procedia CIRP 4, 41–45 (2012)
7. Boz, Y., Erdim, H., Lazoglu, I.: A comparison of solid model and three-orthogonal dexelfield methods for cutter-workpiece engagement calculations in three- and five-axis virtual milling. Int. J. Adv. Manuf. Technol. 81(5), 811–823 (2015)
8. Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv (2022)
9. Caruana, R., et al.: Intelligible models for healthcare. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1721–1730 (2015)
10. Flyte: Flyte: The Workflow Automation Platform for Complex, Mission-Critical Data and Machine Learning Processes at Scale (2022). https://github.com/flyteorg/flyte
11. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. arXiv (2015)
12. Gong, X., Feng, H.Y.: Cutter-workpiece engagement determination for general milling using triangle mesh modeling. J. Comput. Des. Eng. 3(2), 151–160 (2016)
272
K. Ditschuneit et al.
13. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer (2009). https://doi.org/10.1007/978-0-387-84858-7
14. Karuppusamy, N.S., Kang, B.Y.: Minimizing airtime by optimizing tool path in computer numerical control machine tools with application of A* and genetic algorithms. Adv. Mech. Eng. 9(12), 1687814017737448 (2017)
15. Kim, M., Choi, H.S., Kim, J.: Higher-order neural additive models: an interpretable machine learning model with feature interactions. arXiv (2022)
16. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016)
17. Li, Z., et al.: Neural operator: graph kernel network for partial differential equations. arXiv (2020)
18. Loshchilov, I., Hutter, F.: Fixing weight decay regularization in Adam. CoRR abs/1711.05101 (2017). http://arxiv.org/abs/1711.05101
19. Lötzsch, W., Ohler, S., Otterbach, J.S.: Learning the solution operator of boundary value problems using Graph Neural Networks. arXiv (2022)
20. Martin, P.: Kubernetes (2021)
21. Merkel, D.: Docker: lightweight Linux containers for consistent development and deployment. Linux J. 2014(239) (2014)
22. Narooei, K.D., Ramli, R.: Application of artificial intelligence methods of tool path optimization in CNC machines: a review. Res. J. Appl. Sci. Eng. Technol. 8(6), 746–754 (2014)
23. Ohler, S., Brady, D., Lötzsch, W., Fleischhauer, M., Otterbach, J.S.: Towards learning self-organized criticality of Rydberg atoms using Graph Neural Networks. arXiv (2022)
24. Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. arXiv (2019)
25. Pezer, D.: Efficiency of tool path optimization using genetic algorithm in relation to the optimization achieved with the CAM software. Procedia Eng. 149, 374–379 (2016)
26. Pfaff, T., Fortunato, M., Sanchez-Gonzalez, A., Battaglia, P.W.: Learning mesh-based simulation with graph networks. arXiv (2020)
27. Tsagaris, A., Mansour, G.: Path planning optimization for mechatronic systems with the use of genetic algorithm and ant colony. IOP Conf. Ser. Mater. Sci. Eng. 564(1), 012051 (2019)
28. Yadan, O.: Hydra - a framework for elegantly configuring complex applications. GitHub (2019). https://github.com/facebookresearch/hydra
29. Zuperl, U., Cus, F.: Optimization of cutting conditions during cutting by using neural networks. Robot. Comput.-Integr. Manuf. 19(1–2), 189–199 (2003)
Adaptive Toolpath Planning for Hybrid Manufacturing Based on Raw 3D Scanning Data Panagiotis Stavropoulos(B) , Lydia Athanasopoulou, Thanassis Souflas , and Konstantinos Tzimanis Laboratory for Manufacturing Systems and Automation (LMS), Department of Mechanical Engineering and Aeronautics, University of Patras, Patras, Greece [email protected]
Abstract. The wide industrial adoption of metal Additive Manufacturing (AM) technologies has shown that AM can rarely thrive on its own but should be integrated in a wider manufacturing chain. The integration of AM with milling, to form the so-called hybrid manufacturing process is one of the most popular approaches. A key bottleneck of hybrid manufacturing is toolpath planning. In a hybrid manufacturing workflow, 3D scanning is often integrated between milling and AM, in an inspection step, which is highly important for the correct planning of the next process, since AM parts can have a significant divergence from their original CAD. In order to plan the toolpath of the successive AM or milling process, one must consider the 3D scan as an input, posing a new challenge compared to traditional toolpath planning technologies. The surface reconstruction of the part, based on the point cloud can be very computationally heavy. To this end, new techniques for toolpath planning directly from the raw 3D scanning data are necessary. This study presents such a technique to provide an automated tool for toolpath planning of both AM and machining processes, based on point cloud data. Edge extraction is performed through eigenvalue analysis, followed by segmentation of the part based on popular clustering algorithms. Next, curve fitting on the external and internal parts of the segmented point cloud is performed using alpha shapes, followed by polyline offsetting for the toolpath generation. The methodology is validated through a case study on a real component. Keywords: Hybrid Manufacturing · Additive Manufacturing · Milling · Toolpath Planning · Point Cloud · 3D scanning
1 Introduction The industrial adoption of Additive Manufacturing (AM) due to its numerous benefits [1, 2] has also highlighted the bottlenecks of AM that are inherent to its process mechanism, namely the poor surface quality and dimensional accuracy. To address this, the integration of AM in a wider manufacturing process chain is proposed [3]. Hybrid manufacturing, incorporating AM and subtractive manufacturing, usually in the form of milling, is a popular approach, with commercial machine tools already available. Apart from the physical integration of the processes, their digital integration is also crucial to achieve © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 273–282, 2024. https://doi.org/10.1007/978-3-031-38241-3_31
274
P. Stavropoulos et al.
an efficient hybrid manufacturing workflow. For this purpose, 3D scanning, integrated between AM and milling as an intermediate inspection step, is a key enabler. 3D scanning can provide an accurate representation of the actual status of the manufactured part and quantify the divergence of the actual part, manufactured by AM, from its nominal 3D CAD model. This facilitates the appropriate toolpath planning for a successive AM or milling process in the hybrid manufacturing context [4]. This adaptive toolpath planning approach, which uses the 3D scanning result, in the form of a point cloud, as an input, enables the seamless digital integration of the two processes. Nevertheless, it introduces a new level of complexity compared to the traditional toolpath planning processes followed by commercial CAM systems. This is linked to the representation of the 3D model of the part in the form of a point cloud, instead of a standardized surface-based representation (e.g., BREP). Surface reconstruction from the point cloud, which would enable the direct use of existing toolpath planning approaches, is a computationally heavy process requiring significant user expertise. On the other hand, the use of raw point clouds introduces additional challenges, due to the lack of robustness in the point cloud, particularly when it contains unorganized data. To this end, there is a need for approaches that address these challenges and enable direct toolpath planning from raw point cloud data. Although the published literature on toolpath generation from raw point cloud data is still limited, various approaches have been proposed. Masood et al. [5] developed an algorithm for toolpath generation based on point clouds that were divided into segments. Each segment was fitted with a B-Spline, which represented the machining toolpath. Zou and Zhao [6] utilized conformal point cloud parametrization to generate machining toolpaths for freeform surfaces. Dhanda et al.
[7] used a curvature-based segmentation approach to partition the point cloud into several regions. Then, a grid-based adaptive planar toolpath strategy was employed to machine each region independently. Chui et al. [8] proposed an algorithm for toolpath generation in 5-axis machining directly from point cloud data. They performed a triangulation of the point cloud and used the mesh points as tool contact locations. Then, a 3D biarc fitting technique was used to generate the 5-axis toolpath. Ghogare et al. [9] proposed a method for direct toolpath planning from point clouds, using the boundary of the point cloud as a master cutter path and calculating adjacent side toolpaths using an iso-scallop strategy. Popescu et al. [10] proposed an approach based on graph theory for toolpath generation in point clouds for roughing milling operations. The cutting areas are identified using a binary map and the toolpath is generated using the Dijkstra algorithm inside the graph.
2 Approach This approach takes as input the scanned 3D model of the manufactured part, in the form of a point cloud, and the CAD model of the original part geometry, which guides the supervised algorithms used in the intermediate steps up to toolpath generation. The CAD of the part is digitized into a point cloud format before it is used as an input for the proposed algorithm. The proposed approach is based on four key building blocks. First, the scanned part is registered in the CAD coordinate system and
Adaptive Toolpath Planning for Hybrid Manufacturing
275
aligned with the original model. Next, the deviations of the scanned part are calculated (disparity map) and the volume to be processed is extracted. In this work, this is performed with open-source software (CloudCompare [11]). In general, the volume to be processed, obtained with any commercial software, is considered as the input to the proposed algorithm; the disparity mapping itself is therefore out of the scope of this paper. Next, the volume to be processed is sliced according to the selected process parameters (axial depth of cut for milling or layer height for AM). After that, the data are prepared for the extraction of the inner and outer contours. The edges (inner and outer) of the volume are identified, to detect its limits in Cartesian space, as well as the existence of islands (pockets, etc.) in the part. For this step, the original CAD of the part is digitized, and its point cloud is generated, on which the edge detection is performed (Sect. 2.1).
Fig. 1. Toolpath planning workflow
When the data preparation is finished, the outer and inner contours are extracted and segmented using Machine Learning (ML) algorithms (Sect. 2.2). Finally, the toolpath is generated, and the part program is written (Sect. 2.3). The following paragraphs clarify how the present work automates the transition from the initial scanned model to toolpath generation, either for AM or for machining, without the need for user input during the several steps of the methodology. The approach is summarized in Fig. 1.
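The slicing step of the workflow can be sketched as a simple z-binning of the point cloud; the function and the miniature cloud below are illustrative, not the authors' implementation:

```python
def slice_point_cloud(points, layer_height):
    """Group (x, y, z) points into z-slices of the given thickness
    (axial depth of cut for milling, layer height for AM)."""
    z_min = min(p[2] for p in points)
    layers = {}
    for p in points:
        idx = int((p[2] - z_min) // layer_height)
        layers.setdefault(idx, []).append(p)
    # Return slices ordered bottom-to-top along the build/cut axis.
    return [layers[i] for i in sorted(layers)]

# Hypothetical mini point cloud: four points spread over 4 mm in z.
cloud = [(0, 0, 0.5), (1, 0, 1.5), (0, 1, 2.5), (1, 1, 3.5)]
slices = slice_point_cloud(cloud, layer_height=2.0)
```

Each resulting slice is then processed independently for contour extraction and path generation.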
276
P. Stavropoulos et al.
2.1 Eigenvalue Analysis and Edge Detection For the edge detection, the work of Bazazian et al. [12] has been adopted. The point clouds generated by 3D scanning do not contain any information regarding the normal vectors of the points, which means it is not known whether a point belongs to a flat surface or to an edge. Knowledge of the normal vector at each point can facilitate the identification of an edge point. However, given that point clouds are unstructured, a normal vector cannot be identified from a single point in isolation; its neighboring points must be examined to form a local surface. Principal Component Analysis (PCA) is applied to neighborhoods of the point cloud to estimate the local normal vectors. For every point of the cloud, a least squares local plane is fitted to its k nearest neighbors. The normal of each point is the eigenvector corresponding to the smallest eigenvalue of the covariance matrix. Covariance indicates how much each of the dimensions varies from the mean with respect to the others. The covariance matrix for a sample point of a 3-dimensional dataset is:

C = \begin{bmatrix} \mathrm{Cov}(x,x) & \mathrm{Cov}(x,y) & \mathrm{Cov}(x,z) \\ \mathrm{Cov}(y,x) & \mathrm{Cov}(y,y) & \mathrm{Cov}(y,z) \\ \mathrm{Cov}(z,x) & \mathrm{Cov}(z,y) & \mathrm{Cov}(z,z) \end{bmatrix} \quad (1)

The covariance Cov(x, y) is computed as shown below, and the same procedure applies to all other covariance values:

\mathrm{Cov}(x,y) = \frac{\sum_{i=1}^{k} (x_i - \bar{x})(y_i - \bar{y})}{k - 1} \quad (2)

Since the estimation of normal vectors by PCA is based on the eigenvalues of the covariance matrix, edge features can be identified solely from the variation of these eigenvalues at each point. With all entries of the matrix known, its eigenvalues (λ0, λ1, λ2), with λ0 the smallest, are utilized through the concept of surface variation. The surface variation σk(p) for a sample point p with k neighbors distinguishes whether the point belongs to a flat plane or is a salient (edge) point:

\sigma_k(p) = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2} \quad (3)
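A minimal sketch of the computation in Eqs. (1)–(3), reduced to 2D so the eigenvalues of the covariance matrix have a closed form; the neighborhoods are hypothetical, and a real implementation would use the full 3×3 covariance of the k nearest neighbors:

```python
import math

def surface_variation_2d(neighborhood):
    """Surface variation sigma = lambda_min / (lambda_min + lambda_max)
    from the 2x2 covariance matrix of a 2D point neighborhood. This is
    a 2D simplification; the 3D case uses the smallest of three
    eigenvalues, as in Eq. (3)."""
    n = len(neighborhood)
    mx = sum(p[0] for p in neighborhood) / n
    my = sum(p[1] for p in neighborhood) / n
    cxx = sum((p[0] - mx) ** 2 for p in neighborhood) / (n - 1)
    cyy = sum((p[1] - my) ** 2 for p in neighborhood) / (n - 1)
    cxy = sum((p[0] - mx) * (p[1] - my) for p in neighborhood) / (n - 1)
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam_max, lam_min = (tr + disc) / 2, (tr - disc) / 2
    return lam_min / (lam_min + lam_max)

flat = [(x, 0.0) for x in range(5)]                 # points on a line
corner = [(0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]   # an edge/corner
```

For the collinear neighborhood the smallest eigenvalue vanishes and the surface variation is zero, while the corner neighborhood yields a clearly positive value, which is exactly the signal thresholded for edge detection.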
Since the smallest eigenvalue of the covariance matrix equals zero for flat surfaces, the value of the surface variation σk is zero for flat surfaces. 2.2 Segmentation of Volumes to be Processed Having determined which points represent edges, the next step is the clustering of the volumes that need to be processed along the z-axis. This makes it possible to identify the geometrical changes of the part along the z-axis and then fix the number of machining/additive operations that need to be performed in order to process the part as intended. This clustering operation is performed through unsupervised machine learning, using
the DBSCAN algorithm, which has been selected among the connectivity-based, centroid-based and grid-based options due to the high density of the points that represent edges and the different geometries of the part. DBSCAN is a density-based, non-parametric algorithm. A DBSCAN cluster has two important properties: first, all points included in the cluster are mutually density-connected; second, if a point is density-reachable from some point of the cluster, it is part of the cluster as well. When the clusters are identified, it is possible to group geometries which do not change in the XY plane as we move along the z-axis. The clustering on the ZY plane follows the edge extraction of the part. The clustering of the geometries on the XY plane takes place after the clustering on the ZY plane has finished. This is a crucial operation, needed to identify islands (e.g., pockets) that exist in the geometry. With this knowledge, it is possible to program the toolpath appropriately and include z-hops between the different clusters, in order to create collision-free and gouge-free toolpaths. For this purpose, several different clustering approaches were evaluated, both unsupervised (e.g., spectral clustering, k-means) and supervised (e.g., SVM, KNN). K-nearest-neighbours (KNN) has proven to be the most effective algorithm for this specific application, enabling significant clustering performance. Despite these positive aspects, however, KNN has a major drawback: it is a supervised algorithm and therefore requires a training dataset. A method to tackle this drawback is necessary; otherwise the path planning tool would be unusable for a potential end-user. The DBSCAN algorithm did not perform very well in the clustering of the point cloud data on the XY plane; however, it performed very well for the clustering of the original CAD on the XY plane.
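The density-based clustering idea can be sketched with a brute-force miniature of DBSCAN; this is an illustrative re-implementation on toy points, not the library version used in the paper:

```python
def dbscan(points, eps, min_samples):
    """Minimal density-based clustering in the spirit of DBSCAN.
    Returns one label per point; -1 marks noise. Brute-force neighbor
    search, so this is only a sketch for small point sets."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_samples:
            labels[i] = -1  # noise (may be claimed by a cluster later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                if labels[j] is None and len(neighbors(j)) >= min_samples:
                    queue.extend(neighbors(j))  # j is a core point: expand
                labels[j] = cluster
    return labels

# Two dense groups far apart, plus one isolated outlier.
pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
labels = dbscan(pts, eps=1.0, min_samples=3)
```

The two dense groups receive distinct labels, and the outlier is flagged as noise, mirroring how dense edge points group into geometric features.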
So, the following approach was adopted to address the issue of the training dataset and automate the entire process:
• The point cloud of the scanned part, as well as the digitized point cloud of the original model, are sliced along the z-axis according to the layer height (for AM operations) or the depth of cut (for subtractive operations).
• For each slice, DBSCAN is used on the digitized point cloud of the original CAD model to cluster the geometries (e.g., identify islands). These clusters are labelled sequentially from the innermost to the outermost cluster and exported. This is the training dataset for KNN.
• KNN is employed on the corresponding slice of the point cloud of the scanned part, to cluster it in the XY plane.
Through this process, it is possible to effectively cluster the scanned part, even for very complex or sparse geometries. 2.3 Tool Path Generation The last step of the workflow is to generate the toolpaths for AM or machining for the inner and outer contours. After the clusters of each layer of the point cloud have been identified, it is necessary to extract their inner and outer contours, which will act as the limits of the path during the toolpath planning stage. This is a curve fitting problem. Several techniques can be employed, such as fitting parametric curves (e.g., B-Splines or NURBS), which give good control over the curve fitting process and the handling of the curves afterwards, but require the development of complex algorithms. On the other hand, when dense point clouds are handled (which is
always the case in the context of 3D-scanned parts), polylines can produce sufficiently satisfactory curve fitting results. To this end, alpha shapes are used to extract the inner and outer contours at each layer of the point cloud. Alpha shapes are families of piecewise linear curves associated with a set of points. After the contours of the different clusters are extracted, polyline offsetting is used to generate the offset curves. The offset curves compensate for the tool radius (subtractive operations) or the track width (AM operations). Finally, polyline offsetting is used again to generate the toolpath that covers the whole volume to be processed. The offset values are based on the radial engagement (subtractive operations) or the track width/overlap (AM operations). In the future, other toolpath patterns (one-way, zigzag, raster, etc.) will also be examined, especially for the case of AM. When machining operations are considered, there are cases where the programmed radial engagement is larger than the available material to be removed. Such cases need to be handled to prevent gouging, utilizing the digitized point cloud of the original model. Alpha shapes are used to generate the polylines that comprise its outer geometry, and polyline offsetting is used to compensate for the tool radius. The machining pass is performed on the outer surface of the part, cutting only the small amounts of excess material that exist. After the above steps have been completed, the toolpath for the whole part can be generated.
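The offsetting idea — a first offset for the tool radius, then inward loops spaced by the radial engagement until the region is covered — can be sketched for the special case of an axis-aligned rectangular contour (the values and function name are illustrative; arbitrary alpha-shape contours need a general polygon-offsetting routine):

```python
def concentric_rect_paths(xmin, ymin, xmax, ymax, tool_radius, stepover):
    """Generate inward offset loops for a rectangular contour: the first
    loop compensates for the tool radius, subsequent loops are spaced by
    the radial engagement (stepover). Rectangles only - a real contour
    extracted via alpha shapes needs general polyline offsetting."""
    paths, offset = [], tool_radius
    while xmin + offset < xmax - offset and ymin + offset < ymax - offset:
        x0, y0 = xmin + offset, ymin + offset
        x1, y1 = xmax - offset, ymax - offset
        # Closed loop: four corners, back to the start point.
        paths.append([(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)])
        offset += stepover
    return paths

# Hypothetical 20 x 10 mm region, 2 mm tool radius, 1.5 mm stepover.
loops = concentric_rect_paths(0, 0, 20, 10, tool_radius=2, stepover=1.5)
```

The loop terminates once the next offset would leave no interior region, which corresponds to the "conditional offsetting" check used later in the case study.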
3 Case Study and Results This case study focuses on the post-processing, by conventional machining, of a part made by an AM process, in particular the Material Extrusion (MEX-AM) process. The test piece was manufactured from Polylactic Acid (PLA) on a Sindoh 3DWOX1 printer, using process parameters for PLA that ensure good dimensional accuracy and surface quality, based on previous work of the authors on MEX-AM process development [13]. The test piece selected is the NAS 979 artefact from NIST [14] (also called the circle-diamond-square artefact), which is often used to assess the performance of AM processes (Fig. 2). It contains distinctive features suited to testing the methodology on several cases: holes of various sizes across the height, a diamond-shaped geometry, a circle-shaped geometry, etc.
Fig. 2. (a) Test part design; (b) manufactured part
The nominal CAD model of the part has been digitized into 1,000,012 points scattered in 3D space (Fig. 3a), so as to provide a point cloud that enables accurate and thorough inspection of the excess region, while also providing enough points for the detection
of the disparity between the initial and the scanned model, while requiring minimum computational and processing power. On the other hand, the point cloud for the areas where excess material is found contains 1,000,416 points (Fig. 3b).
Fig. 3. (a) point cloud of the original part; (b) point cloud of the disparity map
The developed point cloud is imported into the edge-extraction algorithm, which has been implemented in Python using eigenvalue analysis. For the KNN algorithm and the clustering of edges, the threshold for σk has been set to 0.03, while the number of nearest neighbors used for local normal vector calculation through PCA has been set to 40. Regarding the threshold for σk, a sensitivity analysis has been performed by modifying its value and assessing the edge-extraction quality. In general, the value of σk should be close to zero, but selecting absolute zero as a target value leads to numerical instabilities. Such instabilities are related to the extraction of vertical edges (i.e., parallel to the build direction), which would then compromise the clustering along the build axis in the next step, as well as the inclusion of outlying points. The results of this sensitivity analysis are presented in Fig. 4.
Fig. 4. Sensitivity analysis for σk
For the neighborhood size for the normal vector calculation, one needs to select a value that provides an accurate calculation while maintaining a low computational cost. In general, for neighborhoods including more than 40 points the gains in calculation accuracy are negligible, so this value has been used in the proposed algorithm [15]. Once the edges have been extracted, their x, y, z coordinates are grouped in a dataset. The next step is the application of the DBSCAN algorithm to the edges that
have been extracted from the original CAD. For the DBSCAN algorithm, the maximum distance parameter has been set to 1, and the number of samples in a neighborhood required for a point to be considered a core point has been set to 5. To proceed to the next step, the point cloud that represents the disparity map has been sliced to generate the layers that will be analyzed to obtain the corresponding paths. The slicing has been conducted in Python so as to match the slicing height with the extracted edges, using as slicing value the cutting depth of the conventional process, selected equal to 2 mm, along the cutting direction, which in this case is the z-axis. Based on the scattering of the points in 3D space, 15 layers have been generated. Considering that the edge extraction and classification take place first on the ZY plane and then on the XY plane, Fig. 5 and Fig. 6 are developed.
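The transfer of cluster labels from the digitized CAD slice to the corresponding scanned slice (DBSCAN labels used as training data for KNN, as described in Sect. 2.2) can be sketched with a minimal majority-vote classifier; the points and labels below are hypothetical:

```python
from collections import Counter

def knn_predict(train_pts, train_labels, query, k=3):
    """Classify a query point by majority vote among its k nearest
    labelled points - the step where clusters found by DBSCAN on the
    digitized CAD slice are transferred to the scanned slice."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, query)), lab)
        for p, lab in zip(train_pts, train_labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical CAD slice: an inner island (label 0) and the outer
# contour (label 1), then a scanned point lying near the island.
cad_pts = [(0, 0), (0.2, 0.1), (0.1, 0.2), (5, 5), (5.2, 5.1), (5.1, 4.9)]
cad_labels = [0, 0, 0, 1, 1, 1]
label = knn_predict(cad_pts, cad_labels, query=(0.15, 0.15), k=3)
```

Because the CAD slice supplies the labels automatically, no manual training dataset is needed, which is precisely what makes the KNN step usable for an end-user.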
Fig. 5. (a) Edge extraction on the ZY plane; (b) DBSCAN clustering
After identifying the disparity map across the height by comparing the initial CAD with the scanned model, the contour generation for each slice follows. The slicing value equals the axial depth of cut, chosen as 2 mm. The procedure for extracting the alpha shapes of each region is identical for the clustered internal and external geometries. The last phase of the proposed methodology focuses on the generation of the toolpath. Each contour is examined with respect to the conditional offsetting principle. If the contour contains a large enough region with respect to the tool radius that internal offset polylines can be created, polygon offsetting is performed. The offset value is set equal to the tool radius. For each contour per layer, inner offset paths are created until complete coverage of the layer surface. On the other hand, if the tool size exceeds the region that allows the generation of inner offset paths, the offset paths generated using the original point cloud are selected. In total, 64 paths were created using the first condition and 8 paths using the second; this ratio is highly affected by the part geometry. During the final steps of the path generation, a z-hop value is defined, representing the distance the tool has to rise above the part when transitioning between contours without colliding with existing bodies. The z-hop value has been selected as 10 mm above the part height. A point is added at the end of each path and at the beginning of the next path to ensure that the cutting tool moves above the part. By combining the different steps of the presented methodology, the final path is generated, as illustrated in Fig. 7.
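The z-hop linking between contours can be sketched as follows; the contours and heights are illustrative, with the retract height set to the part height plus 10 mm as in the case study:

```python
def link_contours_with_zhop(contours, part_height, z_hop=10.0):
    """Chain 2D contour paths at given z levels into one toolpath,
    inserting a retract move z_hop above the part between contours so
    the tool clears existing geometry during transitions."""
    safe_z = part_height + z_hop
    toolpath = []
    for i, (contour, z) in enumerate(contours):
        if i > 0:
            # Retract above the previous contour, travel, then plunge.
            px, py, _ = toolpath[-1]
            toolpath.append((px, py, safe_z))
            x0, y0 = contour[0]
            toolpath.append((x0, y0, safe_z))
        toolpath.extend((x, y, z) for x, y in contour)
    return toolpath

# Two hypothetical contours on the same 2 mm layer of a 30 mm part.
contours = [([(0, 0), (1, 0)], 2.0), ([(5, 5), (6, 5)], 2.0)]
path = link_contours_with_zhop(contours, part_height=30.0)
```

The two extra points per transition correspond to the points added at the end of one path and the beginning of the next to keep the tool above the part.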
Fig. 6. Edge extraction on the XY plane for different Z levels
Fig. 7. Toolpath generated for finish machining of the manufactured part
4 Conclusions This work proposes a methodology for adaptive toolpath planning directly from point cloud data in the context of hybrid manufacturing. Through the use of advanced computational approaches and machine learning, it was possible to provide an algorithm of reduced implementation complexity, based on discrete steps, in contrast to existing approaches that rely solely on computational geometry but are not easy to implement. Moreover, the minimal need for user input provides an approach that reduces the skills barrier and can support the seamless integration of 3D scanning in the hybrid manufacturing workflow, thus leading to increased industrial acceptance of hybrid manufacturing technologies. As future work, validation of the algorithms in a hybrid manufacturing scenario using metal AM and milling will be pursued. Acknowledgement. This work has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-03896).
References
1. Wohlers Associates Inc.: Wohlers Report 2021: 3D Printing and Additive Manufacturing Global State of the Industry (2021)
2. Stavropoulos, P., Tzimanis, K., Souflas, T., et al.: Knowledge-based manufacturability assessment for optimization of additive manufacturing processes based on automated feature recognition from CAD models. Int. J. Adv. Manuf. Technol. 122, 993–1007 (2022)
3. Souflas, T., Bikas, H., Ghassempouri, M., et al.: A comparative study of dry and cryogenic milling for directed energy deposited IN718 components: effect on process and part quality. Int. J. Adv. Manuf. Technol. 119, 745–758 (2022)
4. Stavropoulos, P., Bikas, H., Avram, O., Valente, A., Chryssolouris, G.: Hybrid subtractive–additive manufacturing processes for high value-added metal components. Int. J. Adv. Manuf. Technol. 111(3–4), 645–655 (2020)
5. Masood, A., Siddiqui, R., Pinto, M.: Tool path generation for complex surface machining using point cloud data. Procedia CIRP 26, 397–402 (2015)
6. Zou, Q., Zhao, J.: Iso-parametric toolpath planning for point clouds. Comput. Aided Des. 45(11), 1459–1468 (2013)
7. Dhanda, M., Kukreja, A., Pande, S.S.: Region-based efficient computer numerical control machining using point cloud data. ASME J. Comput. Inf. Sci. Eng. 21(4), 041005 (2021)
8. Chui, K.L., Chiu, W.K., Yu, K.M.: Direct 5-axis tool-path generation from point cloud input using 3D biarc fitting. Robot. Comput.-Integr. Manuf. 24(2), 270–286 (2008)
9. Ghogare, S., Pande, S.S.: Efficient CNC tool path planning using point cloud. In: Proceedings of the ASME 2018 13th International Manufacturing Science and Engineering Conference, Volume 4: Processes, College Station, Texas, USA, June 18–22 (2018)
10. Popescu, D., Popister, F., Popescu, S.: Direct toolpath generation based on graph theory for milling roughing. Procedia CIRP 25, 75–80 (2014)
11. CloudCompare [GPL software]. http://www.cloudcompare.org/. Accessed 21 Dec 2022
12. Bazazian, D., Casas, J.R., Ruiz-Hidalgo, J.: Fast and robust edge extraction in unorganized point clouds. In: International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 1–8 (2015)
13. Stavropoulos, P., Papacharalampopoulos, A., Tzimanis, K.: Design and implementation of a digital twin platform for AM processes. Procedia CIRP 25, 1722–1727 (2021)
14. Moylan, S., Slotwinski, J., Cooke, A.: An additive manufacturing test artifact. J. Res. Natl. Inst. Stand. Technol. 119, 429–459 (2014)
15. Sanchez, J., Denis, F., Coeurjolly, D.: Robust normal vector estimation in 3D point clouds through iterative principal component analysis. ISPRS J. Photogramm. Remote Sens. 163, 18–35 (2020)
Machining of Individualized Milled Parts in a Skill-Based Production Environment
Andreas Wagner1(B), Magnus Volkmann1, Jesko Hermann2, and Martin Ruskowski1,2,3
1 Institute of Machine Tools and Control Systems (WSKL), University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany [email protected]
2 Technologie-Initiative SmartFactory Kaiserslautern e.V., Kaiserslautern, Germany
3 Innovative Factory Systems (IFS), German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
https://rptu.de/, https://smartfactory.de/
Abstract. Changing markets, individualized products, and volatile supply chains require a high degree of flexibility, especially in production and manufacturing. A promising approach to reconfigurable and flexible manufacturing is the use of machine-level skills. This paper presents a simple architecture for skill-based machining of individualized milled parts. A transport system and a machine tool are controlled purely by skills using a defined OPC UA interface. Production orders can be generated from a CAD system or received via the GAIA-X network. The order data as well as information about the manufacturing process are stored in a Product Asset Administration Shell, which is interpreted by a production flow control. Sequentially, different skills for the manufacturing process are executed on the resources. This enables the direct production of individualized parts that are available in CAD. With the architecture presented, there is no need to generate machine code using CAM tools, NC programming or shop floor programming. This is because the intelligence required for the manufacturing process is encapsulated behind the defined OPC UA skill interface, which requires only the geometric data of the features of the CAD part.

Keywords: OPC UA · Skills · AAS · Skill-based manufacturing · 5G

1 Introduction
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 283–292, 2024. https://doi.org/10.1007/978-3-031-38241-3_32

The ability to adapt to unexpected changes in the production process, as well as to serve more individualized requests, is increasingly required in novel production systems [1]. A promising approach to achieve this kind of flexibility at the machine level is skill-based control [2]. This applies to assembly and handling processes, shown in [3,4], as well as to machining, shown in [5], which is the focus of this paper. However, this type of control requires a suitable middleware to control the skills at the machine level. The middleware has to get, match
and handle the necessary input parameters for the skills. Further, the middleware has to deal with the storage of product- and process-related data, such as energy consumption data. In the future, a connection to the sovereign European dataspace GAIA-X must be considered to enable shared production scenarios. How geometric data can be extracted from a part is shown in [5]. In this approach, however, the feature extraction is directly coupled to the machine tool and thus does not allow flexible control of different machines or tracking of product-related process data. In order to maintain conformity with the approaches of Industry 4.0 and to store the product-related process data along the product lifecycle, product Asset Administration Shells (AAS) should be used [6]. For the field of assembly, [7] showed a possible approach to use AASs in production. To the best of our knowledge, there is no existing approach that brings together the concepts of feature extraction, AAS and capabilities, skills and services, i.e. skill-based control, in the field of machining. To this end, the paper presents an architecture that connects the mentioned approaches for the field of machining; to keep it simple, no multi-agent system (MAS) is used.
2 State of the Art
An information model for capabilities, skills and services is described in [8]. The model defines a basis for flexible production in the context of Industry 4.0. It can be used to build up shared production networks. A shared production enables the planning of dynamic supply chains among trusted partners. To identify a process for an individual product, production functions in a factory are modelled by capabilities and skills. According to [8], a capability is defined as the "implementation-independent specification of a function in industrial production to achieve an effect in the physical or virtual world", while a skill is defined as the "executable implementation of an encapsulated (automation) function specified by a capability". To automate production, an important point is the matching of product information to the existing capabilities and skills. In traditional machining operations, components are designed in CAD and then technical drawings are created to obtain quotations from manufacturers. One approach to directly use the digital model to generate a production plan is feature recognition. "Features are technical information items which represent one or more products in the (technical) region of interest. A feature is described by an aggregation of characteristics of a product". A product can be described by form features, which define geometric items within a CAD model. Capabilities and skills can be described by manufacturing features, which define the shape and topology of the manufacturing process [9]. By matching form features and manufacturing features, manufacturing processes can be identified. In order to describe products and capabilities in a manufacturer-independent way, the AAS is a possible choice. The AAS is a development of Industry 4.0 for exchanging information in a vendor-independent way. Specifications define the meta model and elements to describe an asset [10].
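The capability/skill distinction above can be made concrete with a minimal sketch. This is illustrative only: the class names, the example capability, and the OPC UA-style addressing are assumptions, not the actual information model of [8].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """Implementation-independent specification of a production function."""
    name: str            # e.g. "mill rectangular pocket" (assumed naming)

@dataclass(frozen=True)
class Skill:
    """Executable implementation of a function, specified by a capability."""
    capability: Capability
    machine: str         # resource offering the skill (hypothetical name)
    node_id: str         # assumed OPC UA-style address of the skill

def skills_for(capability_name, skills):
    """Match a required capability to the skills that implement it."""
    return [s for s in skills if s.capability.name == capability_name]

pocket = Capability("mill rectangular pocket")
skills = [Skill(pocket, "MillingRobot01", "ns=2;s=MillRectPocketSkill")]
print([s.machine for s in skills_for("mill rectangular pocket", skills)])
# → ['MillingRobot01']
```

The indirection via `Capability` is what lets the same function be found under different skill names and interfaces on different machines, a point the paper returns to in Sect. 3.2.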
Several research projects provide approaches and show solutions for defining features and matching them to capabilities with ontologies. The ManuService ontology defines a product ontology based on milling, turning and drilling features to be able to identify needed services in cloud manufacturing [11]. However, to automate manufacturing of the features, specific geometric parameters are missing in the ManuService ontology (e.g. the position and orientation of a feature). The Semantically Integrated Manufacturing Planning Model (SIMPM) ontology identifies possible process flows based on product features, capabilities, and functions [12]. Complex rules are used to identify possible manufacturing processes, so it is difficult for a user to understand and expand the ontology. The Manufacturing Resource Capability Ontology (MaRCO) describes capabilities in a process-specific manner based on DIN 8580 [13,14]. This makes it difficult to match product features to only one capability, since DIN 8580 describes manufacturing processes in a general and process-related manner (e.g. machining with geometrically defined cutting edges/external milling). A more detailed and product-related description has to be chosen (e.g. milling rectangular pockets). Because standardized, vendor-independent capability ontologies in the form of machining features are missing, in this paper the matching of features to skills is configured by a user (see Sect. 3.2). To execute skills of machines in a factory, the skill interface is used [8]. OPC UA, as a vendor-independent communication framework, can be used as a semantic interface to control machines in a factory [3,5].
3 Implementation of a Simple Skill-Based Manufacturing Architecture
While skill-based approaches are already being tested for production tasks such as assembly and handling, shown in [3,4], this work focuses on the use of skills in the area of machining. It extends the preliminary work of [5], which shows a first implementation of manufacturing skills for drilling a hole and milling a rectangular pocket based on CAD information. The extension covers the development of a control architecture with AAS including the integration of a new skill-based transport system as well as the extension of the skill structure by a FeasibilityCheck and a ContextCheck according to [15]. In addition, a connection to the GAIA-X network was created using an International Dataspace Connector, which can be used to receive production orders directly from other participants in the network. Even though [7] shows that control using multi-agent systems (MAS) is a promising approach for complex production and transport systems, the use of a MAS was omitted for reasons of complexity. Since only one machine tool and one transport vehicle are used in the implemented production scenario, reconfigurability and dynamic process planning can also be achieved with the simpler architecture shown in Fig. 1.
[Figure 1: a CAD plug-in ("Executing skills out of CAD", shown in [5]) generates AAS submodels for the manufacturing process from templates; order data and IDS data from a GAIA-X IDS Connector are stored in a MongoDB (JSON); a dashboard with an order interpreter reads the submodels; the Production Flow Control reads and writes the submodels and controls the machine tool and the transport system via OPC UA skills.]

Fig. 1. Presented manufacturing architecture. The figure shows the flow of information between the participants.
3.1 Overview of the Software Systems Within the Manufacturing Architecture
As shown in Fig. 1, the entire production is controlled skill-based. For this purpose, each larger or smaller production module has its own OPC UA server providing an interface to the implemented skills. These skills can be started and controlled by an OPC UA client by writing certain parameters and triggering a state machine. The complexity of the individual functions of the Cyber-Physical Production Modules (CPPM), in this case the machine tool, is encapsulated behind the skill interfaces. This makes it possible to call the most diverse types of skills in the same manner, such as a transport skill, a gripping skill or a milling skill, only with different parameters. Which skills are to be processed in which order, with which parameters, and on which machine is stored in a defined submodel of the AAS in JSON format. Instances of these AAS submodels are generated by the CAD plug-in and stored in a MongoDB database. As soon as a new instance is stored in the database, it appears as a new order on a dashboard. A supervisor can start manufacturing of the order from this dashboard and monitor the process during production. When the order is started, a reference to the related submodel data is transferred to the Production Flow Control (PFC) via OPC UA. The PFC accesses the database and reads the necessary production data, in particular which skill on which machine is to be executed with which parameters. In addition, after successfully finishing a skill, the so-called FinalResultData are collected by the PFC and written back to the database and thus to the AAS.
3.2 Plug-In for Feature Extraction Within the CAD Tool
The starting point for production is the design of a CAD part. In this case, NX (by Siemens®, version 1872) is used as CAD software, since the API of NX Open offers the possibility to start a feature detection automatically [16]. A plug-in programmed in Python, which uses the NX Open API, can search within the designed CAD part for features as well as product and manufacturing information (PMI) like material or tolerances. Both basic features defined in NX and user-specific features can be identified. Basic features are e.g. pockets, slots or holes. In this way, all necessary geometric and production-relevant parameters can be extracted and saved to a product AAS. A configuration file can be used to provide the software with information about the matching of the NX features to capabilities and skills. Since there is only one milling machine in the manufacturing environment, and to simplify the implementation, features are matched directly to a skill. The intermediate step of matching capabilities was omitted in order to keep the complexity low. The capability is needed as soon as several machines with different skill names and skill interfaces are used; it can then reference the same functions that are offered via different interfaces and under different names. In particular, the configuration file specifies which machines are available on the shop floor, which skills they can perform and which features can be manufactured with these skills. A worker or production planner can then assign a skill to the extracted features within the Graphical User Interface (GUI). In addition, peripheral skills, e.g. for transport of the workpiece, are also chosen in this step when the production plan is created. From the user input, a submodel for the process is generated, added to the product AAS and stored as a JSON file in a database (MongoDB).
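A minimal sketch of such configuration-driven matching is shown below. All machine names, skill names and feature types are hypothetical; the actual configuration file format used in the project is not specified in the paper.

```python
# Hypothetical feature-to-skill configuration: it lists the machines on the
# shop floor, the skills they offer, and which CAD feature types each skill
# can manufacture (names invented for illustration).
CONFIG = {
    "MillingRobot01": {
        "MillRectangularPocketSkill": ["RectangularPocket"],
        "MillCircularPocketSkill": ["CircularPocket"],
        "DrillSkill": ["SimpleHole"],
    },
}

def match_feature_to_skills(feature_type, config):
    """Return all (machine, skill) pairs that can manufacture a feature type."""
    matches = []
    for machine, skills in config.items():
        for skill, feature_types in skills.items():
            if feature_type in feature_types:
                matches.append((machine, skill))
    return matches

print(match_feature_to_skills("SimpleHole", CONFIG))
# → [('MillingRobot01', 'DrillSkill')]
```

In the described setup a planner would pick one of the returned pairs in the GUI; an unmatched feature type (e.g. a free-form surface) yields an empty list and cannot be planned automatically.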
3.3 AAS Submodel for Required Production Steps
The product AAS submodel used in this project is structured as a simple JSON file and is generated by the CAD plug-in as described in Subsect. 3.2. The AAS contains:

– A "ProductIdentification", containing some meta information about the customer and the order and a description of the product.
– A "ProductionLog", containing information about the current status of the production, e.g. the count of skills to finish and skills in total. It is mainly used for monitoring the job.
– A "ProductionPlan", a list containing all the skills required for the production. For each skill, information about its accessibility (IP address of the OPC UA server, NodeId and NamespaceIndex of the skill, etc.) is provided, as well as the values for the different parameters. Additionally, there is further status information and placeholders for the FinalResultData that will be filled after the execution of every skill. The FinalResultData is calculated and written by the skill and contains data about the execution time of the skill, a statement about the successful execution and a failure id.
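The three parts listed above can be illustrated as a Python dictionary mirroring the JSON file. All concrete field names and values here are assumptions for the sketch; the exact submodel schema is project-specific and not reproduced in the paper.

```python
# Illustrative product AAS submodel, following the three parts listed above.
# Field names, endpoints and parameters are invented for this sketch.
submodel = {
    "ProductIdentification": {
        "Customer": "ExampleCustomer",
        "OrderId": "2023-0042",
        "Description": "POM cuboid with one drilled hole",
    },
    "ProductionLog": {
        "SkillsTotal": 2,
        "SkillsFinished": 0,
    },
    "ProductionPlan": [
        {
            "SkillName": "TransportSkill",
            "Endpoint": "opc.tcp://192.168.0.10:4840",  # OPC UA server of the AGV
            "NodeId": "ns=2;s=TransportSkill",
            "Parameters": {"Target": "MachineTool"},
            "FinalResultData": None,  # placeholder, filled after execution
        },
        {
            "SkillName": "DrillSkill",
            "Endpoint": "opc.tcp://192.168.0.20:4840",  # OPC UA server of the machine tool
            "NodeId": "ns=2;s=DrillSkill",
            "Parameters": {"X": 10.0, "Y": 5.0, "Diameter": 6.0, "Depth": 8.0},
            "FinalResultData": None,
        },
    ],
}
# The plan length and the log's skill count should agree.
assert len(submodel["ProductionPlan"]) == submodel["ProductionLog"]["SkillsTotal"]
```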
3.4 Production Flow Control
The central part of the production architecture is the PFC. It provides an OPC UA server for initiating, controlling and monitoring new jobs. After receiving a new job, the PFC opens a new thread in which a new OPC UA client is generated. On the one hand, the thread connects to the MongoDB and reads the required information about the skills from the AAS submodel. On the other hand, it connects to the OPC UA servers of the resources to transfer the parameters and control the state machines of the skills. The PFC processes skill by skill sequentially and writes the FinalResultData back into the AAS submodel after every skill. By opening a thread for every order, every product has its own OPC UA client that leads the product through the manufacturing plant by triggering the corresponding skills. Theoretically, it is possible to start different orders in parallel, but synchronization is not implemented yet.
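The sequential per-order loop can be sketched as follows. This is a pure-Python stand-in: `execute_skill` abstracts away the real OPC UA parameter writes and state-machine calls, and the AAS submodel is a plain dictionary rather than a MongoDB document.

```python
import threading
import time

def execute_skill(step):
    """Stand-in for connecting to the skill's OPC UA server, writing the
    parameters and driving the skill state machine to completion."""
    start = time.time()
    # Real system: OPC UA client writes step["Parameters"], calls Start(),
    # then waits for the state machine to finish.
    return {"Success": True, "ExecutionTime": time.time() - start, "FailureId": 0}

def production_flow_control(submodel):
    """Process the ProductionPlan skill by skill, writing FinalResultData back."""
    for step in submodel["ProductionPlan"]:
        result = execute_skill(step)
        step["FinalResultData"] = result          # written back to the AAS submodel
        submodel["ProductionLog"]["SkillsFinished"] += 1
        if not result["Success"]:
            break  # abort the order on a failed skill

order = {
    "ProductionLog": {"SkillsFinished": 0},
    "ProductionPlan": [{"SkillName": "TransportSkill", "Parameters": {}},
                       {"SkillName": "DrillSkill", "Parameters": {}}],
}
# One thread per order, as in the PFC described above.
t = threading.Thread(target=production_flow_control, args=(order,))
t.start()
t.join()
print(order["ProductionLog"]["SkillsFinished"])  # → 2
```

The one-thread-per-order structure mirrors the PFC design; the missing piece, as the paper notes, is synchronization between threads competing for the same resource.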
3.5 Machine Tool and Skill Interface
An industrial robot (KR300 by KUKA®) with a milling spindle (Fig. 2) is used as the machine tool. An additional programmable logic controller (PLC) communicates with the robot controller and acts as an OPC UA adapter by providing an OPC UA server with the skill interfaces and forwarding the transmitted parameters and method calls (see [5]). On the one hand, manufacturing skills are implemented on the controller, e.g. for milling slots, circular pockets, rectangular pockets and for drilling. On the other hand, skills are also implemented to control the periphery, e.g. to open and close the vice, to open and close the gate or to calibrate the workpiece with the touch probe. The structure of the skill interface is based on the structure of [15]. An essential part of each skill is the SkillStateMachine, which can be used to control the execution of the skill using methods. In addition, most skills have a parameter set that can be used to parameterize the skill. The more complex the skills become in their functionality, the more the two optional checks are required, which can be used to retrieve information about the general feasibility (FeasibilityCheck) and the current feasibility (ContextCheck). By calling the FeasibilityCheck, the machine calculates whether the related skill can generally be executed with the provided parameters. In the case of machine tools, this includes autonomously checking whether there is sufficient machining space, whether suitable tools are available, and whether machining can take place without collisions. This requires active simulations or more extensive calculations in the FeasibilityCheck. A passive decision based on formalized descriptions such as ISO 14649-201 is not suitable, since it neither provides predictions about the expected processing time nor reliable statements about the absence of collisions, e.g. in the case of undercuts.
After the check has been completed, the machine returns information on the general feasibility, the estimated machining time and the estimated energy consumption, among other things. With this information, decision criteria are available when planning a
production job, which can be used to select the most suitable skills. Before the skill is executed, the ContextCheck checks whether the skill can be executed at the current time with the given parameters. This can be used to avoid unnecessary transport processes when a workpiece is transported to a machine but the execution of the skill is currently not possible due to various events.
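A simplified sketch of such a skill, its state machine and the two checks is given below. State names, method names and the returned estimates are assumptions for illustration; the real interface is an OPC UA information model following [15], and a real FeasibilityCheck would run simulations rather than the trivial validation shown here.

```python
class Skill:
    """Simplified sketch of a skill: a parameter set, a state machine,
    and the two optional checks described above."""

    def __init__(self, name):
        self.name = name
        self.state = "Ready"      # assumed state names: Ready / Running
        self.parameters = {}

    def feasibility_check(self, parameters):
        # Real machines simulate the operation here (machining space, tool
        # availability, collisions). This sketch only validates that a tool
        # diameter is given, and returns made-up estimates.
        feasible = parameters.get("Diameter", 0) > 0
        return {"Feasible": feasible,
                "EstimatedTime_s": 42.0,       # assumed estimate
                "EstimatedEnergy_Wh": 15.0}    # assumed estimate

    def context_check(self, parameters):
        # Can the skill run right now? E.g. machine idle, vice empty.
        return self.state == "Ready"

    def start(self, parameters):
        if not self.context_check(parameters):
            raise RuntimeError("skill not executable in current context")
        self.parameters = parameters
        self.state = "Running"

drill = Skill("DrillSkill")
check = drill.feasibility_check({"Diameter": 6.0})
print(check["Feasible"])  # → True
drill.start({"Diameter": 6.0})
print(drill.state)  # → Running
```

Running the ContextCheck immediately before `start` is what avoids the unnecessary transport described above: the workpiece is only moved once the target skill reports it can actually execute.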
Fig. 2. Transport system placing a workpiece into the vice of the machine tool. © SmartFactory-KL/A. Sell
3.6 5G Skill-Based Transport System
An omnidirectional mobile robot (MMO500 by Neobotix®) is used as the transport system, shown in Fig. 2. The robot is controlled by means of the Robot Operating System (ROS). Non-hardware-bound software of the system has been outsourced to a powerful computer in the network (Edge Node) [17]. The Edge Node, which runs roscore, rviz and the OPC UA server that provides the transport skills, communicates via a 5G network with the Automated Guided Vehicle (AGV) itself. The low latency of 5G makes it possible to send the sensor data from the laser scanner on the AGV to the Edge Node, where they are processed. In the other direction, data for the actuators is sent from the Edge Node to the AGV. In this way, the hardware for the processing unit on the AGV can be reduced, as well as its energy consumption. The OPC UA server for controlling the robot arm (UR5 by Universal Robots®) mounted on the AGV also runs on the Edge Node and communicates with the robot controller via 5G. The skills of the transport system can be controlled via the uniform OPC UA interface as described in Sect. 3.5, thus encapsulating the complexity of the transport system.
4 Testing, Results and Discussion
We tested the architecture by machining individualized parts (see Fig. 3). Parts of the results are available on video in [18] and [19]. The components are cuboids made of polyoxymethylene. In CAD, holes, circular pockets, rectangular pockets and other features can be individually designed on the cuboid. Feature extraction is limited to standard features; free-form surfaces and other special geometries cannot be extracted at present. The number of usable features could be increased by choosing NX, compared to [5], where the extraction was demonstrated with Fusion 360. Exporting the product data to an AAS in JSON format in a MongoDB proved to be stable and fast. Latencies are not critical at this level, as these are not real-time applications. The PFC connects to the various machines (transport system and machine tool) and thus leads a product through the manufacturing process. However, its main weakness is the lack of synchronization between two jobs running in parallel. In the scenario shown, this case does not occur because only one machine and one transport system are used, but improvements would be necessary if the system were extended. At the cost of increased implementation effort and a more complex system architecture, [7] shows a possible solution for such extended systems using a multi-agent system. On the one hand, the generation of machine code can be avoided by skill-based control and, on the other hand, a dynamic generation of process chains is possible due to the uniform interface. Through the choice of an AAS, a uniform format can be used for controlling the production as well as for storing the process data.
Fig. 3. Individualized parts that are machined with the presented manufacturing architecture
5 Conclusion and Future Work
The presented work shows, for the first time, a flexible manufacturing architecture built on skills, AAS and feature extraction. With the help of the architecture described in this paper, flexible machining of simple parts in small lot sizes is possible without generating machine code for the machine tool or the transport vehicle. An AAS submodel for executing a manufacturing process, including the transport process, can be generated directly for parts designed in CAD. Parts of this implementation can be seen on video at [18] and [19]. While the approach described is practicable for single production orders, problems can arise as soon as several orders are started simultaneously, since synchronization between the orders is not implemented yet. Also, the manufacturing process is limited to features that can be extracted from CAD. In addition, the current setup does not yet address the execution of the FeasibilityCheck or ContextCheck, even though they are already available in a simple form on the machines. However, in order to obtain information about the feasibility in advance, these checks will be added in future work for production planning. A particular focus of future work will also be on how the described checks can make reliable statements about feasibility and how this information can be used for the execution of the skill, since skills for machining have an enormous complexity if they have to make autonomous machining decisions in the desired way.

Acknowledgment. A part of this work has been supported by the Federal Ministry for Digital and Transport (BMDV). The authors would like to thank the BMDV for its financial support in the context of the 5x5G strategy (funding code: VB5GFKAISE).
References

1. Weyer, S., Schmitt, M., Ohmer, M., Gorecky, D.: Towards industry 4.0 - standardization as the crucial challenge for highly modular, multi-vendor production systems. IFAC-PapersOnLine 48(3), 579–584 (2015). https://www.sciencedirect.com/science/article/pii/S2405896315003821, 15th IFAC Symposium on Information Control Problems in Manufacturing
2. Dorofeev, K., Wenger, M.: Evaluating skill-based control architecture for flexible automation systems. In: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1077–1084. IEEE, Piscataway, NJ (2019)
3. Zimmermann, P., Axmann, E., Brandenbourger, B., Dorofeev, K., Mankowski, A., Zanini, P.: Skill-based engineering and control on field-device-level with OPC UA. In: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1101–1108. IEEE, Piscataway, NJ (2019)
4. Profanter, S., Breitkreuz, A., Rickert, M., Knoll, A.: A hardware-agnostic OPC UA skill model for robot manipulators and tools. In: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1061–1068 (2019)
5. Volkmann, M., Legler, T., Wagner, A., Ruskowski, M.: A CAD feature-based manufacturing approach with OPC UA skills. Procedia Manuf. 51, 416–423 (2020)
6. Plociennik, C., et al.: Towards a digital lifecycle passport for the circular economy. Procedia CIRP 105, 122–127 (2022). https://www.sciencedirect.com/science/article/pii/S221282712200021X
7. Jungbluth, S., et al.: Dynamic replanning using multi-agent systems and asset administration shells. In: 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–8 (2022)
8. Plattform Industrie 4.0: Information model for capabilities, skills & services (2022)
9. VDI 2218: Information technology in product development. Feature technology
10. Plattform Industrie 4.0: Asset administration shell specifications. https://www.plattform-i40.de/IP/Redaktion/EN/Standardartikel/specification-administrationshell.html
11. Lu, Y., Wang, H., Xu, X.: ManuService ontology: a product data model for service-oriented business interactions in a cloud manufacturing environment. J. Intell. Manuf. 30(1), 317–334 (2019)
12. Sarkar, A.: Semantic agent based process planning for distributed cloud manufacturing (2020)
13. Järvenpää, E., Hylli, O., Siltala, N., Lanz, M.: Utilizing SPIN rules to infer the parameters for combined capabilities of aggregated manufacturing resources. IFAC-PapersOnLine 51(11), 84–89 (2018)
14. Järvenpää, E., Siltala, N., Hylli, O., Lanz, M.: The development of an ontology for describing the capabilities of manufacturing resources. J. Intell. Manuf. 30(2), 959–978 (2019)
15. Volkmann, M., Sidorenko, A., Wagner, A., Hermann, J., Legler, T., Ruskowski, M.: Integration of a feasibility and context check into an OPC UA skill. IFAC-PapersOnLine 54(1), 276–281 (2021). https://www.sciencedirect.com/science/article/pii/S2405896321009678
16. Siemens PLM Software: NX open. https://docs.plm.automation.siemens.com/tdoc/nx/1872/nx_api#uid:xid1162445
17. Technologie-Initiative SmartFactory KL e.V.: Industrial edge cloud (2021). https://smartfactory.de/wp-content/uploads/2021/11/SF_Whitepaper-Industrial-Edge-Cloud-WEB.pdf
18. 5G-Kaiserslautern: Flexible Produktion - 5G-Kaiserslautern, 21 December 2022. https://www.5g-kaiserslautern.de/flexible-produktion/
19. SmartFactory-KL: HM 2022: Resilient, nachhaltig, zukunftsorientiert - PL4 steht für die Produktion von morgen - YouTube, 21 December 2022. https://www.youtube.com/watch?v=_ZzbiHO846k&t=1586s
Tool Path Length Optimization in Drilling Operations: A Comparative Study

Alaeddine Zouari¹, Dhouib Souhail¹, Fatma Lehyani¹, and José Carlos Sá²,³

¹ Higher Institute of Industrial Management of Sfax, University of Sfax, Sfax, Tunisia
[email protected]
² ISEP, Instituto Politécnico Do Porto, Porto, Portugal
³ Associate Laboratory for Energy, Transports and Aerospace (LAETA-INEGI), Porto, Portugal
Abstract. Drilling is the most common operation in the manufacture of machined parts. The complexity of this process depends on the number of holes to be machined, which can reach hundreds or even thousands for certain parts. Also, the geometric distribution of the holes may or may not be regular, on a single plane or on several. Therefore, optimizing tool paths in multi-hole drilling operations can reduce time, cost, and energy consumption. Hence, several optimization methods have been used to solve this issue. The objective of this research is to demonstrate the performance of the novel optimization concept Dhouib-Matrix (DM) in finding the shortest drilling tool path and thus improving productivity. In this vein, DM and its derivatives, such as ABC-DM-TSP1, DM3, and A-DM3, have been applied to several case studies of regular rectangular and circular arrays of holes. To evaluate the performance of DM, a comparative study with several optimization algorithms commonly used in the literature is conducted in this article. To this end, data sets from 13 case studies have been used to compare the three derivatives of DM with eleven well-known optimization methods, mainly genetic algorithm, ant colony optimization, artificial bee colony, etc. Computational results indicate that the derivatives of DM outperform all studied competing metaheuristics and, in some cases, yielded improvements exceeding 100% in tool path length.

Keywords: Drilling tool path optimization · Dhouib-Matrix · Comparative study · Regular array of holes
1 Introduction

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 293–301, 2024. https://doi.org/10.1007/978-3-031-38241-3_33

Several researchers have dealt with the issue of optimizing tool paths for various machining and production processes. In milling, Kucukoglu et al. [1] considered a modified Satin Bowerbird Optimizer algorithm to optimize the tool path in CNC milling machines. They noted that their approach provides a considerable machining time reduction and increases the company's operational efficiency rates. In the same vein, Abdullah et al. [2] proposed a modified Ant Colony Optimization algorithm (ACO) to optimize tool paths in contour-parallel milling in order to minimize cutting time. In an experimental study
by Lin et al. [3], a modified ACO was reported to outperform the conventional ACO. Moreover, Karuppanan and Saravanan [4] presented a method adopting Particle Swarm Optimization (PSO) and GA to reduce CNC milling tool path segments. The proposed optimization approach saves almost 40% of the tool's non-productive time during machining compared to Autodesk Inventor HSM and Mastercam. Besides, Zhang et al. [5] proposed a Tabu Search (TS) algorithm-based method to optimize the cutting sequence for rough machining of complex parts. Results show that the tool path length is reduced by 16.7% and the efficiency is increased by 21.62%. In another study, Lin et al. [6] used the Lin-Kernighan heuristic TSP solver to generate the optimal tool path in freeform surface finishing machining. Results show that the proposed method can generate a shorter tool path. For cutting operations, Makbul et al. [7] combined the Simulated Annealing algorithm with Adaptive Large Neighborhood Search to optimize the laser beam path for the 2D cutting process. Simulation results revealed that the proposed algorithm successfully solved several datasets with high-quality solutions and yielded a shorter cutting trajectory compared to CAM software and similar previous works. Furthermore, Sonawane et al. [8] used a greedy algorithm to optimize the plasma cutting head path. This algorithm contributed to reducing the distance traversed by the plasma beam, costs, energy consumption, and cycle time. Some researchers have applied tool path optimization and scheduling to the Additive Manufacturing (AM) process. Gong and Zhang [9] used a modified GA to optimize the tool path length of fused deposition manufacturing 3D printing, which enabled finer tool-path planning for every zigzag pocket. Moreover, Jin et al. [10] proposed two algorithms, namely longest processing time and minimum overlapped area first, to optimize tool path scheduling for AM with Fused Filament Fabrication (FFF).
Results showed that the proposed heuristics reduce the layer printing times. In the same vein, Liu et al. [11] used sliced model decomposition and metaheuristic algorithms to optimize tool path planning for AM. The proposed approach demonstrated its effectiveness and robustness in two physical experiments using direct metal deposition and fused deposition modeling technologies. Similarly, Lai et al. [12] implemented a GA on a CNC tufting machine. The heuristic GA allowed for optimizing needle location paths and reducing the spindle travel time. On the other hand, tool path optimization is also applied in welding, gluing, and deburring operations. Thus, in order to reduce welding costs and improve productivity, Yifei et al. [13] tested GA and discrete PSO to optimize the welding robot path. The simulation results demonstrate that both algorithms obtain a near-optimal tool path and confirm the significance of the study goals. As well, Zhang et al. [14] proposed an improved GA (IGA) to optimize the gluing robot path. Results show that IGA does not deviate from the optimal solution and has better performance in terms of problem-solving quality and processing time compared to discrete PSO. Additionally, Abele et al. [15] implemented the A* algorithm to optimize tool paths for a robot-based deburring process. The study aimed to decrease operation time and minimize the robot arm's trajectory in the deburring process. Nevertheless, several authors have studied tool path optimization for drilling, punching, and tapping operations. Hence, Li et al. [16] converted the hole-machining
Tool Path Length Optimization in Drilling Operations
295
tool path by helical milling to a TSP and used the ACO algorithm for solving this problem. Their simulation results show 41.1% increase in the tool traveling efficiency. In addition, Balderas et al. [17] integrated digital twins with ACO algorithm and direct Simulink model to optimize the Printed Circuit Boards (PCB) hole making tool path for industry 4.0. They claimed that the optimization can triple the number of PCB manufactured. Also, EL-Midany et al. [18] proposed Guided Fast Local Search (GFLS) with the aim to find an optimal tool-point path for small-hole Electrical Discharge Machining drilling. Results confirm the proposed algorithm’s quickness and robustness. Besides, new heuristics and metaheuristics have been designed under the concept of Dhouib Matrix (DM) to optimize the drill path in multi-holes machining operations [19–21]. Proposed methods have been tested for several case studies and their respective performances demonstrated. Consequently, the question arises; do the derivatives of DM provide the shortest tool path compared to commonly used optimization methods in literature? To answer this question, this article proposes a comparative study based on 13 problem tests to demonstrate how the derivatives of DM perform against eleven optimization methods.
2 Dhouib-Matrix Metaheuristic

The deterministic heuristic Dhouib-Matrix-TSP1 (DM-TSP1) was designed under the Dhouib Matrix (DM) concept in 2021 to solve the Travelling Salesman Problem (TSP). The general structure of DM-TSP1 is based on four steps. Step 1: compute the chosen statistical metric for each row of the distance matrix and write it on the right side of the matrix; find the minimal value of this metric (e.g., the range) and select its row; then select the smallest element in this row, which specifies the first two cities x and y to be inserted into the list List-cities {x, y}; finally, discard the columns of city x and city y. Step 2: find the minimal element for city x and city y and select the smallest distance, which indicates the next city z. Step 3: add city z to List-cities and discard its column; go to Step 4 if there is no column left to discard, otherwise return to Step 2. Step 4: compute the value of the solution held in List-cities. The hybrid metaheuristic ABC-DM-TSP1 integrates the Artificial Bee Colony (ABC) metaheuristic with Dhouib-Matrix-TSP1 [19]: DM-TSP1 is run with different statistical metrics (max, min, mean, standard deviation, etc.) to generate several feasible solutions that seed the initial population of ABC. Furthermore, DM-TSP1 has been enhanced into a stochastic version called Dhouib-Matrix-TSP2 (DM-TSP2). DM-TSP2 has been integrated with the Far-to-Near (FtN) local search method to give an iterated metaheuristic named Dhouib-Matrix-3 (DM3) [20]. The DM3 variant Adaptive-Dhouib-Matrix-3 (A-DM3) combines the iterated stochastic DM3 with a tabu memory inspired by the TS metaheuristic [21]. Figure 1a illustrates the general structure of DM3, where K denotes the degree of disturbance (K is the number of closest nodes). Figures 1b and 1c depict the solutions generated by DM3 and A-DM3 for drilling circular and rectangular hole network patterns.
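The four steps above can be sketched in Python. This is a minimal reading of the description, with two stated assumptions that the text leaves open: the row metric is the range (max − min), and the scan in Step 2 covers the rows of every city already in the list; the published DM-TSP1 may differ in these details.

```python
import numpy as np

def dm_tsp1(dist):
    """Sketch of the four-step DM-TSP1 heuristic described above.

    dist: symmetric (n, n) distance matrix. Assumptions (not fixed by the
    text): the row metric is the range (max - min), and Step 2 scans the
    rows of every city already in List-cities. Returns a closed tour and
    its length.
    """
    d = np.asarray(dist, dtype=float)
    n = d.shape[0]
    rows = np.where(~np.eye(n, dtype=bool), d, np.nan)  # mask the diagonal

    # Step 1: per-row metric -> row with the minimal range; its smallest
    # element fixes the first two cities x and y; their columns are discarded.
    metric = np.nanmax(rows, axis=1) - np.nanmin(rows, axis=1)
    x = int(np.argmin(metric))
    y = int(np.nanargmin(rows[x]))
    cities = [x, y]
    remaining = set(range(n)) - {x, y}

    # Steps 2-3: repeatedly append the city z with the smallest distance to
    # any city already in the list, discarding its column, until none remain.
    while remaining:
        z = min(remaining, key=lambda c: min(d[t, c] for t in cities))
        cities.append(z)
        remaining.remove(z)

    # Step 4: evaluate the generated solution (here as a closed tour).
    length = sum(d[cities[i], cities[(i + 1) % n]] for i in range(n))
    return cities, length
```

A drill-path instance is obtained by feeding in the Euclidean distance matrix of the hole coordinates.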
Fig. 1. General structure of DM3 and examples of the generated solutions
3 Comparative Study Conditions

To assess the performance of the DM derivatives, they were applied to several hole network pattern case studies from the literature, and the simulation results were compared with well-known optimization algorithms. The experiments were conducted on a laptop with a 2.50 GHz Intel® Core™ i5-3210M CPU and 8 GB of RAM, with the implementation written in Python under a 64-bit Windows 10 operating system. Workpieces of type (1) consist of a rectangular array of holes, while workpieces of type (2) consist of a circular array of holes. Tables 1 and 2 detail the hole distribution parameters and the number of holes for each test problem.

Table 1. Parameter values for holes drilling in rectangular array case studies

Case study              1     2     3    4    5    6      7      8      9      10
Layout                  4x5   5x5   5x5  7x7  9x9  11x11  11x11  20x20  23x22  100x100
Number of holes         20    25    25   49   81   121    121    400    500    1000
Horizontal space [mm]   100   100   5    5    5    100    5      100    20     20
Vertical space [mm]     50    50    5    5    5    50     5      50     20     20
Total runs              50    30    30   30   50   30     50     50     50     50
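For reference, the two workpiece families can be generated from the parameters in Tables 1 and 2. This sketch assumes holes on a regular grid for the rectangular arrays and equal angular spacing on each concentric circle for the circular arrays; the exact case-study patterns may differ.

```python
import math

def rectangular_holes(nx, ny, dx, dy):
    """Hole centres [mm] of an nx-by-ny rectangular array with spacings dx, dy."""
    return [(i * dx, j * dy) for j in range(ny) for i in range(nx)]

def circular_holes(n_circles, d0=40.0, d_step=20.0, n0=10, n_step=10):
    """Concentric-circle pattern with the Table 2 parameters: first circle
    diameter d0, diameter increment d_step, n0 holes on the first circle,
    and n_step additional holes on each subsequent circle."""
    holes = []
    for c in range(n_circles):
        radius = (d0 + c * d_step) / 2.0
        n = n0 + c * n_step
        holes.extend(
            (radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n))
            for k in range(n)
        )
    return holes
```

With these parameters, case study 1 yields 20 holes and case studies 11 and 13 yield 60 and 2100 holes, matching the tabulated totals.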
4 Computational Experiment Results and Discussion

The experimental results of the DM derivatives were compared to those published in the literature for the same workpiece forms and the same hole-drilling parameter values. For the rectangular array of holes, the ABC-DM-TSP1 results are compared to GA, PSO, ACO, ABC, and the Bat Algorithm (BA) presented by Diyaley et al. [22] for four case studies (3 to 6); see Table 3. In addition, A-DM3 is compared to GA, ACO, and modified ACO (ACO-2 and ACO-3) presented by Abbas et al. [23], and to the modified Shuffled Frog Leaping Algorithm (mSFLA) presented by Dalavi et al. [24], for case studies 1, 2, 7, and 8. A-DM3 is also compared to Cuckoo Search (CS) and the hybrid CS-GA presented by Lim et al. [25] for case studies 9 and 10; see Table 4. Likewise, for the circular array of holes, the DM3 results are compared to GA, PSO, ACO, and modified ACO (ACO p20) presented by Abbas et al. [26] for three further case studies; see Table 5.

Table 2. Parameter values for holes drilling in circular array case studies

Case study                                11   12   13
First circle diameter [mm]                40   40   40
Increment of each diameter [mm]           20   20   20
Number of holes in the first circle       10   10   10
Increment of number of holes per circle   10   10   10
Total number of circles                   3    4    20
Total number of holes                     60   86   2100

Table 3. Comparing ABC-DM-TSP1 to other methods for rectangular array case studies (best found solution [mm])

Case study   GA [22, 23]   ACO [22, 23]   PSO [22]   ABC [22]   BA [22]   ABC-DM-TSP1 [19]
3            131.2         127.0          127.0      127.0      122.0     120.0
4            255.5         247.0          253.6      248.4      244.1     240.0
5            431.9         407.0          420.3      419.4      406.2     400.0
6            642.1         607.0          627.7      640.0      606.8     600.0
In case studies 3 to 6, ABC-DM-TSP1 performed better than GA, PSO, ACO, ABC, and BA, with tool path length reductions ranging from 1.146% to 9.345%. The deviation error rate relative to the DM result is computed as:

deviation error rate [%] = ((best found by metaheuristic − best found by DM) / best found by DM) × 100

Based on the experimental results shown in Table 4, the performance of A-DM3 was tested on several rectangular arrays of holes: 20, 25, 121, 400, 500, and 1000 holes. The tool path length generated by A-DM3 was first compared, in case studies 1, 2, and 7, to GA, ACO, ACO-2, ACO-3, and mSFLA. The computational results demonstrate that A-DM3 widely outperforms GA, ACO, and its derivatives, especially as the number of holes increases. The deviation error rates of the best found solutions were small for the first two cases, but reached 231.37%, 182.45%, 79.00%, and 27.42% for GA, ACO, ACO-2, and ACO-3, respectively, in case study 7. mSFLA generated the solution closest to that of A-DM3, with a deviation error of 6.45%. A confirmatory test was therefore performed in case study 8 on a 400-hole array, where A-DM3 again gave the shortest tool path length compared to mSFLA; the deviation error rate of the best found solution is 4%. In case studies 9 and 10, A-DM3 was compared to CS and the hybrid CS-GA for 500 and 1000 holes. Once again, A-DM3 outperformed both and provided the shortest tool path length, with deviation error rates of 211.84% and 8.74% for CS and CS-GA, respectively, on the 1000-hole array.

Table 4. Comparing A-DM3 to other methods for rectangular array case studies (best found solution [mm])

Case study   GA [22, 23]   ACO [22, 23]   ACO-2 [23]   ACO-3 [23]   mSFLA [24]   CS [25]   CS-GA [25]   A-DM3 [21]
1            1300.0        1341.4         1300.0       1300.0       1300.0       –         –            1300.0
2            1744.6        1765.0         1685.4       1703.2       1819.0       –         –            1685.4
7            23713.3       20212.7        12809.5      9118.6       7618.0       –         –            7156.2
8            –             –              –            –            22800        –         –            21923
9            –             –              –            –            –            2496      1052         1024
10           –             –              –            –            –            6424      2240         2060

Table 5. Comparing DM3 to other methods for circular array case studies (best found solution [mm])

Case study   GA [26]   ACO [26]   ACO p20 [26]   DM3 [20]
11           828       711        594            574
12           1330      1062       804            794
13           –         –          14655          14687
Finally, for the circular array of holes, in case studies 11 and 12, DM3 performs better than GA, ACO, and modified ACO (p = 20), with deviation error rates between 1.26% and 67.50%. However, for the 2100-hole case (13), the results show that the best found solution is given by ACO (p = 20), with a deviation error rate of 0.21% relative to DM3.
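The deviation error rates quoted above follow directly from the tabulated best solutions; for instance, using the GA value for case study 7 and the CS value for case study 10:

```python
def deviation_pct(best_metaheuristic, best_dm):
    """Deviation error rate relative to the DM result, in percent:
    ((best found by metaheuristic - best found by DM) / best found by DM) * 100."""
    return (best_metaheuristic - best_dm) / best_dm * 100

print(round(deviation_pct(23713.3, 7156.2), 2))  # GA vs. A-DM3, case study 7 -> 231.37
print(round(deviation_pct(6424, 2060), 2))       # CS vs. A-DM3, case study 10 -> 211.84
```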
5 Conclusion

In the literature, several research articles deal with the application of optimization methods to increase the productivity of CNC machine tools. In this article, a comparative analysis based on 13 case studies from the literature was presented to test the performance of the optimization concept entitled Dhouib-Matrix. DM-TSP1, A-DM3, and DM3 were tested on the problem of reducing the tool path length in hole-drilling operations. The results demonstrate that the DM derivatives outperform ABC, ACO, ACO-2, ACO-3, BA, GA, mSFLA, and PSO for several rectangular arrays of holes formed, respectively, of 20, 25, 49, 81, 121, 400, 500, and 1000 drillings. ABC-DM-TSP1 and A-DM3 provided a considerable tool path length reduction, with deviation error rates ranging from 1.146% to 231.37%. Additionally, DM3 outperformed GA and ACO on the 60- and 86-hole circular arrays and provided the shortest tool path length; however, it gave almost the same result as ACO (p = 20). Hence, mSFLA and ACO (p = 20) are the main competitors of the DM derivatives. Nevertheless, the DM derivatives remain the most stable methods across all case studies, giving the lowest standard deviation values. The key feature of the DM derivatives is that they are composed of two main phases: generate an initial basic feasible solution (x0) using the stochastic heuristic DM-TSP2, then improve this initial solution (x0) with the local search method FtN (FtN starts from (x0) and moves step by step towards the best solution). To avoid returning to visited solutions, A-DM3 keeps a list, named the Tabu-List, of previously used initial solutions (x0) that are not allowed (tabu solutions) for the next generation. Whenever an initial solution (x0) is generated by the heuristic DM-TSP2, it is recorded and the Tabu-List is updated. The search process ends when the allowed search time or a given number of moves is reached. Hole-making time includes the non-productive time spent moving the tool in addition to the drilling time. Managers should therefore encourage the optimization of machining time by minimizing the drilling tool path, which reduces machining costs and energy consumption and, subsequently, improves productivity and drilling machine availability. Future work will focus on the application of DM to more complicated combinatorial problems, such as optimizing milling operations on CNC machines.

Acknowledgments. The authors acknowledge Fundação para a Ciência e a Tecnologia (FCT) for its financial support - UIDB/50022/2020 (LAETA Base Funding).
References

1. Kucukoglu, I., Gunduz, T., Balkancioglu, F., Topal, E.C., Sayim, O.: Application of precedence constrained travelling salesman problem model for tool path optimization in CNC milling machines. Int. J. Optim. Control: Theor. Appl. (IJOCTA) 9(3), 59–68 (2019)
2. Abdullah, H., Ramli, R., Wahab, D.A.: Tool path length optimisation of contour parallel milling based on modified ant colony optimisation. Int. J. Adv. Manuf. Technol. 92(1–4), 1263–1276 (2017)
3. Lin, Z., Fu, J., Shen, H., Gan, W.: Global uncut regions removal for efficient contour-parallel milling. Int. J. Adv. Manuf. Technol. 68(5–8), 1241–1252 (2013)
4. Karuppanan, B.R.C., Saravanan, M.: Optimized sequencing of CNC milling toolpath segments using metaheuristic algorithms. J. Mech. Sci. Technol. 33(2), 791–800 (2019)
5. Zhang, C., Han, F., Zhang, W.: A cutting sequence optimization method based on tabu search algorithm for complex parts machining. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 233(3), 745–755 (2019)
6. Lin, Z., Fu, J., Shen, H., Gan, W., Yue, S.: Tool path generation for multi-axis freeform surface finishing with the LKH TSP solver. Comput. Aided Des. 69, 51–61 (2015)
7. Hajad, M., Tangwarodomnukun, V., Jaturanonda, C., Dumkum, C.: Laser cutting path optimization using simulated annealing with an adaptive large neighborhood search. Int. J. Adv. Manuf. Technol. 103(1–4), 781–792 (2019)
8. Sonawane, S., Patil, P., Bharsakade, R., Gaigole, P.: Optimizing tool path sequence of plasma cutting machine using TSP approach. In: E3S Web of Conferences ICMED 2020, vol. 184, p. 01037. EDP Sciences (2020)
9. Gong, J., Zhang, L.: Genetic algorithm with unit processor applied in fused deposition manufacturing (FDM) for minimizing non-productive tool-path. In: Proceedings of the Industrial Engineering, Machine Design and Automation (IEMDA 2014) & Computer Science and Application (CCSA 2014), pp. 191–197 (2015)
10. Jin, Y., Pierson, H.A., Liao, H.: Toolpath allocation and scheduling for concurrent fused filament fabrication with multiple extruders. IISE Trans. 51(2), 192–208 (2019)
11. Liu, W., Chen, L., Mai, G., Song, L.: Toolpath planning for additive manufacturing using sliced model decomposition and metaheuristic algorithms. Adv. Eng. Softw. 149, 102906 (2020)
12. Lai, Y.L., Shen, P.C., Liao, C.C., Luo, T.L.: Methodology to optimize dead yarn and tufting time for a high performance CNC by heuristic and genetic approach. Robot. Comput.-Integr. Manuf. 56, 157–177 (2019)
13. Yifei, T., Meng, Z., Jingwei, L., Dongbo, L., Yulin, W.: Research on intelligent welding robot path optimization based on GA and PSO algorithms. IEEE Access 6, 65397–65404 (2018)
14. Zhang, Y., Song, Z., Yuan, J., Deng, Z., Du, H., Li, L.: Path optimization of gluing robot based on improved genetic algorithm. IEEE Access 9, 124873–124886 (2021)
15. Abele, E., Haehn, F., Pischan, M., Herr, F.: Time optimal path planning for industrial robots using STL data files. Procedia CIRP 55, 6–11 (2016)
16. Li, Z.Q., Wang, X., Dong, Y.F.: ACO-based holes machining path optimization using helical milling operation. Adv. Mater. Res. 834–836, 1386–1390 (2014)
17. Balderas, D., Ortiz, A., Méndez, E., Ponce, P., Molina, A.: Empowering digital twin for industry 4.0 using metaheuristic optimization algorithms: case study PCB drilling optimization. Int. J. Adv. Manuf. Technol. 113(5–6), 1295–1306 (2021)
18. EL-Midany, T.T., Kohail, A.M., Tawfik, H.: A proposed algorithm for optimizing the toolpoint path of the small-hole EDM-drilling. In: Geometric Modeling and Imaging GMAI’07, pp. 25–32. IEEE (2007)
19. Dhouib, S., Zouari, A., Dhouib, S., Chabchoub, H.: Integrating the artificial bee colony metaheuristic with Dhouib-Matrix-TSP1 heuristic for holes drilling problems. J. Ind. Prod. Eng. (in press) (2023). https://doi.org/10.1080/21681015.2022.2158499
20. Dhouib, S., Zouari, A.: Optimizing the non-productive time of robotic arm for drilling circular holes network patterns via the Dhouib-Matrix-3 metaheuristic. Int. J. Mechatron. Manuf. Syst. (in press) (2023). https://doi.org/10.1504/IJMMS.2023.10054319
21. Dhouib, S., Zouari, A.: Adaptive iterated stochastic metaheuristic to optimize holes drilling path in manufacturing industry: the Adaptive-Dhouib-Matrix-3 (A-DM3). Eng. Appl. Artif. Intell. 120, 105898 (2023)
22. Diyaley, S., Burman Biswas, A., Chakraborty, S.: Determination of the optimal drill path sequence using bat algorithm and analysis of its optimization performance. J. Ind. Prod. Eng. 36(2), 97–112 (2019)
23. Abbas, A.T., Aly, M.F., Hamza, K.: Optimum drilling path planning for a rectangular matrix of holes using ant colony optimisation. Int. J. Prod. Res. 49, 5877–5891 (2011)
24. Dalavi, A.M., Pawar, P.J., Singh, T.P., Gomes, A.: Optimal drilling sequences for rectangular hole matrices using modified shuffled frog leaping algorithm. Int. J. Ind. Eng. 33, 1–10 (2022)
25. Lim, W.C.E., Kanagaraj, G., Ponnambalam, S.G.: A hybrid cuckoo search-genetic algorithm for hole-making sequence optimization. J. Intell. Manuf. 27(2), 417–429 (2014)
26. Abbas, A.T., Hamza, K., Aly, M.F.: CNC machining path planning optimization for circular hole patterns via a hybrid ant colony optimization approach. Mech. Eng. Res. 4, 16–29 (2014)
Feed Rate Optimization Using NC Cutting Load Maps

N. H. Yoo1, S. G. Kim2, T. H. Kim3, E. Y. Heo2, and D. W. Kim4 (B)

1 Kyungnam University, Changwon 51767, South Korea
2 EDIM Inc., 67 Yusang-Ro, Deokjin-Gu, Jeonju 54852, South Korea
3 Aero-Campus, Korea Polytechnics, Sacheon 52549, South Korea
4 Jeonbuk National University, Jeonju 54896, South Korea
[email protected]
Abstract. In CNC (Computer Numerical Control) machining, single-use components such as moulds and dies require only a one-time machining process. Therefore, collision checks and optimization of the machining path are carried out through simulation ahead of time. Due to the nature of single-use machining, it is challenging to accurately predict the cutting load for a specific spindle-tool-material combination, as many variables are involved. The cutting force can be calculated physically from the tool shape, the material's specific cutting resistance, and cutting conditions such as cutting width and depth, feed rate, and spindle speed. The predicted cutting force has a pattern similar to the cutting load. However, the cutting load may differ from the cutting force owing to factors such as the spindle characteristic curve, temperature, cutting flow, and tool wear. Improved product quality, reduced tool wear during machining, and more informed selection of cutting conditions can be achieved if the predicted cutting force can be acceptably converted into the actual cutting load. This study presents a method for optimizing the feed rate of the tool, which has a direct influence on the cutting load, by converting the cutting force into a cutting load map. The experimental results showed that the actual cutting load can be successfully estimated from the predicted cutting force. These research findings can be applied to single-use machining products through data-based reasoning.

Keywords: NC machining · Tool Feed Rate · Cutting Conditions · Cutting Load Map · Machining Simulation
1 Introduction

The improvement in the performance of CNC (Computer Numerical Control) machine tools and the development of CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) software are contributing to improved product quality and productivity in various industries, such as automobile parts, dies and molds, and mechanical parts. Recently, cutting-physics-based machining simulation software has been commercialized [1, 2], and optimization methods that can improve machining efficiency and product quality have been introduced prior to actual machining.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 302–310, 2024. https://doi.org/10.1007/978-3-031-38241-3_34
Methods to indirectly measure the cutting force to detect tool wear or damage have also been studied. Drossel et al. (2018) measured the cutting force by installing a piezo sensor on an insert-type tool [3]. Hiruta et al. (2019) collected data using multiple sensors and analyzed the signal distribution to diagnose the state of the equipment [4]. Teti et al. (2010) installed a torque sensor on the spindle of a machine tool and measured the cutting force to detect tool wear [5]. Aslan and Altintas (2018) analysed signal characteristics and diagnosed tool vibration by measuring the current of the spindle motor drive when tool vibration occurs [6]. Optimization methods that take into account raw material information and equipment characteristics prior to machining have also been studied. Erkorkmaz et al. (2013) predicted the cutting force through cutting simulation and optimized the feed rate for each cutting segment by considering the cutting force limit and the jerk that occurs during cutting [7]. Wirtz et al. (2018) attempted to predict the roughness of the machined surface by predicting the cutting force and tool deflection through cutting simulation [8]. Optimization tools that eliminate overload, idle load, and tool vibration during machining by predicting cutting forces have been commercialized. However, these commercial tools are still not in widespread use, as the optimization of NC data, which directly affects productivity and quality on the machining floor, remains largely based on machining experience and expertise [9]. Thus, data-driven research has recently been carried out in various fields, including manufacturing sites: Lu et al. (2016) diagnosed abnormal tool conditions using data such as product shape, tool, material, and tool dynamometer information [10], and Mourtzis et al. (2016) predicted the energy used for production by monitoring the machining status of equipment and the current consumed during cutting in real time [11]. As a result of this previous research, it became possible to optimize linear speed, feed per tooth, depth of cut, cutting width, tool path, etc., to extend tool life. Rattunde et al. [12] studied feed rate optimization to maximize the material removal rate (MRR) using the preset spindle speed and torque, and a cutting-force-based feed rate optimization method was tried by Xiong et al. [13] based on the tool shape and material properties. Both methods are similar in that they geometrically calculate the tool's engagement with the material and exploit it according to the tool feed. However, the cutting load also varies depending on the performance (torque and power) of the spindle and the shape of the tool. Thus, an optimization method using the cutting load generated during actual machining is more accurate than one based on the predicted MRR or cutting force. Therefore, this study proposes a method to predict the actual machining load from the cutting force calculated through machining simulation. To predict the machining load, a cutting load map for the tool-material combination is first created through a preliminary experiment. After converting the simulation results using this map, the feed rate is optimized. The proposed method is aimed at one-time, single-use cutting processes such as dies and moulds.
2 Optimization of Machining Data

The productivity and quality of NC machining are determined by the machining data, i.e., the NC part program. To machine a workpiece of a specific shape, cutting conditions, tools, and machining paths are input into a CAM system to create a part program. Traditionally, a commercial CAM system is used rather than writing a part program manually, and each system provides different machining paths and performance. As a result, further optimization of the machining data is necessary. Machining data in which the feed rate is changed according to the cutting load control curve can be evaluated using the cutting load measured during actual machining. The current applied to the spindle motor by the spindle drive was measured with a current sensor, and the current generated during material cutting was defined as the cutting load and used to evaluate the machining data. For assessing the machining data optimization, relative evaluation criteria are necessary when different units are present. In this study, the average value is used to compare optimization through the cutting force: the average (Avg) is calculated from the predicted values, and the upper and lower limits are set to Avg ± 20%.
Fig. 1. Illustration of feed rate interpolation: (a) linear interpolation; (b) interpolation result (blue line: feed rate, light blue line: actual cutting load)
As illustrated in Fig. 1(a), the feed rate is determined as follows: i) The minimum feed rate (minF) is applied for predicted values (L) exceeding the upper limit (maxL), ii) The maximum feed rate (maxF) is applied for predicted values below the lower limit (minL), iii) For predicted values within the range of the upper and lower limits, the feed rate is calculated through linear interpolation.
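The three rules can be sketched as follows. The Avg ± 20% limits follow Sect. 2, while minF and maxF are feed bounds assumed to be supplied by the process engineer (the values used below are hypothetical):

```python
def control_limits(predicted_loads, band=0.20):
    """Sect. 2 criterion: limits at the average of the predicted values +/- 20%.
    Returns (minL, maxL)."""
    avg = sum(predicted_loads) / len(predicted_loads)
    return avg * (1 - band), avg * (1 + band)

def feed_override(load, min_l, max_l, min_f, max_f):
    """Fig. 1(a) rule: minF above the upper limit, maxF below the lower limit,
    and linear interpolation in between (higher predicted load -> lower feed)."""
    if load >= max_l:
        return min_f
    if load <= min_l:
        return max_f
    t = (load - min_l) / (max_l - min_l)  # 0 at minL, 1 at maxL
    return max_f + t * (min_f - max_f)
```

A predicted load exactly midway between the limits thus receives the midpoint of the feed range.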
3 Feed Rate Adjustment Based on Cutting Force

In order to compare the optimization criteria, the workpiece depicted in Fig. 2 was machined. The material was S45C, and the D16 R0.8 insert end mill from the initial experiment was utilised. The original machining data (NC code) specified two-and-a-half-axis machining with the same tool path for each layer. For the machining simulation, MAL Inc.'s MACHPRO [2] was employed, and the parameters for optimization are listed in Table 1. The feed rate adjustment, as shown in Fig. 1(a), reduces the feed rate when the load exceeds the upper limit and increases it when the load falls below the lower limit. For cutting loads between the upper and lower limits, linear interpolation is carried out as shown in Fig. 3, which illustrates the case where the tool feed rate is optimised based on the cutting force criterion.
Fig. 2. Machining features of a workpiece: (a) scroll workpiece; (b) shouldering; (c) scrolling

Table 1. Cutting-force based feed criteria in shouldering and scrolling

Cutting force criteria   Average    UL/LL            Feed override range   No load / fast feed rate
Shouldering (O3000)      99.3 [N]   119.2/79.4 [N]   ± 20%                 2022 [mm/min]
Scrolling (O3001)        42.0 [N]   50.3/33.6 [N]    ± 20%                 2022 [mm/min]
Fig. 3. Cutting force based feed rate adjustment
4 Feed Rate Optimization Based on Actual Cutting Loads The cutting force and cutting load, which are the results of simulation, may vary depending on the performance of the equipment. It is necessary to optimise them based on the actual machining load generated during cutting. If the actual machining load can be
predicted by converting the predicted cutting force, it is possible to optimize the load generated during machining. This can be achieved by controlling the feed rate with specific objectives in mind, such as increasing tool life, improving machined surface quality, and minimizing energy consumption. In this study, the feed rate was optimised based on the predicted machining load, using the results of preliminary tests under the assumption that the cutting depth was constant.

4.1 Cutting Load Maps via Conversion Models

Through preliminary experiments, a regression equation for converting the cutting force (X) into the machining load was obtained (Eq. (1), Fig. 4). The regression equations that determine the machining load as a function of feed rate and cutting width (W) are presented in Eqs. (2)–(9) and Fig. 5. Equations (2)–(9) assume the cutting depth remains constant at 1 mm, as in the initial experiment.

L(F) = 0.188X − 0.00237X^2 + 0.0000128X^3 − 0.000000024X^4 + 6.1  (1)

L(F300) = 3.3E−08 W^4 − 7.4E−06 W^3 + 0.000478 W^2 + 0.051956 W + 6.1  (2)
L(F400) = 7.4E−08 W^4 − 1.5E−05 W^3 + 0.000849 W^2 + 0.057707 W + 6.1  (3)
L(F500) = 9.7E−08 W^4 − 1.9E−05 W^3 + 0.000932 W^2 + 0.06874 W + 6.1  (4)
L(F600) = 9.1E−08 W^4 − 1.5E−05 W^3 + 0.000535 W^2 + 0.072077 W + 6.1  (5)
L(F700) = 1.29E−07 W^4 − 2.1E−05 W^3 + 0.000749 W^2 + 0.079754 W + 6.1  (6)
L(F800) = 1.85E−07 W^4 − 3.1E−05 W^3 + 0.001154 W^2 + 0.082231 W + 6.1  (7)
L(F900) = 1.7E−07 W^4 − 2.9E−05 W^3 + 0.001065 W^2 + 0.088252 W + 6.1  (8)
L(F1000) = 1.7E−07 W^4 − 2.8E−05 W^3 + 0.000892 W^2 + 0.096752 W + 6.1  (9)
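The conversion models can be evaluated directly in code. The coefficients below are transcribed from Eqs. (1)–(9); the helper feed_for_target_load (choosing the largest tabulated feed whose predicted load stays under a target) is our illustrative addition, not a method defined in the paper.

```python
import numpy as np

def force_to_load(x):
    """Eq. (1): convert the predicted cutting force X [N] into a machining
    load [A]; 6.1 A is the no-load offset."""
    return 0.188*x - 0.00237*x**2 + 0.0000128*x**3 - 0.000000024*x**4 + 6.1

# Eqs. (2)-(9): load as a polynomial in cutting width W [mm] at a fixed feed
# rate [mm/min] (cutting depth fixed at 1 mm). Highest-degree term first.
LOAD_MAP = {
    300:  [3.3e-8,  -7.4e-6, 0.000478, 0.051956, 6.1],
    400:  [7.4e-8,  -1.5e-5, 0.000849, 0.057707, 6.1],
    500:  [9.7e-8,  -1.9e-5, 0.000932, 0.06874,  6.1],
    600:  [9.1e-8,  -1.5e-5, 0.000535, 0.072077, 6.1],
    700:  [1.29e-7, -2.1e-5, 0.000749, 0.079754, 6.1],
    800:  [1.85e-7, -3.1e-5, 0.001154, 0.082231, 6.1],
    900:  [1.7e-7,  -2.9e-5, 0.001065, 0.088252, 6.1],
    1000: [1.7e-7,  -2.8e-5, 0.000892, 0.096752, 6.1],
}

def predicted_load(width, feed):
    """Machining load [A] for a cutting width at a tabulated feed rate."""
    return float(np.polyval(LOAD_MAP[feed], width))

def feed_for_target_load(width, target_load):
    """Largest tabulated feed whose predicted load stays at or below the
    target for this width (illustrative Method-2-style lookup)."""
    feasible = [f for f in sorted(LOAD_MAP)
                if predicted_load(width, f) <= target_load]
    return feasible[-1] if feasible else min(LOAD_MAP)
```

At zero force or zero width, both models reduce to the 6.1 A no-load offset, as expected.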
Fig. 4. Conversion of cutting force to cutting load
Fig. 5. Converting cutting width to cutting load with a constant feed rate
4.2 Feed Rate Optimization Using Cutting Load Maps

As noted above, if the predicted cutting force can be converted into the actual machining load, the load generated during machining can be optimized by controlling the feed rate with specific objectives in mind, such as increasing tool life, improving machined surface quality, and minimizing energy consumption. In this study, the feed rate was optimised using the results of the preliminary tests under the assumption that the cutting depth was constant. The first method converts the cutting force into the machining load using Eq. (1) and performs linear interpolation (Method 1). The second method calculates the cutting load from the cutting width at a fixed feed rate (Method 2). The difference between these two optimization methods and the one presented in Sect. 3 is that they seek to make the machining load uniform by keeping it within the upper and lower limits. The constraints are a maximum feed per tooth of 0.13 mm/tooth and a cutting speed of 180 m/min. For the machining load optimization, the upper limit (maxL) was set to 11 A and the lower limit (minL) to 10 A.
Fig. 6. Optimization by Method 1 (O3000: the estimated cutting force of the first layer)
Fig. 7. Optimization by Method 2 (O3000: the estimated cutting force of the first layer)
4.3 Extended Experimental Results

For shouldering (O3000), Method 1 is predicted to yield the lower cutting force, while for scroll wall machining (O3001), Method 2 is predicted to yield the lower cutting force. Nevertheless, when optimization is carried out using Method 2, the actual machining load is relatively uniform. As seen in Table 2, the machining time generally increased, except for a reduction in the O3001 machining using Method 1. This reduction is achieved by greatly reducing the feed rate only in the overload sections, which helps to maintain a uniform machining load and minimise the increase in machining time. Figures 6 and 7 show the results of re-simulating the NC programs optimized by Methods 1 and 2. For shouldering, Method 2 is predicted to have a higher cutting force than Method 1, while the opposite holds for the scroll wall. However, Figs. 8 and 9, which show the actual machining load, indicate that Method 2 is closer to the desired machining load (maxL = 11 A, minL = 10 A). The cutting load distribution graphs in Figs. 8 and 9 likewise demonstrate that Method 2 stays closer to the target range.
Fig. 8. Cutting load map and distribution by Method 1 (the first layer of O3000)
Fig. 9. Cutting load map and distribution by Method 2 (the first layer of O3000)
Table 2. Machining time comparison via simulation

               O3000 (shouldering)          O3001 (scrolling)
Org. NC code   58 min 51 s       –          51 min 23 s    –
Method 1       1 h 29 min        +51.2%     50 min 16 s    −2.2%
Method 2       1 h 6 min 7 s     +12.3%     57 min 50 s    +12.6%
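The percentage changes in Table 2 follow from the raw machining times (here in seconds):

```python
def pct_change(t_opt, t_org):
    """Relative machining-time change in percent, rounded to one decimal."""
    return round((t_opt - t_org) / t_org * 100, 1)

org_o3000 = 58 * 60 + 51      # 58 min 51 s
method1_o3000 = 89 * 60       # 1 h 29 min
method2_o3000 = 66 * 60 + 7   # 1 h 6 min 7 s
print(pct_change(method1_o3000, org_o3000))  # -> 51.2
print(pct_change(method2_o3000, org_o3000))  # -> 12.3
```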
5 Concluding Remarks

Simulation-based NC machining uses the contact surface between the tool and the material, computed prior to machining, to predict the cutting force and carry out optimization. Its advantage is that overloads and machining time delays can be anticipated and prevented before actual cutting. However, the predicted values can differ from the machining load generated in actual machining, which varies with the performance of the equipment's spindle. In this study, therefore, machining was performed by optimising the tool feed rate based on the cutting load. The experimental tests demonstrated that there is a discrepancy between the actual machining load and the simulated value. Through the further experiments addressed in this study, a method was proposed for optimising the prediction of the cutting load according to the floor engineer's intention: the cutting force is calculated for the equipment-tool-material combination and then converted into the machining load, from which the tool feed rate is optimised. For the feed rate optimization, additional experiments were carried out on two methods: Method 1, which uses a regression equation to convert the cutting force into the machining load, and Method 2, which uses a machining load map obtained by transforming the cutting width at a fixed feed rate. Of the two, Method 2 demonstrated a high degree of uniformity in the cutting load. It was also found that predicting the cutting load from several factors is more efficient than prediction based on a single factor. Future research might encompass a method that utilises multiple factors to predict the machining load, as well as a study that derives the machining load map from machining history data, without relying on preliminary experiments.

Acknowledgement. This research was supported by the Software Convergence Cluster 2.0, which was funded by the Ministry of Science and Information and Communications Technology (ICT), Gyeongsangnam-do, the National IT Industry Promotion Agency (NIPA), and the Institute for Information and Communications Technology Promotion (IITP). The latter was funded by the Korean Government (Ministry of Science, ICT, and Future Planning (MSIP)) as part of the Development and Proof of Open Manufacturing Operation System with ICT Convergence, under Grant 2020-0-00299-002.
References

1. VERICUT Homepage. https://www.cgtech.com. Accessed 06 May 2023
2. MACHPRO Homepage. https://www.malinc.com. Accessed 06 May 2023
3. Drossel, W., Gebhardt, S., Bucht, A., Kranz, B., Schneider, J., Ettrichrätz, M.: Performance of a new piezo ceramic thick film sensor for measurement and control of cutting forces during milling. CIRP Ann. 68(1), 45–48 (2018)
4. Hiruta, T., Uchida, T., Yuda, S., Umeda, Y.: A design method of data analytics process for condition based maintenance. CIRP Ann. 68(1), 145–148 (2019)
5. Teti, R., Jemielniak, K., O'Donnell, G., Dornfeld, D.: Advanced monitoring of machining operations. CIRP Ann. 59(2), 717–739 (2010)
6. Aslan, D., Altintas, Y.: On-line chatter detection in milling using drive motor current commands extracted from CNC. Int. J. Mach. Tools Manuf. 132, 64–80 (2018)
7. Erkorkmaz, K., Layegh, S.E., Lazoglu, I., Erdim, H.: Feedrate optimization for freeform milling considering constraints from the feed drive system and process mechanics. CIRP Ann. Manuf. Technol. 62(1), 395–398 (2013)
8. Wirtz, A., Meiner, M., Wiederkehr, P., Myrzik, J.: Simulation-assisted investigation of the electric power consumption of milling processes and machine tools. Procedia CIRP 67, 87–92 (2018)
9. Altintas, Y.: Manufacturing Automation: Metal Cutting Mechanics, Machine Tool Vibrations, and CNC Design, 2nd edn. Cambridge University Press, Cambridge (2012)
10. Lu, N., Li, Y., Liu, C., Mou, W.: Cutting tool condition recognition in NC machining process of structural parts based on machining features. Procedia CIRP 56, 321–325 (2016)
11. Mourtzis, D., Vlachou, E., Milas, N., Dimitrakopoulos, G.: Energy consumption estimation for machining processes based on real-time shop floor monitoring via wireless sensor networks. Procedia CIRP 57, 637–642 (2016)
12. Rattunde, L., Laptev, I., Klenske, E.D., Möhring, H.: Safe optimization for feedrate scheduling of power-constrained milling processes by using Gaussian processes. Procedia CIRP 99, 127–132 (2021)
13. Xiong, G., Li, Z.-L., Ding, Y., Zhu, L.: Integration of optimized feedrate into an online adaptive force controller for robot milling. Int. J. Adv. Manuf. Technol. 106(3–4), 1533–1542 (2019)
Some Challenges and Opportunities in Additive Manufacturing Industrialization Process

Zahra Isania1(B), Maria Pia Fanti2, and Giuseppe Casalino1

1 Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Orabona, 5, 70125 Bari, Italy
[email protected]
2 Department of Electrical and Information Engineering, Polytechnic University of Bari, Via Orabona, 5, 70125 Bari, Italy
Abstract. Additive Manufacturing (AM), a term often used interchangeably with 3D Printing (3DP), has brought about a significant transformation in design and production by enabling a continuous digital flow from material data to the finished product. New opportunities arise for diversification, increased agility, and potentially lower overall costs compared to traditional manufacturing methods. Various industries are therefore moving away from traditional manufacturing methods and turning to this technology to improve their processes and outputs, and ultimately their supply chain efficiency. Comprehensive innovation in, and application of, AM processes and equipment, in conjunction with emerging technologies such as cloud manufacturing, big data, and the Internet of Things, is necessary to improve the AM process. In fact, several challenges and issues must still be addressed before AM can be fully industrialized and widely adopted across different industries. The authors of this paper succinctly highlight significant concerns related to production and operational management, presenting them as an inspiration for readers to tackle challenges and capitalize on opportunities during the industrialization process of additive manufacturing.

Keywords: Additive Manufacturing · 3D Printing · Challenges and Opportunities · Industrialization process
1 Introduction

In recent years, additive manufacturing has emerged as a rapidly evolving manufacturing technology. It is increasingly recognized as a crucial tool in a variety of fields, ranging from the medical sciences to the aerospace, automotive, energy, and electrical industries. By offering greater design flexibility, access to new materials, and the ability to produce lightweight and complex geometries, this innovative approach provides a promising solution to the challenges and limitations of traditional manufacturing methods and thus represents a new pathway towards progress in manufacturing [1–5]. AM offers several advantages, such as supporting in-chain suppliers, producing lighter parts, greater accessibility for various fields, less assembly, reduced material waste, and environmental friendliness [6]. Given the most recent progress in the manufacturing industries, AM methods will soon dramatically change the manufacturing landscape, and many parts of production can merge with other fields of science, such as artificial intelligence and the 3DP of soft robotic systems. 3DP technology has changed quickly since its origin because of unique benefits and capabilities that can affect many areas of science. As stated in reference [7], revenue in the 3DP industry has grown by approximately 27% over the past 29 years: although the technology was not in commercial use in the 1980s, the industry's market value has steadily increased, and this trend is predicted to continue [8]. 3DP is thus set to become one of the most essential and effective parts of the supply chain. If it becomes increasingly prevalent in a wide variety of industries, from simple household items to complex industrial components and thermal management systems, 3DP will play a key role in the industrialization of manufacturing and could ultimately lead to the development of new supply chain systems and business models [9, 10]. This paper assesses the current state of additive manufacturing (AM) industrialization and reviews open issues related to its challenges and opportunities. The final goal is to determine whether AM will become a widely adopted technology in industrial innovation and entrepreneurship, resulting in a growing number of products, or whether it will continue to be used primarily for specialized or customized products. 3D Printing (3DP) may play a less central role in mass manufacturing, but it could still have a significant impact on certain industries and markets. From the available sources, the authors have identified some major issues in production planning and control.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 311–318, 2024. https://doi.org/10.1007/978-3-031-38241-3_35
Operational management can also represent a source of challenges and opportunities for 3DP industrialization. The role of numerical simulation is highlighted for its enormous potential, but an in-depth discussion is reserved for a separate paper. This paper is organized as follows. After the introduction, the second section explores the ability of AM to be reproduced and repeated on a large industrial scale. The third section examines the effects of operations management on smart and sustainable AM. The fourth section offers insight into the opportunities of AM industrialization and draws some inferences from the analysis presented.
2 Producibility, Repeatability, and Reproducibility of the AM Process at Industrial Scale

In 3D printing, producibility, repeatability, and reproducibility are important factors that determine the quality and consistency of the parts produced. Producibility refers to the ability of a 3D printing process to produce parts that meet specific requirements in terms of dimensional accuracy, surface finish, and mechanical properties; it is a measure of the process capabilities and of the ability to produce parts that meet the design specifications. Repeatability refers to the capability of a 3DP process to repeatedly produce parts with the same dimensional accuracy, surface finish, and mechanical properties. It is a measure of the
consistency of the process, and it is important for producing parts that are interchangeable and meet the same quality standards. Reproducibility is a measure of the ability of the process to produce parts that are identical to a reference part. Ensuring producibility, repeatability, and reproducibility in 3D printing requires controlling factors such as the quality of the raw materials, the calibration of the equipment, the stability of the process, and the quality of the process parameters. It also requires dimensional inspection, visual inspection, and mechanical testing [11]. The AM product realization process has been explained by NIST researchers through a six-activity model [12]. Figure 1 shows the information map of AM, which comprises four levels: technical model, reproducibility layers, AM digital spectrum (conceptual model), and data packet (data model). As data moves vertically from the digital spectrum layer to the reproducibility layer, the information becomes more refined. Similarly, horizontal information flow from the first activity, creating the AM design (A1), to the last, qualification (A6), enhances the communication of part-to-part reproducibility. The data structure establishes the foundation for a digital origin and, ultimately, a digital discipline. Repeatability is associated with the first four activities in the NIST-AM digital spectrum row (A1 through A4 in Fig. 1) because it is closely related to the AM process. The data package for reproducibility includes the data necessary to support manufacturability, part qualification, and repeatability (process validation).
Fig. 1. The AM product realization process into a six-activity model [12].
The desired structure for reproducibility, producibility, and repeatability is extracted from the digital spectrum layer and further refined with each step in the feature layer. The key characteristics for each step in the AM process are documented in the data packet row. As the process progresses from the feature row to the data packet row, the data structure is formatted as data packages for reproducibility, producibility, and
Fig. 2. A model that outlines the concepts of reproducibility, producibility, and repeatability based on the product, process, and resources [13].
repeatability. The data packets are continuously verified and validated for repeatability, reproducibility, and producibility as they move from the data packet row to the reproducibility row. Ultimately, stakeholders can use each data packet for product repeatability, process producibility, and part-to-part reproducibility to achieve their specific goals. Reproducibility covers the entire dataset of all six activities and is therefore the most comprehensive, while repeatability includes only the data sets of A1 to A4. Producibility is related to information such as three-dimensional models and material type, which are based on minimum design characteristics. An information map of this kind leverages informatics to facilitate part producibility, process repeatability, and part-to-part reproducibility in a 3DP process; Fig. 2 shows those concepts in terms of the product, process, and resource (PPR) model [13].
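Such repeatability is ultimately checked against dimensional-inspection data. As a rough, generic illustration (this calculation is not part of the NIST model [12], and the measurement values are invented), a standard process-capability index can be computed over repeated prints of one nominal feature:

```python
import statistics

def process_capability(measurements, lower_spec, upper_spec):
    """Cp / Cpk indices for a printed dimension: spec width over six process
    sigmas (Cp), and the same idea corrected for centering (Cpk).
    Cp >= 1.33 is a common shorthand for a repeatable process."""
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    cp = (upper_spec - lower_spec) / (6 * sigma)
    cpk = min(upper_spec - mu, mu - lower_spec) / (3 * sigma)
    return cp, cpk

# Ten repeated prints of a nominally 10.00 mm feature, spec 10.00 +/- 0.10 mm
dims = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.01, 10.02, 9.99]
cp, cpk = process_capability(dims, 9.90, 10.10)
```

The same indices, computed across machines or sites rather than across repeated runs on one machine, would quantify reproducibility instead of repeatability.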
3 Operations Management for Smart and Sustainable AM

Operations management covers the complete timeline of the production of a service or product, from the input stage to the final stage: planning, organizing, and monitoring operations, manufacturing and production processes, and providing services so as to arrive at the desired high-quality product. Operations management has a significant impact on 3D printing, as it involves the coordination and management of all activities in the production process. Some of its key impacts on 3D printing include the features listed below [14].

Resource optimization: processing orders and assigning print jobs to machines. These functions are performed based on collected data about order date, material availability, job priority, print bed size, and machine availability, which makes them much more accurate than manual computation and ensures more optimal use of resources [15]. The
software also guarantees jobs are completed using the least amount of material, power, and human effort.

Efficient and automated workflows: manual processes and spreadsheets are replaced with simple digital workflows. As AM operations expand, manual data collection and scheduling can lead to bottlenecks, and lack of oversight and human error can limit capabilities. Linking manual processes and automating them on a digitized platform improves efficiency and reduces manual work.

Full traceability: AM execution systems can trace the manufacturing of a product from raw materials to the final product. This can be used to improve the overall performance of the manufacturing process, and it is useful for customers looking for a picture of the lifetime of the product they buy.

Delay prediction: an AM Execution System (MES) monitors machine data in real time, tracks scheduling, estimates future times, and anticipates potential production delays.

Reduced operating costs: streamlined and automated workflows help companies reduce operating costs [16]. Automation reduces labor costs and allows human resources to prioritize valuable tasks. Using data to order materials and schedule work also avoids wasted resources, optimizes production, and controls costs.

Connection and standardization: communication between the various elements of the production process is facilitated. The software connects processes such as post-processing, machine management, and order management and ensures smooth communication between each stage of manufacture. Connecting all processes under one software platform helps users manage production and saves them from manually transferring information between stages.

Scalability: the software helps the operation grow and respond to growing business needs, adapt to rapid growth, and increase production volume.
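The job-to-machine assignment described under resource optimization can be caricatured as a greedy scheduler; every rule and field name below is invented for illustration and is not taken from [14, 15]:

```python
import heapq

def schedule_jobs(jobs, n_machines):
    """Assign print jobs (highest priority first, ties broken by order date)
    to whichever machine frees up earliest.
    Returns a list of (job_id, machine, start, end)."""
    jobs = sorted(jobs, key=lambda j: (-j["priority"], j["order_date"]))
    machines = [(0.0, m) for m in range(n_machines)]  # (free_at, machine_id)
    heapq.heapify(machines)
    plan = []
    for job in jobs:
        free_at, m = heapq.heappop(machines)
        start, end = free_at, free_at + job["hours"]
        plan.append((job["id"], m, start, end))
        heapq.heappush(machines, (end, m))
    return plan

jobs = [
    {"id": "J1", "priority": 1, "order_date": 1, "hours": 5.0},
    {"id": "J2", "priority": 2, "order_date": 2, "hours": 3.0},
    {"id": "J3", "priority": 1, "order_date": 3, "hours": 2.0},
]
plan = schedule_jobs(jobs, n_machines=2)
```

A real AM execution system would additionally nest parts on the print bed, check material stock, and re-plan on machine failures; the point here is only the data-driven assignment loop.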
Smart AM: smart and sustainable additive manufacturing refers to the integration of advanced technologies and sustainable practices in the field of additive manufacturing (3D printing). Its purpose is to improve the effectiveness and productivity of 3DP processes while decreasing their environmental effects [17]. Some specific applications of digital twins (DT) and smart manufacturing can also be highlighted: implementing the digital counterpart of a manufacturing cell to automate and configure its process, and using simulations for smart manufacturing, which has been developed for production zones and for the simulation of complete factories. DT in smart manufacturing has also been used in AM, armament manufacturing, and aircraft [18]. Recent works have presented a framework for designing digital twins in smart manufacturing systems, with a special focus on data analytics, cloud computing, industrial artificial intelligence, virtual reality, and blockchains.

Sustainable AM: designing sustainable production systems using methods, processes, and technologies that are eco-friendly and energy efficient is both favorable and necessary for the sustainable development of services and products, and efforts should be made to establish and maintain such systems. When a new production system is designed for a given production goal, technical, economic, and environmental requirements must be met: for instance, the manufacturing capacity must reach at least a certain level, the cost of the production system must be within budget, and the environmental impact is expected to stay below a certain guideline value. Östergren et al. [19] and Johansson et al. [20] describe how Discrete Event Simulation (DES) can be used in combination
with life cycle assessment (LCA) to reduce environmental impacts during food production. DES in combination with LCA can evaluate the performance of a production system by considering environmental measures before construction or actual use of the production system. The results of the study indicate the potential utility of using DES in combination with life cycle assessment data to generate the specifications needed to design sustainable manufacturing systems.
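In the spirit of [19, 20], though far simpler than a real discrete-event model, the following sketch (all rates invented) accumulates an environmental measure alongside throughput for a single machine, illustrating how a simulation can report both before the system is built:

```python
def simulate_line(n_parts, cycle_h, kwh_per_part, idle_kw, horizon_h):
    """Toy single-machine run: process parts back to back within a shift,
    accumulate processing energy, then add idle consumption for the rest of
    the shift. Returns (parts completed, total kWh)."""
    t, done, energy = 0.0, 0, 0.0
    while done < n_parts and t + cycle_h <= horizon_h:
        t += cycle_h
        energy += kwh_per_part
        done += 1
    energy += (horizon_h - t) * idle_kw  # idle draw until end of shift
    return done, energy

# 10 parts, 0.5 h cycle, 2 kWh per part, 0.3 kW idle draw, 8 h shift
done, energy = simulate_line(n_parts=10, cycle_h=0.5, kwh_per_part=2.0,
                             idle_kw=0.3, horizon_h=8.0)
```

Dividing the energy figure by the part count gives a kWh-per-part measure that could feed an LCA inventory, which is the coupling the cited studies exploit.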
4 Opportunities in the AM Industrialization Process

According to the United Nations Industrial Development Organization, 3DP is a vital part of the digital strategic response to the world's challenges, including sustainability and pandemic scenarios. AM will bring disruptions to domestic and international value chains, because it enables the production of customized and complex parts with shorter lead times and lower costs than traditional manufacturing methods. Hence, parts production may shift from centralized locations to a distributed model, potentially changing current supply chain structures. Moreover, the adoption of AM may also reduce inventory and transportation costs, as parts can be manufactured on demand, closer to the point of use [21]. A rational AM industry-standard system should be developed to promote the innovation and comprehensive application of AM processes and equipment, combined with intelligent production systems and emerging technologies like the Internet of Things, cloud manufacturing, and big data; this will be of great importance for achieving a leap forward in production technology. In several areas, the research community should make efforts over the next ten years to increase the readiness of AM technology and its attractiveness as an alternative to traditional manufacturing processes [22]. In the future, the working volume of AM devices should be increased as much as possible to enable the fabrication of large parts in one piece (such as large aircraft parts, body bones, and car components). At present, part size affects production time, which can increase significantly and can limit the possible industrial applications of AM, especially where high and fast production speed is required [23]. Moreover, standards and regulations will play a crucial role in the pace of AM development over the next 10 years.
The medical, aerospace, and automotive sectors need standards to drive the use of AM components in vital structural applications. In other words, regulatory agencies require historical experience and data to understand how the technologies behave, so standards are expected to gradually extend their application from non-critical programs to the most complex design scenarios. The use of simulation and virtual testing will become more widespread, allowing manufacturers to optimize their designs and improve the predictability of production outcomes. AM simulation, also supported by deep learning models, can help optimize the production process by allowing manufacturers to identify and resolve potential issues before they occur [24]. By simulating the production process, manufacturers can validate the quality of parts and ensure they meet specifications, saving additive manufacturers the cost of physical testing and the risk of production failures [25].
5 Conclusions

In conclusion, the industrialization of 3DP requires integrating the technology into production planning and control, which increases efficiency, reduces costs, and improves product quality. This requires addressing key issues such as the producibility, repeatability, and reproducibility of the AM process at an industrial scale. Scaling 3DP up for mass production is a key aspect of industrialization and requires modern operations management and smart solutions for its sustainability. Simulation can play a crucial role in the industrialization of 3DP, since it allows manufacturers to test and evaluate the performance of the 3DP process before it is physically realized; this can save time and resources by identifying and correcting issues early in the design of the manufacturing process. The industrialization of 3D printing is a multifaceted and continuing effort that necessitates investment in research and development, as well as partnerships between industry, government, and academia. Nevertheless, its significance in improving large-scale manufacturing is expected to grow in the future.
References

1. Gardan, J.T.: Additive Manufacturing Handbook: Product Development for the Defense Industry, 2nd edn. CRC Press, USA (2017)
2. Gibson, I., Rosen, D., Stucker, B., Khorasani, M.: Design for Additive Manufacturing. Additive Manufacturing Technologies, 3rd edn. Springer, Switzerland (2021). https://doi.org/10.1007/978-3-030-56127-7
3. Gehler, M., Kosicki, P.H., Wohnout, H.: Christian Democracy and the Fall of Communism. Leuven University Press, Belgium (2019)
4. Gao, W., et al.: The status, challenges, and future of additive manufacturing in engineering. Comput. Aided Des. 69, 65–89 (2015)
5. Espera, A.H., Dizon, J.R.C., Chen, Q., Advincula, R.C.: 3D-printing and advanced manufacturing for electronics. Prog. Addit. Manuf. 4(3), 245–267 (2019). https://doi.org/10.1007/s40964-019-00077-7
6. Bourell, D.L., Leu, M., Rosen, D.: Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing. University of Texas at Austin Laboratory for Freeform Fabrication Advanced Manufacturing Center (2009)
7. Campbell, I., Diegel, O., Kowen, J., Wohlers, T.: Wohlers Report 2018: 3D Printing and Additive Manufacturing State of the Industry: Annual Worldwide Progress Report (2018)
8. Ashour Pour, M., Zanardini, M., Bacchetti, A., Zanoni, S.: An economic insight into additive manufacturing system implementation. In: Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., von Cieminski, G. (eds.) APMS 2015. IAICT, vol. 460, pp. 146–155. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22759-7_17
9. Schwab, K.: The Fourth Industrial Revolution, 1st edn. New York (2017)
10. Ghahfarokhi, P.S., et al.: Opportunities and challenges of utilizing additive manufacturing approaches in thermal management of electrical machines. IEEE Access 9, 36368–36381 (2021)
11. George, E., Liacouras, P., Rybicki, F.J., Mitsouras, D.: Measuring and establishing the accuracy and reproducibility of 3D printed medical models. Radiographics 37(5), 1424–1450 (2017)
12. Kerwien, S., Collings, S., Liou, F., Bytnar, M.: Measurement science roadmap for metal-based additive manufacturing. NIST (2013)
13. Kim, D.B., Witherell, P., Lu, Y., Feng, S.: Toward a digital thread and data package for metals-additive manufacturing. Smart Sustainable Manuf. Syst. 1(1), 75 (2017)
14. Khorram Niaki, M., Nonino, F.: Additive manufacturing management: a review and future research agenda. Int. J. Prod. Res. 55(5), 1419–1439 (2017)
15. Framinan, J.M., Perez-Gonzalez, P., Fernandez-Viagas, V.: An overview on the use of operations research in additive manufacturing. Ann. Oper. Res. 1–36 (2022)
16. Holmström, J., Gutowski, T.: Additive manufacturing in operations and supply chain management: no sustainability benefit or virtuous knock-on opportunities. J. Ind. Ecol. 21(S1), S21–S24 (2017)
17. Pang, T.Y., Pelaez Restrepo, J.D., Cheng, C.T., Yasin, A., Lim, H., Miletic, M.: Developing a digital twin and digital thread framework for an 'Industry 4.0' shipyard. Appl. Sci. 11(3), 1097 (2021)
18. Siedlak, D.J.L., Pinon, O.J., Schlais, P.R., Schmidt, T.M., Mavris, D.N.: A digital thread approach to support manufacturing-influenced conceptual aircraft design. Res. Eng. Des. 29(2), 285–308 (2017). https://doi.org/10.1007/s00163-017-0269-0
19. Östergren, K., Berlin, J., Johansson, B., Sundström, B., Stahre, J., Tillman, A.M.: A tool for productive and environmentally efficient food production management. In: Book of Abstracts, European Conference of Chemical Engineering (ECCE-6), Copenhagen, pp. 16–20 (2007)
20. Johansson, B., Stahre, J., Berlin, J., Östergren, K., Sundström, B., Tillman, A.M.: Discrete event simulation with lifecycle assessment data at a juice manufacturing system. In: Proceedings of FOODSIM Conference (2008)
21. United Nations Industrial Development Organization. https://www.unido.org/news/future-industrialization-post-pandemic-world-industrial-development-report-2022. Accessed 15 Feb 2023
22. Huang, Y., Leu, M.C., Mazumder, J., Donmez, A.: Additive manufacturing: current state, future potential, gaps and needs, and recommendations. J. Manuf. Sci. Eng. 137(1) (2015)
23. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional manufacturing. Bus. Horiz. 60(5), 677–688 (2017)
24. Goh, G.D., Sing, S.L., Yeong, W.Y.: A review on machine learning in 3D printing: applications, potential, and challenges. Artif. Intell. Rev. 54(1), 63–94 (2020). https://doi.org/10.1007/s10462-020-09876-9
25. Prashar, G., Vasudev, H., Bhuddhi, D.: Additive manufacturing: expanding 3D printing horizon in industry 4.0. Int. J. Interact. Des. Manuf. 1–15 (2022)
Design Parameters to Develop Porous Structures: Case Study Applied to DLP 3D Printing

R. Rodrigues1, P. Lopes2, Luis Oliveira2, L. Santana2,3, and J. Lino Alves1,2(B)

1 Mestrado em Design Industrial e de Produto, Universidade do Porto, Porto, Portugal
[email protected]
2 INEGI, Faculdade de Engenharia da Universidade do Porto, Porto, Portugal
3 Universidade de São Paulo – USP, São Paulo, Brazil
Abstract. Generative design is a collaborative design process between humans and computers that provides multiple solutions to the same problem. Lattice structures developed with this method reduce one property, e.g. mass, while trying to maintain another, e.g. strength. This work aims at developing lattice structures for bone implants, making them porous in order to promote the healing of bone tissue. Periodic cell-type test cubes were developed with maximum pore sizes of 500 and 300 μm and wall thicknesses of 1, 0.6, 0.4 and 0.2 mm, along with stochastic Voronoi-type test cubes with a wall thickness of 0.4 mm and 500, 750, 1000, 1500 and 2000 points. The cubes were printed on a Digital Light Processing (DLP) 3D printer and analysed in terms of printability, uncured-resin removal, mass, and compressive strength. The periodic test cubes did not pass the printing stage, as they came out deformed or with filled pores. The stochastic test cubes showed acceptable results in the printing and cleaning process. The compression tests show that increasing the number of points, while keeping the wall thickness constant, increases the mechanical properties: compared with the dense samples, the 500-point samples show a reduction of 98% in maximum compressive stress and of 99% in compressive modulus, while the 2000-point samples show reductions of 97% and 98%, respectively.

Keywords: Additive Manufacturing · Digital Light Processing · Generative Design · Lattice Structures · Periodic · Stochastic
1 Introduction

Generative design can be described as a process of collaborative design between humans and computers and can be associated with any design method in which the designer or engineer uses a system to solve a problem in an automated way, producing a wide variety of solutions and shapes for one problem [1]. This method is typically used with one of two main objectives: to produce a solution that fulfils a given set of parameters/restrictions,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 319–327, 2024. https://doi.org/10.1007/978-3-031-38241-3_36
or to produce a solution that maximizes a given goal [2]. This second objective can be pursued in order to, e.g., reduce the weight of the designed object while keeping its remaining properties. Lattice structures are the means to achieve these solutions; they are defined as objects composed of cells that repeat without gaps and are interconnected in all directions. Such structures are obtainable by traditional manufacturing processes only when their shapes are simple and of medium/large scale; for more complex shapes and at the micro-scale, manufacturing with these processes becomes too expensive and limited [3, 4]. Thanks to the great advances in additive manufacturing, these limitations are now being overcome. Lattice structures are gaining importance since their main objectives are to reduce the time, energy and material required during manufacturing and to optimize the strength of the produced object while decreasing its mass [4]. They are used in various industries, such as the medical, automotive, aerospace and aeronautics, and structural and civil engineering sectors [4]. They come in two types: periodic and stochastic. Periodic lattice structures have a single cell shape, repeated along all three axes, and are created from required geometric properties such as cell size, angle, number of repetitions, and the boundaries of the final object [4]. Stochastic lattice structures have a random distribution of cells and cell shapes; they can be analysed but cannot be accurately recreated or predicted. Both types are shown in Fig. 1(a) and (b).
Fig. 1. Representation of Lattice Structures: (a) Periodic and (b) Stochastic.
This study demonstrates the process of defining parameters for the production of porous lattice structures to be implemented in biodegradable implant components. These components should promote regenerative behaviour and/or bone tissue formation. According to Sartori et al. [5], pores greater than 500 μm are able to support rapid vascularization of fibrovascular tissue, and pores with diameters between 120 and 350 μm support events related to bone tissue growth. In addition to defining the parameters required for the development of the structures, the impact of changing these parameters on the mechanical properties will be analysed, along with printability and the removability of uncured resin.
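To make the interplay of these parameters concrete, the following sketch estimates how many cells fit along an edge and the theoretical porosity for one pore/wall pair used in the study. It is not from the paper: the 10 mm cube size is an assumption, and the porosity formula models an idealized lattice with square channels of the pore width running along all three axes:

```python
def cubic_lattice(edge_mm, pore_mm, wall_mm):
    """Cells per edge and theoretical porosity of an idealized periodic
    lattice. With p = pore / (pore + wall), the union of three orthogonal
    square channels per unit cell has volume fraction 3p^2 - 2p^3
    (inclusion-exclusion over the three channel directions)."""
    pitch = pore_mm + wall_mm          # repeating unit length
    n = int(edge_mm // pitch)          # whole cells fitting along one edge
    p = pore_mm / pitch
    porosity = 3 * p**2 - 2 * p**3
    return n, porosity

# One of the paper's parameter pairs: 500 um pores, 0.4 mm walls,
# applied to a hypothetical 10 mm test cube
n, porosity = cubic_lattice(edge_mm=10.0, pore_mm=0.5, wall_mm=0.4)
```

Under these assumptions the 0.5 mm / 0.4 mm pair yields roughly 58 % porosity, showing why thin walls are needed to keep implant-relevant pore sizes while removing substantial mass.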
2 Materials and Methods

A lattice structure was developed for each of the mentioned types, periodic and stochastic, using version 6 of the Rhinoceros 3D software with the Grasshopper plugin. This software was chosen because it allows complex geometries to be easily modified and
Design Parameters to Develop Porous Structures
321
the parameter values to be changed, producing new shapes quickly. Periodic Lattice Structures with cubic cells and Stochastic Lattice Structures using the Voronoi method were chosen as the targets of the study. After the test cubes of the two types were modelled, they were printed in Phrozen ABS-like matte grey resin on a Sonic Mighty 4K printer, cleaned, weighed on an HLD300 scale, and subjected to compression tests on an Instron 3387 machine, at a speed of 2 mm/min and with a 100 kN load cell. The cleaning process was also studied in order to determine the best procedure to remove the resin, since in the first samples difficulties were observed in removing the uncured resin, which left pores covered.
2.1 Design Software
Rhinoceros 3D® modelling software allows 2D and 3D shapes to be modelled in a more refined way than traditional CAD software, and is used in sectors such as jewellery, architecture, marine engineering and automotive, among others. It has an intuitive and interactive interface that supports 2D and 3D modelling, technical drawings, rendering, model preparation, and import and export of files in various formats. What makes it stand out is that it allows the designer to create and accurately modify complex curves, surfaces, and solids in a simple and effective way. On top of that, further functionality can be added by installing plugins. Grasshopper is a visual programming language available as a plugin for Rhinoceros 3D. It is primarily used in parametric modelling for structural engineering and in algorithmic modelling, where complex-shaped geometries are sought. With this plugin, three-dimensional models are generated by dragging components (blocks of pre-established functions) into a window and linking the outputs of these components to the inputs of subsequent components. It is also possible to include scripting components.
2.2 Lattice Structures
As previously mentioned, one structure of each type, periodic and stochastic, is studied. Of the Periodic type, structures with cubic cells were chosen in order to verify the existence of a relation between the thickness and the maximum pore size that would improve the printing process and the removal of resin from the test cubes. The main advantage of this type of structure is that, owing to the regular cell shape, it is theoretically simpler to develop and to model. These structures allow direct control of certain characteristics, such as maximum pore size and thickness. Usually, they present a strong anisotropy [6]. Of the Stochastic type, the Voronoi method was selected because the development of the digital models relies on a Voronoi function of the modelling software to divide the cells so that they occupy volumetric spaces similar to each other. The principal advantage of this method is that the shape and distribution of the cells are designed randomly, producing more organic structures, similar to those found in bone tissue. These types of structures can exhibit mechanical behaviour similar to that of an isotropic object when the number of cells within a volume is sufficiently large [6].
322
R. Rodrigues et al.
2.3 Test Cubes Development Process
The two types of porous structures presented above were designed as a standard cube of 20 × 20 × 20 mm³. During this development, the parameters that could be changed for each type of structure were established.
2.3.1 Periodic Lattice Structures – Cubic Cells
The development of these structures begins by modelling the cube that represents the total volume of a Periodic Cubic Cell (Fig. 2). The dimension of this cube is obtained from formula (1):
Cell dimension (c) = Maximum pore dimension (a) + 2 × Thickness value (b)   (1)
Fig. 2. Composition of a periodic cubic cell: (a) Maximum pore dimension; (b) Thickness; (c) Cell dimension.
Initially, two values were selected for the maximum pore dimension, 500 μm and 300 μm, to be reduced to the limits of the available tools (computational and software capacity) once acceptable results were obtained; the thickness values were 1, 0.6, 0.4 and 0.2 mm. After modelling the cube of the total cell dimension, centred holes with a square profile measuring the maximum pore dimension (a) on each side were drilled in it. Once this is done, the modelling of the cell is finished, and the repetition of that shape is performed using an Array software function (Fig. 3). The number of repetitions is defined by formula (2):
Cubic sample dimension (d) / (maximum pore dimension (a) + thickness (b))   (2)
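Formulas (1) and (2) can be sketched in a short script (the parameter names are ours; the original models were built in Rhinoceros 3D with Grasshopper, not in Python):

```python
import math

def cell_dimension(max_pore_mm, thickness_mm):
    # Formula (1): c = a + 2*b
    return max_pore_mm + 2 * thickness_mm

def repetitions(sample_mm, max_pore_mm, thickness_mm):
    # Formula (2): d / (a + b), rounded up so the cell array overfills
    # the sample volume; the excess is trimmed afterwards
    return math.ceil(sample_mm / (max_pore_mm + thickness_mm))

# 20 mm sample, 500 um (0.5 mm) maximum pore, 0.4 mm thickness
print(cell_dimension(0.5, 0.4))    # -> 1.3 (mm)
print(repetitions(20, 0.5, 0.4))   # -> 23 repetitions per axis
```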
Fig. 3. Cubic sample with a cell inside.
When the number of repetitions has decimal places, it is necessary to round up, allowing excesses in the model. After the software performs the repetitions, these excesses
of the cell-filled cube are removed. This method of developing the test cubes accepts as parameters the maximum pore dimension, the thickness and the cubic sample dimension, while the number of repetitions is related to the cube volume and should not be changed. The top and bottom edges of the repeated cells coincide.
2.3.2 Stochastic Lattice Structures – Voronoi
A modelling formula (Fig. 4) has been developed that allows the parameters, and therefore the model, to be modified efficiently, permitting quick adjustments. It begins by modelling the sample cube, 20 × 20 × 20 mm³, and then inserting randomly distributed points in it. These points serve as centres for the creation of cells by the software's Voronoi 3D function, which separates the sample cube into similar volumes. After the sample volume is divided into cells, their edge lines are extracted to create a frame. A smooth function is applied to this frame to make it more organic and free of sharp edges, and then the thickness is applied. The resulting file is heavy, so before export it is necessary to reduce the quality of the mesh by 80% for it to be printable; otherwise the software that prepares the files for printing, Chitubox 64 1.8.1, fails. This method for developing Stochastic Lattice Structures using Voronoi cells accepts as parameters the number of points to be inserted, which corresponds to the number of cells in the volume, the thickness of the walls, and the measurements of the sample cube.
Fig. 4. Modelling Formula for Stochastic Lattice Structures Voronoi.
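The point-seeding and cell-division steps of this pipeline can be approximated outside Grasshopper; the sketch below (using SciPy, which is our substitution, not the software from the study) seeds random points in a 20 mm cube and computes the 3D Voronoi diagram whose cell edges would form the frame:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
n_points = 2000        # number of cells in the study's chosen configuration
side = 20.0            # sample cube edge length in mm

points = rng.uniform(0.0, side, size=(n_points, 3))  # random cell seeds
vor = Voronoi(points)  # divides space into one region per seed

# The edges of these regions are what gets extracted as the frame,
# smoothed, and thickened (0.4 mm in the study) before printing.
print(len(vor.point_region))   # -> 2000, one Voronoi region per seed
```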
2.4 Developed Samples
During the modelling of the test cubes of Periodic Lattice Structures with Cubic Cells, it was noticed that decreasing the value of either parameter, while keeping the same size of the cubic samples, increased the number of repetitions of the cell and, consequently, the software processing time. Reducing the thickness below 0.2 mm and/or the maximum pore size to 200 μm caused the modelling software to stop responding. In the process of developing the test cubes of the Voronoi Stochastic Lattice Structures (Fig. 5) it was possible to conclude that increasing the number of points within the same volume and the value of the thickness leads to a decrease in pore size, but increasing the latter value makes the resin removal process more difficult, because the pores come out closed from the start. Reducing this value would also, theoretically, cause a loss of resistance, so a thickness of 0.4 mm was defined for the study. It was also found that increasing the number of points increases the load that the software and the computer have to bear.
Fig. 5. Stochastic Lattice Structures Voronoi: 2000 points with 0.4 mm thickness.
2.5 3D Printing and Resin Removal Process
Three test cubes of each type and configuration were printed, and a cleaning process was performed to remove uncured resin. The process begins with a 45 min pre-cleaning in the Creality UW-01 machine in 96% isopropyl alcohol. Immediately after, the samples were weighed, cleaned with absorbent paper and weighed again, then dried for 48 h in a cardboard box with the interior covered with absorbent paper and containing a silica bag, to promote dehumidification, in a controlled environment and without light, to avoid uncontrolled curing. After this drying period, the samples were weighed and then cleaned by ultrasound (Soltec 1200 M S3) for 2 min in a container with 96% isopropyl alcohol. The test cubes were weighed again, underwent another 48 h drying period under the same conditions as before, and were weighed once more. On completion of this cleaning process, the cubes were cured for 1 h in the Creality UW-01 machine. The supports were then removed and the samples were measured, weighed, and subsequently subjected to testing (Fig. 6). During the printing process, the Periodic type sample cubes showed difficulties in resin removal, as some of the pores were already filled at the start. In versions where the thickness value was smaller than the maximum pore size, it was not possible to obtain the printed models, as they showed little resistance. In some samples the pores also came out deformed rather than square. All weighing was carried out on an HLD300 balance, with a maximum capacity of 300 g and a sensitivity of 0.0005 g. Measurements of the test cubes were taken with a Mitutoyo Absolute Digimatic Calliper 500 with 0.02 mm resolution. Printing was carried out on the Phrozen Sonic Mighty 4K printer, with supports placed automatically by the preparation program and the samples at a 45° angle to each plane.
Three dense cubes were also printed, with the same measurements, so that it was possible to make comparisons between the results obtained.
Fig. 6. Printed samples: 500, 750, 1000, 1500, 2000 points.
3 Results and Discussion
Because it was not possible to obtain Periodic type cubes without filled pores, the analyses and tests were performed only on the Stochastic Voronoi type samples. These were analysed in terms of mass (Fig. 7), to study the effect of the cleaning process, and subjected to compression tests to analyse the effect of the number of points on the mechanical properties (Fig. 8). The results obtained in the compression tests of the samples with different configurations were evaluated using Analysis of Variance (ANOVA, α = 0.05). The statistical analysis of the data showed that both the maximum compressive strength (MCS) (F(4,10) = 28.60, P = 0.00) and the compressive modulus (CM) (F(4,10) = 12.15, P = 0.00) were significantly affected by varying the number of points. The graphs in Fig. 8(a) and (b) show the mean MCS and CM as a function of the number of points, respectively. The average values of each response were compared using the Scott-Knott post-hoc test. In the graphs, bars with the same letters correspond to equal averages.
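As an aside, the reported F(4, 10) statistics are consistent with five point-count groups of three replicates each; an equivalent one-way ANOVA can be run with SciPy (the readings below are placeholders, not the study's measurements):

```python
from scipy.stats import f_oneway

# Hypothetical MCS readings (MPa), three replicates per number of points
mcs = {
    500:  [0.51, 0.53, 0.55],
    750:  [0.60, 0.63, 0.61],
    1000: [0.70, 0.72, 0.69],
    1500: [0.81, 0.84, 0.80],
    2000: [0.90, 0.92, 0.94],
}

# With 5 groups x 3 replicates, degrees of freedom are (5-1, 15-5) = (4, 10)
f_stat, p_value = f_oneway(*mcs.values())
print(f_stat, p_value)
```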
Fig. 7. Average mass values for each cleaning stage. M1: Immediately after cleaning; M2: After cleaning with paper; M3: After drying for 48 h; M4: Immediately after ultrasonic cleaning; M5: After drying for 48 h.
Fig. 8. Average Maximum Compressive Strength (a) and Compressive Modulus (b) values for each number of points.
The analysis of the average values in Fig. 8(a) and (b) shows that the MCS and CM tend to increase as the number of points increases. In other words, the closer the structure is to a dense body, with smaller pores and lower porosity, the better the mechanical properties. Comparing the porous samples with the dense ones, it can be observed that with a mass reduction of 58% – from the dense cubes (9.12 g) to the cubes with Np = 500 (3.84 g) – there is a decrease of 98% in the maximum stress (from 33.02 MPa to 0.53 MPa) and of 99% in the compressive modulus (from 0.74 GPa to 0.01 GPa). In the case of the cubes with Np = 2000, the mass is reduced by 46% compared to the dense cubes, with a loss of 97% in MCS (from 33.02 MPa to 0.92 MPa) and 98% in CM (from 0.74 GPa to 0.02 GPa). Therefore, although the ANOVA and Scott-Knott tests show an increase in the mechanical properties with the number of points, when weighed against the mass of the dense cubes, the percentage losses in strength and modulus do not differ much between the cubes with 500 and 2000 points. The losses in mechanical properties can be explained by the applied post-treatments: the decrease in the thickness of the walls of the porous structure may have induced an overcure effect, which was not verified in the dense body. In addition, thin walls may be more permeable to alcohol solutions which, given the long cleaning time, may have affected the chemical structure of the photopolymer resin used. In dense structures the described effects may also occur; however, they may be concentrated in the shell of the piece, maintaining a dense and mechanically resistant core.
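The percentage reductions quoted above follow directly from the reported means; a quick check:

```python
def pct_drop(dense, porous):
    # percentage reduction relative to the dense reference, rounded to whole %
    return round(100 * (1 - porous / dense))

print(pct_drop(9.12, 3.84))    # mass, Np = 500   -> 58
print(pct_drop(33.02, 0.53))   # MCS,  Np = 500   -> 98
print(pct_drop(0.74, 0.01))    # CM,   Np = 500   -> 99
print(pct_drop(33.02, 0.92))   # MCS,  Np = 2000  -> 97
```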
4 Conclusions
This study demonstrated that the cleaning process is indispensable in the production of these porous structures by Digital Light Processing 3D printing technology to remove uncured resin. It was also realized that, with the printing parameters used in this study, it is not possible to print samples of the Periodic type with cubic cells, probably due to the angles of the supports or the pore size. In the case of the Stochastic Voronoi type samples, it was understood that increasing the number of points inserted within the same volume increased the mechanical strength; the configuration with 2000 points and 0.4 mm thickness, in comparison with the dense samples, showed a decrease of 46% in mass, 97% in Maximum Compressive Strength and 98% in Compressive Modulus. Further studies can and should be carried out on the Stochastic Voronoi cube samples, to analyse the effects of changing the thickness parameter, and on the Periodic samples with cubic cells, to determine which printing parameters should be used to obtain acceptable samples.
Acknowledgments. This work was supported by FCT (Fundação para a Ciência e Tecnologia), within the project PTDC/CTM-CTM/3354/2021 “Biodegradable implants in porous iron obtained by additive manufacturing”.
References
1. Di Filippo, A., Lombardi, M., Marongiu, F., Lorusso, A., Santaniello, D.: Generative design for project optimization. In: Proceedings – DMSVIVA 2021: 27th International DMS Conference on Visualization and Visual Languages, pp. 110–115 (2021)
2. Buonamici, F., Carfagni, M., Furferi, R., Volpe, Y., Governi, L.: Generative design: an explorative study. Comput. Aided Des. Appl. 18(1), 144–155 (2020)
3. Park, K.M., Min, K.S., Roh, Y.S.: Design optimization of lattice structures under compression: study of unit cell types and cell arrangements. Materials 15(1), 97 (2022)
4. Helou, M., Kara, S.: Design, analysis and manufacturing of lattice structures: an overview. Int. J. Comput. Integr. Manuf. 31(3), 243–261 (2018)
5. da Costa Sartori, T.A.I., Ferreira, J.A., Osiro, D., Colnago, L.A., de Jesus Agnolon Pallone, E.M.: Formation of different calcium phosphate phases on the surface of porous Al2O3-ZrO2 nanocomposites. J. Eur. Ceram. Soc. 38(2), 743–751 (2018)
6. Liu, H., Chen, L., Jiang, Y., Zhu, D., Zhou, Y., Wang, X.: Multiscale optimization of additively manufactured graded non-stochastic and stochastic lattice structures. Compos. Struct. 305, 116546 (2023)
Deep Learning Based Automatic Porosity Detection of Laser Powder Bed Fusion Additive Manufacturing Syed Ibn Mohsin1(B) , Behzad Farhang2 , Peng Wang1 , Yiran Yang2 Narges Shayesteh2 , and Fazleena Badurdeen1
,
1 University of Kentucky, Lexington, KY 40506, USA
[email protected] 2 University of Texas at Arlington, Arlington, TX 76010, USA
Abstract. Laser Powder Bed Fusion (LPBF) is a widely utilized additive manufacturing process. Despite its popularity, LPBF has been found to have limitations in terms of the reliability and repeatability of its parts. To address these limitations, a deep learning model based on You Only Look Once (YOLO) was adapted to automate the detection of defect areas from scanning electron microscopic images of LPBF-manufactured parts. The data on the defect areas are then integrated into an Artificial Neural Network to correlate the process parameters with defects. The results show that the development of defects is stochastic in nature with respect to the input process parameters. The high variability of defects generated from the same process parameters makes it difficult to reliably predict the quality of the parts using only a process data-driven approach. This highlights the importance of in-situ monitoring of the system for reliable prediction of part quality. Keywords: Additive Manufacturing · Machine Learning · Powder Bed Fusion
1 Introduction
Additive Manufacturing (AM) is a layer-by-layer deposition method of materials to build final geometries. Its advantages over traditional manufacturing include faster prototyping, on-demand and on-location production, and the ability to create intricate designs. Its versatility and efficiency make it popular in industries like automotive, aerospace, and healthcare [1]. The additive manufacturing process in which a laser is used to melt the powdered metal is called Laser Powder Bed Fusion (LPBF). Although LPBF has numerous advantages, it shares the usual problems associated with AM, such as the build-up of residual stress, dimensional inaccuracy in the finished product, surface roughness, and the staircase effect. Some defects are specific to each AM process. As LPBF is a fusion-based process, lack of fusion is one of its key defects. One of the major causes of lack of fusion is the variation in distance between two subsequent passes [2]. Although this mechanism of defect generation is systematic in nature, other factors result in a stochastic lack of fusion.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 328–335, 2024. https://doi.org/10.1007/978-3-031-38241-3_37
Due
to wetting and capillary forces in the molten layer induced by the stochastic powder bed, the layer built in this way may deviate from its intended geometry. Also, because of the varying amounts of powder present in a region under fixed laser energy, the powder may not melt properly, resulting in channel-like defects [3]. Another form of defect is gas porosity, where shielding gas is trapped under the powder to form a cavity [4]; it occurs due to a high powder flow rate. Gas porosity may also arise from a chemical reaction of the powder with the shielding gas or atmosphere under high-energy fusion of the metal. Another probable cause of gas entrapment is impurities in the powder feedstock, which can create gas during atomization that becomes entrapped within the printed component [5]. This kind of pore is spherical due to the shape of the cavity. The melt pool in LPBF dictates the formation of the component, and the development of pores is one of the defects resulting from instability in the melt pool. End of track, keyholing, and random power fluctuations can be attributed to this class of defects. All the different classes of pores are illustrated in Fig. 1.
[Figure 1 diagram: a classification tree rooted at “Porosity in LPBF”, with node labels Gas Porosity (Atomization Gas Porosity, Solubility Porosity), Melt Pool Instabilities (End of Track, Keyholing), Lack of Fusion and Random Power Fluctuation, and branches marked Systematic or Stochastic.]
Fig. 1. Classification of pores in LPBF based on Snow et al. [5].
The defects in LPBF need to be detected to ensure the quality of the parts. There are various nondestructive testing methods to detect pores, such as X-ray Computed Tomography and ultrasonic scanning [6], and there have been numerous works on post-process defect detection over the years. In recent years, different in-situ monitoring techniques have been employed in LPBF [7]. Process information gathered during the operation is known as a process signature, which can be either observable or derived. The most important signature, the melt pool, affects the quality and stability of parts in the fusion process. Other relevant signatures include the scan path, the slice, and the powder in the bed. The melt pool's size, shape, and temperature need to be monitored with sensors such as CMOS cameras, pyrometers, interferometric imaging, and more. An accelerometer senses vibration, and Optical Coherence Tomography detects surface and subsurface properties [8]. However, in-situ monitoring is not well adopted in industry, which results in defective parts and in hesitance over large-scale adoption of AM technology. Traditionally, many studies have relied on numerical modeling of the melt pool to predict the properties of the manufactured component [9]. Due to multiphysics interactions in different layers, the models developed in this way do not produce consistent results because of the uncertainty in the process [10]. Depending on the available data, using
Machine Learning (ML) models can give a major advantage in capturing the latent physical characteristics of the process based on the input parameters. This helps to determine the optimized input parameters to produce parts with minimum defects [11]. ML relies heavily on the amount and quality of data used in training the model, but for LPBF the challenge lies in acquiring high-quality labeled data, as it takes time, effort, and money to collect data from the printed parts. Gathering a large quantity of data for training ML models is therefore a big challenge. The manual process of hand labeling and calculating defect areas in images is time-consuming and prone to errors; automating it would allow faster and more accurate data collection. A further difficulty is the irregular size and shape of pores in LPBF images, which makes automating defect detection challenging, and a large dataset is needed to train a model to detect features and bound them in a box. This study investigates whether data collection and the relation of process parameters to defect generation can be automated. Images were collected from samples of the LPBF process and used to adapt a deep learning (DL) model to detect the defect areas in the images. The defect area is then correlated to the process parameters using an artificial neural network.
2 Experimental Setup and Data Collection
The experimental setup consists of an M290 SLM metal 3D printer (EOS GmbH, M290, Germany) with argon as inert gas to shield against oxidation of the powder particles. The powder was gas-atomized IN718; Energy Dispersive X-ray spectroscopy shows the composition to be Ni 50–55%, Cr 17–21%, Nb 4.75–5.4%, Mo 2.8–3.3%, Ti 0.65–1.15%, Al 0.2–0.8%, and the rest Fe by weight. The build plate was heated to 80 °C and the temperature was held for the entire duration of printing of the 5 × 5 × 6 mm³ specimen. The as-built sample was removed using a Techcut 5™ low-speed saw and sliced in both the parallel and perpendicular directions. Grinding was done with silicon carbide disks of grit size 320 to 1200. Afterward, polishing was done on a DiaMat polishing cloth embedded with 1 µm diamond suspension. The sample surface was mirror polished on a Red Final C polishing pad incorporating colloidal silica suspension (0.04 µm). The sample was then rinsed with micro-organic soap and IPA before drying with compressed air. As a final step, the sample was etched in Kalling's 2 reagent (cupric chloride, hydrochloric acid, and ethanol). After the preparation of the sample, a Hitachi S-3000N electron microscope was used to generate SEM images. The dataset for the study consists of 305 SEM images (1280 × 960 pixels) of LPBF samples taken from 14 different tests. The parameters used during the manufacturing of the components are given in Table 1. All the images obtained from the tests were hand labeled before using the YOLOv5 model.
3 Method for Automatic Pore Detection
YOLOv5 (You Only Look Once) is a fast object detection algorithm that passes the image through the network only once, framing detection as a regression problem in which the bounding boxes and their classes are predicted in a single evaluation [12]. This makes YOLO very fast compared to other object detection models. Therefore, YOLOv5 is investigated for automatic pore detection in this study.
Table 1. Sets of processing parameters used in the image analysis.

Sample No | Laser Power (W) | Scanning Speed (mm/s) | Hatch Space (µm) | Layer Thickness (µm) | Energy Density (J/mm³)
1  | 256.5 | 1056   | 99    | 40 | 61.3
2  | 313.5 | 1056   | 120   | 40 | 61.8
3  | 256.5 | 864    | 120   | 40 | 61.8
4  | 256.5 | 864    | 99    | 40 | 75.0
5  | 285   | 1121.4 | 109.5 | 40 | 58.0
6  | 332.9 | 960    | 109.5 | 40 | 79.2
7  | 285   | 960    | 127.1 | 40 | 58.4
8  | 313.5 | 864    | 120   | 40 | 75.6
9  | 256.5 | 1056   | 120   | 40 | 50.6
10 | 285   | 960    | 109.5 | 40 | 67.8
11 | 237   | 960    | 109.5 | 40 | 56.4
12 | 285   | 798.5  | 109.5 | 40 | 81.5
13 | 285   | 960    | 91.8  | 40 | 80.8
14 | 313.5 | 864    | 99    | 40 | 91.6
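The energy density column in Table 1 is consistent with the usual volumetric energy density relation E = P/(v·h·t); a quick check (the unit conversions are ours):

```python
def energy_density(power_w, speed_mm_s, hatch_um, layer_um):
    # E = P / (v * h * t), with hatch space and layer thickness converted to mm
    return power_w / (speed_mm_s * (hatch_um / 1000.0) * (layer_um / 1000.0))

print(round(energy_density(256.5, 1056, 99, 40), 1))   # sample 1  -> 61.3
print(round(energy_density(313.5, 864, 99, 40), 1))    # sample 14 -> 91.6
```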
Fig. 2. Architecture of YOLO.
For YOLOv5, each input image is converted to 640 × 640 dimensions and RGB values. The backbone of the YOLOv5 architecture (Fig. 2) comprises Cross Stage Partial Networks (CSPNet), containing five convolution layers, one Spatial Pyramid Pooling layer, and four CSP bottlenecks made up of multiple convolution and pooling layers. The backbone is used to extract feature maps. The head comprises a Path Aggregation Network that extracts deep-layer features and sends them to the 2D convolution layers for detection. YOLOv5s is the small version of the YOLOv5 series, requiring 7,022,326 parameters to be trained. The pretrained model is trained on the COCO dataset
comprising 118,287 images across 80 categories. The sample size of this study, however, is only 305 images, so the model cannot be used as-is to detect the pore area; instead, a transfer learning approach is adopted. In this approach, the detection layers, that is, the 2D convolution layers, are modified to detect only one category. In Fig. 2, the x in Conv2D stands for the number of categories to be detected; as only pores are to be detected, x is one. In the detection layers, grids of different dimensions, 20 × 20, 40 × 40 and 80 × 80, are used to detect features of different sizes, and due to this approach the model can detect very small features in the image. Three stands for the number of anchor boxes and five for the final parameters of interest: confidence prediction, centre coordinates, height and width of the bounding box, and category. As the dataset is small, the deep feature-extracting backbone and head, already trained on a very large dataset, were leveraged. This reduced the number of parameters to be trained significantly and reduced the chances of underfitting the model. The initial learning rate was lowered to 0.01 to avoid overfitting. Also, as there is only one category of object, the classification loss can be ignored in the loss function and the model can be optimized for only the bounding box loss and object loss. Once the model performs satisfactorily, some post-processing is required to gather the area information of the bounding boxes.
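The post-processing mentioned above can be as simple as summing normalized box areas from YOLO-format label lines; this is a sketch of one plausible reading, as the exact scripts used in the study are not given:

```python
def defect_area_percent(yolo_lines):
    """Total bounding-box area as a percentage of the image area.

    Each YOLO-format line is 'class x_center y_center width height',
    with all coordinates normalized to [0, 1].
    """
    total = 0.0
    for line in yolo_lines:
        _cls, _xc, _yc, w, h = map(float, line.split())
        total += w * h   # box area as a fraction of the image area
    return 100.0 * total

detections = ["0 0.50 0.50 0.10 0.20", "0 0.20 0.30 0.05 0.10"]
print(round(defect_area_percent(detections), 3))   # -> 2.5
```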
Fig. 3. Detection of defects from process parameter.
A multi-layer Artificial Neural Network (ANN) was trained using the pore area calculations. The model has one input layer (3 neurons for laser power, scanning speed, and hatch space), two hidden layers (12 and 6 neurons with sigmoid activation), and one output layer (1 neuron predicting the defect area percentage). The weights were updated during training with a gradient-based optimization algorithm (learning rate 0.01, 10,000 iterations). The hidden layers apply a sigmoid activation to the input received from the previous layer, performing a non-linear transformation that models the complex multiphysics relationship. The entire modeling approach is illustrated in Fig. 3.
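A forward pass of the described 3-12-6-1 network can be sketched in NumPy (the weights here are random stand-ins and the normalization is our choice; the study trained the weights with a gradient-based optimizer at learning rate 0.01):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the text: 3 inputs (power, speed, hatch space) -> 12 -> 6 -> 1
W1, b1 = rng.normal(scale=0.1, size=(3, 12)), np.zeros(12)
W2, b2 = rng.normal(scale=0.1, size=(12, 6)), np.zeros(6)
W3, b3 = rng.normal(scale=0.1, size=(6, 1)), np.zeros(1)

def forward(x):
    h1 = sigmoid(x @ W1 + b1)   # hidden layer 1, sigmoid activation
    h2 = sigmoid(h1 @ W2 + b2)  # hidden layer 2, sigmoid activation
    return h2 @ W3 + b3         # linear output: predicted defect area %

# Inputs scaled to [0, 1] before the forward pass (max-normalization, our choice)
x = np.array([285.0, 960.0, 109.5]) / np.array([332.9, 1121.4, 127.1])
print(forward(x).shape)         # -> (1,)
```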
4 Discussion and Results The final result of the object detection model after transfer learning is shown in Fig. 4. Precision and recall are criteria used to evaluate the model: precision is true positive/(true positive + false positive), recall is true positive/(true positive + false negative). True positive is when the model detects the porosity correctly, false positive is when the model
detects the porosity incorrectly, and a false negative is when the model fails to detect the porosity. High precision means few false positives; high recall means few false negatives. It can be seen in Fig. 4 that the precision and recall tend to exceed 0.8, which means the model detects the objects correctly more than 80% of the time. The mAP, or mean average precision, measures the overlap between the predicted bounding boxes and the ground truth by evaluating them at different intersection-over-union (IoU) thresholds; it assesses the model's performance in detecting objects at higher IoU. The values of mAP_0.5 were found to be greater than 0.85 and mAP_0.5:0.95 greater than 0.4. The object loss and bounding box loss indicate the model's performance during training, with lower loss being better. The DL model successfully detects pores in the images (Fig. 5) but may ignore very small ones due to limitations in feature extraction. The smaller pores were also ignored in labeling, as the images are already magnified ×800. This does not affect the modeling performance.
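The two criteria reduce to simple counts; for instance, taking a hypothetical batch of 10 ground-truth pores where the model makes 10 detections, two of them spurious:

```python
def precision(tp, fp):
    # fraction of the model's detections that are real pores
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of the real pores that the model detected
    return tp / (tp + fn)

tp, fp, fn = 8, 2, 2   # 8 correct detections, 2 false alarms, 2 missed pores
print(precision(tp, fp))   # -> 0.8
print(recall(tp, fn))      # -> 0.8
```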
Fig. 4. Training results of the dataset on YOLOv5, x-axis represents the number of epochs and y-axis represents respective parameters mentioned in the legend.
Fig. 5. Pores detecting on the LPBF sample using YOLOv5.
After the detection of the pores, the area of each bounding box was calculated from its length and height. Once the DL model is trained, it does not need to be retrained and can be employed on other samples: it can be used to detect the pores of LPBF samples from the images rather than doing so manually, as illustrated in Fig. 5. This information can be fed to simple ANN models to predict process defects based on the process parameters. The R-squared value for the ANN was found to be 0.25 (approx.) on average and the root mean square error 1.17 (approx.) on average. Fig. 6(a) shows the relation between the predicted and the actual defect area percentage. It can be observed that the model has very poor performance in predicting the defect area from the input parameters. This can be explained by Fig. 6(b), where the x-axis represents the set of processing parameters shown in Table 1 and the y-axis represents the percentage of defect area. The stochastic nature of the defects generated in the LPBF process is demonstrated by the significant variation in the defect area, despite identical printing conditions being used for all experiments with the same process parameters, as shown in Fig. 6(b). The poor performance of the ANN is the result of the lack of correlation between the process parameters and the defects. This result leads to a better understanding of the significance of in-situ monitoring of the process. As the experiments were done with standard process parameters, the result indicates that, within the process conditions used, the defects are too uncertain to predict. Since process defect characterization cannot be done based on offline data, the implementation of in-situ monitoring is necessary.
Fig. 6. (a) Performance of the Multilayer Perceptron comparing the values of defect area percentage. (b) Relation between Set of Processing Parameters from Table 1 and defect area.
This study helps to better understand the necessity of in-situ monitoring and serves as a preliminary step towards setting up such a system. The model developed can be easily translated to an in-situ monitoring system. One of the reasons for adopting the YOLOv5 model is its fast video processing at approximately 52.8 FPS, which allows data to be processed at a much higher rate than with other object detection models. The model developed in this study can be modified and applied to in-situ data to detect defects.
Deep Learning Based Automatic Porosity Detection
335
5 Conclusion

In this study, a DL model-based approach to detect and calculate the defect area from images of LPBF samples was developed. This method can help to process image data of additively manufactured parts more efficiently and eliminates the need to calculate defect area data from the images manually. The defect area data obtained from the object detection algorithm was used in an ANN to correlate the process parameters with the defect area. The results showed high variability for the same input process parameters, which indicates the stochasticity of defect generation. This illustrates the importance of in-situ monitoring in LPBF additive manufacturing for accurate defect detection. This study will serve as a preliminary step for the future development of an in-situ monitoring setup, where the model developed can be translated to optical or infrared images from in-situ data to detect defects. The limitation of the model lies in its reliance on acquiring high-quality optical process-sensing data in which the defects are visible.
References

1. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional manufacturing. Bus. Horiz. 60(5), 677–688 (2017)
2. Brennan, M.C., Keist, J.S., Palmer, T.A.: Defects in metal additive manufacturing processes. J. Mater. Eng. Perform. 30(7), 4808–4818 (2021). https://doi.org/10.1007/s11665-021-05919-6
3. Bauereiß, A., Scharowsky, T., Körner, C.: Defect generation and propagation mechanism during additive manufacturing by selective beam melting. J. Mater. Process. Technol. 214(11), 2522–2528 (2014)
4. Everton, S.K., Hirsch, M., Stravroulakis, P., Leach, R.K., Clare, A.T.: Review of in-situ process monitoring and in-situ metrology for metal additive manufacturing. Mater. Des. 95, 431–445 (2016)
5. Snow, Z., Nassar, A.R., Reutzel, E.W.: Review of the formation and impact of flaws in powder bed fusion additive manufacturing. Addit. Manuf. 36, 101457 (2020)
6. Khanzadeh, M., Chowdhury, S., Tschopp, M.A., Doude, H.R., Marufuzzaman, M., Bian, L.: In-situ monitoring of melt pool images for porosity prediction in directed energy deposition processes. IISE Trans. 51(5), 437–455 (2019)
7. Wang, P., Yang, Y., Moghaddam, N.S.: Process modeling in laser powder bed fusion towards defect detection and quality control via machine learning: the state-of-the-art and research challenges. J. Manuf. Process. 73, 961–984 (2022)
8. Grasso, M., Colosimo, B.M.: Process defects and in situ monitoring methods in metal powder bed fusion: a review. Meas. Sci. Technol. 28(4), 044005 (2017)
9. Picasso, M., Hoadley, A.F.A.: Finite element simulation of laser surface treatments including convection in the melt pool. Int. J. Numer. Meth. Heat Fluid Flow 4(1), 61–83 (1994)
10. Tang, L., Landers, R.G.: Melt pool temperature control for laser metal deposition processes – Part I. Online temperature control. J. Manuf. Sci. Eng. 132(1) (2010)
11. Meng, L., et al.: Machine learning in additive manufacturing: a review. JOM 72(6), 2363–2377 (2020). https://doi.org/10.1007/s11837-020-04155-y
12. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: IEEE Conference on Computer Vision and Pattern Recognition (2016)
Influence of Process Parameters on Compression Properties of 3D Printed Polyether-Ether-Ketone by Fused Filament Fabrication

Erika Lannunziata(B), Alberto Giubilini, Abdollah Saboori, and Paolo Minetola
Integrated Additive Manufacturing Centre (IAM@PoliTO), Department of Management and Production Engineering (DIGEP), Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy [email protected]
Abstract. Metal replacement is an effective approach for the sustainable manufacturing of polymer products in various sectors, with the key advantage of reducing component weight. Technopolymers are a class of materials with enhanced properties compared to traditional plastics, i.e., thermal and chemical stability as well as mechanical resistance, thus representing a more efficient alternative to metal parts. Nowadays, Additive Manufacturing is a game-changing production technology due to its high flexibility, geometrical accuracy, reduced time and costs, and minimal waste. Therefore, the application of technopolymers in Additive Manufacturing is an attractive research topic. Polyether-ether-ketone (PEEK) is a semi-crystalline technopolymer; however, its thermal susceptibility during the cooling step of the process remains the dominant cause of dimensional warping and job failure. The temperature difference between the nozzle and the bed/chamber should be optimised to reduce the thermal gradient. Previous researchers investigated the nozzle and bed temperature effects in depth; however, the influence of chamber temperature on dimensional accuracy and compression properties is still missing in the literature, particularly for samples printed with an infill lower than 100%. Therefore, this study aims to fill these gaps and deepen the knowledge about PEEK printing via Fused Filament Fabrication by evaluating the effects of chamber temperature and infill percentage on compression properties, printing accuracy, and energy consumption. The specific compression properties highlighted that the highest values were reached for not fully dense samples. Furthermore, the heated chamber did not affect the dimensional accuracy and compressive properties strongly enough to justify an energy consumption increment of 45%. Keywords: Fused Filament Fabrication (FFF) · PEEK · Additive Manufacturing · Compression properties · X-ray Computed Tomography
1 Introduction The metal-to-plastic conversion, or metal replacement, has recently attracted the interest of the industrial world from the automotive sector to the biomedical one for a more sustainable manufacturing transition [1]. Moreover, plastics have been considered an © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 336–344, 2024. https://doi.org/10.1007/978-3-031-38241-3_38
Influence of Process Parameters on Compression Properties
337
interesting alternative to metals for saving weight, cost, and processing time. Since its introduction in the late 1980s, poly-ether-ether-ketone (PEEK) has attracted much attention from industry [2]. PEEK is a high-performance thermoplastic polymer consisting of an aromatic backbone molecular chain, which imparts a special resistance to high temperatures and chemical attack, interconnected by ketone and ether functional groups that confer flexibility and manufacturability [3]. Consequently, the physical and mechanical qualities of PEEK make it a potential option for substituting metal in a wide range of high-performance end-use applications [4]. With its high Young's modulus, tensile strength, and low specific gravity, PEEK can replace aluminium or steel in various applications, including aerospace and automotive [5]. Furthermore, PEEK is biocompatible, easily sterilisable, and radio-transparent, and it possesses a Young's modulus closer to that of bone than metals do. For all these properties, it is a promising material for biomedical applications, particularly as an appropriate alternative to precious metal implants [6]. The fundamental drawback of conventional manufacturing processes is their lack of flexibility in producing components with elaborate designs. In this regard, the advancement of Additive Manufacturing (AM) techniques nowadays makes it possible to manufacture PEEK components with design freedom in an affordable way. Among AM technologies, Fused Filament Fabrication (FFF) is the most commonly used for fabricating thermoplastic parts based on CAD designs. In this process, a polymeric filament is continuously driven through a heated print head, where it is softened and extruded through a nozzle to build the part layer by layer on a heated platform. This additive technique offers low manufacturing cost, freedom in design, and supervision-free operation. Recently, great effort has been put into PEEK production by FFF.
Previous investigations on the 3D printing of PEEK showed promising outcomes [7], although its thermal susceptibility during manufacturing remains the most significant obstacle. PEEK's semicrystalline nature consists of well-ordered crystalline zones mixed with amorphous regions, which exhibit lower density, tensile strength, Young's modulus, and stress-cracking resistance than the crystalline domains. The percentage of crystallinity strongly depends on the thermal gradient that occurs during the crystallisation phase while cooling from the melt state [8]. Thus, the temperature difference between the nozzle and the bed/chamber during 3D printing should be optimised to improve the mechanical properties and dimensional accuracy of PEEK parts. As reported in a previous study [7], the nozzle temperature is a crucial parameter for dimensional accuracy, the crystallisation process, and the interfacial strength between layers. The optimum range for the nozzle temperature was identified as 420 to 440 °C; it is well documented that a further increase could result in thermal degradation of PEEK [7, 8]. The melted polymer from the nozzle is deposited on the bed, whose temperature promotes diffusion bonding among adjacent filaments and affects the surface topography of PEEK parts [9]. In order to reduce the thermal gradient between the nozzle and the bed, the chamber temperature should be higher than the ambient temperature. Wang et al. [10] demonstrated that a high chamber temperature positively affected PEEK crystallinity, resulting in better mechanical properties. These results, however, referred to tensile tests, and a corresponding reference for compression properties is still missing in the literature. The temperature dependence of PEEK in 3D printing has been studied only for a part
338
E. Lannunziata et al.
infill of 100%; lower infill percentages were not included in the thermal investigations [11, 12]. Therefore, this work aims to fill these gaps and contribute to deepening the knowledge about PEEK printing via FFF by evaluating the effects of chamber temperature and infill percentage. Furthermore, dimensional accuracy, printing time, and energy consumption were considered to provide a comprehensive evaluation that can help broaden the diffusion of metal replacement with PEEK by taking advantage of innovative manufacturing technologies such as Fused Filament Fabrication.
2 Materials and Method

The PEEK filament used in this study is PEEK K10, produced by Kexcelled (North Bridge New Material Technology, Suzhou, China). According to the material manufacturer's data sheet, the filament has a diameter of 1.75 ± 0.02 mm and a density of 1.28 g/cm3. Before processing, the spool was oven-dried at 80 °C overnight to remove environmental moisture. All samples were manufactured on a CreatBot PEEK-300 3D printer (CreatBot, Zhengzhou, China), which has a build volume of 300 × 300 × 400 mm3 and a chamber heatable up to 150 °C. Furthermore, this machine has a double nozzle that allows the processing of both traditional polymers and technopolymers, which require high processing temperatures of up to 500 °C. The 3D printed samples were cylinders with a diameter of 10 mm built from the platform plate (XY plane) and a height of 10 mm (Z-axis), with reference to the Type B specimen of the ISO 604 guideline [13]. The cylindrical samples were designed in SolidWorks software and then imported into Simplify3D software for the slicing procedure. Four infill percentages were investigated (30%, 50%, 70%, and 100%), and for each value six samples were produced with the printing parameters summarised in Table 1.

Table 1. Process parameters used for the fabrication of all specimens.

Parameter                  Value
Nozzle temperature [°C]    420
Bed temperature [°C]       140
Printing speed [mm/s]      25
Layer height [mm]          0.2
Infill pattern             Rectilinear
Infill angle [°]           ±45
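As a quick plausibility check (our own arithmetic, not reported in the paper), the mass of a fully dense Ø10 × 10 mm cylinder follows from the quoted filament density; the result is consistent with the roughly 1 g later measured for the 100% infill samples.

```python
import math

# Nominal mass of a fully dense Ø10 x 10 mm PEEK cylinder.
density_g_cm3 = 1.28           # filament density from the data sheet
radius_cm, height_cm = 0.5, 1.0

volume_cm3 = math.pi * radius_cm ** 2 * height_cm   # ~0.785 cm3
mass_g = density_g_cm3 * volume_cm3
print(round(mass_g, 3))  # 1.005
```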
Under the same printing conditions, two different sets of samples were produced with the chamber heated to 140 °C and without heating. Henceforth, the first set of samples will be referred to as “HC” and the second set as “CC” with reference to the hot chamber or cold chamber, respectively. The dimensional accuracy was analysed by comparing the nominal CAD geometry with the tomography scans acquired with a
micro-CT scan model Phoenix v|tome|x S240 (GE Baker Hughes-Waygate Technologies, Wunstorf, Germany). The compression tests were carried out on an AURA 5T machine (Easydur, Arcisate, Italy) according to the ISO 604 standard [13] at a test speed of 5 mm/min, up to a strain of 70% of the specimen, to evaluate the compressive strength. Five samples were 3D printed and tested for each infill percentage and chamber temperature condition. MATLAB software was used to calculate the compression modulus of each sample by evaluating the initial slope of the compressive curve in the elastic region.
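The slope-fitting step can be sketched as follows. This is an illustrative NumPy version (the authors used MATLAB), and the 2% strain cutoff for the elastic region is our assumption.

```python
import numpy as np

def compressive_modulus(strain, stress, elastic_limit=0.02):
    """Least-squares slope of stress vs strain for strain <= elastic_limit."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = strain <= elastic_limit
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Synthetic curve: linear at E = 1000 MPa up to 2% strain, then a plateau.
strain = np.linspace(0.0, 0.10, 101)
stress = np.where(strain <= 0.02, 1000.0 * strain, 20.0)
print(round(compressive_modulus(strain, stress), 1))  # 1000.0
```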
3 Results

Dimensional Accuracy. Dimensional characterisation of all 3D printed specimens was preliminarily conducted with a micrometre, and the weight was measured with a Gibertini 1000HR-CM balance. The results are listed in Table 2, along with the printing time and the energy consumption (EC) of specimen production, which was measured using a Meterk M34EU power meter plug for each infill percentage and thermal condition (TC) of the printing chamber. The analysis of these results confirmed that weight, printing time, and energy consumption all increase with the infill percentage. There is no significant effect of chamber temperature on printing time, whereas its influence on the energy consumption required to print each sample is evident: consumption increases by up to 63% for the samples with 30% infill. Dimensional accuracy was assessed by tomography analysis, whose scans are illustrated and compared to the nominal CAD model in Fig. 1. The results showed that extra material extrusion occurred with increasing infill percentage, reaching over-extrusion in the case of 100% infill. This phenomenon can be related to an unsuitable extrusion speed, which causes the spreading of extra material from the nozzle, resulting in oversized printed samples [3].

Table 2. Dimensional evaluation, weight, printing time, and energy consumption (EC) of all 3D printed samples in different thermal conditions (TC).

TC   Infill [%]   Diameter [mm]   Height [mm]    Weight [g]   Printing time [min per sample]   EC [kWh per sample]
CC   30           9.70 ± 0.08     9.90 ± 0.04    0.58         7                                0.16
CC   50           9.84 ± 0.08     9.86 ± 0.07    0.72         8                                0.17
CC   70           9.76 ± 0.07     9.82 ± 0.04    0.85         9                                0.20
CC   100          10.16 ± 0.03    10.13 ± 0.03   1.01         11                               0.22
HC   30           9.72 ± 0.07     9.81 ± 0.00    0.58         7                                0.26
HC   50           9.77 ± 0.01     9.64 ± 0.01    0.72         8                                0.27
HC   70           9.71 ± 0.07     9.61 ± 0.01    0.85         9                                0.29
HC   100          10.17 ± 0.05    10.02 ± 0.01   1.02         11                               0.31

From the tomography results, and specifically evaluating the cumulative distribution at 90% of the deviation between the nominal surface and the actual one (Table 3), it is possible to observe that the chamber heating (HC) increased the deviation. This effect could be explained by an increment of crystallinity due to slower cooling in the hot chamber, thus suggesting that a suitable scaling factor be applied during the definition of the part design. Therefore, the 50% and 70% infill samples printed without a heated chamber (CC) were more accurate, while also saving time and energy at the same printing speed.
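The chamber-heating energy penalty can be checked directly from the Table 2 values (the per-sample EC figures come from the table; the arithmetic is ours):

```python
# Per-sample energy consumption [kWh] from Table 2.
ec_cc = {30: 0.16, 50: 0.17, 70: 0.20, 100: 0.22}   # cold chamber
ec_hc = {30: 0.26, 50: 0.27, 70: 0.29, 100: 0.31}   # heated chamber

for infill in (30, 50, 70, 100):
    increase = 100.0 * (ec_hc[infill] / ec_cc[infill] - 1.0)
    print(f"{infill:>3}% infill: +{increase:.1f}% energy")
# The 30% infill row gives +62.5%, i.e. the ~63% increase quoted in the text.
```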
Fig. 1. Comparison of CAD model, printing paths in the slicing software, and tomographic scans with the chromatic representation of deviation between actual and nominal dimensions of the different samples.
Table 3. 90% of the cumulative deviation distribution between the nominal surface and the actual one of the samples at varying infill percentage and chamber condition (TC).

Infill [%]   CC        HC
30           0.16 mm   0.15 mm
50           0.12 mm   0.16 mm
70           0.13 mm   0.19 mm
100          0.17 mm   0.18 mm
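The Table 3 metric is the deviation value below which 90% of the surface points fall, i.e. the 90th percentile of the deviation distribution. A sketch with synthetic data (the real deviations come from the micro-CT comparison; the distribution below is purely hypothetical):

```python
import numpy as np

# Hypothetical |deviation| values in mm, standing in for per-point
# surface deviations from a CT-to-CAD comparison.
rng = np.random.default_rng(0)
deviations = np.abs(rng.normal(0.0, 0.1, 10_000))

p90 = float(np.percentile(deviations, 90))
print(round(p90, 3))
```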
Compressive Properties. The compressive performance of the 3D printed samples was characterised by compression tests; the main findings are shown in Fig. 2 for the lowest and highest infill percentages and both chamber conditions (TC). Table 4 summarises the compressive strengths and moduli, together with their specific values, i.e., normalised by the sample mass. All the compressive stress-strain curves (Fig. 2) displayed three well-defined regions: (I) a linear elastic zone, (II) a plateau region, and (III) a densification region. After the initial elastic deformation (I), during the second phase (II) the internal structure of the samples deformed with almost no increase in density. It can be observed from Fig. 2 that the length of the plateau decreases as the infill percentage increases, because a higher amount of extruded material leaves less room for internal rearrangement of the structure. In the third stage (III), stress increased rapidly without a significant increase in deformation, since the samples began to compress as bulk material, resulting in a rapid increase in the sample response. These experimental results are consistent with previous research in the literature [14], and, as expected, the compressive performance increased with the infill percentage. The compressive strength evaluated at 70% strain increased from 91.3 ± 10.1 MPa for 30% infill to 306.1 ± 17.4 MPa for 100% infill. A similar trend was observed for the compressive modulus, ranging from 651.5 ± 16.1 to 1197.3 ± 49.9 MPa.
Fig. 2. Compressive behaviour of the lowest infill percentage (30%) and the highest one (100%).
For a thorough interpretation of these results, it is useful to correlate them with their specific values, normalised by the sample mass. Figure 3 highlights the differences between the absolute values of the compressive properties and the specific
ones for different infill percentages. The 30% infill corresponded to the lowest amount of material subjected to deformation, leading to the weakest response under load, as shown by the sample crushing in Fig. 2(c). Considering the specific compressive properties, it is worth noting that up to 70% infill there is always an improvement in the compressive behaviour. Beyond this value, the additionally deposited material gives a less significant contribution, and therefore the 100% infill samples display a lower specific compressive modulus than the 70% infill ones. These results are consistent with previous findings [15]. The chamber temperature also slightly improved the compressive properties, by about 10% for each infill percentage. This effect of chamber temperature on mechanical properties appears to contrast with what has been previously reported in the literature [11, 12]. One possible explanation could be the small dimensions of the 3D printed geometries, which are significantly affected by the temperature of the heated plate and of the nozzle, maintained above 400 °C [16].
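The "specific" columns of Table 4 can be reproduced, approximately, by dividing each measured property by the corresponding sample mass from Table 2 (this arithmetic is ours; the paper normalises sample by sample, so only the 30% row matches exactly):

```python
# Cold-chamber compressive strengths [MPa] and sample masses [g].
strength_cc = {30: 91.3, 50: 151.6, 70: 252.8, 100: 306.1}
mass_cc = {30: 0.58, 50: 0.72, 70: 0.85, 100: 1.01}

specific = {k: strength_cc[k] / mass_cc[k] for k in strength_cc}
print({k: round(v, 1) for k, v in specific.items()})
# e.g. 91.3 / 0.58 = 157.4 MPa/g for 30% infill, matching Table 4;
# the other rows agree only approximately.
```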
Fig. 3. (a) Compressive modulus and its specific values; (b) compressive strength and its specific values, for each infill percentage and thermal condition.
Table 4. Compressive performance of the specimens for four different infill percentages (30%, 50%, 70%, 100%) and two chamber conditions (TC).

TC   Infill [%]   Compressive strength [MPa]   Compressive modulus [MPa]   Specific compressive strength [MPa/g]   Specific compressive modulus [MPa/g]
CC   30           91.3 ± 10.1                  651.5 ± 16.1                157.4 ± 18.3                            1116.5 ± 26.6
CC   50           151.6 ± 7.2                  859.6 ± 27.5                211.2 ± 9.3                             1197.3 ± 33.5
CC   70           252.8 ± 6.0                  1055.3 ± 31.9               298.8 ± 8.2                             1246.8 ± 39.9
CC   100          306.1 ± 17.4                 1197.3 ± 49.9               304.6 ± 35.9                            1192.0 ± 55.9
HC   30           92.1 ± 2.8                   695.0 ± 12.1                157.9 ± 4.6                             1197.9 ± 27.2
HC   50           180.8 ± 1.5                  872.7 ± 31.4                251.7 ± 1.8                             1214.9 ± 47.7
HC   70           280.5 ± 7.4                  1073.3 ± 18.0               331.3 ± 7.7                             1269.0 ± 26.0
HC   100          310.2 ± 3.4                  1250.7 ± 5.1                304.7 ± 4.1                             1226.5 ± 3.8
4 Conclusions

This work examined the impact of infill percentage and chamber heating on the dimensional accuracy, printing time, and energy consumption required for 3D printing PEEK compression samples by FFF. Additionally, the compressive properties were investigated for different infill percentages with and without a heated 3D printing chamber. Based on the outcomes, it can be concluded that the infill percentage affected the dimensional accuracy, reaching over-extrusion for the 100% infill samples, probably due to an unsuitable extrusion speed. Therefore, the extrusion speed might be increased for high infill percentages to reduce the dimensional deviations from the nominal CAD model. The 50% and 70% infill percentages showed the lowest cumulative deviation of the specimen surface from the nominal dimensions at the printing speed used in this study. Also in terms of compression performance, the highest specific properties were reached for the samples with 70% infill, achieving time and material savings compared to fully dense cylinders. Furthermore, the heated chamber slightly improved the compressive properties, by up to a maximum of 10%. However, considering the average increase in energy consumption of about 45% caused by chamber heating, we can conclude that the compressive strength improvement did not justify the consumption increment for the samples designed in this study. The findings of this preliminary study are promising and pave the way for further research, which should investigate the effect of the heated chamber on different sample geometries, e.g., with larger specimen dimensions or at higher chamber temperatures.
References

1. Carvalho, M.: EU energy and climate change strategy. Energy 40(1), 19–22 (2012)
2. Verma, S., Verma, S., Sharma, N., Kango, S., Sharma, S.: Developments of PEEK (polyetheretherketone) as a biomedical material: a focused review. Eur. Polym. J. 147(15), 110295 (2021)
3. Wang, Y., Müller, W.D., Rumjahn, A., Schwitalla, A.: Parameters influencing the outcome of additive manufacturing of tiny medical devices based on PEEK. Materials 13(2), 466 (2020)
4. Yang, C., Tian, X., Li, D., Cao, Y., Zhao, F., Shi, C.: Influence of thermal processing conditions in 3D printing on the crystallinity and mechanical properties of PEEK material. J. Mater. Process. Technol. 248, 1–7 (2017)
5. Shekar, R.I., Kotresh, T.M., Rao, P.D., Kumar, K.: Properties of high modulus PEEK yarns for aerospace applications. J. Appl. Polym. Sci. 112(4), 2497–2510 (2009)
6. Han, X., et al.: Carbon fibre reinforced PEEK composites based on 3D-printing technology for orthopaedic and dental applications. J. Clin. Med. 8(2), 240 (2019)
7. Zanjanijam, A.R., Major, I., Lyons, J.G., Lafont, U., Devine, D.M.: Fused filament fabrication of PEEK: a review of process-structure-property relationships. Polymers 12(8), 1665 (2020)
8. Jin, L., Ball, J., Bremner, T., Sue, H.J.: Crystallization behaviour and morphological characterisation of poly(ether ether ketone). Polymer 55(20), 5255–5265 (2014)
9. Wang, P., Zou, B., Ding, S.: Modeling of surface roughness based on heat transfer considering diffusion among deposition filaments for FDM 3D printing heat-resistant resin. Appl. Therm. Eng. 161, 114064 (2019)
10. Wang, R., Cheng, K.J., Advincula, R.C., Chen, Q.: On the thermal processing and mechanical properties of 3D-printed polyether ether ketone. MRS Commun. 9, 1046–1052 (2019)
11. Wu, W.Z., Geng, P., Zhao, J., Zhang, Y., Rosen, D.W., Zhang, H.B.: Manufacture and thermal deformation analysis of semicrystalline polymer polyether ether ketone by 3D printing. Mater. Res. Innov. 18, S5-12 (2014)
12. Hu, B., et al.: Improved design of fused deposition modelling equipment for 3D printing of high-performance PEEK parts. Mech. Mater. 137, 103139 (2019)
13. EN ISO 604: Plastics: Determination of Compressive Properties (2002)
14. Ait-Mansour, I., Kretzschmar, N., Chekurov, S., Salmi, M., Rech, J.: Design-dependent shrinkage compensation modelling and mechanical property targeting of metal FFF. Prog. Addit. Manuf. 5, 51–57 (2020)
15. Gao, R., Xie, J., Yang, J., Zhuo, C., Fu, J., Zhao, P.: Research on the fused deposition modelling of polyether ether ketone. Polymers 13(14), 2344 (2021)
16. Basgul, C., Thieringer, F.M., Kurtz, S.M.: Heat transfer-based non-isothermal healing model for the interfacial bonding strength of fused filament fabricated polyetheretherketone. Addit. Manuf. 46, 102097 (2021)
3D Printing of Polycaprolactone Scaffolds with Heterogenous Pore Size for Living Tissue Regeneration

N. Manou, George-Christopher Vosniakos(B), and P. Kostazos
Manufacturing Technology Laboratory, School of Mechanical Engineering, National Technical University of Athens, Heroon Polytehniou 9, 15773 Athens, Greece [email protected]
Abstract. The pore size of scaffolds plays a significant role in tissue regeneration, both in vivo and in vitro. Extrusion-based 3D printing methods allow the manufacture of scaffolds with unique pore architectures. As far as homogeneous pores are concerned, small ones are required for increased mechanical strength, but they have to be large enough for efficient regeneration. This study preliminarily investigates the impact of a heterogeneous pore distribution in the scaffold on its mechanical and biological behavior compared to homogeneous pores. Scaffolds with 0°/90° architecture, comprising heterogeneous and homogeneous 300–500 μm pore distributions, were 3D printed and then tested in tension and compression. Thereafter, their seeding efficiency was evaluated using a 3D culture of the HepG2 cell line. Results confirmed that the presence of two different pore sizes in the scaffold results in an advantageous combination of mechanical properties and seeding efficiency compared to homogeneous pore sizes. Keywords: 3D Printing · PCL · Tissue regeneration · Scaffold · Pore
1 Introduction

In the context of tissue engineering, many studies over the past years have examined the influence of pore characteristics on the mechanical and biological behavior of scaffolds used for tissue regeneration [1, 2]. However, it has not yet been clarified whether heterogeneous pore sizes lead to higher regeneration efficiency alongside an intermediate mechanical strength interpolating that of homogeneous pores [3]. Extrusion-based Additive Manufacturing (AM) techniques have evolved into powerful tools for realizing heterogeneous porous scaffolds, thanks to their computer-controlled architecture, and for investigating their mechanical and biological performance [4]. Biodegradable polymer scaffolds have been widely used, fabricated by extrusion-based AM [5] and subsequently evaluated for 3D cell culture [6]. Among biopolymers, polycaprolactone (PCL) constitutes an excellent material due to its biocompatibility [7], slow biodegradation, mechanical strength, and excellent rheological properties when heated [8, 9]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 F. J. G. Silva et al. (Eds.): FAIM 2023, LNME, pp. 345–353, 2024. https://doi.org/10.1007/978-3-031-38241-3_39
346
N. Manou et al.
This study explores the influence of pore size on the mechanical properties and seeding efficiency of extrusion based 3D printed PCL scaffolds with the same architecture but different pore sizes. By contrast to what has been reported so far, e.g. [10], emphasis is laid on comparison of homogenous and heterogenous pore cases. Their fabrication is covered in Sect. 2 and their mechanical and biological tests in Sects. 3 and 4. Conclusions are summarized in Sect. 5.
2 Scaffold Design and Fabrication

Three types of scaffolds were designed with the same architectural structure, in order to limit the influence of secondary factors, but with different pore sizes. The first involves homogeneous pores with 300 μm openings (Scaffold 1), the second involves homogeneous pores with 500 μm openings (Scaffold 2), and the third combines the two aforementioned pore sizes (Scaffold 3). Design parameter terminology as used in the literature is as follows: Road Width (RW), Filament Diameter (FD), Filament Gap (FG), Layer Gap (LG), Slice Thickness (ST) [8–10]. The scaffolds were characterized by a 0°/90° layup pattern, 100% interconnected square pores, and the same RW, LG, and ST parameters (Table 1). The scaffold dimensions were selected so that the scaffolds fit in culture plates (Costar 3513). Scaffolds were designed in SolidWorks™ software and .STL files were exported, see Fig. 1.

Table 1. Design parameters of scaffolds at a fixed lay-down pattern of 0°/90°.

Scaffolds   RW (μm)   FG (μm)   ST (μm)   LG (μm)   FD (μm)    Dimensions (mm)        STL triangles
1           600       300       600       600       900        11.40 × 11.10 × 7.80   25376
2           600       500       600       600       1100       11.60 × 11.60 × 7.80   21472
3           600       300/500   600       600       900/1100   11.60 × 11.60 × 7.80   34380
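For a 0°/90° scaffold with square pores, a rough first-order estimate of the open (pore) fraction of each layer follows from the road width and filament gap. This approximation is ours, not from the paper, and ignores filament rounding and inter-layer effects:

```python
def open_fraction(rw_um, fg_um):
    """In-plane open fraction of one layer: pore width over pore pitch."""
    return fg_um / (rw_um + fg_um)

# RW = 600 um for all scaffolds; FG is 300 um (Scaffold 1) or 500 um (Scaffold 2).
for scaffold, fg in ((1, 300), (2, 500)):
    print(f"Scaffold {scaffold}: {100 * open_fraction(600, fg):.1f}% open")
# Scaffold 1: 33.3% open; Scaffold 2: 45.5% open
```

Scaffold 3 alternates the two gaps, so its open fraction lies between these two bounds.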
Ten scaffolds of each type were fabricated on an Anycubic i3 Mega 3D printer using Cura™ slicing software. The polycaprolactone (PCL) filament diameter was 1.75 mm (eSUN, China). According to the manufacturer, its melting point is around 60 °C, and it is non-toxic, biodegradable, and FDA approved. It has a density of 1.16 g/cm3, a tensile strength of 18 MPa, and an elastic modulus of 345 MPa, whilst its strain at fracture is 800%. Printer nozzle/table temperatures are recommended at 70–100 °C/