IFIP AICT 690
Erlend Alfnes Anita Romsdal Jan Ola Strandhagen Gregor von Cieminski David Romero (Eds.)
Advances in Production Management Systems Production Management Systems for Responsible Manufacturing, Service, and Logistics Futures
IFIP WG 5.7 International Conference, APMS 2023 Trondheim, Norway, September 17–21, 2023 Proceedings, Part II
IFIP Advances in Information and Communication Technology
690
Editor-in-Chief

Kai Rannenberg, Goethe University Frankfurt, Germany
Editorial Board Members

TC 1 – Foundations of Computer Science
Luís Soares Barbosa, University of Minho, Braga, Portugal

TC 2 – Software: Theory and Practice
Michael Goedicke, University of Duisburg-Essen, Germany

TC 3 – Education
Arthur Tatnall, Victoria University, Melbourne, Australia

TC 5 – Information Technology Applications
Erich J. Neuhold, University of Vienna, Austria

TC 6 – Communication Systems
Burkhard Stiller, University of Zurich, Zürich, Switzerland

TC 7 – System Modeling and Optimization
Lukasz Stettner, Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland

TC 8 – Information Systems
Jan Pries-Heje, Roskilde University, Denmark

TC 9 – ICT and Society
David Kreps, National University of Ireland, Galway, Ireland

TC 10 – Computer Systems Technology
Achim Rettberg, Hamm-Lippstadt University of Applied Sciences, Hamm, Germany

TC 11 – Security and Privacy Protection in Information Processing Systems
Steven Furnell, Plymouth University, UK

TC 12 – Artificial Intelligence
Eunika Mercier-Laurent, University of Reims Champagne-Ardenne, Reims, France

TC 13 – Human-Computer Interaction
Marco Winckler, University of Nice Sophia Antipolis, France

TC 14 – Entertainment Computing
Rainer Malaka, University of Bremen, Germany
IFIP Advances in Information and Communication Technology

The IFIP AICT series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; ICT and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction.

Edited volumes and proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research.

The principal aim of the IFIP AICT series is to encourage education and the dissemination and exchange of information about all aspects of computing.

More information about this series at https://link.springer.com/bookseries/6102
Erlend Alfnes · Anita Romsdal · Jan Ola Strandhagen · Gregor von Cieminski · David Romero (Editors)

Advances in Production Management Systems: Production Management Systems for Responsible Manufacturing, Service, and Logistics Futures

IFIP WG 5.7 International Conference, APMS 2023, Trondheim, Norway, September 17–21, 2023, Proceedings, Part II
Editors

Erlend Alfnes, Norwegian University of Science and Technology, Trondheim, Norway
Anita Romsdal, Norwegian University of Science and Technology, Trondheim, Norway
Jan Ola Strandhagen, Norwegian University of Science and Technology, Trondheim, Norway
Gregor von Cieminski, ZF Friedrichshafen AG, Friedrichshafen, Germany
David Romero, Tecnológico de Monterrey, Mexico City, Mexico
ISSN 1868-4238 ISSN 1868-422X (electronic)
IFIP Advances in Information and Communication Technology
ISBN 978-3-031-43665-9 ISBN 978-3-031-43666-6 (eBook)
https://doi.org/10.1007/978-3-031-43666-6

© IFIP International Federation for Information Processing 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Paper in this product is recyclable.
Preface
The year 2023 has undoubtedly been a year of contrasts. We are experiencing stunning developments in technology, creating new products, services, and systems that are changing the way we live and work. Simultaneously, we are experiencing multiple conflicts around the world and the brutal effects of climate change. While many experience success and improved standards of living, others face threats and even the loss of their lives. A scientific conference cannot change this, but it can be seen as a symbol of striving for a different future. We create new knowledge and solutions, we share our achievements, and we meet to build new friendships with people from all over the world.

The International Conference on “Advances in Production Management Systems” (APMS) 2023 is the leading annual event of the IFIP Working Group (WG) 5.7 of the same name. At the conference in Trondheim, Norway, hosted by the Norwegian University of Science and Technology (NTNU), more than 200 papers were presented and discussed. This is a significant step up from the first APMS Conference in 1980, which assembled just a few participants. The IFIP WG5.7 was established in 1978 by the General Assembly of the International Federation for Information Processing (IFIP) in Oslo, Norway. Its first meeting was held in August 1979 with all its seven members present. The WG has since grown to 108 full members and 25 honorary members.

After 43 years, APMS has returned to the city where it started. The venue in 1980 was Lerchendal Gård, and the topic marked the turn of a decade: “Production Planning and Control in the 80s”. The papers presented attempted to look into the future – a future which at that time was believed to be fully digitalized. It was foreseen that, during the coming decade, full automation and optimization of complete manufacturing plants, controlled by a central computer, would become a reality, and that the batch processing of production plans would be replaced by online planning and control systems.
No other technology has shown more rapid development, or had a greater impact on industry and society, than information and communication technology (ICT). The APMS 2023 program shows that the IFIP WG5.7 can still make, and will continue to make, a significant contribution to the production and production management disciplines.

In 2023, the International Scientific Committee for APMS included 215 recognized experts working in the disciplines of production and production management systems. Each paper received an average of 2.5 single-blind reviews. Over two months, each submitted paper went through two rigorous rounds of reviews, allowing authors to revise their work after the first round and ensuring the high scientific quality of the papers accepted for publication. Following this process, 213 full papers were selected for inclusion in the conference proceedings from a total of 224 submissions.

APMS 2023 brought together leading international experts from academia, industry, and government in the areas of production and production management systems to discuss how to achieve responsible manufacturing, service, and logistics futures. This
included topics such as innovative manufacturing, service, and logistics systems characterized by their agility, circularity, digitalization, flexibility, human-centricity, resiliency, and smartification. Such systems contribute to more sustainable industrial futures by ensuring that products and services are manufactured, servitized, and distributed in ways that have a positive effect on the triple bottom line.

The APMS 2023 conference proceedings are organized into four volumes, covering a large spectrum of research addressing the overall topic of the conference: “Production Management Systems for Responsible Manufacturing, Service, and Logistics Futures”. We would like to thank all contributing authors for their quality research work and their willingness to share their findings with the APMS and IFIP WG5.7 community. We are equally grateful for the outstanding work of all the international reviewers, the Program Committee members, and the Special Sessions organizers.

September 2023
Erlend Alfnes Anita Romsdal Jan Ola Strandhagen Gregor von Cieminski David Romero
Organization
Conference Chair

Jan Ola Strandhagen, Norwegian University of Science and Technology, Norway

Conference Co-chair

Gregor von Cieminski, ZF Friedrichshafen AG, Germany

Conference Honorary Chair

Asbjørn Rolstadås, Norwegian University of Science and Technology, Norway

Program Chair

Erlend Alfnes, Norwegian University of Science and Technology, Norway
Program Co-chairs

Heidi Carin Dreyer, Norwegian University of Science and Technology, Norway
Daryl Powell, Norwegian University of Science and Technology / SINTEF Manufacturing, Norway
Bella Nujen, Norwegian University of Science and Technology, Norway
Anita Romsdal, Norwegian University of Science and Technology, Norway
David Romero, Tecnológico de Monterrey, Mexico
Organization Committee Chair

Anita Romsdal, Norwegian University of Science and Technology, Norway

Doctoral Workshop Chair

Hans-Henrik Hvolby, Aalborg University, Denmark
Doctoral Workshop Co-chair

David Romero, Tecnológico de Monterrey, Mexico
List of Reviewers

Federica Acerbi Luca Adelfio Natalie Cecilia Agerskans El-Houssaine Aghezzaf Rajeev Agrawal Carla Susana Agudelo Assuad Kosmas Alexopoulos Kartika Nur Alfina Erlend Alfnes Antonio Pedro Dias Alves de Campos Terje Andersen Joakim Andersson Dimitris Apostolou Germán Arana Landín Simone Arena Emrah Arica Veronica Arioli Nestor Fabián Ayala Christiane Lima Barbosa Mohadese Basirati Mohamed Ben Ahmed Justus Aaron Benning Aili Biriita Bertnum Belgacem Bettayeb Seyoum Eshetu Birkie Umit Sezer Bititci Klas Boivie Alexandros Bousdekis Nadjib Brahimi Greta Braun Gianmarco Bressanelli Jim J. Browne Patrick Bründl Kay Burow Jenny Bäckstrand Jannicke Baalsrud Hauge Robisom Damasceno Calado Luis Manuel Camarinha-Matos Violetta Giada Cannas
Ayoub Chakroun Zuhara Chavez Ferdinando Chiacchio Steve Childe Chiara Cimini Florian Clemens Beatrice Colombo Federica Costa Catherine da Cunha Flávia de Souza Yüksel Değirmencioğlu Demiralay Enes Demiralay Tabea Marie Demke Mélanie Despeisse Candice Destouet Slavko Dolinsek Milos Drobnjakovic Eduardo e Oliveira Malin Elvin Christos Emmanouilidis Hakan Erdeş Kristian Johan Ingvar Ericsson Victor Eriksson Adrodegari Federico Matteo Ferrazzi Jannick Fiedler Erik Flores-García Giuseppe Fragapane Chiara Franciosi Susanne Franke Enzo Frazzon Stefano Frecassetti Jan Frick Paolo Gaiardelli Clarissa A. González Chávez Jon Gosling Danijela Gračanin Daniela Greven Eric Grosse
Zengxu Guo Christopher Gustafsson Petter Haglund Lise Lillebrygfjeld Halse Trond Halvorsen Robin Hanson Stefanie Hatzl Theresa-Franziska Hinrichsen Maria Holgado Christian Holper Djerdj Horvat Karl Anthony Hribernik Hans-Henrik Hvolby Natalia Iakymenko Niloofar Jafari Tanya Jahangirkhani Tim Maximilian Jansen Yongkuk Jeong Kerstin Johansen Björn Johansson Bjørn Jæger Ravi Kalaiarasan Dimitris Kiritsis Takeshi Kurata Juhoantti Viktor Köpman Nina Maria Köster Danijela Lalić Beñat Landeta Nicolas Leberruyer Ming Lim Maria Linnartz Flavien Lucas Andrea Lucchese Egon Lüftenegger Ugljesa Marjanovic Julia Christina Markert Melissa Marques-McEwan Antonio Masi Gokan May Matthew R. McCormick Khaled Medini Jorn Mehnen Joao Gilberto Mendes dos Reis Hajime Mizuyama Eiji Morinaga Sobhan Mostafayi Darmian
Mohamed Naim Farah Naz Torbjørn Netland Phu Nguyen Kjeld Nielsen Ana Nikolov Sang Do Noh Antonio Padovano Julia Pahl Martin Perau Margherita Pero Mirco Peron Fredrik Persson Marta Pinzone Fabiana Pirola Adalberto Polenghi Daryl John Powell Rossella Pozzi Vittaldas Prabhu Hiran Harshana Prathapage Moritz Quandt Ricardo Rabelo Mina Rahmani Slavko Rakic Mario Rapaccini R. M. Chandima Ratnayake Eivind Reke Daniel Resanovic Ciele Resende Veneroso Irene Roda David Romero Anita Romsdal Christoph Roser Nataliia Roskladka Monica Rossi Martin Rudberg Roberto Sala Jan Salzwedel Adrian Sánchez de Ocaña Krzysztof Santarek Biswajit Sarkar Claudio Sassanelli Laura Scalvini Maximilian Schacht Bennet Schulz Marco Semini
Sourav Sengupta Fabio Sgarbossa Vésteinn Sigurjónsson Marcia Terra Silva Katrin Singer-Coudoux Ivan Kristianto Singgih Lars Skjelstad Riitta Johanna Smeds Selver Softic Per Solibakke Vijay Srinivasan Kenn Steger-Jensen Oliver Stoll Jan Ola Strandhagen Jo Wessel Strandhagen Nick B. Szirbik Endre Sølvsberg Iris D. Tommelein Mario Tucci Ebru Turanoglu Bekar Ioan Turcin Arvind Upadhyay Andrea Urbinati
Mehmet Uzunosmanoglu Bruno Vallespir Ivonaldo Vicente da Silva Kenneth Vidskjold Vivek Vijayakumar Gregor von Cieminski Paul Kengfai Wan Piotr Warmbier Kasuni Vimasha Weerasinghe Shaun West Stefan Alexander Wiesner Joakim Wikner Magnus Wiktorsson Heiner Winkler Jong-Hun Woo Thorsten Wuest Lara Popov Zambiasi Matteo Zanchi Yuxuan Zhou Iveta Zolotová Anne Zouggar Mikael Öhman
Contents – Part II
Digitally Enabled and Sustainable Service and Operations Management in PSS Lifecycle

Lifecycle Management of Digitally-Enabled Product-Service Systems Offerings: The Next Challenge for Manufacturers . . . 3
Oliver Stoll, Shaun West, Fabiana Pirola, and Roberto Sala

Source-Target-Link-Matrix: A Conceptual Approach for the Systematic Design of Data-Driven Product Service Systems . . . 17
Oliver Stoll, Simon Züst, Eugen Rodel, and Shaun West

Service Lifecycle Management in Complex Product-Service Systems . . . 32
Peter Dober, Shaun West, Stefan A. Wiesner, and Martin Ebel

An Investigation into Technological Potentials of Library Intralogistics Operations . . . 47
Niloofar Jafari, Fabio Sgarbossa, Bjørn Tore Nyland, and Arild Sorheim

It’s Not About Technology – Stupid! Lessons from a Start-Up Developing a Digitally-Enabled Product Service System to Grow Plants . . . 61
Marco Kunz, Shaun West, Oliver Stoll, and Michael Blickenstorfer

Smart Product-Service System Definitions and Elements – Relationship to Sustainability . . . 76
Stefan A. Wiesner, Jannicke Baalsrud Hauge, and Klaus-Dieter Thoben

Forecast-Based Dimensioning of Spare Parts Inventory Levels in the MRO Industry . . . 92
Tabea Marie Demke, Tim Kämpfer, Torben Lucht, Jens Wachsmann, and Peter Nyhuis

Exploring Digital Servitization in Manufacturing

Servitization and Industry 5.0: The Future Trends of Manufacturing Transformation . . . 109
Dragana Slavic, Ugljesa Marjanovic, Giuditta Pezzotta, Ioan Turcin, and Slavko Rakic
Measuring Acceptance and Benefits of AI-Based Resilience Services . . . 122
Wolfgang Boos, Max-Ferdinand Stroh, Rajath Honagodu Phalachandra, Suat Selvi, Sijmen Boersma, and Justus Benning

Maximizing Customer Satisfaction in Sheet Metal Processing: A Strategic Application of the Customer Health Score . . . 136
Greta Tjaden, Annika Baier, Maureen Strache, Cornelia Regelmann, and Anne Meyer

Coalescing Circular and Digital Servitization Transitions of Manufacturing Companies: The Circular Economy Digital Innovation Hub . . . 151
Claudio Sassanelli, Saman Sarbazvatan, Giorgos Demetriou, Lucie Greyl, Giorgio Mossa, and Sergio Terzi

The Digital Servitization of Manufacturing Sector: Evidence from a Worldwide Digital Servitization Survey . . . 165
Giuditta Pezzotta, Veronica Arioli, Federico Adrodegari, Mario Rapaccini, Nicola Saccani, Slavko Rakic, Ugljesa Marjanovic, Shaun West, Oliver Stoll, Stefan A. Wiesner, Marco Bertoni, David Romero, Fabiana Pirola, Roberto Sala, and Paolo Gaiardelli

Sustainability-as-a-Service: Requirements Based on Lessons Learned from Empirical Studies . . . 181
Clarissa A. González Chávez, Mélanie Despeisse, Björn Johansson, David Romero, and Johan Stahre

Everything-as-a-Service (XaaS) Business Models in the Manufacturing Industry

Moving Towards Everything-as-a-Service: A Multiple Case Study in Manufacturing . . . 199
Laura Scalvini, Federico Adrodegari, and Nicola Saccani

Creation of Subscription-Related Service Modules . . . 213
Günther Schuh, Christian Holper, Lennard Holst, and Wolfgang Boos

Suitability Criteria for Customers for Subscription Business Models in Machinery and Plant Engineering . . . 228
Günther Schuh, Daniela Greven, Lennard Holst, and Mariele Kreitz

How to Acquire Customers for Subscription Business Models in Machinery and Plant Engineering: Challenges and Coping Strategies . . . 243
Günther Schuh, Calvin Rix, and Lennard Holst
Digital Twin Concepts in Production and Services

The Digital Thread Concept for Integrating the Development Disciplines for Mechatronic Products . . . 261
Erik Rieger and Sylwester Oleszek

A Digital Reverse Logistics Twin for Improving Sustainability in Industry 5.0 . . . 273
Xu Sun, Hao Yu, and Wei Deng Solvang

Model Simplification: Addressing Digital Twin Challenges and Requirements in Manufacturing . . . 287
Adrian Sánchez de Ocaña, Jessica Bruch, and Ioanna Aslanidou

Digital Service Twin – Design Criteria, Requirements and Scope for Service Management . . . 302
Alicia Schultheiß, Edgar Polovoj, Stefan Dolanovic, and Katja Gutsche

Towards Ontologizing a Digital Twin Framework for Manufacturing . . . 317
Milos Drobnjakovic, Guodong Shao, Ana Nikolov, Boonserm Kulvatunyou, Simon Frechette, and Vijay Srinivasan

Experiential Learning in Engineering Education

Industrial Engineering Education for Industry 4.0 . . . 333
Giovanni Mummolo, Jim Browne, and Asbjørn Rolstadås

Milky Chain Game: A Pedagogical Game for Food Supply Chain Management . . . 347
Mizuho Sato, Tomoya Manago, and Hajime Mizuyama

Introducing Active Learning and Serious Game in Engineering Education: “Experience from Lean Manufacturing Course” . . . 363
Mattei Gianpiero, Paolo Pedrazzoli, Giuseppe Landolfi, Fabio Daniele, and Elias Montini

Crafting a Memorable Learning Experience: Reflections on the Aalto Manufacturing Game . . . 378
Mikael Öhman, Müge Tetik, Risto Rajala, and Jan Holmström

A Classification Framework for Analysing Industry 4.0 Learning Factories . . . 392
Simone Vailati, Matteo Zanchi, Chiara Cimini, and Alexandra Lagorio
Development and Stress Test of a New Serious Game for Food Operations and Supply Chain Management: Exploring Students’ Responses to Difficult Game Settings . . . 403
Davide Mezzogori, Giovanni Romagnoli, and Francesco Zammori

Challenges for Smart Manufacturing and Industry 4.0 Research in Academia: A Case Study . . . 418
M. R. McCormick and Thorsten Wuest

Report on Integrating a COTS Game in Teaching Production and Logistics . . . 433
Jannicke Baalsrud Hauge and Matthias Kalverkamp

Towards Novel Ways to Improve and Extend the Classic MIT Beer Game . . . 446
Rudy Niemeijer, Paul Buijs, and Nick Szirbik

Innovation & Entrepreneurship in Engineering Curricula: Evidences from an International Summer School . . . 461
Jovista Qosaj, Donatella Corti, and Sergio Terzi

Lean in Healthcare

Role of Manufacturing Industry for Minimizing the Barriers to Circular Transition in the Health Sector: A Framework . . . 479
Kartika Nur Alfina and R. M. Chandima Ratnayake

Managing Performance in Technology-Enabled Elderly Care Services: The Role of Service Level Agreements in Modular Smart Service Ecosystems . . . 497
Godfrey Mugurusi, Anne Grethe Syversen, Inge Hermanrud, Martina Ortova, Pankaj Khatiwada, and Stian Underbekken

Effect of Machine Sharing in Medical Laboratories . . . 515
Aili Biriita Bertnum, Roy Kenneth Berg, Stian Bergstøl, Jan Ola Strandhagen, and Marco Semini

Additive Manufacturing in Operations and Supply Chain Management

What to Share? A Preliminary Investigation into the Impact of Information Sharing on Distributed Decentralised Agent-Based Additive Manufacturing Networks . . . 533
Owen Peckham, Mark Goudswaard, Chris Snider, and James Gopsill

The Potential of Additive Manufacturing Networks in Crisis Scenarios . . . 548
Yen Mai Thi, Xiaoli Chen, and Ralph Riedel
An Environmental Decision Support System for Determining On-site or Off-site Additive Manufacturing of Spare Parts . . . 563
Enes Demiralay, Seyed Mohammad Javad Razavi, Ibrahim Kucukkoc, and Mirco Peron

Latest Technological Advances and Key Trends in Powder Bed Fusion: A Patent-Based Analysis . . . 575
António Alves de Campos and Marco Leite

Integration of Additive Manufacturing in an Industrial Setting: The Impact on Operational Capabilities . . . 590
Christopher Gustafsson, Anna Sannö, Koteshwar Chirumalla, and Jessica Bruch

Additive Manufacturing: A Case Study of Introducing Additive Manufacturing of Spare Parts . . . 605
Bjørn Jæger, Fredrik Wiklund, and Lise Lillebrygfjeld Halse

Applications of Artificial Intelligence in Manufacturing

Examining Heterogeneous Patterns of AI Capabilities . . . 619
Djerdj Horvat, Marco Baumgartner, Steffen Kinkel, and Patrick Mikalef

Enabling an AI-Based Defect Detection Approach to Facilitate Zero Defect Manufacturing . . . 634
Nicolas Leberruyer, Jessica Bruch, Mats Ahlskog, and Sara Afshar

A Conceptual Framework for Applying Artificial Intelligence to Manufacturing Projects . . . 650
Aymane Sahli, Eujin Pei, and Richard Evans

Influence of Artificial Intelligence on Resource Consumption . . . 662
Naiara Uriarte-Gallastegi, Beñat Landeta-Manzano, Germán Arana-Landin, and Iker Laskurain-Iturbe

Development of Predictive Maintenance Models for a Packaging Robot Based on Machine Learning . . . 674
Ayoub Chakroun, Yasmina Hani, Sadok Turki, Nidhal Rezg, and Abderrahmane Elmhamedi

Author Index . . . 689
Digitally Enabled and Sustainable Service and Operations Management in PSS Lifecycle
Lifecycle Management of Digitally-Enabled Product-Service Systems Offerings: The Next Challenge for Manufacturers

Oliver Stoll¹, Shaun West¹, Fabiana Pirola², and Roberto Sala²

¹ Lucerne University of Applied Sciences and Arts, 6048 Horw, Switzerland
{oliver.stoll,shaun.west}@hslu.ch
² Department of Management, Information and Production Engineering, University of Bergamo, Viale Marconi, 5, 24044 Dalmine, BG, Italy
{fabiana.pirola,roberto.sala}@unibg.it

Abstract. The number of servitized manufacturers is growing, along with the number of product-service system (PSS) offerings. Digitally-enabled PSS offerings can be complex value constructs comprising multiple products and services organized and executed to create new value for the stakeholders. Recent and past literature has focused on developing and commercializing such PSS offerings. While the most recent literature focuses on the role and influence of digital servitization and PSS theory, very few papers have introduced the need for investigating the lifecycle management of PSS offerings. The early contributions focus on lifecycle management frameworks and are mainly based on academic data, thus needing proper industrial validation. Therefore, this paper explores practitioners’ challenges related to the three main types of PSS offerings: product-oriented, use-oriented, and result-oriented. The paper’s results are based on insights generated by expert interviews from different industries. The results show a need for a systematic approach to the lifecycle management of digitally-enabled PSS offerings, especially for use- and result-oriented offerings. This need is becoming more critical due to sustainability aspects.

Keywords: Servitization · Product-Service Systems (PSS) · Digital Servitization · Lifecycle Management of PSS Offerings
1 Introduction

While the term “lifecycle” refers to a three-phase model that encompasses the beginning, middle, and end of a product, asset, or service’s existence [1], “lifecycle management” (LCM) is an integrated concept that involves managing activities, knowledge, and data to achieve the intended performance and sustainability of a product or asset, as planned [2]. The core objective of LCM for industrial products is to maintain the services’ support for an extended period [3]. These services facilitate the customer’s continued access to the product and its features. From the servitization perspective, Vendrell-Herrero et al. [4] showed that PSS offerings lead to a longer product lifespan, increasing the servitization potential.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 3–16, 2023. https://doi.org/10.1007/978-3-031-43666-6_1
Despite the works of authors like Saaksvuori and Immonen [3] and Terzi et al. [1] that have described the importance of LCM in the product context, the Product-Service System (PSS) literature does not agree on its definition [5] and does not even analyze LCM approaches considering the different PSS offerings (i.e., product-oriented, use-oriented, and result-oriented) and the impact of digital technologies. How LCM activities ensure that the organization fulfills its mission, strategy, and sustainable development goals still needs to be clarified. Moreover, aspects directly affecting the LCM strategies of companies should be investigated in the context of PSS. In particular, as suggested by Vendrell-Herrero et al. [4], the lifespan aspect should be considered when companies define the proper PSS LCM strategy. Indeed, since PSS offerings are frequently built upon the concepts of sustainability and longevity, a product’s useful life can increase thanks to the delivery of services (e.g., maintenance to prolong the useful life of a component or asset). Therefore, further research is needed to understand the importance of a lifecycle management process for PSS and, in particular, for digitally-enabled PSS.

Using a qualitative approach based on semi-structured interviews and coding, this paper aims to extract insights into companies’ perceptions of the challenges related to digitally-enabled PSS in LCM, considering the three types of PSS offerings: product-oriented, use-oriented, and result-oriented. The paper is structured as follows: Sect. 2 provides a short theoretical background, Sect. 3 describes the research methodology, Sect. 4 presents the results and the related discussion, while Sect. 5 summarizes the achievements and delineates the next steps in the research.
2 Theoretical Background

Lifecycle management activities have been identified as a source of services that organizations could provide [6, 7] to unlock benefits (e.g., cost savings) and improvements for higher competitiveness [8], as demonstrated by the work of Vendrell-Herrero et al. [4]. In LCM, it is clear that access to product lifecycle data and knowledge [9, 10] is crucial to allow organizations to offer services that ensure product performance is maintained as intended. In turn, this creates opportunities for business improvements through services. Such options can generate additional value when customized on individual product-service bundles to be delivered to customers throughout the PSS lifecycle [11–13].

As noted in the previous section, LCM definitions lack consistency [5], and Table 1 provides the two most representative definitions along with the lifecycle/lifespan definitions they refer to. In this paper, the terminology provided by West et al. [2] will be used, since it considers LCM from a system perspective and highlights the need for collaboration from an ecosystem point of view.

Wiesner et al. [16] investigated the role of combining PLM (product lifecycle management) and SLM (service lifecycle management) approaches to manage the lifecycle of PSS. The authors identified a set of challenges and addressed them by investigating the interactions between PLM and SLM. They also provided a set of PSS LCM phases based on the PLM model and the SLM model (Fig. 1).
Table 1. Definitions for lifecycle management, retrieved from West et al. [2]

Term: Lifecycle
Understanding: Phase model from the beginning, through use-phase cycles, until the end of life (‘cradle to grave’) of a product or system
Sources: Terzi et al. [1]

Term: Lifecycle management
Understanding: Integrated concept, supported by information and communication technology (ICT), to manage knowledge, data and activities for achieving desired performances and sustainability for the product and related services in the different lifecycle phases
Sources: Terzi et al. [1], Remmen et al. [14], Itskos et al. [15]

Term: Lifespan
Understanding: Timeframe in which a product can be used (longevity) before reaching its end of life
Sources: Vendrell-Herrero et al. [4]

Term: Nested lifecycle management
Understanding: A systems perspective on asset management, considering subsystems with different lifespans that must be managed holistically by different actors with different perspectives
Fig. 1. Process model of Service Lifecycle Management retrieved from Wiesner et al. [16]
Apart from definitions of the LCM concept, a framework detailing its structure and the activities that compose proper LCM should be defined, especially in the PSS context
that is more complex due to the need to manage the service component of the offering. To this purpose, Wang et al. [17] and Peruzzini et al. [18] put forward models for PSS lifecycle management. The model proposed by Wang et al. [17] establishes a connection between the service scenario and the physical components, expanding traditional research to incorporate actual system-level design, but it does not show the application to different PSS offerings and their specifics. Peruzzini et al. [18], in contrast, proposed a model (Fig. 2) that follows a four-step process: PSS ideation, PSS design, PSS implementation, and PSS delivery. However, it does not account for the cyclical aspect of LCM and remains at a high level. West et al. [2] propose a concept (Fig. 3) of nested lifecycles that provides a perspective for asset management within complex equipment. This study further indicates a gap: the concepts of product lifecycle and product lifespan management need to be explicitly separated. This paper argues that, from a systems perspective, the lifecycle of an asset can be represented as a tiered structure of subsystem lifecycles, where the lifespans of those subsystems can vary in duration. The literature focuses on LCM as a source of opportunities to offer PSS, but it does not yet provide knowledge on how to manage the lifecycle of the PSS from the middle of life (MOL) to the end of life (EOL) considering the three offering typologies (product-oriented, use-oriented, and result-oriented), especially given the potential that PSS offerings can have for companies. Their complex and dynamic nature requires instruments (e.g., frameworks and tools) to guide companies in properly managing the offerings' lifecycle.
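To make the nested-lifecycle idea concrete, the short sketch below models an asset as a set of subsystems with different lifespans and counts how often each subsystem would be replaced within one asset life. The subsystem names and lifespan values are hypothetical illustrations, not figures from West et al. [2].

```python
import math

# Hypothetical asset and subsystem lifespans (in years); only the idea of
# nested lifecycles with differing subsystem lifespans comes from the text.
asset_lifespan = 40
subsystem_lifespans = {"structure": 40, "drive unit": 15, "control electronics": 8}

for name, life in subsystem_lifespans.items():
    # Number of replacements needed so the subsystem outlasts the asset
    replacements = math.ceil(asset_lifespan / life) - 1
    print(f"{name}: {replacements} replacement(s) over the asset's life")
```

The point of the sketch is that the actors managing the 8-year electronics lifecycle face a very different planning horizon than those managing the 40-year structure, which is why the lifecycles must be managed holistically but by different actors.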
Fig. 2. Product-service lifecycle management process retrieved from Peruzzini et al. [18]
Fig. 3. Lifespans of different asset classes in a smart factory retrieved from West et al. [2]
3 Methodology

The interviews followed the protocol suggested by Hennink et al. [19]. With the goal of creating homogeneity [19] around the topics of servitization and PSS, participants had to satisfy the following five requirements:

1. The role within the organization needs to be related to servitization or PSS; for example, an interviewee who is responsible for developing services or products, or who influences the company's strategy, was considered.
2. Interviewees should be closely involved in digitalization initiatives, for example, engagement in developing smart services or digital twins.
3. The interviewees should know the theoretical concept of PSS or be able to understand and describe it.
4. The organization in which the interviewee works must be a manufacturing firm operating in a B2B environment.
5. The interviewees had to have a leadership position within the organization, for example, a candidate responsible for one or more of the points mentioned above.

A gatekeeper strategy was adopted for selecting the candidates. Of the 16 candidates initially contacted, nine agreed to be interviewed, as shown in Table 2. Based on indications from the literature and Hennink et al. [20], the authors considered this number of participants satisfactory. In support of this, the reliability of the results was strengthened by the homogeneity of the selected participants.

3.1 Data Collection

The interview process consisted of four stages, shown in Fig. 4. The first contact with the interviewees was via email, with an interview guide attached that anticipated and detailed the discussion topics. Following agreement to participate, a series of in-person (recorded with proper supporting instrumentation) and remote (e.g., via Microsoft Teams) interviews was conducted. The length of the interviews spanned from 46 to 71 min, with an average of 57 min and a total of 513 min.
To speed up transcription, the gotranscript.com service was used, and the transcripts were then double-checked by a human for quality control. The interviews were also anonymized by removing all information that could identify the interviewee.
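The reported sample statistics can be checked with a few lines. The individual durations below are hypothetical values chosen to be consistent with the reported range; only the aggregates (nine interviews, 46–71 min range, 57 min mean, 513 min total) come from the text above.

```python
# Hypothetical per-interview durations in minutes; only the aggregate
# figures (n=9, range 46-71, mean 57, total 513) are from the paper.
durations = [46, 71, 57, 55, 60, 58, 52, 62, 52]

assert len(durations) == 9
assert min(durations) == 46 and max(durations) == 71
total = sum(durations)
mean = total / len(durations)
print(total, mean)  # 513 57.0
```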
Table 2. Sampling characteristics of the interviewees
Interviewee: Industry, Years of Experience
A: Aircraft OEM, 14
B: Aircraft OEM, 20
C: Pump OEM, 22
D: Ship Engine OEM, 3
E: Turbo machinery OEM, 6
F: Turbo machinery OEM, 20
G: Building Automation OEM, 25
H: Medical Printer OEM, 12
I: Medical Printer OEM, 11
Fig. 4. In-depth interview data collection process
3.2 Interview Guide

The interview guide starts with queries about the organization and the interviewee's position. Subsequently, it explores the LCM of product-oriented, use-oriented, and result-oriented service offerings, as shown in Fig. 5.
Fig. 5. Questions explored through the interviews
4 Results and Discussion

This section gives insights into LCM for the respective types of PSS offerings based on the PSS typologies and the interview data. The tables summarize the opinions of the interviewees related to lifecycle management aspects of digitally-enabled PSS. The results are structured by product-oriented service offerings (see Table 3), use-oriented service offerings (see Table 4), and result-oriented service offerings (see Table 5). Lastly, quotes related to lifecycle management in general are presented in Table 6.

4.1 Insights into the Lifecycle Management of Product-Oriented Service Offerings

For product-oriented service offerings, external value creation comes from selling standardized products and services. Given that these services are mainly on-call, the LCM can be achieved with traditional methods such as product lifecycle management [4]. As discussed in the literature [21], when it comes to the internal value creation of PSS, the digitalization level can enable the organization to achieve a competitive advantage by using digital methods to optimize operations. While the long-term view is common to all the interviewees (e.g., by dedicating fixed teams to the offerings), problems in LCM emerge concerning the possible obsolescence of parts or the necessity to check the consistency of what has been designed and marketed. Interestingly, the theme of data management was also raised during the interviews. Overall, the interviewees confirmed that traditional lifecycle management tools, in combination with a designated management team, can be sufficient for product-oriented service offerings. The revenue model used here aligns with the cost inputs for the customer and focuses on the product. The risk also remains mostly with the customer, with the supplier providing basic warranty obligations.
4.2 Insights into the Lifecycle Management of Use-Oriented Service Offerings

Use-oriented PSS offerings involve a higher level of complexity than product-oriented ones, since they rely on product availability and operational performance to generate value and, thus, revenues. In addition, value creation often depends on operational insights from customers about their usage. This focus on customers can reveal different ways the equipment is used and customized. The offerings can include a single product or asset, or a fleet incorporating multiple products or assets. Despite requiring more management and involving more complexity, use-oriented LCM is believed to have more significant potential for valuable PSS offerings. The digitalization of PSS operations and the collection and analysis of quality data along the PSS lifecycle can support better LCM, leading to higher value generation for companies and customers. Accordingly, a deep understanding of the product, and of how operations affect it over its life cycle, becomes essential for the revenue model. The revenue model here focuses on the use of the equipment and again aligns with the cost inputs for the product, but transformed into a form that the customer can use directly in their cost models. Risk transfer occurs because the supplier is only paid when the equipment is used, so the approach provides an 'evergreen' warranty.
Table 3. Insights from the experts into the lifecycle management of product-oriented service offerings in the context of digitally-enabled PSS

A: When the product gets close to its end-of-life, the costs of providing the offerings can increase due to the decrease in the number of products using the services. The organization regularly runs into obsolescence problems at the end of the product life because of this increase in cost.

B: In large organizations, LCM starts with the management of ideas for offerings. This includes a structured approach to innovation management. The assessment of potential innovations may consist of four dimensions: technical, business, strategic, and customer. When launching the offering, the organization needs to monitor the four criteria and ensure they are still as planned, because the offerings and the process are very agile.

C: The LCM activities focus on how the asset fits into the process and how the requirements change over time. This may include spares and replacement of worn parts.

D: A structured approach to lifecycle management is needed for PSS; however, at the moment, the organization does not have such a process.

E: For product-oriented service offerings, a product lifecycle management approach should do the job, by assigning a fixed team to the management of the offerings.

F: Data lifecycle management is essential, as the information is one of the foundations of delivering value to the customer.

I: It is essential to keep the support of the services over a long period, which can be up to 40 years, as this is the product's lifespan. These services support the customer in keeping the product and its functionality available to them.
4.3 Insights into the Lifecycle Management of Result-Oriented Service Offerings

Despite being perceived as the most complex and riskiest by the interviewees, result-oriented service offerings are the ones that promise the most potential for value creation, since they are correlated with the performance of the customer, which influences the revenue. PSS performance and the ability of the product to deliver the required outcomes in the time expected by the customer have a direct connection with the revenue stream. For this reason, as in the previous case, profound knowledge of the product's behavior over time is required to guarantee the customer the contracted level of performance and so maintain the revenue stream. As one interviewee explained, the more data is collected, the more knowledge they have of the product and the better the company can guarantee the expected performance. As in the use-oriented case, improved LCM can be achieved by adequately exploiting digitalization features, allowing real-time monitoring and decision-making over service delivery. In turn, this makes LCM more complex and more relevant for result-oriented service offerings. According to one interviewee, optimal lifecycle management requires balanced governance of value creation, analytics, and
Table 4. Insights from the experts into lifecycle management of use-oriented service offerings in the context of digitally-enabled PSS

A: The more digital elements are used in services, the more reliable the data needs to become; this requires the management of data generation and databases. If there are multiple data sources, they have to be more reliable, because multiple failure points affect the failure point of the system and its reliability. Profound knowledge of the product and how operations affect it over its life cycle is crucial, as this is directly related to the revenue model. Effective contracting is another aspect that needs to be considered. In an early stage, the offering is based on assumptions, and the reality check comes later; to mitigate the risk, close collaboration with lead customers is required. Launching these services with multiple customers is not possible, as it is not clear how scaling the offerings will affect the cost structures.

B: The LCM of product- and use-oriented service offerings are strongly connected. The use-oriented offerings need more management because the context in which the value is created can change constantly. Therefore, close collaboration with the customer is imperative.

C: A critical lifecycle management aspect in use-oriented service offerings is understanding how the asset is used in the ecosystem and how, for example, maintenance activities influence the overall customer process. Data quality along the lifecycle is another essential aspect that needs to be managed, as this will determine the effectiveness of the value created with such services.

F: Monitoring and managing the factors influencing asset usage becomes critical, since the revenue depends on it. Managing the costs behind such an offering is another relevant aspect, as the cost may increase over time, so from a financial perspective, cash management becomes vital.

I: New technologies enable these offerings. These technologies often have shorter lifecycles that need consideration when developing lifecycle management activities. Complexity is added to the LCM of PSS when offering use-oriented services.

J: When services are delivered or created using digital technologies, such as analytics and network connectivity, the infrastructure and architecture need to be managed to ensure the reliability of the data is secured.
network connectivity, because the supplier directly takes a proportion of their customer's income. This also indicates where the focus of future research can lie. A possible starting point for result-oriented service offerings could be the systemic perspective on value creation and how people make decisions based on advanced analytics enabled by network connectivity. The importance of value creation along the lifecycle links closely with 'value-based asset management' [22]. Asset management for value demands the integration of different perspectives within the system to understand value in context, allowing different investment/reinvestment strategies to be explored.
Table 5. Insights from the experts into lifecycle management of result-oriented service offerings in the context of digitally-enabled PSS

A: One of the challenges is to manage the results and to keep them under control when the lifecycle context of the product changes, managing the lifecycle of the business case with the customer to make sure that a common understanding of performance is achieved constantly. The complexity of the value system increases because the performance may be based on a fleet of products. When the fleet changes, that may affect the results; this can be a risk or an opportunity to renegotiate contracts.

B: Managing customer needs, which multiple actors within the value systems determine, is essential to manage the lifecycle of result-oriented service offerings successfully.

C: The result or performance becomes the organization's responsibility; therefore, monitoring and ensuring that the results are achieved is essential. Hence, managing people and access to resources are lifecycle tasks that need to be done.

D: The lifecycle management of result-oriented service offerings will be more complex than that of the other offerings.

F: Balancing investments and managing the risk over the lifecycle of the offering is critical for result-oriented service offerings.

H: Lifecycle management requires the balanced management of analytics, network & connectivity, and value creation. It is important to connect these three pillars for more efficient management.

I: There is no difference between the lifecycle management of result-oriented and use-oriented service offerings.
4.4 Main Insights into the Lifecycle Management of PSS Offerings

The discussion of general PSS LCM was consistently tied to themes of data collection and management, as in the case of use- and result-oriented offerings. This is not unexpected given the fourth industrial revolution and market pressure, which force companies to embed new technologies into products to cope with customer requests. Digital technologies also emerged as enablers for service management and delivery, allowing firms to move beyond product-oriented models towards the adoption of use- and result-oriented ones. These would be difficult to offer without reliable control over the product's performance, operations, and health state.
Table 6. Quotes on lifecycle management of digitally-enabled PSS

A: "The more systems I'm taking the data from, the more important is reliability, so it's simple statistics. The more failure modes I have, the more reliable the solution needs to be. Otherwise, the resulting system will always have something that is not working."

B: "You need to, first of all, filter and make a proper assessment of what [service] makes sense or even if all of them would make sense […] because, obviously, budget is a limited resource. You need to prioritize […] trying to understand what is the most relevant to be fully developed."
"When you are moving to a tailor-made solution to a industrialized solution, you can really have much better control on how the market is adopting this. Whenever we are talking in this mode, in a use services approach, we are talking agile mode."
"Wherever you are really digesting customer feedbacks and translating this into functionalities, you can follow a waterfall approach. While when we are talking about a product approach is we might be thinking on these needs, we transform this into requirements in the longer term."

D: "As I mentioned before, with the operations, we need accordingly operations people, we need also salespeople. At the moment, we don't have any sales really. The sales are done by our product manager. [At] the moment there is no real organization for that. I think also this will be the similar problem for lifecycle management in the end."

E: "The digital use case is more or less attached to the hardware use case, to the product we sell in our core business."

F: "Now we are at the heart of every service activity. It's digital or not digital. You have to understand the business case of your customers."
"The tool is what we discussed before it's IoT, its data transmission. This is one aspect of the coin. The second and maybe more important topic is to define what do you need and to which extent you need it."

H: "It's on all elements [product, software and service], on all the pillars. We have to find a… let's say the thing is, how do we connect these pillars to make as… A single pillar is not efficient, but if you connect these three pillars, then you can even get more than 100% out of that, right? The question is how do you connect them, how to make them more efficient?"

I: "We spend a lot of time today on phasing in products and also phasing out products. Our machines are in general, I would say, maybe even 15 years' service, maybe longer. I would also say that we have a good reputation of not cutting off the supply of certain things within the shortest or the minimum legal amount of time that something is in the market."
"From a design point of view, it's adding more complexity. You have to plan much more ahead, not just 'I design a machine, it does the printing, and then my job is done'."

J: "Well, okay, [silence] so if the delivery of our service is based around some digital data collection architecture, infrastructure, then that architecture infrastructure needs to be reliable and secure and professional and scalable and maintainable."
"I'd say at this point in time our voice of the customer mechanisms are extremely inadequate as well. We think we know the customer, we have a certain arrogance around ourselves. I think there's a large opportunity for digitalization in terms of making the needs of the customer much more transparent inside the organization and enabled us to respond to them."
"Creating a culture where people are data orientated, fact-based orientated, have the ability to get access to that data themselves and think outside of the box by breaking down the barriers. The digitalization I see is a way of just breaking down the barriers between the silos. Turning the organization inside out."
5 Conclusion

Effective PSS lifecycle management ensures that a product or service is designed, produced, and delivered sustainably and efficiently while meeting customer needs and expectations. The management of product and service components over the PSS lifecycle (from conceptualization to disposal) is called PSS LCM [1]. While the LCM literature provides multiple frameworks specializing in product, asset, service, or software lifecycle management, only two papers have proposed a PSS lifecycle management framework [17, 18]. Despite focusing on products and services and trying to take a systemic perspective on the interactions between them, these frameworks do not incorporate the digital aspect that, considering the current industrial revolution, is critical for the success of PSS offerings. Digital technologies like the Internet of Things (IoT), Artificial Intelligence (AI), and blockchain can revolutionize PSS lifecycle management by enabling real-time monitoring, predictive maintenance, and seamless data exchange across different systems and stakeholders. Through a set of interviews conducted with nine industrial experts on the PSS topic, working in the service departments of their companies, this paper sought to frame the understanding that companies have of LCM according to the three main PSS typologies (i.e., product-oriented, use-oriented, and result-oriented) and to explore how the role of digitalization is considered in managing the PSS LCM.
What emerged is that companies strongly consider it necessary to use digital means to guarantee the performance of the products in all the PSS typologies, even though, as expected, the topic of data management emerged mainly while discussing use- and result-oriented offerings. To do so, integration of digital means is required, since it allows for more efficient reactive and preventive responses to product problems, guaranteeing higher customer satisfaction and revenues. Future research in PSS lifecycle management should aim to develop holistic and integrated frameworks that consider the digital aspect, customer needs, sustainability, and multi-stakeholder collaboration. Such frameworks would enable organizations to design and deliver PSS that are efficient, sustainable, and customer-centric, providing significant value to all stakeholders involved.
References

1. Terzi, S., Bouras, A., Dutta, D., et al.: Product lifecycle management - from its history to its new role. Int. J. Prod. Lifecycle Manage. 4, 360–389 (2010)
2. West, S., Ebel, M., Anderson, M., et al.: Nested lifecycles - improving the visibility of product lifespans in smart factories. Front. Manufac. Technol. 2 (2022). https://doi.org/10.3389/fmtec.2022.837478
3. Saaksvuori, A., Immonen, A.: Product Lifecycle Management. Springer, Berlin, Heidelberg (2005)
4. Vendrell-Herrero, F., Vaillant, Y., Bustinza, O.F., et al.: Product lifespan: the missing link in servitization. Prod. Plan. Control 33, 1372–1388 (2022)
5. Rodriguez, A.E., Pezzotta, G., Pinto, R., et al.: A comprehensive description of the Product-Service Systems' cost estimation process: an integrative review. Int. J. Prod. Econ. 221, 18 (2020)
6. Baines, T., Lightfoot, H.W.: Servitization of the manufacturing firm: exploring the operations practices and technologies that deliver advanced services. Int. J. Oper. Prod. Manag. 34, 2–35 (2014)
7. Baines, T., Bigdeli, A.Z., Bustinza, O.F., et al.: Servitization: revisiting the state-of-the-art and research priorities. Int. J. Oper. Prod. Manag. 37, 256–278 (2017)
8. Schuman, C.A., Brent, A.C.: Asset life cycle management: towards improving physical asset performance in the process industry. Int. J. Oper. Prod. Manag. 25, 566–579 (2005)
9. Wan, S., Li, D.B., Gao, J., et al.: Process and knowledge management in a collaborative maintenance planning system for high value machine tools. Comput. Ind. 84, 14–24 (2017)
10. Zhu, H.H., Gao, J., Li, D.B., et al.: A Web-based Product Service System for aerospace maintenance, repair and overhaul services. Comput. Ind. 63, 338–348 (2012)
11. Wuest, T., Hribernik, K., Thoben, K.D.: Accessing servitisation potential of PLM data by applying the product avatar concept. Prod. Plan. Control 26, 1198–1218 (2015)
12. Stoll, O., West, S., Mueller-Csernetzky, P.: Using "avatar journey mapping" to reveal smart-service opportunities along the product life-cycle for manufacturing firms. In: 7th International Conference on Business Servitization (ICBS), Nova School of Business and Economics, pp. 22–23 (2018)
13. West, S., Stoll, O., Mueller-Csernetzky, P.: 'Avatar journey mapping' for manufacturing firms to reveal smart-service opportunities over the product life-cycle. Int. J. Bus. Environ. 11, 298–320 (2020)
14. Remmen, A., Jensen, A., Frydendal, J.: Life Cycle Management - A Business Guide to Sustainability. United Nations Environment Programme (2007)
15. Itskos, G., Nikolopoulos, N., Kourkoumpas, D.S., et al.: Energy and the environment. In: Poulopoulos, S.G., Inglezakis, V.J. (eds.) Environment and Development, pp. 363–452. Elsevier, Amsterdam (2016)
16. Wiesner, S., Freitag, M., Westphal, I., et al.: Interactions between service and product lifecycle management. Procedia CIRP 30, 36–41 (2015)
17. Wang, P.P., Ming, X.G., Li, D., et al.: Status review and research strategies on product-service systems. Int. J. Prod. Res. 49, 6863–6883 (2011)
18. Peruzzini, M., Germani, M., Marilungo, E.: Product-service lifecycle management in manufacturing: an industrial case study. In: Product Lifecycle Management for a Global Market (PLM 2014), vol. 442, pp. 445–454 (2014)
19. Hennink, M., Hennink, M.M., Hutter, I., et al.: Qualitative Research Methods. SAGE Publications, London (2020)
20. Hennink, M.M., Kaiser, B.N., Marconi, V.C.: Code saturation versus meaning saturation: how many interviews are enough? Qual. Health Res. 27, 591–608 (2017)
21. Pirola, F., Boucher, X., Wiesner, S., et al.: Digital technologies in product-service systems: a literature review and a research agenda. Comput. Ind. 123, 103301 (2020)
22. Roda, I., Parlikad, A.K., Macchi, M., et al.: A framework for implementing value-based approach in asset management. In: 10th World Congress on Engineering Asset Management (WCEAM), pp. 487–495. Springer, Tampere, Finland (2015)
Source-Target-Link-Matrix: A Conceptual Approach for the Systematic Design of Data-Driven Product Service Systems

Oliver Stoll, Simon Züst(B), Eugen Rodel, and Shaun West
Hochschule Luzern T&A (HSLU), Institute of Innovation and Technology Management (IIT), Technikumstrasse 21, 6048 Horw, Switzerland [email protected]
Abstract. Digital-enabled product service systems (PSS) enable the overall improvement of multi-actor value streams by increasing system efficiency and thus enabling shared value for all actors. This work aims to provide a new approach to designing data models in digital-enabled PSS. The approach addresses key challenges in designing effective PSS by presenting the source-target-link matrix (STLM) concept. The STLM enables the identification of primary data sources and the determination of the minimal set of data transfers needed, ensuring that PSS are designed efficiently and effectively. The STLM approach can be fully integrated into the design process, enabling managers to validate the completeness and feasibility of their PSS designs. In summary, this approach provides a valuable tool for managers looking to design PSS effectively and to ensure collaboration and integration between different areas of expertise.

Keywords: product service systems · data model design · business process modeling
1 Introduction

Product service systems (PSS) combine products and services to provide value to customers [1]. Compared to a purely physical product, this additional value can be achieved by providing the same value at a lower total cost of ownership, with higher availability, and/or by providing additional value. Recent research shows that PSS continue to gain importance both in business modeling and in sustainability topics [2]. From a business model and sustainability point of view, PSS enable integral value provision through an efficient distribution of the required processes and tasks along the whole value chain [3]: in the ideal case, a value-add is provided by the actor in the value chain able to deliver it most efficiently, providing a cost-benefit for all value chain stakeholders. Especially in multi-actor PSS, such as complex value streams, the flow of physical products and services must be accompanied by data streams [4]. Such data-driven PSS enable the synchronization of the individual actors' value-add with the overall value stream.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 17–31, 2023. https://doi.org/10.1007/978-3-031-43666-6_2
18
O. Stoll et al.
Data models play a crucial role in developing and maintaining data-driven PSS. A data model is a conceptual and technical design of how data is organized, stored, and accessed in a system [5]. In data-driven PSS with multiple actors, such models are critical because they ensure that data is consistent, accurate, and complete. This is crucial for the correct transmission and interpretation of data, which can have a significant impact on decision-making and thus, ultimately, on the success of the system. During the design of data models for data-driven PSS, procedures are required to reduce the complexity of the requirements involved: developers and actors must understand how the system works and how data flows through it [6]. Furthermore, these procedures need to establish proper communication and collaboration within the development teams of the different actors in the system. In addition, the process of designing a data model for a PSS must not be a one-way approach: the design of data models can aid the design and development of data-driven PSS by facilitating the identification of data dependencies and relationships. Hence, the data model can be used to verify the PSS process and business model and answer the question 'Can the required data be provided at the right time, and does the business model motivate the data source to provide the required data?' This further ensures that the system is scalable, flexible, and adaptable to changing business requirements. In summary, data models are essential to data-driven PSS because they help ensure data consistency, accuracy, and completeness. Procedures to design such data models must assist in reducing system complexity, improving communication and collaboration, and aiding process design, development, and verification. The motivation of this work is to explore how data models can be systematically designed within the context of PSS.
This work will focus on multi-actor value streams and discuss a novel approach to synthesize the required data elements based on the value propositions and use cases, identify and verify data sources, design the appropriate data model, and finally verify the data model using the business processes and business models of the PSS. The practical application is shown based on a use case.
2 Literature Review

2.1 Digitally Enabled Product Service Systems

A PSS is a business model that integrates products and services to provide a complete and optimized solution to the customer’s needs [7]. In a PSS, the focus is not only on selling a physical product, but also on providing value-added services that improve the functionality and usability of the product, increase customer satisfaction, and create additional revenue streams for the provider. The PSS approach focuses on building long-term relationships with customers, where the provider acts as a partner and collaborator rather than just a seller of products. Decision support systems (DSS) play an important role, especially in the managerial aspects of PSS [8, 9]. Such DSS require the availability of information. Using digitally enabled data processing, this information can be provided systematically. Current research on industrial applications offers a wide range of frameworks, procedures, and use cases to synthesize and integrate digitally enabled PSS.
Source-Target-Link-Matrix: A Conceptual Approach for the Systematic
19
In [10], Abdel-Basst et al. discuss the importance of value propositions (VPs) in developing successful PSS that meet customer demands. Overall, the article highlights the importance of VPs in the development of successful smart PSS and provides a framework that uses the DEMATEL and CRITIC methods to evaluate VPs. To handle the complexity of data acquisition and processing in PSS, Maleki et al. propose a sensor ontology as the backbone of a PSS knowledge-based framework [11]. The ontology is used to extract and reuse knowledge from these domains during the design process. The focus of the paper is on defining embedded sensing systems tailored to industrial PSS and the use of these systems in providing customized services. For the integration and roll-out of such PSS, Ferreira et al. [12] outline the importance of reducing internal organizational barriers as well as of developing partnerships and alignment between different stakeholder requirements. A comprehensive survey of the convergence of digitalization and PSS is provided by Zheng et al. [13]. The authors selected over 100 studies to summarize the key aspects, current challenges, and future perspectives of Smart PSS. The study found that self-adaptiveness with sustainability, advanced IT infrastructure, human-centric perspectives, and circular lifecycle management are the core future perspectives to explore. In summary, conceptual frameworks of digitally enabled PSS integrate three layers: the physical layer, the digital layer, and the service layer. For the successful integration of a digitally enabled PSS with multiple actors, the alignment between the different requirements along the three layers is crucial. Available use cases further show the importance of considering the customer journey, business model innovation, and ecosystem collaboration when designing and implementing digitally enabled PSS.
2.2 Data Model Synthesis and Integration

A data model is an abstract representation of the structure of data in a particular system. It defines the types of data that exist in the domain, the relationships between these data types, and the constraints that govern these relationships. The definition and implementation of a data model proceed in the following steps [14–16]: First, the identification of the data requirements. This involves identifying and documenting the data objects, their attributes, and their relationships with other objects. Second, the creation of a conceptual model, i.e., a high-level representation of the data requirements that abstracts away implementation details. And third, the refinement of the conceptual model into a logical model. This step considers the technical details of implementation and results in the technical definition of the data objects, their attributes, and their relationships in a way that is implementable in a selected database management system (DBMS). When discussing data models in the context of DSS and digitally enabled PSS, the data layer and the information layer must be distinguished [8, 9]: The data layer refers to the raw data that is collected from various sources, often in a structured format. This layer is focused on the storage and management of data, and it is where data is transformed into a format that can be processed by computer systems. The data layer is managed by a DBMS. The information layer, on the other hand, refers to the processed data that has been transformed into meaningful and useful information. This layer is focused on delivering value to the users by providing them with insights and decision-making support.
The information layer includes functions such as data analysis, data visualization, and reporting.

2.3 Summary and Research Gap

Data-based PSS can improve the overall value of multi-actor systems by increasing system efficiency and enabling shared values for all actors. PSS design usually includes the synthesis of the value proposition, the service definition, and the implementation of the required technology. Technical design is a well-known and established process in data modeling. The challenge to be addressed is the synchronization of the two domain processes (Fig. 1). Currently lacking are systematic approaches to identify the necessary data elements and data exchange processes based on the progress of the PSS design, as well as to integrate the data model into the PSS processes. Synchronizing these processes offers great opportunities: using the information in the PSS design – for example, value propositions, service plans, and process layouts – one can systematically design custom data models. Furthermore, the validation of data models against PSS processes and their continuous integration can be realized.
Fig. 1. High-level state of the art regarding the PSS and data model design domains, each with established design procedures of its own. What is required are systematic procedures to synchronize the two domains (red)
3 Developing the Conceptual Approach

This work aims to close the gap between PSS design procedures and data model synthesis approaches for data-driven PSS by providing a conceptual approach to interlink these two domains. The approach shall satisfy the following requirements:

Primary data source prioritization: Within a system, data can be duplicated, copied, and modified, and all these steps are prone to errors. Hence, the required data shall be
acquired at the origin of the data. The approach must thus allow identifying the primary sources for the data elements needed within the PSS process model [17].

One-time-right and valued efforts: It is assumed that each data transfer between actors takes effort to collect the required data elements, transmit the data package, validate the received data, and process it at its destination. For the sake of efficiency, the approach must allow the identification of the minimal set of interactions between the actors. Data acquisition is connected to effort and thus is not free. The efforts of the different actors must be reflected in the PSS business model. The approach must thus allow identifying the efforts (data sources) and benefits (data targets) within the planned process [18].

Validated process model and traceability: To validate a concept for a data-driven PSS process, the required data must be (i) available and (ii) at the right place at the right time. The approach must further allow full traceability between use cases and the data required, and vice versa. This is especially important when an existing PSS or concept is modified. These requirements originate from requirements engineering, cf. [19].

Figure 2 shows the generalized structure of the proposed approach, consisting of four steps. First, the relevant use cases for the PSS are defined. These use cases imply the information required to fulfill the value propositions of the PSS. Based on this first requirement, the target data is identified in the second step. These are the data elements required to provide the needed information using distinct algorithms. With the target data known, the third step aims for the data sources: Where can the required data elements be acquired while satisfying the requirements above? Last, the transfer into the process model is performed. The core question at this point is the technical structure of the data packages and the procedure of the data transfer.
The result of the approach is a data model description for a PSS process, including the validation of the data model against the PSS process and business model, and vice versa. Steps two and three of the approach are the core elements since they link the target and source data. This linking is required to fulfill the requirements above; hence, it is called the source-target-link-matrix (STLM).
Fig. 2. Generic process for the data-driven multi-actor PSS data model design based on the source-target-link matrix. The four steps – use-case synthesis, target data identification, data source identification, and process model integration – are connected by the source-target-link-matrix and guided by four questions. Q1: What information is required for the use cases? Q2: What data is required to provide the information? Q3: What data transfers are required? Q4: How to implement the data transfer?
To distinguish between single data points, messages with multiple data points as well as the formal specifications of the data transfers, this work uses the terminology and relations shown in Fig. 3:
• data element: A single data point coming from a single source, e.g., a price for an item, a measured weight with measurement uncertainty, a text-based specification, and others.
• data package: A data package consolidates multiple data elements from a single source, which are to be transmitted within the same message, e.g., the price of an item together with its measured weight and description.
• data package specification: A formal and technical specification for each data package, e.g., the price in cents as an integer, the weight and measurement uncertainty as a float tuple, and the description as a long text (string).
• data model: The sum of all data package specifications as well as their technical representation (database structure).
Fig. 3. Terminology used throughout this work.
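The terminology above can be expressed as simple data structures. The following is an illustrative Python sketch, not part of the original work; class and field names are hypothetical:

```python
# Hypothetical sketch of the terminology in Fig. 3 as data structures:
# a data element is a single sourced value, and a data package bundles
# elements coming from one and the same source.
from dataclasses import dataclass, field

@dataclass
class DataElement:
    name: str        # e.g. "price"
    value: object    # e.g. 1299 (price in cents, as an integer)
    source: str      # the single originating actor/system

@dataclass
class DataPackage:
    source: str
    elements: list = field(default_factory=list)

    def add(self, element: DataElement):
        # invariant from the definition: one source per package
        assert element.source == self.source, "one source per package"
        self.elements.append(element)
```

The invariant checked in `add` mirrors the definition that a data package consolidates elements from a single source only.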
The aim of the first step – the use-case synthesis – is the characterization of the information required to enable the business-related value proposition. This common process is divided into the steps of value proposition definition, customer journey, use-case synthesis, and finally the identification of the required information for each use case. The task of the STLM is to (i) identify the required data elements for each use case, (ii) link the data elements to the data sources, and (iii) identify the required data transfers. To systematically derive these intermediate results, we suggest the matrix layout shown in Fig. 4. The right-hand section contains the k use cases – i.e., the targets – as columns. They are further grouped by actor (A). The next step consists of breaking down the information required for the use cases into the data elements and algorithms needed to provide this information. This step results in the column-wise arrangement of the n data elements and algorithms. Part ➀ of the STLM – the target matrix – thus fulfills task (i) from above. The next step is to identify the correct data sources for these data elements. This linking between data elements and sources is realized in the second part ➁ of the STLM – the source matrix. Here, the columns consist of the m data sources sorted by actor, while the rows are again the n data elements. To quantify the STLM, the following procedure is used: Firstly, the use cases, sorted by stakeholder, are inscribed in the column headings of the target matrix. For each use case, the required information is decomposed into the required data elements and the algorithm required for the data processing. The set of distinct data elements is then inserted into the target matrix and the links are drawn. Secondly, the potential data sources are identified and inserted into the source-matrix heading, sorted by stakeholder. Now the data elements
Fig. 4. Outline of the source-target-link-matrix. Links in the STLM between use cases (targets), data elements, data acquisition (DQ) and processing (DP) algorithms, and data sources are indicated by filled circles (●). Within the data element-data source mapping, alternative data sources are indicated with circles (◯).
are linked to the sources. Whether a source is labeled as primary or secondary is decided based on the data history: the primary data source is the origin of the data, while secondary data sources are copies of it. This step completes the quantification of the STLM. Hence, the vertical axis in the target matrix represents the information layer, while the horizontal axis represents the data layer. Using the STLM, the minimum set of data packages is identified (see the red dashed line in Fig. 4). This determines the minimal number of data transfers between actors needed to realize the PSS. The aim of the process model integration is (i) to formally define the data transfers and (ii) to enable their technical implementation. The formal definition answers the question ‘Which data packages are transferred at which point between the actors?’.
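The two-part linking – use cases to data elements, data elements to sources – can be illustrated with a minimal sketch. The dictionaries below stand in for the target and source matrices; all use-case, element, and source names are hypothetical examples, and grouping elements by their primary source corresponds to identifying the minimal set of data packages:

```python
# Minimal sketch (hypothetical data) of the source-target-link-matrix:
# use cases (targets) link to the data elements they need, and each
# data element links to its primary source. Grouping the required
# elements by source yields the minimal set of data packages.
from collections import defaultdict

# target matrix: use case -> required data elements
targets = {
    "track_sales_performance": {"price", "quantity_sold"},
    "eco_impact_assessment": {"quantity_sold", "material_composition"},
}
# source matrix: data element -> primary source (actor, system)
sources = {
    "price": ("broker", "order_db"),
    "quantity_sold": ("broker", "order_db"),
    "material_composition": ("seller", "lab_report"),
}

def minimal_data_packages(targets, sources):
    """Group every required data element by its primary source."""
    required = set().union(*targets.values())
    packages = defaultdict(set)
    for element in required:
        packages[sources[element]].add(element)
    return dict(packages)
```

In this toy instance, two data packages suffice: one from the broker's order database and one from the seller's lab report.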
The technical implementation includes the logical data model design, as well as the initialization of the physical data model design and the database implementation. For the formal definition of the data transfers, a business modeling approach as shown in Fig. 5 is used. For each data transfer, the corresponding data package is defined. During this process, the STLM is used to identify the data elements to be transferred together, as well as to define the data processing steps following the receipt of a data package. With the required data packages defined, the entity-relationship model (ERM) is derived according to [14].
Fig. 5. Example of a BPM including the data packages (red), as well as the data transfers (dashed lines) between the actors (A).
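As a sketch of the final refinement step (Sect. 2.2), the data package specification example from Fig. 3 can be translated into a logical model. SQLite is assumed here purely for illustration; table and column names are hypothetical:

```python
# Illustrative translation of a data package specification into a
# logical model: price as integer cents, weight and its measurement
# uncertainty as floats, description as text. SQLite serves as the
# example DBMS; the schema is hypothetical, not from the paper.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE item_package (
        item_id      INTEGER PRIMARY KEY,
        price_cents  INTEGER NOT NULL,
        weight_kg    REAL NOT NULL,
        weight_unc   REAL NOT NULL,
        description  TEXT
    )
""")
conn.execute(
    "INSERT INTO item_package VALUES (1, 1299, 12.5, 0.1, 'preprocessed material')"
)
row = conn.execute("SELECT price_cents, weight_kg FROM item_package").fetchone()
```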
4 Use Case: Multi-actor Material Sourcing

For the application of the presented methodology, a practical use case within an ongoing research project is considered. In this ecosystem, materials are available at different times, in different quantities and qualities, and at various locations. The challenge for material procurement is to identify the correct sources for the desired material qualities while running a cost-efficient procurement process. The new business model aims to establish a novel PSS approach, enabling digitalized and improved services around the procurement of raw materials for production. The PSS is located between material sellers and material procurers and acts as an intermediate broker. The system aims to provide additional value to the procurers by enabling access to the material availability, as well as the procurement of the right material with a defined and verified quality. Hence, by the framework of Tukker [7], this is a product-oriented PSS. Within the system, three stakeholders are considered: Stakeholder 1 is a manufacturing company producing semi-finished goods. For this process, the manufacturer procures
preprocessed materials from several suppliers (stakeholder 2). The preprocessed materials are subject to a variance in quality, availability, and composition. The manufacturer aims to match the current material availability – including its composition and quality – with the current production planning to procure the right material at the right time. To do so, it uses the services of a broker (stakeholder 3). Originally, the broker only offered transportation services. The PSS newly offered by the broker is an information and ordering platform for the multi-stakeholder procurement of preprocessed materials for a manufacturer producing semi-finished goods:

1. Information: The broker provides the manufacturer with (near) real-time information about the available and procured materials.
2. Logistics: The broker organizes the direct delivery to the manufacturer’s location within a defined time slot.
3. Assessment: In the context of sustainability management, the broker provides an estimation of the material-specific ecological impact.

Firstly, the VPs for the actors – material sellers and manufacturer – are identified based on the intended business concept. The material sellers are assisted in showcasing their available material, simplifying their internal processes, and getting paid for the quality offered:

• The broker helps the material sellers to get a better price for the preprocessed materials, to be more cost-efficient, when the material properties are relevant for the price.
• The broker helps the material sellers to track their sales performance, to improve quality-relevant processes.
• The broker helps the material sellers to simplify/automate the sales (management) process, to minimize time and transaction costs, when the material is ready for sale.
• The broker helps the material sellers to make a positive contribution to sustainability, to increase environmental friendliness, as regulations become stricter and more demanding.
On the other hand, the manufacturer can optimize its procurement process using the newly available information within a DSS:

• The broker helps the manufacturer to optimize sales activities, to balance the warehouse, when purchasing preprocessed materials (market push).
• The broker helps the manufacturer to manage market demand, to control preprocessed material purchases, when planning production (market pull).
• The broker helps the manufacturer to manage the use of resources, to optimize production, when fulfilling a customer order.
• The broker helps the manufacturer to procure preprocessed materials, to make the procurement process more efficient, when special material compositions are needed.
• The broker helps the manufacturer to monitor environmental impacts in material procurement, when regulations become relevant for audits (customer or government).

Using a journey mapping approach, these VPs can be translated into the eight use cases shown in Fig. 6. These use cases can then be decomposed into eight information packages
needed for the implementation, using a set of interviews with different experts (a total of 15 interviews with specialists: 3 at the manufacturer, 3 at the logistics partner, and 9 at the material sellers).
M. Sato et al.

$(s^{C,M}_{i,L_C}, s^{C,M}_{i,L_C-1}, \ldots, s^{C,M}_{i,L_C-L_M}) \leftarrow (0, s^{C,M}_{i,L_C}, \ldots, s^{C,M}_{i,L_C-L_M+1})$   (24)

$(s^{C,SW}_{j,L_C-2}, s^{C,SW}_{j,L_C-3}, \ldots, s^{C,SW}_{j,L_C-L_S}) \leftarrow (0, s^{C,SW}_{j,L_C-2}, \ldots, s^{C,SW}_{j,L_C-L_S+1})$   (25)

$(s^{C,SS}_{j,L_C-2}, s^{C,SS}_{j,L_C-3}, \ldots, s^{C,SS}_{j,L_C-L_S}) \leftarrow (0, s^{C,SS}_{j,L_C-2}, \ldots, s^{C,SS}_{j,L_C-L_S+1})$   (26)

Score Adjustment at Game Closure. When closing the game, to offset the effect of the difference in final inventory levels among players on the game scores, the raw milk and milk cartons remaining in inventory are valued at their standard prices and added to the money stock of each player as follows:

$m^M_i \leftarrow m^M_i + 80 \cdot \sum_{l} s^{R,M}_{i,l} + 100 \cdot \sum_{\delta} s^{C,M}_{i,\delta} \quad (i \in \{1, 2, \ldots, J\})$   (27)

$m^S_j \leftarrow m^S_j + 150 \cdot \Big( \sum_{\delta} s^{C,SW}_{j,\delta} + \sum_{\delta} s^{C,SS}_{j,\delta} \Big) \quad (j \in \{1, 2, \ldots, J\})$   (28)
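The closure adjustment of Eqs. (27)–(28) reduces to a simple valuation of the remaining inventory at the standard prices (80 per unit of raw milk, 100 per manufacturer-held carton, 150 per supermarket-held carton). A minimal illustrative sketch, with function names of our own choosing:

```python
# Sketch of the game-closure score adjustment (Eqs. 27-28): remaining
# inventory, given as quantities per remaining-life bucket, is valued
# at standard prices and added to the player's money stock.
def adjust_manufacturer_score(money, raw_milk_by_life, cartons_by_life):
    # Eq. (27): 80 per unit of raw milk, 100 per finished carton
    return money + 80 * sum(raw_milk_by_life) + 100 * sum(cartons_by_life)

def adjust_supermarket_score(money, warehouse_by_life, shelf_by_life):
    # Eq. (28): 150 per carton, whether in the warehouse or on the shelf
    return money + 150 * (sum(warehouse_by_life) + sum(shelf_by_life))
```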
3.3 Auction Algorithms

Raw Milk Auction. In this auction, only a single ask offer is created by the computer, which is a pair $(q^{R,F}, p^{R,F})$ consisting of the maximum quantity of raw milk that can be sold and its reserve price (RP, the lowest acceptable unit price). A bid offer, denoted by $(\underline{q}^{R,M}_i, \overline{q}^{R,M}_i, p^{R,M}_i)$, is collected from each manufacturer $i$, which specifies the minimum and maximum quantities demanded and the willingness to pay (WTP, the highest acceptable unit price). Let Bids be the list of bid offers arranged in decreasing order of their WTPs and $i_b$ be the manufacturer submitting the $b$-th offer in Bids. The raw milk auction is then conducted as follows:

Step 0: Set Supply $\leftarrow q^{R,F}$, $p^{R,F}_{t^*} \leftarrow p^{R,F}$, $f^{R,F}_{i,t^*} \leftarrow 0$ $(\forall i)$, $b \leftarrow 1$.
Step 1: If $b > |\mathrm{Bids}| \lor \mathrm{Supply} = 0 \lor p^{R,M}_{i_b} < p^{R,F}$ holds, terminate the procedure and output $p^{R,F}_{t^*}$ and $f^{R,F}_{i,t^*}$ $(\forall i)$.
Step 2: If $\overline{q}^{R,M}_{i_b} \le \mathrm{Supply}$ holds, set $f^{R,F}_{i_b,t^*} \leftarrow \min(\mathrm{Supply}, \overline{q}^{R,M}_{i_b})$ and Supply $\leftarrow$ Supply $- f^{R,F}_{i_b,t^*}$. Then, if Supply $= 0$ holds, set $p^{R,F}_{t^*} \leftarrow p^{R,M}_{i_b}$, otherwise set $p^{R,F}_{t^*} \leftarrow \min(p^{R,F}_{t^*}, p^{R,M}_{i_b})$. If $\overline{q}^{R,M}_{i_b} > \mathrm{Supply}$ holds, set $p^{R,F}_{t^*} \leftarrow \max(p^{R,F}_{t^*}, p^{R,M}_{i_b})$.
Step 3: Set $b \leftarrow b + 1$ and go back to Step 1.

Milk Carton Auction. In this auction, ask offers denoted by $(q^{C,M}_{i,\delta}, p^{C,M}_{i,\delta})$ $(\delta \in \{L_C, \ldots, L_C - L_M\})$ are collected from each manufacturer $i$, each of which specifies the maximum number of milk cartons of remaining life $\delta$ that can be sold and their unit RP. The bid offer submitted by each supermarket $j$ is expressed by $q^{C,S}_j$ and $(\ell^{C,S}_{j,h}, p^{C,S}_{j,h})$ $(h \in \{1, 2, \ldots, H^{C,S}_j\})$, which specify the maximum number of milk cartons demanded and several pairs of the minimum acceptable remaining life and the WTP for such milk cartons. Let Asks be the list of ask offers of $\delta = \ell$ arranged in increasing order of their reserve prices, and $i_a$ be the manufacturer submitting the $a$-th offer in Asks. Similarly, let Bids be the list of $(\ell^{C,S}_{j,h}, p^{C,S}_{j,h})$ that satisfy $\ell^{C,S}_{j,h} \le \ell \land p^{C,S}_{j,h} \ge p_0$, arranged in decreasing order of their WTPs, $j_b$ the supermarket submitting the $b$-th offer in Bids, and $h_b$ the id $h$ of the $b$-th offer. Note that Asks, Bids, $i_a$, and $j_b$ vary depending on $\ell$, but this dependence is not made explicit in the notation; they are updated according to $\ell$ in Step 2. Following this preparation, the milk carton auction is conducted as:

Step 0: Set $\ell \leftarrow L_C - L_M$, $\mathrm{Demand}_j \leftarrow q^{C,S}_j$ $(\forall j)$, $p_0 \leftarrow 0$.
Step 1: If $\ell = L_C + 1$ holds, terminate the procedure and output $p^{C,M}_{\delta,t^*}$ $(\forall \delta)$ and $f^{C,M}_{i,j,\delta,t^*}$ $(\forall i, j, \delta)$.
Step 2: Update Asks and Bids according to $\ell$, and set $p^{C,M}_{\ell,t^*} \leftarrow p_0$ and $f^{C,M}_{i,j,\ell,t^*} \leftarrow 0$ $(\forall i, j)$.
Step 3: If Asks $= \emptyset \lor$ Bids $= \emptyset$ holds, set $\ell \leftarrow \ell + 1$ and go back to Step 1.
Step 4: Set $a \leftarrow 1$, $b \leftarrow 1$, and Supply $\leftarrow q^{C,M}_{i_1,\ell}$.
Step 5: If $a > |\mathrm{Asks}| \lor b > |\mathrm{Bids}| \lor p^{C,S}_{j_b,h_b} < p^{C,M}_{i_a,\ell}$ holds, set $p_0 \leftarrow p^{C,M}_{\ell,t^*}$ and $\ell \leftarrow \ell + 1$ and go back to Step 1.
Step 6: If $\mathrm{Demand}_{j_b} = 0$ holds, set $b \leftarrow b + 1$ and return to Step 5.
Step 7: Allocate milk cartons of remaining life $\ell$ offered by manufacturer $i_a$ to supermarket $j_b$ by setting $f^{C,M}_{i_a,j_b,\ell,t^*} \leftarrow \min(\mathrm{Demand}_{j_b}, \mathrm{Supply})$, $\mathrm{Demand}_{j_b} \leftarrow \mathrm{Demand}_{j_b} - f^{C,M}_{i_a,j_b,\ell,t^*}$, and Supply $\leftarrow$ Supply $- f^{C,M}_{i_a,j_b,\ell,t^*}$.
Step 8: If Supply $= 0$ holds, set $p^{C,M}_{\ell,t^*} \leftarrow p^{C,S}_{j_b,h_b}$, $a \leftarrow a + 1$, and Supply $\leftarrow q^{C,M}_{i_a,\ell}$. Otherwise, that is, if Supply $> 0$ holds, set $p^{C,M}_{\ell,t^*} \leftarrow \max(p^{C,M}_{i_a,\ell}, p_0)$, and $b \leftarrow b + 1$. Go back to Step 5.
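The core allocation logic of the raw milk auction can be condensed into a short sketch. This is a simplified reading of Steps 0–3 (bids served in decreasing WTP order while supply lasts and the WTP is at least the reserve price, with the clearing price set by the last served bid); minimum bid quantities and some price-update details are omitted, and all names are illustrative, not from the paper:

```python
# Simplified sketch of the raw-milk auction: a single computer ask
# (supply, reserve price) meets manufacturer bids (max quantity,
# willingness to pay). Bids are served greedily in decreasing WTP
# order while supply lasts and WTP >= reserve price.
from dataclasses import dataclass

@dataclass
class Bid:
    manufacturer: int
    quantity: int      # maximum quantity demanded
    wtp: float         # willingness to pay (highest acceptable unit price)

def run_raw_milk_auction(supply, reserve_price, bids):
    """Return (clearing_price, allocations) for the single-ask auction."""
    allocations = {b.manufacturer: 0 for b in bids}
    price = reserve_price
    for bid in sorted(bids, key=lambda b: b.wtp, reverse=True):
        if supply == 0 or bid.wtp < reserve_price:
            break
        granted = min(supply, bid.quantity)
        allocations[bid.manufacturer] = granted
        supply -= granted
        price = bid.wtp  # clearing price set by the last (lowest) served bid
    return price, allocations
```

For example, with a supply of 100, a reserve price of 75, and bids of (50 units at 90), (60 at 80), and (40 at 70), the first two bids are served (50 each) and the third falls below the reserve price.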
3.4 Consumers’ Purchasing Behavior

We define the utility function of every consumer who bought a milk carton of remaining life δ at price p as (the values of β and β^P are set based on [11]):

$u(\delta, p) = \beta \cdot \delta - \beta^P \cdot (p - 200)$   (29)

and assume the utility of a consumer who did not buy a carton is equivalent to u(0, 200). The total number of consumers visiting supermarkets daily is determined randomly according to a normal distribution with a mean of 300 · J. The proportion of consumers who visit supermarket j ∈ {1, 2, . . . , J} in period k ∈ {1, 3} is initially set to 1/(2 · J), and continuously adjusted based on the average utility of such consumers on the previous day. Consumers visiting supermarket j in period k of day t* are considered to arrive at the shelf individually, and their purchasing behavior is modeled by a nested logit model defined using Eq. (29) and the accompanying assumption stated below. In the first step,
each consumer chooses δ from the set $\{\delta \mid s^{C,SS}_{j,\delta} > 0\}$ at the time of arrival. In the second step, the consumer decides whether or not to buy the corresponding carton (if bought, it is removed from the shelf).
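The second-step purchase decision of Eq. (29) can be sketched as a simple logit choice between buying and the no-purchase option u(0, 200). The coefficient values below are placeholders, not the calibrated values from [11], and the nesting structure of the full model is omitted:

```python
# Illustrative sketch of the consumer purchase decision: utility
# u(delta, p) = beta*delta - beta_p*(p - 200) as in Eq. (29), with the
# no-purchase option worth u(0, 200) = 0. The buy/no-buy step is
# modeled as a binary logit choice. BETA and BETA_P are placeholders.
import math

BETA, BETA_P = 5.0, 0.1  # placeholder coefficients, not from [11]

def utility(delta, price):
    return BETA * delta - BETA_P * (price - 200)

def buy_probability(delta, price):
    u_buy, u_skip = utility(delta, price), utility(0, 200)
    return math.exp(u_buy) / (math.exp(u_buy) + math.exp(u_skip))
```

By construction, a carton with zero remaining life at the standard price of 200 is bought with probability 0.5, and fresher or cheaper cartons are bought more often.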
4 Game Experiments

4.1 Research Methodology and Experimental Settings

In this study, we take the methodology of observational research to qualitatively evaluate the potential educational effectiveness of the game and gain insights into how to fully utilize it. Specifically, we conduct small-scale laboratory experiments, observe their process and results, and draw qualitative insights from the observational data. Proper facilitation is important for educational games, but since the focus of this study is the game itself, the intervention of the facilitator was kept to a minimum. How to leverage the educational effects through helpful facilitation is left to a future study.

Two experimental conditions are defined, which we call the 1/3 and 1/2 rules. For the 1/3 rule, the deadlines are set to (L_C, L_S, L_M) = (9, 6, 3), whereas for the 1/2 rule, they are set to (L_C, L_S, L_M) = (9, 7, 5). In both conditions, the game horizon is set to T = 15, and default values are used for the other game parameters. Further, 11 university students (7 male and 4 female) are recruited as game participants and offered opportunities to preliminarily play the game under the 1/3 rule as practice before starting the experiments (8 of 10 subjects participated in the opportunities). Their majors differ, including economics, molecular microbiology, and horticultural chemistry, and none of them had taken a supply chain class or played the beer game. Five game sessions are performed in each condition from December 26, 2022 to January 10, 2023. Each game session was conducted in an online/onsite hybrid mode. A facilitator carefully proceeded with the game session while checking the status of the players without intentionally guiding their actions in a specific direction. We also conducted a post-hoc questionnaire, which asked the following questions to be answered on a 5-point Likert scale (1 = Strongly Negative, 2 = Negative, 3 = Neutral, 4 = Positive, 5 = Strongly Positive):

Q1: Were you aware of the 1/3 rule (before playing this game)?
Q2: Did you gain a better understanding of the 1/3 rule (by playing the game)?
Q3: Did you come up with ideas for reducing food loss (by playing this game)?
Q4: Do you want to play this game again?

4.2 Experimental Results

The final game scores, the total numbers of milk cartons produced, wholesaled by each manufacturer, purchased by each supermarket, retailed to consumers, and wasted are summarized in Tables 1 and 2 (M: Manufacturers, S: Supermarkets). The results of the post-hoc questionnaire are presented in Table 3. One prominent feature of the results is the consistent shortage of milk cartons supplied to the market.
With three supermarkets (J = 3), each averaging 300 consumers per day over 15 days (T = 15), the total number of consumers visiting the supermarkets during each game session is approximately 4,500. However, the total number of
Milky Chain Game: A Pedagogical Game for Food
Table 1. Experimental results under 1/3 rule (the bottom row shows the average)

M’s score  S’s score  Production  Wholesale  Purchase  Retail  M’s waste  S’ waste
303795     434236     4400        3800       4373      3946    0          0
305182     346240     5059        4703       4740      4067    85         0
350956     413372     5245        4668       4058      3483    0          0
345469     447380     5100        4654       3950      3550    146        0
269916     289204     5003        3991       5000      3363    290        1032
344625     473200     6400        5305       5000      4386    395        3
388368     425768     5441        5441       5268      4668    0          0
381228     386430     5098        5098       3521      3264    0          0
39844      403061     1900        1900       3650      3047    0          0
324586     455603     3900        3900       5459      4609    0          0
401826     407318     5000        5000       3650      3540    0          0
322335     440914     4200        4200       3991      3741    0          0
360546     421272     5100        5100       5077      3952    0          0
316586     454078     4550        4451       4341      3841    99         0
311024     454803     4989        4433       4566      4374    400        0
317752     416859     4759        4443       4443      3855    94         69
milk cartons sold to them is only approximately 3,865, resulting in 14% of consumers being unable to purchase milk. This phenomenon could be attributed to the fact that all participants were Agricultural Economics majors who are highly aware of the issue of food loss. However, it is paradoxical to address food loss reduction without ensuring an adequate supply. This serves as a valuable lesson for students to learn about the difficulty of balancing food loss reduction and consumer satisfaction. Another noteworthy observation is that the resulting figures obtained from both conditions exhibit striking similarities, indicating players’ tendency to exhibit similar behavior in both conditions. Only slight differences, such as the amount of waste being lower under the 1/2 rule condition, can be identified. This may be attributed to the players’ focus on reducing food loss, which led them to restrict the flow of goods regardless of the condition applied. They did not seem to have adjusted their strategies based on the conditions. Consequently, their response to the post-hoc question Q2 falls around neutral, indicating that gameplay was not immediately effective in deepening their understanding of the effect of delivery and sell-by deadlines. However, upon reviewing the results as described, an obvious suggestion for the next step emerges. Prior to commencing the next round of game sessions, the students could be instructed to correct their one-sided focus on food loss reduction and strive to balance it with consumer satisfaction. By repeating similar game experiments with this instruction, it is expected that food loss would increase, prompting a deeper consideration of how to refine their strategy. Through this process, they are likely to recognize the
Table 2. Experimental results under 1/2 rule (the bottom row shows the average)

M’s score  S’s score  Production  Wholesale  Purchase  Retail  M’s waste  S’ waste
294044     438653     3700        3700       5250      4300    0          0
197621     433612     3550        3550       4350      4150    0          0
470991     426343     6103        6103       3753      3175    0          0
431034     452706     5979        5635       5377      4480    0          0
392361     467317     4913        4913       4334      4100    0          0
285161     383363     3900        3900       4737      3656    0          52
256509     424123     4500        4067       2919      2919    0          0
365493     496302     5318        4348       4900      4205    0          0
403050     482154     6029        5291       5887      5139    0          0
154224     390050     3700        3569       3247      2850    131        0
286051     475071     5092        4848       4300      3980    244        0
261665     439453     5200        3861       4731      4031    336        296
399724     389422     5100        5100       4096      3984    0          0
326095     435145     4400        4007       3950      3385    0          0
249116     425574     3547        3547       4608      3765    0          0
318209     437286     4735        4429       4429      3875    47         23
Table 3. Results of post-hoc questionnaire

      1   2   3   4   5
Q1    4   3   0   1   2
Q2    1   2   4   2   1
Q3    0   4   5   1   0
Q4    0   0   4   5   1
relationship between the effectiveness of their strategy and the delivery and sell-by deadlines, which would improve their responses to questions Q2 and Q3. Furthermore, reviewing the results of the second round would provide valuable insights for shaping the direction of the third round. This highlights the importance and potential effectiveness of applying this game in a sequential manner to fully leverage its educational benefits. While the immediate educational effect may appear limited, the potential for growth through repeated rounds is evident. The positive response to question Q4 is also encouraging for this approach. To facilitate this, the review process between game rounds becomes crucial and can be conducted under the supervision or facilitation of an instructor or through self-organized debriefing sessions. This iterative approach allows for a deeper understanding of the complex dynamics of food supply chains, for example, those studied in [15].
5 Conclusions

This paper identified several food product-specific issues in supply chain management that current supply chain serious games do not fully address. The Milky Chain Game was thus proposed as an educational food supply chain serious game incorporating these identified issues. The game’s educational effectiveness was tested with university students as participants. The results revealed that participants who were conscious of reducing food loss experienced a persistent undersupply of milk cartons in the game experiments, indicating the challenge of balancing food loss reduction and consumer satisfaction in the supply chain. Further, the results also suggested the significance and potential effectiveness of using the Milky Chain Game in a sequential manner to fully leverage its educational benefits. Adopting an iterative approach allows for a deeper understanding of the complexities and challenges inherent in food supply chain management.

Acknowledgement. The authors are grateful to the participants of the game experiment. This work was supported by JSPS KAKENHI Grant Number JP20K15611.
References

1. Gumus, M., Love, C.E.: Supply chain sourcing game: a negotiation exercise. Decis. Sci. J. Innov. Educ. 11(1), 3–12 (2013)
2. Sterman, J.D.: Modeling managerial behavior: misperceptions of feedback in a dynamic decision making experiment. Manag. Sci. 35(3), 321–339 (1989)
3. Pullman, M., Wu, Z.: Food Supply Chain Management: Economic, Social and Environmental Perspectives. Routledge, New York (2011)
4. Consumer Affairs Agency: To solve food loss and waste issue. https://www.caa.go.jp/en/publication/annual_report/2020/white_paper_summary_07.html. Accessed 01 June 2023
5. Food and Agriculture Organization: The state of food and agriculture 2019: Moving forward on food loss and waste reduction. FAO (2019)
6. United Nations: The 17 goals. https://sdgs.un.org/goals. Accessed 15 Apr 2023
7. Ministry of the Environment, Government of Japan: Food waste and recycling in fiscal year 2018. https://www.env.go.jp/content/900517448.pdf. Accessed 15 Apr 2023
8. Shovityakool, P., Jittam, P., Sriwattanarothai, N., Laosinchai, P.: A flexible supply chain management game. Simul. Gaming 50(4), 461–482 (2019)
9. Global Supply Chain Games. https://www.gscg.org/index.html. Accessed 15 Apr 2023
10. Sato, M., Nakano, M., Mizuyama, H., Roser, C.: Proposal of a beer distribution game considering waste management and the bullwhip effect. In: Ma, M., Fletcher, B., Göbel, S., Baalsrud Hauge, J., Marsh, T. (eds.) JCSG 2020. LNCS, vol. 12434, pp. 78–84. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61814-8_6
11. Sato, M., Mizuyama, H., Nakano, M.: Milk supply chain management game for waste reduction. In: Naweed, A., Wardaszko, M., Leigh, E., Meijer, S. (eds.) ISAGA/SimTecT 2016. LNCS, vol. 10711, pp. 302–314. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-78795-4_21
12. Sato, M., Tsunoda, M., Imamura, H., Mizuyama, H., Nakano, M.: The design and evaluation of a multi-player milk supply chain management game. In: Lukosch, H.K., Bekebrede, G., Kortmann, R. (eds.) ISAGA 2017. LNCS, vol. 10825, pp. 110–118. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91902-7_11
13. Sato, M., Mizuyama, H., Nakano, M.: How different commercial rules affect actors and waste in milk supply chain. In: Proceedings of the 49th International Simulation and Gaming Association Conference: ISAGA 2018, pp. 528–535 (2018)
14. Iwamoto, H.: Consumers’ willingness-to-pay for HACCP and eco labeled milk. HUSCAP 11(2), 48–60 (2004). (in Japanese)
15. Mizuyama, H., Yamaguchi, S., Suginouchi, S., Sato, M.: Simulation-based game theoretical analysis of Japanese milk supply chain for food waste reduction. In: Kim, D.Y., von Cieminski, G., Romero, D. (eds.) APMS 2022. IFIP AICT, vol. 664, pp. 107–115. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16411-8_14
Introducing Active Learning and Serious Game in Engineering Education: “Experience from Lean Manufacturing Course”

Mattei Gianpiero¹, Paolo Pedrazzoli¹, Giuseppe Landolfi¹, Fabio Daniele¹, and Elias Montini¹,²(B)

¹ DTI, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
[email protected]
² Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy

Abstract. Traditional pedagogical methods in the engineering programmes of most universities are not always effective. Various studies underline that active learning is generally more effective and can facilitate the understanding of concepts and the assimilation of content. In this context, “serious games” are widely used in various fields and have proved effective. This article addresses the structure and provides some evidence of the effectiveness of a serious game developed in a Lean Manufacturing course held at the University of Applied Sciences and Arts of Southern Switzerland: the Lean LEGO™ Game. After a concise literature review of existing serious games covering industrial engineering, operations and lean management, this paper reports on the Lean LEGO™ Game objectives, mechanics and measured results.

Keywords: Serious games · Education · Engineering · Manufacturing · Lean Manufacturing · Game-based learning
1 Introduction
The continuous advancement of technologies requires industrial engineers with high technical competencies, capable of collaborating with the many professional personalities of organisations and managing various activities, projects and problems. Engineers must be prepared for these challenges, and universities have the difficult task of preparing them. From the outset, engineering education has relied mainly on a “teaching by telling” pedagogy, with large classes and individual disciplines in which professors use auditory, abstract, deductive, passive and sequential teaching styles. While traditional pedagogical methods are still present in the engineering programmes of most universities, they are not always effective [9]. Various studies underline that active learning is generally more effective than passive learning [11,20]. Engaging students with active teaching methods facilitates the understanding of concepts and the assimilation of content. In this context, Serious Games (SGs) are widely used in various fields and have proved effective in fostering student participation. “A game is a structured or semi-structured context in which learners have goals that they seek to achieve by overcoming challenges constrained by a set of rules related to their limited context” [22]. Although various studies have shown that SGs engage and stimulate students, their effectiveness as a learning tool has yet to be proven [5,26]. This article provides some evidence of SGs’ learning effectiveness through the experiences gained in the Lean Manufacturing course held at the University of Applied Sciences and Arts of Southern Switzerland, where the Lean LEGO™ Game was developed. The article first provides a brief literature review identifying the most relevant SGs in the domains of industrial engineering, lean and operations management, to frame the game in the literature. Then, the Lean LEGO™ Game is introduced, explaining its mechanics and effects on learning lean concepts and methods. Finally, the research provides quantitative results demonstrating the game’s effectiveness.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 363–377, 2023. https://doi.org/10.1007/978-3-031-43666-6_25
2 Serious Games in Industrial Engineering and Manufacturing Education
Organisational changes and improvements are based on human decisions [24], which involve technical but also cross-domain and soft skills [4]. Especially in the information age, engineers need critical thinking, problem-solving, teamwork, creativity and the ability to manage complexity. Industrial and manufacturing education needs to focus on developing these skills as required by new generations of engineers, applying innovative educational approaches and methodologies to enhance both knowledge and skills development. Many researchers identify SGs as a great opportunity for these purposes [1,21]. SGs have been defined as entertaining games with non-entertainment goals [23]. A SG aims to create an immersive environment that engages students completely in the learning activity [7]. If well designed, this kind of game can provide a meaningful personal experience which, through post-game reflection, can also be translated into learning outcomes. Games can involve one player acting alone, several players acting cooperatively, and, more frequently, players or teams competing among themselves, allowing learners to explore, experiment, cooperate and compete [24]. SGs are thus an effective tool for enhancing knowledge and skill development among learners, particularly in engineering and manufacturing. The use of SGs for training and education is a growing trend, as demonstrated by several studies reporting higher engagement levels [19] and improvements in communication, risk awareness, risk management, and planning skills [13]. Moreover, they can simplify the teaching of complex topics, creating awareness and interest in learners [15]. The drawbacks, however, include high development costs and the need for expert facilitators.

Various SGs are utilized within the domains of manufacturing and industrial engineering, encompassing a range of topics. Among these, supply chain management stands out as a commonly explored subject. Some SGs incorporate cooperative and competitive elements into their supply chain management simulations, enabling players to engage in strategic decision-making while simulating competitive markets [8,25]. Additionally, other SGs focus on operations management, addressing areas like maintenance management and OEE analysis [2], logistics [18], as well as the intricacies of both manual and automatic processes [17]. Lastly, there are lean manufacturing games that provide environments for learning the 5S methodology [12] and even offer realistic settings incorporating materials, assembly areas, and stochastic breakdowns [6]. In conclusion, SGs can provide an effective and engaging approach to training and education across diverse fields, including manufacturing, risk management, and lean principles, creating a fun and competitive learning environment that promotes greater motivation and involvement among participants.
3 The Lean LEGO™ Game
The Lean LEGO™ Game is a SG introduced in the Lean Manufacturing course at SUPSI, meant to enhance students’ learning experience. The game was introduced to explain different lean concepts and methods (e.g. pull production, 5S, 7 MUDA, Kanban) so that students could familiarise themselves with them and see their application in practice. In addition, the game allows students to get hands-on experience with some of the critical aspects of a lean project, such as interacting with operators, collecting data, and using and analysing information appropriately, while strengthening their soft skills. In particular, the game was designed to achieve goals that are hardly reached with traditional lessons:

– Understand lean concepts and methods: lean concepts and methods are not always intuitive, and explaining them with slides alone cannot be comprehensive enough. Applying lean concepts and logic in practice can facilitate and reinforce learning, understanding and future application.
– Experience hands-on problems as close to reality as possible: confronting theoretical problems is easier than confronting real ones. Taking the optimisation of a production flow as an example, traditional university courses offer exercises in which students can find all the information about the system. Depending on the purpose and complexity of the exercise, it is possible to determine the number of stations, product types, cycle times, problems, etc. Any industrial engineer with a basic knowledge of production systems should be able to solve it easily, modelling the system and identifying possible improvements. However, theoretical exercises differ from real cases. When a real production system optimisation has to be carried out in practice, there are many hidden issues that many “beginners” underestimate. First, gathering data and information is not as easy as it appears. Collecting cycle times, set-up times, and so on is not complex if done personally.
However, this requires a lot of time, and in many cases workers collect the cycle times and other data and provide information about the main issues and problems. Operators have detailed knowledge of their specific tasks but often a partial and sometimes biased vision. The data collection phase, preliminary to the analysis, is sometimes more complex than the analysis itself. Gathering information incorrectly and/or misinterpreting it can result in the failure of the improvement process even before it starts.
– Stimulate the development of soft skills: traditional engineering education is focused on hard-skills training. Information-age engineers require soft skills which cannot be developed only in the work field. Education has to play an essential role in soft skills development, from problem-solving and creativity to communication and teamwork.

3.1 Game Mechanics
The Lean LEGO™ Game simulates the assembly line of LEGO™ forklifts (Fig. 1), a complex product available in more than 500 variants (4 light choices, 3 tyre types or tracks, 5 propulsion and exhaust options, 2 lifting systems, 2 fork lengths, 2 axle lengths), built from 71 to 149 different basic pieces. The production line is instantiated across 4 workshops (each lasting 4 h) articulated along 4 scenarios: 1) traditional company with a push production system in place; 2) introduction of 5S, warehouse standardisation and a pull production system; 3) introduction of lean, takt time, line balancing and layout improvement; 4) introduction of digitalisation. Students have an active role, playing the different players that animate the factory. These roles are:

Assemblers: in charge of assembling all the LEGO™ components to obtain the different sub-assemblies and the final forklift.
Quality Managers: in charge of evaluating the quality of the forklifts produced by the Assemblers.
Logistic Employees: in charge of delivering the components to the Assemblers.
Fig. 1. One of the 500 variants of forklifts.
Production Managers: in charge of creating the Production Orders and Withdrawal Orders during Workshop #1 (push production system).
Consultants: in charge of observing the production system and interviewing the other players to support the analysis and evolution of the production system.
Timekeepers: in charge of collecting events’ times (e.g., production order delivered, sub-assembly completed) during the game.
The Lean LEGO™ Game is based on four different workshops. The first, the second and the fourth are driven by the Professor, who defines the “factory” setup, tools and production logic. The third workshop is managed by the students, who can independently define all the “factory’s” elements.

3.2 Workshop #1: Push Production System
In this session, the Professor provides the factory’s set-up (4 production stations, 1 quality check station, a centralised warehouse) and sets the logic and rules. The production process is based on a push logic. Orders are placed by the customer (the Professor), who can select among the 500 variants of forklifts. For every received customer order, Production Managers create, through an MRP, the production orders, containing the operations each station has to perform, and the withdrawal orders, containing the components each station requires from the warehouse (Fig. 2). Production orders are delivered to the Assemblers’ and Quality Managers’ stations, withdrawal orders to the Logistic Employees.
Fig. 2. Workshop #1: information flow (dashed-blue, dashed-orange), components flow (orange), semi-finished products flow (blue). (Color figure online)
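The MRP step described above can be illustrated with a toy bill-of-materials explosion. The sketch below is only a simplified analogue: the item names and quantities are invented for illustration and do not reflect the game’s actual forklift data.

```python
# Toy MRP: explode one customer order into the component quantities
# that withdrawal orders must pull from the warehouse. The BOM is
# illustrative, not the game's real forklift structure.
BOM = {
    "forklift": {"frame": 1, "fork": 1, "cabin": 1},
    "frame": {"long_beam": 4, "axle": 2},
    "fork": {"short_beam": 2, "lift_arm": 2},
    "cabin": {"panel": 3, "seat": 1},
}

def explode(item, qty=1, needs=None):
    """Recursively accumulate the gross requirements for one order."""
    if needs is None:
        needs = {}
    for comp, n in BOM.get(item, {}).items():
        needs[comp] = needs.get(comp, 0) + n * qty
        explode(comp, n * qty, needs)
    return needs

withdrawal = explode("forklift")  # e.g. withdrawal["long_beam"] == 4
```

In the game, the output of this step corresponds to the production orders per station and the withdrawal orders handed to the Logistic Employees.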
The session runs for about 2 h. Production Managers create and release orders, Assemblers produce the sub-assemblies and the final forklifts, and Logistic Employees deliver components to the different stations. Quality Managers verify the final product quality, and Timekeepers collect the process and waiting times at each station and the component delivery times of logistics. Finally, Consultants interview each of the factory’s players to analyse the whole system. At the end of the session, all the times collected by the Timekeepers are reported in a spreadsheet to calculate different Key Performance Indicators (KPIs), such as cycle times, waiting times and takt time, supporting the analysis of the system together with the information collected by the Consultants. In the end, students are divided into groups, with the task of creating a small presentation highlighting the main identified issues and possible improvements. Their outcomes are presented before the beginning of Workshop #2. Workshop #1 is designed to give students the chance to:

– Understand the concept of push production and how it is usually managed.
– Face the most common errors of information collection. At the beginning of the game, no tool is provided to the Timekeepers, leaving them the freedom to organise data collection. They often do not use the same procedure to collect times, or they adopt non-synchronised time sources. A few minutes after the game has started, they usually realise that something does not work.
– Apply the concept of the 7 MUDA. This is not only a method for classifying waste; it can be used for structured waste and problem identification.
– Collect information from different sources and experience how to filter and organise it and make it useful.

3.3 Workshop #2: Pull Production System
This workshop starts with the students’ feedback, where the different groups present the issues and possible improvements they have identified. Taking advantage of the outcomes and hints of the presentations, the Professor introduces the new factory setup, visual management tools, logic and rules for the second game session. The system is now paced by pull production based on Kanban, no longer relying on Production Managers, who are removed from the game. Kanban is a visual method for controlling production which enables Just in Time (JIT), limits the level of buffers and ensures the production of only what the customer is asking for and nothing more. When the customer (the Professor) takes a forklift, she/he activates the final station, which in turn consumes sub-assemblies, activating the upstream stations (Fig. 3). Each station consumes components, which have to be replaced by logistics. All this is managed through the production and withdrawal Kanban. Students experience from the beginning of this game session that the entire production system is more reactive and streamlined. However, this comes at a price: the forklift variants are reduced to 2.
Fig. 3. Workshop #2: information flow (dashed-blue, dashed-orange), components flow (orange), semi-finished products flow (blue). (Color figure online)
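The pull logic described here can be sketched in a few lines: a customer withdrawal at the final station cascades upstream, each stage replenishing its small buffer from the stage before it. The stage names and buffer sizes below are illustrative, not the game’s actual configuration.

```python
# Minimal pull-system sketch: each downstream stage holds a small buffer
# of finished work; a withdrawal empties it and triggers replenishment
# from the stage upstream, mimicking the production/withdrawal Kanban.
stages = ["final", "cabin", "frame", "warehouse"]  # downstream -> upstream
buffer = {s: 2 for s in stages}
buffer["warehouse"] = 100  # raw components

def pull(i=0):
    stage = stages[i]
    buffer[stage] -= 1            # one unit withdrawn from this stage
    if i + 1 < len(stages):
        pull(i + 1)               # withdrawal Kanban fires upstream
        buffer[stage] += 1        # production Kanban rebuilds the buffer

pull()  # the customer takes one forklift
# Every intermediate buffer is restored to its cap; only raw stock drops.
```

This is the sense in which the system produces only what the customer asks for: nothing moves until a downstream withdrawal authorises it.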
Like the first game session, this one lasts 2 h. Afterwards, students have to develop a report analysing the two game sessions. Moreover, based on the output of this analysis, they have to define the factory’s setup and logic for Workshop #3. Workshop #2 is designed to give students the chance to:

– Communicate and explain the outcomes of their analysis.
– Understand pull production and how it is managed using Kanban.
– Introduce 5S and visual management and refine the factory layout.

3.4 Workshop #3: Lean Production Line
This workshop is based on the students’ proposal: they have complete freedom over production logic, factory management and layout, given a production objective. The objective is not to be as fast as possible, but to meet the takt time requirements. Students have to use takt time analysis to determine the pace of production needed to meet customer demand. In addition, they have to determine the minimal number of operators required to run the line efficiently. To balance the line, they have to consider task allocation among operators to prevent bottlenecks and optimise workflow. Furthermore, they should redefine the line layout and determine the optimal number of stations to complete tasks efficiently. Finally, they have to improve warehouse management and the flow of materials and information through the production line to increase efficiency and reduce waste. Also in this case, the timing of the various production phases is collected to compare the performance of the new system proposed by the students. Workshop #3 is designed to give students the chance to:

– Introduce takt time and calculate the minimal number of operators.
– Gain experience in process analysis and line balancing (Fig. 4).
– Redefine the layout, the number of stations, warehouse management logic and flows.
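The two core calculations of this workshop, takt time and the minimal operator count, follow the standard formulas: takt time = available time / customer demand, and minimal operators = ⌈work content / takt time⌉. A short sketch with hypothetical numbers (the game’s real times and demand are not stated here):

```python
import math

# Takt time: the pace at which units must leave the line to meet demand.
available_time = 120 * 60    # a 2 h session, in seconds (hypothetical)
demand = 24                  # forklifts requested by the customer
takt_time = available_time / demand        # 300 s per forklift

# Minimal number of operators from the total work content per unit.
task_times = [95, 140, 120, 180, 75]       # elementary task times, s
work_content = sum(task_times)             # 610 s of work per forklift
min_operators = math.ceil(work_content / takt_time)   # ceil(610/300) = 3
```

Line balancing then distributes the elementary tasks so that no operator’s load exceeds the takt time.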
Fig. 4. Task dependency diagram and task allocation developed by the students.
3.5 Workshop #4: Digital Push and Pull Production Systems
Recent trends indicate that there will be an increasing demand for highly qualified engineers capable of sustaining and adopting the digital revolution. Therefore, universities must take the initiative to teach digital skills to bridge the knowledge gap and integrate Industry 4.0 concepts into their engineering curricula [10]. To address this need, we have developed a digitally enhanced extension of the game, which teaches concepts related to digitalisation, such as IoT and digital twins [16], cloud computing and AI [3], and blockchain [14], and how they can be used to improve performance. During this workshop, the students can use additional modules, represented in Fig. 5, playing with a modular system capable of generating different game configurations responding to different needs. The additional modules are:

– Data Collection and Transmission System (DCTS) and Technologies. The DCTS supports the acquisition, collection and transmission of data. Thanks to this system, data and information can describe the events (e.g., order receipt, production start).
– Factory Design Lab Digital Twin and Block-Chain (FDDT). This tool makes it possible to obtain a digital model of the internal logistics system, supported by a blockchain to validate data collection.
– Remote Production Management Support System (RPMSS). This system allows the production planning and management process to be carried out remotely from the production plant. The system supports sending production orders to assembly stations and picking orders to logistics.
Fig. 5. Workshop #4.1: main game steps and modules.
Workshop #4, which is structured in two different sessions (digital push production system and digital pull production system), aims to provide students with the opportunity to:

– Demonstrate the benefits of remote production management with the support of the RPMSS.
– Perceive the benefits of automatic data collection and of dashboards showing real-time KPIs.
– See a practical example of a digital Kanban system and understand the concepts of blockchain and digital twins applied to production systems.
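As an illustration of the kind of event stream an automatic collector like the DCTS produces, the sketch below logs time-stamped scan events per station. The field names and QR payloads are invented for illustration and are not the game’s actual data model.

```python
import time
from dataclasses import dataclass, field

# Sketch of an event record as an automatic data collection system
# might store it on each QR/barcode scan (illustrative schema).
@dataclass
class ProductionEvent:
    station: str
    event_type: str           # e.g. "order_received", "assembly_done"
    item_qr: str              # payload of the scanned QR/barcode
    timestamp: float = field(default_factory=time.time)

log = []

def scan(station, event_type, item_qr):
    """Append a time-stamped event, as an automatic collector would."""
    log.append(ProductionEvent(station, event_type, item_qr))

scan("station1", "order_received", "KB-0042")
scan("station1", "assembly_done", "KB-0042")
lead_time = log[-1].timestamp - log[0].timestamp  # per-order lead time
```

Such a log is what feeds the real-time dashboard and, in the game, the FDDT digital model.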
Workshop #4.1: Digital Push Production System. As in Workshop #1, the production management team receives client orders and converts them into production and picking orders using the MRP system (Fig. 5 - step 1). The orders are subsequently forwarded to the workstations and logistics operators by the Production Managers using the RPMSS (Fig. 5 - step 2). As a result, logistics physically picks up and delivers the necessary materials to the assembly stations while the Assemblers work on production (Fig. 5 - step 3). During production, the DCTS enables operators to collect data, monitor events, and update the digital model of the FDDT in real time (Fig. 5 - step 4). This enables the recording of the temporal sequence of events, while magnetic labels with QR codes and barcodes are attached to containers to identify and trace the items used during production. Simultaneously, the FDDT provides the same data to the performance dashboard, which generates numerous distinct performance analyses from various angles.

Workshop #4.2: Digital Pull Production System. As in Workshop #2, the production system management is based on Kanban. Therefore, when a customer picks up a product, the Kanban on the board are activated; when they are picked, they are detected and assigned to the operators and stations by the DCTS, which automatically updates the RPMSS. Similarly to Workshop #4.1, materials and bins are identified and traced with QR codes and barcodes, and the FDDT sends data to the performance dashboard. To maximise the effectiveness and efficiency of the learning process, it was decided to keep the physical versions of the Kanban, identified with QR codes and barcodes, and of the production and withdrawal boards, although digital versions of both could be used.
4 The Lean LEGO™ Game’s Modular Components
The game simulates a real company by combining real parts, objects, and information systems. The game’s modular design provides a flexible and customisable learning experience with varying levels of complexity. The main game components are discussed in the following.

LEGO™ Technic Components. The game combines LEGO™ Technic components to create various product variants (e.g., Fig. 6), allowing for an in-depth exploration of the interplay between product management and production processes. The forklift is composed of 4 main sub-assemblies: the frame/chassis (A), which involves 13 to 20 parts and 7 to 10 operations depending on the model selected; the fork (B), which utilises 15 to 25 parts and takes between 6 and 12 operations; the cabin (C), which requires 19 parts and 8 operations; and the final components (D), where the three sub-assemblies are combined with a variable number of components ranging from 19 to 92 through 11 to 19 operations.

Fig. 6. Two product variants: configurations and bill of material data.

Assembly Stations and Logistics Kits. Each workstation has a kit whose contents vary depending on the workshop. Workshop #1 has production instructions¹, a timetable register, a ruler, and a quality control register. Workshop #2 adds Kanban production orders, different coloured tapes for workspace standardisation and visual management, a warehouse grid and, for logistics, the Kanban-associated withdrawal orders. Workshop #3 requires students to develop all the necessary materials. Workshop #4 provides a production instructions book, a ruler for the quality control station, and a mobile app.

Warehouse and Transportation Bins. The warehouse comprises 3,920 components divided into 44 compartments identified through alphanumeric codes. The components are retrieved and transported in two different types of standard containers. A magnetic band, used in Workshop #4, is applied to the transportation bins.

Production Management Information System. This system provides tools for the production planning used in Workshops #1 and #4.1. It includes various tools developed using MS Excel: i) the product configurator collects orders from customers and transfers the information to the MRP system; ii) the MRP system manages the production and withdrawal orders; iii) the RPMSS provides support for managing production remotely.

Data Collection System. Two types of data collection are used to analyse performance through KPIs: i) manual data collection, where for each order (or Kanban) students register on the timetable the order arrival time, starting time, and finish time; after the workshop, the collected data is inserted into the performance dashboard; ii) the DCTS, a digital version of the manual system, which uses a web app interface and QR codes to collect data in real time.

Performance Dashboard. The performance dashboard supports performance analysis, including the evaluation of value-added and non-value-added activities, bottleneck identification, and the productivity of the stations and of logistics. It also enables takt time identification.

¹ Each station has mounting instructions with every elementary operation, a graphical schema and the required part list. Each book contains the list of operations, identified with a code used to match operations in production orders.
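The dashboard’s core computations, per-order waiting and cycle times plus a takt-based bottleneck check, can be sketched as follows; the timestamps, station names and takt value are hypothetical:

```python
# Per-order KPIs from the collected times (arrival, start, finish),
# in minutes, followed by a takt-based bottleneck check. All values
# are hypothetical.
orders = [(0, 2, 10), (5, 11, 20), (12, 21, 28)]
waiting = [start - arrival for arrival, start, _ in orders]   # [2, 6, 9]
cycle = [finish - start for _, start, finish in orders]       # [8, 9, 7]

takt_time = 10  # min per unit, from available time / customer demand
avg_cycle = {"frame": 7, "fork": 8, "cabin": 6, "final": 11}
utilisation = {s: t / takt_time for s, t in avg_cycle.items()}
bottleneck = max(avg_cycle, key=avg_cycle.get)   # slowest station
misses_takt = avg_cycle[bottleneck] > takt_time  # line cannot keep pace
```

The station whose average cycle time is closest to, or above, the takt time limits the throughput of the whole line.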
Table 1. The Lean LEGO™ Game’s workshops synopsis

| # | Information flow | Data collection and performance analysis | Material flow | Production planning process |
|---|---|---|---|---|
| 1 | Customers’ orders configurator on paper; production and withdrawal orders | Manual data collection; performance dashboard on MS Excel | Logistics handles components and semi-finished products | MRP generates production and withdrawal orders |
| 2 | Kanban systems manage production and withdrawal (paper-based) | Manual data collection; performance dashboard on MS Excel | Same as Workshop #1 | Kanban boards manage production and withdrawal |
| 3 | CONWIP or Kanban systems designed by students | Designed by students | Students design: i) feeding system; ii) assembly line | Combination of MRP, Kanban and/or CONWIP boards designed by students |
| 4.1 | Production and withdrawal orders delivered digitally by RPMSS | Automatic and real-time data collection; real-time performance dashboard (web app) | Same as Workshop #1; RPMSS and DCTS provide parts traceability | MRP generates production and withdrawal orders, delivered via RPMSS |
| 4.2 | Production and withdrawal Kanban traced by RPMSS and DCTS | Automatic and real-time data collection; real-time performance dashboard (web app) | Same as Workshop #2; RPMSS and DCTS provide parts traceability | Production and withdrawal Kanban, traced and assigned by RPMSS and DCTS |
Table 1 sums up the main elements of the different workshops, including their information flow, data collection and performance analysis methods, material flow, and production planning process.
5 Validation and Results
The Lean LEGO™ Game takes place as the final activity of a 60 h course, articulated over one semester, devoted to lean manufacturing themes and offered to second-year students of the Bachelor of Science in Industrial Engineering (Fig. 7). Each session takes up to 4 h (different instantiations of the game exist, from a vertical 8 h activity offered to companies to continuing education activities, but we analyse here the results of the standard version, offered to full-time students). To evaluate the game’s effectiveness as a learning tool, an anonymous questionnaire was administered, composed of closed questions (rated from 1 to 4) and open questions. Data are available from A.Y. 14/15 to A.Y. 22/23, thus covering 8 years with 136 answered questionnaires, an average of 17 questionnaires per year, within classes of 39 students on average. The results collected are reported in Table 2.
374
M. Gianpiero et al. Table 2. The Lean LEGOTM Game results
Description | μ | σ
Overall appreciation of the course | 3.8 | 0.3
Course objectives and the Lean LEGO™ Game activity are clearly stated | 3.8 | 0.2
Background knowledge is sufficient for understanding covered topics | 3.5 | 0.5
The overall organisation of the course is adequate (timing, materials provided, Moodle structure) | 3.6 | 0.3
The documentation provided is helpful | 3.8 | 0.3
The activities proposed in relation to the Lean LEGO™ Game allow better understanding and contextualisation of the course's concepts | 3.9 | 0.2
The activities proposed in the Lean LEGO™ Game allow the linking of theory and practice | 3.7 | 0.3
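The μ and σ columns in Table 2 are the per-item mean and standard deviation of the 1–4 closed-question ratings. A minimal sketch of the aggregation follows; the individual ratings below are invented for illustration, as the underlying 136 responses are not published.

```python
from statistics import mean, pstdev

# Hypothetical ratings on the 1-4 scale; illustrative only.
responses = {
    "Overall appreciation of the course": [4, 4, 3, 4, 4],
    "Background knowledge is sufficient": [3, 4, 3, 4, 3],
}

for item, ratings in responses.items():
    # Population mean and standard deviation, as summarised per item in Table 2.
    print(f"{item}: mu={mean(ratings):.1f} sigma={pstdev(ratings):.1f}")
```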
The answers to the open question (i.e., "What did you particularly enjoy about the course?") often focus on game appreciation, as reported below:
– "The Lean LEGO™ Game activity allowed us to understand the difficulty of implementing the topics covered in the theoretical lectures."
– "I feel that the game facilitated the complete learning of the topics covered in the classroom by making the experience interactive."
– "Presentations of industrial realities and the Lean LEGO™ Game were particularly popular."
– "I appreciated an excellent idea of applying the theoretical concepts covered with a practical exercise (Lean LEGO™ Game), which I found particularly useful."
– "Excellent game exercise, perfect for understanding what we studied!"
Fig. 7. Students playing the Lean LEGO™ Game.
Introducing Active Learning and Serious Game in Engineering Education
375
While the students' general approval is evident from the questionnaire results, it is also possible to evaluate the effects on the course success rate, which, without any significant change in topics or exam modalities, moved from an average of 82% to 88%. Regarding the game's limitations, it is possible to highlight the following:
1. Developing and executing effective SGs requires expertise in both game design and subject matter. This poses replicability problems and casts a shadow on the game's future due to a lack of skilled personnel.
2. While LEGO™ is a versatile tool, it may have limitations in representing large-scale or complex production systems (such as harnessing and cable handling). LEGO™ models may not capture the intricacies and nuances of certain industrial processes (large warehouses, for instance), making it necessary to carefully manage expectations and choose appropriate scenarios for its application.
3. The nature of LEGO™ and the system represented in this specific game limit the maximum number of participants to 40, split over two competing factories.
4. Traditionally, SGs in this field primarily involve physical building and hands-on activities, which may limit their integration with digital tools. The development of the fourth workshop addressed this limitation.
6 Conclusions
This paper introduces a systematic learning game that integrates production, managerial, and information processes, providing a comprehensive contextual framework encompassing multiple branches of knowledge and technologies, including operations management, lean production, and Industry 4.0. It is important to emphasise that, deliberately, not all the potentialities within these branches are fully explored in this work.

The game's design successfully emulates the complexity of a real production system, encompassing numerous product variants. It incorporates a modular-based product design, enabling a systematic model of the production system supported by various managerial tools and systems. Over the years, the game's evolution has confirmed the validity of its initial concept, which leverages a modular structure to create diverse learning paths and complexity levels. This adaptability makes it suitable for training professionals in companies and students in industrial engineering, management, and technical specialisations.

Recent developments in digitalisation have further validated the game's concept while enhancing data availability and expanding the number of KPIs accessible through a digital dashboard. While this allows for more in-depth analysis and engagement with the field, there is a risk of losing the tangible connection with paper-based information flows.

The game's effectiveness has been validated through a questionnaire administered to participants, and a continuous improvement process is ongoing, facilitated by feedback from students and professionals. By surpassing traditional learning approaches in engineering education, the game emphasises the benefits
of active learning. The game will be extended to encompass different courses and broaden the range of topics supported. This expansion will be accompanied by measuring learning process efficiency and efficacy using a defined set of KPIs and their respective metrics.
References

1. Amory, A.: Game object model version II: a theoretical framework for educational game development. Education Tech. Research Dev. 55, 51–77 (2007)
2. Bengtsson, M.: Using a game-based learning approach in teaching overall equipment effectiveness. J. Qual. Maint. Eng. 26(3), 489–507 (2020)
3. Bettoni, A., Matteri, D., Montini, E., Gladysz, B., Carpanzano, E.: An AI adoption model for SMEs: a conceptual framework. IFAC-PapersOnLine 54(1), 702–708 (2021)
4. Braghirolli, L.F., Ribeiro, J.L.D., Weise, A.D., Pizzolato, M.: Benefits of educational games as an introductory activity in industrial engineering education. Comput. Hum. Behav. 58, 315–324 (2016)
5. Coller, B.D., Scott, M.J.: Effectiveness of using a video game to teach a course in mechanical engineering. Comput. Educ. 53(3), 900–912 (2009)
6. De Vin, L.J., Jacobsson, L.: Karlstad lean factory: an instructional factory for game-based lean manufacturing training. Prod. Manuf. Res. 5(1), 268–283 (2017)
7. Despeisse, M.: Games and simulations in industrial engineering education: a review of the cognitive and affective learning outcomes. In: 2018 Winter Simulation Conference (WSC), pp. 4046–4057. IEEE (2018)
8. Esposito, G., Galli, M., Mezzogori, D., Reverberi, D., Romagnoli, G., et al.: On the use of serious games in operations management: an investigation on connections between students' game performance and final evaluation. In: Summer School Francesco Turco Proceedings (2022)
9. Felder, R.M.: Learning and teaching styles in engineering education (2002)
10. Ferrario, A., Confalonieri, M., Barni, A., Izzo, G., Landolfi, G., Pedrazzoli, P.: A multipurpose small-scale smart factory for educational and research activities. Procedia Manuf. 38, 663–670 (2019)
11. Freeman, S., et al.: Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. 111(23), 8410–8415 (2014)
12. Gomes, D.F., Lopes, M.P., de Carvalho, C.V.: Serious games for lean manufacturing: the 5S game. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 8(4), 191–196 (2013)
13. Hauge, J.B., Riedel, J.C.: Evaluation of simulation games for teaching engineering and manufacturing. Procedia Comput. Sci. 15, 210–220 (2012)
14. Leng, J., et al.: Blockchain-empowered sustainable manufacturing and product lifecycle management in industry 4.0: a survey. Renew. Sustain. Energy Rev. 132, 110112 (2020)
15. Messaadia, M., Bufardi, A., Le Duigou, J., Szigeti, H., Eynard, B., Kiritsis, D.: Applying serious games in lean manufacturing training. In: Emmanouilidis, C., Taisch, M., Kiritsis, D. (eds.) APMS 2012. IAICT, Part I, vol. 397, pp. 558–565. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40352-1_70
16. Montini, E., et al.: An IIoT platform for human-aware factory digital twins. Procedia CIRP 107, 661–667 (2022). Leading manufacturing systems transformation – Proceedings of the 55th CIRP Conference on Manufacturing Systems 2022
17. Ordaz, N., Romero, D., Gorecky, D., Siller, H.R.: Serious games and virtual simulator for automotive manufacturing education & training. Procedia Comput. Sci. 75, 267–274 (2015)
18. Pacheco-Velazquez, E., Palma-Mendoza, J., Arana-Solares, I., Rivera, T.C.: LOST: a serious game to develop a comprehensive vision of logistics. In: European Conference on Games Based Learning, pp. 550–559. Academic Conferences International Limited (2019)
19. Pourabdollahian, B., Taisch, M., Kerga, E.: Serious games in manufacturing education: evaluation of learners' engagement. Procedia Comput. Sci. 15, 256–265 (2012)
20. Prince, M.: Does active learning work? A review of the research. J. Eng. Educ. 93(3), 223–231 (2004)
21. Quinn, C.N.: Engaging Learning: Designing E-learning Simulation Games. Wiley, New York (2005)
22. Randi, M.A.F., de Carvalho, H.F.: Learning through role-playing games: an approach for active learning and teaching. Revista Brasileira de Educação Médica 37(01), 80–88 (2013)
23. Raybourn, E.M.: Applying simulation experience design methods to creating serious game-based adaptive training systems. Interact. Comput. 19(2), 206–214 (2007)
24. Riedel, J.C., Hauge, J.B.: State of the art of serious games for business and industry. In: 2011 17th International Conference on Concurrent Enterprising, pp. 1–8. IEEE (2011)
25. Romagnoli, G., Galli, M., Mezzogori, D., Reverberi, D.: A cooperative and competitive serious game for operations and supply chain management – didactical concept and final evaluation. Int. J. Online Biomed. Eng. 18(15), 17–30 (2022)
26. Wrzesien, M., Raya, M.A.: Learning in serious virtual worlds: evaluation of learning effectiveness and appeal to students in the E-Junior project. Comput. Educ. 55(1), 178–187 (2010)
Crafting a Memorable Learning Experience: Reflections on the Aalto Manufacturing Game

Mikael Öhman1(B), Müge Tetik2, Risto Rajala3, and Jan Holmström3

1 Hanken School of Economics, Helsinki, Finland
[email protected]
2 LUT University, Lappeenranta, Finland
3 Aalto University, Espoo, Finland
Abstract. Along with the growing popularity of game-based teaching, research on how to design serious games has gained momentum. While some prior work discusses the creation of learning experiences, the role of player emotions and their potential in enhancing learning has largely been overlooked. This paper discusses the design and development of the Aalto Manufacturing Game, a board game on manufacturing and service operations dynamics for students and practitioners. We elaborate on the process of crafting a learning experience, with a specific emphasis on emotional engagement. We argue that learning experiences can be crafted by constructing framing events. Based on our experience in game design, we arrive at design propositions for creating and controlling emotional engagement during a serious game. Further, we discuss the effectiveness of our design based on feedback provided through a learning protocol filled in by students who participated in the game. Through our design propositions we make the case for "emotional engineering" in serious game design.

Keywords: Serious Game · Operations Management · Game Design · Emotional Engagement
1 Introduction

The management of operations (OM) requires answering many different types of challenging and complex questions, typically under some degree of time pressure. In preparing students, or training practitioners, to cope with the inherent dynamism in answering these questions, traditional classroom teaching often falls short [1], as it is not suited for conveying how-to knowledge [2]. Accordingly, game-based teaching has been a longstanding addition to OM curricula and practitioner education [3], and is typically highly appreciated by learners [4, 5]. From a pedagogical perspective, games can be seen as experiential learning [6], where learners engage in loops of abstracting, acting, observing and reflecting. In this learning through doing, learners develop knowledge and skills by becoming cognitively, behaviorally and affectively immersed in the learning situation [7].

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 378–391, 2023. https://doi.org/10.1007/978-3-031-43666-6_26

Experiential learning
has been acknowledged as a good approach for management learning [8], as it increases active involvement and leads to better academic results [9]. Emotional engagement of learners leads to more effective learning [10–12], also in game-based operations management teaching [5]. Despite this, there is scarce research on the (explicit) design of emotional engagement in games [13], and from a pedagogical perspective, existing contributions focus on affective learning. Martin et al. [14] discuss dramaturgy in educational program design, and in a later contribution outline dramaturgical teaching as a pedagogical approach for a management course [15]. However, in these contributions, emotional engagement is seen as an outcome, not an explicit design perspective. Overall, Despeisse [16] notes that studies on learning games in industrial engineering tend not to focus on game design and development.

In this paper, we place emotional engagement at center stage, as an explicit perspective on game design, in an approach that can be termed emotional engineering. Hence, we seek to answer the research question: How can emotional engagement be designed into a serious game to enhance learning outcomes?

We answer our research question by reflecting on the design and development of the Aalto Manufacturing Game (AMG). In AMG, participants are exposed to the complex nature of operations management activities by making decisions and seeing their effects in managing industrial assets [5, 17]. The learning objectives of AMG are to provide the participants with experience of 1) diverging incentives within supply chains, 2) uncertainty caused by limited demand visibility and 3) balancing performing and improving. We evaluate the design by analyzing game feedback (338 responses) collected from five academic institutions and a company workshop where the game has been played. The analysis is supported by observational data and a player group interview.
Our findings suggest that in AMG's constructivist experiential learning setting [18], emotional engagement emerges from the tension between cognitive frames and game experience. The game "dramaturgy" is designed to concentrate learning at specific framing events at peak emotional engagement, where this tension is released. The tension is released through a theory-based explanation for the tensions and a change in game rules that releases the pressure experienced by the participants. We discuss our findings considering different possible dramaturgical curves in game design.

1.1 Prior Research

Simulations are simplified representations of real-life situations, traditionally used in military and aeronautics education. The use of simulations, or serious games, in education has since expanded to various fields, including medicine, nursing, engineering, and management [19]. Serious games allow students to test and acquire new knowledge and skills, providing a compelling paradigm for understanding concepts and developing methods for change [20]. By modeling diverse situations, games enable observing participants' behavior and different solutions. Participants can make changes in serious games without fear of adverse outcomes, making learning from mistakes possible [21]. Hence, games improve learning and enhance critical thinking skills [22]. In a classroom setting, game participants not only interact with the game but also with one another. Research shows that learning is positively affected when participants work together in a simulation [23]. Games are interactive, allowing participants to actively
engage with the material instead of passively absorbing it, as in many traditional teaching methods. This engagement requires students to process the information themselves, leading to higher learning and understanding [24]. Many organizational serious games have OM-specific themes [3], indicating that serious games are valuable educational tools in fields like OM, where hands-on experience is essential to comprehend the complex theoretical landscape [1].

Serious games are associated with active, collaborative, and constructivist learning. Active learning involves discovery-based methods and heavily involves the participants [25]. Collaborative learning involves learners pursuing a task together and interacting with one another, building knowledge through sharing experiences and taking on different roles [26]. Constructivist learning emphasizes individual learning through interaction with others and the environment, with the teacher serving as a facilitator rather than a lecturer. Synergies between collaborative and constructivist views on learning become apparent in a serious game setting [27].

While emotional engagement has been the focus of study in digital game design (cf. [13]), research on emotional engagement for the purpose of learning is scarce. Higher education in management has been viewed through the lens of dramaturgy in subjects such as leadership [28] and political science [29]. As a teaching method, dramaturgy "involves employing actions, scenes, agents, agency and purpose to both experience and analyze" [29, p. 728], or "framing, scripting, staging and performing […] trust" [30, p. 60]. Also, the concepts of roles and casts are frequently discussed, sometimes referring to students [29] and sometimes to teachers [28, 31]. A process view on dramaturgy is analyzed in connection to management course design, where dramaturgy is manifested in socio/cultural, reflective, physical and creative activities [14, 15].
They discuss this concept in terms of dramaturgical waves, which they also refer to in later studies (e.g., [32]). On a general note, however, dramaturgy in education typically refers to longer experiences, such as courses ranging over entire semesters or management education ranging over several days.

1.2 Research Approach

In this study, we apply a design science approach [33], where we learn from the act of design [34]. Design science is based on pragmatism, which puts the usefulness of knowledge at the forefront. Hence, we seek to develop design propositions based on our study, forming an answer to the research question. Further, we perform a pragmatic evaluation of our design based on feedback data collected after game sessions.

The AMG learning protocol consists of open-ended questions requiring participants to reflect, in their own words, on what they have learned during the game sessions. Open-ended questions allow participants to respond using their own phrases [35] and, according to [35], may measure participants' learning. The feedback forms thus collect participants' reflections on the game in their own words, without first informing them of the game's intended learning outcomes.

Player feedback (n = 338) was collected through student learning protocols after game sessions between 2017 and 2023 from five European institutions. The responses include both qualitative and quantitative answers. Based on the collected data, we seek
to understand how the emotional engagement of game participants affects their learning experience. Moreover, we observed game sessions to record student reactions to unfolding game events and explore the student game experience in-depth in a group interview.
2 Creating Tension Through Experience

[6, p. 194] describe experiential learning as "a process of constructing knowledge that involves a creative tension among the four learning modes that is responsive to contextual demands." The four learning modes referred to are Abstract Conceptualization (AC), Active Experimentation (AE), Concrete Experience (CE) and Reflective Observation (RO). [6, p. 194] go on to portray learning as "an idealized learning cycle or spiral where the learner touches all the bases – experiencing, reflecting, thinking and acting – in a recursive process that is responsive to the learning situation and what is being learned."
Fig. 1. The experiential learning cycle manifested in a game round.
Our design premise is that serious games facilitate this learning cycle, making them tools for creating experiential learning experiences (Fig. 1). As games by their nature represent dynamic models of real situations [20], they are concerned with change within complex systems, which game participants can to some extent control. Here, change as such constitutes a cycle of experiencing the change and reflecting upon it, whereas any attempt by participants to exert control over the system constitutes active experimentation. Further, from a design perspective, abstract conceptualizations can be embedded into game design through game artifacts, such as game boards, pieces and, as in our case, accounting sheets [11] (Table 1).
Table 1. Game round phases and their relation to experiential learning.
Game round phase
Relation to experiential learning
1. Negotiations and decisions
Factories negotiate with the distributor on the number of boats to be delivered and the price to be paid for the delivery. The negotiations are intended to reinforce the game's initial framing (AC) as one of economic competition where the winner is determined based on generated profits. The competitive bidding in the negotiations and the time pressure imposed by the facilitator also serve as a distraction, intended to lure factories into short-term thinking in maintaining their capability.
Once negotiations are closed, the factories make their production decisions on whether they wish to produce boats on their machines or whether they should maintain them. This is where factory participants exert control over the system (AE), with maintenance providers attempting to influence their decision
2. Capability check
With production decisions set, the game round proceeds to the capability check, where factory participants roll a die that determines whether their machines work or break down. The die represents an indisputable way to introduce uncertainty into the decisions made in the previous phase by the participants (AE).
The capability check also serves as the main stage for triggering emotions, as this is where factory participants experience first-hand the deterioration of service quality (deliberate poor maintenance), as machines will tend to break down despite having just been maintained (CE)
3. Production, maintenance
The outcome of the capability check determines how many boats are produced and delivered. The boats are assembled from LEGO®-bricks by the factory participants, giving them a hands-on “production experience” (CE) The assembled boats are physically delivered to the distributor, which also serves for reflective purposes, in the sense that when the production crane breaks down, nothing happens in the factory – no assembly, and no deliveries, with any boats in the warehouse remaining there. And as machines break down, no new boats are assembled. This creates a stark contrast with the experiences from the early stages of the game (RO)
4. Accounting
In the accounting phase, participants are reminded that despite the lack of income due to physical paralysis of the factories, costs are still accumulating. (RO) The accounting phase also reinforces the competitive framing of the game (AC), reminding the participants that the bottom-line still determines success in the end
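The round structure in Table 1 can be sketched as a small simulation of a single factory: decide whether to produce or maintain, roll a die for the capability check, then account for profit. All parameter values (prices, costs, the 0–5 condition scale and its breakdown odds, and the naive maintenance policy) are invented for illustration; the paper does not specify the game's actual numbers.

```python
import random

# Hypothetical parameters - not the game's real values.
MAINTAIN_COST = 2      # cost of maintaining the machine for one round
UNIT_PRICE = 5         # price per boat agreed in the negotiation phase
FIXED_COST = 3         # fixed cost accrued every round regardless of output

def capability_check(condition, rng):
    """Roll a die; the worse the condition, the more faces mean breakdown."""
    # condition: 0 (worn out) .. 5 (freshly maintained)
    return rng.randint(1, 6) <= condition   # True = machine works

def play_round(condition, maintain, rng):
    """One round: decision -> capability check -> production -> accounting."""
    if maintain:
        condition = 5                        # maintenance restores condition
    works = capability_check(condition, rng)
    boats = 1 if works and not maintain else 0   # maintaining forgoes output
    profit = boats * UNIT_PRICE - FIXED_COST - (MAINTAIN_COST if maintain else 0)
    if works and not maintain:
        condition = max(condition - 1, 0)    # machines deteriorate with use
    return condition, profit

rng = random.Random(42)
condition, total = 5, 0
for _ in range(20):
    maintain = condition <= 2                # naive "maintain when worn" policy
    condition, profit = play_round(condition, maintain, rng)
    total += profit
print(total)
```

The deterioration step is what drives the emotional arc described above: short-term profit maximisation (never maintaining) runs the condition down until breakdowns dominate, mirroring the participants' experience in the capability check.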
Serious games have been argued to foster a more comprehensive understanding of methods and concepts in complex systems [20], enabling the modeling of different situations and different solutions for different roles. In so doing, games allow the participants to experiment and make mistakes without having to bear any real consequences [21]. Our design premise is that the process of experimentation constitutes an equally important learning cycle, one which we here describe in terms of framing, tension-building, and reframing. This is analogous to double-loop learning [36, 37].

2.1 Learning as Framing and Reframing

All learning is re-learning, in which we attach new things to our existing conception of reality. We are biased toward accepting learnings that reinforce our current understanding, yet we are open to new explanations if they explain our conception of reality better. These conceptions need not be mutually exclusive but can be mutually reinforcing, which makes it natural to talk about framing: a way of framing reality is a way to understand it and, by extension, to predict the consequences of decisions made in that reality.

Games create a simulated reality, the rules of which are manifested in game design and can be manipulated through game facilitation. (Note that this means that, as long as teachings are intended to apply in the "real" world, the game has to balance between bending the rules to highlight selected phenomena and maintaining credibility as a de facto simulation of the real world.) This allows a game or a game facilitator to create situations where the existing (or given) framing of reality lacks the power of explanation (i.e., decisions do not produce expected consequences). We argue that the learner is more receptive to learning if the student's frame of reference lacks explanatory power, making the student more receptive to alternative or complementary framing that offers greater explanatory power.
A greater receptiveness to learning should lead to better learning outcomes. In other words, teaching should produce learning results according to the following: the best results should come if the novel or complementary frame is provided in response to imminent demand (i.e., the learner can directly apply the frame to the problem at hand); fair results should come if the novel or complementary framing satisfies a previous demand (i.e., the learner can see past experiences in a new light); poor results can be expected if the learner has to "store" the frame for potential later use (i.e., the learner cannot relate the frame to any prior experience).

Framing and reframing mean that we can see experiential learning as creating a demand for new knowledge and satisfying that demand (either as a facilitated process – teaching – or as an exploratory process where the learner engages in experimenting with "self-generated" solutions to inadequate existing frames). In this view, education is inherently an activity of framing and reframing the subject under study, preferably in response to a demand for learning. Through games we can craft learning experiences, as framing, reframing and tension-building can be built into the game. According to this view, we could also argue that emotions are not necessarily an antecedent of learning (as is sometimes claimed), but rather consequences of learning – the frustration of inadequate frames and the joy of finding new and better explanations for experiences.
3 Crafting an Emotionally Engaging Learning Experience

Based on the discussion in the two previous sections, we argue that learning experiences can be crafted by constructing framing events, separated by periods of hands-on experience intended to create a demand for reframing – a "thirst for learning", if you will. As these framing events constitute what the students learn, framing and reframing should be considered crucial activities in teaching, tightly linked to learning objectives. In AMG, this translates to game-specific framing activities derived from the game's learning objectives. Figure 2 illustrates the game rounds, the learning objectives, and the framing activities associated with them.

3.1 Diverging Incentives Within the Supply Chain

At the outset of the game (before round 1, framing event "a" in Fig. 2), the setting is framed as an economic, competitive game in which all parties should seek to maximize their profit and minimize costs. Incentives are not mentioned in the initial framing, and the focus is deliberately directed to firm-level performance, leading the participants to consider their own cost structures and how they can succeed in light of a single actor's performance. Further, the negotiations between distributor and factories deliberately direct factory attention "upstream", as if the make or break of the game were decided there. The duration of the game is not revealed, to prevent participants from optimizing their activity against that constraint.

At round 5 (framing event "b" in Fig. 2), the original framing is reinforced by presenting the standings after the first five rounds. By this time, the service providers tend not to have had a lot of business, putting them at the bottom of the score table.
Partly this is intended to reinforce the original framing of inter-firm competition; partly, it is a wake-up call to the service providers, which are now (at the latest) reminded by the facilitator of what they need to do to climb in the rankings – i.e., reduce service quality. By round 10, the participants (factory and distributor) are typically becoming more and more frustrated, as the service providers have reduced service quality to improve their profitability. This frustration comes about because the original competitive frame neglects inter-firm dependencies (especially between factory and service provider). Hence, factories (and by extension the distributor) realize that their success depends on the service provider's service quality.

After round 10 (framing event "c" in Fig. 2), the competitive situation is reframed by asking the participants to reflect on the incentives of the different firms involved in the competition, revealing the supply chain partners' conflicting incentives in a situation where all firms seek to maximize their profitability. Based on the conflicting incentives, the game participants are then invited to suggest remedies for the situation. After this discussion, the game is changed to introduce an outcome-based contract between the factory and the service provider, i.e., the service provider gets paid for every produced boat, while maintenance becomes a cost item.

After round 20, at the final framing event (framing event "e" in Fig. 2), the participants have experienced ten rounds of outcome-based cooperation with the service provider, leaving them with a sense of "integrated operations" – where the factory and service provider make decisions less as profit-maximizing firms and more as supply chains. This is also the essence of the final reframing, where the facilitator notes that the game
Fig. 2. Game rounds, objectives and framing activities associated with each other.
setup has changed from five single companies competing head-to-head to two competing supply chains. As the outcome of this sequence of framing and reframing, the participants have experienced the role of maintenance in an asset-reliant supply chain, and the effects of incentives in transitioning to supply-chain-based competition.

3.2 Uncertainty Caused by Limited Visibility

At the game's outset, the participants are told that the machines are in good condition but will deteriorate over time, while there is also a chance of random failures. In other words, as the game progresses, the uncertainty involved in production decisions
grows, which hampers the participants' ability to make optimal production/maintenance decisions and, by extension, creates a demand for tools to help in risk management. After round 15 (framing event "d" in Fig. 2), this demand is unexpectedly answered, framing knowledge of the condition of the machines as an issue of visibility. At this stage, the facilitator changes the rules of the game in an illustration of "introducing technology" that improves visibility – such as condition-monitoring technology. In practice, this means that the context-inherent uncertainty is not removed. However, risk management becomes significantly easier, as the participants can turn the capability cards before throwing the dice.

3.3 Balancing Performing and Improving

In the initial framing (before round 1, framing event "a" in Fig. 2), not only is machine condition described as "good", but market demand is also described as "looking good". Both statements are intended to encourage short-term thinking, where participants are expected to choose "performing" over "improving". This learning objective somewhat conflicts with the other objective concerning the diverging incentives that hamper collaborative planning among the supply chain partners. That is, maintaining the machinery on the service provider's initiative does not always result in actual improvement. Nevertheless, through the game, the participants learn the necessity of maintaining production capability for long-term performance. At the end of the game, after round 20 (framing event "e" in Fig. 2), the entire game experience is also framed as seeking a balance between performing and improving, which the students generally relate well to.
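The value of the visibility rule change can be illustrated with a toy expected-value comparison: when the capability card is revealed before the production decision, the decision can be tailored to the machine's actual state. All numbers below (prices, costs, state probabilities, the two-round horizon) are invented for illustration; the paper gives no actual parameters.

```python
# Toy illustration of the option value of condition visibility.
# Hypothetical parameters - not the game's real values.
UNIT_PRICE, FIXED_COST, MAINTAIN_COST = 5, 3, 2
P_GOOD, P_WORN = 5 / 6, 1 / 6           # chance the machine works in each state

def produce(p_work):
    """Expected profit of one producing round."""
    return p_work * UNIT_PRICE - FIXED_COST

def two_rounds(p_state, maintain_first):
    """Expected profit over two rounds for a machine in the given state."""
    if maintain_first:
        # Round 1: pay for maintenance, no production; round 2: machine good.
        return -FIXED_COST - MAINTAIN_COST + produce(P_GOOD)
    return 2 * produce(p_state)          # produce both rounds, state persists

states = [P_GOOD, P_WORN]                # both states assumed equally likely

# Without visibility: one policy must cover both states.
blind = max(sum(two_rounds(p, m) for p in states) / 2 for m in (False, True))

# With visibility: the capability card is revealed first, so the decision
# adapts to the actual state (maintain only when the machine is worn).
informed = sum(max(two_rounds(p, m) for m in (False, True)) for p in states) / 2

print(round(blind, 2), round(informed, 2))   # → -1.0 -0.75
```

The gap between `informed` and `blind` is the option value that the condition-monitoring rule change grants the participants: the underlying uncertainty is unchanged, but decisions can now respond to it.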
4 Design Propositions for Emotional Engagement

Based on the game design and the principles presented in the previous sections, we can identify a set of design propositions for controlling emotional engagement during a serious game. Controlling emotional engagement seeks to create moments of learning (i.e., reframing events) at peak emotional engagement. We formulate our first design propositions based on the structure of the first learning objective. Manifested in the misaligned incentives, AMG is designed to create growing frustration as the initial frame employed by participants fails to deliver results. The tension created by frustration is then released by reframing the competitive situation as an issue of incentives, relieving the students from the frustration caused by the insufficient frame.

Increase emotional engagement by purposefully framing the game in a way that neglects vital game mechanics. Once emotional engagement is tangible, introduce the necessary alternative framing (which is also a key learning objective of the game).

As for the timing of introducing the alternative framing, the game facilitator has an important role. In AMG, the players’ frustration typically became tangible in the second quarter of the game, resulting in a general guideline to introduce the framing event after round 10. In AMG the emotion is frustration and sometimes, albeit rarely, even anger. This design proposition is, however, ambivalent towards the type of emotional engagement. An inverse of AMG would be a game where participants are
Crafting a Memorable Learning Experience
387
increasingly content and happy with their progress, only to discover that they have been pursuing the wrong goal. An example could be encouraging a cost focus in a supply chain management game where success is increasingly determined by environmental performance. Implementing a solution in line with the first learning objective (i.e., implementing outcome-based contracts) not only aligns incentives but also redefines the competition from firm vs. firm to supply chain vs. supply chain. The long-term nature of the reframing in AMG is outside the scope of what is feasible to illustrate through the game design. However, given its contrast to the original framing, and its dependency on the solution, participants should be able to reflect on their changed behavior, relating their learnings to the concept of supply chains based on their previous knowledge.

Complementary reframing can be implemented by having the students discuss and reflect on how their behavior has changed due to the introduced alternative framing.

Manifested in visibility, the game is designed so that the degree of outcome uncertainty grows, making production decisions increasingly challenging. Treating the challenge as inherent before introducing the solution forces the participants into more in-depth reflection on the challenge, which would not happen if participants were anticipating a solution. Although the participants may find this hard, they accept it as part of the game setup, meaning that the eventual introduction of technology is a welcome change that makes risk management considerably more straightforward.

Changing the game mechanics/rules (in connection to a framing event) in a way that changes the experienced outcome uncertainty causes an emotional response.

In the case of AMG, the response to introducing visibility of equipment condition, and hence reducing outcome uncertainty, was typically joy and relief.
However, it is plausible that an inverse change (i.e., increasing outcome uncertainty) would also provoke a response, in terms of suspense and excitement. Further, the first framing event involves a change in the game mechanics that alters the power relationships between roles in the game. To the extent that these relationships complicate or simplify decisions, this type of change could also be expected to cause similar emotional responses. Finally, we note that the dynamic challenge conveyed throughout the game, i.e., the inherent conflict of short- and long-term performance in the game context, did not benefit from emotional engagement. However, from a design point of view, we can derive the following design proposition:

Contrasting short-term versus long-term performance requires an open-ended game.

In earlier versions of the game, the participants were made aware of the total duration of the game, which caused them to optimize performance with respect to when the game was to end, undermining this learning objective. It should, however, be noted that the facilitator sometimes needs to encourage participants to orient their thinking and decisions towards optimizing their performance in the short term.

4.1 Evaluation of Participant Feedback

The emotional engagement of the game participants can, to some extent, be analyzed based on the post-game learning protocol that participants were asked to fill in. As part of the learning protocol, participants (N = 338) were asked (Q2.3) to what extent they would agree with the claim “I was emotionally engaged in the game (I felt angry/happy
during the game)”, using a Likert scale from 1 to 5, where 1 = “Disagree completely” and 5 = “Totally agree”. The average of the responses was 3.96/5 (variance 0.936), suggesting that game participants were more inclined to agree that they were emotionally engaged during the game. This was also reflected in session observations, where student frustration was quite frequently verbalized in exclamations such as “What does that mean, I don’t understand!?” and “Why are you cheating on us? We’ve just maintained!”. We analyzed the relationship between emotional engagement and the other learning protocol items by assigning a dummy variable (EED) to Q2.3, where a response of 1–3 indicates no emotional engagement (EED0, n = 92) and a response of 4–5 indicates emotional engagement (EED1, n = 246). Based on this analysis, we make three interesting observations. First, emotional engagement was associated with more active participation in team decision-making (Q2.2), highlighting potential synergies with collaborative learning. Second, emotional engagement was associated with the participant feeling that winning was important (Q2.4). While it is not possible to determine causality, we note that creating stronger incentives for winning may result in higher emotional engagement. Third, we note that both seeing the game as a pleasant experience (Q2.5) and seeing that game-based learning works well for oneself (Q2.8) were positively associated with emotional engagement. This indicates that although we were mainly seeking to create “negative” emotions, the game was still a pleasant experience. Further, 59% of the emotionally engaged participants “totally agreed” with the statement that game-based learning works well for them, compared to 38% of the non-engaged respondents (Table 2).

Table 2. Summary statistics of the learning report (AVG and VAR over all answers; EED 0 and EED 1 are group averages)

Question                                                               AVG   VAR    EED 0  EED 1
Q2.1 The game was approachable, and the rules were easy to understand  4.37  0.443  4.14   4.24
Q2.2 I actively participated in my team’s decision-making              4.45  0.884  4.24   4.66
Q2.3 I was emotionally engaged in the game (I felt angry/happy
     during the game)                                                  3.80  0.946  2.64   4.45
Q2.4 Winning the game was important for me                             3.50  1.098  2.99   3.71
Q2.5 Playing the game was a pleasant experience                        4.51  0.489  4.10   4.59
Q2.6 I mainly learned from observing the other teams                   2.76  1.197  2.52   2.69
Q2.7 I mainly learned from discussing/deciding with my team-mates      3.71  1.266  3.74   4.04
Q2.8 Game-based learning works well for me                             4.42  0.542  4.18   4.51

4.2 Limitations and Future Research

Reflecting on our research question, we wish to point out the limitations of our method. As we approach the question through design, we develop a rich understanding of how to create emotional engagement. However, as we seek to produce knowledge through design [34], the design propositions we present are products of the learning objectives of the game, which drove the design process. In other words, a design process driven by other learning objectives will find complementary design propositions, along with nuances and perhaps contradictions to the ones we present. Further, we note that self-reported measures of emotional engagement have drawbacks, which we attempted to mitigate by observing game situations and interviewing game participants. Future research could employ other ways of measuring emotional engagement, for example physiological measurements, or video analysis of game sessions using machine learning algorithms trained to recognize emotions. Also, although we were able, to some extent, to analyze (post-game) learning outcomes through the learning protocol and exam answers [5], we are still reliant on prior work in connecting emotional engagement to learning outcomes. Future research should attempt to study the link between emotional engagement and learning outcomes in a serious game setting, for example through pre- and post-game evaluations. Finally, considering emotions as an angle to game design opens many interesting research avenues. First, in our study we focused on creating emotional engagement through surprise and frustration (and even anger), leaving room for studies and games that seek to evoke other types of emotions, such as happiness, sadness, or fear. Second, we are all different in how we feel, express, and handle emotions. Future studies could focus on the individual learning experience, and the role of emotions in it, considering personal differences in handling and leveraging emotions for learning. Third, our game is set in a face-to-face social setting, which can be assumed to affect emotional engagement. However, as more and more games take place in online environments, an important line of inquiry is how emotional engagement can be created and managed in online serious games.
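The group comparison described in Sect. 4.1 can be sanity-checked from the averages published in Table 2. The snippet below is a sketch (question labels abbreviated by us; values transcribed from the table) that ranks the protocol items by the gap between the engaged and non-engaged groups.

```python
# Group averages (EED0, EED1) transcribed from Table 2; labels abbreviated.
table2 = {
    "Q2.1 approachable rules":        (4.14, 4.24),
    "Q2.2 active participation":      (4.24, 4.66),
    "Q2.3 emotional engagement":      (2.64, 4.45),
    "Q2.4 winning important":         (2.99, 3.71),
    "Q2.5 pleasant experience":       (4.10, 4.59),
    "Q2.6 learned from other teams":  (2.52, 2.69),
    "Q2.7 learned from team-mates":   (3.74, 4.04),
    "Q2.8 game-based learning works": (4.18, 4.51),
}

# Rank the items by the EED1 - EED0 gap. Q2.3 tops the list by construction
# (it defines the split); the next largest gaps correspond to the items
# discussed in Sect. 4.1 (Q2.4, Q2.5, Q2.2, Q2.8).
gaps = sorted(((e1 - e0, q) for q, (e0, e1) in table2.items()), reverse=True)
for gap, q in gaps:
    print(f"{q}: {gap:+.2f}")
```

Note that this only compares group means; it says nothing about significance or causality, in line with the caveats stated above.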
5 Conclusion

One of the aims of university teaching is to help students develop their critical thinking skills. From this perspective, teaching can be considered successful when students say that they have learned ‘to see the world and its phenomena through new lenses’ and to ‘think more critically’. We have studied how this type of learning is manifested in a serious game, and specifically how game design creates such a learning experience. In our design process, we came to refer to this as framing and reframing. Throughout the design process, we sought to manage the emotional engagement of participants during the game as a means of leveraging the positive interplay between emotions and learning [10–12]. We find that building tension between framing and
observation increases emotional engagement, creating demand for a better explanation, i.e., reframing. In this paper we have identified design propositions through which emotional engagement can be designed into a serious game with the explicit objective of improving learning, thus providing an answer to our research question. Overall, serious games have received increasing attention in academia, and the design of such games is one of the topics discussed. However, considering the positive relationship between emotions and learning, it is surprising how little attention emotions have received from a design perspective. We contribute to research on serious game design by showing that emotions matter and that they can, to some extent, be controlled, making them a dimension of serious game design. We have thus made a case for emotional engineering in serious game design.
References

1. Ammar, S., Wright, R.: Experiential learning activities in operations management. Int. Trans. Oper. Res. 6, 183–197 (1999)
2. Ben-Zvi, T., Carton, T.C.: Business games as pedagogical tools. In: PICMET ’07 – 2007 Portland International Conference on Management of Engineering & Technology, pp. 1514–1518. IEEE, August 2007
3. Lewis, M.A., Maylor, H.R.: Game playing and operations management education. Int. J. Prod. Econ. 105, 134–149 (2007)
4. Costantino, F., Di Gravio, G., Shaban, A., Tronci, M.: A simulation based game approach for teaching operations management topics. In: Proceedings of the 2012 Winter Simulation Conference (WSC), pp. 1–12. IEEE, December 2012
5. Tetik, M., Öhman, M., Rajala, R., Holmström, J.: Game-based learning in an industrial service operations management course. In: 4th International Conference on Higher Education Advances (HEAd 2018), pp. 837–845. Editorial Universitat Politècnica de València, València, June 2018
6. Kolb, A.Y., Kolb, D.A.: Learning styles and learning spaces: enhancing experiential learning in higher education. Acad. Manag. Learn. Educ. 4, 193–212 (2005)
7. Gentry, J.W.: What is experiential learning. In: Guide to Business Gaming and Experiential Learning, vol. 9, p. 20 (1990)
8. Romme, A.G.L.: Learning outcomes of microworlds for management education. Manag. Learn. 34(1), 51–61 (2003)
9. Santos, J., Figueiredo, A.S., Vieira, M.: Innovative pedagogical practices in higher education: an integrative literature review. Nurse Educ. Today 72, 12–17 (2019)
10. Zull, J.E.: Key aspects of how the brain learns. New Directions for Adult and Continuing Education 110, 3 (2006)
11. Taylor, S.S., Statler, M.: Material matters: increasing emotional engagement in learning. J. Manag. Educ. 38(4), 586–607 (2014)
12. Kumar, J.A., Muniandy, B., Wan Yahaya, W.A.J.: Exploring the effects of emotional design and emotional intelligence in multimedia-based learning: an engineering educational perspective. New Rev. Hypermed. Multimed. 25(1–2), 57–86 (2019). https://doi.org/10.1080/13614568.2019.1596169
13. Dormann, C., Whitson, J.R., Neuvians, M.: Once more with feeling: game design patterns for learning in the affective domain. Games Cult. 8(4), 215–237 (2013). https://doi.org/10.1177/1555412013496892
14. Martin, A., Leberman, S., Neill, J.: Dramaturgy as a method for experiential program design. J. Exp. Educ. 25, 196–206 (2002)
15. Leberman, S.I., Martin, A.J.: Applying dramaturgy to management course design. J. Manag. Educ. 29, 319–332 (2005)
16. Despeisse, M.: Games and simulations in industrial engineering education: a review of the cognitive and affective learning outcomes. In: 2018 Winter Simulation Conference (WSC), pp. 4046–4057. IEEE, December 2018
17. Holmström, J., Eriksson, M., Makkonen, E.: The manufacturing game in teaching asset management. In: 24th International Congress & Exhibition on Condition Monitoring and Diagnostic Engineering Management, COMADEM 2011, UK (2011)
18. Vince, R.: Behind and beyond Kolb’s learning cycle. J. Manag. Educ. 22(3), 304–319 (1998)
19. Pasin, F., Giroux, H.: The impact of a simulation game on operations management education. Comput. Educ. 57, 1240–1254 (2011)
20. Kriz, W.C.: Creating effective learning environments and learning organizations through gaming simulation design. Simul. Gaming 34, 495–511 (2003)
21. Geithner, S., Menzel, D.: Effectiveness of learning through experience and reflection in a project management simulation. Simul. Gaming 47, 228–256 (2016)
22. Gatti, L., Ulrich, M., Seele, P.: Education for sustainable development through business simulation games: an exploratory study of sustainability gamification and its effects on students’ learning outcomes. J. Clean. Prod. 207, 667–678 (2019)
23. Mayer, B.W., Dale, K.M., Fraccastoro, K.A., Moss, G.: Improving transfer of learning: relationship to methods of using business simulation. Simul. Gaming 42, 64–84 (2011)
24. Hyppönen, O., Linden, S.: Handbook for Teachers – Course Structures, Teaching Methods and Assessment. Helsinki University of Technology, Espoo (2009)
25. Wilson, L.E., Sipe, S.R.: A comparison of active learning and traditional pedagogical styles in a business law classroom. J. Leg. Stud. Educ. 31, 89–105 (2014)
26. Mitnik, R., Recabarren, M., Nussbaum, M., Soto, A.: Collaborative robotic instruction: a graph teaching experience. Comput. Educ. 53, 330–342 (2009)
27. Kim, J.S.: Effects of a constructivist teaching approach on student academic achievement, self-concept, and learning strategies. Asia Pac. Educ. Rev. 6, 7–19 (2005)
28. Barbuto, J.E.: Dramaturgical teaching in the leadership classroom: taking experiential learning to the next level. J. Leadersh. Educ. 5, 4–13 (2006)
29. Freie, J.F.: A dramaturgical approach to teaching political science. PS: Polit. Sci. Polit. 30, 728–732 (1997)
30. Halliday, S.V., Davies, B.J., Ward, P., Lim, M.: A dramaturgical analysis of the service encounter in higher education. J. Mark. Manag. 24, 47–68 (2008)
31. Preves, S., Stephenson, D.: The classroom as stage: impression management in collaborative teaching. Teach. Sociol. 37, 245–256 (2009)
32. Martin, A.J.: The dramaturgy approach to education in nature: reflections of a decade of international Vacation School Lipnice courses, Czech Republic, 1997–2007. J. Adventure Educ. Outdoor Learn. 11, 67–82 (2011)
33. Holmström, J., Ketokivi, M., Hameri, A.P.: Bridging practice and theory: a design science approach. Decis. Sci. 40(1), 65–87 (2009)
34. Öhman, M.: Design science in operations management: extracting knowledge from maturing designs (2019)
35. Geer, J.: What do open-ended questions measure? Public Opin. Q. 52, 365–371 (1988)
36. Kiili, K., Ketamo, H.: Exploring the learning mechanism in educational games. J. Comput. Inf. Technol. 15(4), 319–324 (2007)
37. Vallat, D., Bayart, C., Bertezene, S.: Serious games in favour of knowledge management and double-loop learning? Knowl. Manag. Res. Pract. 14(4), 470–477 (2016)
A Classification Framework for Analysing Industry 4.0 Learning Factories Simone Vailati, Matteo Zanchi , Chiara Cimini , and Alexandra Lagorio(B) Department of Management, Information and Production Engineering, University of Bergamo, Viale Marconi 5, Dalmine, BG, Italy [email protected], {matteo.zanchi,chiara.cimini, alexandra.lagorio}@unibg.it
Abstract. Learning factories offer hands-on practice for industrial environments, covering many areas, from product design to end-of-life phases. They are an effective testing ground for new products and processes, helping businesses to increase efficiency, productivity, and sustainability while providing workforce training. Using information coming from the major scientific databases, this paper provides a new classification framework for Industry 4.0 learning factories, to analyse the most relevant ones globally, with a focus on Europe and Italy and to give researchers and industrial stakeholders an overview of the activities that can be performed with them, particularly concerning learning contents and implemented technologies. The findings of the proposed analysis provide valuable insights into the more comprehensively covered topics, the overarching themes, and the existing gaps in the product lifecycle areas. The paper underlines that there is high coherence between the analysed laboratories and the Industry 4.0 requirements, with particular emphasis on the areas of production, logistics and R&D. Keywords: Learning Factory · Teaching Factory · Industry 4.0 · Workforce 4.0
1 Introduction

Recently, learning factories (LFs) have become popular across universities and industrial companies around the globe. They are conceived as realistic environments reproducing production systems to promote hands-on practice in industrial settings [1]. LFs can be used for different purposes, covering topics ranging from product design and production to end-of-life phases [2]. Indeed, increasing students’ and workers’ knowledge through practical activities is becoming progressively important in industry as manufacturers strive to enhance efficiency, productivity, and sustainability. In addition to providing training and education, LFs can also benefit businesses and organisations by serving as a testing ground for new products and processes. Indeed, the demand for higher skills, technical training and engineering education has become crucial for employment since Industry 4.0 sets the standards for modern production systems, with a particular focus on integration, automation and digitalisation [3]. © IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 392–402, 2023. https://doi.org/10.1007/978-3-031-43666-6_27
Currently, several learning factory solutions have been developed in different universities, research centers, and industries. According to their different learning purposes and student targets, they embed different technologies and support learning modules related to several fields of investigation, such as manufacturing, automation, and computer science. Nevertheless, the scientific literature lacks a reference model for classifying learning factory solutions that would provide clear guidance to academics and practitioners about the potential of adopting such systems to perform specific training and education activities in the broad field of Industry 4.0. To overcome this gap, this paper aims to analyse the as-is situation of the most relevant LFs around the globe, with a particular focus on Europe and Italy, by providing a new classification framework. The objective is to give researchers and industrial stakeholders an overview of the activities that can be performed with LFs, particularly concerning the different learning contents and implemented 4.0 technologies. To do this, the research relies on a literature review of articles from Scopus and Google Scholar, initially searching for general information about existing LFs and later focusing on the specific elements that characterise every learning environment. All these data have been analysed and included in a classification framework, represented by a matrix that relates the learning contents of each LF to the lifecycle phases of industrial products. The classification framework has then been filled in with the main technologies used and the main purposes the LFs aim to achieve. The paper is structured as follows. Section 2 presents a literature review of the Industry 4.0 requirements and the main characteristics of learning factory solutions. Section 3 discusses the methodology adopted to select the LFs analysed and classified in the framework presented in Sect. 4.
Section 5 discusses the results of the LFs classification, mainly focusing on clusters of more covered topics, cross-cutting themes and gaps. Section 6 concludes the paper with limitations and further improvement of this research.
2 Context

In order to properly analyse the various types and traits of LFs, it is important first to grasp the essential components and key necessities of Industry 4.0 that justify the importance and practicality of LFs (Subsect. 2.1), as well as to establish how they can be defined (Subsect. 2.2).

2.1 Industry 4.0 Requirements

In the last decade, the manufacturing industry has faced its fourth revolution, in which the digitalisation of factories has moved to another level. The automation of processes and the more effective collection of data promoted the wide adoption of smart machines and the development of smart factories capable of producing goods across the value chain more efficiently and quickly [4]. Flexibility and efficiency become the main drivers in an environment where users can customise their demand and factories often face individual lots. Exploiting refined
and real-time data collection and analysis enables companies to make better decisions and have more control over their processes [5]. As shown in Fig. 1, the factors that influence Industry 4.0 production environments, namely smart factories, require a higher level of integration based on increased control over the product, exploiting new technologies that ensure strong communication between humans and machines, and also require improved levels of both technical and methodological skills [6].
Fig. 1. Crucial aspects of Industry 4.0 production, crucial technologies, and required competences of engineers [6]
Given these premises, it can be stated that smart factories are very complex environments in which several technologies must be integrated and managed properly. With the advent of Industry 4.0, new requirements emerge in terms of the competences and skills needed to design and manage production systems [7]. Over the last few years, competence models have been developed to provide a clear overview of the education areas that should be addressed to train the workforce 4.0 (e.g., [8]). In particular, given the increasingly high demand for practically skilled workers (both engineers and shopfloor operators), novel strategies for education and training have started to be widely adopted, such as learning-by-doing and problem-based learning approaches [9]. One increasingly widespread strategy concerns the development and implementation of learning factories, which are presented in detail in the following sub-section.

2.2 Learning Factories

This paragraph describes the concept and characteristics of LFs. In the literature, different terminologies are used: Learning Factories are sometimes also referred to as Teaching Factories, although there is a slight difference between these terms. The label “learning factory” should be used for systems that address both parts of the term, that is, including elements of learning or teaching as well as a production environment [10].
The word “learning” in the term, as opposed to “teaching”, emphasises the importance of experiential learning. Research has shown that learning-by-doing leads to greater retention and application possibilities than traditional methods such as lectures [11]. However, since the concepts of learning and teaching factories are often used as synonyms, to reach the objective of this research and better analyse the state of the art of these systems, both Learning and Teaching Factories will be considered. The term LF was coined in 1994 to describe an environment that provides a platform for individuals to develop practical skills and knowledge in a simulated factory setting. The primary objective of LFs is to bridge the gap between academia and industry by providing a practical learning experience to students and professionals, with continuous collaboration between them [12]. Indeed, by working in a simulated factory environment, individuals can learn about the latest manufacturing technologies and gain real-world experience that they can apply in their careers [11]. Moreover, participating in learning activities in an LF is expected to support the development of soft and organisational skills [13]. Finally, in a learning factory, it is common for students, researchers, and industry professionals to collaborate, experiment, and learn about advanced and innovative processes and technologies, stimulating synergetic approaches that boost innovation and knowledge exchange. Learning factories can be classified along different dimensions. The most systematic classification provided in the scientific literature is reported in Fig. 2 and discussed hereafter.
Fig. 2. Key features of learning factories (adapted from [14])
The classification dimensions suggested by [14] are:

– “Purpose”, which describes the kind of learning applied, with a distinction between training, education or research.
– “Process”, which describes the technologies applied and the order lifecycle.
– “Setting”, which helps to identify whether the factors used in each simulation can be modified or are fixed.
– “Product”, which is the main element used in the LFs; it can be physical or digital according to the specifications of each environment.
– “Didactics”, which shows whether the learning processes used are formal, non-formal, or informal.
– “Operating Model”, which relates the operation of the LFs to content, economy and personnel.

Although this model analyses different dimensions of LFs, it lacks a specific focus on the different lifecycle steps that characterise industrial products and production systems. Therefore, a new classification framework has been developed to overcome this gap and provide a tailor-made approach for examining Industry 4.0 aspects, embedding product lifecycle, learning contents and implemented technologies. Indeed, LFs are currently not limited to traditional manufacturing processes: they also extend to branches like logistics and research & development, which must be adequately considered when mapping and classifying LF functionalities.
3 Methodology

To meet the purpose of this research, as a first step of the methodological workflow represented in Fig. 3, a literature review was conducted to retrieve information about the LFs currently available in universities and research centers worldwide. The literature review also aimed at identifying the frameworks used for classifying LF contents. The Scopus and Google Scholar databases were used, with the primary search based on the keywords “Learning Factories” and “Teaching Factories”, combined with “Industry 4.0”. In particular, journal articles and conference papers published from 2015 onwards were searched to ensure their contribution was significantly related to the paradigms of Industry 4.0 and Smart Factories. After a careful analysis of the abstracts and titles, a subset of papers was chosen and fully read. With a snowballing approach, other relevant articles were retrieved to finally reach a corpus of 55 articles that include general knowledge about LF definitions and features, descriptions of some LFs available in the world, and some business cases that exploit LF environments. In particular, along with a search of the general characteristics of LFs, we aimed to map the existing LFs described in the scientific literature and provide a novel classification. For this reason, the articles were used first to retrieve general information about available LFs and currently available classifications of LF features. Later, in-depth research was conducted on the websites of the single university/research center or of the LF itself (if available). At the end of this research, to analyse the most important and advanced laboratories for Industry 4.0 and to avoid redundancy, the number of sites analysed was reduced to 11. In particular, we focused on LFs related explicitly to Industry 4.0 and targeted at filling the training gap between the university and the industrial environment.
Also, we limited the selection to those LFs for which it was possible to clearly establish that didactic, research and industrial training applications were all possible. For those selected LFs, we provide the classification described in the next section.
Fig. 3. Methodology applied to this research.
4 Results

The main purpose of this section is to present a classification framework able to analyse how LFs can be positioned within the lifecycle stages of a product and how they can support learning activities. To classify the different LFs, we use two axes. The first axis shows the phases of a product’s lifecycle: Research & Development, Supply, Production/Assembly, Distribution/Logistics, Service, and Disposal [15, 16]. The second axis displays the learning contents supported by the characteristics of each LF, organised on two main levels, each divided into two sub-levels. The first level concerns the Learning Contents and thematic areas that the LFs address. These areas can refer to topics related to macro-trends and business objectives, or to operations management practices that can be tested and implemented on LFs (as proposed by [17]). The second level concerns the Technical Solutions implemented in the learning systems to support technical learning modules about the most common hardware and software Industry 4.0 tools and technologies. It is assumed that the Technical Solutions reported in Tables 3 and 4 can both support the previously mentioned Learning Contents and be learning items per se. The classification framework is finally filled with the specific learning contents that can be delivered in each of the analysed LFs, whose information and data have been retrieved from the current scientific and grey literature [6, 18–28]. The results of this study are shown in Tables 1, 2, 3 and 4: Tables 1 and 2 deal with the learning contents, and Tables 3 and 4 with the technologies.
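The two-axis structure can be made concrete as a nested mapping from learning factory to lifecycle stage to learning contents. The sketch below is purely illustrative: the factory names and cell entries are hypothetical placeholders, not the actual classification reported in Tables 1, 2, 3 and 4.

```python
# Lifecycle stages forming the first axis of the framework (Sect. 4).
LIFECYCLE_STAGES = [
    "R&D", "Supply", "Production/Assembly",
    "Distribution/Logistics", "Service", "Disposal",
]

# framework[learning_factory][stage] -> learning contents offered at that stage.
# The entries below are hypothetical placeholders for illustration only.
framework = {
    "Example LF A": {
        "Production/Assembly": ["flexible production", "quality control"],
        "Service": ["preventive maintenance"],
    },
    "Example LF B": {
        "R&D": ["digital planning"],
        "Distribution/Logistics": ["traceability", "warehouse management"],
    },
}

def coverage(stage):
    """Return which learning factories cover a given lifecycle stage, and how."""
    assert stage in LIFECYCLE_STAGES, f"unknown stage: {stage}"
    return {lf: cells[stage] for lf, cells in framework.items() if stage in cells}

print(coverage("Production/Assembly"))
```

A query such as `coverage("Disposal")` directly exposes gaps in the matrix, which is how under-covered lifecycle phases are identified in the discussion.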
398
S. Vailati et al.
Table 1. Learning factories classification – Learning Contents – Macro-trends and business objectives

Table 2. Learning factories classification – Learning Contents – Operations management practices

[Tables 1 and 2 map each of the eleven learning factories analysed (SEPT, AutFab (Darmstadt), Università di Bolzano – Mini Factory, BI-REX Bologna, i-Fab, AIE Stuttgart, AAU Smart Production Aalborg, TU Vienna, I4.0 Lab, Stellenbosch Learning Factory (SLF), SLIM Unibg) onto the product lifecycle stages (R&D, Supply, Production/Assembly, Distribution/Logistics, Service, Disposal). Recurring learning contents include flexible production, production monitoring and control, quality control, cost evaluation, traceability, inventory level control, preventive maintenance, cybersecurity, and end-of-life management.]
Table 3. Learning factories classification – Technologies – Software

[Table 3 lists, for each learning factory and lifecycle stage, the software technologies implemented. Recurring entries include RFID, ERP, MRP, MES, PLC, HMI, IoT, OPC UA, TCP/IP, Digital Twin, Big Data, Cloud, VR/AR, and CAD/CAM.]
Table 4. Learning factories classification – Technologies – Hardware
RFID: Radio Frequency Identification, OPCUA: Open Platform Communications Unified Architecture, TCP/IP: Transmission Control Protocol/Internet Protocol, HMI: Human Machine Interface, MES: Manufacturing Execution System, ERP: Enterprise Resource Planning, IoT: Internet of Things, PLC: Programmable Logic Controller, Cobot: Collaborative Robot, AGV: Automated Guided Vehicle, AMR: Autonomous Mobile Robot, CAD: Computer Aided Design, CNC: Computer Numerical Control, QR Code: Quick Response Code, AR: Augmented Reality, VR: Virtual Reality.
5 Discussion

The findings of the LF categorisation are discussed hereafter, with an emphasis on the framework parts that are more thoroughly covered, on overarching themes, and on gaps. First, the most populated column across all the tables is Production/Assembly: most LFs focus on a production line and its management. Tied in second place are Logistics and Research & Development. Supply, Service, and Disposal are the least populated columns, with only a few laboratories addressing these aspects.

Looking at the tables by rows, most of the observed LFs are designed to cover the broadest possible range of learning objectives. Most of them cover two or three phases of the lifecycle, but none addresses all aspects simultaneously. This is quite interesting and reflects current labour market requirements for skilled job profiles able to acquire transversal competencies covering different areas and to support job enlargement and easy job rotation.

If analysed by keywords, each cluster contains aspects, processes, or technologies that recur frequently. At the Learning Contents level, and in particular in the macro-trends and business objectives sub-category, automation, advanced manufacturing implementation, and productivity are the most frequently cited business objectives. Sustainability/energy efficiency, human-machine interaction and
safety/ergonomics frequently appear as business or research goals to be pursued with LFs. It is also interesting to note that these business objectives have often been classified under the R&D column, because the analysed information declared that they were also topics on which research activities would be developed using LFs as a research tool.

Moving to the operations management practices sub-category, a clear trend emerges: LFs are often used to analyse process characteristics such as performance, flexibility, reconfigurability, and quality control in the production/assembly phases of the product lifecycle, and traceability and inventory control in the distribution and logistics phases. For this sub-category, R&D focuses more on cost evaluation, while the supply phase relates to order and incoming material management. For the service phase, only maintenance and cybersecurity are considered in the analysed LFs. Finally, only two of the analysed cases, BI-REX and i-Fab, included end-of-life product aspects.

Analysing the second main category, i.e. Technologies, all the factories are designed to meet the Industry 4.0 standards previously presented in the context [1]. The main implemented software technologies are RFID, ERP, MRP, VR, Big Data, Digital Twin, Cloud, IoT, and MES, often linked with PLCs, which make data collection from the LF physically possible. This result, related to the main technologies used in the learning factories, is consistent with the one obtained from the analysis of the main operations management practices. Concerning hardware technologies, automated warehouses, cobots, AGVs and AMRs, 3D printers, Augmented Reality devices, and remote-controlled devices are the most commonly applied in the analysed LFs. Interestingly, much attention is given to human-machine interaction, and specific HMIs are often designed and tested in LFs.
6 Conclusion

This work aimed to propose a novel framework to classify learning factories and to analyse the currently available ones, with a particular focus on their learning objectives and the technologies implemented in them. The provided classification aims to fill a gap in the extant literature on LFs by giving academics and practitioners a systematic overview of the potential of developing Industry 4.0 learning programs supported by practical activities on LFs. Moreover, the analysis of 11 LFs worldwide showed that, in general, there is high coherence between the requirements of Industry 4.0 and the laboratories' characteristics. In particular, the analysed systems put a strong emphasis on Production, Logistics, and Research & Development, while paying less attention to Supply, Service, and Disposal, where a significant shortage remains. LFs also give great attention to elements such as sustainability, energy consumption, and human-machine interaction, thus orienting themselves towards the standards of Industry 5.0. Such topics, which a few years ago were not even considered, are becoming increasingly important; for instance, modern LFs are now based on industrial systems that run at low energy consumption levels.

The gaps that emerged from this research set the course for future developments in this field. Indeed, the analysis revealed gaps in the service, supply, and
disposal phases. All three phases are of key importance with regard to sustainability. Indeed, optimised maintenance ensures a longer service life for production machinery. At the same time, deepening the supply theme would improve aspects related to resilience and collaboration in the supply chain. Finally, working on the product disposal phase would strengthen the circularity of production processes. These aspects, together with the increasing relevance of human centricity in process design and management, would help companies move toward Industry 5.0, exploiting the possibilities offered by LF usage in workforce training.

Acknowledgements. This research has been funded by Regione Lombardia, regional law n° 9/2020, resolution n° 3776/2020.
References

1. Prinz, C., Morlock, F., Freith, S., Kreggenfeld, N., Kreimeier, D., Kuhlenkötter, B.: Learning factory modules for smart factories in Industrie 4.0. Proc. CIRP 54, 113–118 (2016). https://doi.org/10.1016/j.procir.2016.05.105
2. Baena, F., Guarin, A., Mora, J., Sauza, J., Retat, S.: Learning factory: the path to Industry 4.0. Proc. Manuf. 9, 73–80 (2017). https://doi.org/10.1016/j.promfg.2017.04.022
3. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J. Intell. Manuf. 1–56 (2018)
4. Osterrieder, P., Budde, L., Friedli, T.: The smart factory as a key construct of Industry 4.0: a systematic literature review. Int. J. Prod. Econ. 221, 107476 (2020). https://doi.org/10.1016/j.ijpe.2019.08.011
5. Zheng, T., Ardolino, M., Bacchetti, A., Perona, M.: The applications of Industry 4.0 technologies in manufacturing context: a systematic literature review. Int. J. Prod. Res. 1–33 (2020). https://doi.org/10.1080/00207543.2020.1824085
6. Simons, S., Abé, P., Neser, S.: Learning in the AutFab – the fully automated Industrie 4.0 learning factory of the University of Applied Sciences Darmstadt. Proc. Manuf. 9, 81–88 (2017). https://doi.org/10.1016/j.promfg.2017.04.023
7. Flores, E., Xu, X., Lu, Y.: Human Capital 4.0: a workforce competence typology for Industry 4.0. J. Manuf. Technol. Manag. 31, 687–703 (2020). https://doi.org/10.1108/JMTM-08-2019-0309
8. Prifti, L., Knigge, M., Kienegger, H., Krcmar, H.: A competency model for "Industrie 4.0" employees (2017)
9. Salvador, R., et al.: Challenges and opportunities for problem-based learning in higher education: lessons from a cross-program Industry 4.0 case. Ind. High. Educ. 37, 3–21 (2023). https://doi.org/10.1177/09504222221100343
10. Wagner, U., AlGeddawy, T., ElMaraghy, H., Müller, E.: The state-of-the-art and prospects of learning factories. Proc. CIRP 3, 109–114 (2012). https://doi.org/10.1016/j.procir.2012.07.020
11. Cachay, J., Wennemer, J., Abele, E., Tenberg, R.: Study on action-oriented learning with a learning factory approach. Proc. Soc. Behav. Sci. 55, 1144–1153 (2012). https://doi.org/10.1016/j.sbspro.2012.09.608
12. Abele, E., et al.: Learning factories for future oriented research and education in manufacturing. CIRP Ann. 66, 803–826 (2017). https://doi.org/10.1016/j.cirp.2017.05.005
13. Balve, P., Ebert, L.: Ex post evaluation of a learning factory – competence development based on graduates' feedback. Proc. Manuf. 31, 8–13 (2019). https://doi.org/10.1016/j.promfg.2019.03.002
14. Abele, E., et al.: Learning factories for research, education, and training. Proc. CIRP 32, 1–6 (2015). https://doi.org/10.1016/j.procir.2015.02.187
15. Stark, J.: Product Lifecycle Management (Volume 1): 21st Century Paradigm for Product Realisation. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-98578-3
16. Siedler, C., et al.: Maturity model for determining digitalization levels within different product lifecycle phases. Prod. Eng. Res. Dev. 15, 431–450 (2021). https://doi.org/10.1007/s11740-021-01044-4
17. Tonelli, F., Demartini, M., Loleo, A., Testa, C.: A novel methodology for manufacturing firms value modeling and mapping to improve operational performance in the Industry 4.0 era. Proc. CIRP 57, 122–127 (2016). https://doi.org/10.1016/j.procir.2016.11.022
18. Elbestawi, M., Centea, D., Singh, I., Wanyama, T.: SEPT learning factory for Industry 4.0 education and applied research. Proc. Manuf. 23, 249–254 (2018). https://doi.org/10.1016/j.promfg.2018.04.025
19. Matt, D.T., Rauch, E., Dallasega, P.: Mini-factory – a learning factory concept for students and small and medium sized enterprises. Proc. CIRP 17, 178–183 (2014). https://doi.org/10.1016/j.procir.2014.01.057
20. Madsen, O., Møller, C.: The AAU smart production laboratory for teaching and research in emerging digital manufacturing technologies. Proc. Manuf. 9, 106–112 (2017). https://doi.org/10.1016/j.promfg.2017.04.036
21. Hennig, M., Reisinger, G., Trautner, T., Hold, P., Gerhard, D., Mazak, A.: TU Wien pilot factory Industry 4.0. Proc. Manuf. 31, 200–205 (2019). https://doi.org/10.1016/j.promfg.2019.03.032
22. Steenkamp, L.P., Hagedorn-Hansen, D., Louw, L.: Framework for the development of a learning factory for industrial engineering education in South Africa. In: 28th SAIIE Annual Conference (2017)
23. Venanzi, R., et al.: Enabling adaptive analytics at the edge with the Bi-Rex Big Data platform. Comput. Ind. 147, 103876 (2023). https://doi.org/10.1016/j.compind.2023.103876
24. BI-REX Big Data Innovation & Research Excellence. https://bi-rex.it/. Accessed 04 May 2023
25. Cannas, V.G., Ciano, M.P., Pirovano, G., Pozzi, R., Rossi, T.: i-FAB: teaching how Industry 4.0 supports lean manufacturing. In: Rossi, M., Rossini, M., Terzi, S. (eds.) ELEC 2019. LNNS, vol. 122, pp. 47–55. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-41429-0_6
26. Application Center Industrie 4.0 | Institute of Industrial Manufacturing and Management | University of Stuttgart. https://www.iff.uni-stuttgart.de/en/research/infrastruktur_und_labore_en/applikationszentrum_industrie_4.0/. Accessed 04 May 2023
27. SLIM Lab UNIBG. https://slim.unibg.it/. Accessed 04 May 2023
28. Industry 4.0 Lab POLIMI. https://www.industry40lab.org/assets. Accessed 04 May 2023
Development and Stress Test of a New Serious Game for Food Operations and Supply Chain Management: Exploring Students' Responses to Difficult Game Settings

Davide Mezzogori1, Giovanni Romagnoli2(B), and Francesco Zammori2

1 "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, v. Vivarelli 10, 41125 Modena, MO, Italy
2 Department of Engineering and Architecture, University of Parma, v.le delle Scienze 181/A, 43124 Parma, PR, Italy
[email protected]
Abstract. Serious games (SGs) in engineering education are an established topic, whose adoption has grown significantly in recent decades. They are recognized as effective tools to teach and learn subjects like Operations and Supply Chain Management. Research on SGs, however, primarily focuses on reporting applications and teaching results of particular games for given purposes. In this paper, we provide exploratory research and a stress test of a new SG in the field of food operations and supply chain management, on a specific target group. We give an overview of the SG and detail its mechanics. We also explain how the mechanics have been implemented, by means of a set of parameters and indicators that clarify the roles available to players in the game. We conclude by reporting and discussing the results of a game session played by a class of Vocational Education and Training students under stress conditions generated by an accelerated game time.

Keywords: Serious Game · Experiential Learning · Operations Management · Supply Chain Management · Food · Stress Test
1 Introduction

Serious Games (SGs) have been intensively investigated in the recent past because they do not merely entertain players: they also hold learning potential and foster the acquisition of new skills [1]. SGs are effective didactical methods that improve the motivation and involvement of gamers, promote attention and collaboration, and develop technological competences [2, 3]. Several studies, in fact, suggest that SGs hold large potential [4], as they provide experiences that cannot easily be lived otherwise, due to time, cost, or risk issues [5]. Recent advances in laboratory theory and practice [6], experiential learning [7, 8], and information and communication technology [9] have generated a substantial increase in research on and application of SGs, which are ever more recognized as effective and immersive didactical methods [10].

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 403–417, 2023. https://doi.org/10.1007/978-3-031-43666-6_28
An important feature of SGs is that they allow students to test their knowledge and skills in a safer, simulated environment that does not generate as much stress as real ones [11]. Indeed, the behavior of players in SGs under stress conditions is an under-investigated topic, especially outside the field of virtual reality games [12–14]. A lack of experience or familiarity with digital games, especially if coupled with weighted grading, can be associated with intense stress and thus impede participants' motivation [15], or even lead them to stop using the engineering tools [16]. As an example, a framework built upon an SG as an experimentation mechanism is investigated by [17] for studying human behavior, searching for regularities in engineering decision making, stress claims, and methods against scenario perturbations. Furthermore, [18] states that SGs can help to cope with stressful situations, and that most existing games do not consider players' personality and stress level to adapt the game experience.

The present paper builds on previous research on the use of SGs for teaching and learning Operations and Supply Chain Management. We expand the work of [19, 20], where a cooperative multiplayer SG called Op&SCM is presented and a set of functioning parameters is introduced to allow teachers to make the game's dynamics easier or harder. Specifically, we present 'The Food Factory' (TFF), a new SG originating from Op&SCM. Its learning objectives, basic game elements, roles, challenge, and peculiarities are detailed in Sect. 2. Next, after its functioning parameters and performance indicators are discussed in Sect. 3, in Sect. 4 we provide the results of a stress test of TFF performed in March–April 2023 with a group of 21 students, who played for a total of 16 h. Although TFF is quite structured and complex, the 'correct' game behavior is quite predictable and likely generates satisfactory results.
In this preliminary test, the facilitators of the game session first explained the correct game operation. Next, students were exposed to the stress generated by a game session with accelerated game time, to test whether their behavior under these conditions followed the provided pattern. The goal, which is also the main research question of the present study, was to investigate whether students operating in stressful conditions follow the given game behavior and achieve the expected results. Finally, Sect. 5 draws conclusions and suggests potential future research directions.
2 The Food Factory (TFF): An Overview

In 'game-based learning', SGs are used to accomplish given learning outcomes by means of different elements and aspects (i.e., mechanics and dynamics, roles, scenarios, and game instruments), and by means of a learning activity presented to the learners as a game [21]. TFF targets bachelor students of operations and supply chain management courses, and it is designed for the food and beverage sector. In the game, players face several real-time challenges of a food manufacturing company. TFF has been designed by means of the well-known ARCS model, with ARCS standing for Attention, Relevance, Confidence, and Satisfaction.

2.1 Learning Objectives

The game design started from its learning objectives; after a game session of at least 16 h, learners will be able to:
– distinguish the alternative levers of action of each business area, understand their effects and consequences, and examine the characteristics of the simulated universe, given the accelerated time and the generated game events. Specifically, learners will:
a. indicate the importance of product tracking by experiencing the possible consequences generated by poor evaluation and tracking of product lot numbers;
b. analyze and contrast due dates and production lead times, to assess the purchasing and manufacturing needs for the feasible satisfaction of orders;
c. assess the consequences of poor manufacturing practices;
– conceive a strategy to successfully address a specific market;
– assess the results achieved by the selected strategy and, possibly, generate a new strategy that is more effective in addressing the target market.

2.2 Basic Elements of the Game

The game is a multiplayer web application in which players interact in response to events generated either by the game engine or by other players of the same team, in an environment where the passage of time is accelerated to allow for rapid interaction and for efficient and effective learning. The only game material required is a tablet with a web browser and an internet connection, as TFF mimics a simplified Enterprise Resource Planning system. As in [10], players are grouped into teams of 2–3 students, with every team representing a food manufacturing company (FMC). The portfolio of end products provided by each FMC is fixed and is composed of 6 different savory pies: (i) spinach pie, (ii) chard and rice pie, and (iii) vegan pie, each of which can be round- or square-shaped. The bill of materials of a square-shaped spinach pie is reported in Fig. 1. The FMCs compete in the same market and share the same suppliers and customers, characterized by a limited production and demand capacity, respectively.
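The two-level bill of materials just described can be represented, for illustration, as a nested mapping. The filling components below come from Fig. 1; the packaging items and the expansion helper are hypothetical additions, not TFF's actual data model:

```python
# Illustrative tabular BOM for the square-shaped spinach pie.
# Filling components are from Fig. 1; "square tray" and "film" are
# invented packaging placeholders for the sake of the example.
bom = {
    "square spinach pie": ["puff pastry", "spinach filling", "square tray", "film"],
    "spinach filling": ["eggs", "Parmigiano Reggiano", "chards", "spinaches"],
}

def leaf_ingredients(product, bom):
    """Recursively expand a product into its raw-material leaves."""
    if product not in bom:          # not a composite item -> raw material
        return [product]
    leaves = []
    for component in bom[product]:
        leaves.extend(leaf_ingredients(component, bom))
    return leaves

print(leaf_ingredients("square spinach pie", bom))
```

Expanding the pie yields its raw materials in BOM order: the pastry, the four filling ingredients, and the packaging items.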
All teams operate in a single-echelon supply chain; they must manage their relationships with suppliers and compete for the same customers' demand. Each FMC competes to satisfy a finite set of customer agents, which define the market demand in terms of quantities, due dates, shelf life, and price of the end products they request.

2.3 Roles and Challenge

Before starting operations, each team should try to understand the simulated game universe and define a proper strategy to increase its market share. Indeed, the goal on which the players are evaluated is profit maximization. Hence, considering that all teams share the same product catalogue and serve the same customers, the optimal choice should be to define a market segmentation strategy, focusing on a specific subset of the customers, as this increases the odds of succeeding in the market. A proper implementation of this strategy requires several different activities, both at the strategic and at the operational level. We start by noting that an especially important aspect of TFF is the game time. To maximize the number of decisions taken during a game session, an accelerated clock is used. In this study, we will refer to a normal game time when a 1/60 rate applies (i.e., a real minute corresponds to an hour in the game), and
Fig. 1. Tabular bill of materials of a square-shaped spinach pie. Quantities for a production lot are shown by clicking on any element of the bill (the filling is composed of eggs, Parmigiano Reggiano, chards, and spinaches).
to an accelerated game time when a 1/120 rate applies. Also, each player has a specific role within the company, with a precise goal and a preset array of tasks that he or she can perform. The following roles are available in TFF: purchasing manager (PM), operations manager (OM), and sales manager (SM). The PM must procure raw materials, manage stock levels, plan timely orders, and select the best supplier/material combination to meet production requirements in terms of quantity, purchasing cost, delivery date, and expected shelf life. The OM oversees production resources and schedules production orders, aiming to meet demand, which can come either from customers' orders or from sales forecasts. The production order planning form and the timeline of production resources available to the OM are reported in Fig. 2. The SM deals with customers' requests, places bids to win customers' orders, and manages shipments.

2.4 Peculiar Characteristics

Some peculiar aspects of the agri-food industry play a fundamental role in the game. These are: (i) products' shelf life, (ii) production lots, and (iii) product contamination. All products are fresh and have a limited life before they expire. To simplify the game, the system does not allow the use of expired products, but it does not get rid of them automatically: expired products remain in the warehouse and consume storage space until the players pay for their disposal. Although truly relevant, the expiration date is not the only constraint that players must consider. Indeed, when a customer places an order proposal, he or she specifies the desired unit price, delivery date, and shelf life, the latter being the remaining time (from the date of delivery) before expiration. All these elements can be bargained, but once the order has been accepted, the FMC agrees to comply with both the due date and the shelf life, and in case of infringement the company incurs a significant penalty.
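The two clock rates mentioned above translate real session time into game time. A minimal sketch, with an illustrative function name (not part of TFF's implementation):

```python
def game_hours(real_minutes: float, rate: int = 60) -> float:
    """Convert elapsed real minutes into game hours.

    rate=60  -> normal game time (1 real minute = 1 game hour)
    rate=120 -> accelerated game time (1 real minute = 2 game hours)
    """
    return real_minutes * rate / 60

# A 16-hour session (960 real minutes) at the accelerated rate
# spans 1920 game hours, i.e. 80 simulated days.
print(game_hours(960, rate=120) / 24)  # -> 80.0
```

Doubling the rate halves the real time available for each in-game decision, which is precisely the stress lever exploited in the test reported later.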
If the delivery date is exceeded, a penalty proportional to the delay is paid. If the shelf life is not respected, the order is rejected and the profit is lost. Also, due to the short expiration date of the products, it is almost impossible for the products of a cancelled order to be reused for another order. Hence, in addition to the lost sale and
Fig. 2. Combined view of the production order planning form (below) and of the timeline of production resources, on which the OM can schedule production activities.
to the sustained purchasing, production, and delivery costs, the company also incurs the costs of disposal. In the game, the shelf life of an end product is calculated as the smallest shelf life among the ingredients included in the product, as indicated in the BOM. This generates a heavy constraint on the choice of raw materials that can be used to fulfill a customer request and forces the purchasing manager to take great care of the shelf life of the raw materials he or she buys.

Each company must also deliver products that comply with sanitary and hygienic safety requirements. In case of bacterial or chemical contamination, purchase orders are immediately rejected. Contamination can be exogenous (i.e., due to contaminated raw materials) or endogenous (i.e., generated by production machines that were not appropriately sanitized). Endogenous contamination can be minimized by sanitizing the machine before processing a production batch: while cleaning reduces the probability of internal contamination, sanitizing eliminates it. Note that, unless the machine has been sanitized, the players do not know whether a batch has been contaminated, and they will find out only upon delivery. They can try to infer the contamination probability from the number of rejected batches. Concerning external contamination, suppliers always send a notification (shortly after delivery) to signal contaminated lots they shipped. Upon this, the batch can be disposed of or, if it has already been used in production and/or shipped to the customer, it can be recalled. In either case the lot is refunded, and the disposal/recall costs are not charged to the players. The critical point is that the notification is forwarded to all players, not only to the player who purchased the contaminated lot. This issue is critical, as it leads to
the concept of traceability. In fact, unless players keep track of the purchased lots, they cannot use the notification to identify the lots to be disposed of. Internal traceability is even more important: if players do not track the lots used in production, they cannot know whether a contaminated lot has been used to fulfill an order, and they cannot identify the lots to be recalled. Batches that have been delivered to the customer but have not been stocked yet can be recalled; however, attempting to dispose of a healthy batch by claiming reimbursement for contamination generates a high penalty.

It is thus clear that players must cooperate and coordinate their daily activities for an effective management of their FMC. They are also required to make joint decisions about their supply chain management strategy. TFF, in fact, is designed to convey the importance of communication and coordination among roles. To achieve this goal, the game interface does not give all team members full visibility of the decisions taken by their colleagues: it only reports partial and aggregated information about other roles, so as to encourage communication and coordination. For instance, before accepting an offer, the sales manager should interface with the operations and purchasing managers to find out whether the delivery date is feasible. Similarly, the purchasing manager should signal any contamination to the operations and sales managers, to trigger adequate countermeasures. Furthermore, a certain number of breaks is scheduled, during which the game time is paused and TFF is available in read-only mode. During these breaks, guided by the supervising instructor, players are encouraged to compare, discuss, and analyze their operations, and refine their future strategies.
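The traceability requirement described above amounts to keeping a usage log from purchased lots to customer orders. A minimal sketch, with class and method names that are illustrative rather than TFF's actual interface:

```python
from collections import defaultdict

class TraceabilityLog:
    """Records which purchased lots were used in which customer orders,
    so a supplier contamination notice can be mapped to recalls."""

    def __init__(self):
        self.lots_by_order = defaultdict(set)  # order id -> lot ids used

    def record_usage(self, order_id, lot_id):
        self.lots_by_order[order_id].add(lot_id)

    def orders_to_recall(self, contaminated_lot):
        """On a contamination notice, return every order that used
        the contaminated lot and must therefore be recalled."""
        return sorted(order for order, lots in self.lots_by_order.items()
                      if contaminated_lot in lots)

log = TraceabilityLog()
log.record_usage("ORD-1", "LOT-A")
log.record_usage("ORD-1", "LOT-B")
log.record_usage("ORD-2", "LOT-B")
print(log.orders_to_recall("LOT-B"))  # -> ['ORD-1', 'ORD-2']
```

Without such a log, the broadcast notification for "LOT-B" is useless to the team: they cannot tell which of their shipped orders, if any, must be recalled.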
3 TFF Mechanics: Functioning Parameters and Performance Indicators

3.1 TFF Game Parameters

An overview of the main parameters of TFF is provided in Tables 1, 2 and 3. Table 1 describes the parameters used to model the behavior of the 8 suppliers available in TFF. A total of 17 different raw materials (or basic ingredients) are considered, but only the 2 'generic' suppliers can sell them all, whereas the other 6 can only sell specific types of raw materials (e.g., packaging, dairy, flour, and farm products). These specific suppliers have higher sale prices, but they ensure a higher level of quality: the shelf life is longer, and the contamination probability is smaller. In this regard, based on past experience, we avoided using other quality indicators, so shelf life and contamination probability are the only parameters that define raw material quality. To mimic real-life data, raw materials are characterized by a contamination probability ranging from 0 to 5%, with higher values assigned to cheaper products, and vice versa. These values are stochastic but, to make the game more challenging, only the expected values (without variance or coefficient of variation) are provided to the players in a product list. Also, raw materials are available both in a standard and in a premium version. The price of premium products is on average 72% higher than that of standard products, but the shelf life is extended by 74% (see data in Table 1). The contamination probability, instead, depends only on the supplier, and it stays the same for the standard and premium versions. If a supplier has a product on its list, it can provide
Development and Stress Test of a New Serious Game for Food Operations
409
Table 1. Suppliers' parameters with minimum, average, and maximum values. If only the average is reported, the value is deterministic. The Units of Measurement (UoM) are number or kg.

| Field | Min | Avg | Max | Notes |
|---|---|---|---|---|
| Suppliers | – | 8 | – | 2 general suppliers, 2 farms, 2 packaging, 1 dairy and 1 flour supplier |
| Raw materials | – | 17 | – | 7 general products, 2 dairy, 2 flour, 6 packaging products |
| Cost [€/UoM] | 0.1 | 3.6 | 20.0 | Standard raw material with normal shelf life |
| Long shelf life cost [€/UoM] | 0.2 | 6.2 | 25.0 | Every raw material is also available in a 'long shelf life' option at a higher cost |
| Avg shelf life [days] | 11.0 | 23.9 | 60.0 | Stochastic shelf life calculated at receiving |
| Avg long shelf life [days] | 15.0 | 41.6 | 120.0 | Stochastic value |
| Contamination probability [%] | 0.0 | 3.5 | 5.0 | Almost all products have a given probability of being contaminated |
| Lot size | 1.0 | 11.2 | 60.0 | Dimension of lot in UoM |
both the standard and the premium version. For both versions, what was said above concerning general and dedicated suppliers holds true. It is then up to the players to optimize the trade-off between price and quality, depending on the orders they have won and the agreed delivery date and shelf life. This trade-off is indeed one of the main challenges and competitive levers that learners must deal with in the game. Table 2 reports the main production parameters and their costs. Production operations require 3 different machines: the sheeting machine (used to produce the puff pastry), the kneading machine (used to produce the fillings), and the combined forming machine (used to form and package the savory pies). All these machines are available in standard and super versions; the latter feature easier and more effective sanitization and shorter production cycle times. All machines share the same capacity, that is, they can process the same amount of product, and are designed to operate only at full load. At the very beginning of the game, players have no assets and must decide which machines to buy. At most 2 machines per kind can be purchased; the second one can be purchased at any time, but the sale of a used machine is not allowed. Also, at the beginning of the game each company is provided with warehouse space sufficient to store only one semi-finished product per type (i.e., only one batch of filling or pastry). Each company can increase the warehouse capacity, but to do so it must pay for a warehouse expansion. As for machines, each company can expand the warehouse at the beginning of the game or at any time thereafter. All these investments are reported in the first 5 rows of Table 2 with their annual depreciation rates, together with all other unit costs of operations. The strategic decision to buy additional machines and/or to expand the warehouse capacity is another fundamental trade-off that learners must
410
D. Mezzogori et al.
Table 2. Production parameters and corresponding costs. Possible Units of Measurement (UoM) are number of items or lots.

| Field | Cost [€/UoM] | Notes |
|---|---|---|
| Warehouse expansion | 5,000 | Annual depreciation cost |
| Standard sheeter or kneader | 10,000 | Annual depreciation cost |
| Standard forming machine | 30,000 | Normal baseline machine |
| Super sheeter or kneader | 15,000 | Improved machine (better sanitization and shorter cycle times) |
| Super forming machine | 40,000 | Improved machine |
| Setup | 50 | Unit cost if applied |
| Washing | 30 | Reduces the machine contamination level |
| Sanitization | 80 | Resets the machine contamination level |
| Raw materials disposal | 50 | Fixed cost per lot (on top of previous costs) |
| WIP disposal | 50 | Fixed cost per lot |
| End product disposal | 50 | Fixed cost per lot |
| Late delivery penalty | −10% | Up to a certain delay, the customer reduces the total sale price |
| Order cancellation penalty | 50% | Over a maximum delay, the customer cancels the order and requires a penalty |
| Penalty for contaminated delivery | 150% | This penalty exceeds the total sale price |
| Bidding cost | 40 | Per submitted bid |
| Production cost | 20 | Per scheduled operation |
| Transportation cost | 50 | In and out transportation cost per order |
solve. All these investments are highly costly, so it is almost impossible to recoup the investment made in acquiring excessive production capacity. Lastly, Table 3 reports the main parameters used to generate the customers' demand. There are 4 customers: 2 order vegan and non-vegan products, respectively, in any shape, whereas the other 2 order any kind of product, but only in circular or rectangular shape. The purchase price proposed by the customer is randomly generated, but it is determined in a way that always assures a minimum profit margin to the seller (i.e., there are no unprofitable orders). Prices can, however, be bargained: lowering them increases the likelihood of winning the bid, and vice versa. Players could therefore also implement a market penetration policy, using a sale price below production cost. However, if this policy is sustained for too long it will inevitably lead to bankruptcy, in full alignment with the educational goal of the game. Moreover, an over-reduction (i.e., a price much lower than cost) causes the bid to be lost. This is another lesson transferable to industry, since in a mature sector such as the agri-food
industry an extremely low price is always interpreted by the customer as a possible fraud or as an indication of a very low quality level. This constraint is also designed to prevent opportunistic behaviors, such as a team using extremely low prices to win all the offers. Although such a team would be 'naturally punished', as it would inevitably be doomed to bankruptcy, this behavior is inadmissible: it would prevent the other teams from playing, as they would have no orders to process. Delivery time and shelf life are stochastic too, and can be bargained, with a similar effect on the probability of winning the order. In this case, however, some infeasible orders are purposely generated, although rarely. These orders have a delivery date and/or a shelf life so tight that they can never be respected. Therefore, before responding to a customer's enquiry, players are forced to estimate the total lead time (LT) needed to complete a production order and compare it with the delivery date requested by the customer. We intentionally use the term 'forced' to emphasize that the only way to generate profit is to weigh customer offers carefully, choosing only the feasible and profitable ones. In this regard, a comparison between LT and delivery dates is fundamental, since a delay may lead either to a penalty or even to the refusal of the order. It is, however, a choice, and players are free to choose any orders they wish. We also note that the estimation of the lead time is facilitated by the Gantt chart of the production orders. We did not include more sophisticated aids for two reasons. First, TFF is an educational game, and we believe that this kind of calculation is fundamental for effective production planning and control. Second, since we wanted to test the effect of stress on decision making, oversimplifying this step would have been counterproductive, as it would have reduced the stress perceived by the players when managing an order.
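The feasibility screening that players must perform before bidding amounts to a simple comparison; the sketch below is our own illustration, and the operation durations, slack term, and function names are assumptions rather than TFF's internal logic:

```python
# Hypothetical order-screening helper: sum the durations of the sequential
# operations (sheeting, kneading, forming) plus a queueing slack, and compare
# the resulting lead time with the customer's requested delivery date.
def order_is_feasible(operation_days, requested_delivery_days, slack_days=0):
    lead_time = sum(operation_days) + slack_days
    return lead_time <= requested_delivery_days

order_is_feasible([1, 1, 2], 5, slack_days=1)   # True: a 5-day LT fits a 5-day window
order_is_feasible([2, 2, 3], 5)                 # False: a 7-day LT misses the date
```

Under the game's penalty scheme, an order that fails this check should be declined or re-bargained rather than accepted on the hope of catching up later.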
A similar reasoning applies to the shelf life requested by the customer, which must be carefully evaluated; failing to do so may lead to the acceptance of infeasible orders, with detrimental effects on the cash flow. We conclude this section by noting that, due to the way the game's parameters were set, the game logic is very constrained, meaning that the strategic and operational options that allow players to win are limited. Typically, in fact, players should choose a segmentation strategy to focus on a limited number of products and customers, and they should operate according to a make-to-order policy. Indeed, neither purchasing nor production orders can be anticipated (based on sales forecasts), as this would lead to high costs due to disposal and rejected orders. Furthermore, there is no room for opportunistic behaviors: it is not possible to stockpile products and/or to try to win most of the customer orders, as this would jeopardize the company, leading to a very rapid bankruptcy. Realism could certainly be improved, but these choices were made to ensure that players are forced to learn and understand the correct dynamics governing this market niche, in line with the previously defined learning objectives. A constrained game logic is also perfectly aligned with the research objective of this study, which is to measure the effect of stress on the choices made by the players. If many alternative choices were available, it would be difficult to understand whether a deviation from the correct behavior is stress-related or not. Making the game constrained, and giving players hints about the optimal choices they should make, makes the quantification of the stress effect easier. In TFF this is even more true, since the correct actions take time to implement (think, for example, of the time needed to track lots and/or to estimate
the production lead time). Therefore, it is possible that players get caught up in the frenzy and, to speed up operations, adopt easy but suboptimal rules of thumb.

Table 3. Customers' parameters and corresponding minimum, average, and maximum values. If only the average is reported, the value is deterministic.

| Field | Min | Avg | Max | Notes |
|---|---|---|---|---|
| Customers | – | 4 | – | 1 customer for standard products, 1 vegan, 1 square-shaped and 1 round-shaped |
| End products | – | 6 | – | See Sect. 2 for more details |
| Min unit price [€] | 7.4 | 8.0 | 9.0 | Calculated across all end products and customers |
| Max unit price [€] | 10.4 | 11.2 | 12.6 | |
| Min requested quantity [No.] | – | 240.0 | – | |
| Max requested quantity [No.] | – | 720.0 | – | |
| No. of customer requests per day in the game | 2.4 | 3.0 | 3.6 | Per number of active teams in the game |
| Delivery Dates [days] | 10.0 | 13.0 | 16.0 | Days from request to expected delivery |
| Shelf Life [days] | 6.0 | 8.0 | 10.0 | Minimum number of days from expected delivery to expiry date |
3.2 TFF Performance Indicators

Each company starts a game session with a budget of €150,000 and must invest at least €50,000 to buy 3 machines of distinct kinds (a sheeting, a kneading, and a forming machine). So, the net starting budget amounts to €100,000. Starting from these premises, we parametrized the production and delivery lead times, the average number of order proposals issued by customers per day, and the expected contribution margin per sale so that the initial investment can be recovered within a standard game session, which covers approximately 2 months of simulated time. Therefore, an excellent team should end with a positive balance of €150,000, whereas an ending balance of €100,000 should be the norm. Dropping below €50,000 indicates an inferior performance. The results of a pre-test conducted in late March 2023 with a single team of 3 students playing under normal game time confirmed this hypothesis, with a final balance of around €106,000. In addition to the cash flow and the ending balance, the following indicators were also measured to evaluate the performance of the teams.
– Number of successful sale events. The number of orders won by the company that were delivered in full and on time, without contamination and with an acceptable shelf life. This metric measures the capability to carefully select customers' orders, to correctly schedule production, and to avoid product contamination.
– Repayments due to contamination. If contaminated lots are correctly identified and/or recalled, they are fully refunded. Hence, this indicator measures the capability of the company to correctly track purchase, production, and delivery lots.
– Costs sustained for machinery purchases and warehouse expansions. A value that is too high (or too low) denotes an inability to properly interpret the market and choose an appropriate production capacity.
– Lot disposal. After their expiry date, ingredients cannot be used in production and must be disposed of at a cost. The higher this cost, the worse materials management was, i.e., excessive quantities or even wrong ingredients were bought.
– Penalties incurred due to contamination or late deliveries. In case of late and/or contaminated deliveries, penalty fees are incurred. Hence, this metric also measures the ability of a team to correctly schedule production and to correctly track purchase, production, and delivery lots.
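A sketch of how such indicators could be tallied from a company's event log as a (count, total value) pair per indicator; the event schema and field names are our illustrative assumptions, not the game's actual data model:

```python
# Each indicator is aggregated as a (number of events, total value) pair,
# mirroring the "value (count)" presentation used for the results tables.
def indicators(log, kinds=("sale_ok", "repayment", "machinery", "disposal", "penalty")):
    result = {k: (0, 0.0) for k in kinds}
    for event in log:
        count, total = result[event["type"]]
        result[event["type"]] = (count + 1, total + event["value"])
    return result

log = [
    {"type": "sale_ok", "value": 2_900.0},    # one successful sale event
    {"type": "repayment", "value": 2_500.0},  # a refunded contaminated lot
    {"type": "penalty", "value": 1_100.0},    # a late-delivery penalty
]
indicators(log)["sale_ok"]  # (1, 2900.0)
```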
4 Methodology, Results, and Discussion

In this study, we applied a qualitative research methodology, a common choice when significant uncertainty about a phenomenon exists and the goal is theory building [22]. More precisely, we adopted an action research approach, i.e., the 'systematic collection and analysis of data for the purpose of taking action and making change by generating practical knowledge' [23], coupled with a case-study experimental validation. Using this methodology, we report the results of the first gaming session, which allowed us to test TFF and preliminarily evaluate students' performance under stress conditions, i.e., under accelerated game time. The test was conducted from March 15 to April 7 with a class of 21 students from a post-college Vocational Education and Training course. Before playing the game, several instructions were given to all teams; in particular:

– The importance of tracking batches and considering contamination was explained. It was suggested that the purchasing manager should keep track of contamination notices issued by suppliers.
– The use of a make-to-order policy, with purchases deferred until an order is won, was recommended, given the high perishability of the goods.
– Players were advised to avoid collecting too many orders and then risking being unable to fulfill them, thus incurring steep penalties.

The results of the TFF performance indicators are reported in Table 4. According to the results of our pre-test, a minimum of 16 successful sale events (also considering possible repayments) was expected. However, no team reached this target, and the best one (C05) totaled only 10 successful sales (and repayments). Quite surprisingly, most companies proved to be more profitable than expected per sale event, as they outperformed the average expected turnover per sale event of €2,400.
This result suggests that most of the companies focused on the ‘most-promising orders’,
Table 4. Results of the preliminary test of TFF in difficult game settings. Values are in thousands of Euros, with the number of events in brackets. Negative values are highlighted in red.

| Field | C01 | C02 | C03 | C04 | C05 | C06 | C07 |
|---|---|---|---|---|---|---|---|
| Successful sale events | 18.7 (4) | 3.9 (1) | 4.0 (1) | 16.8 (5) | 26.9 (7) | 2.5 (1) | 20.4 (4) |
| Repayment | 1.8 (3) | 2.6 (3) | 2.5 (2) | 2.7 (3) | 2.9 (3) | − (0) | 5.8 (1) |
| Machinery purchase | 70.0 (3) | 50.0 (3) | 90.0 (4) | 70.0 (3) | 50.0 (3) | 110.0 (5) | 70.0 (3) |
| Warehouse expansion | 20.0 (4) | 25.0 (5) | 20.0 (5) | 20.0 (4) | 10.0 (2) | 30.0 (6) | 20.0 (4) |
| Lot disposal | 1.6 (32) | 2.6 (51) | 1.4 (27) | 1.5 (30) | 1.1 (21) | 5.3 (105) | 1.3 (25) |
| Penalties | 17.9 (4) | 8.8 (2) | 13.9 (3) | 16.9 (3) | 7.1 (2) | − (0) | − (0) |
| Final cash flow | 14.2 | −109.8 | 3.0 | 44.9 | 70.6 | −91.1 | 19.7 |
i.e., on the orders with high quantities and/or high unit prices. Another interesting point concerns the numerous 'mistakes' made by the teams in the first moments of the game, despite the advice given by the facilitators to avoid rushed and wrong decisions. For instance, 3 machines are more than enough to operate the company, but teams C03 and C06 bought 4 and 5 machines, incurring extra costs of about 40 and 60 thousand Euros, respectively. A similar behavior can also be observed in warehouse expansion, as nearly all teams decided to expand the warehouse at least 4 times, which is a very bad investment since a make-to-stock policy is not feasible. Lastly, we note that both the number of disposed lots and the number of penalties incurred due to contaminated or late deliveries are much higher than expected, with most teams disposing of several tens of lots, and several teams incurring penalties for problems with their deliveries totaling from €7,000 to almost €18,000. The only two teams that did not incur problems with their deliveries are C06 and C07. As a result, no team reached the target of €100,000 in their final balance, and the team that performed best achieved a result of about €71,000 which, as mentioned, is still below the expected target. All other teams ended with a final balance below €50,000, with a couple of teams ending up with a significant negative balance.
5 Conclusions and Future Research Directions

In this study we presented exploratory research that tested a new SG, The Food Factory (TFF), on a specific target group in the field of food operations and supply chain management. We described the SG in terms of learning objectives, game elements, roles, challenges, and performance indicators. Next, we reported the results of an exploratory stress test of TFF that involved a group of 21 students. As TFF is a SG where 'correct' game behavior is quite predictable and produces reliable results, we exposed students to the stress generated by an accelerated game time, to test whether their behavior under these conditions followed the expected pattern. Despite the thorough briefing and repeated advice, stress still played a significant role, reducing students' performance below expectations. Given the time constraints, in fact, most
teams did not follow the advice and preferred to equip themselves with more machines and more warehouse space than needed, thus consuming resources without producing useful results. They also purchased materials in advance, without paying attention to the bills of materials of the end products in the orders they had won. This choice also resulted in high disposal costs. Furthermore, the time constraints and difficult settings led almost all teams to neglect traceability, contamination control, and fluent communication. This led to excessive costs and a limited number of orders correctly completed and shipped on time. The students engaged in TFF attended a post-college Vocational Education and Training course and were quite inexperienced in food operations and supply chain management. For this reason, the test was repeated in May 2023 with students in the sixth semester of the Bachelor Degree in Management Engineering, who had already attended more courses on management, logistics, production, and supply chain. This further test, which will be disclosed in future studies, aims to assess whether the stress generated by an accelerated game time also consistently leads players with a different background to deviate from correct game patterns towards less rational policies. Finally, to increase TFF's realism and to offer a wider choice of SGs, the following features could be introduced in the game:

– Customer loyalty and retention. Customers have no memory of past orders, so anytime they ask for a quotation, they choose among the received offers based on price, quantity, delivery time, and shelf life. It could be interesting to include the company's reputation (e.g., based on the quality and punctuality of past deliveries) as a further decision criterion. This would make the segmentation policy even more effective and realistic.
– Frozen products.
To emphasize the concepts of 'expiration' and 'shelf life', companies can currently only sell fresh products. The product range could, however, be extended to frozen pies. This change would make the shelf life constraint less demanding, as freezing could push the expiry date significantly forward. Frozen products could thus be managed more easily, but with a significant increase in warehouse and shipping costs (i.e., frozen areas and transports) and therefore a lower margin at the same sale price.
– Quality controls. TFF does not allow players to know whether a purchased lot is contaminated before the supplier sends the corresponding notification. Similarly, there is no way to know whether internal contamination has occurred until the lot has been shipped and (possibly) rejected by the customer. It could be interesting to introduce these options by means of quality controls, both before and after production, which would increase production costs and times and consume part of the tested lot.

Indeed, we are working on some of these points in future research.
References

1. Hummel, H.G.K., Joosten-ten Brinke, D., Nadolski, R.J., Baartman, L.K.J.: Content validity of game-based assessment: case study of a serious game for ICT managers in training. Technol. Pedagog. Educ. 26(2), 225–240 (2017). https://doi.org/10.1080/1475939X.2016.1192060
2. Annetta, L., Mangrum, J., Holmes, S., Collazo, K., Cheng, M.T.: Bridging realty to virtual reality: investigating gender effect and student engagement on learning through video game play in an elementary school classroom. Int. J. Sci. Educ. 31(8), 1091–1113 (2009). https://doi.org/10.1080/09500690801968656
3. Van Der Zee, D.J., Holkenborg, B., Robinson, S.: Conceptual modeling for simulation-based serious gaming. Decis. Support Syst. 54(1), 33–45 (2012). https://doi.org/10.1016/j.dss.2012.03.006
4. Lewis, M.A., Maylor, H.R.: Game playing and operations management education. Int. J. Prod. Econ. 105(1), 134–149 (2007). https://doi.org/10.1016/j.ijpe.2006.02.009
5. Leonard, J.M., Wing, R.L.: Advantages of using a computer in some kinds of educational games. IEEE Trans. Hum. Factors Electron. HFE 8(2), 75–81 (1967). https://doi.org/10.1109/THFE.1967.233315
6. Esposito, G., Mezzogori, D., Reverberi, D., Romagnoli, G., Ustenko, M., Zammori, F.: Non-traditional labs and lab network initiatives: a review. Int. J. Online Biomed. Eng. 17(5), 4–23 (2021). https://doi.org/10.3991/ijoe.v17i05.20991
7. Burghardt, M., Ferdinand, P., Pfeiffer, A., Reverberi, D., Romagnoli, G.: Integration of new technologies and alternative methods in laboratory-based scenarios. In: Auer, M., May, D. (eds.) REV 2020. AISC, vol. 1231, pp. 488–507. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-52575-0_40
8. Pfeiffer, A., Lukarov, V., Romagnoli, G., Uckelmann, D., Schroeder, U.: Experiential learning in labs and multimodal learning analytics. In: Ifenthaler, D., Gibson, D. (eds.) Adoption of Data Analytics in Higher Education Learning and Teaching. AALT, pp. 349–373. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-47392-1_18
9. Gazis, A., Katsiri, E.: Serious games in digital gaming: a comprehensive review of applications, game engines and advancements. WSEAS Trans. Comput. Res. 11, 10–22 (2023). https://doi.org/10.12753/2066-026X-19-009
10. Romagnoli, G., Galli, M., Mezzogori, D., Reverberi, D.: A cooperative and competitive serious game for operations and supply chain management. Didactical concept and final evaluation. Int. J. Online Biomed. Eng. 18(15), 17–30 (2022). https://doi.org/10.3991/ijoe.v18i15.35089
11. Chachanidze, E.: Serious games in engineering education. In: The 15th International Scientific Conference eLearning and Software for Education, pp. 78–82 (2019). https://doi.org/10.12753/2066-026X-19-009
12. Bouchard, S., et al.: Modes of immersion and stress induced by commercial (off-the-shelf) 3D games. J. Def. Model. Simul. Appl. Methodol. Technol. 11(4), 339–352 (2014). https://doi.org/10.1177/1548512912446359
13. Finseth, T., Barnett, N., Shirtcliff, E.A., Dorneich, M.C., Keren, N.: Stress inducing demands in virtual environments. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 62(1), 2066–2070 (2018). https://doi.org/10.1177/1541931218621466
14. Cavalcanti, J., Valls, V., Contero, M., Fonseca, D.: Gamification and hazard communication in virtual reality: a qualitative study. Sensors 21(14), 1–18 (2021). https://doi.org/10.3390/s21144663
15. Khalili-Mahani, N., et al.: Reflective and reflexive stress responses of older adults to three gaming experiences in relation to their cognitive abilities: mixed methods crossover study. JMIR Ment. Health 7(3), e12388 (2020). https://doi.org/10.2196/12388
16. Cook-Chennault, K., Villanueva, I.: An initial exploration of the perspectives and experiences of diverse learners' acceptance of online educational engineering games as learning tools in the classroom. In: 2019 IEEE Frontiers in Education Conference (FIE), pp. 1–9 (2019). https://doi.org/10.1109/FIE43999.2019.9028605
17. Vermillion, S.D., Malak, R.J., Smallman, R., Fields, S.: Linking normative and descriptive research with serious gaming. Proc. Comput. Sci. 28, 204–212 (2014). https://doi.org/10.1016/j.procs.2014.03.026
18. Hocine, N.: Feedback and scenario adaptation of serious games for job interview training. Int. J. Comput. Digit. Syst. 12(3), 597–606 (2022). https://doi.org/10.12785/ijcds/120148
19. Galli, M., Mezzogori, D., Reverberi, D., Romagnoli, G., Zammori, F.: Experiencing the role of cooperation and competition in operations and supply chain management with a multiplayer serious game. In: Dolgui, A., Bernard, A., Lemoine, D., von Cieminski, G., Romero, D. (eds.) APMS 2021. IFIP AICT, vol. 633, pp. 491–499. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85910-7_52
20. Romagnoli, G., Galli, M., Mezzogori, D., Zammori, F.: An exploratory research on adaptability and flexibility of a serious game in operations and supply chain management. Int. J. Online Biomed. Eng. 18(14), 77–98 (2022). https://doi.org/10.3991/ijoe.v18i14.35083
21. Despeisse, M.: Games and simulations in industrial engineering education: a review of the cognitive and affective learning outcomes. In: 2018 Winter Simulation Conference (WSC), vol. 1, no. 1, pp. 4046–4057 (2018). https://doi.org/10.1109/WSC.2018.8632285
22. Haq, M.: A comparative analysis of qualitative and quantitative research methods and a justification for use of mixed methods in social research. In: Annual Ph.D. Conference, University of Bradford School of Management, pp. 1–22 (2014)
23. MacDonald, C.: Understanding participatory action research: a qualitative research methodology option. Can. J. Action Res. 13(2), 34–50 (2012). https://doi.org/10.33524/cjar.v13i2.37
Challenges for Smart Manufacturing and Industry 4.0 Research in Academia: A Case Study

M. R. McCormick and Thorsten Wuest

West Virginia University, Morgantown, WV 26505, USA
[email protected], [email protected]

Abstract. Smart Manufacturing and Industry 4.0 are driving the digital transformation of industrial processes. Barriers to entry pose challenges for researching these topics in academia. Due to the limited analysis of and case studies regarding these challenges, they are not well understood. This paper aims to improve that understanding through a first-hand case study of a lab at an R1 research institution, comparing the case study to related work, generalizing the underlying drivers of challenges, and identifying research gaps which limit research acceleration. Through the lens of practical applicability, this paper provides insights which aid researchers in avoiding and overcoming similar challenges.

Keywords: Smart Manufacturing · Industry 4.0 · Industrial Internet of Things · Digital Transformation · Cyber-Physical Systems

1 Introduction
Smart Manufacturing (SM) and Industry 4.0 (4IR) are driving the digital transformation of industrial processes through the application of advanced technologies. This digital transformation enables enhanced productivity, real-time data and analytics, flexibility and customization, optimized supply networks, energy efficiency and sustainability, and improved workforce development. These ideas and capabilities foster the potential to revolutionize the way manufacturing businesses operate, drive innovation, reduce costs, and deliver a sustainable future. As the global market becomes more competitive and resource constraints intensify, the adoption of these technologies will become increasingly vital for businesses to maintain their competitive edge and adapt to ever-evolving customer demands. At the same time, the successful and value-added implementation and leveraging of these technologies is fraught with challenges, especially in comparison to similar applications encountered outside of manufacturing. Consumer products connected to the Internet of Things (IoT) produce a wealth of information which is transmitted to downstream analytics in a user-friendly, seamless manner. Likewise, the development of software which enables product development, data collection, and analytics is fostered by a developer-friendly ecosystem with accessible and affordable, if not free, resources. As a result, the tech sector has exploded over the last 20 years.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 418–432, 2023. https://doi.org/10.1007/978-3-031-43666-6_29
However, this growth has yet to manifest similar outcomes in manufacturing. The technologies and capabilities which are expected and appear rudimentary in a consumer product context are infrequent, if not completely absent, in manufacturing environments. To digitally transform industrial processes in a similar fashion to consumer products and capitalize on emerging technologies, a core Industrial Internet of Things (IIoT) infrastructure which provides data collection, transmission, and storage capabilities is required to facilitate the downstream analysis and decision making which aim to deliver business value. One reason for the slower adoption of modern equipment on the shop floor is the prevalence of legacy systems. Manufacturing equipment and the hardware it is constructed from are designed to function for decades and do not exhibit short life cycles like those of consumer products such as smartphones; concepts like planned obsolescence are virtually unknown in machine tools. Because capital equipment supply chains require vast retooling to deliver evolving products for forward-thinking customers, both businesses and their suppliers face barriers to entry regarding the rudimentary IIoT infrastructure needed to collect the data which unlocks the analytics capabilities and resulting business outcomes promised by SM and 4IR. In lockstep, academic institutions conducting SM and 4IR research are facing challenges similar to those of their industrial counterparts. In addition, they are typically reliant on those counterparts to produce the equipment which enables that research. While academic research typically aspires to contribute outcomes which broadly influence industry, this circular dependency degrades the ability of academia to realize significant contributions.
This paper is organized as follows: it first examines Related Work, then defines the Method employed, presents the Case Study, offers a Discussion including a comparison to related work, and finally draws Conclusions from the prior sections.
2 Related Work

The existing academic literature analyzing the challenges of SM and 4IR implementation in industry and academia provides a wealth of valuable insights. For comparison and generalization, recent literature addressing challenges in academia was examined and summarized. A 2020 pilot study conducted prior to the construction of a 4IR university lab anticipated challenges centered on the maturity level of interoperability and integration, including i) limited external communication interfaces, and ii) limited documentation quality and thoroughness regarding integration and learning [3]. In addition, it concluded that standardization of communication technologies needs to be achieved and that, due to an apparent lack of understanding of interoperability, integration is currently not possible [3]. Additionally, a 2021 case study examined the brown-field retrofitting of a legacy injection molding machine in a university setting with the intent of creating a digital twin in the future [6]. In addition to the existing control system,
420
M. R. McCormick and T. Wuest
a second Programmable Logic Controller (PLC) was added, enabling read-only data acquisition and export by splicing into the existing control system's sensor wiring [6]. While a digital shadow of the system was implemented, a major future challenge was implied: the need to reimplement the control system to enable the bidirectional communication a digital twin requires [6]. A retrospective 2022 case study encompassing multiple projects in a university lab focusing on mechatronic and robotic applications of Augmented Reality (AR) identified three primary challenge areas: i) development environments, ii) communication, and iii) application scenarios [7]. Beyond AR-specific challenges, middleware was integrated between PLCs and downstream devices to address interoperability and communication challenges, driven by proprietary PLCs which could not be adapted [7]. However, this was limited by the lack of real-time capability of the industrial communication protocols in an AR context [7]. Finally, a 2016 study examined the construction of a 4IR lab environment focusing on an assembly process utilizing LEGO Duplo workpieces in a custom-built testbed which has been continuously refined since initial construction in 2001 [9]. The system consists of i) an Operational Technology (OT) level comprising a workpiece transport system with warehouse storage, industrial robotics, image processing, RFID, and PLCs, and ii) an Information Technology (IT) level comprising Customer Relationship Management (CRM), Supplier Relationship Management (SRM), and Manufacturing Execution System (MES) modules of a commercial Enterprise Resource Planning (ERP) system. The ERP system also included an Application Programming Interface (API) for extensibility [9].
While challenges were not specifically analyzed, a major one was implied: middleware will be needed to connect and translate data between the OT and IT levels within the confines of ISA-95 modeling and soft real-time requirements [9].
3 Method
A wealth of literature on SM and 4IR research in academic settings has been presented. However, the current literature primarily focuses on the subject matter under study and typically treats the presentation and analysis of challenges as a secondary component. Some papers require readers to infer challenges, and few provide explanations detailed enough to enable similar implementations. Meanwhile, although barriers to entry have been thoroughly analyzed in the context of industry, a transparent analysis of how the interactions between challenges in industry and academia affect academic research is lacking to date. To address this research gap, this paper presents challenges encountered while executing SM and 4IR research in an academic setting and compares them to both the stated and the implied challenges in related work. Furthermore, this paper aims to equip readers with insights that enable them to anticipate, identify, avoid, and overcome challenges in similar environments, as well as to execute effective project planning. Finally, this paper aims to contextualize challenges in academia by discussing how they interact with their industrial counterparts.

In practice-oriented fields, knowledge of a repertoire of cases enhances a practitioner's ability to combine personal experience with model cases to effectuate desired outcomes [4]. Case studies have been an effective tool for conveying model cases and come in three forms: i) Deductive (Hypothesis Testing), ii) Inductive (Theory Generating), and iii) Abductive (Naturalistic Generalization or Synthesizing a Case) [4]. Deductive studies validate or falsify a hypothesis, while inductive studies generate theories and concepts from facts and reasoning using methods such as Grounded Theory [4]. While deductive studies prove that a rule must be true and inductive studies conclude that a rule is probably operative in similar cases, abductive studies confront unexpected facts and posit that a case may be applicable in a given situation [4]. Since this paper retrospectively analyzes challenges without a preformulated hypothesis, deductive reasoning cannot be employed. Furthermore, rigorous quantitative methods employing inductive reasoning, such as Grounded Theory, cannot be employed due to a lack of forethought and anticipation of the encountered challenges. As a result of this process of elimination and the unexpected appearance of facts, a primarily abductive naturalistic generalization case study method is employed, in which a situation is compared with known cases, resulting in the ability to act based on the individual case [4].
4 SML Case Study
This case study examines the inception and maturation of SM research in the Smart Manufacturing Lab (SML) at West Virginia University. It begins with an introduction containing pertinent background information contextualizing later sections, followed by individual sections illustrating challenges related to technical expertise, security, code maintainability, hardware maintainability, capability, intellectual property, interoperability, and morale.

4.1 Introduction
The SML research testbed illustrated in Fig. 1 was imported from Europe. The system consists of eight compact, modular, and unique manufacturing stations, each with a Siemens S7 PLC and Human Machine Interface (HMI). The system functions as a discrete production line for a single configurable but non-customizable product. It also includes a simple, proprietary MES which enables order orchestration and metric tracking, including Overall Equipment Effectiveness (OEE). The intended use of the system is as an industrial-grade training system, focused on educating operators and technicians from a user perspective on topics such as operating and troubleshooting manufacturing equipment. Overall, the system is well constructed and delivers on the marketed promise of a turnkey, industrial-grade operator training system. However, several challenges were encountered when attempting to utilize the system as a research testbed.
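OEE, one of the metrics the MES tracks, is conventionally computed as the product of availability, performance, and quality. A minimal sketch of that arithmetic (the function and the shift figures below are illustrative, not the MES's implementation):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of its three factors,
    each expressed as a fraction in [0, 1]."""
    for factor in (availability, performance, quality):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("OEE factors must be fractions in [0, 1]")
    return availability * performance * quality

# Illustrative shift: 90% availability, 95% performance, 99% quality.
print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")
```

The multiplicative form makes the metric's sensitivity clear: three individually strong factors still compound down to a noticeably lower overall figure.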
Fig. 1. Smart Manufacturing Lab Testbed
While pricing is typically not publicly available and might be subject to a Non-Disclosure Agreement (NDA), turnkey testbed systems generally use industry-grade controls and materials and thus carry a substantial price tag, ranging from US$200,000 to in excess of US$1,000,000. After the system was purchased, it was received and installed by a contracted third-party integrator in 2018. On-site training was provided by the system vendor, focusing on the intended use of operator training. Multiple undergraduate students, graduate students, and post-docs, some with limited manufacturing experience, spanning majors such as electrical engineering, industrial engineering, and computer science, engaged in engineering research projects. These projects focused on, e.g., adding Arduino-based sensors to the system, using augmented reality [2], and augmenting native data collection by adding sensors to the product [5]. However, they did not attempt to utilize the native IIoT functionality of the system outside of built-in MES functionality, nor to modify the vendor-supplied PLC code to customize the system. The fruits of the vendor training and the lessons learned were not passed on to future researchers when those students graduated in 2019/2020, a loss amplified by the ongoing pandemic. In August 2021, multiple significant research projects targeting SM and 4IR research utilizing the testbed kicked off, at which time incoming graduate students had to develop an in-depth understanding of the system, its capabilities, and its limitations from scratch. One of the projects included a partnership with the system vendor as an industry partner, which facilitated access to vendor resources that would otherwise have been less accessible. A graduate student with a mechatronics background and limited industry experience initially made strides in developing an understanding of the system.
An additional graduate student, with a mechanical, electronics, and software background and significant industry experience, joined in January 2022 and spearheaded a second project. These two graduate students executed the legwork associated with understanding, reverse engineering, and overcoming the challenges associated with the system.
4.2 Technical Expertise
The first challenge was to accumulate the technical expertise necessary to execute this legwork. Limited resources and knowledge were available regarding the implementation of SM and 4IR solutions, and no coursework covered these topics. The team therefore had to rely on self-study to acquire the technical skills necessary to reverse engineer the system, develop IIoT infrastructure, and overcome many of the challenges faced. Students leaned on resourcefulness and autodidactic ability, enabled in part by self-starter attitudes, previous education, applicable industry experience, and the proactive identification and leveraging of external resources. An initial challenge was understanding the system's functionality from the perspective of an engineer, given that the available documentation was verbose, overwhelming, and written for operators and technicians. In addition, a lack of proficiency with PLCs, the IEC 61131-3 programming languages used, and the programming software limited the ability to utilize the available source code. As a result, the primary mode of developing an understanding of the system was reverse engineering. Furthermore, the lack of PLC programming software proficiency, coupled with a project settings mishap, led the English-speaking-only researchers to believe that the code documentation was in German. Upon discovery of a software setting that enabled switching between languages in the bilingual project, the value of the PLC code comments improved slightly. Finally, the lack of access to spare PLCs, HMIs, and industrial-grade components not integrated into the system, which could have been used for self-education and experimentation without negatively impacting the testbed, crippled efforts to gain proficiency.

4.3 Security
While a lack of proficiency with industrial controls can be overcome with access to hardware, additional practical considerations come into play in an academic environment. For instance, the Information Technology (IT) department regularly runs security vulnerability scanning software on the university network. Because the consumer PC running the MES software is connected to the university network for internet and cloud access, it is subject to vulnerability scans and remediations in response to identified vulnerabilities. During one scan, a third-party software dependency of software required by the system was found to exhibit a vulnerability, and IT requested an upgrade of the identified software to resolve the issue. When notified of the vulnerability, the vendor was unaware that a software dependency was affected, offered to upgrade the software suite, and indicated that some non-critical system functionality showcased in marketing materials would be lost because of the upgrade. To ensure that vulnerabilities identified in the future can be remediated, it is necessary to maintain the ability to accept software updates from the vendor while connected to the university network.
4.4 Code Maintainability
A critical component of SM and 4IR research is the flexibility to customize a testbed for experiments which expose the specific phenomena to be studied. The need to preserve the ability to accept vendor software updates introduces an additional challenge with respect to customizing the PLC code, and thereby the testbed as a whole. Specifically, customizing the PLC code hinges on the ability to maintain and update multiple codebases simultaneously, as when vendor updates must be applied to an already customized codebase. While modern programming languages such as C# and Python benefit from distributed version control systems such as git, which provide both the functionality and the strategies for maintaining multiple parallel codebases, the same cannot be said for the IEC 61131-3 languages. Many PLC software packages include plugins for git and other version control systems, but the specific implementations of these plugins, and the way files are managed in git, can produce dramatically different outcomes than with modern programming languages. Such outcomes include requiring a visualization tool to intuitively identify code changes, or limitations when attempting to merge changes stored in proprietary file formats.
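One mitigation, assuming the PLC project can be exported to text-based sources (e.g. structured text or PLCopen XML) rather than proprietary binary formats, is to track pristine vendor releases and local customizations on separate git branches and merge each vendor update into the customized line. A hedged sketch of that workflow (file names, branch names, and commit contents are invented):

```shell
set -e
git init -q plc-project && cd plc-project
git config user.email "lab@example.edu"   # illustrative identity
git config user.name  "SML Lab"

# 'vendor' branch: pristine vendor releases, imported as-is.
printf 'FUNCTION_BLOCK FB_Station\nEND_FUNCTION_BLOCK\n' > station.st
git add station.st && git commit -qm "vendor release 1.0"
git branch vendor

# Default branch: local research customizations.
printf '(* custom OPC UA tag map *)\n' > custom_tags.st
git add custom_tags.st && git commit -qm "add custom tag map"

# A new vendor release lands on 'vendor'...
git checkout -q vendor
printf '(* vendor hotfix *)\n' >> station.st
git commit -qam "vendor release 1.1"

# ...and is merged into the customized branch.
git checkout -q -
git merge -q vendor -m "merge vendor release 1.1"
git log --oneline
```

Whether this works in practice depends entirely on the export format: binary project files reduce git to whole-file snapshots with no meaningful diff or merge, which is precisely the limitation described above.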
Hardware Maintainability
Similar to code, maintenance of the system was underrepresented in the available documentation. For example, galling and material transfer between actuating aluminum components caused those components to seize. As a result, disassembly of the station, including re-finishing operations and lubrication, was necessary to restore motion. While this procedure was rudimentary given the students' skill level and prior industry experience, many faculty and graduate students lack the ability or confidence to engage in activities they perceive may damage components. In addition, research budgets typically do not include funds for updating or repairing broken testbeds, and the risk of losing future funding for want of a functioning testbed is a consideration. Regardless of whether these fears are legitimate or unfounded, the cost and time commitment of overcoming such challenges increase dramatically when external resources are required. Moreover, researchers may be prohibited from performing maintenance by university policy or state law; at some universities, for example, only the maintenance department may hang a framed poster on a wall using a drywall screw.

4.6 Capability
Regardless of the language used and the maintainability of the system, a core feature of the system as an educational tool is the ability to reuse the same workpieces indefinitely, without purchasing additional workpieces over time. This is achieved by performing no machining operations on, and inducing no permanent deformation of, the workpieces. While optimal for training purposes, this creates inherent limitations for research, since the system does not actually modify the workpieces beyond a snap-fit assembly operation. Researchers are left to choose between accepting the constraint of an assembly-only operation and incurring the cost of an additional workpiece for each trial. Since SM and 4IR research is inherently data-hungry, the cost of the additional workpieces required to execute related research can become exorbitant. Devising measurable, reversible, and non-destructive changes to workpieces to avoid this cost proved an extremely challenging endeavor, and this is a major limitation when attempting to utilize this type of testbed for manufacturing research.

As an extension of the maintenance concerns, and in an attempt to overcome the challenges related to capability, consideration was given to developing an industry-grade testbed from scratch. By leveraging a graduate student with years of cradle-to-grave industry experience designing products, the processes to manufacture those products, and the equipment to execute those processes, a custom testbed tailored to specific goals could be realized. However, with a single individual destined to graduate and leave after designing, manufacturing, assembling, troubleshooting, and operating the one-of-a-kind testbed, it became evident that both faculty and future students would be incapable of maintaining or modifying the system. Due to the limited ability to assess the student's competency to complete the task, the time and expense required, and the cost-prohibitive constraint of requiring external support after the student left, the idea was abandoned.

4.7 Intellectual Property
Similarly, when initiating research targeting 3D digital twins with the intent to publish the results, an effort was made to reduce the required labor by requesting Computer-Aided Drafting (CAD) models of the system from the vendor. The vendor conveyed that they would share their models only under an NDA. Since an NDA limits the ability to freely publish and present research results, and considering that more than 50% of the components required to model the system had CAD models freely downloadable from the internet without restriction, including from the vendor's own website, the decision was made not to use the vendor's models and to incur the additional labor of modeling the required components.

4.8 Interoperability
In line with the intended use as a turnkey training system, the PLC logic of the individual stations is dependent on, and requires communication with, the provided proprietary MES to orchestrate order management. Because the MES identifies which workpieces are available in each station, directs workpiece transportation between stations, and dictates which manufacturing operations each station should execute, the system cannot function without the MES software. Since the MES software is proprietary and closed source, does not include an Application Programming Interface (API) enabling extensibility or automation, and the vendor would not share the source code, the ability to design and execute experiments is limited to the existing functionality of the software. Additionally, the PLC logic prevents external devices from modifying tags through the available Open Platform Communications Unified Architecture (OPC UA) interface while under the control of the MES, which further limits the ability to design and execute experiments. The system does allow read-only data collection, but without the ability to automate order creation or integrate additional control features, this functionality is of little value. While modifying the PLC code was considered to overcome some of these challenges, it was ultimately avoided due to the lack of access to spare hardware and the educational investment required to gain proficiency, the anticipated challenges of maintaining multiple codebases, and the limitations which the proprietary MES would still impose even after PLC code customization.

The creative use of unintended functionality ultimately yielded the most flexible method for executing research. The PLC logic includes multiple modes of operation, including an 'automatic' (or 'MES') mode which is coupled with and requires the MES, and a 'manual' mode, intended for operator training in manual actuation and for debugging, which does not interact with or require the MES. While reverse engineering the IIoT connectivity, it was noticed that in 'manual' mode OPC UA tags could be both read and written, whereas 'automatic' mode enforced read-only access.
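The remote I/O pattern this enables can be sketched as follows. The tag identifiers, endpoint, and interlock logic are invented for illustration, and the client calls follow the open-source python-opcua package rather than any vendor software; the key design point is keeping the decision step free of I/O so it can be tested without a PLC:

```python
def next_command(part_present: bool, clamp_closed: bool) -> dict:
    """Pure decision step: map current sensor tags to actuator commands.
    (Illustrative interlock logic, not the testbed's actual behavior.)"""
    if not part_present:
        return {"clamp": False, "conveyor": True}   # advance the line
    if not clamp_closed:
        return {"clamp": True, "conveyor": False}   # hold the arriving part
    return {"clamp": True, "conveyor": False}       # part held; wait

def control_loop(client, read_ids, write_ids) -> None:
    """One iteration: read sensor tags, decide, write actuator tags.
    `client` is any object exposing the python-opcua Client's get_node API."""
    part = client.get_node(read_ids["part_present"]).get_value()
    clamp = client.get_node(read_ids["clamp_closed"]).get_value()
    for tag, value in next_command(part, clamp).items():
        client.get_node(write_ids[tag]).set_value(value)

def run_once(endpoint: str, read_ids: dict, write_ids: dict) -> None:
    """Connect to a PLC left in 'manual' mode and run one iteration.
    Requires the python-opcua package and a reachable OPC UA server."""
    from opcua import Client  # deferred import: only needed for live use
    client = Client(endpoint)  # e.g. "opc.tcp://192.0.2.10:4840" (placeholder)
    client.connect()
    try:
        control_loop(client, read_ids, write_ids)
    finally:
        client.disconnect()
```

In a setup like the one described here, a loop of this shape supplants the stations' business logic entirely, with node identifiers taken from the vendor's PLC program; hiding the OPC UA client behind the small `control_loop` seam also allows the same logic to drive a simulator.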
When read-only access frustrated attempts to implement the bidirectional communication required for a digital twin, the idea of utilizing 'manual' mode was examined, weighing the cost of losing both the native PLC logic and the MES functionality. Ultimately, the decision was made to replace the local PLC business logic with a remote Python application which interfaces with the existing PLC logic in 'manual' mode through OPC UA. By effectively using the existing PLC as a remote I/O module and moving the business logic to a Python application running on a networked consumer PC, the flexibility of the system for research purposes was dramatically improved. Using this method, and serving as a proof of concept, an energy digital twin of a single station with real-time bidirectional control communication was successfully implemented and presented in April 2023 [1]. To regain functionality equivalent to the original MES and enable extensibility beyond it, the development of a new, customizable MES tailored for research applications is required and planned. This method also harbors limitations with regard to Operational Technology (OT) security research, as well as other research targeting direct application in industry, since it does not represent OT hardware as currently utilized in industry. Most importantly, since this method requires no modifications to the testbed, researchers can easily switch back and forth between the native MES software coupled with the PLC logic, by setting each HMI to 'automatic' mode, and the Python-based business logic communicating with PLCs functioning as remote I/O modules in 'manual' mode. This segregation keeps the two codebases completely independent and separately managed, with one receiving updates from the vendor. In addition, it enables additional devices and sensors, including new control capabilities, to be added to the system without impacting native functionality.

4.9 Morale
While not a technical challenge itself, the morale of researchers can be affected by the quantity and severity of challenges. However, through the process of constructing and analyzing a case study explicitly focused on challenges encountered during implementation, an unexpected result emerged: a mindset shift occurred in which every challenge, roadblock, or failure encountered was transmuted into additional case study content, and therefore into a success. Even though academic literature focusing on reframed failures typically carries less merit and is less publishable, this tool carries therapeutic value: it can vanquish frustration and reframe losses as wins, thereby improving morale.
5 Discussion
This section begins with a comparison to Related Work, generalizes underlying drivers for challenges, and finally identifies research gaps.

5.1 Comparison to Related Work
This case study correlates strongly with the challenges identified and implied in the Related Work section. While the 2020 pilot study [3] was preliminary and could not convey insights through implementation results, it did properly anticipate challenges with respect to communication, as illustrated in the Interoperability section of the SML case study. However, the SML case study indicates that the nature and magnitude of the anticipated challenges may have been off base, hampered by an admittedly limited understanding of interoperability and a lack of actual implementation. This might be overcome in part through i) accessible educational material and ii) using simulators to prototype the system before procuring hardware. With an enhanced understanding of interoperability resulting from prototyping, researchers would be better able to develop requirements and specifications for procured and retrofitted equipment, documentation, and training.

Similarly, the injection molding retrofit case study [6] faced challenges addressed in the Interoperability section. While injection molding inherently offers significantly more capability in terms of process variability, the lack of control functionality limited the ability to capitalize on this capability through tailored experiments. This imposes a complex challenge with regard to devising a new control system to enable this capability, and ultimately inhibits researchers from achieving the goal of a digital twin. While the SML case study focuses on an assembly process with low sensitivity to latency, the proposed method for overcoming interoperability challenges may not be applicable to processes such as injection molding, which are more sensitive to latency and have more stringent real-time requirements. However, the injection molding case study exhibits the strongest capability potential for process-focused research, since permanent deformation is incurred by the workpiece.

A similar approach to overcoming interoperability challenges was also employed when implementing AR with mechatronics and robotics [7]. However, a key distinguishing feature is that while middleware was implemented as a translation mechanism between AR and control hardware, the control logic remained on the control hardware. Due to the nature of robotics and AR, this may impose even more stringent real-time requirements than injection molding and further limit flexibility, even if the proprietary PLCs were customizable. In contrast, the method proposed in the SML case study moves the control logic from the PLCs to the middleware, which improves flexibility at the cost of deviating from typical industrial practice. This method can also provide protocol translation if needed.

Finally, the case study examining the custom-built testbed which assembles LEGO Duplo workpieces [9] shared many similarities with the SML case study. The construction of a custom-built testbed suggests variability in technical expertise across disciplines and institutions, as well as in the ability or willingness to construct and maintain custom testbeds.
However, since construction of this testbed commenced in 2001 and the resulting study was published in 2016 with the system still incomplete, as indicated by the need for middleware to connect the OT and API-enabled IT layers, significant time appears to be required to develop and maintain technical expertise, as well as to implement and maintain a custom testbed built on that expertise in an academic setting. Regardless of expertise and maintainability, the modularity of products, and therefore of assembly operations, enabled by LEGO Duplo workpieces far exceeds the flexibility and capability of the workpieces in the SML case study, even though both testbeds are limited to assembly operations. By extension, neither is readily capable of inducing permanent or reversible changes in the workpiece comparable to injection molding or similar processes.

5.2 Generalizations
While each has distinct advantages and disadvantages, the relationship between academia and industry is tumultuous, with each holding strong opinions and perceptions of the other, yet synergistic, with each contributing substance the other lacks. Through partnerships, this has the potential to produce an outcome greater than the sum of its parts, especially in a SM and 4IR context. To understand and realize this potential, it is necessary to understand the current state of and challenges in industry, the current state of and challenges in academia, and how those states and challenges interact. Given industry's expectation that academia will produce the skilled workers to satisfy the pressing need in industry, there is a mutual understanding and objective of educating students in SM and 4IR [8]. To bridge these gaps, a stronger relationship between academia and industry must be forged.

While industry is primarily concerned with economic incentives, academia is primarily concerned with research directions aligned with funding agencies' priorities. For better or worse, this detachment from economic incentives enables the pursuit of research on intellectual rather than economic merits, and thereby carries the capacity to develop concepts and technologies which may benefit numerous entities (i.e., be politically or socially profitable) without being economically profitable. By leveraging this double-edged sword in the context of SM and 4IR research, an opportunity is presented for academia to address research gaps which have no economic incentive, or, through industry partnerships, to supplement and facilitate research which otherwise has limited economic incentive. Since academia is largely dependent on industry for testbed production and for the hardware which enables the customization of existing and the construction of custom testbeds, the advancement of required technologies and hardware carrying limited economic incentive for industry could be facilitated by partnerships between academia and industry. These partnerships could break the circular dependency which currently limits the acceleration of academic research and industry advancements by capitalizing on the distinct advantages of each party. To provide a high-level conceptual understanding of the drivers for the associated challenges and the interactions between industry and academia, the following three subsections generalize this case study and the related work into the most pressing challenges for SM and 4IR research in academia.

Data Access.
SM and 4IR research requires significant amounts of data to feed data-hungry emerging technologies. In academia, manufacturing data is especially hard to come by, and many researchers rely on publicly available static datasets. However, these datasets limit research capabilities, since researchers cannot define and execute their own experiments to generate significant quantities of novel data tailored to a specific problem. Since researchers are driven by formulating novel solutions to problems, which can include identifying and solving novel problems, the formulation and testing of those solutions may explicitly require the generation of novel data, especially where an interactive or adaptive control component is involved. For these reasons, and in lieu of industry partnerships providing authentic manufacturing data, the ability to generate novel data using manufacturing testbeds offers significant advantages over static datasets. In some cases, conducting the research at all requires a manufacturing testbed. To generate manufacturing data in academia, researchers must overcome three initial barriers to entry: i) procuring equipment, ii) procuring infrastructure, and iii) configuring equipment and infrastructure. Prior to procurement, researchers need to understand how equipment and infrastructure will be connected and configured in order to develop a specification for the technologies and features the procured equipment and infrastructure should have. If existing or legacy equipment and infrastructure are incorporated into a new research initiative, technical expertise and a thorough survey of the existing hardware are necessary to understand current limitations and necessary upgrades.

Technical Expertise. The nature of research is that it rides the leading edge of advancements and requires both novel technologies to facilitate that research and subject matter expertise to execute it. Since most academic researchers do not have the skill, time, or funding to design and manufacture the technologies which facilitate their research, their only option is to lean on industry to provide them. Frequently, the companies which provide these technologies specialize in specific niches. Since both the research team and the equipment builder contribute knowledge and resources which the other lacks, a symbiotic relationship is fostered in which the equipment builder facilitates research by providing technologies while academia drives refinement of those technologies to meet advancing research requirements. Subject matter expertise, on the other hand, is an entirely different story. In academia, the continuous turnover of the graduate students who execute the bulk of the legwork results in a continuous loss of knowledge. Unlike industry, where tribal knowledge is the primary mechanism for knowledge transfer from seasoned veterans, the knowledge transferred through literature in academia is continuously generated by students who have yet to reach the level of subject matter expert. Upon reaching that level, only a small fraction of these students continue in academia, where universities struggle to recruit and retain talent, resulting in a loss of expertise.

Funding.
Regardless of expertise, the considerable initial investment required to initiate SM and 4IR research, as well as the cost to maintain it, is a major challenge. Whether researchers have already procured a testbed, developed infrastructure, and demonstrated proficiency can influence their ability to acquire funding. Without a testbed or infrastructure, this challenge can appear insurmountable. Even after procuring a testbed, it can take more than a year to establish the knowledge, infrastructure, and connectivity necessary to execute research. As a result, multi-year funding with built-in expectations for establishing a foundation has significant advantages over single-year funding, which may consume a significant portion of its duration merely achieving the capability to execute research. In addition, proposals for short-term funding without infrastructure already in place may be at risk of not being funded, since a large proportion of the funds will go toward establishing infrastructure rather than the research itself.
5.3 Research Gaps
While several research gaps have been implied in both the related work and the case study, this section provides a succinct list of research gaps whose resolution would accelerate future research:
Challenges for Smart Manufacturing
431
– How to overcome the lack of technical expertise required to execute SM and 4IR research among faculty.
– How to create impactful SM and 4IR Science, Technology, Engineering, and Mathematics (STEM) experiences for high school and university students.
– How to define, identify, recruit, and retain optimal candidates for SM and 4IR research.
– How to define technical requirements for and acquire green-field testbeds for SM and 4IR research.
– How to transform brown-field manufacturing equipment into SM and 4IR research testbeds.
– How to optimally design manufacturing equipment for SM and 4IR to facilitate both industry and academic use.
– How to analyze, measure, test, benchmark, quantify, document, evaluate, and compare testbeds, hardware, software, infrastructure, technologies, and architectures.
5.4 Method Limitations
This case study has a number of limitations with regard to research method and content. First, the content of the paper is biased by the industry experience, research experience, and perspectives of the primary author, who was responsible for addressing the vast majority of the challenges presented in this case study. While the authors’ intent is to convey content transparently and in a balanced fashion, these biases are inextricably tied to the philosophies and viewpoints presented herein. As a result, while the perspectives are grounded in an arguable understanding of both industry and academia, the perspective of faculty and administrators is limited. Additionally, this paper was initiated and constructed retrospectively during the research, and a rigorous methodology was not established before the research commenced. Consequently, this case study was informally composed after the challenges were addressed and only later restructured with a formal case study methodology in mind.
6 Conclusions
By providing a summary of related work and a detailed first-hand case study illustrating the current state of and challenges for SM and 4IR in academia, this paper arms readers with the foreknowledge to avoid and overcome challenges when conducting similar research. Additionally, a comparison between related work and the presented case study highlights correlations and differences between the cases which enhance that foreknowledge. By generalizing the concepts underpinning the challenges in related work and the presented case study, readers gain an understanding of the high-level drivers of those challenges. Moreover, the extrapolation of lessons learned into research gaps paves the way forward for research accelerators. Finally, this paper lays the foundation for future work, including the expansion of this paper with an extended comparison with related work,
a deeper analysis of the relationship between industry and academia and its effect on academic research, and additional case studies including infrastructure commissioning and a hybrid additive/subtractive testbed. Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant No. 2119654. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The authors express their appreciation and gratitude to Dr. Mohammed Shafae, as well as APMS conference reviewers, for their feedback and resulting improvements.
Report on Integrating a COTS Game in Teaching Production and Logistics

Jannicke Baalsrud Hauge1,2(B) and Matthias Kalverkamp3

1 KTH Royal Institute of Technology, Södertälje, Sweden
[email protected], [email protected]
2 Bremer Institut für Produktion und Logistik GmbH an der Universität Bremen, Bremen, Germany
3 Wiesbaden Business School, RheinMain University of Applied Sciences, 65183 Wiesbaden, Germany
[email protected]
Abstract. The experiential learning principle has a long tradition in engineering education. Within production and supply chain management as well as logistics, a primary learning goal is connected to the complexity of decision making and how the same decision may have different impacts depending on the context. Such decisions are complex and difficult to understand, and serious games have proven to contribute to this understanding. Many of the games used for teaching the relevant topics are typically applied in a workshop setting and have often been made specifically for a particular course. However, not all educational institutions have the possibility to develop tailored games, since their development requires multi-disciplinary knowledge and is costly and time-consuming. The use of commercial off-the-shelf games might be a solution. We know from existing work that this requires that the game can be modded or adapted to fit the intended learning outcomes of the course in which it is used. This article takes previous work on the integration of commercial off-the-shelf games into logistics, engineering and supply management education one step further, and reports on the first results of a full implementation. Keywords: Logistics Education · Experiential Learning · Production Management · Logistics and Supply Chain
1 Introduction

Games have been used to train and educate future engineering and business management students for decades [1, 2], either as an integral part of a course or as an add-on. However, even though experiential learning and the deployment of serious games have proven effective [3–7], the overall deployment in higher education is lower than in primary and secondary education [8]. At that level, many commercial off-the-shelf (COTS) games can be used to teach science, technology, engineering, and mathematics (STEM) subjects without much need for adaptation to the relevant intended learning outcomes, since the competencies to train and the learning goals are similar in many curricula. This is
© IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 433–445, 2023. https://doi.org/10.1007/978-3-031-43666-6_30
often different in engineering education at university level, where the curricula are more specialised and therefore tailor-made games are often used [9, 10]. The challenges and opportunities of tailored, in-house produced games versus modding of COTS games have been discussed in several articles: research has revealed that the transition between instructional design and the actual game design implementation relevant for pedagogy is lacking [11], and that the required facilitation competencies are high [12]. On the one hand, to investigate whether a COTS game can be adapted, re-engineered or modded, the complexity of the game elements and their relations to the intended learning objectives (ILOs) must be understood [13–15]. This takes time and requires a good understanding of the gaming concept [15, 16] as well as of the course topic [9, 12, 17]. On the other hand, tailored games are costly [3] and require trust in the ability of a game developer to deliver a game that will achieve the desired learning outcomes for a particular target group [3, 9, 15]. In summary, the challenges teachers face when considering reusing and adapting tailored serious games fall into several categories. First, the game implementation: a challenge is the lack of elasticity to be altered and/or extended to meet new requirements in terms of the specific learning objectives listed in a certain curriculum. Most teachers are not game developers or programmers, they have varying experience in game-based learning [15, 16], and the whole process of identifying, analysing and changing a game is highly complicated. In addition, it is difficult to obtain the required information [3, 4, 16, 18, 19]. To some extent, taxonomies [10, 20] may help to overcome these challenges. 
Second, a further challenge related to re-using and adapting tailored games is the change in the interaction between learning mechanics (LM) and game mechanics (GM), and how this impacts both the immersiveness of the game play and the learning taking place while playing. Since a potential change in the interaction of LM and GM and its impact on immersiveness often needs to be tested and adjusted, the adoption of tailored games is time-consuming and leads to unstable quality during game play [15], even if guidelines are in place [21, 22]. A third challenge related to re-using and re-purposing is complexity, specifically of multi-user games [15, 23]. Teachers left alone with taxonomies, methodologies, and guidelines [3, 6, 15, 20] may experience this as an entry barrier. To overcome this latter challenge, prototypically implemented supporting tools have been developed or are under development [13–15, 17, 33]. A different possibility is to change existing games. Modding and re-engineering as defined in [15] is a way of overcoming the challenges described above. In this article we follow the same definition and refer to “‘modding’ as the act of modifying or customizing digital game hardware/software not originally conceived for bespoke purposes, whereas reengineering refers to low-level, systematic changes in game structure and underlying simulation model. The main aim of both is to reduce the ground-up development costs while ensuring the customization retains genre familiarity for the users” [15]. Even though modding might be a very good option from a cost-efficiency perspective, the literature also shows that there are limitations and drawbacks [24] and that the success of implementing a modded game in a course depends mainly on the teachers’ knowledge, experience in instructional design, and game design [4, 9, 25, 26].
This paper reports the first results of using a modded COTS game for teaching bachelor and master students at two different universities during the academic year 2022/2023. The authors followed the suggestions in the literature and embedded the game Production Line in a course. The analysis and alignment of this game with the ILOs are explained in Kalverkamp et al. [17]. Even if the sample size is small, first lessons learned and improvements for a larger roll-out in the next academic year can be drawn. The rest of the paper is structured as follows: Sect. 2 explains the methodology as well as the previous steps undertaken (and therefore not included in this paper), while Sect. 3 describes the experimental setup. The results are presented in Sect. 4, while Sect. 5 discusses the findings and limitations and presents the outlook and possible next steps.
2 Method

The paper builds upon previously presented research [17] which describes the selection process of a suitable COTS game and the constructive alignment of the game with the intended learning contents. The objective of the study was to explore under which conditions it would be possible to use/re-use the same game and the same tutorial and learning material in different types of courses with only minor adjustments by the teachers, since the effort and time teachers need to invest in developing and testing supportive material when introducing a new game is a main barrier. The hypothesis is that this will reduce the work teachers have to invest in adapting games for a specific course while sharing the workload with other teachers. We planned the design of the study with three cases but had to scale back to an exploratory study of two cases (see Sect. 3.2 for details). The focus of this study is on exploring and understanding the challenges of deploying the same COTS game in two different courses at different higher education institutions, aiming to overcome the barrier to uptake. In the original study design we had therefore selected courses within the same area (logistics) but with differences in terms of target groups (academic level, engineering vs. management) and in the intended learning outcomes, while keeping in mind that the selected courses should overlap and be complementary [17]. With the reduction in the number of courses in which the game was implemented, it seemed better to use a more open research approach, so this study can be seen as exploratory. There are several existing exploratory studies on GBL, also conducted by the authors, investigating the implementation of both COTS and in-house built games in a course. However, most focus only on the game part and do not also look at the reuse of tutorials and additional materials. 
However, research exists that pays specific attention to the re-use and re-purposing of serious games and corresponding learning material, and it proposes specific elements to consider. In this context, we considered as specifically relevant research that uses or analyses the usage of several COTS games [27–30]; this work served as a basis for our analysis.
3 Experimental Setup

The preparation of the implementation of the game in the candidate courses started with a first development of basic scenarios, as well as a first draft of a guideline, by one of the responsible teachers and his research assistant in early autumn 2022. In a second step, the involved teachers teamed up during a common week to analyse and play the game, discussing different variations of the implementation depending on the time available in each course. While playing the game and discussing how to mod it, different options were tested. The deduced requirements are described in more detail in the next section, but a key finding was the need for a tutorial, since the built-in tutorial was made for pure leisure gaming and therefore did not support the strategic decision-making and explorative learning process foreseen in the instructional design. A third phase of the preparation followed with the development of the instructional material, including the tutorial and a survey to be distributed to the students. The survey was developed based on the same approach as used in [31]. In addition to the development of the tutorial, the teachers invested effort in testing and playing the game to ensure that it could be embedded in the courses. Testing and playing the game is a time-consuming process, with high uncertainty about the time students may need to carry out the same tasks as part of their course. The planning therefore contained different options for the teachers, making it possible to re-allocate tasks, steps, and how the game is played, with suitable strategies for overcoming typical pitfalls and challenges [12, 32]. 3.1 Fulfilment of Existing Requirements From the literature [6, 9, 15, 31, 32] it is well known that it is important that the teachers can embed the game into the context. 
Besides knowing the course content, they need to develop a suitable narrative so that a modded game can be well integrated in the educational framework, and to develop a supporting learning path. This work is crucial when considering that COTS games may not meet individual requirements. Getting familiar with the selected game takes many hours of play time, specifically since the game offers great freedom and many different scenarios. Furthermore, since we selected a COTS game which does not mainly focus on teaching production and logistics management, further time was spent developing the instructional design to ensure that each group or student applies the theoretical knowledge and gets the opportunity to reflect upon what is seen and experienced in the game. In line with our expectations, this required much work and a deep understanding of the game and its game mechanics [3]. We also had to consider that a one-stop experience was preferred, but that we could not change the game assets or the game rules and had no access to the source code [15], so we put all requirements of constructive alignment on the gameplay. It would have been an option to only use free scenario play, but this would require a very deep understanding of the game at a very early stage, specifically since the player is easily put out of the game by simply going bankrupt. From the literature it is known that it is preferable to combine information from different sources and completely integrate that information as part of the narrative and game experience [7, 15, 17, 19]. This is, in the current version of Production Line, not
possible. We therefore added an element we have long experience in using: reflective tasks and debriefing as an integrated part of the learning path [3, 12, 31, 33]. In general, the specific requirements of the courses relate to logistics and production operations and the decision-making processes involved in planning and operating them; the course-specific requirements are described in Kalverkamp et al. [17]. 3.2 Setup This section describes the setups we used for the experiment. Initially we planned to use the game in three different courses in the academic year 2022/2023, with different combinations of setups based on the available time per course (see below). Eventually, due to time constraints, the game was played at two different universities (the third course will therefore not be considered here; see next section). At Wiesbaden Business School (WBS; at RheinMain University of Applied Sciences) the game was played with 2nd/3rd year undergraduate students from business administration, international management, and industrial engineering in a 6 ECTS course. At the University of Bremen, a group of two master students used the game as part of a 3 ECTS game-based course on decision-making in distributed production. The course comprises different games as well as theory lectures, and the students need to apply the theory while playing. As described in Kalverkamp et al. [17], Production Line is a single-player commercial simulation game whose goal is to increase the efficiency of the production line. During game play the players make their decisions based on several indicators related to both production aspects (production time, buffer, throughput time, utilisation, etc.) and economics (such as turnover, costs, profits). 
In addition to the key performance indicators delivered by the game, the game play provides sufficient information for the students to calculate the most relevant indicators informing their decisions and helping them understand the theory. This requires, however, that the students understand the data provided by the game, as such data can be live values that change constantly or values calculated at a specific time, e.g., every hour of simulated time. Hence students need to learn when and how to use the data and, for example, how to calculate averages. The game under study focusses on the production of passenger vehicles and offers different playing options, such as sandbox or scenarios, which open up the possibility to tailor it for specific learning goals. Scenarios are defined around a given number of vehicles of specific car body types (sedan, SUV, etc.) that must be built within a given time frame, to a given quality, and starting with a specific budget [17]. To convey the complexity of decision making in distributed environments such as supply chains and production systems, as well as to support the learning curve related to mastering the game play, it is mandatory for every student to complete the tutorial on his/her own. In addition, all students needed to complete the first setup. Both groups of students had this in common, while the subsequent rounds were implemented differently. The students at WBS played the following setups in groups, while the students at the University of Bremen played individually; in both courses the students had common
sessions to learn from each other as well as to discuss the theory. Figure 1 gives an overview of how the game was implemented. In addition to the setups shown in Fig. 1, in a final setup 2c the simulation runs until day 10 (end of game) and final reports are prepared. At WBS, one additional “free play” setup 3 was conducted, albeit for a limited playtime and without any relevance for reports or examination, but rather for the teacher to see whether alternative aspects of planning and execution would surface during this setup for further discussion.
Fig. 1. Overview of the generic experimental set-up as implemented in the two courses (excl. set-up 2c)
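The data-handling task described in Sect. 3.2, averaging live in-game indicators over simulated time, can be sketched as follows. This is a hypothetical illustration: the sample values and the hourly logging scheme are assumptions, since Production Line exposes no programmatic API and students would read the indicators off the screen at fixed intervals of simulated time.

```python
# Hypothetical sketch: indicator values are invented for illustration.
# Students log readings once per simulated hour and aggregate them,
# since live values fluctuate from tick to tick.
from statistics import mean

# (simulated hour, cars finished that hour, line utilisation in %)
samples = [
    (1, 2, 61.0),
    (2, 3, 68.5),
    (3, 3, 72.0),
    (4, 4, 75.5),
]

throughput = mean(cars for _, cars, _ in samples)   # cars per simulated hour
utilisation = mean(util for _, _, util in samples)  # average line utilisation

print(f"avg throughput:  {throughput:.2f} cars/h")
print(f"avg utilisation: {utilisation:.2f} %")
```

Averaging over a window of simulated hours, rather than reading a single live value, is what lets the students compare their decisions against the theoretical KPIs discussed in class.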
In the alignment of the game and course activities, it must be noted that the game environment needed to be adapted to fit both course curricula. While the course at WBS also comprises learning goals related to project management, the course at the University of Bremen only comprises decision-making methods in distributed production and logistics. The WBS course had, so far, a traditional setup with mostly lectures, while the course at the University of Bremen is designed as a game-based learning course. Figure 2 gives an overview of the differences in the learning topics.
Fig. 2. Overview of the addressed topics in the courses
3.3 Implementation

This section outlines how the game was implemented in the involved courses. For WBS and the University of Bremen, it was planned to play the game throughout the whole semester, partly as group work, but with the first part as individual work to ensure that each student had the same opportunity to get familiar with the game play and to develop the required skills. In a production logistics course at KTH it became apparent that the available time was too short. We could therefore not conduct the study there as planned; the implication, namely the need for more play time to fully explore, understand, and apply the relevant theoretical foundation, will lead to a re-design of this course. We therefore report on the results for the two courses where we were able to integrate and use the game, namely at WBS and the University of Bremen. Figure 3 below shows a possible configuration of the starting scenario requested during the tutorial and setup 1 (see Fig. 1). However, with this scenario configuration, the students would likely not be able to achieve the goal of the game within the time frame and the given budget; this was the outcome of one of the first scenarios the responsible teachers played themselves. As mentioned above, the scenario configuration is very flexible. Therefore, as a take-away after several bankruptcies, we included the element of first completing one scenario setup after the tutorial in the to-be environment (in game: scenario “medium”) in groups. The students then first had to discuss their scenarios, designs and results before a new scenario (setup 2 onwards) started, with setup 2 based on the analysis of the first scenario. 
The objective of this was, on the one hand, to ensure that all students tried modelling a scenario and became familiar with the software, and on the other hand to reduce the risk of frustration from not being able to produce and win the game.
Fig. 3. A screenshot from Production Line by Positech played remotely by the authors during a test session in April 2022 [17]
The observation of the reported results revealed some challenges in implementing the chosen strategies without re-designing the game scenario. It also revealed challenges in applying the theory and drawing conclusions from what was observed. The usage of the different scenarios embedded in the class, with both feedback and discussion, proved successful. In the smaller course at the University of Bremen, this setting also allowed the students to reflect upon which theoretical foundations they were not sufficiently familiar with, so that the teacher could provide more input on those. This input was provided both as flipped-classroom lectures and as on-site lectures, re-using existing material from other courses as well as from Heriot-Watt University. In the course at WBS, additional theoretical material was proactively provided by the teacher, and students were given tasks related to the application of theory in their scenarios.
4 Results

This section presents the outcome of analysing the surveys distributed to the students upon final grading. The survey was completed anonymously and stored on a separate server. It contained 65 open and closed questions, divided into groups: relevant impairments, gaming experience, educational and work-related questions, motivational questions, and questions related to how the students could apply the theory and how the game supported the targeted learning goals. These groups determine the following structure.
Information on the Study Group and Previous Game Experience
Out of the 21 students that joined the courses (WBS: 19, University of Bremen: 2), 12 students completed the survey (WBS: n = 10, Bremen: n = 2). Only three students reported relevant past job experience, and two reported impairments that affected the game experience to a minor degree. Most of the students had previously played different types of games, with large variation by gender as well as in the time spent playing. The students reported little to no experience with serious games (except for one respondent).
General Aspects of the Workshop and the Game
Most of the students knew some of the other students in the group. In one group, one student reported that motivation was low, and two reported some degree of stress and some lack of transparency regarding the tasks to be carried out. The latter applies to the smaller group as well (1 out of 2 students); since that group comprised only two students, we cannot draw any conclusions from this. The majority of students agree that the teamwork went ‘ok’ to well (17% neutral and 58% disagreeing with the statement that teamwork “did work bad”). Most students also felt motivated (75%, 17% neutral) during the game. The questions related to realism, intuitiveness, and structure do not deliver conclusive answers, as the majority of the students chose to be neutral. 
For example, 58% consider the game to be realistic, with 42% neutral on this item, and 50% report that the game allowed them to apply theoretical knowledge, with 42% neutral. However, regarding how the game play contributed to understanding the theoretical content, most students were positive (two answered slightly negative).
They feel that the game helps to understand the theoretical content of the course(s) (67%) and even more so to increase their understanding of production processes (92%). Overall, the gaming concept received positive remarks (see Fig. 4), with 75% negating the “is boring” statement (the rest neutral on this item) and 83% negating that the game is “not usable for supporting the decision-making process in general”. However, some students reported that the game was somewhat difficult to understand (17%, with 42% neutral).
Fig. 4. Answers on the gaming concept across both groups (n = 12)
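For transparency, the aggregation behind the percentages reported in this section can be sketched as follows. The Likert responses in the snippet are invented for illustration and do not reproduce the actual survey data; they merely show how per-item answer counts from n = 12 respondents are turned into whole-percent shares.

```python
# Hypothetical sketch: the 12 responses below are invented for illustration
# and do not reproduce the survey results reported in this paper.
from collections import Counter

LIKERT = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

# One answer per respondent to a single closed question (n = 12)
responses = ["disagree", "neutral", "agree", "agree", "disagree",
             "neutral", "agree", "agree", "agree", "disagree",
             "agree", "agree"]

counts = Counter(responses)
n = len(responses)
# Round to whole percent, matching the reporting style used in the text
percentages = {level: round(100 * counts.get(level, 0) / n) for level in LIKERT}
print(percentages)
```

With such a small sample, rounding to whole percent means the shares of the five answer levels may not sum exactly to 100, which is worth noting when reading the reported figures.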
Game Experience vs. Reality
Regarding the results on decision making, quite a few students reported that they did not make many decisions, i.e., we may assume that they just played without really reflecting upon the next move. Together with the previous finding that students feel they could apply theoretical knowledge, this may indicate that the students were able to apply only some of the theory and were missing some information to apply the rest. Nevertheless, 75% of students report new learnings and 92% agree that their decisions mattered for the game outcome. Hence we may assume that they could identify at least some of the impacts of their decisions. Furthermore, this observation must be seen in the light of the teacher’s and assistant’s observations at WBS. Both reported, independently of each other, that particular students had difficulties connecting theoretical knowledge on production, and especially on KPIs, with the developments in the game. These students may not have made many decisions, especially not based on the theoretical inputs; they may have had only a vague feeling that their decisions had an impact, without being able to connect the outcome to their specific decisions. Others, in contrast, did not show any such difficulties but were rather quick in connecting their decisions with the game and could easily elaborate on the connections (e.g., during presentations). In the smaller course, based on course feedback, students may have experienced challenges in connecting theory with the game but were more likely to approach the teacher and ask for help, which was then provided. When asking about the connection to reality, the group at WBS in particular provides a somewhat inconclusive picture. 
Although the overall tendency shows that the game increases the understanding of how decision making influences output in production and of how to handle time pressure, and that information gathering and analysis are relevant and supported by the simulation, at WBS in particular between 30% and 50% of students remain
J. Baalsrud Hauge and M. Kalverkamp
neutral on most of the related questions. At the University of Bremen, the picture is much clearer, with only one neutral answer in the related set of eight questions.
5 Discussion and Outlook

The first results of implementing and modding the game “Production Line” in two different courses are promising. However, we must bear in mind that the first setup also indicated some limitations. The game play is complex, and the user interface requires that the students get sufficient time to familiarize themselves with the game features. A conclusion is therefore that the game should only be embedded in courses where experiential learning plays an important role and where the time given for carrying out the task is sufficient. This needs to be reflected in the related ECTS. Analysis of the survey results also showed that some students made their decisions intuitively and not based on the KPIs. There can be many reasons for this, but it is likely that they found it difficult to relate the KPIs in the game to the decisions they made, since this is a complex process which requires a deep understanding of both the game play and the theory. A supporting tutorial might overcome this problem to some degree. Another likely reason is that the students got engaged in the game play and therefore focussed more on winning the game than on applying the knowledge (observations at WBS indicate this for some students individually rather than at a team level). A recommendation from the students in the smaller course was therefore to regularly remind the students of the relevant theory and to use more analysis. Similar comments were made in the bigger course. These findings can be summarized in initial recommendations on how to handle the integration of Production Line in a higher-education course and what pitfalls to avoid:
• The game is complex and therefore requires sufficient time for the students to become familiar with it. Ideally, the time for learning the game is intertwined with ‘refreshers’ and, over time, with more in-depth discussion of the theory.
• For inhomogeneous groups, or inexperienced students, teachers should be prepared to provide additional support on how to identify and read the KPIs in the game as well as how to relate them back to the theory.
• Especially more diverse groups of students (such as at WBS), with different study backgrounds and different levels of prior knowledge, require the teacher to pay attention to these characteristics and, for example, to provide more details on KPI identification and development as well as their interpretation.
• The latter two points relate to the overall learning process, which benefits greatly from the discussion of results not only during debriefing sessions but also while playing the game with the facilitator present. This was observed in both courses.
• In addition, teachers must also plan for course logistics, since the game does not run on Mac and not every student has a PC. Further, the students would need to install the software on their devices and delete it after the course due to licence restrictions. We therefore reserved computer labs for the courses.
These findings and recommendations are in line both with the literature and with our previous experience in integrating (COTS) games into learning environments. However,
Report on Integrating a COTS Game in Teaching Production and Logistics
our current work has limitations that impede deriving more general results and better guidelines. The main reason is the small sample: 21 students played the game, of whom only 12 completed the survey. Thus, we intend to collect more results from the courses that apply the game Production Line in the upcoming academic year.
Acknowledgements. This work has been partly co-funded by the European Commission through the Erasmus+ programme (INCLUDEME, No. 621547-EPP-1-2020-1-RO-EPPA3-IPI-SOC-IN, as well as an individual Erasmus+ Teacher Exchange grant from RheinMain University of Applied Sciences) and by the project DigiLab4U (Nos. 16DHB2112/3), funded by the German Federal Ministry of Education and Research (BMBF). The presented work represents the authors’ view. The authors are grateful for the financial contributions.
Towards Novel Ways to Improve and Extend the Classic MIT Beer Game

Rudy Niemeijer¹, Paul Buijs¹, and Nick Szirbik²(B)

¹ FEB/Operations, University of Groningen, Groningen, The Netherlands
[email protected]
² FSE/Engineering Design, University of Groningen, Groningen, The Netherlands
[email protected]
https://www.rug.nl/staff/r.niemeijer/

Abstract. This paper critically examines the potential benefits and drawbacks of merging supply chain and social deduction game mechanics. From a practical standpoint, designing and implementing such a game requires careful calibration to ensure a balanced and engaging experience for all participants. Ethically, concerns arise regarding the potential for collusion and its impact on fairness and trust within the game. Additionally, legal implications may arise if the game inadvertently promotes or facilitates unethical behaviour or violates anti-competition laws. Ultimately, our aim is to encourage further exploration in this area while promoting responsible game design and fostering a conducive learning environment.
Keywords: bullwhip-effect · logistics teaching via games · supply network collusion

1 The Need for a New Generation of Supply Chain Games
© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 446–460, 2023. https://doi.org/10.1007/978-3-031-43666-6_31

Supply chain phenomena - like, for example, the bullwhip effect - are challenging to teach to first-year bachelor students. This is due, first, to the complexity of the phenomena and, second, to the multi-disciplinary nature of the theoretical explanations. Various and overlapping arguments can be found scattered across different areas: operations research (supply chain dynamics models and nonlinear behaviour), business management and psychology (mostly behavioural economics models), IT (aspects related to a lack of information visibility), logistics (systems with uncertainty and variability), etc. A concise, informative, and also engaging way to introduce freshman students to these phenomena is by playing supply chain games, like the popular “MIT beer game” introduced more than 60 years ago [1]. This type of game, like many others, evolved over time, taking advantage of developing internet-based technologies that allow the game to be played online. It also became possible for students to play the game against bots (in this case it is more an interactive
Improving the MIT Beer Game
simulation than a real game), and also to have hybrid versions, where human input and bots’ input are mixed [2]. Nevertheless, the authors of this paper argue that this kind of transformation of the game into a heavily digitalized learning instrument, often reflecting a mere mathematical model of the supply chain embedded in the “game”, detracts from the main advantage of playing this kind of game: understanding that the main causes of non-linear, uncertain, and unexpected behaviour in fact come from the overreacting decisions of human agents with bounded rationality. This is a position paper, which lays out the initial ideas for combining the mechanics of the classical MIT beer game (as played in board style, with only a practical level of digital support) with the mechanics of a totally different kind of game, the “mafia” game [3]. The latter, attributed to the Soviet psychologist Dimitry Davidoff, who developed it in the 1980s, is a type of game that can be used, amongst others, in training leadership, negotiation, conflict resolution, and communication under stress [5]. In the proposed combination of these two games’ mechanics, the deceptive behaviours characteristic of the mafia game will be put to use for deliberate price collusion, as an added phenomenon in a supply network that adds dynamic prices and supplier switching to the classical beer game. This paper is organised as follows: in Sect. 2, we explain the theoretical background; in Sect. 3, we present the current game played at our university; in Sect. 4, we explain how we envisage the introduction of colluding cartels into the existing game; in Sect. 5, we discuss the various issues raised by this potential attempt; we then discuss the current status and potential next steps in Sect. 6; and we conclude with the main position statements in Sect. 7.
2 The Historical Trend for More Digitalization
In the early 1960s, everything used by the human agents to enact the game was non-digital and “board” based. For example, the beer game was played with paper cards (for writing down orders) and poker chips for “beer” deliveries [1]. As computers evolved from the 1960s’ mainframes to the desktop PC and later laptops, tablets, and smartphones, the natural tendency was to digitalize parts of a game or the whole game [4]. Moreover, games could be played fully online, which proved to be a crucial advantage during the remote teaching performed under the restrictions of the last pandemic. Digitalization took different forms and levels. In order to play the games online, the obvious approach is to provide interfaces for the players to follow the game play and input their decisions. To increase the number of “players”, either due to a lack of available human players or just to scale up the “size” of the game environment, bot players - with a reasonable “pre-programmed” level of decision-making rationality - were introduced [7]. It also became possible to play these games as “human vs machine” only, where a single person or a team of students played only one role (e.g. the retailer in the beer game) and could observe how the game evolved and the
R. Niemeijer et al.
bullwhip effect appeared. Obviously, in this case the effect was not caused by the nonlinear phenomena induced by the irrational stocking decisions of the human players (as in a non-digital board game); the observed effect was a result of the behaviour programmed into the mathematical model embedded in the digital game, especially the benchmarked data series used as input for the model as the “game progressed” [7]. Multiple educators - not only in the management sciences, but also in sociology and psychology [12] - who were initially enthusiastic about the new possibilities opened by digitalized games, realised that something had been lost: the students’ gradual understanding that most of these phenomena are triggered by human decision making and not by some abstract mathematical model reflecting reality. Erroneously, some students even started to believe that the “machine” (i.e. the mathematical model of the supply chain used for the game) can be considered a sort of digital twin of the real world, implying that the real world of a supply chain can be modelled like a chunk of physical reality captured by the laws of physics [12]. When a false understanding develops, it becomes dangerous for learning. In this case, instead of understanding that the bullwhip effect is just a phenomenon that can occasionally appear if certain conditions are in place (which, in reality, they often are), students might start to believe that ANY supply chain, in ANY circumstance, has this “property” of snowballing inventories as something “natural”. This goes against the main message of this kind of game, which is that human irrationality, and also bounded rationality (i.e. the lack of a more holistic view of the supply chain), are the real causes [6].
Therefore, we want to add to the existing argument that, despite the potential advantages offered by heavily digitalized games, the human input in terms of decision making, as well as other human aspects that may appear during in-person negotiations and conflict resolution, are essential for the learning goals sought with these games. We consider that the best direction of development for supply chain games like the MIT beer game is not necessarily towards improving the IT part (interfacing, bots, online capabilities), but towards those aspects that raise the levels of engagement, attention, and understanding, enhanced by more complex forms of human interaction.
3 Introducing More Unknowns in the Game

3.1 How the MIT Beer Game Is Typically Played
In the “classical manner”, the MIT beer game is played around a single table, on a game board large enough to represent four typical stages of a supply chain: manufacturing (a brewery), distribution (a third-party logistics provider), wholesale (a regional intermediary warehousing company), and retail (one or more shops that sell beer). To play the game with a whole class, i.e. 20–25 students, a teacher can organise 4–5 game boards, with 4–5 students per table (the number of retailers can be greater than one). Each stage on a board will have a “component manager”, whose role is to order beer batches (on pieces of paper) and mimic
the transit of inventory in the supply chain with some props (coins, poker chips, cards, or just pieces of paper). The boards and the props can be prepared for this purpose in a more elaborate way and reused, or the players can just improvise with what the teacher brings (paper, chips, matches, etc.) [1]. The game has “weekly” rounds, and for each week a demand value is presented to the retailer(s) - a stack of prepared cards can contain this information. In turn, each manager places orders upstream in the chain. All processes mimicked by the game (ordering, transportation, manufacturing) incorporate lead times. The main rule of the game is that backorders have to be filled as soon as possible. Each component manager has only local information, meaning that customer demand is known only to the retailer. There is a fixed cost for stock holding and another for stockouts, and each week these costs for inventories and backorders are accrued for each component manager and for each team playing at a table. It was observed [1] that the best results are obtained when the backorder costs per batch are twice as high as the inventory costs. The main goal is to keep these costs to a minimum. To achieve a game ranking, the teams at the five tables (who face the same demand pattern) compete with each other, and the results can be made “public” for all teams after each round or after a number of rounds. The game is played for 6–12 “months” (26–52 rounds); the resulting real duration of the game is between less than an hour and at most two hours, which is practical for class schedules. The demand pattern is set up to be constant for an initial period (1–2 “months”) and then doubles and stays constant until the end. In most games played with freshmen, the bullwhip effect hits hard, and players are typically surprised even if they have heard about this phenomenon in a lecture before [1].
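The round mechanics described above can be condensed into a small simulation. The following Python sketch is our own illustration, not part of the original game materials: the starting inventories, lead times, and the deliberately naive ordering policy (which re-orders the entire open backlog every round) are all assumptions chosen to make the order amplification visible.

```python
# Illustrative sketch of the four-echelon beer game dynamics.
# All parameters (starting stock, lead times, the naive ordering
# policy) are assumptions for demonstration, not the official rules.

SHIP_DELAY = 2                    # rounds a shipment spends in transit
HOLD_COST, BACK_COST = 0.5, 1.0   # backorder cost is twice the holding cost

def play(rounds=36, demand=lambda t: 4 if t < 8 else 8):
    n = 4                                  # retailer, wholesaler, distributor, factory
    inv = [8] * n                          # on-hand inventory per echelon
    back = [0] * n                         # open backorders per echelon
    pipe = [[4] * SHIP_DELAY for _ in range(n)]  # shipments in transit
    last_order = [4] * n                   # orders placed in the previous round
    orders, cost = [[] for _ in range(n)], [0.0] * n
    for t in range(rounds):
        for i in range(n):                 # 1. receive shipments
            inv[i] += pipe[i].pop(0)
        # 2. incoming orders: the retailer sees consumer demand,
        #    everyone else sees the order placed downstream last round
        incoming = [demand(t)] + last_order[:-1]
        new_orders = []
        for i in range(n):                 # 3. fill orders, ship downstream
            due = incoming[i] + back[i]
            shipped = min(inv[i], due)
            inv[i] -= shipped
            back[i] = due - shipped
            if i > 0:
                pipe[i - 1].append(shipped)
            # 4. panic-prone policy: chase incoming orders and
            #    fully re-order the remaining backlog every round
            new_orders.append(incoming[i] + back[i])
        pipe[n - 1].append(new_orders[-1])  # factory production, same delay
        last_order = new_orders
        for i in range(n):
            orders[i].append(new_orders[i])
            cost[i] += HOLD_COST * inv[i] + BACK_COST * back[i]
    return orders, cost
```

Under these assumptions, once the demand step exhausts the retailer's buffer stock, its orders overshoot the new demand level and the overshoot travels upstream with the lead-time lag; if the policy is changed so that each echelon simply forwards its incoming order, no order in this setup ever exceeds the end demand, which is the visibility lesson the debriefing aims at.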
Some authors [12] consider that most of these lectures obscure the basic understanding of the effect with complex theoretical forays into feedback loops, time-delay-inducing effects, and supply chain nonlinearity. The students, when directly confronted with the negative outcome of their play, instinctively start to blame each other for bad strategies. The role of the teacher, after the game is played, is to explain that the main causes are the lack of visibility in the supply chain, the human tendency to over-react to sudden change, and also the stimulus effect induced by the salience of the fact that backorder costs are double the inventory costs per batch. The main lesson learned is that supply chains need integration, e.g. IT-based information visibility about demand and existing stock levels, which also allows for decision centralisation and for the reduction of ordering lead times. There are many game variants that keep the board and the human presence in class but are enhanced with some digital means. For example, the group of David Simchi-Levi at Northwestern University [7] introduced in the 1990s a “computerised beer game”, which allowed students to incrementally play different versions of the game (a “lack of visibility” version first, then a “global information” option, a “centralised decision” option, and a “short lead time” option), observing how the bullwhip effect is mitigated by these improvements in supply chain management.
In reality, there are many other causes that can lead to the bullwhip effect, and various ways to mitigate it. The literature [8] mentions, amongst others, that price inflation, price discounts, promotions, and news related to potential supply shortages (e.g. strikes, accidents, natural disasters) can trigger managers to react irrationally and provoke huge distortions of inventory policy within supply chains.

3.2 An Extended Version of the Game
With the new supply chain game played by first-year students in Operations at the University of Groningen, we try to bring more complex real-life issues and supply chain details to the attention of the students. In the more typical examples of the game, the generic structure is a four- or five-echelon serial supply chain, with a raw materials supplier, a manufacturer, a distributor, a wholesaler, and a retailer.
Fig. 1. The supply chain structure
To this, we add a sixth echelon, the consumer, who buys batches from the retailer. Because distribution can occur before manufacturing, we changed the supply chain structure to supplier, distributor, manufacturer, wholesaler, retailers, and consumers, as in Fig. 1. The distinct novelty of this structure is that the final customers are not just “a pack of cards” representing each period’s demand but a real player who changes the demand pattern when triggered by an event inserted by the game master (the teacher). Moreover, this is a player who has, from the very beginning, a full view of all the stocks, orders, backorders, and deliveries in the chain, but without permission to share this data with the rest of the players. Only when the game ends do these students (one or a pair) explain to the rest what they have witnessed during the game. The information about demand is supplied to them by the teacher via a Google sheet that acts as a blackboard for the game.
To make the supply chain longer (also in geographical terms) and more complex - but still easy to grasp, even for laypersons - we have chosen another type of product than beer: single-origin coffee beans, procured from the source by the type of supply chain named “Direct Trade Coffee”, which is briefly explained to the students [9]. The first echelon in the chain is a speciality coffee producer (farmer), whose label will appear on the final product (the bag sold by the retailer). This role is played by a student or a pair of students who cannot place orders, only send deliveries and receive orders from the next echelon, the supplier. In principle, the farmer has unlimited supply, but there are triggers the experimenter can use to affect the green coffee bean supply. The farmers are also privy to all the information in the supply chain and will participate at the end in the debriefing of the game session.
Fig. 2. Plasticised cards used to make orders and deliveries - in Dutch (on the left, an order from the wholesaler; on the right, a delivery from the roaster to the wholesaler)
Instead of playing “weeks” as the order-and-delivery cycle, we play “days” (20–30 cycles), to emphasise the importance of lead times. Coffee transportation from the tropical areas to north-west Europe (where the retailers and consumers in this game are supposed to be) can take quite a bit of time - albeit we reduced the real values to allow the game to finish in time. Also, after being roasted, the coffee has “to rest” for a period and cannot be put into retail immediately. Each “day”, four steps (A, B, C, D) are clocked by the experimenter, and each set of players has to execute these steps: determine the order size, place the order (physically giving a piece of paper to the other player), determine the delivery, and make the delivery (physically giving a token - another piece of paper with the quantity delivered). We provide the students with plasticised cards (as in Fig. 2) indicating the exact meaning of the operation, on which they only have to stick a paper sticker with the quantity ordered or delivered. These plasticised cards are immediately returned to their “owner”. The four echelons in the middle of the chain are played by teams of four students. Two play “purchasing” and two play “sales”. They have a partial view of the blackboard, via a Google-sheet-enabled dashboard on their
laptops - one dedicated to each player - and will make their decisions based on the limited information they have. For example, the dashboard of the wholesaler, with separate tables for purchasing and sales, is presented in Fig. 3.
Fig. 3. The dashboard of the team playing the wholesaler, separated into the “Purchasing Department” and the “Sales Department”
The game starts with pre-set inventory and backorder levels for each echelon and an initial demand value. The final “consumers” can vary demand day by day, but only within very small intervals. The lead times (transport, roasting, distribution handling) are incorporated in the mechanics of the game, which compute after each step what can be delivered and when. In this setup, a maximum of twenty students can play the game: two who play the consumers, two who play the farmers (who can oversee all the values on the dashboard), and four students each for the supplier, roaster, wholesaler, and retailer. Experience showed that the best layout of the tables in the room is to isolate each group of four as much as possible and arrange them in a circle, putting the tables next to the walls of the classroom, as in Fig. 4. The students can follow the evolution of the supply chain parameters via their own laptops or smartphones (a dedicated computer room with fixed tables would not properly serve the purpose of this game). This lecture/practical setup of the course assumes that students already know that small fluctuations in consumer demand can cause bigger fluctuations upstream. Unknown to the students, there are further triggers that can induce exaggerated responses by the purchasing players. While the bullwhip effect is a “known unknown”, we tried to add unexpected triggers, presented in more detail in the next subsection.

3.3 Introducing Unknown Unknowns
The students who play this game (first-year bachelor students in the course “Global Supply Management”) come to the practical class where this game is played
Fig. 4. The classroom arrangement - students exchange orders physically and follow the supply chain status via partial or overseeing dashboards
with some prior knowledge of the bullwhip effect, which they encounter via reading the syllabus of the course and via the explanations given during the lecture. They know that the effect will appear; in the older versions of the beer game (or similar) played in the decades before, it happened that some teams managed to avoid it, and it was not unheard of that in a class with four teams playing the older version (like the one presented in Sect. 3.1), all four teams avoided excess inventories with a minimum level of stockouts [7]. In this version of the game, there are a few new mechanisms that help to induce the bullwhip effect, regardless of the level of theoretical awareness of the students. One is that the two pairs of students in each team of four in the central echelons of the chain tend to develop tunnel vision, focusing on their roles as salespersons and purchasing managers, and do not cooperate enough to decide together on a better strategy that would avoid overreacting. Moreover, the tempo of the clock set by the teacher is one minute per step (4 min per “day”), and they do not have enough time to judge the current situation, only to implement the mechanics of the game. These are presented to each team on a plasticised A4 sheet explaining their role and tasks. Another mechanism is the higher level of stockout costs (double the inventory holding costs). This is also variable, and in the mechanics of the game, the final price of the coffee at the point of retail (a value that is visible to everybody) is inflated by the inventory costs. Subsequently, a higher cost for the customer is explained to produce customer desertion of the retailer and therefore higher backorder costs (which can be manipulated by the teacher and are also publicly shown to all students). Teams also get a “secret information” sheet, at a moment in time different for each team (see Fig. 5).
For example, the customers’ table receives a trigger that the number of retail points of this retailer goes from one to two on “day” 7, and they have to adjust the estimated demand accordingly (which
R. Niemeijer et al.
they do, because there is also another variable of the game visible to them, that is, customer desertion due to stockouts, which is manipulated by the teacher). This brings more realism than the simple stack of cards that dictates demand, where the demand doubles after a quarter or a third of the cycles have passed.
Fig. 5. The plasticised card with the secret information - in Dutch, for the supplier player (see explanation in the text)
Also on “day” 7, the retailer is “informed” via its secret card that a competing local retailer introduces the same single-source brand coffee (i.e., from the same farmer) in their sales, threatening customer desertion, especially in case of stockouts, and also in the light of now having two retail points to supply. The wholesaler and the roaster also encounter problems due to the introduction of standardised transport units. One unit is 25 kg (50 bags of 0.5 kg), and from a given moment (“day” 3) it is possible to order only batches of bags that are strictly multiples of 50. Also on “day” 3 (the values for the days when triggers are secretly introduced can be freely changed, but it is wise to cascade them early in the game), the roaster introduces a bigger roasting unit, which allows roasting only on odd days, meaning that on even days the order fulfilment by the roaster will be zero. Around the same period, as illustrated (in Dutch) in Fig. 5, the logistics third party playing under the banner of “supplier” introduces more palletised
Improving the MIT Beer Game
transport, forcing a batching of 100 kg of beans for any ordering and delivery from the farmer to the roaster. Finally, the farmer also gets a secret card on day 10: due to an external event (a carnival, for example), there is no supply of coffee on days 10 and 11, because of a lack of labour, the workers having got free days for that event. Basically, we have five distinct types of triggers that induce a sense of panic in the purchasing players, afraid of stockouts: order batching due to transport constraints, order synchronisation with operations, rationing due to external events, reactive ordering due to competition, and experimenter-induced shortage. Almost inevitably, each time this version of the game was played, irrespective of the awareness of the students, the bullwhip effect appeared poignantly. At the debriefing, the students could see that both the inventory costs and the backorder costs were affected by their purchasing strategies (see Fig. 6). One of the main shortcomings of such a single multi-echelon supply chain game is that prices are not taken into account as the result of potential negotiations between the sales and purchasing players. In such a context, where there is no competition, market price negotiation is meaningless. The only way we could introduce price variation was via a relation between inventory costs, stockout costs and customer desertion (variations executed by the experimenter, and therefore not very realistic). To have variable prices, we would need a competitive supply network, where purchasing players have access to multiple sale points that openly compete on a market. Evidence from the literature [10] gave us the idea to introduce a new unknown unknown in the game, that is, pre-set deceptive behaviour, by selecting players who will manipulate prices together for their own interest.
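To make the cost mechanics concrete, the trigger-induced dynamics can be sketched in a few lines of code. The sketch below is not the classroom game itself: it is a minimal single-echelon simulation under assumed parameters (holding cost of 1 per bag-day, stockout cost double that, transport batches of 50 bags, demand doubling on “day” 7, and the carnival supply halt on days 10 and 11), showing how the batching and rationing triggers alone already generate backorder costs.

```python
# Illustrative sketch only; all parameters are hypothetical, not the game's.
import math

HOLD, STOCKOUT, BATCH = 1.0, 2.0, 50   # stockout cost double the holding cost
demand = [100] * 6 + [200] * 14        # demand doubles on "day" 7
inventory, backlog, pipeline = 300, 0, []
hold_cost = stockout_cost = 0.0

for day, d in enumerate(demand, start=1):
    # receive yesterday's order, unless supply is halted on days 10-11
    # (goods in transit during the halt are lost - a simplification)
    arriving = pipeline.pop(0) if pipeline else 0
    if day in (10, 11):
        arriving = 0
    inventory += arriving
    # serve today's demand plus any backlog
    shipped = min(inventory, d + backlog)
    backlog = backlog + d - shipped
    inventory -= shipped
    hold_cost += HOLD * inventory
    stockout_cost += STOCKOUT * backlog
    # order up to a simple target, rounded up to transport batches of 50
    target = 2 * d
    need = max(0, target - inventory + backlog)
    pipeline.append(math.ceil(need / BATCH) * BATCH)

print(round(hold_cost), round(stockout_cost))  # prints: 2900 400
```

Running it shows that the stockout costs appear only after the supply halt, mirroring the panic-ordering pattern the triggers are designed to provoke.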
4
Introducing Price Collusion
A similar supply network game that contains prices and markets will have a higher number of elements, and potentially of players, making it unfeasible for a limited class size. For example, if we consider three single-origin farmers, we have to consider three groups of consumers, each inclined towards a certain origin of the coffee beans. In Fig. 7 we show a network of this kind, which also has three retailers, three roasters, two wholesalers, and two suppliers. We also allow that a retailer who has its own roasting unit can order directly from the farmer, or that a retailer can order directly from the roaster, with shorter lead times (but in both cases, the amount that can be ordered is quite limited and the price for transport - via courier - is higher).
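As a concrete (and entirely hypothetical) illustration of how such a network could be encoded for bookkeeping during play, each channel of the Fig. 7 type can be represented by a lead time and an order cap, with the direct courier channels fast but limited; all names and numbers below are invented for illustration.

```python
# Hypothetical encoding of a few channels: (lead_time_days, max_order_kg).
# A cap of None means the channel has no order limit.
supply_channels = {
    ("farmer_1", "supplier_1"): (2, None),
    ("supplier_1", "roaster_1"): (1, None),
    ("roaster_1", "wholesaler_1"): (1, None),
    ("wholesaler_1", "retailer_1"): (1, None),
    ("farmer_1", "retailer_1"): (1, 10),   # courier: short lead, small cap
    ("roaster_1", "retailer_1"): (1, 25),  # courier: short lead, small cap
}

def can_order(channel, qty_kg):
    """Check whether a purchase of qty_kg is allowed on this channel."""
    lead, cap = supply_channels[channel]
    return cap is None or qty_kg <= cap

print(can_order(("farmer_1", "retailer_1"), 8))   # True
print(can_order(("farmer_1", "retailer_1"), 50))  # False: exceeds courier cap
```

Such a table would let the experimenter check, at a glance, which of a purchasing player's orders are admissible on each channel.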
Fig. 6. The final Google Sheet that becomes available during the debriefing
Fig. 7. A supply network that allows price based competition
In such a configuration, we would play with smaller teams (only two students, one for sales and one for purchasing), and only one player for each of the farmers and the customer groups - giving a maximum of 26 players in a game, which is feasible, given the previous experiences with various class sizes engaged in games with a single teacher. On the supply channels, each echelon can post their announced prices (they also have a reservation price, which is lower) and the quantities available for sale, and the purchasing players can engage in negotiation for one or more orders. Because this game is supposed to be played by the same students who already played the previous single supply chain version, we also keep the same types of triggers, but in a different form. For example, instead of roasting units available only on certain days, we have ships arriving with supplies on certain days only, and instead of carnivals affecting the availability of the farmer's workforce, we have strikes by one of the wholesalers' workforces. The behaviour we want to highlight in this version of the game is that firms sometimes collude to mimic the actions of a monopoly. The monopoly outcome is
that a colluding cartel of companies reduces prices artificially, eliminates an essential competitor, and later either restricts output to inflate prices for the final customer, raises prices, or divides the market [11]. To introduce this element into the game, we have been inspired by the “mafia game” [3], also known as “villagers and werewolves” or “faithfuls and traitors”. In such a game, the majority of the players form an “uninformed majority”, i.e., the faithfuls, facing an “informed minority” (the traitors). The latter have to be selected before the game starts, and they have to be informed of a price bidding strategy that will initially eliminate some competitor via low price setting, and later enforce a monopolist behaviour via high price setting. They are able to borrow from a “bank” (part of the collusion cartel, played by the experimenter) to sustain low prices during the initial phase. This version of the game has a clear end, that is, when the monopoly is established. For example, a scenario enabled by collusion would be that two roaster companies collude to eliminate the third and, together with a wholesaler in the cartel, establish a single supply chain by eliminating one of the wholesalers (starting the attack when the other is affected by the strike). A team becomes bankrupt when it does not have money to pay for its orders (this version of the game also has to implement the money flows in the blackboard mechanics). All players shall be able to borrow from the “bank”, played by the experimenter, but the bank can refuse loans without any explanation (being part of the cartel, it acts in concert with the colluding teams). Such an arrangement presumes that out of the 26 students, six (three teams, playing two roasters and one wholesaler) are selected by the teacher in advance and have a preparation session. The game ends when the targeted wholesaler goes bankrupt.
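The economics of this scenario can be illustrated with a deliberately simplified cash-flow sketch (all figures are invented for illustration and are not the game's actual parameters): two cartel members price below unit cost, with the bank absorbing their losses as loans, until the honest rival's cash runs out; the survivors then charge a monopoly price and repay the bank.

```python
# Hypothetical illustration of the cartel dynamic; every number is invented.
UNIT_COST = 10.0
cash = {"cartel_a": 500.0, "cartel_b": 500.0, "honest": 500.0}
loans = {"cartel_a": 0.0, "cartel_b": 0.0}

# Phase 1: predatory pricing below unit cost until the rival goes bankrupt.
predatory_price, units = 8.0, 20
rounds = 0
while cash.get("honest", 0.0) > 0.0:
    rounds += 1
    for firm in list(cash):
        cash[firm] += (predatory_price - UNIT_COST) * units  # a loss per round
        if firm in loans and cash[firm] < 0.0:
            loans[firm] += -cash[firm]  # the cartel "bank" covers the loss
            cash[firm] = 0.0
    if cash["honest"] <= 0.0:
        del cash["honest"]  # bankrupt: it can no longer pay for its orders

# Phase 2: the surviving duopoly inflates the price and repays the bank.
monopoly_price, captured_units = 15.0, 30
repay_rounds = 0
while any(v > 0.0 for v in loans.values()):
    repay_rounds += 1
    for firm in cash:
        profit = (monopoly_price - UNIT_COST) * captured_units
        repaid = min(profit, loans[firm])
        loans[firm] -= repaid
        cash[firm] += profit - repaid

print(rounds, repay_rounds)  # rounds until bankruptcy, rounds until repayment
```

The exact durations are artefacts of the invented numbers; in the classroom version these money flows would be tracked in the blackboard mechanics rather than computed.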
In the debriefing, it is shown how the formed monopoly could pay back its loans to the bank from the rigged profits made afterwards. In the original Traitors game, the cycles and the mechanics are different. There are open (in the “light”) sessions, when all players discuss and challenge each other, voting to ostracise presumed traitors - when a player is eliminated this way, they have to reveal to all which role they played, faithful or traitor. These are followed by closed (in the “dark”) sessions, where the traitors decide which faithful to eliminate. The game ends either when the faithfuls manage to eliminate all the traitors (and share a prize equally), or when the number of players drops below a certain number (typically 4, if the initial group is around 20 players with 3 initial traitors) while there are traitors still in the game, who then share the whole prize among themselves. The main difference with our proposal for the supply network game with a colluding cartel is that the “faithful” teams are not aware of the existence of the traitors, and only at the debriefing are they informed that these players were in the game and what rigged strategy they applied.
5
Practical and Ethical Issues
Such a setup presumes that the “faithful” students will not be aware that “traitors” exist in the game, and the lesson learned will be that collusion is
effective if there is a purposeful cartel that wants to enact monopolies or duopolies. Such a lesson could be coupled with further learning on how this can be prevented, by legal or strategic means, especially in subsequent courses on Purchasing Management. It would be possible to play the game again, in a more extended setting, with more players and cycles spread over multiple days or even weeks. In such a setting, the main purpose would be not to show what collusion is (the students who play are already informed) but to identify the “traitors” and use means to eliminate them or oblige them to become “faithfuls”. The means could be legal, in the sense that there is a new type of player: courts of justice that can fine colluding behaviour. However, if the accusation is “wrong” (the faithfuls unjustly accuse other faithfuls), there are financial consequences for the accusers, such as high judicial costs and/or counter-fines. Another mechanism is altruistic punishment, where faithfuls decide together not to sell to or buy from a presumed traitor (positive ostracism). Organising the first version of the coffee supply chain game is time-consuming and necessitates careful training of the teachers who supervise the gaming sessions (we observed that the success of the lessons is very dependent on how well the teacher manages to bring students into the flow state of the game). Organising a bigger game, which involves more students and more time, would be a very time-consuming task. If the students are supposed not to know that “traitors” exist in the game, the game setup would have to be kept secret from the students who are supposed to play it (our target audience would be the students who follow the Purchasing Management course). Keeping such game mechanics secret, given the “word of mouth” spread among students, would be quite difficult.
Playing the long “traitor” elimination-by-voting game would be even more complicated to organise (in this case, secrecy is important only to make sure that the identity of the “traitors” is kept unknown until revealed via the game rules). An interesting idea is to keep a log of this game (e.g. via podcasts narrating the daily events from an insider perspective), which can be made available to all the students after the game ends. That raises the possibility of involving students from journalism, who play an “investigative journalism” role, revealing how the dynamic of faithfuls and traitors played out. Finally, some thought shall be given to whether such a game, played with students, poses ethical issues. One can conclude that some students actively learn how to exhibit colluding behaviour, how to rig a market, and how to bring honest parties into bankruptcy. It shall be investigated first whether such an endeavour in academia would create unwelcome outcomes, whether it has potential legal implications, and whether it aligns with the ethical standards of education of the institutions involved.
6
Discussion and Future Steps
Currently, only the single supply chain version of the game is played as part of the education curriculum (we are now, in 2023, in the second year it is played). The
number of students who play it each year is around 350, and all the students who later follow the Purchasing Management course have played this game. The current plan is to play the supply network version via a Learning Community set-up, experimenting with volunteers who played the first game in order to prepare the new game for the PM course. Given the number of students who take this course (250 on average), it will probably be time-consuming to set up and perform the games necessary for the network version. If this proves too difficult, a selection of students particularly interested in the ethical and legal aspects of purchasing could be engaged - perhaps together with students from the law faculty. Until the game is experimented with, it is difficult to estimate how long it takes until the bankruptcy of the targeted party occurs (we estimate that the game may be confined to a couple of hours, but this cannot be verified until it is actually played). The full version proposed (the one with “light” and “dark” elimination meetings) presumes a far larger effort in number of participants, time consumed, organisation, support, documentation of the events, etc. We intend to collaborate with other universities and investigate how the mechanics of these two types of games (supply chain and “faithfuls and traitors”) can be integrated, and perhaps organise workshops and student exchanges where this line of research could be extended. Nevertheless, a first check of the legal and ethical issues related to such a game shall be carried out. This new version of the game will allow students to learn new essential knowledge about supply chains. The difficult subject of price collusion will be revealed through a realistic case, as perceived by playing the game. The main lesson shall be that detection is possible, and that reporting and acting are necessary.
The ethical implications, the adjacent legal framework, and the alternatives (legitimate competitive practices) can be detailed further via group discussions and debates, also by inviting guest speakers from regulatory bodies and expert professionals who experienced price collusion cases.
7
Conclusion
This is a position paper. We have presented the mechanics of a supply chain game that has been played at the University of Groningen by first-year students in management for two years already, together with some insights into the advantages of our variant compared with the more classical MIT beer game. By introducing more echelons, more realism, and uncertainty in the triggers that generate the bullwhip effect, we consider this variant very effective in realising our teaching goals. However, we understand the limitations of the game, especially the lack of price variation due to a market mechanism. We also posit that merely introducing the market mechanism would only complicate the game mechanics without bringing any new learning objective into focus. We answered the “why play such a game?” question by adding colluding behaviour as an “unknown unknown” for the players, intending to make them aware of how damaging these behaviours
are, and also to teach them, in an even more elaborate version of the game (with elimination rounds), how to detect, deter, and stop monopoly-driven colluding cartels, as well as the importance of regulatory policy and practices. Our main general position is that games in industrial management and supply chain management shall be less focused on further digitalisation and the enhancement of bot behaviour, and should instead focus on the introduction of more human-related aspects, like deception, keeping the digital support to a pragmatic minimum and focusing on more complex human interaction between players. To our knowledge, this is the first time the combination of a supply chain game with a social deduction game (like the Mafia game) has been proposed in the literature. We hope that this idea will generate debate and interest, and that collaboration between researchers and academic centres where such an initiative spurs interest may gain traction.
References
1. Sterman, J.D.: Modeling managerial behavior: misperceptions of feedback in a dynamic decision making experiment. Manage. Sci. 35(3), 321–339 (1989)
2. Schmuck, R.: Education and training of manufacturing and supply chain processes using business simulation games. Procedia Manuf. 55, 555–562 (2021). FAIM 2021 Conference, Athens, Greece
3. Davidoff, D.: The original Mafia rules. https://www.servinglibrary.org/journal/2/the-original-mafia-rules. Accessed Mar 2023
4. Snider, B., De Silveira, G., Balakrishnan, J.: Running “The Beer Game” for large classes. In: Decision Sciences Institute Conference Proceedings (2010)
5. Walls, B.L.: An AI model to predict dominance, nervousness, trust, and deception from behavioral features in videos. Ph.D. thesis, The University of Arizona (2020)
6. De Felice, S., de C. Hamilton, A.F., Ponari, M., Vigliocco, G.: Learning from others is good, with others is better: the role of social interaction in human acquisition of new knowledge. Phil. Trans. R. Soc. B 378(1870), 20210357 (2022). https://doi.org/10.1098/rstb.2021.0357
7. Kaminsky, P., Simchi-Levi, D.: A new computerized beer game: a tool for teaching the value of integrated supply chain management. In: Global Supply Chain and Technology Management, vol. 1/1, pp. 216–225 (1998)
8. Bhattacharya, R., Bandyopadhyay, S.: A review of the causes of bullwhip effect in a supply chain. Int. J. Adv. Manuf. Technol. 54, 1254–1261 (2011)
9. Anon.: What is the difference between direct and fair trade? https://www.thespecialtycoffeecompany.com/resources/direct-trade-coffee/. Accessed 21 Mar 2023
10. Haldar, T., Damodaran, A.: Identifying market power of retailers and processors: evidence from coffee supply chain in India. IIMB Manage. Rev. 34(3), 286–296 (2022). https://doi.org/10.1016/j.iimb.2022.09.002
11. Lande, R., Marvel, H.: The three types of collusion: fixing prices, rivals, and rules. ScholarWorks@University of Baltimore School of Law. https://scholarworks.law.ubalt.edu/cgi/viewcontent.cgi?article=1367&context=all_fac
12. Senge, P.: The Fifth Discipline, revised edn. Doubleday, New York (2006)
Innovation & Entrepreneurship in Engineering Curricula: Evidences from an International Summer School
Jovista Qosaj1(B), Donatella Corti1, and Sergio Terzi2
1 Department of Innovative Technologies, University of Applied Science and Arts of Southern
Switzerland, Via la Santa 1, 6962 Viganello, Lugano, Switzerland [email protected] 2 Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Via Raffaele Lambruschini, 4/B, 20156 Milan, Italy
Abstract. Entrepreneurship has seen a significant growth in recent years as a topic taught within engineering curricula. The growth can be attributed to the continued progress of technology, which drives innovation and economic advancement. Today’s engineers now need to be entrepreneurial in their thinking and actions to effectively contribute to the advancement of technological innovations. The purpose of this paper is to showcase the creation, execution, and assessment of a summer school program dedicated to fostering innovation and entrepreneurship for engineers. The program was designed with the objective of promoting the growth of innovation and entrepreneurial abilities within an international class. Some lessons learnt at the end are derived with a twofold aim: to show the main added values based on this summer school assessment and to identify guidelines to design and deliver similar initiatives. Keywords: Summer School · innovation & entrepreneurship · innovation on education · sustainable manufacturing · engineering curricula · entrepreneurship education
1 Introduction Over the past few years, there has been a significant rise in the inclusion of entrepreneurship in engineering courses, which can be attributed to the continued impact of technology on driving innovation and the economy [1]. Engineering graduates today need to be versed in their technical competency and must be able to adapt and thrive within a business-based environment. Beyond that, they also need to have the capability of adapting to changes within their industry, creating new solutions to challenging technical problems, and recognizing new opportunities for development. Many of these characteristics relate back to what is referred to as an “entrepreneurial mindset” [2]. Entrepreneurship and innovation are increasingly acknowledged for their significant economic impact. In order to create a more entrepreneurial technology workforce, STEM (Science, Technology, Engineering, and Mathematics) institutions have started to incorporate entrepreneurship programmes. © IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 461–475, 2023. https://doi.org/10.1007/978-3-031-43666-6_32
J. Qosaj et al.
The purpose of this article is to describe the development, implementation, and evaluation of a Summer School (SS) on innovation and entrepreneurship designed to encourage the development of innovation and entrepreneurial skills, targeting both engineering students and professionals. It presents detailed information on the approach taken to insert entrepreneurship and innovation into engineering curricula. The intention is to promote the growth of innovation and entrepreneurial abilities within an international class. Some lessons learnt are derived at the end with a twofold aim: to show the main added values based on this SS assessment, and to identify guidelines for designing and delivering similar initiatives, considering that most of the works in the literature suffer from problems of replication, validity and generalization of results [3]. It is hoped that these innovative approaches will inspire a transformation in the way entrepreneurship is taught in engineering and lead to more effective achievement of entrepreneurial learning objectives. According to a framework used to analyse different types of entrepreneurship teaching programmes [3], the paper falls under the dimension of “Case studies”, which focuses on presenting different methodologies and programmes and evaluating their impact. This dimension answers two questions: “How?” and “Which?”. The paper is organized in six sections. After the literature review carried out in Sect. 2, Sect. 3 deals with the key principles that the SS could deliver; Sect. 4 provides an overview of the organization and the program structure; Sect. 5 is focused on the general lessons learnt, i.e., guidelines that could support the development of other courses. Finally, Sect. 6 reports the paper’s conclusions.
2 Literature Review The fourth industrial revolution (4IR) presents numerous advantages, opportunities, and challenges compared to previous industrial revolutions. Unlike earlier revolutions, 4IR technologies focus on integrating skilled workers in human-robot collaboration rather than replacing them. The ability to participate in 4IR production depends on industry-wide access to relevant skills. Engineers play a crucial role in identifying relevant emerging challenges and providing solutions for businesses. Developing business and entrepreneurial skills empowers engineers to meet evolving career demands and gain a competitive edge [4]. According to [5], thousands of new jobs for professional engineers are expected to emerge by 2026. To maintain connectivity and progress in their careers, and to meet the job requirements, engineers need more than just technical skills: they need to acquire a set of business and entrepreneurial skills [6], including the capacity to create opportunities, take risks, adapt to change, demonstrate unwavering commitment to goals, and typically act with an innovative mindset. These skills happen to be among the most in-demand in the 4IR [4]. 2.1 Entrepreneurship Education The education of entrepreneurship involves acquiring the skills, knowledge, and attitude necessary for learners to understand the challenges of life in different forms and to systematically offer solutions to challenges faced by companies [6]. It covers every activity
Innovation & Entrepreneurship in Engineering Curricula
aimed at fostering entrepreneurial mindset, skills, and attitudes, as well as a variety of aspects like start-up generation and innovation [7]. In [8] it is reported that entrepreneurial education has increased significantly, mainly due to the need to prepare students to cope successfully with current changes in the work environment. The study conducted by [9] emphasizes the crucial role of teachers’ skills and responsibilities in delivering effective entrepreneurship education. To promote entrepreneurship among students, educators must possess knowledge of diverse teaching methodologies. A didactic approach in universities is necessary to equip students with practical experience and enhance their comprehension of entrepreneurship, fostering critical thinking and problem-solving abilities [10]. The pioneer of entrepreneurial education was Shigeru Fujii, who initiated this field of education at Kobe University in Japan in 1938. Courses on business management began to emerge in 1940, and courses on entrepreneurship were introduced at Harvard Business School in the United States in 1947. This phenomenon gained greater universal recognition within half a century [11]. Nowadays, the American Assembly of Collegiate Schools of Business (AACSB) includes courses on entrepreneurial education, with significant growth at a global level [12]. Engineering entrepreneurship programs have expanded the traditional business school model, which focuses on business basics and creating a company, by incorporating innovative content, experiential teaching methods, and less formal co-curricular activities [13]. The introduction of certain initiatives has led to the emergence of a new subfield in engineering education, namely engineering entrepreneurship. Initially, the focus was on program descriptions [14] and conceptual papers advocating for the incorporation of entrepreneurship in engineering education [15].
However, the needs of this subfield were quickly met with the development of various programs [13, 16], the exploration of an engineering entrepreneurial mindset [17, 18], reviews of program models [19], research in assessment [20], the study of faculty beliefs [21], the examination of engineering student career choices and intentions [22], and the creation of numerous classroom interventions [23]. The scholarly works produced in these areas, and others, have contributed to the growth of engineering entrepreneurship programming, both curricular and co-curricular. The wide adoption of entrepreneurship programs signifies growing acceptance of nurturing entrepreneurial engineering students [24]. However, these programs are not mandatory and are often offered as optional courses or extracurricular activities. Furthermore, the fact that entrepreneurship education heavily relies on co-curricular programs in engineering suggests that there are alternative paths for students to pursue such education [24]. 2.2 Teaching Methods of Entrepreneurship Education Teaching methods are mostly categorized by writers into two main groups, namely “traditional methods”, which are usually lectures, and “innovative methods”, which are student-centered and action-based [26, 27], also known as “passive methods” and “active methods”, respectively [28]. Active methods require the teacher to facilitate learning rather than control it, and to apply approaches that allow students to discover on their own. The three most used methods are lectures, case studies, and group discussions. There are also other methods used, focused on the engineering field, but not as common as those in the first group. These include business, computer or game simulations, videos,
recordings, models or guest speakers, business plan creation, project work and competitions, establishing actual entrepreneurial activities, laboratories, presentations, and study visits. This last category of methods, called “active”, would be more suitable for developing entrepreneurial characteristics among participants [28]. Authors describe traditional approaches as deductive, in which students gradually learn commercial language through the repetition of basic concepts and principles presented in lessons, exercises, and the creation of a business plan and project [29]. In contrast, innovative methods are based on a more active pedagogical approach [30]. In recent years, there has been growing interest in entrepreneurship education, accompanied by changes in how these courses are taught. According to [25], entrepreneurship courses should incorporate educational content aimed at developing specific cross-cutting skills that are often neglected in traditional education. As for the delivery of educational content, the most effective approach involves a combination of e-learning and more practical work, where students can interact with actual entrepreneurs to achieve optimal results. Several significant challenges are evident: most existing research encounters issues related to replicating findings, establishing validity, and generalizing results. Additionally, few articles examine the relevance and effectiveness of using Internet-based and computer-based technologies, such as in distance learning [3]. There is a new area of exploration in the engineering community aimed at attracting more engineering students to voluntarily participate in entrepreneurship education programs. This is crucial because the goal is to increase the number of graduates who possess an entrepreneurial mindset.
Therefore, it is essential to design activities using a combination of e-learning and practicum phases in which students can interact with real entrepreneurs [25], in order to attract students to these programs with the aim of developing or improving their entrepreneurial skills in the engineering field. Moreover, these initiatives would serve as a framework for crafting and implementing similar programs, thus addressing the existing gap in the literature.
3 Design Key Principles The SS focus - innovation and entrepreneurship for sustainable manufacturing - was the pivotal element around which the detailed program was developed. Based on this initial idea, the design process was triggered by a discussion meant to identify the key values the SS could deliver to meet the needs of both master students and professionals with a scientific background (the only requirement to apply was to hold a Bachelor of Science). In the following, the distinctive principles identified to translate the initial idea into a quality program are described. 3.1 Learning by Doing Approach The use of a mix of teaching methods has been essential to create a learning environment in which participants are activated and immersed in a challenging atmosphere. Every day of the SS was unique in terms of covered contents and type of proposed activities. Lecturers were asked to present methods and tools related to the innovation process or
to entrepreneurship, and to have participants implement them in practical cases through teamwork and workshops. Company visits allowed participants to directly experience how real organizations manage the innovation process, and the work on industrial challenges forced them to take a decisional role for the company they were associated with. To make the most out of this mix of activities, presence was essential, and a minimum of 80% attendance was set to make sure all participants could fully benefit from the provided set of experiences. 3.2 Industrial Involvement The direct involvement of industrial organizations in the design and delivery of the SS was an essential ingredient to offer a learning-by-doing approach. Companies contributed to the definition of the learning outcomes of the SS so as to ensure good alignment with the actual needs of the industrial world. Furthermore, they were involved in the delivery of experiences. On the one hand, they were willing to open their doors and share with participants their approach towards innovation. On the other hand, they proposed challenges related to their context, thus allowing participants to experience companies’ real-life environment. 3.3 Multicultural and Heterogeneous Group The Summer School was primarily meant as part of the European Institute of Innovation & Technology for Manufacturing (EITM) master programme and had to be designed for a heterogeneous class including an international group of participants with different scientific backgrounds. This requirement was considered an additional opportunity to offer a valuable experience. In fact, the ability to interact within a multicultural environment is seen as a learning outcome per se. The teams for the challenges were created by the SS coordinators in such a way that participants could bring complementary approaches to the work. This also allowed the creation of a more cooperative atmosphere within the class for all the other activities.
Furthermore, in order to make sure that all participants could start the SS with a common set of concepts and enjoy it from the first day, a set of digital nuggets covering basic knowledge about the focal topics of the SS, namely innovation, entrepreneurship, and sustainable manufacturing, was developed and made available to participants online in advance.

3.4 Evaluation Criteria

The SS was part of the study plan of the EITM master students and was worth 5 credits. It was thus necessary to define an evaluation criterion to assign the credits. In order to stress the importance of the learning-by-doing approach, it was decided to base the evaluation of the SS on the solutions proposed by teams of students to the assigned industrial challenges. By challenge is meant an innovation opportunity that a company wants to explore. Each team of students was expected to work on a different challenge and to support the proponent organization in understanding how to create value from it, while contributing to improving the level of sustainability.
466
J. Qosaj et al.
3.5 Learning Outcomes

The design of an educational programme must identify a set of coherent learning outcomes (LO) to deliver through the different learning experiences. In the case of the Innovation & Entrepreneurship for Sustainable Manufacturing (IESMA) Summer School, each organization involved in the delivery of contents was asked to map its specific learning experience onto the six European Institute of Innovation & Technology (EIT) learning outcomes taken as a reference list. Since these have been identified within the EIT initiative by experts in the field of innovation and entrepreneurship, satisfying them means that the programme can achieve its intended educational goal. Indeed, the set of delivered contents touches all of them, as shown below.

EIT LO1: Entrepreneurship skills and competencies. The capacity to identify and act upon opportunities and ideas to create social, cultural and financial value for others, including translating innovations into feasible business solutions, with sustainability at their core.

EIT LO2: Innovation skills and competencies. The ability to formulate knowledge, ideas and technology to create new or significantly improved products, services, processes, policies, new business models or jobs, and to mobilise system innovation to contribute to broader societal change, while evaluating the unintended consequences of innovation and technology.

EIT LO3: Creativity skills and competencies. The ability to think beyond boundaries and systematically explore and generate new ideas.

EIT LO4: Intercultural skills and competencies. The ability to engage and act internationally and to function effectively across cultures, sectors and/or organisations, to think and act appropriately and to communicate and work with people from different cultural and organisational backgrounds.

EIT LO5: Making value judgments and sustainability competencies.
The ability to identify short- and long-term future consequences of plans and decisions from an integrated scientific, ethical and intergenerational perspective and to merge these into a solution-focused approach, moving towards a sustainable and green society.

EIT LO6: Leadership skills and competencies. The ability to make decisions and lead, based on a holistic understanding of the contributions of higher education, research, and business to value creation, in limited-size teams and contexts.
4 IESMA Summer School in a Nutshell

The Summer School was the first summer school organized within the EIT Manufacturing Master Programme, a European international initiative aiming to train a new generation of entrepreneurs attending engineering courses. It took place in July 2022 and lasted three weeks (15 days of activities). It was jointly coordinated by the University of Applied Sciences and Arts of Southern Switzerland (SUPSI) and the Polytechnic of Milan, with the participation of eight other partners. The event was organized within the
EITM master programme, but it was also open to individuals who desired to gain a deeper understanding of emerging digital technologies, to learn how to translate technology into business, and, ultimately, to launch a start-up. This provided a unique opportunity for innovation and entrepreneurship upskilling in the manufacturing sector, thus contributing to the enhancement of the skills necessary for sustainable production in the future. According to the Cambridge Dictionary, the word "upskill" refers to the process of learning new skills or the act of teaching workers new skills. The IESMA SS, although intended for educational purposes, also adds a business perspective to the learning experience, stimulating learners towards innovation and creativity. This results in a concrete and operational synergy between education and business. In the next sections, the main elements characterizing the SS are introduced to provide an overview of what has been offered.

4.1 Participants

The IESMA SS was an international summer school open to both master students and young professionals with a Bachelor of Science. In its first edition, 41 participants of 12 different nationalities took part. 29 of them were enrolled in the EITM master programme (2021–2023 edition), while the remaining 12 were external to the EITM network. Participants came from a variety of countries including Italy, India, Pakistan, Spain, Thailand, Switzerland, Nigeria, Serbia, Bangladesh, China, Finland, and South Africa.

4.2 Content

The SS focused on innovation and entrepreneurship for sustainable manufacturing. Based on the key design principles introduced in Sect. 3, a few calls with all 10 partners participating in the SS project were organised at the beginning of the year (February–May 2022) to define the programme in more detail. First, each partner proposed a preliminary list of topics based on its expertise.
Next, the different topics were analysed to ensure they fit the SS's purpose, avoided overlaps with other content, and, most importantly, were coherent with one another, so that the SS programme was logically developed and complete enough to meet the expected goals. After confirming the adequacy of the topics, each partner was asked to identify the teaching approach to be used and to map the specific learning experience onto the EIT learning outcomes defined within the education pillars (see Sect. 3.5 for further details). Eventually, the partners agreed on the SS calendar. The developed programme is based on the three main types of activities described below.

Interactive Lectures. Lectures, which covered 8 days of the SS, introduced specific topics in the field of innovation and entrepreneurship and included group activities. One of the distinctive features of the school was the invitation of a series of international speakers to give lectures. Efforts to maintain the coherence of the training programme and avoid duplicating content were rewarded, as the outcome was highly satisfactory and highly valued by the students, who had the opportunity to experience different teaching methods, connect with an international faculty, and delve into specific topics with industry experts. The covered topics are summarized in Table 1.
Table 1. List of topics covered during the lecture days.

| Day | Topic | No. of speakers |
|-----|-------|-----------------|
| 1 | IPR management: Patents, Trademarks and Design | 1 |
| 2 | Design Thinking for Manufacturing and Inclusive Design for Sustainable Manufacturing | 2 |
| 3 | Innovation in Human-Computer Interaction | 2 |
| 4 | Digital Maturity Assessment and Digital Strategy | 1 |
| 5 | Digital intrapreneurship fast track | 2 |
| 6 | Digital intrapreneurship fast track | 2 |
| 7 | Digital transformation in logistics and open interoperability | 1 |
| 8 | Set Based Innovation for Manufacturing | 1 |
Tours to Companies and Innovation Centers. Visits to companies' facilities and innovation centres were organized to give concrete and practical examples of how innovation and entrepreneurship are managed. All the involved organizations were part of the project consortium delivering the SS, so they were aware of the whole structure and goals of the SS and could structure the visits to make them a proper educational activity aligned with the study plan. Four days were dedicated to the visits. A summary of the visits' contents is provided in Table 2.
4.3 Challenges

Companies involved in the SS organization were asked to identify one or more challenges related to their innovation goals that could be analysed through the lens of sustainability. With a total of 41 participants, it was decided to launch 10 different industrial challenges. The main topics of the final list of challenges were the following: (1) circular manufacturing model, (2) green energy transition, (3) servitization, (4) innovative application of an exoskeleton that could improve sustainability performance, (5) innovative application of a cobot that could improve sustainability performance, (6) innovative application of a machine inspection recognition archetype that could improve sustainability performance, (7) dynamic warehouse discovery, (8) intelligent small hubs for local delivery, (9) how to make a suit born in the Industry 4.0 era a «sustainable suit», (10) innovation for companies led by enterprises. The application of tools introduced during the Summer School was evident in the approach used to develop solutions. This was the first and direct implementation of the knowledge acquired during the SS. Each team was expected to work on the idea formulation, market analysis (and competitors' analysis, if relevant), the business model of the identified solution(s) (value proposition, resources, and channels), a profit & loss analysis and a tentative implementation plan.

Team Composition. The composition of the teams was defined by the IESMA coordinators with the dual purpose of ensuring that each group had at least one member with a managerial background and of creating multicultural teams to enhance the international
Table 2. List of activities carried out during the tours.

| Activity | Company or Competence Centre |
|----------|------------------------------|
| Half a day dedicated to presentations on relevant innovation topics for the company (human & technology: a new way of working; dealing with humans; dealing with technologies; MATE exoskeleton experience); half a day involved in a robot (called e.DO) challenge | Industrial automation (Turin, Italy) |
| Factory tour followed by speeches on (1) innovation and R&D roadmap, (2) TWI - Training Within Industry, (3) the company's strategic imperative: Automation and Industry 4.0, (4) Sustainability & Net Zero | Domestic appliances manufacturing (Varese, Italy) |
| Presentations on (1) the role of the competence centre in technology transfer for digitalization, (2) from research to market: technology transfer methodology for the manufacturing industry in the TECH2Market project, (3) visit to discover their technologies, (4) teamwork | Competence Centre - Industry 4.0 (Milan, Italy) |
| (1) Series of talks on digital fabrication, robotics, design, and creativity, (2) visit to the FabLab of Digital Fabrication, (3) visit to the Mini-Factory | SUPSI (Lugano and Mendrisio, Switzerland) |
experience. The former criterion was implemented because individuals with a managerial background, who were numerous, are believed to be better equipped with methodological tools to support the development of solutions for the challenges. This was deemed appropriate to ensure that all teams could count on the same set of competencies. Consequently, nine teams of four members and one team of five members were formed. The list of groups was sent to the students the day before the challenge presentation. No student raised any objection regarding the group composition, and upon completion of the programme, they expressed that it was valuable to discuss and exchange ideas with individuals from completely different cultural and professional backgrounds.

Evaluation Criteria. The final day of the SS was devoted entirely to the teams' pitches. Each team was allocated 30 min, consisting of 15 min for the presentation and 15 min for answering questions from the jury. The schedule was sent to the teams one week in advance, taking into account the availability of the various jury members, as it was not feasible for all of them to be present for the entire day. The jury consisted of representatives from a majority of the IESMA partner organizations and the proponent companies, with the SS coordinators, a few representatives from the IESMA consortium, and the Head of the Master School attending the whole day. The other jury members connected remotely for the time required to evaluate the challenges they were involved
in. The jury thus operated in hybrid mode, with some members participating remotely and others in person; nine people were involved in the jury altogether. Members of the jury were asked to rate each presentation on the five pre-defined evaluation criteria on a scale of 1 to 5. The evaluation criteria were defined as follows: (1) Quality: appropriateness of the value proposition and completeness of the proposal; (2) Originality: presence of innovative elements; (3) Sustainability: contribution to enhancing the sustainability level; (4) Feasibility: potential for effective implementation; (5) Presentation effectiveness: clarity of the presentation and communication skills. At the conclusion of all presentations, the jury tallied the scores assigned to each team to determine the winning team. To ensure consistency in evaluation, four jury members participated in all presentations, with at least one member being from the proposing company. Since the number of jury members could vary slightly, the total score assigned to a team was divided by the number of voters. Members of the winning team were given a small prize during the closing ceremony.
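The scoring scheme described above (five criteria rated 1 to 5 by a variable number of jury members, with the total normalized by the number of voters) can be sketched as follows; the function name and the sample ratings are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of the jury scoring described in the text:
# each voter rates a pitch on five criteria (1-5); the team's score
# is the sum of all ratings divided by the number of voters, since
# jury size could vary slightly between presentations.

CRITERIA = ["quality", "originality", "sustainability", "feasibility", "presentation"]

def team_score(ratings: list[dict[str, int]]) -> float:
    """ratings: one dict per jury member, mapping criterion -> 1..5."""
    if not ratings:
        raise ValueError("at least one jury member must vote")
    total = sum(sum(r[c] for c in CRITERIA) for r in ratings)
    return total / len(ratings)  # normalize by the number of voters

# Example: three voters for one team
team_a = [
    {"quality": 4, "originality": 5, "sustainability": 4, "feasibility": 3, "presentation": 4},
    {"quality": 5, "originality": 4, "sustainability": 4, "feasibility": 4, "presentation": 5},
    {"quality": 4, "originality": 4, "sustainability": 5, "feasibility": 4, "presentation": 4},
]
print(team_score(team_a))  # (20 + 22 + 21) / 3 = 21.0
```

Dividing by the number of voters rather than using the raw sum is what keeps teams evaluated by slightly larger juries from being advantaged.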
4.4 Learning Path

Given the heterogeneity of participants' backgrounds and the short duration of the SS (three weeks in a row), there was a need to make sure that some basic knowledge was shared among all the participants, so as to start with a common set of relevant definitions and concepts. With the goal of establishing a foundation of understanding regarding the central themes of the SS, namely innovation, entrepreneurship, and sustainable production, a structured learning path was devised, lasting 120 min and titled "Innovation and Entrepreneurship for Sustainable Production". This learning path consists of 8 distinct nuggets that address various themes yet are tied together by a shared overarching concept. According to [31], a nugget is a self-consistent element aimed at providing a concise set of information on a specific topic. It therefore aims to bring participants to the competence level "knowledge". The nuggets come in different formats and varying durations, including short videos that illustrate key concepts or specific examples, visual representations of information that help synthesize complex concepts, quizzes to assess information retention, and presentations that quickly summarize key concepts or provide relevant information. 70% of the participants followed these nuggets, which were made available only to attendees registered on the Skills.move platform, EIT Manufacturing's learning platform that supports the European manufacturing industry in upgrading and retraining its current and future workforce, providing individuals with easy access to a customised learning experience. After the SS, the learning path went through a further quality check process managed by the EITM initiative, and it will soon be available on the same platform for all its users. The structure of the learning path is illustrated in Table 3.
4.5 Quality Assessment

A multifaceted assessment process was set up to evaluate the value, relevance, and quality of the IESMA SS. The principal consideration was to measure how well the students were able to learn within the programme, as well as to evaluate how well the
Table 3. Innovation and Entrepreneurship for Sustainable Production learning path.

| No | Topic | Duration [min] |
|----|-------|----------------|
| 1 | Digital tools for innovation | 15 |
| 2 | Methods for innovation | 13 |
| 3 | Collecting requirements | 6 |
| 4 | Innovation for sustainable manufacturing | 15 |
| 5 | Innovation in human interaction | 15 |
| 6 | Entrepreneurship vs Intrapreneurship | 10 |
| 7 | Supporting technology transfer with European networks: European Digital Innovation Hub | 10 |
| 8 | Quiz for Innovation and entrepreneurship for sustainable manufacturing | 10 |
material created and the lectures facilitated knowledge creation. To this aim, a pre- and post-assessment was carried out to understand the individual competence development of the IESMA SS participants with respect to the topics, teamwork, and challenges presented. By establishing a baseline value, the self-assessment allows instructors and coordinators to evaluate the students' perceived progress in a statistical manner. Seven competences (sustainability, manufacturing, entrepreneurship, digital innovation and transformation, design thinking, team collaboration, and team management) were identified as the main ones to be employed and developed in the context of the SS due to their linkage with the covered topics. Participants were asked to assess each competence using three criteria, namely autonomy, proactivity, and variability management, on a scale of 0 to 5 (0 = none; 5 = expert). The target range for improvement was set at a minimum threshold of 3 (proficient) and an optimal target of 4 (advanced). The comparison of the pre- and post-assessment showed an improvement for all the analysed competences, with the largest improvements in the areas of manufacturing, entrepreneurship, digital innovation and transformation, and design thinking. Another quality assessment was conducted to evaluate how well the SS was organized and how well the subject matter was communicated. This assessment phase allowed the students to evaluate the entire programme as well as the individual digital nuggets that were consumed. The feedback from the 41 students regarding the SS programme is positive overall. Students appreciated the location, diverse lecture topics, interesting excursions, relevant cases, diverse peers, and the opportunity to work in diverse groups.
However, some aspects for improvement were identified, such as having more intense lectures with more practical work (as students appreciated such activities), covering more aspects of sustainability, shortening the SS duration, and allowing more time for working on the challenges and for company-student meetups. Regarding the assessment of the individual digital nuggets, students completed an evaluation based on the quality of the content, clarity of presentation, and utility to the learning experience, using a scale of 1 to 5 (1 = very poor, 5 = very good). The consortium also asked whether students found any mistakes or misspellings
to improve the digital content. Based on 28 student responses, the digital nuggets were evaluated as good, with an overall average assessment score of 4. Overall, the results show a well-received first edition of the SS. The suggestions for improvement collected with this questionnaire are highly valuable for the design of future initiatives.
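The pre/post self-assessment comparison described in this section reduces to a simple difference of per-competence averages. A minimal sketch, with entirely hypothetical data and function names (the paper does not publish its raw scores or analysis code):

```python
# Hypothetical sketch of the pre/post self-assessment comparison:
# each participant rates seven competences on a 0-5 scale before and
# after the SS; the per-competence improvement is the difference of
# the group means.
from statistics import mean

COMPETENCES = [
    "sustainability", "manufacturing", "entrepreneurship",
    "digital innovation and transformation", "design thinking",
    "team collaboration", "team management",
]

def improvements(pre: list[dict[str, float]],
                 post: list[dict[str, float]]) -> dict[str, float]:
    """Mean post-score minus mean pre-score, per competence."""
    return {
        c: mean(p[c] for p in post) - mean(p[c] for p in pre)
        for c in COMPETENCES
    }

# Two illustrative participants
pre = [{c: 2 for c in COMPETENCES}, {c: 3 for c in COMPETENCES}]
post = [{c: 4 for c in COMPETENCES}, {c: 4 for c in COMPETENCES}]
print(improvements(pre, post)["manufacturing"])  # (4+4)/2 - (2+3)/2 = 1.5
```

Reporting the gap against the fixed targets (3 = proficient, 4 = advanced) rather than raw scores is what lets the coordinators judge whether the cohort reached the intended competence level, not just whether it improved.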
5 Lessons Learnt

The IESMA SS was designed to meet the requirements of a specific context, the EITM master programme, yet the experience gained during its delivery can be exploited to identify general guidelines that could support the development of other courses, with a dual purpose: on the one hand, to contribute to the literature (see Sects. 1, 2); on the other, to transfer innovation and entrepreneurial skills to engineers. The main findings can be summarized as follows:

Learning-by-Doing Approach: It proved to be a decisive choice. Applying tools and methods to a real case is essential to quickly absorb new contents that, most often, are not aligned with the technical expertise that characterizes courses for engineers. The practical approach facilitates understanding the implications of what is being taught.

Direct Involvement of Companies: The involvement of companies maximises the effectiveness of the learning-by-doing approach. In addition, it ensures that the case studies brought to the attention of participants reflect real challenges and are always up to date. The valuable contribution of companies is counter-balanced by the possibility for them to collect ideas from fresh minds.

Intensive Course: Since entrepreneurship is a minor topic in engineering curricula, it is a winning idea to have students immersed in the topic for a few days in a row. The limited time available for teaching this topic within the study plan makes it reasonable to concentrate it in a single period during which students are completely dedicated to it. It is believed that, for the same number of academic hours, this is the best format to increase students' interest in the topic.

Modular Contents: The SS was organized around self-contained modules that complement each other. In this way, it was possible to have an international faculty, and the prepared classes can also be exploited for shorter courses in different contexts.
From the students' point of view, the modularity of contents allows them to better organize their study and to search for contents after the end of the course.

Identification of a Focus Topic: In the case of the SS, the focus was on sustainable manufacturing. This choice was useful to give further concreteness to the introduced tools for innovation and entrepreneurship. Furthermore, it contributed to an even and balanced assessment of the achievement of the SS's objectives. In fact, all the challenges were on the same topic, and the level of difficulty as well as the type of competences needed to work on them was the same for all the teams.

Heterogeneous Class: Entrepreneurial skills are universal for engineers regardless of their specialization. It is thus important to exploit the opportunity to have people with different backgrounds in the same class and to favour the interaction among them to show
how different competences complement each other. This is an excellent way to experience a real working environment where the interaction happens with people speaking different technical languages and coming from different cultures.

Intrapreneurship and Entrepreneurship: Very often, the main cited topic is entrepreneurship for engineers. With the SS it was found that innovation management goes hand in hand with entrepreneurship. In other words, nowadays intrapreneurship is as important as entrepreneurship for engineers: not everyone wants to launch their own start-up, but all engineers have a role in contributing to making the company they work for more innovative. Understanding how to translate an idea into a commercial solution is essential for any engineer.

Multifaceted Assessment: The quality assessment process was designed along with the contents of the SS, and this ensured the collection of information that actually reflects the whole initiative and is valuable to trigger a continuous improvement process. The use of complementary assessments was also useful to cover all the relevant aspects. In particular, the self-assessment of the level of competence before and after the SS was valuable to evaluate the level of achievement of the learning outcomes.
6 Conclusion

This paper provides a twofold contribution to the evolving field of entrepreneurship education for engineers. On the one hand, it enriches the literature with empirical evidence from the experience gained during the Innovation & Entrepreneurship for Sustainable Manufacturing Summer School that took place in July 2022. On the other hand, the lessons learnt thanks to this initiative can be used by practitioners to design and deliver similar courses for engineers. By doing so, it addresses concerns related to the reproducibility, validity, and generalisation of research findings. In fact, in a special issue focused on the analysis of the commitment put forth by the engineering education community to understand best practices for entrepreneurship education, a list of established best practices is identified, and this paper aids in the validation and broader application of the reported best practices [3]. So far, only one edition of the SS has been held, so no comparisons can be carried out to assess the performance of different classes. Also, it has not yet been possible to test the validity of some improvement activities that were identified during the final quality assessment procedure. Finally, the evaluation criteria seem subjective, and none of the five has been defined in further detail; a more objective scale could be considered for future editions. The experience gained during the SS will be exploited not only to offer similar programmes in the future, but also to design shorter courses that implement only a limited number of the SS modules. A more structured analysis of the literature in the field of entrepreneurship for engineers is called for, so as to better position this experience in a wider context and enrich it with state-of-the-art contents and approaches.

Acknowledgment.
This work has been partly funded by the European Commission, within the EIT Manufacturing Initiative through the Innovation & Entrepreneurship for Sustainable Manufacturing Summer School (activity ID: 22187, IESMA Summer School).
References

1. Huang-Saad, A., Bodnar, C., Carberry, A.: Examining current practice in engineering entrepreneurship education. Entrepre. Educ. Pedagogy 3(1), 4–13 (2020)
2. Bodnar, C.A., Clark, R.M., Besterfield-Sacre, M.: Lessons learned through sequential offerings of an innovation and entrepreneurship boot camp for sophomore engineering students. J. Eng. Entrepre. 6(1), 52–67 (2015)
3. Naia, A., Baptista, R., Januário, C., Trigo, V.: A systematization of the literature on entrepreneurship education: challenges and emerging solutions in the entrepreneurial classroom. Ind. High. Educ. 28(2), 79–96 (2014)
4. Abdullahi, I.M., Bin Jabor, M.K., Akor, T.S.: Developing 4IR engineering entrepreneurial skills in polytechnic students: a conceptual framework. Int. J. Innov. Technol. Explor. Eng. 9(3), 2636–2642 (2020)
5. Landry, L.: 7 business skills every engineer needs (2018)
6. Swamidass, P.: Engineering Entrepreneurship from Idea to Business Plan: A Guide for Innovative Engineers and Scientists. Cambridge University Press, Cambridge (2016)
7. Ekpiken, W.E., Ukpabio, G.U.: Entrepreneurship education, job creation for graduate employment in south-south geopolitical zone of Nigeria. Brit. J. Educ. 3(1), 23–31 (2015)
8. Küttim, M., Kallaste, M., Venesaar, U., Kiis, A.: Entrepreneurship education at university level and students' entrepreneurial intentions. Procedia Soc. Behav. Sci. 110, 658–668 (2014)
9. Arasti, Z., Falavarjani, M.K., Imanipour, N.: A study of teaching methods in entrepreneurship education for graduate students. High. Educ. Stud. 2(1), 2–10 (2012)
10. Ndou, V., Mele, G., Del Vecchio, P.: Entrepreneurship education in tourism: an investigation among European universities. J. Hosp. Leis. Sport Tour. Educ. 25, 100175 (2019)
11. Israel, K.J., Johnmark, D.R.: Entrepreneurial mind-set among female university students: a study of University of Jos students, Nigeria. Chin. Bus. Rev. 13(5), 320–332 (2014)
12. World Economic Forum, Asian Development Bank (ADB): ASEAN 4.0: what does the Fourth Industrial Revolution mean for regional economic integration? World Economic Forum, Geneva, Switzerland (2017)
13. Gilmartin, S.K., Shartrand, A., Chen, H.L., Estrada, C., Sheppard, S.D.: Investigating entrepreneurship program models in undergraduate engineering education. Int. J. Eng. Educ. 32(5), 2048–2065 (2016)
14. Creed, C.J., Suuberg, E.M., Crawford, G.P.: Engineering entrepreneurship: an example of a paradigm shift in engineering education. J. Eng. Educ. 91(April), 185–195 (2002)
15. Byers, T., Seelig, T., Sheppard, S., Weilerstein, P.: Entrepreneurship: its role in engineering education. The Bridge 43(2), 35–40 (2013)
16. Gilmartin, S., Shartrand, A., Chen, H.L., Estrada, C., Sheppard, S.: US-based entrepreneurship programs for undergraduate engineers: scope, development, goals, and pedagogies. Epicenter Tech. Brief 1 (2014)
17. Rae, D., Melton, D.E.: Developing an entrepreneurial mindset in US engineering education: an international view of the KEEN project. J. Eng. Entrepre. 7(3), 1–16 (2016)
18. Shekhar, P., Huang-Saad, A.: Conceptualizing the entrepreneurial mindset: definitions and usage in engineering education research. In: ASEE Annual Conference and Exposition, Conference Proceedings (2019)
19. Duval-Couetil, N., Shartrand, A., Reed, T.: The role of entrepreneurship program models and experiential activities on engineering student outcomes. Adv. Eng. Educ. 5(1), 1–27 (2016)
20. Shekhar, P., Bodnar, C.: The mediating role of university entrepreneurial ecosystem on students' entrepreneurial self-efficacy. Int. J. Eng. Educ. 36(1A), 213–225 (2020)
21. Zappe, S.E., Hochstedt, K.S., Kisenwether, E.C.: Faculty beliefs of entrepreneurship and design education: an exploratory study comparing entrepreneurship and design faculty. J. Eng. Entrepre. 4(1), 55–78 (2013)
22. Jin, Q., et al.: Entrepreneurial career choice and characteristics of engineering and business students. Int. J. Eng. Educ. 32(2), 598–613 (2016)
23. Gerhart, A., Melton, D.E.: Entrepreneurially minded learning: incorporating stakeholders, discovery, opportunity identification, and value creation into problem-based learning modules with examples and assessment specific to fluid mechanics. In: ASEE Annual Conference and Exposition (2016)
24. Huang-Saad, A., Celis, S.: How student characteristics shape engineering pathways to entrepreneurship education. Int. J. Eng. Educ. 33(2), 527–537 (2017)
25. Digital learning for enhancing entrepreneurial skills of future engineers. In: Lecture Notes in Networks and Systems, vol. 633, pp. 1030–1037 (2023). https://doi.org/10.1007/978-3-031-26876-2_96
26. Mamdani, M.: A Delphi technique in proposing a conceptual postmodernism (2013)
27. Li, M., Faghri, A.: Applying problem-oriented and project-based learning in a transportation engineering course. J. Prof. Issues Eng. Educ. Pract. 142(3), 04016002 (2016)
28. Mwasalwiba, E.S.: Entrepreneurship education: a review of its objectives, teaching methods, and impact indicators. Educ. + Train. 52(1), 20–47 (2010)
29. Prince, M.J., Felder, R.M.: Inductive teaching and learning methods: definitions, comparisons, and research bases. J. Eng. Educ. 95(2), 123–138 (2006)
30. Tasnim, N.: Playing entrepreneurship: can games make a difference? Entrepre. Pract. Rev. 2(4), 4–18 (2012)
31. Michele, F., et al.: Small and medium enterprises' workforce upskilling through digital contents streaming: a case study (2022)
Lean in Healthcare
Role of Manufacturing Industry for Minimizing the Barriers to Circular Transition in the Health Sector: A Framework

Kartika Nur Alfina1,2(B) and R. M. Chandima Ratnayake1

1 Department of Mechanical and Structural Engineering and Materials Science, University of Stavanger, Stavanger, Norway
[email protected], [email protected]
2 School of Business and Management, Institut Teknologi Bandung (ITB), Bandung, Indonesia
Abstract. High-quality healthcare prevents disease and improves the quality of life by utilizing doctors, nurses, drugs, medical support, and other services. The primary goal of the healthcare supply chain is to deliver products on time to satisfy the needs of healthcare service providers. Due to numerous relevant barriers, it is challenging for the health sector to achieve sustainable consumption and production trends in supply chains. The future agendas of the transition to a circular economy include a dual mission of profitability and sustainability. A transition towards a circular economy (CE) represents a shift from a take-make-dispose economy toward a regenerative economy that matches the targets of all 17 Sustainable Development Goals (SDGs). This study aims to define the barriers to circular transition and to highlight the role of manufacturing in minimizing barriers in the health sector. Manufacturing's role in the health sector, particularly at the product development stage, could be one of the long-term solutions for sustainable and circular development. A conceptual framework, developed around overcoming these barriers, highlights the roles of manufacturing and modern technology in achieving sustainable development in the circular transition. The conceptual framework was created to make health sector managers, practitioners, and supply chain actors aware of the hurdles to circular economy implementation.

Keywords: Barriers · Circular Economy · Health sector · Manufacturing · Supply chain
© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 479–496, 2023. https://doi.org/10.1007/978-3-031-43666-6_33

1 Introduction

The healthcare system is one of the world's largest and fastest-growing, which implies it is dynamically changing, involving a wide range of industries such as hospitals, clinics, medical infrastructure, medical equipment, pharmaceuticals, biotechnology, health insurance, and so on. Organizations, notably healthcare organizations, have recently faced two substantial pressures that may challenge their business models and considerably impact their operations. The first pressure is digitization, which promises more efficient
processes, cost minimization, higher reliability, compelling design, and improved management control [1]. Another essential pressure is the sustainability of business [2]. With concerns about climate change, diminishing resources, and increased environmental pollution, as well as rapid growth in the world population, the importance of resource utilization for sustainability has grown in line with a sustainable resource management approach, which is seen as a method for solving these problems [3]. In this sense, social, economic, and environmental sustainability should be incorporated within the company and in all stages of its supply chain [4].

1.1 Practical Implication Background

Healthcare supply chains provide healthcare products, medicines, and various health services to the public while collecting data and information regarding requirements, supply, and circulation for the healthcare management system [5]. On the other hand, manufacturing companies have played an essential role in improving living standards worldwide. However, manufacturing is also linked to unsustainable production and consumption patterns in linear settings. The fourth industrial revolution and its digital transformation, Industry 4.0, are advancing rapidly. The digital revolution fundamentally reshapes how individuals live and work, and the public remains optimistic regarding the opportunities Industry 4.0 may offer for sustainability [6]. Due to numerous relevant barriers, it is challenging for the healthcare sector to achieve sustainable consumption and production trends in supply chains [7]. Collaboration among suppliers, manufacturers, and healthcare institutions is critical, and increasing the use of biodegradable and recyclable materials is an alternative solution in the healthcare industry. The future agendas of the transition to a circular economy include a dual mission of profitability and sustainability [8].
A transition toward CE represents a shift away from a take-make-dispose economy and toward a regenerative economy that matches the targets for all 17 SDGs of the United Nations 2030 Agenda [9]. CE practices can be tools to achieve a considerable number of SDGs. CE is gaining traction in business, emphasizing the environmental and economic elements of sustainability.

1.2 Contribution Statement

This study aims to define the barriers to circular transition and to highlight the role of manufacturing in minimizing barriers in the health sector. Manufacturing's position in the health sector could be one of the long-term solutions for sustainable and circular development. The roles of manufacturing and modern technology in the circular transition were highlighted as enablers of sustainable development in the conceptual framework constructed around overcoming barriers. The conceptual framework was developed to raise awareness of the barriers to circular economy implementation among health sector managers, practitioners, and supply chain actors. To the best of the researchers' knowledge, previous studies have not made this contribution. This research starts by mapping the barriers, analysing barrier management using the bowtie diagram, and developing a conceptual framework for the manufacturing role in minimizing circular transition barriers.
2 Literature Review

A systematic literature review was used to identify relevant research and assess the contributions of circular transition in the health sector and supply chain practices (see Fig. 1). The articles for the literature review were collected from publications associated with the Scopus database (such as ScienceDirect, Emerald Group Publishing, Springer, etc.). The time interval chosen is the last five years (between 2017 and 2022).
Fig. 1. Systematic literature review.
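The screening step of the review described above (a time window of 2017–2022 and topic relevance) can be sketched programmatically. This is a minimal illustrative sketch only: the record fields and keyword list are assumptions for the example, not the authors' actual search protocol.

```python
# Illustrative screening step of a systematic literature review:
# keep records inside the chosen time window (2017-2022) whose title
# or abstract mentions any of the review's key terms.

KEYWORDS = ("circular economy", "health sector", "supply chain")  # assumed terms

def screen(records, start=2017, end=2022, keywords=KEYWORDS):
    """Return records inside the time interval that match any keyword."""
    selected = []
    for rec in records:
        text = (rec["title"] + " " + rec.get("abstract", "")).lower()
        if start <= rec["year"] <= end and any(k in text for k in keywords):
            selected.append(rec)
    return selected

# Hypothetical bibliographic records for demonstration.
records = [
    {"title": "Circular economy in the health sector", "year": 2021},
    {"title": "Lean production", "year": 2019},
    {"title": "Circular supply chain barriers", "year": 2015},
]
print([r["title"] for r in screen(records)])
# prints "['Circular economy in the health sector']"
```

Only the 2021 record survives both filters: the 2019 record matches no keyword, and the 2015 record falls outside the time window.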
According to the findings of the literature research (see Fig. 2), the implementation of the circular economy in the health sector has remained limited during the last five years. However, the circular transition is widely applied in supply chain activities, giving rise to the term "circular supply chain." Research on the barriers to circular economy principles is also still limited to other sectors, specifically food waste, plastic waste, and supply networks. The next section covers the barriers to circular transition in the health sector, classified into people, management, policy, supply, and environmental constraints. This is followed by Industry 4.0, which is still in its early stages but holds great promise for supporting the circular economy in the future.
Fig. 2. Literature review results related to circular transition in the health sector.
2.1 Identification of Barriers for CE Adoption

The health sector's usage of resources, materials, and energy has increased dramatically over the years [10]. This increasing resource need and demand in the health sector has led to an increase in the usage of disposable medical equipment and single-use medical supplies. Many of these healthcare supplies are used once and then discarded [11], which causes enormous disruptions and burdens from an environmental perspective. Most disposable medical equipment types are plastic products frequently used for various medical applications, and their non-biodegradable nature is harmful to the environment. As a result of these trends, the healthcare sector has been affected and is striving for sustainable solutions. The conservation of natural resources is obstructed by insufficient infrastructure for managing residual waste resulting from single-use medical materials, increased energy use, and its environmental burden. In this context, circularity and sustainability concepts have become essential in healthcare to reduce the sector's negative impacts on the environment [12, 28]. The circular economy is an alternative solution to healthcare issues. However, the application of CE comes with barriers. Non-financial barriers to circular transition in the health industry were identified in this study. These barriers represent significant obstacles to supply chain operations and the manufacturing sector. Based on a review of the literature and expert feedback, barriers to CE adoption were identified and then translated into elements (see Table 1), which were constructed using a fishbone diagram perspective.

Table 1. The identified barriers to circular transition in the health sector.

| Elements | Barriers | Description | Authors |
|---|---|---|---|
| People | Lack of enthusiasm about circularity | Lack of public perception of and commitment to environmental issues is also a considerable barrier to accomplishing CE practices | [12–14] |
| | Consumer perception of reused components being flawed | Lack of public interest in and reaction to circular equipment (reusable medical devices) holds organizations back and prevents them from pursuing circular trends | [12–14] |
| | Unsustainable cultural behaviour | The consumption of single-use medical supplies such as masks and gloves is still a habit | [12–14] |
| Management | Conflict of interest among stakeholders | Conflicting interests give higher priority to other problems or needs (e.g., quality of care for patients is the top priority in the healthcare industry rather than CE) | [12, 13, 15, 16] |
| | Lack of top management support for circularity | Limited commitments from top management and organizational structures to support CE implementation in the health sector | [12, 13, 15, 16] |
| | Lack of standardization for measuring CE | Lack of a standard system for performance assessment and measurement of CE implementation from the government or associations | [12, 13, 15, 16] |
| | Proactive strategies for environmental burden | Proactive strategies for circular transition that focus on environmental and social well-being are difficult to accomplish | [12, 13, 15, 16] |
| Policy | Lack of circular policies, incentives, and regulations in healthcare | The policies, incentives, and regulations are not stringent, and there is no existing tool to analyse the effectiveness of the proposed CE rules | [12, 13] |
| | Unclear vision regarding CE in supply chains | Regulatory barriers encompass an unclear national vision, such as goals, objectives, targets, and indicators | [12, 13] |
| | Existing laws in waste management do not support CE | The environmental laws in some systems do not fit CE concepts | [12, 13, 16] |
| Supplies | Lack of collaboration between supply chain actors | Individuals in supply chain components are reluctant to collaborate on or support CE initiatives | [12–14] |
| | Lack of demand for eco-friendly medical supplies | Insufficient demand for eco-friendly medical supplies from consumers/patients | [12–14] |
| | Lack of interoperability measures for CE | Lack of an information system regarding available technologies and best practices for integrating CE in the health sector | [12–14] |
| | High-cost requirement for CE implementation | Implementing CE in healthcare supply chains requires restructuring facilities, trained staff, construction, technology, etc. | [12–16] |
| Environmental | Single-use medical devices and supplies | The sterilization issue, to prevent hazardous material and infectious disease, is the main reason for resistance toward circularity | [12] |
| | Lack of safe management of medical waste in healthcare | Lack of guidance on safety requirements for highly technical products and issues related to hazardous components and materials | [12] |
2.2 Role of Manufacturing and Sustainable Development in Relation to CE Adoption in the Health Sector

Industry 4.0 technologies and renewable feedstock supply chains have the potential to operationalize a circular economy and support sustainable development [17]. Industry 4.0 implementations can facilitate better resource utilization, foster efficient operational control of manufacturing systems [18], and accelerate the transition toward operations focused on circular economy tenets. According to the Ellen MacArthur Foundation [19] and scholarly data, Industry 4.0 enables the transition to a circular economy, which includes the health sector. The circular economy concept has gained momentum recently, breaking the linear cycle by keeping resources 'in the loop' [20]. Transitioning from Industry 3.0 to Industry 4.0, or the Industry 3.5 stage, and shifting to a circular economy are two essential concepts for organizations that require significant changes to their current
processes. To remain competitive, these transitions must adhere to sustainable resource management and digital transformation [21]. There is a strong connection between the circular economy and the SDGs adopted by the United Nations [9, 18]. The circular economy can help support the achievement of the SDGs by promoting sustainable production and consumption patterns, improving resource efficiency, and reducing waste and negative environmental impacts. By embracing the circular economy, governments, businesses, and individuals can help build a more sustainable future. Several SDGs in the health sector are related to medical supplies, as shown in Table 2. The promotion of sustainable medical supplies in the health sector has a strong connection to SDGs 3 and 12, focusing mainly on reducing the environmental impact of medical supplies, medical equipment, and disposal. However, other SDGs are equally critical for promoting sustainable healthcare practices and minimizing the carbon footprint of healthcare facilities.

Table 2. The identified SDGs related to medical supplies in the health sector.

| Goal number | SDG name | Description |
|---|---|---|
| SDG 3 | Good Health and Well-being | This goal involves a focus on providing cost-effective and essential medications and vaccinations, as well as enhancing the supply chain for medical products and technologies |
| SDG 6 | Clean Water and Sanitation | This aim focuses on providing access to reliable and safe water sources, which are necessary for delivering medical supplies such as clean water for medical procedures and sterile equipment |
| SDG 7 | Affordable and Clean Energy | This aim encourages clean and renewable energy sources in the healthcare industry, such as solar energy for powering medical equipment and facilities |
| SDG 9 | Industry, Innovation, and Infrastructure | This goal includes encouraging innovation in the healthcare sector, such as the development of innovative medical technologies and sustainable manufacturing processes for medical supplies |
| SDG 12 | Responsible Consumption and Production | This goal involves minimizing the environmental impact of the manufacture and disposal of medical supplies and equipment, in addition to improving the efficiency and sustainability of healthcare systems |
| SDG 13 | Climate Action | This aim involves a focus on reducing greenhouse gas emissions in the healthcare industry while encouraging sustainable healthcare practices |
| SDG 17 | Partnerships for the Goals | This goal includes developing partnerships and collaboration among various players in the healthcare sector, such as governments, non-governmental organizations, and private sector organizations, in order to enhance the sustainability of medical supplies and healthcare systems |
3 Methodology

This study involved preliminary research utilizing a literature review, followed by further qualitative analysis involving expert insight from previous studies. Having defined the barriers to circular transition in the health sector, this research starts by mapping the barriers, then analyses barrier management using the bowtie diagram, and finally develops a conceptual framework for the manufacturing role in minimizing circular transition barriers. The complete research methodology of this study is presented in the flowchart below (Fig. 3).
Fig. 3. Research methodology flowchart
4 Results and Discussions

4.1 Barriers Mapping Using Fishbone Diagram

Circular transition in the health sector refers to the shift towards a more sustainable and circular economy, where resources are used to maximize their value and minimize waste and environmental impact. The barriers identified above (see Table 1) relate to the healthcare supply chain that supplies medical needs to healthcare services; they are visualized using the fishbone diagram presented in Fig. 4.
Fig. 4. Fishbone Diagram representing the mapping of barriers toward circular transition in health sector.
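The mapping in Table 1 and Fig. 4 can also be held as a simple data structure. The following is an illustrative Python sketch (not tooling used in the study): the element names follow Table 1, the barriers shown are a subset of those identified, and the `render` helper is hypothetical.

```python
# Fishbone mapping: each causal element (bone) groups the barriers (causes)
# that contribute to the effect "slowing circular transition".
EFFECT = "Slowing circular transition in the health sector"

FISHBONE = {
    "People": ["Lack of enthusiasm about circularity",
               "Consumer perception of reused components being flawed",
               "Unsustainable cultural behaviour"],
    "Management": ["Conflict of interest among stakeholders",
                   "Lack of top management support for circularity"],
    "Policy": ["Lack of circular policies, incentives, and regulations"],
    "Supplies": ["Lack of collaboration between supply chain actors"],
    "Environment": ["Single-use medical devices and supplies"],
}

def render(effect, bones):
    """Return the diagram as an indented cause list (one bone per element)."""
    lines = [f"Effect: {effect}"]
    for element, barriers in bones.items():
        lines.append(f"  {element}")
        lines.extend(f"    - {b}" for b in barriers)
    return "\n".join(lines)

print(render(EFFECT, FISHBONE))
```

Keeping the mapping in one structure makes it easy to count barriers per element or to extend the diagram when new barriers are identified.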
The fishbone diagram is a visual way of illustrating cause and effect. It is a more structured approach than other tools for brainstorming causes of problems (e.g., the Five Whys tool) [22]. This form is usually used in manufacturing to find the root cause of operational issues. Common causal elements, which can also be called input variables to specific problems, are traditionally divided into five elements: man, machine, material, method, and environment. The main issue highlighted here is the slowing circular transition due to the predetermined barriers in the health sector. In this context, the fishbone diagram illustrates causal elements in the healthcare supply chain: people, management, policy, technology, supplies, and environment. This makes it an appropriate approach for mapping the barriers to circular transition in the health sector. The six elements represent the connections among healthcare supply chain actors such as suppliers, manufacturers, healthcare service providers (e.g., hospitals, clinics), and the government.

4.2 Barrier Management with Bowtie Diagram

The demand for risk-based thinking is formally implied by ISO 9001:2015 for quality management systems. Risk-based thinking is essential for a quality management system [23] and is mandatory for every project, including circular transition projects in the health sector. A bowtie diagram is commonly used to define barriers in a safety management system, and bowties are also used for regulatory development. This study uses the bowtie diagram to identify barriers in the circular transition. This study conducts the
risk assessment to identify threats, consequences, and reactive and proactive barriers, as represented in Fig. 5. The bowtie methodology visually represents a risk assessment and is a better way of understanding the risk to a system. The hazards present in operational healthcare services hinder circular transformation. Thus, the 'Top Event' in this context is the slowing circular transition.
Fig. 5. Bowtie diagram representing barrier management toward circular transition in health sector.
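The bowtie structure in Fig. 5 (threats with their preventive barriers on the left of the top event, consequences with their reactive barriers on the right) can be sketched as a small data model. This is an illustrative Python sketch, not tooling used in the study; the entries are a reduced subset of the threats, barriers, and consequences discussed in this section.

```python
from dataclasses import dataclass, field

@dataclass
class Bowtie:
    top_event: str
    # left side: threat -> list of preventive barriers
    threats: dict = field(default_factory=dict)
    # right side: consequence -> list of reactive barriers
    consequences: dict = field(default_factory=dict)

bowtie = Bowtie(
    top_event="Slowing circular transition",
    threats={
        "Supply chain complexity": ["Circular business model", "Green procurement"],
        "Technological limitations": ["Adopt IoT", "Additive manufacturing"],
        "Resistance to change": ["Culture change", "SOPs"],
        "Regulatory barriers": ["Policy advocacy", "Capacity building"],
    },
    consequences={
        "Increased environmental harm": ["Waste minimization", "Eco-design"],
        "Resource inefficiency": ["Sustainable sourcing", "Design for circularity"],
        "Poor health quality": ["Education & awareness", "Healthy lifestyle"],
    },
)

# Completeness check: every threat and consequence has at least one barrier.
assert all(bowtie.threats.values()) and all(bowtie.consequences.values())
print(len(bowtie.threats), "threats,", len(bowtie.consequences), "consequences")
# prints "4 threats, 3 consequences"
```

Modelling the bowtie explicitly lets practitioners verify that no pathway between a threat and the top event (or between the top event and a consequence) is left without a barrier.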
There are several threats that could impede the success of the circular transition, as shown in Table 3.

Table 3. The identified threats to circular transition in the health sector.

| Threat | Description |
|---|---|
| Supply chain complexity | The health sector depends on complex supply chains involving multiple parties, including manufacturers, distributors, and healthcare providers. Implementing circular practices requires coordination and cooperation among these stakeholders, which may be challenging |
| Technological limitations | A circular economy's success primarily relies on data and technology to track and manage resources. However, many health institutions may lack the data and technical infrastructure required for a circular approach |
| Resistance to change | The current linear paradigm has existed for a long time, and stakeholders who are content with the status quo may be resistant |
| Regulatory barriers | The existing legislation may be incompatible with circular practices, resulting in regulatory impediments that make it challenging to execute circular efforts |
The 'Preventive barriers' are actions that can be taken to proactively address possible concerns and keep them from becoming impediments to the circular transition in the health sector; they are represented in Table 4. The 'Consequences' of a slowed circular transition in the health sector are also discussed, such as 'Increased environmental harm', because the health sector is a large contributor to environmental harm, including waste generation, pollution, and resource depletion. A slow circular shift could result in 'Resource inefficiency', increased costs, and lower availability because the health sector consumes natural resources such as materials, water, and energy. Lastly, the delayed circular transition may also increase the risks of 'Poor health quality' for the public, such as exposure to hazardous waste, pollution, and other environmental hazards. The existing linear health sector paradigm contributes to the spread of infectious illnesses, pollution, and climate change, all of which can have serious public health consequences. The 'Reactive barriers' attempt to mitigate the impact of possible threats while expediting healthcare organizations' recovery. The consequence of 'Increased environmental harm' is countered by reactive barriers such as 'Waste minimization' and 'Using eco-design'; these two barriers can be adjusted quickly to reduce the threat's influence on the circular transition. The mitigations for 'Resource inefficiency' are 'Sustainable sourcing' and 'Design for circularity'; these two reactive barriers involve manufacturing roles that reduce barriers in the circular transition. 'Poor health quality' is addressed by 'Education & awareness' and 'Healthy lifestyle' as low-hanging-fruit solutions and long-term initiatives toward sustainability and circularity awareness. Building a healthy population entails fewer sick people and supports future development with a robust and capable generation.
Table 4. The identified preventive barriers for circular transition in the health sector.

| Threat | Preventive barrier | Description |
|---|---|---|
| Supply chain complexity | Circular business model | Health organizations can implement circular business models prioritizing the recovery and reuse of resources and products, such as product-as-a-service or take-back programs |
| | Green procurement | Prioritize the purchase of sustainable and circular products, such as reusable medical devices and eco-friendly packaging, to reduce waste and enhance resource efficiency |
| | Technology integration | Investing in technology such as tracking and tracing systems, data analytics, and digital platforms can help streamline the supply chain and minimize complexity. With technological integration, supply chain visibility is much clearer and can be an accelerator for circular transition |
| Technological limitations | Adopt the Internet of Things (IoT) | IoT can offer real-time data on the flow of resources and products along the supply chain. This can improve openness and traceability while reducing complexity and increasing efficiency in the circular transition |
| | Additive manufacturing | Innovations in additive manufacturing, such as 3D printing, can be used to create products designed for circularity, such as products made from recycled materials or modular products that can be easily disassembled and recycled |
| | Artificial intelligence (AI) | The critical areas of use of AI tools in the healthcare sector are clinical workflow design, training of healthcare workers, healthcare professional performance, forecasting potential problems and diseases, and legal and ethical application practices that support CE |
| Resistance to change | Culture change | Shifting the culture toward circularity for medical supplies and consumption in the health sector |
| | Standard operating procedures (SOPs) | Develop standardization using SOPs to support circular transition and avoid human error in health sector practices |
| | Performance metrics | Develop performance metrics to measure the effectiveness of CE implementation. These could also frame and accelerate the circular shift in the health industry by improving performance at the department and individual levels |
| Regulatory barriers | Capacity building | This can involve regulatory compliance training, stakeholder engagement, and other skills necessary to execute circular solutions |
| | Policy advocacy | Stakeholders can help overcome regulatory barriers to the circular transition by advocating for circularity policies |
| | Collaboration & knowledge sharing | By sharing best practices and case studies, stakeholders can learn from one another and identify the potential for innovation within the regulatory framework |

4.3 Conceptual Framework Development

The conceptual framework was developed from the manufacturing and modern technology roles that were highlighted as enablers of sustainable development and of overcoming barriers to circular transition in the health sector (see Fig. 6). Manufacturing is critical to reducing barriers to the circular transition in the health sector. Manufacturers may build more sustainable, ecologically friendly, and socially responsible healthcare products by implementing circular product design ideas and techniques. The evolution of Industry 4.0 also plays an important role in minimizing barriers, as highlighted in the framework. It refers to the use of modern technologies such as the Internet of Things (IoT), cloud computing, artificial intelligence (AI), and additive manufacturing to improve efficiency, productivity, and flexibility. According to a prior study [12], cloud computing was found to be the most essential big data option for overcoming CE restrictions in the healthcare business. This was expected, as cloud computing can provide organizations with additional benefits, such as lowering the cost of technology investments (capital, operational expense savings, and labour cost), and, as a result, providing better healthcare services by overcoming poor infrastructure, insufficient resources, and a lack of expertise and technology. There is also a smart healthcare waste disposal system that incorporates circular economy principles to recover value from disposables. Two criteria stand out as compelling grounds for a smart healthcare waste disposal system: (i) digitally connected healthcare centers, waste disposal firms, and pollution control boards, and (ii) delivering a pollution control board's feedback app to the public and other stakeholders. Finally, this work proposes a causal relationship model between the entangled drivers of Industry 4.0 and the circular economy for constructing a smart healthcare waste disposal system enhanced with circular economy benefits [24].
Fig. 6. Conceptual framework of Circular transition in health sector (adopted from [25]).
Several actions can be taken by healthcare institutions to implement the circular economy, as defined by [26], such as establishing a green team whose members include the directors of purchasing and of health and safety. According to [27], solar energy, water management, and corporate social responsibility connect the social role of healthcare institutions with sustainable practices while also improving smart technologies. The applicability of the Internet of Things and the Internet of Services adds value to sustainable activities. This paper fills a gap by defining the barriers to circular transitions and emphasizing the role of manufacturing in minimizing barriers in the health sector. From a literature analysis and expert input, 19 barriers are identified across six aspects: people, management, policy, technologies, supply, and environment. Furthermore, by applying barrier management to support circular transformation in the health sector, this work extends the theoretical contribution. Clarifying the threats, identifying preventive and reactive barriers, and defining the consequences will help health sector stakeholders make decisions in the circular economy business transition while providing high-quality health services. This study also highlights product development as the key to circular transition [29]. The new product development (NPD) strategy for circular transition in the healthcare sector entails implementing circular economy ideas into the design, development, and production of new healthcare products. Here are some key elements of the NPD approach for circular transition:
• Design for durability: the NPD process should prioritize the development of products that are long-lasting, durable, and repairable. This includes using high-quality materials, taking into account the product's lifespan, and ensuring that components can be easily repaired or replaced.
• Use of sustainable materials: incorporating recycled and sustainable materials into the NPD process is critical for circularity. The approach should include sourcing materials from recycled, biodegradable, or renewable sources, as well as selecting materials that can be easily recycled at the end of their lifespans.
• Product life extension: the NPD strategy should concentrate on providing solutions that allow the product's life to be extended. This can involve providing maintenance services and spare parts and putting in place refurbishing initiatives.
• Integration of digital technologies: Internet of Things (IoT) devices and data analytics, for example, can play a crucial part in the circular transformation. These technologies should be considered for incorporation into the NPD strategy to enable product tracking, monitoring, and optimization.
• Collaboration with suppliers: collaboration with suppliers is vital for adopting circularity in the NPD strategy. Engaging suppliers early in the process can help identify sustainable material options, organize take-back programs, and ensure a cascaded-loop approach.
• Lifecycle assessment and metrics: it is critical to conduct a lifecycle assessment (LCA) of new products in order to identify environmental implications and potential areas for improvement. The measurement and tracking of key performance indicators (KPIs) related to circularity, such as material efficiency, waste reduction, and product lifespan, should be part of the NPD strategy.
• Continuous improvement and learning: the NPD strategy should promote a culture of continuous improvement and learning.
Regular feedback loops, customer insights, and post-market monitoring can provide useful information for refining product designs, addressing performance concerns, and creating new circular opportunities. From a benchmarking standpoint, the European Union has driven robust innovation through legislation to promote the rapid transformation of sustainable items such as medical equipment and supplies. To ensure greater safety and quality standards in the industry, the EU Medical Device Regulation (EU MDR) authorized new rules for applying the 'CE' mark to medical devices [30]. In public procurement, Nordic Ecolabelling's [31] mandate for biodegradable and green products governs sustainable medical supplies. This primarily relates to a new brand strategy and a more practical concept and approach, with digital transformation as a crucial component of building a circular economy. Important criteria development work has occurred in various industries, including the health sector. ISO 14001 could also help with the circular transition in the health sector; for example, hospitals should include more sustainable practices in their strategies. ISO 14001 is an international standard that provides environmental management system criteria. It helps businesses improve their environmental performance by utilizing resources more effectively and reducing waste, resulting in a competitive advantage and stakeholder trust [32].
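The circularity KPIs named in the NPD strategy above, such as material efficiency and waste reduction, can be made concrete with simple ratios. The definitions and figures below are illustrative assumptions for a sketch, not formulas mandated by any of the standards cited.

```python
def material_efficiency(material_in_product_kg, material_input_kg):
    """Share of input material that ends up in the product (rest is scrap)."""
    return material_in_product_kg / material_input_kg

def waste_reduction(baseline_waste_kg, current_waste_kg):
    """Fractional reduction of waste against a baseline period."""
    return (baseline_waste_kg - current_waste_kg) / baseline_waste_kg

# Hypothetical figures for one production run of a reusable medical device:
# 10 kg of material enters the process, 8 kg ends up in products;
# waste drops from 2.5 kg (baseline) to 2.0 kg (current period).
eff = material_efficiency(material_in_product_kg=8.0, material_input_kg=10.0)
red = waste_reduction(baseline_waste_kg=2.5, current_waste_kg=2.0)
print(f"material efficiency: {eff:.0%}, waste reduction: {red:.0%}")
# prints "material efficiency: 80%, waste reduction: 20%"
```

Tracking such ratios per product line over time is one way to operationalize the department-level and individual-level performance metrics proposed as a preventive barrier in Table 4.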
5 Conclusion

The healthcare sector has a considerable environmental impact, generating a significant quantity of waste and consuming important resources. By utilizing the circular economy concept, the industry may reduce waste output, conserve resources, and minimize its overall environmental impact. This transition toward sustainability is critical for mitigating climate change, safeguarding ecosystems, and maintaining natural resources for future generations. The circular economy transition has inherent barriers that must be overcome for its implementation. The goal of this study is to examine the role of manufacturing in minimizing barriers to the circular transition. This study identified 19 barriers across six factors: people, management, policy, technology, supply, and environment. The additional analysis with barrier management is novel in promoting the circular transformation, particularly in the health sector. The emergence of Industry 4.0 also plays an essential part in minimizing barriers; it refers to the use of modern technologies such as the Internet of Things (IoT), cloud computing, artificial intelligence (AI), and additive manufacturing to improve efficiency, productivity, and flexibility. The new product development (NPD) strategy for the circular transition, which has been emphasized as an important feature in the healthcare sector, comprises incorporating circular economy ideas into the design, development, and production of new healthcare products. By combining these factors into the NPD strategy, the healthcare business can generate innovative goods that adhere to circular economy principles. This move contributes to waste reduction, resource efficiency, and a more sustainable healthcare system. Further research should focus on empirical investigations conducted in health organizations such as service providers, which serve as the major hub for the healthcare supply chain.
Such empirical work would also address the limitations of this study.

Acknowledgments. The Norwegian Programme for Capacity Development in Higher Education and Research for Development (NORHED II), Project ID 68085, supported this research through the project “Enhancing Lean Practices in Supply Chains: Digitalization,” under the sub-theme “Politics and Economic Governance.” The project is a collaboration involving ITB (Indonesia), the University of Stavanger (Norway), and the University of Moratuwa (Sri Lanka).
References

1. Gupta, S., Gupta, P.: Digitization for reliable and efficient manufacturing. Life Cycle Reliabil. Saf. Eng. 7(4), 245–250 (2018). https://doi.org/10.1007/s41872-018-0051-y
2. Mangla, S., et al.: Barriers to effective circular supply chain management in a developing country context. Prod. Plan. Control 29, 551–569 (2018). https://doi.org/10.1080/09537287.2018.1449265
3. Wu, K.J., Tseng, M.L., Lim, M.K., Chiu, A.S.: Causal sustainable resource management model using a hierarchical structure and linguistic preferences. J. Clean. Prod. 229, 640–651 (2019). https://doi.org/10.1016/j.jclepro.2019.04.394
4. Ahi, P., Searcy, C.: Assessing sustainability in the supply chain: a triple bottom line approach. Appl. Math. Model. 39(10–11), 2882–2896 (2015). https://doi.org/10.1016/j.apm.2014.10.055
Role of Manufacturing Industry for Minimizing the Barriers to Circular Transition
5. McKone-Sweet, K.E., Hamilton, P., Willis, S.B.: The ailing healthcare supply chain: a prescription for change. J. Supply Chain Manag. 41(1), 4–17 (2005). https://doi.org/10.1111/j.1745-493X.2005.tb00180.x
6. Ghobakhloo, M.: Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 252, 119869 (2020). https://doi.org/10.1016/j.jclepro.2019.119869
7. Mangla, S.K., Govindan, K., Luthra, S.: Prioritizing the barriers to achieve sustainable consumption and production trends in supply chains using fuzzy Analytical Hierarchy Process. J. Clean. Prod. 151, 509–525 (2017). https://doi.org/10.1016/j.jclepro.2017.02.099
8. Ripanti, E., Tjahjono, B.: Unveiling the potentials of circular economy values in logistics and supply chain management. Int. J. Logist. Manag. 30, 723–742 (2019). https://doi.org/10.1108/IJLM-04-2018-0109
9. United Nations: The 2030 Agenda for Sustainable Development’s 17 Sustainable Development Goals (SDGs). UN, SDGs (2022)
10. van Straten, B., Dankelman, J., van der Eijk, A., Horeman, T.: A circular healthcare economy; a feasibility study to reduce surgical stainless steel waste. Sustain. Prod. Consum. 27, 169–175 (2021). https://doi.org/10.1016/j.spc.2020.10.030
11. van Boerdonk, P.J.M., Krikke, H.R., Lambrechts, W.: New business models in circular economy: a multiple case study into touch points creating customer values in health care. J. Clean. Prod. 282, 125375 (2021). https://doi.org/10.1016/j.jclepro.2020.125375
12. Kazançoğlu, Y., Sağnak, M., Lafcı, Ç., Luthra, S., Kumar, A., Taçoğlu, C.: Big data-enabled solutions framework to overcoming the barriers to circular economy initiatives in healthcare sector. Int. J. Environ. Res. Public Health 18(14), 7513 (2021). https://doi.org/10.3390/ijerph18147513
13. Govindan, K., Hasanagic, M.: A systematic review on drivers, barriers, and practices towards circular economy: a supply chain perspective. Int. J. Prod. Res. 56, 278–311 (2018). https://doi.org/10.1080/00207543.2017.1402141
14. Masi, D., Kumar, P.V., Garza-Reyes, J.A., Godsell, J.: Towards a more circular economy: exploring the awareness, practices, and barriers from a focal firm perspective. Prod. Plan. Control 29, 539–550 (2018). https://doi.org/10.1080/09537287.2018.1449246
15. Farooque, M., Zhang, A., Liu, Y.: Barriers to circular food supply chains in China. Supply Chain Manag. 24, 677–696 (2019). https://doi.org/10.1108/SCM-10-2018-0345
16. Paletta, A., Filho, W.L., Balogun, A.-L., Foschi, E., Bonoli, A.: Barriers and challenges to plastics valorisation in the context of a circular economy: case studies from Italy. J. Clean. Prod. 241, 118149 (2019). https://doi.org/10.1016/j.jclepro.2019.118149
17. Tsolakis, N., Goldsmith, A.T., Aivazidou, E., Kumar, M.: Microalgae-based circular supply chain configurations using Industry 4.0 technologies for pharmaceuticals. J. Clean. Prod. 395, 136397 (2023). https://doi.org/10.1016/j.jclepro.2023.136397
18. Sharma, H.B., Vanapalli, K.R., Samal, B., Cheela, V.R.S., Dubey, B.K., Bhattacharya, J.: Circular economy approach in solid waste management system to achieve UN-SDGs: solutions for post-COVID recovery. Sci. Total Environ. 800, 149605 (2021). https://doi.org/10.1016/j.scitotenv.2021.149605
19. Ellen MacArthur Foundation: Towards the Circular Economy, no. 8, pp. 26–29 (2013)
20. Bjørnbet, M.M., Skaar, C., Fet, A.M., Schulte, K.Ø.: Circular economy in manufacturing companies: a review of case study literature. J. Clean. Prod. 294, 126268 (2021). https://doi.org/10.1016/j.jclepro.2021.126268
21. Ozkan-Ozen, Y.D., Kazancoglu, Y., Mangla, S.K.: Synchronized barriers for circular supply chains in Industry 3.5/Industry 4.0 transition for sustainable resource management. Resour. Conserv. Recycl. 161, 104986 (2020). https://doi.org/10.1016/j.resconrec.2020.104986
22. Yunana, D., et al.: Developing Bayesian networks in managing the risk of Legionella colonisation of groundwater aeration systems. Water Res. 193, 116854 (2021). https://doi.org/10.1016/j.watres.2021.116854
23. Southeast Asian Geotechnical Society: ISO 9001 Quality management systems (2015)
24. Chauhan, A., Jakhar, S.K., Chauhan, C.: The interplay of circular economy with Industry 4.0 enabled smart city drivers of healthcare waste disposal. J. Clean. Prod. 279, 123854 (2021). https://doi.org/10.1016/j.jclepro.2020.123854
25. Ratnayake, R.M.C.: Translating sustainability concerns at plant level asset operations: industrial performance assessment. Int. J. Sustain. Strategic Manag. 3(4), 314 (2012). https://doi.org/10.1504/ijssm.2012.052655
26. Voudrias, E.A.: Healthcare waste management from the point of view of circular economy. Waste Manag. 75, 1–2 (2018). https://doi.org/10.1016/j.wasman.2018.04.020
27. Daú, G., Scavarda, A., Scavarda, L.F., Portugal, V.J.T.: The healthcare sustainable supply chain 4.0: the circular economy transition conceptual framework with the corporate social responsibility mirror. Sustainability 11(12), 3259 (2019). https://doi.org/10.3390/su11123259
28. Tseng, M.-L., Ha, H.M., Kuo-Jui, W., Xue, B.: Healthcare industry circular supply chain collaboration in Vietnam: vision and learning influences on connection in a circular supply chain and circularity business model. Int. J. Logist. Res. Appl. 25(4–5), 743–768 (2021). https://doi.org/10.1080/13675567.2021.1923671
29. Moreno, M., Ríos, C., Rowe, Z.O., Charnley, F.: A conceptual framework for circular design. Sustainability 8, 937 (2016). https://doi.org/10.3390/SU8090937
30. European Medical Device Regulation (EU MDR) (2021)
31. Nam, T., Holdings, L.: Nordic Ecolabelling, no. 182485 (2016). https://myanmar.unfpa.org/sites/default/files/pub-pdf/UNFPA_AnnualReport_2016.pdf
32. Nascimento, G., Araujo, C.A.S., Alves, L.A.: Corporate sustainability practices in accredited Brazilian hospitals: a degree-of-maturity assessment of the environmental dimension. Revista de Administração 52(1), 26–35 (2017). https://doi.org/10.1016/j.rausp.2016.10.001
Managing Performance in Technology-Enabled Elderly Care Services: The Role of Service Level Agreements in Modular Smart Service Ecosystems

Godfrey Mugurusi1(B), Anne Grethe Syversen1, Inge Hermanrud2, Martina Ortova1, Pankaj Khatiwada3, and Stian Underbekken4
1 Department of Industrial Economics and Technology Management in Gjøvik, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
2 Department of Organization, Management and Governance, Inland Norway University of Applied Sciences, Lillehammer, Norway
3 Department of Information Security and Communication Technology, Norwegian University of Science and Technology, Trondheim, Norway
4 IKOMM AS, Lillehammer, Norway
Abstract. Elderly care services are increasingly becoming more technology-supported due to changing socio-demographics globally. In this paper, we study the use of privately owned technology to deliver more personalized elderly care services. The introduction of technology into existing elderly care service models presupposes new forms of organizing these services, which in turn challenges the performance goals of elderly care services; yet little empirical research can be found on this issue. We examine the organizational changes that happen in elderly care service models when new technologies are introduced and how those changes affect the performance of those services. In addition, we explore how service level agreements can be used to align the performance of the different actors involved in the delivery of technology-enabled elderly care services. We argue that technologically driven changes in traditional elderly care service models have performance consequences (e.g., on quality of services) which influence the cost and resource sensitivity of the business models of most public organizations. To validate these arguments, the study draws empirics from an ongoing project on the development and deployment of ambient assisted living technologies in Lillehammer municipality in Norway.

Keywords: Service models · elderly care services · modularization · service level agreements · performance management · smart service ecosystems
© IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 497–514, 2023. https://doi.org/10.1007/978-3-031-43666-6_34
G. Mugurusi et al.
1 Introduction

Increasingly, smart and digital technologies are shaping value-creation processes in both service and manufacturing industries. Within the service sector, and for public sector services in particular, smart technologies have been widely adopted to achieve better user experiences and to minimize the cost impact of shrinking public sector budgets. In this paper, we set out to explore the adoption of smart technologies for elderly care services within the public sector context and examine how these technologies redefine the performance of those services.

The adoption of smart technologies in healthcare, and especially in elderly care services, has opened many opportunities for curative care and self-management for the elderly. This has, as a result, freed up resources for health systems to offer high-quality, efficient, and accessible healthcare. Chan et al. [1] claim smart wearable systems such as pulse-measuring wristbands, body temperature sensors, electronic patches, chest belts, and gastric pressure sensors support independent living for the elderly and enhance individual sportive or technical abilities. Dittmar et al. [2] describe the importance of ambulatory monitoring technologies such as microsensors, wrist devices, health-smart clothes, and health-smart homes not only in improving patients’ living conditions but also in reducing the costs of long hospitalization. McFarland et al. [3] maintain that in an environment where the healthcare burden outstrips healthcare resources and community provision, technology-enabled solutions will make healthcare more efficient and effective.

However, a balanced view of the performance effects of technology-enabled care on different actors is still lacking. In fact, several studies (e.g., [3, 4]) claim that not all forms of technology may result in improved care and health system performance. Both Vimarlund et al. [5] and Bennett et al. [4] point out that technology has no significant impact on the models of elderly care but should be seen as an enabler for communication and knowledge sharing among different service actors, which is crucial for creating an effective logistic system and a high quality of service (QoS). Carretero et al. [6] show that technology-enabled homecare services require new forms of organizing, which impact the scale of deployment and locus of control compared to traditional service models. It is these technology-induced changes in models of care that have not been deeply studied in the organizational sciences literature and that are of interest in this study. For example, Vimarlund et al. [5] show that when technologies are not adapted to existing models of care (e.g., work routines and habits), negative economic consequences should be expected among the organizations involved.

So, in this study, we set out to explore two research questions (RQs). In RQ1, we examine the organizational changes that happen in the elderly care service model when new technologies are introduced and how those changes affect the performance of that service. In RQ2, we explain how service level agreements (SLAs) can be used to align the performance of the different actors who deliver different service modules in technology-enabled elderly care services. Both RQs highlight the need for a better understanding of elderly care services offered by public authorities because of the gradual cost and resource sensitivity of their business models. Carretero et al. [6] note that the fear of the unknown additional cost burden that technology-enabled elderly care models advance compared to existing or traditional models is a worry for most public authorities.
The contribution of the paper is twofold. By answering these RQs, we test the argument made in the literature (e.g., [6]) that new emerging technologies in elderly care are positively changing the service models of the actors involved, which may well impact service performance and QoS in particular. In addition, we attempt to assess the significance of SLAs as a mechanism for managing the performance of services involving multiple actors, where it has long been argued (e.g., [7, 8]) that plurality in decision-making affects the global efficiency of healthcare systems. Because changes to digital ecosystems increase the complexity of workflows, SLAs can become the basis for maintaining and measuring QoS performance [9].

This paper is organized as follows. In the next section, we review the literature. Then a short description of the methods and materials is given. Thereafter we present the results and analysis. The last section offers discussion and conclusions.
2 Review of the Literature

2.1 Elderly Care Service Models

A service model is defined as the way an organization offers and/or delivers intangible value to its (potential) customers in a consistent manner [10]. Within the services management literature, service models for elderly care are still understudied, yet within the healthcare literature they appear well studied. This underwhelming focus on service models in elderly care can be explained by these services being delivered by state actors whose business models and revenue models are arguably oversimplified and often governed by public policy [11]. Baldissera & Camarinha-Matos [12] further underscore this point by distinguishing between professional business services, which are typically the focus of the service management literature, and the mechanisms of state-supported services in which long-term care lies. According to Béland & Hollander [13], North America has mainly two service model categories – community-based models and state/provincial models. They argue that irrespective of which model is adopted, some of the key factors to be considered are how care can be coordinated effectively across different types of services and how all the care provider organizations can be coordinated to ensure continuity of care for elderly persons. Rauch [14] studies the so-called Scandinavian social service model, which is characterized by comparatively high degrees of universalism and defamilialization. Woo [15] describes the elderly care service model in Hong Kong, where the service model is based on communal resources due to the lack of a well-developed primary care system. At a more specific level, both Low et al. [16] and Low & Fletcher [17] identified and studied four service models of elderly care from a user perspective.
First is the case management service model which is a collaborative process to meet a user’s health needs through the coordination of different actors and resources in a cost-effective way. This model is customized and resource intensive and often leads to miscoordination. The second type, integrated care, is a remedy for the first model. Here services are coordinated at a system level rather than focusing on individual users. Resources and structures to coordinate care across different levels are pooled together. Its downside is the focus on institutional linkages and less on individual care [16]. The third service model is consumer-directed care which involves an interactive exchange between the user of the
service and the caregiver. The user exercises their agency of choice, but the model has the limitation of transferring cost toward the consumer. The fourth and final one, the restorative care model, addresses the user’s functional ability to be an independent service user. While the restorative service model has better QoS outcomes for users, Low & Fletcher [17] cite challenges associated with the high cost of this service, since the consumer uses more auxiliary services relative to other models.

2.2 Technology-Enabled Elderly Care and Changing Service Models

All the service models in the literature have a relationship with technology, where IT infrastructure components are used to provide the functionality of the service or the interaction of the actors involved in delivering the service to the user [12]. The concept of Ambient Assisted Living (AAL) technologies is a result of this, where different technologies enable the elderly to tackle problems in ways for which they would otherwise need help from another person, family, or external systems such as public or special systems [18]. AAL-based services are offered through platforms where several actors, including users, family, caregivers, solution providers, and the authorities, are all involved in the co-production of that service [19]. AAL technologies are often interconnected through Internet of Things (IoT) infrastructure that integrates public e-health systems [20]. In Table 1, we present a summary of AAL technologies that we found in the literature. Given the collaborative nature of AAL or the technology-enabled service model, Baldissera & Camarinha-Matos [12] proposed the concept of the “Elderly Care Ecosystem (ECE)”, which they define as the system that supports the creation, management, and analysis of virtual organizations to attend to customers’ needs.
According to Baldissera et al. [8], an ECE has several actors, namely the seniors (customers) and their care needs, services, and service provider entities, among others, that are mediated by technology or technologies. Both Baldissera & Camarinha-Matos [12] and Baldissera et al. [8] argue that ECEs help to integrate different service domains and service expectations, which eases support coordination and hence yields better QoS outcomes and performance. So, is technology changing existing elderly service models – and why and how? Mostaghel [24] studied the roles of different actors involved in elderly care and found that families used these technologies to connect more easily with their relatives; healthcare providers used welfare technology to improve their work; technology developers saw technology changes as a new market opportunity; and government actors saw this as a possibility for better monitoring in elderly care. Johansson-Pajala & Gustafsson’s [25] study on social actors’ attitudes toward introducing care robots in elderly care service models identifies ethical reflections, collaboration, and lack of knowledge among its findings, and also the lack of national governance and infrastructure among the hurdles that must be overcome for successful implementation. They conclude that it is more difficult to implement assistive robots than other types of welfare technology, and that it demands major changes at many levels in society to build acceptance for care robots as assistance for elderly people and their caregivers.
Table 1. Notable technologies in elderly homecare services

Technology and descriptions [21–23] | Examples | Suppliers
Wearable medical sensors to monitor heart rate, blood pressure, oxygen levels, activity levels, etc. | Smartwatches, fitness trackers, wearable patches, etc. | Apple Watch, Fitbit, Garmin
Environmental sensors to monitor temperature, humidity, air quality, and the presence of smoke or gas leaks, etc. | Smart thermostats, air quality monitors, smoke detectors, etc. | Airthings, Honeywell, Develco
Actuators that can control certain functions in the home, e.g., lights, thermostats, or doors | Smart light bulbs, smart thermostats, smart locks | Philips Hue, Nest, August
Wi-Fi or cellular connectivity, mobile apps, telehealth platforms, and other communication tools for real-time monitoring, remote consultations, and health-related notifications | Video calling apps, telehealth platforms, mobile health apps, etc. | Dignio, Teladoc, MyChart
AI & ML technologies that analyze data collected from sensors and wearable devices to identify patterns, trends, and anomalies in health conditions | ML algorithms for health monitoring, AI-powered analytics platforms, etc. | Merative, Sensogram, EarlySense
Fall detection and emergency response systems using sensors or wearable devices to detect falls and automatically trigger emergency alerts | Fall detection sensors, emergency call buttons, wearable panic buttons, etc. | Philips Lifeline, Medical Guardian, MobileHelp
Medication management systems for medication schedules, reminders, and dispensing for dementia patients | Automated pill dispensers, medication reminder apps, smart pill bottles, etc. | MedMinder, TabSafe, Hero
Home automation and voice-assisted technologies for voice commands or automated routines to control various aspects of the home for the elderly | Voice assistants (e.g., Amazon Alexa, Google Assistant), smart home hubs, smart plugs, etc. | Amazon Echo, Google Nest, Samsung SmartThings
Remote monitoring and telehealth solutions for remote monitoring of health conditions, virtual consultations, and remote care management | Remote patient monitoring systems, telehealth platforms, virtual care apps, etc. | Philips Telehealth, TytoCare, Medtronic CareLink
Data privacy and security technologies for the protection and security of data collected from sensors, wearable devices, and other sources in smart homes | Encryption and authentication mechanisms, data privacy frameworks, cybersecurity protocols, etc. | Irdeto, Imprivata, Medigate, CyberMDX
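The monitoring technologies listed in Table 1 typically feed sensor readings into simple rule- or model-based checks before any alert reaches a caregiver. The sketch below shows one minimal pattern – a threshold check over wearable-sensor readings. The field names and ranges are invented for illustration only and are not clinical guidance.

```python
# Hypothetical illustration: flagging out-of-range wearable-sensor readings.
# Field names and "normal" ranges below are invented for the example.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (92, 100),
}

def flag_anomalies(reading: dict) -> list[str]:
    """Return the metrics in a reading that fall outside their normal range."""
    flags = []
    for metric, (low, high) in NORMAL_RANGES.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            flags.append(metric)
    return flags

# A reading with an elevated heart rate but normal oxygen saturation:
reading = {"heart_rate_bpm": 124, "spo2_percent": 96}
print(flag_anomalies(reading))  # ['heart_rate_bpm']
```

In a real AAL platform such rules would be one small module among many, alongside the ML-based anomaly detection and emergency-response systems the table mentions.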
2.3 SLAs and Performance in Technology-Enabled Elderly Care Services

In Sects. 2.1 and 2.2, we attempted to highlight the different service models in elderly care and to review the changes due to technology-enabled service models. Obvious tensions and performance gaps will arise. In this section, we review the literature on the gaps in ECE environments, how they affect service performance, and how SLAs can contribute to better alignment amidst technology heterogeneity [26]. Firstly, the performance of elderly care services can be seen from multiple angles. The ECE concept shows that the performance of such services is a function of a series of actors, including users’ families, technology developers, specialized home care agencies, caregivers, etc. [12]. Hirdes et al. [27] maintain that home care clients are the most important actor influencing QoS, based on how they (or their carers) perceive the service, or the level of autonomy the user expects of such a service. So QoS is a critical measure of the performance of elderly care. In the service marketing literature, QoS refers to a measure of how an organization delivers its services compared to the expectations of the users of the service [28]. In the gerontology literature, QoS is defined by Campbell et al. [29] as whether the individual has access to structures and processes of care and whether the care received is effective. Access represents availability, where the needs of the user are met, while effectiveness is the extent to which care is delivered so as to improve the health outcomes of individuals [30]. From the perspective of users, Cleland et al. [31] propose that QoS be measured by the quality-of-care experience, but mostly using clinical indicators of care quality. Sandhu et al. [32] offered some metrics for measuring QoS, including general satisfaction by the
user, whether social services check that users are satisfied with their service, whether care workers come to visit at suitable times, and whether changes asked for in the help users receive are made. These indicators are similar to those in the services management literature, particularly the GAP model of Parasuraman et al. [33]. They include tangible aspects (e.g., equipment and personnel), reliability of the service, responsiveness, assurance (e.g., competence and politeness of the personnel), and empathy (e.g., personalized assistance) [34]. The idea behind the GAP model is the discrepancies that arise when the service offered does not meet user expectations [33, 34]. Some studies (e.g., [35]) have proposed that service level agreements (SLAs) could be a mechanism to assess QoS by measuring the collaboration of actors in ECE environments. SLAs are defined by Zumkeller [36] as contractual agreements between a service provider and a service recipient regarding the content and quality of the service. Weyns & Höst [35] give an example of using SLAs to manage dependability and to standardize municipal IT services. SLAs could be a basis for communicating the different roles and needs of different partners in a service ecosystem [35]. Because of the different actors involved and their service modules, Rana et al. [37] show that service provisioning using SLAs is exceedingly difficult because of the multiple resources involved in the execution of one task or process. As a result, collaborations carry major risks of undetected failures and breaches of trust or contractual state, which we can classify as “value leakages” [38].
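To make the idea concrete, an SLA clause of the kind discussed above can be represented as a measurable target that is checked against observed QoS metrics each reporting period. The sketch below is a minimal, hypothetical illustration; the metric names and targets are invented for the example and are not drawn from any real municipal SLA.

```python
from dataclasses import dataclass

# Hypothetical sketch: an SLA clause as a measurable target. All names
# and numbers below are invented for illustration.
@dataclass
class SlaClause:
    metric: str                     # e.g. "door_unlock_success_rate"
    target: float                   # agreed service level
    higher_is_better: bool = True   # direction of the target

    def breached(self, observed: float) -> bool:
        if self.higher_is_better:
            return observed < self.target
        return observed > self.target

def evaluate_sla(clauses, observed_metrics):
    """Return the clauses breached in a reporting period."""
    return [c for c in clauses
            if c.metric in observed_metrics
            and c.breached(observed_metrics[c.metric])]

clauses = [
    SlaClause("door_unlock_success_rate", 0.999),
    SlaClause("alarm_response_minutes", 15.0, higher_is_better=False),
]
observed = {"door_unlock_success_rate": 0.995, "alarm_response_minutes": 12.0}
breaches = evaluate_sla(clauses, observed)
print([c.metric for c in breaches])  # ['door_unlock_success_rate']
```

Making each clause explicit and machine-checkable in this way is one route to detecting the failures and "value leakages" that Rana et al. [37] warn multi-actor SLA provisioning is prone to.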
3 Methods and Materials

This is an explorative case study of digitally enabled elderly services in Norway. The case is a project called “Internet of my things (ioMt)” in Lillehammer municipality in Norway, which aims to offer personalized care services through the use of existing privately owned technologies. Part of the results of this case have been published in Mugurusi et al. [38], where the actors involved in developing the project’s proof of concept (POC) for the smart digital service are described. The main source of data was focus group discussions (FGDs) organized as ioMt project meetings and workshops every quarter between 2020 and 2023 (refer to Table 2). The FGDs involved several firms and actors, including technology developers, a platform owner, representatives of caregivers, third-party technology vendors, the home-access installation firm, and the municipality as service owner. In addition, we used survey data on homecare personnel in a municipality other than the one in which the project is set. Three of the authors were directly involved in the project and participated in the FGD meetings where the data for the paper were collected.

3.1 A Case Description of the Digital Home-Access Solution in the ioMt Project

The ioMt project is a research project that addresses how users of municipal homecare services could use their privately owned technologies within municipal homecare infrastructure. The goal of the project is to develop a POC of a digital door-lock solution as a pilot for user-owned technology to allow care nurses to access elderly inhabitants’ homes in the municipality of Lillehammer to deliver home care services. The development of a
Table 2. Focus group discussion (FGD) meetings

Eight FGDs were held in Q3/2020, Q4/2020, Q1/2021, Q2/2021, Q3/2021, Q2/2022, Q3/2022, and Q1/2023. In varying constellations, they involved the service owner (the municipality), the platform owner, third-party IT vendors, and the key access installation firm.

* Data on service users and home care givers were collected by a survey at Elverum municipality.
digital door-lock solution aims to replace the obtrusive manual keys and key boxes. The idea is to develop a robust IoT enterprise architecture to which several welfare technologies can be added to give users autonomy and the agency to decide on their quality of life. The digital door-lock technology is an example where users will have the agency to decide how the municipality, visitors, family, courier firms, etc. can access their home. Currently, the municipality accesses user homes using traditional keys that are kept in a municipality-mounted keybox on the user’s property. When the user applies and is approved for care services, the municipality purchases and installs a keybox and makes a copy of the keys that are kept in the box. The documentation and processes of key handling are extremely time-consuming and inefficient. The cost of purchasing and installing the key boxes is a concern. Moreover, users perceive the keys and keybox system as obtrusive and as offering no privacy. The municipality believes there are security risks that could have legal consequences. So, through the ioMt project, the municipality sought to digitalize the key handling procedures through a user-owned digital door-lock solution. The municipality would free up a lot of resources (e.g., money), give caregivers more time to focus on actual care work, and give users the agency to manage home-access rights for both scheduled (e.g., homecare) and unplanned visits (e.g., emergency care when safety alarms go off). Aside from these benefits, the new digital door-lock solution necessitates significant changes in the current elderly-care service model, which is at the core of this paper. Initially, the users ranked efficiency, trust, reliability, and data privacy as the most
important requirements for the proposed digital door-lock solution. In addition, the user needs to trust that the digital technology, and the data that are shared with the municipality’s homecare service, cater to the user’s individual privacy preferences. The municipality, in turn, needs to trust that the door solution will operate without fail 100% of the time.
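The user-controlled, time-limited access described above can be sketched in code. The class below is a hypothetical illustration of the core idea – the resident grants a caregiver an unlock window tied to a scheduled visit; the API names are invented for the example and may differ from the POC's actual interfaces.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of user-controlled, time-limited home-access grants
# for a digital door lock. All names are invented for illustration.
class DoorLock:
    def __init__(self):
        self._grants = {}  # caregiver_id -> (window start, window end)

    def grant_access(self, caregiver_id: str, start: datetime,
                     duration: timedelta) -> None:
        """The resident (or their delegate) grants a scheduled-visit window."""
        self._grants[caregiver_id] = (start, start + duration)

    def revoke_access(self, caregiver_id: str) -> None:
        """The resident can withdraw a grant at any time."""
        self._grants.pop(caregiver_id, None)

    def may_unlock(self, caregiver_id: str, at: datetime) -> bool:
        """Unlock succeeds only inside the granted window."""
        window = self._grants.get(caregiver_id)
        return window is not None and window[0] <= at <= window[1]

lock = DoorLock()
visit = datetime(2023, 5, 2, 9, 0)
lock.grant_access("nurse-17", visit, timedelta(hours=1))
print(lock.may_unlock("nurse-17", datetime(2023, 5, 2, 9, 30)))  # True
print(lock.may_unlock("nurse-17", datetime(2023, 5, 2, 11, 0)))  # False
```

Even this toy version shows why the reliability requirement matters: every unlock decision depends on the grant data being available, so an outage of the platform directly blocks care delivery.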
4 Results and Analysis

The results show that the traditional elderly care service has often been bundled into one service that involves several actors such as the user, their family, their general practitioner (GP), volunteers, private non-profit foundations, private commercial enterprises, and public bodies. The Norwegian service model puts a public body at the center of caring for the elderly, often in cooperation with the family. The care is organizationally assigned to the budget of the municipality and regulated by the Health and Care Services Act. Elderly care is financed mostly from public sources; only a few municipalities, such as Oslo, have some privately owned care homes [39]. In this case study, we focused on the municipality as the owner of the service. In Fig. 1, we show the chained service model of typical elderly care services in Norway. The user, possibly with help from their GP, the family, or the hospital, applies to the municipality for the homecare service. Before the application is granted, many internal business processes are involved, the most important of which is mapping the actual needs of the user (e.g., rehabilitative, therapeutic, and assistive home healthcare) and matching them with the available resources. The needs in the traditional service model are usually divided into two categories – home health nursing and support contact. According to Holm et al. [40], home health nursing services are more comprehensive and are administered to people who require short- or long-term care as a result of illness, impaired health, old age, or other factors, while support contact services primarily help users engage meaningfully in society and are provided to individuals and families who, as a result of disability, age, or mental health problems, require personal assistance to avoid isolation or to live a socially active life.
Fig. 1. Components of the technology-enabled service model.
In the case of Lillehammer municipality, the ioMt project pilots the digital door-lock solution to replace the obtrusive key box at the customer's/service user's home. The municipality believes this will reduce the cost of old-age care and the administrative time of homecare professionals. On average, a home-care nurse makes 20 to 23 visits a day,
which comes with complex logistics associated with movement from home to home. The proposed home-access management solution in the project is intended to reduce the complexity of managing keys. The keys to the different user homes are managed in a pool, assigned in bunches per shift and per care professional. Aside from the inefficiency of managing the physical key pool among hundreds of caregivers, the challenges with this key and key-box system are many. Therefore, the digital door-lock solution that is connected to electronic patient records is an efficiency game changer. All service actors, including the users, families, social service institutions, and operators, are connected through an app and a common IoT platform, making full use of resources and improving QoS for the elderly. Table 3 presents more specific advantages of the home-access management solution, suggesting that the shift from the traditional, all-in-one service model to a technology-enabled service has its own advantages. Bundles of AAL technologies can be used to reduce administrative costs and free up time and care resources while giving elderly users autonomy in their everyday lives. Currently, home care nurses spend 32% of their time on the job on direct patient work, and 68% on non-direct activities such as driving, administrative reporting, and documentation, according to Helgheim & Sandbaek [41]. In the proposed home-access solution discussed in the case study, the service model comprises a service bundle of both the physical service component (e.g., drug dispensing, goods delivery, hygiene, house cleaning, grocery shopping, rehabilitation, etc.) and the technology component, where care tasks are tightly interwoven with the logistics processes, making it difficult to untangle the service from the technology used to support it. Table 3. Advantages of the digital door lock solution in a technology-enabled service vs.
the traditional service model

Home access-management challenges with the traditional key and key-box system:
- User: an obtrusive key box; limited privacy and control of who comes in and goes out of the home; a lot of administrative paperwork
- Municipality: relatively high procurement, installation, and maintenance costs; time-consuming and resource-demanding; complex and high decision-making power conceded to the municipality

Home access-management system using the digital door lock solution:
- User: not locked into one vendor of the digital door lock; ease of giving and withdrawing access; user owns and controls their data and consent
- Municipality: relatively lower total life-cycle cost; ease of integration with many other home-care technologies; seamless access management for the healthcare professionals
The positive performance changes due to the shift to technology-enabled care (using the home access solution case) show that home care professionals become more efficient due to better workflow, better asset tracking, and lower transactional costs at the organizational level, while the service users experience a better QoS due to fewer mistakes and errors by caregivers. The changes towards technology-enabled elderly care challenge the current service model mainly because of the structure of the service itself, which has two mechanisms of workflow: the persons and routines involved in delivering the service on one hand, and the way they use the technology on the other. Suboptimal performance of one service module affects the other. The case data in the home access solution, summarized in Table 4, shows some tensions and how these two modules must be aligned. The challenges appear to be driven by the mismatch in the competencies users and homecare professionals have (or perceive to have) in relation to the new technology. While the homecare professionals believe the home access solution will reduce their administrative burdens, they have concerns about how much time and knowledge is required to learn the new routines. The users, on the other hand, have high expectations of the technology, aligned with the Norwegian Directorate of e-Health [42] indicators on healthcare services technology. For example, in the home access solution, the users want a wholesome service that is reliable and does not breach their privacy. The tensions and areas of alignment are presented in Table 4. In Norway, technology to support health care services ought to be designed and developed to meet the following indicators, according to the Norwegian Directorate of e-Health [42]: it has to (1) be effective, (2) be safe and secure, (3) involve the users and give them influence, (4) be coordinated and characterized by continuity, (5) utilize resources in a good way, and (6) be accessible and distributed.
We contrasted those indicators with the goals of the ioMt's home access solution, and the findings show some areas where the two service modules are generally aligned, while two areas are not. Table 4 specifically shows that performance gaps emerge because of a lack of alignment in service modules I and II. We explore further whether SLAs are a better tool for aligning the tensions between service module I and service module II, specifically for indicators 1 on "effectiveness" and 6 on "accessibility". In service module I (the physically delivered service module), the users expect that the digitally enabled home-access solution makes living at home easier, reduces time spent looking for physical keys, and gives users autonomy. In the ioMt project, autonomy was conceptualized as users actively giving and withdrawing privacy consent within the digital door lock solution. Within service module II (the technology module), effectiveness was interpreted as system efficiency and reliability. The expectation of the third-party technology vendors and developers was that the home-access solution would work 100% of the time, but service module I challenges such as low batteries and low user/caregiver technology skills could affect the expected service level, hence lowering QoS. Also, technology-related challenges (in service module II) such as bandwidth, throughput capacity, cyber-security breaches, etc. could not guarantee a 100% service level. On indicator 6 (i.e., accessible and distributed), the users wish that the digital door lock solution could integrate with other infrastructure in their home, such as personal
Table 4. Tensions and areas of alignment among the QoS indicators

1) Effective
   Service module I (the physically delivered service part): makes living at home easier, reduces time spent, and has a low system error rate
   Service module II (the technologically enabled part): system efficiency and reliability
   Assessment: tension

2) Safe and secure
   Module I: feels safe, secure, and in line with current legislation
   Module II: user needs and privacy
   Assessment: aligned

3) Involves the users and gives them influence
   Module I: users can give their consent and adjust access to door lock services according to their needs
   Module II: user participation and autonomy
   Assessment: aligned

4) Is coordinated and characterized by continuity
   Module I: can be used by all kinds of caretakers and next of kin
   Module II: system or platform robustness
   Assessment: aligned

5) Utilizes resources in a good way
   Module I: uses existing technology and infrastructure at home and in the home care service
   Module II: existing services integration
   Assessment: aligned

6) Accessible and distributed
   Module I: builds on ordinary infrastructure in ordinary homes; no need for training
   Module II: integrates with other enterprise architecture (interoperability)
   Assessment: tension
wearables, smoke alarms, air quality sensors, and gas leak sensors. In the technology module, this indicator was interpreted as integration with other enterprise architecture. The ioMt project has addressed this aspect as interoperability, where the challenges of emerging new platform-based technologies make integration extremely difficult. The healthcare platform in Norway has different APIs than the current ioMt platform, which makes it difficult to technically integrate users' data from the hospital side into the municipality-coordinated elderly care social services. Mugurusi et al. [38] have in part addressed issues of socio-technical interoperability in the ioMt project.
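The API mismatch described above can be made concrete with a small sketch. The Python fragment below is our own illustration, not part of the ioMt project: all field names (`patientId`, `user_ref`, `homeAccess`, etc.) are invented, and real health platforms typically exchange data via standardized models such as HL7 FHIR. It shows the kind of adapter code needed when two platforms expose the same user data under different schemas.

```python
# Hypothetical adapter between two platforms with different API schemas.
# Every field name here is invented for illustration purposes only.

def hospital_to_municipal(record: dict) -> dict:
    """Translate a hypothetical hospital-platform record into the shape a
    hypothetical municipal home-care platform expects."""
    return {
        "user_ref": record["patientId"],  # same datum, different key name
        "door_access": record.get("homeAccess", {}).get("digitalLock", False),
        "consent_given": record.get("consents", {}).get("homecare", False),
    }

hospital_record = {
    "patientId": "u-1042",
    "homeAccess": {"digitalLock": True},
    "consents": {"homecare": True, "research": False},
}

print(hospital_to_municipal(hospital_record))
# {'user_ref': 'u-1042', 'door_access': True, 'consent_given': True}
```

Each such adapter must be written, tested, and maintained per pair of platforms, which is one reason integration effort grows quickly as the ecosystem adds actors.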
5 Discussion, Conclusion, and Implications

Returning to RQ1, where we sought to examine the changes in the current eldercare service model as a result of new technology adoption, or the transition to the "technology-enabled service model", there are two important results to discuss further. First, we found that technology positively augments some, but not necessarily all, mechanisms of the delivery of a service, and this largely depends on the state of the user. Øydgard [43] in part describes the Norwegian elderly care system, which uses a system called the care ladder, i.e., "omsorgstrappen", to characterize and classify user needs based on their morbidity, resource needs, and the nature of specialized services needed. On the lower end of the scale are fairly independent users who need only basic support, while on the top end of the scale are users who need intensive and highly specialized care. This study demonstrates that different types of technologies enable different types of service outcomes. The digital door lock solution described in this paper freed up a lot of resources for the municipality, simplified caregivers' routines, increased workflow coordination, and, most importantly, gave users more autonomy to decide the service level needed. Bennett et al. [4] maintain that technologies do not remove the need for elderly care services; rather, they enable both care workers and service users to contribute to a coordinated care service model. Mostaghel [24] sees the role of welfare technology as improving care workers' routines and even changing the content of work. Second, we found that the technology-enabled modularization of elderly care service packages makes it easy to customize and personalize service to the needs of different users in the care ladder.
We demonstrated how the transition from the traditional key handling routines to a digital home-access solution frees up the time of caregivers and gives the municipality better efficiency in the coordination of care. According to Carlborg & Kindström [44], service modularization involves the separation of an object into components (modules), where it is possible to combine these components into customizable offerings (sometimes referred to as bundling). In the smart services ecosystem, modularity provides technology developers the foundation for further service innovation and avoids the re-invention of existing modules [24]. In addition, the modularization of elderly care services reduces healthcare costs and improves patient-centered services (less process complexity) and flexibility, due to bundling different service modules to create a unique service offering [44]. However, because of modularity in smart ecosystem environments, elderly care services tend to become too complex to organize, which has a performance effect on the service encounter process and user perceptions of utility. In our findings, we demonstrate that users, technology developers, and municipal agencies see the elderly care service as a process consisting of a combination of physical and non-physical modules that are integrated into various customer-specific configurations according to the care ladder. In reality, the municipality as the service owner has challenges engaging with the different ecosystem actors, each trying to optimize their own module or task. This affects the quality of the service (QoS). We characterized these challenges in Table 4 as performance tensions between the delivery of the technical module versus the physical module. The two tensions in Table 4 were assessed using the welfare technology standards of the Norwegian Directorate of e-Health [42]. The first tension was about effectiveness, where both modules are codependent on each other. The technical module must be reliable
and efficient at the system level to ensure that the system is devoid of errors, while users should simultaneously feel their lives are much better at home. But because such a goal for the system is not foolproof (e.g., due to interoperability problems), there are a lot of resource demands on the service. This reduces the efficiency of the system, which negatively affects QoS. This paradox can, however, be resolved by creating routines and practices to balance the tensions in the two modules. The other tension was that of accessibility and the interoperability of the technology-enabled service. Users demand that their service is a part of other auxiliary services in their homes or daily lives, while the technical module struggles to integrate with legacy technology platforms. This is a tension between what the user needs and what the service owner is willing to offer. As a result, this affects how users perceive the QoS. This is especially true for elderly users, who are often less able and less eager to make use of new products and services in general. Mugurusi et al. [38] claimed that attaining full-scale social and technological interoperability in projects involving the design and deployment of smart services is near impossible in ecosystem environments, because different service actors deliver their modules on different platforms, which tends to affect service integration. To resolve the impact of these tensions on QoS, SLAs have been proposed. SLAs are particularly relevant for smart service ecosystems because technology-enabled homecare services involve many actors and their respective platforms. The complexity increases because the chain of care involves technology owners and third parties who act as intermediaries in the delivery of the non-physical service module through non-technology actors (nurses, physicians, professional caregivers, etc.). The complexity in the chain of care has a performance impact on users' quality of life and QoS [38]. In Fig.
2 we show how SLAs can be used to address performance challenges of service models in smart service ecosystems. We have argued that traditional elderly care service models place a significant performance burden on the service owner, and little on the users of the service, which is consistent with [5, 40, 41]. Emerging care models (e.g., [4, 15, 16]) show that service users become empowered when they have tools that give them autonomy in their care. Technology is seen as such a tool, hence the emerging technology-enabled service models [5]. Technology-enabled service models give autonomy to users and involve users in the service co-creation processes, which has a direct and positive impact on QoS. However, there are two important challenges of technology-enabled service models: (a) if the service model is based on technology modules delivered by third parties, then many performance tensions should be expected. Neuhuettler et al. [34] show that QoS of smart services is a function of the physically delivered service, the digital service, and the technology behind it. Sandhu et al. [32] conclude that user perceptions of the QoS depend on the entire care package, including both the physical service module and the technology/non-physical service module. This case study shows that the Norwegian care system has attempted to reduce performance tensions using the care ladder, in which user needs are classified by level of complexity, showing the benefits of service modularization [43]. (b) if the chain of care is spread across many actors, then performance tensions should be expected too. Mugurusi et al. [38] showed that some of the performance tensions are a result of the socio-technical paradoxes that are natural in smart service ecosystems. Baldissera et al. [8] suggest that elderly care ecosystems have structural boundaries that affect service transitions but
mostly exist due to the differences in organizational policies and technical architectures of individual vendors, hence the negative impact on QoS. Figure 2 shows examples of service transitions between different actors: from the service owners to the platform owners, from platform owners to third-party actors, etc.
Fig. 2. Service level agreements in a technology-enabled service model
In this paper, therefore, we conclude that SLAs, which have traditionally been used between two actors, can form a basis for standardizing service delivery in an ecosystem environment. Figure 2 concurs with Hein et al. [45] that while value co-creation usually occurs across several actors with different roles in the delivery of the service, the service owners usually have the power to orchestrate technological interactions using SLAs. SLA1 is the main contract that communicates the expectations of the service between the user and the service owner; SLA2 is a sub-contract that communicates the expectations between the service owner and the technology platform owner; SLA3 through SLAn are sub-contracts between the technology platform owner and other third-party vendors. According to Hein et al. [45], SLAs establish governance mechanisms that define the ground rules for orchestrating interactions in the ecosystem. Weyns and Höst [35] note that SLAs help service owners standardize IT services that are purchased outside the organization. In sum, and to answer RQ2, SLAs can be used to align the performance of the different actors who deliver different service modules in technology-enabled elderly care services. The ability to bring together and involve the different technology providers in an ecosystem environment may be useful for service owners, and meeting user expectations of the technological solutions is itself a performance measure of QoS. The next step for the research is to develop an SLA template for specific QoS indicators (e.g., service reliability, throughput capacity, service availability, etc.) factoring in the autonomies of the agents, their incentives, and how the ecosystem co-creates value.
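The chain of agreements from SLA1 down to SLAn can be illustrated with a small, hypothetical calculation. The sketch below is our own illustration, not from the paper: the actor names and availability figures are invented. It models nested SLAs and shows why a service owner cannot credibly promise a 100% service level when the guarantee depends on sub-contracted modules that must all hold simultaneously.

```python
# Hypothetical model of a chain of SLAs (SLA1 -> SLA2 -> SLA3).
# Availability figures are invented; if all dependent modules must work
# for the service to work, the guarantees multiply along the chain.
from dataclasses import dataclass, field

@dataclass
class SLA:
    provider: str
    consumer: str
    availability: float                       # guaranteed uptime fraction
    depends_on: list["SLA"] = field(default_factory=list)

    def end_to_end_availability(self) -> float:
        a = self.availability
        for sub in self.depends_on:
            a *= sub.end_to_end_availability()
        return a

sla3 = SLA("lock vendor", "platform owner", availability=0.990)
sla2 = SLA("platform owner", "municipality", availability=0.995, depends_on=[sla3])
sla1 = SLA("municipality", "user", availability=1.0, depends_on=[sla2])

print(round(sla1.end_to_end_availability(), 3))  # prints 0.985
```

Even with individually strong sub-SLAs (99% and 99.5% here), the end-to-end guarantee the service owner can pass on to the user is only about 98.5%, which is one quantitative reason the "work 100% of the time" expectation discussed earlier cannot be contractually met.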
The findings of this case study, based on the development and deployment of the digital home access solution in Lillehammer municipality, imply a possible shift in focus from the public sector choosing the welfare technology to the user choosing and making use of his or her technology on a municipality-owned platform. However, as our identified tensions imply, elderly care services provided within a smart service ecosystem may increase service efficiency and user customization, but the
complexity will affect QoS. The cost to service owners will increase significantly due to the tensions among different service agents and technological systems, irrespective of the robustness offered by SLAs. Another aspect of service modularization, viewing citizens as "consumers" with the autonomy and freedom to choose between public and private services, is a scenario for future elderly care. However, this autonomy must be balanced with the user's physical and mental condition. We must avoid new performance gaps in elderly care occurring due to the technology, i.e., when the user is no longer able to control access themselves. Privacy and ease of giving/withdrawing consent by elderly users are a core part of the ioMt project. Finally, we proposed that standardization of SLAs can be a tool to align expectations among the different actors in the elderly care service ecosystem; however, service owners must be aware of the risk that SLAs build unrealistic expectations of the role of technology. Besides the need to limit the possibility of unintended consequences of overreliance on technology, service owners must invest in developing robust QoS indicators for elderly care and understand their strengths and limitations. Gaps in the literature appear to show the need to align QoS with related constructs such as quality of experience (QoE), quality of business (QoB), and specifically quality of care (QoC).

Acknowledgment. This paper is a result of the ongoing work on the ioMt project (https://www.internetofmythings.no/) funded by the Regional Research Fund (RFF) Innlandet of Norway. Special thanks to the RFF Innlandet and the project consortium including IKOMM AS, Eidsiva Bredbånd, KeyFree AS, Safe4 Security Group AS, HelseInn, Lillehammer municipality, NTNU and Høgskolen i Innlandet.
References

1. Chan, M., Estève, D., Fourniols, J.Y., Escriba, C., Campo, E.: Smart wearable systems: current status and future challenges. Artif. Intell. Med. 56(3), 137–156 (2012)
2. Dittmar, A., Axisa, F., Delhomme, G., Gehin, C.: New concepts and technologies in home care and ambulatory monitoring. Stud. Health Technol. Inf. 108, 9–35 (2004)
3. McFarland, S., Coufopolous, A., Lycett, D.: The effect of telehealth versus usual care for home-care patients with long-term conditions: a systematic review, meta-analysis, and qualitative synthesis. J. Telemed. Telecare 27(2), 69–87 (2021)
4. Bennett, L., Honeyman, M., Bottery, S.: New models of home care. The King's Fund, NY (2018)
5. Vimarlund, V., Olve, N.G., Scandurra, I., Koch, S.: Organizational effects of information and communication technology (ICT) in elderly homecare: a case study. Health Inf. J. 14(3), 195–210 (2008)
6. Carretero, S., et al.: Can technology-based services support long-term care challenges in home care? Analysis of evidence from social innovation good practices across the EU. CARICT Project Summary Report, 15 (2012)
7. Lamine, E., Zefouni, S., Bastide, R., Pingaud, H.: A system architecture supporting the agile coordination of homecare services. In: 11th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2010, St. Etienne, France, 11–13 October 2010, pp. 227–234 (2010)
8. Baldissera, T.A., Camarinha-Matos, L.M.: Towards a collaborative business ecosystem for elderly care. In: Camarinha-Matos, L.M., Falcão, A.J., Vafaei, N., Najdi, S. (eds.) DoCEIS 2016. IAICT, vol. 470, pp. 24–34. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-31165-4_3
9. Staifi, N., Belguidoum, M.: Adapted smart home services based on smart contracts and service level agreements. Concurr. Comput. Pract. Exp. 33(23), e6208 (2021)
10. Garschhammer, M., et al.: Towards generic service management concepts, a service model-based approach. In: 2001 IEEE/IFIP International Symposium on Integrated Network Management Proceedings, pp. 719–732 (2001)
11. Szebehely, M., Trydegård, G.B.: Home care for older people in Sweden: a universal model in transition. Health Soc. Care Commun. 20(3), 300–309 (2012)
12. Baldissera, T.A., Camarinha-Matos, L.M.: SCoPE: service composition and personalization environment. Appl. Sci. 8(11), 2297 (2018)
13. Béland, F., Hollander, M.J.: Integrated models of care delivery for the frail elderly: international perspectives. Gac. Sanit. 25, 138–146 (2011)
14. Rauch, D.: Is there really a Scandinavian social service model? A comparison of childcare and elderly care in six European countries. Acta Sociologica 50(3), 249–269 (2007)
15. Woo, J.: Development of elderly care services in Hong Kong: challenges and creative solutions. Clin. Med. 7(6), 548 (2007)
16. Low, L.F., Yap, M., Brodaty, H.: A systematic review of different models of home and community care services for older persons. BMC Health Serv. Res. 11, 1–15 (2011)
17. Low, L.F., Fletcher, J.: Models of home care services for persons with dementia: a narrative review. Int. Psychogeriatr. 27(10), 1593–1600 (2015)
18. Vimarlund, V., Borycki, E.M., Kushniruk, A.W., Avenberg, K.: Ambient assisted living: identifying new challenges and needs for digital technologies and service innovation. Yearb. Med. Inf. 30(01), 141–149 (2021)
19. Ferro, E., et al.: The universAAL platform for AAL (ambient assisted living). J. Intell. Syst. 24(3), 301–319 (2015)
20. Hu, C.L., Chen, S., Guo, L., Chootong, C., Hui, L.: Home care with IoT support: architecture design and functionality. In: 2017 10th International Conference on Ubi-Media Computing and Workshops (Ubi-Media), pp. 1–6 (2017)
21. Stavropoulos, T.G., Papastergiou, A., Mpaltadoros, L., Nikolopoulos, S., Kompatsiaris, I.: IoT wearable sensors and devices in elderly care: a literature review. Sensors 20(10), 2826 (2020)
22. Dengler, S., Awad, A., Dressler, F.: Sensor/actuator networks in smart homes for supporting elderly and handicapped people. In: 21st International Conference on Advanced Information Networking and Applications Workshops, vol. 2, pp. 863–868 (2007)
23. Jakob, D.: Voice controlled devices and older adults: a systematic literature review. In: Proceedings 8th International Conference, ITAP 2022, 26 June–1 July 2022, Part I, pp. 175–200. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-05581-2_14
24. Mostaghel, R.: Innovation and technology for the elderly: systematic literature review. J. Bus. Res. 69(11), 4896–4900 (2016)
25. Johansson-Pajala, R.M., Gustafsson, C.: Significant challenges when introducing care robots in Swedish elder care. Disabil. Rehabil. Assist. Technol. 17(2), 166–176 (2022)
26. Lewis, G.A., Morris, E., Simanta, S., Wrage, L.: Why standards are not enough to guarantee end-to-end interoperability. In: Seventh International Conference on Composition-Based Software Systems, pp. 164–173 (2008)
27. Hirdes, J.P., et al.: Home care quality indicators (HCQIs) based on the MDS-HC. Gerontologist 44(5), 665–679 (2004)
28. Brown, T.J., Churchill, G.A., Jr., Peter, J.P.: Research note: improving the measurement of service quality. J. Retail. 69(1), 127 (1993)
29. Campbell, S.M., Roland, M.O., Buetow, S.A.: Defining quality of care. Soc. Sci. Med. 51(11), 1611–1625 (2000)
30. Mosimah, C.I., Battle-Fisher, M.: Quality of care. In: ten Have, H. (ed.) Encyclopedia of Global Bioethics, pp. 2369–2378. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-09483-0_360
31. Cleland, J., Hutchinson, C., Khadka, J., Milte, R., Ratcliffe, J.: What defines quality of care for older people in aged care? A comprehensive literature review. Geriatr. Gerontol. Int. 21(9), 765–778 (2021)
32. Sandhu, S., Bebbington, A., Netten, A.: The influence of individual characteristics in the reporting of home care services quality by service users. Res. Policy Plan. 24(1), 1–12 (2006)
33. Parasuraman, A., Zeithaml, V.A., Berry, L.: SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. J. Retail. 64(1), 12–40 (1988)
34. Neuhuettler, J., Ganz, W., Liu, J.: An integrated approach for measuring and managing quality of smart senior care services. In: Ahram, T.Z., Karwowski, W. (eds.) Advances in the Human Side of Service Engineering, pp. 309–318. Springer International Publishing, Cham (2017). https://doi.org/10.1007/978-3-319-41947-3_29
35. Weyns, K., Höst, M.: Service level agreements in municipal IT dependability management. In: IEEE 7th International Conference on Research Challenges in Information Science, pp. 1–9 (2013)
36. Zumkeller, S.: Using smart contracts for digital services: a feasibility study based on service level agreements. Master's Thesis in Information Systems, Technical University of Munich, Germany (2018)
37. Rana, O.F., Warnier, M., Quillinan, T.B., Brazier, F., Cojocarasu, D.: Managing violations in service level agreements. In: Talia, D., Yahyapour, R., Ziegler, W. (eds.) Grid Middleware and Services, pp. 349–358. Springer US, Boston (2008). https://doi.org/10.1007/978-0-387-78446-5_23
38. Mugurusi, G., et al.: The significance and barriers to organizational interoperability in smart service ecosystems: a socio-technical systems approach. In: APMS 2022. IFIP Advances in Information and Communication Technology, vol. 664, pp. 253–261. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16411-8_31
39. Eide, T., Gullslett, M.K., Eide, H., Dugstad, J.H., McCormack, B., Nilsen, E.R.: Trust-based service innovation of municipal home care: a longitudinal mixed methods study. BMC Health Serv. Res. 22(1), 1–20 (2022)
40. Holm, S.G., Mathisen, T.A., Sæterstrand, T.M., Brinchmann, B.S.: Allocation of home care services by municipalities in Norway: a document analysis. BMC Health Serv. Res. 17(1), 1–10 (2017)
41. Helgheim, B.I., Sandbaek, B.: Who is doing what in home care services? Int. J. Environ. Res. Public Health 18(19), 10504 (2021)
42. Norwegian Directorate of e-Health: Kunnskapsgrunnlag e-helsestrategi fra 2023 [Knowledge base for the e-health strategy from 2023] (2023). Retrieved from: PowerPoint-presentasjon (ehelse.no). Accessed 25 Apr 2023
43. Øydgard, G.: Individuelle behovsvurderinger eller standardiserte tjenestetilbud? En institusjonell etnografi om kommunale saksbehandleres oversettelse fra behov til vedtak [Individual needs assessments or standardized service offerings? An institutional ethnography of municipal caseworkers' translation from needs to decisions]. Tidsskrift for omsorgsforskning 4(1), 27–39 (2018)
44. Carlborg, P., Kindström, D.: Service process modularization and modular strategies. J. Bus. Ind. Mark. 29(4), 313–323 (2014)
45. Hein, A., et al.: Digital platform ecosystems. Electron. Mark. 30, 87–98 (2020)
Effect of Machine Sharing in Medical Laboratories

Aili Biriita Bertnum(B), Roy Kenneth Berg, Stian Bergstøl, Jan Ola Strandhagen, and Marco Semini
Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway [email protected]
Abstract. While medical research continuously develops new and innovative analysis methods, investing in new competence and resources is costly and challenging with the rigid budgeting of healthcare systems. This, combined with an increasing demand for healthcare services, implies an increasing need for efficient utilization of existing resources while maintaining the quality of service. Being organized by medical skills rather than by processes, different departments may carry out similar processes and thus require similar resources. This study aims to investigate the effects of resource sharing on medical laboratory performance. Medical laboratories are often the first part of the diagnostics and treatment processes, contributing information obtained from analyzing patients' biological samples, such as blood, urine, or tissue. A case study, including a quantitative simulation analysis, has been performed at a large Scandinavian medical laboratory, where the current situation of each department having its own DNA isolation machine has been compared to a hypothetical future situation where the machines are shared by centralizing the process. Results suggest that resource sharing can reduce time and cost but may reduce the mix flexibility and the quality of the analyses carried out. Resource sharing will also increase the complexity of operations. This study contributes to increased knowledge on the effects of machine sharing in medical laboratories and demonstrates how simulation may be used to justify investments.

Keywords: Medical Laboratories · Resource Sharing · Machine Sharing · Performance Measurement · Simulation
1 Introduction

The healthcare sector consists of various actors collaborating to provide high-quality healthcare services to the population. Population growth, and especially the growing share of elderly people, leads to more patients in need of healthcare services at all levels of the healthcare sector. Medical laboratories are often the first part of the diagnostics and treatment processes, contributing information obtained from analyzing patients' biological samples, such as blood, urine, or tissue [1]. The role of medical laboratories

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 515–530, 2023. https://doi.org/10.1007/978-3-031-43666-6_35
A. B. Bertnum et al.
is of great importance as it supports correct and timely patient treatment [2]. Rapidly providing accurate analysis results contributes to rapid patient recovery and can prevent unnecessarily long hospital stays [3], saving large expenses in the healthcare sector.

Medical research continuously develops new and innovative analysis methods that replace existing methods with improved precision for a broad assortment of medical conditions [4]. For example, by mapping and investigating a patient’s genetic code, patient treatment can be customized to achieve the best effect with the lowest risk. This is called personalized or precision medicine, which has an increasing number of application areas [5, 6]. However, the competence and resources required to offer personalized medicine are very costly. Especially in the healthcare sector, finding funding to invest in costly innovations can be challenging. Governmental budgets are often strict and do not necessarily reflect demand increases. Furthermore, rigid budgeting leads to silos where the cost of an investment is allocated to one specific budget while the potential savings appear in other parts of the healthcare system [7].

If the healthcare sector is to meet the needs of a growing population, it must not only provide timely and high-quality results but also utilize its resources optimally. Operations management and logistics can help balance these requirements and improve performance within the existing logistics system and a given set of resources [8]. Both literature and practice show an increased focus on and interest in healthcare research within operations management and management science. Departments in the healthcare sector are organized by medical skills rather than by processes [9], which means that several departments may have similar processes and, thus, need similar resources. This creates opportunities for sharing resources, e.g., when a resource is underutilized, expensive, or scarce.
Resource sharing can improve the quality of services, provide better control of existing resources, and save costs [10]. While it is not uncommon to share expensive resources in hospitals, such as intensive care beds and wards [10], there is a lack of systematic, research-based knowledge on the effects of sharing resources. Such knowledge can contribute to more resource sharing, potentially improving performance with a given set of resources and justifying investments in new and innovative equipment. In particular, research on resource sharing in medical laboratories is scarce. Such domain-specific research is needed because of the particularities of medical laboratories, including individual one-piece flow, the critical importance of speed, and, especially in the case of public institutions, the requirement to process all incoming samples, the limited opportunity to manage or control incoming flows, and strict budgets that do not necessarily reflect demand increases.

The objective of this study is to investigate the effects of resource sharing on medical laboratory performance. A case study, including a quantitative simulation analysis, was performed at a large Scandinavian medical laboratory. The purpose was to assess the potential effects of sharing the machines needed for the DNA isolation process, which is carried out at several of the laboratory’s departments. Today’s situation, where each department has its own machines to perform the process, was compared to a hypothetical future situation where the process is centralized and the machines are shared. The effects on throughput time and machine utilization were assessed by simulation analysis. In addition, case observations and theoretical reasoning were used to provide a
holistic assessment of the machine sharing effects. Results suggest that sharing resources can increase performance in most areas but is also likely to increase complexity.

The remainder of this paper is structured as follows. Existing literature on resource sharing and performance measurement is reviewed in Sect. 2. The methodology, comprising a case study and a simulation analysis, is introduced in Sect. 3. The case study is presented in Sect. 4, with a description of the medical laboratory, its processes, and the resource sharing situation. The simulation analysis and its assumptions, including the simulation model, the experiments, and the results, are described in Sect. 5. In Sect. 6, the effect of machine sharing on medical laboratory performance is discussed based on the theory, case study, and simulation results. The study concludes with a summary, the theoretical and practical contributions, limitations, and further work.
2 Theoretical Background

2.1 Resource Sharing in Medical Laboratories

Medical laboratories are organized by medical skills rather than by processes. For instance, where a pathologist investigates a biological sample for cancer, a microbiologist cultivates the bacteria or virus in a biological sample to identify the cause of sickness. Such focus on a limited area resembles the focused factory from the operations management literature: by focusing on a limited set of tasks, an organization is able to improve performance and outperform its competitors [11]. However, focus does not always reduce complexity. Hill [12] suggested a reclassification into two categories, splitting and focusing. Both entail breaking the operations into smaller units, but whereas splitting reflects a product, market, or process strategy, focusing is based on an order-winner strategy [12]. Haartveit et al. [13] expand this further by introducing the resource sharing concept, which is investigated in this study.

A resource is an object used in the creation, production, or delivery of a product or service that is not changed or consumed throughout the process [10, 14]. A shared resource is cooperatively managed by, e.g., two or more actors [15], two or more processes [16], or two or more production lines [10]. Such logistics collaboration can result in improved competitiveness and service by reducing costs and delivery times, as well as improving efficiency and productivity through better control of resource utilization [10, 17–19]. However, it requires that the capacity of the shared resource be well coordinated between the users of the resource [10, 15]. Resource sharing is typical in hospitals due to their technical infrastructure and highly specialized staff and machines [10].
Still, a process with many shared resources is more difficult to manage than a process with no shared resources and can lead to patient groups competing for resources [10]. If not managed properly, resource sharing can result in less accessibility and a lower quality of healthcare services. There are several studies on resource sharing, e.g., from the processing industry [15], waste recycling [16], and the sharing economy [20]. Within transportation, resource sharing has become a topic of interest as collaborative vehicle routing enables the exchange of transportation requests among various actors to achieve more efficient and sustainable operations [17, 19, 21]. However, to the best of our knowledge, existing
research does not consider the situation and challenges of sharing resources in medical laboratories, with their specific characteristics, despite its importance.

2.2 Logistics Performance of Medical Laboratories

Logistics performance can be translated to having an effective and efficient flow of materials, where effective and efficient mean “doing the right thing” and “doing things right”, respectively [22]. In a competitive market, four dimensions are especially important: production cost, product or service quality, delivery speed and reliability, and the flexibility of product volumes and product mix [23–25]. In a healthcare setting, this translates to service provision cost, quality, speed and reliability of the service, and service flexibility and capacity [10].

In medical laboratories, the material flow is often called the sample flow, as most activities revolve around the handling of biological samples. The sample flow can be divided into three phases: the pre-analytical, analytical, and post-analytical phase [26]. The pre-analytical phase involves sample reception, order verification, and sample preparation for analysis. The analytical phase involves setting up the analysis machine or workstation, inserting the samples, and executing the analysis. The post-analytical phase involves extracting, verifying, and reporting the analysis result to the customer.

Table 1. Importance of the four performance measures for medical laboratories.

Quality of analysis: Quality is a measure of how correctly the analysis was performed and how reliable the analysis result is. An analysis result of lower quality can lead to wrong decisions during consultation and treatment of the patient, which in the worst case can result in patient injury or death. Quality is, thus, the most important performance measure and an invariable requirement of a medical laboratory.

Cost of analysis: Cost represents all components that contribute to service provision in medical laboratories, which combined make up the cost of analysis the clinician pays for. This includes materials, personnel, maintenance, repair, and investments in machines and equipment, among others. In addition to the cost of analysis, the clinician pays for the transportation of the sample. Cost is an important performance measure, but if the clinician perceives the service to be better at a medical laboratory at a greater distance, the clinician is willing to pay the additional transportation cost.

Response time: Response time is the time from when the order and biological sample are sent until the analysis result is received by the clinician, including transportation time. However, since transportation time is outside the direct control of the medical laboratory, it is often more relevant to look at the throughput time, from the arrival of the biological sample until the analysis result is sent to the clinician. Timeliness is often critical in medical laboratories, and the clinician might choose to send the order to another medical laboratory if its response time is perceived as faster.

Mix and volume flexibility: Due to the wide variety and variable volumes of the samples to be analyzed, mix and volume flexibility are critical. Mix flexibility represents both the number of different analyses offered and the number of customization possibilities due to varying sample types in terms of quality and substances. Volume flexibility represents the ability to run an analysis regardless of the number of incoming samples within the response time, for instance by having enough machine capacity. Having backup machines is a way to ensure that the analysis can be executed during a machine breakdown.
The clinician sends the patient’s biological sample and the order to the medical laboratory, where high analytical quality, full availability, and timeliness of the analysis service are valued [27]. The main goal of medical laboratories is, thus, to provide accurate and timely analysis results of the highest possible quality [2, 3]. Failing to meet the clinicians’ requirements can result in the order being placed at a different medical laboratory, implying that distance and cost of analysis come second. Table 1 highlights the importance of the four performance measures for medical laboratories: quality of analysis, cost of analysis, response time, and mix and volume flexibility.
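Throughput time as defined in Table 1 can be computed directly from arrival and result timestamps. The sketch below, in Python, is illustrative only: the sample records and field layout are hypothetical, not data from the case laboratory.

```python
from datetime import datetime, timedelta

# Hypothetical sample records: (arrival at laboratory, result sent to clinician).
samples = [
    (datetime(2023, 9, 18, 8, 15), datetime(2023, 9, 19, 14, 0)),
    (datetime(2023, 9, 18, 9, 30), datetime(2023, 9, 20, 10, 45)),
    (datetime(2023, 9, 19, 11, 0), datetime(2023, 9, 21, 9, 30)),
]

def throughput_times_hours(records):
    """Throughput time: from sample arrival until the result is sent (Table 1)."""
    return [(done - arrived) / timedelta(hours=1) for arrived, done in records]

tts = throughput_times_hours(samples)
avg_tt = sum(tts) / len(tts)
```

Response time, in contrast, would additionally include the transportation legs, which lie outside the laboratory's control.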
3 Methodology

This study was carried out as part of an ongoing research collaboration with the medical laboratory of a large Scandinavian hospital. Resource sharing was among several research areas identified as particularly relevant, and the DNA isolation process was selected for the study on resource sharing presented in this paper. To obtain an in-depth understanding of the relevant processes and resources, a case study methodology was applied. A case study draft was presented at a preliminary meeting with representatives from the hospital’s laboratory, resulting in a revision of the case study design.

3.1 Case Selection

The case laboratory serves customers from all over the country. It consists of six departments within the medical fields of immunology and transfusion medicine, clinical pharmacology, medical biochemistry, medical genetics, medical microbiology, and pathology.
Each department offers, or will in the near future offer, genetic analyses. The department of medical genetics (DMG), the department of pathology (DP), and the department of medical biochemistry (DMB) were selected for the study; all of them analyze human DNA and isolate this DNA with a machine. DP and DMB face an upcoming investment need, as their current DNA isolation machines have too little capacity for the expected demand increase, whereas DMG has a flexible and underutilized DNA isolation machine that could potentially be shared with DMB and DP. The selected cases and the DNA isolation process are presented in Sect. 4.

3.2 Data Collection

To obtain as much knowledge as possible about the processes and resources of the case departments, semi-structured interviews were conducted. An interview guide was developed based on the preliminary meeting and the literature study, and it was revised and updated after each interview. One to two employees with profound knowledge of the overall process flow and resources were interviewed at each case department, e.g., department, unit, or field managers with a bioengineering degree. Each interview lasted one to two hours. A guided tour of the processes, workstations, and machines used for genetic analyses was given at each case department. The visits allowed additional questions to be asked that were not covered during the interviews. In addition, input to the simulation model was gathered, such as machine specifications, analysis frequencies, and batch sizes of relevant workstations and machines. The collected data was examined, structured, and put into an operations management and logistics context, which was presented at a workshop to the involved personnel from each case department. The workshop verified the data collection, which further strengthened the data analysis, results, discussion, and conclusion of the study.
3.3 Analysis Methods

The collected data was analyzed both quantitatively and qualitatively. Simulation was used to analyze throughput time and machine utilization. These performance measures were considered important drivers for the choice of whether to share machines. The simulation study is presented in Sect. 5. The machine sharing effects on the remaining performance measures identified in Sect. 2.2 are discussed in Sect. 6, based on the literature study and case study insights.
4 Case Study

4.1 Overall Sample Flow

The focus of this study is the genetic analyses performed by the selected case departments. Table 2 provides an overview of these departments’ operations concerning genetic analyses. They are quite similar in terms of work hours and yearly demand, but DMG has more than three times as many employees as DMB and DP. DMG receives
biological samples for which the cause of sickness is unknown. Thus, the interpretation of the analysis results is a resource-demanding and time-consuming process. DMB and DP, on the other hand, have a suspicion of the cause of sickness and can perform a very specific and targeted analysis of changes in a particular gene. In rare cases, when DMB or DP are not able to identify the cause of sickness, the biological sample is transferred to DMG for further analysis.

Table 2. Overview of the selected case departments’ operations concerning genetic analyses.

Employees: DMG 20; DP 6; DMB 6.
Work hours (all departments): Monday–Friday, 8 am–4 pm.
Description of analysis offer: DMG, unknown genetic changes related to rare diseases, conditions, or cancer; DP, known genetic changes related to cancer suspicion; DMB, known genetic changes indicating increased risk of diseases or conditions.
Response time: DMG, months; DP, days to weeks; DMB, days to weeks.
Current yearly demand (trend): DMG, 3000 samples (increasing); DP, 2500 samples (increasing); DMB, 2500 samples (increasing).
Sample types: DMG, blood, amniotic fluid, saliva, and buccal smear; DP, tissue and cells; DMB, blood.
The overall sample flow is similar in the three departments and is shown in Fig. 1. The biological sample is sent from the clinician, henceforth called the customer. The department receives the sample and manually prepares it for DNA isolation. DNA isolation is performed by specialized machines. To achieve similar concentrations of the isolated DNA, the sample is normalized, either manually or by a machine. The isolated and normalized DNA is analyzed in batches by an analysis machine, and the analysis result is manually interpreted and sent to the customer. The processes involve department-specific activities, which may affect the throughput time. For example, the time-consuming interpretation at DMG can lead to a throughput time of several months.
Fig. 1. Overall sample flow of genetic analyses at the case departments.
4.2 Machine Sharing Opportunity

DNA isolation is the process where the DNA is extracted from the biological sample. The preparation step usually involves manually attaching a barcode to each test tube before placing the tubes into the DNA isolation machine. The preparation step at DP is more comprehensive, as its samples are received as paraffin-embedded blocks, or incisions of a block, due to the preservation needs of pathological samples. The tissue must therefore be extracted so that only pure tissue is left for DNA isolation. DNA isolation is performed in batches. The machine is set up by checking whether there are enough kits and other materials to perform the DNA isolation. The isolated DNA is transferred from the original test tube to a plastic tube, which is used in the remaining steps of the overall process flow. The DNA isolation machines can only be run during working hours, as both the remaining biological material from the sample and the isolated DNA must be manually placed in a refrigerator to maintain shelf life.

DMG has an underutilized DNA isolation machine, henceforth called machine A. With increasing demand, DP and DMB are both experiencing capacity problems with their DNA isolation machines, henceforth called machine B and machine C, respectively. Machine A can process different sample types in the same batch, provided they are contained in separate test tubes. Furthermore, machine A is judged to provide DNA isolations of sufficient quality for all three departments. This situation suggests that sharing machine A is feasible and may provide an opportunity to save investment costs. The case laboratory was, therefore, interested in assessing the consequences of sharing the machine(s) needed for DNA isolation. A hypothetical scenario was developed, where machine A is used for DNA isolation of all samples from all three departments (see Fig. 2).
An important feature of the sharing situation is that the departments’ samples are placed in the same queue, which is possible because different sample types can be processed in the same batch. Thus, this specific part of the departments’ operations is centralized: the departments do not just share the machine through, for instance, time slot allocation, but integrate the different sample flows into a single flow before separating them again after DNA isolation. Medical laboratories cannot turn down analysis requests; thus, at least two machines are needed to perform a given operation in case one of them breaks down. The sharing scenario was therefore assumed to have two machines of type A.
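The pooled-queue logic described above can be illustrated with a minimal discrete-event sketch in Python. It is a deliberate simplification of the study's FlexSim model: arrivals are deterministic rather than random, processing is a fixed cycle time, and the inter-arrival times, batch size, and cycle time below are invented for illustration, not case data.

```python
import heapq

def simulate(interarrivals, batch_size, n_machines, cycle, horizon):
    """Average time from sample arrival until its batch finishes (minutes).
    interarrivals: one fixed inter-arrival time per department; the streams
    are merged into a single queue. Samples wait until a full batch is
    reached, then the batch runs on the first machine to become idle.
    Leftover partial batches at the end of the horizon are ignored."""
    arrivals = []
    for dept, gap in enumerate(interarrivals):
        t = gap
        while t <= horizon:
            heapq.heappush(arrivals, (t, dept))
            t += gap
    machine_free = [0] * n_machines   # time at which each machine is idle
    queue, waits = [], []
    while arrivals:
        t, _ = heapq.heappop(arrivals)
        queue.append(t)
        if len(queue) == batch_size:            # batch complete
            start = max(t, min(machine_free))   # wait for an idle machine
            machine_free[machine_free.index(min(machine_free))] = start + cycle
            waits += [start + cycle - a for a in queue]
            queue = []
    return sum(waits) / len(waits)

# Not sharing: one department on its own machine, one sample every 30 min.
alone = simulate([30], batch_size=8, n_machines=1, cycle=60, horizon=480)
# Sharing: three merged arrival streams on two shared machines, same batch.
pooled = simulate([30, 30, 30], batch_size=8, n_machines=2, cycle=60, horizon=480)
```

With these toy numbers the merged queue fills each batch three times as fast, so the average time per sample drops, which is the mechanism the simulation analysis in Sect. 5 quantifies.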
5 Simulation Analysis

A discrete-event simulation model was developed to analyze the effect of sharing DNA isolation machines. Due to the random nature of incoming patient samples, laboratory operations exhibit stochastic behavior. In such situations, discrete-event simulation is a useful method to assess the effects of different configurations on quantifiable, material-flow-related aspects, such as queue lengths and waiting times.

5.1 Model Development

To analyze the effect of sharing DNA isolation machines, it was sufficient to include the stages from sample arrival to the finished DNA isolation, as shown in Fig. 2.

Fig. 2. Comparing the current situation of not sharing DNA isolation machines to a hypothetical scenario where the machines are shared. Sample flow after DNA isolation is unchanged.

Today’s solution of not sharing machines was compared to a hypothetical solution where the three departments use the same machines, as presented in Sect. 4.2. Sample arrival was assumed to be random, so the number of incoming samples per day varied in the simulation, with average daily arrivals based on approximate yearly volumes per department obtained through the interviews. Due to uncertainty regarding the extent of the future demand increase, four demand scenarios were included, allowing assessment of capacity utilization and capacity need (number of machines). Laboratory personnel stated that demand is expected to increase by 20–80% annually over the next four years. The four demand scenarios therefore assumed annual demand increases of 20%, 40%, 60%, and 80%, respectively. The resulting average numbers of daily sample arrivals after four years, which were used in the simulation analysis, are shown in Table 3.

Table 3. Average number of daily sample arrivals at each of the three departments after four years, for each of the four demand scenarios.

Annual increase   DMG   DP    DMB
20%               24    20    20
40%               44    37    37
60%               76    63    63
80%               121   101   101
Following the situation at the case departments, genetic analysis operating hours were assumed to be 8 am to 4 pm on workdays. It was assumed that the DNA isolation process could only take place during these hours (due to the need for manual sample preparation, manual loading and unloading of the DNA isolation machines, and manual storage in a refrigerator). Pseudo-random numbers were used to ensure that each
simulation run was based on the same conditions. Thus, the sample arrival times were identical in all the scenarios simulated. The processing times of the two processes included (preparation and DNA isolation) were assumed to be deterministic. Each process was assumed to be performed with a given, fixed batch size, as specified in Fig. 2, and a process was only initiated when the required batch size was reached. The batch sizes initially specified in the simulation model were based on the batch sizes currently used at each department in each process, usually determined by the size of the racks. Samples queued up before the processes until a complete batch was reached, which was then processed as soon as there was free capacity. Transportation between the processes was assumed to be performed by operators, short and of constant duration.

The model was implemented in FlexSim. The case study was used to validate the model through accurate input data, such as demand and process, machine, and sample specifications. The answers collected in the semi-structured interviews were double-checked with the interviewees for correctness and consistency. Furthermore, after model development and analysis, a workshop with laboratory personnel was held to ensure a correct understanding of the system modelled and to discuss the model’s logic and assumptions, experiments, and results. The simulation period represented one month as twenty consecutive workdays. A total of eight scenarios were tested: “sharing” and “not sharing” for each of the four demand scenarios. The simulation model assessed the following performance measures:
• Throughput time: The time from when a sample entered the system until it exited the DNA isolation process. This measure is part of the sample analysis’ response time, which is a critical measure in healthcare operations (see Sect. 2.2).
• Machine utilization: The percentage of time the machine is in use, relative to the total laboratory operating time (8 h per weekday). This measure relates to the cost of analysis, as it provides insight into when there is a need to invest in additional machines, another critical measure in healthcare operations.

5.2 Analysis Results

Average Throughput Time. Table 4 compares average throughput times between sharing and not sharing, separately for each department and demand scenario. For both DMG and DMB, machine sharing resulted in shorter throughput times for all demand scenarios other than the 80% annual increase. For DP, the results were the opposite: resource sharing showed poorer performance in all simulations except when the expected annual volume increase was 80%.

DMG and DMB operate with batch sizes of 24 and 32 samples, respectively. Since the batch size in the shared situation is 24, and the number of arrivals in the shared queue is higher than at the individual departments in the not sharing situation, the batch size is reached faster in the sharing situation. However, this is not necessarily the case for DP with its batch size of 12 samples. Only when demand becomes too large for DP’s own machine to handle, at an annual volume increase of 80%, does the resource sharing scenario show better performance. For DMG and DMB, the capacity utilization of their individual machines, in the not sharing situation, is
lower than that of the two type A machines in the sharing situation. This explains why not sharing leads to shorter throughput times in the 80% demand scenario, where the demand exceeds the capacity of the two shared type A machines.

Table 4. Average throughput time in workdays from sample reception until finishing the DNA isolation process.

Department   Annual volume increase   Not sharing   Sharing   Improvement with machine sharing
DMG          20%                      1.77          1.34      43%
DMG          40%                      1.38          1.23      15%
DMG          60%                      1.31          1.22      9%
DMG          80%                      1.25          2.12      −87%
DP           20%                      1.84          2.08      −24%
DP           40%                      1.72          1.85      −13%
DP           60%                      1.70          1.81      −11%
DP           80%                      3.20          2.65      55%
DMB          20%                      1.96          1.34      62%
DMB          40%                      1.84          1.22      62%
DMB          60%                      1.44          1.21      23%
DMB          80%                      1.43          2.13      −70%
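The department-level pattern in Table 4 can be partly explained by a back-of-the-envelope batch-fill calculation: with roughly steady arrivals at rate lam samples per day, a sample waits on average about (B − 1) / (2 · lam) days for its batch of size B to fill. The rates below are the 20% scenario from Table 3. Note that this toy formula ignores machine cycle time, working hours, and contention for the shared machines, which is why DP can still fare worse under sharing even though its pooled fill wait looks small.

```python
# Average batch-fill wait (days) under steady arrivals: a sample waits,
# on average, about half the time needed to collect a full batch.
def fill_wait(batch_size, lam):
    return (batch_size - 1) / (2 * lam)

dmg_alone = fill_wait(24, 24)       # DMG: batch of 24, 24 samples/day
dmb_alone = fill_wait(32, 20)       # DMB: batch of 32, 20 samples/day
dp_alone = fill_wait(12, 20)        # DP: batch of 12, 20 samples/day
shared_fill = fill_wait(24, 24 + 20 + 20)   # pooled queue, batch of 24

# DP's small own batches already fill quickly, so its gain from pooling
# is modest and can be outweighed by queueing at the shared machines.
```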
Machine Utilization. Figure 3 compares the machine utilization between the two situations for each of the four demand scenarios. The utilization is especially affected by two factors: (1) the simulation starts with an inventory level of zero, so the system takes some time to reach the steady-state behavior of the queues; (2) DNA isolation of a batch can only be initiated if it can also be completed within working hours, i.e., no overtime is allowed. This explains why capacity utilization stays below 100% even with an 80% demand increase.

In the sharing situation, the second machine starts being used in the 40% scenario. In the 60% scenario, both machines are utilized around 60%. Only at an 80% annual volume increase does machine utilization approach 100%. All in all, this shows that the three departments can absorb a very large demand increase before more than two machines of type A are needed. Furthermore, capacity utilization is higher in the sharing situation, which is one of the typical expected benefits of machine sharing.

Fig. 3. DNA isolation machine capacity utilization in the sharing and not sharing situation.

Based on the results so far, additional simulation experiments were carried out:
• Increasing capacity in the sharing situation by adding a third machine of type A. As expected, this turned sharing into the best option for DMG and DMB also in the 80% demand increase scenario, with an average throughput time of approximately 1.2 days for DMG and DMB, and approximately 1.7 days for DP.
• Operating the type A machines with batch size 12, which typically increases costs. As expected, sharing then becomes the best option for DP as well in all demand scenarios. The 20% and 40% scenarios had average throughput times of 1.76 and 1.70 days, respectively. The capacity in the 60% scenario was not sufficient, but a batch size of 18 resulted in an average throughput time of 1.69 days.

In summary, the simulation study quantitatively confirmed that sharing machines improves throughput times as long as there is sufficient capacity and processing batch sizes are not larger than before. While throughput times are of critical importance, they are not the only performance measure to consider when assessing the effect of machine sharing in medical laboratories. In the next section, the complete set of relevant performance measures identified in Sect. 2.2 is used to make a more holistic assessment of the effects of machine sharing.
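A rough capacity check makes the same point as Fig. 3: how many shared machines does a given demand scenario require? The cycle time of one hour per batch below is an assumption for illustration; the paper does not report the machines' actual cycle times.

```python
import math

# Machines needed to clear one day's batches within the operating window.
def machines_needed(daily_samples, batch_size, cycle_hours=1.0, window_hours=8.0):
    batches_per_day = math.ceil(daily_samples / batch_size)
    return math.ceil(batches_per_day * cycle_hours / window_hours)

# 60% scenario (Table 3): 76 + 63 + 63 = 202 samples/day, batch size 24.
m60 = machines_needed(202, 24)
# 80% scenario: 121 + 101 + 101 = 323 samples/day; still two machines
# with a 1-hour cycle, but a slower machine would tip it to three.
m80 = machines_needed(323, 24)
m80_slow = machines_needed(323, 24, cycle_hours=1.5)
```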
6 Effect of Machine Sharing in Medical Laboratories

This section qualitatively discusses the expected effects of machine sharing based on theory, case study insights, and the simulation analysis. Section 2.2 highlighted the importance of quality, cost, time, and flexibility in medical laboratories; the effects of machine sharing are therefore assessed along these four performance measures.

As discussed by Haartveit et al. [13], when a certain process (such as DNA isolation) is contained in several production flows and required by several departments, several types of resources may be shared. In addition to machines and equipment, which are the focus of this study, location and personnel may also be shared. In the present context, sharing machines implies sharing location because the machines cannot easily be moved. Furthermore, if the benefits of sharing machines are to be exploited, the personnel operating the machines should be cross-trained (either dedicated to the shared process or also performing other processes) and able to process samples from all departments sharing the machine. Otherwise, samples from different departments cannot be combined
in the same batch, and the throughput time savings from combining samples, identified in the simulation analysis, will not be achieved. Therefore, we assume that sharing machines implies sharing location and personnel. Relevant theoretical perspectives for assessing the effects of sharing in such a context include economies of scale, centralization vs. decentralization, and process vs. product/cellular layout.

Starting with the time dimension, throughput time was identified as the critical measure. As seen in the simulation, machine sharing affects the time a sample spends at the laboratory. Given that there is sufficient capacity at the shared machine(s) and the batch sizes are not larger than in the not sharing situation, substantial throughput time reductions can be achieved by combining samples from different departments in the same batch. This is because the required batch size is reached faster, so waiting time before the shared machine is reduced. A prerequisite for exploiting this opportunity, however, is that samples do not accumulate before or after the shared process; larger distances between the shared machine(s) and some of the departments may hamper access and reduce visual control. Some of the throughput time reductions may be sacrificed in favor of cost reductions by increasing the batch sizes. This reflects the typical trade-off between inventory and setup costs, setup costs in this context being costs incurred once per batch.

Arguably, cost reductions are the most prominent benefit of and rationale for machine sharing. Machine sharing typically leads to higher machine capacity utilization, as not every department needs its own machine, the latter situation often implying more idle time. Medical equipment is expensive and goes through rapid technological development [28], requiring regular replacement. Thus, high capacity utilization is imperative to achieve an acceptable return on investment.
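The inventory-versus-setup trade-off mentioned above can be made concrete with a toy cost model: larger batches mean fewer per-batch setups (kits, loading, machine setup) but longer average batch-fill waits. All cost figures and the arrival rate below are invented for illustration, and the batch-fill wait again assumes steady arrivals.

```python
# Toy daily-cost model for choosing a batch size at a shared machine.
def daily_cost(batch_size, lam=64, setup_cost=50.0, wait_cost=8.0):
    batches_per_day = lam / batch_size                 # setups per day
    avg_wait_days = (batch_size - 1) / (2 * lam)       # batch-fill wait
    return setup_cost * batches_per_day + wait_cost * avg_wait_days * lam

# Scan candidate batch sizes for the cost minimum.
best = min(range(4, 97), key=daily_cost)
```

With these invented parameters the cost-minimizing batch size lies between the small batches that favor speed and the large batches that favor setup cost, mirroring the trade-off described in the text.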
Machine sharing allows investment costs to be shared across departments, which typically also makes it easier to get budgets approved. Faster and more effective equipment may be acquired as more total capital is available and risks are shared [20]. In addition to a reduction in total investment costs, space, service, and maintenance costs will also be reduced. The centralization resulting from machine sharing will typically also reduce the need for tooling and inventory of consumables. However, a certain increase in labor cost must be expected from transporting the samples over larger distances between the shared process and the preceding and subsequent processes. Sharing machines may also lead to higher costs related to organization and coordination [20].

Whereas machine sharing is likely to have a beneficial effect on time and cost, the effects on flexibility and quality must also be taken into account. From the perspective of individual departments, volume flexibility may be improved due to the risk-pooling effect of sharing machines. However, for the overall laboratory, higher machine utilization implies that a total demand increase will sooner lead to insufficient capacity at the shared machines, turning the shared process into a bottleneck. Sharing machines may also negatively impact the laboratory's overall mix flexibility: whereas department-specific machines can be selected to cover a wide range of functions within specific application areas, compromises will have to be made when acquiring machines to be used by several departments. Although the mix flexibility (multifunctionality) of the shared machines would usually be higher than that of the departments' individual machines [29], the laboratory's total mix flexibility may be reduced.
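The utilization argument can be made concrete with a back-of-the-envelope calculation (the workload figures are hypothetical, not from the case study): pooling demand lets the laboratory cover the same total workload with fewer machines, each running at higher utilization.

```python
from math import ceil

def machines_needed(demands, capacity):
    """Machines required when each department buys its own fleet vs.
    when demand is pooled on shared machines (hypothetical figures)."""
    separate = sum(ceil(d / capacity) for d in demands)  # one fleet each
    shared = ceil(sum(demands) / capacity)               # pooled fleet
    return separate, shared

# Four departments, each running 30 batches/week, on machines that can
# process 50 batches/week:
sep, sha = machines_needed([30, 30, 30, 30], capacity=50)
print(sep, sha)  # 4 machines at 60% utilization vs. 3 at 80%
```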
528
A. B. Bertnum et al.
The required compromises regarding the shared machines' specifications may affect the quality of the analysis results. The shared machines need to be more general, covering a wider range of functionalities. Therefore, there is a risk that the quality of some of the analysis results is reduced. A satisfactory level of quality for all involved departments is a necessary condition if machines are to be shared. Generally, there may also be a risk of reduced quality when the shared machines are operated by cross-trained personnel. In the present case, this was not a relevant aspect because operating the DNA isolation machines was essentially the same for all the departments' samples.

In this section, the effect of machine sharing on performance has been discussed. Since a shared machine is involved in several sample flows, a larger part of the laboratory's operations depends on it. Therefore, managing it becomes increasingly important [20]. Managing also becomes more complex: lot sizing, scheduling, and prioritization need to consider more factors; there is less excess capacity; personnel operating the shared machine(s) need to handle a higher variety of sample types, requiring more knowledge and increasing the risk of human error; machine specifications cover a wider range of applications, potentially making the machine itself more complex to operate and maintain; and the physical arrangement of samples, materials, and equipment near the machine becomes more challenging because of higher volumes and variety.
7 Conclusions

The organization of healthcare systems by medical skills rather than by processes entails the risk of duplication and low utilization of resources when several entities need to perform similar processes. With a continuing population increase, combined with limited budgets, utilizing existing resources efficiently becomes paramount. The purpose of the present study has been to investigate the effects of machine sharing on medical laboratory performance. We considered the case of DNA isolation at a large medical laboratory, a process required by most of its departments. We used a simulation model to assess the effect of sharing DNA isolation machines on throughput time and machine utilization. In addition, we took a holistic perspective on the effects, combining the simulation results with qualitative insights from the case study. The results indicate significant reductions in throughput times from machine sharing as well as reduced investment and operating costs, but also a risk of reduced mix and product flexibility, reduced quality, and increased complexity. The implication for practice is that laboratories should investigate machine sharing opportunities, as these may reduce time and cost if a sufficient level of quality is maintained and the increased organizational and managerial complexity is adequately handled. The contribution of the paper is increased knowledge on the effects of machine sharing in medical laboratories. The study also demonstrates how simulation analysis may be used to justify investments. A limitation of the present study is that it is based on a single case. To validate, further develop, and increase the generality of this study's results, additional case studies will be required. Furthermore, the effects of machine sharing should be empirically assessed after implementation. More work is also needed on understanding the potential benefits of having dedicated personnel operating the machines.
Last, but not least, an opportunity is to investigate how to efficiently manage the increased complexity resulting from machine sharing.
References

1. Yang, T., et al.: The optimization of total laboratory automation by simulation of a pull-strategy. J. Med. Syst. 39(162), 1–12 (2015)
2. Plebani, M., Laposata, M., Lippi, G.: A manifesto for the future of laboratory medicine professionals. Elsevier (2019)
3. Ong, M.-S., Magrabi, F., Coiera, E.: Delay in reviewing test results prolongs hospital length of stay: a retrospective cohort study. BMC Health Serv. Res. 18(369), 1–8 (2018)
4. Ivanov, A.: Barriers to the introduction of new medical diagnostic tests. Lab. Med. 44(4), e132–e136 (2013)
5. Brittain, H.K., Scott, R., Thomas, E.: The rise of the genome and personalised medicine. Clin. Med. 17(6), 545–551 (2017)
6. Plebani, M.: Clinical laboratories: production industry or medical services? Clin. Chem. Lab. Med. (CCLM) 53(7), 995–1004 (2015)
7. European Alliance for Personalised Medicine: Innovation and Patient Access to Personalised Medicine. Irish Presidency Conference (2013)
8. Van Sambeek, J., et al.: Models as instruments for optimizing hospital processes: a systematic review. Int. J. Health Care Qual. Assur. 23(4), 356–377 (2010)
9. Gonçalves, P.D., Hagenbeek, M.L., Vissers, J.M.: Hospital process orientation from an operations management perspective: development of a measurement tool and practical testing in three ophthalmic practices. BMC Health Serv. Res. 13, 475 (2013)
10. Vissers, J., Beech, R.: Health Operations Management: Patient Flow Logistics in Health Care. Routledge Health Management Series. Routledge, Oxford (2005)
11. Skinner, W.: The focused factory. Harvard Bus. Rev., 114–121 (1974)
12. Hill, A.: How to organise operations: focusing or splitting? Int. J. Prod. Econ. 112(2), 646–654 (2008)
13. Haartveit, D.E.G., Semini, M., Alfnes, E.: Splitting or sharing resources at the process level: an automotive industry case study. In: Emmanouilidis, C., Taisch, M., Kiritsis, D. (eds.) Advances in Production Management Systems. Competitive Manufacturing for Innovative Products and Services, pp. 467–473. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40352-1_59
14. Pittman, P.H., Atwater, J.B. (eds.): ASCM Supply Chain Dictionary, 17th edn. APICS, Chicago (2022)
15. Van Donk, D.P., Van Der Vaart, T.: A case of shared resources, uncertainty and supply chain integration in the process industry. Int. J. Prod. Econ. 96(1), 97–108 (2005)
16. Wu, J., et al.: Two-stage network processes with shared resources and resources recovered from undesirable outputs. Eur. J. Oper. Res. 251(1), 182–197 (2016)
17. Cruijssen, F., Dullaert, W., Fleuren, H.: Horizontal cooperation in transport and logistics: a literature review. Transp. J. 46(3), 22–39 (2007)
18. Sandberg, E.: Logistics collaboration in supply chains: practice vs theory. Int. J. Logist. Manag. 18, 274–293 (2007)
19. Becker, T., Stern, H.: Impact of resource sharing in manufacturing on logistical key figures. Procedia CIRP 41, 579–584 (2016)
20. Freitag, M., Kück, M., Becker, T.: Potentials and risks of resource sharing in production and logistics. In: Proceedings of the 8th International Scientific Symposium on Logistics. BVL, Karlsruhe (2016)
21. Gansterer, M., Hartl, R.F.: Collaborative vehicle routing: a survey. Eur. J. Oper. Res. 268(1), 1–12 (2018)
22. Gleason, J.M., Barnum, D.T.: Toward valid measures of public sector productivity: performance measures in urban transit. Manag. Sci. 28(4), 379–386 (1982)
23. Chapman, S.N.: The Fundamentals of Production Planning and Control. Pearson/Prentice Hall, Upper Saddle River (2006)
24. Neely, A., Gregory, M., Platts, K.: Performance measurement system design: a literature review and research agenda. Int. J. Oper. Prod. Manag. 15(4), 80–116 (1995)
25. Schönsleben, P.: Integral Logistics Management: Planning and Control of Comprehensive Supply Chains, 2nd edn. CRC Press, New York (2003)
26. Schimke, I.: Quality and timeliness in medical laboratory testing. Anal. Bioanal. Chem. 393, 1499–1504 (2009)
27. Dolci, A., et al.: Total laboratory automation: do stat tests still matter? Clin. Biochem. 50, 605–611 (2017)
28. Boudoulas, K.D., et al.: The endlessness evolution of medicine, continuous increase in life expectancy and constant role of the physician. Hellenic J. Cardiol. 58(5), 322–330 (2017)
29. Wilson, S., Platts, K.: How do companies achieve mix flexibility? Int. J. Oper. Prod. Manag. 30(9), 978–1003 (2010)
Additive Manufacturing in Operations and Supply Chain Management
What to Share? A Preliminary Investigation into the Impact of Information Sharing on Distributed Decentralised Agent-Based Additive Manufacturing Networks

Owen Peckham(B), Mark Goudswaard, Chris Snider, and James Gopsill

School of Civil, Aerospace and Mechanical Engineering, University of Bristol, Bristol, UK
[email protected]

Abstract. Distributed Decentralised Additive Manufacturing (DDAM) networks are considered a complementary method to mass/batch production paradigms, providing ramp-up, robust, and responsive global-local manufacturing capability. One implementation is to use Artificially Intelligent (AI) agents that represent jobs and machines and broker on their behalf. Fundamental to the brokering process is the sharing of information about the jobs and machines such that machines are able to select appropriate jobs that they have the capabilities and resources to complete. A challenge may exist here, in that different jobs and machines may be unable (i.e. due to partial information) or unwilling (i.e. due to IP concerns) to share, potentially limiting decision-making capability and system performance. This paper examines the effect of information sharing on the performance of a brokered DDAM network. To do so, AnyLogic was used to create a multi-agent simulation of a DDAM network in which the characteristic information shared with the machines by the jobs could be varied. The results showed that, in general, more information sharing boosted system performance, but that different types of information shared by jobs and machines had a varying impact on the performance benefit realised and should be prioritised to maximise system throughput.

Keywords: Additive Manufacturing · Agent-Based Modelling · Artificial Intelligence · Decentralised Production · Distributed Production
1 Introduction
Production is experiencing unprecedented change [1]. Societal behaviours, such as the 'Maker' movement, mass-customisation, and the desire for rapid delivery, drive the necessity for on-demand and responsive manufacturing capabilities.

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 533–547, 2023. https://doi.org/10.1007/978-3-031-43666-6_36
Moreover, mentalities around Net Zero and the Circular Economy have created existential crises with regard to how society should manufacture and supply itself. Market fragility and supply chain uncertainty increase the risk in forecasts, resulting in smaller and more intermittent orders being made [2]. National security agendas are encouraging the re-shoring of manufacturing, and the increase in global 'shock' events¹ is requiring more resilient manufacturing and supply chain architectures [3,4].

The digitalisation of production is enabling the development and deployment of solutions to meet the aforementioned societal needs. Innovations such as Additive Manufacturing, robotics, automated inventory, Computer Aided Design - Computer Aided Manufacture (CADCAM) workflows, the Internet-of-Things (IoT), networking, cloud infrastructures, and manufacturing resource planning software systems enable manufacturing to be deployed across factories, warehouses, offices, universities, schools, and homes, allowing flexibility with little-to-no re-configuration required² to move from manufacturing one component to the next [5,6]. Digitally connecting this capability offers the opportunity for global-local³ production to occur, affording [7–9]:

1. Reduction in distribution costs;
2. Shorter lead times;
3. Less capital investment;
4. More flexibility in production capacity;
5. Better production capacity utilisation;
6. Larger pool of experts;
7. Diffusion of risk;
8. Boost to local economies; and,
9. More sustainable solutions.
Brokering networks of production jobs and AM machines are receiving increased attention, where researchers have applied Artificially Intelligent (AI) agents that represent and bid on behalf of jobs and machines [10–12]. Such a network is exemplified in Fig. 1. An advantage of an AI agent is that control, governance, and IP are maintained with the owner throughout the negotiations with the network [13]. Owners are able to define their own rules, constraints, and logic by which they wish to negotiate a contract, and only when a contract is made are the design/manufacturing instructions sent encrypted through the network, never resting in the cloud in the process. The autonomous nature of an AI agent approach also makes the bidding and awarding of small-scale jobs accessible to firms that would otherwise dismiss them due to the traditional overheads and costs associated with manual bidding processes. AI agents can also come together to represent collections of jobs and machines, thereby enabling smaller firms to 'join forces' to bid for larger packages of work that would otherwise be inaccessible. The result is ephemeral, amorphous supply chains formed as and when demand requires.

¹ E.g., COVID-19 and the war in Ukraine.
² Especially when compared to mass production methods.
³ Responding to global needs with local manufacturing solutions.
Fig. 1. Brokering DDAM through AI agents operating on behalf of jobs and machines.
This manufacturing capability is referred to herein as Distributed Decentralised Additive Manufacturing (DDAM) and has implications across the emerging Circular Economy. DDAM can support the:

1. Beginning-of-Life production of products to meet increasingly diverse consumer demand;
2. In-Service-Life of products by facilitating on-demand production of spare parts; and,
3. End-of-Life circularity of products through re-manufacture and the utilisation of recycled resources in Beginning-of-Life products.

Fundamental to DDAM operation is the sharing of information during the brokering process such that jobs and manufacturing machines can be paired appropriately. The majority of research has specified and assumed full divulging of information in the brokering processes examined [4,10,14]. However, there may be firms that are unwilling to share particular information, such as sensitive company Intellectual Property (IP) [15]. In these cases, the brokering processes would simply not permit the firm to partake. For increased adoption, it would therefore be favourable if the brokering process could accommodate AI agents that share varying levels of information (in accordance with what companies are able to share), with the AI agents considering and managing any uncertainty in their decision-making given that it is based upon incomplete information.

This paper contributes to this area of research by examining the impact of information sharing on DDAM network performance such that brokering protocols without full information divulgence can be developed. The general approach is to simulate a DDAM network using AnyLogic and track the system's performance across runs whilst varying the amount of information shared by the jobs. Given the large number of possible parameters needed for machines to be able to produce jobs, four information types are synthesised for use in the study.
These allowed the study to only consider four example parameters, whilst still investigating the wider scope of possible information that
could be shared. The paper considers only a network of homogeneous machines to allow the authors to control the scope of the study. The use of information types removes the need for the machine characteristics to bear weight on the overall system, since the brokering transaction only ever considers the jobs relative to the machines. The paper continues by reviewing the related work on modelling DDAM networks and the parameters used in their brokering processes (Sect. 2). This is followed by the methodology, where the parameters are generalised into a number of information classes that influence the brokering process in specific ways, and the creation of an agent-based model to investigate their impact on DDAM network performance (Sect. 3). The results are then presented, followed by a discussion on the design of brokering processes for DDAM networks (Sects. 4 and 5). The paper concludes with the key findings from the study (Sect. 6).
2 Related Work
DDAM networks feature jobs and machines that can individually and collectively reason and develop their own strategies for brokering and distributing work. This stands in contrast to the prevalent centrally controlled and governed manufacturing systems, and it requires the sharing of information between firms in order to operate. Previous work has included numerical modelling of DDAM networks, which showed them to be more robust, resilient, and responsive compared to centralised control [16]. [4] focused on changes in demand behaviour, with a DDAM network configured to handle a steady-state demand input that then experienced a step, ramp, or saw-tooth change in demand. The study was interested in the ability of different DDAM configurations to respond to sudden changes in demand and showed that different configurations performed well against the different demand profiles; therefore, one needs to change machine logics in line with the impending demand. The information shared during the negotiations within this network was manufacturing times, machine capabilities, and job due dates. In a study focusing on machine logics, it was observed that switching the job selection priority based on the job type composition of incoming demand enabled machines to respond effectively to sudden changes in the distribution of job types being submitted [10]. Following the demand change, the system would slowly return to its original state, although some machine logic configurations resulted in more machines remaining on one type of job even though there was a steady-state stream of jobs equally distributed across the job types. This highlighted that the composition of machine logics can affect the behavioural stability of the system and that it may not return to its original condition after a demand-change event. Fundamental to changing type was the provision of information that could be used to 'type' a job (e.g., material requirement).
These were boolean values that would require a re-configuration of the machine in order to accommodate the job.

The validation and low-TRL development of technologies that can put DDAM theory into practice has also seen innovations, with [17] detailing an open-source Brokering Additive Manufacturing platform that has been used in a Living Lab scenario. Data from the studies has both validated DDAM models and identified the challenges of implementing DDAM in the real world. The reported studies used print time, material, and gcode flavour. Other, more theoretical, work such as [18] has reviewed the DDAM modelling landscape to determine the underlying set of parameters that make up a DDAM network, covering demand profile, job, installed base, and brokering strategy. The consideration of job parameters included quality, size, material, and priority.

The review highlighted nine parameters used in the brokering processes of DDAM networks, summarised in Table 1. This is not an exhaustive list, yet it serves to demonstrate the kind of information a machine may require from a job to allow production. There is a rich and diverse mix of parameters covering continuous, boolean, and categorical values. No research, however, has considered how varying the amount of available information for decision making impacts the performance of DDAM networks.

Table 1. Parameters used in brokering processes for DDAM.

| # | Parameter | Type | Description | Ref(s) |
|---|-----------|------|-------------|--------|
| 1 | Print Volume | Continuous | The volume the component will take up on an AM print bed. | [18] |
| 2 | Print Time | Continuous | The estimated time the component will take to manufacture. | [4, 17] |
| 3 | Material(s) | List | The material(s) that are required to manufacture the component. | [10, 17, 18] |
| 4 | Gcode Flavour(s) | List | The style of gcode the manufacturing data has been formatted for. This can prevent some machines from manufacturing the component. | [17] |
| 5 | Quality | Continuous | The tolerances/accuracy that the machine should be able to achieve in order to produce the component. | [18] |
| 6 | Due Date | DateTime | A date-time object detailing when the component should be produced by. | |
| 7 | Multi-Extrusion | Integer | The number of nozzles required for a multi-material component. | |
| 8 | Nozzle Size(s) | List | The diameter of the nozzles required. | |
| 9 | Required Material Length(s) | List | The amount of each material that the component requires. | [16] |
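The parameters of Table 1 can be pictured as a record in which any field may be withheld. The following sketch (field names are illustrative, not taken from the platform in [17]) shows one way a job could expose only the subset of parameters it is willing to divulge during brokering:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class JobDescription:
    """The nine brokering parameters of Table 1; any field may be
    withheld (None) by a firm unable or unwilling to share it."""
    print_volume: Optional[float] = None      # volume on the print bed
    print_time: Optional[float] = None        # estimated minutes
    materials: Optional[list] = None          # e.g. ["PLA"]
    gcode_flavours: Optional[list] = None     # e.g. ["Marlin"]
    quality: Optional[float] = None           # achievable tolerance, mm
    due_date: Optional[datetime] = None
    multi_extrusion: Optional[int] = None     # nozzles required
    nozzle_sizes: Optional[list] = None       # nozzle diameters, mm
    material_lengths: Optional[list] = None   # amount per material

    def shared(self):
        """Only the fields the job is willing to divulge."""
        return {k: v for k, v in self.__dict__.items() if v is not None}

job = JobDescription(print_time=120, materials=["PLA"])
print(job.shared())
```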
3 Methodology
To explore the impact of information sharing on system performance, the authors first performed a dimensional reduction exercise to group and classify the types of information that are shared and their role in the brokering decision-making processes of the AI agents (Sect. 3.1). A numerical study was then created to explore the impact of the different information types on system performance (Sect. 3.2).

3.1 Information Types
The Related Work (Sect. 2) highlighted a number of parameters that would influence an AI agent’s decision making process. For example, a machine AI agent logic may be configured to identify and select jobs with the lowest print
time, enabling the machine to process more jobs within a given timespan. Reasons for this may include increasing profitability, if the sum of a set of jobs is valued greater than a single long job, and/or the desire to clear the system of jobs. After reviewing and debating the information shared in previous work, the authors propose that information shared during a brokering process can be distinguished by two criteria.

The first criterion is whether the piece of information constrains the decision-making at a global (system) or local boundary. Global boundary information ascribes information identically across the system and affords a common ground for all AI agents to make decisions on. An example of global-level information is print time, which is often equivalent for all machines. Thus, all machines are likely to consider print time in a similar way. Local boundary information can be affected by the specific state of the job/machine the AI agent represents within the network. For example, the amount of filament required will often be compared to the filament remaining, with a machine agent likely to favour finishing a reel over having to restock (and thus incur downtime). This could be for sustainability reasons as well as risk mitigation, as pausing the manufacturing process to replace filament may lead to component defects.

The second criterion is whether a piece of information supports an AI agent in scoring work or in assessing job compatibility. Compatibility information is used to determine whether a machine is capable of manufacturing the job. This is critical for AM due to its inherent heterogeneity and the wide variety of materials in which it can print [4]. For example, should a machine not be capable of fabricating in the material required by a job, then the job is considered non-compatible by the AI agent(s). Scoring information supports an AI agent in ranking jobs such that they can be prioritised for selection, for example by fabrication time.
The two criteria form a strategic diagram featuring four types of theoretical information that can affect an AI agent's decision logic (Table 2): Local Compatibility, Global Compatibility, Local Scoring, and Global Scoring. In a practical DDAM network, all of these except the Global Compatibility class are present. This information type is omitted because any job that fails a compatibility check at the global scale is not permitted entry into the network and is therefore never considered by the system. An example would be a check on print volume, ensuring that machine(s) exist on the network that could manufacture the job.

A number of assumptions are made throughout the study relating to the information shared, in order to focus the scope. First, that the information types proposed form a suitably exhaustive set, encapsulating a majority of machine and job parameters. As such, while this study focuses on a single machine type, other machines may be included in the same system in future work. Second, that there is general compatibility between all job and machine parameters considered. This assumes that sufficient standard practice exists in the AM field for machines and jobs to be cross-compatible, and hence for brokering to be viable. Third, that further processing and quality management may be ignored here. While post-printing operations and managing part quality are critical elements of an effective DDAM, they are considered outside the scope of this paper.
Table 2. Types of Brokering Information.
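The quadrants of Table 2 can be illustrated with a small sketch (a hypothetical mapping over the model's parameters, not code from the study) that tags each brokering parameter with its (boundary, role) pair and filters a job's shared information by role:

```python
# Hypothetical classification of brokering parameters along the two
# criteria of Table 2: (boundary, role). Global Compatibility is
# screened at network entry, so it does not appear during brokering.
INFO_TYPES = {
    "material":          ("local",  "compatibility"),  # vs. loaded filament
    "print_time":        ("global", "scoring"),        # same for all machines
    "required_filament": ("local",  "scoring"),        # vs. spool remaining
}

def usable_for(role, shared_info):
    """Parameters of a given role that a job actually shared."""
    return [p for p in shared_info
            if INFO_TYPES.get(p, (None, None))[1] == role]

shared = {"material": "PLA", "print_time": 120}
print(usable_for("scoring", shared))  # only print_time supports ranking
```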
3.2 Model Structure
To investigate the influence of sharing varying information on system performance, a Multi-Agent System (MAS) model was created. The MAS featured two agent populations: Machines and Jobs. Machine agents represent AM machines and are aware of their own state according to a set of internal parameters. These parameters are combined with the types of information shared by the agents and form the data upon which an AI's decision logic bases its decisions. Job agents comprise an order to fabricate a part, with parameters again corresponding to the information types. The model is configured to depict a coordinated, machine-led system in which Machine agents initiate communication and may select a job according to the value they ascribe to it. In this configuration, Machine agents are not required to share information on their own state, and jobs are not permitted to reject or renege on a valid machine request unless they have already been selected by some other machine. Job and Machine agents were assigned states according to the variables described in Table 3. Each Machine agent was randomly assigned a material on creation. An equal quantity of filament was assigned to each machine. The machines were configured such that no in-process filament changeover was permitted. Therefore, if a job was accepted that required more filament than that remaining in the machine, a time penalty would be incurred to represent changing the filament before the new job commenced. Job agents were generated according to a pre-defined demand profile (Fig. 2a). Each job was assigned a random material and print time when created. Print time was assigned according to a triangular distribution featuring values commonly observed in manufacturing AM components (Fig. 2b). State charts for Job and Machine agents are given in Fig. 3. When a Machine agent is instantiated or not fabricating, it enters a request state and asks for jobs.
Job agents send a response, alongside some information (depending on the study case), if they are available. This creates a response list for the Machine agent. The Machine agent then uses the information contained within the response list, its current state, and its internal decision logic to score, rank, and select a job to manufacture. A 'SELECTED' message is then sent to the Job agent, and the Job agent returns an 'ACCEPTED' or 'DECLINED' response. If 'ACCEPTED', the Job agent sends the full information for manufacture, and the Machine agent runs a final compatibility check. If 'OK', the machine proceeds to manufacture and, on completion, returns to searching for jobs. If 'DECLINED', the Machine agent returns to searching for jobs.

A new Job agent instance starts in an 'AVAILABLE' state. In this state, the Job agent listens for Machine agents' 'ASKING FOR JOBS' and 'HAVE SELECTED YOU' messages. On 'ASKING FOR JOBS', a Job agent will respond with some information that describes the job. On 'HAVE SELECTED YOU', a Job agent responds with 'ACCEPTED' if it is still in the 'AVAILABLE' state, sending all the manufacturing information required for the machine to manufacture the job and moving into the 'SELECTED' state, or 'DECLINED' in any other state. The Job agent will then listen for 'COMPLETED' or 'REJECTED' messages from the Machine agent. 'COMPLETED' denotes that manufacturing of the job has completed, while 'REJECTED' denotes that a Machine agent has subsequently checked its compatibility against all the manufacturing information and determined that the job cannot be processed. If the job is rejected, it returns to the 'AVAILABLE' state and re-enters the brokering process.

Fig. 2. Manufacturing scenario: (a) demand profile (submission probability [%] over simulation time [min]); (b) manufacturing time distribution (probability density over manufacture time [min]).

Table 3. Model parameters.

| Parameter | Machine | Job | Class |
|---|---|---|---|
| Material | Filament material type installed on the machine | Filament type requested | Local Compatibility |
| Print time | N/A | Required fabrication time | Global Scoring |
| Required filament | N/A | Amount of filament required | Local Scoring |
| Remaining filament | Amount of filament remaining on the machine | N/A | Local Scoring |
Fig. 3. Agent state charts.
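The ask → respond → select → verify exchange can be sketched as follows (a stripped-down, single-round illustration; class and field names are the authors' own choices, not the AnyLogic implementation). Note how a job that withholds its material passes the shared-information filter but is caught by the final compatibility check and rejected back into the pool:

```python
class Job:
    """Illustrative job agent; 'share' lists the fields it divulges."""
    def __init__(self, jid, material, print_time, share):
        self.jid, self.material, self.print_time = jid, material, print_time
        self.share = share
        self.state = "AVAILABLE"

    def shared(self):
        # Respond to 'ASKING FOR JOBS' with only the divulged fields
        info = {"material": self.material, "print_time": self.print_time}
        return {k: v for k, v in info.items() if k in self.share}

class Machine:
    """Illustrative machine agent running one brokering round."""
    def __init__(self, material):
        self.material = material

    def broker(self, jobs):
        # ask -> respond: collect shared info from AVAILABLE jobs
        responses = [(j, j.shared()) for j in jobs if j.state == "AVAILABLE"]
        # Local Compatibility filter on *shared* material only; jobs that
        # withhold their material cannot be screened out at this stage
        ok = [(j, i) for j, i in responses
              if i.get("material") in (None, self.material)]
        # Global Scoring: shortest shared print time first, unknowns last
        ok.sort(key=lambda ji: ji[1].get("print_time", float("inf")))
        for job, _ in ok:
            job.state = "SELECTED"             # job ACCEPTs, sends full info
            if job.material == self.material:  # final compatibility check
                return job                     # 'OK' -> manufacture
            job.state = "AVAILABLE"            # 'REJECTED' -> re-brokered
        return None

jobs = [Job(0, "ABS", 90, {"print_time"}),          # hides its material
        Job(1, "PLA", 300, {"material", "print_time"}),
        Job(2, "PLA", 45, set())]                   # shares nothing
machine = Machine("PLA")
picked = machine.broker(jobs)
print(picked.jid)  # job 0 fails the final material check; job 1 is chosen
```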
3.3 Study Parameters
The study used the model to investigate the impact of sharing different combinations of information types during the brokering process. Eight cases were studied, comprising each permutation of the three transmitted information types (Local Compatibility, Global Scoring, Local Scoring). In all cases, the data for each information type was assigned to the agents, but the values were shared during the brokering process only when allowed by the study permutation. All information was shared prior to fabrication once the 'deal' had been made, enabling the Machine agent to perform a final compatibility check before manufacturing the job. The model was run 100 times per case and the results averaged. 15 Machine agents were used in each run. Each agent was randomly assigned to one of three material types according to a uniform distribution and started with the same quantity of material (15 units). Job agents entering the simulation according to the step demand profile explained below were similarly assigned a material type according to a uniform distribution and a print time according to a triangular distribution (30, 120, 600) determined by typical print times for jobs at the University's Fablab (Fig. 2b). Filament required was also assigned by an equivalent triangular distribution (1, 3, 10) with arbitrary units, with the scale again based on observed print sizes. The Machine agents all used a common decision logic for each transmitted information type. If the jobs were sharing the Local Compatibility information, the machines simply chose the first compatible job to respond 'AVAILABLE'. For Global Scoring, the machines chose the job with the shortest print time from all the eligible respondents. When the Local Scoring information was shared, the machines aimed to empty their spool in the smallest number of jobs: they would
542
O. Peckham et al.
choose a job with the material requirement closest to what remained on their spool. The simulation was run for 7,200 time steps, representing five days of full-time operation at one minute per time step. Jobs were created each minute according to a step demand profile (Fig. 2a), corresponding to a job creation likelihood in a given timestep of 4% for the first day, 12% for the second day, and 4% for days three to five. This resulted in an average of 403 jobs per run. The step demand profile allowed a steady state to be established and the system’s performance and recovery to be evaluated when pushed beyond that steady state. System performance was measured through the number of jobs completed, machine utilisation (manufacturing time divided by operational time), and time Queued in System (QIS), taken as a rolling average. These metrics have been used in previous studies [4,10–12].
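The agent initialisation and the three decision rules above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the material labels are invented, the triangular parameters are read as (min, mode, max), and “closest to the remaining spool” is taken as the smallest absolute difference.

```python
import random

MATERIALS = ["A", "B", "C"]  # three material types (labels assumed)

def new_machine():
    # Machines draw a material type uniformly and start with 15 units of it.
    return {"material": random.choice(MATERIALS), "filament": 15.0}

def new_job():
    # Print time (minutes) and filament (arbitrary units) use triangular
    # distributions; the paper's (30, 120, 600) and (1, 3, 10) are read
    # here as (min, mode, max).  Note random.triangular(low, high, mode).
    return {"material": random.choice(MATERIALS),
            "print_time": random.triangular(30, 600, 120),
            "filament": random.triangular(1, 10, 3)}

def select_lc(responses):
    """Local Compatibility shared: first compatible job to respond."""
    return next((j for j in responses if j["compatible"]), None)

def select_gs(responses):
    """Global Scoring shared: shortest print time among eligible jobs."""
    eligible = [j for j in responses if j["compatible"]]
    return min(eligible, key=lambda j: j["print_time"], default=None)

def select_ls(responses, spool_remaining):
    """Local Scoring shared: material requirement closest to the machine's
    remaining spool (aiming to empty the spool in few jobs)."""
    eligible = [j for j in responses if j["compatible"]]
    return min(eligible,
               key=lambda j: abs(spool_remaining - j["filament"]),
               default=None)
```

For example, given eligible jobs needing 9 and 4 units, a machine with 10 units left under Local Scoring picks the 9-unit job, since it leaves the least residual filament.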
4 Results
Table 4 shows the number of jobs completed, mean QIS, and utilisation for each case, together with the percentage change relative to a baseline of no information sharing. No information sharing is equivalent to a machine randomly selecting a job from the pool of available jobs. All but two permutations saw an improvement in the number of jobs completed, with sharing all information providing the greatest increase (24%). However, mean QIS increased in every case where information was shared: while the system is able to manufacture more jobs, information sharing adds lag to the system. Figure 4 compares the different permutations of information sharing according to jobs completed, rolling QIS, and machine utilisation. Figure 4a demonstrates that the number of jobs completed is affected by the information shared within the brokering process. Jobs completed by the system vary by as much as 89%

Table 4. Results. LS: Local Scoring, GS: Global Scoring, LC: Local Compatibility.

Study           Jobs Completed [#]   Change [%]   Mean QIS [min]   Change [%]   Utilisation [%]   Change [%]
Nothing Shared  287                  –            229              –            67.6              –
LS only         188                  –34.5        548              139.3        45.6              –32.5
GS only         300                  4.5          619              170.3        58.7              –13.2
LC only         330                  15.0         508              121.8        78.2              15.7
LS & GS         262                  –8.7         681              197.4        51.2              –24.3
LS & LC         338                  17.8         694              203.1        78.9              16.7
LC & GS         343                  19.5         426              86.0         77.6              14.8
LS, LC, & GS    356                  24.0         606              164.6        79.3              17.3
Fig. 4. System performance with varying levels of information sharing: (a) Jobs Completed inter-quartile range boxplots; (b) Rolling QIS [min] over time [min]; (c) Utilisation over time [min].
(LS only vs. LC, GS & LS). There also appears to be a general trend of more information sharing leading to more jobs completed. However, it is interesting that no information sharing (and hence no structured decision-making) performed better than several other cases, in particular those that scored jobs. The availability of LS information reduces overall throughput compared to a brokering process where nothing is shared; in contrast, GS and LC both increase job completion. The variance in jobs completed across the 100 runs for each information-sharing combination is also tighter when more information is shared during the brokering process. Figure 4b shows the effect of information sharing on the rolling QIS: sharing any information type increases QIS across the run compared to no sharing, and the rolling QIS rises suddenly at the onset of the demand step in every case except nothing shared. Figure 4c shows the average machine utilisation for each combination. All combinations that include Local Compatibility in the brokering process show a step-change in utilisation, reaching 80% compared with 50–60% for the other combinations. Sharing nothing sits between the two groups, with the under-utilised cases being those that share scoring information.
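As a consistency check, the ‘Change [%]’ columns in Table 4 are simple percentage changes relative to the nothing-shared baseline; a minimal sketch:

```python
def pct_change(value, baseline):
    # Percentage change relative to the nothing-shared baseline, to 1 d.p.
    return round((value - baseline) / baseline * 100, 1)

# Spot-checks against Table 4 (baselines: 287 jobs, 229 min QIS, 67.6 % util.)
assert pct_change(188, 287) == -34.5    # LS only, jobs completed
assert pct_change(606, 229) == 164.6    # LS, LC & GS, mean QIS
assert pct_change(79.3, 67.6) == 17.3   # LS, LC & GS, utilisation
```

The same formula also reproduces the 89% differential quoted above: moving from 188 to 356 jobs completed is an 89.4% increase.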
5 Discussion
The study has shown that the type of information shared during the brokering process of a DDAM network influences system performance in terms of jobs completed, job QIS, and machine utilisation. A differential of up to 89% between the lowest-performing case (LS only) and the highest (all information shared) was observed. At the machine level, utilisation varies from 45% to nearly 80% across the corresponding information-sharing cases (Fig. 4c). This demonstrates the importance of considering, and perhaps even mandating, the information that is shared during brokering to ensure good system performance. The study also revealed that the information types affect system performance in different ways. For example, it is better for the system to allow no information sharing at all than to allow only scoring-type information. This is likely due to the interplay between ranking, decision-making, and job desirability. When a job has poor desirability (i.e., a long print time), it is likely to be ignored by the machines, leaving a corpus of unwanted jobs waiting permanently in the network. The effect is magnified when Local Scoring is enabled, as machines self-limit their acceptance of jobs based on their own state (here, that they have insufficient filament for many jobs), leaving some machines waiting for a highly desirable (and hence infrequent) job to appear. This reduces both utilisation and completed jobs. It is interesting to note, however, that this behaviour is not necessarily undesirable: where jobs are scored and prioritised, the longer wait times incurred by poorly scoring jobs create an incentive to become desirable.
This ‘survival-of-the-fittest’ approach may create an internal form of competition, where it is simply not sensible to submit a job to the system if it is not wanted by the machines; those submitting jobs must either increase the job’s desirability (and hence system performance) prior to submission, or must otherwise incentivise machines (e.g., through costing) to select it. These findings highlight many opportunities for further study. The first is flexible information sharing. The study considered cases where all jobs and machines are restricted to sharing the same permutations of information; there is no flexibility or freedom to share only certain information based on the preferences of the job or machine (e.g., for IP-protection purposes). Mandating what information must be shared could limit uptake of, and participation in, a DDAM network. Here there is a two-fold need. First, a technical method is needed for making fair decisions based on mixed sets of information: machines must be able to select jobs fairly even when they are not provided the same information base, and it should not be assumed that a failure to share indicates poor job fitness; it may be that highly desirable jobs hide their benefit through an unwillingness to share. Second, variance in shared information across the system may further impact performance, and this needs to be investigated through further study in which the freedom to share (or not share) has been created. Adding a degree of ‘recklessness’ to machines selecting jobs, whereby a machine does not simply select the top-scoring job, may help alleviate machines competing for the same work and thereby reducing system performance.
The second opportunity reflects on the information types used to frame this study. The types are based on how information is used to make decisions, with the first criterion distinguishing boolean pass/fail checks from job ranking, and the second distinguishing decisions based on global system state from those based on local machine state. While this framing is proposed as delineating all machine parameters, it is not the only framing possible, and alternative categorisations may provide new views on how shared information types influence system performance. Further work should more broadly consider all types of information that could be shared in DDAM networks, such as the geographical distance between machines and jobs, nozzle size(s), or required material length(s), to facilitate further exploration of the delineations that may be made between these information types. Third is to examine information sharing across a broader range of scenarios. This study considered a single step-change demand scenario, chosen to place the system under temporary stress before allowing it to return to a steady state. However, the demand profile applied to a system is known to have a large impact on system performance [4,10,14], and further study should investigate the interplay between this and the information that is shared. Should the demand profile show a consistent interplay with system performance and shared information, it may be advantageous for a DDAM network to encourage certain information regimes to maximise performance, particularly when placed under stress. Fourth is the role of the broker in the brokering process. While this work has identified information sets that increase performance, it also highlights that certain jobs may be classed as undesirable at a system level and languish in the job queue, while several machines are likely to compete for individual highly desirable jobs.
This raises questions of business models, job bidding, and cost incentivisation. This work does not currently consider how machines and jobs negotiate and pay for their services, the business models for engagement, or how these may be used to optimise system or individual performance. It is plausible that free-market architectures [11] or appropriate business engagement could be used to optimise system performance. Accordingly, future work should consider the interplay of business models and cost structures with the system.
6 Conclusion
Information sharing is fundamental to the brokering processes of Distributed Decentralised Additive Manufacturing (DDAM) networks. This study evaluated different information types and their effects, demonstrating that an 80% change in system performance can be achieved by varying the degree of sharing. In general, greater information sharing during the brokering process is preferential, both increasing performance and decreasing variance, making the system more predictable. The study also demonstrated that the four information types of Global Compatibility, Local Compatibility, Global Scoring, and Local Scoring contribute to performance in different ways, with Local Compatibility
information offering the most significant step-up in performance. In contrast, Local Scoring has been shown to inhibit performance when shared on its own, likely due to the increased competition it brings, with machines competing for the same jobs and leaving a tail of undesirable jobs in the system. Exploring heterogeneous sharing of information during DDAM brokering processes would be a logical next step for researchers in the field.

Acknowledgements. This work has been undertaken as part of the Engineering and Physical Sciences Research Council (EPSRC) grants EP/R032696/1, EP/W024152/1 and EP/V05113X/1.
References

1. Luman, R., Fechner, I.: Trade outlook 2023: slow steaming in rough water (2022). https://think.ing.com/downloads/pdf/article/trade-outlookslowsteaming-in-rough-waters-what-to-expect-in-2023. Accessed 07 Mar 2023
2. World Economic Forum: Net-Zero Challenge: The supply chain opportunity (2021). https://www3.weforum.org/docs/WEF_Net_Zero_Challenge_The_Supply_Chain_Opportunity_2021.pdf. Accessed 07 Mar 2023
3. van Hoek, R.: Research opportunities for a more resilient post-COVID-19 supply chain - closing the gap between research findings and industry practice. Int. J. Oper. Prod. Manag. 40(4), 341–355 (2020). https://doi.org/10.1108/IJOPM-03-2020-0165
4. Goudswaard, M., Gopsill, J., Ma, A., Nassehi, A., Hicks, B.: Responding to rapidly changing product demand through a coordinated additive manufacturing production system: a COVID-19 case study. IOP Conf. Ser. Mater. Sci. Eng. 1193(1), 012119 (2021). https://doi.org/10.1088/1757-899X/1193/1/012119
5. Giunta, L., Obi, M., Goudswaard, M., Hicks, B., Gopsill, J.: Comparison of three agent-based architectures for distributed additive manufacturing. Procedia CIRP 107, 1150–1155 (2022). https://doi.org/10.1016/j.procir.2022.05.123
6. Reeves, P., Tuck, C., Hague, R.: Additive manufacturing for mass customization. In: Fogliatto, F.S., da Silveira, G.J.C. (eds.) Mass Customization: Engineering and Managing Global Operations, pp. 275–289. Springer, London (2011). https://doi.org/10.1007/978-1-84996-489-0_13
7. Digital Catapult: The rise of distributed autonomous manufacturing (2018). https://www.digicatapult.org.uk/wp-content/uploads/2021/11/The_rise_of_distributed_autonomous_manufacturing.pdf
8. Fast Radius: Why distributed manufacturing is the future of production (2021). https://www.fastradius.com/resources/distributed-manufacturing-benefits
9. Kuuse, M.: What is distributed manufacturing? (2022). https://manufacturingsoftwareblog.mrpeasy.com/distributed-manufacturing/
10. Obi, M., Snider, C., Giunta, L., Goudswaard, M., Gopsill, J.: Coping with diverse product demand through agent-led type transitions. In: Ježić, G., Chen-Burger, Y.-H.J., Kušek, M., Šperka, R., Howlett, R.J., Jain, L.C. (eds.) Agents and Multi-Agent Systems: Technologies and Applications 2022, pp. 277–286. Springer, Singapore (2022). https://doi.org/10.1007/978-981-19-3359-2_24
11. Ma, A., Nassehi, A., Snider, C.: Anarchic manufacturing. Int. J. Prod. Res. 57(8), 2514–2530 (2019). https://doi.org/10.1080/00207543.2018.1521534
12. Ma, A., Frantzén, M., Snider, C., Nassehi, A.: Anarchic manufacturing: distributed control for product transition. J. Manuf. Syst. 56, 1–10 (2020). https://doi.org/10.1016/j.jmsy.2020.05.003
13. Johns, J.: Digital technological upgrading in manufacturing global value chains: the impact of additive manufacturing. Glob. Netw. 22(4), 649–665 (2022). https://doi.org/10.1111/glob.12349
14. Gopsill, J., Obi, M., Giunta, L., Goudswaard, M.: Queueless: agent-based manufacturing for workshop production. In: Ježić, G., Chen-Burger, Y.-H.J., Kušek, M., Šperka, R., Howlett, R.J., Jain, L.C. (eds.) Agents and Multi-Agent Systems: Technologies and Applications 2022, pp. 27–37. Springer, Singapore (2022)
15. Srai, J.S., et al.: Distributed manufacturing: scope, challenges and opportunities. Int. J. Prod. Res. 54(23), 6917–6935 (2016). https://doi.org/10.1080/00207543.2016.1192302
16. Ma, A., Nassehi, A., Snider, C.: Anarchic manufacturing: implementing fully distributed control and planning in assembly. Prod. Manuf. Res. 9(1), 56–80 (2021). https://doi.org/10.1080/21693277.2021.1963346
17. Giunta, L., Hicks, B., Gopsill, J.: Creating a living lab software stack for validating agent-based manufacturing. In: CIRP Design Conference (2023)
18. Goudswaard, M., et al.: Required parameters for modelling heterogeneous geographically dispersed manufacturing systems. Procedia CIRP 107, 1545–1550 (2022). https://doi.org/10.1016/j.procir.2022.05.189
19. Goudswaard, M., Hicks, B.J., Nassehi, A.: The creation of a neural network based capability profile to enable generative design and the manufacture of functional FDM parts. Int. J. Adv. Manuf. Technol. (2021)
The Potential of Additive Manufacturing Networks in Crisis Scenarios Yen Mai Thi, Xiaoli Chen, and Ralph Riedel(B) Westsächsische Hochschule Zwickau, Kornmarkt 1, 08056 Zwickau, Germany {yen.mai.thi,xiaoli.chen,ralph.riedel}@fh-zwickau.de
Abstract. During the COVID-19 pandemic, many supply chains were significantly disrupted, with severe impacts on supply chain performance. As infection rates increased, the supply of medical and hygiene products in particular was disrupted, causing critical shortages and demanding an active response from society. Among other approaches, additive manufacturing (AM) was utilized to close this gap. Established as well as spontaneously formed additive manufacturing networks provided fast and valuable support during the pandemic. Despite this huge benefit, the applicability, performance, and efficiency of these networks depend on multiple factors, including but not limited to their working principles, the situation, and human and social factors. This paper provides a close look at the challenges of AM networks as well as at the requirements for applying these networks to achieve high performance, through two complementary empirical analyses: (1) interviews with AM network participants, including producers and cooperators of existing and ad-hoc networks during the pandemic, and (2) a detailed analysis of different commercial AM platforms. The results are intended to contribute to further development in this field, especially to the construction of a supporting AM network platform in the future.

Keywords: Supply Network · Additive Manufacturing Network · COVID-19
1 Introduction

Due to the fast-changing market environment and the dynamics of customer requirements, companies see supply networks as a means to stay competitive in the ever-increasing global competition. There is evidence that supply networks can contribute to a company's pool of resources, the outsourcing of unique technologies, the fusion and fission of advantages, and so on [1]. Supply networks are ‘…nested within wider interorganizational networks and consist of interconnected entities’ [2], which aim at procuring, manufacturing, and delivering raw resources into end products and services. They comprise a pattern of value-adding processes at different network nodes. The importance of supply networks has been strongly advanced, both in theoretical studies and in industry [3, 4]. Despite their attractiveness, supply networks do not necessarily bring forth success. Many sensitive factors (e.g., the right mixture of resources, clarity on the responsibilities and tasks of participants) must be given close attention in order to achieve an efficient

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 548–562, 2023.
https://doi.org/10.1007/978-3-031-43666-6_37
The Potential of Additive Manufacturing Networks
549
network. A deep understanding of the operation and requirements of an efficient network is the foundation for its successful realization. Research on supply networks has already addressed several important factors, including the precise identification of requirements, the quick and accurate matching of suppliers, the choice of a suitable network form [4, 5], and the ongoing operation and control of networks [6]. The success (including the viability) of networked production stems from adopting the right mixture of internal and external resources [7]. To ensure this, a systematic consideration of many factors is required: not only financial assets and technical ability, but also organizational, managerial, and logistical capabilities [8–10]. Furthermore, quickly matching the right partners is another challenge for the efficiency of a supply network. This is especially required in times of crisis, when there is little time to respond. Crises not only question the performance of traditional approaches to supply chain management, but also create unforeseen demand at the industrial, public, and private levels. COVID-19, the global crisis caused by the SARS-CoV-2 virus that began in late 2019, has had a major impact on almost every dimension of society. During the COVID-19 crisis, global supply chains were disrupted in various ways due to sealed borders, trade restrictions, and unavailable labor, with a negative impact on the balance of demand and supply [11]. Resource shortage was one of the greatest bottlenecks [12]. In this work, additive manufacturing (AM), with its flexibility and capacity for individual adaptation, is suggested as a potential solution. The management of AM networks is therefore considered a promising target for a deeper analysis.
To understand their mechanisms and supporting factors, this work addresses the following questions:

– What are the particularities of an AM network, and how does it work?
– What are the relevant roles in AM networks, with which responsibilities and tasks?
– What are the requirements for an effective and efficient AM network, and how are these currently supported by existing platforms?

The remainder of this paper is organized as follows: We first explore the problems of supply networks in a crisis scenario, including the contribution of AM networks. Based on a detailed empirical analysis of ad-hoc and existing AM networks, their structure and processes as well as their limitations and problems are revealed, from which requirements for efficient and effective AM networks are derived. A further empirical study investigates to what extent existing (AM) platforms are able to support these requirements.
2 Theoretical Background and Related Work

2.1 Supply Network and Crisis Scenario

A supply network is a cluster of suppliers who are interconnected to add value to products and services. With the help of supply networks, participants are able to match their specific requirements for components, products, and/or services to suitable suppliers. Here, a network is suggested as one of the best solutions for gaining access to complementary abilities and obtaining the “value-creation potential of pooled resources” [13].
550
Y. M. Thi et al.
A crisis is a situation arising from major changes that threaten (or might threaten) an individual, a group, or even society as a whole [14]. There are many kinds of crisis, and almost all share a common characteristic: they are negative and arrive without warning, so little time is left for response. One of the most widely witnessed crises of recent years is the global COVID-19 pandemic. It spread rapidly around the world in a very short time, with significant consequences for social life and the environment. In industry, the urgent and huge demand for protective items (such as masks and protective glasses), the scarcity of raw resources, and the lack of experienced employees for on-site work created big bottlenecks and reduced the ability of existing supply networks to react quickly. This put the focus on resilient and highly responsive networks. As a result, plenty of scientific work can be found on the resilience and reconfigurability of supply networks during and after the COVID-19 pandemic [15]. This includes, for instance, supplier selection and multiple-supplier strategies [16, 17], as well as Artificial Intelligence, the Internet of Things, and Big Data analysis. Prominent approaches include a multiple-criteria decision-making (MCDM) approach for highly resilient supply chains [16] and a Fuzzy Ordinal Priority Approach (OPA-F) for evaluation [17]. By integrating the fuzzy technique for order preference by similarity to ideal solution (TOPSIS) with multi-segment goal programming (MSGP), a novel approach has also been developed for supplier selection in a clothing and textiles company [18]. Similarly, a fuzzy hybrid neutrosophic decision-making technique has been proposed for resilient supplier selection [19].
Criteria such as quality, usage of personal protective equipment, cost or price, safety and health practice and wellbeing of suppliers, and economic recovery programs have been clustered for the selection of sustainable suppliers in scenarios with high uncertainty [20]. Taken together, during and after the pandemic, approaches for building new supply networks or adapting existing ones for greater resilience have been emphasized in research and practice. However, given the urgency resulting from a crisis, a quick response should be highlighted as one of the highest-priority issues. Hence, the question of how to build reliable and performant supply networks to meet unexpected pandemic impacts is analyzed in the following steps.

2.2 Additive Manufacturing and Its Role During the Pandemic

AM Technologies. To overcome the pandemic-caused shortages, many technologies have been analyzed and compared to find the most promising one. The two most cited solutions are mass production [21] and additive manufacturing [22]. Considering large output at low unit cost, mass production seems a possible solution. However, this only works under the conditions of infrequent machine setups or short setup times, large-batch material ordering, and the use of standard technologies and materials [21]. During the COVID-19 pandemic, the urgent demand for protective equipment posed big challenges to mass production, as it requires time-consuming tool development processes (e.g., for molding) and frequent machine setup. Moreover, the lack of raw material also limited the size of material batches, which consequently influenced the output and unit cost of products. Therefore, additive manufacturing, with its high responsiveness and flexibility, was a promising alternative.
Additive manufacturing (AM) is the formalized term for what used to be called Rapid Prototyping or three-dimensional (3D) printing. The term Rapid Prototyping is used in many industries to describe the rapid generation of a representation or system for examination before official commercialization [22]. As a one-stage production process, AM enables parts to be produced directly from a CAD model using layer-based material deposition technology, without a mold preparation process [22]. The advantages of AM are therefore a short time for production engineering and getting the product to market (especially compared with using molds for casting), the possibility of working at a higher level of (product) complexity, and savings in material, which reduces the demand for raw material in times of crisis [22]. Consequently, AM is a technology that enables rapid, flexible, short-delivery-time solutions. This fulfills the requirements of a crisis scenario very well, though it sometimes struggles with problems such as quality, production time, and reliability.

Roles and Practice of AM During the Pandemic. During COVID-19, AM products showed their potential as a solution to supply shortages in highly demanded products, especially for medical use [23–26]. AM supply chains have also been built on digital platforms, which help to connect demanders and suppliers by creating a common working place [27]. In general, platforms generate an ecosystem of different parties (including contractors, suppliers, and demanders) for sharing information, identifying requirements, matching suppliers, supporting the operation and governance of networking, and so on. As a response to COVID-19, many projects have been established and are under development which, among other things, focus on applying AM technology to an AM supply platform. One of the best-known initiatives is EUR3KA, also known as European Vital Medical Supplies.
The anticipated outcome of this project is a plug-and-respond repurposing resource coordination framework for crisis response during a pandemic. Other projects, such as LINKS (“Strengthening links between technologies and society for European disaster resilience”) [28], NO-FEAR (Network Of practitioners For Emergency medicAl systems and cRitical care) [29], and CO-VERSATILE [30], share similar objectives. All of these show the big potential of AM technology and AM networks for overcoming pandemic-caused shortages. However, an AM network does not automatically mean success. In practice, many problems reveal gaps in AM knowledge. For example, given the large number of AM machine owners, different levels of experience, capability, and capacity pose a major challenge to matching and coordination within a network. Differing availability of machines also weakens the effectiveness of an AM network. Moreover, with variations in the quality, time, or reliability of AM machines within the network, the general efficiency and effectiveness of a network can neither be planned nor guaranteed. Additionally, in a crisis scenario one of the biggest challenges is the limited time for response. Due to the urgency of demand, the network configuration and building process needs to be fast as well; little time is available for extensive evaluations, tests, setting up formal coordination structures, aligning diverse IT infrastructures, etc. In order to avoid problems, master challenges, and unleash the full potential of AM technology in a network, the “black box” needs to be
opened, i.e., a better understanding of how such networks are composed and operated must be created.
3 Research Plan and Methodology

3.1 Research Question

Obviously, the application of AM during the COVID-19 pandemic brought significant benefit to society. The question of how, and to what extent, the establishment of an AM network should be supported, so that it can provide the best support while being efficient at the same time, has not yet been deeply investigated. Therefore, a deeper and better understanding of AM network working principles, of network entities and their roles, and of the challenges faced by AM networks in a crisis is needed. To answer this question, empirical research was conducted around the following four sub-questions:

1. What are the functions of an AM network that allow it to meet the high demand of a crisis scenario with high responsiveness?
2. What are the acting entities in an AM network during a crisis, and what are their roles and responsibilities?
3. Are existing AM networks sufficient and applicable to support society in a crisis, and to what extent?
4. What limitations and problems exist in current commercial AM platforms that should be improved for better performance?

3.2 Research Methodology

This work aims to provide a research agenda and to define respective requirements for AM networks and platforms based on empirical work covering three (explorative) case studies and an analysis of several AM supply platforms. A case study is defined as an in-depth, multifaceted investigation of a single phenomenon using qualitative research methods [31], normally conducted in great detail and based on several data sources. Because no prior research in this field was available, we chose the qualitative case study approach, pursuing an inductive research strategy [32].
The overall research process can be described in five steps: first, (1) a review of papers related to AM supply networks in COVID-19 from 2020–2023 was conducted; (2) a research problem was defined, which led to (3) the generation of the research questions. To answer these, (4) a parallel process was conducted comprising (a) empirical research via semi-structured interviews and (b) a study of different commercial AM platforms. The qualitative case study of three different AM networks in crisis was intended to reveal the problems and challenges of such networks, while the study of different AM platforms was expected to provide insights into their functions and features and their ability to support networks in a crisis scenario. Finally, (5) a conclusion, including a comparison between the different AM network types in the crisis, was generated to answer the research questions.
The Potential of Additive Manufacturing Networks
553
For the empirical study of AM networks, intensive 60-minute interviews based on a semi-structured questionnaire were conducted with six participants who had been actively working in AM supply networks during COVID-19. The interviews were conducted virtually and transcribed for further analysis. Among the six interviewees, three acted in (A) an existing AM platform within an available network, two acted in (B) a temporarily established (ad-hoc) network created during COVID-19 to cover the high medical demand in a particular region, and one acted as (C) a machine producer who was also an AM product manufacturer. The variation in the interviewees' roles allowed us to collect multiple perspectives. To conduct the interviews in a more visual and clearer manner, a flowchart of a typical AM production process [33] was provided. Each interview started with a review of the AM process and then went through the three major sections of the semi-structured questionnaire: the first section contained questions regarding the use case description, product information, overall performance, participants and the overall process, while the second section guided the interviewee through six major processes: order placement, production planning, the AM process, quality check, packaging, and delivery. For each process, questions about the specific process requirements and information/data were investigated. Finally, the third section offered the interviewees the chance to provide extra information on their case, aiming to gather their opinions about potential future improvements for dealing with crises and about the challenges of setting up an AM network to respond to crisis demand.
From our empirical study, six perspectives emerged as important for the understanding of AM networks: (1) workflow, (2) user data requirements, (3) product range, (4) stakeholders/participants and their interrelations, (5) their unique selling points, and (6) limitations and challenges. Based on these six perspectives, a thorough analysis of existing commercial AM platforms was conducted. Seven platforms were selected due to their popularity on the market at the time of research: the Siemens AM network, the Makerverse AM platform, the Protiq AM platform, the Materialise platform, Replique, Xometry, and Stratasys. The characteristics, structure and problems of AM networks gained from the interviews were then compared with the commercial platforms' properties to derive the limitations of these platforms in terms of network support.
4 Empirical Insights 4.1 Case Studies on Networks The interview results show that all three cases selected AM (3D printing) as the manufacturing method to respond to the urgent demand during COVID-19. The results from the six interviewees emphasize the roles of and the interaction between the different entities within a network during a crisis. The structure of each case (network) differs; however, the general structure includes three main entities: (1) the demander – where demand comes from, (2) suppliers or supporters – who provide manufacturing support, and (3) the coordinator (network initiator/manager) – who connects demand and support. A fourth entity may also be present: the community. Communication and information exchange were mostly based on
554
Y. M. Thi et al.
social media platforms, virtual meetings, phone calls and emails. Work was planned and monitored via project management apps or social media platforms. The number of products produced was reported to range from medium to high, owing to the contribution of multiple partners within the existing network. a. Existing AM networks In the first interview, the AM network had been established before the pandemic but also operated during it; its entities include (1) suppliers, (2) demanders, and (3) a coordinator (network initiator). Technical and other support was provided within the coordinator's organization. Connections between the demanders (customers) and AM producers/suppliers were made via the coordinator's website/platform, on which the available suppliers and their profiles could be found. Originally, the network aimed at commercial service; during the crisis, however, it was opened for community support. The type and number of products produced are not known, because the platform was only used for connecting demanders and manufacturers. The second interview presents the case of medical face shield production in an existing AM network at the beginning of the pandemic. The head holder for the plexiglass visor, which is fixed to the forehead with a band, was to be printed. Printing capacity depended on printer availability, model, and resolution. Six printers were available at the coordinator's site, yielding a production volume of around 150 pcs per week; however, the majority of the production volume came from the network partners. In total, 5,000 face shields were produced within four weeks for donation to two demanders (a hospital in Spain and a humanitarian aid organization in Germany). The third interview was conducted with a research institute in Germany, which was connected to an existing AM network to produce face shields.
With given designs, the inquiry was distributed to the 19 other members of this organization's research alliance; however, only six institutes participated, owing to their technical capabilities. The interviewee reported that more than 550 face shield holders were produced. b. Ad-hoc AM network The ad-hoc AM network was created during the COVID-19 pandemic due to the high demand for face shields from hospitals, doctors, nursing services etc. at multiple places. The interviewees were an individual maker with his own 3D printer and the coordinator of a regional initiative. The interviews indicated that there were three major entities in this AM network: (1) customers or demanders, (2) producers who simultaneously acted as coordinators, and (3) a community. The interviewees reported that the network was established at the beginning of the pandemic, in April 2020. Demanders did not request fixed quantities; orders dropped in with varying amounts and at different times. The face shields were produced by the makers voluntarily, depending on their capabilities and capacities at a given time. The network managed to provide approximately 2,300 face shields during the pandemic. Everything had to be more or less self-organized, with a high level of self-motivation and continuous optimization, and there was always a sense of high urgency. Products were officially made for private use only. In the beginning, demand and the distribution of work were self-organized with Excel sheets; later, a project management app (Trello) was used. Makers volunteered for
a certain number of products based on their technical capabilities. Due to the dynamics of the situation, the coordination task was time-consuming, which was detrimental to the participants' motivation. c. AM network by a printer producer acting as a part producer The last interview, revealing the perspective of a machine producer that also acted as a producer in an ad-hoc AM network, brought some interesting insights regarding network features and operational characteristics. In this case, the AM network was established at the beginning of the pandemic, when a high demand for PPE (Personal Protective Equipment) was detected. 3D-printable products were produced to compensate for the interruption in PPE supply. The production of ten different products for medical applications in small quantities was reported. A local network, local doctors and known partners were supported through direct means of contact. In conclusion, the six interviews covering three cases of AM networks indicate that AM networks played an important role in dealing with the PPE supply disruption caused by the COVID-19 pandemic. By leveraging local networks and rapid prototyping, a highly responsive PPE supply network was created. On the other hand, quality management, supplier qualification, IP protection, and collaboration issues were not in focus in these cases. In the next sections, a generalized workflow model is presented, which clarifies the entities and roles, and a summary of current challenges of AM networks is given. AM Network Model with Roles and Functions A general workflow process model of the AM networks was generated based on the interview results. This model provides a closer look at the AM network structure, as well as at the flow of information and material within the network. As shown in Fig.
1, the AM network includes four major entities: (1) the demander – who creates the demand for the product and then contacts (2) the coordinator to obtain support from (3) the community and (4) the producers. Based on the information about the required products, the coordinator searches the available known resources to fulfil the demand. During this phase, social media or chat groups were created for raising demand and communicating with a supportive community, which consists of volunteers serving different purposes. In the case of an established network, searching for partners is not necessary. The community provided support in terms of technical issues, material and accessories, as well as delivery. A producer is normally an individual or organization owning 3D printers who volunteers to offer manufacturing capacity. Communication within the networks was mostly based on chat groups, Trello boards, web meetings and phone calls. With the rapid help of the community and the producers, the coordinator could fulfil the demands in a noticeably shorter time than a conventional purchasing and production process. Due to the high amount of human interaction and a complex decision-making process, imprecise information and delays in response may occur. Challenges of Current AM Networks During Crisis Even though AM networks offer great advantages for supporting society in a crisis, several critical challenges were reported by the AM network participants; see Table 1 for a summary.
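Before turning to the challenges, the coordinator's matching task described in the workflow model above – splitting incoming demand across whatever volunteer producer capacity is available – can be illustrated with a minimal greedy allocation sketch. This is illustrative only; the producer names and capacities are hypothetical, and the interviewed networks handled this step manually via chat groups, Excel sheets and Trello boards.

```python
from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    weekly_capacity: int  # units the volunteer can print per week

def allocate(demand: int, producers: list[Producer]) -> dict[str, int]:
    """Greedily split a dropped-in demand across volunteer producers."""
    plan: dict[str, int] = {}
    remaining = demand
    for p in producers:
        if remaining <= 0:
            break
        qty = min(p.weekly_capacity, remaining)
        plan[p.name] = qty
        remaining -= qty
    if remaining > 0:
        # coordinator must recruit further volunteers or delay fulfilment
        plan["unmet"] = remaining
    return plan

producers = [Producer("maker_a", 150), Producer("maker_b", 80), Producer("maker_c", 40)]
print(allocate(200, producers))  # → {'maker_a': 150, 'maker_b': 50}
```

Even this toy version makes the reported coordination burden visible: capacities change week to week, so the plan must be recomputed whenever a maker joins, drops out, or a new demand arrives.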
Fig. 1. AM Network during COVID Pandemic – summarized from interviews
Case ID refers to the interview group of the AM network: (a) existing AM network, (b) ad-hoc AM network, (c) machine producer & part supplier in an existing AM network. 4.2 Platform Analysis The study of seven different commercial AM platforms was conducted following the analysis protocol presented in the methodology section. First, a trial usage of these platforms (where available) was conducted to acquire information regarding user data requirements and the overall work process. Second, a deeper information mining process on each platform's website gathered information on its features, possible product offers, pros and cons. Finally, a summary was generated to compare the platforms and to derive a conclusion for research question 4. The overall structure of a standard commercial AM platform includes three entities: (1) customers, who raise demand on the platform, (2) the platform itself, where the different resources are selected and matched to fulfil the customers' demand, and (3) suppliers, who are normally manufacturers of 3D-printed products. Workflow Overview and Description of Roles and Functions A process model of these commercial AM platforms was composed using the BPMN modelling language. This model, displayed in Fig. 2, depicts the participating entities and their roles in a commercial AM platform. Comparing it with Fig. 1 reveals the differences between the workflows of a commercial AM platform and of the AM networks in crisis. Open Issues and Limitations of Commercial Platforms A comparison of the seven commercial AM platforms along different dimensions is provided in Table 2, from which several open issues of commercial platforms can be seen, including IP issues, knowledge preparation, medical approval of products and supplier qualification.
Table 1. Current AM network challenges during COVID-19 (interview results)

Case ID | Challenge
a       | The proposal generation process needs to be sped up
a, b, c | Intellectual property rights regarding product designs need to be considered
a, b    | The control position between customers and suppliers must be well assigned
a, b, c | An agile medical approval certification process for 3D-printed products is needed
b       | Demands/needs have not been described well
a       | Economic efficiency is low compared to mass production
a       | Technical feasibility and reasonableness have not been clearly structured
a, b, c | Demanders' 3D printing knowledge is not sufficient
a       | Resources are limited during a pandemic
a       | The current system has not been adapted to the new method of production
a       | The preparedness level of the current network is low
a       | A temporary AM network responding to a pandemic has low performance
a       | The testing process consumes a great deal of time
a, b    | Technical support is lacking
b       | Fundraising work was not properly done
b       | Interhuman and management issues were not well handled
b       | Socio-technical support is scarce
b       | Tools for management tasks should be provided
b       | Production process management was not provided
c       | Logistics (delivery) requirements were not known
b, c    | A supplier matching process is needed
First of all, while Makerverse, Replique, and Xometry take multiple measures to address IP protection and cybersecurity (NDAs, ITAR, two-factor authentication, European-based servers, ISO 27001, and internal regulations), the others do not mention any information on this issue. Medical products with recognized certificates could be found at some platforms (Replique and Stratasys). Compared to the other platforms, Makerverse and Replique offer more comprehensive and standardized processes for supplier qualification. Contracts between the platform and customers were found in the case of Makerverse, which indicates the highest level of responsibility of a platform towards its customers. AI techniques were found to be used in several processes, such as quotation making and matching. A wide range of materials (metal, plastic, ceramics) and techniques (different AM methods and non-AM methods) were found at most platforms; however, biocompatible and medically usable products were only offered by two platforms (Replique and Stratasys). Last but not least, while most platforms required CAD input data to start the process, Makerverse and Xometry define the data format in detail, with the
Fig. 2. Overall workflow of an AM commercial platform
Table 2. Comparison of AM commercial platforms
D1: CAD or 3D printable data, D2: Purpose of product, D3: User Data
opportunity to use a 2D file or technical drawing as an input. This is considered a support for customers with insufficient 3D modeling skills.
5 Discussion 5.1 Results The empirical study has shown the importance and potential of AM networks when dealing with the uncertainty and supply disruption caused by a crisis. The flexibility of AM technology and its widespread use, including in the private sector, led to an agile and flexible response to high demand. Either through already established relations (existing networks) or through the fast connection of mainly regional partners, the networks were able to respond to different and varying demands. Four major entities were identified in an AM network during the pandemic: (1) demanders, (2) the coordinator, (3) producers, and (4) the community. The coordinator took the role of connecting demand with the supply side, coordinating capabilities, production capacities and even logistics. The community provided (free of charge) knowledge on products and technology, thereby helping to solve problems and to improve products and processes. Existing networks had a faster ramp-up phase because potential producers were already known, whereas ad-hoc networks depended on volunteers, so the overall production capacity could not really be predicted. In both types of networks, no formal planning processes (production quantities, material demand, resource capacity) were established, and performance (e.g. productivity, efficiency) had no priority compared to the fast fulfilment of demand. Moreover, coordinators had to rely on those partners who made themselves available; there was no selection process and no optimal distribution of tasks or capacities. Other shortcomings concerned IP protection, quality management, medical certification, logistics, and the effort accompanying information and communication exchange. A further major shortcoming was that most customers had little to no knowledge about AM/3D-printed products and manufacturing technologies.
A detailed study of seven popular AM platforms revealed that these were only partly able to solve the aforementioned problems. They do connect the demand and supply sides; however, in many cases some AM expertise on the demand side is assumed. A complete process chain including quality management and logistics is hardly provided. It is also questionable whether those platforms would be able to deal with massive and dynamic demand such as occurred during the pandemic. Other open issues relate to IP protection and supplier qualification. 5.2 Research Limitation The limitations of our research lie in the limited number and variation of case studies and in the narrative literature review used. The interview candidates were selected based on their availability within the limited time allowed by a research project and within the existing network. Therefore, the number and scope of the selected cases are not sufficient to cover all potential cases in the field, including existing and ad-hoc networks. Also, we focused only on the coordinators' and makers' perspectives; the customer/demander perspective was not investigated for these cases. The combination machine manufacturer/part producer was only examined once. There may therefore be special aspects
that could not be discovered due to the small sample size; a generalization for this type of partner is also not possible. The review of AM platforms is a narrative one based on popularity, which may not reflect a complete overview of the current state of AM platforms. Insights were limited to publicly available information and our own tests. A deeper study involving the platform owners' perspective is therefore required. 5.3 Contribution of the Research and Future Research Directions Our research has led to a better and deeper understanding of how AM networks in the pandemic were able to fulfil dynamic and varying demands from different "customers" in an agile way. The empirical research has highlighted the advantages of flexible networks but has also revealed their disadvantages and shortcomings. Platforms appear to be a means of supporting networks in many respects: information exchange, security, partner selection and matching, process orchestration, planning and control, etc. As existing AM platforms are not yet able to provide the necessary functionality, further research is required to address mechanisms of network configuration and operation as well as suitable methods, tools, and environments for support. Promising research directions include:
• semantic technologies to describe demand, technological capabilities and functions in the value chain • partner selection, matching models and algorithms, using sophisticated approaches (data analytics, AI) • the (semi-) automated generation of value-adding processes to fulfil specific customer demand • planning models and methods in a volatile, uncertain situation, as in a crisis • the use of simulation to test and evaluate different scenarios of networks and processes • knowledge management for the (AM) domain itself and for networks as well • platform support, platform ecosystem Thereby, a good basis for resilient and efficient networks which are able to respond to the requirements/ challenges in a crisis would be created. Acknowledgement. The project on which this report is based was funded by the German Federal Ministry of Economic Affairs and Climate Protection under the funding code MM MMMKO01416521 - 0I1MK22001G. The responsibility for the content of this publication lies with the authors.
References 1. Chen, X.L., Riedel, R., Mueller, E.: Partner selection in innovation alliance. In: Proceedings of the 4th International Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV 2011), Montreal, Canada, 2–5 October 2011 (2011) 2. Harland, C.M., Lamming, R.C., Zheng, J.R., Johnsen, T.E.: A taxonomy of supply networks. J. Supply Chain Manag. 37(4), 21–27 (2001)
3. Reuvid, J., Yong, L.: Doing business with China. Eur. Bus. Rev. 12(3) (2000) 4. Chen, X., Riedel, R., Müller, E.: Motivations and criteria for partner selection in innovation alliance - a comparison between companies from Germany and China. In: Proceedings of IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 10–13 December 2013 (2013) 5. Chen, X., Hesse, C., Riedel, R., Müller, E.: The choice of a collaboration form - a special insight in the case of R&D consortia. In: Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bali, Indonesia, 04–07 December 2016 (2016) 6. Chen, X., Mahling, A., Riedel, R., Müller, E.: Organizational structure and the dynamics of collaboration relationship. In: Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 06–09 December 2015 (2015) 7. Emden, Z., Calantone, R.J., Droge, C.: Collaborating for new product development: selecting the partner with maximum potential to create value. J. Prod. Innov. Manag. 23(4), 330–341 (2006) 8. Maryam, N., Dominique, R.J.: Value creation through strategic alliances: the importance of the characteristics of the partners and the resources brought by them. In: Proceedings of TMGF International Conference on Portland International Center for Management of Engineering and Technology (PICMET), Istanbul, Turkey (2006) 9. Pegram, R.: Selecting and Evaluating Distributors. National Industrial Conference Board, New York (1965) 10. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965) 11. Xu, Z., Elomri, A., Kerbache, L., El Omri, A.: Impacts of COVID-19 on global supply chains: facts and perspectives. IEEE Eng. Manag. Rev. 48(3), 153–166 (2020) 12.
Queiroz, M.M., Ivanov, D., Dolgui, A., Wamba, S.F.: Impacts of epidemic outbreaks on supply chains: mapping a research agenda amid the COVID-19 pandemic through a structured literature review. Ann. Oper. Res. 319, 1159–1196 (2022) 13. Lin, Z., Yang, H.B., Arya, B.: Alliance partners and firm performance resource complementarity and status association. Strateg. Manag. J. 30, 921–940 (2009) 14. Brecher, M., Wilkenfeld, J.: A Study of Crisis. University of Michigan Press, Ann Arbor (1997) 15. Pimenta, M.L., et al.: Supply chain resilience in a Covid-19 scenario: mapping capabilities in a systemic framework. Sustain. Prod. Consum. 29, 649–656 (2022) 16. Hoseini, S.A., et al.: A combined interval type-2 fuzzy MCDM framework for the resilient supplier selection problem. Mathematics 10(1), 44 (2021) 17. Mahmoudi, A., Javed, S.A., Mardani, A.: Gresilient supplier selection through fuzzy ordinal priority approach: decision-making in post-COVID era. Oper. Manag. Res. 15(3), 208–232 (2022) 18. Kao, H.: Integrated Fuzzy-MSGP methods for clothing and textiles supplier evaluation and selection in the COVID-19 era. Math. Probl. Eng. 5, 1–13 (2022) 19. Pamucar, D., et al.: A novel fuzzy hybrid neutrosophic decision-making approach for the resilient supplier selection problem. Int. J. Intell. Syst. 35(12), 1934–1986 (2020) 20. Dang, T.-T., et al.: A two-stage multi-criteria supplier selection model for sustainable automotive supply chain under uncertainty. Axioms 11(5), 228 (2022) 21. Ivanov, D., Tsipoulanidis, A., Schönberger, J.: Global Supply Chain and Operations Management, 3rd edn. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72331-6 22. Gibson, I., Rosen, D.W., Stucker, B., Khorasani, M.: Additive Manufacturing Technologies, 3rd edn. Springer, New York (2021). https://doi.org/10.1007/978-3-030-56127-7 23. Daoulas, T., et al.: The role of three-dimensional printing in coronavirus disease-19 medical management: a French nationwide survey. Ann. 3D Print. Med. 
1, 100001 (2021)
24. Vordos, N., et al.: How social media and 3D printing tackles the PPE shortage during COVID19 pandemic. Saf. Sci. 130, 1–7 (2020) 25. Bione, G.B.B.D.S., et al.: 3D printing applications during COVID-19 pandemic: a literature review. RGO 69(11), e2021006 (2021) 26. Kunovjanek, M., Wankmüller, C.: An analysis of the global additive manufacturing response to the COVID-19 pandemic. J. Manuf. Technol. Manag. 32(9), 75–100 (2021) 27. Hein, A., et al.: Digital platform ecosystems. Electron Markets 30, 87–98 (2020) 28. https://links-project.eu 29. No-Fear|Project website (no-fearproject.eu) 30. Europe’s manufacturing rapid responsiveness for vital medical equipment|CO-VERSATILE 31. McCutcheon, D.M., Meredith, J.R.: Conducting case study research in operations management. J. Oper. Manag. 11(3), 239–256 (1993) 32. Eisenhardt, K.M.: Building theories from case study research. Acad. Manag. Rev. 14(4), 532–550 (1989) 33. Yang, L., et al.: Additive Manufacturing of Metals: the Technology, Materials, Design and Production. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-55128-9
An Environmental Decision Support System for Determining On-site or Off-site Additive Manufacturing of Spare Parts

Enes Demiralay1(B), Seyed Mohammad Javad Razavi1, Ibrahim Kucukkoc2, and Mirco Peron1

1 Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway
[email protected]
2 Department of Industrial Engineering, Balikesir University, Balikesir, Turkey
Abstract. Effective spare part management can increase the competitiveness of supply chains, but the intrinsic characteristics of spare parts (e.g., intermittent demands, dependence on suppliers) make their effective management complicated. In recent years, additive manufacturing (AM) has emerged as a possible way to overcome these issues and received significant research attention, especially the topic of supply chain configuration. AM enables the easy production of parts close to the point of use, thus favoring the decentralization of supply chains (i.e., on-site production), but while this topic has been studied extensively from an economic perspective, its environmental implications remain unexplored. The literature is limited merely to mentions of the reduced transportation emissions associated with on-site production strategies, without, for example, a lifecycle perspective in which the production phase is considered. It is common knowledge that different countries adopt different energy mixes, thus generating different carbon dioxide–equivalent emissions during the production phase. A lifecycle perspective therefore casts doubt on whether on-site production strategies are always environmentally preferable over strategies in which spare parts are produced far from the point of use and then shipped (i.e., off-site production or centralized supply chains). In this paper, we aim to resolve this doubt by developing a decision-support system that can assist managers and practitioners in determining the most environmentally friendly AM spare part production strategy, considering both the transportation and production phases. Keywords: Additive Manufacturing · Decentralized Supply Chain · Spare Parts Production
1 Introduction Spare parts are crucial for ensuring the high availability of production systems, and an appropriate spare parts management is needed to ensure that the right spare parts are available at the right time, at the right location, and in the right amount. However, it is © IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 563–574, 2023. https://doi.org/10.1007/978-3-031-43666-6_38
not easy to manage spare parts correctly due to their main characteristics: intermittent demand (hard to predict in both quantity and frequency), long lead times, high costs if they are not immediately available, and strong dependence on suppliers [1, 2]. Researchers and practitioners have recently been investigating additive manufacturing (AM; also known as 3D printing) for the production of spare parts [3, 4], which would limit some of the disadvantages linked to these characteristics, particularly the long lead times. Indeed, AM enables the production of spare parts both on demand and close to the point of use [5–7], and in addition to economic benefits (e.g., lower transportation costs, lower holding costs due to lower inventory levels), researchers have reported that it also generates environmental benefits due to production close to the point of use [8, 9]. In the current work, we investigate, for the first time, whether AM and its decentralized use (i.e., close to the point of use, hereinafter called "on-site production") is indeed environmentally beneficial. The existing works in the literature mention the reduced environmental burden of on-site production through reduced transportation emissions, and while they are accurate, a lifecycle perspective should be used to properly evaluate whether an on-site production strategy is environmentally preferable to a strategy in which parts are produced far from the point of use and then shipped (hereinafter called "off-site production"). As an example, producing 1 kWh of electricity in China generates more than twenty times more carbon dioxide equivalent (CO2e) emissions than in Sweden [10], so when considering both the production and transportation phases and the fact that different countries have different energy mixes, it becomes clear that on-site production might not always be the best environmental strategy.
In this work, we develop a decision-support system (DSS) to help managers and practitioners to determine whether to adopt on-site or off-site AM production of spare parts to minimize CO2e emissions (i.e., the most environmentally friendly production strategy). To develop this DSS, a four-step methodological framework is used, as described in the Methodology section. The remainder of this paper is structured as follows: Sect. 2 provides a literature review regarding the impact of AM on spare part supply chains from different perspectives; in Sect. 3, the methodology used to obtain the DSS is described; in Sect. 4, the DSS is presented and discussed; and in Sect. 5, the conclusions are presented, together with managerial implications, limitations, and possible future research.
2 Literature Review As mentioned, AM offers significant potential benefits for spare part supply chains, and researchers have examined, from different perspectives, when spare parts should be produced using AM. Initially, the focus was on the economic perspective, with the aim of understanding when spare part supply chains using AM technologies are convenient compared to conventional manufacturing (CM) technologies (casting, forging, etc.). Examples of these works include [5, 6, 11–16], in which AM and CM spare part supply chains are compared by considering different supply chain costs (ranging from inventory holding costs alone to all costs from a lifecycle perspective), different spare part characteristics (e.g., demands, properties, materials), and different constraints (e.g., limited storage capacity).
An Environmental Decision Support System
565
More recently, researchers have begun to consider the environmental perspective; here, too, the main focus has been on comparing the environmental footprint of AM spare part supply chains with those using CM technologies, such as in [17–20], which considered different raw materials (e.g., aluminum alloys, steel alloys, titanium alloys), AM production methods (e.g., selective laser sintering, electron beam melting, multi-jet fusion), transportation vehicles (e.g., trucks, ships, trains), and energy mixes (i.e., the amount of CO2e emissions based on the sources used to generate electricity). However, these works have mostly been case-specific and have considered either off-site or on-site AM production, but, to the best of our knowledge, not both. In this paper, we aim to address this gap by developing a DSS that helps managers and practitioners to determine whether to adopt off-site or on-site AM production of spare parts with the goal of minimizing CO2e emissions.
3 Methodology

The proposed DSS in the current research is a decision tree derived from a comparison of the CO2e emissions of different spare part supply chain scenarios (i.e., supply chains characterized by different backorder costs, production costs, production and transportation lead times, energy mixes, etc.). To develop the DSS, we follow a four-step methodology, but before describing these steps in detail, the basic features of the DSS and the assumptions behind it will be described. As already mentioned, the DSS is intended to help managers and practitioners determine the most environmentally friendly production strategy (i.e., on-site or off-site) for AM spare parts, and to achieve this, a lifecycle perspective is needed. The lifecycle of a spare part consists of different phases: raw material extraction, production, and transportation; and spare part production, transportation, use, and recycling/disposal [21]. In this work, we assume that the decision to produce spare parts on-site or off-site depends only on the production and transportation phases and that these are independent of the others (the environmental footprint of raw material extraction and preparation, for example, is considered the same regardless of the on-site or off-site production strategy). Other assumptions are as follows:

– We consider a single material—316L stainless steel—because this is one of the most commonly used, but the methodology is independent of the material.
– In the case of an on-site production strategy, we assume that the production is located sufficiently close to the point of use that the transportation phase is negligible.
– For the transportation phase, we consider four different types of transportation vehicles: trucks, trains, airplanes, and cargo ships.

With the main features, control volume, and assumptions of the proposed DSS defined, we will discuss the four-step methodology.
In Step 1, a mathematical model to compare the CO2 e emissions of on-site and off-site production is developed. In Step 2, an ANOVA is performed to determine the most relevant input parameters for the mathematical model. In Step 3, those input parameters are used in a parametric analysis, enabling the creation of a dataset consisting of realistic spare part supply chain scenarios (i.e., supply chains with different transportation modes, energy mixes, and distances and
566
E. Demiralay et al.
spare parts with different mean times to failure, backorder and production costs, and lead times). Finally, in Step 4, the DSS is obtained in the form of a decision tree using a machine learning algorithm (specifically, a decision-tree algorithm) fed and trained with the results of the parametric analysis. Each step is described in detail below.

3.1 Mathematical Model

In each scenario, the on-site and off-site production strategies are evaluated in terms of CO2e emissions using a mathematical model based on the input parameters shown in Table 1. The model allows the comparison of the CO2e emissions of the two strategies, considering both the production and transportation phases, so that, for each scenario, the strategy that minimizes CO2e emissions can be selected (Eq. (1)):

$\min \mathrm{CO_2e}$   (1)
where $\mathrm{CO_2e}$ is the sum of the CO2e arising from the production and transportation phases.

$\mathrm{CO_2e} = \mathrm{CO_2e_p} + \mathrm{CO_2e_t}$   (2)
The production-based CO2e emissions ($\mathrm{CO_2e_p}$) are calculated by multiplying the energy consumption of the production phase ($Ec$), the order-up-to level ($S_i$), the CO2e emissions from the production strategy $i$ ($Em_i$), and the part size ($Ps$).

$\mathrm{CO_2e_p} = \sum_{i=1}^{2} Ec \cdot S_i \cdot Em_i \cdot Ps$   (3)
The transportation phase is then considered only if the off-site production strategy is adopted. The transportation-based CO2e emissions ($\mathrm{CO_2e_t}$) are calculated by multiplying the distance to the production factory ($d$), the CO2e emissions resulting from the selected transportation mode ($t$), the order-up-to level ($S_i$), and the part size ($Ps$).

$\mathrm{CO_2e_t} = \sum_{i=2}^{2} d \cdot t \cdot S_i \cdot Ps$   (4)
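A minimal numeric sketch can make the strategy comparison of Eqs. (1)-(4) concrete. All parameter values below are illustrative, not taken from the paper, and the kg-to-ton conversion in the transport term is our own assumption about units:

```python
# Sketch of Eqs. (1)-(4): pick the strategy (on-site vs. off-site)
# with the lower CO2e footprint. All numbers are illustrative.

def co2e(S, Ps, Ec, Em, d=0.0, t=0.0):
    """Total CO2e [g] for one strategy: production (Eq. 3) plus
    transportation (Eq. 4); d = 0 models negligible on-site transport."""
    production = Ec * S * Em * Ps            # [kWh/kg]*[unit]*[gCO2/kWh]*[kg]
    transportation = d * t * S * Ps / 1000   # [km]*[gCO2/(ton*km)]*[unit]*[kg] -> ton
    return production + transportation

# Hypothetical scenario: cleaner grid off-site, but parts must be shipped.
on_site = co2e(S=3, Ps=4.0, Ec=100, Em=650)                  # dirty local grid
off_site = co2e(S=3, Ps=4.0, Ec=100, Em=50, d=7600, t=18.9)  # clean grid + train

best = "on-site" if on_site <= off_site else "off-site"
print(best, on_site, off_site)
```

In this invented scenario, the clean off-site grid outweighs the train transport emissions, illustrating why the decision cannot rest on transport distance alone.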
$S_i$ is representative of the number of spare parts that need to be produced and is calculated through a sub-optimization problem. For this, we assume that spare parts are produced on a make-to-stock basis, as was done in [6]. Such a sub-optimization problem aims to minimize, for each production strategy, the sum of the production, holding, and backorder costs. Therefore, an inventory management model needs to be considered, and we use a periodic review model in which the spare part demand follows a Poisson distribution. The inventory management model then proceeds by finding different optimal values for $S_i$ through the choice of various sourcing alternatives and review period $T$. Given the stochastic demand ($y$) and after identifying $T$, the optimization problem is as follows:

$\min C_{Total} = \min (C_{h_i} + C_{b_i} + C_p)$   (5)
Table 1. Input parameters

Input parameters:
| Parameter | Description | Unit measure |
| i = 1, 2 | Production strategy: on-site (1) or off-site (2) | – |
| T | Review period | [time] |
| h | Holding rate | [€ / (€ · time · unit)] |
| MTTF | Mean time to failure | [time / unit] |
| λ | Failure rate | [unit / time] |
| cb_i | Unitary backorder cost | [€ / unit] |
| cp | Unitary production cost | [€ / unit] |
| L_i | Lead time of production strategy i | [time] |
| Lt | Transportation lead time | [time] |
| Ec | Energy consumption of production | [kWh / kg] |
| d | Distance | [km] |
| t | Transportation mode | [gCO2 / (ton · km)] |
| Em_i | Energy mix of production strategy i | [gCO2 / kWh] |
| Ps | Part size | [kg] |
| S | Order-up-to level | [unit] |

Constraints:
| S_max | Maximum order-up-to level | [unit] |

Costs:
| Ch_i | Holding cost | [€] |
| Cb_i | Backorder cost | [€] |
| Cp | Production cost | [€] |

Objective functions:
| CO2e | CO2 equivalent emission | [gCO2] |
| CO2e_t | Transportation-based CO2e equivalent emission | [gCO2] |
| CO2e_p | Production-based CO2e equivalent emission | [gCO2] |
Equation (5) minimizes the time unit costs; it is rewritten in Eq. (6), where $C_{h_i}$ is the average number of units in stock, $\sum_{y=0}^{S_i-1} (S_i - y) \cdot P_{\lambda,T,y}$, during the coverage time $(T + L_i)$ multiplied by the holding cost $(h \cdot c_p)$, which is proportional to the unitary production cost $(c_p)$, and $C_{b_i}$ is the average number of units on backorder, $\sum_{y=S_i+1}^{\infty} (y - S_i) \cdot P_{\lambda,T,y}$, during the coverage time $(T + L_i)$ multiplied by the unitary backorder cost $(c_{b_i})$.

$\min\; h \cdot c_p \cdot \sum_{y=0}^{S_i-1} (S_i - y) \cdot P_{\lambda,T,y} + c_{b_i} \cdot \sum_{y=S_i+1}^{\infty} (y - S_i) \cdot P_{\lambda,T,y} + \lambda \cdot c_p \cdot T$   (6)
A backorder takes place each time a demand cannot be met by the stocked units. $C_p$ is the unitary production cost ($c_p$) multiplied by the failure rate ($\lambda$), which is obtained from the mean time to failure (MTTF) (see Eq. (10)), and by $T$, giving the expected number of demands during a period. In this equation, the $L_i$ value in the $(T + L_i)$ coverage time represents the lead time of the on-site or off-site production strategy; the production lead times are the same for both, but for off-site production the lead time is calculated as $L_2 = L_1 + L_t$ to account for delays due to transportation.

$P_{\lambda,T,y} = \dfrac{(\lambda \cdot (T + L_i))^{y} \cdot e^{-\lambda \cdot (T + L_i)}}{y!}$   (7)

$0 \le S_i \le S_{max}$   (8)

$S_i \in \mathbb{N}$   (9)

$\lambda = \dfrac{1}{MTTF}$   (10)
Equation (7) computes the probability that $y$ failures take place during the $(T + L_i)$ time using a Poisson distribution with an expected demand of $\lambda \cdot (T + L_i)$. Equation (8) imposes a maximum order-up-to level $S_{max}$. Equation (9) imposes a discrete $S$.

3.2 ANOVA

In Step 2, an ANOVA is used to determine which input parameters of the mathematical model influence the choice of production strategy (i.e., on-site or off-site). For this, a preliminary parametric analysis is first performed. As shown in Table 2, each input parameter takes three different values, whose extremes are defined according to the sources listed in the table. The only exception is t, for which four values have been considered because of the four types of vehicles available as transport options. A total of 78,732 scenarios are thus created. It is worth noting that h is considered fixed and equal to the cost per unit and week and that cp and L1 depend on Ps; the lowest values of cp and L1 are encountered when the part size is small (0.8 kg) and the highest when it is large (8 kg), and the same holds for the middle values [6]. The mathematical model developed in Step 1 then determines the AM production strategy that minimizes CO2e emissions for each scenario. An ANOVA is then performed, using Minitab software, in which the input parameters to the model are input factors to the ANOVA and the optimal production strategies determined by the model are the responses.

3.3 Parametric Analysis

After performing the ANOVA, the parameters with negligible effect on determining the most environmentally friendly production strategy are excluded. Those that significantly affect the results are used in Step 3 and are varied to create an extensive dataset, which is needed to feed and train the decision-tree algorithm to develop the DSS.
Table 2. Input parameters’ values and sources

| Parameter | Admissible values | Unit measure | Source used to define the admissible values |
| T | 4; 8; 12 | [weeks] | [6] |
| MTTF | 26; 91; 156 | [weeks / unit] | [6] |
| cb | 1000; 26000; 51000 | [€ / unit] | [6] |
| cp | 150; 700; 1400 | [€ / unit] | [6] |
| L1 | 0.1; 0.2; 0.4 | [time] | [6] |
| L2 | 1; 2; 4 | [time] | [6] |
| Ec | 20; 100; 180 | [kWh / kg] | [22] |
| d | 200; 7600; 15000 | [km] | Authors’ experience |
| t | 14.4; 18.9; 90; 1080 | [gCO2 / (ton · km)] | [22] |
| Em_i | 50; 350; 650 | [gCO2 / kWh] | [10] |
| Ps | 0.8; 4; 8 | [kg] | [6] |
| h | 0.0058 | [€ / (€ · week · unit)] | [6] |
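The full-factorial scenario generation used in the preliminary parametric analysis amounts to a Cartesian product over the admissible values. The sketch below uses only a subset of Table 2's parameters for illustration (in the paper, cp and L1 are tied to Ps, so the actual scenario count differs):

```python
# Sketch of the preliminary parametric analysis: every combination of
# the admissible parameter values forms one supply chain scenario.
from itertools import product

levels = {                           # subset of Table 2, for illustration
    "T":    [4, 8, 12],              # review period [weeks]
    "MTTF": [26, 91, 156],           # [weeks/unit]
    "Ec":   [20, 100, 180],          # [kWh/kg]
    "d":    [200, 7600, 15000],      # [km]
    "t":    [14.4, 18.9, 90, 1080],  # [gCO2/(ton*km)]
    "Em":   [50, 350, 650],          # [gCO2/kWh]
}

scenarios = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(scenarios))  # 3*3*3*3*4*3 combinations for this illustrative subset
```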
The data used to perform this parametric analysis is obtained as follows. First, the possible values of the input parameters are determined according to the results of the ANOVA (see Table 3). The values of the input parameters with negligible effect are treated as constants equal to the intermediate values reported in Table 2, while additional values are considered for the input parameters with non-negligible impacts on the production strategy decision. Specifically, the extreme values remain the same as in Table 2, and more intermediate values are added (Table 3). In this way, a dataset consisting of 2,268 scenarios is obtained. Finally, the mathematical model developed in Sect. 3.1 is applied to the data for each scenario, and the optimal production strategy and CO2e emissions are determined for each.

3.4 Decision Tree

Finally, in Step 4, a DSS in the form of a decision tree is developed using a decision-tree algorithm, which is a classification method that predicts an item’s class based on specific parameters. The results obtained by applying the mathematical model to each of the scenarios created by the parametric analysis performed in Step 3 are then used as a dataset to feed and train the decision tree, as follows. Starting from a root node, the dataset is iteratively divided into binary branches based on the Gini diversity index (gdi), where k is the class label and p(k) is the probability of choosing a data point with class k. The gdi (see Eq. (11)) measures the probability of misclassification of a given data point in a dataset when randomly selected. Thus, gdi = 0 means that all data points in the dataset belong to a particular class, while gdi = 1 implies that the data points are randomly distributed among the different classes. At each tree node, an attribute and its breakpoint are selected to create
Table 3. Input parameters’ values to create the extensive dataset

| Parameter | Admissible values |
| T | 8 |
| MTTF | 91 |
| cb | 26000 |
| cp | 700 |
| L1 | 0.2 |
| L2 | 2 |
| Ec | 20; 40; 60; 80; 100; 120; 140; 160; 180 |
| d | 200; 2050; 3900; 5750; 7600; 9450; 11300; 13150; 15000 |
| t | 14.4; 18.9; 90; 1080 |
| Em_i | 50; 150; 250; 350; 450; 550; 650 |
| Ps | 4 |
| h | 0.0058 |
two branches to minimize Eq. (12). Thus, the branches that provide the maximum purity are determined. In Eq. (12), $n$ is the number of data points in the original node, $n_{left}$ is the number of data points in the new node in the left branch, $n_{right}$ is the number of data points in the new node in the right branch, $gdi_{left}$ is the Gini diversity index in the new node in the left branch, and $gdi_{right}$ is the Gini diversity index in the new node in the right branch [6].

$gdi = 1 - \sum_{k=1}^{K} p(k)^2$   (11)

$\min \left( \dfrac{n_{left}}{n} \, gdi_{left} + \dfrac{n_{right}}{n} \, gdi_{right} \right)$   (12)
The elements obtained at the end of the decision tree, after the last branching, are called leaves. The number of leaf branchings corresponds to the number of depth levels of the tree. To develop a user-friendly DSS, the decision tree is trimmed by determining the maximum depth level ($D_{max}$) using a sensitivity analysis; this also helps to prevent the problem of overfitting while creating the tree. For pruning, a sensitivity analysis of the total accuracy (A) of the decision tree is performed by imposing various values for $D_{max}$ and calculating the overall A by dividing the number of correct predictions by the total number of predictions.

$A = \dfrac{\#correct\ predictions_{tree}}{\#predictions_{tree}}$   (13)
Finally, the effectiveness of the decision tree is evaluated against three key performance indicators (KPI) related to the tree’s leaves. The first is the accuracy of a leaf (a),
which is calculated by dividing the number of correct predictions by the total number of predictions in the leaf. The second KPI is the ratio of items reaching each leaf (p), which is calculated by dividing the total number of predictions in that leaf by the total number of predictions in the tree. The third KPI is the average percentage CO2e emission increase (c) that occurs when an incorrect estimation is made, which is the arithmetic average of the extra CO2e emission incurred by each wrong estimate.

$a = \dfrac{\#correct\ predictions_{leaf}}{\#predictions_{leaf}}$   (14)

$p = \dfrac{\#predictions_{leaf}}{\#predictions_{tree}}$   (15)

$c = \dfrac{1}{\#wrong\ predictions_{leaf}} \sum_{k=1}^{\#wrong\ predictions_{leaf}} \dfrac{cost\ of\ wrong\ prediction_k - cost\ of\ correct\ prediction_k}{cost\ of\ correct\ prediction_k} \cdot 100$   (16)
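The three leaf KPIs of Eqs. (14)-(16) can be sketched as follows; the record format `(predicted, actual, co2e_predicted, co2e_optimal)` and all numbers are our own assumptions, not the paper's:

```python
# Sketch of the leaf KPIs in Eqs. (14)-(16). Each prediction is assumed
# to be recorded as (predicted, actual, co2e_predicted, co2e_optimal).

def leaf_kpis(leaf_preds, n_tree_preds):
    correct = [r for r in leaf_preds if r[0] == r[1]]
    wrong = [r for r in leaf_preds if r[0] != r[1]]
    a = len(correct) / len(leaf_preds)        # Eq. (14): leaf accuracy
    p = len(leaf_preds) / n_tree_preds        # Eq. (15): share of items
    c = (sum((cw - co) / co * 100 for _, _, cw, co in wrong) / len(wrong)
         if wrong else 0.0)                   # Eq. (16): avg extra CO2e [%]
    return a, p, c

# One leaf with three predictions, one of them wrong (1 = on-site, 2 = off-site)
leaf = [(1, 1, 100.0, 100.0), (2, 2, 80.0, 80.0), (2, 1, 120.0, 100.0)]
a, p, c = leaf_kpis(leaf, n_tree_preds=30)
print(a, p, c)
```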
4 Results and Discussion

A DSS in the form of a decision tree was developed to help with determining whether on-site or off-site AM production of spare parts should be adopted to minimize CO2e emissions. After developing the mathematical model to compare the CO2e emissions of the on-site and off-site production strategies, an ANOVA was performed, whose results are presented in Fig. 1. These results show that five of the ten input parameters (T, Cb_i, MTTF, Ps, and L2) have a negligible effect on determining the most environmentally friendly production strategy, with the mean effect curves created from the ANOVA results being almost horizontal. In contrast, the other input parameters (Ec, Em_i, d, and t) have a non-negligible influence on the decision-making process. From the parametric analysis, 15,876 scenarios were created (see Sect. 3.3). Applying the mathematical model to each scenario determined whether on-site or off-site AM production of spare parts would minimize CO2e emissions, creating the dataset used to feed and train the decision-tree algorithm. The resulting DSS is presented in Fig. 2. As can be seen, not all of the input parameters are used in the decision tree, which indicates that they are not equally important; as could have been anticipated from the main effects plot, input parameter d (distance) is missing. Examining the decision tree in detail, for seven of the twelve leaves, the most environmentally friendly strategy is on-site production, which is consistent with the existing literature. However, for the five remaining leaves (26.36% of the scenarios), off-site production is the most environmentally friendly strategy because of variations in the factors affecting CO2e emissions. More specifically, the on-site production strategy is preferable if the energy mix of off-site production is greater than or equal to the energy mix of on-site production or if transportation is done by air.
Fig. 1. Results of the ANOVA (main effects plots)

Fig. 2. Decision tree with a maximum depth of 4 levels

Figure 2 shows the KPI values of each leaf of the decision tree, in which the accuracy rates of some leaves show very high prediction reliability (a ≥ 90%), while other leaves showed that the predictions may not be sufficiently reliable (a < 80%). However, the potential rise in CO2e emissions (c) if the estimation is wrong, which managers and practitioners must consider, is often less than 20%. Although incorrect predictions would negatively affect the environmental friendliness of a company, the low c values of the leaves mean that managers and practitioners can rely on the predictions offered by the decision tree.
5 Conclusion

Choosing on-site or off-site AM production of spare parts can significantly impact companies in terms of economic and environmental sustainability. Although the effects of AM on supply chain management issues have previously been studied, the focus has
generally been on economic concerns, and the CO2e emission rates produced by on-site or off-site strategies have been neglected. This study therefore aimed to fill this gap in the literature by developing a DSS to determine the conditions under which on-site and off-site production are optimal for spare parts in terms of producing lower CO2e emissions. The DSS developed is decision tree–based and was chosen for its user-friendliness and speed of use. To develop the DSS, the following procedure was used:

i. Develop a mathematical model to determine, from an environmental viewpoint, whether to adopt on-site or off-site AM production for spare parts.
ii. Examine the effects of the input parameters on the decision using ANOVA and determine the relevant non-negligible input parameters.
iii. Perform a parametric analysis considering the non-negligible input parameters and apply the mathematical model to determine whether to adopt on-site or off-site AM production for each scenario resulting from the parametric analysis.
iv. Feed and train the decision-tree algorithm using the dataset developed through the parametric analysis to obtain the DSS.

Some leaves in the decision tree have very high accuracy rates, while others are lower, but even with the lower rates, the average additional CO2e emission increase is not very large. The DSS therefore does a good job of determining whether to adopt on-site or off-site AM production for spare parts. Nevertheless, future studies are needed to reduce possible errors in the decision-making process by using other machine learning algorithms, such as artificial neural networks or random forests. The DSS also has the following implications for managers:

– If the energy mix of off-site production is greater than or equal to the energy mix of on-site production, on-site production is always the most environmentally friendly production strategy.
– If airplanes are used as the transport mode for off-site production, for that strategy to be environmentally convenient, the energy mix of on-site production must be at least 1.5 times greater than that of off-site production, and the energy consumption must be greater than 50 kWh/kg.

Future research could add an economic perspective and analysis to the current study, developing a multi-objective mathematical model to obtain optimal production strategies by establishing an appropriate trade-off between environmental friendliness and cost-efficiency.
References

1. Roda, I., Arena, S., Macchi, M., Orrù, P.F.: Total cost of ownership driven methodology for predictive maintenance implementation in industrial plants. IAICT, vol. 566, pp. 315–322 (2019). https://doi.org/10.1007/978-3-030-30000-5_40
2. Huiskonen, J.: Maintenance spare parts logistics: special characteristics and strategic choices. Int. J. Prod. Econ. 71, 125–133 (2001)
3. Dobrzyńska, E., Kondej, D., Kowalska, J., Szewczyńska, M.: State of the art in additive manufacturing and its possible chemical and particle hazards—review. Indoor Air 31, 1733–1758 (2021)
4. Kunovjanek, M., Reiner, G.: How will the diffusion of additive manufacturing impact the raw material supply chain process? Int. J. Prod. Res. 58, 1540–1554 (2019)
5. Cantini, A., Peron, M., De Carlo, F., Sgarbossa, F.: A decision support system for configuring spare parts supply chains considering different manufacturing technologies. Int. J. Prod. Res. (2022)
6. Sgarbossa, F., Peron, M., Lolli, F., Balugani, E.: Conventional or additive manufacturing for spare parts management: an extensive comparison for Poisson demand. Int. J. Prod. Econ. 233, 107993 (2021)
7. Yang, S., Tang, Y., Zhao, Y.F.: A new part consolidation method to embrace the design freedom of additive manufacturing. J. Manuf. Process. 20, 444–449 (2015)
8. Peng, T., Kellens, K., Tang, R., Chen, C., Chen, G.: Sustainability of additive manufacturing: an overview on its energy demand and environmental impact. Addit. Manuf. 21, 694–704 (2018)
9. Javaid, M., Haleem, A., Singh, R.P., Suman, R., Rab, S.: Role of additive manufacturing applications towards environmental sustainability. Adv. Indus. Eng. Polymer Res. 4, 312–322 (2021)
10. Ritchie, H., Roser, M., Rosado, P.: Energy. https://ourworldindata.org/energy
11. Knofius, N., Van Der Heijden, M.C., Zijm, W.H.M.: Selecting parts for additive manufacturing in service logistics. J. Manuf. Technol. Manag. 27, 915–931 (2016)
12. Noorwali, A., Babai, M.Z., Ducq, Y.: Impacts of additive manufacturing on supply chains: an empirical investigation. Supp. Chain Forum Inter. J. 24, 182–193 (2022)
13. Peron, M., Basten, R., Knofius, N., Lolli, F., Sgarbossa, F.: Additive or conventional manufacturing for spare parts: effect of failure rate uncertainty on the sourcing option decision. IFAC-PapersOnLine 55, 1141–1146 (2022)
14. Peron, M., Knofius, N., Basten, R., Sgarbossa, F.: Impact of failure rate uncertainties on the implementation of additive manufacturing in spare parts supply chains. IAICT, vol. 634, pp. 291–299 (2021).
https://doi.org/10.1007/978-3-030-85914-5_31
15. Cestana, A., Pastore, E., Alfieri, A., Matta, A.: Reducing resupply time with additive manufacturing in spare part supply chain. IFAC-PapersOnLine 52, 577–582 (2019)
16. Ghadge, A., Karantoni, G., Chaudhuri, A., Srinivasan, A.: Impact of additive manufacturing on aircraft supply chain performance: a system dynamics approach. J. Manuf. Technol. Manag. 29, 846–865 (2018)
17. Rinaldi, M., Caterino, M., Fera, M., Manco, P., Macchiaroli, R.: Technology selection in green supply chains - the effects of additive and traditional manufacturing. J. Clean. Prod. 282, 124554 (2021)
18. Kellens, K., Baumers, M., Gutowski, T.G., Flanagan, W., Lifset, R., Duflou, J.R.: Environmental dimensions of additive manufacturing: mapping application domains and their environmental implications. J. Ind. Ecol. 21, S49–S68 (2017)
19. Dev, N.K., Shankar, R., Qaiser, F.H.: Industry 4.0 and circular economy: operational excellence for sustainable reverse supply chain performance. Resour. Conserv. Recycl. 153, 104583 (2020)
20. Huang, R., et al.: Environmental and economic implications of distributed additive manufacturing: the case of injection mold tooling. J. Ind. Ecol. 21, S130–S143 (2017)
21. Haber, N., Fargnoli, M.: Product-service systems for circular supply chain management: a functional approach. Sustainability 14, 14953 (2022)
22. Huang, R., et al.: Energy and emissions saving potential of additive manufacturing: the case of lightweight aircraft components. J. Clean. Prod. 135, 1559–1570 (2016)
Latest Technological Advances and Key Trends in Powder Bed Fusion: A Patent-Based Analysis António Alves de Campos(B)
and Marco Leite
IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal {antonio.campos,marcoleite}@tecnico.ulisboa.pt
Abstract. Metal additive manufacturing has revolutionized the way we design and produce complex metallic components, enabling the creation of parts with intricate geometries and tailored material properties. Among various metal additive manufacturing techniques, metal powder bed fusion technology has emerged as a leading candidate, offering a high degree of geometric flexibility and material properties control to produce high-performance components. Assessing technology development is essential as it enables the identification of emerging trends, advancements, and innovations in the focus field. Specifically, tracing technology trajectories and pinpointing the latest advances in manufacturing technologies allows for a better understanding of the evolutionary process and regularities within the technological domain, which in turn informs policy and drives sustainable economic and social growth. This paper investigates the technological development trajectories of the powder bed fusion technology domain, employing a Genetic Knowledge Persistence-Based Main Path methodology to trace and analyse its evolutionary progress and key advancements. Results show that the recent innovations within the powder bed fusion technology domain are heavily focused on improving process monitoring and control, and materials and structure development. Key advancements demonstrate the ongoing efforts to enhance the manufacturing process’s efficiency, quality, and versatility. Furthermore, innovations in heat exchangers, cooling systems, and manufacturing tooling and fixtures drive greater efficiency and flexibility in the powder bed fusion domain. This comprehensive analysis of technological development trajectories in the powder bed fusion technology domain provides valuable insights for industry stakeholders, researchers, and policymakers to support strategic decision-making and foster sustainable growth in additive manufacturing. 
Keywords: Additive Manufacturing · Powder Bed Fusion · Patent Citation Analysis · Technological Development
© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 575–589, 2023. https://doi.org/10.1007/978-3-031-43666-6_39

1 Introduction

Additive manufacturing (AM), often referred to as 3D printing, is a manufacturing technology that enables the creation of intricate three-dimensional objects directly from digital data sources [1]. AM technologies usually involve the deposition of materials
layer by layer based on a digital blueprint, a distinct departure from traditional subtractive manufacturing methods involving cutting, drilling, or milling [2]. AM technologies have revolutionized manufacturing in recent years, enabling unprecedented design freedom, minimizing material waste, and potentially reducing production times for complex, customized parts [1, 2]. Powder Bed Fusion (PBF) is an AM technology that constructs three-dimensional metal geometries, particularly for individualized products in small series, by selectively melting a powder bed at specified locations [3]. PBF has become a leading AM technology, enabling the production of complex components with high precision and minimal geometrical constraints [4]. This technology encompasses laser-based PBF of metals, direct metal laser sintering (DMLS), and processes that use either laser or electron beams to sinter or melt the feedstock material [5–7]. PBF offers geometrical freedom that surpasses conventional machining processes, establishing it as a viable industrial manufacturing method [5]. The widespread application of PBF methods in various industries has laid the groundwork for significant research and development efforts, as these techniques enable the fabrication of high-resolution components with complex geometries, unattainable through traditional methods such as casting or forging [4]. In high-tech industries, innovation is widely recognized as a significant driver of economic success. Even minor technological advancements have the potential to confer substantial competitive advantages [8–10]. The significant role of innovation in shaping technological change has made it imperative for firms that design and manufacture engineering products and services, research institutions, policymakers, and private and public investors to comprehend the underlying dynamics of this phenomenon [8, 11–17]. PBF technology has vast research potential.
Several articles have focused on recent trends, applications, and future outlooks in the PBF technological domain. Previous research findings have highlighted the potential of and challenges faced by PBF technology in AM. Studies have noted the expensive and slow nature of the selective laser melting (SLM) process compared to conventional manufacturing and the need for improved process parameters and control to enhance its competitiveness [18]. Innovations in laser PBF machines have aimed to increase part quality and process speed, with developments in ultra-short pulse lasers, closed-loop control powder handling, and multi-laser systems [19]. Recent advancements in PBF have been driven by industry demands, but challenges remain in designing new alloys specifically for laser-based PBF to overcome metallurgical defects and improve part performance [20]. The future of PBF may involve a wider application space for AM alloys, benefiting from physical metallurgy advantages and embracing machine learning techniques and artificial intelligence for process control and monitoring systems [20, 21]. Additionally, opportunities exist in integrating real-time modelling and response surface development to gather essential information on material-process-structure-property aspects [21]. By leveraging a comprehensive systematic patent-based methodology, this paper complements the previous research in the analysis of trends in PBF, offering a comprehensive understanding of the technology’s evolutionary progress. Examining and evaluating technology trajectories can offer valuable insights into the progression of technology [8, 22, 23]. Numerous patent-based main path methodologies have been widely applied to analyse inventive activities and visualize technology trajectories [24–27]. Park and Magee [27] introduced a main path-tracing approach that employs
a backwards-forwards search for pertinent domain patents. This method is contingent on the persistence of knowledge in a technological domain’s patent citation network to calculate the most crucial technology trajectories. Presently, patent-based methods are among the most exhaustive and dependable approaches for carrying out quantitative technology change and innovation assessments. Nonetheless, it can be argued that relying solely on patent data for evaluating technology change might have inherent limitations, as it does not account for the unpatented technological knowledge generated within academia and industry through strategies such as open innovation and market/technology leadership. Although patents serve as only an indicator of technological advancement, patenting behaviour in technologically advanced and capital-intensive domains has been demonstrated to mirror actual innovation outcomes. This paper utilizes an objective patent-based methodology to provide a comprehensive review of recent progress in patented inventions of PBF. For that, the following methodology is applied: a set of patents is retrieved using a patent class method; the obtained patent set is tested for relevance and, once its domain representativeness is validated, it becomes the unit of analysis; and a technical knowledge persistence algorithm is applied to trace the main technological trajectories and identify the most relevant recent innovations and the technical aspects associated with them. Building on existing research, this paper introduces a distinct and valuable contribution to the understanding of PBF technology’s evolution by systematically employing a unique Genetic Knowledge Persistence-Based Main Path methodology to analyse its developmental trajectories.
Unlike other reviews in the field, our study integrates a patent-based approach, offering an unbiased analysis that emphasizes the importance of the technological and innovative developments traced through patented inventions. This methodologically rigorous approach allows us to illuminate the latest technological advancements, trends, and key contributors within PBF, further supporting the analysis of the technological evolution and the progress of manufacturing technology. By presenting this comprehensive, impartial, and systematic review, we aim to offer a fresh perspective on PBF technology's ongoing developments, underlining our contribution to the current literature and providing a reference for future work. Section 2 introduces the technological domain of focus in the paper and describes in detail the methodology applied. Section 3 presents and discusses the main trajectory results and insights. Section 4 concludes.
2 Data and Methods

This section focuses on the extraction of a set of patented inventions that represent PBF technology and on a citation network analysis to collect insights into recent technological advances. A representative patent set is collected using a method that relies on the Cooperative Patent Classification (CPC) system to ensure high relevancy and completeness. The main path analysis, which involves constructing a patent citation network, measuring knowledge persistence, and tracing the main path from high persistence patents, is applied to identify and visualize technological trajectories and extract key insights into the recent evolutionary process of the PBF domain.
A. Alves de Campos and M. Leite
2.1 Collect Technology Domain Representative Patent Sets

This study analyses Powder Bed Fusion's recent technological trajectories using U.S.-granted patents. The sole use of U.S. patented inventions is based on two patent-related factors. Firstly, due to the U.S. market size and its central position in technological research and development, most valuable inventions filed in other patent offices are also filed in the United States Patent and Trademark Office (USPTO) to be protected in the U.S. domestic market under U.S. law [28, 29]. Thus, resorting solely to US-filed patents does not constitute an analysis bias. Secondly, the U.S. patent system has a broad representation regarding the amount and variety of information compared to other relevant patent systems [28, 30]. The USPTO's citation practices are particularly significant in this study, since our method relies on patent citation networks and the USPTO has more extensive and careful citation practices. Analysing Powder Bed Fusion's technological domain requires gathering a set of patents representative of the technology domain. For a patent set to be representative of a domain, it should be highly relevant and complete [31, 32]. Relevancy is the ratio of relevant patents to the total number of patents in the retrieved set, while completeness is the ratio of relevant patents in the set to the number of relevant patents in the whole patent system. Although a 100% relevant and complete patent set is ideal, it is unattainable due to the vast number of patents and limitations in searching techniques. To retrieve a set of patents representative of PBF, a thorough search was conducted throughout the hierarchical structure of the CPC system [33]. The CPC structure's hierarchical nature effectively stratifies the scope of knowledge of the CPC groups into increasingly specific branches of knowledge.
This feature ensures that connected groups at consecutive hierarchical levels are related, and the knowledge scope narrows as one moves from higher- to lower-level groups. Additionally, the CPC system assigns names or designations to each group that briefly but accurately describe the knowledge contained within it. These attributes make the CPC a reliable tool for retrieving sets of patents representative of technological domains in search methodologies. The search term used in Patseer – a patent search engine – to retrieve the patent set was PBC:(US) AND PTYP:(Patent) AND CPC:(B22F10/28), which resulted in a set of 2361 patents. To ensure the representativeness of the set, both relevancy and completeness must be assessed. The relevancy percentage was determined using a method suggested by Benson and Magee [34], which involves reading 300 patents (the top 100 most cited and 200 randomly selected) to determine whether they belong to the domain. The resultant relevancy score was 94.3%. Being highly relevant, the obtained dataset can be said to adequately represent the Powder Bed Fusion technological domain.

2.2 Main Path Analysis and Tracing of Technology Trajectories

Main path analysis can be applied to patent citation networks to identify and visualize technological trajectories, extract improvement trends, and provide key insights into the evolutionary process – disruptive innovations and incremental technological developments – of technological domains [22, 35]. Recently, Park and Magee [27] proposed a technological trajectory tracing methodology based on a backwards-forwards search for relevant domain patents. This method, Genetic Backward-Forward Path (GBFP), identifies the network's main paths by searching for high persistence
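The relevancy check described above can be sketched as a simple sampling routine. This is a hedged illustration, not the authors' code: the `judge_relevant` callback stands in for the manual reading step, and the patent identifiers are hypothetical.

```python
import random

def estimate_relevancy(patents, citation_counts, judge_relevant,
                       n_top=100, n_random=200, seed=0):
    """Estimate a patent set's relevancy following the sampling scheme of
    Benson and Magee: read the top-cited patents plus a random sample of
    the rest, and compute the fraction judged to belong to the domain."""
    # Top n_top most-cited patents in the retrieved set.
    top = sorted(patents, key=lambda p: citation_counts.get(p, 0),
                 reverse=True)[:n_top]
    # Random sample drawn from the remaining patents.
    rest = [p for p in patents if p not in set(top)]
    random.seed(seed)
    sample = top + random.sample(rest, min(n_random, len(rest)))
    # In the study this judgement is a manual reading of each patent.
    relevant = sum(1 for p in sample if judge_relevant(p))
    return relevant / len(sample)
```

With the study's figures (283 of the 300 read patents judged relevant), such a routine would return roughly 0.943.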
patents (HPPs) and has been successfully applied across several distinct technological domains [28, 36–38]. Knowledge persistence of the patents in the citation network of a technological domain is computed using the genetic knowledge persistence measurement (GKPM) algorithm [24]. GKPM is conceptually based on the Mendelian notion of genetic inheritance, i.e., the new knowledge of a focal patent results from the combination of the existing knowledge present in all previous patents that the focal patent backwardly connects to. Thus, a patent with high persistence has its knowledge forwardly inherited via direct and indirect citations. The method can be applied as follows:

Construction of the Patent Citation Network Within the Technological Domain. A patent citation network is a directed graph where nodes represent patent documents and edges represent citations between these documents. The basic assumption behind graph analysis of patent citation networks is that a citation between two patents represents a knowledge flow from the cited patent to the citing patent [24]. Since the unit of analysis of this method is the technological domain, only citations within the domain are considered. The cited-citing pairs are obtained from the backward citation metadata of patents in the patent set.

Measuring the Knowledge Persistence. The GKPM algorithm, introduced by Martinelli and Nomaler [24], is used to calculate knowledge persistence. To begin measuring knowledge persistence for patents, a lineage structure is constructed. Each patent is assigned to a layer through a backward mapping process. Start-point patents, which are patents that do not cite any patents in the domain, are assigned to the first layer. The subsequent layers, to which other patents, including endpoint patents, are assigned, are then calculated. Once the layer-based network is established, the knowledge persistence of the patents within the network is calculated.
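The backward-mapping layer construction can be sketched with a short stdlib-only routine. The citation dictionary and patent names below are illustrative; real input would come from the backward citation metadata of the patent set, and the network is assumed acyclic.

```python
def assign_layers(backward_citations):
    """Assign each patent to a lineage layer: start-point patents (no
    within-domain backward citations) go to layer 1; every other patent
    goes one layer beyond the deepest patent it cites."""
    layers = {}

    def layer_of(patent):
        if patent not in layers:
            cited = backward_citations.get(patent, [])
            # Start-points sit in layer 1; others inherit max(cited) + 1.
            layers[patent] = 1 if not cited else 1 + max(layer_of(c) for c in cited)
        return layers[patent]

    for p in backward_citations:
        layer_of(p)
    return layers

# Toy domain: A cites nothing in-domain; B and C cite A; D cites B and C.
net = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
```

Here `assign_layers(net)` places A in layer 1, B and C in layer 2, and D (an endpoint) in layer 3.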
The knowledge persistence value quantifies the amount of knowledge a patent inherits and passes on to endpoint patents. The proportion of genetic knowledge inherited by a patent to the next generation (i.e., the next layer) is calculated by dividing 1 by the number of backward citations of the descendant patents [27]. The equation for the knowledge persistence of a patent in the domain, denoted as patent A, is given by

$$KP_A = \sum_{i=1}^{n} \sum_{j=1}^{m_i} \prod_{k=1}^{l_j - 1} \frac{1}{BWDCit(P_{ijk})} \quad (1)$$
where $KP_A$ is patent A's ($P_A$) knowledge persistence, $n$ is the number of patents in the network's last layer that are (indirectly) connected to $P_A$, $m_i$ is the number of possible backward paths from $P_i$ to $P_A$, $l_j$ is the number of patents on the j-th backward path from $P_i$ to $P_A$, $P_{ijk}$ is the k-th patent on the j-th backward path from $P_i$ to $P_A$, and $BWDCit(P_{ijk})$ is the number of backward citations of $P_{ijk}$, without considering backward citations by patents between the first layer and layer t − 1, when $P_A$ belongs to layer t [27].

Tracing the Main Path from High Persistence Patents. In the study of technological domains, the main paths consist of network paths formed by patents with high knowledge persistence. Two perspectives are taken to identify these patents: a global perspective and a local (layer) perspective. The global perspective identifies the most important patents in the overall network based on their persistence value, known as global persistence
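A simplified sketch of the persistence computation in Eq. (1) is shown below: it enumerates the forward citation paths from a focal patent to the endpoint patents and accumulates the product of 1/BWDCit along each path. This is an assumption-laden illustration, not the authors' implementation — it ignores the layer-exclusion refinement for backward citation counts, assumes an acyclic network, and uses a toy domain.

```python
from collections import defaultdict

def knowledge_persistence(backward_citations, focal):
    """Sum, over all forward citation paths from `focal` to endpoint
    patents (patents never cited within the domain), of the product of
    1 / BWDCit(P) over the patents P on the path after `focal`."""
    forward = defaultdict(list)
    bwd = {}
    for citing, cited_list in backward_citations.items():
        bwd[citing] = len(cited_list)
        for cited in cited_list:
            forward[cited].append(citing)

    def paths(node, weight):
        if not forward[node]:            # endpoint: nothing cites it
            return weight
        # Each citing descendant inherits weight / (its backward citations).
        return sum(paths(nxt, weight / bwd[nxt]) for nxt in forward[node])

    return paths(focal, 1.0)

# B and C each cite only A; endpoint E cites both B and C, so BWDCit(E) = 2.
net = {"A": [], "B": ["A"], "C": ["A"], "E": ["B", "C"]}
```

For this toy network, patent A's persistence is (1/1)(1/2) + (1/1)(1/2) = 1.0 — its knowledge is fully inherited by the endpoint along two paths.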
(GP), while the local perspective identifies the most important patents in each layer based on their persistence value, known as layer persistence (LP). Incorporating a layer perspective allows for the inclusion of recent but potentially important patents in the main paths [27]. HPP and LPP cut-off values are used to classify each patent as having high or low persistence. The HPPs serve as the starting points for searching the main paths. A forward and backward search is conducted for each HPP, and the citing and cited patents with higher GP connected to the HPP are selected as part of the main path. The citation connection between them is chosen as a main path section. The search continues to find the patents with higher GP connected to the previous set, and stops when the start and end points are reached for all HPPs. The cut-off values for HPPs can be set according to the desired complexity of the technological domain's main path [27]. This study computed several main paths for the technological domain, varying the GP and LP cut-off values in each run. Each main path was evaluated based on the percentage of domain patents included, the ratio of LPPs to HPPs, and the total number of patents. The goal was to choose a main path with enough complexity to capture important technological trends, yet with a total number of patents that allows for manual analysis. GP values varied from 0.4 to 0.1 with a 0.05 step, and LP values varied from 0.8 to 0.1 with a 0.1 step. Two constraints were set to guarantee that the number of inventions capturing the complexity of the overall technological development would be feasible to analyse manually: the selected main path should not contain more than ~20% of the total number of patents in the domain, nor more than 100 patents. The resultant main path is introduced in the results section.
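A minimal sketch of the backward-forward search, assuming persistence values have already been computed, is given below. The greedy single-neighbour walk and the toy data are simplifications of the full GBFP procedure (which keeps all higher-GP neighbours), and the network is assumed acyclic.

```python
def trace_main_path(backward_citations, persistence, hpp_cutoff):
    """From each high-persistence patent (HPP), greedily walk backward to
    the cited patent with the highest persistence and forward to the citing
    patent with the highest persistence, collecting the traversed edges."""
    forward = {}
    for citing, cited_list in backward_citations.items():
        for cited in cited_list:
            forward.setdefault(cited, []).append(citing)

    hpps = [p for p, gp in persistence.items() if gp >= hpp_cutoff]
    path_edges = set()
    for hpp in hpps:
        node = hpp                                  # backward walk to a start-point
        while backward_citations.get(node):
            prev = max(backward_citations[node], key=persistence.get)
            path_edges.add((prev, node))
            node = prev
        node = hpp                                  # forward walk to an endpoint
        while forward.get(node):
            nxt = max(forward[node], key=persistence.get)
            path_edges.add((node, nxt))
            node = nxt
    return path_edges

# Toy network and persistence values (illustrative, not from the study).
net = {"A": [], "B": ["A"], "C": ["A"], "E": ["B", "C"]}
gp = {"A": 1.0, "B": 0.5, "C": 0.5, "E": 0.0}
edges = trace_main_path(net, gp, hpp_cutoff=0.9)
```

With cut-off 0.9 only A is an HPP, and the forward walk selects one of the two equally persistent descendants before reaching endpoint E, yielding a two-edge main path.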
2.3 Dimensions of Analysis

For a comprehensive analysis of key trends and innovations, it is crucial not only to identify the technical aspects described in a patented invention but also to recognize how these aspects improve the state of the art. Therefore, the analysis and classification of main path patents were conducted using these two perspectives/dimensions, providing a more complete understanding of their contributions and advancements. To classify patents along these two dimensions, the titles, abstracts, backgrounds, and summaries of the inventions were examined. This approach ensures that the full scope of each patent's technical aspects and improvements to the state of the art was accurately captured and considered during the classification process. The first perspective/dimension is based on the technical aspect described in a patented invention and involved classifying patents according to the main technological artefact, system, or method disclosed. The second perspective is based on performance improvement and labelled patents according to their contribution to the overall improvement of technological performance. To capture incremental technological performance improvement, a functional performance metric (FPM) for AM can be defined. This FPM should reflect the important practical aspects or properties that a technology buyer values and should be closely tied to the purchasing decision of the technology [34]. The performance of a manufacturing technology can be measured along five metrics: geometric flexibility, properties flexibility, productivity, reliability,
and cost [39]:

$$FPM = \frac{f_g \times f_p \times \text{Productivity} \times \text{Reliability}}{\text{Cost}} \quad (2)$$
where $f_g$ and $f_p$ refer to the geometric and technological properties flexibility, respectively. Geometric flexibility involves a manufacturing technology's ability to create parts with varying dimensions, shapes, and complexity. Properties flexibility pertains to the technology's capability to generate components with diverse mechanical properties. Productivity refers to the time taken for a component to complete the manufacturing process and be considered a finished product. Reliability captures the thorough understanding and control of all process steps to ensure high-quality production. The cost metric encompasses all manufacturing-related expenses, including material, energy, consumables, equipment, and labour costs.
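Equation (2) can be applied directly; the metric values below are illustrative normalized scores, not measurements from the study.

```python
def functional_performance(fg, fp, productivity, reliability, cost):
    """Functional performance metric (FPM) of Eq. (2): geometric flexibility
    times properties flexibility times productivity times reliability,
    divided by cost."""
    return fg * fp * productivity * reliability / cost

# Hypothetical comparison: halving cost doubles the FPM, all else equal.
baseline = functional_performance(fg=0.8, fp=0.6, productivity=1.2,
                                  reliability=0.9, cost=2.0)
improved = functional_performance(fg=0.8, fp=0.6, productivity=1.2,
                                  reliability=0.9, cost=1.0)
```

The multiplicative form means any single weak metric (e.g. low reliability) suppresses the overall score, which matches the intuition that a technology buyer weighs all five aspects jointly.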
3 Results and Discussion

The chosen cut-offs were LP = 0.2 and GP = 0.1, which satisfy the imposed constraints while allowing a satisfactory number of patents to be analysed. The resultant main path comprises 88 patents, of which 54 are high persistence patents and 34 are low persistence patents. In the analysis of the latest technological advances and trends, only inventions patented in or after 2017 were considered. With this distinction, the patents in the PBF technological domain fall into two categories: patents belonging to the main path – which can be subdivided into old HPPs (HPPs published before 2017), old LPPs (LPPs published before 2017), recent HPPs (HPPs published in 2017 or later) and recent LPPs (LPPs published in 2017 or later) – and other patents. Figure 1 presents a comprehensive patent citation network of the entire PBF domain, where nodes represent patents and unidirectional connections (edges) signify citations between patents. The node sizes are proportional to patent centrality, indicating their importance within the network. Patents belonging to the main path are colour-coded based on persistence and recency, while the remaining patents in the domain are shown in light grey. The colour of an edge corresponds to the colour of the citation source, meaning that if a patent cites an old HPP (red), the connection between them is coloured red. Three isolated regions containing patents are highlighted to emphasize distinct isolated areas of patented knowledge in the domain. The region of isolated patents in red has 1,265 isolated patents, the region of single-connection patents (patent pairs) has 96 patents, and the small, isolated clusters (3–8 patents) include a total of 31 patents. The presence of isolated patents in the three coloured regions suggests that they may be inventions tangential to the technology domain that currently do not contribute significantly to impactful advances.
One possible hypothesis for these patents being isolated is that they may represent specialized or niche applications that have not yet been integrated into the broader PBF domain. Alternatively, though less likely, they could represent early-stage innovations based on technological leaps that have not yet garnered widespread attention or adoption. As they stand now, these isolated patents seem to have, at best, limited influence on the overall advancement of the PBF domain, but future research or technological breakthroughs may change their relevance within
the network. Focusing now solely on the trajectories that constitute the main path of PBF, Fig. 2 illustrates the main path of the technology domain. This citation network consists of nodes and edges, with patents labelled from 1 to 88 and arranged horizontally by publication year (from 1984 to 2023). Unidirectional connections represent citations between patents. The legend indicates the relationship between node number and patent ID for recent patents relevant to this study (published in 2017 or later). Node shapes correspond to different clusters in the innovation contribution dimension, while node colours represent various clusters in the technical system dimension. Larger nodes signify high persistence patents, and smaller nodes denote low persistence patents.
Fig. 1. Patent citation network of the entire PBF technology domain.
This research classified patented inventions into six groups. Cluster C1, Materials and Alloys for Specific Applications, focuses on industry-specific material and alloy development. Cluster C2, AM Processes and Techniques, houses patents centered on PBF methods, subdivided into two sub-clusters: C2.1, AM Process Monitoring and Control, and C2.2, AM Materials and Structures. Cluster C3, Robotic Assembly and Manufacturing Systems, emphasizes patents on robotic assembly and manufacturing methods. Cluster C4, Heat Exchangers and Cooling Systems, is a small group concentrating on patents related to temperature regulation during manufacturing. Lastly, Cluster C5, Manufacturing Tooling and Fixtures, focuses on patents about manufacturing tooling and attachments.
To perform the analysis, we first conduct an overarching examination of the technical clusters and identify the aspects they aim to improve, followed by a more in-depth analysis focusing on individual patents and their specific innovations.
Fig. 2. The main path of PBF technological domain.
Table 1 presents a breakdown of PBF patents, with a significant emphasis on process monitoring and control (Cluster 2.1) featuring 18 out of 44 patents. Key areas of focus in this cluster are reliability (10 out of 18) and productivity (4 out of 18), highlighting the importance of consistent quality in PBF. Also notable is Cluster 2.2, featuring 13 patents devoted to materials and structures development. Here, the focus is on improving geometric flexibility (5 out of 13), reliability (5 out of 13), and properties flexibility (3 out of 13), emphasizing the recent PBF trend of optimizing components of diverse shapes, sizes, and properties while ensuring high quality. The remaining clusters (1, 3, 4, and 5) have a moderate to low presence, with inventions mainly focused on improving geometric flexibility, reliability, and properties flexibility in these areas. Patents in Cluster 3 – Robotic Assembly and Manufacturing Systems (4 out of 44 patents) – are focused on improving productivity and reducing cost. This trend suggests that PBF inventions in this area aim to enhance the efficiency and affordability of robotic systems and apparatuses used in assembly and manufacturing
processes. Heat Exchangers and Cooling Systems (Cluster 4) inventions are primarily focused on improving productivity (2 out of 4), with one patent contributing to reliability and the other to geometric flexibility. This indicates that PBF inventions in this area are targeting the optimization of fluid flow and temperature regulation during manufacturing processes to enhance productivity and accommodate varying component geometries. Clusters 1 (Materials and Alloys for Specific Applications) and 5 (Manufacturing Tooling and Fixtures) have the least presence in the patents – with 2 and 3 out of 44 patents, respectively. Cluster 1 emphasizes developing and applying tailored materials and alloys for targeted industries. These inventions focus on improving properties flexibility. This suggests that innovations in this area aim to overcome existing challenges and broaden the applicability of PBF techniques across various industries. Inventions with innovative potential in Tooling and Fixtures (Cluster 5) focus on improving geometric flexibility (2 out of 3) and properties flexibility (1 out of 3). This suggests that PBF inventions in this area aim to develop innovative tooling and fixtures to support manufacturing components with varying shapes, sizes, and properties. Overall, the findings suggest that PBF inventions aim to enhance the efficiency and quality of the manufacturing process while accommodating varying component geometries, sizes, and properties.

Table 1. Cross-tabulation of patent classifications along the two dimensions of analysis.

        Geometric     Reliability   Properties    Productivity   Cost
        flexibility                 flexibility
C1           0              1             1              0          0
C2.1         2             10             2              4          0
C2.2         5              5             3              0          0
C3           1              0             0              2          1
C4           1              0             1              2          0
C5           2              0             1              0          0
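The distribution in Table 1 can be tallied programmatically as a quick consistency check. The counts below are transcribed from the table; the row sums reproduce the cluster sizes discussed in the text (18 patents in Cluster 2.1, 13 in Cluster 2.2, 44 in total).

```python
# Rows: clusters; columns: performance-improvement dimensions (Table 1).
COLS = ["Geometric flexibility", "Reliability", "Properties flexibility",
        "Productivity", "Cost"]
TABLE = {
    "C1":   [0, 1, 1, 0, 0],
    "C2.1": [2, 10, 2, 4, 0],
    "C2.2": [5, 5, 3, 0, 0],
    "C3":   [1, 0, 0, 2, 1],
    "C4":   [1, 0, 1, 2, 0],
    "C5":   [2, 0, 1, 0, 0],
}

row_totals = {c: sum(v) for c, v in TABLE.items()}                 # patents per cluster
col_totals = [sum(v[i] for v in TABLE.values()) for i in range(len(COLS))]
grand_total = sum(row_totals.values())                             # 44 main path patents
```

The column totals also show that reliability (16 of 44) is the most frequent improvement target across clusters, consistent with the emphasis on consistent quality noted above.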
Advancements in Metal Alloys for Specific Applications (Cluster 1). Patent #59 details an improved nickel superalloy for Direct Metal Laser Melting, enhancing properties flexibility and reducing microcracking susceptibility. This patent emphasizes efforts to improve performance and reliability in powder bed fusion techniques. Patent #85 presents a novel nickel-based alloy for gas turbine applications, featuring high oxidation resistance, thermal mechanical fatigue strength, and a unique composition balancing mechanical, thermal, and corrosion properties. This patent highlights the potential for additive manufacturing methods in repair or refurbishment by addressing existing drawbacks, such as poor weldability. These patents demonstrate ongoing innovative efforts in the development of metal alloys within powder bed fusion, driving technology adoption across industries with specific requirements.
Advancements in PBF Process Monitoring and Control (Cluster 2.1). Advancements in Cluster 2.1/Productivity. Improvements in PBF process monitoring and control are apparent in patents #83, #69, #68, and #52. Patent #83 details a method for controlled solidification in 3D object manufacturing, ensuring quality and reproducibility. Patent #69 describes an in-situ monitoring and control device for Electron Beam Additive Manufacturing (EBAM) that detects, measures, and analyses features of interest in the EBAM process using sensors to maintain consistency, enabling the optimization of the manufacturing process for improved productivity. Patent #68 discloses a system and method for optimizing build time in AM machines; this invention streamlines the fabrication process by generating a dictionary of optimized scan parameter sets for various geometric structures using iterative learning control. Patent #52 presents a 3D printing system with multiple build modules and an inert atmosphere for uninterrupted printing, minimizing design constraints. Collectively, these inventions demonstrate a growing trend in the development of PBF process monitoring and control techniques aimed at enhancing productivity and efficiency. Advancements in Cluster 2.1/Geometric Flexibility. Patent #86 discloses a method that provides control data in an additive manufacturing apparatus, determining different regions to be solidified and defining a scanning sequence for the beam. This invention reduces stress and curl effects during the manufacturing process, resulting in higher precision and fewer defects in parts with complex shapes. Patent #48 discloses a system for large-scale manufacturing using a movable build unit, addressing challenges in scanning large angles and enabling the production of large, high-quality objects.
These patents show a trend towards enhancing geometric flexibility, allowing for components with diverse dimensions and complexities. Advancements in Cluster 2.1/Properties Flexibility. Patent #54 introduces a method for additive manufacturing that detects powder layer thickness and adjusts the energy beam accordingly. This innovation allows precise control of material characteristics by melting and re-melting layers based on detected thickness, enhancing final product accuracy and predictability. This patent emphasizes properties flexibility in PBF process monitoring and control, enabling products with tailored material characteristics for specific application requirements. Advancements in Cluster 2.1/Reliability. AM process monitoring and control advancements focus on enhancing reliability. Patent #87 discloses apparatuses that detect infrared radiation in electron beam powder bed fusion printers to adjust beam intensity and scanning rates, reducing errors and improving component quality. Patent #74 presents a method for non-destructively verifying additive manufactured part integrity, monitoring temperature changes to detect defects and ensure process reliability and high-quality components. Patent #73 discloses in-situ monitoring and feedback control for EBAM, maintaining process consistency and part quality reproducibility. Patent #63 estimates additive product quality during manufacturing using an imaging device, non-destructively identifying internal defects and enhancing process reliability. These patents, along with patents #58, #55, #51, #50, #49, and #45, demonstrate the growing trend of innovations aiming to improve reliability in additive manufacturing. Advancements in AM Materials and Structures (Cluster 2.2). Advancements in Cluster 2.2/Geometric Flexibility. Patent #88 discloses custom additively manufactured
core structures for transport structures like vehicles, enabling larger part production and increased design flexibility. Patent #65 introduces a method for reducing particulate material volume, shortening manufacturing processes, and increasing geometric flexibility for complex components. These patents, along with patents #61 and #60, highlight the growing trend of innovations in geometric flexibility for PBF materials and structures, enabling the production of diverse components for various industries. Advancements in Cluster 2.2/Properties Flexibility. Patent #84 describes a method for manufacturing composite structures with additively manufactured cores optimized for specific applications, varying properties like strength, stiffness, and ductility. This patent showcases the production of components with different materials, mechanical properties, and geometric flexibility. Patent #82 discloses a method for designing and manufacturing nodes for bonding components in transport structures, allowing efficient bonding with various materials and mechanical properties. Advancements in Cluster 2.2/Reliability. Patent #81 describes a method for developing sealing solutions for adhesive connections between additively manufactured components in transport structures, ensuring high-quality and reliable bonds by using a seal that provides a hermetically sealed enclosure for adhesive injection and curing. Patent #66 presents methods for manufacturing structures with augmented energy absorption properties for transport vehicles' collision components, improving safety and reliability by constructing 3D structures with spatially dependent features to absorb and redistribute crash energy. Advancements in Robotic Assembly and Manufacturing Systems (Cluster 3). Patents #67, #79, #75, and #77 showcase trends in robotic assembly and manufacturing systems. Patent #67 details methods for 3D printing objects with complex topology, enhancing geometric flexibility and reliability.
This allows the production of diverse, complex components and control of microstructure for a homogeneous distribution of crystal phases and metallurgical morphologies. Patent #79 presents a buffer block apparatus that securely connects nodes during processes, reducing custom fixturing needs and cutting costs. Patent #75 describes a modular attachment device for robotic assembly in transport structures, enhancing productivity by accommodating varying geometries. Advancements in Heat Exchangers and Cooling Systems (Cluster 4). Patents #62, #57, #76, and #78 showcase emerging trends in heat exchangers, cooling systems, and related technologies for regulating fluid flow and temperature during manufacturing. Patent #62 enhances geometric flexibility with a novel heat exchanger design featuring a 3D lattice structure and baffles, enabling components of varying shapes and sizes. Patent #57 details an additive manufacturing system with a coolant fluid dispenser, improving productivity, reliability, and process quality by reducing cooling time and temperature fluctuations. Patents #76 and #78 showcase heat exchanger advancements using additive manufacturing. Patent #76 details a layer-by-layer method, improving productivity and geometric flexibility by reducing residual powder removal time and enabling various shapes and dimensions. Advancements in Manufacturing Tooling and Fixtures (Cluster 5). Recent inventions in this area focus on tooling plates, actuators, and attachment devices. Patent #47 describes a method for producing corrodible tools with gradient properties such as corrosion rate, tensile strength, compressive strength, modulus, or hardness. This innovative
approach enables the creation of articles with distinct regions of varying properties from a single material, addressing the challenges of producing complex corrodible items without requiring separate components. This enhances efficiency and lowers the cost of corrodible tool manufacturing for industries like oil/gas. Furthermore, Patents #56 and #70 exemplify innovations in geometric flexibility. Patent #56 introduces a system for manufacturing composite structures using flexible, interlocking tooling plates that allow for diverse shapes and dimensions. This improves manufacturing efficiency and reduces costs. These patents collectively demonstrate the ongoing advancements in manufacturing tooling and fixtures, driving greater efficiency and flexibility in the powder bed fusion domain.
4 Conclusions

This study utilized a patent citation methodology to illuminate the most significant technology trajectories, reveal the most promising patented inventions, and present findings on the latest advancements and trends in powder bed fusion. It demonstrated PBF's distinctive focus on enhancing process monitoring and control, materials and structure development, and overall process reliability. The innovations were found to be primarily aimed at delivering high-quality output and accommodating varying component geometries, sizes, and properties while optimizing productivity and efficiency. Our observations include notable advancements in materials and alloys, tooling and fixtures, heat exchangers and cooling systems, robotic assembly and manufacturing systems, and process monitoring and control techniques, all of which underscore the ongoing efforts to overcome existing challenges and expand PBF's applicability across various industries. These findings not only drive technology adoption but also stress the importance of geometric and properties flexibility, ensuring consistent performance across diverse applications. This research highlights the increasing trend of PBF inventions and their potential to transform industries with demanding requirements, such as aerospace. By addressing existing drawbacks and enhancing the manufacturing process's efficiency, flexibility, and reliability, the advancements revealed in this study further reinforce the notion of PBF as a key metal manufacturing technology across multiple industries.

Acknowledgements. This work was supported by Fundação para a Ciência e Tecnologia (FCT), through IDMEC, under LAETA project UIDB/50022/2020.
The authors also gratefully acknowledge the funding of Sustainable Stone by Portugal – Valorisation of Natural Stone for a digital, sustainable, and qualified future, nº 40, proposal nº C644943391–00000051, co-financed by the PRR Recovery and Resilience Plan, Portuguese Republic, and by the European Union (Next Generation EU). This work has also been supported by the European Union under Next Generation EU, through a grant of the Portuguese Republic's Recovery and Resilience Plan (PRR) Partnership Agreement, within the scope of the project PRODUTECH R3 – "Agenda Mobilizadora da Fileira das Tecnologias de Produção para a Reindustrialização", aiming at the mobilization of the production technologies industry towards the reindustrialization of the manufacturing industrial fabric (Project ref. nr. 60 – C645808870–00000067).
A. Alves de Campos and M. Leite
Latest Technological Advances and Key Trends in Powder Bed Fusion
Integration of Additive Manufacturing in an Industrial Setting: The Impact on Operational Capabilities Christopher Gustafsson(B) , Anna Sannö , Koteshwar Chirumalla , and Jessica Bruch Mälardalen University, Hamngatan 15, 63105 Eskilstuna, Sweden {christopher.gustafsson,anna.sanno,koteshwar.chirumalla, jessica.bruch}@mdu.se
Abstract. The purpose of this paper is to explore changes in operational capabilities and their impact when integrating additive manufacturing (AM) into a traditional manufacturing company. This is exemplified by a sub-system component (gears and shafts in a gearbox) for automotive applications that is developed and manufactured at a manufacturing company in the commercial vehicle sector. The research was set up as a case study consisting of semi-structured interviews, informal meetings, observations, company documents, reports, presentations, and field notes. The literature and findings from the empirical data converged into 57 themes of operational capabilities that were categorized into six aggregated dimensions. The results suggest two emerging dimensions, namely stakeholders and strategy, building upon an existing theoretical framework. Integrating AM into traditional industrial settings requires several changes in operational capabilities. It is essential to consider the following priorities: 1) gain understanding and knowledge in design for AM, 2) build a robust AM infrastructure as an integral part of the existing infrastructure, 3) evaluate AM process changes in product development and production, 4) establish an internal AM team with at least one person working full-time with AM, 5) establish collaborations with suitable AM research partners, and 6) conduct AM management and evaluate the business impact not only in the short term but also in the long term. Future research should investigate operational capabilities from other use cases, dynamic capabilities in the same or similar contexts, and translate the required operational capabilities into guidelines and best practices for managers and other decision-makers in the manufacturing industry.
Keywords: Industrial 3D printing · Ordinary capabilities · Facilitation · Adoption · Implementation
© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 590–604, 2023. https://doi.org/10.1007/978-3-031-43666-6_40

1 Introduction
Additive manufacturing (AM) is an essential ingredient in the future of manufacturing, providing manufacturing companies with essential capabilities for gaining competitive advantages [1]. AM is defined as the “…process of joining materials to make parts from 3D model data, usually layer upon layer, as opposed to subtractive manufacturing and formative manufacturing technologies.” [2]. Some benefits of AM include part complexity, instant assemblies, part consolidation, mass customization, freedom of design, lightweight parts, and on-demand manufacturing [3, 4]. AM provides the ability to manufacture complex products that seemed impossible with traditional manufacturing. A complex product is made up of multiple subsystems, modules, or components, which work together to provide a sophisticated set of functionalities or capabilities [5, 6]. To utilize the identified benefits, effective integration of AM into existing product development and production systems is critical. To get AM integrated into the manufacturing industry, manufacturing companies should have a holistic understanding of how AM affects several phases of product development and production [3, 4, 7] concerning both technical and managerial aspects [8, 9]. Integration refers to interaction processes involving information exchange and co-production to combine a new idea or technology into the existing systems, structures, and processes of functional organizations or individual industry professionals [10]. Information exchange and co-production are essential to ensure the integration of AM into product development and production [11]. Previous research has highlighted several challenges of integrating AM [12–18]. One major challenge concerns integrating AM into traditional product development and production from a long-term perspective to enhance a manufacturing company’s competitive advantage (e.g., reduced waste, reduced costs, reduced lead time, increased profit, satisfied customers). Engineers and managers in manufacturing companies still have difficulties understanding the full impact of integrating AM into traditional industrial settings and truly gaining a long-term competitive advantage.
Capability theory offers a promising lens, since it emphasizes the role of a company’s resources in creating and sustaining competitive advantage both in the short term and in the long term [19]. The integration of AM-based operational capabilities presents decision-makers with multiple options regarding manufacturing resources, collaboration with stakeholders, and delivery of technological improvements to customers [20, 21], while simultaneously requiring an overview of the involved industry professionals, organizational structures, current work processes, and other work-related resources [22]. The current body of literature lacks research that uses capability theory as a means of overcoming this challenge and understanding the impact of operational capabilities when integrating AM [20, 22]; in particular, concrete changes in operational capabilities when integrating AM into a traditional manufacturing company have received little attention. The purpose of this paper is therefore to explore changes in operational capabilities and their impact when integrating AM into a traditional manufacturing company.
2 Frame of Reference
2.1 Integration of Additive Manufacturing
To integrate AM, a manufacturing company has presumably adopted and implemented the technology in the organization to some degree. This means, for example, that AM technology is used for aesthetic and functional prototyping, for spare parts for aftermarket purposes, or for producing tools, fixtures, and jigs in production.
Some research [23–25] related to product development and production of automotive applications (gear stage and gearbox housing) for commercial vehicles showcases different results achieved with AM: lightweight designs with lattice structures and topology optimization, improved quality, and integrated functionality. Getting AM knowledge integrated into a traditional manufacturing company should be facilitated step by step, which has so far only been sparsely investigated [16–18]. In this research, integration is viewed as a simplified three-stage sequence (pre-integration, integration, and post-integration) [26]. The integration process can be seen as a lifecycle model [26] that was later extended into an implementation process focusing on advanced manufacturing technology [27] and, more recently, on AM [28]. It has been suggested that careful planning is needed and that several factors should be understood before, during, and after the implementation process [18, 28].
2.2 Capabilities for Integrating Additive Manufacturing
Manufacturing companies possess two kinds of capabilities, namely ordinary capabilities [19, 29], also known as operational capabilities [29], and dynamic capabilities [19]. Ordinary capabilities originate from the field of strategic management, which emphasizes the role of a firm’s resources and capabilities in creating and sustaining competitive advantage [19]. This resource-based view of the firm has its roots in economics and has lately been developed and applied in operations management [19]. Since ordinary capabilities can be seen as operational [19], the two terms are used interchangeably here [29]. Against this background, operational capability refers to the combination of skill sets, facilities and equipment, processes and routines, and administration and coordination developed by a company to produce and sell a defined set of products and services [19, 29].
• Skill sets refer to the specific abilities, knowledge, and expertise that an individual must possess to perform a particular task.
• Facilities and equipment refer to the physical resources necessary for the operations of a company.
• Processes and routines refer to the systematic and repeated actions used to accomplish tasks or achieve goals.
• Administration and coordination refer to the management of resources and the coordination of efforts to achieve desired outcomes efficiently and effectively.
Some traditional capabilities [5, 6, 30, 31], such as concept development and change management, are still relevant and used in the AM context [32]. Based on the presented frame of reference, traditional and AM-related capabilities have been identified in the literature (Fig. 1).
Skill sets
• Lattice structures, topology optimization, avoiding anisotropy, etc. [3, 4, 7, 32, 36]
• Part consolidation [3–5, 30, 31, 36]
• AM expertise & experience [3, 4]

Facilities & equipment
• AM-enabled software [4, 32, 36]
• AM system [3, 4, 37]
• AM material [3, 4, 37]
• AM lab [3, 4, 37]
• Qualification equipment [7]

Processes & routines
• AM product development [5, 6, 30, 31]
• AM production development [5, 6]
• Pre-processing [3, 4, 5, 30–32, 36, 37]
• AM production process [3, 4, 6]
• Post-processing [3, 4]
• AM parts qualification process [4, 7]
• AM material handling [4, 37]
• AM patents & standards [3]

Administration & coordination
• Technical planning [3–6, 30–32, 36, 37]
• Managerial planning [4–7, 21, 33, 34]
• Make or buy [21, 28, 33, 34]

Miscellaneous
• Business impact [4, 5, 7, 21, 33, 34]
• Engineering training [3, 4]
• Technology uncertainty [3, 33]
• Technology fit [35]
• Digitization & digitalization [3, 21, 33, 34]
• New customers & suppliers [5, 6]
• AM service provider [3, 4, 37]
• AM machine provider [3, 4, 37]
• Research institutes & universities [5, 6]

Fig. 1. Traditional and AM-related operational capabilities found in the literature
3 Research Methodology
This research was set up collaboratively with one large heavy equipment manufacturing company in the commercial vehicle sector that has more than 10,000 employees and factories located in Sweden and around the world. Its business involves the development and production of heavy equipment, machines, and tools that are used globally by its customers. The company was selected through opportunistic sampling [38], based on the following criteria: 1) the company develops and manufactures complex products, 2) the company utilizes AM in different product development projects, and 3) the company has some experience with AM. Furthermore, this research was conducted following case study guidelines (Yin, 2018), as a case study allows for an in-depth exploration of the capabilities required when integrating AM into large heavy equipment manufacturing companies based on a selected use case. A use case can be referred to as the “…application of a technology for a specific operational purpose.” [p. 568, 35]. The research is exemplified by a sub-system component (i.e., a gearbox) for automotive applications that is developed and manufactured at the manufacturing company. The gearbox was selected based on the following criteria: 1) it is a product that currently is or has been developed and manufactured by the company, and 2) it is highly driven by customer requirements. Machine elements such as gears, shafts, and gear stages or gear trains were selected for further data collection, analysis, and synthesis. Data was collected using semi-structured interviews, as they provided descriptions of the studied phenomenon [40]. Five semi-structured interviews were conducted with different specialists (Respondents A–E) in product development and production of gears and shafts (Table 1).
Table 1. Summary of the respondents
| Designation  | Job role                     | Area of expertise | Functional organization | Work experience (years) | Interview length |
|--------------|------------------------------|-------------------|-------------------------|-------------------------|------------------|
| Respondent A | Product development engineer | Gears and shafts  | Technology (R&D)        | 28                      | 57:46            |
| Respondent B | Design engineer              | Gears             | Technology (R&D)        | 16                      | 01:17:50         |
| Respondent C | Manufacturing engineer       | Gears             | Operations              | 25                      | 01:18:50         |
| Respondent D | Product development engineer | Shafts            | Technology (R&D)        | 35                      | 01:14:52         |
| Respondent E | Production engineer          | Shafts            | Operations              | 23                      | 02:05:58         |
The respondents were asked a set of questions related to 1) product development of the gears and shafts, 2) production of the gears and shafts, 3) the interface between product development and production, 4) AM in general and AM-related strategy, and 5) general questions regarding new concepts of the gears and shafts designed for AM. All interviews were recorded (video and audio) and transcribed manually. To facilitate the interviews, 12 concepts of gears, shafts, and gear stages were developed (four concepts each for the gear, the shaft, and the gear stage) following action design research guidelines [41], using Creo 9 CAD software with AM-related applications. The purpose of the concepts was to showcase to the respondents the potential of integrating AM and to gain their insights on the matter. The concepts highlighted the following designs of the gear, shaft, and gear stage: 1) the original/current design, 2) new concepts with lattice structures in the design, 3) new concepts with topology optimization in the design, and 4) new concepts with both lattice structures and topology optimization in the design. Additional data was collected over two years from informal meetings and discussions, observations (e.g., tours in some of the company’s factories, webinars, and workshops), company documents, reports, presentations, and field notes. Reviewing the literature served to build up the frame of reference; the selection of relevant literature was based on fit for purpose [42] and collected from Scopus and Web of Science. In the analysis of the collected qualitative data, a “winnow” approach [43] was used, meaning a “…process focused in on some of the data and disregarding other parts of it” [p. 245, 40]. This was used, when applicable, to remove confidential data before the analysis and synthesis and to remove collected data that did not provide substance in fulfilling the research purpose.
The current operational capabilities were identified in the data collection and categorized into themes and aggregated dimensions following the Gioia method [44]. The same analysis process was conducted for identifying and categorizing AM capabilities from the literature. Thereafter, the identified current
operational capabilities and AM capabilities were compiled highlighting similarities and differences. The changes in operational capabilities and the impact when integrating AM into a traditional manufacturing company were then presented in a framework and then discussed.
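The thematic aggregation described above, in which first-order concepts are grouped into second-order themes and themes into aggregated dimensions following the Gioia method, can be sketched as a simple data transformation. This is only an illustrative sketch: the codebook entries below are a small sample drawn from Fig. 2, not the study's full coding scheme.

```python
from collections import defaultdict

# Illustrative sample codebook (a few entries from Fig. 2, not the full
# coding scheme): each first-order concept is tagged with its
# second-order theme and its aggregated dimension.
coded_concepts = [
    ("Knowledge of materials and manufacturability",
     "Understanding & knowledge", "Skill sets"),
    ("Interpreting design requirements specifications",
     "Interpretation", "Skill sets"),
    ("Software for design verification/validation",
     "Utilize software", "Facilities & equipment"),
    ("Conducting quality control procedures in production",
     "Work in daily production", "Processes & routines"),
]

def aggregate(concepts):
    """Group first-order concepts into themes, and themes into dimensions."""
    dimensions = defaultdict(lambda: defaultdict(list))
    for concept, theme, dimension in concepts:
        dimensions[dimension][theme].append(concept)
    # Convert nested defaultdicts to plain dicts for readability.
    return {dim: dict(themes) for dim, themes in dimensions.items()}

structure = aggregate(coded_concepts)
# Number of distinct second-order themes per aggregated dimension.
theme_counts = {dim: len(themes) for dim, themes in structure.items()}
```

On the full data set, the same traversal would yield the theme counts per dimension reported in the paper; here it merely shows the shape of the concept-theme-dimension hierarchy.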
4 Empirical Findings
4.1 Current Practices in Product Development and Production
Current practices in product development and production of gears and shafts were highlighted by the respondents in the interviews; a simplified overview is presented here. In product development of gears and shafts, the need to develop a new gearbox or improve an existing one is the starting point. A machine concept manager, together with one or two machine concept specialists, develops an idea concerning the gearbox. Thereafter, a cross-functional project team is assembled to design a project plan and allocate the required resources. Together, the team develops a design requirements specification for the gearbox before starting design activities. The main design process steps include a range of activities, from defining the gear tooth design and additional forms and shapes of the gears and shafts to creating 2D drawings and sending them to final production. Additionally, industry professionals from production are frequently involved in the design process to provide feedback and insights regarding manufacturability. Once the main design process steps are completed, technical reports are written, and important information is stored in the management system. When designing new gears and shafts, it was mentioned that certain knowledge and understanding were needed related to materials science, mathematics, and solid mechanics. However, according to one respondent, that was not enough: “You need feeling and aptitude for mechanical design, which is difficult to define. You can have the world’s best tools [in terms of] CAD and simulations, but you somehow need to have an eye for how it [the bigger picture] should look like and come together” (Respondent D). This feeling and aptitude for mechanical design were reflected in the respondents pointing to relevant work experience gained from previous similar development projects.
In the final production of gears and shafts, there is usually one production system for gears and a different production system for shafts. This is not always the case, since there are several variants of gears and shafts that require different variants of production systems. As a starting point, the production functional organization receives an order to produce a certain variant of gears and shafts and retrieves the associated 2D drawings from the management system. A production manager assembles a cross-functional project team, designs a production plan, and allocates the required resources. Together, the team develops a production requirements specification and creates (or acquires) CAM models for the ordered gears and shafts. Additionally, an assembly requirements specification is developed, and assembly instructions are created (or acquired). Both requirements specifications are then evaluated based on the selected gearbox. The main production process steps include a range of activities, from retrieving raw materials from storage to assembling the gears and shafts into gear stages, usually during the assembly of the whole gearbox. In between the production process steps, different types of transportation (e.g., conveyor belt, truck transportation via EU pallet, delivery to an external
supplier) are conducted. If certain production process steps are not available in-house, collaboration with external suppliers is required. Once the main production process steps are completed, technical reports are written, and important information is stored in the management system. However, it was mentioned by the respondents that a completely new production system rarely needs to be developed. Rather, current production systems need to be set up differently, or changes in production process steps need to be incorporated to account for the different variants of gears and shafts. “It is important to have worked with previous similar production systems for a while so that you know and understand how things are affected by each other” (Respondent E). This work experience and understanding were deemed helpful during the daily production of gears and shafts, and perhaps even more so when developing new production systems. Regarding the interface between product development and production, the respondents shared the view that industry professionals from production are frequently involved, especially in the early phases of product development. However, the respondents working in production expressed that the high frequency of involvement was time-consuming and that their daily work in production made it difficult for them to be involved regularly. All respondents mentioned that the involvement of industry professionals from production in product development was crucial for the success of producing the gears and shafts, since they could share knowledge, and work experience with such production systems was essential. “If there is [technological] development that you do not know as a mechanical designer, you would want to know about its advantages and disadvantages” (Respondent B). The knowledge-sharing exchange was perceived as positive, since everyone possessed their own specialist knowledge and a variety of gained experiences.
4.2 Current Practices in Additive Manufacturing Regarding Gears and Shafts
It was observed that the heavy equipment manufacturing company had already started its AM journey by exploring AM for other operational purposes, such as 1) aesthetic and functional prototyping, 2) material investigations, 3) AM-produced tools for dealers and for use in production, 4) AM-produced jigs and fixtures for use in assembly, 5) AM-produced spare parts for replacing phased-out parts, and 6) several 3D printers available for in-house use. In the context of gears and shafts, AM has been used mainly for rapid prototyping in the development of new concepts and of changes in current production systems. The prototypes have mainly been printed in plastic material for aesthetic and functional testing. However, the use of AM has not gone beyond prototyping to the final production of gears and shafts, since the respondents and their colleagues have yet to see the full potential of AM. Additional observations revealed that the heavy equipment manufacturing company was in the process of defining a business strategy for AM and planning to integrate it into current practices in the near future. The respondents suggested the following activities they wanted to see with AM concerning gears and shafts: 1) clarifying the advantages and disadvantages of AM parts, 2) determining when AM should be selected over traditional manufacturing, and 3) verifying and validating AM materials (both plastic and metal) with respect to the desired mechanical and material properties in new design concepts. The respondents, despite their overall low level of AM knowledge, highlighted
several opportunities they saw with AM for gears and shafts, including a reduction in lead times to gain profit, production of different demonstrators, low-volume production, rapid prototyping for early conceptualization, and AM prototype testing for the prevention of eventual problems in final production. However, the respondents also had several concerns with AM, including the difficulty of seeing mass production with AM, the strength of AM materials, the accuracy and precision of AM systems, and the time and effort needed to learn and get training in what was, for them, a new technology. When asked whether gears and shafts could be produced with AM, one respondent answered: “Yes absolutely, I don’t see any problems except that you have to make the journey. In my opinion, there should be materials that are adapted for gears [and shafts]. We have to start in the right place and make a request. It may also be necessary to replace [or make minor or major changes to] our current workshop” (Respondent C). Overall, the respondents seemed positive about the idea of producing gears and shafts with AM, but they urged the need to see evidence of clear benefits.
5 Discussion
Based on the collected empirical data, 27 themes were categorized into the operational capability framework, in which two additional dimensions emerged, namely stakeholders and strategy (Fig. 2 and Fig. 3). Thereafter, from the analysis of the empirical data and the literature, a framework was developed synthesizing the changes in operational capabilities and their impact when integrating AM into a traditional industrial setting (Fig. 4). Integrating AM into the product development and production of gears and shafts requires certain changes to current operational capabilities. A total of six priorities are suggested, namely design for AM, AM infrastructure, AM process changes, AM team, AM research partners, and AM management. In terms of skill sets, the main observed challenge was that current industry professionals were still embedded in the traditional way of designing and producing gears and shafts. It is suggested that efforts be made in exploring design for AM of the gears and shafts. Benefits could be gained, since parts can be consolidated from several into a fewer number of parts (e.g., gears can be combined with shafts) and made lightweight with lattice structures and topology optimization [4, 23–25, 32]. Embracing AM might take some time due to potential resistance in the willingness to learn new skills [34]. Current facilities and equipment, for example software, partially support AM (e.g., part files can be exported as STL files, and software for 3D printers was available) or do not account for AM at all (e.g., software for designing gears). It is suggested to evaluate the current software and, if needed, acquire AM-enabled software that considers the whole AM workflow. The literature highlights the importance of having a robust AM infrastructure in which AM facilities and equipment are an integral part of the existing infrastructure [3, 21, 33, 34].
Equally important is the hardware: AM materials, AM systems, an AM lab for R&D, and qualification equipment for AM parts [3, 4, 7, 23–25, 37]. Such hardware could be acquired through services provided by, and collaboration with, AM service and machine providers [3, 4]. Regarding processes and routines, the industry professionals had difficulty understanding which work processes could be affected by AM and to what
extent. In the investigation of the gears and shafts, some production process steps (e.g., turning and drilling) and some product development activities (e.g., traditional manufacturing features in the design) could be eliminated [3, 21, 33, 34]. Thereby, AM impacts gears and shafts in terms of new designs and changes to current production systems [23–25]. It is suggested to explore AM process changes to gain an understanding of the changes in current product development and production systems.
[Fig. 2 presents a Gioia-style data structure linking 1st-order concepts to 2nd-order themes and aggregated dimensions. Themes: understanding & knowledge, interpretation, systems thinking, theory into practice, and practice makes perfect (skill sets); physical/virtual workspace, utilize hardware, utilize software, and location of manufacture (facilities & equipment); work in daily production, transportation, process improvement, and mechanical engineering (processes & routines).]

Fig. 2. Operational capabilities identified from the empirical data
Concerning administration and coordination, it was observed that no one was working full-time with AM, consequently leading to, for example, limited internal collaboration on AM. To begin integrating AM, sufficient resources must be
Integration of Additive Manufacturing
[Fig. 3 presents a Gioia-style data structure linking 1st-order concepts to 2nd-order themes and aggregated dimensions. Themes: decision-making, knowledge sharing, communication, build relationships, collaboration, resource allocation, and involvement (administration & coordination); internal/external customers, internal/external suppliers, and employee roles (stakeholders); regulatory influence, technology foresight, technology investments, and business case development (strategy).]

Fig. 3. Operational capabilities identified from the empirical data
provided to individual industry professionals or entire functional organizations so that AM can be included in, for example, technical and managerial planning [3, 4, 7, 21, 32–34, 36]. It is suggested to identify industry professionals who are interested in AM so that a cross-functional AM team can be established. Regarding stakeholders, it was observed that there was a lack of in-house AM experts. The involvement of AM experts is essential to get AM integrated into the product development and production of gears and shafts [3, 4, 14–18]. The literature highlighted the
importance of having AM research partners (e.g., other manufacturing companies, consultancy firms, research institutes, and universities) with AM capabilities suitable for gears and shafts [3, 4]. It is suggested to identify suitable AM research partners and initiate collaboration, and perhaps, in the long term, to build up AM expertise in-house.
[Fig. 4 is a matrix whose rows are the six priorities (design for AM, AM infrastructure, AM process changes, AM team, AM research partners, and AM management) and whose columns are the six aggregated dimensions (skill set, facilities & equipment, processes & routines, administration & coordination, stakeholders, and strategy); the cells list the associated operational-capability findings, e.g., part consolidation, lattice structures, topology optimization, AM-enabled software, AM system, AM material, AM lab, pre- and post-processing, AM parts qualification, make or buy, AM service and machine providers, research institutes, business impact, and technology fit.]

Fig. 4. Changes in operational capabilities when integrating AM into traditional industrial settings. Operational capabilities are found in the literature (blue), the empirical data (purple), and both in the literature and the empirical data (yellow).
Referring to strategy, it was observed that industry professionals had difficulty understanding the long-term business impact of AM. To date, AM has been used mainly for rapid prototyping in the development of new concepts and of changes to the current production systems for the gears and shafts. Nowadays, such efforts are common practice to reduce costs and lead time, but they do not necessarily provide additional benefits for the customers. It is important to foresee the technological development of
AM to be able to capture new business opportunities for next-generation gearboxes [3, 23–25, 33, 34]. The next steps would be to conduct AM management: for example, evaluate the gears and shafts in terms of changes in cost, lead time, supply chain, product development activities, and production systems, and decide whether AM is the right technological fit, gradually building up a business case [4–7, 21, 33–35]. However, due to the complexity of AM, such investigations must be handled case by case, and critical reflection is needed before generalizing any results [3, 4].
6 Conclusions, Limitations, and Future Work

To summarize, integrating AM into traditional industrial settings requires several changes in operational capabilities. It is essential to consider the following next steps: 1) gain understanding and knowledge of design for AM, 2) build a robust AM infrastructure as an integral part of the existing infrastructure, 3) evaluate AM process changes in product development and production, 4) establish an internal AM team with at least one person working full-time with AM, 5) establish collaborations with suitable AM research partners, and 6) conduct AM management and evaluate the business impact not only short-term but also long-term. Additionally, the literature and the findings from the empirical data were synthesized into a framework covering six aggregated dimensions. The results suggest the emergence of two new dimensions of operational capabilities, namely stakeholders and strategy, building upon the current framework.

This research might be useful for managers and other decision-makers in understanding the effort needed to integrate AM, a digital manufacturing technology, and to plan long-term strategies. The resource-based view provides insights into the specific operational capabilities needed in this endeavor. The results of this research can be transferred to other traditional industrial settings; however, critical reflection is required to judge whether this knowledge is suitable before applying it. Moreover, the nature of AM highlighted the emergence of digital operational capabilities (e.g., design for AM, AM infrastructure). However, the evidence is still inconclusive and should be reflected upon critically; future research is needed.

The selected use case was limited to gears, shafts, and gear stages in a gearbox. Additional use cases should be studied to further enhance generalizability.
Only a handful of participants provided data for this research; future studies should include more industry professionals from other relevant roles (e.g., managers) and functional organizations (e.g., marketing, maintenance, remanufacturing). Further on, future research should investigate 1) the other parts and assemblies within a gearbox, as well as the surrounding parts and assemblies directly and indirectly connected with the gearbox, 2) operational capabilities from other case studies and, if applicable, dynamic capabilities as a means of facilitating the integration of AM, 3) the relationships between the identified operational capabilities and, if applicable, dynamic capabilities, 4) digital operational capabilities, their definition and meaning, in an industrial context, and 5) the transformation of the operational capabilities into guidelines and best practices for managers and other decision-makers in the manufacturing industry.
Acknowledgment. This research project has been funded by the Knowledge Foundation within the framework of the ARRAY++ Research School and the participating companies, and Mälardalen University, Sweden. The people who contributed their knowledge are gratefully thanked for making these experiences available.
References

1. Culot, G., Orzes, G., Sartor, M., Nassimbeni, G.: The future of manufacturing: a Delphi-based scenario analysis on Industry 4.0. Technol. Forecasting Soc. Change 157, 120092 (2020). https://doi.org/10.1016/j.techfore.2020.120092
2. ISO/ASTM 52900:2021 Additive manufacturing—General principles—Terminology. ISO/ASTM International, Switzerland (2021)
3. Gibson, I., Rosen, D., Stucker, B., Khorasani, M.: Additive Manufacturing Technologies, 3rd edn. Springer Nature, Switzerland (2021). https://doi.org/10.1007/978-3-030-56127-7
4. Diegel, O., Nordin, A., Motte, D.: A Practical Guide to Design for Additive Manufacturing. Springer Nature, Singapore (2020). https://doi.org/10.1007/978-981-13-8281-9
5. Ulrich, K.T., Eppinger, S.D.: Product Design and Development, 6th edn. McGraw-Hill Education, United States of America (2016)
6. Bellgran, M., Säfsten, K.: Production Development: Design and Operations of Production Systems. Springer-Verlag, London (2010)
7. Reiher, T., Lindemann, C., Jahnke, U., Deppe, G., Koch, R.: Holistic approach for industrializing AM technology: from part selection to test and verification. Prog. Additive Manuf. 2(1–2), 43–55 (2017). https://doi.org/10.1007/s40964-017-0018-y
8. Rikalovic, A., Suzic, N., Bajic, B., Piuri, V.: Industry 4.0 implementation challenges and opportunities: a technological perspective. IEEE Syst. J. 16(2), 2797–2810 (2022). https://doi.org/10.1109/JSYST.2020.3023041
9. Bajic, B., Rikalovic, A., Suzic, N., Piuri, V.: Industry 4.0 implementation challenges and opportunities: a managerial perspective. IEEE Syst. J. 15(1), 546–559 (2021). https://doi.org/10.1109/JSYST.2020.3023041
10. Gustavsson, M., Säfsten, K.: The learning potential of boundary crossing in the context of product introduction. Vocat. Learn. 10, 235–252 (2017). https://doi.org/10.1007/s12186-016-9171-6
11. Yi, L., Gläßner, C., Aurich, J.C.: How to integrate additive manufacturing technologies into manufacturing systems successfully: a perspective from the commercial vehicle industry. J. Manuf. Syst. 53, 195–211 (2019). https://doi.org/10.1016/j.jmsy.2019.09.007
12. De Lima, A.F., et al.: The "V" model for decision analysis of additive manufacturing implementation. J. Manuf. Technol. Manag. (2023). https://doi.org/10.1108/JMTM-10-2022-0377
13. Gustafsson, C., Sannö, A., Bruch, J., Chirumalla, K.: Exploring challenges in the integration of additive manufacturing. In: Kim, D.Y., von Cieminski, G., Romero, D. (eds.) Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action. APMS 2022. IFIP AICT, vol. 663, pp. 370–379. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16407-1_44
14. Chaudhuri, A., Rogers, H., Soberg, P., Pawar, K.S.: The role of service providers in 3D printing adoption. Ind. Manag. Data Syst. 119(6), 1189–1205 (2019). https://doi.org/10.1108/IMDS-08-2018-0339
15. Martinsuo, M., Luomaranta, T.: Adopting additive manufacturing in SMEs: exploring the challenges and solutions. J. Manuf. Technol. Manag. 29(6), 937–957 (2018). https://doi.org/10.1108/JMTM-02-2018-0030
16. Deradjat, D., Minshall, T.: Implementation of rapid manufacturing for mass customization. J. Manuf. Technol. Manag. 28(1), 95–121 (2017). https://doi.org/10.1108/JMTM-01-2016-0007
17. Rylands, B., Böhme, T., Gorkin, R., III, Fan, J., Birtchnell, T.: The adoption process and impact of additive manufacturing on manufacturing systems. J. Manuf. Technol. Manag. 27(7), 969–989 (2016). https://doi.org/10.1108/JMTM-12-2015-0117
18. Mellor, S., Hao, L., Zhang, D.: Additive manufacturing: a framework for implementation. Int. J. Prod. Econ. 149, 194–201 (2014). https://doi.org/10.1016/j.ijpe.2013.07.008
19. Teece, D.J.: A capability theory of the firm: an economics and (strategic) management perspective. N. Z. Econ. Pap. 53(1), 1–43 (2019). https://doi.org/10.1080/00779954.2017.1371208
20. Holmström, J., Liotta, G., Chaudhuri, A.: Sustainability outcomes through direct digital manufacturing-based operational practices: a design theory approach. J. Clean. Prod. 167, 951–961 (2017). https://doi.org/10.1016/j.jclepro.2017.03.092
21. Ruffo, M., Tuck, C., Hague, R.: Make or buy analysis for rapid manufacturing. Rapid Prototyping J. 13(1), 23–29 (2007). https://doi.org/10.1108/13552540710719181
22. Roscoe, S., Cousins, P.D., Handfield, R.: The microfoundations of an operational capability in digital manufacturing. J. Oper. Manag. 65, 774–793 (2019). https://doi.org/10.1002/joom.1044
23. Barreiro, P., Armutcu, G., Pfrimmer, S., Hermes, J.: Quality improvement of an aluminum gearbox housing by implementing additive manufacturing. Forsch. Ingenieurwes. 86, 605–616 (2022). https://doi.org/10.1007/s10010-021-00541-3
24. Barreiro, P., Bronner, A., Hoffmeister, J., Hermes, J.: New improvement opportunities through applying topology optimization combined with 3D printing to the construction of gearbox housings. Forsch. Ingenieurwes. 83, 669–681 (2019). https://doi.org/10.1007/s10010-019-00374-1
25. Fiedler, P., et al.: Additive manufacturing technologies for next-generation powertrains. In: 10th International Electric Drives Production Conference (EDPC), pp. 1–8. IEEE, Ludwigsburg, Germany (2020). https://doi.org/10.1109/EDPC51184.2020.9388196
26. Voss, C.: Implementation: a key issue in manufacturing technology: the need for a field of study. Res. Policy 17(2), 55–63 (1988). https://doi.org/10.1016/0048-7333(88)90021-2
27. Small, M.H., Yasin, M.M.: Advanced manufacturing technology: implementation policy and performance. J. Oper. Manag. 15, 349–370 (1997). https://doi.org/10.1016/S0272-6963(97)00013-2
28. Niaki, M.K., Nonino, F.: Selection and implementation of additive manufacturing. In: The Management of Additive Manufacturing. Springer Series in Advanced Manufacturing. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-56309-1_7
29. Wu, S.J., Melnyk, S.A., Flynn, B.B.: Operational capabilities: the secret ingredient. Decis. Sci. 41(4), 721–754 (2010). https://doi.org/10.1111/j.1540-5915.2010.00294.x
30. Mott, R.L., Vavrek, E.M., Wang, J.: Machine Elements in Mechanical Design, 6th edn. Pearson, New York (2018)
31. Ullmann, D.G.: The Mechanical Design Process, 4th edn. McGraw-Hill, New York (2010)
32. Wiberg, A., Persson, J., Ölvander, J.: Design for additive manufacturing – a review of available design methods and software. Rapid Prototyping J. 25(6), 1080–1094 (2019). https://doi.org/10.1108/RPJ-10-2018-0262
33. Friedrich, A., Lange, A., Elbert, R.: Make-or-buy decisions for industrial additive manufacturing. J. Bus. Logist. 43(4), 623–653 (2022). https://doi.org/10.1111/jbl.12302
34. Friedrich, A., Lange, A., Elbert, R.: Supply chain design for industrial additive manufacturing. Int. J. Oper. Prod. Manag. (2022). https://doi.org/10.1108/IJOPM-12-2021-0802
35. Maghazei, O., Lewis, M.A., Netland, T.H.: Emerging technologies and the use case: a multi-year study of drone adoption. J. Oper. Manag. 68, 560–591 (2022). https://doi.org/10.1002/joom.1196
36. Renjith, S.C., Park, K., Okudan Kremer, G.E.: A design framework for additive manufacturing: integration of additive manufacturing capabilities in the early design process. Int. J. Precis. Eng. Manuf. 21, 329–345 (2020). https://doi.org/10.1007/s12541-019-00253-3
37. Gao, W., et al.: The status, challenges, and future of additive manufacturing in engineering. Comput. Aided Des. 69, 65–89 (2015)
38. Patton, M.Q.: Qualitative Research & Evaluation Methods, 3rd edn. SAGE, London (2002)
39. Yin, R.K.: Case Study Research: Design and Methods, 6th edn. SAGE, Thousand Oaks, CA (2018)
40. Creswell, J.W.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th edn. SAGE, Thousand Oaks, CA (2014)
41. Sein, M.K., Henfridsson, O., Purao, S., Rossi, M., Lindgren, R.: Action design research. MIS Quarterly 35(1), 37–56 (2011). https://doi.org/10.2307/23043488
42. Boaz, A., Ashby, D.: Fit for purpose? Assessing research quality for evidence based policy and practice. Working Paper 11, ESRC UK Centre for Evidence Based Policy and Practice, London (2003)
43. Guest, G., MacQueen, K.M., Namey, E.E.: Applied Thematic Analysis. SAGE, Thousand Oaks, CA (2012)
44. Gioia, D.A., Corley, K.G., Hamilton, A.L.: Seeking qualitative rigor in inductive research: notes on the Gioia methodology. Organ. Res. Methods 16(1), 15–31 (2012). https://doi.org/10.1177/1094428112452151
Additive Manufacturing: A Case Study of Introducing Additive Manufacturing of Spare Parts

Bjørn Jæger, Fredrik Wiklund, and Lise Lillebrygfjeld Halse(B)

Molde University College, PB 2110, 6402 Molde, Norway
{bjorn.jager,fredrik.wiklund,lise.l.halse}@himolde.no
Abstract. Additive Manufacturing (AM) allows for on-demand production of items. From a logistics perspective, AM of spare parts may represent a safeguard against manufacturing downtime caused by inventory stockouts or delayed deliveries from suppliers. AM of spare parts may also lead to reduced inventories and reduced supply chain emissions. This study explores the implementation of AM of spare parts in a manufacturing company within the metal industry. Based on experience using AM on non-critical parts in-house, the respondents in the study emphasize several advantages of using AM for this purpose. The five key findings for when AM is advantageous were when: 1) certain parts are unavailable, 2) some parts cannot be bought separately from their assembly, 3) the costs of spare parts are high, 4) lead times are long, and 5) the costs of parts produced by AM are lower than those of purchased parts. The evolution of AM technology, AM costs, and the competence required to utilize AM technology will be decisive for the configuration of AM supply chains in the future.

Keywords: Additive Manufacturing · Spare Parts · Metal Industry
1 Introduction

Additive Manufacturing (AM) allows for on-demand production of items. This ability makes AM interesting from a logistics perspective, as it may be introduced in spare parts logistics as a safeguard against production downtime caused by inventory stockouts or delayed deliveries from spare part suppliers [1]. The potential of AM is not limited to reducing downtime; it also includes potential reductions in supply chain emissions [2], lower inventories [3], and the opportunity for manufacturers to produce spare parts in-house as part of their asset management.

Manufacturers focus on their core activities to produce their products. To do so, the manufacturer uses shop floor machinery, supporting equipment, and facilities purchased from vendors. Spare parts are needed in the event of failures. As AM evolves, it can potentially substitute traditional sourcing of the spare parts and equipment used. However, AM is a fast-evolving technology requiring competence and investments to be an attractive alternative to traditional sourcing. From a specialization viewpoint [4], this might

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 605–616, 2023. https://doi.org/10.1007/978-3-031-43666-6_41
not be a good approach, as it requires knowledge development and operational capacity outside its core production line. While there is a considerable number of conceptual studies addressing the opportunities represented by AM in manufacturing, there is a need for empirical studies that show the implementation of AM for different purposes. In this study, we focus on the use of AM for spare parts and ask the following research question: How can a metal industry manufacturer utilize AM for its spare parts?

To answer this research question, we have carried out a case study of a metal industry manufacturer. We cover issues like spare parts operations, stocking policy and decision-making processes, and the flow of information and materials between departments and suppliers. Uncovering challenges in the supply of spare parts provides a baseline for understanding how AM can fit into the plant's supply chain. Furthermore, we need to understand how the implementation of AM in the spare parts context can affect spare parts logistics operations. Since the manufacturer already has some experience in using AM, this entails uncovering the benefits and challenges AM has presented thus far, and how the technology has changed the management of the plant's assets. Eventually, we examine the implications of different strategic considerations regarding the organization of AM capabilities and their effect on spare parts logistics. Wider use of AM may have implications for the structure and configuration of the supply chain.
2 Theory

2.1 AM in Manufacturing

AM is a process where objects are made from a digital model by "depositing the constituent material/s in a layer-by-layer manner using digitally controlled and operated material laying tools" [5]. AM represents a flexible manufacturing technology suitable for the quick manufacturing of customized, low-volume, high-value-added products. As an emerging technology, AM faces some challenges, such as high costs, size limitations, and the fact that not all spare parts may be manufactured using AM [4]. Despite an increased interest in AM for spare parts, its use is still limited in industry. Peron and Sgarbossa [6] have conducted a literature review on the use of AM for spare parts. They concluded that although researchers are attempting to address the challenges of AM, a significant amount of work is still needed to fully understand the profitability of AM in relation to traditional manufacturing. In particular, the authors highlight four research areas requiring more knowledge:

• Mechanical characterization of parts produced with AM.
• Guidelines for choosing which parts to produce with AM, and which parts to redesign to suit AM.
• Life cycle cost analysis comparing traditional manufacturing and AM.
• How AM impacts the structure of the supply chain, considering both centralized and decentralized configurations.

A recent comprehensive literature review by Mecheter, Pokharel, and Tarlochan [7] covers 60 articles specifically on the spare parts application of AM. The review shows an increasing trend in researchers considering the application of AM for spare parts. Overall, the consensus is that AM is considered complementary to traditional
spare parts management, as opposed to a replacement. Especially for smaller parts with variable demand, AM has the potential to induce greater supply chain efficiency. The review highlights challenges relating to making specific investments in new technology and new materials, as well as the difficulty of converting existing manufacturing processes to digital ones.

When researching the applicability of AM in spare parts logistics, several issues are discussed in the literature, including the configuration of the supply chain (should production be centralized or decentralized?) and whether a single or dual sourcing approach is most suitable. While AM has developed from being a prototyping tool to a fully-fledged manufacturing method [8], it has mostly been used in a prototyping capacity by the case company.

The literature mentions several benefits tied to AM. A fundamental advantage over traditional manufacturing is that set-ups are less expensive and less time intensive, and there is a reduced need for tools during the production set-up [9, 10]. This may enable companies to produce small batches of spare parts compared to traditional manufacturing. Another benefit outlined in the literature is the ability of AM to manufacture parts with complex geometries compared to traditional manufacturing [11]. This makes it possible to improve certain spare part designs, for instance by including air pockets or hollow sections.

2.2 AM and Cost

In terms of the cost-effectiveness of AM as a manufacturing method, several authors have analyzed the unit costs of AM technologies in relation to traditional manufacturing methods [12–14]. A fundamental difference between AM and Traditional Manufacturing (TM) is the lack of economies of scale. Economies of scale apply only partially to AM, where unit costs decrease as the utilization of the 3D printer's build volume improves.
In terms of AM's competitiveness with TM, Ituarte, Khajavi, and Partanen [13] describe how AM will struggle to compete with TM for high-volume production, even if the AM cost curve decreases heavily in the future. The implication is that AM is more attractive as an option for manufacturing smaller volumes and for production where adjustments are made frequently. Authors like Chekurov et al. [4] and Attaran [15] highlight equipment costs as one of the main barriers to the implementation of AM among firms. Other authors, such as Ballardini, Ituarte, and Pei [16], point to barriers such as the fear of inferior product quality. Another important issue, highlighted by Ballardini, Ituarte, and Pei [16] and Khajavi, Salmi, and Holmström [17], is that many firms seem to lack digitalized information about their spare parts in the form of CAD files. This represents a hurdle when attempting to implement AM, which may indicate that AM is still immature for logistics. Among other studies, Dekker, Kleijn, and De Rooij [18] describe three main elements of spare parts logistics: costs, availability, and time. On the cost side, Khajavi, Partanen, and Holmström [1] and Knofius et al. [19] highlight slow-moving parts as particularly challenging.
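The contrast between TM's economies of scale and AM's nearly flat cost curve can be illustrated with a toy unit-cost model. All cost parameters below are invented for illustration and are not taken from the studies cited:

```python
def tm_unit_cost(volume, setup_cost=20_000.0, tooling=15_000.0, unit_variable=8.0):
    """Traditional manufacturing: fixed setup and tooling costs
    are amortized over the batch, so unit cost falls with volume."""
    return (setup_cost + tooling) / volume + unit_variable

def am_unit_cost(volume, machine_rate=60.0, hours_per_job=4.0,
                 material_per_part=45.0, build_capacity=6):
    """AM: almost flat unit cost; the only economy of scale comes from
    nesting several parts into one build job (filling the build volume)."""
    parts_per_job = min(volume, build_capacity)
    return machine_rate * hours_per_job / parts_per_job + material_per_part

# TM dominates at high volume; AM dominates at batch sizes of one or a few.
for v in (1, 10, 100, 1000):
    print(v, round(tm_unit_cost(v), 1), round(am_unit_cost(v), 1))
```

With these illustrative numbers the curves cross between small-batch and series production, which is exactly the break-even logic discussed in [12–14].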
2.3 Supply Chain Configuration

When switching to a centralized SC configuration, transportation costs become the dominant cost factor [2]. Consequently, the location of the AM service provider greatly impacts the transportation costs of the AM supply chain. Liu et al. [20] recommend a decentralized approach for parts with high and stable demand, and a centralized approach for parts with long lead times and low, variable demand. However, manufacturing costs may be high, which means that it may be difficult for a firm to generate enough demand on its own to achieve high capacity utilization of the 3D printers [21].
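This guidance can be paraphrased as a simple screening rule. The thresholds below are illustrative assumptions for the sketch, not values reported in [20]:

```python
def recommend_configuration(annual_demand, demand_cv, supplier_lead_time_days):
    """Toy screening rule paraphrasing the literature guidance:
    decentralize printing for high, stable demand; centralize (pool demand
    at an AM hub) for slow-moving, erratic parts with long resupply times.
    demand_cv is the coefficient of variation of demand."""
    if annual_demand >= 200 and demand_cv <= 0.5:
        return "decentralized (print at/near the plant)"
    if supplier_lead_time_days >= 60 or demand_cv > 1.0:
        return "centralized (pool demand at an AM hub)"
    return "case-by-case evaluation"

print(recommend_configuration(500, 0.3, 14))   # high, stable demand
print(recommend_configuration(4, 1.8, 120))    # slow-moving, long lead time
```

Pooling several plants' demand at one hub is also one way to address the capacity-utilization problem noted in [21].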
3 Method

To answer the research question, a case study of a metal industry manufacturer has been conducted. Case study research is suitable for this research [22, 23] since: (1) AM can be studied in its natural setting, which can both generate new theory and modify existing theory regarding AM for spare parts production; (2) answering "how" questions allows us to fully understand AM handling from the manufacturer's perspective; and (3) it allows explorative investigation of AM handling from a manufacturer's perspective. Data was collected by carrying out two digital meetings with different representatives from the company (maintenance manager, supervisor competency development, and others), a physical meeting at the company's aluminium manufacturing plant (maintenance manager, supervisor competency, purchaser), a digital group interview (maintenance manager, supervisor competency, purchaser), and eventually a digital interview with only the maintenance manager.
4 Case Company: Hydro Sunndal

Hydro Sunndal is an aluminium producer, a heavy-industry manufacturer, and a subsidiary of the global Hydro enterprise. Hydro is present in over 140 locations spread over 40 countries, with a total of 32,000 employees. Hydro is involved in several aspects of the aluminium value chain, such as energy production, mining and refining of raw materials, aluminium production, and recycling of aluminium [24]. The first two parts of the supply chain comprise the extraction of bauxite and the refining of alumina. What follows in the supply chain is energy production; much of this energy is then used to supply Hydro's aluminium production. Over 70% of the electricity used in the primary aluminium plants is acquired through renewable power [24]. Hydro's aluminium metal division includes primary aluminium production and casting, which comprises products such as standard ingots and alloys. Primary aluminium production takes place at Hydro's fully owned Norwegian plants as well as co-owned plants in Australia, Brazil, Slovakia, Canada, and Qatar.

4.1 AM in Hydro Sunndal

In Norway, Hydro has aluminium manufacturing plants in Sunndal, Husnes, Høyanger, Karmøy, and Årdal. Sunndal is the only plant that has implemented AM. They currently
Additive Manufacturing: A Case Study of Introducing
609
operate two FDM (Fused Deposition Modelling) 3D printers: a Flashforge Creator Pro 2 and a Prusa Raise3D Pro 2. Hydro Sunndal has been experimenting with AM at a small scale since 2021. So far, a handful of different spare parts have been produced in-house using AM. These first pilot projects did not involve critical parts. For instance, they have 3D printed hooks and clothes hangers for wardrobe lockers. Another project was battery covers for headsets: the battery covers would often break, and ordering this individual part from the supplier was not possible, as a whole new headset would have to be ordered. Therefore, it made sense to print these small parts. The initial pilot projects have sparked interest in more widespread use of the technology. In terms of AM competence at the plant, a small group of engineers and operators has been established with responsibility for 3D printing parts. They receive suggestions from other departments for parts to manufacture, which they then draw digitally and print. Following this, they receive feedback on whether the part fits or needs to be reprinted. In some cases, Hydro Sunndal has 3D printed prototypes of parts and then adjusted the drawings before sending them to a supplier to be produced in metal. There is a need for more knowledge to understand AM's impact on spare parts logistics, contributing to the company's overarching strategic goals of improving the sustainability of its upstream supply chain to facilitate net-zero aluminium production.
5 Findings This section reports on the findings from the interviews [25]. The maintenance manager explains that the plant's AM journey was initially born out of a wish for more AM competency among the maintenance operators. This wish was followed up by introducing AM training schemes for the operators, purchasing new AM equipment and receiving AM training from suppliers. To further materialize the AM training of operators, a partnership with a vocational school was initiated. Among the courses in the training program was a course on 3D printing and digital drawings. The motivation for starting with AM at the plant was that a spare part was desired, but there were uncertainties regarding its exact shape and specifications. Therefore, AM was used to develop prototypes, which were tested and adjusted until a working design was found and re-printed. In addition, there has been motivation to use AM because of challenging supply characteristics, including: • When spare parts are entirely unavailable on the market. • When purchasing a given spare part on its own is not possible, and the part can only be bought as part of a bigger component. • When spare parts are available on the market, but the price is too high. In the event of high prices, the company has on several occasions been able to print parts at a much more reasonable price using one of its in-house 3D printers. The maintenance manager explains: “We have also encountered that some parts cannot be sourced, or that you cannot buy the exact part, but you can buy a bigger component which contains that part. We also see that some parts can be bought, but the price is relatively high. We come out with a fully-fledged product here at a far more reasonable price.”
610
B. Jæger et al.
The maintenance manager viewed AM as a cost-effective method for manufacturing spare parts, explaining that while investments in machines and in digitalizing design drawings are necessary, the 3D printer can then work without requiring wages. The globalized nature of modern supply chains is highlighted as a complicating aspect that forms part of the motivation for 3D printing. The maintenance manager describes: “The motivation for 3D printing will be multiple things. Lead time is one aspect. The international environment makes it so we observe a considerable increase in ordering time and lead time of components.” The maintenance manager also explains that obsolescence is an important driver of the desire to 3D print: “We have a plant that has been assembled over many years. We have installations of different ages, and some of the equipment is no longer available on the market.” The plant comprises components and machinery going back many years. Consequently, obsolescence complicates spare parts logistics. When a component in an obsolete machine fails, modifications are often required: typically, a replacement part is purchased from a supplier, and the parts of the machine are reconfigured to fit the replacement part. Such modifications are difficult to execute, not to mention expensive. The maintenance manager also imagines that producing a spare part with AM could be cheaper than hiring a Norwegian industry worker to manufacture the part traditionally, given the high labour costs in the Norwegian market. Even though AM has mainly been utilized for prototyping so far at the plant, the ambition is to include AM as part of the overall supply strategy for spare parts and integrate it into the overall management system for maintenance.
The maintenance manager also stresses that if one is to succeed with 3D printing, it needs to be included in the company's overarching supply strategy: “No matter how you attack 3D printing, it is about a supply strategy that includes that possibility. It is something that follows a decision-making process; it is not just a simple activity you start. It entails a strategy for supplying spare parts and materials into an organization. It is important that the overarching strategies are there in our management system if we are to succeed with it.” In terms of the long-term goal of AM, the respondents jointly emphasize that large-scale in-house implementation of AM is not likely, nor is it the goal. The company's AM journey is largely concerned with “training” the ability to see the opportunities that arise from AM. In essence, they are awaiting a development in the surrounding supplier market to include AM as a manufacturing method. The ambition cited by the purchaser for the purchasing department is to collaborate more with external companies, the maintenance department and Hycast (a subsidiary of Hydro concerned with R&D activities) to build competence and take advantage of the possibilities of AM for spare parts. This goal is one of the sustainability goals that the purchasing department has set. The maintenance manager points to the fact that companies will over time be
pushed harder on sustainability by governmental regulations. This provides a motivation for companies to become more circular and attempt to extend the life cycle of their equipment: “If a component has a part that makes up 10% of the purchasing cost; if that part is worn out, we need to replace that part and continue to use the equipment. A simple way of putting this is that it costs more and more to dispose of waste.” The plant experiences a supplier market in which suppliers leverage their position by offering a discounted or low initial price and then earning bigger profits on the aftermarket. This results in high cost levels for spare parts, especially for individual components that are part of a bigger component or assembly. The prices for these individual parts are often marked up to a point that encourages buying the complete assembly. The maintenance manager states that the emergence of 3D printing might change this landscape: suppliers might be forced to look at “other methods”, such as AM, for repairing or acquiring parts at a cheaper price. When it comes to the purchasing process, the purchaser explains that it is hard to see any change yet, considering that the plant has not sought to purchase 3D printed parts from suppliers. Eventually, as the supplier market for 3D printed spare parts grows, one might be able to examine the effect AM has on the purchasing process. In terms of procurement of the materials and 3D printers used at the plant, the maintenance manager states that there have been no supply challenges. The material they print with is polylactic acid (PLA). The supervisor for competency development states that AM with metal materials is more challenging; in that case, they envision needing to involve someone with competence in material science, given the complicated nature of different metal qualities.
In terms of supply chain effects, the purchaser explains that they cannot yet say that they have experienced a shortening of the supply chain following the implementation of AM. Given that AM has only been implemented on a small scale, mostly in a prototyping capacity, the purchaser does not know of such effects yet, but notes that it could be an interesting hypothesis for the future. The maintenance manager describes several capabilities a potential AM supplier needs to have. The supplier needs to be skilled in material science as well as in the construction of machine parts. He emphasizes that AM opens new possibilities compared with parts traditionally manufactured by molding, such as the ability to create parts that are hollow or have air pockets. In addition, legal expertise surrounding AM is mentioned as an important capability. Being good at sales and marketing is also mentioned as important in the initial phase: according to the maintenance manager, large parts of the industry are not “mature” regarding AM and do not yet see the possibilities. Sufficient marketing from suppliers may help convince companies to adopt AM. In terms of the nature of the relationship with a potential supplier, the maintenance manager envisions a frame agreement they may call on when needed, both for replacing broken spare parts and for printing parts to extend the life cycle of older machines. Demand will probably vary; therefore, they stress the importance of the supplier having a market beyond Hydro's Sunndal plant, including other companies within the mid-Norway region. A supplier location relatively close by is
preferable in an initial phase, as it better enables close collaboration and dialogue with the supplier. The maintenance manager emphasizes that a potential supplier should not only look to Hydro but also to other manufacturing companies, including the spare part suppliers, if AM provides an alternative to traditional manufacturing for certain parts. Currently, many suppliers apply a pricing strategy that does not favor overhauls and repairs. Suppliers also often price individual parts so high that they almost reach the price of the complete component they are part of. The maintenance manager believes that spare part suppliers need to rethink their strategy to offer more overhauls and repairs as opposed to mainly sales. According to the maintenance manager, the emergence of AM could be an avenue for spare parts suppliers to earn money on digital parts. The implications of this possible change in strategy are interesting for Hydro and for heavy industries in general, especially considering sustainability and backshoring: while many spare parts are produced in low-cost countries, AM might provide an opportunity to move production closer to home, as manufacturing will to a higher extent be performed by machines rather than by traditional labor. The key findings are summarized in Table 1.

Table 1. Summary of the key findings.

#  Finding
1  Certain spare parts are unavailable on the market
2  Certain spare parts cannot be purchased separately, resulting in purchasing the larger assembly to which the parts belong
3  High price levels for certain spare parts
4  The international supplier market results in increased lead times
5  Some non-critical parts produced in-house with AM are cheaper than the market could offer
6 Discussion 6.1 AM and Spare Parts The main aim of this study is to shed light on how and why manufacturers should utilize AM for spare parts. When it comes to managing spare parts, the company describes elements similar to those found in the literature, such as costs, availability and time [18]. Furthermore, the challenge of slow-moving parts [19] is experienced at the plant, where some parts can be stored for up to 10 years. These slow-moving parts especially incur costs related to maintenance and preservation of the spare parts. In addition, the spare parts inventory is large and takes up significant space, adding to the complexity of spare parts logistics. The company often finds itself in a position where it must accept high prices from its suppliers, indicating that the limited number of spare part suppliers puts those suppliers in a strong position regarding their pricing strategy [4].
The company broadly distinguishes between critical and non-critical spare parts. Critical parts are kept in stock, while non-critical parts need not be. The supply strategy for critical parts is defined with MRP (material requirements planning) data, with orders being sent out automatically when the stock level drops below a pre-determined reorder point. However, this reorder point must be adjusted in the event of challenges in Hydro's operations or in the supplier's manufacturing. While AM has developed from a prototyping tool into a fully-fledged manufacturing method [8], it has only been used for the production of a handful of different spare parts by the case company. The reduced need for tools during AM set-up [9, 10] has enabled the case company to produce small batches of spare parts, something that would likely have been less economically viable with traditional manufacturing. The ability of AM to manufacture parts with complex geometries [11] is highlighted as a major benefit by the case company and provides motivation for further engagement with AM technologies. 6.2 AM and Cost The findings from the interviews indicate that high-volume production of AM parts is not favored from the case company's perspective. They primarily look to AM for producing spare parts with lower consumption levels. However, mass production of non-critical spare parts using AM might be a future avenue for spare part manufacturers. The case company views AM as a cost-effective method under the right conditions. They have on several occasions been able to use AM to produce spare parts at lower cost than if purchased on the market. This has been the case in problematic supply situations, such as very high prices or parts not being available on the market.
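The print-or-buy trade-off discussed here can be sketched as a simple unit-cost comparison. The sketch below is a hypothetical illustration, not Hydro's actual costing model; all function names, parameters and figures are assumptions made for this example.

```python
# Illustrative print-or-buy sketch for an FDM-printed spare part.
# All numbers and names below are hypothetical assumptions.

def am_unit_cost(material_cost: float, print_hours: float,
                 machine_rate: float, setup_cost: float, batch_size: int) -> float:
    """Unit cost of a printed part: material plus machine time plus
    one-off setup (e.g. digitizing the drawing) amortized over the batch."""
    return material_cost + print_hours * machine_rate + setup_cost / batch_size

def prefer_am(am_cost: float, market_price: float, available: bool) -> bool:
    """AM is attractive when the part is unavailable or cheaper to print."""
    return (not available) or (am_cost < market_price)

# Example: a small PLA part, drawing amortized over a batch of 10.
cost = am_unit_cost(material_cost=2.0, print_hours=3.0,
                    machine_rate=1.5, setup_cost=40.0, batch_size=10)
print(round(cost, 2))                          # 10.5
print(prefer_am(cost, market_price=60.0, available=True))   # True
```

The same comparison also captures the interview findings qualitatively: when `available` is false (finding 1) or `market_price` is marked up (finding 3), printing in-house wins even at modest batch sizes.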
In some cases, parts are available to purchase, but suppliers mark up prices on individual parts to encourage buying the complete assembly in which the part is included. Equipment costs are one of the challenges to the implementation of AM among firms [4, 26], although this is likely more of a problem for smaller companies than for the case company. In this case, the maintenance manager viewed AM as a cost-effective method for manufacturing spare parts: while investments in machines and in digitalizing design drawings are necessary, the 3D printer can then work without requiring wages. This aspect is interesting in situations where AM is an alternative to hiring an industry worker, given the wage level in Norway. 6.3 Barriers Fear of inferior product quality is reported as a barrier to the implementation of AM [16]. The case company expressed that it did not have sufficient experience with AM to truly say whether it believed AM to yield inferior product quality. However, they have in some instances used AM to improve the quality of certain spare parts by redesigning them. Lack of competence may also be a barrier to the implementation of AM. The case company has taken the initiative to build competency among their engineers in AM design and
digital drawings, to avoid the hurdle of lacking digitalized spare part information as CAD files [16, 17]. 6.4 Supply Chain Configuration Currently, the plant has a decentralized configuration, with its own 3D printers located in-house serving the demand for additively manufactured parts. The demand comes from various departments involved in plant operations. The findings from the interviews indicate the plant's ambition to connect to an AM service provider to further build its AM capabilities. This would mean opting for a centralized supply chain configuration in the future, or at least a mixture of decentralized and centralized, since the plant could keep (and possibly expand) its current in-house AM capabilities. Still, the interviewees indicate an intention to mainly use an AM service provider for manufacturing AM parts. With their current decentralized AM set-up, they have experienced some of the benefits outlined in the literature, such as no transportation costs for parts produced at the plant on demand, as well as reduced lead times. Implementation of AM in other plants could generate more demand for additively manufactured parts within the organization, and the other plants will likely require similar spare parts. However, this decentralized configuration might be costly given the investment in AM machines, not to mention costs related to training of personnel and maintenance of the 3D printers [21, 27]. Therefore, a centralized configuration may be more realistic for serving all the plants with 3D printed spare parts. For a centralized approach, the location of the AM service provider will be of great importance for the more critical parts, where lead time could be of the essence. Having said that, it has been claimed that future reductions in the cost of acquiring AM machinery will favor the decentralized configuration [1, 2].
Future technological development of AM into a more general-purpose technology is another factor that will increase the attractiveness of decentralized AM [3]. The plant studied in this case study has defined a long-term goal aiming for a centralized configuration. However, considerable reductions in the cost of AM equipment may make a mixed configuration a realistic option.
7 Conclusions This study has aimed to answer the question: How can a metal industry manufacturer utilize AM for its spare parts? The question has been addressed through a case study of Hydro Sunndal, a large aluminium manufacturing plant located in Norway. The company has started to build AM competence among its maintenance operators to explore the possibilities of using AM for spare parts. The key findings indicate that AM for spare parts may be relevant when spare parts are entirely unavailable on the market or can only be purchased as part of a larger component, when the price of the spare part is high, and when lead times are long. AM is also expected to contribute to meeting sustainability requirements. The project
of producing spare parts with AM is still at an early stage at this manufacturing plant. Currently, some non-critical parts have been manufactured at Hydro Sunndal, which is the only Hydro manufacturing plant testing this technology. When implementing AM on a larger scale at Hydro Sunndal, outsourcing AM to a supplier should be considered. If the Hydro group decides to implement AM at other manufacturing sites, a centralized AM approach serving all plants with 3D printed spare parts would be appropriate. The evolution of AM technology, AM costs and the competence required to utilize AM technology will be decisive for the configuration of AM supply chains in the future. This is a single case study from the metal industry. The findings may, however, be relevant in other industries and contexts since the issues addressed are of a general kind. The findings contribute to the emerging literature on AM by showing what considerations and experiences arise when a manufacturing company implements this technology for spare parts in practice.
References 1. Khajavi, S.H., Partanen, J., Holmström, J.: Additive manufacturing in the spare parts supply chain. Comput. Ind. 65(1), 50–63 (2014) 2. Li, Y., et al.: Additive manufacturing technology in spare parts supply chain: a comparative study. Int. J. Prod. Res. 55(5), 1498–1515 (2017) 3. Holmström, J., et al.: Rapid manufacturing in the spare parts supply chain: alternative approaches to capacity deployment. J. Manuf. Technol. Manag. 21, 687–697 (2010) 4. Chekurov, S., et al.: The perceived value of additively manufactured digital spare parts in industry: an empirical investigation. Int. J. Prod. Econ. 205, 87–97 (2018) 5. Tofail, S.A.M., et al.: Additive manufacturing: scientific and technological challenges, market uptake and opportunities. Mater. Today 21(1), 22–37 (2018) 6. Peron, M., Sgarbossa, F.: Additive manufacturing and spare parts: literature review and future perspectives. In: Advanced Manufacturing and Automation X, vol. 10, pp. 629–635. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-6318-2_78 7. Mecheter, A., Pokharel, S., Tarlochan, F.: Additive manufacturing technology for spare parts application: a systematic review on supply chain management. Appl. Sci. 12(9), 4160 (2022) 8. Petrovic, V., et al.: Additive layered manufacturing: sectors of industrial application shown through case studies. Int. J. Prod. Res. 49(4), 1061–1079 (2011) 9. Achillas, C., et al.: A methodological framework for the inclusion of modern additive manufacturing into the production portfolio of a focused factory. J. Manuf. Syst. 37, 328–339 (2015) 10. Pour, M.A., et al.: Additive manufacturing impacts on productions and logistics systems. IFAC-PapersOnLine 49(12), 1679–1684 (2016) 11. Bogue, R.: 3D printing: the dawn of a new era in manufacturing? Assem. Autom. 33(4), 307–311 (2013) 12. Baumers, M., Holweg, M.: On the economics of additive manufacturing: experimental findings. J. Oper. Manag. 65(8), 794–809 (2019) 13. Ituarte, I.F., Partanen, J., Khajavi, S.: Challenges to implementing additive manufacturing in globalised production environments. Int. J. Collab. Enterp. 5, 232–247 (2016) 14. Ruffo, M., Tuck, C., Hague, R.: Cost estimation for rapid manufacturing - laser sintering production for low to medium volumes. Proc. Inst. Mech. Eng. Part B: J. Eng. Manuf. 220(9), 1417–1427 (2006)
15. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional manufacturing. Bus. Horiz. 60(5), 677–688 (2017) 16. Ballardini, R.M., Flores Ituarte, I., Pei, E.: Printing spare parts through additive manufacturing: legal and digital business challenges. J. Manuf. Technol. Manag. 29(6), 958–982 (2018) 17. Khajavi, S., Salmi, M., Holmström, J.: Additive Manufacturing as an Enabler of Digital Spare Parts, pp. 45–60 (2020) 18. Dekker, R., Kleijn, M.J., de Rooij, P.J.: A spare parts stocking policy based on equipment criticality. Int. J. Prod. Econ. 56–57, 69–77 (1998) 19. Knofius, N., et al.: Improving effectiveness of spare parts supply by additive manufacturing as dual sourcing option. OR Spectrum 43(1), 189–221 (2021) 20. Liu, P., et al.: The impact of additive manufacturing in the aircraft spare parts supply chain: supply chain operation reference (SCOR) model based analysis. Prod. Plan. Control 25(13–14), 1169–1181 (2014) 21. Friedrich, A., Lange, A., Elbert, R.: Make-or-buy decisions for industrial additive manufacturing. J. Bus. Logist. 43(4), 623–653 (2022) 22. Meredith, J.: Building operations management theory through case and field research. J. Oper. Manag. 16(4), 441–454 (1998) 23. Voss, C., Tsikriktsis, N., Frohlich, M.: Case research in operations management. Int. J. Oper. Prod. Manag. 22(2), 195–219 (2002) 24. Hydro: Annual Report 2022. Hydro (2022) 25. Wiklund, F.: Assessing the Impact of Additive Manufacturing in Spare Parts Logistics: A Case Study of Norsk Hydro. Molde University College, Molde, Norway (2023) 26. Attaran, M.: The rise of 3-D printing: the advantages of additive manufacturing over traditional manufacturing. Bus. Horiz. 60 (2017) 27. Mellor, S., Hao, L., Zhang, D.: Additive manufacturing: a framework for implementation. Int. J. Prod. Econ. 149, 194–201 (2014)
Applications of Artificial Intelligence in Manufacturing
Examining Heterogeneous Patterns of AI Capabilities Djerdj Horvat1(B) , Marco Baumgartner2 , Steffen Kinkel2 , and Patrick Mikalef3,4 1 Fraunhofer Institute for Systems and Innovation Research ISI, 76139 Karlsruhe, Germany
[email protected]
2 Institute for Learning and Innovation in Networks, Karlsruhe University of Applied Sciences,
76133 Karlsruhe, Germany {Marco.Baumgartner,Steffen.Kinkel}@h-ka.de 3 Department of Computer Science, Norwegian University of Science and Technology, 7034 Trondheim, Norway [email protected] 4 Department of Technology Management, SINTEF Digital, 7031 Trondheim, Norway
Abstract. This study explores the heterogeneous patterns of companies in terms of their AI capabilities by analyzing various combinations of AI-specific resources. Drawing on the resource-based theory of the firm, we develop an analytical framework comprising two key dimensions, AI infrastructure and AI competencies, and employ two scores to quantify these dimensions. We apply this approach to a dataset of 215 companies and categorize them into four distinct groups: beginners, followers with strong AI infrastructure, followers with strong AI-specific human resources, and leaders in terms of AI capabilities. Our analysis provides insights into the companies' sectoral affiliation, size classes, fields of AI usage, and make-or-buy decisions regarding the uptake of AI solutions. Our findings suggest that the manufacturing and construction industry has the highest proportion of beginner companies with low AI capabilities, while the services and IT industry has the largest share of leader companies with strong AI capabilities. The study also shows that companies with different levels of AI capabilities have distinct motives for adopting AI technologies, and that leading companies are more likely to use AI for product innovation purposes. Overall, the study provides a comprehensive analysis of the various AI-specific resources that contribute to a company's AI capabilities and sheds more light on configurations of AI-specific resources. Our analytical framework can help organizations better understand their AI capabilities and identify areas for improvement. Keywords: AI capabilities · heterogeneous patterns · manufacturing · survey
1 Introduction In recent years, artificial intelligence (AI) has gained significant attention and become a top technological priority for organizations, mainly due to the availability of big data and the emergence of advanced techniques and infrastructure [1]. Different studies have © IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 619–633, 2023. https://doi.org/10.1007/978-3-031-43666-6_42
620
D. Horvat et al.
also shown a significant increase in the number of companies implementing AI [2]. However, despite the potential business value that AI can deliver, organizations face numerous challenges that prevent them from realizing performance gains [3]. Several studies have highlighted that many companies are yet to realize the expected benefits of AI [4]. One of the main reasons for this is the implementation and restructuring lags that organizations face in leveraging their AI investments, leading to a modern productivity paradox [5]. To overcome this challenge, organizations need to invest in complementary resources that will help them build an AI capability [6, 7]. This raises the so-far underaddressed question of what these complementary resources are and how companies can orchestrate them to effectively build AI capabilities. In this paper, we contribute to filling this gap by examining combinations of AI-specific resources to better understand heterogeneous patterns of companies' AI capabilities. As a theoretical background, we draw on the resource-based view (RBV) from the strategic management literature [8–12] and its recent application in the field of information systems research [13–18]. We use this literature to explain how resources relevant to information technologies (IT) can be leveraged to form so-called IT capabilities, which in turn can conditionally influence competitive performance [19, 20]. More precisely, we use recent studies on more specific AI capabilities, which provide valuable insights into the organizational resources firms need to develop their AI capabilities and achieve performance gains [21].
Following the RBV, these authors identified key types of AI-specific resources and grouped them into three categories based on the framework of Grant (1991): tangible resources (data, technology, basic resources), intangible resources (inter-departmental coordination, organizational change capacity, risk proclivity) and human resources (business and technical skills). Moreover, they examined the impact of AI capabilities on organizational creativity and performance. However, what we do not know from previous studies is how companies differ in their AI capabilities and whether there are distinctive patterns of firms' AI-specific resource combinations. Knowing more about such patterns of AI-specific resources would contribute to a better understanding of the micro-foundations of AI capabilities, thus contributing significantly to both research and practice. To investigate the micro-foundations of AI capabilities, we first develop an analytical framework for systematizing AI-specific resources using the RBV and the recent literature on AI capabilities. We start from the theoretical notion that the mere implementation of AI techniques alone is unlikely to lead to competitive gains, as these techniques are widely available and easily replicated in the market. Similarly, relying solely on data to fuel these techniques will not be sufficient to create distinctive AI capabilities. Companies thus need to develop and implement a unique blend of human resources and combine them with other intangible and tangible resources to create an AI capability. Hence, on the one hand, our framework focuses on human resources, including employees' knowledge and abilities.
On the other hand, given the diverse range of AI applications, each with unique technical requirements, data needs and organizational contexts, we examine the relevance of tangible and intangible resources, as AI infrastructure, using companies' capabilities for the introduction, implementation and development of AI solutions as general criteria. We subsequently apply this framework to analyze a comprehensive dataset obtained from a long-term research project investigating the adoption of AI for work
and learning in organizations. Specifically, we examine a sample of 215 companies to identify diverse combinations of AI-related resources and elucidate distinct patterns of AI capabilities. Our study sheds more light on configurations of AI-specific resources and thus provides support for the heterogeneity assumption concerning AI capabilities. By showing its potential to capture this heterogeneity in an empirical analysis, our framework can serve as a comprehensive basis for a deeper understanding of the concept of AI capability and thus for improving existing, or even developing new, measurements of different patterns of AI capabilities. It also has the potential to support managers in understanding their companies' position in terms of AI capabilities.
2 Theoretical Foundation

2.1 From RBV to AI Capabilities

The idea that complementary and unique firm-level resources, and their orchestration, are central to gaining competitive advantage has its roots in the resource-based view (RBV) of the strategic management literature [17, 18]. While the RBV generally encompasses a broad definition of resources that includes assets, knowledge, capabilities, and organizational processes, Grant's (1991) framework provides a more nuanced understanding by distinguishing between resources and capabilities and classifying resources into tangible, intangible, and personnel-based categories. Tangible resources refer to financial and physical assets, while intangible resources include reputation, brand image, and product quality. The third group, personnel-based resources, encompasses technical and other knowledge assets as well as employee skills. Organizational capabilities, on the other hand, are an organization's ability to integrate and deploy valuable resources to achieve a competitive advantage [22] or, in other words, to orchestrate resources to create competitive advantage [23, 24]. Grant (1995) proposes a hierarchy of organizational capabilities that ranges from specialized capabilities to broader functional capabilities such as marketing, manufacturing, and IT capabilities. These functional capabilities, in turn, can be integrated to form cross-functional capabilities, such as customer support capabilities that result from the integration of marketing, IT, and operations capabilities. Recent literature in the field of information systems has adopted this approach to explain how information technology (IT)-related resources can be leveraged to form so-called IT capabilities, which in turn can conditionally influence competitive performance [19, 20, 25].
Different kinds of IT capabilities have been examined in this vein, for instance social media capabilities [26] or business analytics and big data capabilities [14]. In this context, a firm's IT capability is characterized as its capacity to effectively utilize IT-based resources in conjunction with other resources and capabilities [17]. Following Grant's classification of resource types, IT-based resources comprise IT infrastructure (hardware) and data as tangible resources, organizational and managerial characteristics as intangible resources, and technical and managerial IT skills as human resources. While tangible resources, for instance IT equipment and software, can be bought on the market and thus do not represent factors of competitive advantage per se, intangible and human resources, as enablers of IT application in organizations, are valuable sources of heterogeneity, and therefore of competitiveness on the market [8, 21].
622
D. Horvat et al.
More recent studies have explored the development and management of AI-specific capability as imperative for organizations seeking to realize performance gains from AI solutions [3, 7, 21, 27]. They identified the organizational resources necessary for firms to develop their AI capabilities and achieve performance gains. Building on the theoretical underpinnings of the RBV and recent studies in the information systems literature, Mikalef and Gupta (2021) propose eight resources that jointly constitute an AI capability: (1) tangible resources (data, technology, and basic resources), (2) human resources (business and technical skills), and (3) intangible resources (inter-departmental coordination, organizational change capacity, and risk proclivity). According to the RBV, the simple implementation of AI techniques alone is unlikely to lead to competitive advantage, as these techniques are widely available and easily replicated in the market. On the other hand, relying solely on data to fuel these techniques will not be enough to create distinctive AI capabilities. Companies thus need to develop and implement a unique blend of tangible, intangible, and human resources to create an AI capability, which in turn implies a firm's ability to select, orchestrate, and leverage its AI-specific resources [13].

2.2 AI Capabilities as a Bundle of Tangible, Intangible and Human Resources

Tangible resources represent the IT infrastructure needed for AI applications, in terms of the hardware necessary for storing and processing data as well as the software for (big) data processing required by AI [28, 29]. These resources also cover data [30, 31]. Because AI-based systems learn from different data types and large amounts of data, data plays not only one of the most important but also one of the most challenging roles in the implementation of AI [29].
Intangible resources are highly unique and heterogeneous due to the mix of organizational history, people, processes, and conditions that characterize organizations. Studies on firms' readiness for AI emphasize the high relevance of intangible resources both for the adoption of AI solutions to improve companies' performance and for reaping business benefits from adopted technologies. Because of the high complexity of AI implementation projects, resulting from the high purpose- or context-specificity of AI [30], organizational features such as firm or project team size, interdisciplinarity, cross-functional collaboration, and boundary spanners [24, 32] play a crucial role in implementing AI and leveraging its value. Human resources cover the collective knowledge and skills of employees, as well as their training, experience, and professional connections [8–10]. In the field of IT, besides the technical IT skills essential for introducing and applying IT solutions in the company, for instance hardware development, software development, or data science [33], managerial IT skills also play a crucial role in implementing IT solutions. IT managerial skills encompass the capacity to conceptualize, create, and utilize IT applications to bolster and improve other operational aspects of a business [25]. These skills include, for instance, management's ability to understand the needs of the company and implement suitable IT solutions, as well as to coordinate IT activities within the company. Hence, project management, moderation skills, and leadership play a crucial role here [17]. Technical and managerial skills evolve over long periods of time through the accumulation of experience. Hence, they are often tacit in nature and
thus organization-specific [34]. Therefore, differences in the benefits companies gain from IT have been attributed largely to their managerial IT resources [17, 25].

2.3 The Framework for Analyzing Patterns of AI Capabilities

To investigate different patterns of AI capabilities, we adopt the RBV logic concerning the relevance of different resources for introducing and using AI solutions in companies [21]. We start with the premise that tangible resources are freely acquirable by most firms through the market and thus are not adequate on their own to develop AI capabilities that can provide a competitive edge. Intangible resources, in contrast, are unique and heterogeneous [21] due to the evolutionary organizational character they acquire within the company [35–37]. Therefore, for analyzing patterns of companies with respect to their AI capabilities, it makes sense to consider tangible resources together with intangible resources, i.e., as one dimension representing AI infrastructure as an enabling factor for adopting AI solutions in the company. Moreover, we recognize that different combinations of tangible and intangible resources may enable the successful introduction of an AI solution into a company's processes, as one phase of the AI adoption process, and its effective implementation in everyday work, as another phase. To gain a deeper understanding of the crucial role that human resources play as the third group of AI-specific resources in enabling AI capabilities, it is essential to explore the competence of employees in working and learning with AI. In conjunction with the AI infrastructure, we propose a comprehensive two-dimensional framework (see Fig. 1) for analyzing the patterns of AI capabilities across different industries and organizations. This framework serves as an effective tool for identifying best practices and areas for improvement in AI adoption and implementation.
Additionally, it offers valuable insights into how varying degrees of AI capability relate to organizational performance.
Fig. 1. Framework for analyzing patterns of AI capabilities
Integrating two critical dimensions representing different AI-relevant resources, the framework classifies companies into four distinct groups, each representing a different
level of AI capability. Companies categorized as beginners exhibit a low level of AI-relevant skills and knowledge and possess limited AI infrastructure. These organizations are just embarking on their AI journey and have yet to fully leverage the potential of AI technologies. Followers with strong infrastructure have invested heavily in tangible resources, such as the digital equipment necessary for AI implementation. However, their focus on infrastructure development often comes at the expense of enhancing AI-relevant skills and knowledge among their employees. Companies falling into the category of followers with strong human resources recognize the importance of nurturing talent and cultivating a deep understanding of AI within their organization. While their AI infrastructure may not be as advanced as that of leaders, their strong human resources lay a solid foundation for future growth. Finally, leaders represent the pinnacle of AI capability within the framework. These companies have effectively combined tangible and intangible resources to achieve a significant competitive advantage in the AI landscape. With this resource-based systematization, the framework also elucidates two distinct paths that companies follow on their journey from beginners to leaders. The first path involves heavy investment in tangible resources, primarily focusing on AI infrastructure. This approach prioritizes the acquisition of the digital equipment and tools necessary for AI implementation. The second path emphasizes the development of AI-relevant skills and knowledge among employees, recognizing the significance of human resources in AI capability. This path aims to create a skilled and knowledgeable workforce capable of driving AI initiatives within the organization.
3 Methodology

Our work follows a mixed methods approach, combining qualitative and quantitative methods. First, to explore the tangible, intangible and human resources that companies need for effectively implementing AI in their processes, and thereby to develop a foundation for our conceptual framework, we conducted qualitative expert interviews with various practitioners. We targeted interview partners with sound knowledge of the AI solutions used by companies as well as the areas of their implementation. To obtain a broad, heterogeneous picture, interviewees from different domains were recruited, including four representatives of companies developing AI technologies, four representatives of companies applying AI technologies, and two representatives of companies developing AI training. Each interview lasted between 45 and 60 min and, with the permission of the interview partners, was recorded and transcribed for subsequent coding. The interview guideline consisted of two main categories of questions: 1) skills and knowledge of employees required for the introduction, implementation and development of AI solutions, 2) AI-related infrastructure, which includes challenges and other required resources related to the adoption and use of AI. To systematically analyze the data collected, we used these main categories to systematize the various competences and related infrastructure identified. We found that there are four typical categories of competences in the context of AI that represent different skills and knowledge in organizations:

1. AI-specific competences include all skills and knowledge directly related to AI.
2. Leadership and moderation competences include all skills and knowledge needed to engage people and coordinate the project.
3. Project management competences include additional skills and knowledge that should be available in the project team to successfully implement AI projects.
4. AI usage competences include the skills and knowledge that future AI users should have.

Moreover, we identified four distinct organizational AI capabilities that manifest AI-specific combinations of tangible and intangible resources in companies. Hence, these capabilities represent the AI-related infrastructure that enables the successful introduction and implementation of AI solutions for various tasks related to AI deployment:

a. AI use case identification describes the organization's current capability to find appropriate application areas for AI within the organization using existing resources.
b. AI process integration describes the organization's current capability to embed AI solutions into established business processes using existing resources.
c. AI utilization describes the organization's current capability to adequately use AI solutions within the organization using existing resources.
d. AI development describes the organization's current capability to develop AI solutions independently within the company using existing resources.

In the second stage of our research, we employed a quantitative online survey to further develop and expand upon our qualitative findings. A total of 215 participants were recruited for the study. We specifically targeted companies that require strategic alignment of their human resources structure and other tangible and intangible resources. To ensure that the participants met our inclusion criteria, we only included individuals who (a) were employed in enterprises with at least ten employees and (b) had personnel or decision-making responsibility regarding the introduction of new technologies.
Screening questions at the beginning of the survey ensured participant eligibility. The survey was conducted in German, and only German-speaking residents of Germany were recruited. Participation in the survey was voluntary, and participants were compensated in accordance with the panel provider's terms of service. Of the companies surveyed, 35 (16.3%) were small companies with fewer than 50 employees, 77 (35.8%) were medium-sized companies with 50 to 249 employees, and 101 (47.0%) were large companies with 250 or more employees. In terms of industry distribution, 90 companies (41.9%) were from the manufacturing and construction sector, 75 companies (34.9%) were from the services and IT industry, and the remaining 47 companies (21.9%) were from other industries. For each company, we calculated two scores to assess its level of artificial intelligence (AI) capabilities (see Fig. 2). The first score is the AI Competence Score, which is measured by questions about the availability of AI-related competences within the companies. Here, we refer to 35 competence items, which are the result of our qualitative research (cf. [38]). Of these 35 competence items, 1) eleven items are grouped as AI-specific competences, 2) nine items are grouped as leadership and moderation competences, 3) ten items are grouped as project management competences, and 4) five items are grouped as AI usage competences. Responses were rated on a 5-point Likert scale for
each question. By averaging the scores within each group, we calculated a mean for each competence category, ranging from zero to four. Summing these mean values yields the AI Competence Score. The second score is the AI Implementation Score, which assesses the availability of AI infrastructure. It is determined by summing the responses to four questions that assess the extent to which AI infrastructure is in place to a) identify AI use cases, b) integrate AI into processes, c) use AI, and d) develop AI. Again, responses to these questions were given on a 5-point Likert scale.
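The score construction described above can be sketched as follows. The item counts per category and the 0-to-4 coding of the Likert scale follow the text; the response data and variable names are illustrative assumptions, not part of the original study:

```python
from statistics import mean

# Hypothetical Likert responses (coded 0-4) for a single company.
# Item counts per category follow the paper: 11 AI-specific, 9 leadership
# and moderation, 10 project management, 5 AI usage competence items.
competence_responses = {
    "ai_specific": [3, 2, 4, 3, 2, 3, 1, 2, 3, 4, 2],     # 11 items
    "leadership_moderation": [2, 3, 3, 2, 4, 3, 2, 3, 3],  # 9 items
    "project_management": [3, 3, 2, 2, 3, 4, 3, 2, 3, 3],  # 10 items
    "ai_usage": [2, 3, 3, 2, 4],                           # 5 items
}

# AI Competence Score: average each category (each mean lies in 0-4),
# then sum the four means, giving a score between 0 and 16.
ai_competence_score = sum(mean(items) for items in competence_responses.values())

# AI Implementation Score: sum of four 0-4 responses on use case
# identification, process integration, utilization, and development,
# likewise giving a score between 0 and 16.
implementation_responses = [3, 2, 2, 1]
ai_implementation_score = sum(implementation_responses)

print(round(ai_competence_score, 2), ai_implementation_score)  # → 11.01 8
```

Because the competence score sums four category means rather than raw items, each of the four competence categories carries equal weight regardless of how many items it contains.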
Fig. 2. Classification of Companies Based on their AI Capability Patterns
The scores for both criteria can range from 0 to 16, with low capability scores less than or equal to 5.33, high capability scores greater than or equal to 10.33, and medium scores in between. These values can be used to evaluate the overall AI capability of a company and compare it with the other examined companies. As shown in Fig. 2, we categorized the 215 companies of our survey into four groups: beginners, followers with strong infrastructure, followers with strong human resources, and leaders. While beginners (n = 15) have both a low AI competence score and a low AI implementation score, leaders (n = 53) have high scores in both. Followers with strong infrastructure (n = 71) have high or medium AI implementation scores and low or medium AI competence scores, with the AI implementation score always higher than the AI competence score. Conversely, followers with strong human resources (n = 76) have high or medium AI competence scores and low or medium AI implementation scores, with the AI competence score always higher than the AI implementation score.
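The grouping rule above can be expressed as a small classifier. The thresholds and group labels follow the text; the tie-breaking behavior for companies with exactly equal scores is not specified in the paper and is an assumption here:

```python
def classify(competence: float, implementation: float) -> str:
    """Assign a company to one of the four AI capability groups.

    Both scores range from 0 to 16. Thresholds follow the paper:
    low <= 5.33, high >= 10.33, medium in between. Handling of exactly
    equal scores is an assumption (the paper states one score is always
    higher than the other within the two follower groups).
    """
    def level(score: float) -> str:
        if score <= 5.33:
            return "low"
        if score >= 10.33:
            return "high"
        return "medium"

    c, i = level(competence), level(implementation)
    if c == "low" and i == "low":
        return "beginner"
    if c == "high" and i == "high":
        return "leader"
    if implementation > competence:
        return "follower with strong infrastructure"
    return "follower with strong human resources"


print(classify(4.5, 12.0))   # follower with strong infrastructure
print(classify(11.0, 11.5))  # leader
```

Note that a company with one high and one low score still lands in a follower group; only jointly low or jointly high scores produce beginners and leaders, respectively.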
4 Results

As elucidated in Sect. 2.3, the AI capability index evaluates the association between the degree of a company's AI infrastructure (including both tangible and intangible resources) and the proficiency of its employees (i.e., their skills and knowledge) in working with and learning from AI. This framework enables not only the categorization of companies based on their AI capabilities but also a comprehensive analysis of the resources that facilitate those capabilities. Fig. 3 shows three distinct groups of companies in terms of their AI capabilities: beginners, followers, and leaders. The majority of companies surveyed fall into the follower group, which is further divided into two sub-groups: those with developed AI infrastructure and those with stronger employee competencies in working with AI. This suggests that companies can take different paths towards becoming leading AI users, either by investing in their AI infrastructure or in their employee competencies.
Fig. 3. AI capability index - general statistics
Based on the available data, it seems that the development of both AI infrastructure and employee competencies is equally crucial for attaining higher levels of AI capability. This is evident from the noticeable concentration of companies along the diagonal line. Hence, these findings imply that companies must allocate their resources towards both areas, ideally in tandem, to effectively adopt AI technologies. This outcome highlights the importance of considering both tangible and intangible resources, as well as human resources, when implementing AI solutions in organizations. We conducted a detailed analysis of the distribution of companies based on their AI capabilities across three different sectors: manufacturing and construction, services and IT, and other industries relevant to the manufacturing value chain (see Fig. 4, left). We classified the companies into three categories, namely beginners, followers, and leaders, based on their AI capabilities, using the same categories as mentioned above. Our findings indicate that the manufacturing and construction industry has the highest proportion of companies with low AI capabilities among beginners, while the services and IT industry has the largest share of leader companies with strong AI capabilities. Within
the group of follower companies, more manufacturing companies prefer investing in AI infrastructure over building strong human resources for the adoption of AI. In other words, these manufacturing companies are more likely to have allocated resources towards acquiring and implementing AI technology rather than towards developing the skills and knowledge of their workforce in relation to AI. Finally, among leader companies, the dominant group is service and IT providers, representing 58% of the total. This suggests that the service and IT industry is leading the charge in terms of AI adoption and innovation, while manufacturing companies are still in the early stages of embracing AI technology.
Fig. 4. AI capability index - overview of industries (left) and company sizes (right)
After examining AI applications that are specific to certain sectors, we proceeded to investigate AI capabilities by categorizing the companies based on their size (see Fig. 4, right). As expected, larger companies have higher AI capabilities compared to smaller companies. Among the leaders, large companies account for 58%, compared to 6% for small companies. However, our data reveals a surprising result: a substantial proportion of large companies belong to the group of beginners. This underscores the need for increased investment in both AI infrastructure and competencies, irrespective of company size. Upon analyzing the group of followers, our statistics demonstrate that small and medium-sized companies invest significantly more in AI infrastructure than in the skills and knowledge of their employees, while this gap is less pronounced among large companies in the followers group. After analyzing sector- and size-specific AI applications, our research delved into the specific purposes of AI use, categorized by the level of AI capabilities (see Fig. 5, left). For this analysis we used the same classification of three groups of companies: beginners, followers, and leaders. Generally, our data reveals that companies with varying levels of AI capabilities have distinct motives for adopting AI technologies. Beginners have just begun utilizing AI in select areas, such as providing employee support, data evaluation, or workflow automation. Followers with strong human resources are likely still developing their AI infrastructure, resulting in lower percentages across all categories compared to followers with robust infrastructure and leading companies. As expected, the leading companies exhibit the highest percentages of AI usage across all purposes, indicating their advanced and comprehensive AI capabilities. Notably, they
Fig. 5. AI capability index - overview of fields of implementation (left) and whether companies are developing or buying AI solutions (right)
demonstrate a significant increase in the usage of AI for product innovation activities, particularly in developing new products, compared to other groups. Finally, we analysed how companies approach the development of AI solutions, i.e. whether they purchase AI solutions from technology companies on the market or develop them themselves (see Fig. 5, right). The result indicates that a majority of the surveyed companies prefer to buy AI solutions instead of developing them in-house. Only a minority of companies reported developing their solutions themselves, with the leader category having the highest percentage at 36%; this was expected given their high level of infrastructure and competences for developing AI solutions. In contrast, none of the beginner companies reported developing their solutions in-house. The most common approach among the analysed companies is to buy individual solutions that are developed according to their specific requirements. Furthermore, the result shows that a significant number of companies buy standardized solutions with external support. The highest percentage of companies that buy standardized solutions with external support is found in the follower with strong human resources category at 43%, followed by the beginner category at 40%. Lastly, the result reveals that the percentage of companies that buy standardized solutions without external support is the lowest across all categories of companies. Among those, the beginner category has the highest percentage at 20%.
5 Concluding Remarks

In this paper, we aim to explore various combinations of AI-specific resources to gain a deeper understanding of the heterogeneous patterns of companies regarding their AI capabilities. To this end, we build on previous research on AI-specific tangible, intangible, and human resources [3, 7, 21, 27] and develop an analytical framework, as shown in Fig. 1, for analyzing the AI capabilities of companies. Our framework comprises two key dimensions: AI infrastructure and AI competencies. We employ two scores to quantify these dimensions: the AI Competence Score, which measures the availability of AI-related competencies (human AI-specific resources), and the AI Implementation Score, which assesses the availability of AI infrastructure (tangible and intangible AI-specific resources). We apply our approach to a comprehensive dataset of 215 companies that we gathered from investigating the adoption of AI for work and learning in organizations. Based on their AI capabilities, we categorize the companies into four distinct groups: beginners, followers with strong AI infrastructure, followers with strong AI-specific human resources, and leaders. We further analyze the AI capabilities of these groups with respect to their sectoral affiliation, size classes, fields of AI usage, and their make-or-buy decisions regarding the uptake of AI solutions. This analysis provides us with additional insights into these groups, ultimately advancing our understanding of the heterogeneous patterns of companies in terms of their AI capabilities. Overall, our analysis suggests that the manufacturing and construction industry had the highest proportion of beginner companies with low AI capabilities, while the services and IT industry had the largest share of leader companies with strong AI capabilities. Within the follower category, manufacturing companies tended to invest more in AI infrastructure than in developing human resources for AI adoption.
The dominant group among leader companies was service and IT providers, suggesting that this sector leads in AI adoption and innovation while manufacturing companies are still in the early stages of embracing AI technology. Further, our findings indicate that larger companies tend to have higher AI capabilities compared to smaller companies. This is consistent with the expectation that larger companies have more resources to invest in AI infrastructure and competencies. However, the data also reveals that a significant proportion of large companies are still in the early stages of AI adoption, highlighting the need for increased investment in AI infrastructure and competencies across companies of all sizes. We also found that small and medium-sized companies invest more in AI infrastructure than in the skills and knowledge of their employees, while this gap is less pronounced among large companies in the followers group. This suggests that small and medium-sized companies may need to prioritize investing in their employees' skills and knowledge to fully realize the benefits of AI technology. We also show that companies with different levels of AI capabilities have distinct motives for adopting AI technologies, and that the leading companies are more likely to use AI for product innovation purposes. This suggests that the adoption of AI technologies is positively associated with product innovation and thus with competitiveness [21]. Finally, we found that companies are more likely to buy AI solutions rather than develop them in-house, and that buying custom or off-the-shelf solutions with external support is a common approach. This could indicate a lack of in-house AI expertise or resources
among companies. It could also highlight the complexity of emerging AI applications and the difficulty of providing people with such specialized skills. We contribute to theory by developing an analytical framework that combines tangible, intangible, and human resources to analyze the AI capabilities of companies [3, 7, 21, 27]. This framework provides a comprehensive understanding of the heterogeneous patterns of companies in terms of their AI capabilities, including the factors that influence their adoption and use of AI technologies. In terms of the resource-based view, we emphasize the importance of a company's resource base as a bundle of tangible, intangible and human resources in shaping its AI capabilities [8, 14, 17, 21, 25, 27, 39]. In doing so, we show the relevance and adaptability of the RBV in the context of today's business environment, which requires organizations to invest in digital technologies such as AI. By analyzing a comprehensive dataset of 215 companies, our study highlights the differences in AI capabilities among companies of different sizes, sectors, and fields of AI usage. For instance, we show the relevance of AI capabilities for product innovation and thus competitiveness. This finding indicates that the RBV in its adapted form can explain competitive variations among companies that leverage digital technologies to gain an edge. In turn, the findings also challenge our understanding of the key drivers of competitive success, which over recent years have become increasingly embedded with digital technologies and particularly AI. Moreover, we provide insights into the relevance of different levels of AI capabilities for companies' make-or-buy decisions regarding the uptake of AI solutions. Overall, our framework can thus be used in practice to assess a company's strategic fit with AI and help firms identify opportunities for competitive advantage based on their resource combinations.
To expand upon the findings and increase the scope of inquiry, we suggest the following directions for further research. First, longitudinal studies could be conducted to examine changes in AI capabilities over time and establish causal relationships between single types of resources and overall AI capabilities. Second, further studies could use objective measures of AI capabilities, such as data from AI applications or patent filings. This would provide a more accurate and detailed assessment of AI capabilities and enable researchers to identify trends and patterns in AI development. Third, future research could examine additional factors of AI capabilities, such as the ethical and social implications of AI adoption. Finally, it would be beneficial to expand the scope of the study with more comprehensive, multivariate research into how different AI capabilities affect various aspects of a company's performance, such as productivity, profitability, and innovation.
References

1. Davenport, T.H., Ronanki, R.: Artificial intelligence for the real world. Harv. Bus. Rev. 96, 108–116 (2018)
2. McKinsey: The state of AI in 2022—and a half decade in review (2022). https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review
3. Fountaine, T., McCarthy, B., Saleh, T.: Building the AI-powered organization. Harv. Bus. Rev. 97, 62–73 (2019)
4. Ransbotham, S., Khodabandeh, S., Fehling, R., et al.: Winning with AI. MIT Sloan Management Review (2019)
5. Brynjolfsson, E., Rock, D., Syverson, C.: Artificial intelligence and the modern productivity paradox: a clash of expectations and statistics. In: The Economics of Artificial Intelligence: An Agenda, pp. 23–57. University of Chicago Press (2018)
6. Ambati, L.S., Narukonda, K., Bojja, G.R., et al.: Factors influencing the adoption of artificial intelligence in organizations–from an employee's perspective (2020)
7. Chui, M., Malhotra, S.: AI Adoption Advances, But Foundational Barriers Remain. McKinsey and Company, New York (2018)
8. Barney, J.B.: Firm resources and sustained competitive advantage. J. Manag. 17, 99–120 (1991)
9. Horvat, D., Moll, C., Weidner, N.: Why and how to implement strategic competence management in manufacturing SMEs? Procedia Manuf. 39, 824–832 (2019)
10. Grant, R.M.: The resource-based theory of competitive advantage: implications for strategy formulation. Calif. Manag. Rev. 33, 114–135 (1991)
11. Makadok, R.: Toward a synthesis of the resource-based and dynamic-capability views of rent creation. Strateg. Manag. J. 22, 387–401 (2001)
12. Helfat, C.E., Peteraf, M.A.: The dynamic resource-based view: capability lifecycles. Strateg. Manag. J. 24, 997–1010 (2003)
13. Conboy, K., Mikalef, P., Dennehy, D., et al.: Using business analytics to enhance dynamic capabilities in operations research: a case analysis and research agenda. Eur. J. Oper. Res. 281, 656–672 (2020)
14. Mikalef, P., Pappas, I.O., Krogstie, J., et al.: Big data analytics capabilities: a systematic literature review and research agenda. IseB 16, 547–578 (2018)
15. Mikalef, P., Pateli, A.: Information technology-enabled dynamic capabilities and their indirect effect on competitive performance: findings from PLS-SEM and fsQCA. J. Bus. Res. 70, 1–16 (2017)
16. Wamba, S.F., Gunasekaran, A., Akter, S., et al.: Big data analytics and firm performance: effects of dynamic capabilities. J. Bus. Res. (2017). https://doi.org/10.1016/j.jbusres.2016.08.009
17. Bharadwaj, A.S.: A resource-based perspective on information technology capability and firm performance: an empirical investigation. MIS Q. 24, 169–196 (2000)
18. Schryen, G.: Revisiting IS business value research: what we already know, what we still need to know, and how we can get there. Eur. J. Inf. Syst. 22, 139–169 (2013)
19. Melville, N., Kraemer, K., Gurbaxani, V.: Review: information technology and organizational performance: an integrative model of IT business value. MIS Q. 28, 283–322 (2004)
20. Zhang, M.J.: Information systems, strategic flexibility and firm performance: an empirical investigation. J. Eng. Tech. Manage. 22, 163–184 (2005)
21. Mikalef, P., Gupta, M.: Artificial intelligence capability: conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance. Inf. Manage. 58, 103434 (2021)
22. Amit, R., Schoemaker, P.J.H.: Strategic assets and organizational rent. Strateg. Manag. J. 14, 33–46 (1993)
23. Sirmon, D.G.: Resource Orchestration to Create Competitive Advantage: Breadth, Depth, and Life Cycle Effects. SAGE Publications, New York (2011)
24. Horvat, D., Dreher, C., Som, O.: How firms absorb external knowledge—modelling and managing the absorptive capacity process. Int. J. Innov. Manag. 23, 1950041 (2019)
25. Mata, F.J., Fuerst, W.L., Barney, J.B.: Information technology and sustained competitive advantage: a resource-based analysis. MIS Q. 19, 487–505 (1995)
26. Benitez, J., Castillo, A., Llorens, J., et al.: IT-enabled knowledge ambidexterity and innovation performance in small US firms: the moderator role of social media capability. Inf. Manage. 55, 131–143 (2018)
Examining Heterogeneous Patterns of AI Capabilities
633
27. Sjödin, D., Parida, V., Palmié, M., et al.: How AI capabilities enable business model innovation: scaling AI through co-evolutionary processes and feedback loops. J. Bus. Res. 134, 574–587 (2021) 28. Najdawi, A. (ed.): Assessing AI Readiness Across Organizations: The Case of UAE. IEEE (2020) 29. Heimberger, H., Horvat, D., Schultmann, F.: Assessing AI-readiness in production – a conceptual approach: conference proceedings. In: 26th International Conference of Production Research (ICPR) (2022) 30. Jöhnk, J., Weißert, M., Wyrtki, K.: Ready or not, AI comes—an interview study of organizational AI readiness factors. Bus. Inf. Syst. Eng. 63, 5–20 (2021) 31. Mikalef, P., Fjørtoft, S.O., Torvatn, H.Y.: Developing an artificial intelligence capability: a theoretical framework for business value. In: Abramowicz, W., Corchuelo, R. (eds.) BIS 2019. LNBIP, vol. 373, pp. 409–416. Springer, Cham (2019). https://doi.org/10.1007/978-3-03036691-9_34 32. Tornatzky, L.G., Fleischer, M.: The Processes of Technological Innovation. Issues in Organization and Management Series. Lexington Books, Lexington (1990) 33. Kinkel, S., Baumgartner, M., Cherubini, E.: Prerequisites for the adoption of AI technologies in manufacturing – evidence from a worldwide sample of manufacturing companies. Technovation 110, 102375 (2021) 34. Sambamurthy, V., Zmud, R.W.: IT management competency assessment: a tool for creating business value through IT (working paper). Financial Executives Research Foundation (1994) 35. Kogut, B., Zander, U.: Knowledge of the firm, combinative capabilities, and the replication of technology. Organ. Sci. 3, 383–397 (1992) 36. Dierickx, I., Cool, K.: Asset stock accumulation and sustainability of competitive advantage. Manage. Sci. 35, 1504–1511 (1989) 37. Grant, R.M.: Prospering in dynamically competitive environment: organizational capability as knowledge and strategy resources for the knowledge-based economy, Woburn (1999) 38. 
Baumgartner, M., Horvat, D., Kinkel, S.: Künstliche Intelligenz in der Arbeitswelt – Eine Analyse der Kompetenzbedarfe auf Unternehmensebene. In: Gesellschaft für Arbeitswissenschaft e.V. (ed) Nachhaltig Arbeiten und Lernen. GfA-Press, Sankt Augustin (2023) 39. Wang, N., Liang, H., Zhong, W., et al.: Resource structuring or capability building? An empirical study of the business value of information technology. J. Manag. Inf. Syst. 29, 325–367 (2012)
Enabling an AI-Based Defect Detection Approach to Facilitate Zero Defect Manufacturing

Nicolas Leberruyer1,2(B), Jessica Bruch1, Mats Ahlskog1, and Sara Afshar2

1 Division of Product Realization, School of Innovation, Design and Engineering, Mälardalen University, Eskilstuna, Sweden
[email protected]
2 Volvo Construction Equipment, Eskilstuna, Sweden

Abstract. Artificial Intelligence (AI) has proven effective in assisting manufacturing companies to achieve Zero Defect Manufacturing. However, certain products may have quality characteristics that are challenging to verify in a manufacturing facility. This could be due to several factors, including the product's complexity, a lack of available data or information, or the need for specialized testing or analysis. Prior research on using AI for challenging quality detection is limited. Therefore, the purpose of this article is to identify the enablers that contributed to the development of an AI-based defect detection approach in an industrial setting. A case study was conducted at a transmission axle assembly factory where an end-of-line defect detection test was being developed with the help of vibration sensors. This study demonstrates that it was possible to rapidly acquire domain expertise by experimenting, which contributed to the identification of important features to characterize defects. A regression model simulating the normal vibration behavior of transmission axles was created and could be used to detect anomalies by evaluating the deviation of new products compared to the model. The approach could be validated by creating an axle with a built-in defect. Five enablers were considered key to this development.

Keywords: Smart production · Zero Defect Manufacturing · Quality 4.0 · Industrial Artificial Intelligence · Anomaly detection

1 Introduction
© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 634–649, 2023. https://doi.org/10.1007/978-3-031-43666-6_43

Detecting defects in products in a manufacturing facility can be a challenging task. In fact, it is unlikely that a manufacturer can test a product in all possible scenarios that may arise during its utilization phase. Defects are detected during manufacturing with various techniques such as visual inspection, measurements, testing, and sampling to verify if the product meets the defined quality standards. Test results are documented and recorded for analysis with tools such
as Statistical Process Control (SPC). There are several limitations to the use of SPC for challenging quality evaluation: it assumes that the monitored variable has a steady state characterized by a mean value and a standard deviation, that the limits are known from historical data, and that the different monitored variables are not correlated [1]. Lastly, recognizing and interpreting trends from these monitored variables can be difficult [2].

Zero Defect Manufacturing (ZDM) is a framework made possible by data-driven smart production innovation. ZDM uses real-time data to evaluate the potential risk of defects. This helps to anticipate defects and fix them before it is too late. ZDM has the potential to operate successfully given the correct quantity and quality of data together with the right tools, such as Artificial Intelligence (AI), which is a key enabling technology of ZDM [3,4]. AI, through its data-driven subfield, Machine Learning (ML), can process complex datasets and discover patterns. However, there is a lack of industrial applications of AI to further advance ZDM [5].

There are many process models describing how to implement AI to meet business requirements in many application areas [6]. However, in an industrial context, several challenges hinder the implementation of AI. Data accessibility and quality may not be optimal [7]. Domain expertise is required to comprehend which data points are pertinent to a particular need and how to measure them. A significant level of customization is frequently necessary to tailor the AI solution to specific requirements [4].

Therefore, the purpose of this article is to identify the enablers that contributed to the development of an AI-based defect detection approach in an industrial setting. A case study was conducted at a transmission axle facility, and the approach developed there led to the identification of five enablers.
2 Frame of Reference

2.1 Challenging Quality Evaluation in Manufacturing
Obtaining a product's quality characteristics can be challenging for different reasons. Some products are made of many sub-components with different functions. Inspection is difficult because it might be impossible to visualize the inside of the product. As a result, performance testing under a dynamic test is the only way to assess some quality characteristics. Designing an appropriate quality test that captures all the possible failure modes is not straightforward, especially when the test needs to consider the production takt time. Furthermore, several time-dependent parameters are captured during a dynamic test, and the quality must be evaluated by combining them; it is not always simple to set a rule for pass or fail [8]. There can also be uncontrolled variables impacting the test result in different ways. These uncontrolled variables may impair the data's accuracy or reliability as well as cause inconsistencies in testing conditions, affecting the validity of the results. In an industrial setting, it is difficult to control and minimize their impact.

From a customer perspective, some quality measures are more subjective, e.g., aesthetics, customer experience, performance, etc. For example, an acceptable noise level inside a car is based on the customer's perception of noise, hearing ability, acceptance level, etc. Quality measures based on human perception are difficult to define, standardize and control [9]. This dilemma with subjective quality judgments has also been observed in the lock industry, where humans are used for final quality control by rotating the key in the lock and thereafter judging pass or fail [10].

Lastly, it can be challenging to design a test that is applicable to all the variants of a company's manufactured products, especially for products with low production volumes or high customization. A product-specific testing strategy may be required as an alternative.
2.2 Statistical Process Control
Statistical Process Control (SPC) introduced the use of data-driven monitoring of a production process and is now widely applied in industry. The control chart, one tool of SPC, monitors the value of key process parameters in order to find trends and be proactive in detecting variations that could lead to anomalies. This is the first introduction of anomaly detection models in production systems. However, there are several limitations to the use of SPC [1,2]:

– It assumes that the monitored variable has a steady state characterized by a mean value and a standard deviation, i.e. a normal distribution.
– It assumes that the upper and lower limits are known from historical data.
– It assumes that the different monitored variables are not correlated.
– Recognizing trends and interpreting them can be difficult.

In the case of challenging quality evaluations as described in the previous section, it is difficult to find absolute limits when the mean and standard deviations are time-dependent and correlated with other parameters.
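As a minimal illustration of the control-chart logic just described (a generic sketch, not the case company's implementation), the usual Shewhart limits at the mean plus or minus three standard deviations can be computed from historical data and used to flag out-of-control points:

```python
# Minimal Shewhart control-chart sketch: limits estimated from historical
# data, then new observations outside mean +/- 3 sigma are flagged.
# Illustrative only; real SPC software adds run rules, subgrouping, etc.
from statistics import mean, stdev

def control_limits(history):
    """Return (lower, upper) control limits from historical measurements."""
    m = mean(history)
    s = stdev(history)
    return m - 3 * s, m + 3 * s

def out_of_control(history, new_points):
    """Return the new points falling outside the control limits."""
    lcl, ucl = control_limits(history)
    return [x for x in new_points if x < lcl or x > ucl]

history = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
print(out_of_control(history, [10.1, 12.5, 9.9]))  # only 12.5 is flagged
```

Note how the limits presuppose exactly what the text criticizes: a stationary mean and standard deviation, and limits knowable from historical data.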
2.3 Defect Detection with AI
AI comprises the whole system needed for intelligent behavior. It includes sensors; intelligence techniques, such as ML, a data-driven technique that analyzes data and makes decisions automatically; and actuators to implement the decisions [11]. Defect detection with AI can use conventional metrology systems or cameras for product inspection, learning from examples to comprehend how a system behaves [12]. By using data to construct a representation of a normal sample, ML anomaly detection algorithms identify deviations from the norm as anomalies. They are used in financial fraud detection [13], medicine [14], cybersecurity [15] and predictive maintenance [16].

When it comes to manufacturing, many publications use anomaly detection to detect visual defects [17]. There is also existing literature on applying ML algorithms for anomaly detection based on physical properties acquired during quality evaluation tests. For example, to detect defects in aircraft blades, an algorithm called Principal Component Analysis (PCA) is used to analyze the strain of products under test, measured with piezoelectric sensors, to detect damages [18]. In another study, ML anomaly detection algorithms were used to detect defects in automotive rim production by using data from several hydraulic presses in a multi-step hot forging process [19].

All these ML anomaly detection algorithms evaluate anomalies by assigning an anomaly score to each sample. A significant challenge is to find an appropriate classification threshold in order to classify an anomaly as a defect. This threshold can be evaluated with domain expertise and several iterations. Design of experiments (DOE) can help ensure that the anomaly detection threshold is correct. It is a very effective way to find which parameters play a significant role in the output product's characteristics. By changing the controlled variables of a process in a systematic way, it allows the study of cause and effect on the output parameters with a minimal amount of testing [20], hence assisting in building domain expertise for a specific product and its associated process.
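The score-plus-threshold pattern described above can be sketched generically. The z-score model below is a toy stand-in chosen for illustration, not one of the cited algorithms; its data and threshold are assumptions:

```python
# Generic anomaly-scoring sketch: score each sample by its deviation from a
# model of "normal" behavior, then apply a classification threshold.
# A deliberately simple stand-in for the ML algorithms cited in the text.
from statistics import mean, stdev

def fit_normal_model(normal_samples):
    """Summarize 'normal' behavior by its mean and standard deviation."""
    return mean(normal_samples), stdev(normal_samples)

def anomaly_score(sample, model):
    """Deviation from the normal model, in units of standard deviations."""
    m, s = model
    return abs(sample - m) / s

def is_defect(sample, model, threshold=3.0):
    # The threshold is the hard part: it must be tuned with domain
    # expertise and several iterations, as the text notes.
    return anomaly_score(sample, model) > threshold

model = fit_normal_model([1.0, 1.1, 0.9, 1.05, 0.95])
print(is_defect(1.02, model), is_defect(2.0, model))  # False True
```

Whatever the underlying algorithm, the structure is the same: a model of normality, a per-sample score, and a threshold that separates anomaly from defect.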
3 Research Method

3.1 Research Design and Data Collection
A case study method was chosen because it can provide a detailed understanding of the phenomenon being studied [21,22] and allow researchers to use different data collection and sourcing techniques to gather a rich set of data from observations, interviews, documents, etc. The case company is a heavy-duty vehicle manufacturer with a global industrial footprint, advanced manufacturing technologies and a high level of automation. The case study was performed at a transmission axle manufacturing plant from September 2022 to April 2023 and is a continuation of another case study done between September 2021 and May 2022 [23].

Various methods for collecting data were employed. First, three one- to two-hour brainstorming sessions were conducted with assembly workers, manufacturing process owners, product developers, and quality managers to determine what activities could be performed to improve knowledge of how to detect defects in assembled transmission axles. Second, three experiments were conducted to gain a better understanding of the process of vibration generation by the drive gear.
3.2 Problem Description
The vehicle assembly plant reported that vibrations from the transmission axles were causing an annoying noise in the driver’s cabin. In order to judge the possible risk of creating an annoying noise, the transmission axle manufacturing plant installed vibration sensors on a transmission axle cleaning cell that is used to flush the inside of the axles to remove any residuals from production. During this process, the transmission shaft is rotated by an electric motor, and the brakes within the axle are activated to simulate the same friction as when the complete vehicle is being driven on flat ground. The transmission shaft speed is swept from a very low speed to a speed corresponding to the maximum speed of the vehicle. This sweep lasts about 60 s. Furthermore, to characterize both sides of the gears, the transmission shaft is rotated both clockwise and counterclockwise, simulating the acceleration and deceleration of the vehicle. Each vibration
sensor is mounted on the supporting frame, which supports the axle hubs, and measures vibrations along the X axis (see Fig. 1).
Fig. 1. Transmission axle
The main issue faced by the transmission axle manufacturing plant is that no correlation could be established between the vibration measurements done at the cleaning cell and the creation of annoying noise in the cabin of the complete vehicle. The company could identify that the problem is caused in most cases by the drive gear (see Fig. 2), more specifically by the contact between the drive pinion gear and the crown wheel gear. Sound recordings show that the main sound component felt in the cabin is at the same frequency as the gear mesh frequency of the drive pinion gear of the transmission axle. Therefore, when such axles are reported to have defects, replacing the drive gear usually solves the noise problem, but it is very time-consuming and therefore costly.
Fig. 2. Drive gear
3.3 Approaching the Problem
The initial approach, developed during the first case study, involved a semi-supervised method for resolving potential misclassifications of the approved class.
The first step was to identify the characteristics of defective products so that all products could be relabeled, followed by the development of a supervised classification model based on the relabeled products [23]. The issue with this approach is that a certain number of defects were required. Hence, when this approach was applied to products with fewer defects, the accuracy decreased despite the availability of data spanning more than three years. Furthermore, variations in the measurement system were observed over time, which diminished the solution's accuracy.

The current quality process consists of testing the axles at the vehicle assembly site by driving the vehicle on a test track according to a well-defined dynamic test. If the noise inside the cabin is judged to be too high by the driver, microphones are placed in the cabin to identify the frequency of the sound and help pinpoint the causing component. If the frequency corresponds to the gear mesh frequency of the drive gear, then vibration sensors are mounted on both front and rear axles to distinguish the one with the highest vibrations by doing a spectral analysis of the vibrations (see Fig. 3). In the example represented in Fig. 3, it was identified that the front axle (top graph) was the source of the annoying noise in the cabin. Indeed, the vibration power between 400 and 500 Hz is the highest (see the red area in Fig. 3) on the top graph. This can be correlated to the speed of the vehicle when the annoying noise is felt in the cabin.
Fig. 3. Axles vibration spectral analysis from a vehicle (Color figure online)
Since vibration measurements show that it is possible to identify defective transmission axles on a complete vehicle, several brainstorming sessions were held to define activities that could lead to a better understanding of the evaluation of vibration measurements done at the transmission axle cleaning cell. The intention was to gain domain knowledge and develop an approach to detect defects.
4 Empirical Findings
Three experiments were conducted to gain insight into the issue at hand and determine if the measured vibrations could be used to detect transmission axles making an annoying noise.
4.1 Building a Defective Transmission Axle
The purpose of the first experiment was to build a transmission axle with an exaggerated defect to determine whether or not this defect could be captured by the cleaning cell. The experiment was conducted on a drive gear rolling machine that can rotate and apply torque to the drive gear. The machine was used to evaluate new batches of drive gears coming from the suppliers. The evaluation consists of a visual examination of the contact patterns between the drive pinion gear and the crown wheel gear as well as a sound assessment by the operator. For this experiment, vibration sensors were mounted on that machine. Adjustments were made to parameters deemed to play a significant role in the vibrations. These parameters were adjusted according to the maximum correction allowed during the assembly stage, where shims are inserted to compensate for the deviations of the axle housing:

– The drive pinion gear depth position was adjusted to three positions: −0.20 mm, 0 mm (nominal position) and 0.20 mm.
– The backlash between the crown wheel gear and the drive pinion gear, which refers to the amount of play between the gears, was adjusted to four positions: 0 mm, 0.18 mm, 0.34 mm and 0.50 mm.

This experiment showed that having the pinion closest to the crown wheel (−0.20 mm) created the highest vibrations. This enabled the assembly of an axle with a somewhat exaggerated defect.
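The two adjusted parameters form a small full-factorial design, in the spirit of the DOE approach from Sect. 2.3. A sketch of how the resulting 3 × 4 run grid can be enumerated (position values taken from the experiment above; the enumeration itself is only illustrative):

```python
# Full-factorial test grid for the rolling-machine experiment:
# 3 pinion depth positions x 4 backlash settings = 12 runs.
from itertools import product

pinion_depth_mm = [-0.20, 0.0, 0.20]    # 0.0 is the nominal position
backlash_mm = [0.0, 0.18, 0.34, 0.50]   # play between the gears

runs = list(product(pinion_depth_mm, backlash_mm))
print(len(runs))   # 12 combinations to test
print(runs[0])     # first run: (-0.2, 0.0)
```

Enumerating all combinations systematically is what lets a DOE attribute the vibration increase to a specific setting, here the −0.20 mm pinion depth.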
4.2 Testing Axles in the Cleaning Cell
In the cleaning cell, two experiments were conducted: in the first experiment, the built transmission axle with an exaggerated defect was tested. In the second experiment, the same transmission axle but with standard components from production was tested. Since the drive pinion gear has nine teeth, the gear mesh frequency, also called the fundamental frequency or first order, is given in Eq. 1:

GearMeshFrequency (Hz) = 9 × TransmissionShaftSpeed (rpm) / 60    (1)
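As a quick sanity check, Eq. (1) and its higher orders can be evaluated directly (a sketch; the 3000 rpm example speed is an assumption, not a value from the study):

```python
# Gear mesh frequency (Eq. 1) and its harmonics for the 9-tooth drive pinion.
PINION_TEETH = 9

def gear_mesh_frequency_hz(shaft_speed_rpm, order=1):
    """Order 1 is the fundamental; order 2 is the first harmonic, etc."""
    return order * PINION_TEETH * shaft_speed_rpm / 60.0

print(gear_mesh_frequency_hz(3000))      # 450.0 Hz fundamental
print(gear_mesh_frequency_hz(3000, 2))   # 900.0 Hz second order
```

A fundamental of 450 Hz at this assumed speed is consistent with the 400–500 Hz band highlighted in the spectral analysis of Fig. 3.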
The second order, or the first harmonic, will be two times the gear mesh frequency; the third order will be three times, etc. A Power Spectrum Density (PSD) analysis was made with the raw vibration signal. The vibration power from the PSD analysis could be extracted by interpolating the values corresponding to the frequencies of the first three orders (see the red, pink and blue curves on the top graph of Fig. 4). The interpolations corresponding to the vibration power of the different orders are shown in the middle graphs of Fig. 4. On the bottom graphs, the shaft speed and torque applied to the axle are shown. Both the torque signal and the vibration power signal were highly transient, so a Savitzky-Golay filter was chosen over a rolling average filter because it preserves the peaks and does not introduce any delays.

The middle graph shows that the second and third orders do not produce interesting results, as the curves are mainly flat. Measurements made on the complete machine showed that vibrations from the 5th order could be measured on axles creating the annoying noise. This indicates that the measurement system of the axle cleaning cell does not perform as well as when measuring on a complete machine, probably because the vibrations are measured much further away from the main source of vibrations.

When comparing the results of those two experiments, it shows that the regulation of the shaft speed and torque was not ideal. Since the amplitude of the vibrations is dependent on the applied torque, it is impossible to directly compare the vibration results due to the non-reproducibility of the experiments' torque. This might be the reason why the case company failed to evaluate vibration results to identify axles producing an annoying noise. A direct defect detection approach could not be drawn from comparing the two results.
(a) Experiment 1: Exaggerated defect
(b) Experiment 2: Conventional axle
Fig. 4. Comparison between defective and normal axle (Color figure online)
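The order-extraction pipeline described in this section (PSD of the raw signal, interpolation of the power at the order frequencies, Savitzky-Golay smoothing) can be sketched as follows. The sampling rate, filter settings and synthetic signal are illustrative assumptions, not the case company's values:

```python
# Sketch of the order-extraction pipeline: PSD of the raw vibration signal,
# interpolation of power at the first gear-mesh orders, and Savitzky-Golay
# smoothing for transient signals. All parameters are assumptions.
import numpy as np
from scipy.signal import welch, savgol_filter

FS = 10_000  # assumed sampling rate [Hz]

def order_power(raw_signal, shaft_speed_rpm, orders=(1, 2, 3), teeth=9):
    """Interpolate PSD power at each gear-mesh order frequency."""
    freqs, psd = welch(raw_signal, fs=FS, nperseg=1024)
    mesh_hz = teeth * shaft_speed_rpm / 60.0
    return {k: np.interp(k * mesh_hz, freqs, psd) for k in orders}

def smooth(series, window=11, poly=3):
    # Savitzky-Golay preserves peaks and adds no delay, unlike a rolling mean.
    return savgol_filter(series, window_length=window, polyorder=poly)

# Synthetic demo: a 450 Hz tone (fundamental at 3000 rpm) buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / FS)
signal = np.sin(2 * np.pi * 450 * t) + 0.1 * rng.standard_normal(t.size)
powers = order_power(signal, shaft_speed_rpm=3000)
print(powers[1] > powers[2])  # the fundamental carries most of the power
```

With a real sweep, `order_power` would be evaluated per time window so that the order frequencies track the changing shaft speed.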
4.3 Developing a Simulation Model for Normal Behavior
In order to evaluate vibration measurements, an ML regression model (see Fig. 5) was trained on a group of axles manufactured during a specific time period.
Fig. 5. ML regression model
This model can then be used to predict a reference level of vibration and facilitate the evaluation of new transmission axles. Defective axles should demonstrate increased vibrations and, consequently, a poor correlation with the developed model.
(a) Best prediction from training set
(b) Worst prediction from testing set
Fig. 6. Regression model results comparison
In this manner, the model will account for the influence of the uncontrolled variables, which in this case are the applied speed and torque. This will allow for a comparison of vibration measurements by analyzing the prediction error, which is the difference between the actual value of an axle's measurement and the model's prediction. A regression model was fitted to the vibration power of the fundamental frequency for all samples made during a three-week time period. The XGBoost regression model was used due to its light application, making it fast to train on an office computer while being among the best ML algorithms for tabular data [24,25]. It is based on decision trees optimized using the gradient boosting framework [26].

The model was trained with axles produced over a certain time period and then evaluated on the subsequent time period. Figure 6 depicts on the left the best predicted R² from the training set and, on the right, the worst predicted R² from the testing set, which is a dataset that the model did not use during training. The right graph in Fig. 6 shows that the axle's measured vibrations are greater than the model's prediction. Considering that the ML regression model captured the most common vibration pattern seen in the training set to get the best prediction score, this axle could be identified as an anomaly. However, to classify it as a defect, a threshold needs to be defined to know how much higher the vibrations need to be.
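The residual-based evaluation can be sketched as follows. Here scikit-learn's GradientBoostingRegressor stands in for the XGBoost model used in the study, and the data and hyperparameters are synthetic assumptions for illustration only:

```python
# Residual-based anomaly detection sketch: learn "normal" vibration power
# as a function of shaft speed and torque, then score new axles by their
# mean prediction error. GradientBoostingRegressor stands in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic training set: vibration power grows with speed and torque.
speed = rng.uniform(100, 3000, 500)
torque = rng.uniform(50, 500, 500)
vib_power = 0.01 * speed + 0.05 * torque + rng.normal(0, 1.0, 500)

X = np.column_stack([speed, torque])
model = GradientBoostingRegressor(n_estimators=100, max_depth=3).fit(X, vib_power)

def mean_residual(model, X_new, measured):
    """Positive value: the axle vibrates more than the model predicts."""
    return float(np.mean(measured - model.predict(X_new)))

# A "normal" axle vs. one with excess vibration at the same speed/torque.
X_test = np.column_stack([rng.uniform(100, 3000, 50), rng.uniform(50, 500, 50)])
normal = 0.01 * X_test[:, 0] + 0.05 * X_test[:, 1] + rng.normal(0, 1.0, 50)
print(mean_residual(model, X_test, normal))         # near 0
print(mean_residual(model, X_test, normal + 10.0))  # clearly positive
```

Because the model conditions on speed and torque, their oscillations are absorbed by the prediction, and the residual isolates the axle-specific vibration excess.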
4.4 Defect Detection
The prediction error of the regression model in comparison to the measured vibrations is evaluated as the mean prediction residuals and used to compare different transmission axles. A positive value indicates that the transmission axle had higher vibration power than the model predicted. In Fig. 7, the transmission axles utilized to train the ML regression model are shown in blue. In orange are the axles used for testing, which are all the axles manufactured during the subsequent time period. In green is the reference axle, which was made with a slight defect. This reference axle was built during the case study with a crown wheel that deviated with an excess of material at the bottom of the gear root along the entire flank of the active part. It was estimated that such a defect should be captured, so the crown wheel gear was assembled into an axle. It was decided to use this axle as a reference axle for periodic verification of the cleaning cell; hence, it was tested once every two weeks.
Fig. 7. Mean deviation of the regression model residuals (Color figure online)
The use of the model's prediction residuals is validated by analyzing the measurements of the reference axle. Indeed, it shows good repeatability, with all three tests of the same reference axle being almost at the same deviation level and at a high mean deviation of +2/+3 dB for this slight defect (see Fig. 7).

A first approach to judging if axles are defective would be to assume that all axles with a higher vibration level than the reference axle are to be considered defective. In the case of Fig. 7, this would mean all axles to the right of the green axle results. A defect threshold set at +4 dB for the mean prediction residual would result in five axles from the testing set, out of eighteen in total, being considered defects. This method of detecting defects initially appeared adequate, especially since the feature used corresponds to an actual deviation of the quality characteristic that the test is intended to estimate, making it simple to interpret. The plan is to repeat the test of the reference axle on a frequent basis and then, throughout the model's lifetime, examine the detected defects more closely to determine if they were in fact defects. If additional features are required to estimate anomalies, a distance- or density-based method could be evaluated, but then the interpretability might be harder.
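The decision rule described in this section, flagging any axle whose mean prediction residual exceeds a threshold anchored just above the reference axle's level, reduces to a simple comparison. The +4 dB value comes from the text; the axle identifiers and residual values below are illustrative assumptions:

```python
# Defect flagging based on mean prediction residuals (in dB), with the
# threshold anchored slightly above the reference axle's +2/+3 dB level.
# Axle names and residuals are made up for illustration.
DEFECT_THRESHOLD_DB = 4.0

def flag_defects(mean_residuals_db, threshold=DEFECT_THRESHOLD_DB):
    """Return the axles whose mean residual exceeds the defect threshold."""
    return [axle for axle, dev in mean_residuals_db.items() if dev > threshold]

tested_axles = {"A1": -0.5, "A2": 1.2, "A3": 4.8, "A4": 2.9, "A5": 6.1}
print(flag_defects(tested_axles))  # ['A3', 'A5']
```

Re-testing the built-in-defect reference axle every two weeks then serves as a standing check that both the measurement system and the model still place a known defect above this line.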
5 Discussion
Using AI allows for the detection of anomalies in complex datasets. Any anomaly can potentially reveal a defect if the features used in the dataset have a relationship with the defect being predicted. Finding these features is difficult and requires domain expertise. Finding the appropriate threshold for classifying an anomaly as a defect is challenging but can be accomplished by analyzing historical data, conducting experiments or using domain expertise.
Experiments were conducted to obtain a better understanding of how a defective axle differs from a normal axle. First, an axle with an exaggerated defect was made to determine if it could be distinguished from the others only by visualizing the vibration curves. The test revealed that this was difficult due to uncontrolled variables. Then, a realistically defective axle was made and run in the cleaning cell every two weeks. This was done for four reasons: first, it was used to determine the separating line between good and defective axles, under the supposition that the axle in question is just on the edge of this line, which was determined to be an acceptable starting point that can be revised later. Second, it can be utilized to evaluate the process and measurement variations within the cleaning cell. Third, it can indicate the ML model's ability to show reliable information over time. Lastly, it was determined that repeating this test was essential for the AI solution's credibility, as it provides information regarding the need for maintenance on the cleaning cell measurement system or the ML model.

The main challenge for this defect detection test was the torque and shaft speed oscillations, which affect the vibrations and impede a direct comparison of the measurements, making it impossible to use a control chart with a fixed threshold to identify defects. The presented approach makes it possible to compare the test results. The ability of the presented approach to judge a defect relies completely on the built defect to start with. With time, this approach can be refined when the first defects are reworked and a root cause analysis is performed. Furthermore, the impact of a false negative (i.e., defects predicted as non-defective products) needs to be taken into account since, in extreme cases, it can put someone's life in danger [27].
This could lead to a lower detection threshold than the built-in defect, at the cost of more rework due to false positives (i.e., non-defective products predicted as defects). Another challenge when there are very few defects, less than 1%, is the difficulty of training a data-driven model in a supervised manner or evaluating the performance of an unsupervised model, particularly when the training dataset consists of a small number of samples [7].

Nevertheless, it is necessary to ensure the precision and accuracy of the data acquisition system and measurement procedures over time. For an AI solution, this also includes monitoring the performance of the ML model over time. The presented approach of repeating a test with the same built-in defect every two weeks enables a performance and repeatability verification by assuring the system's ability to detect defects over time. It was simple in this instance because the test was designed to detect only one failure mode, a defective drive gear.

Attempting to reproduce all of a product's use cases during its lifetime is obviously unfeasible at a manufacturing site. Therefore, it can be difficult to design a test and establish criteria for evaluating the quality of the products. This study shows that practitioners should not be hesitant to make products, or parts of them, with exaggerated defects to gain a better understanding of their system's ability to detect defects.
Compared to the previous approach, where a certain number of defects was needed [23], the presented approach is much simpler and faster to implement, since there is no need to collect data from a certain number of defective samples before being able to detect defects. This study contributes to the research on ZDM with a practical application of AI for quality inspection using the ML method, which accounts for only 12% of quality inspection analysis methodologies [28].

5.1 Limitations and Next Steps
Due to space constraints and to avoid duplication of information, only a subset of the available data was presented in this study. Specifically, only the results of one of the two cleaning cells were displayed. Also, axles are rotated in both clockwise and counter-clockwise directions; therefore, only one of the two measurements performed on the axle was utilized. In addition, only one type of axle was utilized in the study, which is also the one with the most quality issues. It should be noted that both sensors on the same cleaning cell produced comparable results, regardless of the direction of rotation. When implementing the proposed solution, however, both rotational directions will be used in order to control both sides of the gear teeth. The results from cleaning cell number 2, on the other hand, differed from those presented in this paper, indicating the need for additional research. Two issues need to be addressed in the next steps. First, the torque and speed controls generate oscillations in speed and torque, which have a significant effect on the measured vibrations and prevent direct comparison of test results. The cleaning cell is used for testing axles whose load capacities differ by a factor of three. Since the torque regulation uses the internal axle brakes, it is difficult to find a regulator that works for all the different sizes, and there is currently no support for regulator parameters that depend on axle size. However, a consistent and uniform procedure for testing products is viewed as a prerequisite for data-driven conformity assessment techniques. Second, the vibration sensors are fastened to the support structure of the axle and measure X-axis vibrations (see Fig. 1), whereas, when vibrations are measured on a vehicle, they are measured along the Z-axis and the sensors are placed in the middle of the axles, close to the source of vibrations.
Having the sensors located far from the source of vibrations means that the vibrations must be transferred from the drive gear to the support frame, which likely results in nonlinearity phenomena. Vibrations must also be measured along the same axis in order to achieve a higher degree of correlation between measurements taken on the vehicle and those taken in the cleaning cell. One disadvantage of positioning sensors on the axle is that it requires the operators to perform an additional step when installing and cleaning new axles. A solution would be the installation of an automated arm.
Enabling AI for ZDM
6 Conclusion
This paper demonstrates that ZDM with AI enables defect detection for challenging quality evaluations where SPC with control charts cannot be applied effectively. Indeed, the presence of multiple uncontrolled variables and interactions during the quality evaluation of the products necessitated sophisticated analysis in order to detect defects. In this paper, ZDM and AI used the same philosophy as SPC to identify defects, that is, to highlight products with a deviation from the mean greater than a certain threshold. Nevertheless, SPC may be more suitable for processes with fewer variables, where control charts can effectively monitor quality thanks to linear and uncorrelated relationships within the data, whereas ZDM and AI are suitable for challenging quality evaluations. The purpose of this article was to identify enablers that contributed to the development of a defect detection approach using AI in an industrial setting. Five enablers could be identified:

1. Defect detection of complex products can be facilitated with anomaly detection by using ML algorithms that, thanks to unsupervised learning, do not require labeled data (i.e., with a defect class) for a large number of defects. This enables the rapid development of a defect detection approach and eliminates the issue of bias caused by an imbalanced dataset between approved and defective product classes.

2. Domain expertise is needed to gain initial insights into the controlled parameters that can have an effect on the investigated quality characteristic. This will lead to the development of experiments to increase understanding of the defect, and ultimately assist in identifying the test procedure, required sensors, and data processing.

3. Designed experiments can support building a plausible defect. Experimentation aids in acquiring domain knowledge regarding the issue at hand, in determining which features to use, and in conducting an initial validation.
Additionally, it shows how well a measurement system can distinguish between normal and defective products.

4. Repeating the test with the built-in defect ensures that the entire system, including sensors, processes, and ML models, maintains its ability to identify defects, thereby sustaining the solution's credibility.

5. Open-source programming languages and open file formats facilitated the creation of the solution presented in this paper. During the data exploration and modeling phases, Python, a programming language with a large community of users who provide support and solutions to problems, was deemed very helpful. In addition, the fact that the measurements were stored in FITS (Flexible Image Transport System), an open standard, made them easy to load and manipulate.

The presented solution enhances quality by supporting variation reduction through the identification of anomalies. Sometimes, the customer-perceived quality attribute of the completed product is difficult to break down into its individual components; there is not always an obvious correlation. By reducing the variation of critical
parameters, the spread of these parameters is reduced, thereby reducing the risk of product defects and hence supporting a Zero Defect Manufacturing vision. With the advancement of electrification in the automotive industry, transmission noise will become a major concern, making these results all the more significant.

Acknowledgements. This work was partially supported by the Industrial Technology (IndTech) Graduate School funded by the Knowledge Foundation (KKS, Stockholm, Sweden) and the XPRES project funded by Vinnova (Stockholm, Sweden). The authors express gratitude for the reviewers' constructive feedback.
References

1. Bisgaard, S., Kulahci, M.: Quality quandaries: the effect of autocorrelation on statistical process control procedures. Qual. Eng. 17(3), 481–489 (2005)
2. Zan, T., Liu, Z., Wang, H., Wang, M., Gao, X.: Control chart pattern recognition using the convolutional neural network. J. Intell. Manuf. 31, 703–716 (2020)
3. Powell, D., Magnanini, M.C., Colledani, M., Myklebust, O.: Advancing zero defect manufacturing: a state-of-the-art perspective and future research directions. Comput. Ind. 136, 103596 (2022)
4. Caiazzo, B., Di Nardo, M., Murino, T., Petrillo, A., Piccirillo, G., Santini, S.: Towards zero defect manufacturing paradigm: a review of the state-of-the-art methods and open challenges. Comput. Ind. 134, 103548 (2022)
5. Psarommatis, F., May, G., Dreyfus, P.A., Kiritsis, D.: Zero defect manufacturing: state-of-the-art review, shortcomings and future directions in research. Int. J. Prod. Res. 58(1), 1–17 (2020)
6. Saltz, J.S., Krasteva, I.: Current approaches for executing big data science projects - a systematic literature review. PeerJ Comput. Sci. 8, e862 (2022)
7. Bertolini, M., Mezzogori, D., Neroni, M., Zammori, F.: Machine learning for industrial applications: a comprehensive literature review. Expert Syst. Appl. 175, 114820 (2021)
8. Zhou, P., Chen, W., Yi, C., Jiang, Z., Yang, T., Chai, T.: Fast just-in-time-learning recursive multi-output LSSVR for quality prediction and control of multivariable dynamic systems. Eng. Appl. Artif. Intell. 100, 104168 (2021)
9. Cook, V.G.C., Ali, A.: End-of-line inspection for annoying noises in automobiles: trends and perspectives. Appl. Acoust. 73(3), 265–275 (2012)
10. Andersson, T., Bohlin, M., Olsson, T., Ahlskog, M.: Comparison of machine learning's and humans' ability to consistently classify anomalies in cylinder locks. In: Kim, D.Y., von Cieminski, G., Romero, D. (eds.) Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action. APMS 2022. IFIP Advances in Information and Communication Technology, Part I, vol. 663, pp. 27–34. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16407-1_4
11. Russell, S.J.: Artificial Intelligence: A Modern Approach. Pearson Education, Inc., Upper Saddle River (2010)
12. Papageorgiou, E.I., et al.: Short survey of artificial intelligent technologies for defect detection in manufacturing. In: 2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA), pp. 1–7. IEEE (2021)
13. Hilal, W., Gadsden, S.A., Yawney, J.: Financial fraud: a review of anomaly detection techniques and recent advances (2022)
14. Tschuchnig, M.E., Gadermayr, M.: Anomaly detection in medical imaging - a mini review. In: Haber, P., Lampoltshammer, T.J., Leopold, H., Mayr, M. (eds.) Data Science – Analytics and Applications, pp. 33–38. Springer, Cham (2022). https://doi.org/10.1007/978-3-658-36295-9_5
15. Fernandes, G., Rodrigues, J.J., Carvalho, L.F., Al-Muhtadi, J.F., Proença, M.L.: A comprehensive survey on network anomaly detection. Telecommun. Syst. 70, 447–489 (2019)
16. Kamat, P., Sugandhi, R.: Anomaly detection for predictive maintenance in Industry 4.0 - a survey. E3S Web Conf. 170, 02007 (2020)
17. Zipfel, J., Verworner, F., Fischer, M., Wieland, U., Kraus, M., Zschech, P.: Anomaly detection for industrial quality assurance: a comparative evaluation of unsupervised deep learning models. Comput. Ind. Eng. 177, 109045 (2023)
18. Gharibnezhad, F., Mujica, L.E., Rodellar, J.: Applying robust variant of principal component analysis as a damage detector in the presence of outliers. Mech. Syst. Signal Process. 50, 467–479 (2015)
19. Lindemann, B., Jazdi, N., Weyrich, M.: Anomaly detection and prediction in discrete manufacturing based on cooperative LSTM networks. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), pp. 1003–1010. IEEE (2020)
20. Montgomery, D.C.: Introduction to Statistical Quality Control. Wiley, New York (2020)
21. Karlsson, C.: Researching Operations Management. Routledge, London (2010)
22. Yin, R.K.: Case Study Research: Design and Methods, vol. 5. Sage, Thousand Oaks (2009)
23. Leberruyer, N., Bruch, J., Ahlskog, M., Afshar, S.: Toward zero defect manufacturing with the support of artificial intelligence - insights from an industrial application. Comput. Ind. 147, 103877 (2023)
24. Shwartz-Ziv, R., Armon, A.: Tabular data: deep learning is not all you need. Inf. Fusion 81, 84–90 (2022)
25. Grinsztajn, L., Oyallon, E., Varoquaux, G.: Why do tree-based models still outperform deep learning on tabular data? arXiv preprint arXiv:2207.08815 (2022)
26. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016)
27. Lade, P., Ghosh, R., Srinivasan, S.: Manufacturing analytics and industrial internet of things. IEEE Intell. Syst. 32(3), 74–79 (2017)
28. Azamfirei, V., Psarommatis, F., Lagrosen, Y.: Application of automation for in-line quality inspection, a zero-defect manufacturing approach. J. Manuf. Syst. 67, 1–22 (2023)
A Conceptual Framework for Applying Artificial Intelligence to Manufacturing Projects

Aymane Sahli1,2(B), Eujin Pei1, and Richard Evans2

1 Brunel Design School, Brunel University London, Uxbridge UB8 3PH, UK
[email protected]
2 Faculty of Computer Science, Dalhousie University, Halifax, NS B3H 4R2, Canada
Abstract. Artificial Intelligence (AI) in manufacturing has received significant attention in recent years due to its potential to assist manufacturing teams in monitoring projects, identifying defects and potential risks, and improving complex workflows and processes. Traditionally, manufacturing projects have been highly knowledge intensive, involving security-conscious processes and internal and external actors. As AI solutions become more cost-effective and are deployed as assistive tools to support teams in projects, their deployment in manufacturing can be realized. This paper presents a conceptual framework that introduces how AI can enhance the efficiency of manufacturing projects and be successfully applied to them. A review of extant literature on AI in manufacturing projects is presented, and an empirical investigation with 10 manufacturing project managers and AI subject matter experts from engineering organizations in the UK is given. The proposed framework outlines how AI can be applied to manufacturing projects, encompassing both the necessities of projects for the application of AI and the necessities of AI for its application in manufacturing projects. Finally, managerial implications are provided for manufacturing leaders and project managers.

Keywords: Artificial intelligence · project manufacturing · project management · solution design · conceptual framework
1 Introduction

Since the beginning of the AI age, significant efforts have been made to collate large sets of historical data and to create intelligent algorithms to aid in the development of AI solutions for various industrial applications. Improvements in storage capabilities and computing power have also led to potential applications of AI being identified across most industries, including manufacturing [1]. Both scholars and practitioners suggest that considerable value can be captured from applying AI to industrial workflows and processes [2]; as a result, practical applications of AI, such as smart assistants, are now penetrating both our personal and professional lives [3].

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 650–661, 2023. https://doi.org/10.1007/978-3-031-43666-6_44

At first glance, due to the complex nature and security-consciousness of manufacturing projects, the application of AI may appear incompatible. Given the general
characteristics of manufacturing projects (e.g., knowledge intensive, lack of standardization and customization, inadequate information flows, material shortages, etc.), the information required to train AI algorithms may not be easily accessible nor available in abundant quantities, and may only exist as unstructured data. Nevertheless, early examples of AI's integration in manufacturing projects reveal its potential [4]. For example, AI can help address observed manufacturing project issues, such as projects requiring incremental modifications or the elaboration of new solutions. Such projects can carry unidentified risks which might hamper their success and the actions required to complete them. Consequently, manufacturing projects often surpass anticipated costs and time, even if they do deliver on quality [5]. Taking this into consideration, AI could be a valuable advanced technology to better control risk and the high complexity of projects, expanding efficiency and consistency in attaining the anticipated project objectives. Nevertheless, developments in AI solutions create some obstacles to tapping into the potential advantages that AI offers to manufacturing projects [6]; hence, the application of AI in manufacturing projects is still scarce. Given this situation, this paper aims to examine how AI can be applied to manufacturing projects, and considers possible AI solutions, methods, and instruments which are accessible today. The research question addressed in this paper is: how can manufacturers successfully apply AI to manufacturing projects? To answer this question, we propose a conceptual framework to identify potential AI solutions and how they can be applied in the context of manufacturing projects. To achieve this, an objective-oriented approach is adopted by delineating important attributes for application and use and highlighting their inter-reliant relationships.
The proposed framework provides those involved in manufacturing projects with a structured approach for designing and applying AI solutions. The framework is designed following a two-stage investigation. First, a comprehensive literature review into the potential applications of AI in manufacturing projects was completed. Second, ten interviews were conducted with AI and manufacturing project experts from UK-based engineering organizations to explore the preconditions required to apply AI to manufacturing projects. Lastly, the effects of applying AI in manufacturing projects are studied. This paper concludes by providing an outlook of the potential developments and future applications of AI in manufacturing projects.
2 Literature Review

Manufacturing projects and AI are widely recognized concepts by both practitioners and scholars. In this study, we use an Information Systems (IS) lens to focus on three key elements of applying AI to manufacturing projects, namely: utility research, use, and user [7].

2.1 Organizing the AI Solution Field

Through an IS lens and grounded in the research of Hofmann et al. [6], three approaches for applying AI to manufacturing projects are identified which provide value, scope, and corresponding roles within various contexts. These include:
1. AI-grounded solutions: AI is used to deliver processes and, therefore, generate new comprehensions, for instance, retaining risk evaluations of spam filters for e-mail.
2. AI-assisted solutions: AI is used to simplify or ameliorate the output and input interfaces to users. In this instance, AI-grounded solutions reinforce collaboration, frequently founded on natural language processing, for instance, smart assistants that respond to user input.
3. Comprehensive AI solutions: AI is used to assist in output, input, and task processing. Comprehensive AI solutions like this can comprise distinct characteristics, e.g., a smart assistant that provides deadline estimations.

These three types of AI solutions indicate a variation in usage and signify the inherent capability of AI. To appreciate AI, extant research offers numerous ways in which it can be applied. With regard to its application in manufacturing projects, seven applications are identified, outlined by Hofmann et al. [6]; please see Table 1.

Table 1. AI Purpose Guide [6].

AI Purpose      | Definition
Observe         | Gather and manage real world data
Understand      | Obtain and detect distinct relationships and purposes from data
Ratiocination   | Explicate primary structures and relationships in data
Predict         | Approximate upcoming conditions or events
Decision-making | Select between known alternatives
Produce         | Create or construct artifacts
Perform         | Implement goal-based actions
The mutual foundation of these purposes draws on the capacity of machine learning, which permits adaptations given the historical data and background of manufacturing projects. The functions explain the prospects of AI implementation to assist and improve the cognitive capacities of project managers [8, 9]. Therefore, we use these to suggest AI's potential use in manufacturing projects. The remainder of this paper is organized as follows: Sect. 3 delineates the research method and the interviews undertaken. Sect. 4 explores AI applications in manufacturing projects by examining the specifications of each. Sect. 5 proposes the designed framework and opens a discussion regarding the impact of applying AI within manufacturing projects.
3 Method

Despite many AI solutions existing in practice, there is a requirement to increase our understanding of the prerequisites for applying AI to manufacturing projects. Similarly, there is a need for technology assistance through AI and resources for training using
AI, e.g., value-generating use cases. Therefore, this study interviewed 10 manufacturing project managers and AI subject matter experts from UK-based engineering organizations. The interviews were audio-recorded and took place from January to April 2022. Research ethics approval was sought from and granted by Brunel University London, UK (No. 35866-LR-Mar/2022-38871-1). The recordings of the interviews were transcribed and coded based on recurring subjects and groupings using NVivo 12 [8]. An interview guide was created to conduct the semi-structured interviews. The guide organized the interviews into three sections. First, the research project was introduced, and a standardized understanding of the terms 'Manufacturing Projects' and 'Artificial Intelligence' was sought. Second, the pre-requisites of manufacturing projects were considered, before the ensuing part focused on the requirements of AI. The semi-structured interviews followed the recommendations of Myers and Newman [7], which permit follow-up discussion and questions; as a result, we adjusted the interviews based on individuals' expertise and knowledge. Ultimately, the interviews progressed with closed, semi-open, and open questions. Closed questions focused on tangible assessment by the experts; for example, the questions asked included: 'On a scale of 1 to 5, where 1 = extremely difficult and 5 = extremely easy, how would you rate the difficulty of applying AI to manufacturing projects?' The open questions required interviewees to explain their understanding. The identification of interviewees focused on their fields of knowledge and proficiency (i.e., previous work on manufacturing projects and/or AI), in addition to their corresponding roles within their organization (e.g., Project Manager, IT specialist). Moreover, even though each of the organizations conducts manufacturing projects, they operate in divergent fields. Therefore, the outcomes of the interviews offer a wide range of perspectives on the use of AI in manufacturing projects and provide a clearer understanding of the common requisites. Information about this sample is provided in Table 2 (comprising the number of employees within the interviewee's company as a size marker).

Table 2. Sample Demographics.

Interviewee | Expertise  | Position                | Years of Experience | Field         | No. of Employees
1           | MPs        | Senior Director         | > 10 Years          | Automotive    | > 100,000
2           | MPs        | IT Program Manager      | > 10 Years          | ICT           | 25,000 – 50,000
3           | MPs        | Program Manager         | > 10 Years          | Engineering   | > 100,000
4           | MPs        | Junior Consultant       | 5 – 10 Years        | ICT           | 250,000 – 500,000
5           | MPs        | Project Manager         | 5 Years             | Engineering   | 10,000 – 25,000
6           | AI         | Solution Architect      | < 5 Years           | Logistics     | < 100,000
7           | AI         | CEO (Start-up)          | 5 Years             | IT Consulting | < 10
8           | AI         | Project Manager         | < 5 Years           | ICT           | 250,000 – 500,000
9           | AI         | Senior Data Analyst     | 5 – 10 Years        | Logistics     | > 100,000
10          | MPs and AI | Chief Operating Officer | 5 – 10 Years        | ICT           | < 10
4 Specifications for AI Application in Manufacturing Projects

Making use of AI's potential within manufacturing projects demands viable AI solutions that align with specific manufacturing project specifications. Manufacturing projects, as an application field, should also fulfill certain requirements that available AI processes and concepts demand. In the ensuing sections, we examine some of these shared specifications.

4.1 AI Specifications for Managing Manufacturing Projects

With regard to AI specifications for manufacturing projects, insights from the interviews can be categorized into activity features, focus areas of data, and field and technology understanding. Data is essential for the instruction and assessment of AI models. AI applications are "only as efficient as the data you supplement them with" (Interview 4). Thus, various data-based features must be examined in the application field. These encompass the attainability of data in the needed quantity and type. It is also beneficial if the data constitute as widespread a project history as feasible (Interviews 7 and 5) and are easily attainable. Hence, AI applications are especially fitting in settings that are "unambiguous and structured" (Interview 9). These settings are frequently data-driven and allow for the assembly of quantitative data (Interviews 7 and 5). Furthermore, the data should mirror the real conditions within manufacturing projects as properly and completely as possible to allow for representative preparation of AI applications (Interview 2). In practice, organizations are commonly challenged with data quality issues that demand expansive activities for data training, managing, and quality assurance. This is extremely demanding and occasionally means "a lot of effort" (Interview 8). Furthermore, specifications for the activity features should be considered. This encompasses the necessary features of the tasks involved in manufacturing projects that are to be systematized or maintained by the AI solution. Preferably, the actual tasks should be coherent in their sequence, given that "the more monotonous and consistent, the better [the usage of AI applications] functions" (Interview 2). Standardized in this context
signifies that the process needs little to no change (Interview 2). An additional argument that supports the effective use of AI solutions is when the task is "partly systematized and it is a way of swapping the software element with machine learning" (Interview 2). Accordingly, AI applications can be used in manufacturing project tasks that are already software-assisted or software-based. Lastly, AI solutions set particular and unique requirements on domain understanding and technology in organizations to guarantee effective user acceptance and deployment. This encompasses a meaningful grasp of the technology improvement of AI solutions in addition to an expansive grasp of the application field of manufacturing projects. Consequently, value-making use cases can be acknowledged and integrated in a practical and structured manner. Nonetheless, the specifications for technology and field grasp regularly reveal a shortage of experts and specialists (Interview 5).

4.2 Management of Manufacturing Projects' Specifications for AI

As an equivalent to the specifications on the technology side, manufacturing projects, as an application field, also set requirements for technology assistance by AI solutions. These can be categorized into the standardization and extent of manufacturing project tasks, features, and processes. The degree and capacity of standardization of manufacturing project tasks differs between organizations. The frameworks and standards used by those involved in manufacturing projects, for instance the Project Management Body of Knowledge (PMBOK) or PRINCE2 [7], are normally adjusted for practical use in each organization. Ultimately, the integration of project benchmarks varies, from wide-ranging specifications to cohesive project integration. An AI solution within manufacturing projects should accordingly either: 1) cope with a low degree of formalization of tasks, 2) be able to complement the organization's specifics and be compliant, or 3) warrant a low degree of formalization through the AI application design. Given that most manufacturing projects are distinctive, individual project features vary. Members of manufacturing projects frequently point out this aspect in relation to the specifications for assistance by AI solutions, e.g., "Projects vary" (Interview 8) or "the issue is that a project doesn't always flow in the same manner" (Interview 2). This involves the nature and domain of the project, the project members, the project extent, and specifically the project aim. Such project features reflect essential considerations for the design of AI solutions. Nevertheless, the interviewees also identified that projects with similar features occur and that the standard approach is usually alike (Interview 4). The undertakings in manufacturing projects depend on the project features. Thus, AI solutions must fit the relevant process model (e.g., the Scrum model or the waterfall model) or be able to adjust to the specific features of the manufacturing project workflows and processes (Interviews 4, 8, 1 and 2). More specifically, successive manufacturing project methods are depicted by a well-defined organization and follow a structure from the start of the project. On the other hand, incremental and iterative manufacturing project processes are depicted by repetitive features for preparing, integrating, and confirming project tasks.
5 Proposed Conceptual Framework

Existing solutions and current literature, in addition to the insights drawn from the current study, highlight that, despite the peculiarity of manufacturing projects, AI solutions can have a positive impact on projects. Nonetheless, it is apparent that, to support manufacturing projects through the application of AI, complicated requisites regarding integration and solution design ought to be examined. A well-established outline for arranging and assessing the requisites, possibilities, and issues for the application of AI in manufacturing projects would ultimately assist vendors, developers, and users in applying AI solutions. In this regard, this paper aims to provide a conceptual framework for applying AI to manufacturing projects. Normally, the use of predetermined modelling languages is avoided when constructing frameworks, given their specific modelling intents and goals, which bind the liberty of design in the formation of the framework. The graphical symbols used in the proposed conceptual framework, shown in Fig. 1, are freely constructed but restricted to two symbols: (1) circles for components and (2) arrows as directed edges for relations between components. The semantics of the edges and the components are denoted through short labels and are depicted further after the graphical illustration of the proposed framework. At the heart of the framework is the AI system for manufacturing projects, which is comprehended as a system of socio-technical nature, whose technical aspect is shaped through a peculiar class of integration systems [8].
Fig. 1. Conceptual Framework for applying AI to Manufacturing Projects
The AI system is linked both indirectly and directly with the five other components of the framework through guided interactions. Pertinent characteristics for AI methods within manufacturing projects are the accessible state of technology for improving individual systems through technical design. Operations and system development are connected to a life cycle-based approach of the AI system, which involves recurrent improvements. In practice, it is rare to commence a greenfield development and, therefore, the current system landscape ought to be examined. The AI system demands consideration of sustainability, cost efficiency, and viability. Given that the framework is constructed for practical use in manufacturing projects, the domain of the latter can be perceived as the entry point for an AI exploration or for the identification and improvement of appropriate use cases. The organizational standpoint encompasses the facets of strategy and contribution, which can be operationalized. The business case should also contemplate ethical compliance, security, and legal conformity, in addition to the safety of AI-based manufacturing project assistance in the context of a prolonged business justification. Manufacturing projects focus on achieving set goals by delivering project outcomes, and project governance identifies the basic circumstances for projects. Organizations incorporate both the provisional organization for an individual project and the practices for manufacturing project implementation. The practices can be ordered based on project benchmarks, such as the PMBOK. Therefore, it is feasible to use the framework with traditional project management methodologies (e.g., PMBOK or PRINCE2), with agile procedures like Scrum, and even with hybrid methods like PRINCE2 Agile [9].
The nature of the chosen method (classic, hybrid, or agile) and the resulting deliverables shape the application facet, together with the roles involved, which are defined by the respective methodology: individual roles such as a project manager or a Scrum Master, as well as group roles such as project steering committees. The benefits facet focuses on realizing AI's potential in manufacturing projects. Development impacts are measured via performance indicators derived from the identified benefit potentials. Assessing the effectiveness of the AI system requires actual and target values as well as measurement procedures and scales. The benefits of applying AI to manufacturing projects are tied to the respective project objectives and are realized by delivering the project result. While project success is usually associated with product quality and with cost and time savings, the anticipated benefits rest primarily on the deliverables, the business impact, and stakeholder acceptance [10]. The solution field is the area in which an individual solution design is created for the identified use case(s). The solution design is determined by the chosen AI capabilities, based on the accessible data, and by developing a design suited to the application field. This includes selecting suitable AI algorithms and models for the identified project tasks as well as the associated influencing factors. The solution design, on the one hand, depends on the availability of suitable AI algorithms and their integration through the AI system and, on the other hand, aims at a high degree of automation.
658
A. Sahli et al.
As the framework is designed to help practitioners locate suitable application settings for AI within manufacturing projects, the use case facet covers the aspects relevant to instantiating and realizing AI's potential. The application combines design and selection choices from the other framework elements into an integrated, user-focused representation. The integration approach is user-centred and considers the users' context when deciding on implementation through individual development or the use of standard software, third-party or in-house development, and the overall systematic approach. Once these decisions are made, the planning and deployment of AI can proceed, during which the acceptance and effectiveness of the concrete AI solution for manufacturing projects must be continuously monitored throughout the implementation process.

5.1 Effects of Applying AI in Manufacturing Projects

This study shows that, for the use of AI in manufacturing projects, there are practice-relevant benefit potentials on the one hand and, on the other, the state of theoretical and scientific research and of technology has produced a substantial array of methods for AI solutions. The application of AI in manufacturing projects should be measured by its impact on the organization's value creation. The proposed conceptual framework reflects this through the benefits facet, which marks the starting point for a strategic assessment of AI application. Given the strategic parameters, well-defined benefit prospects can be delineated for the manufacturing project application field and can be captured both qualitatively and quantitatively through operationalized objectives. To connect the AI solution field with the manufacturing projects application field, it is recommended to identify those use cases that are economically valuable and technically feasible.
The interviewees outlined prospects for AI application within manufacturing projects, for example, support for sprint planning in an agile project context. Here, AI can be used to forecast the time required for tasks, order them, and estimate the complexity of the resulting sprint. The focus fields of the framework can help organize this idea into a tangible solution design. The business field is addressed by ordering tasks according to their value-creation potential. In terms of anticipated benefits, AI enables better use of time within sprint planning, while the task ordering also leads to better product quality without additional manual effort. This AI solution provides predictive decision support for the product owners, as shown in Table 1, who perform the sprint planning. The AI system for manufacturing projects fits existing system landscapes, since the product backlog and sprint planning are regularly used within agile approaches such as Scrum and are typically represented in dedicated software systems. The original Scrum approach remains intact, while its sprint-planning event is enhanced. The interviewees also noted other use cases, for example, assembling project teams through AI or using chatbots to handle change requests. Like the first use case, these ideas can benefit from the framework when deriving solution designs for applying AI to manufacturing projects.
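To make the sprint-planning use case concrete, the following sketch shows one minimal way such predictive support could work: effort for new backlog items is estimated from historical hours-per-point rates, and a sprint is filled greedily by business value. All task names, figures, and the capacity value are invented for illustration; the interviewees' actual solutions are not described at this level of detail.

```python
from statistics import mean

# Hypothetical historical records: (task_type, story_points, hours_spent)
history = [
    ("backend", 3, 10), ("backend", 5, 18), ("backend", 8, 30),
    ("frontend", 3, 8), ("frontend", 5, 14), ("testing", 2, 5),
]

def estimate_hours(task_type, story_points):
    """Predict effort as the mean hours-per-point observed for the task type."""
    rates = [h / p for t, p, h in history if t == task_type]
    rate = mean(rates) if rates else mean(h / p for _, p, h in history)
    return rate * story_points

def plan_sprint(backlog, capacity_hours):
    """Greedily fill the sprint with the highest-value tasks that fit."""
    plan, used = [], 0.0
    for task in sorted(backlog, key=lambda t: t["value"], reverse=True):
        hours = estimate_hours(task["type"], task["points"])
        if used + hours <= capacity_hours:
            plan.append(task["name"])
            used += hours
    return plan, round(used, 1)

backlog = [
    {"name": "API endpoint", "type": "backend", "points": 5, "value": 8},
    {"name": "UI polish", "type": "frontend", "points": 3, "value": 3},
    {"name": "Regression tests", "type": "testing", "points": 2, "value": 5},
]
print(plan_sprint(backlog, capacity_hours=25))
```

A production solution would replace the hours-per-point heuristic with a model trained on the organization's own project data, but the decision-support pattern (predict effort, then order and select tasks) is the same.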
5.2 Systematic Identification, Development, and Implementation of Use Cases

Organizations examining the use of AI in manufacturing projects face the problem of choosing an appropriate solution from a range of possible ones [11]. This presupposes that a suitable problem or relevant task has already been identified. The intent to use AI as a core element of the prospective solution narrows the solution space to information technology systems, which points to systems analysis and development as an appropriate approach. Within systems analysis, use cases are a proven tool for examining the application field from the perspective of the actors involved. Hofmann et al. [8] delineate a process for developing AI use cases, which comprises five stages: 1) Organize, 2) Explore, 3) Comprehend, 4) Plan, and 5) Apply. The conceptual framework proposed in this study complements this approach by pre-structuring the application and solution fields within the specific organizational context. This eases a shared conceptualization among the actors involved and improves communication through consistently defined basic terminology. Several participants highlighted the availability of data of suitable quality as a basic precondition for effective AI solutions. While the systematic collection of project-related lessons learned is today a broadly acknowledged best practice, the actual lesson represents a subjective selection of data deemed beneficial for future projects by one or more individuals. In contrast, the collection of project-related data for AI use cases should avoid limitations caused by content-dependent preselection in order to preserve the full potential of the data. Moreover, this includes data sources beyond knowledge from completed projects. For example, the Predictive Project Analytics method described by Fauser et al. [15] is founded on a sizeable data set compiled from projects and combined with supplementary benchmarking information and data sources. We also see an important aspect for the development and deployment of suitable AI use cases in the acceptance of the new AI solution. This concerns both acceptance outside the company, for instance when AI solutions are used for communication with customers, and inside the company, for instance when AI is used to automate tasks that were formerly performed manually. The surveyed participants repeatedly noted data privacy; accordingly, the use of personal data is only feasible with restrictions.
6 Conclusion

Since the beginning of the AI age, interest in the context of manufacturing projects has been revived. The apparent paradox between the uniqueness and novelty of projects on the one hand and AI procedures based on comparative, historical data on the other can be resolved by focusing on the task level of manufacturing projects and their implementation. Repetitive tasks within projects, on which established project management approaches and their process descriptions are also founded, offer a variety of starting points for suitable AI support. Building on theoretical models, methods, and procedures of AI research, together with the rapid advances in information technology in recent years, market-ready AI solutions have been developed and are now available as commercial products for practical use by organizations.
Nonetheless, organizations that choose to use such AI solutions within manufacturing projects face the twin problems of identifying concrete manufacturing project use cases and selecting an appropriate AI solution that can be integrated into their current workflows and processes. The proposed framework is founded on an exploration of theoretical AI methods together with practical insights for the application field of manufacturing projects. The framework structures the domain of AI in manufacturing projects into six constituents embedded within an organization and project context. The constituents are delineated as relevant focus fields for AI integration within project management:

1. Business field: the access point for an AI potential assessment and for identifying and elaborating appropriate use cases.
2. Application field: the project management application field, focused on satisfying the project's goals by producing the project product.
3. Anticipated benefits: the performance objectives, including cost, time, and quality.
4. AI solution: determined by the chosen AI functionalities, based on the accessible data, and by establishing a particular solution design appropriate for the application field.
5. Use case: the integration approach, focused on the user's problem and considering the user's context when deciding between integration through individual development or the use of standard software.
6. AI system for manufacturing projects: regarded as a socio-technical information system whose technical component is shaped by a distinctive class of application systems.

The focus fields are interconnected and encompass relevant components that support an organization-specific conceptualization and definition of terms.
With regard to future developments in AI, a major impetus comes from the much-debated question of whether AI will ultimately be able to fully replace the project manager with an autonomous agent [14]. This question is closely tied to the transition from weak to strong AI, whose problem-solving capabilities can handle multifaceted issues. The currently predominant model of the virtual assistant could then be replaced by a digital project manager that routinely selects an appropriate methodology, generates a project plan, and assembles a project team based on the project's aims and objectives. In managing and monitoring project implementation, decisions would draw on formalized knowledge and historical information as well as real-time information about the entire project. Approaches and technologies from data science, data analytics, and process mining could also be used to assess not only information from operational IT systems but also information from the electronic communication of all project participants, for instance VoIP calls, instant messaging, and e-mail. Here, the question of technical feasibility takes a back seat to questions of ethics, legality, and acceptance. These are significant design factors that the proposed framework addresses at an early stage in future potential analyses or development planning.
References

1. Brynjolfsson, E., McAfee, A.: The business of artificial intelligence. Harv. Bus. Rev. 7, 3–11 (2017)
2. Gartner, Inc.: Gartner identifies three megatrends that will drive digital business into the next decade. https://www.gartner.com/en/newsroom/press-releases/2017-08-15-gartner-identifies-three-megatrends-that-will-drive-digital-business-into-the-next-decade. Last accessed 07 June 2022
3. Stone, P., et al.: One hundred year study on artificial intelligence: report of the 2015–2016 study panel. Technical report, Stanford University (2016). https://ai100.stanford.edu/2016-report. Last accessed 07 June 2021
4. Auth, G., Jokisch, O., Dürk, C.: Revisiting automated project management in the digital age – a survey of AI approaches. Online J. Appl. Knowl. Manag. 7(1), 27–39 (2019). https://doi.org/10.36965/OJAKM.2019.7(1)27-39
5. Winter, R., Rohner, P., Kiselev, C.: Mission impossible? Exploring the limits of managing large IT projects and ways to cross the line. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Grand Wailea, HI, USA, pp. 6388–6397 (2019). https://doi.org/10.24251/HICSS.2019.768
6. Wagner, T., Phelps, J., Guralnik, V., VanRiper, R.: An application view of COORDINATORS: coordination managers for first responders. AAAI (2004)
7. Xu, K., Muñoz-Avila, H.: CaBMA: case-based project management assistant. AAAI (2004)
8. Hofmann, P., Jöhnk, J., Protschky, D., Urbach, N.: Developing purposeful AI use cases – a structured method and its application in project management. Presented at WI2020 (2020). https://doi.org/10.30844/wi_2020_a3-hofmann
9. Brenner, W., et al.: User, use & utility research: the digital user as new design perspective in business and information systems engineering. Bus. Inf. Syst. Eng. 6(1), 55–61 (2014)
10. Kerzner, H.: Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 11th edn. John Wiley & Sons, Hoboken (2013)
11. Turner, J.R., Müller, R.: On the nature of the project as a temporary organization. Int. J. Project Manage. 21(1), 1–8 (2003)
12. Atkinson, R.: Project management: cost, time and quality, two best guesses and a phenomenon, it's time to accept other success criteria. Int. J. Project Manage. 17(6), 337–342 (1999)
13. Papke-Shields, K.E., Boyer-Wright, K.M.: Strategic planning characteristics applied to project management. Int. J. Project Manage. 35(2), 169–179 (2017)
14. Jöhnk, J., Hartmann, M., Urbach, N.: All roads lead to burning Rome: towards a conceptual model of IT project success. In: 15th International Conference on Wirtschaftsinformatik (WI), pp. 1412–1427. GITO Verlag (2020)
15. Fauser, M.J., Schmidthuysen, M., Scheffold, B.: The prediction of success in project management (2015)
16. Project Management Institute: PMBOK Guide – A Guide to the Project Management Body of Knowledge, 6th edn. Project Management Institute, Newtown Square (2017)
Influence of Artificial Intelligence on Resource Consumption Naiara Uriarte-Gallastegi1 , Beñat Landeta-Manzano1 , Germán Arana-Landin2(B) , and Iker Laskurain-Iturbe2 1 Business Management Department, Faculty of Engineering, University of the Basque Country,
48013 Bilbao, Spain {naiara.uriarte,Benat.landeta}@ehu.eus 2 Business Management Department, Faculty of Engineering, University of the Basque Country, 20018 San Sebastián, Spain {g.arana,iker.laskurain}@ehu.eus
Abstract. Industry 4.0 presents companies with an opportunity to embrace the increasingly pressing shift to circular models. This study examines, through a multi-case analysis, how Artificial Intelligence contributes to enhancing key indicators of the circular economy, such as material, energy, and water consumption. The findings demonstrate that Artificial Intelligence can significantly improve resource efficiency and provide a competitive edge to organizations, primarily by reducing energy and material consumption. However, its potential effects vary depending on the type of technology and the activity to which it is applied. Furthermore, the study emphasizes the importance of ongoing research into the novel effects generated by Artificial Intelligence, both in the business sector, through the development of new applications, and in the public sector, through the integration of Artificial Intelligence in the formulation of public policies. Keywords: Artificial Intelligence · Circular Economy · Industry 4.0 · Environmental Sustainability
1 Introduction

Artificial intelligence (AI) was born as a cognitive science to improve decision-making in research activities in many areas, such as image processing, natural language processing, robotics, and machine learning [1]. AI was originally developed to enable machines to simulate human intelligence and psychological skills in different research fields, and it is now defined as the ability of machines to mimic human intelligence and reproduce psychological and reasoning skills [2]. AI can be combined with other tools for massive data processing and analysis, enabling prediction and diagnostic processes that provide solutions to complex situations with a high degree of uncertainty [3], real-time monitoring, and the development of optimisation algorithms and precise device control, in order to obtain greater resource flexibility and improve productivity and the performance of activities through reinforcement learning [4, 5].

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 662–673, 2023. https://doi.org/10.1007/978-3-031-43666-6_45
Different types of AI can be identified: mechanical, analytical, empathic, and intuitive. Mechanical AI automates simple tasks, reducing machine downtime and enabling the timely replacement of defective components detected by sensors [6]. Analytical AI uses algorithms that learn from new data and previous experience, improving processes [6]. Empathic AI analyses consumer behaviour, and intuitive AI manages to "think" creatively and adapt to new situations, contributing to decision-making [7]. AI, along with machine vision, advanced data analytics, the Internet of Things, robotics, and machine learning, among other technologies, belongs to the Industry 4.0 (I4.0) technology revolution, which has a transformative effect on research and development in the business world [8, 9]. Within this technological development, advances in AI are potentially beneficial not only for their manufacturing advantages, but also because they can drive the transition to a Circular Economy (CE) by maximising the use of available resources and minimising waste and emissions [10, 11]. The aim is to maximise the use of all available resources, with organisations adopting the principles of the CE paradigm into their agendas to achieve sustainable growth and overcome the linear economic model based on "extract, produce, use and throw away" [12], alongside the emerging need to limit global warming, end fossil fuel funding, and achieve zero emissions by 2050 [13]. CE involves a new model of natural resource use based on reducing and optimising the consumption of raw materials, energy, and water and on reducing unwanted outputs [14], while maintaining the value of products, materials, and services for as long as possible.
This paradigm shift has led to numerous initiatives around the world, at different scales, in the way we produce and consume, such as the European Union Circular Economy Action Plan [15] or China's CE Promotion Law [16], the latter developed after serious health, social, and environmental problems caused by intense industrialization, rapid urbanization, changes in consumption patterns, and population growth. Germany, for its part, has its CE Law [17], the Netherlands has adopted a CE Programme, and South Korea developed the 2016 Framework Law on the Circulation of Resources [18]. The goal is to introduce policy measures that promote and support sustainable economic models, through changes in the fundamental structural assets of the global economy, considering the global demand for materials on the basis of a comprehensive assessment of technology and economics [19]. This progress, driven by technological development, may be a key factor in moving towards circularity in the CE [20]. In particular, equitable and sustainable AI can have a positive impact on environmental outcomes [21] and support sustainable societies [22]. However, recent findings show that poor AI training can produce inefficient models [23], which can lead to poor AI decisions, a negative effect on the bottom line, and a competitive disadvantage for the firm [24]. In this context, the aim of this research is to assess the influence of AI on key indicators related to material, energy, and water consumption. Although these issues have been studied in the literature, they remain highly topical: the continuous evolution of AI offers a multitude of opportunities in the business context that can significantly influence these consumptions. The main objective is therefore to assess the contribution of AI to the three variables mentioned above:
(1) Evaluate the contribution of AI in relation to material consumption.
(2) Evaluate the contribution of AI in relation to energy consumption.
(3) Evaluate the contribution of AI in relation to water consumption.

With these objectives, the document is organized into five sections. After this introduction, the second section reviews the academic literature on the subject; the third section presents the methodology developed in the investigation; and the fourth section reports the results obtained. The last section presents the discussion and conclusions of the research, as well as its limitations and future lines.
2 Literature Review

Currently, AI is considered a tool for innovation. It facilitates the creation of new products, components, and materials from a broader range of inputs, as well as business models based on a more circular economy, combining real-time and historical data to increase product circulation [25]. Advances in AI, connectivity, and decentralised networks may become key enablers [26], are considered ideal for generating improvements in the transition from the linear economy to the CE [27], and allow improved product quality through intelligent production systems that minimise product waste and energy use, achieving more efficient flow management that also anticipates the legislative modifications being implemented in European legislation [28]. Some authors suggest that AI could help improve circularity and consumption efficiency through data-driven monitoring, analysis, and decision-making, which requires assessing and leveraging existing capacities and infrastructures [27, 29]. Other studies highlight that virtual analytical AI models can facilitate automatic and rational learning, improving decision-making aimed at reducing energy use and improving water management [30] while mitigating environmental disadvantages [21]. An example of the influence of AI is the optimisation of production processes through mechanical AI, which can reduce the environmental footprint per product produced [31] and thereby the overall environmental impact. The combination of mechanical, analytical, intuitive, and/or empathic AI with the automation of industrial processes has become a key tool for increasing efficiency in the use of materials and energy [8, 32, 33]. Specifically, mechanical AI enables machines to make decisions autonomously and in real time, optimizing material efficiency and improving productivity in various sectors [3].
Moreover, intuitive AI's ability to learn and adapt autonomously, in real time and without human intervention, makes it a recommended technology for managing resources such as material and energy consumption [34]. Thus, organisations that embrace digital transformation and integrate empathic and intuitive AI optimise process efficiency globally, enabling the development of sustainable economic practices, consumption patterns, and innovation oriented towards the sustainable development goals [35], including areas such as health and clean primary resources such as water and energy [36]. The integration of analytical AI with artificial neural network methods optimises and forecasts energy needs and assesses the energy efficiency of the electricity use of machines and electronic devices by intelligently measuring and monitoring them [37]. At the same time, the integration of different AI technologies makes it easier for the company to create new
added value from the optimisation of process inputs in conjunction with the production chain, according to market needs and demand [38]. AI is therefore considered a tool with the capacity to improve decision-making regarding the selection and efficient use of consumed resources. However, given the widespread use of this technology and the scarcity of studies quantifying these improvements, we set out the following research questions: To what degree can AI contribute to improving the efficiency of material resource utilization? What about the use of energy resources? And what about improving the efficiency of water use?
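As a drastically simplified stand-in for the energy-forecasting idea mentioned above, the sketch below fits a one-variable linear model of energy demand to hypothetical occupancy data; the cited studies [37] use artificial neural networks, and all figures here are invented for illustration.

```python
from statistics import mean

# Hypothetical hourly observations: building occupancy -> energy demand (kWh).
occupancy = [10, 20, 40, 60, 80]
demand_kwh = [30, 52, 95, 140, 185]

# Closed-form simple linear regression: demand ~ slope * occupancy + intercept.
x_bar, y_bar = mean(occupancy), mean(demand_kwh)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(occupancy, demand_kwh)) \
        / sum((x - x_bar) ** 2 for x in occupancy)
intercept = y_bar - slope * x_bar

def forecast(next_occupancy):
    """Forecast energy demand (kWh) for a planned occupancy level."""
    return slope * next_occupancy + intercept

print(round(forecast(50), 1))  # 118.1
```

A neural network would replace the linear model when the demand curve is non-linear (e.g., thermal inertia, machine start-up peaks), but the monitoring-and-forecasting loop around the model is the same.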
3 Methodology

Firstly, the purposes, objectives, and research questions were defined to focus the literature review and achieve a higher level of detail in their definition. To address these objectives, a multi-case study was designed to obtain greater penetration into and understanding of the studied topic [39, 40]. The selected case studies aimed to be informative, innovative, and appropriate to the research purpose [8]. In a first step, the technological clusters were selected from projects in which AI had significant representativeness among the technological projects presented in the first four editions of the BIND 4.0 program, a public-private acceleration initiative in the Basque Country, Spain [8]. After an initial analysis, 18 projects applying AI in different sectors were analysed (see Table 1) [41]. The protocol for data collection and analysis involved examining, categorizing, tabulating, and reviewing evidence to identify common behavioural patterns and determine the connection between data and research objectives [42]. The information obtained from the reports and the literature review was compared and cross-referenced using an initial 9-level item scale; upon realizing that there were no negative influences, the 4 negative levels were discarded and 5 levels of gradation were used, ranging from 0 (no influence), + (low positive influence), ++ (moderate positive influence), +++ (high positive influence), to ++++ (very high influence), to prepare the final report [43].

Table 1. Description of the projects selected by areas. Note: projects are represented with fictitious names.

Sector | Project | Description | Type of AI
Healthcare | Deba | Technological consultancy dedicated to the development of applied AI in the health sector | Analytical and intuitive
Healthcare | Urola | Digital solutions through AI in learning about diabetes | Analytical and intuitive
Healthcare | Butroi | Biomedical research project | Mechanical and analytical
Energy | Oiartzun | AI solutions for adapting the comfort temperature of the facility according to occupancy and planned tasks | Mechanical and analytical
Energy | Urumea | Energy management through digital technologies | Mechanical and analytical
Energy | Oria | Energy control system through multidisciplinary intelligence that supports decision making | Mechanical and analytical
Energy | Arakil | Project oriented to the efficient management of energy and water of public programs through AI | Analytical and empathic
Maintenance and security | Ega | Predictive maintenance with machine learning | Analytical
Maintenance and security | Ebro | Treatment of predictive solutions to improve security | Mechanical
Supply and distribution | Zadorra | Supply chain management technology consultant | Analytical and intuitive
Supply and distribution | Kadagua | Predictive solutions based on big data technologies | Analytical
Supply and distribution | Nervión | Food consultant and distributor: AI improves conservation | Mechanical and analytical
Supply and distribution | Ibaizabal | Development of the digitization of the manufacturing process of companies | Mechanical and analytical
Image processing | Zalla | Solutions for the analysis and diagnosis of images | Analytical
Image processing | Oka | Intelligent solutions through neural networks to automatically recognize images for video surveillance | Analytical
Design | Lea | Electronic product design | Analytical and empathic
Design | Artibai | Management of data processing, design architecture | Analytical and intuitive
Design | Baia | Design and organization of the product in furniture and task management | Analytical and empathic
4 Results

The 18 projects (with fictitious names) from the cluster of companies with AI technology allowed us to answer the research questions, using direct communication with managers and technicians as the main source of evidence, complemented by internal documents providing additional evidence, including real cases, applications, reports, and I4.0 project memories [43]. This methodology provided a comprehensive and detailed overview of each project, as well as of the challenges and opportunities presented in each case. The study focuses on assessing the impact of AI on the efficiency of the use of natural resources, such as materials, energy, and water. In almost two thirds of the cases analysed, AI made a significant contribution to improving material resource use efficiency; however, in 4 cases no evidence of any such contribution was detected. In terms of energy consumption, in half of the cases AI made a high or very high contribution to energy efficiency, in one third a medium contribution was detected, and in one sixth no evidence of any contribution was found. Finally, in terms of water consumption, the influence of AI was the weakest of the three variables: in more than two thirds of the cases, the contribution of AI was low or null. Figure 1 summarizes the influence of AI in relation to natural resources. The results show that the impact of AI is significant with regard to material and energy use, while for water the impact is less relevant or even null. These results suggest that, while AI can have a positive impact on material and energy use efficiency, its influence on water use is limited (as shown in the last column of averages). It is important to note that specific factors may modify the influence of AI on natural resource use efficiency on a case-by-case basis.
At the sectoral level, the ability to optimise materials is evident in healthcare, supply and distribution chains, image processing, and maintenance and security (see Fig. 2). In these sectors, through mechanical and analytical AI, companies have been able to stabilise processes and thus reduce the percentage of defective products or services; the reduction achieved has ranged from barely 10% in the case of Urola to 95% in the case of Ibaizabal. In addition to reduced consumption of reprocessing materials, mechanical AI has enabled greater control of required materials and inventories, which has further reduced material requirements. Moreover, analytical and intuitive AI can be a key factor in improving and innovating production processes: increases in productivity and efficiency have been evidenced, resulting in significant savings in material and energy consumption. Furthermore, as shown in Fig. 1, the influence of AI on energy resource efficiency is greatest in the energy sector. Using mechanical, analytical, and empathic AI, energy management companies are able to reduce uncertainty in decision-making and improve innovation, resulting in a significant increase in energy efficiency. A large part of these reductions comes from greater control of the temperature and occupancy of facilities, automatic closing of blinds depending on the weather and occupancy, and the integration of activity planning that includes energy variables to avoid the high consumption caused by machine start-ups.
N. Uriarte-Gallastegi et al.
Fig. 1. The influence of AI in relation to the consumption of resources, and averages. Note: 0 = no influence, 1 = low influence (less than 1%), 2 = medium influence (1% or more and less than 5%), 3 = high influence (5% or more and less than 20%) and 4 = very high influence (20% or more).
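The ordinal 0–4 scale defined in this note can be expressed as a small helper function; the name `influence_level` and the half-open interval boundaries are our own reading of the published thresholds, shown here only to make the binning explicit:

```python
def influence_level(reduction_pct):
    """Map a measured resource-use reduction (in %) to the ordinal
    influence scale used in Figs. 1 and 2:
    0 = none, 1 = low (<1%), 2 = medium (>=1% and <5%),
    3 = high (>=5% and <20%), 4 = very high (>=20%)."""
    if reduction_pct <= 0:
        return 0
    if reduction_pct < 1:
        return 1
    if reduction_pct < 5:
        return 2
    if reduction_pct < 20:
        return 3
    return 4

# Examples from the text: Urola's 10% reduction is "high",
# Ibaizabal's 95% reduction is "very high".
```
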
Finally, as shown in Fig. 1, the influence of AI on water efficiency has been detected mainly in the healthcare sector and in design management. In both sectors, the contribution came through text and data processing in the digital design of the product and in decisions about the structure of the organization. In addition, respondents highlight the support for task management that these technologies can provide. These elements, in the words of some managers from both sectors, contribute to increasing efficiency in the use of material resources, energy and water. This study offers an in-depth examination of how AI influences the efficiency of natural resource utilization, with a specific focus on its positive impact on material and energy efficiency (although in certain cases no significant contribution has been identified).
Influence of Artificial Intelligence on Resource Consumption
Fig. 2. Main influences of AI in relation to the reduction of resource consumption, classified by sector. Note: 0 = no influence, 1 = low influence (less than 1%), 2 = medium influence (1% or more and less than 5%), 3 = high influence (5% or more and less than 20%) and 4 = very high influence (20% or more).
5 Discussion

In the academic literature, studies identify AI as one of the Industry 4.0 technologies with the greatest impact on the CE [8]. Furthermore, digital technologies are considered to strengthen the role of citizens and consumers by informing them, educating them and making them active participants in the move towards a CE, while improving knowledge sharing and connections between different stakeholders in the value chain [44]. Existing studies, however, tend to focus on the benefits of a digital CE for individual companies and/or address a single sector, so this study has analysed the impact of AI in different areas, covering healthcare, energy management, maintenance and security, supply chains and distribution, image processing, and design management.

In this research, it has been possible to corroborate previous studies on how AI can positively influence the three CE indicators analysed: material consumption, energy consumption [8, 32] and water consumption [45]. However, AI does not influence all indicators in the same way or to the same extent. In addition to the differences in improvement contributions across sectors, its contribution varies greatly depending on the type of application and how the technology is adapted to the specific case [46]. In the analysis of the cases, the evidence of the contribution of AI to the variables analysed varied widely. In general, improving CE indicators was not a priority in the AI projects analysed. Nevertheless, this research has confirmed the contribution of AI to the CE and has verified its possible contribution to the management of material, energy, and water resource consumption. AI-based solutions optimise processes and can help in the move towards sustainability [37]. In relation to the first and second research question, as can be seen in Fig.
1, across all 18 projects the average positive contribution of AI is high or very high in relation to material and energy consumption. For both variables, consistent evidence has been detected confirming the degree of influence. However, the influence on water consumption is practically nil, contrary to what is shown in the literature review. Several
studies suggest that AI can have a significant impact on the procurement of clean primary resources, such as water [36]. In particular, machine learning and AI have the ability to adapt to new sources of information and detect harmful particles in water [47], which can contribute to ensuring the quality and safety of the drinking water supply [30]. Furthermore, AI can play an important role in the management of water infrastructures and contribute to the identification of problems [48].
6 Conclusion

In the last decade, digitalization has been integrated into various aspects of society, moving towards a sustainable and environmentally friendly economy that includes the CE [49]. The incorporation of AI is key to fostering a CE that promotes the efficient use of resources, reduces waste generation and emissions, optimises processes, favours the transition towards more circular and environmentally friendly business models, and fosters collaboration and information exchange between different actors in the value chain. In this sense, the need to maintain and improve a country's wealth, natural resources and welfare has been an argument for accelerating technological development [50]. The incorporation of AI technology shows a very significant potential impact on the CE and offers opportunities and challenges for sustainable development. However, there is a diversity of opinions on the impact of AI on the CE, at both the business and societal levels.

In this paper, we have analysed the contribution of AI towards a more circular model in innovative companies in different sectors. We have also explored the relationship between AI and the Sustainable Development Goals, especially Goal 7, which aims to ensure access to affordable, secure, and sustainable energy. Regarding the limitations of the research, we suggest further studies on the topic, with a greater diversity of cases and greater analytical depth, e.g., in combination with other I4.0 technologies for the integration of CE and sustainable development criteria. As time goes by, companies accumulate experience and obtain more information on the results achieved, which drives the need for new research in this area.
Additionally, the results obtained are intrinsically linked to the type of application developed in each project, so it would be beneficial to classify the analysis according to the type of application and the sector in which the technology is used; this could be of great relevance for decision-making in fields such as industry, the energy sector, and the environment. Future research on the CE should look at a wide range of sectors, paying more attention to the economic and social benefits as well as to the risks associated with the use of data and digital technologies. It is vital to consider legal compliance, ethical values, fairness, equality, security and accountability among project stakeholders, and these aspects must be constantly and rigorously assessed to ensure sustainable and responsible development in today's society.

Acknowledgements. This study has been funded by the Basque Autonomous Government (Research Group GIC-IT1691-22), the Euskampus Foundation within the projects ZIRBOTICS
and SOFIA (Euskampus Missions 1.0 and 2.0), MCIN/AEI/10.13039/501100011033 through the project PID2020-113650RB-I00 and FEDER "Una manera de hacer Europa" / European Union "NextGenerationEU"/PRTR, the University of the Basque Country (Research Group UPV/EHU 21/005) and the Ministry of Universities and the European Union through the European Union Next Generation EU fund (EU-RECUALI 21/11, 21/18 and 22/03). We are grateful for the technical and human support provided by the CDRE (Université de Pau et des Pays de l'Adour) and the Environmental Sustainability and Health Institute (Technological University of Dublin).
References

1. Lee, J., Davari, H., Singh, J., Pandhare, V.: Industrial Artificial Intelligence for industry 4.0-based manufacturing systems. Manufact. Lett. 18, 20–23 (2018)
2. Vinod, D.N., Prabaharan, S.R.S.: COVID-19 – the role of artificial intelligence, machine learning, and deep learning: a newfangled. Arch. Comput. Methods Eng. 1–16 (2023)
3. Solanki, P., Baldaniya, D., Jogani, D., Chaudhary, B., Shah, M., Kshirsagar, A.: Artificial intelligence: new age of transformation in petroleum upstream. Pet. Res. 7(1), 106–114 (2022)
4. da Silva, F.S.T., da Costa, C.A., Crovato, C.D.P., da Rosa Righi, R.: Looking at energy through the lens of Industry 4.0: a systematic literature review of concerns and challenges. Comput. Indust. Eng. 143, 106426 (2020)
5. Gupta, B.B., Gaurav, A., Panigrahi, P.K., Arya, V.: Analysis of artificial intelligence-based technologies and approaches on sustainable entrepreneurship. Technol. Forecast. Soc. Chang. 186, 122152 (2023)
6. Wilts, H., Garcia, B.R., Garlito, R.G., Gómez, L.S., Prieto, E.G.: Artificial intelligence in the sorting of municipal waste as an enabler of the circular economy. Resources 10(4), 28 (2021)
7. Greenstein, S.: Preserving the rule of law in the era of artificial intelligence (AI). Artif. Intell. Law 30(3), 291–323 (2022). https://doi.org/10.1007/s10506-021-09294-4
8. Laskurain-Iturbe, I., Arana-Landín, G., Landeta-Manzano, B., Uriarte-Gallastegi, N.: Exploring the influence of industry 4.0 technologies on the circular economy. J. Cleaner Prod. 321, 128944 (2021)
9. Zhou, Y., Xia, Q., Zhang, Z., Quan, M., Li, H.: Artificial intelligence and machine learning for the green development of agriculture in the emerging manufacturing industry in the IoT platform. Acta Agriculturae Scandinavica, Section B – Soil Plant Sci. 72(1), 284–299 (2022)
10. De Sousa Jabbour, A.B., Jabbour, C.J.C., Godinho Filho, M., Roubaud, D.: Industry 4.0 and the circular economy: a proposed research agenda and original roadmap for sustainable operations. Annals Oper. Res. 270(1–2), 273–286 (2018)
11. Buenrostro Mercado, E.: Proposal for the incorporation of technologies associated with industry 4.0 in Mexican SMEs. Entreciencias: diálogos en la sociedad del conocimiento 10(24) (2022)
12. Geissdoerfer, M., Morioka, S.N., De Carvalho, M.M., Evans, S.: Business models and supply chains for the circular economy. J. Clean. Prod. 190, 712–721 (2018)
13. United Nations (UN): Climate Action. https://www.un.org/es/climatechange/cop26. Accessed 20 Feb 2023
14. Haupt, M., Vadenbo, C., Hellweg, S.: Do we have the right performance indicators for the circular economy?: insight into the Swiss waste management system. J. Ind. Ecol. 21(3), 615–627 (2017)
15. European Union: New Circular Economy Action Plan (2020). https://environment.ec.europa.eu/strategy/circular-economy-action-plan_en. Accessed 10 Nov 2020
16. United Nations Environment Programme (UNEP): Circular Economy Promotion Law of the People's Republic of China. https://leap.unep.org/countries/cn/national-legislation/circular-economy-promotion-law-peoples-republic-china. Accessed 12 Dec 2022
17. Versteyl, L.A., Mann, T., Schomerus, T.: Circular Economy Law. C.H. Beck, Munich (2012)
18. Fitch-Roy, O., Benson, D., Monciardini, D.: All around the world: assessing optimality in comparative circular economy policy packages. J. Clean. Prod. 286, 125493 (2021)
19. Schandl, H., et al.: Shared socio-economic pathways and their implications for global materials use. Resour. Conserv. Recycl. 160, 104866 (2020)
20. Bonilla, S.H., Silva, H.R., Terra da Silva, M., Franco Gonçalves, R., Sacomano, J.B.: Industry 4.0 and sustainability implications: a scenario-based analysis of the impacts and challenges. Sustainability 10(10), 3740 (2018)
21. Taddeo, M., Tsamados, A., Cowls, J., Floridi, L.: Artificial intelligence and the climate emergency: opportunities, challenges, and recommendations. One Earth 4(6), 776–779 (2021)
22. Beamer, K., et al.: Island and Indigenous systems of circularity: how Hawai'i can inform the development of universal circular economy policy goals (2023)
23. McGovern, A., Ebert-Uphoff, I., Gagne, D.J., Bostrom, A.: Why we need to focus on developing ethical, responsible, and trustworthy artificial intelligence approaches for environmental science. Environ. Data Sci. 1, e6 (2022)
24. Rana, N.P., Chatterjee, S., Dwivedi, Y.K., Akter, S.: Understanding dark side of artificial intelligence (AI) integrated business analytics: assessing firm's operational inefficiency and competitiveness. Eur. J. Inf. Syst. 31(3), 364–387 (2022)
25. Ellen MacArthur Foundation: Artificial intelligence and the circular economy – AI as a tool to accelerate the transition (2019). https://ellenmacarthurfoundation.org/artificial-intelligence-and-the-circular-economy. Accessed 1 Dec 2022
26. Mouthaan, M., Frenken, K., Piscicelli, L., Vaskelainen, T.: Systemic sustainability effects of contemporary digitalization: a scoping review and research agenda. Futures 103142 (2023)
27. Roberts, H., et al.: Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI & SOCIETY, pp. 1–14 (2022)
28. Arana-Landín, G., Uriarte-Gallastegi, N., Landeta-Manzano, B., Laskurain-Iturbe, I.: The contribution of lean management – industry 4.0 technologies to improving energy efficiency. Energies 16(5), 2124 (2023)
29. Ünal, E., Sinha, V.K.: Understanding circular economy trade-offs. In: Academy of Management Proceedings, p. 13775. Academy of Management, Briarcliff Manor, NY (2021)
30. Segovia, M., Garcia-Alfaro, J.: Design, modeling and implementation of digital twins. Sensors 22(14), 5396 (2022)
31. Mäkitie, T., Hanson, J., Damman, S., Wardeberg, M.: Digital innovation's contribution to sustainability transitions. Technol. Society 73, 102255 (2023)
32. Herterich, M.M., Uebernickel, F., Brenner, W.: The impact of cyber-physical systems on industrial services in manufacturing. Procedia CIRP 30, 323–328 (2015)
33. Wilson, M., Paschen, J., Pitt, L.: The circular economy meets artificial intelligence (AI): understanding the opportunities of AI for reverse logistics. Manage. Environ. Qual.: Int. J. 33(1), 9–25 (2022)
34. Abi Akl, N., El Khoury, J., Mansour, C.: Trip-based prediction of hybrid electric vehicles velocity using artificial neural networks. In: 2021 IEEE 3rd International Multidisciplinary Conference on Engineering Technology (IMCET), pp. 60–65. IEEE (2021)
35. Tarhini, A., Harfouche, A., De Marco, M.: Artificial intelligence-based digital transformation for sustainable societies: the prevailing effect of COVID-19 crises. Pacific Asia J. Assoc. Inform. Syst. 14(2), 1 (2022)
36. Kar, A.K., Choudhary, S.K., Singh, V.K.: How can artificial intelligence impact sustainability: a systematic literature review. J. Cleaner Prod. 134120 (2022)
37. Saheb, T., Dehghani, M., Saheb, T.: Artificial intelligence for sustainable energy: a contextual topic modeling and content analysis. Sustain. Comput.: Inform. Syst. 35, 100699 (2022)
38. Basco, A.I., Beliz, G., Coatz, D., Garnero, P.: Industria 4.0: fabricando el futuro, vol. 647. Inter-American Development Bank (2018)
39. Yin, R.K.: Case Study Research and Applications, 6th edn. Sage Publications, Los Angeles (2017)
40. Verleye, K.: Designing, writing-up and reviewing case study research: an equifinality perspective. J. Serv. Manag. 30(5), 549–576 (2019)
41. Leino, M., Pekkarinen, J., Soukka, R.: The role of laser additive manufacturing methods of metals in repair, refurbishment and remanufacturing – enabling circular economy. Phys. Procedia 83, 752–760 (2016)
42. Sarc, R., Curtis, A., Kandlbauer, L., Khodier, K., Lorber, K.E., Pomberger, R.: Digitalisation and intelligent robotics in value chain of circular economy oriented waste management – a review. Waste Manage. 95, 476–492 (2019)
43. Miles, M.B., Huberman, A.M., Saldana, J.: Qualitative Data Analysis: A Methods Sourcebook, 3rd edn. Sage, Los Angeles (2014)
44. Piscicelli, L.: The sustainability impact of a digital circular economy. Curr. Opinion Environ. Sustain. 61, 101251 (2023)
45. Zanfei, A., Menapace, A., Righetti, M.: An artificial intelligence approach for managing water demand in water supply systems. In: IOP Conference Series: Earth and Environmental Science, vol. 1136, no. 1, p. 012004. IOP Publishing (2023)
46. Chauhan, C., Parida, V., Dhir, A.: Linking circular economy and digitalisation technologies: a systematic literature review of past achievements and future promises. Technol. Forecast. Soc. Chang. 177, 121508 (2022)
47. Hill, T.: How artificial intelligence is reshaping the water sector. Water Finance & Management. https://www.mydigitalpublication.com/publication/?m=59374&i=564304&p=16&pp=1&ver=html5. Accessed 1 Apr 2023
48. Leal Filho, W., et al.: Deploying artificial intelligence for climate change adaptation. Technol. Forecast. Social Change 180, 121662 (2022)
49. Dabbous, A., Barakat, K.A., Kraus, S.: The impact of digitalization on entrepreneurial activity and sustainable competitiveness: a panel data analysis. Technol. Soc. 73, 102224 (2023)
50. Durst, S., Davila, A., Foli, S., Kraus, S., Cheng, C.F.: Antecedents of technological readiness in times of crises: a comparison between before and during COVID-19. Technol. Society 72, 102195 (2023)
Development of Predictive Maintenance Models for a Packaging Robot Based on Machine Learning

Ayoub Chakroun1(B), Yasmina Hani2, Sadok Turki1, Nidhal Rezg1, and Abderrahmane Elmhamedi2

1 Laboratoire de Génie Informatique, de Production et de Maintenance, Université de Lorraine, Nancy, France
{ayoub.chakroun,sadok.turki,nidhal.rezg}@univ-lorraine.fr
2 Laboratory QUARTZ EA 7393, University Paris VIII Vincennes, University Institute of Technology, 93100 Montreuil, France
{y.hani,a.elmhamedi}@iut.univ-paris8.fr
Abstract. This study presents the development of a predictive model for the health monitoring of power transmitters in a packaging robot using machine learning techniques. The model is based on a Discrete Bayesian Filter (DBF) and is compared to a model based on a Naïve Bayes Filter (NBF). Data preprocessing techniques are applied to select suitable descriptors for the predictive model. The results show that the DBF model outperforms the NBF model in terms of predictive power. The model can be used to estimate the current state of the power transmitter and predict its degradation over time. This can lead to improved maintenance planning and cost savings in the context of Industry 4.0. Keywords: Industry 4.0 · Predictive maintenance · Machine Learning
© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 674–688, 2023. https://doi.org/10.1007/978-3-031-43666-6_46

1 Introduction

Presently, the industry is undergoing what experts call the "Fourth Industrial Revolution," also referred to as Industry 4.0. Industry 4.0 and the associated digital transformation refer to the advent of new digital technologies such as the internet of things (IoT), Artificial Intelligence (AI), Cloud computing, Cyber-Physical Production Systems (CPPS), and Big Data and Analytics [1, 2]. The emergence of these technologies generates a profound change in processes and activities, skills, business models and manufacturing maintenance [3]. The latter refers to the various ways and operations undertaken to maintain or restore equipment to a state where it can perform its intended function. The physical resources used in manufacturing are among the most valuable assets of industrial companies. Given the risk of wear and tear over time, it has become crucial to maintain machinery in operational condition. Furthermore, the reliability of the maintenance department is essential for boosting productivity. As noted by Parida and Chattopadhyay [4], evaluating the performance of the maintenance function has become
Development of Predictive Maintenance Models
necessary. Many authors have recognized that maintenance is a key factor in improving the performance and profitability of manufacturing systems [5–7]. Therefore, maintenance is often identified in the literature as a primary driver of business competitiveness [8], and it is a vital function for ensuring the sustainable performance of a manufacturing plant. Thanks to recent advanced technologies and intelligent equipment (e.g. sensors and probes), data collection and integration have become possible, making it feasible to enhance reliability, reduce repair costs, and estimate and predict machinery degradation. In fact, predictive maintenance enables manufacturing companies to minimize unnecessary interruptions to their production processes.

In the literature, numerous studies have examined the effects of Augmented Reality (AR), Artificial Intelligence (AI), and Machine Learning (ML) on production system maintenance [9]. ML technology is commonly used in industrial process monitoring and in predicting equipment failures. Leukel et al. [10] conducted a systematic review examining the adoption of ML for predicting equipment failures in industrial maintenance. Shcherbakov et al. [11] developed a conceptual model for a proactive decision support system, relying on real-time predictive monitoring, designed to minimize downtime in Cyber-Physical Production Systems (CPPS). Similarly, Chaudhuri [12] proposed a Hierarchical Modified Fuzzy Support Vector Machine (HMFSVM) to predict vehicle failure trends. Garcia et al. [13] developed an Intelligent Predictive Maintenance System (IPMS) that uses a neural network for real-time diagnostics of industrial processes and is capable of detecting faults in those processes. An example of predictive maintenance can be found in the work of Yang [14], where a Kalman filter was used to predict the state of a direct-current motor.
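To make the filtering idea behind such approaches concrete, the sketch below shows a minimal scalar Kalman filter tracking a slowly drifting motor quantity from noisy readings. This is a generic textbook sketch under a random-walk state model, not the implementation of [14]; the noise variances `q` and `r` are arbitrary tuning values chosen for illustration.

```python
def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances (tuning values)."""
    # Predict: state assumed constant, uncertainty grows by q
    P = P + q
    # Update: blend prediction and measurement via the Kalman gain
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Start from a deliberately wrong prior and feed noisy readings near 1.0
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 1.0]:
    x, P = kalman_step(x, P, z)
# The estimate x moves toward the measured level while the
# variance P shrinks as evidence accumulates.
```
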
Similarly, other studies and approaches adhere to the formal definition of predictive maintenance. It is important for a predictive maintenance model to be capable of learning from new data and adapting to different situations. Several studies in the literature present predictive maintenance models that operate under various constraints and with multiple objectives in the context of health assessment. In this sense, Xia et al. [15] propose a Collaborative Production and Predictive Maintenance Model (COPPM) for scheduling a Flexible Flow Shop (FFS) subject to stochastic and dynamic interruptions, based on monitoring data. Likewise, Bencheikh et al. [16] present an approach for the joint scheduling of production and predictive maintenance tasks using data from PHM (Prognostics and Health Management) models through a multi-agent framework called SCMEP (Supervisor, Customers, Environment, Maintainers, and Producers) for collaborative scheduling. Similarly, a predictive maintenance model for optimizing production scheduling using deep neural networks has been presented by Zonta et al. [17].

Regarding the limits of existing ML models for predictive maintenance, the literature often lacks comprehensive comparisons between different algorithms. Many studies focus on a specific algorithm or approach without thoroughly comparing it to alternative methods, yet a comprehensive evaluation and comparison of different algorithms can provide valuable insights into their relative strengths and weaknesses in various predictive maintenance scenarios. Furthermore, predictive maintenance in real-world industrial settings involves various complexities, such as varying operating conditions, sensor noise, missing data, and equipment degradation patterns. However, some of the literature fails to adequately address these complexities when
A. Chakroun et al.
developing and evaluating ML models. In this research, we aim to incorporate and analyze these real-world challenges to ensure the effectiveness and robustness of predictive maintenance models. The lack of benchmark datasets and standardized evaluation metrics is another major limitation of existing predictive models: the availability of benchmark datasets and standardized metrics is crucial for fair and consistent comparisons between different predictive maintenance models.

The rest of this paper is organized as follows. In Sect. 2, we present the theoretical background. In Sect. 3, we present the materials and methods, including the proposed method, the data analysis and processing, and the proposed predictive models based on ML techniques. In Sect. 4, we display the results and discussion. Finally, conclusions and perspectives are given in Sect. 5.
2 Theoretical Background

The article centers on the health of an automated robot that packages and prepares various brass accessories produced by a manufacturing system (see Fig. 1). The robot is designed to handle a maximum number of products with minimal adjustments and operates autonomously from the production line. However, the factory is currently experiencing production capacity issues due to problems with bringing the conditioning unit online. Therefore, the article focuses on a case study of the packaging robot responsible for brass accessories, specifically predicting the degradation of its power transmitters, which work under challenging mechanical and thermal conditions. It is worth noting that the conditioning robot comes equipped with specialized sensors and intelligent technology, connected to a local area network, transforming the workshop into a Cyber-Physical Production System (CPPS). The sensors' purpose is to measure parameters affected by changes and disruptions in the brass accessories packaging process.
Fig. 1. A Computer-Based Simulation Model of a Manufacturing System.
3 Materials and Methods

3.1 Proposed Method

This paper focuses on a practical case study of a factory that manufactures brass accessories. Specifically, we address the challenge of implementing predictive maintenance for a robot responsible for packaging finished products by predicting the degradation of its power transmitter. Our goal is to present two predictive maintenance models within the context of Industry 4.0 and compare their performance. To achieve this objective, we explore and discuss the relevance of failure prognosis. In doing so, we develop two predictive models using supervised machine learning techniques. These models estimate the gradual degradation of the robot and predict its future states, enabling maintenance personnel to make informed decisions regarding maintenance interventions. A justification for the choice of predictive models is presented in the following.

The first predictive model is based on a Discrete Bayesian Filter (DBF), which has been shown to be highly appropriate for this specific problem [18, 19]. One of the key strengths of the DBF is its ability to effectively integrate information from processes, configuration variables, and sensor measurements. In addition, the DBF is able to manage the inherent uncertainty associated with noisy processes and sensor data. The model can adapt its internal parameters to incorporate new information and react to changing operational conditions, enabling it to make agile decisions in real time with short operating times. Overall, the DBF is well suited for predictive maintenance in the context of Industry 4.0, where it is critical to rapidly and accurately identify potential issues and take appropriate corrective action. For the second model, we propose a predictive model based on a Naïve Bayes Filter (NBF), a probabilistic classification model that is widely used in machine learning and has several advantages that make it a suitable option for this situation.
These include its computational efficiency, its ability to handle large datasets, its robustness to irrelevant features, and its ease of implementation. Overall, the Naïve Bayes filter is a viable option for predictive maintenance in our case. The main contributions of this paper can be summarized as follows:
• analysis and structuring of massive data;
• development of two predictive maintenance models based on the DBF and NBF supervised machine learning techniques.
Figure 2 below illustrates the flowchart of the proposed method.

3.2 Theoretical Study

Addressing the Robot's Power Transmitter. As mentioned above, the proposed predictive maintenance models will be applied to the power transmitter of a packaging robot, which is depicted in Fig. 3. The main objective is to analyze and understand the power transmitter's features, which are discussed in the following subsection. We have compiled a list of various parameters of the power transmitter in question (a belt-pulley system); this information is presented in Table 1. It is worth noting that the values provided in the table are specific to a brand new power
Fig. 2. Proposed method’s flowchart.
Fig. 3. Robot’s power transmitter.
transmitter. By understanding these parameters, we can gain a better understanding of how the power transmitter works, analyze the massive data, and identify any deviations from normal values during the predictive maintenance process.

An Overview of the Predictive Maintenance Models. Figure 4 below provides an overview of the predictive maintenance models that will be developed as part of this research. These models are specifically designed to monitor and maintain the power transmitter of the packaging robot, and they leverage machine learning techniques to predict and identify potential issues before they lead to equipment failures. By implementing these models, we aim to improve the reliability and efficiency of the packaging robot by estimating its progressive degradation. The packaging robot is equipped with a number of intelligent embedded sensors connected to a local area network, transforming the shop floor into a Cyber-Physical Production System (CPPS); their task is to measure, in real time, parameters that are affected by changes and disturbances in the conditioning process. During packaging, data is gathered from the sensors and securely transmitted to the centralized server.
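To illustrate how a discrete Bayes filter of the kind described in Sect. 3.1 operates, the sketch below tracks a belief over transmitter health states through alternating predict and update steps. The three states, the transition matrix, and the binary "alarm" sensor model are invented numbers for the sketch, not the fitted parameters of the paper's DBF.

```python
import numpy as np

states = ["healthy", "worn", "failing"]

# P(state_t | state_{t-1}): degradation only moves forward
T = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

# P(observation | state) for a binary vibration alarm
#   column 0: alarm off, column 1: alarm on
O = np.array([[0.90, 0.10],
              [0.60, 0.40],
              [0.20, 0.80]])

def dbf_step(belief, obs):
    """Predict with the transition model, then update with the
    sensor likelihood and renormalize."""
    predicted = belief @ T
    updated = predicted * O[:, obs]
    return updated / updated.sum()

belief = np.array([1.0, 0.0, 0.0])   # start: certainly healthy
for obs in [0, 0, 1, 1]:             # two quiet cycles, then two alarms
    belief = dbf_step(belief, obs)
# After repeated alarms, most probability mass has shifted away
# from "healthy" toward the degraded states.
```

The same predict/update loop generalizes to richer sensor vectors; only the observation model `O` changes.
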
Table 1. Corresponding parameters of the robot's power transmitter.

Parameter | Signification | Value
ND | Speed of the big pulley in rpm | —
Nd | Speed of the small pulley in rpm | —
WD | Angular speed of the big pulley in rad/s | 153 rad/s
Wd | Angular speed of the small pulley in rad/s | 54.45 rad/s
D | Diameter of the big pulley in mm | 630 mm
d | Diameter of the small pulley in mm | 224 mm
C | Engine torque in Nm | 294 Nm
a | Center distance between pulleys | 630 mm
V | Linear speed of belt in m/s | 17.14 m/s
θD | Winding angle of the big pulley in rad | 217.6°
θd | Winding angle of the small pulley in rad | 142.4° (2.5 rad)
L | Length of the belt in mm | —
T | Tension of the stretched strand in N | 3675 N
t | Tension of the soft strand in N | 1050 N
T0 | Initial tension of the belt in N | —
f | Coefficient of friction between pulley and belt | 0.5
P | Transmittable power in watts | 45000 W
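The tabulated values can be cross-checked against standard belt-drive relations: transmittable power P = (T − t)·V, torque at the small pulley C = (T − t)·d/2, and the capstan limit T/t ≤ e^(f·θ). The short consistency check below is ours, not part of the original paper; it confirms that the power, torque, and tension figures in Table 1 agree with one another.

```python
import math

# Table 1 values in SI units
D, d = 0.630, 0.224                 # pulley diameters (m)
T_tight, t_slack = 3675.0, 1050.0   # strand tensions (N)
V = 17.14                           # belt linear speed (m/s)
f, theta = 0.5, 2.5                 # friction coefficient, wrap angle (rad)

P = (T_tight - t_slack) * V         # transmittable power (W)
C = (T_tight - t_slack) * d / 2     # torque at the small pulley (N m)
ratio = T_tight / t_slack           # actual tension ratio
capstan = math.exp(f * theta)       # slip limit e^(f*theta)

# P comes out close to the tabulated 45000 W and C equals the
# tabulated 294 N m; the tension ratio of 3.5 sits essentially at
# the capstan limit (~3.49), i.e. the drive runs near the slip
# boundary for the stated friction coefficient and wrap angle.
```
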
Fig. 4. An Overview of Predictive Maintenance Models (DBF & NBF).
In the following, we aim to process the collected information through sensors into a format that is suitable and usable for storage in a Database. Then, predictive models
based on ML techniques will be applied to assess the degradation status of the robot's power transmitter and to predict its future behavior. With access to real-time degradation status updates and predicted future behavior of our assets, we can develop a comprehensive and well-informed maintenance plan, ultimately improving the overall reliability and efficiency of maintenance interventions. We would like to highlight that our research is specifically focused on the implementation of predictive maintenance strategies for the conditioning robot, with a particular emphasis on its power transmission system. The next section provides details on data analysis and processing.
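For the second model family mentioned above, a Gaussian naive Bayes classifier can serve as a compact illustration of the NBF idea: score each degradation class by its prior and per-feature Gaussian likelihoods, assuming feature independence. Everything below is synthetic and illustrative; the feature names, readings, and class labels are invented, not taken from the plant data.

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class feature means/variances and class priors."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    prior = np.array([(y == c).mean() for c in classes])
    return classes, mu, var, prior

def predict_gnb(model, x):
    """Pick the class maximizing log P(c) + sum_j log N(x_j; mu, var)."""
    classes, mu, var, prior = model
    loglik = -0.5 * (np.log(2 * np.pi * var)
                     + (x - mu) ** 2 / var).sum(axis=1)
    return classes[np.argmax(np.log(prior) + loglik)]

# Synthetic readings: [belt tension (N), friction coefficient]
X = np.array([[3600.0, 0.50], [3650.0, 0.51],
              [3300.0, 0.58], [3250.0, 0.60]])
y = np.array([0, 0, 1, 1])          # 0 = healthy, 1 = degraded
model = fit_gnb(X, y)
```

A new reading close to nominal tension and friction is then classified as healthy, while a slack, high-friction reading is classified as degraded.
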
3.3 Data Analysis and Processing
Data. In this paper, we consider a dataset gathered between 2020 and 2022, capturing information on 2900 conditioning processes performed during this period. The measurements are available as files containing the values of 15 distinct variables that measure various parameters (as shown in Table 1). Given the large volume of data involved (2900 processes × 15 variables = 43,500 values), it is essential to organize this information in a manner that facilitates efficient management. To achieve this, we utilize representative descriptors, elaborated upon in the subsequent subsections. A descriptor can be defined as: "A simplified description of certain data elements that clarifies and preserves the information relevant to solving a certain problem" [18]. For our case study, a descriptor should contain information about the health of the conditioning robot. In the following, we carry out the data analysis in four steps; this helps us identify relevant descriptors that can effectively summarize the data and provide insights for building predictive models for the packaging robot's power transmitter. Figure 5 illustrates the whole data-analysis process.
Fig. 5. Suggested data analysis steps and their results.
Acquisition of Expert Knowledge. To ensure a systematic and accurate analysis of the collected data and to avoid any improvised or imprecise approach (Dopico et al., 2016), it is important to draw on the knowledge and insights of plant experts. Specifically, for the conditioning process, we extracted knowledge through human elicitation from the experts on the production shop floor. The extracted information was then used to identify appropriate descriptors and analyze the collected data. Details of the recovered information are presented in the following subsections.

Clarification of Candidate Variables. Of the 15 variables that describe the evolution of each packaging process, two are related to the process's progression: the operating range and the capacity of the conditioning robot. Eleven further variables are consequences of the process configuration, such as the speed of the small pulley, the angular speed of the small pulley, the linear speed of the belt, and the winding angle of the small pulley. The last two variables, the coefficient of friction and the tension of the soft or stretched strand, are likely to be influenced by the state of the power transmitter and are possible indicators of its health. All of these variables are grouped in a vector V.

Identification of Possible Interactions. The experts emphasized the significance of exploring further interactions, which is carried out in the fourth step of the proposed data-analysis process: interaction with the configuration variables.

Defining the Parameters Affecting the Degradation of the Robot's Power Transmitter. The plant experts identified the key factors that contribute to the degradation of the transmitter, which we used to estimate its degradation state. Specifically, these factors are related to the configuration variables, such as the capacity of the packaging robot and the speed of the small pulley.

Descriptive Analysis.
In the second step, we aim to analyze the behavior of the variables during the packaging processes of the finished products and to identify representative descriptors that could be used to characterize them. We analyzed the behavior of several variables related to the power transmitter, including the tension of the stretched strand T, the tension of the soft strand t, the transmitted effort T − t, and the mechanical torque C. Based on this analysis, we decided to extract their characteristics in the time domain. Note that a new power transmitter has an average lifespan of 1.5 months, which equates to 45 product-packaging processes (one process involving 1000 items). Examining Fig. 6 below, we observed that the power transmitter starts to degrade at process number 36, as shown by the significant variation of the input variables. We therefore concluded that the number of packaging processes has a significant impact on the health status of the power transmitter.

Bivariate Analysis. In the third step, we aim to establish models that link the behavior of the variables, described by the vector of descriptors, with the degradation state of the power transmitters, specifically the belt component. The goal is to identify the descriptors that most accurately capture the state of the transmitters.
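The time-domain characterization applied to T, t, T − t, and C can be sketched as follows. The particular descriptor set (mean, standard deviation, RMS, peak) is an illustrative assumption, since the paper does not enumerate which time-domain features were retained:

```python
import math

def time_domain_descriptors(signal):
    """Summarize one packaging process's samples of a variable (e.g. tension T)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n   # population variance
    rms = math.sqrt(sum(x * x for x in signal) / n)  # root mean square
    return {
        "mean": mean,
        "std": math.sqrt(var),
        "rms": rms,
        "peak": max(abs(x) for x in signal),
    }

# Example: descriptors for a synthetic tension signal around 3675 N
samples = [3670.0, 3675.0, 3680.0, 3672.0, 3678.0]
desc = time_domain_descriptors(samples)
```

Each packaging process then contributes one descriptor vector per monitored variable, which is what gets stored instead of the raw samples.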
Fig. 6. The behavior of the input variables measured in Newton.
To select appropriate descriptors for the predictive model, we calculated the coefficient of determination, denoted R², which measures the accuracy of a linear regression model's predictions. To assess the predictive power of our models (DBF and NBF), we fitted a linear regression model and analyzed its residuals to calculate R². Following the literature, we selected the descriptors with R² values close to 1, as these indicate a strong correlation between the variables and are more likely to provide accurate predictions in our model [18]. Table 2 below presents the representative descriptors for the candidate variables, indicating strong predictive power for the predictive models.

Table 2. Descriptors computed for the different candidate variables.

| | Input (Small pulley) | Output (Big pulley) |
|---|---|---|
| Packaging process | Nd | ND |
| Packaging capacity | θd | θD |
| Process' number | Wd | WD |
| | t | T |
| | f | |
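The R²-based selection rule can be sketched as a least-squares fit followed by a coefficient-of-determination computation. The data below are synthetic and only illustrate the "keep descriptors with R² close to 1" criterion from [18]:

```python
def r_squared(x, y):
    """Fit y = a*x + b by least squares and return the coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic example: a descriptor that tracks the process number almost linearly
processes  = [1, 5, 10, 20, 30, 36, 40]
descriptor = [0.9, 4.8, 10.3, 19.6, 30.5, 36.2, 39.8]
r2 = r_squared(processes, descriptor)  # close to 1 -> keep this descriptor
```

A descriptor with R² far below 1 would be discarded at this step, since its linear relation to the degradation proxy is too weak to support prediction.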
3.4 Discrete Bayes Filter Based on Machine Learning The selected model for predictive maintenance is a Discrete Bayesian Filter (DBF). After processing and transforming the data into a useful form, machine learning techniques are employed to develop the predictive model.
Bayesian filters are a powerful tool for integrating uncertain and variable knowledge and inspections into a production system. In our case study, we are dealing with a packaging robot equipped with sensors that produce noisy measurements. The goal is to obtain the most accurate possible estimate of the current state of the power transmitter. The DBF, the model selected for this study, estimates the state of degradation of the power transmitter over a discrete range of values. Beliefs about the accuracy of the results are also taken into account, strengthening the estimation. For instance, we can discretize the health of the transmitter between 1 and 10, where 1 denotes a new transmitter and 10 represents a deteriorated or damaged transmitter. To express the model mathematically, we need the following definitions:

• Ns: number of values or states into which transmitter health/degradation is discretized;
• x: discrete random variable representing the degradation of the power transmitter, x ∈ {1, …, Ns};
• k: instant of time, so xk indicates the degradation state of the power transmitter at instant k;
• zk: sensor measurements available at time k; in fact, a reference to the vector of descriptors V;
• uk: action taken at moment k during the spherical bushels' assembly process;
• ck: process configuration parameters that influence the sensor measurements.

At each time k at which a packaging process is finished, the belief that the power transmitter has a certain degradation after finalizing a packaging process (1000 items) is obtained using Eq. (1), where η is a normalization constant:

\[
Bel(x_k) = \eta \cdot P(z_k \mid x_k, c_k, z_{k-w:k-1}) \cdot \sum_{x_{k-1}=1}^{N_s} P(x_k \mid x_{k-1}) \, Bel(x_{k-1}) \tag{1}
\]
Estimation of the Degradation State. Algorithm 1 below is used to estimate the degradation state of the packaging robot's power transmitter at a given time k. During the prediction phase (Algorithm 1, lines 2–3), the algorithm predicts the degradation state at time k based on the previous belief (at time k − 1) and the action model. After the prediction, the algorithm enters the update phase (Algorithm 1, lines 4–9), where it corrects the prediction with the measurements provided by the sensor model. The update phase computes the most probable degradation state to which the sensor measurements belong. Finally, after normalization, the belief of the degradation state Bel(xk) is obtained (Algorithm 1, line 11). Algorithm 1 is based on the Discrete Bayesian Filter (DBF) approach, which integrates uncertain and variable knowledge and inspections into a production system. By considering a discrete range of values for the health of the transmitter, the DBF estimates the state of degradation of the power transmitter while taking into account beliefs about the accuracy of the results.
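The predict-update cycle of Algorithm 1 can be sketched as follows. The transition matrix and the sensor likelihood below are illustrative assumptions, since Eq. (1) leaves both models to be learned from the plant data:

```python
def dbf_step(bel, transition, likelihood):
    """One Discrete Bayes Filter cycle over Ns degradation states.

    bel[j]           -- Bel(x_{k-1} = j+1), the previous belief
    transition[i][j] -- P(x_k = i+1 | x_{k-1} = j+1), the action/wear model
    likelihood[i]    -- P(z_k | x_k = i+1, c_k, ...), from the sensor model
    """
    ns = len(bel)
    # Prediction phase: propagate the belief through the wear model
    pred = [sum(transition[i][j] * bel[j] for j in range(ns)) for i in range(ns)]
    # Update phase: weight the prediction by the measurement likelihood
    unnorm = [likelihood[i] * pred[i] for i in range(ns)]
    eta = 1.0 / sum(unnorm)  # normalization constant
    return [eta * u for u in unnorm]

# Tiny example with Ns = 3 states (1 = new, 3 = worn): wear only moves forward
bel = [1.0, 0.0, 0.0]                  # initial condition: the transmitter is new
transition = [[0.8, 0.0, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.2, 1.0]]         # each column sums to 1
likelihood = [0.6, 0.3, 0.1]           # sensors suggest low degradation
bel = dbf_step(bel, transition, likelihood)
```

Running this once after each packaging process keeps the belief vector synchronized with the transmitter's actual wear.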
Figure 7 below describes the estimation of the degradation state of the power transmitter each time the packaging process of a batch of 1000 items is finished. Before starting the prediction phase, we state a few initial conditions required for developing the predictive maintenance model:

• Bel(x1 = 1) = 1: the power transmitter is new;
• Bel(x1 = i) = 0 for all i ≠ 1: initially, no other degradation state is possible.
Fig. 7. Functioning of the Predictive Model for the estimation of the degradation of the power transmitter after each new packaging process
4 Results and Discussion
4.1 Estimation of the Degradation State
Figure 8 below illustrates the degradation states of a power transmitter that has completed 29 packaging processes (29,000 items).
Fig. 8. Degradation states estimation of a power transmitter.
Figure 9 below illustrates the wear probability predicted by the DBF model as a function of the number of processes. We can observe that the wear probability increases, on average, linearly with the number of processes.
Fig. 9. Wear probability predicted by the DBF model as a function of the number of processes.
4.2 Prediction of the Future Behavior
To predict future behavior, we feed the model the additional process control actions u_{k+1}, …, u_{k+5} (scenario 1) and u_{k+1}, …, u_{k+12} (scenario 2), and then run the model to obtain the predictions. Figure 10 shows the model output in the two scenarios: (a) after performing 29 packaging processes followed by 5 additional processes, and (b) after performing 29 processes followed by 12 additional processes.
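Feeding the model future control actions amounts to running only the prediction phase repeatedly, with no measurement update. A minimal sketch, assuming an illustrative wear-transition matrix and a hypothetical current belief:

```python
def predict_future(bel, transition, n_steps):
    """Propagate the belief n_steps packaging processes ahead (no sensor data)."""
    ns = len(bel)
    for _ in range(n_steps):
        bel = [sum(transition[i][j] * bel[j] for j in range(ns)) for i in range(ns)]
    return bel

# Compare 5 vs. 12 additional processes (scenarios 1 and 2) from the same belief
transition = [[0.8, 0.0, 0.0],
              [0.2, 0.8, 0.0],
              [0.0, 0.2, 1.0]]          # illustrative wear model, columns sum to 1
bel_now = [0.5, 0.4, 0.1]               # hypothetical belief after 29 processes
scenario_1 = predict_future(bel_now, transition, 5)
scenario_2 = predict_future(bel_now, transition, 12)
# The worn state (last index) accumulates more probability in scenario 2
```

The maintenance decision then reduces to comparing the probability mass in the high-degradation states under each scenario against an acceptable risk threshold.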
In the first scenario, maintenance agents can deduce that the robot's power transmitter still has enough health to continue working after the execution of 34 processes (29 + 5). In the second scenario, by contrast, a maintenance task is mandatory to anticipate unwanted failures.
Fig. 10. Model outputs in two different scenarios: predictive results.
In the following, we compare the results obtained from the DBF predictive model with the ML-based NBF model. Comparing the matching scores of the two predictive models, that is, their ability to accurately match or predict outcomes based on input data, we obtain a score of 62% for the DBF model and 55% for the NBF model. We therefore conclude that the DBF model has more predictive power than the NBF model.
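The matching score reported here is an accuracy-style metric. A minimal sketch of how such a score could be computed from predicted versus observed degradation states (the data below are synthetic, not the study's actual results):

```python
def matching_score(predicted, observed):
    """Fraction of processes for which the predicted state equals the observed one."""
    hits = sum(1 for p, o in zip(predicted, observed) if p == o)
    return hits / len(observed)

# Synthetic comparison over 8 inspected processes
observed = [1, 1, 2, 2, 3, 3, 4, 5]
dbf_pred = [1, 1, 2, 3, 3, 3, 4, 4]   # 6 of 8 states predicted correctly
score = matching_score(dbf_pred, observed)  # 0.75
```

The same function applied to both models' predictions on a common test set yields the scores used for the DBF-versus-NBF comparison.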
5 Conclusions
The development of a predictive maintenance model based on machine learning is a crucial step towards achieving Industry 4.0 standards. In this study, we used a Discrete Bayesian Filter (DBF) to estimate the degradation state of the power transmitter in a packaging robot. The DBF proved to have better predictive power than a Machine Learning-based model (NBF), as evidenced by a higher matching score. The descriptors for the model were selected through a coefficient-of-determination (R²) analysis, and a linear regression model was used for the prediction. The DBF model seamlessly integrated uncertain and noisy sensor measurements to provide an accurate estimation of the transmitter's degradation state. Overall, the proposed model can assist maintenance teams in developing effective maintenance plans, reducing downtime, and increasing productivity for such production systems. Finally, the improved maintenance planning and cost savings achieved through the predictive model align with the goals and principles of Industry 4.0 in several ways. One of the key objectives of Industry 4.0 is to enable predictive maintenance, which involves utilizing data analytics and ML to anticipate equipment failures; the predictive model employed in our study aims precisely at this objective by leveraging historical sensor data and employing ML algorithms to predict potential equipment failures. As future work, we plan to compare our predictive model to a preventive maintenance model based on expert knowledge.
References
1. Chakroun, A., Hani, Y., Elmhamedi, A., Masmoudi, F.: A proposed integrated manufacturing system of a workshop producing brass accessories in the context of Industry 4.0. Int. J. Adv. Manuf. Technol. 127, 2017–2033 (2022). https://doi.org/10.1007/s00170-022-10057-x
2. Chakroun, A., Hani, Y., Masmoudi, F., El Mhamedi, A.: Digital transformation process of a mechanical parts production workshop to fulfil the requirements of Industry 4.0. In: LOGISTIQUA 2022: 14th IEEE International Conference on Logistics and Supply Chain Management, 25–27 May 2022, El Jadida, Morocco, p. 6 (2022). https://doi.org/10.1109/LOGISTIQUA55056.2022.9938099
3. Gimélec: Industry 4.0: the levers of transformation, p. 84 (2014). http://www.gimelec.fr/
4. Parida, A., Chattopadhyay, G.: Development of a multi-criteria hierarchical framework for maintenance performance measurement (MPM). J. Qual. Maintenance Eng. 13(3), 241–258 (2007). https://doi.org/10.1108/13552510710780276
5. Parida, A., Kumar, U.: Maintenance performance measurement (MPM): issues and challenges. J. Qual. Maintenance Eng. 12(3), 239–251 (2006). https://doi.org/10.1108/13552510610685084
6. Kans, M., Inglwad, A.: Common database for cost-effective improvement of maintenance performance. Int. J. Prod. Econ. 113(2), 734–747 (2008). https://doi.org/10.1016/j.ijpe.2007.10.008
7. Sari, E., Shaharoun, A.M., Ma'aram, A., Yazid, A.M.: Sustainable maintenance performance measures: a pilot survey in Malaysian automotive companies. Procedia CIRP 26, 443–448 (2015). https://doi.org/10.1016/j.procir.2014.07.163
8. Maletič, D., Maletič, M., Al-Najjar, B., Gomišček, B.: The role of maintenance in improving company's competitiveness and profitability: a case study in a textile company. J. Manuf. Technol. Manag. 25(4), 441–456 (2014). https://doi.org/10.1108/JMTM-04-2013-0033
9. Rault, R., Trentesaux, D.: Artificial intelligence, autonomous systems and robotics: legal innovations. In: Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing, pp. 1–9. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73751-5_1
10. Leukel, J., González, J., Riekert, M.: Adoption of machine learning technology for failure prediction in industrial maintenance: a systematic review. J. Manuf. Syst. 61, 87–96 (2021)
11. Shcherbakov, M.V., Glotov, A.V., Cheremisinov, S.V.: Proactive and predictive maintenance of cyber-physical systems. In: Kravets, A., Bolshakov, A., Shcherbakov, M. (eds.) Cyber-Physical Systems: Advances in Design & Modelling, vol. 259, pp. 263–278. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-32579-4_21
12. Chaudhuri, A.: Predictive maintenance for industrial IoT of vehicle fleets using hierarchical modified fuzzy support vector machine. arXiv preprint arXiv:1806.09612 (2018). https://doi.org/10.48550/arXiv.1806.09612
13. Garcia, M.C., Sanz-Bobi, M.A., Del Pico, J.: SIMAP: intelligent system for predictive maintenance: application to the health condition monitoring of a wind turbine gearbox. Comput. Ind. 57(6), 552–568 (2006). https://doi.org/10.1016/j.compind.2006.02.011
14. Yang, S.K.: An experiment of state estimation for predictive maintenance using Kalman filter on a DC motor. Reliab. Eng. Syst. Saf. 75(1), 103–111 (2002). https://doi.org/10.1016/S0951-8320(01)00107-7
15. Xia, T., Ding, Y., Dong, Y., et al.: Collaborative production and predictive maintenance scheduling for flexible flow shop with stochastic interruptions and monitoring data. J. Manuf. Syst. 65, 640–652 (2022)
16. Bencheikh, G., Letouzey, A., Desforges, X.: An approach for joint scheduling of production and predictive maintenance activities. J. Manuf. Syst. 64, 546–560 (2022)
17. Zonta, T., da Costa, C.A., Zeiser, F.A., et al.: A predictive maintenance model for optimizing production schedule using deep neural networks. J. Manuf. Syst. 62, 450–462 (2022)
18. Ruiz-Sarmiento, J.R., Monroy, J., Moreno, F.A., Galindo, C., Bonelo, J.M., Gonzalez-Jimenez, J.: A predictive model for the maintenance of industrial machinery in the context of Industry 4.0. Eng. Appl. Artif. Intell. 87, 103289 (2020). https://doi.org/10.1016/j.engappai.2019.103289
19. Chakroun, A., Hani, Y., Masmoudi, F., El Mhamedi, A.: Modèle prédictif pour l'évaluation de la santé d'une unité d'assemblage basé sur l'apprentissage automatique dans le contexte de l'industrie 4.0. In: 1er Congrès de la Société Française d'Automatique, Génie Industriel et de Production (SAGIP 2023), 7–9 June 2023, Marseille, France (2023)
© IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 E. Alfnes et al. (Eds.): APMS 2023, IFIP AICT 690, pp. 689–691, 2023. https://doi.org/10.1007/978-3-031-43666-6